[Image from: https://www.wired.com/story/the-ai-chatbot-will-hire-you-now/]
Alexa and Siri are great examples of what we think of as AI chatbots: conversational systems that can reply when spoken to and carry out the orders they are given. Evorus, a new chatbot system, works the same way. What differentiates Evorus from Siri or Alexa is that humans continuously train the system, which over time makes it less dependent on human input.

“Evorus recruits crowd workers on demand from Amazon Mechanical Turk to answer questions from users, with the crowd workers voting on the best answer. Evorus also keeps track of questions asked and answered and, over time, begins to suggest these answers for subsequent questions. The researchers also have developed a process by which the AI can help to approve a message with less crowd worker involvement.”
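
To make that workflow concrete, here is a minimal Python sketch of the loop described above: crowd workers propose answers on demand, vote on the best one, and accepted answers are stored so they can be suggested again for later questions. The class and method names (CrowdChatbot, propose, vote) are my own hypothetical illustrations, not the actual Evorus code.

```python
from collections import defaultdict

class CrowdChatbot:
    """Hypothetical sketch of an Evorus-style crowd-powered chatbot."""

    def __init__(self, vote_threshold=2):
        self.past_answers = defaultdict(list)  # question -> accepted answers
        self.vote_threshold = vote_threshold

    def answer(self, question, crowd_workers):
        # Reuse: suggest answers previously accepted for this question.
        candidates = list(self.past_answers[question])
        # On-demand crowd work: each worker may also propose a new answer.
        candidates += [worker.propose(question) for worker in crowd_workers]
        # Voting: each worker upvotes the candidate they think is best.
        votes = defaultdict(int)
        for worker in crowd_workers:
            votes[worker.vote(candidates)] += 1
        best, count = max(votes.items(), key=lambda kv: kv[1])
        # Accepted answers feed back into the suggestion pool over time,
        # gradually reducing how much crowd work each question needs.
        if count >= self.vote_threshold:
            self.past_answers[question].append(best)
        return best
```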

Evorus is available here: http://talkingtothecrowd.org/

Currently, most chatbots can handle basic conversations and requests, but the scope of their capabilities is quite narrow. However, since Evorus incorporates human replies and feedback, it opens up more possibilities for conversational chatbots. Jeff Bigham, associate professor in the Human-Computer Interaction Institute, says, “with the exception of concierge or travel services for which users are willing to pay — agents that depend on humans are too expensive to be scaled up for wide use.”

“Keeping humans in the loop also reduces the risk that malicious users will manipulate the conversational agent inappropriately, as occurred when Microsoft briefly deployed its Tay chatbot in 2016, said Ting-Hao Huang, a Ph.D. student in the Language Technologies Institute (LTI). Huang developed Evorus with Bigham and Joseph Chee Chang, also a Ph.D. student in LTI.”

References

Carnegie Mellon University. “Crowd workers, AI make conversational agents smarter: Human/machine hybrid system can answer wide array of questions.” ScienceDaily. ScienceDaily, 7 February 2018. <www.sciencedaily.com/releases/2018/02/180207101710.htm>.

 

AI Software for Underwater Vehicles

[Image from: https://www.packard.org/what-we-fund/ocean/]

Marine robotics always seemed impossible to me. I believed that for robots to function underwater, they had to be waterproof inside and out, really sturdy, and sophisticated enough to maneuver in such an environment. What I didn’t know was the real limit of underwater robots: communication. Moreover, according to robotplatform.com, aquatic robots, especially those that swim and dive (e.g., robot fish), have little to no intelligence built into them. MIT Professor Henrik Schmidt says, “In underwater marine robotics, there is a unique need for artificial intelligence — it’s crucial.”

“Augmenting robotic marine vehicles with artificial intelligence is useful in a number of fields. It can help researchers gather data on temperature changes in our ocean, inform strategies to reverse global warming, traverse the 95 percent of our oceans that has yet to be explored, map seabeds, and further our understanding of oceanography.”

Unmanned Marine Vehicle Autonomy, Sensing and Communications, a course at MIT also led by Professor Schmidt, focuses on this topic. Students in the class design, code, and build AI-equipped robots that must function when released into the Charles River. “According to graduate student Gregory Nannig, a former navigator in the U.S. Navy, adding AI capabilities to marine vehicles could also help avoid navigational accidents. ‘I think that it can really enable better decision making,’ Nannig explains. ‘Just like the advent of radar or going from celestial navigation to GPS, we’ll now have artificial intelligence systems that can monitor things humans can’t.’”

References

O’Leary, Mary Beth. “Unlocking Marine Mysteries with Artificial Intelligence.” MIT News, 14 Dec. 2017, news.mit.edu/2017/unlocking-marine-mysteries-artificial-intelligence-1215.

AI and Animation

[Image from: http://marionettestudio.com/top-5-animation-blogs-to-learn-from/]
We have all seen an animated film at least once in our lives. Animation is common not only in movies and TV shows but also in other media such as TV ads. These films are built from animated images, which are made by rendering systems that compute thousands of light rays to achieve the needed color and texture, resulting in a frame for the film. It is a very labor-intensive and time-consuming process; rendering with a few light rays instead of thousands would indeed take less time and effort, but it would create a relatively low-quality image with flaws and inaccuracies called “noise”.

UC Santa Barbara electrical and computer engineering Ph.D. student Steve Bako and his advisor, Pradeep Sen, have been researching a solution to this problem. Both have experience in the animation industry, having worked at Disney and Pixar over the past few years. They have been developing an alternative rendering system that uses artificial intelligence and deep learning to minimize or eliminate noise while producing images in less time; in short, they are trying to maximize the efficiency of the rendering process.

“The team tested the software by using millions of examples from the film “Finding Dory” to train a deep-learning model known as a convolutional neural network. Through this process, the system learned to transform noisy images into noise-free versions that resemble those computed with significantly more light rays. Once trained, the system successfully removed the noise on test images from entirely different films, such as Pixar’s latest release, “Cars 3,” and their upcoming feature “Coco,” even though they had completely disparate styles and color palettes.”

“The work presents a significant step forward over previous state-of-the-art denoising methods, which often left artifacts or residual noise that required artists to either render more light rays or to tweak the denoising filter to improve the quality of a specific image. Disney and Pixar plan to incorporate the technology in their production pipelines to accelerate the movie-making process.”
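
For a rough idea of what training such a denoiser involves, here is a minimal PyTorch sketch that learns to map noisy (few-ray) frames toward clean (many-ray) references. The architecture, loss, and hyperparameters are illustrative assumptions on my part, not the network Disney and Pixar actually use.

```python
import torch
import torch.nn as nn

class Denoiser(nn.Module):
    """Tiny residual CNN: maps a noisy render toward its clean reference."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),
        )

    def forward(self, noisy):
        # Predict a correction and add it back to the noisy input.
        return noisy + self.net(noisy)

model = Denoiser()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

# One training step on a batch of (noisy, clean) frame pairs. Random
# tensors stand in for few-ray renders and their many-ray references.
noisy = torch.rand(4, 3, 64, 64)
clean = torch.rand(4, 3, 64, 64)
optimizer.zero_grad()
loss = loss_fn(model(noisy), clean)
loss.backward()
optimizer.step()
```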

 

References

Badham, James. “Intelligent Animation: Engineers Collaborate to Incorporate AI into a Computer-Based Rendering System.” Phys.org, 26 July 2017. Web. 27 July 2017.

“K-Eye”: A Facial Recognition System

[Image from: http://www.wavestore.com/technologies/analytics/facial-recognition]

Professor Hoi-Jun Yoo of the Department of Electrical Engineering at the Korea Advanced Institute of Science and Technology (KAIST) has developed a semiconductor chip called CNNP, which is used to run K-Eye, a facial recognition system.

“The K-Eye series consists of two types: a wearable type and a dongle type. The wearable type device can be used with a smartphone via Bluetooth, and it can operate for more than 24 hours with its internal battery. Users hanging K-Eye around their necks can conveniently check information about people by using their smartphone or smart watch, which connects K-Eye and allows users to access a database via their smart devices. A smartphone with K-EyeQ, the dongle type device, can recognize and share information about users at any time.

When recognizing that an authorized user is looking at its screen, the smartphone automatically turns on without a passcode, fingerprint, or iris authentication. Since it can distinguish whether an input face is coming from a saved photograph versus a real person, the smartphone cannot be tricked by the user’s photograph.”

In order to maximize efficiency and accuracy, the team developed two technologies for K-Eye: an image sensor with “Always-on” face detection and the CNNP face recognition chip.

The image sensor determines whether a face is present in the user’s view and distinguishes faces from backgrounds. The recognition stage then operates only when a face is actually present, saving energy and increasing efficiency.
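
Conceptually, the two-stage design might look like the following Python sketch, where a cheap always-on detector gates the expensive recognizer so power is spent only when a face is present. The detector and recognizer objects are hypothetical stand-ins, not KAIST’s implementation.

```python
def process_frame(frame, detector, recognizer):
    """Run the expensive recognizer only when the cheap detector fires."""
    # Stage 1: always-on, low-power face detection (face vs. background).
    if not detector.contains_face(frame):
        return None  # recognition stage stays asleep; minimal energy spent
    # Stage 2: wake the recognition chip only for frames containing a face.
    return recognizer.identify(frame)
```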

“These chips were developed by Kyeongryeol Bong, a Ph.D. student under Professor Yoo, and presented at the International Solid-State Circuits Conference (ISSCC) held in San Francisco in February. CNNP, which has the lowest reported power consumption in the world, has attracted a great deal of attention and has led to the development of the present K-Eye series for face recognition.

Professor Yoo said, “AI processors will lead the era of the Fourth Industrial Revolution. With the development of this AI chip, we expect Korea to take the lead in global AI technology.””

 

References

The Korea Advanced Institute of Science and Technology (KAIST). “Face recognition system ‘K-Eye’.” ScienceDaily. ScienceDaily, 15 June 2017. <www.sciencedaily.com/releases/2017/06/170615100700.htm>.

The Definition of Consciousness

[Image from: http://upliftconnect.com/art-changes-consciousness/]
We do not know what consciousness is; all we know as humans is that we have a will, impulses, and instincts to do certain things according to what our brains tell us. The question of what consciousness is, or what a soul is, has been an age-old debate; whether it is an object with physical form, or something we can physically access, remains unknown.

Karl Friston, Wellcome principal research fellow and scientific director at the Wellcome Trust Centre for Neuroimaging and professor of neurology at University College London, believes consciousness to be “nothing less than a natural process”.

“According to physicists, complex systems can be characterised by their states, captured by variables with a range of possible values. In quantum systems, for example, the state of a particle can be described by a wave function that entails its position, momentum, energy and spin. For larger systems, such as ourselves, our state encompasses all the positions and motions of our bodily parts, the electrochemical states of the brain, the physiological changes in the organs, and so on. Formally speaking, the state of a system corresponds to its coordinates in the space of possible states, with different axes for different variables.”

Humans are playful loops of thought, part of a process that is a performance expressed through our actions and lives. The difference between being “conscious” and “unconscious” lies in how individuals make inferences about time and action: consciousness gives individuals the capacity to understand these two concepts and to try to alter them.

If consciousness is truly a natural process, a result of evolution and a human trait, then we need to change what we are looking for in artificial intelligence and find a different objective, since another goal could be better than forcing human consciousness, a concept that is barely understood at the moment, onto machines.

References

Friston, Karl. “Consciousness Is Not a Thing, but a Process of Inference – Karl Friston | Aeon Essays.” Aeon. Aeon, 18 May 2017. Web. 22 May 2017.

AI and Bias

[Image from: http://chrissanders.org/2017/01/know-your-bias-2-anchoring/]

Machines are normally perceived as cold, unemotional entities made of wires and metal. However, when it comes to AI (artificial intelligence), they can be more human than we thought. A new study has shown that machines can reflect a very human characteristic: cultural bias.

Researcher Arvind Narayanan, an assistant professor of computer science and an affiliated faculty member at the Center for Information Technology Policy (CITP) at Princeton University, as well as an affiliate scholar at Stanford Law School’s Center for Internet and Society, argues that fairness and bias in machine learning are crucial for society: since these artificial intelligence systems may become part of our future, they would have to be neutral and free of any negative or unacceptable social bias.

The Princeton research team used GloVe, a machine learning algorithm developed at Stanford that represents word co-occurrences by arranging words according to their associations. The results showed that the system had unintentionally absorbed certain stereotypes about certain words.

“For instance, the machine learning program associated female names more with familial attribute words, like “parents” and “wedding,” than male names. In turn, male names had stronger associations with career attributes, like “professional” and “salary.” Of course, results such as these are often just objective reflections of the true, unequal distributions of occupation types with respect to gender — like how 77 percent of computer programmers are male, according to the U.S. Bureau of Labor Statistics.

Yet this correctly distinguished bias about occupations can end up having pernicious, sexist effects. An example: when foreign languages are naively processed by machine learning programs, leading to gender-stereotyped sentences. The Turkish language uses a gender-neutral, third person pronoun, “o.” Plugged into the well-known, online translation service Google Translate, however, the Turkish sentences “o bir doktor” and “o bir hemşire” with this gender-neutral pronoun are translated into English as “he is a doctor” and “she is a nurse.””
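
To get a feel for how such associations are measured, here is a small Python sketch using cosine similarity over toy word vectors. The numbers are invented for illustration; real GloVe vectors are learned from co-occurrence statistics in large corpora.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity: higher means the words are more associated."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Made-up 3-dimensional embeddings; real GloVe vectors have 50-300
# dimensions and are learned from corpus co-occurrence statistics.
vec = {
    "she":     np.array([0.9, 0.1, 0.2]),
    "he":      np.array([0.1, 0.9, 0.2]),
    "wedding": np.array([0.8, 0.2, 0.3]),
    "salary":  np.array([0.2, 0.8, 0.4]),
}

# In a biased embedding, "wedding" sits closer to "she" than to "he",
# while "salary" shows the opposite pattern.
print(cosine(vec["she"], vec["wedding"]), cosine(vec["he"], vec["wedding"]))
print(cosine(vec["she"], vec["salary"]), cosine(vec["he"], vec["salary"]))
```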

References

Princeton University, Engineering School. “Biased bots: Human prejudices sneak into artificial intelligence systems.” ScienceDaily. ScienceDaily, 13 April 2017. <www.sciencedaily.com/releases/2017/04/170413141055.htm>.

Autocorrect Drawing


Recently, Google launched an AI program that helps people who are “terrible” at art improve their drawings and communicate visually. It is called AutoDraw, and it lets users draw simple figures, then suggests a professionally drawn version of them. For example, if I draw a stick figure of a person, the program will suggest a polished drawing of a person.
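
As a rough illustration (my own sketch, not Google’s implementation), AutoDraw’s suggestion step can be thought of as recognize-then-look-up: classify the doodle, then swap in a professional drawing of the same object. The classifier and the artwork catalog here are hypothetical.

```python
# Hypothetical catalog mapping recognized labels to professional artwork.
professional_art = {
    "person": "person_by_artist.svg",
    "cat": "cat_by_artist.svg",
}

def autodraw_suggest(doodle, classifier):
    """Recognize the rough doodle, then offer a polished version of it."""
    label = classifier.top_guess(doodle)  # e.g. "person" for a stick figure
    return professional_art.get(label)    # swap in the professional drawing
```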

According to Dan Motzenbecker, a creative technologist at Google, the program’s roots lie in a neural network created to recognize handwriting. Since handwriting is itself intricate drawing, figures put together from lines, drawing recognition isn’t that much of a leap from handwriting recognition.

An important aspect of the technology is its ability to handle variety. There is no standard way of drawing or writing, and it is crucial that a machine can cope with that. The more a system can understand vague concepts and imperfect human input, the more advanced it seems to me.

“The researcher team’s goal was to train “a machine to draw and generalize abstract concepts in a manner similar to humans,” according to a blog item written by David Ha, a Google Brain Resident. The system works by taking human input—say, a drawing of a cat or just the word “cat,” according to a Google spokesperson—and then making its own drawing.”

Another interesting and similar AI system is Quick, Draw!. It is also based on a neural network that recognizes doodles, backed by a database of 5.5 million Pictionary-style doodles. The user is prompted to draw a certain object, and the system tries to recognize what it is. For example, if asked to draw a snail, the user sketches quickly until the system recognizes the snail. After a few rounds, the user is shown their doodles, how other people drew the same prompts, and what each doodle looked like to the system.
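
The guessing loop might be sketched like this in Python, where the classifier re-scores the doodle after every stroke until it matches the prompt; the classifier is a hypothetical stand-in for Google’s neural network.

```python
def play_round(prompt, strokes, classifier, max_strokes=20):
    """Guessing loop: re-classify the doodle after every new stroke."""
    doodle = []
    for stroke in strokes[:max_strokes]:
        doodle.append(stroke)
        guess = classifier.top_guess(doodle)  # top guess for partial drawing
        print(f"I see... {guess}")
        if guess == prompt:
            return True  # recognized before the drawing was even finished
    return False  # strokes ran out without a match
```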

[Screenshots of my Quick, Draw! results]

These are examples of what the program showed me when I personally tried it out.

Since both computer science and art interest me, I believe there will be a lot of art-related progress in AI, as art represents quintessential human creation. When machines start to understand art, they will be able to comprehend how human emotions, perspectives, and decisions work.

Fake News

[Image from: http://www.bbc.com/news/blogs-trending-37846860]

Don’t trust everything on the Internet. That is the golden rule everyone is reminded of from time to time, and it is becoming more important as time passes. These days, fake news is a big issue, spreading false information to the masses. “The WVU Reed College of Media, in collaboration with computer science students and faculty at the WVU Benjamin M. Statler College of Engineering and Mineral Resources, is hosting an artificial intelligence (AI) course at its Media Innovation Center that includes two projects focused on using AI to detect and combat fake news articles.”

The team is trying to build a system that detects fake news by using machine learning to grade news articles by their likelihood of being fake. The score is based on the transparency of the content and the rating of the article.

The Center’s Creative Director, Dana Coester, argues that fake news is not only a social issue; it is also a technology problem, one that requires professional collaboration.

“Training” this AI system resembles how Snapchat’s face recognition program was trained: make the system work through numerous high-quality news articles as well as a collection of fake ones. If the system works and further research is conducted, it will not only help solve the problem of fake news but also offer a solution to misinformation on the Internet at large.
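
As a sketch of that training recipe (my own illustration, not WVU’s actual system), the following Python example uses scikit-learn’s TF-IDF features and logistic regression to grade articles by their probability of being fake:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: 0 = legitimate, 1 = fake (a real system would need
# thousands of labeled articles on each side).
articles = [
    "Scientists publish peer-reviewed study on ocean temperatures.",
    "SHOCKING miracle cure THEY don't want you to know about!!!",
]
labels = [0, 1]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(articles, labels)

# Grade a new article: the probability of the "fake" class is its score.
score = model.predict_proba(["You won't BELIEVE this one weird trick!"])[0, 1]
print(f"fake-news score: {score:.2f}")
```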

References

West Virginia University. “Can artificial intelligence detect fake news?.” ScienceDaily. ScienceDaily, 27 March 2017. <www.sciencedaily.com/releases/2017/03/170327143654.htm>.

Marconi, Francesco. “Can Machine Learning Detect ‘Fake News’?” Chatbots Life, 23 Feb. 2017. Web. 9 Apr. 2017.

The Missing Part of AI

[Image from: https://www.ibm.com/cognitive/advantage-reports/future-of-artificial-intelligence.html]

The purpose of artificial intelligence, better known as AI, is to mimic the human thought process and mind. By figuring out how our minds work and recreating them in technology, we could give non-human machines the same consciousness and whatever it is that makes us “human”. As much as it is a crucial topic in areas such as computer science and mathematics, the idea itself still feels very distant and intangible to me. We do call the components behind intricate game design and sophisticated search engines artificial intelligence, but for some reason, AI has always seemed far out of our reach.

According to Ben Medlock, co-founder of SwiftKey, which designed a communication system for Stephen Hawking in 2012, “Things took a wrong turn at the beginning of modern AI, back in the 1950s.”

“Computer scientists decided to try to imitate conscious reasoning by building logical systems based on symbols. The method involves associating real-world entities with digital codes to create virtual models of the environment, which could then be projected back onto the world itself.”
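
To see what that approach looks like in practice, here is a toy Python illustration (my own, not from the essay) of such a symbolic model: entities mapped to digital codes that can be queried, but only about what was explicitly encoded.

```python
# A tiny symbolic world model: real-world entities reduced to digital codes.
world_model = {
    "cup": {"location": "table", "contains": "coffee"},
    "table": {"location": "kitchen"},
}

def query(entity, attribute):
    # Symbolic reasoning can only answer about what was explicitly encoded.
    return world_model.get(entity, {}).get(attribute, "unknown")

print(query("cup", "location"))     # -> "table"
print(query("cup", "temperature"))  # -> "unknown": never encoded
```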

This method is problematic because it cannot emulate the real world, or at least recreate what we as humans experience and register. Since the machines rely on symbols and straightforward language while reality consists of vague definitions and sensations, understanding the real world this way does not seem possible. Modern technology did introduce machine learning, a cruder but more efficient method in which the machine “learns” through repetition and the interpretation of massive amounts of data. However, this still isn’t enough.

Medlock claims that it is our biology itself that allows us to be us. Over countless years of evolution and survival, our bodies have developed their own ways of interpreting the environment and our surroundings, shaping how the human mind works today. Unless machines can feel, and understand how our senses work and how we register the world, it will be impossible to perfectly recreate our minds from a machine’s perspective.

 

References

Medlock, Ben. “The Body Is the Missing Link for Truly Intelligent Machines – Ben Medlock | Aeon Ideas.” Aeon. Aeon, 26 Mar. 2017. Web. 26 Mar. 2017.

Computers vs. Humans: Art

[Image from: http://www.businessinsider.com/google-to-auction-trippy-artwork-made-by-ai-2016-2]
The trippy piece of artwork above was created by Google’s AI. We are slowly reaching a time when even art can be created by machines, and while some are doubtful and scared, others are looking forward to the grandeur of the future.

According to this article from Aeon magazine, there is actually no difference between artwork created by artificial intelligence and artwork created by humans. It is true that algorithmic art, essentially the equivalent of AI art, has been around for decades. It is also true that it isn’t as influential or as popular as “normal” art; by “normal” art I mean fine art, design/digital art, and, begrudgingly, contemporary art. In my opinion, the reason may be its lack of excitement and composition compared to human-made art.

Anyway, the important idea is that algorithmic/AI-created art is still man-made. The code and algorithm that produce the artwork are, at the end of the day, put together by an artist, not by a machine or computer.

Google has already introduced several projects that create sophisticated art and music: for example, Magenta, which composed its first song in 2016, and Deep Dream Generator, which can be used to generate unique images. (Personally, I found the Deep Dream Generator extremely interesting, so here’s a taste of what it can do.)

In conclusion, I was doubtful at first about the idea that AI art is the same as human art. However, I now believe that there is, indeed, not much to worry about, and it is actually exciting that in the not-so-far future there will be more tools to use and more genres, as well as new kinds of artists.

 

References

Roeder, Oliver. “There Is No Difference between Computer Art and Human Art.” Aeon. Aeon, 03 Jan. 2017. Web. 03 Jan. 2017.