Monday, May 31, 2010

Robot Evolution

“It’s a Bot-Eat-Bot World”

Insect expert Laurent Keller is among the first researchers able to address questions of evolution experimentally. In his lab at the University of Lausanne in Switzerland, his team is using robots to condense thousands of years of evolution into days of experiment. In particular, they are using robots to study the evolution of communication. Their results have been more enlightening than expected, showing how species evolve for survival from one generation to the next.
They call these small robots the “S-Bots,” which have been built so that each generation lasts two minutes. They stand only 15 cm tall and are equipped with lights, which can be turned on or off, and what Keller calls a virtual “genome.” This programming dictates their response to the environment that surrounds them.
The S-Bots were placed in an “environment” consisting of a “food source” and a “poison source.” The methodology is simple: robots that found the food source reproduced, passing their programming on to the next generation.
After 500 generations, equivalent to thousands of years of evolution, the robots had begun to communicate using their lights. Although some used them to signal food and others to warn of poison, the robots ultimately became much more efficient within their environment. Keller had expected the robots to act independently, largely unaffected by the existence of the other robots. However, not only did the robots develop a system of communication, they also became deceptive. Although the robots had all managed to survive, they did so in different ways and with different programming. Robots with similar programming clearly favored each other, and even signaled incorrectly to “strange” robots to decrease their chances of survival. They began to cooperate in communities, an evolution far more sophisticated than the researchers had anticipated.
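The selection loop described above, in which robots that find food reproduce and their "genomes" are copied, with variation, into the next generation, can be sketched as a toy genetic algorithm. Everything in this sketch is an illustrative assumption (the single-number genome, the made-up fitness function, the population size), not the Lausanne team's actual software:

```python
import random

random.seed(0)

POP_SIZE = 20       # hypothetical number of S-Bots per generation
GENERATIONS = 50    # far fewer than the experiment's 500

def fitness(genome):
    # Toy stand-in for "time spent at the food source": a genome here is
    # a single propensity in [0, 1], scored with a little sensory noise.
    return genome + random.uniform(-0.1, 0.1)

def evolve():
    # Start from random genomes, then repeat: score, select, reproduce.
    population = [random.random() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        scored = sorted(population, key=fitness, reverse=True)
        survivors = scored[:POP_SIZE // 2]     # food-finders reproduce
        offspring = [min(1.0, max(0.0, g + random.gauss(0, 0.05)))
                     for g in survivors]       # copied genome plus mutation
        population = survivors + offspring
    return population
```

Even this stripped-down loop shows the core of the experiment: no robot is ever told what to do; behaviors that happen to aid survival simply become more common in later generations.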
The results of this experiment were astounding, and they have greatly expanded our understanding of social communication. Until now, researchers were limited to evaluating the results of the evolution of communication; this research allows them to evaluate the process of that evolution. Understanding the process is far more insightful, and the insights transfer more readily to related fields of research.
This experimental research, which uses robots to observe social behavior and particularly the evolution of communication, is deepening our understanding of communication and widening the possibilities for future research on social behavior and evolution. These robots make it possible for researchers to study thousands of years of evolution within the limits of a laboratory. Lee Dugatkin, an evolutionary biologist at the University of Louisville, finds the new research invigorating: “Using robots to understand the evolution of communication opens the door. It has tremendous potential to address all sorts of questions that haven't been answered yet.”

Monday, May 3, 2010

Psychopathic Robots?

"If you build artificial intelligence but don’t think about its moral sense or create a conscious sense that feels regret for doing something wrong, then technically it is a psychopath." Such is the opinion of Josh Hall, who wrote the book "Beyond AI: Creating the Conscience of a Machine." Throughout this discussion of bionics, we have only briefly mentioned the ethical issues that surround the topic and the research that could potentially make advanced bionics possible. As the last blog of the semester, it seems appropriate to dedicate some time to "robo-ethics."

Although the aid of robots has been unparalleled in many circumstances where the risk to a human would be too great, it has also resulted in human deaths. Two years ago, for example, a military robot in the South African army killed nine soldiers. This was, of course, a malfunction and not a result of the robot exercising free will, but it nevertheless forces us to question to what extent we are willing to trust such robots with our lives or the lives of others. In a Swedish factory, a robotic machine injured one of the workers. Although the accident was attributed partly to "operator error," the factory was still held responsible and forced to pay a fine for the injury of its worker.

Asimov's Three Laws of Robotics:
1. A robot may not injure a human being or allow one to come to harm
2. A robot must obey orders given by human beings
3. A robot must protect its own existence

Each of these laws takes precedence over the following one. For example, a robot may not injure a human being, even if it is ordered to do so: it must obey law #1 over law #2. What becomes interesting then is law #3: "a robot must protect its own existence." Because law #2 takes precedence over law #3, a robot must unconditionally follow any order given by any human being, even if obeying results in its own destruction. This brings up something that I had not considered in the first blog, when we were discussing what it means to be "human." Perhaps we should add a sense of self-preservation to the list we had originally compiled. In that case, if we strictly follow these three laws for the safety of humans in a world where robots are becoming more and more advanced, it seems that robots could never truly behave in a human fashion as long as they are willing to place orders above the preservation of their own "lives."
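The strict precedence reading above is simple enough to state as code. This is a hypothetical model for illustration only (the `Action` type and its three flags are my own invention, not anything from Asimov or from robotics practice):

```python
from dataclasses import dataclass

@dataclass
class Action:
    # Hypothetical properties a robot might evaluate before acting.
    harms_human: bool = False
    ordered_by_human: bool = False
    self_destructive: bool = False

def permitted(a: Action) -> bool:
    """Decide an action under strict precedence: Law 1 > Law 2 > Law 3."""
    if a.harms_human:
        return False   # Law 1: never injure a human, even if ordered to
    if a.ordered_by_human:
        return True    # Law 2: obey orders, even self-destructive ones
    if a.self_destructive:
        return False   # Law 3: only now does self-preservation apply
    return True
```

Under this model, `permitted(Action(ordered_by_human=True, self_destructive=True))` comes out true, which is exactly the uncomfortable consequence discussed above: an ordered robot must sacrifice itself.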

Chien Hsun Chen and Yueh-Hsuan Weng recently coauthored a very interesting paper, published in the International Journal of Social Robotics. It can be found here: http://works.bepress.com/cgi/viewcontent.cgi?article=1000&context=weng_yueh_hsuan. It attempts to provide solutions and guidelines for these ethical robotic dilemmas: for example, developing a set of guidelines for the punishment of a robot and creating what they call a "legal machine language" to help police future bionic bots. They explain that it is important to distinguish who takes the credit or the blame for the performance of a robot. In the example of the Swedish factory mentioned earlier, the factory was held responsible for the actions of the machine, even though the malfunction was attributed in part to faulty operation by the user. This reflects a general notion today: if you build a robot, you are responsible for the actions (good or bad) of that robot. But what if the robot were complex enough to make its own decisions? In that case, the creator of the robot would be very much like a mother or father: you raise your kids as well as you can, and then hope that they make the right decisions. A parent does not go to jail if their son or daughter commits murder. Should the same be true for the creator of a robot that is able to make its own independent decisions?

In the paper, Chen states that a "human-robot co-existence society" could be possible as early as 2030. If he is correct, this co-existence would happen within our lifetimes. As far away as a fully autonomous humanoid robot may seem, there is more evidence of rapid progress than we may recognize. Looking through UM's own Miami Magazine, the topic of bionics and research toward that goal can easily be found. An article titled "Differently Enabled" tells readers of the No Barriers Festival that took place at UM this past June, an event that was part of the Clinton Global Initiative at UM. The BrainPort, which turns a video image into electrical impulses sent to the brain via the tongue, was showcased there, among many other new technologies.