10 things that are difficult to teach robots




Being human is much easier than building one. Take something as simple as playing catch with a friend as a child. Break that activity down into its component biological functions and the game stops being simple. You need sensors, transmitters and effectors. You need to calculate how hard to throw the ball so that it covers the distance between you and your companion. You need to account for sun glare, wind speed and every nearby distraction. You need to work out how much spin to put on the ball and how to catch it. And there is plenty of room for unexpected scenarios: what if the ball flies over your head? Over the fence? Through the neighbor's window?

These questions illustrate some of the hardest problems in robotics, and they set the stage for our countdown. Here is a list of the ten most difficult things to teach robots, the top ten we will have to conquer if we ever want to deliver on the promises made by Bradbury, Dick, Asimov, Clarke and the other visionaries who imagined worlds where machines behave like people.
Pave the way


Getting from point A to point B seemed simple to us even in childhood; we humans do it every day, every hour. For a robot, though, navigation is extraordinarily hard, especially through an environment that is constantly changing or one it has never seen before. First, the robot must be able to perceive its environment, and then it must make sense of all the incoming data.

Roboticists attack the first problem by arming their machines with arrays of sensors, scanners, cameras and other high-tech tools that help the robot evaluate its surroundings. Laser scanners are increasingly popular, although they cannot be used underwater because light is seriously distorted in water. Sonar seems like a viable alternative for underwater robots, but on land it is far less accurate. A vision system built from a set of integrated stereoscopic cameras can also help a robot "see" its landscape.

Collecting data about the environment is only half the battle. The harder task is processing that data and using it to make decisions. Many developers run their robots with a predetermined map or build one on the fly. In robotics this is known as SLAM, simultaneous localization and mapping. Mapping means that the robot converts the information gathered by its sensors into a usable form; localization means that the robot positions itself relative to that map. In practice the two processes must happen at the same time, in a chicken-and-egg fashion that is only possible with powerful computers and sophisticated algorithms that compute position on the basis of probabilities.
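To make the "position on the basis of probabilities" idea concrete, here is a minimal sketch of the localization half of the problem only: a one-dimensional Bayes filter on a known map (full SLAM would estimate the map and the position simultaneously). The map, sensor probabilities and moves are all invented for illustration.

```python
world = ['door', 'door', 'wall', 'wall', 'wall']  # known map of cells

def normalize(p):
    s = sum(p)
    return [x / s for x in p]

def sense(belief, measurement, p_hit=0.8, p_miss=0.2):
    """Measurement update: weight each cell by how well it explains the reading."""
    weighted = [b * (p_hit if cell == measurement else p_miss)
                for b, cell in zip(belief, world)]
    return normalize(weighted)

def move(belief, step):
    """Motion update: shift the belief to match a move of `step` cells."""
    n = len(belief)
    return [belief[(i - step) % n] for i in range(n)]

belief = [1.0 / len(world)] * len(world)  # uniform prior: position unknown
belief = sense(belief, 'door')  # the robot sees a door
belief = move(belief, 1)        # it rolls one cell to the right
belief = sense(belief, 'door')  # and sees a door again

# The belief now peaks at cell 1, the only place consistent with
# "door, move right, door".
print(max(range(len(belief)), key=lambda i: belief[i]))
```

Even in this toy, note that sensing and moving alternate, which is the "chicken and egg" structure the paragraph describes.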
Demonstrate agility


Robots have been picking up packages and parts in factories and warehouses for years. In those settings, however, they rarely encounter humans and almost always work with similarly shaped objects in an uncluttered environment. Life for such a factory robot is dull and routine. A robot that wants to work in a home or a hospital will need an advanced sense of touch, the ability to detect people nearby and impeccable judgment in choosing its actions.

These skills are very hard to teach a robot. Traditionally, scientists avoided touch altogether, programming robots to fail if they came into contact with another object. Over the past five years or so, however, there have been significant advances in compliant robots and artificial skin. Compliance refers to how flexible a robot is: flexible machines are more yielding, rigid ones less so.

In 2013, researchers at Georgia Tech created a robotic arm with spring-loaded joints that let it bend and interact with objects more like a human arm. They then covered it in "skin" that could detect pressure or touch. Some robot skins consist of hexagonal circuit boards, each carrying an infrared sensor that detects anything approaching closer than a centimeter. Others are equipped with electronic "fingerprints", ridged and textured surfaces that improve grip and make the signal easier to process.
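The "spring-loaded joint" idea can be sketched in a few lines: instead of holding a position rigidly, a compliant controller applies a torque proportional to how far the joint is pushed from its target, so unexpected contact produces only a gentle push back. The function name and constants here are invented for illustration.

```python
def compliant_torque(target_angle, actual_angle, stiffness=2.0):
    """Spring-like joint: torque grows with deflection from the target.

    Low stiffness = compliant (soft contact); high stiffness = rigid.
    Angles in radians, torque in arbitrary units.
    """
    return stiffness * (target_angle - actual_angle)

# An unexpected contact deflects the joint by 0.1 rad.
print(compliant_torque(0.5, 0.6))                   # gentle corrective torque
print(compliant_torque(0.5, 0.6, stiffness=50.0))   # a rigid arm fights back hard
```

The same deflection that a rigid arm answers with a violent correction produces only a small force in the compliant arm, which is exactly what makes it safe around people.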

Combine such high-tech manipulators with an advanced vision system, and you get a robot that can give a gentle massage or sort through documents, picking one folder out of an enormous collection.
Hold a conversation


Alan Turing, one of the founders of computer science, made a bold prediction in 1950: one day, machines would speak so fluently that we would not be able to tell them apart from people. Alas, robots (and even Siri) have so far fallen short of Turing's expectations. That is because speech recognition is very different from natural language processing, the thing our brains do when we extract meaning from words and sentences in conversation.

Initially, scientists assumed it would be as simple as loading the rules of grammar into a machine's memory. But attempts to hard-code the grammar of any given language simply failed. Even pinning down the meaning of individual words proved very difficult (there are, after all, homonyms: the bank of a river and the bank on the corner, for example). People learned to resolve such ambiguities from context, drawing on mental abilities developed over many years of evolution, but reducing that skill to strict rules that could be written into code turned out to be impossible.

As a result, many robots now handle language statistically. Scientists feed them enormous bodies of text, known as a corpus, and then let computers break the text into pieces to find out which words often appear together and in what order. This lets the robot "learn" a language on the basis of statistical analysis.
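The counting step described above can be shown in miniature. Here a few sentences stand in for a real corpus (which would contain billions of words, with smoothing on top of the raw counts); the text itself is invented for illustration.

```python
from collections import Counter, defaultdict

# A tiny stand-in for a corpus.
corpus = ("the robot sees the ball . the robot takes the ball . "
          "the child throws the ball")
words = corpus.split()

# Count bigrams: how often does word B directly follow word A?
following = defaultdict(Counter)
for a, b in zip(words, words[1:]):
    following[a][b] += 1

def most_likely_next(word):
    """Predict the follower seen most often in the corpus."""
    return following[word].most_common(1)[0][0]

print(most_likely_next('the'))  # 'ball' follows 'the' most often here
```

No grammar rules were written anywhere; the "knowledge" of which words go together comes entirely from the counts, which is the point the paragraph makes.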
Learn something new


Imagine that someone who has never played golf decides to learn how to swing a club. He might read a book about it and then try, or watch a famous golfer practice and then try himself. Either way, he can pick up the basics quickly and easily.

Roboticists run into problems when they try to build an autonomous machine capable of learning new skills. One approach, as with golf, is to break the activity into precise steps and then program them into the robot's brain. That assumes every aspect of the activity can be divided, described and coded, which is not always easy to do. Some aspects of swinging a golf club, such as the interplay between wrist and elbow, are hard to put into words at all. Such fine details are easier to show than to describe.

In recent years, scientists have had some success teaching robots to imitate a human operator. They call this imitation learning, or learning from demonstration (LfD). How does it work? They equip the machine with arrays of wide-angle and zoom cameras. This equipment lets the robot "see" a teacher performing a particular activity. Learning algorithms then process this data to produce a mathematical function that maps visual input to desired actions. Of course, LfD robots must be able to ignore irrelevant aspects of the teacher's behavior, such as scratching an itch, and to cope with the correspondence problems that arise because a robot's anatomy differs from a human's.
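The "function that maps visual input to desired actions" can be sketched with the simplest possible learner: record (situation, action) pairs while watching the teacher, then, in a new situation, replay the action whose recorded situation is closest. Real LfD systems fit smooth functions over camera data; the one-dimensional "situation" and action names below are invented stand-ins.

```python
# (situation, action) pairs recorded while watching a teacher.
# The situation here is just a distance to the object, in metres.
demonstrations = [
    (0.05, 'close_gripper'),
    (0.30, 'reach'),
    (1.00, 'approach'),
]

def imitate(distance):
    """Nearest-neighbour imitation: copy the action demonstrated
    in the most similar recorded situation."""
    _, action = min(demonstrations, key=lambda d: abs(d[0] - distance))
    return action

print(imitate(0.25))  # closest demonstration is 0.30 -> 'reach'
```

Nothing about reaching was ever programmed step by step; the behavior comes entirely from the recorded demonstrations, which is what distinguishes LfD from the hand-coded approach described before it.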


Practice deception


The curious art of deception evolved in animals to gain an edge over competitors and to avoid being eaten by predators. With practice, deception can become a very, very effective mechanism of self-preservation.

Teaching robots to deceive people or other robots is incredibly difficult (which is perhaps good for us). Deception requires imagination, the ability to form ideas or images of things not currently present to the senses, and machines generally lack it. They are strong at processing direct input from sensors, cameras and scanners, but they cannot form concepts that go beyond that sensory data.

Robots of the future may be better versed in deception, though. Georgia Tech scientists have managed to transfer some of the deceptive skills of squirrels to robots in the lab. First they studied the wily rodents, which protect their caches of buried food by leading competitors to old, empty cache sites. Then they encoded that behavior in simple rules and loaded it into the brains of their robots. The machines were able to use the algorithms to judge when deception might be useful in a given situation, and then to deceive a companion robot by luring it to a spot where nothing of value was stored.
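The article does not publish the actual Georgia Tech rules, but "simple rules" of the kind it describes might look like this sketch: deceive only when a competitor is watching and the cache is worth protecting, and then patrol a decoy instead of the real site. Every name and threshold below is a guessed assumption.

```python
def should_deceive(competitor_nearby, cache_value, threshold=0.5):
    """Deception pays only when someone is watching and the stakes are high."""
    return competitor_nearby and cache_value > threshold

def choose_patrol(real_cache, decoy_cache, competitor_nearby, cache_value):
    """Visit the decoy to mislead an observer; otherwise tend the real cache."""
    if should_deceive(competitor_nearby, cache_value):
        return decoy_cache
    return real_cache

print(choose_patrol('site A', 'site B', competitor_nearby=True, cache_value=0.9))
print(choose_patrol('site A', 'site B', competitor_nearby=False, cache_value=0.9))
```

The interesting part is the first function: the robot is not deceiving blindly but deciding, from context, when deception is useful, which is what the paragraph credits the Georgia Tech machines with.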
Anticipate human actions


In "The Jetsons", the robot maid Rosie could hold a conversation, cook, clean and help out George, Jane, Judy and Elroy. To appreciate how well Rosie was built, recall one of the early episodes: Mr. Spacely, George Jetson's boss, comes to the house for dinner. After the meal he takes out a cigar and puts it in his mouth, and Rosie rushes forward with a lighter. That simple action represents a complex piece of human behavior: the ability to predict what comes next based on what just happened.

As you might guess, anticipating human actions requires the robot to imagine a future state. It has to be able to say, "If I see a person doing A, then, judging from past experience, he will most likely do B next." This has been extremely hard to achieve in robotics, but researchers are making progress. A team at Cornell University has developed an autonomous robot that can respond based on how a companion interacts with objects in the environment. It uses a pair of 3D cameras to capture a picture of its surroundings. An algorithm then identifies the key objects in the room and isolates them from everything else. Drawing on a large body of information gathered in earlier training, the robot generates a set of specific expectations about the movements of the person and the objects he touches, draws conclusions about what will happen next, and acts accordingly.
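The "if I see A, expect B" step can be reduced to a table of how often one observed action was followed by another during training, with prediction as a simple lookup. The actions and counts below are invented for illustration; the Cornell system works over camera data, not labels.

```python
# How often each observed action was followed by each next action
# during (hypothetical) training.
transition_counts = {
    'takes_out_cigar': {'lights_cigar': 9, 'puts_it_away': 1},
    'picks_up_cup':    {'drinks': 7, 'carries_to_sink': 3},
}

def predict_next(observed_action):
    """Return the follow-up action seen most often after `observed_action`."""
    followers = transition_counts[observed_action]
    return max(followers, key=followers.get)

# Rosie's trick, in one line: see the cigar, expect the lighting.
print(predict_next('takes_out_cigar'))
```

A real system must also decide *when* it is confident enough to act on the prediction (fetching a lighter for someone who was about to put the cigar away would be awkward), which is where the probabilities, not just the argmax, matter.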

The Cornell robots still guess wrong sometimes, but they are making steady progress, helped along by ever-improving camera technology.
Coordinate with other robots


Building a single large-scale machine, even an android if you like, requires a significant investment of time, energy and money. Another approach is to deploy an army of simpler robots that work together to accomplish complex tasks.

That raises several problems. A robot working on a team must be able to position itself accurately relative to its teammates and to communicate effectively, both with the other machines and with a human operator. To solve these problems, researchers have turned to the insect world, where complex swarm behavior is used to find food and solve problems that benefit the whole colony. Studying ants, for example, scientists learned that individuals use pheromones to communicate with one another.

Robots can use the same "pheromone logic", relying on light rather than chemicals to communicate. It works like this: a group of tiny robots is scattered in a confined space. At first they explore the area at random, until one stumbles on a light trail left by another bot. It knows to follow the trail, and it leaves its own trace as it goes. As the trails merge, more and more robots end up following each other in single file.
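That trail-following loop can be simulated in a few lines: robots on a ring of cells deposit "light" wherever they step, and each robot prefers the brighter neighbouring cell, wandering randomly when both are dark. The grid size, robot count and step count are arbitrary choices for illustration.

```python
import random

random.seed(0)
trail = [0] * 10                                  # light intensity per cell
positions = [random.randrange(10) for _ in range(5)]  # 5 robots, random start

for _ in range(50):                               # simulation steps
    for i, pos in enumerate(positions):
        neighbours = [(pos - 1) % 10, (pos + 1) % 10]
        # Follow the brighter neighbouring cell; wander if both are dark.
        best = max(neighbours, key=lambda c: trail[c])
        if trail[best] == 0:
            best = random.choice(neighbours)
        positions[i] = best
        trail[best] += 1                          # leave a footprint for the others

# Popular cells are reinforced each pass, so shared trails emerge
# without any robot ever addressing another directly.
print(sorted(positions))
```

This indirect coordination through marks left in the environment is known as stigmergy; no robot needs to know where its teammates are, only what they left behind.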


Reproduce


God told Adam and Eve, "Be fruitful and multiply, and replenish the earth." A robot that received such a command would feel embarrassed or frustrated. Why? Because it cannot reproduce. It is one thing to build a robot; it is quite another to build a robot that can make copies of itself or regenerate lost or damaged components.

Notably, robots cannot look to humans as a model for reproduction; you may have noticed that we do not split into two equal halves. Simpler organisms, though, do it all the time. Relatives of the jellyfish called hydras practice a form of asexual reproduction known as budding: a small sac bulges out from the parent's body and then detaches to become a new, genetically identical individual.

Scientists are working on robots that can perform the same kind of simple cloning procedure. Many of these robots are built from repeating elements, usually identical cubes, each of which carries a copy of the self-replication program. The cubes have magnets on their faces, so they can attach to and detach from neighboring cubes, and each cube is split along a diagonal into two halves that can swivel independently. A complete robot consists of several such blocks assembled into a particular shape.
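At its most abstract, the cube scheme reduces replication to a data problem: a robot is a sequence of module types, and "reproducing" means assembling the same sequence from a supply of spare cubes. The module names and supply counts below are invented for illustration.

```python
def replicate(robot, supply):
    """Build a copy of `robot` (a list of module types),
    consuming cubes from `supply` (a type -> count dict)."""
    copy = []
    for module in robot:
        if supply.get(module, 0) == 0:
            raise RuntimeError(f'out of {module} cubes')
        supply[module] -= 1
        copy.append(module)
    return copy

parent = ['base', 'swivel', 'swivel', 'gripper']
spares = {'base': 1, 'swivel': 2, 'gripper': 1}
child = replicate(parent, spares)
print(child == parent)  # the copy has the parent's exact structure
```

The hard part that this sketch hides is, of course, the physical one: a real cube robot must also manipulate the spare cubes into place using nothing but its own modules.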
Operate on ethical principles


As we deal with people every day, we make hundreds of decisions, and in each one we weigh our choices, judging what is good and what is bad, what is fair and what is dishonest. If robots are ever to be like us, they will need to understand ethics.

But as with language, encoding ethical behavior is extremely difficult, mainly because no single, universally accepted set of ethical principles exists. Different countries have different norms of behavior and different legal systems. Even within a single culture, regional differences can affect how people evaluate and measure their own actions and those of others. Trying to write a global ethics suitable for all robots is practically impossible.

That is why scientists have tried to simplify the problem by limiting the scope of a robot's ethical decisions. If a machine operates only in a particular environment, a kitchen, say, or a patient's room, it has far fewer rules of conduct and laws to weigh when making ethical decisions. To achieve this, engineers build ethics-based choices into the machine's learning algorithm, resting those choices on three flexible criteria: how much good an action will produce, how much harm it will cause, and how fair it is. Armed with this kind of artificial intelligence, your future household robot will be able to decide exactly who in the family should wash the dishes and who gets the TV remote for the night.
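Those three criteria can be sketched as a weighted score over candidate actions; the action with the highest score wins. The weights, scores and action names below are invented, and a real system would learn or tune them rather than hard-code them.

```python
def ethical_score(benefit, harm, fairness,
                  w_benefit=1.0, w_harm=1.5, w_fair=1.0):
    """Higher is better: reward benefit and fairness, penalize harm."""
    return w_benefit * benefit - w_harm * harm + w_fair * fairness

# Candidate actions scored on the three criteria, each on a 0-1 scale.
options = {
    'make_tired_person_wash_dishes': (0.6, 0.5, 0.2),
    'wash_dishes_itself':            (0.7, 0.0, 0.9),
}

best = max(options, key=lambda a: ethical_score(*options[a]))
print(best)
```

Note that the whole "ethics" of this sketch lives in the weights: raising `w_harm` makes the robot more cautious, raising `w_fair` makes it more egalitarian, which is exactly why such criteria are described as flexible.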
Feel emotions


"Here is my secret. It is very simple: one sees clearly only with the heart. What is essential is invisible to the eye."

If this remark by the fox in Antoine de Saint-Exupery's "The Little Prince" is true, robots will never see what is most beautiful and best in the world. They may probe the world around them superbly, but they cannot convert that sensory data into specific emotions. They cannot see the smile of a loved one and feel joy, or register a stranger's angry grimace and tremble with fear.

This, more than anything else on our list, separates man from machine. How do you teach a robot to fall in love? How do you program disappointment, disgust, surprise or pity? Is it even worth trying?

Some people think it is. They believe that the robots of the future will combine cognitive and emotional systems and will therefore work better, learn faster and communicate more effectively with people. Believe it or not, prototypes of such robots already exist, and they can express a limited range of human emotions. Nao, a robot developed by European scientists, has roughly the emotional repertoire of a one-year-old child. It can express happiness, anger, fear and pride, accompanying each emotion with gestures. And this is just the beginning.

