
Antonio Torralba: “The sense of touch is essential for developing intelligent systems capable of learning on their own”

For Antonio Torralba (Madrid, 1971), the purpose of artificial intelligence could be summed up as a continuous effort to understand the mechanisms of natural intelligence, “what we are, how we perceive the world and how we work”, a question humanity has sought to answer since it first painted on the walls of caves. To understand it, and then to recreate it through complex systems “that are capable of solving complex tasks and adapting to a changing world.” From Madrid to Mallorca, Barcelona, Grenoble and Boston, where he heads the Faculty of Artificial Intelligence and Decision Making at the prestigious Massachusetts Institute of Technology (MIT), Torralba’s fascination with this area of knowledge began in adolescence, when he was given a Commodore 64 computer and felt the deep desire to “understand everything that happened inside it, how it worked, and to program it a thousand different ways”.

From his laboratory at MIT, Torralba’s research group works to improve the perception that machines have of the world around them, incorporating sensors capable not only of seeing and hearing, but also of interacting with their environment through touch. “If you think, for example, of those videos of robots you can see on YouTube, you realize that they perceive the world through cameras and a few position sensors in the joints and limbs… whereas the human body is covered with sensors: in addition to vision and hearing, we have the sense of touch, which is (along with computation) essential for developing intelligent robotic systems capable of learning on their own”. Artificial intelligence, he says, is part of our own evolution, and its development already has countless practical applications that will only improve with time.

Question. How has artificial intelligence changed since its beginnings in the 1950s?

Answer. There have been several moments of qualitative change. In the 1970s, for example, expert systems were very popular: based on logic and on the knowledge of experts in a specific area, they tried to make deductions that could solve very specific tasks. They were systems that required a lot of human intervention to enter all that knowledge, which a programmer had to encode in a particular way. And although some knowledge still has to be entered by hand today, systems are now able to handle huge amounts of barely processed data, such as written text, which was never designed to be consumed by a machine.

Advances in neuroscience later allowed the development of neural networks and similar systems, according to which, in order to build intelligent systems, it was not necessary for humans to introduce rules; rather, the system had to learn by itself to interpret the world and reason. We supervise the machine and give it information about how it has to solve certain tasks, but in the end it has to figure it out for itself. And it is from 2012 onwards that these systems began to solve problems in a much more efficient way.

Q. How important are the senses in the future of artificial intelligence?

A. Today, there is a lot of emphasis on computation. We work with very complex systems that process a lot of information and have to learn from a lot of data, and solving these problems requires a great deal of computation. But human beings work in a very different way, because in biological systems the emphasis is on the perception of the world. We have many sensors that allow us to perceive and interact with the world, and artificial systems do not yet have this capacity.

We always think that vision is the most important sense humans have, and it is true (in fact, it is said that a third of the brain is dedicated to processing vision), but touch is not far behind, because if vision allows us to know what lies at a distance, touch is what really allows us to enter into direct contact with the world around us. That is why we are working on developing systems that learn to perceive the world by integrating all these senses (vision, hearing and touch) and that are capable of learning to discover objects and their properties, without the need for a person to provide knowledge about them.

Q. The question, of course, is how to achieve that.

A. Well, deep down, touch is no different from sight. What the eye does is detect the light arriving from the outside world in a particular direction. With touch, each sensor on the skin indicates the pressure exerted at that point, apart from other functions such as measuring thermal conductivity. Touch forms an image on the skin, a map of pressure; and the moment you have an input image, it doesn’t matter whether it is black and white, color or tactile. Whether the image is captured by the eye or by the skin makes no difference computationally, nor to how the signal is going to be processed afterwards.
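A minimal sketch of that idea, assuming PyTorch as the framework (the encoder and the sizes are illustrative placeholders, not Torralba’s system): a tactile pressure map is just another single-channel image, so the same convolutional encoder can consume a grayscale camera frame and a skin reading interchangeably.

    # Hypothetical sketch: a pressure map is treated as a single-channel image,
    # so one encoder serves both "eye" and "skin" inputs.
    import torch
    import torch.nn as nn

    encoder = nn.Sequential(               # tiny image encoder, for illustration only
        nn.Conv2d(1, 16, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),                      # -> 32-dimensional feature vector
    )

    camera_frame = torch.rand(1, 1, 64, 64)   # grayscale image, intensities in [0, 1]
    pressure_map = torch.rand(1, 1, 64, 64)   # per-point pressure read off the "skin"

    # Computationally both are H x W grids of values, so the same network applies.
    vision_features = encoder(camera_frame)
    touch_features = encoder(pressure_map)
    print(vision_features.shape, touch_features.shape)  # torch.Size([1, 32]) for both

The point of the sketch is only that once the signal is laid out as an image, the downstream processing does not care which sense produced it.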

Q. What practical applications can this development have?

A. Incorporating tactile information into these systems is relatively recent, and it raises a significant number of challenges that several research groups are working on. One of those challenges is the sensors themselves. Vision is an area that has been researched for a long time, and today there are incredibly sophisticated cameras, capable of capturing images that the human eye cannot perceive. The same goes for hearing. In the world of touch, however, there are no such advances, partly because until now there has been no commercial interest, and partly because it is a fairly complex sensor: a camera is a sensor that does not change, but the skin is a sensor that is constantly changing shape, because it deforms every time you move. So there are a number of engineering challenges in designing these sensors, and that is one of the things we are doing at MIT.

One of the problems robotics has today is that it is not yet at the level of other areas of artificial intelligence. For example, we still don’t have flexible robots at home that help us with housework (aside from the Roomba, which isn’t very sophisticated). Why? On the one hand, there are battery problems, because as the robot has to do more complex things, there comes a point when the battery runs out in five minutes. But robots also perceive the world in a very crude way. If they had sensitive skin, they would be able to understand, when they come into contact with an object, how they are coming into contact with it, whether the contact is dangerous or not, what the force of the collision is…

It can also be used in other ways. Imagine that, instead of being on the skin, the touch sensor is placed on an object in the house, such as a carpet. The image your feet produce when walking on it is of very high resolution, so we can see the shape of the foot and the pressure that different parts of the foot exert on the carpet, and from there deduce the posture of the body, whether you are walking or doing some exercise… And this can be useful, for example, in health applications, where you need to monitor a person and know whether they are walking well or what difficulties they have.
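As a rough illustration of the kind of pipeline described here (hypothetical: the label set, grid size and model are placeholders, not the MIT carpet system, and PyTorch is assumed), the sketch below classifies a coarse activity from a single carpet pressure image.

    # Hypothetical sketch: coarse activity recognition from a carpet pressure map.
    import torch
    import torch.nn as nn

    ACTIVITIES = ["standing", "walking", "exercising"]   # assumed, illustrative label set

    classifier = nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=5, padding=2),
        nn.ReLU(),
        nn.MaxPool2d(4),
        nn.Flatten(),
        nn.Linear(8 * 16 * 16, len(ACTIVITIES)),  # assumes a 64x64 pressure grid
    )

    carpet_frame = torch.rand(1, 1, 64, 64)       # one pressure reading from the carpet
    logits = classifier(carpet_frame)
    print(ACTIVITIES[logits.argmax(dim=1).item()])  # untrained model, so the label is arbitrary

In a real health-monitoring setting one would train such a model on labeled pressure sequences rather than single frames; the sketch only shows how the pressure image enters the pipeline.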

Q. What are some of the practical applications of artificial intelligence that we can already see today, and what can we expect in the future?

A. It is important to remember that there are intelligences of very different degrees. We already interact constantly with artificial intelligence systems: for example, when you want to take a photo with a mobile phone or a digital camera, it detects the person’s face and uses it to adjust the exposure, the camera’s aperture or the focus. Today it is a very simple system, but one that kept researchers busy for 20 or 30 years. When you interact with web pages, there are very simple intelligence engines behind them.

And then there is the world of health, where artificial intelligence has been present since its origins: some of the first applications tried to build automatic diagnostic systems that could help a doctor make certain decisions, and today much of the equipment used by doctors relies on techniques that come from artificial intelligence to, for example, capture magnetic resonance data, process the signal and form an image that the doctor can interpret. Another practical application is in assisted surgery, with robotic systems that help the surgeon perform tasks that would otherwise be very difficult, such as an open-heart operation. The Da Vinci system makes it possible to translate what the surgeon wants to do onto a beating heart.

All these systems seem very impressive, but deep down they are very simple from the point of view of artificial intelligence. We are approaching a time when AI systems will start to help with more complex tasks that demand a much higher cognitive effort from doctors, such as helping to diagnose rare diseases, or filtering out the simplest cases so that doctors can focus on the more complex ones, where artificial intelligence can also help. Another application is facilitating the discovery of new treatments, new antibiotics or drugs capable of attacking genetic diseases, something that could not be done before.

Q. What will happen in the next decade?

A. Today there is a lot of research on how to build AI systems that are capable of learning, predicting and generalizing from very little data, which, deep down, is what human beings do: we do not learn with big data, and yet we learn quite well (which is not to say that we would not learn better with more data). There is a lot of progress in this direction, and many groups are working on it. For example, we are trying to build a system that is capable of learning without having seen anything of the world: in many biological beings, the visual system already knows how to see a little before birth, thanks to retinal waves. Spontaneous activity is generated in the retina with random patterns that allow the system to begin to train itself and to recognize patterns of movement. We have been exploring whether we can generate such patterns, so that an artificial visual system can learn from them and then work even with real images.
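The sketch below is one hedged way to picture this idea, not the group’s actual method (PyTorch is assumed and all names are illustrative): synthetic “retinal-wave”-style stimuli, random blobs that drift between two frames, are generated on the fly, and a tiny network is pre-trained to predict the drift, without ever seeing a real image.

    # Hypothetical sketch: pre-train on synthetic drifting-blob stimuli, then the
    # resulting weights could be reused/fine-tuned on real images later.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def random_wave_pair(size=32):
        """Return two frames of a random blob pattern and the (dx, dy) shift between them."""
        frame = (torch.rand(1, size, size) > 0.97).float()
        frame = F.avg_pool2d(frame.unsqueeze(0), 3, stride=1, padding=1).squeeze(0)  # blur into blobs
        dx, dy = torch.randint(-2, 3, (2,)).tolist()
        shifted = torch.roll(frame, shifts=(dy, dx), dims=(1, 2))
        return frame, shifted, torch.tensor([float(dx), float(dy)])

    model = nn.Sequential(                 # takes the two frames stacked as 2 channels
        nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(16, 2),                  # predicts the (dx, dy) drift
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    for step in range(200):                # "before birth": no real images involved
        a, b, shift = random_wave_pair()
        pred = model(torch.cat([a, b], dim=0).unsqueeze(0))
        loss = F.mse_loss(pred, shift.unsqueeze(0))
        opt.zero_grad(); loss.backward(); opt.step()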

Q. What are the biggest challenges facing AI in the future?

A. There are at least two fundamental challenges. The first is education: artificial intelligence is a very powerful tool that allows us to amplify our capabilities in many areas, so there has to be education in AI, in the same way that it is important to be educated in computer science from very early on. All disciplines will have AI tools that solve a small part of the problem you are working on, or that are there to give you information and assist you.

The other challenge is the interaction between artificial intelligence and society. AI is an intelligent system that is beginning to integrate quite well with human behavior and can amplify it, affecting it in different ways. That is why it is very important to understand what effects it has, both positive and negative. People have to learn to interact with AI, its potential and its limitations; if an AI recommends something to you, it is very important to understand when you can and cannot trust that recommendation. And then there is an important research area that studies AI from a more humanistic point of view: the implications of these technologies and their integration into society. Moreover, since there are many different cultures, these technologies will not have the same effect on one as on another.
