A ‘congenial companion’ robot is being developed at Boğaziçi University

Levent Akın, professor of Computer Engineering at Boğaziçi University, is known in our country for his pioneering research in the fields of artificial intelligence and robotics. Led by Akın, a team of researchers at Boğaziçi University Computer Engineering, including Binnur Görer, İbrahim Özcan, and Yiğit Yıldırım, is developing a humanoid robot with emotions. Emotional interaction will be the most important characteristic of this robot: it will understand whether the person it is interacting with is sad or happy and react accordingly. We asked Prof. Dr. Levent Akın and his team about this "congenial companion".
Kenan Özcan

Professor Akın, after developing social robots (an exercise robot and a guide robot) in your earlier projects at Boğaziçi University, you are now working on a robot with emotions. Can you tell us about this project?

Levent Akın - Our aim is to produce a multi-purpose humanoid service robot. With this project we are also trying to answer some much-discussed questions of the day. For example, a question we often come across: should a robot be humanoid? Our thesis is that this kind of robot is accepted more easily by people. We observe that human-looking robots are frequently being designed these days, and we can give various examples. Do they have to look human to that degree, or should they resemble humans only functionally? These are the themes we are working on. For this purpose, we build and study different kinds of robots.

One of the features of this robot is that it provides a multipurpose platform for trying out more than one idea. We have developed various faces and are still thinking about developing further varieties of them. By using them in interaction, we will be able to explore which ones get the best reactions from people. A robot becomes a more complicated system as you add components to it: you have to control everything individually, and it has to function on its own. For example, it is one thing for the robot to stand still and make contact with people, and quite another for it to look around while walking. In fact, our long-term aim is to configure this robot for different purposes, e.g. helping people exercise, home care for patients, or being a congenial companion…

We can define this project as a multipurpose robot project that combines different characteristics of the robotic systems developed in this laboratory before, right?

Levent Akın - Yes, the reason people react to a humanoid robot is that we are accustomed to living with other people from childhood; however, to what degree should a robot be humanoid? This is what we are investigating. The point is that people of any age, young or old, should know that it is a robot. The robot should not fool them. No one should say, "Ooh, this is a human!" They should always know that it is not a human, but a robot that is easy to interact with. In this context, we are working on questions such as: Does the robot have to have facial expressions? Should it express its feelings, and if so, how, and how faithfully?

Binnur Görer - Normally, about 55% of the information we want to convey in person-to-person interaction is carried by facial expressions, movements, and gestures, which are as important as the content of the words. Therefore, we are looking primarily into this aspect of the matter. When we communicate, we decide whether what we heard from a person is positive or negative by reading their facial expressions, movements, and gestures rather than the semantic meaning of the words and sentences alone. We are researching what the robot should do emotionally when it receives these signals from people. Scenarios can be specific; for example, if the robot is a trainer it has to develop one kind of model, whereas if it is a companion robot and you are discussing an everyday subject, it needs to show different expressions. We represent emotional states through three channels in general: a "dominance" channel shows who is the dominant party in the communication, and the other two channels, positive/negative and excited/calm, describe mood. We try to make our decisions based on these.
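The three channels Görer describes correspond closely to what the affective-computing literature calls the pleasure-arousal-dominance (PAD) space. As a minimal illustration only (the class name, thresholds, and values below are hypothetical, not the team's actual code), an emotional state can be represented as a point in this three-dimensional space:

```python
from dataclasses import dataclass

@dataclass
class EmotionalState:
    """A point in the three-channel space described above.

    valence:   positive (+1) vs. negative (-1) mood
    arousal:   excited (+1) vs. calm (-1)
    dominance: who leads the interaction (+1 robot-led, -1 human-led)
    """
    valence: float
    arousal: float
    dominance: float

    def describe(self) -> str:
        """Summarize the two mood channels as a coarse label."""
        mood = "positive" if self.valence >= 0 else "negative"
        energy = "excited" if self.arousal >= 0 else "calm"
        return f"{mood}, {energy}"

# Example: a cheerful, lively user who is leading the conversation
state = EmotionalState(valence=0.7, arousal=0.4, dominance=-0.3)
print(state.describe())  # -> positive, excited
```

A continuous space like this lets the robot react to gradual shifts in mood rather than waiting for one of a handful of discrete emotions to be unambiguously displayed.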

Robot-human interaction

The aim here is to ensure that the robot speaks and reacts according to the human’s changing emotional conditions, right?

Binnur Görer - Yes, communication between the robot and the human breaks off easily when the robot reacts independently of the human's emotional state, because the robot is not observing the human. Knowing that the robot observes you and takes input from you is important for establishing social communication. We are investigating the question: "When and how should the robot give feedback using the emotional signals coming from the person?"

What kind of research study are you conducting in order for the robot to figure out facial expressions? 

Binnur Görer: Different methods can be developed to recognize facial expressions. Basic moods are generally categorized into six emotions: happiness, sadness, surprise, fear, disgust, and anger… Experiments based on these categories are generally not very applicable to the real world, because in laboratory studies people are asked to demonstrate certain emotions and the robot is expected to act accordingly. Actors are used to produce these expressions, because they show the emotions in a clear, exaggerated way. In reality, however, we display our moods in very subtle ways.
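The six-category scheme Görer mentions (often attributed to Ekman's basic-emotions work) can be sketched as a simple enumeration; this is purely illustrative, not the project's code:

```python
from enum import Enum

class BasicEmotion(Enum):
    """The six basic emotion categories commonly used in expression studies."""
    HAPPINESS = "happiness"
    SADNESS = "sadness"
    SURPRISE = "surprise"
    FEAR = "fear"
    DISGUST = "disgust"
    ANGER = "anger"

# A recognizer trained on acted, exaggerated expressions would emit one of
# these labels per face image; real-world expressions are far subtler, which
# is the limitation discussed above.
print(sorted(e.value for e in BasicEmotion))
```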

We as humans can recognize these details. I do not need someone to be quiet and sad for half an hour to understand that they did not like what I said; we do not act like that. Actually, this is a new field of study: human-robot interaction has only a 10- to 15-year history. The systems produced so far have been developed to confirm hypotheses, but they are not very applicable or dynamic.

Levent Akın: They are not dynamic because, for example, consider a photograph of you right now. That picture tells us something, but in reality you are talking and moving, not standing still. What I mean by change is that you are in a live interaction: the position of your face is changing constantly.

Binnur Görer: That is why long-term learning and interaction models are the goal, and that is what we want to do. We expect the robot to learn this through long-term interaction.

İbrahim Özcan: I have worked on a robot face I called the "Boğaziçi University Social Robotic Assistant", or BUSRA for short. To express emotions, the face had to be able to show six different moods. Binnur and I discussed what degrees of freedom it needed. We used pull-push tendon systems for its eyes. I designed the mouth structure myself, because I could not find an existing design with a high degree of freedom that could be easily manufactured.

We used springs for the lips. When we talk, we make the sounds for a, o, and u; we used springs in the robot to achieve that effect, so the robot can mimic talking as well as express itself. In fact, this is a speaking system: the robot can understand you and give simple answers when you talk to it. Unlike the current static designs, we are trying to equip the robot with human-like mouth motions for human-robot interaction.

In terms of areas of use, in which areas will this robot be used? Health, for example, or other specialized areas?

Levent Akın - It is possible to do such things with artificial intelligence systems; of course, other inputs are needed too. There are systems that can do that without a robot doctor; we call these expert systems. We, on the other hand, look more at the robotic interaction side. This robot does not need to be a medical doctor; an ordinary caregiver role will do. But even that is a very complex system.

This humanoid robot will not be legged, so I guess you have devised a different system for its movement?

Levent Akın - We do not mean a two-legged robot when we talk about a humanoid robot. Why? We could do that, but making a robot walk is very difficult, whereas a wheeled system has much higher stability. The robot's motions should be smooth so that it can interact with people continuously and regularly; we use wheels for this purpose.

How long do you think it will take for this project to turn into a product?

Levent Akın - We are still at the beginning; it could take a few years. We are working on several problems. For example, our researcher Yiğit Yıldırım is looking into the robot's movement in human-populated environments. Another researcher, İbrahim Özcan, is working on the faces. Another colleague is investigating the ethical side of the matter. For example, when the robot enters a room, is it just any room, or the bedroom of the house it stays in? The robot has to understand this and recognize the things in the room. It needs to remember where things belong and, beyond that, it should be able to put displaced items back in their places. There are many issues like these, and we are working on them. At the same time, we use various sensors that will gradually provide more detailed input and become much cheaper. I believe these developments will help us progress further.

We have already been seeing some models of humanoid robots on the market in recent years…

Levent Akın - When it comes to robots, people usually think of a humanlike structure, not something with wheels, partly under the influence of science-fiction movies. That is a psychological reaction; from an engineering perspective, however, it has to be a goal-oriented project. There are many problems: one is that you have to make the robot cheap; another is that people should be able to use it and believe that it will do the job. They need to be able to interact with it in order to believe that it will work. This is what we are now addressing: the robot can provide certain services, and it should be able to fulfil them. These are quite difficult issues from our perspective.

How will your robot understand emotions?

Levent Akın - Let's think of it this way: we have met for the first time and do not have much idea about each other, but as our acquaintance progresses, models of one another begin to form in our heads. The robot should also be able to do this. "They didn't react to this; what does that normally mean?" The robot may not be able to extract new information from a particular interaction, but in time it will learn, for example, the meaning of your silence, just as we do. Our future goal is for the robot to form a model of the person it interacts with and react accordingly. The service the robot provides does not necessarily have to be related to health and care. It could be just a congenial companion: it could chat with you, or tell you about newly published books by accessing the internet…

How will the robot’s learning process be, will it be based on imitations?

Binnur Görer: If we talk about learning through imitation in terms of scientific robot development, we can think of it this way: we will record human communication, in pairs and in groups, in different situations, for example our current conversation. We will make many recordings, which means a lot of data. If we feed this data into such a system, the robot will learn from it.
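The pipeline Görer describes, many recorded sessions flattened into one training corpus, can be sketched as follows. All type and field names here are illustrative assumptions, not the project's actual data format:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class InteractionFrame:
    """One time-step of a recorded human-human conversation."""
    timestamp: float        # seconds from the start of the recording
    expression: str         # coarse facial-expression label from a tracker
    gesture: str            # e.g. "nod", "shrug", "none"
    speech: str             # transcript of what was said, if anything

@dataclass
class Recording:
    """One recorded session; many of these form the training corpus."""
    participants: int
    frames: List[InteractionFrame] = field(default_factory=list)

def build_corpus(recordings: List[Recording]) -> List[InteractionFrame]:
    """Flatten all sessions into one list of training examples."""
    return [frame for rec in recordings for frame in rec.frames]

# A toy corpus: one two-person session with two frames
session = Recording(participants=2, frames=[
    InteractionFrame(0.0, "smile", "nod", "Hello!"),
    InteractionFrame(1.5, "neutral", "none", "Hi there."),
])
corpus = build_corpus([session])
print(len(corpus))  # -> 2
```

The point of the flat corpus is scale: an imitation learner sees the same kinds of (expression, gesture, speech) examples regardless of which session they came from.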

Will this robot have a name?

Binnur Görer: I called it "I am for you" in my thesis, especially because it was going to provide emotional support. We have not thought of anything else so far.

This project is also about a real human-robot friendship, right?  

Levent Akın - Knowing that the robot recognizes them, and feeling how the robot improves day after day, will help the person get more involved in the activity. This is our goal. We are talking about a robot and a human who interact over the long run. Our expectation is to make people's lives easier and, in the long term, to ensure that they do only what they want to do.

The researchers should be supported

Aside from your pioneering projects at Boğaziçi University, what would you like to say about our country's performance in the field of artificial intelligence compared with the rest of the world?

Levent Akın - In my opinion, human resources and education are the biggest deficiencies in the field of artificial intelligence. To work on artificial intelligence, it is necessary to be educated in other areas in addition to computer engineering. A foreign language (English) is a must, because almost all sources are in English. High-level programming skills and patience are also necessary.


In our country, research in this field is mostly conducted at universities, whereas in the US and many other developed countries it is largely carried out at private companies. In my opinion, companies in Turkey currently have a very limited vision: they aim to make a profit immediately, in the short term, and therefore do not allocate resources to projects that would bring much more profit in the long term. For this reason, supporting start-ups in the area of artificial intelligence could lead to the emergence of more products. TÜBİTAK should not overlook the fact that the real resource it supports in its projects is human resources. Young researchers working on these projects are paid half of what research assistants at universities make, while the wages paid to such researchers at companies are generally much higher. That is why it is very hard to find researchers for long-term academic projects, and even when found, they usually leave for better-paying jobs (generally outside artificial intelligence). More importantly, researchers are needed all over the world, and there is high demand for well-trained ones.

According to LinkedIn research, there are currently about 1,900,000 artificial intelligence engineers in the world, 1,000,000 of them in the US. China, by comparison, has 50,000. According to predictions published in the People's Daily newspaper, China will need 5 million artificial intelligence engineers in the near future. Even though China has allocated considerable resources to this field, the number of institutions providing artificial intelligence education in the US is six times that of China.

In addition, roadmaps have already been created elsewhere, with the EU leading the way, and investments and education programs are being directed accordingly. Turkey also needs to create an artificial intelligence roadmap in order to mobilize the necessary resources. This roadmap should cover short- and long-term goals and the training of human resources, as well as support for related projects. It should be implemented faithfully and updated when needed.

To watch researcher Binnur Görer's video "Real-time imitation of facial expressions on Robotic Head Fritz":