He was one of the keynote guests at the latest ICT Spring tech exhibition, which was held in a phygital format. Dr. Balázs Kégl, Head of AI Research at Huawei France, took the controls for 15 minutes and detailed the flight plan for improving AI process management. His motto? Putting humans (back) in the loop. Here is our interview with this outstanding expert.
Photo: Dr. Balázs Kégl, Head of AI Research at Huawei France / Credit © Huawei Technologies
What is your role and mission at Huawei?
I lead a team of fifteen AI researchers and engineers at Huawei’s Paris Research Center. Our main mission is to make engineering systems more efficient, safer, and less energy-hungry using AI.
Examples of such systems are the wireless telecommunication stations that carry a large chunk of our communication, or the data centers that consume immense amounts of energy to serve the world’s computational needs. The airplane is a good analogy: these systems are “driven” (controlled) by highly trained systems engineers (the “pilots”) whose task is to run them safely and efficiently. Our goal is to make their job easier by adding automation.
What I love about this mission is not only the beautiful technical challenges but also the human aspect: it is essential to get these engineers on board, both as end users of our algorithms and as co-developers.
Is AI really that artificial?
That’s an interesting question. Most of the time it is the “intelligence” part that is questioned. “Artificial” in this context means that it is not an organically developed intelligence. The goal is to make it look as “natural” as possible: to create systems that behave intelligently, indistinguishably from human intelligence.
“Most of the current controversies around AI are in systems where AI was put into a fully automated loop, without human supervision”.
If algorithms can improve themselves by confronting themselves, what is the place of humans in the process?
Another great question! Look at AlphaGo, the algorithm that beat Lee Sedol, the world champion of Go, and its successor AlphaZero, which today far exceeds the level of any human player. These algorithms are in their own league, and there is no reason to play against them. So one could have predicted the end of Go and chess: why invest in a game where humans have no chance against a machine? Yet the opposite happened: these games are now reinvigorated. Human pros are using these algorithms to invent new strategies, unorthodox moves that make the games beautiful. We are in a new era where human players cooperate with AI instead of competing with it.
I expect similar dynamics in other areas. I don’t believe in the dystopia of a singular runaway AI that will destroy us. I see AI rather as a partner, a mate we can discuss with, ask questions of, use to do complex chores, and ask to evaluate possible strategies to support our decisions.
Again, the airplane is a great example: the autopilot did not eliminate the human pilot, it made her job easier. Most of the time the plane is flown by the autopilot, but the human pilot can take back the controls at any time.
In fact, most of the current controversies around AI concern systems where AI was put into a fully automated loop, without human supervision. Think about the delicate task of content moderation on social media sites, or the automated face-recognition systems that the police may use today.
How can we take back control from algorithms that are in control?
Well, we don’t have to take over; we need to design systems where the algorithm is not in control but in a decision-support role. This approach has its own challenges, and to a certain extent they are more difficult than the technical challenges of building a fully automated system: we need to design interfaces that support the communication between the AI and the engineer.
Look at the Paris metro. The original setup is one driver per train. On the automated lines this has changed: trains are driven automatically, but engineers in a control room oversee the whole line and can step in if something unexpected happens. How that dashboard in the control room looks is as important as the design of the train automation itself.
“The first role of the Data Value Architect is to break that vicious circle by establishing a clear value proposition to the decision maker”.
When you say that we need to put people back in the loop of the process, what do you mean exactly?
When software serves human clients (like apps), user experience design becomes crucial. It is the same with human-facing AI: we need to design AI systems we can “talk to”. On top of this classical human-in-the-loop issue, there is something peculiar about the AI process. In classical software design there is usually a single handshake: the client specifies what she wants and the software engineer develops the software according to the specifications. AI is inherently iterative. AI algorithms are empty shells without data, and data comes from the client: in our case, the system engineer. It is crucial to involve them in the development process.
A lot of projects die at birth because of the lack of data. Data collection, especially collecting the high-quality data that AI algorithms require, is expensive. It is usually a “side job” of the system engineer: the plane may log its system state from time to time, mainly for safety reasons, but AI may need data from more sensors and at a higher frequency. Higher-level executives are reluctant to invest in this without seeing early signs of value creation, and the data scientist cannot show that value without access to quality data. The most important project management task at the beginning is to break this vicious circle.
What type of personalities and skills do we need to achieve AI projects? What are their missions and goals?
I already mentioned the two main technical roles: the data scientist and the domain expert (the pilot). They can rarely work together spontaneously, so we need a crucial middle person. This is a complex role. I call it the Data Value Architect.
The first role of the DVA is to break that vicious circle by establishing a clear value proposition to the decision maker. The best approach is to launch a small, well-directed pilot that jump-starts the data-collection–AI-development cycle. The most important thing here is to stay lean and control investments by moving towards the value.
Once the value is established, the data is collected, and the first AI algorithms are trained, the role of the DVA becomes change management: the deployed AI algorithm often changes the way the domain expert works. It may replace or displace them (from the driver’s seat of the train to the control room of the metro line), which means handling understandable resistance.
To manage this, the DVA must be almost superhuman: understanding the technical aspects of both data science and the application domain, while also managing the psychological effects on the various participants. Unsurprisingly, this is rarely one person. Successful consulting companies (and internal projects) usually organize “commando” teams of four to six people with both technical and change-management expertise.
“Why would AI want to govern our lives? The real danger is _humans_ with immense and ever more concentrated power”.
What do you think of the newly created role of the CDO (Chief Data Officer)? How can they play a key role in the company?
The biggest mistake a lot of companies make is to hand data science projects to IT. IT does what it knows how to do: buy machines and install big data software, then wait for projects to be served. At the same time, data scientists are hired, producing one prototype after another. Instead, at Huawei we believe that the successful strategy is to start with the business units, the domain experts, and ask them how they create value and where in that value creation AI could step in. The CDO should have a strong but narrow mandate, with resources to run value-creation workshops and pilot projects, selecting one or two to implement. The CDO should be on good terms with IT but need not necessarily be part of IT.
Can we imagine AI going beyond human intelligence and governing our lives?
I think those are two different questions. AI will go beyond human intelligence. Governing our lives? I don’t think so. Why would AI want to govern our lives? The real danger is _humans_ with immense and ever more concentrated power who are willing and able to govern our lives. AI enables them, but it is not the algorithm we need to control, it is the human. This, however, requires classical political tools and actions.