The Unreal: Making Synthetic, Authentic

André Meyer, Head of the Cybersecurity & Defense Consultancy Practice at Accenture Luxembourg (Photo © Marion Dessard)

Artificial intelligence (AI) is advancing rapidly and, whilst it enables industries to innovate more quickly and safely than ever, it also opens up possibilities for the manipulation of people and processes on a major scale. André Meyer, Head of the Cybersecurity & Defense Consultancy Practice at Accenture Luxembourg, explains why, if we develop AI responsibly, the benefits outweigh the risks.

AI helps businesses innovate faster and more safely

The third trend identified in Accenture’s Tech Vision 2022 is the increasing use of AI to make the synthetic seem real, ranging from chatbots to deepfakes. We already know that AI creates opportunities to relieve humans of tasks that are time-consuming, boring, or dangerous.

Automation is one example. As Meyer explains, although automation is not new, the advance of AI is radically changing it. “In the past, automation was about providing a sequence for a repeatable task. Today, AI learns from possible automation options based on past datasets or limits that we define, then helps you to accelerate this.”

Machine learning (ML) is another example, and some businesses have been using it for years. Spotify, for instance, leverages ML to offer music recommendations by analyzing what you like, what time of day you listen, etc., and consolidating these for you. “As a result, you don’t have to waste time looking for songs, and you’ll discover new artists,” adds Meyer.
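The article does not describe Spotify’s actual system, but the basic idea behind this kind of recommendation can be sketched with a toy example: represent a listener and each candidate track as vectors of made-up genre play counts, then rank tracks by cosine similarity to the listener’s profile. All names and numbers below are illustrative assumptions, not Spotify’s method.

```python
import math

def cosine_similarity(a, b):
    """Similarity of two vectors: 1.0 means identical direction, 0.0 unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical listening profile: play counts per genre [rock, jazz, electronic].
user_profile = [10, 2, 0]

# Hypothetical candidate tracks, described in the same genre space.
candidate_tracks = {
    "track_a": [8, 1, 0],  # rock-heavy, close to the user's taste
    "track_b": [0, 9, 3],  # mostly jazz
}

# Rank candidates by similarity to the user's profile, best match first.
ranked = sorted(
    candidate_tracks,
    key=lambda t: cosine_similarity(user_profile, candidate_tracks[t]),
    reverse=True,
)
print(ranked[0])  # → track_a
```

Real systems combine many more signals (listening time, collaborative filtering across users, audio features), but the ranking-by-similarity step above is the core intuition.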

AI is also used in medical research. If you want to understand the behavior of a virus, or the side effects of a new drug, instead of testing it on people or animals, you can run simulations based on millions of data sets. The benefit, Meyer highlights, is that “no-one suffers. Plus, it’s faster, cheaper, and you get the same results – even better ones – because of the enormous amount of data processed in a short time.”

Startups lead AI innovation in Luxembourg

AI is already being widely leveraged in Luxembourg. In 2020, the government launched the AI4Gov public tender to pinpoint projects using AI for menial or repetitive administrative tasks. Eight projects have already been identified.

However, according to Meyer, the most interesting innovations in this area are coming from startups. Luxembourg-based EmailTree, for example, uses AI to filter out ‘noise’ in emails, identifying what’s interesting and creating useful summaries. Another example is MoodMe, which uses facial recognition to understand a person’s emotion in a particular moment. Meyer highlights its potential use in online sales or health consultation contexts where body language is harder to read via a screen: “its AI registers all the little hints that might otherwise be missed.”

“We just need to make sure that we teach AI the right things. AI is a human creation, and we should treat it like a child. If we teach it good manners and behaviors, it will help us. If we make a brat out of it, it will bite us.”

André Meyer, Head of the Cybersecurity & Defense Consultancy Practice at Accenture Luxembourg

The challenge of trust remains

Using AI to understand someone’s emotions might seem unsettling. And this highlights the challenge of AI: building trust. As Meyer highlights, “AI has enormous advantages, but it can also get very scary. You see this with targeted ads. On the one hand it’s convenient to only see relevant ads but, on the other, how did they find that out about you?” Meyer sees a fine line between the benefits AI can bring to businesses, and how people can be manipulated by the data they share.

Another challenge is ensuring human bias doesn’t filter through – both via the choice of data that’s collected, and the way algorithms are trained. As Meyer explains, “we’ve already seen that some AI has bias towards certain races because the data itself was flawed. AI is like a child – it only learns what it is taught.”
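Meyer’s point that “the data itself was flawed” can be made concrete with a toy sketch (entirely invented for illustration, not from the article): if historical decisions were skewed against one group, a model that simply learns from those records reproduces the skew.

```python
# Toy illustration: "train" on made-up historical approval records in which
# group "B" was approved far less often than group "A".
historical_data = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", False), ("B", True),
]

def train(records):
    """Learn the approval rate per group -- a stand-in for model training."""
    rates = {}
    for group, approved in records:
        stats = rates.setdefault(group, [0, 0])  # [approvals, total]
        stats[0] += approved  # True counts as 1
        stats[1] += 1
    return {g: approvals / total for g, (approvals, total) in rates.items()}

model = train(historical_data)
# The "model" now approves group A 75% of the time and group B only 25%.
# The disparity comes from the flawed data, not from the algorithm itself.
```

This is why auditing the training data matters as much as auditing the algorithm: the code above is perfectly neutral, yet its output is not.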

According to Meyer, “trust is based on transparency. Tell people why and how you are using AI to make sure that it actually works. You also need to incentivize people to try the technology and experience the benefits. For example, the Luxembourg government uses AI to make tax returns easier. You – as a taxpayer – can see this for yourself when you go through the process and come out with the right result.”

However, trust can be challenged when malicious actors manipulate AI. As ML has improved, we’ve seen a rise in deepfakes which, in certain contexts (such as manipulating real-life events), can be very dangerous. Meyer explains: “Look at TikTok. Imagine a deepfake of something happening somewhere that you’re not familiar with. It is short – you have very little time to work out if it is real or not – it might be very impactful and it goes viral. All this can be generated very easily. Anyone can do it at home today using existing technology. People need to be aware of this and never accept things at face value.”

The benefits of AI outweigh the risks

Ultimately, Meyer believes the benefits of AI outweigh the risks. “We just need to make sure that we teach AI the right things. AI is a human creation, and we should treat it like a child. If we teach it good manners and behaviors, it will help us. If we make a brat out of it, it will bite us.”

With that in mind, companies should prepare for a future with more advanced AI technology. According to Meyer, they should start preparing by really understanding what their business does. “Many companies think they know, but if you dig deeper, you find a general appreciation of what their product or service is, but not really how they get there. To be able to teach a machine or an algorithm to help you, you need to know what to teach it.”

Once that understanding is in place, get the experts in. “Technology changes so fast, it’s important to work with people who can keep pace with it.”
