Europe hopes to set the global standard for AI ethics for years to come through the Artificial Intelligence Act. Silicon unpacks the draft legislation and examines how it could impact startups.
In a week when major European tech players gathered at the Vivatech conference in Paris and at London Tech Week, technology also dominated the European Parliament. On 14 June it adopted a negotiating position on the AI Act, draft legislation that could still change in talks with EU member states. The rules aim to ensure that AI use and development in Europe aligns with EU values and rights.
The Act classifies AI systems based on the risk they pose. AI systems posing an unacceptable level of risk to human safety would be prohibited. The document cites social scoring (classifying people based on their social behaviour or personal characteristics) as an example of unacceptable risk. Also banned will be intrusive and discriminatory uses of AI such as:
- “real-time” remote biometric identification systems in publicly accessible spaces;
- “post” remote biometric identification systems, with the sole exception of use by law enforcement for the prosecution of serious crimes, and only after judicial authorisation;
- biometric categorisation systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation);
- predictive policing systems (based on profiling, location or past criminal behaviour);
- emotion recognition systems in law enforcement, border management, the workplace, and educational institutions; and
- untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases (violating human rights and right to privacy).
High-risk AI systems are those deemed to cause “significant harm to people’s health, safety, fundamental rights or the environment”. AI systems “used to influence voters and the outcome of elections and in recommender systems used by social media platforms (with over 45 million users)” also form part of the high-risk list.
General purpose AI
Under the draft, companies and individuals providing foundation models (large machine learning models trained on vast quantities of data at scale, often by self-supervised learning) must assess and mitigate possible risks (to health, safety, fundamental rights, the environment, democracy and the rule of law) and register their models in the EU database before releasing them on the EU market. Generative AI systems such as ChatGPT will have to be transparent and disclose when content has been AI-generated. They will need to introduce safeguards so that their systems don’t generate illegal content, and they must make publicly available detailed summaries of any copyrighted data used for training.
Research activities and AI components provided under open-source licences will be exempted from the law. Meanwhile, the legislation promotes the use of regulatory sandboxes established by public authorities to test AI before it is deployed.
Impact on startups
The legislation is still several years away from being fully implemented. Once it passes into law, businesses will need to audit any AI-based software they use against the AI Act’s risk categories and implement compliance measures. Extra work and costs aside, startup players have raised concerns about what the Act will mean for AI startups and for Europe’s competitiveness.
In a 2022 survey of 15 venture capital firms at the European AI Forum, almost three-quarters of respondents expected the AI Act to significantly reduce the competitiveness of European AI startups. The same survey suggested that the legislation may also influence investment in the sector in Europe. VCs also said they would be more inclined to focus on startups active in the AI Act’s low-risk category.
“Similar to our observations with the implementation of GDPR, the AI Act could potentially have an indirect negative impact on us in the future if companies become more hesitant or more reluctant to use AI on their data,” said Assaad Moawad, co-founder of Luxembourg intelligent software systems developer Datathings, adding: “Therefore, we believe that conducting an awareness campaign is necessary to educate our prospective clients about the boundaries of the AI Act, ensuring it is not perceived as an obstacle to innovation.”
Olivier Debeugny, founder and president of fintech Lingua Custodia, said his company has always been transparent with customers about the risks and opportunities of its AI systems, which largely depend on the usage and autonomy of the solution. Debeugny accepted that the Parliament’s risk-based approach in relation to AI autonomy was “practical” and expected it to create new opportunities for first movers in AI compliance.
“As an SME AI system producer and an AI expert, we trust that the EU will figure out how to make this new regulation applicable in real life for us and, above all, our clients and we see those new developments as an opportunity to be part of a new emerging market: the AI compliance business,” he said.
Romit Choudhury, co-founder of AI customer feedback platform provider Softbrik, was mostly optimistic. “By their very nature start-ups are agile, which means that we are in a better position to set up these mechanisms from scratch compared to large institutions,” he said.
Choudhury praised the creation of innovation sandboxes for testing algorithms, having had a positive experience working with an EU Horizon Project sandbox in 2022. Nevertheless, he said: “The lack of clarity in high-risk compliance or information of the quality of innovation sandboxes make entrepreneurs fear that we will slow down further our adoption of AI.”