LIST Develops AI Regulatory Sandboxes And Releases Ethical AI Leaderboard

Francesco Ferrero, director of the IT for innovative services department at LIST (Photo © LIST)

At the AIMMES 2024 conference in Amsterdam, the Luxembourg Institute of Science and Technology (LIST) unveiled its latest work on AI regulatory sandboxes and ethical bias leaderboard.

With the European Union AI Act well on its way, underscoring the importance of inclusive development and equal access to AI technologies while mitigating discriminatory impacts and biases has become a priority for the R&D community. 

In light of these developments, LIST has developed AI regulatory sandboxes which provide supervised testing environments where new AI technologies can undergo trials in a regulatory-compliant fashion.

“Our AI sandbox aligns closely with these objectives, providing a platform for testing and refining AI systems within a compliance-centric framework. This is not the regulatory sandbox envisaged by the AI Act, which will be set up by the agency that will oversee the implementation of the regulation, but it is a first step in that direction.”

Francesco Ferrero, director of the IT for innovative services department at LIST.

16 LLMs x 7 biases 

The development of LIST’s ethical bias leaderboard evaluates 16 LLMs on seven ethical biases, including ageism, LGBTIQ+ phobia, political bias, racism, religious bias, sexism, and xenophobia. This platform aims to provide transparency by showcasing each model’s performance across different biases, while facilitating user engagement. 

Jordi Cabot, head of the software engineering RDI unit at LIST and team leader of this project, said: “The architecture of the leaderboard is designed to offer transparency and facilitate user engagement. Users can access detailed information about the biases, examples of passed and failed tests, and even contribute to the platform by suggesting new models or tests.” Users can now access detailed information about biases, examples of passed and failed tests, and even contribute to the platform by suggesting new models or tests.

Considering recent insights gained from building the leaderboard, LIST recognizes the importance of context in selecting LLMs and the significance of larger models exhibiting lower biases. Encountered challenges during evaluations have highlighted the necessity for transparency in assessment procedures.

Advancing fairness and equality

LIST remains active in advancing AI research, while encouraging an environment that promotes fairness, transparency, and accountability in AI technologies. Partially funded by the Luxembourg National Research Fund (FNR) through the PEARL Program, the Spanish government, and the TRANSACT project, this work represents an advancement in the evolution of AI regulations and ethics, that have become the talk of the AI community.

The role of explainability in fostering trust and facilitating continuous improvement in AI technologies is crucial to future advancements. These efforts and collaborations aim to increase awareness about the limitations of AI and promote responsible usage of LLMs and other AI tools.

Total
0
Shares
Related Posts
Total
0
Share