Symposium on

Scaling AI Assessments

Tools, Ecosystems and Business Models

September 30th & October 1st, 2024 | Design Offices Dominium, Cologne

Motivation

Trustworthy AI is considered a key prerequisite for Artificial Intelligence (AI) applications. Especially against the background of European AI regulation, AI conformity assessment procedures are of particular importance, both for specific use cases and for general-purpose models. But in non-regulated domains, too, the quality of AI systems is a decisive factor, as unintended behavior can lead to serious financial and reputational damage. As a result, there is a great need for AI audits and assessments, and a corresponding market is indeed forming. At the same time, there are still technical and legal challenges in conducting the required assessments, and extensive practical experience in evaluating different AI systems is lacking. Overall, the first marketable, commercial AI assessment offerings are only just emerging, and a definitive, distinct procedure for AI quality assurance has not yet been established.


AI assessments require further operationalization, both at the level of governance and related processes and at the system/product level. Empirical research that tests and evaluates governance frameworks, assessment criteria, AI quality KPIs and methodologies in practice for different AI use cases is still pending.

Conducting AI assessments in practice requires a testing ecosystem and tool support, as many quality KPIs cannot be calculated otherwise. At the same time, automating such assessments is a prerequisite for making the corresponding business models scale.

Target group

This symposium is aimed particularly at practitioners from the TIC sector, tech start-ups offering solutions to the above-mentioned challenges, and researchers in the field of trustworthy AI. Since a further objective of the symposium is to clarify the legal framework for AI, particularly with regard to the recently adopted AI Act of the European Union and its operationalization, it is also aimed at legal experts in the field of AI. The conference places particular emphasis on the involvement of young researchers alongside more experienced participants.

Attending & Registration

Participation is possible in three different ways. First, we offer a conventional academic track that accepts full papers. Second, we offer a practitioner track that is particularly suitable for industry or start-up representatives. Third, it is also possible to attend the conference without submitting your own work. This provides the opportunity to catch up on the latest research insights, present and discuss practical challenges, and identify possible ways to test promising approaches in practice.


  • Academic track: This track is suitable for scientific papers that present or discuss novel ideas on concepts, technology or methodologies related to the topics listed above. Submissions should include a clear description of the problem statement / research gap that is addressed, and the contribution of the research presented.
  • Practitioner track: This track offers a presentation slot during the conference. To apply, a brief abstract of the topic to be presented must be submitted so that we can review whether the presentation fits the conference theme. It is possible to submit extracts from whitepapers or position papers that have already been published.
  • Attendance of the conference: Besides one of the above tracks, it is also possible to register for sole attendance of the conference. This involves active participation in the exchange formats. To this end, participants should select a suitable field of interest (either “Operationalization of market-ready AI assessment” or “Testing tools and implementation methods for trustworthy AI products”).

Keynotes & Panel

Keynote speakers

An overview of the results of Confiance.ai towards trustworthy AI systems for critical applications

Bertrand Braunschweig
Scientific Coordinator of the Confiance.ai Programme

Abstract

I will present the results obtained by the programme after four years of joint work by our group of major industrial players, academics and research institutions. I will show some practical results (software tools, environments), our major scientific results addressing several factors of trustworthy AI, and our vision for the future, including the challenges ahead. Our results can and will contribute to AI trustworthiness improvement and assessment, in support of some key requirements of the AI Act.

Bio

Bertrand Braunschweig began his career in the oil industry as a researcher in systems dynamics and artificial intelligence. He then joined IFP Énergies Nouvelles to manage AI research and coordinate international interoperability standards projects. He spent five years at the ANR as head of the ICST department before joining Inria in 2011 as director of the Rennes research centre, then of the Saclay research centre, and later steering the research component of the national AI strategy. He is now an independent consultant and provides support to various organisations, notably as scientific coordinator of the Confiance.ai programme operated by IRT SystemX.

Bertrand Braunschweig is an ENSIIE engineer and holds a PhD from Paris Dauphine University and a Habilitation from Pierre and Marie Curie University.

Lessons Learned from Assessing Trustworthy AI in Practice

Roberto V. Zicari
Z-Inspection® Initiative Lead

Abstract
Building artificial intelligence (AI) systems that adhere to ethical standards is a complex problem. Even though a multitude of guidelines for the design and development of such trustworthy AI systems exist, these guidelines focus on high-level and abstract requirements for AI systems, and it is often very difficult to assess if a specific system fulfills these requirements.

The Z-Inspection® process provides a holistic and dynamic framework to evaluate the trustworthiness of specific AI systems at different stages of the AI lifecycle, including intended use, design, and development. It focuses, in particular, on the discussion and identification of ethical issues and tensions through the analysis of socio-technical scenarios and a requirement-based framework for ethical and trustworthy AI.

This talk is a methodological reflection on the Z-Inspection® process. I will illustrate how high-level guidelines for ethical and trustworthy AI can be applied in practice and provide insights for both AI researchers and AI practitioners. I will share the lessons learned from conducting a series of independent assessments to evaluate the trustworthiness of real-world AI systems, as well as key recommendations and practical suggestions on how to ensure a rigorous trustworthiness assessment throughout the lifecycle of an AI system.

Bio
Roberto V. Zicari is an affiliated professor at Yrkeshögskolan Arcada, Helsinki, Finland, and an adjunct professor at Seoul National University, South Korea.

Roberto V. Zicari is leading a team of international experts who defined an assessment process for Trustworthy AI, called Z-Inspection®.

Previously he was professor of Database and Information Systems (DBIS) at the Goethe University Frankfurt, Germany, where he founded the Frankfurt Big Data Lab.

He is an internationally recognized expert in the field of Databases and Big Data. His interests also extend to Ethics and AI, Innovation and Entrepreneurship. He is the editor of the ODBMS.org web portal and of the ODBMS Industry Watch Blog. He was for several years a visiting professor with the Center for Entrepreneurship and Technology within the Department of Industrial Engineering and Operations Research at UC Berkeley (USA).

Panel with legal experts

  • What requirements does the AI Act impose on generative AI, and how can compliance be ensured?
  • Drawing from a specific case study, what distinct challenges do high-risk AI systems pose, and how does the AI Act address them?
  • How does the AI Act intertwine with other complementary legislative frameworks such as the GDPR?

These questions will be addressed in a series of presentations given by selected legal experts, followed by a discussion.
Andreas Engel

Abstract
Regulation of Generative AI by the AI Act: In addition to the general risk-based approach, the AI Act contains specific rules for generative AI and foundation models: obligations regarding documentation and information for providers of general-purpose AI models, additional obligations regarding systemic risk, and transparency obligations for AI-generated content. This presentation briefly introduces these obligations and discusses the various mechanisms in the AI Act to promote compliance and to make the AI Act future-proof by maintaining flexibility.

Bio
Andreas Engel researches the challenges of digitalisation, primarily from a private law perspective. He is a co-editor of the Oxford Handbook of the Foundations and Regulation of Generative AI (forthcoming) and a senior research fellow at the University of Heidelberg. He studied in Munich, Oxford and at Yale Law School and wrote his doctoral thesis at the Max Planck Institute for Comparative and International Private Law in Hamburg.

Dimitrios Linardatos

Abstract
His talk will explore the regulatory framework for high-risk AI systems in the AI Act, analyzing its provisions, the criteria for classifying AI systems as high-risk, and their implications. It will cover the main legal, technical, and ethical standards, along with specific requirements and compliance obligations for developers and operators of high-risk AI.

Bio
Prof. Dr. Dimitrios Linardatos holds the Chair of Civil Law, Digitalisation Law, and Business Law at Saarland University. His research focuses on digital economy topics, especially liability law.

Mark Cole

Abstract
His talk will put the AI Act into context in a twofold manner: firstly, it will show its place in and interconnection with an already complex regulatory framework for the digital and online environment in the EU, including questions of obligations as well as oversight and monitoring. Secondly, taking a brief look beyond the EU, it will compare the AI Act's potential and approach with, for example, the Council of Europe's Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law.

Bio
Professor Dr iur. Mark D. Cole is Professor of Media and Telecommunications Law at the University of Luxembourg (since 2007) and Director for Academic Affairs of the Institute for European Media Law (EMR) in Saarbrücken (since 2014). He specialises in European and comparative media law across the entire spectrum from the legal framework for traditional media to regulatory issues for the internet, data protection and copyright law and is co-editor of AIRe – Journal of AI Regulation and Law.

Program

Preliminary (last updated September 26, 2024)

Day 1

Monday, September 30th

Welcome (9:45)

Day 1 Session 1
Safeguarding and assessment methods

Coffee Break

Keynote
An overview of the results of Confiance.ai towards trustworthy AI systems for critical applications
Bertrand Braunschweig

Day 1 Session 2
Risk Assessment & Evaluations

Lunch Break

Day 1 Session 3
Ethics

Day 1 Session 4
Standards

Coffee Break

Panel Discussion with
Legal Experts

Closing (17:45)

Get-Together Dinner
at the local brewery restaurant “Brauhaus Früh am Dom”
(self-paid, optional)

Day 2

Tuesday, October 1st

Welcome (9:00)

Day 2 Session 1
Governance and regulations

Coffee Break

Day 2 Session 2
Transparency + XAI

Lunch Break

Keynote
Lessons Learned from Assessing Trustworthy AI in Practice
Roberto Zicari

Invited Talk
MISSION KI - Development of a Voluntary AI Quality and Testing Standard
Carolin Anderson

Coffee Break

Day 2 Session 3
Certification

Closing (16:00)

Accepted Papers

  • Sergio Genovesi. Introducing an AI Governance Framework in Financial Organizations. Best Practices in Implementing the EU AI Act.
  • Benjamin Fresz, Danilo Brajovic and Marco Huber. AI Certification: Empirical Investigations into Possible Cul-de-sacs and Ways Forward.
  • Nicholas Kluge Corrêa, James William Santos, Éderson de Almeida, Marcelo Pasetti, Dieine Schiavon, Mateus Panizzon and Nythamar De Oliveira. Codes of Ethics in IT: do they matter?
  • Fabian Langer, Elisabeth Pachl, Thora Markert and Jeanette Lorenz. A View on Vulnerabilities: The Security Challenges of XAI.
  • Oliver Müller, Veronika Lazar and Matthias Heck. Transparency of AI systems.
  • Susanne Kuch and Raoul Kirmes. AI certification: an accreditation perspective.
  • Marc-Andre Zöller, Anastasiia Iurshina and Ines Röder. Trustworthy Generative AI for Financial Services.
  • Sergio Genovesi, Martin Haimerl, Iris Merget, Samantha Morgaine Prange, Otto Obert, Susanna Wolf and Jens Ziehn. Evaluating Dimensions of AI Transparency: A Comparative Study of Standards, Guidelines, and the EU AI Act.
  • Ronald Schnitzer, Andreas Hapfelmeier and Sonja Zillner. EAM Diagrams – A framework to systematically describe AI systems for effective AI risk assessment.
  • Afef Awadid and Boris Robert. On assessing ML model robustness: A methodological framework.
  • Bastian Bernhardt, Dominik Eisl and Lukas Höhndorf. safeAI-kit: A Software Toolbox to Evaluate AI Systems with a Focus on Uncertainty Quantification.
  • Christoph Tobias Wirth, Mihai Maftei, Rosa Esther Martín-Peña and Iris Merget. Towards Trusted AI: A Blueprint for Ethics Assessment in Practice.
  • Daniel Weimer, Andreas Gensch and Kilian Koller. Scaling of End-to-End Governance Risk Assessments for AI Systems.
  • Adrian Seeliger. AI Readiness of Standards: Bridging Traditional Norms with Modern Technologies.
  • Joachim Iden and Felix Zwarg. Risk Analysis Technique for the evaluation of AI technologies and their possible impacts on distinct entities throughout the AI deployment lifecycle.
  • Carmen Mei-Ling Frischknecht-Gruber, Philipp Denzel, Monika Reif, Yann Billeter, Stefan Brunner, Oliver Forster, Frank-Peter Schilling, Joanna Weng and Ricardo Chavarriaga. AI Assessment in Practice: Implementing a Certification Scheme for AI Trustworthiness.
  • Padmanaban Dheenadhayalan and Eduard Dojan. Leveraging AI Standards for Analyzing AI Components in Advanced Driver Assistance Systems (ADAS).
  • Nikolaus Pinger. Efficient Implementation of an AI Management System Based on ISO 42001.

Instructions for authors

We invite members of the international trustworthy AI community – from researchers to practitioners – to submit and present their work in either the academic track or the practitioner track. Submissions will be subject to a double-blind review by the programme committee based on the following criteria: relevance and significance, originality, soundness, clarity and quality of presentation. The symposium is planned as an in-person event and at least one author of each accepted paper must register for the conference and give an oral presentation of the submitted work.


Academic track:

  • Length: We expect papers with a length of around 4,800 – 6,000 words, excluding references.
  • Submission Instructions: Papers must be written in English, and the submission can be made using either the OASIcs LaTeX template or the (initially provided) Springer LNCS template (LaTeX or Word). Please note that the proceedings will use the OASIcs template; the camera-ready version of all accepted papers (due by September 16) shall therefore be formatted using the OASIcs template, see the section ‘Templates’ below.
  • The review will be double-blind, so make sure you use the “anonymous article” template for submission (i.e., anonymous is added as an argument of the \documentclass of the OASIcs template) or anonymize your submission in another suitable way; see the sketch after this list.
  • Submission Deadline: 05.08.2024 (23:59 AoE)
  • We accept submissions of previously unpublished material as well as previously published papers that contain at least 30% new content.
  • To submit your paper, please use the button below.
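
For illustration, here is a minimal skeleton of an anonymized submission. This is a sketch only, assuming the oasics-v2021 class file shipped with the OASIcs template; author details are placeholders, and macro names or required front matter may differ between template versions, so please follow the official OASIcs style guide.

\documentclass[a4paper,UKenglish,anonymous]{oasics-v2021}
% "anonymous" is the class option mentioned above: it suppresses author
% identities in the compiled PDF for the double-blind review.

\title{Your Submission Title}
\titlerunning{Your Submission Title}

% OASIcs author macro: \author{name}{affiliation}{email}{orcid}{funding}
\author{Jane Doe}{Example University, Country}{jane.doe@example.org}{}{}
\authorrunning{J. Doe}

\Copyright{Jane Doe}
\ccsdesc[500]{Computing methodologies~Artificial intelligence}
\keywords{trustworthy AI, AI assessment}

\begin{document}
\maketitle

% Paper body goes here.

\end{document}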

Practitioner track:

  • Length: We expect a brief abstract with a length of around 200 – 1,000 words, excluding optional references.
  • Submission Instructions: Practitioner abstracts must be written in English, and the submission can be made using either the OASIcs template or the (initially provided) Springer LNCS template. Please note that the proceedings will use the OASIcs template.
  • A single-blind review is intended, but fully anonymized contributions are also accepted.
  • Submission Deadline: 05.08.2024 (23:59 AoE)
  • We accept submissions of previously unpublished material as well as new compilations of previously published work, e.g. white papers.
  • To submit your paper, please use the button below. Alternatively, the summary can be sent by e-mail to the contact e-mail address zki-symposium@iais.fraunhofer.de.

Templates:

Proceedings:

  • All accepted papers (both tracks) will be given the option to be included in the conference proceedings, which will be published in the OASIcs series. For this purpose, the camera-ready version of the submission must fulfill the requirements of the OASIcs series (e.g. style guide, LaTeX template, author agreements). The LaTeX template can be downloaded below.
  • In addition to the LaTeX template, which can already be used for the submission of conference papers, authors will be sent further information on the publication process of the conference proceedings with the final notification regarding acceptance of the conference papers.
  • Note regarding paper length: If authors want to pursue this option, their submitted paper should be formatted with the OASIcs template at a length of about 2–6 pages (practitioner track) or about 10–15 pages (academic track) by 16th September. Excluded from this page limit are the bibliography, the title page(s) (authors, affiliation, keywords, abstract, …) and a short appendix (up to 5 pages). In particular, this is also possible after the initial submission: the initial submission by August 5 (brief abstract, short paper or regular paper) can then be extended to the required number of pages by September 16 (if this length is not already reached with the initial submission), if a contribution in the proceedings is desired.

Note: Of course, authors also have the option of deciding against inclusion in the proceedings. In this case, the papers do not have to be adapted to the requirements of the proceedings.

Call for Papers

Submission deadline August 5th

This symposium aims to advance marketable AI assessments and audits for trustworthy AI. Papers and presentations are encouraged both from an operationalization perspective (including governance and business perspectives) and from an ecosystem and tools perspective (covering approaches from computer science). Topics include but are not limited to:

Perspective:
Operationalization of market-ready AI assessment

  • Standardizing AI assessments: How can basic concepts of AI assessments such as the target-of-evaluation, the operational environment and the operational design domain (ODD) be specified in a standardized way? How must assessment criteria be formulated, and which AI quality KPIs are suitable to make AI quality and trustworthiness measurable? How can compatibility with existing assessment frameworks for other domains (e.g. safety, data protection) be guaranteed? How can third-party components, in particular general-purpose AI models that are difficult to access during an assessment, be dealt with?
  • Risk and vulnerability evaluation: What methodologies can be employed to effectively characterize and evaluate potential risks and vulnerabilities, considering both the technical aspects and broader implications? What must AI governance frameworks look like to mitigate those risks efficiently?
  • Implementing regulatory requirements: How can compliance with the AI Act and upcoming regulations be implemented into AI software and AI systems during operationalization, particularly in specific use cases, and what steps are required for achieving and maintaining compliance? In other words, what does a trustworthy AIOps framework look like?
  • Business models based on AI assessments: What are business models based on AI assessments, and what are their key success factors? How should AI quality seals be designed, and how do they influence consumers’ decisions?

Perspective:
Testing tools and implementation methods for trustworthy AI products

  • Infrastructure and automation: What infrastructure and ecosystem setup is necessary for effective AI assessment and certification, including considerations for data and model access, protection of sensitive information, and interoperability of assessment tools? What approaches exist to automate the assessment process as much as possible?
  • Safeguarding and assessment methods: What strategies or methods can developers employ to select suitable testing or risk mitigation measures tailored to the specific characteristics of their AI systems? What are novel techniques, tools or approaches for quality assurance? How can generative AI be used as part of novel assessment tools (e.g., for generating test cases)?
  • Systematic testing: How can systematic tests be performed and what guarantees can these tests give? In particular, how can diverse test examples be generated, including corner cases and synthetic data, to enhance the robustness and quality of AI products?

Program committee

Bertrand Braunschweig
Confiance.ai

Lucie Flek
University of Bonn, Lamarr Institute for AI and ML

Antoine Gautier
QuantPi

Marc Hauser
TÜV AI.Lab

Manoj Kahdan
RWTH Aachen

Foutse Khomh
Polytechnique Montreal

Julia Krämer
Erasmus School of Law in Rotterdam

Qinghua Lu
CSIRO

Jakob Rehof
TU Dortmund, Lamarr Institute for AI and ML

Franziska Weindauer
TÜV AI.Lab

Stefan Wrobel
University of Bonn, Fraunhofer IAIS

Jan Zawadzki
Certif.AI

Important dates


7th July 2024
Start of submissions via EasyChair

5th August 2024 (extended from 22nd July 2024)
Paper submission deadline (academic track and practitioner track)

19th August 2024
Author notification (Practitioner track)

26th August 2024
Author notification (Academic track)

16th September 2024 (extended from 9th September 2024)

Camera-ready version deadline (academic track). Optional: Camera-ready version of the practitioner abstract to prepare it for possible integration into the Proceedings.

22nd September 2024
Registration deadline (for all tracks and all additional participants).

30th September & 1st October 2024
Conference (all tracks)
including key notes, paper presentations, exchange formats, panels and a social event

Registration

Note: Registration and participation are free of charge. Please be aware that the event is planned as an on-site event and that the event language is English. There will be no possibility to participate virtually.

Registration for the Symposium on Scaling AI Assessments is closed. If you have missed the registration deadline, please contact us by e-mail so that we can check whether we can still offer you a participation slot.

Location

Design Offices Dominium, Cologne

Tunisstraße 19-23
50667 Cologne
Germany

Contact

Organization Committee

Rebekka Görge, Elena Haedecke, Fabian Malms, Maximilian Poretschkin, Anna Schmitz

Email: zki-symposium@iais.fraunhofer.de

Contact for the Legal Perspective

Mathias Schuh, Erik Weiss, Maxine Folschweiller, Lena Zeit-Altpeter

Email: mathias.schuh@uni-koeln.de