Onderzoek, Bronnen & Hulpmiddelen

knowledge base

Welcome to the Knowledge Base, a dedicated space where we provide access to our extensive resources on Trustworthy AI. As pioneers in this rapidly evolving field, we are committed to sharing our expertise and insights with the broader community.

In this section you will find our latest research publications and practical tools.

This Knowledge Base is not just a repository of information: it is a dynamic platform where we continuously share the latest developments and best practices in Trustworthy AI. Whether you are a professional, a researcher, or simply passionate about AI, this space is here to support you on your journey of discovery.

Explore our resources and join us in setting the standard for Trustworthy AI.

coming soon

Advancing the field of bias detection and mitigation in Large Language Models and Traditional AI Models

8 October 2024

This whitepaper presents a comprehensive analysis of bias detection tools and techniques across traditional AI models, Large Language Models (LLMs), and federated learning, applied to six different use cases in areas like credit scoring, HR, gender bias, and law enforcement. Conducted in collaboration with the University of Amsterdam, the research highlights the strengths, limitations, and emerging trends in bias mitigation. It also introduces novel methods that could enhance our understanding of bias identification as well as new directions for their mitigation, guiding industries in selecting the most effective approaches for ensuring fairness in AI-driven decisions.

Download the whitepaper as PDF:

Download the full research as PDF:

Creative Commons License
Advancing the field of bias detection and mitigation in Large Language Models and Traditional AI Models by Rhite is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

From Inception to Retirement: Addressing Bias Throughout the Lifecycle of AI Systems

3 September 2024, by Suzanne Snoek & Isabel Barberá

Conducted in collaboration with Radboud University, this research explores the different types of bias that can emerge at every stage of the AI lifecycle, offering practical guidance on identification and mitigation. The study aims to fill critical knowledge gaps and empower stakeholders to confidently address bias in their AI projects.

Stay tuned — over the next month, we will release an interactive version of this research, complete with additional tools and tips.

Download the full document as PDF:

Creative Commons License
From Inception to Retirement: Addressing Bias Throughout the Lifecycle of AI Systems by Rhite is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

threat modeling

Threat Modeling Generative AI Systems

24 April 2023, by Isabel Barberá & Martijn Korse

This document provides an overview of 63 potential threats to generative AI systems, identified during a privacy threat modeling session at Rhite using the AI risk assessment tool PLOT4ai. The threats are classified into 8 categories: Technique & Processes (9), Accessibility (6), Identifiability & Linkability (3), Security (12), Safety (3), Unawareness (3), Ethics & Human Rights (14), and Non-compliance (13).

Each threat is further organized into subcategories based on SARAI™, Rhite's ongoing research into a self-assessment tool for Responsible AI, aligned with the EU Ethics Guidelines for Trustworthy AI and the OECD AI principles.

The full report can be downloaded as PDF:

Creative Commons License
Threat Modeling Generative AI Systems by Rhite is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.