Research, Resources & Tools

knowledgebase

Welcome to the Knowledge Base, a dedicated space where we provide access to our extensive resources on Trustworthy AI. As pioneers in this rapidly evolving field, we are committed to sharing our expertise and insights with the broader community.

In this section, you’ll find our latest research publications and practical tools.

This Knowledge Base is not just a repository of information—it’s a dynamic platform where we continuously share the latest advancements and best practices in Trustworthy AI. Whether you’re a professional, researcher, or simply passionate about AI, this space is here to support your journey.

Explore our resources and join us in setting the standard for Trustworthy AI.

coming soon

Advancing the field of bias detection and mitigation in Large Language Models and Traditional AI Models

8 October 2024

This whitepaper presents a comprehensive analysis of bias detection tools and techniques across traditional AI models, Large Language Models (LLMs), and federated learning, applied to six different use cases in areas like credit scoring, HR, gender bias, and law enforcement. Conducted in collaboration with the University of Amsterdam, the research highlights the strengths, limitations, and emerging trends in bias mitigation. It also introduces novel methods that could enhance our understanding of bias identification as well as new directions for their mitigation, guiding industries in selecting the most effective approaches for ensuring fairness in AI-driven decisions.

Download the whitepaper as PDF:


Advancing the field of bias detection and mitigation in Large Language Models and Traditional AI Models by Rhite is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

From Inception to Retirement: Addressing Bias Throughout the Lifecycle of AI Systems

3 September 2024, by Suzanne Snoek & Isabel Barberá

Conducted in collaboration with Radboud University, this research explores the different types of bias that can emerge at every stage of the AI lifecycle, offering practical guidance on identification and mitigation. The study aims to fill critical knowledge gaps and empower stakeholders to confidently address bias in their AI projects.

Stay tuned — over the next month, we will release an interactive version of this research, complete with additional tools and tips.

Download the full document as PDF:

From Inception to Retirement: Addressing Bias Throughout the Lifecycle of AI Systems by Rhite is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

threat modeling

Threat Modeling Generative AI Systems

24 April 2023, by Isabel Barberá & Martijn Korse

This document provides an overview of 63 potential threats to generative AI systems, identified during a privacy threat modeling session at Rhite using the AI risk assessment tool PLOT4ai. The threats are classified into 8 categories: Technique & Processes (9), Accessibility (6), Identifiability & Linkability (3), Security (12), Safety (3), Unawareness (3), Ethics & Human Rights (14), and Non-compliance (13).

Each threat is further organized into subcategories based on SARAI™, Rhite's ongoing research into a self-assessment tool for Responsible AI, aligned with the EU Ethics Guidelines for Trustworthy AI and the OECD AI Principles.

The full report can be downloaded as PDF:

Threat Modeling Generative AI Systems by Rhite is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.