Year: 2024

Uncovering the True Nature of Microsoft’s Copyright Claim Coverage for AI Solutions using LLMs

It is widely known that the largest language models (LLMs) have been trained on whatever data their creators could find on the web. It is not hard to imagine that the chances of this data containing copyrighted material are very high. This is also evident in the output of these models, […]


AI Act Series: "Gebruiksverantwoordelijke" and Affected Persons (Dutch version)

Words matter: Terminology in the (Dutch translation of the) AI Act. In the most recent Dutch version of the AI Act (rectification text of April 15, 2024), the term "exploitant" is no longer used as the translation of the English term "deployer". The new translation is "gebruiksverantwoordelijke". What is a "gebruiksverantwoordelijke" according to the AI Act? Article 3 defines this […]


AI Act Series: Deployers and Affected Persons (English version)

Words matter: Terminology in the (Dutch translation of the) AI Act. In the most recent Dutch version of the AI Act (rectification text of April 15, 2024), the term "exploitant" is no longer used as the translation of the English term "deployer". The new translation is "gebruiksverantwoordelijke". What is a "gebruiksverantwoordelijke" according to the AI […]


Having fun hacking AI: My Deep Dive into PortSwigger’s LLM Labs

Introduction: In the dynamic world of cybersecurity, the emergence of Large Language Models (LLMs) has introduced a new frontier for both innovation and vulnerability. Researchers and enthusiasts alike are continually exploring ways to test these models, often employing techniques such as direct and indirect prompt injection. Recently, PortSwigger expanded its repertoire and introduced four new […]
