Category: Security
Uncovering the True Nature of Microsoft’s Copyright Claim Coverage for AI Solutions using LLMs
One way or another, we all know that the largest language models (LLMs) have been trained on whatever data their creators could find on the web. It is not hard to imagine that the odds of this data containing copyrighted material are very high. This is also evident in the output of these models, […]
Read More
Having fun hacking AI: My Deep Dive into PortSwigger’s LLM Labs
Introduction: In the dynamic world of cybersecurity, the emergence of Large Language Models (LLMs) has introduced a new frontier for both innovation and vulnerability. Researchers and enthusiasts alike are continually exploring ways to test these models, often employing techniques such as prompt injection and indirect prompt injection. Recently, PortSwigger expanded its repertoire and introduced four new […]
Read More