Author: Martijn Korse
Having fun hacking AI: My Deep Dive into PortSwigger’s LLM Labs
Introduction
In the dynamic world of cybersecurity, the emergence of Large Language Models (LLMs) has introduced a new frontier for both innovation and vulnerability. Researchers and enthusiasts alike are continually exploring ways to test these models, often employing techniques such as prompt injection and indirect prompt injection. Recently, PortSwigger expanded its repertoire and introduced four new […]