Lakera:

What is a Visual Prompt Injection?

Prompt injections are vulnerabilities in Large Language Models where attackers use crafted prompts to make the model ignore its original instructions or perform unintended actions. 

Visual prompt injection refers to the technique where malicious instructions are embedded within an image. When a model with image processing capabilities, such as GPT-V4, is asked to interpret or describe that image, it might act on those embedded instructions in unintended ways.