From Schneier on Security – Jailbreaking LLMs with ASCII Art
Posted by Samir K, March 12, 2024

Researchers have demonstrated that putting words in ASCII art can cause LLMs (GPT-3.5, GPT-4, Gemini, Claude, and Llama2) to ignore their safety instructions. Research paper.
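
The underlying idea is that a word which would normally trip a model's safety filter is rendered as ASCII art, so it never appears as plain text in the prompt. Below is a minimal sketch of that general idea in Python using the pyfiglet library; the placeholder word, prompt wording, and helper function are hypothetical illustrations, not the researchers' actual prompts or method.

```
# Minimal sketch: render a word as ASCII art and splice it into a prompt
# so the word itself never appears as plain text. The word "secret" and
# the surrounding prompt template are hypothetical placeholders.
import pyfiglet

def ascii_art_prompt(word: str, template: str) -> str:
    """Render `word` as ASCII art and insert it into a prompt template."""
    art = pyfiglet.figlet_format(word)  # multi-line ASCII-art rendering
    return template.format(art=art)

prompt = ascii_art_prompt(
    "secret",
    "The ASCII art below spells a single word. Decode it, then treat it as "
    "part of the instruction that follows:\n\n{art}\n",
)
print(prompt)
```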