In a remarkable display of creativity, a researcher showcased how an artificial intelligence (AI) system’s tightly guarded “system prompt” could be indirectly accessed not through brute force or technical hacking but by manipulating the AI’s tendency to excel at storytelling. System prompts are instructions, guidelines, and contextual details provided to AI models before interacting with
The post Researcher Jailbreaks an AI’s System Prompt to Leak Its Core System Function appeared first on Cyber Security News. Read More

Posted inNews