Two Microsoft researchers have devised a new jailbreak method that bypasses the safety mechanisms of most AI systems.
Companies pursuing internal AI development using models from Hugging Face and other open-source repositories need to focus on supply chain security and checking for vulnerabilities.
The secret use of other people's generative AI platforms, wherein hijackers gain unauthorized access to an LLM while someone else foots the bill, is getting quicker and stealthier by the…
In a concerning development for the machine learning community, researchers at ReversingLabs have identified malicious models on the popular Hugging Face platform. These models exploit vulnerabilities in the Pickle file…
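The Pickle issue the blurb alludes to stems from how Python deserialization works: loading a pickle can execute arbitrary attacker-chosen code, because an object's `__reduce__` method specifies a callable to run at load time. Below is a minimal, deliberately benign sketch of the mechanism (the `Payload` class and the evaluated expression are illustrative, not taken from the reported malicious models):

```python
import pickle

# Benign stand-in for a malicious model file: __reduce__ tells pickle
# which callable to invoke when the data is loaded.
class Payload:
    def __reduce__(self):
        # A real attack would invoke os.system or similar; here we only
        # evaluate a harmless expression to show that code runs on load.
        return (eval, ("6 * 7",))

blob = pickle.dumps(Payload())   # what a poisoned model file contains
result = pickle.loads(blob)      # loading executes the embedded call
print(result)                    # → 42
```

This is why loading untrusted `.pkl` model files is dangerous: `pickle.loads` is effectively code execution, and safer formats (or scanning tools) are recommended for models from public repositories.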