A New Trick Uses AI to Jailbreak AI Models—Including GPT-4

Security, Spoken - A podcast by SpokenLayer

Adversarial algorithms can systematically probe large language models like OpenAI’s GPT-4 for weaknesses that can make them misbehave.
