The Hidden Dangers of Generative AI: Who is Responsible for Protecting our Data?

Security Voices - A podcast by Security Voices


The breakaway success of ChatGPT is hiding an important fact and an even bigger problem. The next wave of generative AI will not be built by trawling the Internet but by mining the hoards of proprietary data that have been piling up for years inside organizations. While Elon Musk and Reddit may breathe a sigh of relief, this ushers in a new set of concerns that go well beyond prompt injections and AI hallucinations. Who is responsible for making sure our private data doesn’t get used as training data? And what happens if it does? Do they even know what’s in the data to begin with?

We tagged in data engineering expert Josh Wills and security veteran Mike Sabbota of Amazon Prime Video to go past the headlines and into what it takes to safely harness the vast oceans of data they’ve been responsible for, past and present. Foundational questions like “who is responsible for data hygiene?” and “what is data governance?” may not be nearly as sexy as tricking AI into saying it wants to destroy humanity, but they will arguably have a much greater impact on our safety in the long run. Mike, Josh, and Dave go deep into the practical realities of working with data at scale and why the topic is more critical than ever.

For anyone wondering exactly how we arrived at this moment, where generative AI dominates the headlines and we can’t quite recall why we ever cared about blockchains and NFTs, we kick off the episode with Josh explaining the recent history of data science and how it led us here. We quickly (and painlessly) cover the breakthrough attention-based transformer model introduced in 2017 and the key events that have happened since.