Emerging Trends

W3-2D. Exploitable Weaknesses in GenAI Workflows: From RAG to Riches

Wednesday, June 12, 2024 1:15 PM - 2:15 PM

Description

Everyone’s building AI chatbots using Retrieval Augmented Generation (RAG) with Large Language Models (LLMs), but how many of these teams understand the risks they’re opening themselves up to, especially as they mix confidential data with new types of databases and other infrastructure? This session will demonstrate attacks on the “memory of AI”: vector databases, which are used in countless ways, from RAG to facial recognition to medical diagnosis. This data is a treasure trove for attackers. We’ll end by showing how to defend against these entirely new attacks.
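
For context on the retrieval step the abstract refers to, the sketch below shows the core of a RAG pipeline under simplified assumptions: source documents (including confidential ones) are embedded into vectors and stored in an index, and a query retrieves the nearest stored vectors to feed the LLM. The embed() function, the sample documents, and the in-memory index are hypothetical stand-ins, not material from the session; the toy embedding is not semantic and only makes the example self-contained. The point it illustrates is that the vector store holds derivatives of the source data, which is why it becomes an attack target.

    import numpy as np

    # Hypothetical stand-in for an embedding model; a real RAG workflow would
    # call an actual embedding model or API here. This toy version is not
    # semantic -- it only makes the example self-contained and runnable.
    def embed(text: str) -> np.ndarray:
        rng = np.random.default_rng(abs(hash(text)) % (2**32))
        vec = rng.normal(size=384)
        return vec / np.linalg.norm(vec)

    # Confidential source documents are embedded and stored as vectors. This
    # store is the "memory of AI" the session refers to, and what an attacker
    # targets.
    documents = [
        "Patient 4812 was diagnosed with condition X on 2024-01-03.",
        "Acme Corp internal Q3 revenue projection: $42M.",
    ]
    index = np.stack([embed(doc) for doc in documents])

    def retrieve(query: str, k: int = 1) -> list[str]:
        """Return the k stored documents closest to the query embedding."""
        scores = index @ embed(query)       # cosine similarity (unit vectors)
        top = np.argsort(scores)[::-1][:k]
        return [documents[i] for i in top]

    # In a real pipeline the retrieved text is pasted into the LLM prompt as
    # context before the model answers the user's question.
    print(retrieve("What is Acme's revenue forecast?"))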

Learner Objectives

Attendees will learn more about data weaknesses in GenAI workflows.
Attendees will see a demonstration of embedding inversion attacks and membership inference attacks on vector embeddings (the “memory of AI”); a simplified sketch of the membership inference idea follows this list.
Attendees will gain an understanding of what questions to ask of their vendors and engineering teams to understand how risks are being managed and mitigated.
Attendees will learn how to minimize the risks of building or leveraging GenAI workflows and other modern large-model AI workflows.
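
The membership inference objective above is the part most readers will find least familiar, so here is a deliberately simplified sketch of that idea only, reusing the hypothetical embed() function and index from the earlier RAG sketch. It is an illustration of the general concept, not the attack demonstrated in the session: an attacker who can query the vector store checks whether a guessed record sits suspiciously close to a stored embedding, and infers from that whether the record was part of the private corpus.

    # Toy membership-inference check (hypothetical; the session's actual
    # techniques may differ). Reuses embed() and index from the RAG sketch.
    # If a candidate record embeds unusually close to a stored vector, an
    # attacker with query access can infer it was in the private corpus.
    def likely_member(candidate: str, threshold: float = 0.95) -> bool:
        similarity = float(np.max(index @ embed(candidate)))
        return similarity >= threshold

    print(likely_member("Patient 4812 was diagnosed with condition X on 2024-01-03."))  # True
    print(likely_member("An unrelated sentence about the weather."))                    # False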