AppSec & DevSecOps

T4-3G. Threat Modeling for Large Language Models

Tuesday, June 11, 2024 1:00 PM - 4:00 PM

Description

Large language models represent a historic opportunity to further accelerate the pace of software development. A GitLab survey reported that 67% of organizations planned to use AI in software development in the immediate future. Unfortunately, many organizations are moving quickly to adopt AI in development with little thought given to the security consequences. Threat modeling enables security analysts to understand the additional risks that development with LLMs introduces. This session will provide an overview of AI and LLM security challenges and demonstrate how threat modeling can identify potential security weaknesses. Inspired by the OWASP Top 10 for LLM Applications, the session will present a threat modeling approach for LLMs that is straightforward to adopt in production.
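
To give a flavor of the kind of artifact such an approach can produce, the sketch below (an illustration, not part of the official session materials) shows one way threat-model entries for an LLM-backed feature might be recorded in Python, with each threat mapped to an OWASP Top 10 for LLM Applications category such as LLM01 (Prompt Injection). The data model and example entries are assumptions for illustration only.

```python
# Illustrative sketch: recording LLM threat-model entries in code and tagging
# each one with an OWASP Top 10 for LLM Applications category.
# The dataclass fields and example entries are assumptions, not session content.
from dataclasses import dataclass, field

@dataclass
class LLMThreat:
    component: str        # part of the LLM pipeline (prompt, model, output, tools)
    owasp_llm_id: str     # e.g. "LLM01" = Prompt Injection
    description: str      # what could go wrong
    mitigations: list[str] = field(default_factory=list)

threat_model = [
    LLMThreat(
        component="user prompt",
        owasp_llm_id="LLM01",
        description="Attacker-crafted input overrides system instructions (prompt injection)",
        mitigations=["input filtering", "privilege separation for tool calls"],
    ),
    LLMThreat(
        component="model output",
        owasp_llm_id="LLM02",
        description="Unvalidated model output is rendered or executed downstream",
        mitigations=["output encoding", "treat model output as untrusted input"],
    ),
]

# Print a simple threat-model summary for review.
for t in threat_model:
    print(f"[{t.owasp_llm_id}] {t.component}: {t.description}")
```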

Learner Objectives

After this session, learners will be able to:
- Understand basic AI concepts and large language models (LLMs)
- Identify how threat modeling applies to applications developed with artificial intelligence (AI)
- Identify emerging security vulnerabilities in LLMs
- Develop basic threat models for LLMs
- Apply threat modeling concepts for LLMs within their organization