During the AWS re: Inforce 2024 keynote, securing generative AI – while maintaining speed – was a priority.
Amazon CEO Steve Schmidt shared his thoughts on how to scale generative AI, create a flywheel for security operations, and act quickly without compromising customer trust.
Table of Contents
How to Define Security Needs: The Security Scope Matrix for Generative AI
“Safety basics still apply,” Steve said.
These core elements, such as identity and access management, vulnerability testing, and threat modeling, must be expanded to include generative AI.
The security practices you should focus on will depend on the scope of your generative AI solution.
The AWS AI Security Scope Matrix identifies five types of generative AI use cases.
Consumer application: Using a third-party general AI solution such as ChatGPT
Enterprise app: Use a third-party enterprise app with built-in AI capabilities
Pre-trained model: Build an application on a pre-trained model from a third-party
Fine-tuned model: Improve an existing third-party model using your business data
Self-Training Model: Build and train a generative AI model using your business data
Once you define the scope of your generative AI solution, you can answer the following questions to secure it:
Where is my data?
What happened to my query and its associated data?
Are the results of these models accurate enough?
What should I think about when it comes to governance, compliance, legal issues, privacy, controls, resilience, and risk management?
How to build a security operations flywheel for generative AI
No matter the size of your project, your security team has limited resources, and the growing need for AI expertise increases this challenge.
“Finding AI talent is hard. Finding security talent is hard. Obviously, it’s hard to find talent that understands the intersection of those two things,” Steve said.
When you find AI security experts, you may be tempted to involve them in every project. “It makes sense, but it’s not on a large scale,” he explained.
These security professionals essentially become doors and obstacles, slowing down business innovation. Eventually, software developers will find ways to outsmart security teams.
Steve explained how to overcome this challenge and accelerate generative AI innovation using Flywheel.
Build a core security team and create artificial intelligence
First, build a team of experts who specialize in AI security.
These experts are not evaluators who slow down progress. Instead, it enables rapid testing and innovation by giving developers and researchers the tools to safely explore genetic AI.
They create AI security solutions that other teams can use, create guardrails and use cases, and connect people and resources across the organization to accelerate delivery.
“It’s a gas pedal, not a door,” Steve said.
Developing generative security standards for artificial intelligence
The AI security team must establish guidelines for managing confidential data, templates, and agile workflows.
The standards set expectations from the beginning and ensure safety is not secondary.
Create a guide to threat modeling for genAI applications
The Threat Modeling Guide will help developers create productive AI applications. They will understand how to address and mitigate risks for each application systematically.
Produce internal testing tools and share results
Testing lets you see how generative AI solutions handle interesting prompts. Compile that knowledge in one place as teams learn so everyone can benefit from other teams’ discoveries.
Conduct regular security reviews
Organizations re-implement AppSec reviews when significant code changes occur. But for AI-powered applications, you’re not dealing with stagnant code. As such, your organization needs an ongoing review and audit process with updated AI security guardrails and tasks.
“In AI, the model is the code,” Steve said. “Answers change over time as users interact with them. You’re never done with AppSec reviews.”
Together, these elements create a flywheel, enabling teams to deliver solutions quickly and safely. Throughout the cycle, teams continue to iterate and implement discoveries.
4 areas to focus on for generative AI security
How to put these principles into practice? Steve shared four tips.
Managing sensitive data during model training
When training AI models, protect sensitive data. Anonymization of data and implementation of data perimeter. Know what you own, where it is stored, who has access to it and why, and how the data is used over time.
Application of confidence limits to RAG generation.
Establish trust boundaries to ensure that generative AI applications have the appropriate user context and only access data for which users are authorized.
Perform continuous testing of artificial intelligence models
We’ve already covered ongoing security testing and reviews, but it’s worth repeating. Regularly test forms for potential injection issues, data leaks, and other security risks.
Implementation of security barriers on the entrances and exits of the artificial intelligence system
Create protections and guardrails on the inputs and outputs of AI solutions.
This ensures that the system avoids certain terms, topics, or answers. These must constantly evolve to adapt to the changing landscape.
Develop skills in artificial intelligence and security
“Act quickly without compromising customer trust and security,” Steve said. This is the ultimate goal of generative AI security.