Securing the phases of the Generative AI lifecycle

 

Large Language Models (LLMs) are the core of enterprise Generative AI applications. They can process and generate natural language, but they require additional components to handle user interactions, security, and other functionality needed to respond to or act on user inputs. The collection of these components and services that forms a functional solution is called a Generative AI application. A best practice when developing a Gen AI application is to follow a standard AI lifecycle and embed security at each phase of that lifecycle. Implementations must not neglect the basics such as authentication, authorization, and protecting customer data, which are required for an AI solution just as for any other cloud solution. Once the basics are addressed, there are aspects specific to LLM and Generative AI security that require special attention. Details vary by implementation and are constantly changing in this fast-growing field, but understanding the model's defining properties allows us to address specific implementation needs. One way to think about securing an LLM is to view the model as intelligence in a black box with certain defining characteristics:

• AI models are stochastic and non-deterministic, so engineers must account for them spontaneously failing or being incorrect. In some sense it does not matter whether the failure is intentional (poisoned training data, adversarial input, etc.) or unintentional (the model is simply inaccurate). Engineers need to expect failure, and it is best to assume failure in the worst and most spectacular way.

• If you give the model any data at all, either in the context window (via patterns like RAG) or by fine-tuning it, the model may include or leak that data in its responses.

Building on these two attributes, the session provides security guidance for each phase of the Generative AI lifecycle that will ensure your AI solution has a healthy security posture from the start.
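The session does not prescribe any particular code, but a minimal sketch can show how these two properties translate into defensive application design. Everything in the sketch below is an assumption made for illustration: call_llm is a hypothetical placeholder for whatever model client the application uses, the redaction patterns are deliberately naive, and the JSON response contract is invented for the example.

```python
import json
import re

# Hypothetical placeholder for the real model client; not an actual SDK call.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("replace with your model client")

# Property 2: anything placed in the context window may come back out in a
# response, so redact obvious secrets/PII from retrieved documents first.
# These patterns are illustrative, not a complete PII detector.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{16}\b"),                    # naive card-number-like digits
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),  # email addresses
]

def redact(text: str) -> str:
    for pattern in SENSITIVE_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

# Property 1: the model is non-deterministic, so treat every response as
# potentially malformed or wrong: validate it, retry a bounded number of
# times, and fail closed instead of passing unvalidated output downstream.
def ask_with_guardrails(question: str, retrieved_docs: list[str], retries: int = 2) -> dict:
    context = "\n".join(redact(doc) for doc in retrieved_docs)
    prompt = (
        'Answer strictly as JSON of the form {"answer": "..."}.\n'
        f"Context:\n{context}\n"
        f"Question: {question}"
    )
    for _ in range(retries + 1):
        try:
            reply = call_llm(prompt)              # may raise on transient errors
            parsed = json.loads(reply)            # reject anything that is not JSON
        except (json.JSONDecodeError, ConnectionError, TimeoutError):
            continue                              # count as a failed attempt
        if isinstance(parsed, dict) and "answer" in parsed:
            return parsed
    raise RuntimeError("model did not return a valid answer; failing closed")
```

The design choice the sketch illustrates is that the application, not the model, decides when output is acceptable: malformed or leaked content is treated as an expected failure mode rather than a surprising exception.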

Speaker

Yogi Srivastava

Yogi Srivastava - Principal Security Software Engineer at Microsoft Melbourne

As a Principal Security Software Engineer at Microsoft, I help customers secure their solutions and meet their security compliance requirements. I have over 18 years of experience securing applications and their platforms, spanning cloud and AI security as well as identity and access management. I have a proven track record of delivering meaningful outcomes for businesses and customers by applying my skills and knowledge in secure architecture and design, secure code review, penetration testing, vulnerability assessments, security controls auditing, security risk assessments, and security compliance. I hold an MBA degree, as well as ITIL v3 and AgilePM certifications, which enable me to manage projects and processes effectively and efficiently. I am passionate about security and constantly learning new technologies and best practices to stay ahead of evolving threats and challenges.

Code of Conduct

We seek to provide a respectful, friendly, professional experience for everyone, regardless of gender, sexual orientation, physical appearance, disability, age, race or religion. We do not tolerate any behavior that is harassing or degrading to any individual, in any form. The Code of Conduct will be enforced.

Who does this Code of Conduct apply to?

All live stream organizers using the Global Azure brand and Global Azure speakers are responsible for knowing and abiding by these standards. Each speaker who wishes to submit through our Call for Presentations needs to read and accept the Code of Conduct. We encourage every organizer and attendee to assist in creating a welcoming and safe environment. Live stream organizers are required to inform and enforce the Code of Conduct if they accept community content to their stream.

Where can I get help?

If you are being harassed, notice that someone else is being harassed, or have any other concerns, report it. Please report any concerns, suspicious or disruptive activity or behavior directly to any of the live stream organizers, or directly to the Global Azure admins at team@globalazure.net. All reports to the Global admin team will remain confidential.

Code of Conduct for local live streams

We expect local organizers to set up and enforce a Code of Conduct for all Global Azure live streams.

A good template can be found at https://confcodeofconduct.com/, including internationalized versions at https://github.com/confcodeofconduct/confcodeofconduct.com. An excellent version of a Code of Conduct, not a template, is built by the DDD Europe conference at https://dddeurope.com/2020/coc/.