Generative AI: A pragmatic model for data security

The rapid rise of large language models (LLMs) and generative AI has presented new challenges for security teams around the world. By creating new ways for data to flow, generative AI sidesteps traditional security paradigms focused on preventing data from reaching people who aren't meant to have it.

To enable organizations to move quickly on generative AI without introducing undue risk, security teams need to update their programs to account for these new types of risk and the pressure they put on existing controls.

Untrusted intermediaries: A new source of shadow IT

An entire industry is currently being built around LLMs hosted by services such as OpenAI, Hugging Face, and Anthropic. Additionally, a number of open models are available, such as LLaMA from Meta and GPT-2 from OpenAI.

Access to these models can help an organization's employees solve business problems. But for various reasons, not everyone is able to access these models directly. Instead, employees often turn to tools, such as browser extensions, SaaS productivity apps, Slack apps, and paid APIs, that promise easy use of the models.

These intermediaries are quickly becoming a new source of shadow IT. Using a Chrome extension to craft a better sales email doesn't feel like engaging a vendor; it feels like a productivity hack. It isn't obvious to many employees that by sharing all of this with a third party they are exposing sensitive data, even if the organization itself is comfortable with the underlying models and providers.

Training beyond security boundaries

This type of risk is relatively new to most organizations. Three potential boundaries come into play:

- Boundaries between users of a foundation model
- Boundaries between the customers of a company that fine-tunes on top of a foundation model
- Boundaries between users within an organization who have different access rights to the data used to fine-tune a model

In each of these cases, the issue is understanding what data goes into a model: only individuals with access to the training or fine-tuning data should have access to the resulting model.
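
One way to make that principle concrete is to gate access to a fine-tuned model on the ACLs of its training corpus. The sketch below is illustrative only; the Model class, the DOC_ACLS store, and the helper names are hypothetical stand-ins for whatever document-permission system an organization already runs.

```python
# Minimal sketch: a user may query a fine-tuned model only if they can read
# every document that went into training it. All names here are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Model:
    name: str
    training_doc_ids: set[str] = field(default_factory=set)


# Hypothetical permission store: doc_id -> set of users allowed to read it.
DOC_ACLS: dict[str, set[str]] = {
    "q3-board-deck": {"alice"},
    "public-handbook": {"alice", "bob"},
}


def can_read(user: str, doc_id: str) -> bool:
    return user in DOC_ACLS.get(doc_id, set())


def can_query_model(user: str, model: Model) -> bool:
    """Allow queries only if the user can read the entire training set."""
    return all(can_read(user, doc_id) for doc_id in model.training_doc_ids)


finance_model = Model("finance-assistant", {"q3-board-deck", "public-handbook"})
assert can_query_model("alice", finance_model)
assert not can_query_model("bob", finance_model)  # bob can't read the board deck
```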

As an example, say an organization uses a product that fine-tunes an LLM on content from its productivity suite. How does that tool ensure I can't use the model to retrieve information that originally came from documents I don't have permission to access? And how does it update that mechanism after access I once had is revoked?
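
One common answer, sketched below under assumed names rather than any specific vendor's API, is to keep permission-scoped content out of the model weights entirely and fetch it at query time, checking the caller's current permissions on every request so that revoking access takes effect immediately.

```python
# Minimal sketch: permission-aware retrieval at query time instead of baking
# restricted documents into model weights. Helper names are illustrative.

def retrieve_candidates(query: str) -> list[dict]:
    """Stand-in for a search or vector index over the productivity suite."""
    return [
        {"doc_id": "q3-board-deck", "text": "Q3 revenue fell 12%..."},
        {"doc_id": "public-handbook", "text": "Expense policy..."},
    ]


def current_acl_allows(user: str, doc_id: str) -> bool:
    """Stand-in for a live check against the source system's ACLs,
    so a revoked permission is respected on the very next query."""
    return doc_id in {"public-handbook"}  # e.g. this user lost board-deck access


def build_prompt(user: str, query: str) -> str:
    allowed = [
        c["text"]
        for c in retrieve_candidates(query)
        if current_acl_allows(user, c["doc_id"])  # filter at request time
    ]
    context = "\n".join(allowed)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"


print(build_prompt("bob", "What was Q3 revenue?"))  # board deck is excluded
```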

These are solvable problems, but they require special attention.
