How to Minimize Data Risk for Generative AI and LLMs in the Enterprise

Visit our on-demand library to view VB Transform 2023 sessions. Sign up here

Companies have quickly recognized the power of generative AI to uncover new ideas and increase productivity for developers and non-developers alike. But moving sensitive and proprietary data into publicly hosted large language models (LLMs) creates significant security, privacy, and governance risks. Businesses need to address these risks before they can begin to take advantage of these powerful new technologies.

As IDC points out, companies are rightly concerned that LLMs can "learn" from their prompts and leak proprietary information to other companies that enter similar prompts. Companies are also concerned that any sensitive data they share could be stored online and exposed to hackers or accidentally made public.

That makes feeding data and prompts into publicly hosted LLMs a non-starter for most companies, especially those operating in regulated spaces. So how can companies extract value from LLMs while sufficiently mitigating these risks?

Work within your existing security and governance perimeter

Instead of sending your data to an LLM, bring the LLM to your data. This is the model most companies will use to balance the need for innovation with the importance of protecting customers' personal information and other sensitive data. Most large enterprises already maintain a strong security and governance boundary around their data, and they should host and deploy LLMs within this protected environment. This allows data teams to further develop and customize the LLM and employees to interact with it, all within the organization's existing security perimeter.
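One way to make "bring the LLM to your data" concrete is to enforce, in code, that prompts can only be routed to model endpoints inside the organization's perimeter. The sketch below is purely illustrative and not from the article: the host allow-list and endpoint names are hypothetical placeholders for whatever an enterprise's own security policy defines.

```python
from urllib.parse import urlparse

# Hypothetical allow-list of LLM hosts inside the corporate perimeter.
# A real deployment would derive this from network policy, not a constant.
INTERNAL_HOSTS = {"llm.internal.example.com"}

def is_inside_perimeter(endpoint: str) -> bool:
    """Return True only if the endpoint's host is on the internal allow-list."""
    return urlparse(endpoint).hostname in INTERNAL_HOSTS

def submit_prompt(endpoint: str, prompt: str) -> str:
    """Route a prompt to an internally hosted model, refusing external hosts."""
    if not is_inside_perimeter(endpoint):
        raise PermissionError(
            f"Blocked: {endpoint} is outside the security perimeter"
        )
    # A real implementation would call the internally hosted model here;
    # this sketch just confirms where the prompt was routed.
    return f"routed to {urlparse(endpoint).hostname}"
```

The point of a gate like this is that sensitive prompts never leave the environment the organization already secures and governs, regardless of which model sits behind the internal endpoint.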


A strong AI strategy requires a strong data strategy from the start. This means breaking down silos and establishing simple, consistent policies that give teams access to the data they need within a strong security and governance framework. The end goal is to have actionable and reliable data that is easily accessible and usable with an LLM in a secure and governed environment.

Create domain-specific LLMs

LLMs trained on data from across the web present more than just privacy issues. They are prone to "hallucinations" and other inaccuracies, and they can reproduce biases and generate offensive responses that create additional risks for businesses. Moreover, foundation LLMs have never been exposed to your organization's internal systems and data, which means they can't answer questions specific to your business, your customers, or perhaps even your industry.

The answer is to extend and customize a model to make it smart about your own business. While you are hosting...
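One common way to give a general-purpose model knowledge of your own business without retraining it is retrieval augmentation: fetch the most relevant internal document and prepend it to the prompt as context. The article does not prescribe this specific technique, so treat the sketch below as an illustrative assumption; the word-overlap scoring is a toy stand-in for real embedding-based retrieval.

```python
def relevance(query: str, doc: str) -> int:
    """Toy relevance score: count query words that also appear in the document.

    A production system would use vector embeddings instead of word overlap.
    """
    return len(set(query.lower().split()) & set(doc.lower().split()))

def build_prompt(query: str, internal_docs: list[str]) -> str:
    """Prepend the most relevant internal document as context for the LLM."""
    best = max(internal_docs, key=lambda doc: relevance(query, doc))
    return f"Context:\n{best}\n\nQuestion: {query}"
```

Because retrieval happens against documents you already host, this pattern fits the same perimeter-first approach: the model gains domain knowledge at query time while the source data stays inside your existing security and governance boundary.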
