How AI is reshaping the rules of business

Over the past few weeks, there have been a number of important developments in the global debate on AI risk and regulation. The emerging theme, both from the US congressional hearing with OpenAI's Sam Altman and from the EU's announcement of its amended AI Act, has been a call for more regulation.

But what has surprised some is the consensus among governments, researchers and AI developers on this need for regulation. During testimony before Congress, Sam Altman, the CEO of OpenAI, proposed creating a new government agency that licenses the development of large-scale AI models.

He made several suggestions for how such a body could regulate the industry, including "a combination of licensing and testing requirements," and said companies like OpenAI should be independently audited.

However, while there is a growing consensus on the risks, including the potential impacts on people's jobs and privacy, there is still little consensus on what these regulations should look like or what potential audits should focus on. At the first Generative AI Summit hosted by the World Economic Forum, where leaders from business, government and research institutions came together to align on how to navigate these new ethical and regulatory considerations, two key themes emerged:

The need for responsible and accountable AI auditing

First, we need to update our requirements for companies developing and deploying AI models. This is particularly important when we ask ourselves what "responsible innovation" really means. The UK has led this discussion, with its government recently publishing guidance for AI built around five core principles, including security, transparency and fairness. Recent research from Oxford has also pointed out that "LLMs such as ChatGPT drive an urgent need to update our concept of accountability".

One of the main drivers of this push towards new responsibilities is the increasing difficulty of understanding and auditing the new generation of AI models. To see this shift, compare "traditional" AI with LLM (large language model) AI in the example of recommending candidates for jobs.

If a traditional AI model were trained on data that over-represents employees of a certain race or gender in higher-level positions, it could learn that bias and recommend people of the same race or gender for those positions. Fortunately, this is something that could be detected or audited by inspecting the data used to train these AI models, as well as the...
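
To make the "traditional" case concrete, here is a minimal sketch of the kind of training-data audit described above. Everything in it is illustrative rather than drawn from the article: the file name, column names and the four-fifths threshold are assumptions, and a real audit would also examine the model's recommendations, not just its training data.

```python
# A minimal sketch of a training-data bias audit for a hiring recommender.
# All names here are hypothetical: a CSV of historical hiring decisions
# ("hiring_data.csv") with a protected-attribute column ("gender") and a
# binary outcome column ("recommended", 1 = recommended for the role).
import csv
from collections import defaultdict

def selection_rates(path, group_col="gender", outcome_col="recommended"):
    """Share of positive outcomes per group in the training data."""
    totals, positives = defaultdict(int), defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            group = row[group_col]
            totals[group] += 1
            positives[group] += int(row[outcome_col])
    return {g: positives[g] / totals[g] for g in totals}

rates = selection_rates("hiring_data.csv")
print("Selection rate by group:", rates)

# One common heuristic from US employment auditing (the "four-fifths rule"):
# flag the data if any group's selection rate falls below 80% of the highest.
highest = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * highest:
        print(f"Potential disparate impact: {group} at {rate:.0%} vs {highest:.0%}")
```

The point of the sketch is that tabular training data like this can be inspected directly; it is exactly this inspectability that becomes harder to come by with large language models.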
