US, Britain and other countries sign 'secure by design' AI guidelines
The guidelines suggest cybersecurity practices AI companies should implement when designing, developing, launching and monitoring AI models.
![US, Britain and other countries sign 'secure by design' AI guidelines](https://images.cointelegraph.com/cdn-cgi/image/format=auto,onerror=redirect,quality=90,width=1434/https://s3.cointelegraph.com/uploads/2023-11/8ca93440-80d3-4256-9cf3-552eb418add9.jpg)
The United States, United Kingdom, Australia and 15 other countries have released global guidelines to help protect AI models from being tampered with, urging companies to make their models "secure by design."
On Nov. 26, the 18 countries released a 20-page document outlining how AI companies should handle cybersecurity when developing or using AI models, claiming "security can often be a secondary consideration" in the fast-paced industry.
The guidelines consist mostly of general recommendations, such as maintaining tight control over AI model infrastructure, monitoring for any tampering with models before and after release, and training staff on cybersecurity risks.
Exciting news! We joined forces with @NCSC and 21 international partners to develop the "Guidelines for Secure AI System Development"! This is operational collaboration in action for secure AI in the digital age: https://t.co/DimUhZGW4R #AISafety #SecureByDesign pic.twitter.com/e0sv5ACiC3
— Cybersecurity and Infrastructure Security Agency (@CISAgov) November 27, 2023

Not mentioned were certain contentious issues in the AI space, including what possible controls there should be around the use of