The U.S., the U.K. and more than 15 other countries introduced the first detailed international agreement on how to keep artificial intelligence safe from rogue actors, pushing for companies to develop AI systems that are “secure by design,” Reuters reported.
In the document, which was unveiled on Sunday, the countries agreed that companies designing and using AI need to build and deploy it in a way that keeps customers and the wider public safe from misuse, the report added.
Besides the U.S. and Britain, the 18 countries include Germany, Italy, the Czech Republic, Estonia, Poland, Australia, Chile, Israel, Nigeria and Singapore.
The agreement is non-binding and carries mainly general guidelines, such as monitoring AI systems for abuse, protecting data from tampering and vetting software suppliers.
“This is the first time that we have seen an affirmation that these capabilities should not just be about cool features and how quickly we can get them to market or how we can compete to drive down costs,” Jen Easterly, the director of the U.S. Cybersecurity and Infrastructure Security Agency, told the news agency.
The guidelines address how to keep AI technology from being hijacked by hackers and include recommendations such as releasing models only after proper security testing, the report noted.
The agreement does not tackle other issues, such as the appropriate uses of AI or how the data fed into the models is collected.
The recommendations are the latest in a series of initiatives by governments globally to shape the development of AI. Earlier in November, France, Germany and Italy reportedly reached an agreement on AI regulation, which is expected to accelerate negotiations at the European level.
In October, U.S. President Joe Biden issued an executive order that establishes new standards for AI safety and security and protects privacy, in order to manage the risks of AI. The executive order requires that developers of AI systems share their safety test results and other critical information with the U.S. government.
The move built on previous actions the U.S. had taken, which included voluntary commitments from 15 companies to drive safe and trustworthy development of AI. In July, tech giants including Amazon (AMZN), Google’s parent Alphabet (NASDAQ:GOOG) (GOOGL), Meta Platforms (NASDAQ:META) and Microsoft (NASDAQ:MSFT) made voluntary commitments to the White House to implement certain measures for the safe use of AI.
The EU has pushed for a tougher stance on governing AI, while Japan has taken a more lenient approach, closer to that of the U.S., to strengthen economic growth. Southeast Asian nations have also adopted a more business-friendly approach to AI. China, meanwhile, is expected to launch an initiative to govern AI from multiple angles.
The U.N. Security Council held its first formal meeting in July to discuss the security and misinformation risks posed by the use of AI.
Generative AI services have become the talk of the town since the launch of Microsoft (MSFT)-backed OpenAI’s ChatGPT last year. Alibaba’s (BABA) Tongyi Qianwen 2.0 and Tongyi Wanxiang, Baidu’s (BIDU) Ernie Bot, OpenAI’s text-to-image tool DALL·E 3, Google’s Bard, Meta’s Emu Video, Emu Edit, AudioCraft, SeamlessM4T and Llama 2, Samsung’s (OTCPK:SSNLF) Gauss, and Getty Images’ (GETY) model, called Generative AI by Getty Images, are among the many generative AI models being developed by companies worldwide.