October 30, 2023
The G7 bloc is reportedly putting together a ‘voluntary’ code of conduct for the development of AI systems.
The rules ‘will set a landmark’ in how the G7 countries – Canada, France, Germany, Italy, Japan, the UK and the US – govern AI with regard to privacy concerns and security risks, according to Reuters, which has seen a document outlining the plans.
The process of drawing up an 11-point code began in May at a ministerial forum dubbed the ‘Hiroshima AI process’, and the document apparently says they aim ‘to promote safe, secure, and trustworthy AI worldwide and will provide voluntary guidance for actions by organizations developing the most advanced AI systems, including the most advanced foundation models and generative AI systems.’
It is also ‘meant to help seize the benefits and address the risks and challenges brought by these technologies.’ In practice it sounds as if the code of conduct will be about urging companies to take steps to mitigate risks as they roll out new AI systems, and to tackle ‘misuse’ once they have hit the market. It also apparently calls on these firms to publish public reports on what their products can do and to invest in security controls.
The devil will be in the details with all this, and those will have to be examined once the code is formally announced – but its cut and thrust will presumably not stray too far from an executive order US President Biden issued today on the matter.
The lengthy set of aims can similarly be categorised as either setting guardrails against bad outcomes that could stem from the proliferation of AI, or making the most of the good things it might bring.
These include using AI to engineer biological materials, methods of detecting AI-generated content, tools to prevent AI from ‘exacerbating discrimination’, best practices for the use of AI in the criminal justice system, and fostering a ‘government-wide AI talent surge.’
Meanwhile, back in the UK, the AI Safety Summit 2023 is due to take place this week at Bletchley Park, presumably a nod to the site’s role in cracking the Enigma codes in World War 2.
The summit will gather governments, AI companies, civil society groups and assorted experts in order to consider the risks of AI, and discuss how they can be mitigated. Its stated goals are:
A shared understanding of the risks posed by frontier AI and the need for action
A forward process for international collaboration on frontier AI safety, including how best to support national and international frameworks
Appropriate measures which individual organisations should take to increase frontier AI safety
Areas for potential collaboration on AI safety research, including evaluating model capabilities and the development of new standards to support governance
Showcase how ensuring the safe development of AI will enable AI to be used for good globally
Its announcement prompted an open letter to Prime Minister Sunak signed by trade unions, rights campaigners and a host of other organisations, which essentially says the summit excludes the workers who are most affected by the expansion of AI into the market. It read:
“As it stands, the Summit is a closed door event, overly focused on speculation about the remote ‘existential risks’ of ‘frontier’ AI systems – systems built by the very same corporations who now seek to shape the rules… For many millions of people in the UK and across the world, the risks and harms of AI are not distant – they are felt in the here and now… This is about being fired from your job by algorithm, or unfairly profiled for a loan based on your identity or postcode.”
The spirit of the letter, regardless of how influential this specific summit in the UK ends up being, does raise an important point about the ethics of self-regulation. Perhaps it can’t be any other way given the cutting-edge nature of AI, but having the firms that produce AI so integrated in the process of drawing up the rules around it does seem likely to pose complications when it comes to separating corporate self-interest from the wider good.
Other ambiguities worth running up the flagpole with all these proclamations are the efficacy of any set of rules that is voluntary, and what is considered misuse. One person’s misuse is another person’s business opportunity, after all – you can certainly find those who would argue that the profiting from personal data that underpins much of social media and search is a misuse of a kind.
There’s clearly a lot to shake out in how governments and society wrestle with the growing influence of AI – but with so many groups, summits, and general chatter coming out of governments all over the world, the issue does at least seem to be taken a lot more seriously than it was a year or so ago.