Ofcom joins generative AI debate but adds little

Not wanting to be left out, UK watchdog Ofcom has shared its thoughts on what AI could mean for society and what it is doing about it.

Nick Wood

June 8, 2023

3 Min Read


Rather than proffer anything particularly original on the subject – or point out that ChatGPT and its ilk aren't actually intelligent but merely reconstitute other people's ideas – Ofcom has rehashed many of the warnings that have gone before.

“Voice clones created by generative AI tools could be used to scam people over the phone by impersonating loved ones. Fraudsters could also use generative AI models to create more effective phishing content,” Ofcom warned. “Generative AI could pose various risks to users of online services, for instance by enabling people to more easily access instructions for self-harm or by providing advice on smuggling illegal substances.”

On the one hand, Ofcom has held back from suggesting that AI represents an existential threat to humanity, which is quite refreshing by today's standards. And unlike some, Ofcom is not calling for a pause on AI development, merely asking that the technology be applied conscientiously. The regulator also talks up some of the benefits it could bring to the content and telecoms industries, specifically regarding the production of realistic visual effects and the detection of malicious network traffic.

But on the other hand, the last sentence in the quote above appears to be a somewhat ham-fisted attempt to bring the generative AI debate within the scope of the government’s controversial Online Safety Bill. There’s nothing quite like a bit of scaremongering to help drum up support for legislation that some argue has the potential to undermine personal privacy and freedom of speech.

Indeed, in explaining how it is getting to grips with AI, Ofcom states that it is “working with companies that are developing and integrating generative AI tools which might fall into [the] scope of the Online Safety Bill, to understand how they are proactively assessing the safety risks of their products and implementing effective mitigations to protect users from potential harms.”

Ofcom is also following the development of AI detection techniques and the role of transparency in helping people distinguish between real and AI-generated content, so they know whether what they are looking at was produced by a human or a computer.

In tandem with this, Ofcom is keeping tabs on how people's media literacy might be affected by generative AI, as well as by augmented and virtual reality (AR/VR) technologies. It is also providing information to the industries it regulates about what AI means for them and, consequently, their responsibilities to the customers they serve.

Telcos should take note, for they are among the biggest proponents of AI, leveraging it for everything from enhanced customer care and service recommendation engines to predicting network traffic spikes and automating capacity provisioning.

Earlier this week, for example, Amdocs launched amAIz, its bid to make it even easier for CSPs to reap the benefits of AI by adding generative AI to its various OSS/BSS and CRM offerings.

“We are pleased to see many stakeholders across our sectors undertaking work to realise the benefits of generative AI while minimising the potential risks,” Ofcom said.

“When companies and service providers are integrating generative AI models into their products and services, we expect them to consider the risks and potential harms that might arise. We also expect firms to think about what systems and processes they could deploy to mitigate those risks,” it continued. “Transparency about how these tools work, how they are used and integrated into services, and what steps have been taken to build in protections from harm are likely to be critical to building confidence that risks can be minimised while allowing users to enjoy the benefits generative AI can provide.”


About the Author

Nick Wood

Nick is a freelancer who has covered the global telecoms industry for more than 15 years. Areas of expertise include operator strategies, M&A and emerging technologies, among others. As a freelancer, Nick has contributed news and features to many well-known industry publications. Before that, he wrote daily news and regular features as deputy editor of Total Telecom. He has a first-class honours degree in journalism from the University of Westminster.

