The UK watchdog Ofcom has shared its thoughts on what AI could mean for society and what it is doing about it. The regulator has reiterated many of the warnings that AI-generated content could pose risks to online users, highlighting the potential use of voice clones in scams and malicious content.
While Ofcom does not view AI as an existential threat to humanity, it emphasizes the need for responsible use of the technology. The regulator highlights the positive impact of AI on the content and telecom industries, such as improved visual effects and detection of malicious network traffic. However, it is also concerned about the potential for AI-generated content to harm users, a risk that falls within the scope of the government's controversial Online Safety Bill.
In response to these risks, Ofcom is working with companies developing and integrating AI tools that might fall under the scope of the Online Safety Bill. The regulator is focusing on proactive assessment of safety risks and effective mitigation strategies to protect users from potential harms.
Additionally, Ofcom is closely monitoring the development of AI detection techniques and the role of transparency in helping users distinguish between real and AI-generated content. This includes informing users about whether a piece of content was created by a human or a computer.
Ofcom is also looking at how media literacy may be impacted by generative AI, as well as AR/VR technologies. It is providing information to the industries it regulates about the implications of AI and their responsibilities towards their customers.
Telecommunications companies, in particular, make extensive use of AI for purposes such as customer service, network traffic prediction, and automated capacity provisioning. For example, Amdocs recently launched its amAIz product to deepen the integration of AI within its OSS/BSS and CRM offerings.
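To make one of those applications concrete: network traffic prediction and monitoring often starts with flagging traffic that deviates sharply from a recent baseline. The following is a minimal illustrative sketch of that idea, not any vendor's actual implementation; the function name, window size, and threshold are all assumptions chosen for the example.

```python
from statistics import mean, stdev

def flag_anomalies(samples, window=5, threshold=3.0):
    """Flag traffic samples that spike well above a trailing baseline.

    Returns the indices of samples that sit more than `threshold`
    standard deviations above the mean of the preceding `window` samples.
    (Hypothetical helper for illustration only.)
    """
    anomalies = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (samples[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Steady traffic with one sudden burst (e.g. a suspicious spike)
traffic = [100, 102, 98, 101, 99, 100, 103, 500, 101, 100]
print(flag_anomalies(traffic))  # the spike at index 7 is flagged
```

Production systems use far more sophisticated models, but the principle is the same: learn what normal looks like, then surface the deviations for investigation.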
Ofcom expects companies and service providers integrating generative AI models to consider the risks and potential harms of their technology, and to devise systems and processes to mitigate those risks. Transparency about how these AI tools function, how they are used and integrated, and what steps are taken to protect users from harm is essential for building confidence that risks can be minimized while users still enjoy the benefits generative AI can provide.