Nations sign declaration on AI in UK; recognise some risks as ‘catastrophic’


India, China, the US, the UK and 24 other nations, along with the European Union, have recognised that some risks associated with artificial intelligence (AI) could be “catastrophic”. At the UK’s AI Safety Summit, the signatories of the Bletchley Declaration resolved to “work together in an inclusive manner to ensure human-centric, trustworthy and responsible AI that is safe”.

Britain’s Prime Minister Rishi Sunak speaks to journalists upon his arrival for the second day of the UK Artificial Intelligence (AI) Safety Summit at Bletchley Park, in central England. (AFP)

The statement recognises the risks posed by deepfakes and the need to address them urgently. “[T]he protection of human rights, transparency and explainability, fairness, accountability, regulation, safety, appropriate human oversight, ethics, bias mitigation, privacy and data protection needs to be addressed,” the declaration says.

The Declaration also acknowledges concerns around “unintended issues of control” over general-purpose AI models, which could act contrary to human intent. “These issues are in part because those capabilities are not fully understood and are therefore hard to predict,” the statement says. Such AI systems can amplify disinformation and pose risks in the fields of cybersecurity and biotechnology, it adds.

Calling for international cooperation, the statement urges countries to balance a “pro-innovation and proportionate governance and regulatory approach that maximises the benefits” against the risks associated with AI.

On addressing risks related to ‘frontier AI’, that is, the most advanced and cutting-edge AI, the agenda will focus on identifying and addressing AI safety risks “of shared concern” and building risk-based policies across countries, including transparency requirements for private actors developing frontier AI and tools for safety testing. To that end, the signatories will support “an internationally inclusive network of scientific research on frontier AI safety that encompasses and complements existing and new multilateral, plurilateral and bilateral collaboration”.

“We look at AI and indeed technology in general through the prism of openness, safety, trust and accountability,” Rajeev Chandrasekhar, the Indian minister of state for electronics and information technology, said at the summit. He added that regulation of technology and innovation should be “driven by a coalition of nations” rather than a handful of countries, and that the institutional framework should be less episodic, more sustained, and have strategic clarity.

Chandrasekhar reiterated the Indian government’s focus on the need for greater accountability from platforms. “There is a new regime, a new framework that needs to be built where there is greater accountability of platforms on issue of user harm. … on ensuring safety and trust of all those who use their platforms, whether it is AI, or indeed the internet at large,” he said.

“We have learnt in the last 10-15 years as governments that by allowing innovation to get ahead of regulation, we open ourselves to the toxicity and misinformation and the weaponisation that we see on the internet today, represented by social media. And we certainly can agree today that that is not what we should chart for the coming years in terms of AI,” Chandrasekhar said.
