
Regulatory Strategy of AI in the United Kingdom: Balancing Benefits and Risks

Published
September 5, 2023
Marysabel Villafuerte

Every day, AI gains more significance through its diverse forms and applications, which have produced computer systems capable of performing tasks with human-like intelligence. We now have AI for image recognition and language translation, alongside applications in computer vision, speech recognition, natural language processing, robotics, and machine learning. In the financial sector, AI can detect fraud; in healthcare, it has proven immensely helpful for radiologists and enables remote patient monitoring. To become such a useful tool, AI learns from datasets, using algorithms to predict events. AI systems range from specialized "narrow" AI to human-like "strong" AI, and machine learning, including deep learning, drives their development. Unfortunately, models like ChatGPT can occasionally produce inaccurate information ("hallucinations"). From an economic perspective, then, AI holds the potential for significant growth but also poses risks.

 

In the UK, for instance, AI has a positive impact on the economy: over 3,170 AI companies generate £10.6 billion in revenue and employ over 50,000 people, and investments in AI have reached £18.8 billion since 2016, spanning applications across many areas. AI also offers social benefits, such as improved medical services and the development of public products and services. However, its use carries risks, including bias, security concerns, job displacement, and ethical considerations. According to a PwC report, 7% of jobs in the UK are at high risk of automation within the next five years, rising to 30% within two decades. Creative industries face particularly concerning challenges. Discussions about intellectual property, ethics, and economic impact are currently taking place not only in the UK but also in various parts of the world.

 

For this reason, many AI leaders and experts have called for regulation of the technology. Over 1,000 specialists and leaders, including figures from Google, Amazon, and Apple, have signed a letter urging a six-month pause in training AI systems more powerful than GPT-4 in order to study and mitigate potential risks. The letter emphasizes the importance of robust third-party audits, regulation of access to computational power, well-resourced national AI agencies, liability for AI-induced damages, measures to prevent AI model leaks, funding for technical AI safety research, and standards for AI-generated content.

 

The UK government's approach to AI regulation involves a multi-pronged strategy detailed in its National AI Strategy. The Office for Artificial Intelligence, under the Department for Science, Innovation and Technology, oversees this strategy. In March 2023, a white paper titled "A Pro-Innovation Approach to AI Regulation" was published, focusing on regulatory reform. The UK seeks to balance AI's benefits against its risks, foster innovation, and address concerns around safety, transparency, fairness, accountability, and contestability. The framework's five principles guide AI development across all sectors and aim to earn public trust. The UK initially proposes a non-statutory framework, allowing regulators to implement the principles according to their expertise in each area, though a statutory duty for regulators to consider these principles may be introduced in the future. Coordination, support functions, and oversight mechanisms are being developed to create a coherent regulatory framework.

 

The US approach to AI risk management is characterized by a risk-based, sector-specific distribution of responsibility among federal agencies. While this approach has advantages, it has led to uneven development of AI policies and a lack of consistent federal legislation. The US has instead invested in non-regulatory infrastructure, such as AI risk management frameworks and research funding. By comparison, the EU has established more comprehensive legislation to regulate specific digital environments. The UK proposes to diverge from both approaches by combining flexible regulation with sovereign state capabilities, aiming to attract AI startups while building the technical expertise to set and enforce standards. Collaboration among international partners remains essential, with the EU-US Trade and Technology Council already cooperating successfully on AI standards development.

 

If you want to read the article by James Tobin, please click the following link: https://lordslibrary.parliament.uk/artificial-intelligence-development-risks-and-regulation/
