How governments are looking to regulate AI

  • The EU has taken the lead with the strictest rules, which will have the biggest impact globally
  • Chinese rules are also strict, but their impact is domestic, and they are geared towards keeping power and control with the ruling party
  • The US favours innovation, and its political system hinders any attempt at regulation, giving the courts and states outsized power
  • In any case, AI regulations will further contribute to fragmentation and regionalisation, which have become major trends in the tech world

As artificial intelligence (AI) gathers pace, so do attempts to regulate it. Every country is looking to have its own set of rules, which could lead to further fragmentation of the global digital market. The risks associated with AI, whether misinformation, job losses or bias, are key reasons behind the move towards regulation. This article is a snapshot of the situation as it stands, with particular focus on the steps taken by the US, the EU and China, but it is worth bearing in mind that the picture is evolving and will necessarily change as the technology continues to develop.

The EU wants to remain the global tech regulator

As with other digital regulation, the EU has taken the lead over the US and China. The AI Act, which the European Parliament has passed and which is now in tripartite negotiations with the Council of the European Union and the European Commission, classifies AI systems by the level of risk they pose. AI with unacceptable risks, such as social scoring, is banned, whereas high-risk AI will require both registration and a declaration of conformity before being allowed on the market. The framework was originally drafted with single-task AI in mind, so the rise of generative AI and foundation models, capable of multiple tasks, forced a rewrite. Under the Parliament's text, such systems are essentially all treated as high risk, with an extra focus on transparency, especially around copyrighted training material, and on liability. It remains to be seen whether the final text, expected by the end of 2023 or early 2024, will go that far.

Other rules will also affect the development of AI in the EU. The AI Liability Directive, still under discussion, focuses on algorithmic harm and shifts the burden of proof from the user to the company: if damage occurs, companies will need to prove their systems were not harmful, rather than users proving that they were. The Data Act, also under discussion, covers fair access to and use of data, giving users (both individuals and businesses) control, while the Data Governance Act, which has been passed, sets out the mechanisms that enable data-sharing. The General Data Protection Regulation (GDPR), in effect since 2018, governs the privacy of personal data and therefore how it can be used by AI systems, while the Digital Markets Act, which has also been passed, focuses on competition and will target the largest cloud players, whose services are essential for AI systems.

The US has a dysfunctional political system and is wary of China

The US has always favoured innovation over regulation and would prefer the market to introduce its own self-regulatory principles. This approach has been reinforced by its tech rivalry with China, which has increased the pressure to innovate and led the US to impose strict export controls, on semiconductors for instance.

The US political system also makes framing regulation difficult. The legislative branch has looked at AI, with the Senate introducing its SAFE Innovation framework and the House introducing the Algorithmic Accountability Act, but neither is likely to pass before Congress's term ends in 2024, considering that no substantial tech legislation was passed in the previous term even with the Democrats in control. This means the executive branch has to use its existing legal authority to regulate new use cases. The White House has introduced its own AI Bill of Rights and has secured voluntary commitments from major tech companies to manage the risks of AI. The Federal Trade Commission (FTC), alongside the Department of Justice (DoJ), the Consumer Financial Protection Bureau (CFPB) and the Equal Employment Opportunity Commission (EEOC), has released a joint statement, and the FTC has also opened an investigation into OpenAI, focusing on consumer protection and consumer harm.

The judiciary will also have to get involved and decide whether the executive can use its existing legal authority. In two recent decisions (Gonzalez vs Google and Twitter vs Taamneh), the Supreme Court left intact Section 230, which gives internet companies immunity for the content available on their platforms, declining to narrow its application to the recommendations made by their algorithms. More AI-related rulings should be expected, and individual states may also gain in pre-eminence if little is passed at the federal level.

Chinese regulation is about power and control

While there may be a pause in overall tech regulation in China, this is not the case with AI, with a full set of regulations expected to pass by the end of 2023. While China wants to become the global AI leader, the ruling Communist Party does not want alternative spheres of power to emerge. Early rules focused on how recommendation algorithms are used to sell to customers, and were part of the greater control and scrutiny that tech players have faced in the country.

The focus on control and power was also apparent in the April 2023 draft regulation on generative AI, which made clear that respecting socialist values is at the core of this use of AI. Accuracy and the prevention of fake content are critical parts of the proposed rules, there have been discussions about the need to register models and algorithms, and training data is also under scrutiny. The rules' focus on responsibility, privacy, accuracy and misinformation makes them the closest in the world to the EU's, though they apply under a very different regime. They will also have far less impact globally, as the Communist Party remains focused on ensuring control of the domestic market.

Other countries have introduced or are looking to introduce AI rules, but the EU remains very much an outlier because of the potential strictness of its regime. It is hoping to have the same global impact with AI as it did with privacy: the GDPR has become the global standard for data regulation, with businesses outside the bloc following its rules worldwide to ensure continued access to the European market. However, many other countries see AI as a competitive advantage and are inclined to favour innovation over regulation through a light-touch approach, which could put the EU at a disadvantage. In any case, the introduction of AI regulations by governments around the world will increase fragmentation and regionalisation, which have become major trends in the tech world in recent years.

The analysis and forecasts featured in this piece can be found in EIU’s Country Analysis service. This integrated solution provides unmatched global insights covering the economic, political and policy outlook for nearly 200 countries, enabling government departments to direct foreign policy.