OpenAI's lobbying efforts in the European Union are centered on modifying proposed AI regulations that could impact its operations. Notably, the company is pushing to weaken provisions that would classify general-purpose AI systems, such as its own GPT-3, as "high risk."
Altman's Stance on AI Regulation:
OpenAI CEO Sam Altman has been very vocal about the need for AI regulation. However, he is advocating for a specific kind of regulation: rules that favor OpenAI and its operations.
OpenAI's White Paper:
OpenAI's lobbying efforts in the EU are revealed in a document titled "OpenAI's White Paper on the European Union's Artificial Intelligence Act." The document focuses on changing provisions of the proposed AI Act that would classify certain AI systems as "high risk."
"High Risk" AI Systems:
The European Commission's "high risk" classification covers systems that could potentially harm health, safety, fundamental rights, or the environment. The Act would impose legal requirements for human oversight and transparency on such systems. OpenAI, however, argues that its systems, such as GPT-3, are not inherently "high risk," even though they could be deployed in high-risk use cases. It advocates that regulation should target the companies using AI models, not those providing them.
Alignment with Other Tech Giants:
OpenAI's position mirrors that of other tech giants like Microsoft and Google, which have also lobbied to weaken the EU's AI Act.
Outcome of Lobbying Efforts:
The lobbying efforts were successful: the sections OpenAI opposed were removed from the final version of the AI Act. This outcome may explain why Altman walked back an earlier threat to pull OpenAI out of the EU over the Act.
Source (Mashable)
PS: I run an ML-powered news aggregator that uses AI to summarize the best tech news from 50+ media outlets (TheVerge, TechCrunch…). If you liked this analysis, you'll love the content you'll receive from this tool!