
ChatGPT creator OpenAI lobbied EU for less stringent rules: report

ETtech
Sam Altman, CEO, OpenAI

Synopsis

In several cases, OpenAI sought amendments that eventually made it to the final text of the draft proposal for the AI Act, TIME magazine reported, citing documents obtained from the EU via freedom of information requests.

Even as its CEO Sam Altman called for stricter regulation of artificial intelligence as he toured the world, ChatGPT maker OpenAI reportedly lobbied the European Commission to weaken significant elements of its proposed AI legislation.


“By itself, GPT-3 is not a high-risk system, but possesses capabilities that can potentially be employed in high-risk use cases,” OpenAI said in the seven-page document sent to the EU.


Under the draft law, systems classified as "high risk" would be subject to disclosure obligations, registration in a special database, and various monitoring and auditing requirements.

In a bid to avoid stricter rules, OpenAI also pushed back against a proposed amendment to the AI Act that sought to classify generative AI systems such as ChatGPT and Dall-E as “high risk” if they generated text or imagery that could “falsely appear to a person to be human generated and authentic.”

OpenAI said in its white paper that “GPT-3 and our other general purpose AI systems such as DALL-E may generate outputs that could be mistaken for human text and image content”.

The company instead argued that it would be enough for the law to require companies to put in place “reasonably appropriate mitigations around disinformation and deep fakes, such as watermarking content or maintaining the capability to confirm if a given piece of content was generated by their system”.

Notably, Altman said last month that the ChatGPT maker might consider leaving Europe if it could not comply with the new AI regulations.

“Before considering pulling out, OpenAI will try to comply with the regulation in Europe when it is set,” Altman said at an event in London.

OpenAI’s white paper sits at odds with the public stance of Altman, who has, on multiple occasions, called for stricter regulation of AI technologies.

At an ET event earlier this month, Altman said, “We have explicitly said there should be no regulations on smaller companies or on the current open source models—it’s important to let that flourish. The only regulation we have called for is (on) people like ourselves or bigger.”

The EU’s new AI regulations

On June 14, the European Parliament voted to approve its draft proposal for the AI Act, a piece of legislation that it hopes will shape global standards in the regulation of AI.

It would become the first legislation in the world dedicated to regulating AI across almost all sectors of society, with defence as the main exception.

The law will regulate AI according to the level of risk; the higher the risk to individuals' rights or health, for example, the greater a system's obligations.

The legislation proposes stiff penalties for non-compliant companies, of up to 6% of their total worldwide annual turnover. It also lays out specific restrictions and safeguards, along with clear obligations for service providers.
