May 29, 2024

OpenAI’s new safety committee is made up of all insiders

OpenAI has formed a new committee to oversee “critical” safety and security decisions related to the company’s projects and operations. But, in a move that’s sure to raise the ire of ethicists, OpenAI’s chosen to staff the committee with company insiders — including Sam Altman, OpenAI’s CEO — rather than outside observers.

Altman and the rest of the Safety and Security Committee — OpenAI board members Bret Taylor, Adam D’Angelo and Nicole Seligman as well as chief scientist Jakub Pachocki, Aleksander Madry (who leads OpenAI’s “preparedness” team), Lilian Weng (head of safety systems), Matt Knight (head of security) and John Schulman (head of “alignment science”) — will be responsible for evaluating OpenAI’s safety processes and safeguards over the next 90 days, according to a post on the company’s corporate blog. The committee will then share its findings and recommendations with the full OpenAI board of directors for review, OpenAI says, at which point it’ll publish an update on any adopted suggestions “in a manner that is consistent with safety and security.”

“OpenAI has recently begun training its next frontier model and we anticipate the resulting systems to bring us to the next level of capabilities on our path to [artificial general intelligence],” OpenAI writes. “While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment.”

OpenAI has over the past few months seen several high-profile departures from the safety side of its technical team — and some of these ex-staffers have voiced concerns about what they perceive as an intentional de-prioritization of AI safety.

Daniel Kokotajlo, who worked on OpenAI’s governance team, quit in April after losing confidence that OpenAI would “behave responsibly” around the release of increasingly capable AI, as he wrote in a post on his personal blog. And Ilya Sutskever, an OpenAI co-founder and formerly the company’s chief scientist, left in May after a protracted battle with Altman and Altman’s allies — reportedly in part over Altman’s rush to launch AI-powered products at the expense of safety work.

More recently, Jan Leike, a former DeepMind researcher who while at OpenAI was involved with the development of ChatGPT and ChatGPT’s predecessor, InstructGPT, resigned from his safety research role, saying in a series of posts on X that he believed OpenAI “wasn’t on the trajectory” to get issues pertaining to AI security and safety “right.” AI policy researcher Gretchen Krueger, who left OpenAI last week, echoed Leike’s statements, calling on the company to improve its accountability and transparency and “the care with which [it uses its] own technology.”

Quartz notes that, besides Sutskever, Kokotajlo, Leike and Krueger, at least five of OpenAI’s most safety-conscious employees have either quit or been pushed out since late last year, including former OpenAI board members Helen Toner and Tasha McCauley. In an op-ed for The Economist published Sunday, Toner and McCauley wrote that — with Altman at the helm — they don’t believe that OpenAI can be trusted to hold itself accountable.

“[B]ased on our experience, we believe that self-governance cannot reliably withstand the pressure of profit incentives,” Toner and McCauley said.

To Toner and McCauley’s point, TechCrunch reported earlier this month that OpenAI’s Superalignment team, responsible for developing ways to govern and steer “superintelligent” AI systems, was promised 20% of the company’s compute resources — but rarely received a fraction of that. The Superalignment team has since been dissolved, and much of its work placed under the purview of Schulman and a safety advisory group OpenAI formed in December.

OpenAI has advocated for AI regulation. At the same time, it’s made efforts to shape that regulation, hiring an in-house lobbyist and lobbyists at an expanding number of law firms and spending hundreds of thousands of dollars on U.S. lobbying in Q4 2023 alone. Recently, the U.S. Department of Homeland Security announced that Altman would be among the members of its newly formed Artificial Intelligence Safety and Security Board, which will provide recommendations for “safe and secure development and deployment of AI” throughout the U.S.’ critical infrastructures.

In an effort to avoid the appearance of ethical fig-leafing with the exec-dominated Safety and Security Committee, OpenAI has pledged to retain third-party “safety, security and technical” experts to support the committee’s work, including cybersecurity veteran Rob Joyce and former U.S. Department of Justice official John Carlin. However, beyond Joyce and Carlin, the company hasn’t detailed the size or makeup of this outside expert group — nor has it shed light on the limits of the group’s power and influence over the committee.

In a post on X, Bloomberg columnist Parmy Olson notes that corporate oversight boards like the Safety and Security Committee, similar to Google’s AI oversight boards like its Advanced Technology External Advisory Council, “[do] virtually nothing in the way of actual oversight.” Tellingly, OpenAI says it’s looking to address “valid criticisms” of its work via the committee — “valid criticisms” being in the eye of the beholder, of course.

Altman once promised that outsiders would play an important role in OpenAI’s governance. In a 2016 piece in the New Yorker, he said that OpenAI would “[plan] a way to allow wide swaths of the world to elect representatives to a … governance board.” That never came to pass — and it seems unlikely it will at this point.


