
Editors: Zhang Jinhe, Song Xinyue

OpenAI, the global heavyweight AI giant, has been in constant turmoil recently. Following the official resignation announcement of OpenAI co-founder and chief scientist Ilya Sutskever, Jan Leike, OpenAI's head of safety and co-leader of the "Superalignment" team, also announced his resignation on social media. The departure of these two heavyweights

has undoubtedly cast a shadow over OpenAI's future.

Sources revealed that OpenAI has disbanded its long-term AI risk team, "Superalignment", just one year after announcing its establishment.

As news of the executive resignations spread, reports also emerged that OpenAI required departing employees to sign a "hush agreement". The agreement reportedly required departing employees to promise never to disparage OpenAI for the rest of their lives, or risk forfeiting all the equity they had received. The news quickly exploded on the Internet, triggering extensive discussion of OpenAI's corporate culture and broader industry ethics.

The optics of this series of events are particularly bad. The two executives who resigned were co-leaders of OpenAI's "Superalignment" team. Jan Leike, one of those leaders, revealed the inside story of his resignation, saying that Altman refused to provide resources and pushed commercialization at the expense of safety.

In the face of speculation and doubts about the executive resignations and the "hush agreement", Altman posted a rare long message on social media on May 19 local time in response: OpenAI has never enforced the "hush agreement", he was unaware of the clause's existence, and the provision will be removed in the future.

Altman responds: No employee equity has ever been clawed back

Image source: X

With senior executives resigning one after another and talk of the "hush agreement" flying everywhere, even CEO Sam Altman could not sit still. On May 19 local time, he issued a rare long response, stating that OpenAI had never enforced this provision and that he himself had been unaware of its existence.

But in doing so he effectively admitted the "hush agreement" exists. Altman acknowledged that the company's separation documents contained a clause concerning the "potential equity cancellation" of departing employees, but said it had never been put into practice. Altman further stated that the clause would be removed from future contracts.

Altman also emphasized: "At OpenAI, we adhere to the principle that the equity employees receive is legitimate compensation for their labor, and no additional conditions should be attached." Altman's words are clearly a direct response to outside suspicions, an effort to rebuild public trust in the company.

He said: "This is on me, and it is one of the few times I have genuinely felt embarrassed running OpenAI. I should have known about this, but I truly did not. OpenAI's operations team has been revising the terms of the separation agreement over the past month or so. Any former employees who signed the old agreement and are concerned about this can contact me, and OpenAI will actively resolve the problem."

Image source: X

At the same time, OpenAI President Greg Brockman also issued a long response on social media, jointly signed by him and Altman. The statement ran to about 500 words but in fact offered little substantive information.

The statement read: "We know we cannot foresee every possible future scenario, so we need to establish a very tight feedback loop, rigorous testing, and careful consideration at every step to ensure world-class safety and capability. We will continue to conduct safety research over different time horizons, and will continue to work with governments and many stakeholders on safety issues."

OpenAI's core safety team disbanded

According to a May 17 report on the US CNBC website, sources revealed that just one year after announcing the establishment of its long-term AI risk team, "Superalignment", OpenAI has disbanded it, with some team members reassigned to other teams within the company. OpenAI announced the team's creation in July last year to focus on "scientific and technical breakthroughs to steer and control AI systems much smarter than us," pledging at the time to dedicate 20% of its compute to the team over four years. Commenting on the news of the team's disbandment, Tesla CEO Elon Musk said: "This shows that safety is not OpenAI's top priority."

Jan Leike, the latest OpenAI executive to leave and co-leader of the Superalignment team, broke openly with the company and revealed the inside story of his resignation: Altman refused to provide resources and pushed commercialization at the expense of safety.

Leike, a co-leader of the Superalignment team, belongs to the "conservative" camp. Just a few hours after Sutskever announced his departure, he posted: "I resigned."

Image source: X

The day after officially announcing his resignation, Leike published more than ten posts in a row publicly explaining his reasons. The team was not allocated enough resources: the Superalignment team had been "sailing against the wind" in recent months, struggling to get compute, which made completing its research increasingly difficult. And safety was not a core priority for Altman: over the past few years, safety culture and processes had given way to shinier products. "I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point."

Image source: X

This is the first time an executive-level figure at OpenAI has publicly acknowledged that the company prioritizes product development over safety.

Altman's response remained gracious. He said he was very grateful for Leike's contributions to OpenAI's alignment research and safety culture, and was sorry to see him go. "He's right, we have a lot more to do, and we are committed to doing it. I'll have a longer post in the next couple of days."

Image source: X

Compiled by Daily Economic News from publicly disclosed information

Daily Economic News