

Text | Crow Intelligence Says

After the launch of GPT-4o, OpenAI ushered in multiple rounds of personnel changes.

Last week, OpenAI co-founder and chief scientist Ilya Sutskever announced his resignation on Twitter. In his resignation tweet, Ilya expressed his gratitude to the company, to Sam Altman, and to others; they in turn tweeted responses thanking Ilya for his efforts.

Although everyone knows there is an irreparable rift between the two sides, on the surface both behaved harmoniously. But with the resignation of Jan Leike, the leader of the Superalignment team, the divisions within OpenAI have once again been laid on the table.

When he resigned, Jan Leike announced the reason on Twitter: "Over the past few months, my team has been sailing against the wind. At times we struggled for compute, and getting this important research done became increasingly difficult."

According to "Wired", the OpenAI Super Alignment team has been disbanded, and the remaining members have either resigned or will be included in other OpenAI research efforts.

The Superalignment team is out!

OpenAI formed the Superalignment team in July last year, led by Jan Leike and Ilya Sutskever, with the goal of solving the core technical challenges of controlling superintelligent AI within four years. The team was promised 20% of the company's computing resources. Superalignment's mission is to ensure that future artificial general intelligence stays aligned with human goals and does not go rogue.

According to Jan Leike, OpenAI's leadership is divided over the company's core priorities. In his view, more of the company's bandwidth should go into preparing for the next generation of models, including security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics. Over the past few years of OpenAI's development, safety culture and processes have taken a back seat to shiny products. This left Jan Leike's team facing many difficulties in recent months, including insufficient computing resources, which made key research work increasingly hard.

It is worth noting that Jan Leike is the first departing OpenAI employee to express his dissatisfaction so clearly. After he voiced it publicly, Vox's Sigal Samuel published an article explaining the resignations in detail.

The resignations of Ilya Sutskever and Jan Leike are a continuation of the controversy last November, when OpenAI's board of directors tried to fire Sam Altman. Since then, at least five of the company's most safety-conscious employees have either resigned or been pushed out.

Other safety-focused former employees tweeted about Jan Leike's resignation, along with a heart emoji. One of them is Leopold Aschenbrenner, an Ilya ally and Superalignment team member who was fired from OpenAI last month. Media reports indicate that he and Pavel Izmailov, another researcher on the same team, were allegedly fired for leaking information, but OpenAI has not provided any evidence of a leak.

Despite clear disagreements, few have expressed their dissatisfaction as openly as Jan Leike. The reason is that OpenAI usually requires departing employees to sign a severance agreement containing a non-disparagement clause; those who refuse to sign lose their equity in the company, which could mean giving up millions of dollars.

One day after the article was published, Sam Altman tweeted an admission that the company's offboarding documents contained a clause about "potential equity cancellation" for departing employees, but that the clause had never actually been used to claw back anyone's equity. He said he had not known the clause was in the agreement, and that the company was revising it.


At the same time, in response to the model-safety concerns raised by Jan Leike, Sam Altman and Greg Brockman also responded on Twitter with a similar message: OpenAI has put a great deal of effort into model safety and laid a lot of groundwork, and going forward the company will continue to collaborate with governments and many stakeholders on safety.

Should AI develop or be safe?

In essence, the contradiction between Sam and Ilya comes down to a conflict between the ideals of effective accelerationism and of superalignment, which seeks to instill a "love for humanity" in AI.

The former regards AI primarily as a tool for improving productivity and calls for unconditionally accelerating technological innovation, while the latter regards AI as the digital life of the future, and therefore holds that the development strategy of effective accelerationism must be set aside until superalignment has instilled a "love for humanity" in it.

During the turmoil last November, a common view was that Ilya had seen an internal next-generation AI model named Q* (pronounced "Q-star") that was so powerful and advanced it might threaten humanity, and that Ilya's and Sam's paths diverged as a result.

With the contradictions within OpenAI's management once again made public, the debate over whether AI should prioritize development or safety has flared up again.

Microsoft AI's new CEO Mustafa Suleyman once said that humanity may need to pause AI within the next five years. Shane Legg, chief AGI scientist at Google DeepMind, once said: "If I had a magic wand, I would slow down."

Quite a few people, however, believe that worries about the safety of AI models are unfounded, including Meta's chief scientist and Turing Award winner Yann LeCun. According to LeCun, before we "urgently figure out how to control AI systems that are much smarter than us," we first need to have the beginnings of a design for a system smarter than a house cat.

He also drew an analogy: people worried about AI safety today are much like someone saying in 1925, "We urgently need to figure out how to control airplanes that can carry hundreds of passengers across oceans at nearly the speed of sound." Before the turbojet engine was invented, and before any aircraft had flown non-stop across the Atlantic, it was hard to guarantee the safety of long-haul passenger aircraft. Yet today we can safely fly halfway around the world on a twin-engine jet.

In his view, this kind of detached-from-reality prejudice about AI safety is an important reason the Superalignment team was marginalized within OpenAI: "Even though everyone realized there was nothing to be afraid of, the alignment team kept insisting there was. So they got kicked out."

A similar view has been echoed by many. Daniel Jeffries believes the Superalignment team's departure was not because they saw super-advanced AI they could not cope with, but because the OpenAI team realized such AI is not achievable in the short term, and investment driven by fears of AI running out of control came to be seen as a waste of resources. OpenAI's leadership therefore began cutting the resources given to the Superalignment research team and shifted toward more practical work, such as building products that improve the user experience.

It is foreseeable that with the departure of Ilya Sutskever and Jan Leike, the debate over this issue within OpenAI may quiet down for a while. But given the logic of AI development, such debates will never stop before humanity reaches the final destination: AGI.
