OpenAI Co-founder Launches New Startup


Ilya Sutskever, the co-founder and former chief scientist of OpenAI, has announced the launch of his new venture, Safe Superintelligence Inc. (SSI). Alongside co-founders Daniel Gross, an investor and former Y Combinator partner, and ex-OpenAI researcher Daniel Levy, Sutskever aims to tackle what the three consider the most critical problem in AI: building a superintelligent system that is both powerful and safe.

Sutskever believes superintelligence, a loosely defined term for AI that surpasses human intelligence, could arrive within ten years. The company's statement, posted by Sutskever on X, declares, "Superintelligence is within reach. Building safe superintelligence (SSI) is the most important technical problem of our time. We have started the world's first straight-shot SSI lab, with one goal and one product: a safe superintelligence." The founders describe SSI as not just their mission but also their name and entire product roadmap. "SSI is our mission, our name, and our entire product roadmap because it is our sole focus. Our team, investors, and business model are all aligned to achieve SSI," the statement reads.

The Opposite of OpenAI?

While Sutskever and OpenAI CEO Sam Altman have publicly expressed mutual respect, recent events suggest underlying tensions. Sutskever was instrumental in the attempt to oust Altman, a move he later said he regretted. He then kept such a low public profile that onlookers openly wondered about his whereabouts, before formally resigning in May.

This incident, together with the departure of other key researchers citing safety concerns, raises questions about OpenAI's priorities and direction. OpenAI's "superalignment team," tasked with aligning AI with human values, was effectively dismantled after Sutskever and fellow researcher Jan Leike left the company this year. Sutskever's decision to leave appears to stem from a desire to pursue a project that matches his vision for AI development more closely — a vision he seemingly believes OpenAI is abandoning as it drifts from its founding principles.

AI With Safety First

The risks surrounding AI are hotly contested. While humanity has a primal urge to fear artificial systems more intelligent than ourselves — an understandable sentiment — not all AI researchers believe such systems are possible anytime soon. The key point, however, is that neglecting the risks now could prove devastating later. SSI intends to tackle safety while simultaneously developing capabilities: "We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs.

“We plan to advance capabilities as fast as possible while making sure our safety always remains ahead,” the founders explain. This approach allows SSI to “scale in peace,” free from the distractions of management overhead, product cycles, and short-term commercial pressures. “Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures,” the statement stresses.

The Dream Team

To achieve their goals, SSI is assembling a "lean, cracked team of the world's best engineers and researchers dedicated to focusing on SSI and nothing else." "We are an American company with offices in Palo Alto and Tel Aviv, where we have deep roots and the ability to recruit top technical talent," the statement notes. "If that's you, we offer an opportunity to do your life's work and help solve the most important technical challenge of our age." With SSI, yet another player joins the ever-expanding AI field. It will be interesting to see who signs on — and particularly whether there is a significant movement of talent from OpenAI.

Conclusion

The launch of Safe Superintelligence Inc. marks a significant shift in the AI landscape, driven by a mission to ensure the development of safe and powerful superintelligent AI. Sutskever’s departure from OpenAI and the founding of SSI underscore a growing concern within the AI community about the need for focused efforts on safety and ethical considerations. With a dedicated team and a clear mission, SSI aims to lead the way in developing AI systems that not only advance technology but also prioritize safety and alignment with human values. The success of SSI will likely influence the future direction of AI research and development, making it a venture to watch closely.

See also: The Anthropic Claude 3.5 Sonnet Outperforms GPT-4o
