OpenAI delays open-weight model release: summer launch to rival DeepSeek’s R1

OpenAI’s latest update reveals that its open-weight AI model, initially slated for June, will arrive later this summer, promising reasoning power on par with or exceeding DeepSeek’s R1. The delay underscores the complexity of delivering advanced, publicly accessible AI weights that balance performance and openness.

Sources: The Economic Times, The Hindu
OpenAI has postponed the launch of its much-anticipated open-weight language model, pushing the release from early summer to later this summer. CEO Sam Altman announced the delay on X, emphasizing the need for additional development time.

The open-weight model is designed to match the reasoning capabilities of OpenAI's existing o-series models, such as the recently rolled out o3-pro, which has been praised for improved performance in science, education, programming, data analysis, and writing.

"We are going to take a little more time with our open-weights model, i.e. expect it later this summer but not June," Altman stated.

Open-weight models differ from fully open-source AI: only the trained parameters are published, not the training code or data. This lets researchers and developers run the models locally without depending on a cloud API, though usage restrictions may still apply.
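To make the distinction concrete, here is a toy sketch in plain Python of what "releasing only the trained weights" means in practice. The two-parameter "model" and the file name are invented for illustration; real releases ship billions of parameters in formats like safetensors, but the principle is the same: the published artifact is a parameter file that anyone can download and run locally, with no API call involved.

```python
import json

# Toy "model": weights for y = w*x + b, standing in for a full network.
trained_weights = {"w": 2.0, "b": 1.0}

# "Releasing the weights" amounts to making this file downloadable.
with open("open_weights.json", "w") as f:
    json.dump(trained_weights, f)

# A third party loads the file and runs inference entirely locally,
# offline, without any hosted API.
with open("open_weights.json") as f:
    params = json.load(f)

def predict(x, p):
    return p["w"] * x + p["b"]

print(predict(3.0, params))  # 7.0
```

What is *not* in the file (training code, training data, and any license terms restricting use) is exactly what separates an open-weight release from a fully open-source one.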

The upcoming model aims to outperform competitors like DeepSeek's R1, a leading open reasoning model, signaling OpenAI's commitment to advancing accessible AI technology.

Despite the delay, OpenAI continues to improve its offerings: the o3-pro model, while lacking image generation and Canvas support and responding more slowly than o1-pro, has been consistently preferred over o3 by expert reviewers.

As AI models grow more powerful, concerns about energy consumption rise. A typical ChatGPT query uses about 0.34 watt-hours of energy and 0.000085 gallons of water, with predictions that AI's total power use could surpass Bitcoin mining by 2025.
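Those per-query figures add up quickly at scale. A back-of-the-envelope calculation, where the one-billion-queries-per-day volume is an illustrative assumption rather than a reported figure:

```python
WH_PER_QUERY = 0.34            # watt-hours per query (figure cited above)
GALLONS_PER_QUERY = 0.000085   # gallons of water per query (cited above)
QUERIES_PER_DAY = 1_000_000_000  # hypothetical daily query volume

daily_kwh = WH_PER_QUERY * QUERIES_PER_DAY / 1000  # Wh -> kWh
daily_gallons = GALLONS_PER_QUERY * QUERIES_PER_DAY

print(f"{daily_kwh:,.0f} kWh/day")       # 340,000 kWh/day
print(f"{daily_gallons:,.0f} gallons/day")  # 85,000 gallons/day
```

At the assumed volume, that is roughly 340 MWh of electricity and 85,000 gallons of water per day from inference alone.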

"In expert evaluations, reviewers consistently prefer OpenAI o3-pro over o3, highlighting its improved performance in key domains: science, education, programming, data analysis, and writing," OpenAI noted.

The delay underscores the complexity of developing open-weight models that balance accessibility, performance, and responsible deployment.
OpenAI has delayed the release of its highly anticipated open-weight language model, initially expected in early summer, to later this summer. CEO Sam Altman said the model aims to rival DeepSeek’s R1 with advanced reasoning capabilities, following the rollout of the improved o3-pro model.
"We are going to take a little more time with our open-weights model, i.e. expect it later this summer but not June."
— Sam Altman, CEO of OpenAI
Key Facts
  • OpenAI launched the o3-pro model for ChatGPT Pro and Teams users, with expert reviewers consistently preferring it over the previous o3 model due to improved performance in science, education, programming, data analysis, and writing. (The Economic Times)
  • The o3-pro model has limitations: it does not support image generation or Canvas, and its responses generally take longer than o1-pro's. (The Economic Times)
  • CEO Sam Altman announced a delay in the release of the highly anticipated open-weight model, which will not launch in June as initially expected but later in summer 2025. (The Hindu)
  • The upcoming open-weight model aims to match or outperform leading open reasoning models such as DeepSeek's R1, with reasoning capabilities on par with OpenAI's o-series models.
  • Open-weight models share their trained parameters publicly, enabling offline use, and sit between fully open-source and closed AI systems: the weights are released, but usage restrictions may apply.
Key Stats at a Glance
  • Date of CEO Sam Altman's delay announcement: June 11
  • Expert reviewers' preference for o3-pro over o3: 100% (The Economic Times)