Liputan6.com, Jakarta - OpenAI has once again made waves in the world of artificial intelligence with the launch of its latest large-scale language model, GPT-5.4.
Released in early March 2026, this model is touted as the most efficient and capable "frontier model" designed specifically for professional work.
GPT-5.4 integrates agentic reasoning, coding, and workflow capabilities into a single model, positioning it as a cutting-edge solution for professionals who need an AI tool with high accuracy and multifunctional capabilities.
With six key enhancements, GPT-5.4 promises unprecedented efficiency and performance.
GPT-5.4 is here. Native computer-use capabilities. Up to 1M tokens of context in Codex and the API. Best-in-class agentic coding for complex tasks. Scalable tool search across larger ecosystems. More efficient reasoning for long, tool-heavy workflows. https://t.co/xuLt562S9b pic.twitter.com/mgAuVcOvp4
— OpenAI Developers (@OpenAIDevs) March 5, 2026
GPT-5.4 Launch and Availability
GPT-5.4 Thinking and GPT-5.4 Pro are rolling out now in ChatGPT. GPT-5.4 is also now available in the API and Codex. GPT-5.4 brings our advances in reasoning, coding, and agentic workflows into one frontier model. pic.twitter.com/1hy6xXLAmJ
— OpenAI (@OpenAI) March 5, 2026
OpenAI officially released GPT-5.4 on March 4, 2026.
This advanced model is now widely available on OpenAI's major platforms, including ChatGPT, Codex, and the OpenAI API.
For ChatGPT users, two main variants are being launched.
GPT-5.4 Thinking is available to Plus, Teams, and Pro customers, designed for deep reasoning and multi-step problem solving.
Meanwhile, GPT-5.4 Pro is intended for ChatGPT Enterprise and Edu customers and is also available via the API, making it ideal for highly demanding tasks such as advanced coding and complex analysis.
As part of this evolution, GPT-5.2 Thinking will be deprecated three months after the release of GPT-5.4.
Revolutionary Capabilities for Professionals
GPT-5.4 is OpenAI's first general-purpose model with native computer-use capabilities, enabling it to operate autonomously across a wide range of applications and operating systems on behalf of the user.
In the OSWorld-Verified benchmark, which tests the model's ability to navigate a desktop environment, GPT-5.4 achieved a 75.0% success rate, even surpassing human performance of 72.4%.
The model also integrates the programming capabilities of GPT-5.3-Codex, while simultaneously improving work with tools, software environments, and professional tasks in spreadsheets, presentations, and documents.
GPT-5.4 is claimed to bring advancements in reasoning, coding, and agent workflows within a single, unified model.
On the SWE-Bench Pro benchmark, which tests real-world software engineering tasks, GPT-5.4 scored 57.7%, outperforming GPT-5.3-Codex (56.8%) and GPT-5.2 (55.6%).
The /fast mode in Codex offers up to 1.5x faster token speeds, significantly improving coding efficiency.

OpenAI calls GPT-5.4 "the most capable and efficient frontier model for professional work."
The model is specifically designed to excel at the types of work professionals do every day, such as building financial models, editing presentations, drafting legal documents, and managing complex spreadsheets.
In a spreadsheet modeling task test designed for junior investment banking analysts, GPT-5.4 scored 87.5%, up from 68.4% for GPT-5.2.
In a test of its ability to generate knowledge work across 44 occupations, GPT-5.4 matched or outperformed industry professionals in 83% of comparisons.
Improved Accuracy and a Large Context Window
Individual GPT-5.4 responses were 33% less likely to contain errors than those from GPT-5.2, and the new model was 18% less likely to make errors overall.
OpenAI also stated that hallucinations were less likely to occur with GPT-5.4, making it more reliable for critical tasks.

Furthermore, GPT-5.4 supports a context window of up to 1 million tokens (922K input, 128K output).
This represents a 2.5-fold increase over the 400K tokens of GPT-5.3-Codex, enabling the analysis of entire codebases, long document collections, or extended agent trajectories in a single request.
On the MMMU-Pro test, which tests visual comprehension and reasoning, GPT-5.4 achieved an 81.2% success rate.
While GPT-5.4 is more expensive per token than GPT-5.2, its higher token efficiency can reduce overall costs for many tasks.
The API price for GPT-5.4 is $2.50 per million input tokens and $15.00 per million output tokens.
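At those rates, per-request cost is simple arithmetic: multiply input and output token counts by their respective per-million prices. A minimal sketch in Python, using only the prices quoted in the article (the model name and rates are the article's claims, not verified against OpenAI's official pricing page):

```python
# Per-token rates derived from the article's quoted GPT-5.4 API pricing:
# $2.50 per million input tokens, $15.00 per million output tokens.
INPUT_RATE = 2.50 / 1_000_000    # USD per input token
OUTPUT_RATE = 15.00 / 1_000_000  # USD per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one API request at the quoted rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a request using the full quoted context window
# (922K input tokens, 128K output tokens).
full_window = estimate_cost(922_000, 128_000)
print(f"${full_window:.2f}")
```

So a single request that saturates the quoted 1M-token window would cost roughly $4 at these rates, which is where the article's point about token efficiency offsetting the higher per-token price becomes relevant.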