OpenAI Debuts Fast Codex-Spark on Cerebras

Image © Ars Technica
OpenAI releases GPT-5.3-Codex-Spark on Cerebras hardware, the company's first production model to run on non-Nvidia chips. The coding model delivers about 1,000 tokens per second, roughly 15 times faster than its predecessor, and launches in a text-only mode with a 128,000-token context window. Spark is currently a research preview for ChatGPT Pro subscribers, with API access limited to select design partners.

OpenAI has rolled out GPT-5.3-Codex-Spark on Cerebras hardware, making it the company's first production AI model to run on non-Nvidia chips. Spark targets coding tasks and delivers about 1,000 tokens per second, a pace OpenAI says is roughly 15 times faster than its predecessor. The model ships with a 128,000-token context window and, at launch, handles text-only output.

“Cerebras has been a great engineering partner, and we’re excited about adding fast inference as a new platform capability,” said Sachin Katti, OpenAI’s head of compute.

Codex-Spark is offered as a research preview to ChatGPT Pro subscribers ($200/month) via the Codex app, a command-line interface, and a VS Code extension, with API access rolling out to select design partners. Spark is designed specifically for coding use cases and is positioned for speed rather than the broader, agentic tasks handled by the full GPT-5.3-Codex model.
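For teams that do end up with API access, the call pattern would presumably follow OpenAI's existing chat-completions interface. The sketch below is an assumption rather than documented usage: the model identifier "gpt-5.3-codex-spark" is hypothetical, since OpenAI has only announced API access for select design partners and has not published an API model name.

    # Hypothetical sketch: streaming a response from Codex-Spark via the OpenAI Python SDK.
    # The model identifier "gpt-5.3-codex-spark" is an assumption; the article only says
    # API access is rolling out to select design partners and does not name the API model.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    stream = client.chat.completions.create(
        model="gpt-5.3-codex-spark",  # hypothetical identifier
        messages=[
            {"role": "system", "content": "You are a fast coding assistant."},
            {"role": "user", "content": "Write a Python function that parses an IPv4 CIDR string."},
        ],
        stream=True,  # streaming is where a ~1,000 token/s model is most noticeable
    )

    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:
            print(delta, end="", flush=True)

Streaming is used here because the headline feature is raw generation speed; with a non-streaming call, the only visible difference would be a shorter wait before the full response arrives.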

OpenAI describes Spark as a text-only model tuned for speed, while the full GPT-5.3-Codex aims to handle heavier, more complex coding tasks. Spark’s 128k-token context window makes it well-suited for rapid prototyping and iterative coding workflows, though it may sacrifice some depth of knowledge compared with the larger model.

On SWE-Bench Pro and Terminal-Bench 2.0, Spark reportedly outpaces the older GPT-5.1-Codex-mini, completing tasks in a fraction of the time; these figures come from OpenAI and have not been independently validated. By contrast, independent benchmarks on Nvidia hardware show other OpenAI models topping out well below Spark's reported speed, with throughput in the low hundreds of tokens per second for models such as GPT-4o and its variants.
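Taking the quoted figures at face value, the practical difference is easy to put in concrete terms. The back-of-the-envelope calculation below assumes the reported ~1,000 tokens per second for Spark, a 200 token-per-second stand-in for the "low hundreds" cited on Nvidia hardware, and the "roughly 15x" claim for the predecessor; none of these are independent measurements.

    # Back-of-the-envelope comparison using the throughput figures quoted in the article.
    spark_tps = 1_000                  # reported tokens/sec for GPT-5.3-Codex-Spark on Cerebras
    baseline_tps = 200                 # assumed midpoint of the "low hundreds" cited for Nvidia-hosted models
    predecessor_tps = spark_tps / 15   # implied by the "roughly 15x faster" claim (~67 tokens/sec)

    response_tokens = 2_000            # a medium-sized code file or patch

    for name, tps in [("Spark", spark_tps),
                      ("Nvidia baseline", baseline_tps),
                      ("Predecessor", predecessor_tps)]:
        print(f"{name:>16}: {response_tokens / tps:5.1f} s to generate {response_tokens} tokens")

Under those assumptions, a 2,000-token response takes about 2 seconds on Spark versus roughly 10 seconds on the Nvidia baseline and about 30 seconds on the predecessor.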

Source: Ars Technica
