OpenAI Dev Day 2025: Major Agent Platform and Apps Launches
OpenAI Dev Day 2025 is happening today (October 6, 2025) at Fort Mason in San Francisco, with over 1,500 developers attending. The opening keynote, livestreamed at 10:00 AM PT, revealed OpenAI's most ambitious developer platform expansion to date, centered on AgentKit, a complete toolkit for building production-grade AI agents, and on Apps in ChatGPT, which lets developers build interactive applications that run directly inside ChatGPT conversations. The company also announced that ChatGPT now serves 800 million weekly active users (up from 700 million just last month) and processes 6 billion tokens per minute through its API. With Codex moving to general availability, GPT-5 Pro launching in the API, and a massive 6-gigawatt AMD chip partnership announced the same morning, OpenAI is making its boldest push yet to cement developer loyalty amid intensifying competition from Anthropic, Google, and Meta.
This is OpenAI's third annual DevDay, following the inaugural 2023 event and a 2024 multi-city tour. The 2025 event represents a significant scale-up, returning to a single-city format with substantially larger attendance than the 2023 debut. CEO Sam Altman and Head of Developer Experience Romain Huet led the opening keynote, while President Greg Brockman delivered the Developer State of the Union at 3:15 PM PT. The event concludes with a fireside chat between Altman and legendary designer Jony Ive (whose AI device startup OpenAI acquired for $6.4 billion in May 2025) discussing "the craft of building in the age of AI."
AgentKit transforms agent development from prototype to production
AgentKit represents OpenAI’s comprehensive response to the challenge of building reliable AI agents at scale. Sam Altman described it as “all the stuff that we wished we had when we were trying to build our first agents,” and the toolkit includes five integrated components that address the full lifecycle of agent development.
Agent Builder, now in beta, provides a visual drag-and-drop interface for designing agent logic—“like Canva for building agents,” according to the presentation. In a striking live demonstration, engineer Christina Huang built an entire AI workflow and two complete agents in under eight minutes on stage. The tool supports preview runs, inline evaluation configuration, full versioning, and includes pre-built templates to accelerate development. It’s built on top of the Responses API, giving developers both simplicity and power.
ChatKit, now generally available, offers developers a simple, embeddable chat interface they can integrate into their own applications. As Altman explained, it allows developers to “bring your own brand, your own workflows, whatever makes your own product unique” while leveraging OpenAI’s conversational infrastructure. The component includes built-in streaming response handling, thread management, and in-chat experiences—eliminating months of frontend development work.
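ChatKit's own API isn't reproduced here, but the streaming plumbing it abstracts away can be sketched against the Responses API using the official openai Node package; the model name below is chosen purely for illustration.

```typescript
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// A minimal streaming call to the Responses API: this is the kind of
// low-level plumbing (token-by-token deltas, partial rendering) that an
// embeddable component like ChatKit is meant to hide behind a drop-in UI.
async function streamReply(userMessage: string): Promise<void> {
  const stream = await client.responses.create({
    model: "gpt-5", // illustrative model choice
    input: userMessage,
    stream: true,
  });

  for await (const event of stream) {
    // Print text deltas as they arrive, the same way a chat UI would
    // append them to the current assistant message bubble.
    if (event.type === "response.output_text.delta") {
      process.stdout.write(event.delta);
    }
  }
  process.stdout.write("\n");
}

streamReply("Summarize what an embeddable chat component does in one sentence.").catch(console.error);
```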
Evals for Agents, also generally available today, provides sophisticated testing and optimization capabilities. The system offers step-by-step trace grading, datasets for assessing individual agent components, and automated prompt optimization. Notably, developers can run evaluations on external models directly from the OpenAI platform, enabling cross-model performance comparisons without additional infrastructure.
The Connector Registry, beginning its beta rollout, consolidates data sources into a single admin panel with secure connections to internal tools and third-party systems. Pre-built connectors include Dropbox, Google Drive, SharePoint, and Microsoft Teams, with support for third-party Model Context Protocol (MCP) servers. The registry includes an admin control panel for security and permissions management, and it’s available to API, ChatGPT Enterprise, and Education customers with the Global Admin Console.
Guardrails rounds out the suite with an open-source, modular safety layer that can mask or flag personally identifiable information and detect jailbreak attempts. It’s available as a standalone deployment or via a guardrails library for JavaScript, giving developers flexibility in how they implement safety controls.
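OpenAI's Guardrails package itself isn't shown here, but the core idea of a pre-model PII mask is easy to sketch; the function, patterns, and placeholder labels below are invented for illustration and are not the library's API.

```typescript
// Illustrative only: a tiny PII-masking pass of the kind a guardrails layer
// applies before text reaches a model or a log. Not OpenAI's Guardrails API.
const PII_PATTERNS: Array<[RegExp, string]> = [
  [/\b[\w.+-]+@[\w-]+\.[\w.]+\b/g, "[EMAIL]"],   // email addresses
  [/\b(?:\d[ -]?){13,16}\b/g, "[CARD_NUMBER]"],  // card-like digit runs
  [/\b\d{3}-\d{2}-\d{4}\b/g, "[SSN]"],           // US SSN format
];

export function maskPII(text: string): { masked: string; hits: number } {
  let masked = text;
  let hits = 0;
  for (const [pattern, replacement] of PII_PATTERNS) {
    masked = masked.replace(pattern, () => {
      hits += 1;
      return replacement;
    });
  }
  return { masked, hits };
}

// Example: flag and mask before forwarding a user message to an agent.
const { masked, hits } = maskPII("Reach me at jane.doe@example.com, SSN 123-45-6789.");
console.log(hits > 0 ? `Masked ${hits} PII span(s): ${masked}` : masked);
```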
Launch partners showcasing AgentKit capabilities include HubSpot (which improved its Breeze AI assistant), financial platforms Ramp and Klarna, and data enrichment service Clay. These early adopters demonstrate AgentKit’s applicability across industries from fintech to sales automation.
Apps in ChatGPT creates a new generation of conversational applications
The Apps SDK represents perhaps the most revolutionary announcement from DevDay 2025, fundamentally changing how developers can build interactive experiences. Apps appear naturally within ChatGPT conversations and can be invoked by name (“Spotify, make a playlist for my party”) or automatically suggested by ChatGPT when relevant to the conversation. Unlike traditional web apps or plugins, these apps render fully interactive interfaces directly within the chat experience while maintaining natural language interaction.
The Apps SDK, now in preview, is built on the open Model Context Protocol (MCP) standard, allowing developers to connect data sources, trigger actions, and render fully interactive UIs. OpenAI has introduced Developer Mode in ChatGPT specifically for testing apps during development, with comprehensive documentation and example apps available to help developers get started. App submissions and monetization options are coming “later this year,” though specific dates weren’t announced.
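Because the Apps SDK builds on MCP, the server side of an app starts out looking like an ordinary MCP server. The sketch below uses the open-source MCP TypeScript SDK with an invented tool name and schema, and omits the Apps-specific layer that describes how results render inside ChatGPT.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// A bare-bones MCP server exposing one tool. An app built with the Apps SDK
// would additionally declare how its results render inside ChatGPT; the tool
// name and fields here are invented for illustration.
const server = new McpServer({ name: "playlist-demo", version: "0.1.0" });

server.tool(
  "create_playlist",
  { theme: z.string(), trackCount: z.number().int().min(1).max(50) },
  async ({ theme, trackCount }) => ({
    content: [
      {
        type: "text",
        text: `Created a ${trackCount}-track playlist around "${theme}".`,
      },
    ],
  })
);

// Local testing over stdio; a hosted app would use an HTTP transport instead.
const transport = new StdioServerTransport();
await server.connect(transport);
```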
Seven partner apps launched today and are already available to all logged-in ChatGPT users on Free, Go, Plus, and Pro plans (initially outside the EU, starting in English). Booking.com enables travel reservations directly in chat, while Canva allows users to design posters and pitch decks through conversational commands. In a compelling demo, a user asked Canva to create promotional materials for a dog walking business, and the app generated professional designs with iterative refinement—all without leaving the ChatGPT interface. Coursera provides interactive learning experiences where ChatGPT can elaborate on course content while users watch videos. Figma brings design tools into conversations, Expedia handles travel planning, Spotify creates custom playlists, and Zillow enables natural language property searches with integrated maps.
Four additional high-profile apps are coming soon: DoorDash, Instacart, Uber, and AllTrails. The breadth of these partnerships—spanning travel, food delivery, education, music, real estate, and design—signals OpenAI’s ambition to make ChatGPT a central hub for getting things done, not just a conversational interface.
Business, Enterprise, and Education tiers will receive access later this year, with additional languages rolling out progressively. The platform’s open standard approach suggests OpenAI is positioning Apps in ChatGPT as a genuine ecosystem play, potentially creating a new distribution channel for developers comparable to mobile app stores.
Codex graduates to general availability with enterprise features and Slack integration
Codex, OpenAI’s cloud-based software engineering agent, officially moved from research preview to general availability at DevDay 2025. The transition brings significant new capabilities, particularly for enterprise deployment and workflow integration. Codex runs on codex-1 (an optimized version of o3 for software engineering) and GPT-5-Codex, which has already served over 40 trillion tokens in the roughly three weeks since its mid-September launch.
The most immediately useful new feature is Slack integration, allowing teams to tag @Codex in channels or threads. The agent automatically gathers context from conversations, completes requested tasks, and returns results with links to the Codex cloud environment—bringing AI coding assistance directly into team workflows without context switching. This reflects OpenAI’s understanding that developer tools must integrate seamlessly with existing collaboration patterns.
The new Codex SDK (initially TypeScript, with more languages coming) enables developers to embed the Codex agent into custom workflows, tools, and applications. It delivers state-of-the-art performance without extra tuning, supports structured outputs for parsing agent responses, and includes built-in context management—addressing common pain points in agent integration.
For enterprise customers on Business, Education, and Enterprise plans, new admin tools provide environment controls, monitoring and analytics dashboards, and the ability to track usage across CLI, IDE, and web interfaces. Administrators can edit or delete Codex cloud environments, enforce safer defaults for local usage, and monitor code review quality, all critical capabilities for security-conscious organizations. A new GitHub Action enables easy integration into CI/CD pipelines, and developers can use Codex directly in shell environments via the codex exec command.
Usage metrics demonstrate rapid adoption: daily usage has grown 10x since early August, and nearly all OpenAI engineers now use Codex (up from just over half in July). Engineers at OpenAI are merging 70% more pull requests each week, and Codex automatically reviews almost every PR to catch critical issues. Enterprise customers including Duolingo, Vanta, Cisco, and Rakuten have deployed Codex, with Cisco reporting review times reduced by up to 50%.
The cost structure includes codex-mini-latest at $1.50 per million input tokens and $6 per million output tokens; starting October 20, Codex cloud tasks will count toward usage limits. Codex is available in the VS Code, Cursor, and Windsurf IDEs, with GitHub integration for automatic code reviews and availability via GitHub Copilot (where the GPT-5-Codex model is in public preview).
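Taking the quoted codex-mini-latest rates at face value, per-task cost is simple to estimate; the token counts in this sketch are hypothetical.

```typescript
// Cost estimate using the quoted codex-mini-latest rates:
// $1.50 per million input tokens, $6.00 per million output tokens.
const INPUT_RATE_PER_M = 1.5;
const OUTPUT_RATE_PER_M = 6.0;

function estimateCostUSD(inputTokens: number, outputTokens: number): number {
  return (
    (inputTokens / 1_000_000) * INPUT_RATE_PER_M +
    (outputTokens / 1_000_000) * OUTPUT_RATE_PER_M
  );
}

// Hypothetical task: 120k tokens of repository context in, 8k tokens of patch out.
console.log(estimateCostUSD(120_000, 8_000).toFixed(4)); // ≈ 0.2280
```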
New models bring specialized capabilities and dramatic cost reductions
OpenAI unveiled several model updates designed for specific use cases and price points. GPT-5 Pro joins the API lineup as a model aimed at finance, legal, and healthcare applications requiring “high accuracy and depth of reasoning.” Priced at $15 per million input tokens and $120 per million output tokens, it provides extended reasoning capabilities for sensitive domains where accuracy is paramount. This represents OpenAI’s move toward vertical specialization rather than a single general-purpose model for all use cases.
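Calling it should look like any other Responses API request, just with the new model selected; the exact model string below is an assumption, so verify it against the models list before relying on it.

```typescript
import OpenAI from "openai";

const client = new OpenAI();

// "gpt-5-pro" is the assumed identifier for the DevDay release; confirm the
// exact string via the models endpoint before use.
const response = await client.responses.create({
  model: "gpt-5-pro",
  input:
    "Review this indemnification clause and list the obligations it places on the vendor: ...",
});

console.log(response.output_text);
```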
The gpt-realtime mini voice model delivers the same voice quality and expressiveness as the advanced gpt-realtime model but at 70% lower cost, making voice-first applications economically viable for a much broader range of use cases. It supports low-latency streaming interactions for audio and speech, addressing a key barrier to adoption for developers building conversational interfaces.
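Realtime models are consumed over a persistent WebSocket session rather than a one-shot HTTP request. The sketch below follows existing Realtime API conventions, with the mini model's exact identifier assumed rather than confirmed.

```typescript
import WebSocket from "ws";

// Connect a Realtime session. "gpt-realtime-mini" is assumed to be the model
// identifier for the cheaper voice model announced at DevDay; confirm the
// exact string in the models list.
const ws = new WebSocket(
  "wss://api.openai.com/v1/realtime?model=gpt-realtime-mini",
  { headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}` } }
);

ws.on("open", () => {
  // Ask the model to respond; audio and transcript deltas then stream back
  // as server events over the same socket.
  ws.send(
    JSON.stringify({
      type: "response.create",
      response: { instructions: "Greet the caller and ask how you can help." },
    })
  );
});

ws.on("message", (raw) => {
  const event = JSON.parse(raw.toString());
  console.log(event.type); // audio/transcript delta events, response.done, errors, etc.
});
```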
Sora 2, which launched to the public on September 30, is now available in the API in preview. The latest video generation model produces more realistic and physically consistent scenes, with synchronized dialogue and sound effects, greater creative control, and detailed camera direction capabilities. OpenAI demonstrated impressive capabilities, including the ability to “take iPhone view and expand into sweeping, cinematic wide shot” with rich soundscapes and ambient audio. Mattel has already partnered with OpenAI to use Sora 2 for turning sketches into toy concepts, demonstrating real-world commercial applications beyond marketing and advertising use cases.
These model announcements complement the broader GPT-5 family launched in August 2025, which achieved 94.6% on AIME 2025 mathematics problems, 74.9% on SWE-bench Verified coding tasks, and 88% on Aider Polyglot multi-language coding benchmarks. GPT-5 is approximately 45% less likely to hallucinate than GPT-4o and 80% less prone to hallucination than o3 when using thinking mode, addressing one of the most persistent criticisms of large language models.
Platform scale reaches 800 million weekly users as infrastructure expands dramatically
The numbers Sam Altman shared during the keynote reveal OpenAI’s extraordinary growth trajectory. ChatGPT now serves 800 million weekly active users, up from 700 million just one month ago and 100 million in early 2023. The platform hosts 4 million developers (doubled from the previous reporting period) and processes 6 billion tokens per minute through its API—up from 300 million tokens per minute in earlier periods.
To support this explosive growth, OpenAI announced a massive infrastructure partnership with AMD the same morning as DevDay. The deal provides 6 gigawatts of AMD Instinct GPUs over multiple years, with the first gigawatt deployment of AMD Instinct MI450 GPUs scheduled for the second half of 2026. AMD issued OpenAI a warrant for up to 160 million shares (approximately 10% of AMD), with vesting tied to deployment milestones and AMD share price targets. AMD expects the partnership to generate “tens of billions of dollars in revenue,” and AMD stock soared 23-24% on the announcement, adding $63.4 billion in market value.
Greg Brockman emphasized the scale of OpenAI’s compute needs: “We need as much computing power as we can possibly get.” The AMD partnership is explicitly “incremental” to OpenAI’s existing $100 billion, 10-gigawatt partnership with Nvidia announced in September 2025, as well as a roughly $300 billion cloud computing agreement with Oracle. These partnerships support the Stargate initiative, whose five new data center sites add about 7 gigawatts of planned capacity. OpenAI also has agreements with Samsung and SK Hynix for memory chips and a $10 billion custom AI chip deal with Broadcom.
Dr. Lisa Su, AMD’s CEO, framed the partnership as transformational: “This partnership brings the best of AMD and OpenAI together to create a true win-win enabling the world’s most ambitious AI buildout and advancing the entire AI ecosystem.” The deal represents OpenAI’s deliberate diversification strategy to avoid single-vendor dependency while racing to meet insatiable demand for AI compute.
Developer ecosystem expands with comprehensive tooling and partnerships
Beyond the headline announcements, OpenAI unveiled a substantial expansion of developer resources and tools. The platform now provides pre-built templates in Agent Builder, comprehensive documentation for the Apps SDK with example apps, the open-source Codex CLI, GitHub Actions for Codex, and a JavaScript Guardrails library. Developer Mode in ChatGPT allows testing apps before submission, and the Responses API received enhancements to support Agent Builder’s capabilities.
Strategic partnerships announced or highlighted at DevDay span multiple verticals. Beyond the infrastructure deals, product integrations now include Microsoft 365 Copilot, GitHub Copilot, Azure AI Foundry, and planned integration with Apple Intelligence in iOS 26, iPadOS 26, and macOS Tahoe. The acquisition of Jony Ive’s AI device startup “io” for $6.4 billion in May 2025 positions OpenAI for hardware ambitions, with Ive now overseeing “deep creative and design responsibilities across OpenAI.” His fireside chat with Altman closed the event (not livestreamed, but recorded for later release).
Sam Altman’s closing remarks reflected on the rapid pace of change in software development: “We’re watching something significant happen. Software used to take months or years to build. You saw that it can take minutes now to build with AI. You don’t need a huge team. You need a good idea, and you can just sort of bring it to reality faster than ever before.” This vision—of AI dramatically lowering the barrier to software creation—underpins all of OpenAI’s developer-focused announcements.
Event schedule and how to access the content
For developers who couldn’t attend in person, the opening keynote was livestreamed on openai.com/live and the OpenAI YouTube channel. The schedule included:
- 10:00 AM PT: Opening keynote with Sam Altman and Romain Huet (livestreamed)
- 11:15 AM - 2:00 PM PT: Technical sessions including “Context Engineering & Coding Agents with Cursor,” “Orchestrating Agents at Scale,” and sessions on Codex and Sora
- 3:15 PM PT: Developer State of the Union with Greg Brockman and Olivier Godement
- 4:15 PM PT: Closing fireside chat with Sam Altman and Jony Ive
While only the opening keynote was livestreamed, OpenAI confirmed that other sessions will be recorded and posted to YouTube for the broader developer community. The event also featured interactive experiences including “Sora Cinema” (a mini-theater showing AI-generated short films), a “Living Portrait” of Alan Turing that responds to questions, and custom arcade games built with GPT-5.
Conclusion: OpenAI doubles down on developers amid intensifying competition
OpenAI Dev Day 2025 represents the company’s most comprehensive developer platform expansion to date, with announcements spanning the full stack from infrastructure partnerships to end-user applications. The central message is clear: OpenAI is building not just models but a complete ecosystem for AI-powered software development, with AgentKit addressing the prototype-to-production gap and Apps in ChatGPT creating a new distribution channel for AI-native applications.
The timing is strategic. With Anthropic’s Claude gaining traction among developers, Google’s Gemini advancing rapidly, and Meta releasing open-source Llama models, OpenAI faces its most competitive landscape yet. The DevDay announcements—particularly the comprehensive AgentKit suite and the open MCP-based Apps SDK—represent significant investments in developer lock-in through superior tooling rather than just model performance.
Three key developments deserve special attention. First, the Codex general availability with enterprise features and Slack integration transforms it from an experimental tool into a production-ready platform that fits existing workflows. Second, Apps in ChatGPT with its MCP foundation could create a genuine third-party ecosystem, potentially as significant as mobile app stores if it achieves critical mass. Third, the infrastructure partnerships totaling over $400 billion in committed spending signal OpenAI’s determination to avoid compute constraints as a competitive disadvantage.
For developers, the practical implications are substantial: lower costs (70% reduction for voice models), more powerful tools (AgentKit’s integrated suite), new distribution channels (Apps in ChatGPT), and specialized models (GPT-5 Pro for regulated industries). The platform now supports 4 million developers processing 6 billion tokens per minute—a foundation for the next generation of AI-native applications that Altman envisions being built “in minutes” rather than months.