Codex is now in the ChatGPT mobile app
180 points
6 hours ago
| 24 comments
| openai.com
Alifatisk
4 hours ago
[-]
What's crazier is that Codex is free. I thought I had to pay to even try it out, but nope, you can use the desktop app or CLI for free; it's apparently included in the free plan. You just have to sign in to your ChatGPT account.

Of course I am aware that the caveat here is that all my interactions are used for training, but I'm fine with that. Even Qwen CLI discontinued its free plan.

reply
orionsbelt
34 minutes ago
[-]
First hit is free… got to get you hooked.
reply
beering
15 minutes ago
[-]
Can’t you just turn off training on your data in the settings?
reply
firesteelrain
53 minutes ago
[-]
How much better is it than Claude? I have both but Claude sucks up so many tokens.
reply
bobbylarrybobby
41 minutes ago
[-]
5.5 is absolutely comparable to opus 4.7 (both on highest effort), maybe even better. It generally seems less lazy and faster, and it writes code closer to what I'd write. The only downside is that for very, very long tasks it can kind of lose track of the goal. For tasks under ten minutes I'll go with Codex every time.
reply
yieldcrv
37 minutes ago
[-]
Less gibliterrating and more doing

Very fast

reply
thorum
2 hours ago
[-]
I was really unimpressed by the free Codex (for Node.js/React dev). I think it must be using a less powerful model, or they're limiting it in some other way.
reply
jwilliams
2 hours ago
[-]
Are you specifically pointing at a different experience between free + paid? Or just that the free version is unimpressive?

I'm using paid on TypeScript and it's genuinely terrific. Subjectively I think it has the edge over Opus.

I'd be surprised if OpenAI is hamstringing the free version. That would seem crazy from a GTM PoV. If anything the labs seem to throttle the heavy paid users.

reply
fragmede
2 hours ago
[-]
Yes, the free version doesn't have access to the same models that the paid does.
reply
debian3
1 hour ago
[-]
You have access to 5.5 xhigh on free. Which model is missing, except the 5.3 that runs on Cerebras?
reply
wahnfrieden
1 hour ago
[-]
It's only missing the trash models. Likely a user skill issue.
reply
throw03172019
54 minutes ago
[-]
The free version of ChatGPT is definitely worse as well. My SO uses the free version and I can tell it's a significant downgrade.
reply
wahnfrieden
2 hours ago
[-]
Post your chat session
reply
ssl-3
1 hour ago
[-]
Can Codex chats be shared? (This is a genuine question; so far, I've only used Codex in CLI on Linux.)
reply
wahnfrieden
1 hour ago
[-]
Via a JSONL file
reply
dakolli
1 hour ago
[-]
I'm unimpressed by all LLMs, and especially unimpressed by the people claiming to be impressed by them.
reply
Rover222
4 hours ago
[-]
I think it's free for about 2 useful requests and then you have to upgrade or wait?
reply
melagonster
1 hour ago
[-]
Switching to GPT 5.4-mini increases the number of requests you can make for free.
reply
osiris970
3 hours ago
[-]
So basically a $20 Claude plan lmao
reply
replwoacause
2 hours ago
[-]
I stopped using my Claude subscription because the limits became so prohibitive. I'm back to ChatGPT and Codex full time and have been pretty happy. I miss the tone/writing style of Claude, but I don't miss the frustration of being told I've reached my plan limits in a comically short amount of time.
reply
dmd
1 hour ago
[-]
Using these prompts/steering[0], and setting Base style to Friendly, Warm to More, Enthusiastic to Default, and Headers, Lists, and Emoji to Less, I have found I can get gpt-5.5 about ... 80% of the way to writing as non-annoyingly as Claude. And it's so much faster and has such higher limits that it's worth it for me.

I also put together this ridiculous thing[1] because I missed the font and color scheme of Claude.

[0] https://gist.githubusercontent.com/dmd/91e9ca98b2c252a185e8e...

[1] https://github.com/dmd/aimpostor

reply
firesteelrain
47 minutes ago
[-]
How do you fit that entire prompt in the customized instructions?
reply
dmd
34 minutes ago
[-]
Some of it is in my customized instructions, and some of it I fed in a piece at a time, saying “remember this please:” so it goes into Memories.

I'm not entirely clear on the mechanism by which memories make it into context, so it's possible some of it isn't there all the time, but it does seem to be working reasonably.

Again, it's not as good as Claude when it comes to writing "not like an AI". But it's significantly better than it was.

reply
replwoacause
1 hour ago
[-]
Thanks, I’ll give those a try!
reply
dmd
1 hour ago
[-]
FYI I'm actively working on aimpostor, so check back in a couple days for some quality improvements. (I'm definitely not going to bother with a Sparkle updater or anything like that.)
reply
Razengan
2 hours ago
[-]
On Codex I've run into limits maybe two times in three months, after doing several "upgrade this experimental game to my latest shared framework" passes on 5.5 Extra High.
reply
ssl-3
1 hour ago
[-]
On which plan?

I can go through a 5-hour limit with a $20/mo Plus subscription in a few minutes with 5.5 Extra High. This causes me to reserve the latest/best rev for the harder problems.

5.5 really does seem far superior to 5.4, but it's also very expensive to run: the gas gauge moves fast. It's not clear whether 5.5 will cost less to get a problem solved quickly, or whether a bunch of automatic iterations of 5.4 will solve it less expensively. Both are often frustrating to me on the $20 plan.

(Also: Are you sure you're seeing it right? 5.5 has been in the wild for less than a month, so far. https://openai.com/index/introducing-gpt-5-5/ )

reply
breatheoften
30 minutes ago
[-]
This is exactly what I've been wanting -- I had previously thought about using one of the hackish apps that try to deliver this experience, or spinning up something myself, but integrating it directly is definitely the right way to provide the best system and product experience -- and it seems to work out of the box exactly as I'd want!
reply
jumploops
3 hours ago
[-]
I’ve been using Codex from my phone for the past couple of months (through a tunnel, not this app).

I was initially quite excited, but I’ve found the results are less than great compared to being at a keyboard.

Something about the smaller screen size and/or lack of keyboard causes me to direct the agent less, which in turn creates more tech debt/code churn/etc.

Maybe I'm just showing my age, and I should practice voice dictation or something more, but my thoughts flow faster and more clearly on a keyboard (fewer ums).

reply
redanddead
5 minutes ago
[-]
The main thing is functionality; you can always work around the ergonomics.
reply
cm2012
23 minutes ago
[-]
Wispr Flow cuts out the ums. I love it.
reply
aiscoming
2 hours ago
[-]
The ums are exactly the sign that you speak much faster than you type, so you need a pause for your thoughts to catch up.
reply
keyle
2 hours ago
[-]
I'm not sure I follow: you develop code on a remote machine by speaking to your phone, and you're unimpressed by the result?
reply
selcuka
1 hour ago
[-]
They are unimpressed by their (current) ability to use it, not the technology.
reply
fowlie
3 hours ago
[-]
I've been trying voxtype (using whisper models) lately, and to my surprise all my ums are filtered out. It's really good now actually!
reply
esperent
1 hour ago
[-]
I don't see any way to use that on a phone.
reply
boodleboodle
12 minutes ago
[-]
Could the Codex CLI get this support also? I'm sure a lot of us are running remote Linux machines with Nvidia GPUs, with the Codex CLI on them.
reply
beering
8 minutes ago
[-]
I think this is a thing; you may need to upgrade your Codex CLI.
reply
reassess_blind
3 hours ago
[-]
Is there a native way to work remotely with Claude/Codex on a local folder or git repo on your main machine without having to connect it to GitHub? For creating apps for personal use I’d rather just keep the files local.

Edit: Running into issues setting it up on Windows. There's no "/remote-control" command in the CLI, so I installed the Windows Codex app. Then I updated the iOS app which now has the "Codex" feature in the sidebar, which should allow remote access to the Windows machine's instance - except it doesn't connect. The iOS app shows my desktop's hostname, so it knows there's an instance there, but refuses to connect. Issues like this would persuade a lot of folks to switch back to Claude.

reply
barrkel
2 hours ago
[-]
This is what /remote-control does in Claude Code, once it's running on your main machine. You can open it up in the phone app.
reply
wahnfrieden
2 hours ago
[-]
That’s this announcement.
reply
reassess_blind
1 hour ago
[-]
I ask because I tried the other week to use /remote-control in Claude, and it prompted me to connect a GitHub repo with no local alternative. Things may have changed since then.

My experience today with the new Codex remote control has been that it doesn't connect at all.

reply
iamjs
3 hours ago
[-]
I think the `/remote-control` feature does this, if I understand you correctly.
reply
DonsDiscountGas
2 hours ago
[-]
It's supposed to. I've always found it buggy and unreliable, but maybe that's just me. (This command exists in Claude, btw; not sure about Codex.)
reply
rovr138
2 hours ago
[-]
Looks like Codex has had it too since last week: https://github.com/openai/codex/releases/tag/rust-v0.130.0
reply
rovr138
2 hours ago
[-]
You can also connect remotely: Tailscale to reach your network/machine, then SSH to log in, then tmux to persist the session even if you log out.
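
Roughly, as a sketch (hostnames and session names are placeholders, and this assumes Tailscale is already set up on both ends):

    tailscale up                 # join your tailnet on the dev machine (the phone uses the Tailscale app)
    ssh you@dev-box              # from the phone's terminal app, over the tailnet
    tmux new -A -s codex         # attach to, or create, a persistent session
    codex                        # run the agent inside tmux so it survives disconnects

Next time you connect, the same `tmux new -A -s codex` drops you back into the running session.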
reply
maille
2 hours ago
[-]
Does it work on Windows? And how do you then remote in?
reply
Salgat
2 hours ago
[-]
I wish Codex supported this; I use it all the time with Claude.
reply
vohk
4 hours ago
[-]
Dang, I thought this was going to be integration for Codex Cloud, not the (still not available for Linux) Codex App. Not even Codex CLI, alas. You can still access the Cloud option from a mobile browser well enough, but I prefer an app UI for poking at things on the go.
reply
beering
9 minutes ago
[-]
Codex Cloud has been in the ChatGPT app for quite some time now. If you click out of the new dialogs, you can access your cloud threads.
reply
tekacs
4 hours ago
[-]
You can do this from the CLI - `codex remote-control` works on Linux (I have no affiliation, just something I noticed).

They might just not have cut a new build yet today. It 'works' on master, but the mobile app thinks your build is outdated (v0.0.0) if you build from master without overriding the version, so it's probably easiest to wait until they cut a build if they haven't.

reply
embedding-shape
3 hours ago
[-]
> You can do this from the CLI - `codex remote-control` works on Linux (I have no affiliation, just something I noticed).

Woah, hadn't seen this before!

Off-topic: what kind of compile times are people seeing for codex-rs in openai/codex? Even my very beefy computer takes like 30 minutes to compile it in release mode, which makes me wonder why it's so slow and how this TUI got so large. But then I remember: agents like to write a lot of code, and compilers get slower when they have to compile a lot of code :)

reply
tekacs
2 hours ago
[-]
Try turning off LTO. Their default codex-rs/Cargo.toml uses `lto = "fat"`, which is... expensive and slow and... you really really don't need it for a local build that you're not distributing.

In my experience, although the build is a little slow, it's that LTO step that takes a million years.
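
For a one-off local build, something like this should skip the fat-LTO step without editing the repo's Cargo.toml (a sketch, assuming a cargo recent enough to support --config overrides):

    # override the release profile's LTO setting for this invocation only
    cargo build --release --config profile.release.lto=false

A plain debug build (`cargo build`) also avoids it entirely, at the cost of a slower binary.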

reply
vohk
3 hours ago
[-]
Oh, that's promising, thanks! I've just been using the npm version.
reply
asadm
4 hours ago
[-]
Thanks. I don't use the app, so this is cool.
reply
miohtama
2 hours ago
[-]
I have been using Omnara for some months now, on desktop and mobile. It's a web/mobile remote for Claude and Codex.

I can do some tasks on mobile, especially if they're follow-ups or steering only, which greatly increases productivity since you can keep working while in transit, etc.

reply
impulser_
2 hours ago
[-]
Say what you want about OpenAI, but their software is actually pretty damn good, especially compared to Anthropic and Google. Anthropic is just sloppy, and Google just doesn't live on this planet.

Both of the Codex apps are very good.

I tried this out and it works significantly better than Claude's remote control. In fact, the first few times I tried Claude's remote control it didn't even work, and to this day it's very buggy.

reply
jsemrau
45 minutes ago
[-]
I don't understand OpenAI's product strategy.
reply
breatheoften
32 minutes ago
[-]
This is super nice!!!!!!
reply
Loranubi
32 minutes ago
[-]
macOS only so far. "Windows is coming soon"
reply
iridione
3 hours ago
[-]
This is neat! Now I'm curious: what's left to innovate in the coding agent space? Sure, there are the usual suspects like maintenance, security, reliability, and other scalability improvements, and it looks like they will be addressed in the next year or two.
reply
thornewolf
3 hours ago
[-]
There is something "wrong" with the UX that is hard to pin down. These things generate even text summaries more rapidly than I can read them. I need a better method for dumping info into my brain, plus dynamic control (if necessary).
reply
jpalomaki
1 hour ago
[-]
Tell it to create HTML summaries with diagrams and a sidebar for navigation.

Or ask Codex to create an image that explains xyz.

reply
ssl-3
2 hours ago
[-]
When I take time to read all of the output, I often find that it's mostly noise. I don't like noise so I usually don't bother.

But a person can use subagents, if they want, to filter that down. This burns tokens in a big hurry, but I think subagents can be arbitrary local commands (e.g., a local LLM).

Or, you know: Just slow down. :) It doesn't always have to be a race, does it?

reply
deadbabe
1 hour ago
[-]
Agent farms. Have agents churn out tons of random high-fidelity variations of the same app or feature around the clock from some vague ideas, then try each one to see which you like best and can productize, skipping the need for iterative prompts.
reply
asadm
4 hours ago
[-]
I use Termius on my phone to remote in and make the agent do stuff while I chill or am on the road. This seems useful too.
reply
ahmadyan
2 hours ago
[-]
I'm not sure if I'm hallucinating, but I swear I had Codex in the ChatGPT app a long time ago (like the original Codex on the web).

They added some new stuff, like remote control of wherever the desktop Codex app is running, but these companies need to work much harder on their press releases.

reply
wahnfrieden
2 hours ago
[-]
That was Codex Cloud. Not comparable.
reply
schnitzelstoat
4 hours ago
[-]
This is really useful for when you just need to approve plans or make small decisions.
reply
tekacs
4 hours ago
[-]
It's refreshing that unlike Anthropic's Remote Control, this actually... works.

Feels like a testament to the value in taking time and doing it properly.

Now if only codex got its 1M token context window back.

---

Edit: Hmmm. Maybe I spoke too soon. Sigh. Definitely _more_ reliable by far overall, but I still have queued messages with responses on my phone that don't show up on my computer, and responses that don't show up on my phone.

Edit 2: New threads created from my phone seem to stall out a little, but ones that are already underway are behaving reasonably well.

reply
20kleagues
4 hours ago
[-]
Out of curiosity, what issues did you face with remote control on claude? I use it daily and it seems to work pretty well (bar the issues when my Mac would sleep and then the session would disconnect, but that's an issue on my end).
reply
hamza_q_
50 minutes ago
[-]
I made a menu bar app you may find useful for preventing MacBook sleep, even when the lid is closed:

https://github.com/narcotic-sh/modafinil

reply
RayVR
4 hours ago
[-]
My own experience has been that it works for about five minutes before it just disconnects or hangs. I’ve never been able to use it successfully.
reply
tekacs
4 hours ago
[-]
Myriad issues, to be honest. I find it to just constantly be in a 'torn' state, the UI is very mushy on mobile with a lot of the desktop affordances missing, and it's distinctly less useful when you can't edit, rewind, start a new thread, etc.
reply
sbinnee
1 hour ago
[-]
I don't like this direction. From an accessibility standpoint, sure, it's good. But Codex is a coding product, and I'm increasingly concerned about the lack of reviewing practice. I doubt that a mobile app is good for reviewing code changes.

> Stay connected to active work from anywhere

... (and anytime because it's on your phone). No thanks.

reply
fHr
3 hours ago
[-]
Rust and open-source W
reply
Razengan
2 hours ago
[-]
Codex has been great in the last 3-4 months I've been using it, almost exclusively to review existing GDScript code, and this was the feature I wanted most, because with gamedev you get the best ideas when you're out and about or in bed :)

Claude, on the other hand, has been jank all around, from the UX to the UI to the AI itself, so it's baffling how much more popular it is here on HN: https://i.imgur.com/jYawPDY.png

Sadly this remote control feature doesn't seem to support Mac to Mac yet? I love the MacBook Neo as a "thin client" for AI and keep the MacBook Pro at home/hotel, and it would be nice to share Codex desktop sessions (without SSH → resume link).

reply
cyanydeez
2 hours ago
[-]
opencode behind an nginx proxy with a standard user/password is sufficiently powerful. You can also upgrade to https://docs.linuxserver.io/images/docker-code-server/ and run any VS Code plugins; opencode's plugin is pretty rudimentary, but Cline has been making a lot of strides.

You can run your local LLM and just connect the docker containers. I'm paranoid about being disconnected from the LLM, so I never run any of this on the same machine, which makes it important to orchestrate a docker-compose file that provides the necessary services.

I'm still trying to find a good remote file system to loop into the setup for improved switching between the CLI and these web containers.
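
For the code-server piece, a minimal sketch along the lines of that linuxserver.io image's documented usage (the password and host paths are placeholders):

    docker run -d --name=code-server \
      -e PUID=1000 -e PGID=1000 -e TZ=Etc/UTC \
      -e PASSWORD=changeme \
      -e DEFAULT_WORKSPACE=/config/workspace \
      -p 8443:8443 \
      -v ~/code-server/config:/config \
      -v ~/projects:/config/workspace \
      --restart unless-stopped \
      lscr.io/linuxserver/code-server:latest

Then put the nginx proxy (basic auth, TLS) in front of port 8443 as described above.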

reply
Squab
3 hours ago
[-]
Friends, you don't always have to be productive. Leave the agent on the computer and take care of yourself.
reply
jorl17
3 hours ago
[-]
For many people, that's exactly why this is useful: less time on the computer, more time doing other things and occasionally checking in.

In those scenarios, the goal is not "work at any time" but to "be anywhere at any time", or, rather, to "be able to work from anywhere, doing anything".

Sort of... I guess.

reply
stavros
4 hours ago
[-]
The best way I've found to work with LLMs is another OpenAI project, Symphony (which I implemented for Linear/GitHub and OpenCode[0]).

It integrates with your issue tracker and makes the tracker the UI for the LLM. It also clones the repo for every ticket, and can set up fixtures/etc. I can work on multiple items at a time, which is fantastic because otherwise you have to wait for the LLMs a lot.

[0] https://github.com/skorokithakis/symphony

reply
fa3ax
38 minutes ago
[-]
They needed to announce something after the Anthropic slop rewrite of Bun.

In an ideal world they would allocate 50% of compute to finding errors in that rewrite and publish how bad Claude is, but that would undermine confidence in slop in general, so that is not going to happen.

reply
mv4
4 hours ago
[-]
Can someone recommend an IDE that can be used with a self-hosted model (via an OpenAI-compatible API or similar)?
reply
aiscoming
2 hours ago
[-]
VS Code supports local models (bring your own key/model).

You need a model server: Ollama, llama.cpp, or LM Studio.
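
As one hedged example of the model-server part, with Ollama (the model name is just an illustration; llama.cpp's server and LM Studio expose similar OpenAI-compatible endpoints):

    ollama serve &                            # start the local server, if it isn't already running as a service
    ollama pull qwen2.5-coder                 # fetch a code-oriented model
    curl http://localhost:11434/v1/models     # OpenAI-compatible endpoint to point the editor at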

reply
no-name-here
1 hour ago
[-]
> bring your own key

Do you mean support for OpenAI-compatible API URLs in Copilot? If so, then you need either VS Code Insiders or a VS Code extension, I believe?

reply
suyash
2 hours ago
[-]
Look up OpenCode
reply