Bottom line is that I am extremely grateful for AI as a teammate. As a solopreneur, even more so. I'm building an application that I know would have taken at least $10-20K to build, but all I'm paying is $60 a month for Cursor Pro+ and my public-facing server. And only $60 because I ran into a Cursor Claude limit.
Buckle up guys and gals, the midwit you always feared has the keys to the tank now...
I'm confident I can do anything with enough time. But I only have so much.
AI is going to enable so many more ideas to come to fruition and a better world because of it!
If someone in their 30s or 40s plans to work the next 5+ years on a project, that's no problem, even if it takes 10+ years in the end.
For those 65 or older, it's a different story…
Are they really late? Has everyone started using agents and paying $200 subscriptions?
Am I the one wrong here or these expressions of "falling behind" are creating weird FOMO in the industry?
EDIT: I see the usefulness of these tools, however I can't estimate how many people use them.
If you rephrase the question as "Are most engineers already using AI?" -- because it transcends the specific modality (agents vs chat vs autocomplete) and $200 subscriptions (because so many tools are available for free) -- signs point to "yes."
Adoption seems to be as high as 85-90% in 2025, but there is a lot of variance in frequency of use:
https://dora.dev/research/2025/
https://survey.stackoverflow.co/2025/
https://newsletter.pragmaticengineer.com/p/the-pragmatic-eng...
If there is FOMO, I'm not sure it's "weird."
If anything, in my small circle the promise is waning a bit, in that even the best models on the planet are still kinda shitty for big-project work. I work as a game dev and have found agents only mildly useful for doing more of what I've already laid out; I only pay for the $100 annual plan with JetBrains and that's plenty. I haven't worked at a big business in a while, but my ex-coworkers are basically the same. A friend only uses chat now because the agents were "entirely useless" for what he was doing.
I'm sure someone is getting use out of them making the 10 billionth Node.js Express API, but not anyone I know.
That would be fine if our value delivery rate were also higher. But it isn't. It seems to actually be getting worse, because projects are more likely to get caught in development hell. I believe the main problem is that poorer collective understanding of generated code, combined with the apparent ease of vibecoding a replacement, leads teams to choose major rewrites over surgical fixes.
For my part, this “Duke Nukem Forever as a Service” factor feels the most intractable. Because it’s not a technology problem, it’s a human psychology problem.
Especially considering that these $200 subscriptions are just the start, because those companies are still mostly operating at a loss.
It's either going to be higher fees or ads pushed into the responses. The last thing I need is my code sprinkled with ads.
At the very least, it can quickly build throwaway productivity enhancing tools.
Some examples from building a small education game:

- I needed to record sound clips for the game. I vibe coded a webapp in <15 mins that had a record button and keyboard shortcuts to progress through the list of clips I needed, output all the audio as over 100 separate files in the folder structure and with the file names I needed, and wrote the ffmpeg script to post-process the files (a rough sketch of that step is below).

- I needed JSON files for the path of each letter. Gemini 3 converted images to JSON, and then Codex built me an interactive editor to tidy up by hand the bits Gemini got wrong.

The quality of the code didn't matter because all I needed was the outputs.
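The post-processing half of a throwaway tool like that can be tiny. Here's a hypothetical sketch in Python; the folder names and the silence-trim filter settings are my own invention, not the author's actual script:

    # Trim leading silence from every recorded clip, mirroring the folder tree.
    import subprocess
    from pathlib import Path

    for wav in Path("recordings").rglob("*.wav"):
        out = Path("processed") / wav.relative_to("recordings")
        out.parent.mkdir(parents=True, exist_ok=True)
        subprocess.run(
            ["ffmpeg", "-y", "-i", str(wav),
             "-af", "silenceremove=start_periods=1:start_threshold=-40dB",
             str(out)],
            check=True,  # fail loudly if ffmpeg errors on a clip
        )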
The final games can be found here: https://www.robinlinacre.com/letter_constellations https://www.robinlinacre.com/bee_letters/ code: https://github.com/robinL/
How long did it take to learn how to use your first IDE effectively? Or git? Or basically any other tool that is the bedrock of software engineering.
AI fools people into thinking it should be really easy to get good results because the interface is so natural. And it can be for simple tasks. But for more complex tasks, you need to learn how to use it well.
They “type” faster than me, but they do not type out correct PowerShell.
Fake modules, out-of-date module versions, fake options, fake expectations of object properties. Debugging what they output makes them a significant slowdown compared to just typing, looking up PowerShell commands manually, and using -help and Get-Help in my terminal.
But again, I haven't forked over money for the versions that cost hundreds of dollars a month. It doesn't seem worth it, even after 3 years. Unless the paid version is 10 times smarter with significantly fewer hallucinations, the quality doesn't seem worth the price.
Tools like GitHub copilot can access the CLI. It can look up commands for you. Whatever you do in the terminal, it can do.
You can encode common instructions and info in AGENTS.md to say how and where to look up this info. You can describe what tools you expect it to use.
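For example, a hypothetical AGENTS.md excerpt aimed at the PowerShell complaints above (the wording here is mine, not any standard):

    ## PowerShell
    - Verify every cmdlet and parameter with Get-Help before using it.
    - Check installed module versions with Get-InstalledModule; never guess a version.
    - Run PSScriptAnalyzer over any script you produce and fix all findings.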
There are MCPs to help hook up other sources of context and info the model can use as well.
These are the things you need to learn to make effective use of the technology. It’s not as easy as going to ChatGPT and asking a question. It just isn’t.
Too many people never get past this low level of knowledge, then blame the tool.
No, the $20/month plans are great for minimal use
> Because every time, without fail, the free ChatGPT, Copilot, Gemini, Mistral, Deepseek whatever chatbots, do not write PowerShell faster than I do.
The exact model matters a lot. It's critical to use the best model available to avoid wasting time.
The free plans generally don't give you the best model available. If they do, they have limited thinking tokens.
ChatGPT won't give you the Codex (programming) model. You have to be in the $20/month plan or a paid trial. I recommend setting it to "High" thinking.
Anthropic won't give you Opus for free, and so on.
You really have to use one of the paid plans or a trial if you want to see the same thing that others are seeing.
I haven't had an issue with a hallucination in many months. They are typically a solved problem if you can use some sort of linter / static analysis tool. You tell the agent to run your tool(s) and fix all the errors. I am not familiar with PowerShell at all, but a quick GPT tells me that there is PSScriptAnalyzer, which might be good for this.
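For what it's worth, wiring that in would look something like this (assuming PSScriptAnalyzer installs from the PowerShell Gallery as usual; I haven't tried it on the scripts in question):

    Install-Module PSScriptAnalyzer -Scope CurrentUser   # one-time setup
    Invoke-ScriptAnalyzer -Path .\script.ps1             # list rule violations for the agent to fix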
That being said, it is possible that PowerShell is too far off the beaten path and LLMs aren't good at it. Try it again with something like TypeScript - you might change your mind.
> whatever you learn now is going to be invalid and wasteful in 6 months
And I reject that anything you learn today will be invalid. It’ll be a base of knowledge that will help you understand and adopt new tools.
I had ChatGPT Codex (GPT5.2, high reasoning) running on my side project for multiple hours over the last few nights. It created a server deployment for QA and PROD plus client builds. It waited for the builds to complete, got the logs from GitHub Actions, and fixed problems. Only after 4 days of this (around 2-4 hours of active coding each day) did I reach the weekly limit of the ChatGPT Plus plan (23€). Far better value so far.
To be fully honest, it fucked up one Flyway script. I have to fix that myself now :D. I'll write a note in AGENTS.md to never alter existing scripts. But the work otherwise was quite solid, and now my server is properly deployed. If I switched between high reasoning for planning and medium reasoning for coding, I'd get even more usage.
"... brought to you by Costco."
But seriously, I can't help but think that this proliferation of massive numbers of iterations on these models and productizations of the models is an indication that their owners have no idea what they are doing with any of it. They're making variations and throwing them against the wall to see what sticks.
Codex = The model trained specifically for programming tasks. You want this if you're writing code.
GPT5.2 = The current model version. You don't have to think about this; you just use the latest.
High Reasoning = A setting for balancing longer thinking time against quicker answers. It's usually set and forget.
I think for me it's a case of fear of being left behind rather than missing out.
I've been a developer for over 20 years, and the last six months has blown me away with how different everything feels.
This isn't like jQuery hitting the scene, PHP going OO, or one of the many "this is a game changer" experiences I've had in my career before.
This is something else entirely.
There are absolutely maintainability challenges. You can't just tell these tools to build X and expect to get away with not reviewing the output and/or telling it to revise it.
But if you loosen the reins and review finished output rather than sit there and metaphorically look over its shoulder for every edit, the time it takes me to get it to revise its work until the quality is what I'd expect of myself is still a tiny fraction of what it'd take me to do things manually.
The time estimate above includes my manual time spent on reviews and fixes. I expect that time savings to increase, as about half of the time I spend on this project now is time spent improving guardrails and adding agents etc. to refine the work automatically before I even glance at the output.
The biggest lesson for me is that when people are not getting good results, most of the time it seems to me it is when people keep watching every step their agent takes, instead of putting in place a decent agent loop (create a plan for X; for each item on the plan: run tests until it works, review your code and fix any identified issues, repeat until the tests and review pass without any issues) and letting the agent work until it stops before you waste time reviewing the result.
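As a rough illustration, that loop can be as small as the following Python sketch; run_agent and run_tests are hypothetical stand-ins for whatever agent CLI and test runner you actually use:

    def run_agent(prompt: str) -> str:
        """Hypothetical: send a prompt to the coding agent, return its reply."""
        raise NotImplementedError

    def run_tests() -> bool:
        """Hypothetical: run the test suite; True means everything passed."""
        raise NotImplementedError

    plan = run_agent("Create a plan for X, one step per line.")
    for step in plan.splitlines():
        run_agent(f"Implement this step: {step}")
        while True:
            while not run_tests():                    # run tests until they pass
                run_agent("Tests failed; fix the failures.")
            review = run_agent("Review your changes; reply OK if there are no issues.")
            if review.strip() == "OK":                # tests green and review clean
                break
            run_agent(f"Address these review findings: {review}")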
Only when the agent repeatedly fails to do an assigned task adequately do I "slow it down" and have it do things step by step to figure out where it gets stuck / goes wrong. At which point I tell it to revise the agents accordingly, and then have it try again.
It's not cost effective to have expensive humans babysit cheap LLMs, yet a lot of people seem to want to babysit the LLMs.
I basically have two modes
1. "Snipe mode"
I need to solve problem X: I fire up my IDE, start Codex, and begin prompting to find the bug fix. Most of the time I have enough domain context about the code that once it's found and fixed the issue, it's trivial for me to reconcile that it's good code, and I ship it. I can be sniping several targets at any one time.
Most of my day-to-day work is in snipe mode.
2. "Feature mode"
This is where I get agents to build features/apps. I've not used this mode in anger for anything other than toy/side projects, and I would not be happy about the long-term prospects of maintaining anything I've produced.
It's stupidly stupidly fun/addictive and yes satisfying! :)
I rebuilt a game that I used to play when I was 11, which still had a small community of people actively wanting to play it, entirely by vibe coding. It works, it's live, and honestly I've had some of the most rewarding feedback of my career from complete strangers playing it!
I've also built numerous tools for myself and my kids that I'd never have had time to build before, and now I can. Again, the level of reward for building apps that my kids (and their friends) are using is very different from anything I've experienced career-wise.
It doesn't work on mobile, and unless you played it back in the day, the feedback from friends I've introduced it to is that it's got quite the learning curve.
You can see all the horrible vibe coding here ( it's slop, it's utter utter slop, but it's working slop )
https://github.com/battlecity-remastered/battlecity-remaster...
I think ultimately I've succumbed to the fact that writing code is no longer a primary aspect of my job.
Reading/reviewing, and being accountable for, code that something else has written very much is.
I'm also fairly confident that having it write my code is not a productivity boost, at least for production work I'd like to maintain long-term.
No, most programmers I know outside of my own work (friends, family, and old college pals) don't use AI at all. They just don't care.
I personally use Cursor at work and enjoy it quite a bit, but I think the author is maybe at the tail end of _their circle's_ adoption, but not the industry's.
The $20/mo I pay is quite affordable given the ROI.
I could see jumping between various free models.
It also goes very fast if you don't actively manage your context by clearing it frequently for new tasks and keeping key information in a document to reference each session. Claude will eat through context way too fast if you just let it go.
For true vibecoding-style dev where you just prompt the LLM over and over until things are done, I agree that $100 or $200 plans would be necessary though.
Not because I think either way is better, just because personally I work well with AI in the latter capacity and have been considering subscribing to Claude, but don't know how limiting the usage limits are.
EDIT: I also wasn't going to say it, but it's not about the money for me; I just don't want to support any of these companies. I'm happy to waste their resources for my benefit, but I don't lean on it too often.
It's not even SOTA open source anymore, let alone competitive with GPT/Gemini/Grok.
I couldn't use GPT-3 for coding, and DeepSeek is at GPT-3 + CoT levels.
I'm not going to send money every month to billion-dollar companies that capitulate to a goon threatening to annex my country. I accept whatever consequences that has on my programming career.
The $20/month subscriptions go a long way if you're using the LLM as an assistant. Having a developer in the loop to direct, review, and write some of the code is much more token efficient than trying to brute force it by having the LLM try things and rewrite until it looks like what you want.
If you jump to the other end of the spectrum and want to be in the loop as little as possible, the $100/$200 subscriptions start to become necessary.
My primary LLM use case is as a hyper-advanced search. I send the agent off to find specific parts of a big codebase I'm looking for and summarize how it's connected. I can hit the $20/month windowed limits from time to time on big codebases, but usually it's sufficient.
(“Scrape kindle highlights from the kindle webpage, store it in a database, and serve it daily through an email digest”).
No success so far in getting it to do so without a lot of handholding and manually updating the web scraping logic.
It’s become something of a litmus test for me.
So, maybe there is some FOMO, but in my experience it's a lot of snake oil. Also, at work I manage a team of engineers, and like 2 out of 12 clearly submit AI-generated code. The others stopped using it, or just do a lot more wrangling of the output.
It is the very definition of FOMO if there is an entire cult of people telling you that for a year, and yet after a year of hearing about how "everything has changed", there is still not a single example of amazing vibe-coded software capable of replacing any of the real-world software people use on a daily basis. Meanwhile Microsoft is shipping more critical bugs and performance regressions in updates than ever while boasting about 40% of their code being LLM-generated. It is especially strange to cite "Windows as a great example" when 2025 was perhaps one of the worst years I can remember for Windows updates despite, or perhaps because of, LLM adoption.
Azure, Office, Visual Studio, VS Code, and Windows are all shipping faster than ever, but so much of it is unfinished, buggy, incompatible with existing things, etc.
Enshittification is not primarily caused by "we can fix it later", because "we can fix it later" implies that there's something to fix. The changes we've seen in Windows and Google Search and many other products and services are there because that's what makes profit for Microsoft and Google and such, regardless of whether it's good for their users or not.
You won't fix that with AI. Hell, you couldn't even fix Windows with AI. Just because the company is making greedy, user-hostile decisions, it doesn't mean that their software is simple to develop. If you think Windows will somehow get better because of AI, then you're oversimplifying to an astonishing degree.
If you want to script in Rust, xshell (https://docs.rs/xshell/latest/xshell/) is explicitly inspired by dax.
I just got native LSP working this past weekend, and in Sublime it's as little as:

    {
      "clients": {
        "remote-gopls": {
          "command": ["tool", "lsp", "gopls"],
          "enabled": false,
          "selector": "source.go",
        },
      },
    }
From what you built so far, do you think there's any appetite in people paying for this type of tool which lets you spin up infra on demand and gives you all the capabilities built so far? I'm skeptical and I may just release it all as OSS when it gets closer to being v1.0.
(I'm not the author) The easiest way to charge for this kind of software is to make it SaaS, and I think that's pretty gross, especially for a CLI tool.
> I'm skeptical and I may just release it all as OSS
It doesn't have to be one or the other: you could sell the software under libre[1] terms, for example.
It's very easy to get hit with a massive bill due to just leaving instances around.
sudo shutdown +15 (or however many minutes)
when I need a compute instance and don’t want to forget to turn it off. It’s a simple trick that will save you in some cases.
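And if the work runs long, a pending shutdown can be cancelled and re-armed (standard systemd shutdown behavior):

    sudo shutdown -c     # cancel the scheduled shutdown
    sudo shutdown +30    # re-arm with more time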
# Every 5 minutes: if the keepalive file hasn't been touched in 2 hours, power off.
*/5 * * * * [ -f /var/run/keepalive ] && [ $(( $(date +\%s) - $(stat -c \%Y /var/run/keepalive) )) -gt 7200 ] && shutdown -h now
Ollama with qwen3 and starcoder2 are ok.
I'd recommend experimenting with the following models at the moment (e.g. with "open-webui"):

- gpt-oss:20b (fast)
- nemotron-3-nano:30b (good general purpose)
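Trying one is a one-liner once Ollama is installed (model tags as above):

    ollama run gpt-oss:20b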
They don't compare to the large LLMs at the moment, though.
> I personally don’t know how to solve it without wasting a day. So, I spent a day vibecoding my own square wheel.
This worries me. In the case of OP, it seems they were diligent and reviewed everything thoroughly, but I can bet that's not the majority... And pushing to prod an integral piece without fully knowing how it works just terrifies me.
Wait, this is how people vibe code? I thought it was just giving instruction line by line and refining your program. People are really creating a dense, huge spec for their project first?
I have not seen any benefit from AI in programming yet, so maybe I should try it with specs and as autocomplete as well.
Lots of people are using PRD files for this. https://www.atlassian.com/agile/product-management/requireme...
I've been using checklists and asking it to check off items as it works.
Another nice feature of using these specs is that you can give the AI tools multiple kicks at the can and see which one you like the most, or have multiple tools work on competing implementations, or have better tools rebuild them a few months down the line.
So I might have a spec that starts off:
#### Project Setup
- [ ] Create new module structure (`client.py`, `config.py`, `output.py`, `errors.py`)
- [ ] Move `ApiClient` class to `client.py`
- [ ] Add PyYAML dependency to `pyproject.toml`
- [ ] Update package metadata in `pyproject.toml`
And then I just iterate with a prompt like:

    Please continue implementing the software described in the file "dashcli.md". Please implement the software one phase at a time, using the checkboxes to keep track of what has already been implemented. Checkboxes ("[ ]") that are checked ("[X]") are done. When complete, please do a git commit of the changes. Then run the skill "codex-review" to review the changes and address any findings or questions/concerns it raises. When complete, please commit that review changeset. Please make sure to implement tests, and use tests of both the backend and frontend to ensure correctness after making code changes.

Using those as hints, I bet CC would have one-shotted it pretty easily.