None of the boards worked and I had to just do the project in codex. Opus seemed too busy congratulating itself to realize it produced gibberish.
What OP is doing here is actually the mitigation: SPICE + scope readout is a verifier the model can't talk its way past. The netlist either simulates or it doesn't, the waveform either matches or it doesn't. That closes the feedback loop the same way tests close it for code.
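To make that verifier loop concrete: a minimal sketch, assuming ngspice is on PATH and that you've already pulled the measured trace into a list of floats. Function names here are made up, not anything from OP's setup:

```python
import subprocess

def run_spice(netlist_path):
    """Run ngspice in batch mode; a nonzero exit (or a singular-matrix
    complaint on stderr) means the netlist failed to simulate at all."""
    proc = subprocess.run(
        ["ngspice", "-b", netlist_path],
        capture_output=True, text=True, timeout=120,
    )
    ok = proc.returncode == 0 and "singular matrix" not in proc.stderr.lower()
    return ok, proc.stdout

def waveform_matches(measured, expected, rel_tol=0.05):
    """The pass/fail check the model can't talk its way past: every
    sample must land within rel_tol of the expected trace."""
    if len(measured) != len(expected):
        return False
    return all(
        abs(m - e) <= rel_tol * max(abs(e), 1e-12)
        for m, e in zip(measured, expected)
    )
```

The point is that both checks return a boolean, not prose, so there's nothing for the agent to reinterpret.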
The failure mode that remains, in my experience, is a layer down: when the verifier itself errors out (SPICE convergence failure, missing model card, wrong .include path), the agent burns turns "reasoning" about environment errors it has seen a hundred times. That's where most of the token budget actually goes, not the design work.
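One cheap mitigation for the missing-model-card / wrong-path class of those errors is a pre-flight check before SPICE ever runs, so the agent gets one clear message instead of a cryptic simulator dump. A rough sketch (the `preflight` name is mine, and the regex only covers the simple `.include`/`.lib` forms):

```python
import re
from pathlib import Path

def preflight(netlist_path):
    """Resolve .include / .lib references relative to the netlist and
    report any that don't exist on disk, before invoking SPICE."""
    base = Path(netlist_path).parent
    missing = []
    for line in Path(netlist_path).read_text().splitlines():
        m = re.match(r'\s*\.(include|lib)\s+"?([^"\s]+)', line, re.IGNORECASE)
        if m:
            ref = Path(m.group(2))
            if not (ref if ref.is_absolute() else base / ref).exists():
                missing.append(m.group(2))
    return missing  # empty list means the environment looks sane
```

Failing fast here is what keeps the token budget on the design work instead of the plumbing.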
Did the model itself do that? Was it a paste error?
I can’t recall either of them using a wrong word like that for some time prior to this month.
I wouldn't really ascribe it to any "attempt to seem more human" when "nondeterministic machine trained on lots of dirty data" is right there.
I also don’t want to pretend there is no incentive for AI to seem more human by including the occasional easily recognized error.
Would be good to see the prompt out of morbid curiosity
--courtesy for all the LLM pushers so they don't have to bother commenting on this one
Curious how spicelib-mcp handles models that aren't in the bundled library. Do you pass the .lib path as a tool arg, or does the server own a registry?
The thing that bit me hardest wasn't architectural though, it was a hardcoded 60-second tool call timeout in the MCP SDK used by Claude Desktop. app.asar confirms it — no config knob to raise it. For any long-running tool (mine: extracting and summarizing a 50-page PDF) the only option is detached spawn: Phase 1 kicks off work and returns "queued" within 60s, Phase 2 runs fire-and-forget and writes results to disk for a later kioku_list call to pick up.
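For anyone who wants to copy the pattern, here's a minimal sketch of the two-phase detached spawn, assuming a Python server. The tool names, `worker.py`, and the results directory are all made up for illustration, and a generic list call stands in for kioku_list:

```python
import subprocess
import sys
import uuid
from pathlib import Path

RESULTS_DIR = Path("/tmp/mcp_results")  # hypothetical drop-box a later list call reads

def start_extract(pdf_path: str) -> dict:
    """Phase 1: return well inside the 60 s window. The heavy work is
    handed to a detached child that outlives this tool call."""
    job_id = uuid.uuid4().hex
    RESULTS_DIR.mkdir(parents=True, exist_ok=True)
    subprocess.Popen(
        [sys.executable, "worker.py", pdf_path, str(RESULTS_DIR / f"{job_id}.json")],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
        start_new_session=True,  # detach: survives the parent call ending
    )
    return {"status": "queued", "job_id": job_id}

def list_results() -> list:
    """Phase 2: a later tool call picks up whatever the worker wrote to disk."""
    return sorted(p.name for p in RESULTS_DIR.glob("*.json"))
```

`start_new_session=True` is the important bit: without it the child can get reaped when the SDK kills the timed-out call.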
If your server ever does work that might exceed ~45 seconds on Desktop, worth designing that in early. Claude Code's CLI doesn't have this limit, but Desktop users will hit it.
waiting for FPAA to get better so we can vibecode analog circuits
https://www.eetimes.com/podcasts/making-analog-chip-designs-...
But in my short two years in the analog IC design industry, I have been so divorced from the actual silicon that I've rarely had a chance to go into the lab and probe around the teeny tiny block I worked on in the complex labyrinth of the SoC. I don’t wish for it (I learnt the hard way to be careful what you wish for; in this case, if I’m in the lab debugging something in silicon, it means something terrible has happened to what I worked on and it might have cost the company $200k or more), but someday soon I will get into the lab just to play around with the fancy ass oscilloscope.
In the meantime, I did realize the invaluable power of having a Python frontend API for querying basic details of your devices (Python and not SKILL/Lisp, since Python works with pretty much any AI and is very well supported), and AI has been okay-ish with it. I feel AI would be a good aid in actual circuit design if it understood the topology of the circuit, which at this point I'm tempted to say might require something akin to an AST, but for SPICE. However, AI has been awesome at regexes and scripting, which is also the meh and boring part of the circuit design process.
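For a sense of what "AST but for SPICE" could mean in the simplest case: a toy parse that maps elements to their nets, which is already enough to answer topology questions like "what shares this node". This is a sketch under heavy assumptions (two-terminal elements only, no continuation lines, no .subckt, no multi-terminal devices), and `parse_netlist` is a made-up name:

```python
from collections import defaultdict

def parse_netlist(text):
    """Toy SPICE parse: map element name -> pin nets, and net -> the
    elements touching it. Skips comments (*) and control cards (.)."""
    elements = {}
    nets = defaultdict(set)
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith(("*", ".")):
            continue
        tok = line.split()
        name, pins = tok[0], tok[1:3]  # two-terminal case only, for the sketch
        elements[name] = pins
        for p in pins:
            nets[p].add(name)
    return elements, dict(nets)
```

Even this crude net-to-element map gives the model something graph-shaped to reason over instead of a flat wall of text.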
Ye'ol poop splatter (Claude) is getting worse, more expensive, and anti-user. Local may be slower, but it's where the future of LLMs is going.