Of course it is; I use it every day.
That is how I use it almost every day. "Here's the link to an API doc. Read it, write a new Skill around it, and [solve this problem with it]."
"look into the possibilities for X search the web, do thorough comparison. look on HackerNews both to gather sources and gauge sentiment in the comments"
this yields pretty good results IMO.
If I am using an API/library, I will also ask "is this approach idiomatic? what does the documentation say? look through at least 10 pages online"
Doesn't that assume there are many interchangeable services available on the web which essentially do the same thing?
I can see this would be the case if there were many online services for say compiling C++ code. But for more human-centric services, are there many "replaceable services" out there? An API is not only its syntax, but also its semantics.
I came back to my terminal to find it had written its own tool to decompile the assembly, and successfully completed the task using that info.
> Claude is never on the web reading documentation - as far as I can tell that's not even in its toolkit.
Add the Context7 and grep MCP servers to your installation; it'll change your life (well, it'll make Claude less dumb).
Claude Code v2.1.29
Opus 4.5 · Claude Team
~/stow/tmp
I have a new project
In it, I want to scrape hacker news data.
The project will be built in python.
I want to search first what are the latest projects in python that could help me extract the data. This could be via api or scraping it.
Once you have a plan, ask me 10 questions to help navigate the structure.
I'll help you research the latest Python options for extracting Hacker News data. Let me search for current tools and approaches.
Web Search("Python Hacker News API scraping library 2026")
⎿ Did 1 search in 20s
Web Search("Hacker News official API Python wrapper 2025 2026")
⎿ Did 1 search in 17s
Web Search("best Python library scrape Hacker News data extraction")
⎿ Did 1 search in 20s
Let me get more details on the official Hacker News API and some specific library documentation.
Fetch(https://github.com/HackerNews/API)
⎿ Received 373.8KB (200 OK)
Fetch(https://pypi.org/project/hackernews-python/)
⎿ Received 3KB (200 OK)
Fetch(https://github.com/avinassh/haxor)
⎿ Received 378.1KB (200 OK)
I've gathered comprehensive research on the available options. Let me write up my findings and then ask you the 10 questions.
Updated plan
⎿ /plan to preview
Research Summary
I found several solid options for extracting Hacker News data in Python:
Best Options Found:
1. Official HN API (Firebase-based) - Free, no rate limits, real-time data
- Source: https://github.com/HackerNews/API
2. haxor - Most popular Python wrapper for the official API
- Source: https://github.com/avinassh/haxor
- Install: pip install haxor
3. DIY Scraping - BeautifulSoup + Requests/httpx for data not in the API
- Source: https://www.proxiesapi.com/blog/scraping-the-hacker-news-with-python-and-beautiful.html.php
4. Commercial - https://apify.com/epctex/hackernews-scraper for hosted solutions
---
Now, here are my 10 questions to help define your project structure:
────────────────────────────────────────
← Data Source Data Types Storage Use Case Submit →
Do you want to use the official HN API (recommended, no rate limits) or scrape the website directly?
1. Official API (Recommended)
Use Firebase-based API - reliable, real-time, no rate limits
2. Web Scraping
Scrape HTML directly - more flexible but requires maintenance
3. Both
API as primary, scraping for data not available via API
4. Type something.
────────────────────────────────────────
5. Chat about this
6. Skip interview and plan immediately
Enter to select · Tab/Arrow keys to navigate · Esc to cancel

It's got a couple tools built into it that it can use.
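For what it's worth, the route it recommends really is that simple. A minimal sketch against the Firebase endpoints documented in the HackerNews/API repo, using just requests (the function names are mine):

    import requests

    BASE = "https://hacker-news.firebaseio.com/v0"

    def top_story_ids(limit=10):
        """IDs of the current top stories, best-ranked first."""
        return requests.get(f"{BASE}/topstories.json", timeout=10).json()[:limit]

    def fetch_item(item_id):
        """A single item (story, comment, job, poll) by ID."""
        return requests.get(f"{BASE}/item/{item_id}.json", timeout=10).json()

    for story_id in top_story_ids(5):
        item = fetch_item(story_id)
        print(item["score"], item["title"])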
I filled out the PDF using the Firefox PDF editor, at which point it occurred to me that this is not so different from using an application which has a form for me to enter data into.
Maybe in a few years the government will have a portal where I can submit any of its forms as PDF documents, and it would probably use AI to store the contents of the form in a database.
A PDF form is kind of a universal API, especially when AI can extract and validate the data from it. Of all the API formats I've seen, I think PDF forms are the most human-friendly. Each "API" is defined by the form identifier in the PDF form. It is easy for humans to use, and pretty easy for office clerks to create such forms, especially with the help of AI. I wonder whether this, or something similar, will catch on.
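To make that concrete: pulling the "API call" out of a filled form is already nearly a one-liner. A minimal sketch with pypdf, assuming a standard fillable AcroForm PDF (the filename and field names here are made up):

    from pypdf import PdfReader

    def read_form(path):
        """Map each form field's name to its filled-in value."""
        fields = PdfReader(path).get_fields() or {}
        return {name: field.value for name, field in fields.items()}

    # Hypothetical filled-in government form:
    print(read_form("application.pdf"))
    # e.g. {'form_id': 'GOV-1234', 'applicant_name': 'Jane Doe'}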
If we're at the point where they use AI to make form PDFs, we might as well cut out the middleman and ask the AI to generate a form on a website.
Maybe vibe coders will do this, but I don't expect a business that operates like this to last very long.
If the expectation is that the customer is still going to put the work in to evaluate you, then is there no room for them to build a client, to expose their own internal API?
I can't count how many times I could have made an API that no one would use. I would also have to document it, which I'm exclusively terrible at (I find out later, trying to make sense of my own docs). Now I can generate and test the docs, which changes everything.
I wonder: could an SPA (while still doable) now be the wrong approach? The logic will have to be in the API; why have it twice? It seems offloading work to the client might just add bloat. Could it be that if the website works with [dumb] queries, the AI won't even need to read the docs?
At some point the agents reading your article ARE the agents that will consume those improved docs. The feedback loop becomes fully autonomous.
IOW, a human customer is not the customer your business is going to directly serve, their agent is. What's required from the human is to provide a payment method, I assume, else it's not a real customer, but a free-tier user.
That's what's always missing from these sorts of articles and comments: why is this better?
Related query: how is this really any different than what the W3C has proposed with Hydra[0] or other linked data APIs and formats? Who benefits by making their APIs more transparent, when 15 years or so ago there was a big push for this exact thing and it failed due to business concerns, not technical ones.
It simply assumes what the way forward is, without explaining why other approaches won't work, in order to justify the position.
My secondary critique of the article is using this as a basis of comparison:
>any product that can't be used by an agent will be as dead as a product without mobile support is today
First, I think it's important that 'mobile support' is defined. Is it as simple as a mobile-friendly website or an app? Are we also talking about functionality equivalent to the desktop counterpart?
Second, it ignores a ton of successful projects and products: Blender, Maya, Unreal Engine. There's also a huge swath of games that only launch on PC or consoles.
If equivalent 1:1 functionality is to be considered, the Adobe suite is still primarily a desktop tool, as is Figma.
I know there are a huge host of apps I'm missing too.
While more consumer apps have migrated to web + mobile, you can often find functional differences between accessing them on desktop vs mobile, where desktop is more complete. It's still not uncommon for the mobile/tablet version of an app to be missing features that the desktop or web version has.
Same reason you’d use an automated approach to anything - you want to solve a problem and want to solve it along some Pareto front of minimal effort and maximum results.
You use search engines to find things and all sorts of other automated time saving approaches. Agents are just another one. I used one to find an old email receipt for a car seat that I’d not been able to find myself with keywords.
> Related query: how is this really any different than what the W3C has proposed with Hydra[0] or other linked data APIs and formats? Who benefits by making their APIs more transparent, when 15 years or so ago there was a big push for this exact thing and it failed due to business concerns, not technical ones.
Common data formats have thrived where they're actually used by middle services that users actually use - aggregators of some sort. Scientific papers have common data so they are indexed; webpages have metadata so they appear in Google search results.
Linked data fails for technical reasons because the real world is too messy. There needs to be a good business reason so that the formats get nailed down to a small, reasonable set and everyone just does it.
Now, part of the problem with supplying APIs is: who is going to use them? Even the nicest ones are only useful to people building a product on them, or a few nerds.
This has changed. We have systems that can use APIs from natural language. That means normal end users can integrate multiple services nicely and easily, and add others into the same interface they're already using.
Because you get to do other stuff while the agent's working. Maybe spending hours optimizing the best flight possible is fun for you, or actually reading online reviews, I ain't judging. I don't care about half the features the marketing copy brags about, I only care about the parts that affect me. This theoretical agent knows me, what I care about, and can optimize based on that.
That "other stuff" you get to do is up to you to take advantage of. It could be scrolling TikTok, or it could be learning a foreign language or calculus, it's totally up to you.
And I think that products making billions today by only exposing a 20yo SOAP API are still going to make billions in 4 years.
I hate having to work with them, but I've been playing this game long enough to know that most of the world is running on legacy software and it won't change in about 1400 days.
The user-facing side was written in Delphi 7 (quite a huge client application; it even had a constructor to build new forms inside the application and a GUI to build algorithms, like an n8n of the '90s. I learned a lot working on that system).
I used Java 6, then migrated to Java 7, to write some new components for that system with a web UI. I was bad at Delphi and they didn't really care about consistency, so I went that route.
AFAIK this system is still in use today. They wanted to update it with a modern database and other modern stuff 15 years ago, and they've wanted to update it every year since, but every year the modernization project is postponed. So probably one day they'll replace everything, but so far it works.
I won't delve into details, but this system serves as a database of criminals and some other stuff for a small country, so it's actually quite an important one.
TBH I don't think that AI could even build it. When I had to build it the last time, the build involved several VMs. You had to build various DLLs and then build the final application. Every part was built in a separate VM with separately installed Delphi components of specific versions. It was a huge mess.
There are many Claude Code variants waiting to be built: not TUI code editors, but environments in which LLMs get superpowers.
That's the product you should build rn.
They aren't API-first not because they can't be, but because customers will use whatever GUI the website supports.
Unless your product has no value, users will fill out those forms manually if you don't provide an API, so long as they need your product.
Agents don't change this.
Think of agent support sort of like food delivery. Once people get used to the convenience, places that don't offer it are at a huge disadvantage.
"Talk to Sales for pricing" is bullshit and has always been bullshit. Shitty web UIs with no bulk change capabilities, but that you can script using the internal API from the Dev Tools Network tab are bullshit. If AI helps drag this industry into the future away from that, I welcome it.
my Indian visa, my 45 servers / remote desktops on Hetzner
all thanks to CC
How many jobs involve purely clicking things on a computer without human authorities, rules, regulations, permits, spending agreements, privacy laws, security requirements, insurance requirements, or licensing gates?
I wager almost none. The bottleneck in most work isn't "clicking things on a computer." It's human judgment, authorization chains, regulatory gates, accountability requirements, and spending approvals. Agents automate the easy part and leave the hard part untouched. Meanwhile, if the agents get it wrong even 1% of the time, that will add up like compound interest in wasted time. Anything that could actually be outsourced to an agent would have already been outsourced to Kenya.
How many caught attempts will it take for someone to find the right prompt injection to systematically evade LLMs here?
With a random selection of sub-competent human reviewers, the answer is approximately infinity.
Another parallel here is that AI agents will probably end up being poor customers in the sense of repeat business and long-term relationships. Like how some shops won’t advertise on some platforms because the clicks aren’t worth as much, on average, maybe we’ll start to see something similar for agents.
Without any of these, yes. With very basic rules, a LOT of them.
Think of this like going to a doctor with a simple question. It probably won’t be simple to them. At the end though, we usually do whatever they tell us. Because they are the experts, not us.
1. read the first 5 pages of HN
2. if there is a common theme, use a search engine to 'do your own research' and collect references
3. read your existing blog posts
4. write a new article with the same theme but in your style
5. post the link back to HN
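Step 1 doesn't even need an agent. A minimal sketch against the official Firebase API, treating "pages" as 30 stories each (my approximation; topstories returns up to 500 ranked IDs):

    import requests

    BASE = "https://hacker-news.firebaseio.com/v0"
    PAGE_SIZE = 30  # one HN front page shows 30 stories

    def first_pages(n_pages=5):
        """Titles from roughly the first n_pages of HN."""
        ids = requests.get(f"{BASE}/topstories.json", timeout=10).json()
        return [
            requests.get(f"{BASE}/item/{i}.json", timeout=10).json()["title"]
            for i in ids[: n_pages * PAGE_SIZE]
        ]

    print("\n".join(first_pages(1)))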
Create a video based on a Hacker News post related to any topic.
Doing the same in text would be trivial.