Presupposing whether everyone reading HN likes or dislikes something not even agreed on yet seems silly.
It's a joke that is intentionally vague so that the reader could come up with their own definition. Hopefully no one tries to make one so we don't need to define it.
> Presupposing whether everyone reading HN likes or dislikes something not even agreed on yet seems silly.
Jokes tend to be on the silly side.
The thing not yet defined?
In this joke, the "Torment Nexus" by name is clearly something you don't want; no one wants to be tormented. It's also the MacGuffin, because something named a Torment Nexus would HOPEFULLY be something no one would build, but the joke is "hey, this guy went and built the horrible thing we didn't want!"
Ever heard anyone make a Soylent Green joke? Same thing. All we know is that "Soylent Green is people," but we don't know HOW it's people. It doesn't matter, because we simply don't want to eat people under any condition.
Plus all the other oddball personalities that don’t mind suffering.
So clearly some might want it.
In other words, it's just like discussing "the Hero's Magic Weapon" or "the Wise Wizard" or "the Weird Place Where Ships Vanish".
Suppose I stated: "Captains hate to pilot their ships near the Weird Place Where Ships Vanish." Does it make sense for someone to complain that the coordinates of the Place haven't been defined, or that nobody has done a statistical analysis of Ship Vanishing rates?
Perhaps they read it as a joke about something quite different.
"Torment Nexus" comes from (and is used as a concise reference to) a two-sentence tweet [0], and I think it makes clear what the joke (or, perhaps, "dystopian observation") is about:
  Sci-Fi Author: In my book, I invented the Torment Nexus 
  as a cautionary tale.
  Tech Company: At long last, we have created the Torment
  Nexus from the classic sci-fi novel, Don't Create The
  Torment Nexus.
At least until the Torment Nexus outlaws it.
The unification of government data on citizens under DOGE and the push to use AI for surveillance under an authoritarian government bring us far closer to the plot of Winter Soldier than the bread and butter surveillance capitalism we'd already been living under. I regret that I had to spell that out for you.
  > The unification of government data on citizens under DOGE and the push to use AI for surveillance under an authoritarian government bring us far closer to the plot of Winter Soldier than the bread and butter surveillance capitalism we'd already been living under. 
My comment was only about the point that Winter Soldier "doesn't get enough credit." My point was that the setting was not just "not novel" but already commonplace, meaning Winter Soldier does not stand out as a unique representation. I want to stress "not a unique representation" != "not a representation".
At any rate, while themes of technological surveillance and authoritarianism certainly predate Winter Soldier, I'm not aware of anything in popular culture prior to 2014 that really matches the moment _to the degree_ Winter Soldier does. And if you simply meant the actual state of America circa 2014, sure, Palantir existed, but DOGE and an authoritarian-controlled US military institution did not. ML, yes, but not the quasi-AGI of today that's a much closer match for the computerized Arnim Zola.
But also, it’s not the 20-somethings building this; the people making decisions are in their 40s and 50s.
I think we are close to WALL-E or Fifteen Million Merits, maybe even almost at the Oasis (as seen by IOI). But we have made little progress in direct brain stimulation of the senses. We are also extremely far from robots that can do unsupervised complex work (work that requires a modicum of improvisation).
Of course, we might already be in the Matrix or a simulation, but if that’s the case it doesn’t really change much.
The difference is that we don't have credits the way the characters do in Brooker's universe; we have social clout in the form of upvotes, likes, hearts, retweets, streaming subs, etc., most of which are monetised in some form or are otherwise a path to a sponsorship deal.
The popularity contest this all culminates in is, in reality, much larger in scale than what was imagined in Black Mirror. The platform itself is the popularity contest.
  > We are also extremely far from robots that can do unsupervised complex work
[0] I'm not joking, they are openly stating this...
The end goal for AI companies has always been to insert themselves into every data flow in the world.
Option 2: Ad data
Option 3: None of the above
I'm going with the first two because I like to contribute my data to help out a trillion dollar company that doesn't even have customer support :)
Investors are never happy long term because even if you have a fantastic quarter, they'll throw a tantrum if you don't do it even better every single time.
> By default, we don’t use the content you browse to train our models. If you choose to opt-in this content, you can enable “include web browsing” in your data controls settings. Note, even if you opt into training, webpages that opt out of GPTBot, will not be trained on.
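For reference, the GPTBot opt-out mentioned there is the standard robots.txt mechanism OpenAI documents for its crawler; a site excludes itself from training with:

```text
User-agent: GPTBot
Disallow: /
```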
* Insurance companies
* Health insurers
* Banks
These would all like a chance to increase their profits at your expense based on your private data. That they will get it wrong 25% of the time does not mean it won't be profitable for them...
The government does not really care if you pay your taxes
Open-source agentic browser that uses any LLM provider (including local / Ollama).
That being said... I tried it, and while it was fun and cool, I didn't get enough value out of it to use it regularly (I think this would go for any agentic browser).
https://connect.mozilla.org/t5/ideas/archive-your-browser-hi...
The recorded history is stored in a SQLite database and is quite trivial to examine[0][1]. A simple script could extract the information and feed it to your indexer of choice. Developing such a script isn't a task for a browser engineering team.
The question remains whether the indexer would really benefit from real-time ingestion while browsing.
[0] Firefox: https://www.foxtonforensics.com/browser-history-examiner/fir...
[1] Chrome: https://www.foxtonforensics.com/browser-history-examiner/chr...
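For illustration, a minimal sketch of such a script for Firefox (the `moz_places` table and its `last_visit_date` column are part of the places.sqlite schema; locating your profile directory and copying the file is left to you, since the live database is locked while the browser runs):

```python
import sqlite3

def read_firefox_history(places_db):
    """Return (url, title, last_visit_date) rows, newest first.

    Query a *copy* of Firefox's places.sqlite -- the live file is
    locked while the browser is running.
    """
    con = sqlite3.connect(places_db)
    try:
        return con.execute(
            "SELECT url, title, last_visit_date FROM moz_places "
            "WHERE last_visit_date IS NOT NULL "
            "ORDER BY last_visit_date DESC"
        ).fetchall()
    finally:
        con.close()
```

From there, piping each row into whatever indexer you prefer is a few more lines.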
Yeah I am sure lots of people want their pornhub history integrated into AI...
If that is the "future" (gag), we better be able to opt out
Chrome Devtools MCP on the other hand is a browser automation tool. Its purpose is to make it trivial to send programmed events/event-flows to a browser session.
I fail to see the issue people have here. I mean, what exactly is the problem with training data here? This is not like advertising, where the data is used against you. It's not information about people that's being collected and extracted here - it's about collecting enough signal to identify patterns of thinking; it's about how human minds in general perceive the world. This is not going to hurt you.
(LLMs ultimately might, when wielded by... the same parties that have been screwing you over for decades or more. It's not OpenAI that's screwing you over here - it's advertisers, marketers, news publishers, and others in the good ol' cohort of exploitative liars.)
"Perplexity CEO says its browser will track everything users do online to sell ‘hyper personalized’ ads"
https://techcrunch.com/2025/04/24/perplexity-ceo-says-its-br...
Burn them, burn them all to hell!
I grew up on the banks of the Hudson River, polluted by corporations dumping their refuse into it while reaping profits. Anthropic/openai/etc are doing the same thing.
Oh the humanity!
I can't take those eco-impact threads seriously. Yes, ChatGPT uses compute, and compute uses water and electricity. So does keeping your lawn trimmed and your dog fed, and of the three, I bet ChatGPT actually generates the most value for everyone, on net.
Everything we do uses electricity and water. Everything that lives uses energy and water. The question isn't whether, or how much, but what for. Yes, LLMs use a lot of energy in absolute terms - but that's because they're that useful. Yes, despite what people who deny basic reality would tell you, LLMs actually are tremendously useful. In relative terms, they don't use that much more energy or water vs. things they displace, and they make up for it in the improvements.
Want to talk environmental impact of ChatGPT et al.? Sure, but let's frame it with comparative figures for sportsball, concerts, holiday decorations, Christmas lights, political campaigns, or pets. Suddenly, it turns out the whole thing is merely a storm in a teacup.
And I don’t have a dog but that water usage certainly provides the most benefit. Man’s best friend > online sex bot.
And? Have you read about the impact of ${production facilities} in non-US countries? That's literally what industrialization and globalization are about. Data centers aren't unique here - same is true about power plants, factories, industrial zones, etc. It all boils down to the fact that money, electricity and compute are fungible.
Note: this alone is not a defense of LLMs, merely me arguing that they're nothing special and don't deserve being singled out - it's just another convoluted scenario of multinational industries vs. local population.
(Also last time I checked, the whole controversy was being stoked up by a bunch of large interest groups that aren't happy about competition disturbing their subsidized water costs - it's not actually a grassroots thing, but an industry-level PR war.)
> Man’s best friend > online sex bot.
That's disingenuous. I could just as well say: {education, empowering individuals to solve more of their own problems, improving patient outcomes} > pets and trimmed lawns. LLMs do all of these and sex bots too; I'm pretty sure they do more of the former than the latter, but you can't prove it either way, because compute is fungible and accurate metrics are hard to come by :P.
How can you know what you're reading is true when you can't verify what's happening out there?
This is true from global events to a pasta recipe.
AI is destabilizing the current balance of knowledge/information which creates the high potential for violence.
Societies are built upon unspoken but shared truths and information (i.e. the social contract). Dissolve this information, dissolve or fragment the society.
This, coupled with profiling and targeting will enable fragmentation of the societies, consolidation of power and many other shenanigans.
This also enables continuous profiling, opening the door for "preemptive policing" (Minority Report style) and other dystopian things.
Think about Cambridge Analytica or election manipulation, but on steroids.
This is dangerous. Very dangerous.
History has proved that keeping society stupid and disenfranchised is essential to control.
Did you know that in the 1600s the King of England banned coffee?
Simple: fear of better ideas propagating and of a more intense social fraternity.
"Patrons read and debated the news of the day in coffeehouses, fueled by caffeine; the coffeehouse became a core engine of the new scientific and philosophical thought that characterized the era. Soon there were hundreds of establishments selling coffee."
https://worldhistory.medium.com/why-the-king-of-england-bann...
(the late 1600s was something of a fraught time for England and especially for Charles II, who had spent some time in exile due to the monarchist loss of the English Civil War)
Thanks for sharing!
For virtually all of human history, there weren't anywhere near so many of us as there are now, and the success and continuation of any one group of humans wasn't particularly dependent on the rest. Sure, there were large-scale trade flows, but there were no direct dependencies between farmers in Europe, farmers in China, farmers in India, etc. If one society collapsed, others kept going.
The worst historical collapses I'm familiar with - the Late Bronze Age Collapse and the fall of the Roman Empire - were directly tied to larger-scope trade, and were still localized beyond comparison with our modern world.
Until very recently, total human population at any given point in history has been between 100 and 400 million. We're now past 8 billion. And those 8 billion people depend on an interconnected global supply chain for food. A supply chain that, in turn, was built with a complex shared consensus on a great many things.
AI, via its ability to cheaply produce convincing BS at scale, even if it also does other things is a direct and imminent threat to the system of global trade that keeps 8 billion human beings fed (and that sustains the technology base which allows for AI, along with many other things).
The shared truth that holds us together, that you mentioned, in my eyes is love of humanity, as cliche as that might sound. Sure it wavers, we have our ups and downs, but at the end, every generation is kinder and smarter than the previous. I see an upward spiral.
Yes, there are those of us who might feel inclined to subdue and deceive, out of feelings of powerlessness, no doubt. But, then there are many of us who don't care for anything less than kindness. And, every act of oppression inches us toward speaking and acting up. It's a self-balancing system: even if one falls asleep at the wheel, that only makes the next wake-up call more intense.
As to the more specific point about fragmented information spaces: we always had that. At all points in history we had varying ways to mess with how information, ideas and beliefs flowed: for better and for worse. The new landscape of information flow, brought about by LLMs, is a reflection of our increasing power, just as a teenager is more powerful than a pre-teen, and that brings its own "increased" challenges. That's part of the human experience. It doesn't mean that we have to ride the bad possibilities to the complete extreme, and we won't, I believe.
My personal foundations are not very different from yours. I don't care about many of the things other people care about. Being a human being and having your heart in the right place is a good starting point for me, too.
On the other hand, we need to make a distinction between people who live (ordinary citizens) and people who lead (people inside government and managers of influential corporations). There's the saying "power corrupts", now this saying has scientific basis: https://www.theatlantic.com/magazine/archive/2017/07/power-c...
So, the "ruling class", for the lack of better term, doesn't think like us. I strive to be kinder every day. They don't (or can't) care. They just want more power, nothing else.
For the fragmented spaces, the challenge is different from the past. We humans are social animals and were always in social groups (tribes, settlements, towns, cities, countries, etc.); we felt we belonged. As the system got complex, we evolved as a result. But the change was slow, so we were able to adapt over a couple of generations. From the '80s to the '00s it was faster, but we managed it somehow. Now it's exponentially faster, and the more primitive parts of our brains can't handle it as gracefully. Our societies, ideas and systems are strained.
Another research studying the effects of increasing connectivity found that this brings more polarization: https://phys.org/news/2025-10-friends-division-social-circle...
Another problem is that, unfortunately, not all societies, or all parts of the same society, evolve at the same pace into the same kinder, more compassionate human beings. Radicalism is on the rise. It doesn't have to be violent, but some parts of the world are becoming less tolerant. We can't ignore this. See world politics. It's... complicated.
So, while I share your optimism and light, I also want to underline that we need to stay vigilant. Because humans are complicated. Some are naive, some are defenseless and some just want to watch the world burn.
Instead of believing that everything's gonna be alright eventually, we need to do our part to nudge our planet in that direction. We need to feed the wolf which we want to win: https://en.wikipedia.org/wiki/Two_Wolves
I appreciate your thoughtful reply. I too think that our viewpoints are very similar.
I think you hit the nail on the head about how it's important that positivity doesn't become an excuse for inaction or ignorance. What I want is a positivity that's a rally, not a withdrawal.
Instead of thinking of power as something that imposes itself on people (and corrupts them), I like to think that people tend to exhibit their inner demons when they're in positions of power (or, conversely, in positions of no power). It's not that the position does something to them; it's that they prefer to express their preexisting imbalance (inner conflict) in certain ways when they're in those circumstances. When in power, the inner imbalance manifests as a villain; when out of power, it manifests as a victim.
I think it's important to say "we", rather than "us and them". I don't see multiple factions with fundamentally incompatible needs. Basically, I think that conflict is always a miscommunication. But, in no way do I mean that one should cede to tyranny or injustice. It's just that I want to keep in mind, that whenever there's fighting, it's always in-fighting. Same for oppression: it's not them hurting us, but us hurting us: an orchestration between villains and victims. I know it's triggering for people when you humanize villains and depassify victims, but in my eyes we're all human and all powerful, except we pretend that the 1% is super powerful, while the 99% are super powerless.
I had a few more points I wanted to share, but I have to run. Thanks for the conversation.
Let's convince the remaining 8 billion people.
Real life is a place encompassing "cyberspace", too. They're not separate but intertwined. You argue that the people affecting your life are the ones closest to you, but you continuously interact with the ones who are farthest from you distance-wise, and they affect your day at this very moment.
Maybe this is worth thinking about.
As a bit of a semi-related aside, while everyone has different motivations when voting, as a whole when folks are able to vote for their gov't, one hopes that enough people are thinking about what is good for the majority and society as a whole and not only what is good for themselves. And that has more impact at local and state levels usually. A bit idealistic, admittedly.
Based on, e.g., the election behavior of populations, what you describe is naivety on the level of maybe my 5-year-old son. Can you try to be a bit more constructive?
Ok, let’s break it down for you: What all 8 billion people in the world think does not matter to you. There are people out there cutting heads off in the name of religion, or people who think their dictator is a divine being.
People outside your country have little effect on your daily life.
Even people within your country have a weak effect on your daily life.
What other people believe only really matters for economic reasons. Still, unless you are very dependent on social safety nets even they don’t matter that much. You just find more money and carry on.
You might think that more propaganda will result in people voting for bad politicians, but it is actually possible to have too much propaganda. If people become aware of how easily fake content is generated, which they are rapidly realizing in the age of AI, the result is they become skeptical of everything, and come up with their own wild theories on what the truth really is.
The people whose thoughts matter most are the people you interact with on a daily basis, as they have the most capability to alter your daily life physically and immediately. Fortunately you can control better who you surround yourself with or how you interact with them.
If you turn off the conversation, the world will appear pretty indifferent even to things that seem like a big deal on social media.
In the US at least, the people who vote the most are typically older people, 40+, and those people have very little experience with tech and AI and are easily tricked by fake crap. Add AI to the mix, and they literally have no perception of the real world.
I think your comment is just very ageist. You stereotype everyone who is middle age and above as barely lucid nursing home seniors.
Ironically I would say it is young 20 somethings and below who have no clue how a computer or software even works. Just a magic sheet of glass or black box that spits out content and answers, and sometimes takes pictures and video.
> You stereotype everyone who is middle age and above as barely lucid nursing home seniors.
No, they have not. My dad worked on UK military IFF software solutions and simulations. It still took him years to realise Google (c. 2010) search results had a scroll bar. Mum eventually did get Alzheimer's, but was mixing up reality and fiction decades earlier, in the form of New Age healing crystals, ley lines, etc., and as I grew up with that influence I too believed them, until practicing Popper-like falsification stripped away each magickal belief.
Me, I've been following AI since before the turn of the millennium, and I still get surprised when I see how fast the tech is improving. I also see plenty of people, even on HN, assert AI will take (paraphrasing) "centuries, if ever" to reach performance thresholds it has now reached.
Your idea of living in society is something very different from my idea, or the European idea (and reality). Not seeing how everything is interconnected, how ripple effects and secondary/tertiary effects come back again and again... I guess you don't have kids. Well, you do your life, if you think money can push through and solve everything important. I'd call it a sad, shortsighted life if I cared, but that's just me.
(also, has everyone forgotten COVID?)
- Study finds growing social circles may fuel polarization: https://phys.org/news/2025-10-friends-division-social-circle...
tl;dr: The more close friends people have, the more polarized societies become.
It's easy to profile people extensively, and pinpoint them to the same neighborhood, home or social circle.
Now, what happens when you feed "what you want" to these groups of people? You can plant polarization with neighbourhood or home precision and control people en masse.
Knowing or seeing these people doesn't matter. After some time you might find yourself proverbially cutting heads off in the name of what you believe. We used to call these flame wars back then. Now this is called doxxing and swatting.
The people you don't know can make life very miserable in a slow-burning way. You might not be feeling it, but this is the core principle of slowly boiling the frog.
Look at the really powerful people of this world: literally every single one of them is a badly broken piece of shit (to be extremely polite): control freaks with fucked-up childhoods, overcompensating for a missing/broken father figure, often malicious, petty, vengeful, feeling above the rest of us.
The whole reason for democracy since ancient times is to limit how much power such people have. The competent, sociopathic subset of the above will always rise to the top of society regardless of the type of system, so we need good mechanisms to prevent them from becoming absolute lifelong dictators (and we need to prevent them from attaining immortality, since that would be our doom on another level).
People haven't changed over the past few thousand years, and any society that failed at the above eventually collapsed in very bad ways. We should not want the same for the current global civilization and should, for a change, learn from past mistakes, unless you like the idea of a few decades of global warfare and a few billion deaths. I certainly don't.
This thing is an absolute security nightmare. The concept of opening up the full context of your authenticated sessions in your email, financial, healthcare or other web sites to ChatGPT is downright reckless. Aside from personal harm, the way they are pushing this is going to cause large scale data breaches at companies that harbour sensitive information. I've been the one pushing against hard blocking AI tools at my org so far but this may have turned me around for OpenAI at least.
Yay!! Let’s all make a not-for-profit!!
Oh, but hold on a minute, look at all the fun things we can do with lots of money!
Ooooh!!
Clearly, an all-local implementation is safer, and using less powerful local models is the reasonable tradeoff. Also make it open source for trust.
All that said, I don’t need to have everything automated, so we also have ‘why even build it’ legitimate questions to ask.
On the contrary: it could be the case that Microsoft ritually sacrifices a dozen babies each day in their offices and it would still be used, because Office.
Well, unless the scenario is moot because such a vendor would never have released it in the first place.
There are 2 dimensions to it: determinism and discoverability.
In the Adventure example, the UX is fully deterministic but not discoverable. Unless you know what the correct incantation is, there is no inherent way to discover it besides trial and error. Most CLIs are like that (and IMHO phones with 'gestures' are even worse). That does not make a CLI inefficient, unproductive or bad. I use the CLI all the time, as I'm sure Anhil does; it just makes them more difficult to approach for the novice.
But the second aspect of Atlas is its non-determinism. There is no 'command' that predictably always 'works'. You can engineer towards phrasings that are more often successful, but you can't reach fidelity.
This leeway is not without merit. In theory the system is thus free to 'improve' over time without the user needing to. That is something you might find desirable or not.
It could just as well degrade, improvement is not the only path.
Although there is one aspect in which the LLM interface still isn't discoverable: what interface it has to the world.
Can I ask Alexa+ to send a WhatsApp message? I have no idea, because it depends on whether the programmers gave it that interface.
I updated ChatGPT and a little window popped up asking me to download Atlas. I declined as I already have it downloaded.
There was another window, similar to the update-available window, in my chat asking me to download Atlas again... I went to hit the 'X' to close it and somehow triggered it; it opened my browser to the Atlas page and started a download of Atlas.
This was not cool and has further shaken my already low confidence in OpenAI.
That's why they have to push it so hard. They spent a lot of money on it and they NEED us to buy it so they don't take a loss, so their tactic is to try to force it on us.
It happens to many companies, you start a disruptor and get huge because you did what the market wanted and competitors didn't. Then, later, you don't want the market to go in a direction, so you try to stack the market, never once realizing that another disruptor will come along to upset your apple cart.
You can control some of the market all of the time, and all of the market some of the time, but you can't control all of the market all of the time.
No judgment, pure curiosity. Why did you install their malware?
I guess Google works in a vaguely similar way, by outputting results based on what it "knows" about you, though in an even more subtle manner.
they are quite aggressive at making people install this
They can't even lie and blame it on a programming fuckup because they'd have to say AI driven code is buggy.
No, we didn't.
Heck, just look at what's happening at this very moment: I'm using a textarea with a submit button. Even as a developer/power-user, I have zero interest in replacing that with:
    echo "I think the..." | post_to_hn --reply-to=45742461

I have the opposite view. I think text (and speech) is actually a pretty good interface, as long as the machine is intelligent enough (and modern LLMs are).
I once saw a demo of an AI photo-editing app that displays sliders next to the light sources in a photo, and you are able to dim/brighten the individual light sources' intensity this way. This feels to me like the next level of user interface.
1. There's a "normal" interface or query-language for searching.
2. The LLM suggests a query, based on what you said you wanted in English, possibly in conjunction with results of a prior submit.
3. The true query is not hidden from the user, but is made available so that humans can notice errors, fix deficiencies, and naturally--if they use it enough--learn how it works so that the LLM is no longer required.
For example, "Find me all pictures since Tuesday with pets" might become:
    type:picture after:2025-10-08 fuzzy-content:"with pets"
   Description: "black dog catching a frisbee"
   Does that match "with pets"?
   Answer Yes or No.
   Yes.

> This was also why people hated operating systems like MS-DOS, and why even all the Linux users reading this right now are doing so in a graphical user interface.
What is the significance of "even all the Linux users"? First of all, it's probably incorrect, because of the "all" quantifier. I went out of my way to look at the website via the terminal to disprove the statement. It's clearly factually incorrect now.
Second, what does hate have anything to do with this? Graphical user interfaces serve different needs than text interfaces. You can like graphical user interfaces and specifically use Linux precisely because you like KDE or Gnome. You can make a terrible graphical user interface for something that ought to be a command line interface and vice versa. Vivado isn't exactly known for being easy to automate.
Third, why preemptively attack people as nerds?
>There were a tiny handful of incredible nerds who thought this was fun, mostly because 3D graphics and the physical touch of another human being hadn't been invented yet.
I mean, this comes off as an incredible strawman. After all, who wouldn't be excited by computers in an era when they were the hot new thing? Computers were exciting not because they had text interfaces. They were fun because they were computers. It's like learning how to drive on the highway for the first time.
The worst part by far is the unnecessary insinuation though. It's the standard anti-intellectual anti-technology stereotype. It creates hostility for absolutely no reason.
Also, some commenters here at HN stating that the CLI/TUI is just the fallback option... that's ridiculous. Nvi/vim, entr, make... can autocompile (and autocomplete too, with some tools) a project upon writing any file in a directory, thanks to the entr tool.
But most of those will likely be developers, that use the CLI in a very particular way.
If we now subdivide further, and look at the people that use the CLI for things like browsing the web, that's going to be an even smaller number of people. Negligible, in the big picture.
In any case, Claude Code is not really CLI, but rather a conversational interface.
"htop" is a TUI, "ps" is a CLI. They can both accomplish most of the same things but the user experience is completely different. With htop you're clicking on columns to sort the live-updating process list, while with "ps" you're reading the manual pages to find the right flags to sort the columns, wrapping it in a "watch" command to get it to update periodically, and piping into "head" to get the top N results (or looking for a ps flag to do the same).
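The ps pipeline described there might look something like this (GNU ps flags assumed; BSD/macOS ps spells these differently):

```shell
# Top 10 CPU consumers, rebuilt from plain CLI tools: header + 10 rows
ps -eo pid,comm,%cpu --sort=-%cpu | head -n 11

# For the live-updating view, wrap it in watch:
#   watch -n 2 "ps -eo pid,comm,%cpu --sort=-%cpu | head -n 11"
```

Which rather proves the point: htop gives you all of that with a couple of keystrokes.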
claude -p "Question goes here"
As that will print the answer only and exit.
I'd bookmarked a lot of Gamasutra articles over the years and am kinda bummed out that I can't find any of them now that the site has shifted. You mentioned having a collection of their essays? Is there any way to share or access them?
If I’d anticipated breaching containment and heading towards the orange site, I may not have risked the combination of humor and anything that’s not completely literal in its language. Alas.
Don't get me wrong, I'm not arguing that the expansion of GUI-based interfaces wasn't a good thing. There are plenty of things I prefer to interact with that way, and the majority of people wouldn't use computers if CLIs were still the default method. But what he's describing is literally not how anyone ever used the command line.
Anyone who deals with any kind of machine with a console port.
CLIs are current technology that receives active development alongside GUIs for a large range of purposes.
Heck, Windows currently ships with three implementations: Command Prompt, PowerShell, AND Terminal.
CLI is ALWAYS the fallback when nothing else works (except when it's a fetish of people on HN). Even most devs use IDEs most of the time.
There's a pretty decent list of incompatible features, but it's shrinking (mostly due to those features being EOL'd, not upgraded).
It’s unlikely LLM operators can break even by charging per use, and it should be expected that they’ll race to capture the market by offering “free” products that in reality are ad serving machines, a time-tested business model that has served Meta and friends very well. The fact that Atlas browser is (and they don’t even hide it) a way to work around usage limits of ChatGPT should ring alarm bells.
Atlas may not be the solution but I love the idea of an LLM that sits between me and the dreck that is today’s web.
The web was supposed to set us free and you cry out for chains?
It's in your hands. It's literally in the hands of this entire forum. There is nobody else to blame.
I also believe noticing Baader-Meinhof in the 90s is rather unsurprising, since the RAF was just "a few years" earlier. However, "dreck", as someone else noted, has been documented since the early 20th century. So I don't think my noticing this just recently is a bias; rather, a true coincidence.
I guess you did say "the idea of" not "the reality of".
I think we should note that this is a situation created by the businesses more than the consumers. By offering "free" products they drove paid products out of the market. They didn't have to do that. But if I'm going to be as fair as possible, it's only regulation that could have stopped such a niche from being exploited. One business or another would have eventually figured it out.
It feels like $10 / month would be sufficient to solve this problem. Yet, we've all insisted that everything must be free.
- Kagi
- YouTube Premium
- Spotify Premium
- Meta ad-free
- A bunch of substacks and online news publications
- Twitter Pro or whatever it’s called
On top of that I aggressively ad-block with extensions and at DNS level and refuse to use any app with ads. I have most notifications disabled, too.
It is a lot better, but it’s more like N * $10 than $10 per month.
https://help.kagi.com/kagi/ai/llms-privacy.html#llms-privacy
But, doesn't Youtube Premium include Youtube Music? So why pay for Spotify premium too?
Spotify (now) supports lossless.
Spotify connects to Sonos, wiim, etc. devices
Spotify supports marking albums and playlists for offline sync, including to my Garmin watch.
I participate in a number of collaborative Spotify playlists (e.g. on group trips, at parties, etc.). I’ve never seen anyone make a collaborative playlist on another platform, much less missed out on participating in one.
Shazam results have an “Open in Spotify” button and Shazam adds everything it identifies to a Spotify playlist.
When I’ve used it, the YouTube Music UI has felt like it’s not really designed for people who listen to music the way I do at all.
I’m not willing to go without YouTube just to spite Google but I’d rather not give them money or attention/usage if I can avoid it.
I don’t know how many of these would also be ok with YouTube Music, but it’s clearly not all of them and I suspect it’s close to zero. I’m fortunate that the cost of Spotify is not a burden for me, and I’d much rather pay it to get closer to the experience I want than try to get by with YouTube Music.
I've also been using Spotify since before YouTube Music, or its predecessor that Google killed (as they do periodically), even existed.
YouTube Music is both better and worse: the UI has some usability issues, and unfortunately it shares likes and playlists with the normal YouTube account. As a library it has lots of crap uploaded by YouTube users, often with wrong metadata, but thanks to that it also has some niche artists and recordings that aren't available on other platforms.
It doesn't address the other reasons, but there are some free tools for moving Spotify playlists to YouTube.
But I can't actually make that payment - except maybe by purchasing a paid adblocker - where ironically the best open source option (uBlock Origin) doesn't even accept donations.
I think the most telling is the breakdown of Average Revenue Per User per region for Facebook specifically [1]. The average user brought in about $11 per quarter while the average US/CA user brought in about $57 per quarter during 2023.
[1] https://s21.q4cdn.com/399680738/files/doc_financials/2023/q4... (page 15)
It’s closer to $100 than $10 though, for all the services I pay for to avoid ads, and you still need ad blockers for the rest of the internet.
That said, don't be lured: you know they're already working on ways to put ads and trackers and whatnot inside ChatGPT and Atlas. That $20 won't be enough to recoup all that investment and cost and maximize profits.
So I think we should be careful what we wish for here.
This is kind of surprising, because those are precisely the ways I would say that a Web search is better than ChatGPT. Google is generally sub second to get to results, and quite frequently either page 1 or 2 will have some relevant results.
With ChatGPT, I get to watch as it processes for an unpredictable amount of time, then I get to watch it "type".
> ads free
Free of ads that ChatGPT was directly paid to deliver, maybe. But because it was trained on the public Internet, it is full of advertising content.
Update: Example query I just did for "apartment Seville". Google completed in under a second. All the results above the fold are organic, with sponsored way down. Notably the results include purchase, long-term and vacation rental sites. The first 3 are listing sites. There's an interactive map in case I know where I want to go; apartments on the map include links to their websites. To see more links, I click "Next."
ChatGPT (MacOS native app) took ~9 seconds and recommended a single agency, to which it does not link. Below that, it has bullet points that link to some relevant sites, but the links do not include vacation rentals. There are 4 links to apartment sites, plus a link to a Guardian article about Seville cracking down on illegal vacation rentals. To see more links, I type a request to see more.
For all the talk about Google burying the organic links under a flood of ads, ChatGPT shows me far fewer links. As a person who happily pays for and uses ChatGPT daily, I think it's smart to be honest about its strengths and shortcomings.
That being said, I've never really come across good general ways to get Google to give me good results.
I know some tricks, e.g. filetype:pdf, using Scholar for academic search, using "site:...". Something like "site:reddit.com/r/Washington quiet cafe" works for most things people would want to do in a city, because people generally ask about those things on community forums.
But I have a poor time with dev-related queries, because half the time it's SEO'd content, and when I don't know enough about a subject, LLMs generally give me a lot of lines of inquiry (be careful of X, and also consider Y) that I would not have thought to ask about, because I don't know what I don't know.
But what if you were looking for information about anything? Say, the average cost of an apartment in Seville. On Google, yes, the search result is fast, but then you have to click through the links, those websites are often slow to load and full of ads, and you might need to look at a few of them to get the information you wanted.
If you have a few follow-up pieces of information you're interested in as well (what's the typical apartment size, have prices been going up or down, what's the typical leasing process, etc.), you can bottom out with the info much faster on ChatGPT.
If I’m trying to learn about a topic (for example, how a cone brake works in a 4WD winch), then ChatGPT gives me a great overview with “ Can you explain what a cone brake is and how it works, in the context of 4WD winches?” while google, with the search “4wd winch cone brake function explained”, turns up a handful of videos covering winches (not specifically cone brakes) and some pages that mention them without detailing their function. ChatGPT wins here.
If I were trying to book a flight I’d never dream of even trying to use ChatGPT. That sort of use case is a non-starter for me.
My mistake, you're completely correct, perhaps even more-correct than the wonderful flavor of Mococoa drink, with all-natural cocoa beans from the upper slopes of Mount Nicaragua. No artificial sweeteners!
As it is, I find there are some things LLMs are genuinely better for but many where a search is still far more useful.
Google used to be like that, and if ChatGPT is better right now, it won't remain that way for long. They're both subject to the same incentives and pressures, and OpenAI is, if anything, less ethical and idealistic than Google was.
But isn't this, instead, letting a third party strip that context away and give it its own context so that you can't make those choices and decisions properly? Information without context is, to me, nearly worthless.
And even if you believe they are neutral parties and have your own interests at heart (which, quite frankly, I think is naive), once companies like that know everything about you, you don't think they'll abuse that knowledge?
An AI browser is choosing to send all the stuff you browse, to a third party without a demonstrated interest in keeping it all private, and getting back stuff that might or might not be true to the original content. Or maybe not even true at all.
Oh and - Atlas will represent your interests, right up until OpenAI decides it's not in their financial interest to do so. What do you do when the entire web browser UI gets enshittified?
- We have tried to make Atlas a great way to browse the open web, and invested heavily in the search experience, as we covered in the livestream. When I test typing "taylor swift" into Atlas, I see links to the website in the autocomplete carousel, at the top of the results page, in inline citations within chat, and in the Search tab. We're also working on making these faster and more prominent when we're sure you want a link; for example, you'll see a large vertical list of links at the top of the chat response when you type in something like "gmail".
- Browser memories are opt-in, and by default we don't train on the contents of pages you browse, even when memories are enabled. Keeping the Ask ChatGPT sidebar open has no bearing on what gets sent to us. Webpages are clearly displayed as attachments you can disable in the chat, and are only sent to us when you submit the prompts.
- Ask ChatGPT and Agent mode are not workarounds to user or publisher training settings. Even if a user has training enabled, we will still respect publisher preferences to block GPTBot in their robots.txt and will not train on that content.
Anil's point is that ChatGPT can't, on its own, train on data that requires a login. So Atlas is in fact automating training on login-walled content, and doing so by default, but also adding decoy settings that make you think it's not.
No evil here!
The part about Zork doesn't make sense to me. As I understand it, text-based adventure games are actually quite lenient with the input you can give, with multiple options for the same action. Additionally, certain keywords are "industry standard" in the same way that you walk using WASD in FPS games, so much so that one became the title of the documentary "Get Lamp". Given players' perceived knowledge of similar mechanics from other games, you can even argue that providing these familiar commands is part of the game design.
It seems to me that the author never played a text-based adventure game and is just echoing whatever he heard. Projects like 1964's ELIZA prove that text-based interfaces have been able to feel natural for a long time.
Text has a high information density, but natural language is a notoriously bad interface for certain things, like giving commands; that's why we invented command lines with their own syntax, and programming languages, for instructing computers what to do.
Figuring it all out is part of the fun, but outside the context of a game it would be maddening.
As for Eliza, she mostly just repeats back the last thing you said as a question. “My dog has fleas.” “How does your dog having fleas make you feel?”
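That reflection trick is simple enough to caricature in one line of sed (a toy sketch, nothing like Weizenbaum's actual script-driven program):

```shell
# Toy ELIZA-style reflection: flip the pronoun and echo the statement back
# as a question. The whole-line match (&) reuses the reflected sentence.
reflect() {
  printf '%s\n' "$1" | sed 's/^My /your /; s/^I /you /; s/.*/How does & make you feel?/'
}
reflect "My dog has fleas"
# prints: How does your dog has fleas make you feel?
```

The clumsy grammar is part of the point: the program has no idea what it is saying.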
Which is why it's done that way. Other text-based games where the focus is not on puzzling out what to do next (like roleplaying MUDs) have a more strict and easily discoverable vocabulary.
This would be like saying using programming languages is terrible because Brainfuck is a terrible programming language.
     It seems to me that the author never played a text based adventure game and is just echoing whatever he heard
> thought this comparison was incredibly apt and also found it very funny
I'm happy for you, but I didn't.
To "read something with nuance" is to be open to nuance that is already present in the writing. This writing is not nuanced!
Perhaps you're asking us to make an effort to be more tolerant of weak writing. That's a fair request when the writer is acting in good faith. But to mock nerds for liking text adventures when you clearly do not like them yourself is not acting in good faith.
You should actually try and play Zork and report back.
(Who is clearly playfully poking fun at something he enjoyed playing but can recognize as being constrained by computers of the era and no longer a common format of games thanks to 3D rendering).
I mean, for some people they do, but those people never liked books to begin with; they just didn't have an alternative.
Graphical applications can be more beautiful and discoverable, but they limit the user to only actions the authors have implemented and deployed.
Text applications are far more composable and expressive, but they can be extremely difficult to discover and learn.
We didn't abandon the shell or text interfaces. Many of us happily live in text all day every day.
There are many tasks that suffer little by being limited and benefit enormously by being discoverable. These are mostly graphical now.
There are many tasks that do not benefit much by spatial orientation and are a nightmare when needlessly constrained. These tasks benefit enormously by being more expressive and composable. These are still often implemented in text.
The dream is to find a new balance between these two modes and our recent advances open up new territory for exploring where they converge and diverge.
In addition to the analogy of the textual interface used in Zork, we could say that it'd be like interacting with any REST API without knowledge about its specification - guessing endpoints, methods, and parameters while assuming best practices (of "natural-ness" kind). Do we really want to explore an API like that, through naive hacking? Does a natural language wrapper make this hacking any better? It can make it more fun as it breaks patterns, sure, but is that really what we want?
I haven't used it and have no intention of using it.
I'm reacting to the OP articulating clearly dismissive and incorrect claims about text-based applications in general.
As one example, a section is titled with:
> We left command-line interfaces behind 40 years ago for a reason
This is immediately followed by an anecdote that is probably true for OP, but doesn't match my recollection at all. I recall being immersed and mesmerized by Zork. I played it endlessly on my TRS-80 and remember the system supporting reasonable variation in the input commands.
At any rate, it's strange to hold up text based entertainment applications while ignoring the thousands of text based tools that continue to be used daily.
They go on with hyperbolic language like:
> ...but people were thrilled to leave command-line interfaces behind back in the 1990s
It's 2025. I create and use GUI applications, but I live in the terminal all day long, every day. Many of us have not left the command line behind and would be horrified if we had to.
It's not either/or, but the OP makes incorrect claims that text based interfaces are archaic and have long been universally abandoned.
They have not, and at least some of us believe we're headed toward a new golden age of mixed mode (Text & GUI) applications in the not-so-distant future.
CLIs are still powerful and enjoyable because their language patterns settled over the years. I wouldn't enjoy using one of these undiscoverable CLIs that use --wtf instead of --help, or be in a terminal session without autocomplete and zero history. I build scripts around various CLIs and like it, but I also love to install TUI tools on my servers for quick insights.
All of that doesn't change the fact that computer usage moved on to GUIs for the general audience. I'd also use a GUI for cutting videos, editing images, or navigating websites. The author used a bit of tongue-in-cheek, but in general I'd agree with them, and I'd also agree with you.
Tbh, I also think the author would agree with you, as all they did was make an anecdote that amounts to:
  s/Take/Pick up/
  s/atlas design/search web history for a doc about atlas core design/
I think this will let models be much smaller (and cheaper), but it would also enable a mechanism for monetizing knowledge. This would make knowledge sharing profitable.
For example, a user asks a question, the model asks knowledge sources if they have relevant information and how much it costs (maybe some other metadata like perceived relevance or quality or whatever), and then it decides (dynamically) which source(s) to use in order to compile an answer (decision could be based on past user feedback, similarly to PageRank).
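The selection step could be as dumb as scoring relevance against price. A toy sketch, with made-up source names, relevance scores, and prices:

```shell
# Each (invented) source advertises: name, self-reported relevance, price per query.
# Pick the best relevance-per-dollar; a tiny floor keeps free sources finite.
printf '%s\n' 'wiki 0.90 0.02' 'vendor_db 0.95 0.50' 'blog 0.40 0.00' |
awk '{ v = $2 / ($3 + 0.01); if (v > best) { best = v; pick = $1 } } END { print pick }'
# prints: blog
```

Notably, the low-relevance free source wins under this naive scoring, which is exactly the kind of perverse incentive such a marketplace would need to handle.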
One issue is that this incentivizes content users want to hear versus content they don’t want to hear but is true. But this is a problem we already have, long before AI or even the internet.
If you post for ad revenue, I truly feel sorry for you. How sad.
I think this is a bit dismissive towards people who create content because they enjoy doing it but also could not do it to this level without the revenue, like many indie Youtubers.
I absolutely do enjoy content that is financed through ads. I really, REALLY do like some of the stuff, honest. But it is also the case that the internet has been turning into a marketing hellscape for the last couple decades, we've gotten to a point in which engagement is optimized to such a degree that society itself is hurting from it. Politically and psychologically. The damage is hard to quantify, yet I can't fathom the opportunity cost not being well into the billions.
We'd be better off without that.
'Modern' Z-Machine games (the v5 version, compared to the original v3 one from Infocom) will allow you to do that and far more. By 'modern' I mean from the early 90s and up. Even more so with v8 games.
>This was also why people hated operating systems like MS-DOS, and why even all the Linux users reading this right now are doing so in a graphical user interface.
The original v3 Z-Machine parser (the raw one) was pretty limited compared to the v5 one, and even more so against games made with Inform 6 and the Inform 6 English library targeting v5.
Go try it yourself. The original Zork from MIT (Dungeon), converted into a v5 Z-Machine game:
https://iplayif.com/?story=https%3A%2F%2Fifarchive.org%2Fif-...
Spot the differences. For instance, you could both say 'take the rock' and, later, say 'drop it'.
   >take mat
   Taken.
 
   >drop it
   Dropped.
 
   >take the mat
   Taken.
 
   >drop mat
   Dropped.
 
   >open mailbox
   You open the mailbox, revealing a small leaflet.
 
   >take leaftlet
   You can't see any such thing.
 
   >take leaflet
   Taken.
 
  >drop it
   Dropped.
https://ifdb.org/viewgame?id=op0uw1gn1tjqmjt7
You can either download the Z8 file and play it with Frotz (Linux/BSD), WinFrotz, or Lectrote under Android and anything else. Also, online with the web interpreter.
Now, the 'excuses' about the terseness of the original Z3 parser are nearly void, because with PunyInform a lot of Z3-targeting games (for 8086 DOS PCs, C64s, Spectrums, MSX...) have a slightly improved parser compared to the original Zork.
- Atlas slurps the web to get more training data, bypassing Reddit blocks, Cloudflare blocks, paywalls, etc. It probably enriches the data with additional user signals that are useful.
- Atlas is an attempt to build a sticky product that users won't switch away from. An LLM or image model doesn't really have sticky attachment, but if it starts storing all of your history and data, the switching costs could become immense. (Assuming it actually provides value and isn't a gimmick.)
- Build pillars of an interconnected platform. Key "panes of glass" for digital lives, commerce, sales intent, etc. in the platformization strategy. The hardware play, the social network play -- OpenAI is trying to mint itself as a new "Mag 7", and Atlas could be a major piece in the puzzle.
- Eat into precious Google revenue. Every Atlas user is a decrease in Google search/ads revenue.
Response: Already achieved by OpenAI!
I guess Mag 7 is the new FAANG, not the mag-7 shotgun
https://i.postimg.cc/br7F8NLd/chat-GPT.png
I wonder when webmasters will take their gloves off and just start feeding AI crawlers porn and gore.
The complaint about the OpenAI browser seems to be it didn't show any links. I agree, that is a big problem. If you are just getting error prone AI output then it's pretty worthless.
:skull:
Except one: it gives them the default search engine and doesn’t let you change it.
I asked Atlas about this and it told me that’s true, the AI features are just a hook, this is about lock in.
Make of that what you will.
They didn't even change the codename that's displayed in the Settings page.
Just like it's bad news if one company owns the roads or the telecom infrastructure.
Governments need to prepare some strong regulation here.
(Of course this is still true if it's not strictly a monopoly)
Imagine a browser that lets people define sandboxes and scoped credentials that an LLM may access to complete certain tasks.
I'd love to have a browser download all of my utility bills for taxes. I'd set up a browser, limit it to a few URLs, give it access to a few credentials, and then let it rip.
> We left command-line interfaces behind 40 years ago for a reason
Man I still love command-line so much
> And it would go on like this for hours while you tried in vain to guess what the hell it wanted you to type, or you discovered the outdoors, whichever came first. [...] guess what secret spell they had to type into their computer to get actual work done
Well... docs and the "-h" do a pretty good job.
I think the same here: while there are docs ("please show me the correct way to do X"), the surface area is so large that the analogy still holds up, in that you might as well just be guessing for the right command.
Is it a webpage? Well... it displays in a browser...
But if I type Taylor Swift, I get links to her website, instagram, facebook, etc.
> Is it a webpage? Well... it displays in a browser...
This isn't Web 2.0, Anil. Things change.
> Well... it displays in a browser...
Yes, a slop browser.
I also need to laugh. Wasn't OpenAI just crying about people copying them not so long ago?
https://www.womenslaw.org/about-abuse/forms-abuse/emotional-...
I like anti-web phrase. I think it will be a next phase after all those web 2.0 and web x.0 things.
https://bsky.app/profile/kkarpieszuk.bsky.social/post/3m4cxf...
Let's flood the system with junk and watch it implode.
2.0 - algorithmic feeds of real content with no outbound links - stay in the wall
3.0 - slop infects rankings and feeds, real content gets sublimated
4.0 - algorithmic feeds become only slop
5.0 - no more feeds or rankings, but on demand generative streams of slop within different walled slop gardens
6.0 - 4D slop that feeds itself, continuously turning in on itself and regenerating
I read this in my TUI RSS reader lol
I have memory and training enabled. What I can objectively say about Atlas is that I’ve been using it and I’m hooked. It’s made me roughly twice as productive — I solved a particular problem in half the time because Atlas made it easy to discover relevant information and make it actionable. That said, affording so much control to a single company does make me uneasy.
With my repo connected via the GitHub app, I asked Atlas about a problem I was facing. After a few back-and-forth messages, it pointed me to a fork I might eventually have found myself — but only after a lot more time and trial-and-error. Maybe it was luck, but being able to attach files, link context from connected apps, and search across notes and docs in one place has cut a lot of friction for me.
> The amount of data they're gathering is unfathomable.
The author suggests that GPT continuously reads information like typed keystrokes, etc. I don't see why that's implied. And it wouldn't be new either, given tools like Windows Recall.
Because "it wouldn't be new either, given tools like Windows Recall".
Genuinely, why would they leave any data on the table if they don't have to? That's the entire purpose of the browser.
Because extracting and using that data may not be trivial, practically and legally speaking.
I am sure they use the chat input you send them for training. But to say that they transfer all the websites you visit to their servers, or that they monitor your keyboard input by continuously streaming your window or your keyboard events to their servers, is a stretch. All of that would be no small technical feat, and it would be noticeable in the created traffic.
My belief is that they simply hacked together an AI web-browser in the least amount of time with the least amount of effort, so that they can showcase another use for AI.
That would be a much simpler explanation than them building a personal surveillance tool that wants to know what you've typed and keeps track of the time you've spent looking for shoes.
So I guess the only logical next step for Big AI is to destroy the web, once they have squeezed every last bit out of it. Or at least make it dependent on them. Who needs news sites when OpenAI can do it? Why blog, when you can just prompt your BlogLLM with an idea? Why comment on blogs, when your agent will do it for you? All while avoiding child porn with 97% accuracy, something human-curated content surely cannot be trusted to do.
So I am 0% surprised.
I imagine a future where websites (like news outlets or blogs) will have something like a “100% human created” label on it. It will be a mark of pride for them to show off and they’ll attract users because of it
But this feels truly dystopian. We here on HN are all in our bubble; we know that AI responses are very prone to error and just great at mimicking. We can tell, more or less, when to use them and when not to. But when I talk to non-tech people in a normal city, not close to a tech hub, most of them treat ChatGPT as an all-knowing factual authority.
They have no idea of the conscious and unconscious bias in the responses, based on how we ask the questions.
Unfortunately, I think these are the majority of people.
If you combine all that with a shady Silicon Valley CEO under historic pressure to make OpenAI profitable after $64 billion in funding, one who regularly flirts with the US president, it seems entirely consistent to me that exactly what the author described is the goal. No matter the cost.
As we all feel that AI progress is stagnating and it's mainly the production cost of AI responses that is going down, this almost seems like the only out for OpenAI.
The article does taste a bit like a conspiracy theory to me, though.
GUIs emerged to make things easier for users to tell their computers what to do. You could just look at the screen and know that File > Save would save the file instead of remembering :w or :wq. They minimized friction and were polished to no end by companies like MSFT and AAPL.
Now that technology has got to a point where our computers now can bridge the gap between what we said and what we meant reasonably well, we can go back to CLIs. We keep the speed and expressiveness of typing but without the old rigidity. I honestly can't wait for the future where we evolve interfaces to things we previously only dreamt of before.
Particularly when you throw in agentic capabilities where it can feel like a roll of the dice if the LLM decides to use a special purpose tool or just wings it and spits out its probabilistic best guess.
The bridge would come from layering natural language interfaces on top of deterministic backends that actually do the tool calling. We already have models fine-tuned to generate JSON schemas. MCP is a good example of this kind of thing: it discovers tools and how to use them.
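As a sketch of that layering: the model's only job is to emit a structured call, and a dumb, deterministic dispatcher does the actual work. The tool name and JSON shape here are invented for illustration, and a real setup would parse with a proper JSON library rather than sed:

```shell
# Pretend this JSON came back from the model:
call='{"tool":"word_count","args":{"text":"the quick brown fox"}}'

# Deterministic dispatch: extract the tool name, then run real code.
tool=$(printf '%s' "$call" | sed 's/.*"tool":"\([^"]*\)".*/\1/')
text=$(printf '%s' "$call" | sed 's/.*"text":"\([^"]*\)".*/\1/')
case "$tool" in
  word_count) printf '%s\n' "$text" | wc -w ;;
  *) echo "unknown tool: $tool" >&2 ;;
esac
```

The natural language stays at the top; everything below the parse is ordinary, testable code.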
Of course, the real bottleneck would be running a model capable of this locally. I can't run any of the models actually capable of this on a typical machine. Till then, we're effectively digital serfs.
can never go back
Sounds like the browser did you a favor. Wonder if she'll be suing.
If they don't put AI in every tool, they won't get new training data.
>There were a tiny handful of incredible nerds who thought this was fun, mostly because 3D graphics and the physical touch of another human being hadn't been invented yet.
I can barely stomach it when John Oliver does it, but reading this sort of snark without hearing a British voice is too much for me.
Granted, that number does not equal the number of "nerds" who played the games, because the same player will probably have bought multiple titles if they enjoyed interactive fiction.
However, also keep in mind that some of the games in that table were only available after 1981, i.e., at a later point during the 1981-1986 time frame. Also, the 80s were a prime decade for pirating games, so more people will have played Infocom titles than the sales figures suggest - the document itself mentions this because they sold hint books for some titles separately.
[0] https://ia601302.us.archive.org/1/items/InfocomCabinetMiscSa...
What irks me the most about LLMs is when they lie about having followed your instructions to browse a site. And they keep lying, over and over again. For whatever reason, the ONE model that consistently does this is Gemini.
Worth reading to the end.
Welcome to capitalism.
I wish it wasn't like this, but this is the system we have. If you want OpenAI to do things differently, another company has to do it differently, show that doing it differently is even more profitable (see Apple and their privacy angle), and force OpenAI to change for competitive reasons.
Or figure out some way to trick our way into a constitutional convention so we can change the system.
2 - we didn't leave command-line interfaces behind 40 years ago
2 - That's an entirely different situation and you know it.
In reality, the rise of social media means the web failed a long time ago; it only serves a void not taken by mobile apps, and now by LLM agents.
Why do I need to read everything about Taylor Swift on her website if I don't know a single song of hers? (I actually do.)
I don't want a screaming website telling me about her best new album ever and her tours if the LLM knows I don't like pop music. And the other way around: if you like her, you'd want a different set of information. A website can't do that for you.
> “sometimes this tool completely fabricates content, gives me a box that looks like a search box, and shows me the fabricated content in a display that looks like a web page when I type in the fake search box”
If a human wrote that same article about Taylor Swift, would you say it completely fabricates content? Most “articles” on the web are just rewrites of someone else’s articles anyway and nobody goes after them as bad actors (they should).
The machine is suddenly the narrator, and it doesn’t cite him. When he calls Atlas “anti-web,” he’s really saying it is “anti-author”.
In a way though, how much do we need people to narrate these shifts for us? Isn't the point of these technologies that we are able to do things for ourselves rather than rely on mediators to interpret it for us? If he can be outcompeted by LLMs, does that not just show how shallow his shtick is?