DeepSeek-v3.1 - https://news.ycombinator.com/item?id=44976764 - Aug 2025 (253 comments)
Here's my summary of Carmack's talk from my bookmarks file:
> #video of John Carmack saying video games, even Atari, are nowhere close to being solved by #neural-networks yet, which is why he’s doing Keen Technologies. They’re going to open-source their RL agent! Which they’re demoing playing an Atari 2600 (emulated on a Raspberry Pi) with an Atari 2600 joystick and servos (“Robotroller”), running on just a gamer laptop with a 4090. (No video of the demo, though, just talking head and slides.) He says now he’s using PyTorch just like anyone else instead of implementing his own matrix multiplies. Mentions that the Atari screen is 160×210, which I’d forgotten. They’re using April-tag fiducials so their camera can find the screen, but they patch them into the video feed instead of trying to get the lighting right for physical stickers. He says the motion-to-photons #latency at Oculus had to be below 20 ms to avoid nausea (26'58”). Finding the Atari scores on the screen was surprisingly hard. #retrocomputing
Sam Altman: I do guess that a lot of the world gets covered in data centers over time.
Theo Von: Do you really?
Altman: But I don’t know, because maybe we put them in space. Like, maybe we build a big Dyson sphere around the solar system and say, “Hey, it actually makes no sense to put these on Earth.”
Cassandra wasn't evil or crazy, she just had bad news.
1 kcal ≈ 4184 joules
1 watt-hour = 3600 joules
The Taco Bell Double Decker has 320 kcal ≈ 1.34M joules
1.34M joules ≈ 0.37 kWh
If each AI prompt takes 0.3 Wh, you could do ca. 1200 prompts per Double Decker. Which is a lot.
If humans had evolved to do prompts (while retaining everything else that makes human thought human), that number doesn't sound that big.
OTOH, if LLMs had to do everything humans need energy for, that number would be waaay too big for LLMs.
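The arithmetic above can be checked in a few lines. The 320 kcal figure is Taco Bell's nominal number (sources vary between about 310 and 340), and 0.3 Wh/prompt is the Epoch AI GPT-4o estimate discussed below, so everything here is a rough sketch:

```python
# Back-of-envelope check: how many 0.3 Wh prompts fit in one Double Decker?
KCAL_TO_J = 4184   # 1 thermochemical kilocalorie ~ 4184 J
WH_TO_J = 3600     # 1 watt-hour = 3600 J

taco_kcal = 320                       # nominal Double Decker energy
taco_j = taco_kcal * KCAL_TO_J        # ~ 1.34e6 J
taco_wh = taco_j / WH_TO_J            # ~ 372 Wh ~ 0.37 kWh

wh_per_prompt = 0.3                   # Epoch AI's GPT-4o estimate
prompts_per_taco = taco_wh / wh_per_prompt
print(f"{taco_wh:.0f} Wh per taco -> {prompts_per_taco:.0f} prompts")
```

which lands at roughly 1240 prompts, consistent with the "ca. 1200" above.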
----
Humans don't even have an efficient energy input system. How many of those 1.34M joules actually get assimilated? Silicon is a lot more energy-efficient, because a lot of effort has been put into making it so, and it's fed raw energy. It doesn't need to process food the way humans do; humans already did that for it when they captured the energy as electricity.
----
I'm sure there are more ways of making the comparison fairer, but I doubt your parent was trying to prove their claim with such deep research. So let me try another angle: no human can burn through as much energy as the top hosted LLMs do for one prompt, in as little time.
"Over time" clearly means that he's talking about the far future, not the next few years when they're cleaning up the low-hanging fruit. Rather than "absurd ... fantastic and impossible" I would describe the Dyson-sphere outcome as inevitable, unless humanity goes extinct within a few centuries. Maybe you thought he meant next March?
In https://epoch.ai/gradient-updates/how-much-energy-does-chatg..., Josh You, Alex Erben, and Ege Erdil estimate that a typical GPT-4o query uses 0.3 watt hours, based on the estimates that it's a 400-billion-parameter MoE model in which ¼ of the parameters are activated for a given query, so each token requires 200 gigaflops (100 billion multiplies and 100 billion adds I guess), a typical query produces 500 output tokens (about a page of text), and they're running on 1-petaflop/s H100 GPUs, with the overall cluster consuming at peak 1500 watts per GPU, with a utilization rate of 10%, and that the cluster on average consumes 70% of peak power. This works out to 1.05 kilojoules, or about 0.25 kilocalories, the amount of food energy in one gram of carbohydrates.
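The chain of estimates in that article can be re-derived directly from the stated assumptions (all of the constants below are theirs, not measurements):

```python
# Re-derive Epoch AI's ~0.3 Wh / ~1.05 kJ per GPT-4o query estimate.
params_active = 100e9                  # 1/4 of 400B MoE parameters active
flops_per_token = 2 * params_active    # 1 multiply + 1 add per parameter = 200 GFLOP
tokens_per_query = 500                 # ~ one page of output
flops_per_query = flops_per_token * tokens_per_query   # 1e14 FLOP

peak_flops = 1e15                      # H100 at ~1 PFLOP/s
utilization = 0.10                     # assumed utilization rate
seconds_per_query = flops_per_query / (peak_flops * utilization)  # 1.0 s

peak_watts_per_gpu = 1500              # cluster power per GPU at peak
avg_power_fraction = 0.70              # cluster averages 70% of peak
joules_per_query = seconds_per_query * peak_watts_per_gpu * avg_power_fraction

print(f"{joules_per_query:.0f} J = {joules_per_query/3600:.2f} Wh "
      f"= {joules_per_query/4184:.2f} kcal")
```

This reproduces the 1.05 kJ (about 0.29 Wh, 0.25 kcal) figure, confirming the numbers are internally consistent.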
So, a 320-kcal Double DeckerⓇ Taco™ works out to 1280 GPT-4o queries answered, and a standard 2000 kcal/day diet works out to 8000 GPT-4o queries per day, if we believe You, Erben, and Erdil's estimate. If you're looking for GPT-4o-quality output and producing fewer than 8000 pages of writing per day, you are less energy-efficient than GPT-4o.
______
† although there's no guarantee they would, and it's not obvious that they would be a million times more intelligent, or even 10% more intelligent; they might just post a million times more poorly-thought-out sarcastic comments on forums due to their poor impulse control, resulting in long pointless arguments where they insult each other and dismiss obviously correct ideas as absurd because they come from people they hate
https://thezvi.substack.com/p/deepseek-v31-is-not-having-a-m...