I recently worked on something very complex that I don't think I would have been able to tackle as quickly without AI: a hierarchical graph layout algorithm based on the Sugiyama framework, using Brandes-Köpf for node positioning. I had no prior experience with it (and I went in clearly underestimating how complex it was), and AI was a tremendous help in getting a basic understanding of the algorithm, its many steps and sub-algorithms, and the subtle interactions and unspoken assumptions in it. But letting it write the actual code was a mistake. That kept me from understanding the intricacies, from truly engaging with the problem, which led me to keep relying on the AI to fix issues; but at that point the AI clearly also had no real idea what it was doing, and just made things worse.
So instead of letting the AI see the real code, I switched from the Copilot IDE plugin to the standalone Copilot 365 app, where it could explain the principles behind every step, and I would debug and fix the code and develop actual understanding of what was going on. And I finally got back into that coding flow again.
So don't let the AI take over your actual job, but use it as an interactive encyclopedia. That works much better for this kind of complex problem.
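For anyone curious about the framework mentioned above, here is a deliberately simplified, purely illustrative sketch of the Sugiyama phases (layering, crossing reduction, coordinate assignment). It omits cycle removal and dummy-node insertion, and the naive coordinate step only stands in for Brandes-Köpf; none of this is the commenter's actual code.

```python
from collections import defaultdict

def longest_path_layering(nodes, edges):
    """Layer each node by the longest path that reaches it (sources get layer 0)."""
    preds = defaultdict(list)
    for u, v in edges:
        preds[v].append(u)
    layer = {}
    def depth(n):
        if n not in layer:
            layer[n] = 1 + max((depth(p) for p in preds[n]), default=-1)
        return layer[n]
    for n in nodes:
        depth(n)
    return layer

def barycenter_sweep(layers, edges):
    """One top-down pass of barycenter crossing reduction."""
    pos = {n: i for lay in layers for i, n in enumerate(lay)}
    preds = defaultdict(list)
    for u, v in edges:
        preds[v].append(u)
    for lay in layers[1:]:
        lay.sort(key=lambda n: sum(pos[p] for p in preds[n]) / len(preds[n])
                 if preds[n] else pos[n])
        for i, n in enumerate(lay):
            pos[n] = i
    return layers

def sugiyama_sketch(nodes, edges):
    """Toy pipeline: layering -> ordering -> naive grid coordinates."""
    layer = longest_path_layering(nodes, edges)
    layers = [[] for _ in range(max(layer.values()) + 1)]
    for n in nodes:
        layers[layer[n]].append(n)
    layers = barycenter_sweep(layers, edges)
    # Real implementations replace this step with Brandes-Köpf alignment/compaction.
    return {n: (i, d) for d, lay in enumerate(layers) for i, n in enumerate(lay)}

print(sugiyama_sketch(["a", "b", "c", "d"],
                      [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")]))
```

Even this toy version hints at where the subtle interactions come from: each phase bakes in assumptions that the next phase silently depends on.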
Writing code has just typically been how I've needed to solve those problems.
That has increasingly shifted to "just" reviewing code and focusing on the architecture and domain models.
I get to spend more time on my actual job.
Yes, and there's often a benefit to having a human have an understanding of the concrete details of the system when you're trying to solve problems.
> That has increasingly shifted to "just" reviewing code
It takes longer to read code than to write code if you're trying to get the same level of understanding. You're gaining time by building up an understanding deficit. That works for a while, but at some point you have to go burn the time to understand it.
Trade-offs around "room to do more of other things" are an interesting and recurring theme of these conversations. Like two ends of a spectrum: on one end the ideal process-oriented artisan taking the long way to mastery, on the other the trailblazer moving fast and discovering entirely new things.
Comparing to the encyclopedia example: I'm already seeing my own skillset in researching online has atrophied and become less relevant. Both because the searching isn't as helpful and because my muscle memory for reaching for the chat window is shifting.
Yes, that is my experience. I have done some C# projects recently, a language I am not familiar with. I used the interactive encyclopedia method, "wrote" a decent amount of code myself, but several thousand lines of production code later, I don't think I know C# any better than when I started.
OTOH, it seems that LLMs are very good at compiling pseudocode into C#. And I have always been good at reading code, even in unfamiliar languages, so it all works pretty well.
I think I have always worked in pseudocode inside my head. So with LLMs, I don't need to know any programming languages!
If I understand a problem and AI is just helping me write or refactor code, that’s all good. If I don’t understand a problem and I’m using AI to help me investigate the codebase or help me debug, that’s okay too. But if I ever just let the AI do its thing without understanding what it’s doing and then I just accept the results, that’s where things go wrong.
But if we’re serious about avoiding the trap of AI letting us write working code we don’t understand, then AI can be very useful. Unfortunately the trap is very alluring.
A lot of vibe coding falls into the trap. You can get away with it for small stuff, but not for serious work.
It's similar to other abstractions in this way, but on a larger scale due to LLMs having so many potential applications. And of course, due to the non-determinism.
I am sorry for being direct but you could have just kept it to the first part of that sentence. Everything after that just sounds like pretentious name dropping and adds nothing to your point.
But I fully agree, for complex problems that require insight, LLMs can waste your time with their sycophancy.
Seriously though, I appreciated it because my curiosity got the better of me and I went down a quick rabbit hole on Sugiyama, comparative graph algorithms, and learning about node positioning as a particular dimension of graph theory. Sure, nothing groundbreaking, but it added a shallow amount to my broad knowledge base of theory that continues to prove useful in our business (often knowing what you don't know is the best initiative for learning). So yeah man, let's keep name-dropping pretentious technical details, because that's half the reason I surf this site.
And yes, I did use ChatGPT to familiarize myself with these concepts briefly.
Where I'm skeptical of this study:
- 54 participants, only 18 in the critical 4th session
- 4 months is barely enough time to adapt to a fundamentally new tool
- "Reduced brain connectivity" is framed as bad - but couldn't efficient resource allocation also be a feature, not a bug?
- Essay writing is one specific task; extrapolating to "cognition in general" seems like a stretch
Where the study might have a point:
Previous tools outsourced partial processes - calculators do arithmetic, Google stores facts. LLMs can potentially take over the entire cognitive process from thinking to formulating. That's qualitatively different.
So am I ideologically inclined to dismiss this? Maybe. But I also think the honest answer is: we don't know yet. The historical pattern suggests cognitive abilities shift rather than disappear. Whether this shift is net positive or negative - ask me again in 20 years.
[Edit]: Formatting
They were arguably right. Pre-literate people could memorise vast texts (Homer's work, Australian Aboriginal songlines). Pre-Gutenberg, memorising reasonably large texts was common. See, e.g., the book Memory Craft.
We're becoming increasingly like the Wall E people, too lazy and stupid to do anything without our machines doing it for us, as we offload increasing amounts onto them.
And it's not even that machines are always better, they only have to be barely competent. People will risk their life in a horribly janky self driving car if it means they can swipe on social media instead of watching the road - acceptance doesn't mean it's good.
We have about 30 years of the internet being widely adopted, which I think is roughly similar to AI in many ways (both give you access to data very quickly). Economists suggest we are in many ways no more productive now than when Homer Simpson could buy a house and raise a family on a single income - https://en.wikipedia.org/wiki/Productivity_paradox
Yes, it's too early to be sure, but the internet, Google and Wikipedia arguably haven't made the world any better (overall).
That said:
TV very much is the idiot box. Not necessarily because of the TV itself but rather because of what's being viewed. An actually engaging and interesting show/movie is good, but last time I checked, it was mostly filled with low-quality trash and constant news bombardment.
Calculators do do arithmetic, and if you asked me to do the kind of calculations I had to do in high school by hand today, I wouldn't be able to. Simple calculations I do in my head, but my ability to do more complex ones has diminished. That's down to me not doing them as often, yes, but also because for complex ones I simply whip out my phone.
I got scared by how awfully my junior (middle? 5-11) school mathematics had slipped when helping my 9-year-old boy with his homework yesterday.
I literally couldn't remember how to carry the 1 when doing subtractions of 3-digit numbers! Felt literally idiotic having to ask an LLM for help. :(
What I have asked my children to do very often is back-of-the-envelope multiplications and other computations. That really helped them to get a sense of the magnitude of things.
In my opinion, they've almost always been right.
In the past two decades, we've seen the less-tech-savvy middle managers who devalued anything done on a computer. They seemed to believe that doing graphic design or digital painting was just pressing a few buttons on the keyboard and the computer would do the job for you. These people were constantly mocked in online communities.
And strangely, in the end those tech muggles were the insightful ones.
TV is the uber idiot box, the overlord of the army of portable smart idiot boxes.
Yes, but also the extra wrinkle that this whole thing is moving so fast that 4 months old is borderline obsolete. Same into the future, any study starting now based on the state of the art on 22/01/2026 will involve models and potentially workflows already obsolete by 22/05/2026.
We probably can't ever adapt fully when the entire landscape is changing like that.
> Previous tools outsourced partial processes - calculators do arithmetic, Google stores facts. LLMs can potentially take over the entire cognitive process from thinking to formulating. That's qualitatively different.
Yes, but also consider that this is true of any team: All managers hire people to outsource some entire cognitive process, letting themselves focus on their own personal comparative advantage.
The book "The Last Man Who Knew Everything" is about Thomas Young, who died in 1829; since then, the sum of recorded knowledge has broadened too much for any single person to learn it all, so we need specialists, including specialists in managing other specialists.
AI is a complement to our own minds with both sides of this: Unlike us, AI can "learn it all", just not very well compared to humans. If any of us had a sci-fi/fantasy time loop/pause that let us survive long enough to read the entire internet, we'd be much more competent than any of these models, but we don't, and the AI runs on hardware which allows it to.
For the moment, it's still useful to have management skills (and to know about and use Popperian falsification rather than verification) so that we can discover and compensate for the weaknesses of the AI.
Here's the key difference for me: AI does not currently replace full expertise. By contrast, there is no "higher level of storage" that books can't handle and only human memory can.
I need a senior to handle AI with assurances. I get seniors by having juniors execute supervised lower risk, more mechanical tasks for years. In a world where AI does that, I get no seniors.
Shift to what? This? https://steve-yegge.medium.com/welcome-to-gas-town-4f25ee16d...
a) serious, but we live on different planets
b) serious with the idea, tongue-in-check in the style and using a lot of self-irony
c) an ironic piece with some real idea
d) he is mocking AI maximalists
He may have been right... Maybe our minds work in a different way now.
I think a better framing would be "abusing (using it too much or for everything) any new tool/medium can lead to negative effects". It is hard to clearly define what is abuse, so further research is required, but I think it is a healthy approach to accept there are downsides in certain cases (that applies for everything probably).
Writing did eliminate the need for memorization. How many people could quote a poem today? When oral history was predominant, it was necessary in each tribe for someone to learn the stories. We have much less of that today. Writing preserves accuracy much more (up to conquerors burning down libraries, whereas it would have taken genocide before), but to hear a person stand up and quote Desiderata from memory is a touching experience to the human condition.
Scribes took over that act of memorization. Copying something lends itself to memorization. If you have ever volunteered extensively for Project Gutenberg you can also witness a similar experience: reading for typos solidifies the story into your mind in a way that casual reading doesn't. In losing scribes we lost the prioritization of texts and this class of person with intimate knowledge of important historical works. With the addition of copyright we have even lost some texts. We gained the higher availability of works and lower marginal costs. The lower marginal costs led to...
Pulp fiction. I think very few people (but I would be disappointed if it was no one) would argue that Dan Brown's da Vinci Code is on the same level as War and Peace. From here magazines were created, even cheaper paper, rags some would call them (or use that to refer to tabloids). Of course this also enabled newspapers to flourish. People started to read things for entertainment, text lost its solemnity. The importance of written word diminished on average as the words being printed became more banal.
TV and the internet led to the destruction of printed news, and so on. This is already a wall of text so I won't continue, but you can see how it goes:
Technology is a double edged sword, we may gain something but we also can and did lose some things. Whether it was progress or not is generally a normative question that often a majority agrees with in one sense or another but there are generational differences in those norms.
In the same way that overuse of a calculator leads to atrophy of arithmetic skills, overuse of a car leads to atrophy of walking muscles, why wouldn't overuse of a tool to write essays for you lead to atrophy of your ability to write an essay? The real reason to doubt the study is because its conclusion seems so obvious that it may be too easy for some to believe and hide poor statistical power or p-hacking.
I also find exhausting the Socrates reference that's ALWAYS brought up in these discussions. It is not the same. Losing the collective ability to recite a 10,000-word poem by heart because of books is not the same as stopping thinking because an AI is doing the thinking for you.
We keep adding automation layers on top of the previous ones. The end goal would be _thinking_ of something and have it materialized in computer and physical form. That would be the extreme. Would people keep comparing it to Socrates?
Were they? It seems that often the fears came true, even Socrates’
It hugely enhanced synthetic and contextual memory, which was a huge development.
AI has the potential to do something similar for cognition. It's not very good at it yet, but externalised cognition has the potential to be transformative in ways we can't imagine - in the same way Socrates couldn't imagine Hacker News.
Of course we identify with cognition in a way we didn't do with rote memory. But we should possibly identify more with synthetic and creative cognition - in the sense of exploring interesting problem spaces of all kinds - than with "I need code to..."
Perhaps he could. If there’s an argument to be made against writing, social media (including HN) is a valid one.
This actually does include a crazy amount of long-form LaTeX expositions on a bunch of projects I'm having a blast iterating on. I must be experiencing what it's almost like not having ADHD.
Previously I had a mountain of things I'd have liked to do, but the reality of getting through the boring scaffolding to the interesting parts just wasn't worth it.
With LLMs, I can rapidly try things and I don't need to worry about the scaffolding. Even when the scaffolding wouldn't really have taken much time, but even 5-10 minutes of a mundane task could cause me to procrastinate on it forever.
The study shows that the brain is not getting used. We will get stupid in the same way that people with office jobs get unhealthy if they don't deliberately exercise.
I haven't been diagnosed with ADHD or anything, but I also haven't been tested for it. It's something I have considered, but I think it's pretty underdiagnosed in Spain.
That must be how normal people feel.
1: https://www.catharsisinsight.com 2: https://ashleyjuavinett.com
How about some more info on what their main conclusions are?
"Your Brain On Chat GPT" Paper Analysis
In this transcript, neuroscientist Ashley and psychologist Cat critically analyze a controversial paper titled "Your Brain On Chat GPT" that claims to show negative brain effects from using large language models (LLMs).
Key Issues With the Paper:
Misleading EEG Analysis:
- The paper uses EEG (electroencephalography) to claim it measures "brain connectivity" but misuses technical methods
- EEG is a blunt instrument that measures thousands of neurons simultaneously, not direct neural connections
- The paper confuses correlation of brain activity with actual physical connectivity

Poor Research Design:
- Small sample size (54 participants with many dropouts)
- Unclear time intervals between sessions
- Vague instructions to participants
- Controlled conditions don't represent real-world LLM use

Overstated Claims:
- Invented terms like "cognitive debt" without defining them
- Makes alarmist conclusions not supported by data
- Jumps from limited lab findings to broad claims about learning and cognition

Methodological Problems:
- Methods section includes unnecessary equations but lacks crucial details
- Contains basic errors like incorrect filter settings
- Fails to cite relevant established research on memory and learning
- No clear research questions or framework

The Experts' Conclusion:
"These are questions worth asking... I do really want to know whether LLMs change the way my students think about problems. I do want to know if the offloading of cognitive tasks changes my own brain and my own cognition... We need to know these things as a society, but to pretend like this paper answers those questions is just completely wrong."
The experts emphasize that the paper appears designed to generate headlines rather than provide sound scientific insights, with potential conflicts of interest among authors who are associated with competing products.
The hosts condemn the study’s "bafflingly weak" logic and ableist rhetoric, and advise skepticism toward "science communicators" who might profit from selling hardware or supplements related to their findings: one of the paper's lead authors, Nataliya Kosmyna, is associated with the MIT Media Lab and the development of AttentivU, a pair of glasses designed to monitor brain activity and engagement. By framing LLM use as creating a "cognitive debt," the researchers create a market for their own solution: hardware that monitors and alerts the user when they are "under-engaged". The AttentivU system can provide haptic or audio feedback when attention drops, essentially acting as the "scaffold" for the very cognitive deficits the paper warns against. The research is part of the "Fluid Interfaces" group at MIT, which frequently develops Brain-Computer Interface (BCI) systems like "Brain Switch" and "AVP-EEG". This context supports the hosts' suspicion that the paper’s "cognitive debt" theory may be designed to justify a need for these monitoring tools.
Literacy, books, saving your knowledge somewhere else removes the burden of remembering everything in your head. But they don't insert themselves into any of those processes. So it's an immensely bad metaphor. A more apt one is GPS, which only costs you the practice.
That's where LLMs come in, and obliterate every single one of those pillars of any mental skill. You never have to learn a thing deeply, because it's doing the knowing for you. You never have to practice, because the LLM does all the writing for you. And of course, when it's wrong, you're not wrong. So you learn nothing.
There are ways to exploit LLMs to make your brain grow instead of shrink. You could make them into personalized teachers, catering to each student at their own rhythm. Make them give you problems, instead of ready-made solutions. Only employ them for tasks you already know how to do perfectly. Don't depend on them.
But this isn't the future OpenAI or Anthropic are gonna gift us. Not today, and not in a hundred years, because it's always gonna be more profitable to run a sycophant.
If we want LLMs to be the "better" instead of the "worse", we'll have to fight for it.
Yes, this is one of my favorite prompting styles.
If you're stuck on a problem, don't ask for a solution, ask for a framework for addressing problems of that type, and then work through it yourself.
Can help a lot with coming unstuck, and the thoughts are still your own. Oftentimes you end up not actually following the framework in the end, but it helps get the ball rolling.
My wife had a similar experience. She had a college project, a group one, where they had to drive up and down some roads and write about it. She bought a map, and noticed that after reading it she was more knowledgeable about the area than her sister, who also grew up in the same area.
I think AI is a great opportunity for learning more about the subject in question from books, and maybe even from the AI itself by asking for sources; always validate your intel against more authoritative sources. The AI just saved you 10 minutes? You can spend those 10 minutes reading the source material.
"+4 and then -2 and then +6 and then -3. Aha! All makes sense! Cannot repeat the digit differences, and need to be whole numbers, so going to the next higher even number, which is 6, which is 3 when halved!"
And then I am kinda proud my brain still works, even if the found "pattern" is hilariously arbitrary.
What the druids/priests were really decrying was that people spent less time and attention on them. Religion was the first attention economy.
Funny enough, the reason he gave against books has now finally been addressed by LLMs.
The kids are using ChatGPT for simple maths...
On a side note, the most hilarious part of it was when I asked Gemini to do something for me in Google Sheets and it kept referring to it as Excel. Even after I corrected it.
It’s cheap, easy, and quite effective to passively learn the maps over the course of time.
My similar ‘hack’ for LLMs has been to try to “race” the AI. I’ll type out a detailed prompt, then go dive into solving the same problem myself while it chews through thinking tokens. The competitive nature of it keeps me focused, and it’s rewarding when I win with a faster or better solution.
I also wanted to mention that just spending some time looking at the maps and comparing differences in each service's suggested routes can be helpful for developing direction awareness of a place. I think this is analogous to not locking yourself into a particular LLM.
Lastly, I know that some apps might have an option to give you only alerts (traffic, weather, hazards) during your usual commute so that you're not relying on turn-by-turn instructions. I think this is interesting because I had heard that many years ago, Microsoft was making something called "Microsoft Soundscape" to help visually impaired users develop directional awareness.
> some cognitive load
That's the entire point of it though, to make you more aware of where you are and which way you should go. It is hard to gain location awareness and get better at navigating without extra cognitive load. You have to actively train your brain to get better; there is no easy way that I know of.
The first chapter goes into human navigation and it gives this exact suggestion, locking the North up, as a way to regain some of the lost navigational skills.
I've pretty much always had GPS nav locked to North-Up because of this experience.
I was shocked into using it when I realized that when using the POV GPS cam, I couldn't even tell you which quadrant of the city I just navigated to.
I wish the north-up UX were more polished.
Living in a city where phone-snatching thieves are widely reported built my habit of memorising the next couple of steps quickly (e.g. 2nd street on the left, then right by the station), then looking out for them without the map. North-Up helps anyway because you don't have to separately figure out which erratic direction the magnetic compass has picked this time (maybe it's to do with the magnetic stuff I EDC).
It's amazing to see how he navigates the city. But however amazing it is, he's only correct perhaps 95 times out of 100. And the number will only go down as he gets older. Meanwhile he has the 99.99% correct answer right in the front panel.
For this experience, I am not sure whether people really don't know regularly taken routes, or whether they just completely lack confidence in their familiarity with them.
https://www.nature.com/articles/s41598-020-62877-0
This is rather scary. Obviously, it makes me think of my own personal over-reliance on GPS, but I am really worried about a young relative of mine, whose car will remain stationary for as long as it takes to get a GPS lock... indefinitely.
I have to visit a place several times and with regularity to remember it. Otherwise, out it goes. GPS has made this a non-issue; I use it frequently.
For me, however, GPS didn't cause the problem. I was driving for 5 or 6 years before it became ubiquitous.
Not sure how that maps onto LLM use. I have avoided it almost completely because I've seen colleagues start to fall into really bad habits (like spending days adjusting prompts to try and get them to generate code that fixes an issue that we could have worked through together in about two hours). I can't see an equivalent way to not just start to outsource your thinking...
I saw this first hand with coworkers. We would have to navigate large buildings. I could easily find my way around while others did not know to take a left- or right-hand turn off the elevators.
That ability has nothing to do with GPS. Some people need more time for their navigation skills to kick in. Just like some people need to spend more time on Math, Reading, Writing, ... to be competent compared to others.
It's a tool, and this study at most indicates that we don't use as much brain power for the specific task of coding. But do they look into, for instance, maintenance or management of code?
As that is what you'll be relegated to when vibe coding.
Back when it came out, it was all the rage at my company and we were all trying it for different things. After a while, I realized, if people were willing to accept the bullshit that LLMs put out, then I had been worrying about nothing all along.
That, plus getting an LLM to write anything with meaning takes putting the meaning in the prompt, pushed me to finally stop agonizing over emails and just write the damn things as simply and concisely as possible. I don't need a bullshit engine inflating my own words to say what I already know, just to have someone on the other end use the same bullshit engine to remove all that extra fluff to summarize. I can just write the point straight away and send it immediately.
You can literally just say anything in an email and nobody is going to say it's right or wrong, because they themselves don't know. Hell, they probably aren't even reading it. Most of the time I'm replying just to let someone know I read their email so they don't have to come to my office later and ask me if I read the email.
Every time someone says the latest release is a "game changer", I check back out of morbid curiosity. Still don't see what games have changed.
Accumulation of cognitive debt when using an AI assistant for essay writing task - https://news.ycombinator.com/item?id=44286277 - June 2025 (426 comments)
And asbestos and lead paint was actually useful.
As LLM use normalizes for essay writing (email, documentation, social media, etc), a pattern emerges where everyone uses an LLM as an editor. People only create rough drafts and then have their "editor" make it coherent.
Interestingly, people might start using said editor prompts to express themselves, causing an increased range in distinct writing styles. Despite this, vocabulary and semantics as a whole become more uniform. Spelling errors and typos become increasingly rare.
In parallel, people start using LLMs to summarize content in a style they prefer.
Both sides of this gradually converge. Content gets explicitly written in a way that is optimized for consumption by an LLM, perhaps a return to something like the semantic web. Authors write content in a way that encourages a summarizing LLM to summarize as the author intends for certain explicit areas.
Human languages start to evolve in a direction that could be considered more coherent than before, and perhaps less ambiguous. Language is the primary interface an LLM uses with humans, so even if LLM use becomes baseline for many things, if information is not being communicated effectively then an LLM would be failing at its job. I'm personifying LLMs a bit here but I just mean it in a game theory / incentive structure way.
We're already seeing people use AI to express themselves in several contexts, but it doesn't lead to an increased range of styles. It leads to one style, the now-ubiquitous upbeat LinkedIn tone.
Theoretically we could see diversification here, with different tools prompting towards different voices, but at the moment the trend is the opposite.
Guttural vocalizations accompanied by frantic gesturing towards a mobile device, or just silence and showing of LLM output to others?
That said, if most people turn into hermits and start living in pods around this period, then I think you would be in the right direction.
While sometimes I do dump a bunch of scratch work and ask for it to be transformed into organized thought, more often I find that I use LLM output the opposite way.
Give a prompt. Save the text. Reroll. Save the text. Change the prompt, reroll. Then go through the heap of vomit to find the diamonds. Sort of a modern version of "write drunk, edit sober", with the LLM being the alcohol in the drunk half of me. It can work as a brainstorming step to turn fragments of thought into a bunch of drafts of thought, then to be edited down into elegant thought. Asking the LLM to synthesize its drafts usually discards the best nuggets for lesser variants.
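For what it's worth, the loop is simple enough to script. Here's a minimal sketch, with `generate()` as a placeholder for whichever chat API you actually call; nothing below is a real client, just an illustration of the reroll-and-save workflow described above.

```python
from pathlib import Path

def generate(prompt: str) -> str:
    """Placeholder for your actual LLM call."""
    raise NotImplementedError("plug in a real client here")

def collect_drafts(prompts, rerolls=3, out_dir="drafts"):
    """Save every reroll of every prompt; the 'edit sober' pass stays manual."""
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    for p_idx, prompt in enumerate(prompts):
        for take in range(rerolls):
            (out / f"prompt{p_idx}_take{take}.txt").write_text(generate(prompt))
```

Sifting the pile by hand afterwards is the point; automating that step is exactly where the better nuggets tend to get discarded.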
If you are feeling over-reliant on these tools, a quick fix that's worked for me is to have real conversations with real people. Organise a coffee date if you must.
If you give up your hands-on interaction with a system, you will lose your insight about it.
When you build an application yourself, you know every part of it. When you vibe code, trying to debug something in there is a black box of code you've never seen before.
That is one of the concerns I have when people suggest that LLMs are great for learning. I think the opposite: they're great for skipping 'learning' and just getting the results. Learning comes from doing the grunt work.
I use LLMs to find stuff often, when I'm researching or I need to write an ADR, but I do the writing myself, because otherwise it's easy to fall into the trap of thinking that you know what the 'LLM' is talking about, when in fact you are clueless about it. I find it harder to write about something I'm not familiar with, and then I know I have to look more into it.
Seems to focus only on the first part and not on the other end of it.
A similar mess can be found in `Figure 34.`, with the added bonus of "DO NOT MAKE MISTAKES!" and "If you make a mistake you'll be fined $100".
Also, why are all of these research papers always using such weak LLMs to do anything? All of this makes their results very questionable, even if they mostly agree with "common intuition".
Thinking everything ML produces is valid is just short-circuiting the brain.
I see AI wars as creating coherent stories. Company X starts using ML and they believe what was produced is valid and can grow their stock. Reality is that Company Y poisoned the ML and the product or solution will fail, not right away but over time.
I don't know that the same makes as much sense to evaluate in an essay context, because it's not really the same. I guess the equivalent would be having an existing essay (maybe written by yourself, maybe not) and using AI to make small edits to it like "instead of arguing X, argue Y then X" or something.
Interestingly, I find myself doing a mix of both "vibing" and more careful work. The other day I used it to update some code that I cared about, wanted to understand better, and was more engaged in, but also simultaneously to make a dashboard for looking at that code's output, which I didn't care about at all so long as it worked.
I suspect that the vibe coding would be more like drafting an essay from the mental engagement POV.
Jeremy Howard argues that we should use LLMs to help us learn, once you let it reason for you then things go bad and you start getting cognitive debt. I agree with this.
I wouldn't ask Cursor to go off and write software from scratch that I need to take ownership of, but I'm reasonably comfortable at this point having it make small changes under direction and with guidance.
The project I mentioned above was adding otel tracing to something, and it wrote a trace-viewing UI that has all the features I need and works well, without me having to spend hours getting it set up.
I have actually been improving in other areas instead, like design, general cleanliness of the code, future extensibility, and bug prediction.
My brain is not 'normal' either so your mileage might vary.
"Exactly!"
This doesn't seem like a clear problem. Perhaps people can accomplish more difficult tasks with LLM assistance, and in those more difficult tasks still see full brain engagement?
Using less brain power for a better result doesn't seem like a clear problem. It might reveal shortcomings in our education system, since these were SAT-style questions. I'm sure calculator users experience the same effects vs mental arithmetic.
The study seems interesting, and my confirmation bias also does support it, though the sample size seems quite small. It definitely is a little worrisome, though framing it as being a step further than search engine use makes it at least a little less concerning.
We probably need more studies like this, across more topics and with larger sample sizes, but if we're all forced to use LLMs at work, I'm not sure how much good it will do in the end.
It also goes against the main ethos of the AI sect to "stress-test" the AI against everything and everyone, so there's that.
As long as you're vetting your results just like you would any other piece of information on the internet then it's an evolution of data retrieval.
This is just what AI companies say so they are not held responsible for any legal issues. If a person is searching for a summary of a paper, surely they don't have time to vet the paper.
Carson Gross sure knows how to stay in character.
- Socrates on Writing.
The last one I saw was about smartphone users who do a test and then quit their phone for a month and do the test again and surprisingly do better the second time. Can anyone tell me why they might have paid more attention, been more invested, and done better on the test the second time round right after a month of quitting their phone?
/s
Incidentally how I feel about React regardless of LLMs. Putting Claude on top is just one more incomprehensible abstraction.
Software CEOs think about this and rub their hands together at all the labor costs they will save creating apps, without thinking one step further and realizing that once you don't need developers to build the majority of apps, your would-be customers also don't need the majority of apps at all.
They can have an LLM build their own customized app (if they need to do something repeatedly, or just have the LLM one-off everything if not).
Or use the free app that someone else built with an LLM as most app categories race to the moatless bottom.
A door has been opened that can't be closed and will trap those who stay too long. Good luck!
I do use them, and I also still do some personal projects and such by hand to stay sharp.
Just: they can't mint any more "pre-AI" computer scientists.
A few outliers might get it and bang their head on problems the old way (which is what, IMO, yields the problem-solving skills that actually matter) but between:
* Not being able to mint any more "pre-AI" junior hires
And, even if we could:
* Great migration / Covid era overhiring and the corrective layoffs -> hiring freezes and few open junior reqs
* Either AI or executives' misunderstandings of it and/or use of it as cover for "optimization" - combined with the Nth wave of offshoring we're in at the moment -> US hiring freezes and few open junior reqs
* Jobs and tasks junior hires used to cut their teeth on to learn systems, processes, etc. being automated by AI / RPA -> "don't need junior engineers"
The upstream "junior" source for talent our industry needs has been crippled both quantitatively and qualitatively.
We're a few years away from a _massive_ talent crunch IMO. My bank account can't wait!
Yes, yes. It's analogous to our wizardly greybeard ancestors prophesying that youngsters' inability to write ASM and compile it in their heads would bring the end of days, or insert your similar story from the 90s or 2000s here (or the printing press, or whatever).
The order of the "dumbing down" effect, in a space that one way or another always eventually demands the sort of functional intelligence that only rigorous, hard work on hard problems can yield, feels completely different, though?
Just my $0.02, I could be wrong.
This is a non-study.
The fourth session, where they tested switching back, was about recall and re-engagement with topics from the previous sessions, not fresh unaided writing. They found that the LLM users improved slightly over baseline, but much less than the non-LLM users.
"While these LLM-to-Brain participants demonstrated substantial improvements over 'initial' performance (Session 1) of Brain-only group, achieving significantly higher connectivity across frequency bands, they consistently underperformed relative to Session 2 of Brain-only group, and failed to develop the consolidation networks present in Session 3 of Brain-only group."
The study also found that LLM-group was largely copy-pasting LLM output wholesale.
Original poster is right: LLM-group didn't write any essays, and later proved not to know much about the essays. Not exactly groundbreaking. Still worth showing empirically, though.
If you wrote two essays, you have more 'cognitive engagement' on the clock as compared to the guy who wrote one essay.
In other news: If you've been lifting in the gym for a week, you have more physical engagement than the guy who just came in and lifted for the first time.
Isn't the point of a lot of science to empirically demonstrate results which we'd otherwise take for granted as intuitive/obvious? Maybe in AI-literature-land everything published is supposed to be novel/surprising, but that doesn't encompass all of research, last I checked.
I want a life of leisure. I don’t want to do hard things anymore.
Cognitive atrophy of people using these systems is very good as it makes it easier to beat them in the market, and it’s easier to convince them that whatever slop work you submitted after 0.1 seconds of effort “isn’t bad, it’s certainly great at delving into the topic!”
Also, monkey see, monkey speak: https://arxiv.org/abs/2409.01754
I hope you’re being facetious, as otherwise that’s a selfish view which will come back to bite you. If you live in a society, what other do and how they behave affects you too.
A John Green quote on public education feels appropriate:
> Let me explain why I like to pay taxes for schools even though I personally don’t have a kid in school. It’s because I don’t like living in a country with a bunch of stupid people.
Either way, that's not how compliments or insults work. The intent is what matters, not the word.
For example, amongst finance bros, calling each other a “ruthless motherfucker” can be a compliment. But if your employee calls you that after a round of layoffs, it’s an insult.
There's a famous satirical study that "proved" parachutes don't work by having people jump from grounded planes. This study proves AI rots your brain by measuring people using it the dumbest way possible.