When I first began, I tried vibe coding the backend, but I found the feeling of disconnect from "my" code too uncomfortable, so I abandoned that approach.
I've been relying pretty heavily on ChatGPT to help me learn unfamiliar technologies, but perhaps because of its fallibility with DuckDB, I've been spending a lot of time in the documentation and writing my own queries, and I think I'm going through enough cognitive difficulty to be learning properly.
There was one time when I'd spent probably at least a day trying to optimise a query in Postgres (yes, two database technologies, don't ask) without much success, and ChatGPT completely solved it in about 10 minutes. Incredible result, helped me learn some useful techniques. There are so many rabbit holes on this project it's nice not to have to go down every single one by myself.
On HN there's often this split between the anti-AI crowd and the AI evangelists, but there really is a lot of space in the middle: judicious use of AI for specific purposes, managing the risks and benefits, etc.
(Side note...did the OP really mention the highly discredited broken windows theory?)
From the article: "Muscles grow by lifting weights." Yet we do that now as a hobby, not as a critical job. I'm not sure I want to live in a world where thinking is a gym-like activity. However, if you go back 200 years, it would probably be difficult to explain today's situation to someone living in a world where most people are doing physical labor or using animals to do it.
The engine provides artificial strength, granted, but AI does not provide artificial intelligence. It's a misnomer.
Ok.
The jackhammer replaced the hammer and chisel for busting concrete, and the user's physical strength is important to both the manual and automated tool.
AI is a multiplier of the user's intelligence, just as the jackhammer is a multiplier of physical strength.
That kind of describes the experience of retired people who do sudokus to stave off dementia. I suspect it's a bit akin to going from being a lumberjack to doing 10 squats a day though.
I believe what's happening now is only a demonstration of people's misaligned perception.
The LLMs were never good. You were just impressed in that virgin moment, and now you see the flaws more clearly when using other models.
"Almost everyone lives a life closer to that of nobility or the merchent class"
I'm sure the vast majority of the people from that time would rather live in ours if explained that way.
Curious how "AI" is going to replace thought, as it is incapable of creating anything. It's just statistically matching data patterns, for G-d's sake; don't make it out to be anything beyond that.
Quantum mechanics is just linear algebra, chemistry is just quantum mechanics, biology is just chemistry, and thus human intelligence is just linear algebra, so what's the problem? What else do you want from a mathematical framework, other than the ability to describe the system you need to describe?
Also it has a high opinion of Bryan Ferry. Deeply untrustworthy.
But I don't buy this at all for software development. I find myself thinking more carefully and more expansively, at the same time, about solving programming problems when I'm assisted by an LLM agent, because there's minimal exertion to trying multiple paths out and seeing how they work out. Without an agent, every new function I write is a kind of bet on how the software is going to come out in the end, and like every human I'm loss-averse, so I'm not good at cutting my losses on the bad bets. Agents free me from that.
I do use it for learning and to help me access new concepts I've never thought about, but if you're not verifying what it's writing yourself and understanding what it's written yourself, then I hope I never have to work on code you've written. If you are, then you're not doing what the article is talking about.
I've been writing Go since ~2012 and coding since ~1995. I read everything I merge to `main`. The code it produces is solid. I don't know that it one-shots stuff; I work iteratively and don't care enough to try to make it do that, I just care about the endpoint. The outcomes are very good.
I know I'm not alone in having this experience.
Whoa, whoa, are we talking Bryan Ferry as an artist, or Bryan Ferry as a guy? Because I love me some Roxy Music but have heard that Bryan is kind of a dick.
Every developer that uses LLMs believes this. And every time they are objectively measured, it is shown that they are wrong. Just look at the FOSS study from METR, or the cognitive bias study by Microsoft.
If you understand how this applies to writing can you not connect the dots and realize that it is giving you a false sense of productivity?
This is the reality for a lot of us poor ESL plebs.
Plato was not against writing. In fact, he wrote prolifically. Plato's writings form the basis of Western Philosophy.
Plato's teacher Socrates was against writing, and Plato agreed that writing is inferior to dialog in some ways: memory, inquiry, deeper understanding, etc.
We know this because Plato wrote it all down.
I think it would be more accurate to say that Plato appreciated the advantages of both writing and the Socratic method.
https://www.fantasticanachronism.com/p/having-had-no-predece...
We externalize our information to books. We externalize our jobs to specialists. We externalize our shelter to home builders. We externalize our food to farmers. We externalize our water to municipalities.
Individually we may be weaker because of it. Yet in the end we are all stronger, and now billions of us can live at levels unimaginable in the past.
We're not concerned with the community aloofness
Duke, we're animals, we just go where the most food is
Lower the toast, most formal etiquette is useless
Truth is you're equally expendable as spoon-fed
Funny enough, that's kinda what we're seeing with LLMs. We're past the "regurgitate the training set" phase now, and we're more interested in mixing and matching stuff in the context window so we get to a desired goal (e.g. tool use, search, "thinking", and so on). How about that...
https://www.scientificamerican.com/article/you-dont-need-wor...
Not really sure why the geometry and color gradients of the world need to be captured in a skeletonized syntax and semantics when our bodies have evolved to experience them directly.
To some extent Plato did write, though, which is mostly how we can learn about him, but most of his writings are dialogues between characters.
Also, a lot of what we know was written by his disciples.
I find prompt fettling a great way of getting to grips with a problem. If I can explain my problem effectively enough to get a reasonable start on an answer, then I likely thoroughly understand my problem.
An LLM is like a really fancy slide rule or calculator (I own both). It does have a habit of getting pissed and talking bollocks periodically. Mind you, so do I.
If you love to knit that's cool but don't get on me because I'd rather buy a factory sweater and get on with my day.
I love creating things, I love solving problems, I love designing elegant systems. I don't love mashing keys.
Maybe it was worse where you were.
Biking instead of driving would be a better analogy... which you might have caught if LLMs hadn't made you dumber.
Or are you growing everything you need to eat by yourself?
Keep it to programming, then. I'm sure you write all your own libraries, right? In assembly, that is.
Everything about modern life, especially programming, is about enabling more with less work.
It’s not about “passion”. It’s purely transactional and I will use any tool that is available to me to do it.
If an LLM can make me more efficient at that, so be it. I'm also not spending months getting a server room built out to hold a SAN that can store a whopping 3TB like in 2002. I write four lines of YAML to provision an S3 bucket.
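For a sense of scale, here's roughly what those four lines can look like, assuming a CloudFormation template (an assumption; the commenter doesn't say which IaC tool they use, and the resource name here is made up):

```yaml
# Minimal CloudFormation template: a single S3 bucket with default settings.
Resources:
  DataBucket:
    Type: AWS::S3::Bucket
```

Something like `aws cloudformation deploy --template-file bucket.yaml --stack-name data-bucket` applies it, and that's the whole provisioning step.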
Everyone, even you, enjoys things being easier when moving towards a solution. Programming is a means to an end: solving an actual problem.
I have heard people argue that the use of calculators (and later, specifically graphing calculators) would make people worse at math; quick searching found papers like https://files.eric.ed.gov/fulltext/ED525547.pdf discussing the topic.
I can't see how the "LLMs make us dumber" argument is different than those. I think calculators are a great tool, and people trained in a calculator-having environment certainly seem to be able to do math. I can't see that writing has done anything but improve our ability to reason over time. What makes LLMs different?
Calculators don't solve problems; they solve equations. Writing didn't kill our memories because there's still so much to remember that we almost have to write things down to be able to retain it.
If, instead of doing your own research, presenting the LLM with your solution, and letting it point out errors, you just type "How do I make ____?", it's doing the entire thought process for you right there. And it may be leading you wrong.
That's my view on how it's different, at least. They're not calculators or writing. They're text robots that present solutions confidently and offer to do more work immediately afterwards, usually ending a response with "Want me to write you a quick Python script to handle that?"
A thought experiment: if you're someone who has used a calculator to calculate 20% tips your whole life, try to calculate one without it. Maybe you specifically don't struggle because you're good at math or have a lot of math experience elsewhere, but if you have approached it the way this article calls out as bad, you'd simply have no clue where to start.
So, I guess I'm just saying that LLMs are a tool like any other. Their existence doesn't make you worse at what they do unless you forgo thinking when you use them. You can use a calculator to efficiently solve the wrong equation - you have to think about what it is going to solve for you. You can use an LLM to make a bad argument for you - you have to think about what you're going to have it produce for you.
I was just feeling anti-alarmist-headline - there's no intrinsic reason we'd get dumber because LLMs exist. We could, but I think history has shown that this kind of alarmism doesn't come to fruition.
I'd argue we're using them "right" though.
Good question!
Writing or calculators likely do reduce our ability to memorize vast amounts of text or do arithmetic in our heads; but to write, or to do math with writing and calculation, we still must fully load those intermediate facts into our brain and fully understand what was previously written down or calculated in order to wield and wrangle it into a new piece of work.
In contrast, LLMs (unless used with great care as only one research input) can produce a fully written answer without ever really requiring the 'author' to fully load the details of the work into their brain. LLMs basically reduce the task to editing, not writing. Editing is not the same as writing, so it is no surprise this study shows a serious inability to remember quotes from the "written" piece.
Perhaps it is similar to learning a new language, where we tend to be able to read at a higher level of complexity well before we can write or speak at that level?
You (and another respondent) both cite the case where someone unthinkingly generates a large swath of text using the LLM, but that's not the only modality for incorporating LLMs into writing. I'm with you both on your examples, fwiw, I just think that only thinking about that way of using LLMs for writing is putting on blinders to the productive ways that they can be used.
It feels to me like people are reacting to the idea that we haven't figured out how to work them into our pedagogy, and that their existence hurts certain ways we've become accustomed to measuring whether people have learned what we intended them to learn. There's certainly a lot of societal adaptation that should put guardrails around their utility to us, but when I see "They will make us dumb!" it just sets off a contrarian reaction in me.
I also feel like there's more to be said about LLMs fostering the ability to ask questions better than you might if you primarily used search. If the objective was to write, for example, about an esoteric organic chemistry topic, and a "No Brain" group of non-experts was only allowed to formulate a response by asking real-life experts as much as they can about the esoteric topic, then would users more experienced with LLMs come out ahead on the essay score? Understanding how to leverage a tight communication loop most effectively is a skill that the non-LLM groups in this study should be evaluated on.
What once was! A genius paradise lost to technology once again.
Everyone stopped learning the roads & streets of where they were driving, but it was OK to lose that knowledge.
GPS companies revised their products to adapt to driver mistakes, and today driving is generally better with Google Maps, even though sometimes it can be worse (no internet, or when a bug arises).
Individuals? Most information technology makes us dumber in isolation, but with the tools we end up net faster.
The scary thing is that it is less about making things "better" than it is about making them cheaper. AI isn't winning on skill, it's winning on being "80% the quality at 20% the price."
So if you see "us" as the economic super-organism managed by very powerful people, then it makes us a lot smarter!
Did Google and Yahoo make us dumber?
Dumb is accidental or genetic.
AI won't affect how dumb we are.
I think they will decrease the utility of crystallized knowledge skills and increase our fluid knowledge skills. Smart people will still find ways to thrive in the environment.
Human intelligence will continue moving forward.
It was helpful; I got pretty far along in collegiate math without tutors or assistance thanks to the hard calculation skills I drilled into my head.
But, counterpoint, if I leave my calculator/computer/all in one everything device at home on any given day it can ruin my entire day. I haven't gone 72 hours without a calculator in nearly a decade.