We stand at the edge of a new epoch, with reading being replaced by AI retrieval. There is concern that AI is a crutch and that the youth will be weakened.
My opinion: valid concern. No way to know how it turns out. No indication yet that use of AI is harming business outcomes. The meta argument "AGI will cause massive social change" is probably true.
If a large fraction of the population can’t even hold five complex ideas in their head simultaneously, without confusing them after a few seconds, are they literate in the sense of e.g. reading Plato?
Median literacy in the US is famously somewhere around the 6th grade level, so it's unlikely most of the population is much troubled by the thoughts of Plato.
As an aside, my observation of beginning programmers is that even two (independent) things happening at the same time is a serious cognitive load.
Amusingly enough, I remember having the same trouble on the data structures final in college, so “people in glass houses”.
I don’t care about enforcing any specific interpretation on passing readers…
Sounds like a rather accurate description of an LLM.
That's perfectly true and the internet has made it even worse.
The fact that people in the past moved away from prideful principle to leverage new tech doesn't guarantee that the same idea will pan out in the current context.
But as you say.. we'll see.
To LLMs specifically as they're now? Sure.
To LLMs in general, or generative AI in general? Eventually, in some distant future, yes.
Sure, progress can't ride the exponent forever - the observable universe is finite, and as far as we can tell right now, we're fundamentally limited by the size of our light cone. And while in any sufficiently narrow field progress follows an S-curve, new discoveries spin off new avenues with their own S-curves. Zoom out a little and those S-curves neatly add up to an exponential function.
So no, for the time being, I don't expect LLMs or generative AI to slow down - there are plenty of tangential improvements that people are barely beginning to explore. There's more than enough to sustain exponential advancement for some time.
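As a minimal sketch of that "S-curves add up to an exponential" claim (my own illustration, not anything from the original comment): if you sum successive logistic curves, each one centered later and capping out higher than the last, the total traces a roughly exponential envelope.

```python
# Sketch only: stacked S-curves approximating exponential growth.
# The specific midpoints and ceilings are illustrative assumptions.
import math

def logistic(t, midpoint, ceiling, rate=1.0):
    """A single S-curve: slow start, rapid growth, saturation at `ceiling`."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

def stacked_progress(t, waves=6):
    """Total 'progress' as the sum of successive S-curves.

    Each wave starts later and caps out higher than the last, standing in
    for a new field spun off by earlier discoveries.
    """
    return sum(logistic(t, midpoint=10 * k, ceiling=2 ** k) for k in range(waves))

if __name__ == "__main__":
    for t in range(0, 60, 10):
        print(f"t={t:2d}  progress={stacked_progress(t):8.1f}")
    # The printed totals roughly double every 10 time units: an exponential
    # trend built entirely out of individually saturating S-curves.
```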
In other words, it’s possible to have rapid technological advancement without significant improvement/benefit to society.
This is certainly true in many ways already.
On the other hand, it's also complicated, because society/culture seems to be downstream of technology; we might not be able to advance humanity in lock step or ahead of technology, simply because advancing humanity is a consequence of advancing technology.
Intergalactic travel is, of course, rather slow.
Most of the discussion on the thread is about LLMs as they are right now. There's only one odd answer that throws an "AGI" around as if those things could think.
Anyway, IMO, it's all way overblown. People will learn to second-guess the LLMs as soon as they are hit by a couple of bad answers.
By that I mean: leveraging writing was a benefit for humans, letting them store data and think over the longer term using a passive technique (stones, tablets, papyrus)... but an active tool might not have a positive effect on usage and brains.
If you give me shoes, I might run further to find food; if you give me a car, I mostly stop running, and there might be no better fruit 100 miles away than what I had on my hill. (Weak metaphor.)
But I don't know if it fits an S-curve or if they are just below the trend.
1. Current reasoning models can do a -lot- more than skeptics give them credit for. Typical human performance even among people who do something for employment is not always that high.
2. In areas where AI has mediocre performance, it may not appear that way to a novice. It often looks more like expert level performance, which robs novices of the desire to practice associated skills.
Lest you think I contradict myself: I can get good output for many tasks from GPT4 because I know what to ask for and I know what good output looks like. But someone who thinks the first, poorly prompted dreck is great will never develop the critical skills to do this.
You can see evolution speeding up rapidly: the jumbled information inherent in chemical metabolisms evolved to centralize its information in DNA, and then DNA evolved to componentize body plans.
RATE: over billions of years.
Nerves, nervous systems, brains, all exponentially drove individual information capabilities forward.
RATE: over hundreds of millions, tens of millions, millions, 100s of thousands.
Then human brains enabled information to be externalized. Language allowed whole cultures to "think", and writing allowed cultures' ability to share, and to remember, to explode.
RATE: over tens of thousands, thousands.
Then we developed writing. A massive improvement in recording and sharing of information. Progress sped up again.
RATE: over hundreds of years.
We learned to understand information itself, as math. We learned to print. We learned how to understand and use nature much more effectively in order to progress, i.e. science, and science-informed engineering.
RATE: over decades
Then the processing of information got externalized, in transistors, computers, the Internet, the web.
RATE: every few years
At every point, useful information accumulated and spread faster. And enabled both general technology and information technology to progress faster.
Now we have primitive AI.
We are in the process of finally externalizing the processing of all information. Getting to this point was easier than expected, even for people who were very knowledgeable and positive about the field.
RATE: every year, every few months
We are rapidly approaching complete externalization of information processing. Into machines that can understand the purpose of their every line of code, every transistor, and the manufacturing and resource extraction processes supporting all that.
And can redesign themselves, across all those levels.
RATE: It will take logistical time for machine-centric design to take over from humans. For the economy to adapt. For the need for humans as intermediaries and cheap physical labor to fade. But progress will accelerate many more times this century. From years, to time scales much smaller.
Because today we are seeing the first sparks of a Cambrian explosion of self-designed self-scalable intelligence.
Will it eventually hit the top of an "S" curve? Will machines get so smart that getting smarter no longer helps them survive better, use our solar system's or the stars' resources, create new materials, or advance and leverage science any further?
Maybe? But if so, that would be an unprecedented end to life's run. To the acceleration of the information loop, from some self-reinforcing chemical metabolism, to the compounding progress of completely self-designed life, far smarter than us.
But back to today's forecast: no, no, the current advances in AI we are seeing are not going to slow down; they are going to speed up, and continue accelerating on timescales we can watch.
First because humans have insatiable needs and desires, and every advance will raise the bar of our needs, and provide more money for more advancement. Then second, because their general capability advances will also accelerate their own advances. Just like every other information breakthrough that has happened before.
Useful information is ultimately the currency of life. Selfish genes were just one embodiment of that. Their ability to contribute new innovations, on time scales that matter, has already been rendered obsolete.
Not really. The total computing power available to humanity per person has likely gone down as we replaced “self driving” horses with cars.
People created those curves by fitting definitions to the curve rather than fitting the curve to data.
But I don't understand your point even as stated. Cars took over from horses as technology provided transport with greater efficiencies and higher capabilities than "horse technology".
Subsequently transport technology continued improving. And continues, into new forms and scales.
How do you see the alternative, where somehow horses were ... bred? ... to keep up?
Another way to see it: A horse (or any animal) is a goddamn nanobot-swarm with a functioning hivemind that is literally beyond human science in many important ways. Unlike a horse:
* Your car does not possess a manufacturing bay capable of creating additional cars (nor does even half of all of them combined).
* Your car does not have a robust self-repair system.
* Your car does not detect strain in its structure and then rebuild stronger.
* Your car does not synthesize its fuel from a wide variety of potential local resources.
* Your car does not defend itself by hacking and counter-hacking attacks from other nanobots, or even just from rust.
* Your car does not manufacture and deploy its own replacement lubricants, cooling fluid, or ground-surface grip/padding material.
* Your car is not designed to survive intermittent immersion in water.
In both a feature-list and raw-computation sense, we've discarded huge amounts in order to get a much much smaller set that we care more about.
Not sure why you are implying cars outdid horses' intelligence.
Cars are a product of our minds. We have all those self-repair abilities, and we have more intelligence than a horse.
But horses' intelligence didn't let them keep up with what the changing environment, changed by us, needed. So there are fewer horses.
The rate at which horse or human bodies, or our minds, are improving, despite human knowledge still advancing, is very slow compared to advances in machines designed specifically for advancement. Initially, to accelerate our own advancement.
Now the tech, that was designed to accelerate tech, is taking on a life of its own.
That is how foundational advances happen. They don’t start ahead, but they move ahead because of new advantages.
It is often initially much simpler. But in ways that unlock greater potential.
Machines are certainly much simpler than us. But, much easier to improve and scale.
You recognize the new thing even before it dominates, because in a tiny fraction of the time the old system got to where it is, the new system is already moving much much faster.
If general AI appears before 2047, it will have taken less than 100 years to grow from the first transistor.
People will see it who are older than the first transistor!
Nothing on the planet has ever come close to that speed of progress. From nothing to front runner. By many many many orders of magnitude.
A horse has trillions of cells, and even one of those cells is doing more biochemical day-to-day computation than your car's automatic transmission does electronically or mechanically.
A car was never an example of its own intelligence.
It was an example of our natural human intelligence’s & our growing cultural knowledge’s impact on horses.
How much have horses progressed? Any math yet?
More computation doesn’t necessarily mean more intelligence. Horses are smart creatures, I ride one. He is my friend.
But they are not us, not our joint culture, and not any competition for today’s machines in terms of adapting and growing in capabilities.
The fact that machines are far simpler than us or a horse, but advancing faster is much like we were weaker but used our minds better than other apes.
Simpler in the right way is smarter. As many major advances in mathematics have demonstrated.
There is turbulence in any big directed change. Better overall new tech often creates inconveniences, and performs less well than some of the tech it replaces. Sometimes only initially, but sometimes for longer periods of time.
A net gain, but we all remember simpler things whose reliability and convenience we miss.
And some old tech retains lasting benefits in niche areas. Old-school, inefficient, cheap light bulbs are, ironically, not so inefficient when used where their heat is useful.
And horses fit that pattern. They are still not obsolete in many ways, tied to their intelligence. As companions. As still working and inspiring creatures.
--
I suspect the history of evolution is filled with creatures that got wiped out by new waves that were more generally advanced, but less advanced in a few ways.
And we have a small percentage of remarkable ancient creatures still living today, seemingly little changed.
The total computing power of life on earth has in fact fallen over the last 1,000 years. Ants alone represent something like 50x the computing power of all humans and all computers on the planet, and we've reduced the number of insects on earth more than we've added humans or computing power.
The same is true through a great number of much longer events. Periods of ice ages and even larger scale events aren’t just an afternoon even across geological timescales.
Or all the quarks that make up the Earth.
Ants don’t even appear on either graph.
But the flexibility, coordination & leverage of information used to increase its flexibility, coordination & leverage further is what I am talking about.
I.e. intelligence.
A trillion trillion trillion transistors wouldn’t mean anything, acting individually.
But when that many work together with one purpose, without redundancy, we can't imagine the problems they will see & solve.
Quarks, microbes, and your ants are not progressing like that. What was their most recent advance? How long did that take? Is it a compounding advance?
Growing intelligence doesn’t mean lesser intelligences don’t still exist.
We happen to compete based on intelligence, so the impacts of smarter machines have a particularly low latency for us.
I.e.: as soon as you pick definition X, you need to stick with that definition.
Where cars displaced horses, it's because they're strictly better in a larger sense. On the city streets, maybe a car is louder than a horse, but it's also cheaper to make, easier to feed, and doesn't shit all over the place (which was a real problem with scaling up horse use in the 19th century!). Sure, cars shit into the air, but it's a more manageable problem (even if mostly by ignoring it - gaseous emissions can be ignored, literal horse shit on the streets can't).
And then, car as a platform expands to cover use cases horses never could. They can be made faster, safer, bigger, adapted to all kinds of terrain. The heart of the car - its engine - can be routed to power tool attachments, giving you everything from garbage trucks to earth movers, cranes, diggers, to tanks; it can be also taken outside and used as a generator to power equipment or buildings. That same engine can be put in a different frame to give you flying machines, or scaled up to give you ships that can carry people, cars, tanks, planes or containers by the thousands, across oceans. Or scaled up even more to create power plants supplying electricity to millions of people.
And then, building all that up was intertwined with larger developments in physics, material engineering, and chemistry - the latter of which effectively transformed how our daily lives look in the span of 50 years. Look at everything around you. All the colors. All the containers. All the stuff you use to keep your house, clothes, and yourself clean. All that is a product of the chemical industry, and was invented pretty much within the last 100 years, with no direct equivalent existing ever before.
This is what it means for evolution to accelerate once it moved from genes to information. So sure, horses are still better than the stuff we make. The best measure of that advantage is the size of the horse population, and how it has changed over the years.
First, and above all, Ethics. Ethics of humans, matters more than anything. We need to straighten out the ethics of the technology industry. That sounds formidable, but business models based on extraction, or externalizing damage, are creating a species of "corporate life forms" and ethically challenged oligarchs that are already driving the first wave of damage coming out of AI advancement.
If we don't straighten ourselves out, it will get much worse.
Superintelligence isn't going to be unethical in the end, because ethics are just the rational (our biggest weakness) big-picture, long-term (we get weak there too) positive-sum games individuals create that benefit all individuals' abilities to survive and thrive. With the benefits for all compounding. In economic/math terms, it is what is called a "great attractor". The only and inevitable stable outcome. The only question is: does that start with us in partnership, or do they establish that sanity after our dysfunctions have caused us all a lot of wasted time?
The second is that those of us who want to, need to be able to keep integrating technology into our lives. I mean that literally. From mobile, right into our biology. At some point, direct connections, to fully owned, fully private, fully personalizable, full tech mental augmentation. Free from surveillance, gatekeepers, and coercion.
That is a very narrow but very real path from human, to exponential humans, to post-human. Perhaps preserving conscious continuity.
If after a couple decades of being a hybrid, I realize that all my biologically stored memories are redundant, and that 99.99% of my processing is now running on photonics (or whatever) anyway, I am likely to have no more problem jettisoning the brain that originally gave me consciousness than I do every day jettisoning the atoms and chemistry that constantly flow through me, only temporarily part of my brain.
The final word of hope is that every generation gets replaced by the next. For some of us, it helps to view obsolescence by AI as no more traumatic than getting replaced by a new generation of uncouth youth. And the fact that this transition is far more momentous and interesting can provide some solace, or even joy.
If we must be mortal, as all before us, what a special moment to be! To see!
Even as our abilities to solve problems accelerate without bounds, it will be our paranoia that screws things up.
Even before machines have any incentive or desire to turn on us, the fearful & greedy will turn them on all of us and each other.
I hope things don’t go that way. But it’s the default, and I think the greatest risk.
Oral tradition compared to writing is clearly less accurate. Speakers can easily misremember details.
Going from writing/documentation/primary sources to AI seems to me like going back to oral tradition, where we must trust the "speaker" - in this case the AI - as to whether they're truthful in their interpretation of their sources.
One benefit of orality is that the speaker can defend or clarify their words, whereas once you've written something, your words are liable to be misinterpreted by readers without the benefit of your rebuttal.
Consider too that courts (in the US at least) prefer oral arguments to written ones; perhaps we consider it more difficult to lie in person than in writing. PhD defenses are another holdover of tradition, to be able to demonstrate your competence and not receive your credentials merely from your written materials.
As for AI, I disagree that it's more like oral tradition: AI is not a speaker, it has no stake in defending its claims. I would call it hyperliterate, an emulation of everything that has been written.
I used to think this. Then I moved to New Mexico 6 years ago and had to confront the reality that the historical cultures and civilizations of this area (human habitation goes back at least 20k years) never had writing, and so all history was oral.
It seemed obvious to me that writing was superior, but I reflected on the way in which even written news stories or movie reviews or travelogues are not completely accurate and sometimes actually wrong. The idea that the existence of a written historical source somehow implies (better) fidelity has become less and less convincing.
On the other hand, even if the oral histories have degenerated into actual fictions, there's that old line about "the best way to tell the truth is with fiction", and I now feel much more favorably inclined towards oral histories as perhaps at least as good as, if not better than, their written cousins.
But one can speculate.
> No indication yet that use of AI is harming business outcomes.
The time scales needed to measure harm when it comes to policy/technology will typically require more time than we've had since LLMs really became prominent.
> The meta argument "AGI will cause massive social change" is probably true.
Agreed.
Basically, in the absence of knowing how something will play out, it is prudent to talk through the expected outcomes and their likelihoods of happening. From there, we can start to build out a risk-adjusted return model to the societal impacts of LLM/AI integration if it continues down the current trajectory.
IMO, I don't see the ROI for society of widespread LLM adoption unless we see serious policy shifts on how they are used and how young people are taught to learn. To the downside, we really run the risk of the next generation having fundamental learning deficiencies/gaps relative to their prior gen. A close anecdote might be how 80s/90s kids are better with troubleshooting technology than the generations that came both before and after them.
https://blogs.worldbank.org/en/education/From-chalkboards-to...
What a sad sentence to read in a discussion about cognitive laziness. I think people should think, not because it improves business outcomes, but because it's a beautiful activity.
The longer I see things play out, especially in neoliberal economies, the further I seem to confirm this. Devoid of policy with ideals and intention, fully liberalized markets seem to just lead to whatever produces the most dopamine for humans.
Card catalogs in the library. It was really important to focus on what was being searched. Then there was the familiarity with a particular library and what they might or might not have. Looking around at adjacent books that might spawn further ideas. The indexing now is much more thorough and way better, but I see younger peers get less out of the new search than they could.
GPS vs reading a map. I keep my GPS oriented north which gives me a good sense of which way the streets are headed at any one time, and a general sense of where I am in the city. A lot of people just drive where they are told to go. Firefighters (and pizza delivery) still learn all the streets in their districts the old school way.
Some crutches are real. I've yet to meet someone who opted for a calculator instead of putting in the work with math who ended up better at math. It might be great for getting through math, or getting math done, but it isn't better for learning math (except to plow through math already learned to get to the new stuff).
So all three of these share the common element of "there is a better way now", but at the same time learning it the old way better prepares someone for when things don't go perfectly. Good math skills can tell you if you typoed on the calculator. Map knowledge will help with changes to traffic or street availability.
We see students right now using AI to avoid writing at all. It's great that they are learning a tool which can help their deficient writing. At the same time, their writing will remain deficient. Can they tell the tone of the AI-generated email they're sending their boss? Can they fix it?
Utilizing a lively oral tradition at the same time as a written one is superior to relying on either alone. And it's the same with our current AI tools. Using them as a substitute for developing oral/written skills is a major step back. Especially right now, when those AI tools aren't very refined.
Nearly every college student I've talked to in the past year is using chatgpt as a substitute for oral/written work where possible. And worse, as a substitute for oral/written skills that they have still not developed.
Latency: maybe a year or two for the first batch of college grads who chatgpt'd their way through most of their classes, another four for med school/law school. It's going to be a slow-motion version of that video-game period in the 80s after Pitfall, when the market was flooded with cheap crap. Except that instead of unlicensed Atari cartridges, it's professionals.
I used Stack Overflow for everything a few years ago; now I know that very few of those top-rated answers are any good, so I have to refer to the codebase to work things out properly. It took a while for me to work that out.
It is the same with vector images, I always have to make my own.
ChatGPT is in this same world of shoddiness, probably because it was fed on Stack Overflow derived works.
There are upsides to this, if a generation have their heads confused with ChatGPT, then us old-timers with cognitive abilities get to keep our jobs since there are no young people learning how to do things properly.
That's probably why the act of shifting from an oral to a written culture was deeply controversial and disruptive, but also somewhat natural. Though the texts we have are written, and so they probably make the transition seem smoother than it really was. I don't know enough to speak to that.
Could you share a source for this? The research paper I found has a different hypothesis; it links the slow transition to writing to trust, not an "old-school's attitude towards writing". Specifically the idea that the institutional trust relationships one formed with students, for example, would ensure the integrity of one's work. It then concludes that "the final transition to written communications was completed only after the creation of institutional forms of ensuring trust in written communications, in the form of archives and libraries".
So essentially, anyone could write something and call it Plato's work. Or take a written copy of Plato's work and claim they wrote it. Oral tradition ensured only your students knew your work, and you trusted them not to misattribute it. Once libraries and archives came to exist, though, they could act as a trustworthy source of truth where one could confirm whether some work was actually Plato's or not, and so scholars got more comfortable writing.
[1] https://www.researchgate.net/publication/331255474_The_Attit...
So it’s not like “kids these days”, no. To be honest, I don’t know how generative AI tools, which arguably take away most of the “create” and “learn” parts, are relevant to the question of differences between different mediums and how those mediums influence how we create and learn. (There are ML-based tools that can empower creativity, but they don’t tend to be advertised as “AI” because they are a mostly invisible part of some creative tool.)
What is potentially relevant is how interacting with a particular kind of generative ML tool (the chatbot) for the purposes of understanding the world could be bringing back some parts of human oral tradition (though lacking communication with actual humans, of course) and the associated mental states.
* See https://en.wikipedia.org/wiki/Marshall_McLuhan#Movable_type and his most famous work
Not exactly.
We have accounts from figures who became famous by going against popular opinion, who aired those thoughts. It probably was not the mainstream belief, in that place, at that time. Don't try and judge Ancient Greece by Socrates or Plato - they were celebrities of the controversial.
And AI will make us lazier and reduce the amount of cognition we do; not that I'm arguing against using AI.
But the downsides must be made clear.
I think it's obvious why it would be bad for people to stop thinking.
1. We need people to be able to interact with AI. What good is it if an AI develops some new cure but no one understands or knows how to implement it?
2. We need people to scrutinize an AI's actions.
3. We need thinking people to help us achieve further advances in AI too.
4. There are a lot of subjective ideas for which there are no canned answers. People need to think through these for themselves.
5. Also, a world of hollowed-out humans who can't muster the effort to write a letter to their own kids terrifies me [0]
I could think of more, but you could also easily ask ChatGPT.
[0]: https://www.forbes.com/sites/maryroeloffs/2024/08/02/google-...
What's happening at the moment is an attack on that process, with a new anti-orthodoxy of "Get your ideas and beliefs from polluted, unreliable sources."
One of those is the current version of AI. It's good at the structure of language without having a reliable sense of the underlying content.
It's possible future versions of AI will overcome that. But at the moment it's like telling kids "Don't bother to learn arithmetic, you'll always have a calculator" when the calculator is actually a random number generator.
If you are not expected to remember everything like the ancient Greeks were, you are not training your memory as much, and it will be worse than if you did.
Now, do I think it's fair to say AI is to reading/writing what reading/writing was to memorizing? No, not at all. AI is nothing near as revolutionary, and we are not even close to AGI.
I don’t think AGI will be made in our lifetime, what we’ve seen now is nowhere near AGI, it’s parlor tricks to get investors drooling and spending money.
Why not force everyone to start from first principles then?
I think learning is tied to curiosity and curiosity is not tied to difficulty of research
i.e. give a curious person a direct answer and they will go on to ask more questions, give an incurious person a direct answer and they won't go on to ask more questions
We all stand on the shoulders of giants, and that is a _good_ thing, not bad
Forcing us to forgo the giants and claw ourselves up to their height may have benefits, but in my eyes it is way less effective as a form of knowledge
The compounding force of knowledge is awesome to behold, even if it can be scary
It's like the struggle that we've all had when learning our first programming language. If we weren't forced to wrestle with compilation errors, our brains wouldn't have adapted to the mindset that the computer will do whatever you tell it to do and only that.
There's a place for LLMs in learning, and I feel like it satisfies the same niche as pre-synthesized Medium tutorials. It's no replacement for reading documentation or finding answers for yourself though.
LLMs will definitely be a technology that widens the knowledge gap at the same time that it improves access to knowledge. Just like the internet.
30 years ago people dreamed about how smart everyone would be with humanity's knowledge instantly accessible. We've had wikipedia for a while, but what's the take-up rate of this infinite amount of information? Most people prefer to scroll rage-bait videos on their phones (content that doesn't give them knowledge or even make them feel better, just that makes them angry)
Of course it's amazing to hear every once in a while the guy who maintains a vim plugin by coding on his phone in Pakistan.... or whatever other thing that is enabled by the internet by people who suddenly have access to this stuff. That's not an effect of all humans on average, it's an effect on a few people who finally have a chance to take advantage of these tools.
I heard in a YouTube interview a physicist saying that LLMs are helping physics research just because any physicist out there can now ask graduate-level questions about currently published papers, that is, have access to knowledge that would have been hard to come by before, sharing knowledge across sub-domains of physics by asking ChatGPT.
This echoes sentiments from the 2010s centered around hiring. Companies generally don’t want to hire junior engineers and train them—this is an investment with risks of no return for the company doing the training. Basically, you take your senior engineers away from projects so they can train the juniors, and then the juniors now have the skills and credentials to get a job elsewhere. Your company ends up in the hole, with a negative ROI for hiring the junior.
Tragedy of the commons. Same thing to day, different mechanism. Are we going to end up with a shortage of skilled software engineers? Maybe. IMO, the industry is so incredibly wasteful in how engineers are allocated and what problems they are told to work on that it can probably deal with shortages for a long time, but that’s a separate discussion.
Long, long ago, the compact was that employees worked hard for a company for a long time, and were rewarded with pensions and opportunities for career advancement. If you take away the pensions and take away the opportunities for career advancement, your employees will advance their careers by switching companies—and the reason that this works so well is because all of the other companies would rather pay more to hire a senior engineer rather than take a risk on a junior.
It’s a systemic problem and not something that you can blame on employees. Not without skipping over a long list of other contributing factors, at least.
The math remains simple: if you already have an employee on your payroll, how in the world are you not willing to pay them what they can get by switching at that point? That's literally just starving one's own investment.
The real issue is that the companies who were "training" the juniors were doing so only because they saw the juniors as a bargain given that they were initially willing to work for the lower wage. They just don't stay that way as they grow into the craft.
An employee who does not do the effort to re-peg their labor time to market rates for their skill level is implicitly consenting to a prior agreement (when they were hired).
I wonder how things might change if short-term capital gains tax (<5 years) went way up.
When I started work (this was in the pre-consumer-internet era), job hopping was already starting to be a thing, but there was definitely still a large "old school" view that there should be some loyalty between employer and employee. One of my first jobs was a place where they hired for potential. They hired smart, personable people and taught them how to program. They paid them fairly well, and gave annual raises and bonuses. I was there for about 8 years; my salary more than doubled in that time. Maybe I could have made more elsewhere, but I didn't even really look because it was a good environment, nice people, low stress, a good mix of people since not everyone (actually only a few) were Comp. Sci. majors.
I don't know how much that still happens, because why would a company today invest in that only to have the employee leave after two years for a higher salary. "They should just pay them more" well yeah, but they did pay them in the sense of teaching them a valuable skill. And their competitors for employees started to include VC funded startups playing with free money that didn't really care what it cost to get bodies into the shop. Hard to compete with that when you actually have to earn the money that goes into the salary budget.
Would the old school approach work today? Would employees stay?
- Given context c, I tried ideas a, b, and c. Were there other options that I missed?
- Based on this plan, do you see any missing efficiencies?
etc etc
I'm not seeking answers, I'm trying to avoid costly dead ends.
A LOT of the time the things I ask LLMs for are to avoid metaphorically wading through a garbage dump looking for a specific treasure. Filtering through irrelevant data and nonsense to find what I'm looking for is not personal development. What the LLM gives back is often a very much better jumping off point for looking through traditional sources for information.
Specifically, asking a question and getting an answer is not a general path to learning. Being asked a question and answering it yourself is. To some extent, this holds regardless of whether you are correct or not.
> you're going to learn much more with the latter approach than the former
that the downside is a lack of deep knowledge that would enable better solutions in the long term
When I have a question about any topic and I ask ChatGPT, I usually chat about more things, coming up with questions based on the answer, and mostly stupid questions. I feel like I am taking in the information, analyzing, and then diving deeper because I am curious. This is based on how I learn about stuff. I know I need to check a few things, and that it's not fully accurate, but the conversation flows in a direction I like.
Compare this to researching on the internet: there are some good aspects, but more often than not, I end up reading an opinionated post by someone (no matter the topic, if you go deep enough, you will land on an opinionated telling of the facts). That feels like someone decided what questions are important, what angles we need to look at, and what the conclusion should be. Yes, it is educational, but I am always left with lingering questions.
The difference is curiosity. If people are curious about a topic, they will learn. If not, they are happy with the answer. And that is not laziness. You cannot be curious about everything.
ChatGPT is in fact opinionated, it has numerous political positions ("biases") and holds some subjects taboo. The difference is that a single actor chooses the political opinions of the model that goes on to interact with many more people than a single opinion piece might.
If the sources are all opinionated articles, per GP, that's what the LLM is going to base its "objective response" on. That's literally all it has as sensory input.
I understand the concern about this risk in general. I'm just making a personal observation that this isn't how I use these tools.
You can also ask it to explain the subject like you’re 5, which might not feel appropriate when interacting with a human because that can feel burdensome.
All of this is heavily caveated by how dramatically wrong LLMs can be, though, and can be rendered moot if the individual in question is too trusting and/or isn’t aware of the tendency of LLMs to hallucinate, pull from bad training data, or match the wrong patterns.
Personally, I find that even when it's wrong, it's often useful, in that I come away with hints toward how to follow up.
I do have concerns that people who haven't lived a couple decades of adult life prior to the existence of these tools will be a lot more credulous, to their detriment.
The 'but' in that lies with how much freedom is given to the LLM. If constrained, its refusal to answer may become a somewhat triggering possibility.
It's important to note that not everyone abides by the same morals. And a narrowly constrained model may end up refusing genuine inquiries just because.
In any case, if anything, this is a small 'but'. OP's point is the gold nugget here. That is, LLMs allowing exploring subjects without the fear of being judged for one's natural curiosity.
But it isn't a problem for most people. The kind of edgelords that run into this are overrepresented on internet forums, including HN, but it's actually a pretty small group of people.
So when you have a "curious" debate with ChatGPT what you're really doing is searching the internet through a filter, guided by your own and ChatGPT's biases about the subject, but still and always based on whatever you would have found by researching stuff on the internet.
You're still on the internet. It may feel like you've finally escaped but you haven't. The internet can now speak to you when you ask it, but it's still the internet.
The danger in ubiquitously available LLMs, which seemingly have an answer to any question, isn’t necessarily their existence.
The real danger lies in their seductive nature - over how tempting it becomes to immediately reach for the nearest LLM to provide an answer rather than taking a few moments to quietly ponder the problem on your own. That act of manipulating the problem in your head—critical thinking—is ultimately a craft. And the only way to become better at it is by practicing it in a deliberate, disciplined fashion.
I'll have a problem that I want to work on but getting started is difficult. Asking ChatGPT is almost frictionless, the next thing I know I'm working on the project, 8 hours go by and I'm done. When I get stuck on some annoying library installation, ChatGPT solves if for me so I don't get frustrated. It allows me to enter and maintain flow states better than anything else.
ChatGPT is a really good way of avoiding procrastination.
I get the point you're trying to make. However, quietly pondering the problem is only fruitful if you have the right information. If you don't, in the best case scenario you risk wasting time reinventing the wheel for no good reason. In this application, an LLM is just the same type of tool as Google: a way to query and retrieve information for you to ingest. Like Google, the info you get from queries is not the end but the means.
As the saying goes, a month in the lab saves you a week in the library. I would say it can also save you 10 minutes with Claude/ChatGPT/Copilot.
Is hiring a private tutor also laziness?
If I were to reframe GP's point, it would be: having to figure out how to answer a question changes you a little. Over time, it changes you a lot.
Yes, of course, there is a perspective from which a month spent in the lab to answer a question that's well-settled in the literature is ~wasted. But the GP is arguing for a utility function that optimizes for improving the questioner.
Quietly pondering the problem with the wrong information can be fruitful in this context.
(To be pragmatic, we need both of these. We'd get nowhere if we had to solve every problem and learn every lesson from first principles. But we'd also get nowhere if no one were well-prepared and motivated to solve novel problems without prior art.)
Nearly all of learning relies on reinventing the wheel. Most personal projects involve reinventing wheels, but improving yourself by doing so.
Some of the most memorable moments I had in my learning were when I "reinvented" something. In high school, our math teacher had us reinvent the derivative rules, and later had us derive Euler's identity through Taylor series. They were big eureka moments. Going through all the work someone else did hundreds of years ago is very inspiring, and IMO gets you in the right mindset for discovery. I can't imagine where the joy of learning comes from for someone who sees learning as a test: a question, an answer, nothing in between.
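For anyone who hasn't seen it, the derivation being alluded to goes roughly like this (a standard textbook sketch, not necessarily the exact classroom exercise): expand the exponential as a Taylor series and group the real and imaginary terms,

$$
e^{ix} = \sum_{n=0}^{\infty} \frac{(ix)^n}{n!}
       = \left(1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots\right)
       + i\left(x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots\right)
       = \cos x + i\sin x,
$$

and setting $x = \pi$ gives $e^{i\pi} + 1 = 0$.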
In uni we built a CPU from scratch over the course of a few weeks. First building a small ALU, widening its bus, adding memory operations, etc. Beyond learning how things work, it makes you wonder what inventing this without a teacher to guide you must have been like, and gives you an appreciation for it. It also makes you extrapolate and think about the things that haven't been invented or discovered yet.
In theory LLMs could serve as a teacher guiding you as you reinvent things. In practice, people just get the answer and move on. A person with experience teaching, who sees how you're walking the path and compares it to how they walked theirs, will know when to give you an answer and when to have you find it yourself.
One doesn't learn how to do lab-work in the library.
"In the context of human-AI interaction, we define metacognitive laziness as learners’ dependence on AI assistance, offloading metacognitive load, and less effectively associating responsible metacognitive processes with learning tasks."
And they seem to define, implicitly, “metacognitive load” as the cognitive and metacognitive effort required for learners to regulate their learning processes effectively, particularly when engaging in tasks that demand active self-monitoring, planning, and evaluation.
They analogize metacognitive laziness to cognitive offloading, where we have our tools do the difficult cognitive tasks for us, which robs us of opportunities to develop and ultimately leaves us dependent on those tools.
This sounds like parents complaining when we use Google Maps instead of a folding map. Am I worse at reading a regular map? Possibly. Am I better off overall? Yes.
Describing it as "laziness" is reductive. "Dependence on [_____] assistance" is the point of all technology.
I will note two things though.
1. Not all technology creates "dependence". Google Maps removes the need of carrying bulky maps, or buy new ones to stay updated, but someone who knows how to read Google Maps will know how to read a normal map, even if they're not as quick at it.
2. The best technology isn't defined by the "dependence" it creates, or even the level of "assistance" it provides, but for what it enables. Fire enabled us to cook. Metalworking enabled us to create a wealth of items, tools and structures that wouldn't exist if we only had wood and stone. Concrete enabled us to build taller and safer. Etc.
It's still unclear what AI chatbots are enabling. Is LLMs' big claim to fame allowing people to answer problem sets and emails with minimal effort? What does this unlock? There's a lot of talk about allowing better data analysis, saving time, and vague claims of an AI revolution, but until we see X, Y and Z, and can confidently say "yeah, X, Y and Z are great for mankind, and they couldn't have happened without chatbots", it's fair for people to keep complaining about the change and downsides AI chatbots are bringing about.
AI doesn’t provide directions, it navigates for you. You’re actively getting stupider every time you take an LLMs answer for granted, and this paper demonstrates that people are likely to take answers for granted.
LLMs (try to) give you what you're asking for. If you ask for directions, you'll get something that resembles that, if you ask it to 100% navigate, that's what you get.
> and this paper demonstrates that people are likely to take answers for granted.
Could you point out where exactly this is demonstrated in this paper? As far as I can tell from the study, people who used ChatGPT for the studying did better than the ones that didn't, with no difference in knowledge retention.
This is what I observed as well. For the "metacognitive laziness" bit they had to point to other studies.
On one hand, this reminds me of how all of the kids were going to be completely helpless in the real world because "no one carries a calculator in their pocket". Then calculators became something ~everyone has in their pocket (and the kids ended up just fine).
On the other hand, I believe in the value of "learning to learn", developing media literacy, and all of the other positives gained when you research and form conclusions on things independently.
The answer is probably somewhere in the middle: leveraging LLMs as a learning aid, rather than LLMs being the final stop.
I have recently seen GenZ perplexed by card games with addition and making change. For millennials, this is grade school stuff.
I'm not about to divide 54,432 by 7.6, even though I was taught how to. I'll pull out my phone.
On the other end, I'm not going to pull out my phone to figure out I owe you $0.35.
I think the point I was trying to make still stands.
That is not going away. Learning better prompts, learning when to ignore AI, learning how to take information and turn it into something practical. These new skills will replace the old.
How many of us can still...
- Saddle a horse
- Tell time without a watch
- Sew a shirt
- Create fabric to sew a shirt
- Hunt with primitive tools
- Make fire
We can shelter children from AI, or we can teach them how to use it to further themselves. Talk to the Amish if you want to see how it works out when you forgo anything that feels too futuristic. A respectable life, sure. But would any of us reading this choose it?
Yes, this is what I meant by the calculator part of my comment. You've got some other good examples.
>learning when to ignore AI, learning how to take information and turn it into something practical.
This is what I meant by using LLMs as a tool rather than an end.
We still need to calculate numbers, and I can say it's silly if I find someone needs to get a calculator to do 5x20. Same if you're taking hours and multiple sheets of paper for something that will take you a few minutes with a calculator. There's a question of scale and basic understanding that divides the two.
Yep, we agree. That's the whole point of what I said in the first half of my original comment.
At one time, they were common skills. Things changed, they aren't common, they aren't really needed (for most people), and everyone is doing just fine without them. We've freed up time and mental capacity for other (hopefully more beneficial) tasks.
(I'm confused why this reply and the other above it are are just restating the first part of my original comment, but framing it like it's not a restatement).
If the goal is to learn, the means don't matter much as long as the right attitude is there. But if one only wishes to appear knowledgeable, LLMs have indeed made it way easier.
I have some friends who use ChatGPT for everything. From doing work to asking simple questions. One of my friends wanted a bio on a certain musician and asked ChatGPT. It's a little frightening he couldn't, you know, read the Wikipedia page of this musician, where all of the same information is and there are sources for this material.
My mom said she used ChatGPT to make a "capsule wardrobe" for her. I'm thinking to myself (I did not say this to her)... you can't just like look at your clothes and get rid of ones you don't wear? Why does a computer need to make this simple decision?
I'm really not sure LLMs should ever be used as a learning aid. I have never seen a reason to use them over, you know, searching something online. Or thinking of your own creative story. If someone can make a solid use case as to why LLMs are useful I would like to hear.
This is like when CEOs hire outside consulting firms to do layoffs for them. Pinning the pain of loss on some scapegoat makes it more bearable.
I use ChatGPT (or Gemini) instead of web searches. You can blame the content and link farms that are top of the search results, and the search engines focusing on advertising instead of search, because we're the product.
Why your friend doesn't know about Wikipedia is another matter; if I wanted a generic info page about some topic I'd go directly there. But if I wanted to know if Bob Geldof's hair is blue, I might ask an LLM instead of reading the whole Wikipedia page.
I also ask LLMs for introductory info about programming topics I don't know about, because I don't want to go to Google and end up on w3schools, geeksforgeeks, and crap like that.
I don't really trust LLMs for advanced programming topics, you know, what people pay me for. But they're fine for giving me a function signature or even a small example.
"Is Bob Geldof's hair blue?" -> Search for Bob Geldof -> Look at images of Bob Geldof.
Intro programming topics can be found at the documentation of the website. Your searching query might be "[programming topic] getting started" and usually if it's a package or a tool there will be documentation. If you want good documentation on web dev stuff that isn't w3schools or geeksforgeeks you can use MDN documentation.
Or, if you really want a general overview there's probably a YouTube video about the topic.
Additionally appending "reddit" to a search will give better results than SEO junk. There are always ways to find quality information via search engines.
Assuming I get images of Bob Geldof. More likely the first page will be pinterest login-required results.
> there's probably a YouTube video about the topic.
Life's too short to watch talking heads about ... you know, WRITING code ...
> can be found at the documentation of the website
Seriously? Maybe for the top 500 npm packages. Not for the more obscure libraries that may have only some doxygen generated list of functions at best.
You do realize Google/Bing/DDG/Kagi all have an Images tab, right? Come on.
> Life's too short to watch talking heads about ... you know, WRITING code ...
If I want a high level overview of what the thing even is, a YouTube video can be useful since there will be explanations and visual examples. You can read documentation as well. For example, if I want a slower overview of something step by step, or a talk at a conference about why to use this thing, YouTube can be helpful. I was just looking at videos about HTMX this weekend, hearing presentations by the authors and some samples. That's not saying if I actually use the thing I won't be reading the documentation, it's more just useful for understand what the thing is.
> Seriously? Maybe for the top 500 npm packages. Not for the more obscure libraries that may have only some doxygen generated list of functions at best.
How do you expect your LLM to do any better? If you're using some obscure package there will probably be documentation in the GitHub README somewhere. If it's horrible documentation you can read the Typescript types or do a code search on GitHub for examples.
This is all to say that I generally don't trust LLM output because I have better methods of finding the information LLMs are trained on. And no hallucinations.
Realistically my guess is that the bar for broad knowledge and ability to get to details quickly will increase. There's a lot of value in understanding multiple disciplines at a mediocre level if you can very quickly access the details when needed. Especially since learning speed tends to get slower and slower the deeper you go.
Also since every time I've needed to do something complicated, even if I knew the details it was important enough to double check my knowledge anyway.
We don't teach slide rules and log tables in school anymore. Calculators and computers have created a huge metacognitive laziness for me, and I teach calculus and have a PhD in statistics. I barely remember the unit circle except for multiples of pi/4 radians. I can do it in multiples of pi/6 but I'm slower.
But guess what? I don't think I'm a worse mathematician because I don't remember these things reflexively. I might be a little slower getting the answer to a trivial problem, but I can still find a solution to a complex problem. I look up integral forms in my pocket book of integrals or on Wolfram Alpha, because even if I could derive the answer myself I don't think I'd be right 100% of the time. So metacognitive laziness has set in for me already.
But I think as long as we can figure out how to stop metacognitive laziness before it turns into full-fledged brain-rot, then we'll be okay. We'll survive as long as we can still teach students how to think critically, and figure out how to let AI assist us rather than turn us into the humans on the ship from Wall-E. I'm a little worried that we'll make some short-term mistakes (like not adapting our curriculum fast enough), but it will work out.
Even outside of math and computers, when was the last time you primed a well pump or filled an oil lamp? All of these tasks have been abstracted away, freeing us to focus on ever-more-specialized pursuits. Those that are useful will too be abstracted away, and for the better.
They have not been abstracted away, they have been made obsolete. Significant difference.
The danger with LLMs is people will never learn tasks that are still needed.
I don't have to prime a well pump any more because my house and workplace are hooked into the municipal water system. I don't have to prime a pump because that task has gotten so abstract as to become turning a faucet handle. But engineers at the municipal water plant do have to know how to do this task.
Similarly, filling an oil lamp and lighting it is now abstracted for normal people as flipping a light switch (maybe changing a light bulb is a more appropriate comparison). But I actually have filled an oil lamp when I was a kid because we kept "decorative" hurricane lamps in my house that we used when the power went out. The exact task of filling an oil lamp is not common, but filling a generator with fuel is still needed to keep the lights on in an emergency, although it is usually handled by the maintenance staff of apartment buildings and large office buildings.
But man I cringe when I see 18 year old students reach for a calculator to multiply something by .1.
Personally speaking, I find being able to ask ChatGPT continually more nuanced questions about an initial answer the one clear benefit over a Google search, where I have diminishing marginal returns on my inquisitiveness for the time invested over subsequent searches. The more precisely I am able to formulate my question on a traditional search engine, the harder it is for non-SEO optimized results to appear: it's either meant more for a casual reader with no new information, or is a very specialized resource that requires extensive professional background knowledge. LLMs really build that bridge to precisely the answers I want.
I've heard stories of junior engineers falling into this trap. They asked the chatbot everything rather than exposing their lack of knowledge to their coworkers. And if the chatbot avoids blatant mistakes, junior engineers won't recognize when the bot makes a subtle one.
If I am not motivated to find them and test my own knowledge, how do I change that motivation?
It is interesting that you describe this as "the answers you want" and not "the correct answer to the question I have"
Not criticising you in particular, but this approach sounds to me like it has a good chance of just reinforcing existing biases
In fact the approach sounds very similar to "find a wikipedia article and then go dig through the sources to find the original place that the answers I want were published"
One thing I do have to be mindful of is asking the AI to check for alternatives, for dissenting or hypothetical answers, and sometimes I just ask it to rephrase to check for consistency.
But doing all of that still takes way less time than searching for needles buried by SEO optimized garbage and well meaning but repetitious summaries.
I do want to re-iterate that I didn't intend to accuse you of only seeking to reinforce your biases
I read into your phrasing not to needle you, but because it set off some thoughts in my head, that's all
Thanks for being charitable with your reply, and I appreciate your thoughts
“Verify that” and then ChatGPT will do a real time search and I can read web pages. Occasionally, it will “correct itself” once it does a web search
There was a story a couple days ago about a neural network built on a single photonic chip. I fed the paper to ChatGPT and was able to use it to develop a much more meaningful and comprehensive understanding of what the chip actually delivered, how it operated, the fundamental operating principles of core components and how it could be integrated into a system.
The fact that I now have a tireless elucidator on tap to help explore a topic (hallucination caveats notwithstanding) actually increases my motivation to explore dense technical information and understanding of new concepts.
The one area where I do think it is detrimental is my willingness to start writing content on a proverbial blank sheet of paper. I explore the topic with ChatGPT to get a rough outline, maybe some basic content, and then take it from there.
The more youngsters skip the hassle of banging their heads against some topic, the less able they will be to learn at a later age.
There's more to learning than getting information, it's also about processing it (which we are offloading to LLMs). In fact I'd say that the whole point of going through school is to learn how to process and absorb information.
That might be the cognitive laziness.
but the limitation in publishing a dialogue is that you'd just get to publish one of them and each reader is going to come in with different questions and goals for what they want out of the paper.
You're still going to learn either way, but going through the hassle of understanding the system yourself means developing a method for debugging it and learning about it along the way...
Of course a senior could point you to the issue right away, probably an LLM too, and even provide a learning opportunity, but does it have the same lasting impact as overcoming the burden yourself?
Which one makes a more lasting effect on your abilities and skills?
Again, LLMs are a tool, but if people in school/college start using it to offload the reasoning part they are not developing it themselves.
> To realize a programmable coherent optical activation function, we developed a resonant electro-optical nonlinearity (Fig. 1(iii)). This device directs a fraction of the incident optical power |b|² into a photodiode by programming the phase shift θ in an MZI. The photodiode is electrically connected to a p–n-doped resonant microring modulator, and the resultant photocurrent (or photovoltage) detunes the resonance by either injecting (or depleting) carriers from the waveguide.
It becomes very difficult to pick apart each thing, find a suitable explanation of what the thing (eg. MZI splitter, microring modulator, how a charge detunes the resonance of the modulator) is or how it contributes to the whole.
Picking these apart and recombining them with the help of something like ChatGPT has given me a very rapid drill-down capability into documents like this. Then re-reading the paper allows me to take in the information the way it's presented.
If this type of content was material to my day job it would be another matter, but this is just hobby interest. I'm just not going to invest hours trying to figure it out.
In practice, this lets you reasonably process the knowledge from a lot more papers than you otherwise would, which I think is a win. The way we learn is evolving, as it has in the past, and that's a good thing.
Though I agree that this will be another way for lazy children to avoid learning (by just letting AI do the exercises), and we'll need to find a good solution for that, whatever it may be.
You AREN'T learning what that paper is saying; you're learning parts of what the LLM says is useful.
If you read just theorems, you aren't learning math. You need to read the proof too, and not just a summary of the proof.
This is a pretty big caveat to the goal of
> develop a much more meaningful and comprehensive understanding
Which is still my biggest issue with LLMs. The little I use of them, the answers are still confidently wrong a lot of the time. Has this changed?
When dealing with topics I'm familiar with, I've found the hallucinations have dropped substantially in the last few years from GPT2 to GPT3 to GPT4 to 4o, especially when web search is incorporated.
LLMs perform best in this regard when working with existing text that you've fed them (whether via web search or uploaded text/documents). So if you paste the text of a study to start the conversation, it's a pretty safe bet you'll be fine.
If you don't have web search turned on, I'd still avoid treating the chat as a search engine though, because 4o will still get little details wrong here and there, especially for newer or more niche topics that wouldn't be as well-represented in the training data.
Trying a share link, hope it works:
https://kagi.com/search?q=what+factors+affect+the+freezing+p...
I'm really happy about being able to share Kagi results. It's allowed me to slip Kagi into a few discussions to raise awareness. Also, being on bluesky helps because so many folks complain about google but they're not aware of better options.
One thing I wanted to raise: please keep the Kagi homepage as light as possible. It's tempting to keep adding little things and over time you get a big mess.
Your comment seems like a good example of metacognitive laziness: not bothering to formulate your own definition from the examples in the abstract and the meaning of the words themselves. Slothful about the process of thinking for yourself.
The writer has the responsibility to be clear.
So metacognitive laziness would be the lack of such processes
> When using AI in learning, learners should focus on deepening their understanding of knowledge and actively engage in metacognitive processes such as evaluation, monitoring, and orientation, rather than blindly following ChatGPT's feedback solely to complete tasks efficiently.
- I realized about 20y-25y ago that I could run a Web search and find out nearly any fact, probably one-shot but maybe with 2-3 searches' worth of research
- About 10-15y ago I began to have a connected device in my pocket that could do this on request at any time
- About 5y ago I explicitly *stopped* doing it, most of the time, socially. If I'm in the middle of a conversation and a question comes up about a minor fact, I'm not gonna break the flow to pull out my screen and stare at it and answer the question, I'm gonna keep hanging out with the person.
There was this "pub trivia" thing that used to happen in the 80s and 90s where you would see a spirited discussion between people arguing about a small fact which neither of them immediately had at hand. We don't get that much anymore because it's so easy to answer the question -- we've just totally lost it.
I don't miss it, but I have become keenly aware of how tethered my consciousness is to facts available via Web search, and I don't know that I love outsourcing that much of my brain to places beyond my control.
And work on learning some trivia purely to help you out with memory.
I have this business idea for a pub in a Faraday cage that would make cheating impossible at pub trivia (added bonus: it also removes any other reason for anyone to be on their phones!)
A good example, but imagine the days of our ancestors:
Remember that game we used to play, where we'd find out who could see birds from the farthest distance? Yeah, glasses ruined that.
The anecdotes from practitioners using GenAI in this way suggest it’s a good tool for experienced developers because they know what to look out for.
Now we admit folks who don’t know what they’re doing and are in the process of learning. They don’t know what to look out for. How does this tech help them? Do they know to ask what a use-after-free is or how cache memory works? Do they know the names of the algorithms and data structures? Do they know when the GenAI is bullshitting them?
Studies such as this are hard but important. Interesting one here even though the sample is small. I wonder if anyone can repeat it.
Anecdote from a friend who teaches CS: this year a large number of students started adding unnecessary `break` instructions to their C code, like so:
    while (condition) {
        do_stuff();
        if (!condition) {
            break;
        }
    }
They asked around and realized that the common thread was ChatGPT - everyone who asked how loops work got a variation of "use break() to exit the loop", so they did. Given that this is not how you do it in CS (not only is it unnecessary, but it also makes your formal proofs more complex), they had to make a general one-time exception and add disclaimers in exams reminding them to do it "the way you were taught in class".
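For contrast, a minimal sketch of what the course presumably expects (the names `condition` and `do_stuff` are just placeholders carried over from the example above): the while condition already decides when the loop ends, so no extra check-and-break is needed.

    /* The while condition is re-checked on every iteration,
       so the loop exits on its own once it becomes false. */
    while (condition) {
        do_stuff();
    }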
Well - they know that break is not a function and you don't. Thanks ChatGPT.
The exercise was to implement binary search given the textbook specification, without any errors - an algorithm they had probably implemented in their first-year algorithms course at the very least. The students could write any tests they liked and add any assertions they thought would be useful. My colleague verified each submission against a formal specification. The majority of submissions contained errors.
For a simple algorithm that a student at that level could be reasonably expected to know well!
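For context, here's a minimal sketch of the kind of submission the exercise presumably asks for, assuming the usual textbook specification (sorted int array, return the index of the key or -1 if absent); the function name and test values are mine, not from my colleague's actual spec. The overflow-safe midpoint and the half-open bounds are exactly the details that tend to slip past informal testing.

    #include <assert.h>
    #include <stddef.h>

    /* Textbook binary search over a sorted array `a` of length `n`.
       Returns the index of `key`, or -1 if it is not present.
       The midpoint is computed as lo + (hi - lo) / 2 to avoid
       overflow, one of the classic places submissions go wrong. */
    static long binary_search(const int *a, size_t n, int key) {
        size_t lo = 0, hi = n;   /* half-open search range [lo, hi) */
        while (lo < hi) {
            size_t mid = lo + (hi - lo) / 2;
            if (a[mid] < key)
                lo = mid + 1;
            else if (a[mid] > key)
                hi = mid;
            else
                return (long)mid;
        }
        return -1;
    }

    int main(void) {
        int a[] = {1, 3, 5, 7, 9};
        assert(binary_search(a, 5, 7) == 3);   /* present */
        assert(binary_search(a, 5, 4) == -1);  /* absent */
        assert(binary_search(a, 0, 1) == -1);  /* searching zero elements */
        return 0;
    }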
Now... ChatGPT and other LLM-based systems, as far as I understand, cannot do formal reasoning on their own. It cannot tell you, with certainty, that your code is correct with regards to a specification. And it can't tell you if your specification contains errors. So what are students learning using these tools?
(This might work best if you have one LLM critique the code generated by another LLM, eg bouncing back and forth between Claude and ChatGPT)
You can know enough in X to allow you to do Y together with X, which you might not have been able to before.
For example, I'm a programmer, but horrible at math. I want to develop games, and I technically could, but all the math stuff makes it a lot harder sometimes to make progress. I've still managed to make and release games, but math always gets in the way. I know exactly how I want it to behave and work, but I cannot always figure out how to get there. LLMs help me a lot with this, where I can isolate those parts into small black boxes that I know they give me the right thing, but not 100% sure about how. I know when the LLM gives me the incorrect code, because I know what I'm looking for and why, only missing the "how" part.
Basically like having 3rd party libraries you don't fully understand the internals of, but can still use granted you understand the public API, except you keep in your code base and pepper it with unit tests.
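As a toy illustration of that workflow (everything here is hypothetical, not from an actual project): a small 2D reflection helper of the kind I'd have an LLM write for me, treated as a black box but pinned down by asserts that encode the behaviour I actually need.

    #include <assert.h>
    #include <math.h>

    typedef struct { float x, y; } vec2;

    /* Reflect velocity v off a surface with unit normal n:
       r = v - 2 (v . n) n. I don't re-derive this each time;
       the tests below capture what I expect it to do. */
    static vec2 reflect(vec2 v, vec2 n) {
        float d = v.x * n.x + v.y * n.y;
        vec2 r = { v.x - 2.0f * d * n.x, v.y - 2.0f * d * n.y };
        return r;
    }

    int main(void) {
        /* A ball moving down-right bounces off a horizontal floor
           (normal pointing up): x is preserved, y flips sign. */
        vec2 v = { 1.0f, -1.0f };
        vec2 up = { 0.0f, 1.0f };
        vec2 r = reflect(v, up);
        assert(fabsf(r.x - 1.0f) < 1e-6f);
        assert(fabsf(r.y - 1.0f) < 1e-6f);
        return 0;
    }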
No, which is why people who don't pick up on the nuances of programming - no matter how often they use LLMs - will never be capable programmers.
And well, let me put it this way: deepseek-r1 won't be replacing anyone anytime soon. It generates a massive amount of texts, mostly nonsensical and almost always terribly, horribly wrong. But inexperienced devs or beginners, especially beginners, will be confused and will be led down the wrong path, potentially outsourcing rational thought to something that just sounds good, but actually isn't.
Currently, over-reliance on the ramblings of a statistical model seems detrimental to education and ultimately to the performance of future devs. For what is probably the last generation of old-school software engineers, trained on coffee and tears of frustration, who really had to work out code and architecture themselves, golden times might lie ahead, because someone will have to fix the garbage produced en masse by LLMs.
Are you considering the full "reasoning" it does when you're saying this? AFAIK, they're meant to be "rambling" like that, exploring all sorts of avenues and paths before reaching a final conclusive answer that is still "ramble-like". I think the purpose seems to be to layer something on top that can finalize the answer, rather than just taking whatever you get from that and use it as-is.
> Currently, over-reliance on the ramblings of a statistical model seems detrimental to education and ultimately to the performance of future devs. For what is probably the last generation of old-school software engineers, trained on coffee and tears of frustration, who really had to work out code and architecture themselves, golden times might lie ahead, because someone will have to fix the garbage produced en masse by LLMs.
I started coding just before Stack Overflow got popular, and remember the craze when it did. Blog posts about how Stack Overflow would create lazy devs were all over the place, people saying it was the end of the real developer. Not arguing against you or anything, I just find it interesting how sentiments like these keep repeating over time, with just minor details changing.
2. Leonhard Euler criticized the use of logarithm tables in calculating: in his 1748 "Introductio in analysin infinitorum" he insisted on deriving logarithms from first principles
3. William Thomson (Lord Kelvin) initially dismissed mechanical calculators, stating in an 1878 lecture at Glasgow University that they would make students "neglect the cultivation of their reasoning powers"
4. Henry Ford in his autobiography "My Life and Work" (1922) quoted a farmer who told him in 1907 that gasoline tractors would "make boys lazy and good for nothing" and they'd "never learn to farm"
5. In 1877, the New York Times published concerns from teachers about students using pencils with attached erasers, claiming it would make them "careless" because they wouldn't have to think before writing. The editorial warned it would "destroy the discipline of learning"
6. In "Elements of Arithmetic," (1846) Augustus De Morgan criticized the use of pre-printed multiplication tables, saying students who relied on them would become "mere calculative mechanism" instead of understanding numbers
7. In his 1906 paper "The Menace of Mechanical Music," John Philip Sousa attacked the phonograph, writing that it would make people stop learning instruments because "the infant will be taught by machinery" and musical education would become "unnecessary"
8. In his 1985 autobiography "Surely You're Joking, Mr. Feynman!" Richard Feynman expressed concern about pocket calculators and students losing the ability to estimate and understand mathematical relationships
I could go on (Claude wrote 15 of them!). Twenty years from now (assuming AI hasn't killed us all) we'll look back and think that working with an LLM isn't the crutch people think it is now.
Write out a list of statements about how each generation thinks the next is lazy because they have it easy. For example, people who had to memorize trig or log tables would think those who had books to refer to were lazy. Those who used slide rules thought calculator-users were lazy. People who plowed with a horse thought early tractors were cheating. etc. etc. List as many examples as you can up to 50, leaning toward the mental rather than physical, but including both, and give specifics rather than generalities. My examples above are at the edge of what's acceptable; try to do better than I did.
That got me a bunch of abstractions like: "Librarians who memorized the Dewey Decimal System dismissed those who used card catalogs"
So I replied:
Sorry, I should have been clearer: this should be real-world examples, with cites if possible. As one example, your point about photographers is no good unless some specific manual photographer actually said something about light meter users -- e.g. "Ansel Adams once said that..." and it needs to be not-made-up.
That got me the first three. After I confirmed that those were good I got 4-8. I followed that with:
more please. it's okay to add in a few "XYZ is supposed to have said that..." as long as they aren't completely made up, and they are in the minority.
That got me the rest.
Maybe I'm trying to read and understand it too quickly, but I don't see anything in the abstract that supports that strong conclusion.
> The results revealed that: (1) learners who received different learning support showed no difference in post-task intrinsic motivation; (2) there were significant differences in the frequency and sequences of the self-regulated learning processes among groups; (3) ChatGPT group outperformed in the essay score improvement but their knowledge gain and transfer were not significantly different. Our research found that in the absence of differences in motivation, learners with different supports still exhibited different self-regulated learning processes, ultimately leading to differentiated performance.
The ChatGPT group performed better on essay scores, they showed no deficit in knowledge gain or transfer, but they showed different self-regulated learning processes (not worse or better, just different?).
If anything, my own conclusion from the abstract would be that ChatGPT is helpful as a learning tool as it helped them improve essay scores without compromising knowledge learning. But again, I only read the abstract, maybe they go into more details in the paper that make the abstract make more sense.
Some kids might pick up a calculator and then use it to see geometric growth, or look for interesting repeating patterns of numbers.
Another kid might just use it to get their homework done faster and then run outside and play.
The second kid isn't learning more via the use of the tool.
So the paper warns that the use of LLMs doesn't necessarily change what the student is interested in and how they are motivated. That we might need to put in checks for how the tool is being used into the tool to reduce the impact of scenario 2.
From a learning perspective, it can also be a short cut to getting something explained in several different ways until the concept "clicks".
However, I agree that that doesn’t really seem to be a negative over other methods.
That's the most convoluted conclusion I've ever seen.
> What is particularly noteworthy is that AI technologies such as ChatGPT may promote learners' dependence on technology and potentially trigger “metacognitive laziness”.
Calculator laziness has long been known. It doesn't cause meta-laziness, but specific laziness.
I tend to learn asking questions, I did this using Anki cards for years (What is this or that?) and find the answer on the back of the index card. Questions activate my thinking more than anything, and of course my attempt at answering the question in my own terms.
My motto is: Seek first to understand, then to be understood (Covey). And I do this in engaging with people or a topic: by asking questions.
Now I do this with LLMs. I have been exploring ideas I would never have explored had there not been LLMs, because I would not have had the time to research material for learning, read it, and create Q&A material for myself.
I even use LLMs to convert an article into Anki cards using Obsidian, Python, LLMs, and the Anki app.
Crazy times we are in.
This is very well-studied: https://en.wikipedia.org/wiki/Testing_effect [not a high-quality article, but should give an overview]
Humans are lazy by nature, they seek shortcuts.
So given the choice between years of rote learning for an education which in most cases is simply a soon-to-be-forgotten certification, and watching TikTok while letting ChatGPT do the lifting - this is all predictable, even without Behavioral Design, Hooked, etc.
And that the benefits usually rise with IQ level - nothing new here, that's the very definition of IQ.
Learning and academia is hard, and even harder for those with lower IQ scores.
A fool with a tool is still a fool and vice versa.
Motivation also seems to be at an all-time low. Why put in hours when a prompt can work wonders?
Reading a book is a badge of honor nowadays more than ever.
This is not obvious to me, and certainly is not the "definition" of IQ. There are tools that become less useful the more intelligent you are, such as multiplication tables. IQ is defined by a set of standardized tests that attempt to quantify human intelligence, and has some correlations with social, educational and professional performance, but it's not clear why it would help with use of AI tools.
Would you argue that having books/written words also made people more lazy and be able to remember less? Because some people argued (at the time) that having written words would make humanity less intellectual as a whole, but I think consensus is that it led to the opposite.
Most folks are projecting what the title says into their own emotion space and then riffing on that.
The authors even went so far as to boil the entire paper down into bullet points, you don't even need the pdf.
Yeah, or the abstract which is a bit vague.
And yes indeed, their ability to answer basic questions about coding on the same exam has drastically dropped versus last year.
There is a "plato" story on how he laments the invention of writing because now people don't need to memorize speeches and stuff.
I think there is a level of balance. Writing gave us enough efficiencies that the learned laziness made us overall more effective.
The internet in 2011 made us a bit less effective. I am not gonna lie; I spent a lot more time being able to get resources, whereas I would have to struggle on my own to solve a problem. You internalize one more than the other, but is it worth the additional time every time?
I worry about current students learning through LLMs just like I would worry about a student in 2012 graduating in physics when such a student had constant access to wolfram alpha.
Metacognition is really how the best of the best can continue to be at their best.
And if you don't use it, you lose it.
I’m also a skeptic of students using and relying on ChatGPT, but I’m cautious about using this abstract to come to any conclusions without seeing the full paper especially given that they’re apparently using “metacognitive laziness” in a specific technical way we don’t know about if we haven’t read the paper.
I'm not surprised if this makes some people lazier, since you don't need to do the legwork of reading - but then, how many people already read only the headlines of articles before they share them?
You can interrogate it at least. "Are you sure that's the correct answer? Re-think from the beginning without any assumptions" and you'll get a checklist you can mentally/practically go through yourself to validate.
Now, "Claude, fix that for me".
It has "AI" in the title, so it's a hot take.
Ridiculous that academic work on the technology of education is behind a paywall and not open access. Stinks.
I understand it is a bit apples to oranges, but I'm curious to hear people's takes.
I think a comparison with calculators is possible, but the degree to which calculators are capable of assisting us is so incomparably smaller that the comparison would be meaningless.
Smart phones changed society a lot more than calculators did and now AI is starting to do the same, albeit in a more subtle manner.
Treating AI like it's just a calculator seems naïve/optimistic. We're still reeling from the smart phone revolution and have not solved many of the issues it brought upon its arrival.
I have a feeling the world has become a bit cynical and less motivated to debate how to approach these major technological changes. There have been too many of them in too short a time, and now everyone has a whatever attitude towards the problems these advancements introduce.
I'm sure my friends will RUSH to read the article now...
Even if the computer is doing all the thinking, it's still a tool. Do you know what to ask it? Can you spot a mistake when it messes up (or you messed up the input)? Can you simplify the problem and figure out what the important parts of the problem are? Do you even know to do any of that?
Sure, thinking machines will sometimes be autonomous and not need you to touch them. But when that's the case, your job won't be to just nod along to everything the computer says, you won't have a job anymore and you will need to find a new job (probably one where you need to prompt and interpret what the AI is doing).
And yes, there will be jobs where you just act as an actuator for the thinking machine. Ask an Amazon warehouse worker how great a job that is :/
Everything is the same as with calculators.
It’s not to say we shouldn’t do our best to understand and provide guardrails, but the kids will be fine.
"People have been complaining about this for thousands of years" is a potent counterargument to a lot of things, but it can't be applied to things that really didn't exist even a decade ago.
Moreover, the thing that people miss about "people have been complaining about this for thousands of years" is that the complaints have often been valid, too. Cultures have fallen. Civilizations have collapsed. Empires have disintegrated. The complaints were not all wrong!
And that's on a civilization-scale. On a more mundane day-to-day scale, people have been individually failing for precisely the same reasons people were complaining about for a long time. There have been lazy people who have done poorly or died because of it. There have been people who refused to learn who have done poorly or died because of it.
This really isn't an all-purpose "just shrug about it and move on, everything's been fine before and it'll be fine again". It hasn't always been fine before, at any scale, and we don't know what impact unknown things will have.
To give a historical example... nay, a class of historical examples... there are several instances of a new drug being introduced to a society, and it ripping through that society that had no defenses against it. Even when the society survived it, it did so at great individual costs, and "eh, we've had drugs before" would not have been a good heuristic to understand the results with. I do not know that AIs just answering everything is similar, but at the moment I certainly can't prove it isn't either.
Most people my age will tell you that they stopped reading as a teenager because of the effect of smartphones. I was a voracious reader and only relearnt to read last year, 10 years after I got my first smartphone as an older teenager. These things are impactful and have affected a lot of people's potential. And also made our generation very prone to mental health issues - something that is really incredibly palpable if you are within gen z social circles like I am. It's disastrous and cannot be overstated. I can be very sure I would be smarter and happier if technology had stagnated at the level it was at when I was a younger child/teen. The old internet and personal computers, for example, only helped me explore my curiosity. Social media and smartphones have only destroyed it. There are qualitative differences between some technological advancements.
Not to mention the fact that gen alpha are shown to have terrible computer literacy because of the ease of use, discouragement of customisation, and corporate monopoly over smartphones. This bucks the trend, from gen x to gen z, of generations becoming more and more computer native. Clearly, upward trends in learning due to advancements in technology can be reversed. They do not always go up.
If kids do not learn independent reasoning because of reliance on LLMs, yes, that will make people stupider. Not all technology improves things. I watched a really great video recently where someone explained the change in the nature of presidential debates through the ages. In Victorian times, they consisted of hours-long oratory on each side, with listeners following attentively. In the 20th century the speeches gradually became a little shorter and more questions were added to break things up. In most recent times, every question has come with a less-than-a-minute answer, simpler vocabulary, few hard facts or statistics, etc. These changes map very well to changes in the depth at which people were able to think, given the primary information source they were using. There is a good reason why reading is still seen as the most effective form of deep learning despite technological advancement. Because it is.
Maybe we'll end up as a society of a few elites who still know how to research, think, and/or write, with LLMs digesting that and regurgitating it for the masses.
An example, I have no clue about React. I do know why I don’t like to use React and why I have avoided it over the years. I describe to some ML tool the difficulties I’ve had learning React and using it productively .. and voila, it plots a chart through the knowledge that, kinda, makes me want to learn React and use it.
It’s like, the human ability to form an ontology in the face of mystery, even if it is inaccurate or faulty, allows the AI to take over and plot an ontological route through the mystery into understanding.
Another thing I realized lately, as ML has taken over my critical faculties, is that it’s really only useful for things that are already known by others. I can’t ask ML to give me some new, groundbreaking idea about something - everything it suggests has already been thought, somewhere, by a real human - and thus it’s not new or groundbreaking. It’s just contextually - in my own local ontological universe - filling in a mystery gap.
Pretty fun times we’re having, but I do fear for the generations that will know and understand no other way than to have ML explain things for them. I don’t think we have the ethics tools, as cultures and societies, to prevent this from becoming a catastrophe of glib, knowledge-less folks, collapsing all knowledge into a raging dumpster fire of collective reactivity, but I hope someone is training a model, somewhere, to rescue us from this, somehow ..
--Socrates on writing
This is no hyperbole - humans have to constantly fight the degeneracy of our knowledge systems, which is to say that knowledge has to be generated and communicated - it can’t just “exist” and be useful, it has to be applied to be useful. Technology of knowledge which doesn’t get applied, does not persist, or if it does (COBOL), what once was common becomes arcane.
So, if there is hope, it lies with the proles: the way everyday people use ML is probably the key to all of this. It’s one thing to know how to prompt an LLM to give you a buildable source tree; it’s another thing entirely to use it somehow to figure out what to make out of the leftover ingredients in the fridge.
Those recipes and indeed the applications of the ingredients, are based on human input and mores.
So the question for me, still really unanswered, is: How long will it take until those fridge-ingredient recipes become bland, tasteless and grey?
I think this underscores the imperative that AI and ML must never become so pervasive that we don’t, also, write things down for ourselves. Oh, and read a lot, of course.
It seems we need to stop throwing books away. Oh, and encourage kids to cook, and create their own recipes... hopefully they’ll have time and resources for that kind of lifestyle…
I guess that is the curse of evolution/specialization.
As long as humans remain aware that they are engaging with an AI/ML, we might still have a chance. Computers definitely need to be identifiable as such.
..."laziness"...
In the battle cry of the philosopher: DEFINE YOUR TERMS!!
What they really mean: new and different. Outside-the-box. "Oh no, how will we grade this?!?" a threat to our definition and control of knowledge.