I've seen many other people who have essentially become meatspace analogues for AI applications. It's sad to watch this happen while listening to many of the same people complain about how AI will take their jobs, without realizing that they've _given_ AI their jobs by ensuring that they add nothing to the process. I don't really understand yet why people don't see that they are doing this to themselves.
I still use StackOverflow and LLMs, but if those things were available when I was learning I would probably not have learnt as much.
The change with LLMs is that I can now just ask my hare-brained questions first and figure out why it was a stupid question later.
The problem with Stack Overflow is not that it makes you do the work—that’s a good thing—but that it’s too often too pedantic and too inattentive to the question to realise the asker did put in the work, explained the problem well, and the question is not a duplicate. The reason it became such a curmudgeonly place is precisely due to a constant torrent of people treating it like you described it, not putting in the effort.
SO is infamous for treating question-askers badly.
I have used LLMs for some time, and have no intentions of ever going back to SO. I get tired of being insulted.
> The only stupid question is the one you don't ask.
- A poster on my old art teacher's studio wall.
One of the reasons that I strive to behave, hereabouts, is that I feel the need to atone.
It can be quite difficult to hold my tongue/keyboard, though. I feel as if it’s good exercise.
StackOverflow intimidated me for reasons you say. What is it with the power trip that some of these forum mods have?
“Breaking the chain” is quite difficult, because it means not behaving in the manner that every cell in your body demands.
(Mostly this ended up being weird Ubuntu things relating to usecases specific to robots... not normal programming stuff)
I can’t speak to whether this is a good approach for anyone else (or even for myself ~15 years later) but it served to ingrain in me the habit of questioning everything and poking at things from multiple angles to make sure I had a good mental model.
All that is to say, there is something to be said for answering “stupid” questions yourself (your own or other people’s).
Way back in "Ye olden days" (Apple ][ era) my first "computer teacher" was a teacher's assistant who had wrangled me (and a few other students) an hour a day each on the school's mostly otherwise unused Apple ][e. He plopped us down in front of the thing with a stack of manuals, magazines, and floppy discs and let us have at it. "You wanna learn computer programming? You're gonna have to read..." :)
With LLMs you can start with first principles, confirm basic knowledge (of course, it hallucinates but I find it's not that hard to verify things most of the time) or just get pointers where to dive deeper.
A second major issue with SO is that answers decay over time. So a good answer back in 2014 is a junk answer today. Thus, I would get drive by downvotes on years old discussions, which is simply irritating.
So I quit SO and never bothered to answer another single question ever again.
SO has suffered from enshittification, and though I despise that term, it does sort of capture how sites like SO went from excellent resources into cesspools of filth and fools.
That LLMs are trained on that garbage is amusing.
It was horrible. Because it wasn't about "figuring things out for yourself." I mean, if the answer was available in a programming language or library manual, then debugging was easy.
No, the problem was you spent 95% of your debugging time working around bugs and unspecified behavior in the libraries and API's. Bugs in Windows, bugs in DLL's, bugs in everything.
Very frequently something just wouldn't work even though it was supposed to, you'd waste an entire day trying to get the library call to work (what if you called it with less data? what if you used different flags?), and then another day rewriting your code to use a different library call, and praying that worked instead. The amount of time utterly wasted was just massive. You didn't learn anything. You just suffered.
In contrast, today you just search for the problem you're encountering and find StackOverflow answers and GitHub issues describing your exact problem, why it's happening, and what the solution is.
I'm so happy people today don't suffer the way we used to suffer. When I look back, it seems positively masochistic.
TBF, bugs in some framework you're using still happen. The problem wasn't eliminated, just moved to the next layer.
Those debugging skills are the most important part of working with legacy software (which is what nearly all industry workers work in). It sucks but is necessary for success in this metric.
My point is that I can frequently figure out how to work around them in 5 minutes rather than 2 days, because someone else already did that work. I can find out that a different function call is the answer, or a weird flag that doesn't do what the documentation says, or whatever it is.
And my problem of it taking two days to debug something is eliminated, usually.
Guess I'm just dumb then. I'm still taking days to work around some esoteric, underdocumented API issues in my day-to-day work.
The thing is these API's are probably just as massive as old-school OS codebases, so I'm always tripping into new landmines. I can be doing high level gameplay stuff one week. Then the next week I need to figure out how authoring assets works, and then the next week I'm performing physics queries to manage character state. All in the same API that must span 10s of millions of lines of code at this point.
There are tons of obfuscated Java jar libraries out there that are not upgradeable that companies have built mission critical systems around only to find out they can't easily move to JVM 17 or 25 or whatever and they don't like hearing that.
I have not, but at the beginner level you don't really need it; there are tons of tutorials and language documentation that are easier to understand. Also, beginners feel absolutely discouraged from asking anything, because even if the question is not a real duplicate, you use all the terms wrong and thus get downvoted to hell, and then your question is marked as a duplicate of something that doesn't even answer your question.
Later it's quite nice to ask for clarifications of e.g. the meaning of something specific in a protocol or the behaviour of a particular program. But quite quickly you don't actually get any satisfying answers, so you revert to just reading the source code of the actual program and are surprised how easy that actually is. (I mean, it's still hard every time you start with a new, unknown program, but it's easier than expected.)
Also, when you implement a protocol, asking questions on StackOverflow doesn't scale. Not because of the time you need to wait for answers; even if that were zero, it would still take too long and be too unsatisfying a way to develop a holistic enough understanding to write the code. So you start reading the RFCs and quickly appreciate how logical and understandable they are. At first you curse how unstructured everything is, and then you recognize that the order follows what you need to write, so you can just trust the text and write the algorithm down. Then you see that the order in which the protocol is described actually works quite well for async and wonder what the legacy code was doing, because not deviating from the standard is actually easier.
At some point you don't understand the standard, there will be no answer on StackOverflow, the LLM just agrees with you for every conflicting interpretation you suggest, so you hate everything and start reading other implementations. So no, you still need to figure out a lot for yourself.
That way of life is gone for me. I've got a smartphone and I doomscroll to my detriment. What's new and fascinating to me is the models themselves. https://fi-le.net/oss/ currently trending here is the tip of a whole new area of study and work.
(1)https://archive.org/details/borland-turbo-pascal-6.0-1990
However, on the intrinsic value of these new tools when developing habits and skills, I just think about the irreplaceable role that the (early) internet played in my own development and how incredible an LLM would have been. People are still always impressed whenever a precocious high schooler YouTubes his way to an MVP SaaS launch -- I hope and expect the first batch of LLM-accompanied youth to emerge will have set their sights higher.
I don't know about that. The early internet taught me I still needed to put in work to find answers. I still had to read human input (even though I lurked) and realize a lot of information was lies and trolls (if not outright scams). I couldn't just rely on a few sites to tell me everything and had to figure out how to refine my search queries. The early internet was like being thrown into a wilderness in many ways; you pick up survival skills as you go along even if no one teaches you.
I feel an LLM would temper all the curiosity I gained in those times. I wouldn't have the discipline to use an LLM the "right way". Clearly many adults today don't either.
2. https://pubmed.ncbi.nlm.nih.gov/25509828/
3. https://www.researchgate.net/publication/392560878_Your_Brai...
I’m pretty convinced it should be used to do things humans can’t do instead of things humans can do well with practice. However, I’m also convinced that Capital will always rely on Labor to use it on their behalf.
They're going to be rather surprised when this doesn't work as planned, for reasons that are both very obvious and not obvious at all. (Yet.)
Compare this body of work to the body of work that has consistently showed social media is bad for you and has done so for many years. You will see a difference. Or if you prefer to focus on something more physical, anthropogenic climate change, the evidence for the standard model of particle physics, the evidence for plate tectonics, etc.
I'm not saying we shouldn't be skeptical that these technologies might make us lazy or unable to perform critical functions of technical work. I think there is a great danger that these technologies essentially fulfill the promise of data science across industries, that is, a completely individualized experience to guide your choices across digital environments. That is not the world that I want to live in. But I also don't think that my mind is turning to mush because I asked Claude Code to write some code to make a catboost model, something that would have taken me a few hours, just to try out some idea.
We don't practice much using the assembler either, or the slide rule. I also lost the skill of starting an old Renault 12 I owned 30 years ago; it is a complex process, believe me, and some owners reached artist level at it.
In an interview setting, while in a meeting, if you're idling on a problem while traveling or doing other work, while you are in an area with weak reception, if your phone is dead?
There are plenty of situations where my problem solving does not involve being directly at my work station. I figured out a solution to a recent problem while at the doctor's office and after deciding to check the API docs more closely instead of bashing my head on the compiler.
>We don't practice much using the assembler either, or the slide rule.
Treating your ability to research and critically think as yet another tool is exactly why I'm pessimistic about the discipline of the populace using AI. These aren't skills you use 9-5 then turn off as you head back home.
The sad truth is that the future will most likely invalidate all "knowledge" besides critical thinking.
When you're unemployed, homeless, or cash strapped for other reasons, as has happened to more than a few HNers in the current downturn, and can't make your LLM payments.
And that doesn't even account for the potential of inequality, where the well-off can afford premium LLM services but the poor or unemployed can only afford the lowest grades of LLM service.
When computers were new, they were on occasion referred to as "electronic brains" due to their capacity for arithmetic.
Humans can, with practice, do arithmetic well.
A Raspberry Pi Zero can do arithmetic faster than the combined efforts of every single living human even if we all trained hard and reached the level of the current record holder.
Should we stop using computers to do arithmetic just because we can also do arithmetic, or should we allow ourselves to benefit from the way that "quantity has a quality all of its own"*?
* Like so many quotes, attributed to lots of different people. Funny how unreliable humans can be with information :P
Geesh, doorbell again. Last mile problem? C'mon. Whoever solves the last 30 feet problem is the real hero.
Paintball gun, except with cream fillings instead of paint, and softer shells so they can be fired straight into people's mouths.
Didn't we?
Maybe it has something to do with the purveyors of these products
- claiming they will take the jobs
- designing them to be habit-forming
- advertising them as digital mentats
- failing to advertise risks of using them
Some people benefit from the relaxing effects of a little bit. It helped humanity get through ages of unsafe hygiene by acting as a sanitizer and preservative.
For some people, it is a crutch that inhibits developing safe coping mechanisms for anxiety.
For others it becomes an addiction so severe, they literally risk death if they don't get some due to withdrawal, and death by cirrhosis if they keep up with their consumption. They literally cannot live without it or with it, unless they gradually taper off over days.
My point isn't that AI addiction will kill you, but that what might be beneficial might also become a debilitating mental crutch.
Better analogy is processed food.
It makes calories cheaper, it’s tasty, and in some circumstances (e.g. endurance sports or backpacking) it materially enhances what an ordinary person can achieve. But if you raise a child on it, to where it’s what they reach for by default, they’re fucked.
I was building a little roguelike-ish sort of game for myself to test my understanding of Raylib. I was using as few external resources as possible outside of the cheatsheet for functions, including avoiding AI initially.
I ran into my first issue when trying to determine line of sight. I was naively simply calculating a line along the grid and tagging cells for vision if they didn't hit a solid object, but this caused very inconsistent sight. I tried a number of things on my own and realized I had to research.
All of the search results I found used Raycasting, but I wanted to see if my original idea had merit, and didn't want to do Raycasting. Finally, I gave up my search and gave copilot a function to fill in, and it used Bresenham's Line Algorithm. It was exactly what I was looking for, and also, taught me why my approach didn't work consistently because there's a small margin of error when calculating a line across a grid that Bresenham accounts for.
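For anyone curious, here's a minimal sketch of the idea, assuming hypothetical is_solid() and mark_visible() helpers standing in for my own map code (they're placeholders, not Raylib functions); the integer error term is exactly the rounding detail my naive line calculation was missing:

#include <stdio.h>
#include <stdbool.h>
#include <stdlib.h>

/* Toy 8x8 map for illustration: '#' blocks sight, '.' is open floor. */
static const char *map[8] = {
    "........",
    "...#....",
    "...#....",
    "........",
    "........",
    "........",
    "........",
    "........",
};
static bool visible[8][8];

static bool is_solid(int x, int y)     { return map[y][x] == '#'; }
static void mark_visible(int x, int y) { visible[y][x] = true; }

/* Walk the grid cells from (x0,y0) to (x1,y1) with Bresenham's line
 * algorithm, stopping at the first solid cell. The accumulated error
 * term decides when to step on the minor axis, which is the rounding
 * detail a naive "compute y from the line equation" approach misses. */
static void cast_sight_line(int x0, int y0, int x1, int y1)
{
    int dx = abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;  /* dy is negative, so this is dx - |dy| */

    for (;;) {
        mark_visible(x0, y0);
        if (is_solid(x0, y0))      /* the wall itself is seen, but blocks further sight */
            break;
        if (x0 == x1 && y0 == y1)
            break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }  /* step along x */
        if (e2 <= dx) { err += dx; y0 += sy; }  /* step along y */
    }
}

int main(void)
{
    cast_sight_line(0, 0, 7, 3);  /* one sight line from the "player" at (0,0) */
    for (int y = 0; y < 8; y++) {
        for (int x = 0; x < 8; x++)
            putchar(visible[y][x] ? '*' : map[y][x]);
        putchar('\n');
    }
    return 0;
}

(This is the all-quadrant integer form of the algorithm, so it handles any direction without special-casing steep lines.)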
Most people, however, won't take interest in why the AI answer might work. So while it can be a great learning tool, it can definitely be used in a brainless sort of way.
That system, of course, doesn't rely on generative AI at all: all contributions to the system are appropriately attributed, etc. I wonder if a similar system could be designed for software?
- the code
- your improvement in knowledge
would have been if you had skipped copilot and described your problem and asked for algorithmic help?
The value isn't objective and very much depends on end goals. People seem to trot out "make games, not engines" without realizing that engine programmers still do exist.
Stepping back - the way fundamental technology gets adopted by populations always has a distribution between those that leverage it as a tool, and those that enjoy it as a luxury.
When the internet blew up, the population of people that consumed web services dwarfed the population of people that became web developers. Before that when the microcomputer revolution was happening, there were once again an order of magnitude more users than developers.
Even old tech - such as written language - has this property. The number of readers dwarfs the number of writers. And even within the set of all "writers", if you were to investigate most text produced, you'd find that the vast majority of it falls into that long tail of insipid banter, gossip, diaries, fanfiction, grocery lists, overwrought teenage love letters, etc.
The ultimate consequences of this tech will depend on the interplay between those two groups - the tool wielders and the product enjoyers - and how that manifests for this particular technology in this particular set of world circumstances.
That's a great observation!
'Literacy' is defined as the ability to both read and write. People as a rule can write; even if it isn't a novel worth publishing, they do have the ability to encode a text on a piece of paper. It's a matter of quality rather than ability (at least in most developed countries, though even there, there are still people who cannot read or write).
So I think you could fine-tune that observation to 'there is a limited number of people that provide most of the writing'. Observing, for instance, Wikipedia or any bookstore would seem to confirm that. If you take HN as your sample base, then there too it holds true. If this goes for one of our oldest technologies, it should not be surprising that on a forum dedicated to creating businesses and writing, the ability to both read and write is taken for granted. But it shouldn't be.
The same goes for any other tech: the number of people using electronics dwarfs the number of circuit designers, the number of people using buildings dwarfs architects and so on, all the way down to food consumption and farmers or fishers.
Effectively this says: 'we tend to specialize' because specialization allows each to do what they are best at. Heinlein's universal person ('specialization is for insects') is an outlier, not the norm, and probably sucks at most of the things they claim to have ability for.
This is quoted elsewhere in this thread (https://news.ycombinator.com/item?id=45482479). Most of the things are stuff you will be doing at some point in your life, that are socially expected from every human as part of human life, or things you do daily. It also only says you should be able to do it, not that you need to be good at it; but should the case arise that you are required to do it, you should be able to deal with it.
Well the current vision right now seems to be for the readers to scroll AI TikTok and for writers to produce AI memes. I'm not sure who really benefits here.
That's my primary problem as of now. It's not necessarily used as some luxury tool or some means of entertainment. It's effectively trying to outsource knowledge itself. Using ChatGPT as a Google substitute has consequences for readers, and using it to cut corners has even worse consequences for writers. I don't think we've had tech like this that can be argued to be dangerous on both sides of the aisle simultaneously.
On the contrary, all tech is like this. It is just the first time that the knowledge workers producing the tech are directly affected so they see first hand the effects of their labor. That really is the only thing that is different.
So let's not just handwave it as "nothing special" and actually demonstrate why this isn't special. Most other forms of technological progress have shown obvious benefits to producers and consumers. Someone is always harmed in the short term, yes. But society's given them ways to either retire or seek new work if needed. I'm not seeing that here.
Sorry, but my comment wasn't about you in particular. It was about the tech domain in general. I know absolutely nothing about you so I would not presume to make any statements about you in that sense.
> But society's given them ways to either retire or seek new work if needed. I'm not seeing that here.
No, not really. For the most part they became destitute and at some point they died.
What you are not seeing is that this is the end stage of technological progress, the point at which suddenly a large fraction of the people is superfluous to the people in charge. Historically such excess has been dealt with by wars.
Having opportunity doesn't mean they will seize it. I will concede that if you are disrupted and in your 50's (not old enough to retire, and where it becomes difficult to be re-hired unless you're management) you get hit especially hard.
But it's hard to see the current landscape of jobs now and suggest that boomers/older GenX had nothing to fall back on when these things happen. These generations chided millennials and Gen Z for being "too proud to work a grill". Nowadays you're not even getting an interview at McDonald's after submitting hundreds of applications. That's not an environment that lets you "bounce back" after a setback.
>Historically such excess has been dealt with by wars.
Indeed. We seem to be approaching that point, and it's already broken out in several places. When all other channels are exhausted, humans simply seek to overthrow the ones orchestrating their oppression.
In this case that isn't AI. At least not yet. But it's a symptom of how they've done this.
Well, in that sense everybody has opportunity. But I know quite a few people who definitely would not survive their line of employment shutting down. A lot of them have invested decades in their careers and have life complications, responsibilities and expenses that stop them from simply 'seizing opportunity'. For them it would be the end of the line, hopefully social security would catch them but if not then I have no idea how they would make it.
But speaking in macroeconomics, most people have the capacity to readjust if needed. I had to do so these last few years (and yes, am thankful I am "able bodied" and have a family/friend network to help me out when at my lowest points). And the market really sucks, but I eventually found some things. Some related to my career, some not.
But I'm 30. In the worst worst cases, I have time and energy to pivot. The opportunities out there are dreadful all around, though.
I am not so sure about that. I know I can. But I also know that I'm pretty privileged, where most people are not.
It would have been nice if the author had not overgeneralized so much:
https://claude.ai/share/27ff0bb4-a71e-483f-a59e-bf36aaa86918
I’ll let you decide whether my use of Claude to analyze that article made me smarter or stupider.
Addendum: In my prompt to Claude, I seem to have misgendered the author of the article. That may answer the question about the effect of AI use on me.
And then:
> It would have been nice if the author had not overgeneralized so much
But you just fell into the exact same trap. The effect on any individual is a reflection of that person's ability in many ways and on an individual level it may be all of those things depending on context. That's what is so problematic: you don't know to a fine degree what level of competence you have relative to the AI you are interacting with so for any given level of competence there are things that you will miss when processing an AI's output. The more competent you are the better you are able to use it. But people turn to AI when they are not competent and that is the problem, not that when they are competent they can use it effectively. And despite all of the disclaimers that is exactly the dream that the AI peddlers are selling you. 'Your brain on steroids'. But with the caveat that they don't know anything about your brain other than what can be inferred from your prompts.
A good teacher will be able to spot their own errors; here the pupil is supposed to be continuously on the lookout for utter nonsense the teacher utters with great confidence. And the closer it gets to being good at some stuff, the more leeway it will get for the nonsense as well.
> Would I recall it all without my new crutch? Maybe not
This just seems like you’ve shifted your definition of “learning” to no longer include being able to remember things. Like “outsourcing your thinking isn’t bad if you simply expect less from your brain” isn’t a ringing endorsement for language models
Sure, having a real-time data source is nice for avoiding construction/traffic, and I'd use a real-time map, but going beyond that to be spoon fed your next action over and over leads to dependency.
More or less at the same time I found “Human Being: Reclaim 12 Vital Skills We’re Losing to Technology”, and the chapter on navigation hit me so hard I put the book down and refused to read any more until my navigation skills improved.
They're quite good now. I sit on the toilet staring at the map of my city, which I now know quite well. I no longer navigate with my phone.
I'm scared about the chapter on communication, which I'm going through right now.
I do think we're losing those skills, and offloading more thinking to technology will further erode your own abilities. Perhaps you think you'll spend more time in high-cognition activities, but will you? Will all of us?
When I can get a full time job again, I plan to. I was trying to learn how to 3d model before the tech scene exploded 3 years ago. I'm probably not trying to take back all 12 factors (I'm fine with where my writing is as of now, even if it is subpar), but I am trying to focus on what parts are important to me as a person and not take any shortcuts out of them.
Don't leave us hanging: what were they saying?
https://www.npr.org/2011/07/26/137646147/the-gps-a-fatally-m...
and for the way this mindset erodes values and skills:
https://www.marinecorpstimes.com/news/your-marine-corps/2018...
(And of course, idiotic behaviour... but GPS doesn't cause that.)
Overall GPS has been an absolutely enormous benefit for society with barely any downside other than nostalgia for map reading.
Then I can pretty quickly see whether my idea was a good one or not. It's so easy and quick to build tiny bespoke tools now, that I'm building them left and right.
Some stay with me and I use them regularly, the others I forget. But I didn't have to spend hours and hours building them so the time-cost is not an issue.
Maybe the place to draw the line is different for each individual and depends on if they're really spending their freed-up time doing something useful or wasting it doing something unproductive and self-destructive.
The ballad of John Henry was written in the 1840s
“Does the engine get rewarded for its steam?” That was the anti-automation line back then
If you gave up anything that was previously called “AI” we would not have computers, cars, airplanes or any type of technology whatsoever anywhere
Sure, and it was wrong because it turns out the conductor does get rewarded. Given train strikes that had to be denied as recently as a few years ago, it's clear that's an essential role 150 years later.
With how they want to frame AI as replacing labor, who's being rewarded long term for its thinking? Who's really being serviced?
Humans haven’t figured out how to include all humans and ecological systems into the same “tribe” and therefore the infighting between artificially segregated human groups, disconnected with ecological “externalities” which prevents a sustainable cooperative solution.
So most likely it will continue to be a small number of humans dominating the rest with increasingly powerful tools that reduce the number of humans required to act in active domination or displacement roles.
Humanity long ago decided that it's everyone for themselves and to encode “might makes right” into ritual, mythology and organizational formation-operations.
The tool itself doesn't matter, but the people are falling into the same cycle once again. I can see LLM's used ethically and carefully managed to assist the populace. That's clearly not what's happening and is the entire reason I'm against them. It's tiring being dismissed as a luddite just because I don't want big tech to yet again recklessly ransack the populace with no oversight.
How do you think you can break the cycle?
Do you have a suggestion for what you are going to do about it?
Not to say that apps aren't useful in replacing the paper map, or doing things like adding up the times required (which isn't new - there used to be tables in the back of many maps with distances and durations between major locations).
I always feel like they aren't even trying. Like, you just mark a point where you are, a point where you want to go, draw a straight line, take the nearest streets, and then you can optimize ad libitum.
I grew up with a glove box full of atlases in my car. On one job, I probably spent 30 minutes a day planning the ~4h of driving I'd do daily to different sites. Looking up roads in indexes, locating grid numbers, finding connecting roads spanning pages 22-23, 42-43, 62-63, and 64-65. Marking them and trying not to get confused with other markings I'd made over the past months. Getting to intersections and having no idea which way to turn because the angles were completely different from on the map (yes this is a thing with paper maps) and you couldn't see any road signs and the car behind you is honking.
What a waste of time. It didn't make me stronger or smarter or a better person. I don't miss it the same way I don't miss long division.
Yes, it did.
Planning routes isn't exactly rocket science. There's not much to learn. It just takes a lot of time. It's busywork.
I'm just lucky in that I've always had a sense of direction ever since I was little. It's not a skill I've ever had to develop. There's nothing to "get better at". Some people just seem to be born with it, and I got lucky.
And people can't go that far to begin with. That's the scary part.
These little things we think of as insignificant add up and give us our ability to think. Change how we perceive and navigate (no pun intended) the world. Letting one or two of these factors rust probably won't cost us, but how far off are we really from the WALL-E future of we automate all our cognition, our spatial reasoning, and our curiosity?
I think we're nowhere close to WALL-E, nor are we headed in that direction. For everything that becomes easier, new harder skills become more important.
I'll ask point blank, then: what new "hard skills" are becoming more important in the short and mid terms that you see on the horizon? My biggest fears are that the technocrats very much want to raise a generation of "sheep" dependent on them to think. They don't need thinkers, only consumers.
And then communication, management, and people skills become more important each year. That's not stopping. It's only becoming more valuable, and a lot of people need to get a lot better at it.
Being an effective software developer is going to get much more challenging, skills-wise, over the next couple decades as productivity expectations rise exponentially.
And this is going to be the same in every knowledge work field. People will be using AI to orchestrate and supervise 20x the amount of work, and that's an incredibly demanding skill set.
I heard this a decade ago as well (replace AI agents with distributed cloud clusters). Instead, it seems like the industry wants to kick out all the expertise and outsource as much grunt work as possible to maintain what is already there. So I'm not too optimistic that the industry will be looking for proper architects. We're pushing more CRUD than ever under the guise of cutting-edge tech.
We're not working smarter, we're trying to work cheaper. We'd need a huge cultural shift to really show me that this won't be even more true in 10 years. That's why I'm slowly trying to pivot to a role not reliant on such industry practices.
I’ve blitzed through the formerly famous Tokyo subway system mindlessly without a clue.
I have utterly no idea what the different US highways in my area are, but it’s never really affected me besides being unable to join in mundane discussions of traffic on 95 or whatever.
80% of senior candidates I interview now aren’t able to do junior level tasks without GenAI helping them.
We’ve had to start doing more coding tests to weed their skill set out as a result, and I try and make my coding tests as indicative of our real work as possible and the work they current do.
But these people are struggling to work with basic data structures without an LLM.
So then I put coding aside, because maybe their skills are directing other folks. But no, they’ve also become dependent on LLMs to ideate.
That 80% is no joke. It’s what I’m hitting actively.
And before anyone says: well then let them use LLMs, no. Firstly, we’re making new technologies and APIs that LLMs really struggle with even with purpose trained models. But furthermore, If I’m doing that, then why am I paying for a senior ? How are they any different than someone more junior or cheaper if they have become so atrophied ?
I am a lead engineer, but I’ve been using AI in much of my code recently. If you were to ask me to code anything manually right now, I could do it, but it would take a while to acclimate to writing code line by line. By “a while”, I mean maybe a few days.
Which means that if we were to do a coding interview without LLMs, I would probably flop without me doing a bit of work beforehand, or at least struggle. But hire me regardless, and I would get back on track in a few days and be better than most from then on.
Careful not to lose talent just because you are testing for little used but latent capabilities.
How do I know you aren’t just a lead with a very good team to pick up the slack?
How do I separate you from the 20 other people saying they’re also good?
Why would I hire someone who can’t hit the ground running faster than someone else who can?
Furthermore, why would I hire someone who didn’t prepare at all for an interview, even if just mentally?
How do you avoid just hiring based on vibes? Bear in mind every candidate can claim they’re part of impressive projects so the resume is often not your differentiator.
If all their previous employers don't allow side projects (must be in US or something, where employees don't have rights), then they should pay accordingly more to balance that restriction and loss in experience.
My employer definitely doesn't own all the code I write in the evenings and on the weekends on my own time. Does yours?
They are allowed to suggest the language they’re most familiar with, they’re told they don’t need to finish and they don’t need to be correct.
It’s just about seeing how they work through something.
If someone like the person you replied to would show up that unprepared , I would really question their own judgement of their abilities.
That's the issue. How can one be sure you can actually get back on track, or whether you were never on track in the first place and are just an AI slopper?
That's why in an interview you need to show skills. And on the actual job you can use AI.
Because they know how to talk to the AI. That's literally the skill that differentiates seniors from juniors at this point. And a skill that you gain only by knowing about the problem space and having banged your head at it multiple times.
If your product has points where LLMs falter, this is a useless metric.
>and having banged your head at it multiple times.
And would someone who relied on an LLM be doing this?
I've seen people with 10 years experience blindly duplicate C++ classes rather than subclass them, and when questioned they seemed to think the mere existence of `private:` access specifiers justified it. There were two full time developers including him, and no code review, so it's not like any of the access specifiers even did anything useful.
Straight from junior to senior just feels like a weird jump. Junior and senior sound like adjectives to me. Qualifiers. And there should be some middle point in between where the bulk of the workforce doing the actual job should be.
I, being from a place where they definitely aren't, found this hilarious.
I've had meetings with Principal Architects with less experience than me (title: Backend Programmer).
Bigger organisations really should standardise their titles to specific experience/responsibility/capability milestones so people from other sides of the org can use the title to estimate the skill level of the other person they're talking with.
― Robert A. Heinlein
(It's a matter of opinion)
Though the quote is broader than that. It's really saying "a C++ developer should be able to write up a report, participate in an interview panel, plan a small party for Jane's birthday this Friday, and be able to make conversation with the external partner coming in today". Few of us are simply just writing code all day.
4 years ago, I'd have said "obviously".
At this point? Only for specialist languages the LLMs suck at. (Last I tried, "suck at" included going all the way from yacc etc. upwards when trying to make a custom language).
For most of us, what's now important is: design patterns; architectures; knowing how to think about big-O consequences; and the ability to review what the AI farts out, which sounds like it should need an understanding of the programming language in question, but it's surprisingly straightforward to read code in languages you're not familiar with, and it gets even easier when your instruction to the AI includes making the code easy to review.
I see both the latest Claude and GPT models fall over on a lot of C++, Swift/ObjC and more complex Python code. They do better in codebases where there is maximal context with type hints in the local function. Rust does well, and it’s easier to know when it’s messed up since it won’t compile.
They also tend to clog up code bases with a cacophony of different coding paradigms.
A good developer would be able to see these right away, but someone who doesn’t understand the basics at the least will happily plug along not realizing the wake they’ve left.
Would I hire a taxi driver who can’t drive to drive me somewhere?
Why would I hire a software engineer who can’t demonstrate their abilities.
If you hire an amazing programmer but then ask for a quick report on their implementation and it's gibberish. Is that satisfactory? Their job isn't to write reports.
If you ask them during lunch to grab your meal from the desk and they spill it everywhere, is that satisfactory? You didn't hire them for their ability to carry a plate.
EDIT: I should add here that I agree with your hiring approach. I just think you're critiquing this quote in the wrong way. The point of the quote is that humans have a capacity to learn and perform many tasks, even if they do some better than others. LLMs hinder this ability.
If it's the List and Dict (or whatever they are called in that language) then maybe. But I'd not expect someone to spell out the correct API by heart to use Stack, Queue, Deque, etc., even if they're textbook examples of "basic" data structures.
- lists
- dictionaries
- sets
- nested versions of the above
- strings (not exactly a data structure)
Strings are tenuously in my data structures list because I let people treat them as ASCII arrays to avoid most of the string footguns.
More so, can they demonstrate that in interviews. We specifically structure our interviews so coding is only one part of them, and try and suss out other aspects of a candidate (problem solving, project leading, working with different disciplines) and then weight those against the needs of the role to avoid over indexing on any specific aspect. It also lets us see if there are other roles for them that might be better fits.
So a weak coder has opportunities to show other qualities. But generally someone interviewing for a coding heavy role who isn’t a strong coder tends to not sufficiently demonstrate those other qualities. Though of course there are exceptions.
A senior with an LLM will likely still outperform a junior. A CS major with an LLM will outperform an English major.
But is the senior outperforming the junior at a level that warrants the difference in salary?
Then people will point to the intangibles of experience, but will forget the intangibles of being fresh and new with regards to being malleable and gung-ho.
The BLS works as well, for now. I think that's quickly going to change with future reports.
This was also making the rounds last year: https://futurism.com/the-byte/berkeley-professor-grads-job-m...
To add my two cents of anecdotes from what I can tell in the creative agency world: shit sucks hard at the moment. Clients are cutting back budgets hard and a lot of them explicitly ask for AI strategies - obviously in terms of saving money. Blockchain craze was similarly bad when it comes to client requests for stuff that Just Did Not Make Sense, but at least there was money pouring in. And it's that way across the industry, everyone I know feels the same. There's no money from clients that would pay for juniors, so no juniors are getting hired even if it might blow up in five, six years when there are no fresh juniors/intermediates around.
Personally, I decided to part ways. There's many places I want to be, but the creative industry when the AI bubble and hiring anti-bubble finally pops? Oh hell no.
If an agent starts generating code, nobody will have the time and stamina to rewrite all the slop; it will just get approved.
Copy/paste from chat is the only way to ensure proper quality, so that the developer can write high-quality code and outsource only the boring or generic tasks to AI.
I’m assuming “unable” means not complete lack of knowledge how to approach it, but lack of detail knowledge. E.g. a junior likely remembers some $algorithm in detail (from all the recent grind), while a senior may no longer do so but only know that it exists, what properties it has (when to use, when to not use), and how to look it up.
If you don’t think of something regularly, memory of that fades away, becomes just a vague remembrance, and you eventually lose that knowledge - that’s just how we are.
However, consider that not doing junior-level tasks means it was unnecessary for the position and the position was about doing something else. It’s literally a matter of specialization and nomenclature mismatch: “junior” and “senior” are frequently not different levels of same skill set, but somewhat different skill sets. A simple test: if at your place you have juniors - check if they do the same tasks as seniors do, or if they’re doing something different.
Plus the title inflation - demand shifts and title-catching culture had messed up the nomenclature.
Candidates can ask for help, and can google/llm as well if they can’t recall methods. I just do not allow them to post the whole problem in an LLM and need to see them solve through the problem themselves to see how they think and approach issues.
This therefore also requires that they know the language they picked well enough to do simple tasks, including iterating over iterables.
This said, IMHO one-shot is worth a try because it’s typically cheap nowadays - but if it’s not good (or, for interview reasons, unavailable) any developer should have the skills to break the problem down and iterate on it, especially if all learning/memory-refreshing resources are so available. That’s the skill that every engineer should have.
I guess I must take my words back - if that’s how “seniors” are nowadays then I don’t know what’s going on. My only guess is that you must’ve met a bunch of scammers/pretenders who don’t know anything but are trying to pass as developers.
I would've chosen a language without iterators, what would you do then??
Good luck, sounds like a fair hiring process.
Can the senior developer understand and internalize your codebase? Can they solve complex problems? If you're paying them to be a senior developer, it likely isn't worth their time to concern themselves with basic data structures when they are trying to solve more complex problems.
>Can the senior developer understand and internalize your codebase?
Would you trust someone who needs LLMs in the hiring phase to be able to do these higher-order tasks if they can't nail down the fundamentals?
We seem to have had significant title inflation in the last 5 years, and everybody seems to be at least a senior.
With no new junior positions opening up, I'm not even sure I blame them.
A junior being someone who needs more granular direction or guidance. I’d only send them to meetings paired with a senior. They need close to daily check ins on their work. I include them in all the same things the seniors do for exposure, but do not expect the same level of technical strength at this time in their careers.
I try not to focus on years of experience necessarily, partly because I was supervising teams at large companies very early in my career.
LLMs absolutely excel at this task.
Source: Me, been doing it since early July with Gemini Pro 2.5 and Claude Opus.
So good, in fact, that I have no plans to hire software engineers in the future. (I have hired many over my 25 years developing software.)
I am legitimately interested in your experience though. What are you creating where you can see the results in that time frame to make entire business decisions like that?
I would really like to see those kinds of productivity gains myself.
Separately, I wanted to make some changes to LMDB but the code is so opaque that it's hard to do anything with it (safely).
So I gave the entire codebase to Gemini Pro 2.5 and had it develop a glossary for local variable renames and for structure member renames. I then hand-renamed all of the structures (using my IDE's struct member refactoring tools). Then I gave the local variable glossary and each function to Gemini and had it rewrite the code. Finally, I had a separate Gemini Pro 2.5 context and a Claude Opus context validate that the new code was LOGICALLY IDENTICAL to the previous code (i.e. that only local variables were renamed, and that the renaming was consistent).
Most of the time, GPro did the rewrite correctly the first time, but other times, it took 3-4 passes before GPro and Opus agreed. Each time, I simply pasted the feedback from one of the LLMs back into the original context and told it to fix it.
The largest function done this way was ~560 LOC.
Anyway, the entire process took around a day.
However, at one point, GPro reported: "Hey, this code is logically identical BUT THERE IS A SERIOUS BUG." Turns out, it had found the cause of the DUPSORT corruption, without any prompting—all because the code was much cleaner than it was at the start of the day.
That is wild to me! (It actually found another, less important bug too.)
Without LLMs, I would have never even attempted this kind of refactoring. And I certainly wouldn't pay a software engineer to do it.
> What are you creating where you can see the results in that time frame to make entire business decisions like that?
I've developed new libraries (using GPro) that have APIs it can reliably use. What's easy for LLMs can be hard for humans, and what's easy for humans can be hard for LLMs. If you want to use LLMs for coding, it pays to need code they are really good at writing. AI-first development is the big win here.
(This is all in Clojure BTW, which isn't really trained for by vendors. The idea that LLMs are ONLY good at, e.g. Python, is absurd.)
Clojure has enough code out there that it’s well covered by the major LLMs.
Fix: Use correct stack index when adjusting cursors in mdb_cursor_del0
In `mdb_cursor_del0`, the second cursor adjustment loop, which runs after
`mdb_rebalance`, contained a latent bug that could lead to memory corruption or
crashes.
The Problem: The loop iterates through all active cursors to update their
positions after a deletion. The logic correctly checks if another cursor
(`cursor_to_update`) points to the same page as the deleting cursor (`cursor`)
at the same stack level: `if (cursor_to_update->mc_page_stack[cursor->mc_stack_top_idx] == page_ptr)`
However, inside this block, when retrieving the `MDB_node` pointer to update a
sub-cursor, the code incorrectly used the other cursor's own stack top as the index:
`PAGE_GET_NODE_PTR(cursor_to_update->mc_page_stack[cursor_to_update->mc_stack_top_idx], ...)`
If `cursor_to_update` had a deeper stack than `cursor` (e.g., it was a cursor
on a sub-database), `cursor_to_update->mc_stack_top_idx` would be greater than
`cursor->mc_stack_top_idx`. This caused the code to access a page pointer from
a completely different (and deeper) level of `cursor_to_update`'s stack than
the level that was just validated in the parent `if` condition. Accessing this
out-of-context page pointer could lead to memory corruption, segmentation
faults, or other unpredictable behavior.
The Solution: This commit corrects the inconsistency by using the deleting
cursor's stack index (`cursor->mc_stack_top_idx`) for all accesses to
`cursor_to_update`'s stacks within this logic block. This ensures that the node
pointer is retrieved from the same B-tree level that the surrounding code is
operating on, resolving the data corruption risk and making the logic
internally consistent.
And here's the function with renames (and the fix):
struct MDB_cursor {
/** Next cursor on this DB in this txn */
MDB_cursor *mc_next_cursor_ptr;
/** Backup of the original cursor if this cursor is a shadow */
MDB_cursor *mc_backup_ptr;
/** Context used for databases with #MDB_DUPSORT, otherwise NULL */
struct MDB_xcursor *mc_sub_cursor_ctx_ptr;
/** The transaction that owns this cursor */
MDB_txn *mc_txn_ptr;
/** The database handle this cursor operates on */
MDB_dbi mc_dbi;
/** The database record for this cursor */
MDB_db *mc_db_ptr;
/** The database auxiliary record for this cursor */
MDB_dbx *mc_dbx_ptr;
/** The @ref mt_dbflag for this database */
unsigned char *mc_dbi_flags_ptr;
unsigned short mc_stack_depth; /**< number of pushed pages */
unsigned short mc_stack_top_idx; /**< index of top page, normally mc_stack_depth-1 */
/** @defgroup mdb_cursor Cursor Flags
* @ingroup internal
* Cursor state flags.
* @{
*/
#define CURSOR_IS_INITIALIZED 0x01 /**< cursor has been initialized and is valid */
#define CURSOR_AT_EOF 0x02 /**< No more data */
#define CURSOR_IS_SUB_CURSOR 0x04 /**< Cursor is a sub-cursor */
#define CURSOR_JUST_DELETED 0x08 /**< last op was a cursor_del */
#define CURSOR_IS_IN_WRITE_TXN_TRACKING_LIST 0x40 /**< Un-track cursor when closing */
#define CURSOR_IN_WRITE_MAP_TXN TXN_WRITE_MAP /**< Copy of txn flag */
/** Read-only cursor into the txn's original snapshot in the map.
* Set for read-only txns, and in #mdb_page_alloc() for #FREE_DBI when
* #MDB_DEVEL & 2. Only implements code which is necessary for this.
*/
#define CURSOR_IS_READ_ONLY_SNAPSHOT TXN_READ_ONLY
/** @} */
unsigned int mc_flags; /**< @ref mdb_cursor */
MDB_page *mc_page_stack[CURSOR_STACK]; /**< stack of pushed pages */
indx_t mc_index_stack[CURSOR_STACK]; /**< stack of page indices */
#ifdef MDB_VL32
MDB_page *mc_vl32_overflow_page_ptr; /**< a referenced overflow page */
# define CURSOR_OVERFLOW_PAGE_PTR(cursor) ((cursor)->mc_vl32_overflow_page_ptr)
# define CURSOR_SET_OVERFLOW_PAGE_PTR(cursor, page_ptr) ((cursor)->mc_vl32_overflow_page_ptr = (page_ptr))
#else
# define CURSOR_OVERFLOW_PAGE_PTR(cursor) ((MDB_page *)0)
# define CURSOR_SET_OVERFLOW_PAGE_PTR(cursor, page_ptr) ((void)0)
#endif
};
/** @brief Complete a delete operation by removing a node and rebalancing.
*
* This function is called after the preliminary checks in _mdb_cursor_del().
* It performs the physical node deletion, decrements the entry count, adjusts
* all other cursors affected by the deletion, and then calls mdb_rebalance()
* to ensure B-tree invariants are maintained.
*
* @param[in,out] cursor The cursor positioned at the item to delete.
* @return 0 on success, or a non-zero error code on failure.
*/
static int
mdb_cursor_del0(MDB_cursor *cursor)
{
int rc;
MDB_page *page_ptr;
indx_t node_idx_to_delete;
unsigned int num_keys_after_delete;
MDB_cursor *cursor_iter, *cursor_to_update;
MDB_dbi dbi = cursor->mc_dbi;
node_idx_to_delete = cursor->mc_index_stack[cursor->mc_stack_top_idx];
page_ptr = cursor->mc_page_stack[cursor->mc_stack_top_idx];
// 1. Physically delete the node from the page.
mdb_node_del(cursor, cursor->mc_db_ptr->md_leaf2_key_size);
cursor->mc_db_ptr->md_entry_count--;
// 2. Adjust other cursors pointing to the same page.
for (cursor_iter = cursor->mc_txn_ptr->mt_cursors_array_ptr[dbi]; cursor_iter; cursor_iter = cursor_iter->mc_next_cursor_ptr) {
cursor_to_update = (cursor->mc_flags & CURSOR_IS_SUB_CURSOR) ? &cursor_iter->mc_sub_cursor_ctx_ptr->mx_cursor : cursor_iter;
if (!(cursor_iter->mc_flags & cursor_to_update->mc_flags & CURSOR_IS_INITIALIZED)) continue;
if (cursor_to_update == cursor || cursor_to_update->mc_stack_depth < cursor->mc_stack_depth) continue;
if (cursor_to_update->mc_page_stack[cursor->mc_stack_top_idx] == page_ptr) {
if (cursor_to_update->mc_index_stack[cursor->mc_stack_top_idx] == node_idx_to_delete) {
// This cursor pointed to the exact node we deleted.
cursor_to_update->mc_flags |= CURSOR_JUST_DELETED;
if (cursor->mc_db_ptr->md_flags & MDB_DUPSORT) {
cursor_to_update->mc_sub_cursor_ctx_ptr->mx_cursor.mc_flags &= ~(CURSOR_IS_INITIALIZED | CURSOR_AT_EOF);
}
continue;
} else if (cursor_to_update->mc_index_stack[cursor->mc_stack_top_idx] > node_idx_to_delete) {
// This cursor pointed after the deleted node; shift its index down.
cursor_to_update->mc_index_stack[cursor->mc_stack_top_idx]--;
}
XCURSOR_REFRESH(cursor_to_update, cursor->mc_stack_top_idx, page_ptr);
}
}
// 3. Rebalance the tree, which may merge or borrow from sibling pages.
rc = mdb_rebalance(cursor);
if (rc) goto fail;
if (!cursor->mc_stack_depth) { // Tree is now empty.
cursor->mc_flags |= CURSOR_AT_EOF;
return rc;
}
// 4. Perform a second cursor adjustment pass. This is needed because rebalancing
// (specifically page merges) can further change cursor positions.
page_ptr = cursor->mc_page_stack[cursor->mc_stack_top_idx];
num_keys_after_delete = NUMKEYS(page_ptr);
for (cursor_iter = cursor->mc_txn_ptr->mt_cursors_array_ptr[dbi]; !rc && cursor_iter; cursor_iter = cursor_iter->mc_next_cursor_ptr) {
cursor_to_update = (cursor->mc_flags & CURSOR_IS_SUB_CURSOR) ? &cursor_iter->mc_sub_cursor_ctx_ptr->mx_cursor : cursor_iter;
if (!(cursor_iter->mc_flags & cursor_to_update->mc_flags & CURSOR_IS_INITIALIZED)) continue;
if (cursor_to_update->mc_stack_depth < cursor->mc_stack_depth) continue;
if (cursor_to_update->mc_page_stack[cursor->mc_stack_top_idx] == page_ptr) {
if (cursor_to_update->mc_index_stack[cursor->mc_stack_top_idx] >= cursor->mc_index_stack[cursor->mc_stack_top_idx]) {
// If cursor is now positioned past the end of the page, move it to the next sibling.
if (cursor_to_update->mc_index_stack[cursor->mc_stack_top_idx] >= num_keys_after_delete) {
rc = mdb_cursor_sibling(cursor_to_update, 1);
if (rc == MDB_NOTFOUND) {
cursor_to_update->mc_flags |= CURSOR_AT_EOF;
rc = MDB_SUCCESS;
continue;
}
if (rc) goto fail;
}
if (cursor_to_update->mc_sub_cursor_ctx_ptr && !(cursor_to_update->mc_flags & CURSOR_AT_EOF)) {
// BUG FIX: Use the main cursor's stack index to access the other cursor's stacks.
// This ensures we are retrieving the node from the same B-tree level
// that the parent `if` condition already checked. The previous code used
// `cursor_to_update->mc_stack_top_idx`, which could be incorrect if its
// stack was deeper than the main cursor's.
MDB_node *node_ptr = PAGE_GET_NODE_PTR(cursor_to_update->mc_page_stack[cursor->mc_stack_top_idx], cursor_to_update->mc_index_stack[cursor->mc_stack_top_idx]);
if (node_ptr->mn_flags & NODE_DUPLICATE_DATA) {
if (cursor_to_update->mc_sub_cursor_ctx_ptr->mx_cursor.mc_flags & CURSOR_IS_INITIALIZED) {
if (!(node_ptr->mn_flags & NODE_SUB_DATABASE))
cursor_to_update->mc_sub_cursor_ctx_ptr->mx_cursor.mc_page_stack[0] = NODE_GET_DATA_PTR(node_ptr);
} else {
mdb_xcursor_init1(cursor_to_update, node_ptr);
rc = mdb_cursor_first(&cursor_to_update->mc_sub_cursor_ctx_ptr->mx_cursor, NULL, NULL);
if (rc) goto fail;
}
}
cursor_to_update->mc_sub_cursor_ctx_ptr->mx_cursor.mc_flags |= CURSOR_JUST_DELETED;
}
}
}
}
cursor->mc_flags |= CURSOR_JUST_DELETED;
fail:
if (rc)
cursor->mc_txn_ptr->mt_flags |= TXN_HAS_ERROR;
return rc;
}
I have confirmed the fixed logic seems correct, but I haven't written a test for it (I moved on to another project immediately after and haven't returned to this one). That said, I'm almost certain I have run into this bug in production on a large (1TB) database that used DUPSORT heavily. It's kinda hard to trigger. Also, thanks for a great library! LMDB is fantastic.
I understand your commit description but I'm still a bit puzzled by it. cursor_to_update shouldn't have a deeper stack than cursor, even if it was a cursor on a subdatabase. All the references to the subdatabase are in the subcursor, and that has no bearing on the main cursor's stack.
The original code with the bug was added for ITS#8406, commit 37081325f7356587c5e6ce4c1f36c3b303fa718c on 2016-04-18. Definitely a rare occurrence since it's uncommon to write from multiple cursors on the same DB. Fixed now in ITS#10396.
In this particular case, the article is even explicit about this:
> While we have no idea how AI might make working people obsolete at some imaginary date, we can already see how technology is affecting our capacity to think deeply right now. And I am much more concerned about the decline of thinking people than I am about the rise of thinking machines.
So the author is already explicitly saying that he doesn't know about X (whether AI will take jobs), but prefers to focus in the article on Y (“the many ways that we can deskill ourselves”).
There's nothing stopping them from simply saying "an under-recognized problem with AI is Y, let me explain why it should be a concern".
Framing the article as an attack on X, when in fact the author hasn't even put five minutes of thought into evaluating X, is just a clickbait strategy. People go in expecting something that undermines X, but they don't get it.
Because it’s trivial? Outsmarting can happen because the tortoise runs faster or the hare slows down.
If we're not using our brains for work, then maybe we'll become more deliberate about strengthening them at home. In fact, I can't imagine that not being the case. I mean, it's possible we turn into a society of mindless zombies, but fundamentally, at least among some reasonable percentage of the population, I have to believe there is an innate desire to learn and understand and build, and a relationship-building aspect to it as well.
Yeah, but don't you kind of prove his point though? When we all created machines to do all the manual labor for us, we got fat and unhealthy. Only recently have we really begun to understand how important exercise (manual labor) is for us.
I fear that the obvious result of a laissez-faire attitude is that in 20-30 years we find out that there was an actual benefit to using our minds, reading, writing, etc.
In the case of our muscles, we were lucky that we didn't need them to identify the health benefits of exercise, and we could come up with a solution. In the case of our brains, we might not be as lucky, so maybe be prudent and assume it's something that's beneficial?
If the argument is that people shouldn't be able to get started on those things without having to slog through a lot of mindless drudgework - then people should be honest about that rather than dress it up in analogies.
The problem with that is that there are a lot of cases where a total newbie engaging with some subject could lead to problems. They have false confidence in their abilities while not knowing what they don't know.
What if you want to try chemistry and ask the AI what you need to know? Since you don't know anything about chemistry, you don't know whether the answers are complete or correct. Because you don't know about chemistry, you also don't know about the dangers you need to ask about, what precautions to take, etc.
The same could be said about many different subjects: rock-climbing, home-improvement, electrical work, car maintenance, etc.
You might argue then that it would still be perfect for low-risk subjects, but how would a total newbie be able to validly determine risks of anything they don't know anything about?
Most of us are talking about writing, coding, or analysis, not hazardous materials though.
Do you think we could have progressed to this level if we were still calculating stuff by hand? Log tables??
I picture going forward we'll have much more personalized AI-led curricula that students work on at their own pace. The AI systems can let you use as much or as little AI autocompletion as they feel appropriate, can test your understanding real-time by adding some subtle mistakes or opportunities for improvement, and iterate until you get it.
The main issue I worry about is perhaps the opposite. With education actually becoming more effective and interesting, what happens to kids' social and collaboration skills? And maybe that's where human teachers can still add value. Or in discipline and motivation, etc. IDK exactly how that plays out, but I imagine there's still a role for human teachers to play, and perhaps that aspect is even more important than "lecturer" and "grader" that takes most of their time now.
Of course teachers still have a role in maintaining discipline, motivation, and things that computers can't do, as well as validating that the AI systems are behaving correctly for the things they can.
The biggest thing I don't like about that approach is it's yet another bump in screen time, which, eh, if I think hard enough on that aspect, it maybe makes me hate the whole idea, so.
It's also why there's considerable opposition to tracking.
If you evaluate academic ability, inevitably, the students who want to learn things are going to be closer to the top, and the ones who don't want to learn things are going to be closer to the bottom.
Now, group students up by ability aggressively: the "top end classes" are going to devour the school program - while the "bottom end classes" devolve into pandemonium.
Naturally, the parents of the bottom end students would want there to be zero tracking, so that the average "peer quality" pulls their children up, and the parents of the top end students want there to be the most aggressive tracking possible, so that the average "peer quality" doesn't drag their children down.
A big part of what the rich parents pay for when they send their kids into those expensive private schools is access to better "peer quality". "If a kid's parents are rich" isn't a perfect proxy for "if the kid wants to be learning things", but it outperforms the average. And if a private school is actually willing to expel the most disruptive students, then it's going to tip the scales even further.
If there's anything I'm both hoping that LLMs will replace, and also think of as one of the most vitally important reasons to have locally running, open models, it's teaching. I'm from the age when teaching was "good," and it was bad. The reason why people remember the teachers they were in love with was because of how distinct they were from the rest.
Teachers can be used to teach hard things, like socialization, cleaning, cooking, and how to do backflips.
The OP of that issue actually tries to argue that he should be allowed to post code-suggestions that he himself doesn't understand.
It's the same here: if you just shut off your brain and do what AI says, copy/pasting stuff from/to chat windows, that's going to be a bad time.
> More time is more tension; more pain is more gain.
While this meta-analysis [1] found that:
> Results indicate that hypertrophic outcomes are similar when training with repetition durations ranging from 0.5 to 8s.
Maybe the author should have chosen the amount of training resistance for the intro allegory instead (perceived effort?). That would have made their point just as well.
Hopefully people can learn to use AI to help them, while still thinking on their own. It's not like that many of the assignments in school were that useful anyways...
Think of it like this: if 3D printing (finally) gets good enough, is it an issue that most people aren't good at traditional manufacturing? I think there's a discussion to be had here, but articles like this always strike me as shallow, since they keep judging the present based on the past. This is a common problem. We see it in politics (make X great again, anyone?) and it is a very hard problem to solve, since understanding 'better' is a very hard thing to do even with hindsight, much less when you are in the middle of change. I do think AI has serious harms, but if all we do is look for the harms, we won't find and capture the benefits. Articles should balance these things better.
That's probably why they're not doing that. The core premise-- we will rely on AI so much that we will de-skill ourselves-- requires acknowledging that AI works.
No it doesn't require that because the vast majority of people aren't rational actors and they don't optimize for the quality of their work - they optimize for their own comfort and emotional experience.
They'll happily produce and defend low quality work if they get to avoid the discomfort of having to engage in cognitively strenuous work, in the same way people rationalize every other choice they make that's bad for them, the society, the environment, and anyone else.
People are rational, and the example you give actually shows that; they prefer reduced workload, so they optimize for their own comfort and emotional experience. What isn't rational about that?
If people made decisions the way you described, by carefully considering and accepting trade-offs then I would agree that they are rational actors.
But people don't do that, they pick an outcome they prefer and try to rationalize it afterwards by claiming that trade-offs don't exist.
But what does it matter? After the game of semantics is all said and done, the work is still being done to a lower standard than before, and people are letting their skills atrophy.
The article acknowledges that AI progress to date has worked. It snidely denies, without argument, that AI as a field could keep working from here on out.
They should require in-person oral dissertations, presented before a panel of one to three teachers and lasting one to two hours. Ideally, the topic would be based on the student’s own original thesis.
This approach restores creativity and critical thinking, because at any moment during the examination, teachers can ask probing questions, explore unexpected tangents, and even encourage real-time brainstorming.
THIS is the kind of challenge that can help our species evolve beyond its current state of neurological atrophy.
NYU's Gallatin School of Individualized Study has been doing this since '72.
*It was probably the most stimulating 4 years of my life.
Is AI special here? Maybe, if it's truly an existential risk.
I've been coding without my LLM for 2 hours and it's just more productive... yes, it's good for getting things "working", but yeah, we still need to think and understand to solve harder problems.
My initial impressions blew me away, because generating new things is a lot simpler than fixing old things. Yes, it's still useful, but only when you know what you're doing in the first place.
I don't disagree in general but I've had a lot of success asking the LLMs to specifically fix these things and make things more maintainable when specifically prompted to do so. I agree debugging and getting things working it often needs supervision, guidance and advice. And architecture it often gets a little wrong and needs nudges to see the light.
I'm not great at this stuff, and I got tired of reviewing things and generating suggestions to improve implementations (it seemed to be repetitive a lot), but I am having good results with my latest project using simulated ecosystems with adversarial sub-projects. There's the core project I care about with a maintainer agent/persona, an extension with an extension-developer agent/persona (the extensions provide common features built upon the core, from the perspective of being a third-party developer), and an application developer that uses both the core and the extensions.
I have them all write reports about challenges and reviewing the sub-projects they consume, complaining about awkwardness and ways the other parties could be improved. Then the "owner" of each component reviews the feedback to develop plans for me to evaluate and approve.
Very often the "user" components end up complaining about complexity and inconsistency. The "owner" developers tend to add multiple ways of doing things when asked for new features, until specifically prompted to review their own code to reduce redundancy, streamline use, and improve maintainability. But they will do it when prompted, and I've been pretty happy with the code and documentation it's generating.
But my point remains specifically about the crappy code AI writes. In my experience, it will clean it up if you tell it to. There's the simple angle of complexity and it does an okay job with that. But there's the API design side also and that's what the second part is about. LLM will just add dumb ass hacks all over the place when a new feature is needed and that leads to a huge confusing integration mess. Whereas with this setup when I want to build an extension the API has been mostly worked out and when I want to build an application the API has mostly been worked out. That's the way it would work if I ran into a project on github I wanted to depend on.
Vibe coding takes the heavy thinking and offloads it to the machine. Even people who know how to do the work turn off their brains when they vibe code.
The time is now. The effects are available for inspection today. Pair with AI on something you know well. Find the place where it confidently acts but makes an error. Check in on your ability to reactivate your mind and start solving the problem.
It feels like waking up without coffee. Our minds are already mostly asleep when we lean on AI for anything.
Before the advent of smartphones, people needed to remember phone numbers and calculate on the fly. Now people don't even remember their own numbers; they save them somewhere and open the calculator app for the smallest of things.
But there are still people who can do both. They are not Luddites; they're just not fully reliant on smartphones.
Same thing is going to happen with LLMs.
At some point, restricting LLM usage is going to be considered good parenting, just like restricting phones is today. Use it for some purposes but not all.
AI lets you offload a lot of cognitive effort, freeing up your mind; the only catch is that AI can be politicized and more confident than accurate.
If AI companies can focus on improving accuracy on facts and make their AI more philosophically grounded on the rest, it would allow people to free up their minds for their immediate real lives.
Don't mistake thinking for intelligence.
You can still make tests and exams that check what students know. There will be fewer "generate BS at home" tasks and more "come back with knowledge" ones, which will likely be an improvement.
They are the same picture!
https://www.derekthompson.org/p/the-end-of-thinking (https://news.ycombinator.com/item?id=45372507)
AI, even when it provides a net benefit, does threaten the value potentially offered by individuals.
It's complicated.
Is it though? I think that's way too simplistic to even come close to being true. Is it even possible for all individuals in an ecosystem to be parasites? Doesn't a parasitic relationship imply the existence of hosts?
Let's celebrate humanity and our robot friends (looking at my Roomba).
- Wars and violence to resolve geopolitical problems
- The biggest trading partner of most countries is waging tariff warfare
- Climate change
- Declining birth rate in almost every country
- Healthier foods are getting more expensive despite our technology and nutrition knowledge [0]
I'm not saying there is 0 chance that AI will make people dumb, but it just doesn't seem to be such an emergency humans should collectively be worried about.
If we as humans can’t maintain our current population without getting high schoolers pregnant, then so be it.
And I also don't think higher education necessarily means smarter. I'm quite confident that in the next decade, worldwide educational attainment will keep rising with only some temporary setbacks. If making people smart is as simple as sending them to colleges we really have nothing to worry about.
But anyway neither was my point.
If you lived in an earlier time, you'd be worrying about the rise of homosexuality, communism, atheism, etc.
If they lived in an earlier time, they'd be worrying that Google, the internet, TV, radio... were going to make people stop using their brains. Socrates believed writing was going to make people stop memorizing stuff.
I'm not a doomsayer who thinks the world is on the edge of collapsing. But I do think the issues I listed are much more 'real' than AI making people stop thinking. (Including the birth rate one - yeah, I'm well aware that many people think it's a good thing. As someone whose mother worked at a nursing home, in a country with TFR of 0.78, I just don't agree with them. I believe people hugely underestimate the manpower needed to take care of the elderly and disabled.)
There are three things we need to deal with the temporary population imbalance:
Older people who remain healthy continuing to work to care for their peers—not necessarily hard physical labor, but being out in their communities helping, which is rewarding and will help them stay healthy, too.
Easy immigration for regular people so they can move to where they’re needed.
General efficiency so we’re not wasting resources. This requires both technological and lifestyle changes, even for the rich. The more efficient we can get, the less we have to reduce our overall population.
- Older people helping each other: it's obvious, but the oldest demographic is not going to be significantly healthier than what we see now without a lot of medical breakthroughs. The number of new dementia cases in the US is expected to be one million in 2060 [0]. Ironically, the only way to make "older people taking care of older people" even remotely sustainable is automating most of the caretaking, so I never get the AI doomsayers.
- Immigration: the decline in birthrate is a global phenomenon, which means that if immigration is the solution, then instead of a diversified cohort, most of the newcomers will come from the few countries where population still grows, because the other countries need an influx of immigrants as well. There is no world in which that doesn't fuel the rise of the alt-right. What we call alt-right now will look center-left in the next decade.
- Future technology and lifestyle change: yes, it solves everything, including climate change. It's basically handwaving, like saying fusion will come tomorrow so we don't need to worry about replacing petro with fission, solar, and wind.
[0]: https://www.nih.gov/news-events/nih-research-matters/risk-fu...
Yes and for good reason. Where most immigrants are Muslim, you can end up with enough Muslims to change the laws and society to their ways. That includes sharia law which ironically is itself far-right and diametrically opposite to left-wing ideals.
I really don’t think we have anything to worry about not having any people left. There’s still plenty of innate drive to reproduce, both biologically and intellectually (as in, educating far more people than the number of kids you could ever have).
> Immigrants built America.
These were masses building a new country, not integrating into an existing one; they had a far lower standard of living, and a lot of them died. It did not go well for the existing society in America. The people forming the government were rich British aristocrats. And I don't think these situations are comparable.
I also think you’d be surprised how little time to pay off would actually be needed. Give people a calm space and community (including their friends and family), and we will all flourish.
These are different things. Yes, we are required to provide the resources needed for life to refugees as well as to the elderly. That doesn't automatically mean that this is a worthy investment. What dividends do you intend to get from a bedridden old person?
> Rhetoric otherwise is just veiled racism.
No, it's not; where did I talk about race? If anything, you could accuse me of egoism, nationalism, and social Darwinism. Dealing with other perspectives by shouting "you are the bad guy(tm)" won't advance anyone.
But I did not say that any culture will perform worse than another. I did say that a person robbed of his culture, law, and economic system, and displaced into another, will struggle. Putting on rose-tinted glasses will help neither your society, nor the refugee, nor the refugee's society.
> I also think you’d be surprised how little time to pay off would actually be needed.
Sure, that's why the societies with the refugees all outperform their neighbors and are way more peaceful.
> Give people a calm space and community (including their friends and family), and we will all flourish.
Sure, but why should this take place in your country, just because you have a self-inflicted lack of young people? That is very much egoistic and unfair, in my opinion. Providing money for education aid to developing countries and then sending ministers there to openly headhunt the well-educated for your country doesn't seem too far from colonialism and the slave trade to me.
If we can’t get our act together and help each other out while living within our planet’s means, what kind of life are we even offering to future generations?
Instead of using a thought-terminating insult, think about why people veil their racism, and why they have it. Different races are intrinsically different and some of those differences are harmful to society. You're probably shocked hearing this but most people are too scared to say it because the group will punish them so we end up with people like you somehow actively oblivious to reality.
Nuclear war is still a risk, as are meteor strikes, but somehow they aren't in fashion at the moment. There are yet other risks that people don't want to think about or which protecting ourselves against would be very painful so we also mostly refuse to think about.
Ages ago, someone surely cried that cars would cause our legs to stop functioning.
https://www.roadbikereview.com/threads/editorial-the-bicycle...
>When man invented the bicycle he reached the peak of his attainments. Here was a machine of precision and balance for the convenience of man. And (unlike subsequent inventions for man’s convenience) the more he used it, the fitter his body became. Here, for once, was a product of man’s brain that was entirely beneficial to those who used it, and of no harm or irritation to others.
~ Elizabeth West, Author of Hovel in the Hills
I can't say that I'm totally unaffected by contemporary technology, and my attention span seems to have suffered a little, but I think I'm mostly still intact. I read most days for pleasure and have started a book club. I deliberately take long walks without bringing my smartphone; it's a great feeling of freedom, almost like going back to a simpler time.
We've turned out okay.
> A few weeks ago, The Argument Editor-in-Chief Jerusalem Demsas asked me to write an essay about the claim that AI systems would take all of our jobs within 18 months. My initial reaction was … no?
[...]
> The problem is whether we will degrade our own capabilities in the presence of new machines. We are so fixated on how technology will outskill us that we miss the many ways that we can deskill ourselves.
[...]
> Students, scientists, and anyone else who lets AI do the writing for them will find their screens full of words and their minds emptied of thought.
"For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem [275b] to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise."
Second, Socrates was generally arrogant in stories. The attitude you see there was not a special disdain for reading; it was more of his general "I am better than everyone else anyway" attitude.
Some of the best thinking across history - Euclid, Newton, Einstein - happened in the pre-computer era. So, let alone AI, even computers are not necessary. Pen, paper, and some imagination/experimentation were sufficient.
Some in tech are fear mongering to seek attention.
Conciseness is a valuable thing. It wasn’t practical to convey knowledge in a short form previously because printing and distributing a blog post worth of info was too expensive.
On some level long form content just seems… poorly written. It’s long for the sake of being long.
There are things to be concerned about with students today. They are generally shockingly bad at literacy and numeracy. But I don’t buy that a lack of long form books are the culprit.
But I do think people often approach this issue wrong. Especially, as demonstrated by the OP article:
> “Daniel Shore, the chair of Georgetown’s English department, told me that his students have trouble staying focused on even a sonnet,” Horowitch wrote.
It's the wrong question to ask why people can't focus on a sonnet.
The real question is: why do students who are not interested in literature choose to major in English? What societal and economic incentives drove them to do that?
I agree that they might not, in themselves, be a necessary requirement, however: the ability to engage with material, short or long, at a level of deep focus and intentionality is important. And one of the (extremely common, historically) stronger methods of doing this is with long form content, the less passive and more challenging the better.
It touches on the topic of generally transferable— or diffuse, neurologically speaking— skills. It’s what is frustrating when speaking with folks who insist on ideas like “I shouldn’t have to take all these non-STEM courses”. A truly myopic world view that lacks fundamental understanding of human cognition, especially for the subgroup with this sentiment that will readily affirm the inverse: Non-STEM folks should nonetheless have a strong grounding in maths and sciences for the modes of thinking they impart.
Why the difference? It’s a strange, highly mechanistic and modular view of how critical thinking faculties function. As though, even with plenty of exclusivity, there aren’t still enormous overlapping structures that light up in the brain with all tasks that require concentration. The salience network in particular is critical when reading challenging material as well as during STEM-related thinking, e.g. math. Which, ironically, means the challenging courses involving analytical literature are precisely the courses that, taken seriously, would lay down neural pathways in a person’s salience network that would be extremely useful in thinking about challenging math problems with more tools at your disposal, more angles of attack.
It really shouldn’t require much of an intuitive leap to realize that reading and interpreting complex works of literary creativity or other areas of a GenHumanities topics will help impart the ability to think in creative ways. It’s in the task name. Spending 3x16 hours in a class for a semester, roughly the same for work outside the class, 6 or 7 times throughout a 4 year college stretch is a very small cost for value.
I think the foundational failing of education for the past decades falls into all of these gaps in understanding and the inability to engage with the learning process because too few people can even articulate the relevance of highly relevant material.
You can’t teach things students don’t care to learn.
It is a failure of factual understanding of how cognitive function and critical thinking arise from the brain. If a person isn’t going to do the work then by all means don’t waste their time, but don’t indulge their factually incorrect stance that it has no utility simply because they also don’t have the basic knowledge to know what that utility is.
It’s an attention and working memory test.
I don’t think I’ve ever prided myself on focus. But signing off social media ten years ago has absolutely left me in a competitive place when it comes to deep thinking, and that’s not because I’ve gotten better at it.
> It wasn’t practical to convey knowledge in a short form previously because printing and distributing a blog post worth of info was too expensive
This is entirely ahistoric. Pamphlets and books published in volumes, where each volume would today be a chapter, were the norm. The novel is a modern invention.
I’d be curious to see some examples but I doubt these are anywhere near the size of a one or two page blog post.
The referenced example is Principia Mathematica.
Victorian-era serialised fiction [1]. The Federalist Papers. Everything on clay tablets. Technologically speaking, you don't get a lot of large volumes until the advent of the printing press and mass literacy [2].
> referenced example is principia mathematica
Published in the 20th century.
[1] https://en.wikipedia.org/wiki/Serial_(literature)
[2] https://en.wikipedia.org/wiki/History_of_books#The_printing_...
I don't think concise is necessarily better than long, nor do I think long is better than concise. The thing is, humanity tends to go in cycles. Poems for the Babylonians, long epics for the Greeks, back to poems for Shakespeare and Goethe, then the Russians brought back epics. Kind of a mix during the 20th century, but poetry seemed to slowly fade, and novels trended generally shorter. (All of this is a very 30,000-foot view; of course there were many exceptions in every era.)
Philip Roth predicted the end of the era of the novel at some point, long (relatively) before AI [1]. He said that, similar to poetry in the early 20th century, humanity has evolved past the meaningfulness of the long-form novel.
This doesn't mean "the humanities is dead." It just means that we're entering another cycle where a different form of humanities needs to take over from what we've had in the past.
Anyone arguing that the death of the long-form novel is equivalent to the death of humanities is missing the fact that "humanities" is not a precisely-defined set of topics written in stone. Though it can seem like this is the case at any one point in time, humanities can, and must, exist in many forms that will invariably change as humanity's needs do likewise. That's why its prefix is "human".
[1] https://www.nytimes.com/2018/05/23/books/philip-roth-apprasi...
I don’t think this is true; people have been printing newspapers, pamphlets, and leaflets for hundreds of years.
It isn’t only long form content that the printing press was good for; it’s just that the long form content tends to be remembered longer. Probably because it isn’t just long for the sake of being long :p
The human/machine meld will continue until completed.
Call me when a machine can set goals.