This article assumes that AI is the centre of the universe, failing to understand that this assumption is exactly what's causing the attitude the author is pointing to.
There's a dichotomy in the software world between real products (which have customers and use cases and make money by giving people things they need) and hype products (which exist to get investors excited, so they'll fork over more money). This isn't a strict dichotomy; often companies with real products will mix in tidbits of hype, such as Microsoft's "pivot to AI" which is discussed in the article. But moving toward one pole moves you away from the other.
I think many engineers want to stay as far from hype-driven tech as they can. LLMs are a more substantive technology than blockchain ever was, but like blockchain, their potential has been greatly overstated. I'd rather spend my time delivering value to customers than performing "big potential" to investors.
So, no. I don't think "engineers don't try because they think they can't." I think engineers KNOW they CAN and resent being asked to look pretty and do nothing of value.
A lot of us tried it and just said, "huh, that's interesting" and then went back to work. We hear AI advocates say that their workflow is amazing, but we watch videos of their workflow, and it doesn't look that great. We hear AI advocates say "the next release is about to change everything!", but this knowledge isn't actionable or even accurate.
There's just not much value in chasing the endless AI news cycle, constantly believing that I'll fall behind if I don't read the latest details of Gemini 3.1 and ChatGPT 6.Y (Game Of The Year Edition). The engineers I know who use AI don't seem to have any particular insights about it aside from an encyclopedic knowledge of product details, all of which are changing on a monthly basis anyway.
New products that use gen AI are — by default — uninteresting to me because I know that under the hood, they're just sending text and getting text back, and the thing they're sending to is the same thing that everyone is sending to. Sure, the wrapper is nice, but I'm not paying an overhead fee for that.
There are many valid critiques of AI, but “there’s not much there” isn’t one of them.
To me, any software engineer who tries an LLM, shrugs and says “huh, that’s interesting” and then “gets back to work” is completely failing at their actual job, which is using technology to solve problems. Maybe AI isn’t the right tool for the job, but that kind of shallow dismissal indicates a closed mind, or perhaps a fear-based reaction. Either way, the market is going to punish them accordingly.
If you haven't had a mind-blown moment with AI yet, you aren't doing it right, or you're anchoring on what you know instead of discovering the new tech.
I’m not making any case for anything, but it’s just not that hard to get excited for something that sure does seem like magic sometimes.
Edit: lol this forum :)
I AM very impressed, and I DO use it and enjoy the results.
The problem is the inconsistency. When it works it works great, but it is very noticeable that it is just a machine from how it behaves.
Again, I am VERY impressed by what was achieved. I even enjoy Google AI summaries to some of the questions I now enter instead of search terms. This is definitely a huge step up in tier compared to pre-AI.
But I'm already done getting used to what is possible now. Changes since then have been incremental: nice to have, and I take them. I found a place for the tool, but to match the hype, another equally large step in actual intelligence is necessary before the tool can truly replace humans.
So, I think the reason you don't see more glowing reviews and praise is that the technical people have found out what it can do and can't, and are already using it where appropriate. It's just a tool though. One that has to be watched over when you use it, requiring attention. And it does not learn - I can teach a newbie and they will learn and improve, I can only tweak the AI with prompts, with varying success.
I think that by now I have developed a pretty good feel for what is possible. Changing my entire workflow to using it is simply not useful.
I am actually one of those people who doesn't enjoy coding as such but wants "solutions", probably also because I now work for a normal IT-using company, not for one making an IT product, and my focus most days is on actually accomplishing business tasks.
I do enjoy being able to do some higher level descriptions and getting code for stuff without having to take care of all the gritty details. But this functionality is rudimentary. It IS a huge step, but still not nearly good enough to really be able to reliably delegate to the AI to the degree I want.
In the end you can save like 90% of the development effort on a small one-off project, and like 5% of the development effort on a large complex one.
I think too many managers have been absolutely blown away by canned AI demos and toy projects and have not been properly disappointed when attempting to use the tools on something that is not trivial.
Or your job isn't what AI is good at?
AI seems really good at greenfield projects in well known languages or adding features.
It's been pretty awful, IME, at working with less well-known languages, or deep troubleshooting/tweaking of complex codebases.
This is precisely my experience.
Having the AI work on a large mono repo with a front-end that uses a fairly obscure templating system? Not great.
Spinning up a greenfield React/Vite/ShadCN proof-of-concept for a sales demo? Magic.
When you work with a large codebase that has a very high complexity level, the bugs put in there by AI are not worth the easily added features.
This takes all the joy away; even traditional maintenance projects at big corps seem attractive nowadays.
PC, Web and Smartphone hype was based on "we can now do [thing] never done before".
This time out it feels more like "we can do existing [thing], but reduce the cost of doing it by not employing people"
It all feels much more like a wealth grab for the corporations than a promise of improving a standard of living for end customers. Much closer to a Cloud or Server (replacing Mainframes) cycle.
I think there is a broader dichotomy between the people-persuasion plane and the real-world-facts plane. In the people-persuasion plane, it is all about convincing someone of something, and hype plays here, and marketing, religion, and political persuasion too. In the real-world plane, it is all about tangible outcomes, and working code or results play here, and gravity and electromagnetism too. Sometimes there is a feedback loop between the two. I chose an engineering career because what I produce is tangible, but I realize that a lot of my work is in the people plane.
This right here is the real thing which AI is deployed to upset.
The Enlightenment values which brought us the Industrial Revolution imply that the disparity between the people-persuasion-plane and the real-world-facts-plane should naturally decrease.
The implicit expectation here is that as civilization as a whole learns more about how the universe works, people would naturally become more rational, and thus more persuadable by reality-compliant arguments and less persuadable by reality-denying ones.
That's... not really what I've been seeing. That's not really what most of us have been seeing. Like, devastatingly not so.
My guess is that something became... saturated? I'd place it sometime around the 1970s, same time Bretton Woods ended, and the productivity/wages gap began to grow. Something pertaining to the shared-culture-plane. Maybe there's only so much "informed" people can become before some sort of phase shift occurs and the driving force behind decisions becomes some vague, ethically unaccountable ingroup intuition ("vibes", yo), rather than the kind of explicit, systematic reasoning which actually is available to any human, except for the weird fact how nobody seems to trust it very much any more.
Spot. Fucking. On.
Thank you.
But despite all that, for writing, refactoring, and debugging computer code, LLM agents are still completely game-changing. All of these things are true at the same time. There's no way someone who works with real code all day could spend an honest few weeks with a tool like Claude and come away calling it "hype". Someone might still not prefer it, or it's not for them, but to claim it's "hype"? That's not possible.
Bitcoin is probably unkillable. Even if it were to crash, it won't be hard to round up enough true believers to boost it up again. But it's technically stagnant.
The values of bitcoin are:
- easy access to trading for everyone, without institutional or national barriers
- high leverage, making it easy to effectively borrow a lot of money to trade with
- new derivative products that streamline the process and make speculation easier than ever
The blockchain plays very little part in this. If anything it makes borrowing harder.
The whole cryptocurrency world requires evangelical buy-in. But there is no directly created functional value other than a historic record of transactions and hypothetical decentralization. It doesn’t directly create value. It’s a store of it - again, assuming enough people continue to buy into the narrative so that it doesn’t dramatically deflate when you need to recover your assets. States and other investors are helping make stability happen to maintain it as a value store, but you require the story propagating to achieve those ends.
AI is not both of these things? There are no real AI products that have real customers and make money by giving people what they need?
> LLMs are a more substantive technology than blockchain ever was, but like blockchain, their potential has been greatly overstated.
What do you view as the potential that’s been stated?
In those cases the actual "new" technology (i.e., not necessarily the underlying ai) is not as substantive and novel (to me at least) as a product whose internals are not just an (existing) llm.
(And I do want to clarify that, to me personally, this tendency towards 'thin-shell' products is kind of an inherent flaw with the current state of ai. Having a very flexible llm with broad applications means that you can just put Chatgpt in a lot of stuff and have it more or less work. With the caveat that what you get is rarely a better UX than what you'd get if you'd just prompted an llm yourself.
When someone isn't using llms, in my experience you get more bespoke engineering. The results might not be better than an llm, but obviously that bespoke code is much more interesting to me as a fellow programmer)
Way better than AI jammed into every crevice for no reason.
LLMs are not an intelligence, and people who treat them as if they are infallible oracles of wisdom are responsible for a lot of this fatigue with AI.
Please don't do this; don't make up your own definitions.
Pretty much anything and everything that uses neural nets is AI. Just because you don't like how the definition has been since the beginning doesn't mean you get to reframe it.
In addition, if humans are not infallible oracles of wisdom, they wouldn't count as an intelligence under your definition either.
I haven't escaped this mindset myself. I'm convinced there are a small number of places where LLMs make truly effective tools (see: generation of "must be plausible, need not be accurate" data, e.g. concept art or crowd animations in movies), a large number of places where LLMs make apparently-effective tools that have negative long-term consequences (see: anything involving learning a new skill, anything where correctness is critical), and a large number of places where LLMs are simply ineffective from the get-go but will increasingly be rammed down consumers' throats.
Accordingly I tend to be overly skeptical of AI proponents and anything touching AI. It would be nice if I was more rational, but I'm not; I want everyone working on AI and making money from AI to crash and burn hard. (See also: cryptocurrency)
I find people mostly prefer what they are used to, and if your preference was so superior then how could so many people build fantastic software using the method you don't like?
AI isn't like that. AI is a bunch of people telling me this product can do wonderful things that will change society and replace workers, yet almost every time I use it, it falls far short of that promise. AI is certainly not reliable enough for me to jeopardize the quality of my work by using it heavily.
And it's tricky because I'm trying not to appeal to emotion despite being fascinated with how this tool has enabled me to do things in a short amount of time that would have taken me weeks of grinding, and how it improves my communication with stakeholders. That feels world-changing. Specifically my world, and the day-to-day role I play when it comes to getting things done.
I think it is fine that it fell short of your expectations. It often does for me as well, but when it gets me 80% of the way there in less than a day's work, my mind is blown. It's an imperfect tool and, I'm sorry for saying this, but so are we. Treat its imperfections the same way you would with a junior developer: feedback, reframing, restrictions, and iteration.
I'd argue that's not true. It's more of a stated goal. The actual goal is to achieve the desired outcome in a way that has manageable, understood side effects, and that can be maintained and built upon over time by all capable team members.
The difference between what business folks see as the "output" of software developers (code) and what (good) software developers actually deliver over time is significant. AI can definitely do the former. The latter is less clear. This is one of the fundamental disconnects in discussions about AI in software development.
I'm going to say this next thing as someone with a lot of negative bias about corporations. I was laid off from Twitter when Elon bought the company and at a second company that was hemorrhaging users.
Our job isn't to write code, it's to make the machine do the thing. All the effort toward clean, manageable, etc. code is purely in the interest of the programmer, but at the end of the day, launching the feature that pulls in money is the point.
Well… That's no longer true, is it?
My partner (IT analyst) works for a company owned by a multinational big corporation, and she got told during a meeting with her manager that use of AI is going to become mandatory next year. That's going to be a thing across the board.
And have you called a large company for any reason lately? Could be your telco provider, your bank, public transport company, whatever. You call them, because online contact means haggling with an AI chatbot first to finally give up and shunt you over to an actual person who can help, and contact forms and e-mail have been killed off. Calling is not exactly as bad, but step one nowadays is 'please describe what you're calling for', where some LLM will try to parse that, fail miserably, and then shunt you to an actual person.
AI is already unavoidable.
Rinse and repeat for many "one-off" tasks.
It's not going away, you need to learn how to use it. shrugs shoulders
I mean this in sincerity, and not at all snarky, but - have you considered that you haven't used the tools correctly or effectively? I find that I can get what I need from chatbots (and refuse to call them AI until we have general AI just to be contrary) if I spend a couple of minutes considering constraints and being careful with my prompt language.
When I've come across people in my real life who say they get no value from chatbots, it's because they're asking poorly formed questions, or haven't thought through the problem entirely. Working with chatbots is like working with a very bright lab puppy. They're willing to do whatever you want, but they'll definitely piss on the floor unless you tell them not to.
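To make that concrete, here's a made-up example of the difference (the task and column names are invented, just to show the shape of it):

    Vague: "Write me a script to clean up my data."

    Constrained: "Write a Python 3 script that reads events.csv (columns:
    ts, user_id, amount), drops rows where amount is empty or negative,
    parses ts as ISO 8601 UTC, and writes the result to clean.csv. Use
    only the standard library. If a row fails to parse, log it to stderr
    and keep going. Don't invent columns that aren't listed."

The first gets you a lab puppy guessing; the second gets you something you can actually run.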
Or am I entirely off base with your experience?
I prefer to use an LLM as a sock puppet to filter out implausible options in my problem space and to help me recall how to do boilerplate things. Like you, I think, I also tend to write multi-paragraph prompts, repeating myself and calling back to every aspect, to continuously home in on the true subject I am interested in.
I don't trust LLMs enough to operate on my behalf agentically yet. And an LLM is uncreative and hallucinatory as heck whenever it strays into novel territory, which makes it a dangerous tool.
I think the feeling stems from the exaggeration of the value it provides combined with a large number of internal corporate LLMs being absolute trash.
The overvaluation is seen in effect everywhere: the stock market, the price of RAM, the cost of energy, the IP theft issues, etc. AI has taken over, and yet it still feels like just a really good fuzzy search. Like, yeah, I can search something 10x faster than before but might get a bad answer every now and then.
Yeah its been useful (so have many other things). No it's not worth building trillion dollar data centers for. I would be happier if the spend went towards manufacturing or semiconductor fabs.
It’s because they know it works better every day and the people controlling it are gleefully fucking over the rest of the world because they can.
The plainly stated goal is TO ELIMINATE ALL HUMAN EMPLOYEES, with no plan for how those people will feed, clothe, or house themselves.
The reactions the author was getting were the reactions of a horse talking to someone happily working for the glue factory.
... but maybe not in the way that these CEOs had hoped.[0]
Part of the AI fatigue is that busy, competent devs are getting swarmed with massive amounts of slop from not-very-good developers. Or product managers getting 5 paragraphs of GenAI bug reports instead of a clear and concise explanation.
I have high hopes for AI and think generative tooling is extremely useful in the right hands. But it is extremely concerning that AI is allowing some of the worst, least competent people to generate an order of magnitude more "content" with little awareness of how bad it is.
This will cause bankruptcies and huge job losses. The argument for and against AI doesn't really matter in the end, because the finances don't make a lick of sense.
With AI I can... generate slop. Sometimes that is helpful, but it isn't yet at the point where it's replacing anything for me aside from making google searches take a bit less time on things that I don't need a definitive answer for.
It's popping up in my music streams now and then, and I generally hate it. Mushy-mouthed fake vocals over fake instruments. It pops up online and aside from the occasional meme I hate it there too. It pops up all over blogs and emails and I profoundly hate it there, given that it encourages the actual author to silence themselves and replaces their thoughts with bland drivel.
Every single software product I use begs me to use their AI integration, and instead of "no" I'm given the option of "not now", despite me not needing it, and so I'm constantly being pestered about it by something.
It has, thus far, made nearly everything worse.
To my mind, the 'only just started' argument is wearing off. It's software, it moves fast anyway, and all the giants of the tech world have been feverishly throwing money at AI for the last couple of years. I don't buy that we're still just at the beginning of some huge exponential improvement.
Simple. The company providing the tool suddenly needs actual earnings. Therefore, they need to raise prices. They also need users to spend more tokens, so they will make the tool respond in a way that requires more refinement. After all, the latter is exactly what happened with Google search.
At this point, that is a pretty normal software cycle: attract a crowd by being free or cheap, then lock features behind a paywall. Then simultaneously raise prices more and more while making the product worse.
This literally NEEDS to happen, because these companies do not have any other path to profitability. So, it will happen at some point.
I don't recognize that because it isn't true. I try the LLMs every now and then, and they still make the same stupid hallucinations that ChatGPT did on day 1. AI hype proponents love to make claims that the tech has improved a ton, but based on my experience trying to use it those claims are completely baseless.
One of the tests I sometimes do of LLMs is a geometry puzzle:
You're on the equator facing south. You move forward 10,000 km along the surface of the Earth. Rotate 90° clockwise. You move another 10,000 km forward along the surface of the Earth. Rotate another 90° clockwise, then move another 10,000 km forward along the surface of the Earth.
Where are you now, and what direction are you facing?
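For what it's worth, here's a quick vector-math check of the intended answer. A sketch only, assuming a perfectly spherical Earth with a 40,000 km circumference (so each 10,000 km leg is exactly a quarter great circle) and taking "clockwise" from the traveler's own point of view:

    # Position p and heading h are unit vectors from the Earth's center;
    # moving along a great circle rotates both in the plane they span.
    import numpy as np

    def move(p, h, frac):
        """Advance frac of the circumference along the heading's great circle."""
        theta = 2 * np.pi * frac
        return (p * np.cos(theta) + h * np.sin(theta),
                h * np.cos(theta) - p * np.sin(theta))

    def turn_cw(p, h):
        """Rotate the heading 90 degrees clockwise about the outward normal p."""
        return np.cross(h, p)

    p = np.array([1.0, 0.0, 0.0])   # on the equator, longitude 0
    h = np.array([0.0, 0.0, -1.0])  # facing due south

    p, h = move(p, h, 0.25)         # 10,000 km south: the South Pole
    h = turn_cw(p, h)
    p, h = move(p, h, 0.25)         # back up to the equator, facing north
    h = turn_cw(p, h)               # now facing east
    p, h = move(p, h, 0.25)         # a quarter of the way along the equator

    print(np.round(p, 6))           # ~[1, 0, 0]: the starting point
    print(np.round(h, 6))           # ~[0, 1, 0]: due east

Under that reading you end up exactly where you started, facing east (with the other reading of "clockwise" at the pole, you end up on the equator 180° of longitude away, still facing east).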
They all used to get this wrong all the time. Now the best ones sometimes don't. (That said, the only one to succeed just as I write this comment was DeepSeek; the first I saw succeed was one of ChatGPT's models, but that's now back to the usual error they all used to make.) Anecdotes are of course a bad way to study this kind of thing.
Unfortunately, so are the benchmarks, because the models have quickly saturated most of them, including traditional IQ tests (on the plus side, this has demonstrated that IQ tests are definitely a learnable skill, as LLMs lose 40-50 IQ points when going from public to private IQ tests) and stuff like the maths olympiad.
Right now, AFAICT the only open benchmarks are the METR time horizon metric, the ARC-AGI family of tests, and the "make me an SVG of ${…}" stuff inspired by Simon Willison's pelican on a bike.
How does that bridge get built? I can provide tangible real life examples but I've found push back from that in other online conversations.
Yup. AI is so fickle it'll do anything to accomplish the task. But AI is just a tool; it's all about what you allow it to do. Can't blame AI, really.
I see how the use of AI is useful, but I feel that the practitioners of AI-as-coding-agent are running away from the real work. How can you tell me about the system you say you have created if you don't have the patience to make it or think about it deeply in the first place?
-206dev
I feel fatigued by AI. To be more precise, this fatigue includes several factors. The first one is that a lot of people around me get excited by events in the AI world that I find distracting. These might be new FOSS library releases, news announcements from the big players, new models, new papers. As one person, I can only work on 2-3 things at a given interval in time. Ideally I would like to focus and go deep in those things. Often, I need to learn something new and that takes time, energy and focus. This constant Brownian motion of ideas gives a sense of progress and "keeping up" but, for me at least, acts as a constantly tapped brake.
Secondly, there is a sentiment that every problem has an AI solution. Why sit and think, run experiments, try to build a theoretical framework when one can just present the problem to a model. I use LLMs too but it is more satisfying, productive, insightful when one actually thinks hard and understands a topic before using LLMs.
Thirdly, I keep hearing that the "space moves fast" and "one must keep up". The fundamentals actually haven't changed that much in the last 3 years and new developments are easy to pick up. Even if they did, trying to keep up results in very shallow and broad knowledge that one can't actually use. There are a million things going on and I am completely at peace with not knowing most of them.
Lastly, there is pressure to be strategic. To guess where the tech world is going, to predict and plan, to somehow get ahead. I have no interest in that. I am confident many of us will adapt and if I can't, I'll find something else to do.
I am actually impressed with and heavily use models. The tiresome part now are some of the humans around the technology who participate in the behaviors listed above.
1) A third party app simply cannot compete with Google Maps on coverage, accuracy and being up to date. Yes, there are APIs you can use to access this, but they're expensive and limited, which leads us to the second problem:
2) You can't make money off them. Nobody will pay to use your app (because there's so much free competition), and the monetization opportunities are very limited. It's too late in the flow to sell flights, you can't compete with Booking etc for hotel search, and big ticket attractions don't pay commissions for referrals. That leaves you with referrals for tours, but people who pay for tours are not the ones trying to DIY their trip planning in the first place.
1. You were a therapy session for her. Her negativity was about the layoffs.
2. FAANG companies dramatically overhired for years and are using AI as an excuse for layoffs.
3. The AI scene in Seattle is pretty good, but as with everywhere else, was/is a victim of the AI hype. I see estimates of the hype being dead in a year. AI won't be dead, but throwing money at whatever Uber-for-pets-AI-ly idea pops up won't happen.
4. I don't think people hate AI, they hate the hype.
Anyways, your app actually does sound interesting so I signed up for it.
Among actual people (i.e. not managers) there seems to be a bit of a generation gap - my younger friends (Gen Z) are almost disturbingly enthusiastic about entrusting their every thought and action to ChatGPT; my older friends (young millennials and up) find it odious.
Or, drive through Worth and Bridgeview in IL, where all the middle eastern people in Chicago live, and notice all the AI billboards. Not billboards for AI, just, billboards obviously made with GenAI.
I think it's just not true that non-tech people are especially opposed to AI.
One look at the dystopian billboards bragging about trying to replace humans with AI should make any sane human angry at what tech has done. Or the rising rents due to an influx of people working on mostly useless AI startups, 90% of which won't be around in 5 years. Or even how poorly many in tech behave in public and how poorly they treat service workers. That's just the tip of the iceberg, and just in SF alone.
I say all this as someone living in SF and working in tech. As a whole, we've brought the hate upon ourselves, and we deserve it.
At least, that's my wife's experience working on a contract with a state government at a big tech vendor.
There's maybe a slow trend towards "that's not true, you should know better than to trust AI for that sort of question" in discussions when someone says something like "I asked AI how [xyz was done]" but it's definitely not enough yet to keep anyone from going to it as their first option for answering a question.
It makes a lot of sense that someone casually coming in to use chatgpt for 30 minutes a week doesn't have any reason to think more deeply about what using that tool 'means' or where it came from. Honestly, they shouldn't have to think about it.
That's the presumption. There's no data on whether this is actually true or not. Most rational examinations show that it most likely isn't. The progress of the technology is simply too slow and no exponential growth is on the horizon.
I can assure you, living in Seattle I still encounter a lot of AI boosters, just as much as I encounter AI haters/skeptics.
The world is not at all about fairness of benefits and impacts to all people; it is about a populist mass and what amuses them and makes their lives convenient, hopefully without attending the relevant funerals themselves.
Honestly I don’t really know what to say to that, other than it seems rather relevant to me. I don’t really know what to elaborate on given we disagree on such a fundamental level.
> I wanted her take on Wanderfugl, the AI-powered map I've been building full-time.
I can at least give you one piece of advice. Before you decide on a company or product name, take the time to speak it out loud so you can get a sense of how it sounds.
Now I want to know how you pronounce words like: through, bivouac, and queue.
I personally thought it was wander _fughel_ or something.
Let alone how difficult it is to remember how to spell it and look it up on Google.
There will absolutely be some cases where AI is used well. But probably the larger fraction will be cases where AI does not give a better service, experience, or tool. It will be used to give a cheaper but shittier one. This will be a big win for the company or service implementing it, but it will suck for literally everybody else involved.
I really believe there's huge value in implementing AI pervasively. However it's going to be really hard work and probably take 5 years to do it well. We need to take an engineering and human centred approach and do it steadily and incrementally over time. The current semi-religious fervour about implementing it rapidly and recklessly is going to be very harmful in the longer term.
> none of it had anything to do with what I built. She talked about Copilot 365. And Microsoft AI. And every miserable AI tool she's forced to use at work. My product barely featured. Her reaction wasn't about me at all. It was about her entire environment.
She was given two context clues. AI. And maps. Maps work, which means all the information in an "AI-powered map" descriptor rests on the adjective.
LLMs are always going to give you the most plausible thing for your query, and will likely just rehash the same destinations from hundreds of listicles and status signalling social media posts.
She probably understood this from the minimal description given.
My first reaction was "replace 'AI' with the word 'Cloud'" ca 2012 at MS; what's novel here?
With that in mind, I'm not sure there is anything novel about how your friend is feeling, or the organizational dynamics, or in fact how large corporations go after business opportunities; on those terms, I think your friend's feelings are a little boring, or at least don't give us any new market data.
In MS in that era, there was a massive gold rush inside the org to Cloud-ify everything and move to Azure - people who did well at that prospered, people who did not, ... often did not. This sort of internal marketplace is endemic, and probably a good thing at large tech companies - from the senior leadership side, seeing how employees vote with their feet is valuable - as is, often, the directional leadership you get from a Satya who has MUCH more information than someone on the ground in any mid-level role.
While I'm sure there were many naysayers about the Cloud in 2012, they were wrong, full stop. Azure is immensely valuable. It was right to dig in on it and compete with AWS.
I personally think Satya's got a really interesting hyper scaling strategy right now -- build out national-security-friendly datacenters all over the world -- and I think that's going to pay -- but I could be wrong, and his strategy might be much more sophisticated and diverse than that; either way, I'm pretty sure Seattleites who hate how AI has disrupted their orgs and changed power politics and winners and losers in-house will have to roll with the program over the next five years and figure out where they stand and what they want to work on.
AI pushed down everywhere. Sometimes shitty AI that needed to be proven out at all costs, because it had to live up to the hype.
I was in one such AI org, and even there several teams felt the pressure from SLT and a culture drift toward a dysfunctional environment.
Such pressure to use AI at all costs, as other fellows from Google mentioned, has been a secret ingredient of a bitter burnout. I'm in therapy and on medication now to recover from it.
This is a product of hurt feelings and not solid logic.
Right now, late in the business cycle, "tech" companies are dominated by non-technical people, who don't know how to write software, and aren't even capable of thinking through a real problem well enough to design software to solve it. This happens because people make up imposter roles like scrum master, and product manager, and then convert their friends into these roles to get them jobs, and build up their own political faction at a company. The salaries and opinions of these roles directly crowd out those of real talent.
Take a minute to cut through the bullshit about what each role is supposed to contribute at each part of the development cycle, and focus on the amount of decision-making influence that each person has over engineering resources. It's not weighted towards the innovative or creative people; it's probably inversely weighted. That's all you need to know; don't expect good products until that's fixed.
I'll know things have come around full circle when startups are recruiting with: huge management ratios (10+) or flat orgs, remote work or private offices, no product org, everyone in eng can program, everyone in sales can sell, etc. as selling points.
I still think there’s a third path, one that makes people’s lives better with thoughtful, respectful, and human-first use of AI. But for some reason there aren’t many people working on that.
It hits weirdly close to home. Our leadership did not technically mandate use, but 'strongly encourages' it. I haven't even had my review yet, but I know that once we get to the goals part, use of AI tools will be an actual metric (which is, in my head, somewhere between skeptic and evangelist: dumb).
But the 'AI talent' part fits. For mundane stuff like a data model, I need full committee approval from people who don't get it anyway (and whose entire contribution is 'what other companies are doing').
---------
"If you could classify your project as "AI," you were safe and prestigious. If you couldn't, you were nobody. Overnight, most engineers got rebranded as "not AI talent." And then came the final insult: everyone was forced to use Microsoft's AI tools whether they worked or not.
Copilot for Word. Copilot for PowerPoint. Copilot for email. Copilot for code. Worse than the tools they replaced. Worse than competitors' tools. Sometimes worse than doing the work manually.
But you weren't allowed to fix them—that was the AI org's turf. You were supposed to use them, fail to see productivity gains, and keep quiet.
Meanwhile, AI teams became a protected class. Everyone else saw comp stagnate, stock refreshers evaporate, and performance reviews tank. And if your team failed to meet expectations? Clearly you weren't "embracing AI." "
------------
On the one hand, if you were going to bet big on AI, there are aspects of this approach that make sense. e.g. Force everyone to use the company's no-good AI tools so that they become good. However, not permitting employees outside of the "AI org" to fix things neatly nixes the gains you might see while incurring the full cost.
It sounds like MS's management, the same as many other tech corp's, has become caught up in a conceptual bubble of "AI as panacea". If that bubble doesn't pop soon, MS's products could wind up in a very bad place. There are some very real threats to some of MS's core incumbencies right now (e.g. from Valve).
I think it makes some amount of sense if you've decided you want to be "an AI company", but it also makes me wary. Apocryphally Google for a long period of time struggled to hire some people because they weren't an 'ideal culture fit'. i.e. you're trying to hire someone to fix Linux kernel bugs you hit in production, but they don't know enough about Java or Python to pass the interview gauntlet...
Quite the opposite: LLMs reduce productivity, they don't increase it. They merely give the illusion of productivity because you can generate code real fast, but that isn't actually useful when you spend time fixing all the mistakes it made. It is absolutely insane that companies are stupid enough to require people use something which cripples them.
It doesn't matter how much I use it. It's still just an annoying tool that makes mistakes which you try to correct by arguing with it but then eventually just fix it yourself. At best it can get you 80% there.
So what's different between Seattle and San Francisco? Does Seattle have more employee-workers and San Francisco has more people hustling for their startup?
I assume Bali (being a vacation destination) is full of people who are wealthy enough to feel they're insulated from whatever will happen.
Seattle has more “normal” people and the overall rhetoric about how life “should be” is in many ways resistant to tech. There’s a lot to like about the city, but it absolutely does not have a hustle culture. I’ve honestly found it depressing coming from the East Coast.
Tangent aside, my point is that Seattle has far more of a comparison ground of “you all are building shit that doesn’t make the world better, it just devalues the human”. I think LLMs have (some) strong use cases, but it is impossible to argue that some of the societal downsides we see are ripe for hatred - and Seattle will latch on to that in a heartbeat.
Working for a month out of Bali was wonderful, it's mostly Australians and Dutch people working remotely. Especially those who ran their own businesses were super encouraging (though maybe that's just because entrepreneurs are more supportive of other entrepreneurs).
You know who's NOT divided? Everyone outside the tech/management world. Antipathy towards AI is extremely widespread.
An opinion I've personally never encountered in the wild.
I think it's definitely stronger in MS, as my friend on the inside tells me, than most places.
There are a lot of elements to it: profits at all costs, the greater economy, FOMO, and a resentment of engineers and technical people who have been practicing, for a long time, what I can only guess execs see as alchemy. They've decided that they are now done with that and that everyone must use the new sauce, because reasons. Sadly, until things like logon buttons disappear and customers get pissed, it won't self-correct.
I just wish we could present the best version of ourselves, and as long as deadlines are met, it'll all work out, but some have decided on scorched earth. I suppose it's a natural reaction to always want to be on the cutting edge, even before the cake has left the oven.
They should focus more on data engineering/science and other similar fields, which are a lot more about that, but since there are often no tests there, that's a bit too risky.
I live in Seattle, and got laid off from Microsoft as a PM in Jan of this year.
Tried in early 2024 to demonstrate how we could leverage smaller models (such as Mixtral) to improve documentation and tailor code samples for our auth libraries.
The usual “fiefdom” politics took over and the project never gained steam. I do feel like I was put in a certain “non-AI” category and my career stalled, even though I took the time to build AI-integrated prototypes and present them to leadership.
It’s hard to put on a smile and go through interviews right now. It feels like the hard-earned skills we bring to the table are being so hastily devalued, and for what exactly?
I think the SEA and SF tech scenes are hard to differentiate perfectly in a HN comment. However, I think any "Seattle hates AI" has to do more with the incessant pushing of AI into all the tech spaces.
It's being claimed as the next major evolution of computing, while also being cited as reasons for layoffs. Sounds like a positive for some (rich people) and a negative for many other people.
It's being forced into new features of existing products, while adoption of said features is low. This feels like cult-like behavior where you must be in favor of AI in your products, or else you're considered a luddite.
I think the confusing thing to me is that things which are successful don't typically need to be touted so aggressively. I'm on the younger side and generally positive to developments in tech, but the spending and the CEO group-think around "AI all the things" doesn't sit well as being aligned with a naturally successful development. Also, maybe I'm just burned out on ads in podcasts for "is your workforce using Agentic AI to optimize ..."
I consider it divine intervention that I departed shortly before LLMs got big. I can't imagine the unholy machinations my former team has been tasked with working on since I left.
> It felt like the culture wanted change.
>
> That world is gone.
Ummm source?
> This belief system—that AI is useless and that you're not good enough to work on it anyway
I actually don't know anyone with this belief system. I'm pretty slow on picking up a lot of AI tooling, but I was slow to pick up JS frameworks as well.
It's just smart to not immediately jump on a bandwagon when things are changing so fast because there is a good chance you're backing the wrong horse.
And by the way, you sound ridiculous when you call me a dinosaur just because I haven't started using a tool that didn't even exist 6 months ago. FOMO sales tactics don't work on everyone, sorry to break it to you.
For the better, or for the worse?
Electrical engineering? Garbage.
Construction projects? Useless.
But code is code everywhere, and the immense amount of training data available in the form of working code and tutorials, design and style guides, means that the output as regards software development doesn't really resemble what anybody working in any other field sees. Even adjacent technical fields.
I'm working on a harness, but I think it can do some basic Revit layouts with coaxing (which, with a good harness, should be really useful!).
Let me know what you've experienced. Not many construction EEs on HN.
Oddly, the screenshots in the article show the name as "Wanderfull".
I expect it to settle out in a few years where:
1. The fiduciary duties of company shareholders will bring them to the point of no longer chasing AI hype, and instead deriving an understanding of whether it's driving real top-line value for their business or not.
2. Mid-to-senior career engineers will have no choice but to level up their AI skills to stay relevant in the modern workforce.
I'm not sure why. I don't think it's access to capital, but I'd love to hear thoughts.
Microsoft is the same, a generally very practical company just trying to do practical company stuff.
All the guys that made their bones, vested and rested, and now want to turn some of that windfall into investments likely don't have the kind of risk tolerance it takes to fund a potential unicorn. All smart people, I'm sure: smart enough to negotiate big windfalls from ms/az, but far less risk tolerant than a guy in SF who made his investment nest egg building some risky unicorn.
"I said, Imagine how cool would this be if we had like, a 10-foot wall. It’s interactive and it’s historical. And you could talk to Martin Luther King, and you could say, ‘Well, Dr, Martin Luther King, I’ve always wanted to meet you. What was your day like today? What did you have for breakfast?’ And he comes back and he talks to you right now."
Visual Studio is great. IntelliSense is great. Nothing open-source works on our giant legacy C++ codebase. IntelliSense does.
Claude is great. Claude can't deal with millions of lines of C++.
You know what would be great? If Microsoft gave Claude the ability to semantic search the same way that I can with Ctrl-, in Visual Studio. You know what would be even better? If it could also set breakpoints and inspect stuff in the Debugger.
You know what Microsoft has done? Added a setting to Visual Studio where I can replace the IntelliSense auto-complete UI, that provides real information determined from semantic analysis of the codebase and allows me to cycle through a menu of possibilities, with an auto-complete UI that gives me a single suggestion of complete bullshit.
Can't you put the AI people and the Visual Studio people in a fucking room together? Figure out how LLMs can augment your already-really-good-before-AI product? How to leverage your existing products to let Claude do stuff that Claude Code can't do?
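Just to be concrete about how small the ask is, here's roughly what that bridge could look like as an MCP server. This is a sketch only: the FastMCP scaffolding is from the real MCP Python SDK, but every hook behind the tools is something I'm inventing, since Microsoft doesn't expose any of it:

    # Hypothetical Claude <-> Visual Studio bridge. The tool surface is the
    # point; the bodies are stubs because the VS hooks don't (yet) exist.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("visual-studio-bridge")

    @mcp.tool()
    def semantic_search(query: str) -> list[str]:
        """Search the codebase the way Ctrl+, does: by symbol, not substring."""
        raise NotImplementedError("imaginary hook into VS's Go To All index")

    @mcp.tool()
    def set_breakpoint(file: str, line: int) -> str:
        """Set a breakpoint in the debugger at file:line."""
        raise NotImplementedError("imaginary hook into the VS debugger")

    @mcp.tool()
    def inspect_value(expression: str) -> str:
        """Evaluate an expression while stopped, like the Watch window."""
        raise NotImplementedError("imaginary hook into the VS debugger")

    if __name__ == "__main__":
        mcp.run()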
It's not about their careers. It's about the injustice of the whole situation. Can you possibly perceive the injustice? That the thing they're pissed about is the injustice? You're part of the problem because you can't.
That's why it's not about whether the tools are good or bad. Most of them suck, also, but occasionally they don't--but that's not the point. The point is the injustice of having them shoved in your face; of having everything that could be doing good work pivot to AI instead; of everyone shamelessly bandwagoning it and ignoring everything else; etc.
That's the thing, though, it is about their careers.
It's not just that people are annoyed that someone spends years to decades learning their craft, and then someone else puts a prompt into a chatbot that spits out an app that mostly works, without understanding any of the code that they 'wrote'.
It's that the executives are positively giddy at the prospect that they can get rid of some number of their employees and the rest will use AI bots to pick up the slack. Humans need things like a desk and dental insurance, and they fall unconscious for several hours every night. AI agents don't have to take lunch breaks or attend funerals or anything.
Most employees that have figured this out resent AI getting shoved into every facet of their jobs because they know exactly what the end goal is: that lots of jobs are going to be going away and nothing is going to replace them. And then what?
If you are a utilitarian person and you try to parse a scrupulous person according to your utilitarianism of course their actions and opinions will make no sense to you. They are not maximizing for utility, whatsoever, in any sense. They are maximizing for justice. And when injustices are perpetrated by people who are unaccountable, it creates anger and complaining. It's the most you can do. The goal is to get other people to also be mad and perhaps organize enough to do something about it. When you ignore them, when you fail to parse anything they say as about justice, then yes, you are part of the problem.
Yeah, that's weird. Why would anyone think that? /s
Look, good engineers just want to do good work. We want to use good tools to do good work, and I was an early proponent of using these tools in ways to help the business function better at PriorCo. But because I was on the wrong team (On-Prem), and because I didn’t use their chatbots constantly (I was already pitching agents before they were a defined thing, I just suck at vocabulary), I was ripe for being thrown out. That built a serious resentment towards the tooling for the actions of shitty humans.
I’m not alone in these feelings of resentment. There’s a lot of us, because instead of trusting engineers to do good work with good tools, a handful of rich fucks decided they knew technology better than the engineers building the fucking things.
[…]
Seattle has talent as good as anywhere. But in San Francisco, people still believe they can change the world—so sometimes they actually do.”
Nope, still completely fucking tone deaf.
Here's the deal. Everyone I know who is infatuated with AI shares things AI told them with me, unsolicited, and it's always so amazingly garbage, but they don't see it or they apologize it away [1]. And this garbage is being shoved in my face from every angle: my browser added it, my search engine added it, my desktop OS added it, my mobile OS added it, some of my banks are pushing it, and AI comment slop is ruining discussion forums everywhere (even more than they already were, which is impressive!). In the meantime, AI is sucking up all the GPUs, all the RAM, and all the kWh.
If AI is actually working for you, great, but you're going to have to show it. Otherwise, I'm just going to go into my cave and come out in 5 years and hope things got better.
[1] Just a couple days ago, my spouse was complaining to her friend about a change that Facebook made, and her friend pasted an AI suggestion for how to fix it with like 7 steps that were all fabricated. That isn't helpful at all. It's even less helpful than if the friend just suggested to contact support and/or delete the facebook account.
Specifically, I was using Gemini to answer questions about Godot for C# (not GDScript or using the IDE, where documentation and forum support are stronger), and it was mostly quite good for that.
(Protip: if you're going to use em—dashes—everywhere, either learn to use them appropriately, or be prepared to be blasted for AI—ification of your writing.)
Having em-dashes everywhere—but each one or pair is used correctly—smacks of AI writing—AI has figured out how to use them, what they're for, and when they fit—but has not figured out how to revise text so that the overall flow of the text and overall density of them is correct—that is, low, because they're heavy emphasis—real interruptions.
(Also the quirky three-point bullet list with a three-point recitation at the end with bolded leadoffs to each bullet point and a final punchy closer sentence is totally an AI thing too.)
But, hey, I guess I fit the stereotype!—I'm in Seattle and I hate AI, too.
IIRC (it's been a while) there are two cases where a semicolon is acceptable. One is when connecting two closely related independent clauses, i.e. they could be two complete sentences on their own or joined by a conjunction ("The build failed; the logs explain why"). The other is when separating items in a list where the items themselves contain commas ("We tested on Windows, which was easy; macOS, which was harder; and Linux").
But introductory rhetorical questions? As sentence fragments? There I draw the line.
>>>
For me, the issue is that they’re misused in this piece. Em dashes used as appositives carry the feel of interruption—like this—and should be employed sparingly. They create a jarring bump in the narrative’s flow, and that bump should only appear when you want it. Otherwise, appositives belong with commas (when they’re integral to the sentence) or parentheses (when they’re not). Clause breaks follow the same logic: the em dash is the strongest interruption. Colons convey a sense of arrival—you’ve been building up to this: and now it’s here. Semicolons are for those rare cases when two clauses can’t quite stand alone as separate sentences; most of the time, a full stop is cleaner. Like this. Which is why full stops should be your default splice when revising.
Sprinkling em dashes everywhere—even if each one is technically correct—feels like AI writing. The system has learned what they are, how they work, and when they fit, but it hasn’t learned how to revise for overall flow or density. The result is too many dashes, when the right number should be low, because they’re heavy emphasis—true interruptions.
(And yes, the quirky three-point bullet list with bolded openers and a punchy closer at the end is another hallmark of AI prose.)
But hey, I guess I fit the stereotype—I’m in Seattle, and I hate AI too.
> This belief system—that AI is useless and that you're not good enough to work on it anyway—hurts three groups
I don't know anyone who thinks AI is useless. In fact, I've seen quite a few places where it can be quite useful. Instead, I think it's massively overhyped to its own detriment. This article presents the author as the person who has the One True Vision, and all us skeptics are just tragically undereducated.
I'm a crusty old engineer. In my career, I've seen RAD tooling, CASE tools, no/low-code tools, SGML/XML, and Web3 not live up to the lofty claims of the devotees and therefore become radioactive despite there being some useful bits in there. I suspect AI is headed down the same path and see (and hear of) more and more projects that start out looking really impressive and then crumble after a few promising milestones.
I think some of the reasons given were bullshit, but in fairness I have grown pretty tired of how much low-effort AI slop has been ruining YouTube. I use ChatGPT all the time, but I am growing more than a little frustrated by how much shit on the internet is clearly just generated text with no actual human contribution. I don’t inherently have an issue with “vibe coding”, but it is getting increasingly irritating having to dig through several-thousand-line pull requests of obviously-AI-generated code.
I’m conflicted. I think AI is very cool, but it is so perfectly designed to exploit natural human laziness. It’s a tool that can do tremendous good, but like most things, it requires people use it with effort, which does seem to be the outlier case.
My buddies still or until recently still at Amazon have definitely been feeling this same push. Internal culture there has been broken since the post covid layoffs, and layering "AI" over the layoffs leaves a bad taste.
Not only because it's destroying creator jobs while also ripping off creators, but it's also producing shit that's offensively bad to professionals.
One thing that people in tech circles might not be aware of is that people outside of tech circles aren't thinking that tech workers are smart. They haven't thought that for a long time. They are generally thinking that tech workers are dimwit exploiter techbros, screwing over everyone. This started before "AI", but now "AI" (and tech billionaires backing certain political elements) has poured gasoline on the fire. Good luck getting dates with people from outside our field of employment. (You could try making your dating profile all about enjoying hiking and dabbling with your acoustic guitar, but they'll quickly know you're the enemy, as soon as you drive up in a Tesla, or as soon you say "actually..." before launching into a libertarian economics spiel over coffee.)
Second, engineering and innovation are two different categories. Most of engineering is about ... making things work. Fixing bugs, refactoring fragile code, building new features people need or want. Maybe AI products would be hated less if they were just a little less about pretending to be an innovation and just a little more about making things work.
Satya has completely wasted their early lead in AI. Google is now the leader.
But also, it's not just my own. My wife's a graphic designer. She uses AI all the time.
Honestly, this has been revolutionary for me for getting things done.
Again a somewhat positive term (if you focus on "back to nature" and ignore the nationalist parts) is taken, assimilated and turned on its head.
If it were about the technology, then it would be no different from being a java/c++ developer and calling someone who does html and javascript your equal, and so paying them the same. It's not.
People get anxious when something may cause them to have to change - especially in terms of economics and the pressures that puts on people beyond just "adulting". But I don't really think you explained the why of their anxiety.
Pointing the finger at AI is like telling the Germans that all their problems are because of Jews without calling out why the Germans are feeling pressure from their problems in the first place.
As somebody who has lived in Seattle for over 20 years and spent about 1/3 of it working in big tech (but not either of those companies), no, I don't really think so. There is a lot of resentment, for the same reasons as everywhere else: a substantial big tech presence puts anyone who can't get on the train at a significant economic disadvantage.
If you are a writer or a painter or a developer, in a city as expensive as Seattle, then you may feel a little threatened. Then it becomes the trickle-down effect: if I lose my job, I may not be able to pay for my dog walker, or my child care, or my hairdresser, or...
Are they sympathetic? It depends on how much they depend on those who are impacted. Everyone wants to get paid, but AI doesn't have kids to feed or diapers to buy.
SF embraces tech and in general (politics, etc) has a culture of being willing to try new things. Overall tech hostility is low, but the city becoming a testbed for projects like Waymo is possibly changing that. There is a continuous argument that their free-spirited culture has been cannibalized by tech.
Seattle feels like the complete opposite. Resistant to change, resistant to trying things, and if you say you work in tech you're now a "techbro" and met with eyerolls. This is in part because in Seattle if you are a "techbro" you work for one of the megacorps whereas in SF a "techbro" could be working for any number of cool startups.
As you mentioned, Seattle has also been taken over by said megacorps which has colored the impressions of everyone. When you have entire city blocks taken over by Microsoft/Amazon and the roads congested by them it definitely has some negative domino effects.
As an aside, on TV we in the Seattle area get ads about how much Amazon has been doing for the community. Definitely some PR campaign to keep local hostility low.
I think most people in Seattle know how economics works, logic follows:

    while "techbro" don't work is true:
        if "techbro" debt > income:
            unless assets == 0:
                sellgighustle
            else
                sellhousebeforeforeclosure
                nomoreseattleforyou("techbro")
            end
        else
            "gigbot" isn't summoned and people don't get paid.
            "techbro" health-- due to high expense of COBRA.
            [etc...]
        end
    end
(...and exactly how is Boeing doing since it was forced to move away from 'engineering culture' by moving out of the city where its workforce was trained and training the next generation? Oh yeah, planes are falling out of the sky and their software is pushing planes into the ground.)
As to the point of the article, is it just to say "People shouldn't hate LLMs"? My takeaway was more "This person's future isn't threatened directly so they just aren't understanding why people feel this way." but I also personally believe that, if the CEOs have their way, AI will threaten every job eventually.
So yeah I guess I'm just curious what the conclusion presented here is meant to be?
Anecdotally, I work at a different FAANMG+whatever company in Seattle that I feel has actually done a pretty good job with AI internally: providing tools that we aren't forced to use (i.e. they add selectable functionality without disrupting existing workflows), not tying ratings/comp to AI usage (seriously how fucking stupid are they over in Redmond?), and generally letting adoption proceed organically. The result is that people have room to experiment with it and actually use it where it adds real value, which is a nonzero but frankly much narrower slice than a lot of """technologists""" and """thought leaders""" are telling us.
Maybe since Microsoft and Amazon are the lion's share (are they?) of big tech employment in Seattle, your point stands. But I think you could present it with a bit of a broader view, though of course that would require more research on your part.
Also, I'd be shocked if there wasn't a serious groundswell of anti-AI sentiment in SF and everywhere else with a significant tech industry presence. I suspect you are suffering from a bit of bias due to running in differently-aligned circles in SF vs. Seattle.
As a customer, I actually had an MS account manager once yell at me for refusing to touch <latest newfangled vaporware from MS> with a ten-foot pole. Sorry, burn me a dozen times; I don't have any appendages left to care. I seriously don't get Microsoft. I am still flabbergasted anytime anyone takes Microsoft seriously.
Presumably the account manager is under a lot of pressure internally...
Do they repeatedly yell at you?
Do you know how your <vaporware> usage was measured - what metrics was the account manager supposed to improve?
I feel bad for people who work at dystopian places where you can't just do the job, try to get ahead etc. It is set up to make people fail and play politics.
I wonder if the company is dying slowly, but with AI hype and good old foundations keeping its stock price going.
It has all the telltale signs: lots of em-dashes, but also "punched up" paragraphs, a lot of which end with a zinger, e.g.
> Amazon folks are slightly more insulated, but not by much. The old Seattle deal—Amazon treats you poorly but pays you more—only masks the rot.
or
> Seattle has talent as good as anywhere. But in San Francisco, people still believe they can change the world—so sometimes they actually do.
Once or twice can be coincidence, but a full article of it reads a tiny bit like AI slop.
Of course, you could also go online and sulk, I suppose. There are more options between "ZIRP boomtimes lol jobs for everyone!" and "I got fired and replaced with ELIZA". But are tech workers willing to explore them? That's the question.
It just feels like it's in bad taste that we have the most money and privilege and employment left (despite all of the doom and gloom), and we're sitting around feeling sorry for ourselves. If not now, when? And if not us, who?
AI, the actual algorithm that generates code and analyzes images, is quite interesting underlying tech.
I'm not sure they're as wrong as these statements imply?
Do we think there's more or less crap out now with the advent and pervasiveness of AI? Not just from random CEOs pushing things top down, but even from ICs doing their own gig?
It's an infinite moving goalpost of hate: if it's an actor, a "creative", a writer, AI is a monolithic doom; next it's theoretical public policy or the lack thereof; and if nothing about it affects them, then it's about the energy use and the environment.
Nobody is going to hear about what your AI does, so don't mention anything about AI unless you're trying to earn or raise money. It's a double life.
I wouldn't shit talk you to your face if you're making an AI thing. However I also understand the frustration and the exhaustion with it, and to be blunt, if a product advertises AI in it, I immediately do treat it more skeptically. If the features are opt-in, fine. If however it seems like the sort of thing that's going to start spamming me with Clippy-style "let our AI do your work for you!" popups whilst I'm trying to learn your fucking software, I will get aggravated extremely fast.
Except it didn't stick? https://news.ycombinator.com/item?id=43088369
Well, it's not the fault of a random person doing some project that may even be cool.
I'll certainly adjust my priors and start treating the person as probably an idiot. But if given evidence they are not, I'm interested in what they are doing.
I'm all for shaming people who just link to ChatGPT and call their whatever thing AI powered. If you're actually doing some work though and doing something interesting, I'll hear you out.
No shit. But that's hardly everyone in Seattle. I'd imagine people at Amazon aren't upset about being forced to use Copilot, nor are Google folks.
Maybe he can be "AI" officer at Infosys.
EDIT: The sycophancy of the downvoters for their beloved kings is striking. People like Nadella literally tell "AI" skeptics to leave. I am telling Nadella to leave. The enshittification process of Microsoft has started under Nadella.
Oh yeah, call out a tech city and all the butt-hurt-ness comes out. Perfect example of "Rage Bait".
People here aren't hurt because of AI - people here are hurt because they learned they were just line items in a budget.
When the interest rates went up in 2022/2023 and the cheap money went away, businesses had to pivot their tactics while appeasing the shareholders.
Remember that time when Satya went to a company-sponsored rich-people thing, with Aerosmith or whoever playing, while announcing thousands of FTEs being laid off? Yeah, that...
If your job can be done by a very small shell script, why wasn't it done before?