Everyone in Seattle hates AI
260 points
1 hour ago
| 75 comments
| jonready.com
| HN
bccdee
1 hour ago
[-]
> Engineers don't try because they think they can't.

This article assumes that AI is the centre of the universe, failing to understand that this very assumption is what's causing the attitude it points to.

There's a dichotomy in the software world between real products (which have customers and use cases and make money by giving people things they need) and hype products (which exist to get investors excited, so they'll fork over more money). This isn't a strict dichotomy; often companies with real products will mix in tidbits of hype, such as Microsoft's "pivot to AI" which is discussed in the article. But moving toward one pole moves you away from the other.

I think many engineers want to stay as far from hype-driven tech as they can. LLMs are a more substantive technology than blockchain ever was, but like blockchain, their potential has been greatly overstated. I'd rather spend my time delivering value to customers than performing "big potential" to investors.

So, no. I don't think "engineers don't try because they think they can't." I think engineers KNOW they CAN and resent being asked to look pretty and do nothing of value.

reply
averyvery
46 minutes ago
[-]
Yeah, "Engineers don't try" is a frustrating statement. We've all tried generative AI, and there's not that much to it — you put text in, you get text back out. Some models are better at some tasks, some tools are better at finding the right text and connecting it to the right actions, some tools provide a better wrapper around the text-generation process. Certain jobs are very easy for AI to do, others are impossible (but the AI lies about them).

A lot of us tried it and just said, "huh, that's interesting" and then went back to work. We hear AI advocates say that their workflow is amazing, but we watch videos of their workflow, and it doesn't look that great. We hear AI advocates say "the next release is about to change everything!", but this knowledge isn't actionable or even accurate.

There's just not much value in chasing the endless AI news cycle, constantly believing that I'll fall behind if I don't read the latest details of Gemini 3.1 and ChatGPT 6.Y (Game Of The Year Edition). The engineers I know who use AI don't seem to have any particular insights about it aside from an encyclopedic knowledge of product details, all of which are changing on a monthly basis anyway.

New products that use gen AI are — by default — uninteresting to me because I know that under the hood, they're just sending text and getting text back, and the thing they're sending to is the same thing that everyone is sending to. Sure, the wrapper is nice, but I'm not paying an overhead fee for that.
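The "sending text and getting text back" point can be made concrete. As a minimal sketch (every name here is illustrative, and the model call is stubbed out, since any hosted API would do), a typical "thin wrapper" AI product reduces to a prompt template around someone else's text-in/text-out endpoint:

```python
# Minimal sketch of a "thin wrapper" AI product. All names are
# illustrative; call_model stands in for whatever hosted text-in,
# text-out API the product actually uses under the hood.

def make_wrapper(call_model, template: str):
    """Return a 'product' that decorates user input with a fixed prompt."""
    def product(user_input: str) -> str:
        prompt = template.format(input=user_input)
        return call_model(prompt)  # the only real work happens elsewhere
    return product

# Stub model so the sketch runs without any external service.
def stub_model(prompt: str) -> str:
    return f"[model output for: {prompt!r}]"

summarizer = make_wrapper(stub_model, "Summarize the following:\n{input}")
print(summarizer("quarterly report text"))
```

Everything product-specific lives in `template`; the differentiation between such products is mostly that string and the UI around it.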

reply
senordevnyc
9 minutes ago
[-]
I’ve been an engineer for 20 years, for myself, small companies, and big tech, and I'm now working for my own SaaS company.

There are many valid critiques of AI, but “there’s not much there” isn’t one of them.

To me, any software engineer who tries an LLM, shrugs and says “huh, that’s interesting” and then “gets back to work” is completely failing at their actual job, which is using technology to solve problems. Maybe AI isn’t the right tool for the job, but that kind of shallow dismissal indicates a closed mind, or perhaps a fear-based reaction. Either way, the market is going to punish them accordingly.

reply
SV_BubbleTime
41 minutes ago
[-]
This isn’t “unfair”, but you are intentionally underselling it.

If you haven’t had a mind blown moment with AI yet, you aren’t doing it right or are anchoring in what you know vs discovering new tech.

I’m not making any case for anything, but it’s just not that hard to get excited for something that sure does seem like magic sometimes.

Edit: lol this forum :)

reply
nosianu
28 minutes ago
[-]
> If you haven’t had a mind blown moment with AI yet, you aren’t doing it right

I AM very impressed, and I DO use it and enjoy the results.

The problem is the inconsistency. When it works it works great, but it is very noticeable that it is just a machine from how it behaves.

Again, I am VERY impressed by what has been achieved. I even enjoy the Google AI summaries for some of the questions I now enter instead of search terms. This is definitely a huge step up in tier compared to pre-AI.

But I'm already done getting used to what is possible now. Changes since then have been incremental: nice to have, and I take them. I found a place for the tool, but to match the hype, another equally large step in actual intelligence is necessary, for the tool to truly be able to replace humans.

So, I think the reason you don't see more glowing reviews and praise is that the technical people have found out what it can do and can't, and are already using it where appropriate. It's just a tool though. One that has to be watched over when you use it, requiring attention. And it does not learn - I can teach a newbie and they will learn and improve, I can only tweak the AI with prompts, with varying success.

I think that by now I have developed a pretty good feel for what is possible. Changing my entire workflow to using it is simply not useful.

I'm actually one of those who don't enjoy coding as such but want "solutions", probably also because I now work for a normal company that uses IT, not one making an IT product, and my focus most days is on actually accomplishing business tasks.

I do enjoy being able to give some higher-level descriptions and get code for stuff without having to take care of all the gritty details. But this functionality is rudimentary. It IS a huge step, but still not nearly good enough to reliably delegate to the AI to the degree I want.

reply
jandrese
14 minutes ago
[-]
The big problem is that AI is amazing at the rote boilerplate stuff that generally wasn't a problem to begin with, but if you point a codebot at your trouble-ticket system and tell it to go fix the issues, it will be hopeless. Once your system gets complex enough, the AI's effectiveness drops off rapidly, and you as the engineer have to spend more and more time babysitting every step to make sure it doesn't go off the rails.

In the end you can save like 90% of the development effort on a small one-off project, and like 5% of the development effort on a large complex one.

I think too many managers have been absolutely blown away by canned AI demos and toy projects and have not been properly disappointed when attempting to use the tools on something that is not trivial.

reply
rconti
28 minutes ago
[-]
> If you haven’t had a mind blown moment with AI yet, you aren’t doing it right or are anchoring in what you know vs discovering new tech.

Or your job isn't what AI is good at?

AI seems really good at greenfield projects in well known languages or adding features.

It's been pretty awful, IME, at working with less well-known languages, or deep troubleshooting/tweaking of complex codebases.

reply
perardi
12 minutes ago
[-]
> It's been pretty awful, IME, at working with less well-known languages, or deep troubleshooting/tweaking of complex codebases.

This is precisely my experience.

Having the AI work on a large mono repo with a front-end that uses a fairly obscure templating system? Not great.

Spinning up a greenfield React/Vite/ShadCN proof-of-concept for a sales demo? Magic.

reply
sydd
15 minutes ago
[-]
The more I use AI for coding, the more I realize that it's a toy for vibe coding and fun projects. It's not for serious work.

When you work with a large codebase that has a very high level of complexity, the bugs AI puts in there aren't worth the cost savings on the easily added features.

reply
Jblx2
10 minutes ago
[-]
I wonder if this issue isn't caused by people who aren't programmers, and now they can churn out AI-generated stuff that they couldn't before. So to them, this is a magical new ability. Whereas people who are already adept at their craft just see the slop. Same thing in other areas. In the before-times, you had to painstakingly handcraft your cat memes. Now a bot comes along and allows someone to make cat memes they didn't bother with before. But the real artisan cat memeists just roll their eyes.
reply
ggerni
34 minutes ago
[-]
post portfolio I wanna see your bags
reply
bigstrat2003
18 minutes ago
[-]
This shit right here is why people hate AI hype proponents. It's like it never crosses their mind that someone who disagrees with them might just be an intelligent person who tried it and found it was lacking. No, it's always "you're either doing it wrong or weren't really trying". Do you not see how condescending and annoying that is to people?
reply
pjmlp
1 hour ago
[-]
In European consulting agencies the trend now is to make AI part of each RFP reply: you won't get past the sales team if AI isn't crammed in as part of the solution being delivered, and we get evaluated on it.

This takes all the joy away; even traditional maintenance projects for big corps seem attractive nowadays.

reply
mr_toad
42 minutes ago
[-]
I remember when everything had to have the word ‘digital’ in it. And I’m old enough to remember when ‘multimedia’ was a buzzword that was crammed in anywhere it would fit.
reply
Fordec
9 minutes ago
[-]
You know what, this clarifies something for me.

PC, Web and Smartphone hype was based on "we can now do [thing] never done before".

This time out it feels more like "we can do existing [thing], but reduce the cost of doing it by not employing people"

It all feels much more like a wealth grab for the corporations than a promise of improving a standard of living for end customers. Much closer to a Cloud or Server (replacing Mainframes) cycle.

reply
pjmlp
29 minutes ago
[-]
Same, doesn't make this hype phase more bearable though.
reply
Paianni
34 minutes ago
[-]
or 'interactive' or 'cloud' (early 2010s).
reply
bwfan123
42 minutes ago
[-]
> There's a dichotomy in the software world between real products (which have customers and use cases and make money by giving people things they need) and hype product

I think there is a broader dichotomy between the people-persuasion plane and the real-world-facts plane. In the people-persuasion plane, it is all about convincing someone of something; hype plays here, and marketing, religion, and political persuasion too. In the real-world plane, it is all about tangible outcomes; working code and results play here, and gravity and electromagnetism too. Sometimes there is a feedback loop between the two. I chose an engineering career because what I produce is tangible, but I realize that a lot of my work is in the people-plane.

reply
balamatom
5 minutes ago
[-]
>a broader dichotomy between the people-persuasion plane and the real-world-facts plane

This right here is the real thing which AI is deployed to upset.

The Enlightenment values which brought us the Industrial Revolution imply that the disparity between the people-persuasion-plane and the real-world-facts-plane should naturally decrease.

The implicit expectation here is that as civilization as a whole learns more about how the universe works, people would naturally become more rational, and thus more persuadable by reality-compliant arguments and less persuadable by reality-denying ones.

That's... not really what I've been seeing. That's not really what most of us have been seeing. Like, devastatingly not so.

My guess is that something became... saturated? I'd place it sometime around the 1970s, the same time Bretton Woods ended and the productivity/wages gap began to grow. Something pertaining to the shared-culture plane. Maybe there's only so much "informed" people can become before some sort of phase shift occurs and the driving force behind decisions becomes some vague, ethically unaccountable ingroup intuition ("vibes", yo), rather than the kind of explicit, systematic reasoning which actually is available to any human, except for the weird fact that nobody seems to trust it very much any more.

reply
hinkley
30 minutes ago
[-]
I wonder if objectively Seattle got hit harder than SF in the last bust cycle. I don’t have a frame of comparison. But if the generational trauma was bigger then so too would the backlash against new bubbles.
reply
mips_avatar
1 hour ago
[-]
I do assume that; I legitimately think it's the most important thing happening in tech in the next decade. There's going to be an incredible amount of traditional software written to make this possible (new databases, frameworks, etc.), and I think people should be able to see the opportunity, but the awful cultures in places like Microsoft are hindering this.
reply
balamatom
18 minutes ago
[-]
>So, no. I don't think "engineers don't try because they think they can't." I think engineers KNOW they CAN and resent being asked to look pretty and do nothing of value.

Spot. Fucking. On.

Thank you.

reply
zzzeek
52 minutes ago
[-]
The list of people who write code, use high-quality LLM agents (not chatbots) like Claude, and report not just having success with the tools but watching the tools change how they think about programming continues to grow. The sudden appearance of LLMs has had a really destabilizing effect on everything, and a vast portion of what LLMs can do and/or are being used for runs from intellectually stifling (using LLMs to write your term papers) to revolting (all kinds of non-artists/writers/musicians using LLMs to suddenly think they are "creators", displacing real artists, writers, and musicians) to utterly degenerate (political/sexual deepfakes of real people, generation of antivax propaganda, etc.). On top of that, the way corporate America is doing the very familiar "blockchain" dance on this, insisting everyone has to do AI all the time everywhere, is a huge problem that hopefully will shake out some in the coming years.

But despite all that, for writing, refactoring, and debugging computer code, LLM agents are still completely game-changing. All of these things are true at the same time. There's no way someone who works with real code all day could spend an honest few weeks with a tool like Claude and come away calling it "hype". Someone might still not prefer it, or it's not for them, but to claim it's "hype"? That's not possible.

reply
senordevnyc
6 minutes ago
[-]
The last few years have revealed the extent to which HN is packed with middle-aged, conservative engineers who are letting their fear and anxiety drive their engineering decisions. It’s sad; I always thought of my fellow engineers as more open-minded.
reply
binary132
39 minutes ago
[-]
Bitcoin is at $93k, so I don’t think it’s entirely accurate to say blockchain is insubstantive or without value.
reply
skybrian
5 minutes ago
[-]
This seems to be almost purely bandwagon value, like preferring Coca-Cola to some other drink. There are other blockchains that are better technically along a bunch of dimensions, but they don't have the mindshare.

Bitcoin is probably unkillable. Even if it were to crash, it won't be hard to round up enough true believers to boost it up again. But it's technically stagnant.

reply
rpcope1
28 minutes ago
[-]
There can be a bunch of crazy people trading each other various lumps of dog feces for increasing sums of cash, that doesn't mean dogshit is particularly valuable or substantive either.
reply
mgaunard
19 minutes ago
[-]
Very little of the trading actually happens on the blockchain; it's only used to move assets between trading venues.

The values of bitcoin are:

- easy access to trading for everyone, without institutional or national barriers

- high leverage to effectively easily borrow a lot of money to trade with

- new derivative products that streamline the process and make speculation easier than ever

The blockchain plays very little part in this. If anything it makes borrowing harder.

reply
airstrike
35 minutes ago
[-]
If you can't point to real use cases at scale, it's hard to argue it has intrinsic value, even though it may have speculative value.
reply
ejoso
25 minutes ago
[-]
Uh… So the argument here is that anticipated future value == meaningful value today?

The whole cryptocurrency world requires evangelical buy-in. But there is no directly created functional value other than a historic record of transactions and hypothetical decentralization. It doesn't directly create value; it's a store of it, again assuming enough people continue to buy into the narrative so that it doesn't dramatically deflate when you need to recover your assets. States and other investors are helping to maintain stability so it works as a value store, but that requires the story to keep propagating.

reply
senordevnyc
5 minutes ago
[-]
You’re wasting your breath. Bitcoin will be at a million in 2050 and you’ll still get downvoted here for suggesting it’s anything other than a stupid bubble that’s about to burst any day now.
reply
throwout4110
1 hour ago
[-]
> There's a dichotomy in the software world between real products (which have customers and use cases and make money by giving people things they need) and hype products (which exist to get investors excited, so they'll fork over more money).

AI is not both of these things? There are no real AI products that have real customers and make money by giving people what they need?

> LLMs are a more substantive technology than blockchain ever was, but like blockchain, their potential has been greatly overstated.

What do you view as the potential that’s been stated?

reply
Fraterkes
1 hour ago
[-]
A way to state this point that you may find less uncharitable is that a lot of current LLM applications are just very thin shells around ChatGPT and the like.

In those cases the actual "new" technology (i.e., not the underlying AI necessarily) is not as substantive and novel (to me at least) as a product whose internals are not just an (existing) LLM.

(And I do want to clarify that, to me personally, this tendency towards 'thin-shell' products is kind of an inherent flaw in the current state of AI. Having a very flexible LLM with broad applications means that you can just put ChatGPT in a lot of stuff and have it more or less work. With the caveat that what you get is rarely a better UX than what you'd get if you'd just prompted an LLM yourself.

When someone isn't using LLMs, in my experience you get more bespoke engineering. The results might not be better than an LLM's, but obviously that bespoke code is much more interesting to me as a fellow programmer.)

reply
pydry
58 minutes ago
[-]
Shells around ChatGPT are fine if they provide value.

Way better than AI jammed into every crevice for no reason.

reply
throwout4110
1 hour ago
[-]
Yes ok then I definitely agree
reply
mingus88
1 hour ago
[-]
Not OP but for starters LLMs != AI

LLMs are not an intelligence, and people who treat them as if they are infallible Oracles of wisdom are responsible for a lot of this fatigue with AI

reply
pixl97
41 minutes ago
[-]
>Not OP but for starters LLMs != AI

Please don't do this and make up your own definitions.

Pretty much anything and everything that uses neural nets is AI. Just because you don't like what the definition has been since the beginning doesn't mean you get to reframe it.

In addition, since humans are not infallible oracles of wisdom, they wouldn't be an intelligence by your definition either.

reply
ponector
16 minutes ago
[-]
Why, then, is there an AI-powered dishwasher but no AI car?
reply
pkasting
1 hour ago
[-]
Ex-Googler here; there are many people, both current and former Googlers, who feel the same way as the composite coworker in the linked post.

I haven't escaped this mindset myself. I'm convinced there are a small number of places where LLMs make truly effective tools (see: generation of "must be plausible, need not be accurate" data, e.g. concept art or crowd animations in movies), a large number of places where LLMs make apparently-effective tools that have negative long-term consequences (see: anything involving learning a new skill, anything where correctness is critical), and a large number of places where LLMs are simply ineffective from the get-go but will increasingly be rammed down consumers' throats.

Accordingly I tend to be overly skeptical of AI proponents and anything touching AI. It would be nice if I was more rational, but I'm not; I want everyone working on AI and making money from AI to crash and burn hard. (See also: cryptocurrency)

reply
CSMastermind
16 minutes ago
[-]
My friends at Google are some of the most negative about the potential of AI to improve software development. I was always surprised by this; I assumed Google would internally be one of the first places to adopt these tools.
reply
hectdev
1 hour ago
[-]
It's the latest tech holy war: Tabs vs Spaces, but more existential. I'm usually anti-hype, and I've been convinced of AI's usefulness over and over when it comes to coding. Whenever I talk about it, I see that I come across as an evangelist. Some people appreciate that; online I get a lot of pushback despite having tangible examples of how it has been useful.
reply
suprjami
1 hour ago
[-]
I don't see it that way. Tabs, spaces, curly brace placement, Vim, Emacs, VSCode, etc are largely aesthetic choices with some marginal unproven cognitive implications.

I find people mostly prefer what they are used to, and if your preference was so superior then how could so many people build fantastic software using the method you don't like?

AI isn't like that. AI is a bunch of people telling me this product can do wonderful things that will change society and replace workers, yet almost every time I use it, it falls far short of that promise. AI is certainly not reliable enough for me to jeopardize the quality of my work by using it heavily.

reply
hectdev
44 minutes ago
[-]
I would say it is like that. No one HAS to use AI. But the shared goal is to get a change into the codebase that achieves a desired outcome. Some will outsource a significant part of that to AI, some won't.

And it's tricky, because I'm trying not to appeal to emotion despite being fascinated with how this tool has enabled me to do things in a short amount of time that would have taken me weeks of grinding, and how it improves my communication with stakeholders. That feels world-changing. Specifically, my world and the day-to-day role I play when it comes to getting things done.

I think it is fine that it fell short of your expectations. It often does for me as well, but when it gets me 80% of the way there in less than a day's work, my mind is blown. It's an imperfect tool and, I'm sorry for saying this, but so are we. Treat its imperfections the same way you would a junior developer's: feedback, reframing, restrictions, and iteration.

reply
moduspol
25 minutes ago
[-]
> But the shared goal is to get a change to the codebase to achieve a desired outcome.

I'd argue that's not true. It's more of a stated goal. The actual goal is to achieve the desired outcome in a way that has manageable, understood side effects, and that can be maintained and built upon over time by all capable team members.

The difference between what business folks see as the "output" of software developers (code) and what (good) software developers actually deliver over time is significant. AI can definitely do the former. The latter is less clear. This is one of the fundamental disconnects in discussions about AI in software development.

reply
hectdev
12 minutes ago
[-]
In my personal use case, I work at a company that has SO MUCH process and documentation for coding standards. I made an AI agent that knows all that and used it to update legacy code to the new standard in a day. Something that would have taken weeks if not more. If your desire is manageable code, make that a requirement.

I'm going to say this next thing as someone with a lot of negative bias about corporations. I was laid off from Twitter when Elon bought the company and at a second company that was hemorrhaging users.

Our job isn't to write code, it's to make the machine do the thing. All the effort spent on clean, manageable code, etc., is purely in the interest of the programmer, but at the end of the day, launching the feature that pulls in money is the point.

reply
Freak_NL
18 minutes ago
[-]
> No one HAS to use AI.

Well… That's no longer true, is it?

My partner (an IT analyst) works for a company owned by a multinational corporation, and she was told during a meeting with her manager that using AI is going to become mandatory next year. That's going to be a thing across the board.

And have you called a large company for any reason lately? Could be your telco, your bank, a public transport company, whatever. You call them because online contact means haggling with an AI chatbot first, only to finally give up and get shunted over to an actual person who can help, and contact forms and e-mail have been killed off. Calling is not quite as bad, but step one nowadays is 'please describe what you're calling for', where some LLM will try to parse that, fail miserably, and then shunt you to an actual person.

AI is already unavoidable.

reply
hectdev
9 minutes ago
[-]
It isn't a universal thing. I have no doubt there is a job out there where that isn't a requirement. I think the issue is that C-level folks are seeing how much more productive someone might be and making it a demand. That, to me, is the wrong approach. If you demonstrate and build interest, the adoption will happen.
reply
dwaltrip
42 minutes ago
[-]
You can vibe-code a throwaway UI for investigating some complex data in less than 30 minutes. The code quality doesn't matter, and it will make your life much easier.

Rinse and repeat for many "one-off" tasks.

It's not going away, you need to learn how to use it. shrugs shoulders

reply
Loughla
50 minutes ago
[-]
>AI is certainly not reliable enough for me to jeopardize the quality of my work by using it heavily.

I mean this in sincerity, and not at all snarky, but - have you considered that you haven't used the tools correctly or effectively? I find that I can get what I need from chatbots (and refuse to call them AI until we have general AI just to be contrary) if I spend a couple of minutes considering constraints and being careful with my prompt language.

When I've come across people in my real life who say they get no value from chatbots, it's because they're asking poorly formed questions, or haven't thought through the problem entirely. Working with chatbots is like working with a very bright lab puppy. They're willing to do whatever you want, but they'll definitely piss on the floor unless you tell them not to.
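The "spend a couple of minutes on constraints" habit above can be sketched in a few lines. This is only an illustration of front-loading structure into a prompt; every name and field here is made up, not any particular tool's API:

```python
# Illustrative sketch of stating constraints up front rather than
# asking a bare question: task, explicit boundaries, output format.
# Nothing here is a real tool's API; it only shows the structure.

def build_prompt(task: str, constraints: list[str], output_format: str) -> str:
    lines = [f"Task: {task}", "Constraints:"]
    lines.extend(f"- {c}" for c in constraints)
    lines.append(f"Output format: {output_format}")
    return "\n".join(lines)

prompt = build_prompt(
    task="Review this function for off-by-one errors",
    constraints=[
        "Comment only on correctness, not style",
        "If unsure about a finding, say so instead of guessing",
    ],
    output_format="numbered list of findings",
)
print(prompt)
```

The point is only that the boundaries are written down before the model sees anything, which is the "tell the puppy not to piss on the floor" step.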

Or am I entirely off base with your experience?

reply
dwoldrich
22 minutes ago
[-]
It would be helpful if you would relate your own bad experiences and how you overcame them. Leading off with "do it better" isn't very instructive. Unfortunately there's no useful training for much of anything in our industry, much less AI.

I prefer to use LLM as a sock puppet to filter out implausible options in my problem space and to help me recall how to do boilerplate things. Like you, I think, I also tend to write multi-paragraph prompts repeating myself and calling back on every aspect to continuously hone in on the true subject I am interested in.

I don't trust LLMs enough to operate on my behalf agentically yet. And LLMs are uncreative and hallucinatory as heck whenever they stray into novel territory, which makes them a dangerous tool.

reply
sulicat
50 minutes ago
[-]
I'm probably one of the people that would say AI (at least LLMs) isn't all its cracked up to be and even I have examples where it has been useful to me.

I think the feeling stems from the exaggeration of the value it provides combined with a large number of internal corporate LLMs being absolute trash.

The overvaluation is seen everywhere, from the stock market to the price of RAM to the cost of energy, as well as in IP-theft issues, etc. AI has taken over, and yet it still feels like just a really good fuzzy search. Like, yeah, I can search something 10x faster than before but might get a bad answer every now and then.

Yeah, it's been useful (so have many other things). No, it's not worth building trillion-dollar data centers for. I would be happier if the spend went towards manufacturing or semiconductor fabs.

reply
jimbokun
59 minutes ago
[-]
Most of the people against “AI” are not against it because they think it doesn’t work.

It’s because they know it works better every day and the people controlling it are gleefully fucking over the rest of the world because they can.

The plainly stated goal is TO ELIMINATE ALL HUMAN EMPLOYEES, with no plan for how those people will feed, clothe, or house themselves.

The reactions the author was getting were the reaction of a horse talking to someone happily working for the glue factory.

reply
IAmBroom
50 minutes ago
[-]
I don't think you're qualified to speak for most of the people against AI.
reply
throwout4110
1 hour ago
[-]
Right, this is what I can't quite understand. A lot of HN folks appear to have been burned by, e.g., horrible corporate or business ideas from non-technical people who don't understand AI; that is completely understandable. What I never understand is the population of coders who don't see any value in coding agents or are aggressively against them, or people who deride LLMs as failing at X (or hallucinating, etc.) and therefore useless, with everything being AI slop, without recognizing that what we can do today is almost unrecognizable from the world of 3 years ago. The progress has moved astoundingly fast, and the sheer amount of capital, competition, and pressure means the train is not slowing down. Predictions of "2025 is the year of coding agents" from a chorus of otherwise unpalatable CEOs were in fact absolutely true…
reply
mjr00
56 minutes ago
[-]
> Predictions of “2025 is the year of coding agents” from a chorus of otherwise unpalatable CEOs were in fact absolutely true…

... but maybe not in the way that these CEOs had hoped.[0]

Part of the AI fatigue is that busy, competent devs are getting swarmed with massive amounts of slop from not-very-good developers. Or product managers getting 5 paragraphs of GenAI bug reports instead of a clear and concise explanation.

I have high hopes for AI and think generative tooling is extremely useful in the right hands. But it is extremely concerning that AI is allowing some of the worst, least competent people to generate an order of magnitude more "content" with little awareness of how bad it is.

[0] https://github.com/ocaml/ocaml/pull/14369

reply
elictronic
46 minutes ago
[-]
AI is in a hype bubble that will crash just like every other bubble. The underlying uses are there but just like Dot Com, Tulips, subprime mortgages, and even Sir Isaac Newton's failings with the South Sea Company the financial side will fall.

This will cause bankruptcies and huge job losses. The argument for and against AI doesn't really matter in the end, because the finances don't make a lick of sense.

reply
throwout4110
34 minutes ago
[-]
Ok, sure, the bubble/non-bubble stuff, fine. But in terms of "things I'd like to be a part of", it's hard to imagine a more transformative technology (not to again turn off the anti-hype crowd). But ok, say it's 1997 and you don't like the valuations you see. As a tech person, aren't you excited by browsers, the internet, the possibilities? You don't want to be a part of that even if it means a bubble pops? I also hear a lot of people argue that the "finances don't make a lick of sense", but I don't think things are that cut and dried, and I don't see this as obvious. I don't think many people really know how things will evolve, or what size a market correction or bubble would be.
reply
zdragnar
17 minutes ago
[-]
What precisely about AI is transformative, compared to the internet? E-mail replaced so much of faxing, phoning and physical mail. Online shopping replaced going to stores and hoping they have what you want, and hoping it is in stock, and hoping it is a good price. It replaced travel agents to a significant degree and reoriented many industries. It was the vehicle that killed CDs and physical media in general.

With AI I can... generate slop. Sometimes that is helpful, but it isn't yet at the point where it's replacing anything for me aside from making google searches take a bit less time on things that I don't need a definitive answer for.

It's popping up in my music streams now and then, and I generally hate it. Mushy-mouthed fake vocals over fake instruments. It pops up online and aside from the occasional meme I hate it there too. It pops up all over blogs and emails and I profoundly hate it there, given that it encourages the actual author to silence themselves and replaces their thoughts with bland drivel.

Every single software product I use begs me to use their AI integration, and instead of "no" I'm given the option of "not now", despite me not needing it, and so I'm constantly being pestered about it by something.

It has, thus far, made nearly everything worse.

reply
aisengard
35 minutes ago
[-]
There is zero guarantee that these tools will continue to be there. Those of us who are skeptical of their value may find them somewhat useful, but we are quite wary of ripping up the workflows we've built for ourselves over a decade (or more) in favor of something that might be 10-20% more useful but could be taken away, have its fees raised, or literally collapse in functionality at any moment, leaving us suddenly crippled. I'll keep the thing I know works and know will always be there (because it's open source, etc.), even if it means I'm slightly less productive over the next X amount of time.
reply
throwout4110
30 minutes ago
[-]
What would a plausible scenario be in which your tools are taken away or “collapse in functionality”? I would say Claude right now has probably produced worse code and wasted more time than if I had coded things myself, but that's because this is like the first few hundred days of this. Open-weight models are worse too, but they will never go away, and they improve steadily as well. I am all for people doing whatever works for them; I just don’t get the negativity or the skepticism when you look at the progress over what has been almost zero time. It’s crappy now in many respects, but it’s like saying “my car is slow” one millisecond after I floor the gas pedal.
reply
takluyver
10 minutes ago
[-]
My understanding is that all the big AI companies are currently offering services at a loss, running the classic Silicon Valley playbook of burning investor cash to get big and hoping to make a profit later. So any service you depend on could crash out of the race, and if one emerges as a victorious monopoly and you rely on it, it can charge you almost whatever it likes.

To my mind, the 'only just started' argument is wearing off. It's software, it moves fast anyway, and all the giants of the tech world have been feverishly throwing money at AI for the last couple of years. I don't buy that we're still just at the beginning of some huge exponential improvement.

reply
ben_w
3 minutes ago
[-]
My understanding is that they make a loss overall due to the spending on training new models, and that the API is profit-making if considered in isolation. That said, this is based on guesstimates derived from the hosting costs of open-weight models, owing to a lack of financial transparency everywhere for the secret-weights models.
reply
watwut
3 minutes ago
[-]
> What would you imagine a plausible scenario would possibly be that your tools would be taken away or “collapse in functionality”?

Simple. The company providing the tool suddenly needs actual earnings. Therefore, it needs to raise prices. It also needs users to spend more tokens, so it will make the tool respond in a way that requires more refinement. After all, the latter is exactly what happened with Google search.

At this point, that is a pretty normal software cycle: attract a crowd by being free or cheap, then lock features behind a paywall, then simultaneously raise prices more and more while making the product worse.

This literally NEEDS to happen, because these companies do not have any other path to profitability. So, it will happen at some point.

reply
bigstrat2003
57 minutes ago
[-]
> What I never understand is the population of coders that don’t see any value in coding agents or are aggressively against them, or people that deride LLMs as failing to be able to do X (or hallucinate etc) and are therefore useless and every thing is AI Slop, without recognizing that what we can do today is almost unrecognizeable from the world of 3 years ago.

I don't recognize that because it isn't true. I try the LLMs every now and then, and they still make the same stupid hallucinations that ChatGPT did on day 1. AI hype proponents love to make claims that the tech has improved a ton, but based on my experience trying to use it those claims are completely baseless.

reply
ben_w
23 minutes ago
[-]
> I try the LLMs every now and then, and they still make the same stupid hallucinations that ChatGPT did on day 1.

One of the tests I sometimes do of LLMs is a geometry puzzle:

  You're on the equator facing south. You move forward 10,000 km along the surface of the Earth. You rotate 90° clockwise. You move another 10,000 km forward along the surface of the Earth. Rotate another 90° clockwise, then move another 10,000 km forward along the surface of the Earth.

  Where are you now, and what direction are you facing?

They all used to get this wrong all the time. Now the best ones sometimes don't. (That said, the only one to succeed just as I write this comment was DeepSeek; the first I saw succeed was one of ChatGPT's models, but that's now back to the usual error they all used to make.)

Anecdotes are of course a bad way to study this kind of thing.

Unfortunately, so are the benchmarks, because the models have quickly saturated most of them, including traditional IQ tests (on the plus side, this has demonstrated that IQ tests are definitely a learnable skill, as LLMs lose 40-50 IQ points when going from public to private IQ tests) and stuff like the maths olympiad.

Right now, AFAICT the only open benchmarks are the METR time horizon metric, the ARC-AGI family of tests, and the "make me an SVG of ${…}" stuff inspired by Simon Willison's pelican on a bike.

reply
hectdev
39 minutes ago
[-]
This fascinates me. Just observing, but: because it hasn't worked for you, everyone else must be lying? (I'm assuming that's what you mean by "baseless".)

How does that bridge get built? I can provide tangible real-life examples, but I've found push-back from that in other online conversations.

reply
WhyOhWhyQ
19 minutes ago
[-]
I also would like to see AI end up dying off except for a few niches, but I find myself using it more and more. It is not a productivity boost in the way I end up using it, interestingly. Actually I think it is actively harming my continued development, though that could just be me getting older, or perhaps massive anxiety from joblessness. Still, I can't help but ask it if everything I do is a good idea. Even in the SO era I would try to find a reference for every little choice I made to determine if it was a good or bad practice.
reply
mips_avatar
22 minutes ago
[-]
The thing that changed my view on LLMs was solo traveling for 6 months after leaving Microsoft. There were a lot of points on the trip where I was in a lot of trouble (severe food sickness, stolen items, missed flights) where I don't know how I would have solved those problems without chatGPT helping.
reply
buildsjets
10 minutes ago
[-]
This is one of the most depressing things I have ever read on Hacker News. You claim to have become so de-actualized as a human being that you cannot fulfill the most basic items of Maslow’s Hierarchy of Needs (food, health, personal security, shelter, transportation) without the aid of an LLM.
reply
miltonlost
13 minutes ago
[-]
Severe food sickness? I know WebMD rightly gets a lot of hate, but this is one thing where it would be good for. Stolen items? Depending on the items and the place, possibly police. Missed flights? Customer service agent at the airport for your airline or call the airline help line.
reply
sirreal14
1 hour ago
[-]
As a Seattle SWE, I'd say most of my coworkers do hate all the time-wasting AI stuff being shoved down our throats. There are a few evangelical AI boosters I work with, but I keep catching mistakes in their code that they didn't use to make: large suites of elegant-looking unit tests where the tests duplicate functionality of the test framework for no reason (I've even seen unit tests that mock the actual function under test), new features that already exist behind saner APIs, code that is a tangled web of spaghetti. These people largely think AI is improving their speed, but then their code isn't making it past code review. I worry about teams with less stringent code review cultures; modifying or improving these systems is going to be a major pain.
reply
psyclobe
52 minutes ago
[-]
> and I've even seen unit tests that mock the actual function under test.

Yup. AI is so fickle it'll do anything to accomplish the task. But AI is just a tool; it's all about what you allow it to do. Can't blame AI, really.

reply
dpark
45 minutes ago
[-]
In fairness, I’ve seen humans make that mistake. We once had a complete outage in the testing of a product, and a couple of tests were still green. Turns out they tested nothing and never had.
reply
doyougnu
23 minutes ago
[-]
I've interfaced with some AI-generated code, and after several examples of finding subtle yet very wrong bugs, I now read code that I suspect comes from AI (or an AI-loving coworker) with much, much more scrutiny than I used to. I've frankly lost trust in any kind of care for quality or due diligence from some coworkers.

I see how the use of AI is useful, but I feel that the practitioners of AI-as-coding-agent are running away from the real work. How can you tell me about the system you say you have created if you don't have the patience to make it or think about it deeply in the first place?

reply
teej
4 minutes ago
[-]
Your coworkers were probably writing subtle bugs before AI too.
reply
jfalcon
1 hour ago
[-]
From the perspective of someone who didn't code JS, I see it like the hype around js/node and whatever module tech was glued to it when it was new. Sum of F's given is still zero.

-206dev

reply
mips_avatar
1 hour ago
[-]
My hot take is that the evangelical people don't really like AI either; they're just scared. I think you have to be outside of big tech to appreciate AI.
reply
elzbardico
3 minutes ago
[-]
If AI replaces software engineers, people outside tech don't have much chance of surviving it either.
reply
assemblyman
1 minute ago
[-]
I am not in Seattle. I do work in AI but have shifted more towards infrastructure.

I feel fatigued by AI. To be more precise, this fatigue includes several factors. The first one is that a lot of people around me get excited by events in the AI world that I find distracting. These might be new FOSS library releases, news announcements from the big players, new models, new papers. As one person, I can only work on 2-3 things at a given interval in time. Ideally I would like to focus and go deep in those things. Often, I need to learn something new and that takes time, energy and focus. This constant Brownian motion of ideas gives a sense of progress and "keeping up" but, for me at least, acts as a constantly tapped brake.

Secondly, there is a sentiment that every problem has an AI solution. Why sit and think, run experiments, try to build a theoretical framework when one can just present the problem to a model. I use LLMs too but it is more satisfying, productive, insightful when one actually thinks hard and understands a topic before using LLMs.

Thirdly, I keep hearing that the "space moves fast" and "one must keep up". The fundamentals actually haven't changed that much in the last 3 years and new developments are easy to pick up. Even if they did, trying to keep up results in very shallow and broad knowledge that one can't actually use. There are a million things going on and I am completely at peace with not knowing most of them.

Lastly, there is pressure to be strategic. To guess where the tech world is going, to predict and plan, to somehow get ahead. I have no interest in that. I am confident many of us will adapt and if I can't, I'll find something else to do.

I am actually impressed with models and use them heavily. The tiresome part now is some of the humans around the technology who engage in the behaviors listed above.

reply
decimalenough
6 minutes ago
[-]
While everybody else is ranting about AI, I'll rant about something else: trip planning apps. There have been literally thousands of attempts at this and AFAICT precisely zero have ever gotten any traction. There are two intractable problems in this space.

1) A third party app simply cannot compete with Google Maps on coverage, accuracy and being up to date. Yes, there are APIs you can use to access this, but they're expensive and limited, which leads us to the second problem:

2) You can't make money off them. Nobody will pay to use your app (because there's so much free competition), and the monetization opportunities are very limited. It's too late in the flow to sell flights, you can't compete with Booking etc for hotel search, and big ticket attractions don't pay commissions for referrals. That leaves you with referrals for tours, but people who pay for tours are not the ones trying to DIY their trip planning in the first place.

reply
shepardrtc
1 hour ago
[-]
Ok so a few thoughts as a former Seattleite:

1. You were a therapy session for her. Her negativity was about the layoffs.

2. FAANG companies dramatically overhired for years and are using AI as an excuse for layoffs.

3. The AI scene in Seattle is pretty good, but as with everywhere else, it was/is a victim of the AI hype. I see estimates of the hype being dead in a year. AI won't be dead, but throwing money at whatever Uber-for-pets-AI-ly idea pops up won't happen.

4. I don't think people hate AI, they hate the hype.

Anyways, your app actually does sound interesting so I signed up for it.

reply
hexator
19 minutes ago
[-]
Some people really do hate AI, it's not entirely about the layoffs. This is a well insulated bubble but you can find tons of anti-AI forums online.
reply
mips_avatar
58 minutes ago
[-]
I think these companies would benefit from honesty. If they're right and their new AI capabilities really are powerful, then poisoning their workforce against AI is the worst thing they could do right now. A generous severance approach and compassionate layoffs would go a long way.
reply
groos
1 hour ago
[-]
It's not just that AI is being pushed onto employees by the tech giants - this is true - but that the hype of AI as a life-changing tech is not holding up, and people within the industry can easily see this. The only life-changing thing it's doing is due to a self-fulfilling prophecy of eliminating jobs in the tech industry and outside it, driven by CEOs who have bet too much on AI. Everyone currently agrees that there is no return on all the money spent on AI. Some players may survive and do well in the future, but for the majority there is only the prospect of pain, and this is what all the negativity is about.
reply
pnathan
27 minutes ago
[-]
As a layoff justification and a hurry-up tool, it is pretty loathsome. People rely on their jobs for their housing, food, etc.
reply
somekyle2
1 hour ago
[-]
Anecdotally, lots of people in SF tech hate AI too. _Most_ people out of tech do. But enough of the people in tech have their future tied to AI that there are a lot of vocal boosters.
reply
tptacek
1 hour ago
[-]
It is not at all my experience working in local government (that is, in close contact with everybody else paying attention to local government) that non-tech people hate AI. It seems rather the opposite.
reply
wk_end
1 hour ago
[-]
Managers everywhere love the idea of AI because it means they can replace expensive and inefficient human workers with cheap automation.

Among actual people (i.e. not managers) there seems to be a bit of a generation gap - my younger friends (Gen Z) are almost disturbingly enthusiastic about entrusting their every thought and action to ChatGPT; my older friends (young millennials and up) find it odious.

reply
tptacek
1 hour ago
[-]
The median age of people working local politics is probably 55, and I've met more people (non-family, that is) over 70 doing this than in anything else, and all of them are (a) using AI for stuff and (b) psyched to see any new application of AI being put to use (for instance, a year or so ago, I used 4o to classify every minute spent in our village meetings according to broad subjects).

Or, drive through Worth and Bridgeview in IL, where all the middle eastern people in Chicago live, and notice all the AI billboards. Not billboards for AI, just, billboards obviously made with GenAI.

I think it's just not true that non-tech people are especially opposed to AI.

reply
sleepybrett
47 minutes ago
[-]
Managers should realize that the thing AI might be best at is replacing them. Most of my managers don't understand the people they are managing and don't understand what those people are actually building. Their job is to take a question from management that their reports can answer, format the answer for their boss, and send the email. Their job is to be the leader in a meeting and make sure it stays on track, not to understand the content. AI can do all that without a problem.
reply
somekyle2
1 hour ago
[-]
I don't doubt that many love it. I'm just going based on SF non-tech people I know, who largely see it as the thing vaguely mentioned on every billboard and bus stop, the chatbot every tech company seems to be trying to wedge into every app, and the thing that makes misleading content on social media and enables cheating on school projects. But, sometimes it is good at summarizing videos and such. I probably have a biased sample of people who don't really try to make productive use of AI.
reply
tptacek
1 hour ago
[-]
I can imagine reasons why non-tech people in SF would hate all tech. I work in tech and living in the middle of that was a big part of why I was in such a hurry to get out of there.
reply
pesus
49 minutes ago
[-]
Frankly, tech deserves its bad reputation in SF (and worldwide, really).

One look at the dystopian billboards bragging about trying to replace humans with AI should make any sane human angry at what tech has done. Or the rising rents due to an influx of people working on mostly useless AI startups, 90% of which won't be around in 5 years. Or even how poorly many in tech behave in public and how poorly they treat service workers. That's just the tip of the iceberg, and just in SF alone.

I say all this as someone living in SF and working in tech. As a whole, we've brought the hate upon ourselves, and we deserve it.

reply
treis
20 minutes ago
[-]
There's a long list of things that have "replaced" humans all the way back to the ox drawn plow. It's not sane to be angry at any of those steps along the way. GenAI will likely not be any different.
reply
pesus
9 minutes ago
[-]
It is absolutely sane to be angry at people's livelihoods being destroyed and most aspects of life being worsened just so a handful of multi-billionaires that already control society can become even richer.
reply
tptacek
41 minutes ago
[-]
I don't agree with any of this. I just think it's aggravating to live in a company town.
reply
neutronicus
51 minutes ago
[-]
Anyone involved in government procurement loves AI, irrespective of what it even is, for the simple fact that they get to pointedly ask every single tech vendor for evidence that they have "leveraged efficiency gains from AI" in the form of a lower bid.

At least, that's my wife's experience working on a contract with a state government at a big tech vendor.

reply
majormajor
1 hour ago
[-]
Non-technical people that I know have rapidly embraced it as "better Google, where I don't have to do as much work to answer questions." This is in a non-work context, so I don't know how much those people are using it in their day jobs, writing emails or whatever. A lot of these people are tech-using boomers: they already adjusted to Google and the internet, they don't know how it works, and they're just like "oh, the internet got even better."

There's maybe a slow trend towards "that's not true, you should know better than to trust AI for that sort of question" in discussions when someone says something like "I asked AI how [xyz was done]" but it's definitely not enough yet to keep anyone from going to it as their first option for answering a question.

reply
kg
1 hour ago
[-]
EDIT: Removed part of my post that pissed people off for some reason. shrug

It makes a lot of sense that someone casually coming in to use chatgpt for 30 minutes a week doesn't have any reason to think more deeply about what using that tool 'means' or where it came from. Honestly, they shouldn't have to think about it.

reply
tptacek
1 hour ago
[-]
The claim I was responding to implied that non-techies distinctively hate AI. You're a techie.
reply
themafia
7 minutes ago
[-]
> enough of the people in tech have their future tied to AI that there are lot of vocal boosters

That's the presumption. There's no data on whether this is actually true or not. Most rational examinations show that it most likely isn't. The progress of the technology is simply too slow and no exponential growth is on the horizon.

reply
IAmBroom
46 minutes ago
[-]
Most of my FB contacts are not in tech. AI is overwhelmingly viewed as a negative by them. To be clearer: I'm counting anyone who posts AI-generated pictures on FB as implicitly pro-AI; if we neglect this portion, the only non-negative posts about AI would be highly qualified "in some special cases it is useful" statements.
reply
mips_avatar
1 hour ago
[-]
That’s fair. The bad behavior in the name of AI definitely isn’t limited to Seattle. I think the difference in SF is that there are people doing legitimately useful stuff with AI
reply
_keats
1 hour ago
[-]
I think this comment (and TFA) is really just painting with too broad a brush. Of course there are going to be people in tech hubs who are very pro-AI, either because they work with it directly and have had legitimately positive experiences, or because they begrudgingly see the writing on the wall for what it means for software professionals.

I can assure you, living in Seattle I still encounter AI boosters just as often as I encounter AI haters/skeptics.

reply
Forgeties79
1 hour ago
[-]
What’s so striking to me is these “vocal boosters” almost preach like televangelists the moment the subject comes up. It’s very crypto-esque (not a hot take at all I know). I’m just tired of watching these people shout down folks asking legitimate questions pertaining to matters like health and safety.
reply
lambchoppers
1 hour ago
[-]
Health and safety seems irrelevant to me. I complain about cars; I point out "obscure" facts, like that they are a major cause of lung-related health problems for innocent bystanders; I don't actually ride in cars on any regular basis, and in fact use them less than I use AI. There were people at the car's introduction who made all the points I would make today.

The world is not at all about fairness of benefits and impacts to all people; it is about a populist mass and what amuses them and makes their lives convenient, hopefully without attending the relevant funerals themselves.

reply
Forgeties79
32 minutes ago
[-]
> health and safety seems irrelevant to me

Honestly I don’t really know what to say to that, other than it seems rather relevant to me. I don’t really know what to elaborate on given we disagree on such a fundamental level.

reply
vunderba
1 hour ago
[-]
From the article:

> I wanted her take on Wanderfugl , the AI-powered map I've been building full-time.

I can at least give you one piece of advice. Before you decide on a company or product name, take the time to speak it out loud so you can get a sense of how it sounds.

reply
mips_avatar
1 hour ago
[-]
I grew up in Norway, and there's this idea in Europe of someone who breaks from corporate culture and hikes and camps a lot (called a Wandervogel in German). I also liked how, when pronounced in Norwegian or Swedish, it sounds like "wander full". I like the idea of someone who is full of wander.
reply
epolanski
1 hour ago
[-]
Also, do it assuming different linguistic backgrounds. It could sound dramatically different to people who speak English as a second language, who are going to be a whole lot of your users, even if the application is in English.
reply
Ekaros
43 minutes ago
[-]
If there is a g in there I will pronounce a g there. I have some standards and that is one. Pronouncing every single letter.
reply
badc0ffee
2 minutes ago
[-]
That's a gnarly standard you have there.
reply
basscomm
21 minutes ago
[-]
> Pronouncing every single letter.

Now I want to know how you pronounce words like: through, bivouac, and queue.

reply
mips_avatar
1 hour ago
[-]
It's pronounced wanderfull in Norwegian
reply
epolanski
1 hour ago
[-]
And how many of your users are going to have Nordic backgrounds?

I personally thought it was wander _fughel_ or something.

Let alone how difficult it is to remember how to spell it and look it up on Google.

reply
efskap
1 hour ago
[-]
Maybe that's why they didn't go with the English cognate i.e. Wanderfowl, since being foul isn't great branding
reply
isomorphic
1 hour ago
[-]
What? You don't want travel tips from an itinerant swinger? Or for itinerant swingers?
reply
zmmmmm
35 minutes ago
[-]
Since this is a place with a high density of people with the agency to influence the outcome, I think it's important for people here to acknowledge that much of what the negative people think is probably 100% true.

There will absolutely be some cases where AI is used well. But probably the larger fraction will be cases where AI does not give a better service, experience, or tool; it will be used to give a cheaper but shittier one. This will be a big win for the company or service implementing it, but it will suck for literally everybody else involved.

I really believe there's huge value in implementing AI pervasively. However it's going to be really hard work and probably take 5 years to do it well. We need to take an engineering and human centred approach and do it steadily and incrementally over time. The current semi-religious fervour about implementing it rapidly and recklessly is going to be very harmful in the longer term.

reply
JumpCrisscross
1 hour ago
[-]
> AI-powered map

> none of it had anything to do with what I built. She talked about Copilot 365. And Microsoft AI. And every miserable AI tool she's forced to use at work. My product barely featured. Her reaction wasn't about me at all. It was about her entire environment.

She was given two context clues. AI. And maps. Maps work, which means all the information in an "AI-powered map" descriptor rests on the adjective.

reply
Freak_NL
4 minutes ago
[-]
The product website isn't convincing either. It's only in private beta, and the first example shows 'A scenic walking tour of Venice' as the desired trip. I'll readily believe LLMs will gladly give you some sort of itinerary for walking in Venice, including all the highlights people write and post about a lot on social media to show how great their life is. But if you asked anyone knowledgeable about travel in that region, the counter-questions would be: 'Why Venice specifically? I thought you hated crowds — have you considered less crowded alternatives where you will be appreciated more as a tourist? Have you actually been to Italy at all?'

LLMs are always going to give you the most plausible thing for your query, and will likely just rehash the same destinations from hundreds of listicles and status signalling social media posts.

She probably understood this from the minimal description given.

reply
paxys
34 minutes ago
[-]
All big corporate employees hate AI because it is incessantly pushed on them by clueless leadership and mostly makes their job harder. Seattle just happens to have a much larger percent of big tech employees than most other cities (>50% work for Microsoft or Amazon alone). In places like SF this gloom is balanced by the wide eyed optimism of employees of OpenAI, Anthropic, Nvidia, Google etc. and the thousands of startups piggybacking off of them hoping to make it big.
reply
mips_avatar
31 minutes ago
[-]
That's probably the difference
reply
vessenes
1 hour ago
[-]
Thanks for the post - it's work to write and synthesize, and I always appreciate it!

My first reaction was "replace 'AI' with the word 'Cloud'" ca 2012 at MS; what's novel here?

With that in mind, I'm not sure there's anything novel about how your friend is feeling, the organizational dynamics, or in fact how large corporations go after business opportunities; on those terms, I think your friend's feelings are a little boring, or at least don't give us any new market data.

In MS in that era, there was a massive gold rush inside the org to Cloud-ify everything and move to Azure - people who did well at that prospered, people who did not, ... often did not. This sort of internal marketplace is endemic, and probably a good thing at large tech companies - from the senior leadership side, seeing how employees vote with their feet is valuable - as is, often, the directional leadership you get from a Satya who has MUCH more information than someone on the ground in any mid-level role.

While I'm sure there were many naysayers about the Cloud in 2012, they were wrong, full stop. Azure is immensely valuable. It was right to dig in on it and compete with AWS.

I personally think Satya's got a really interesting hyper scaling strategy right now -- build out national-security-friendly datacenters all over the world -- and I think that's going to pay -- but I could be wrong, and his strategy might be much more sophisticated and diverse than that; either way, I'm pretty sure Seattleites who hate how AI has disrupted their orgs and changed power politics and winners and losers in-house will have to roll with the program over the next five years and figure out where they stand and what they want to work on.

reply
hexator
17 minutes ago
[-]
Moving to the Cloud proved to be a pretty nice moneymaker far faster and more concretely than AI has been for these companies. It's a fair comparison regarding corporate pushes but not anything more than that.
reply
mips_avatar
38 minutes ago
[-]
It does feel like without a compelling AI product Microsoft isn't super differentiated. Maybe Satya is right that scale is a differentiation, but I don't think people are as trapped in an AI ecosystem as they were in Azure.
reply
whoamiopa
30 minutes ago
[-]
Very new ex-MSFT here. I couldn't relate more to your friend. That's exactly what happened. I left Microsoft about 5 weeks ago and it's been really hard to detox from that culture.

AI pushed down everywhere. Sometimes shitty AI that had to be proven out at all costs because it was supposed to live up to the hype.

I was in one of those AI orgs, and even there several teams felt the pressure from SLT and a culture drift toward a dysfunctional environment.

Such pressure to use AI at all costs has been, as other folks from Google mentioned, a secret ingredient for a bitter burnout. I'm in therapy and on medication now to recover from it.

reply
themafia
8 minutes ago
[-]
Instead of admitting you built the wrong thing you denigrate a friend and someone whom you admire. Instead of reconsidering the value of AI you immediately double down.

This is a product of hurt feelings and not solid logic.

reply
alphazard
18 minutes ago
[-]
The 3 observations at the end seem spot on, but I don't know that AI is the cause. It's just where we are in the business cycle, and AI is the thing going on this time around.

Right now, late in the business cycle, "tech" companies are dominated by non-technical people, who don't know how to write software, and aren't even capable of thinking through a real problem well enough to design software to solve it. This happens because people make up imposter roles like scrum master, and product manager, and then convert their friends into these roles to get them jobs, and build up their own political faction at a company. The salaries and opinions of these roles directly crowd out those of real talent.

Take a minute to cut through the bullshit about what each role is supposed to contribute at each part of the development cycle, and focus on the amount of decision-making influence that each person has over engineering resources. It's not weighted towards the innovative or creative people; it's probably inversely weighted. That's all you need to know; don't expect good products until that's fixed.

I'll know things have come around full circle when startups are recruiting with: huge management ratios (10+) or flat orgs, remote work or private offices, no product org, everyone in eng can program, everyone in sales can sell, etc. as selling points.

reply
thorum
56 minutes ago
[-]
People hate what the corporations want AI to be and people hate when AI is used the way corporations seem to think it should be used, because the executives at these companies have no taste and no vision for the future of being human. And that is what people think of when they hear “AI”.

I still think there’s a third path, one that makes people’s lives better with thoughtful, respectful, and human-first use of AI. But for some reason there aren’t many people working on that.

reply
nullbound
1 hour ago
[-]
'If you could classify your project as "AI," you were safe and prestigious. If you couldn't, you were nobody. Overnight, most engineers got rebranded as "not AI talent."'

It hits weirdly close to home. Our leadership did not technically mandate use, but it 'strongly encourages' it. I haven't even had my review yet, but I know that once we get to the goals part, use of AI tools will be an actual metric (which, from where I sit, somewhere between skeptic and evangelist, is dumb).

But the 'AI talent' part fits. For mundane stuff like data models, I need full committee approval from people who don't get it anyway (and whose entire contribution is 'what other companies are doing').

reply
beloch
1 hour ago
[-]
The full quote from that section is worth repeating here.

---------

"If you could classify your project as "AI," you were safe and prestigious. If you couldn't, you were nobody. Overnight, most engineers got rebranded as "not AI talent." And then came the final insult: everyone was forced to use Microsoft's AI tools whether they worked or not.

Copilot for Word. Copilot for PowerPoint. Copilot for email. Copilot for code. Worse than the tools they replaced. Worse than competitors' tools. Sometimes worse than doing the work manually.

But you weren't allowed to fix them—that was the AI org's turf. You were supposed to use them, fail to see productivity gains, and keep quiet.

Meanwhile, AI teams became a protected class. Everyone else saw comp stagnate, stock refreshers evaporate, and performance reviews tank. And if your team failed to meet expectations? Clearly you weren't "embracing AI." "

------------

On the one hand, if you were going to bet big on AI, there are aspects of this approach that make sense. e.g. Force everyone to use the company's no-good AI tools so that they become good. However, not permitting employees outside of the "AI org" to fix things neatly nixes the gains you might see while incurring the full cost.

It sounds like MS's management, the same as many other tech corp's, has become caught up in a conceptual bubble of "AI as panacea". If that bubble doesn't pop soon, MS's products could wind up in a very bad place. There are some very real threats to some of MS's core incumbencies right now (e.g. from Valve).

reply
kg
1 hour ago
[-]
I know of at least one bigco that will no longer hire anyone, period, who doesn't have at least 6 months of experience using genai to code and isn't enthusiastic about genai. No exceptions. I assume this is probably true of other companies too.

I think it makes some amount of sense if you've decided you want to be "an AI company", but it also makes me wary. Apocryphally Google for a long period of time struggled to hire some people because they weren't an 'ideal culture fit'. i.e. you're trying to hire someone to fix Linux kernel bugs you hit in production, but they don't know enough about Java or Python to pass the interview gauntlet...

reply
empressplay
1 hour ago
[-]
Like any tool, the longer you use it the better you learn where you can extract value from it and where you can't, where you can leverage it and where you shouldn't. Because your behaviour is linked to what you get out of the LLM, this can be quite individual in nature, and you have to learn to work with it through trial and error. But in the end engineers do appear to become more productive 'pairing' with an LLM, so it's no surprise companies are favouring LLM-savvy engineers.
reply
bigstrat2003
47 minutes ago
[-]
> But in the end engineers do appear to become more productive 'pairing' with an LLM

Quite the opposite: LLMs reduce productivity, they don't increase it. They merely give the illusion of productivity because you can generate code real fast, but that isn't actually useful when you spend time fixing all the mistakes it made. It is absolutely insane that companies are stupid enough to require people use something which cripples them.

reply
sleepybrett
43 minutes ago
[-]
So far, for me, it's just an annoying tool that gets worse outcomes potentially faster than just doing it by hand.

It doesn't matter how much I use it. It's still just an annoying tool that makes mistakes which you try to correct by arguing with it but then eventually just fix it yourself. At best it can get you 80% there.

reply
rr808
21 minutes ago
[-]
There has always been a lot of Microsoft hate, but now it's a whole new level. Windows really sucks now; my new laptop is all Linux for the first time ever. I don't see why this company is still so valuable. Most people only use a browser and some iOS apps now, so there's no need for Windows or Microsoft (and of course Azure is never anyone's first choice). Steam makes the gamers happy to leave too.
reply
w4yai
8 minutes ago
[-]
Gaming.
reply
palmotea
1 hour ago
[-]
> But then I realized this was bigger than one conversation. Every time I shared Wanderfugl with a Seattle engineer, I got the same reflexive, critical, negative response. This wasn't true in Bali, Tokyo, Paris, or San Francisco—people were curious, engaged, wanted to understand what I was building. But in Seattle? Instant hostility the moment they heard "AI."

So what's different between Seattle and San Francisco? Does Seattle have more employee-workers and San Francisco has more people hustling for their startup?

I assume Bali (being a vacation destination) is full of people who are wealthy enough to feel they're insulated from whatever will happen.

reply
Klonoar
38 minutes ago
[-]
I live in Seattle now, and have lived in San Francisco as well.

Seattle has more “normal” people and the overall rhetoric about how life “should be” is in many ways resistant to tech. There’s a lot to like about the city, but it absolutely does not have a hustle culture. I’ve honestly found it depressing coming from the East Coast.

Tangent aside, my point is that Seattle has far more of a comparison ground of “you all are building shit that doesn’t make the world better, it just devalues the human”. I think LLMs have (some) strong use cases, but it is impossible to argue that some of the societal downsides we see are ripe for hatred - and Seattle will latch on to that in a heartbeat.

reply
wrs
57 minutes ago
[-]
Seattle has always been a second-mover when it comes to hype and reality distortion. There is a lot more echo chamber fervor (and, more importantly, lots of available FOMO money to burn) in SF around whatever the latest hotness is.
reply
mips_avatar
55 minutes ago
[-]
My SF friends think they have a shot at working at a company whose AI products are good (Cursor, Anthropic, etc.), so that removes a lot of the hopelessness.

Working for a month out of Bali was wonderful; it's mostly Australians and Dutch people working remotely. Those who ran their own businesses were especially encouraging (though maybe that's just because entrepreneurs are more supportive of other entrepreneurs).

reply
lukev
54 minutes ago
[-]
Interesting that this talks about people in tech who hate AI; it's true, tech seems actually fairly divided with respect to AI sentiment.

You know who's NOT divided? Everyone outside the tech/management world. Antipathy towards AI is extremely widespread.

reply
IAmBroom
42 minutes ago
[-]
And yet there are multiple posts ITT (obviously from tech-oriented people) proclaiming that large swaths of the non-tech world love AI.

An opinion I've personally never encountered in the wild.

reply
not_the_fda
1 hour ago
[-]
I don't think the phenomenon is limited to Seattle.
reply
jofla_net
1 hour ago
[-]
It's not. I know some ex-Bay Area devs who are of the same mind, and I'm not too far off.

I think it's definitely stronger at MS than most places, as my friend on the inside tells me.

There are a lot of elements to it: profits at all costs, the greater economy, FOMO, and a resentment of engineers and technical people who have long been practicing what execs, I can only guess, see as alchemy. The execs have decided they are now done with that and that everyone must use the new sauce, because reasons. Sadly, until things like logon buttons disappear and customers get pissed, it won't self-correct.

I just wish we could present the best version of ourselves and, as long as deadlines are met, trust that it'll all work out, but some have opted for scorched earth. I suppose it's a natural reaction to always want to be on the cutting edge, even before the cake has left the oven.

reply
mgaunard
16 minutes ago
[-]
The only clear applications for AI in software engineering are throwaway code, which interestingly enough isn't used in software engineering at all, and researching how to do something, for which it's not as reliable as reading the docs.

They should focus more on data engineering/science and other similar fields, which lean much more on those use cases, but since there are often no tests there, that's a bit too risky.

reply
medhir
23 minutes ago
[-]
wow — this hit me hard.

I live in Seattle, and got laid off from Microsoft as a PM in Jan of this year.

Tried in early 2024 to demonstrate how we could leverage smaller models (such as Mixtral) to improve documentation and tailor code samples for our auth libraries.

The usual “fiefdom” politics took over and the project never gained steam. I do feel like I was put in a certain “non-AI” category and my career stalled, even though I took the time to build AI-integrated prototypes and present them to leadership.

It’s hard to put on a smile and go through interviews right now. It feels like the hard-earned skills we bring to the table are being so hastily devalued, and for what exactly?

reply
captainkrtek
6 minutes ago
[-]
I've lived in Seattle my whole life, and have worked in tech for 12+ years now as a SWE.

I think the SEA and SF tech scenes are hard to differentiate perfectly in an HN comment. However, I think any "Seattle hates AI" sentiment has more to do with the incessant pushing of AI into all the tech spaces.

It's being claimed as the next major evolution of computing, while also being cited as reasons for layoffs. Sounds like a positive for some (rich people) and a negative for many other people.

It's being forced into new features of existing products, while adoption of said features is low. This feels like cult-like behavior where you must be in favor of AI in your products, or else you're considered a luddite.

I think the confusing thing to me is that things which are successful don't typically need to be touted so aggressively. I'm on the younger side and generally positive to developments in tech, but the spending and the CEO group-think around "AI all the things" doesn't sit well as being aligned with a naturally successful development. Also, maybe I'm just burned out on ads in podcasts for "is your workforce using Agentic AI to optimize ..."

reply
wavemode
18 minutes ago
[-]
My previous software job was for a Seattle-based team within Amazon's customer support org.

I consider it divine intervention that I departed shortly before LLMs got big. I can't imagine the unholy machinations my former team has been tasked with working on since I left.

reply
adverbly
5 minutes ago
[-]
Some massive bait in this article. Like come on author - do you seriously have these thoughts and beliefs?

> It felt like the culture wanted change.
>
> That world is gone.

Ummm source?

> This belief system—that AI is useless and that you're not good enough to work on it anyway

I actually don't know anyone with this belief system. I'm pretty slow on picking up a lot of AI tooling, but I was slow to pick up JS frameworks as well.

It's just smart to not immediately jump on a bandwagon when things are changing so fast because there is a good chance you're backing the wrong horse.

And by the way, you sound ridiculous when you call me a dinosaur just because I haven't started using a tool that didn't even exist 6 months ago. FOMO sales tactics don't work on everyone, sorry to break it to you.

reply
1vuio0pswjnm7
11 minutes ago
[-]
"But in San Francisco, people still believe they can change the world-so sometimes they actually do."

? For the better, or for the worse ?

reply
0_____0
1 hour ago
[-]
My 2¢... LLMs are kind of amazing for structured text output like code. I have a completely different experience using LLMs for assistance writing code (as a relative novice) than I do in literally every other avenue of life.

Electrical engineering? Garbage.

Construction projects? Useless.

But code is code everywhere, and the immense amount of training data available in the form of working code, tutorials, and design and style guides means that the output for software development doesn't really resemble what anybody working in any other field sees. Even adjacent technical fields.

reply
knollimar
45 minutes ago
[-]
I'm experimenting with Gemini 3 and will try Opus 4.5 soon, but I've seen huge jumps doing EE for construction over the last batch of models.

I'm working on a harness, but I think it can do some basic Revit layouts with coaxing (which, with a good harness, should be really useful!).

Let me know what you've experienced. There aren't many construction EEs on HN.

reply
par
1 hour ago
[-]
I think reading the room is required here. You and your friend can both be right at the same time. You want to build an AI-enabled app, and indeed there's plenty of opportunity for it, I'm sure. And your friend can hate what it's done to their job stability and the industry. Also, totally unrelated, but what's the meaning or etymology behind the app name Wanderfugl? I initially read it as Wanderfungl.
reply
IAmBroom
41 minutes ago
[-]
I "spoke" it to myself while reading, and instantly heard "Wonderfuckle".
reply
mips_avatar
52 minutes ago
[-]
It's "wandering bird" in Norwegian.
reply
qoez
58 minutes ago
[-]
It's probably good if some portion of the engineering culture is irrationally against AI and refuses to adopt it, Amish-style. There's probably still a ton of good work that can only be done when every aspect of a product is given focused human attention, some of which might out-compete AI-aided work.
reply
lisp2240
40 minutes ago
[-]
I’m all for neurodivergent acceptance but it has caused monumentally obnoxious people like this to assume everyone else is the problem. A little self awareness would solve a lot of problems.
reply
ragnoroct
25 minutes ago
[-]
HN guidelines ask commenters to avoid name-calling. You can critique the article without slurs.
reply
decimalenough
15 minutes ago
[-]
The name "Wanderfugl" is wanderfully fugly.

Oddly, the screenshots in the article show the name as "Wanderfull".

reply
1zael
52 minutes ago
[-]
I don't think the root cause here is AI. It's the repeated pattern of resistance to massive technological change by system-level incentives. This story has happened again and again throughout recent history.

I expect it to settle out in a few years where:

1. Fiduciary duties to shareholders will push companies to stop chasing AI hype and instead figure out whether it's driving real top-line value for their business or not.

2. Mid- to senior-career engineers will have no choice but to level up their AI skills to stay relevant in the modern workforce.

reply
side_up_down
1 hour ago
[-]
There's a great non-AI point in this article - Seattle has great engineers. In pursuing startups, Seattle engineers are relatively unambitious compared to the Bay Area. By that I mean there's less "shooting for unicorns" and a comparatively more reserved startup culture and environment.

I'm not sure why. I don't think it's access to capital, but I'd love to hear thoughts.

reply
sleepybrett
34 minutes ago
[-]
My pet theory is that most of the investor class in Seattle is ex-Microsoft and ex-Amazon. Neither Microsoft nor Amazon is really a big splashy unicorn. Amazon's greatest innovation (AWS) isn't even its original line of business and is now 'boring'. No doubt they've innovated all over their business in both little and big ways, but not splashy ways; hell, every time Amazon tries to splash, they seem to fall on their ass more often than not (look at their various cancelled hardware lines, their game studios, etc.). Alexa still chugs on, but she's not getting appreciably better to the end user, even over the last 10 years.

Microsoft is the same: a generally very practical company just trying to do practical company stuff.

All the guys who made their bones, vested, and rested, and now want to turn some of that windfall into investments, likely don't have the kind of risk tolerance it takes to fund a potential unicorn. All smart people, I'm sure; smart enough to negotiate big windfalls from MS/AMZN, but far less risk tolerant than a guy in SF who made his investment nest egg building some risky unicorn.

reply
ryanwhitney
1 hour ago
[-]
Our (on-the-way-out) mayor likes it!

"I said, Imagine how cool would this be if we had like, a 10-foot wall. It’s interactive and it’s historical. And you could talk to Martin Luther King, and you could say, ‘Well, Dr. Martin Luther King, I’ve always wanted to meet you. What was your day like today? What did you have for breakfast?’ And he comes back and he talks to you right now."

reply
neutronicus
12 minutes ago
[-]
It's satisfying to hear that Microsoft engineers hate Microsoft's AI offerings as much as I do.

Visual Studio is great. IntelliSense is great. Nothing open-source works on our giant legacy C++ codebase. IntelliSense does.

Claude is great. Claude can't deal with millions of lines of C++.

You know what would be great? If Microsoft gave Claude the ability to semantic search the same way that I can with Ctrl-, in Visual Studio. You know what would be even better? If it could also set breakpoints and inspect stuff in the Debugger.

You know what Microsoft has done? Added a setting to Visual Studio where I can replace the IntelliSense auto-complete UI, that provides real information determined from semantic analysis of the codebase and allows me to cycle through a menu of possibilities, with an auto-complete UI that gives me a single suggestion of complete bullshit.

Can't you put the AI people and the Visual Studio people in a fucking room together? Figure out how LLMs can augment your already-really-good-before-AI product? How to leverage your existing products to let Claude do stuff that Claude Code can't do?

reply
ajkjk
54 minutes ago
[-]
> like building an AI product made me part of the problem.

It's not about their careers. It's about the injustice of the whole situation. Can you possibly perceive the injustice? That the thing they're pissed about is the injustice? You're part of the problem because you can't.

That's why it's not about whether the tools are good or bad. Most of them suck, also, but occasionally they don't--but that's not the point. The point is the injustice of having them shoved in your face; of having everything that could be doing good work pivot to AI instead; of everyone shamelessly bandwagoning it and ignoring everything else; etc.

reply
basscomm
4 minutes ago
[-]
> It's not about their careers.

That's the thing, though, it is about their careers.

It's not just that people who spend years to decades learning their craft are annoyed at someone who put a prompt into a chatbot that spat out an app that mostly works, without understanding any of the code they 'wrote'.

It's that the executives are positively giddy at the prospect of getting rid of some number of their employees and having the rest use AI bots to pick up the slack. Humans need things like a desk and dental insurance, and they fall unconscious for several hours every night. AI agents don't have to take lunch breaks or attend funerals or anything.

Most employees that have figured this out resent AI getting shoved into every facet of their jobs because they know exactly what the end goal is: that lots of jobs are going to be going away and nothing is going to replace them. And then what?

reply
ajkjk
43 minutes ago
[-]
I feel like this is a textbook example of how people talk past each other. There are people in this world who operate under personal utility maximization, and they think everyone else does also. Then there are people who are maximizing for justice: trying to do the most meaningful work themselves while being upset about injustices. Call it scrupulosity, maybe. Executives doing stupid pointless things to curry favor is massively unjust, so it's infuriating.

If you are a utilitarian person and you try to parse a scrupulous person according to your utilitarianism of course their actions and opinions will make no sense to you. They are not maximizing for utility, whatsoever, in any sense. They are maximizing for justice. And when injustices are perpetrated by people who are unaccountable, it creates anger and complaining. It's the most you can do. The goal is to get other people to also be mad and perhaps organize enough to do something about it. When you ignore them, when you fail to parse anything they say as about justice, then yes, you are part of the problem.

reply
IAmBroom
40 minutes ago
[-]
> like [being involved in creation of the problem] made me a part of the problem.

Yeah, that's weird. Why would anyone think that? /s

reply
tasspeed
1 hour ago
[-]
textbook way to NOT roll out AI for your org. AI has genuine benefits for white collar workers, but they are not trained for the use-cases that would actually benefit them, nor are they trained in what the tech is actually good at. they are being punished for using the tools poorly (with no guidance on how to use them well), and when they use the tools well, they fear being laid off once an SOP for their AI workflows is written.
reply
stego-tech
1 hour ago
[-]
This isn’t just a Seattle thing, but I do think the outsized presence of specific employers there contributes to an outsized negativity around AI.

Look, good engineers just want to do good work. We want to use good tools to do good work, and I was an early proponent of using these tools in ways to help the business function better at PriorCo. But because I was on the wrong team (On-Prem), and because I didn’t use their chatbots constantly (I was already pitching agents before they were a defined thing, I just suck at vocabulary), I was ripe for being thrown out. That built a serious resentment towards the tooling for the actions of shitty humans.

I’m not alone in these feelings of resentment. There’s a lot of us, because instead of trusting engineers to do good work with good tools, a handful of rich fucks decided they knew technology better than the engineers building the fucking things.

reply
AstroBen
1 hour ago
[-]
I'm not surprised you're getting bad reactions from people who aren't already bought-in. You're starting from a firm "I'm right! They're wrong!" with no attempt to understand the other side. I'm sure that comes across not just in your writing
reply
etempleton
39 minutes ago
[-]
Everyone who has been told AI is a panacea by executive leadership who barely understand it feels this way.
reply
cwillu
10 minutes ago
[-]
“I didn't fully grok how tone deaf I was being though.

[…]

Seattle has talent as good as anywhere. But in San Francisco, people still believe they can change the world—so sometimes they actually do.”

Nope, still completely fucking tone deaf.

reply
toast0
1 hour ago
[-]
> After a pause I tried to share how much better I've been feeling—how AI tools helped me learn faster, how much they accelerated my work on Wanderfugl. I didn't fully grok how tone deaf I was being though. She's drowning in resentment.

Here's the deal. Everyone I know who is infatuated with AI shares things AI told them with me, unsolicited, and it's always such amazing garbage, but they don't see it, or they apologize it away [1]. And this garbage is being shoved in my face from every angle: my browser added it, my search engine added it, my desktop OS added it, my mobile OS added it, some of my banks are pushing it, and AI comment slop is ruining discussion forums everywhere (even more than they already were, which is impressive!). In the meantime, AI is sucking up all the GPUs, all the RAM, and all the kWh.

If AI is actually working for you, great, but you're going to have to show it. Otherwise, I'm just going to go into my cave and come out in 5 years and hope things got better.

[1] Just a couple days ago, my spouse was complaining to her friend about a change that Facebook made, and her friend pasted an AI suggestion for how to fix it with like 7 steps that were all fabricated. That isn't helpful at all. It's even less helpful than if the friend just suggested to contact support and/or delete the facebook account.

reply
cwillu
8 minutes ago
[-]
It's like porn: use it privately if you have to, but don't make it my problem.
reply
TulliusCicero
1 hour ago
[-]
I've recently found that it can be a useful substitute for stackoverflow. It does occasionally make shit up, but stackoverflow and forums searching also has a decently high miss rate as well, so that doesn't piss me off too much. And it's usually immediately obvious when a method doesn't exist, so it doesn't waste a lot of time for each incident.

Specifically, I was using Gemini to answer questions about Godot for C# (not GDScript or the IDE, where documentation and forum support are stronger), and it was mostly quite good for that.

reply
ispeaknumbers
1 hour ago
[-]
this reads like an ad for your project
reply
exmadscientist
1 hour ago
[-]
It reads like it's AI-edited, which is deliciously ironic.

(Protip: if you're going to use em—dashes—everywhere, either learn to use them appropriately, or be prepared to be blasted for AI—ification of your writing.)

reply
mips_avatar
1 hour ago
[-]
My creative writing teacher in college drilled the em dash into me. I can’t really write without them now.
reply
jasonjmcghee
1 hour ago
[-]
I think the presence of em dashes is a very poor metric for determining if something is AI generated. I'm not sure why it's so popular.
reply
NewsaHackO
53 minutes ago
[-]
I think it's because it is difficult to actually type an em dash on a keyboard (except, I've heard, on Macs). So either they 1) memorized the em dash alt code, 2) have a keyboard shortcut for it, or 3) are using the character map to insert it every time, all of which are a stretch for a random online post.
reply
exmadscientist
1 hour ago
[-]
For me it is that they are wrongly used in this piece. Em dashes as appositives have the feel of interruption—like this—and are to be used very sparingly. They're a big bump in the narrative's flow, and are to be used only when you want a big bump. Otherwise appositives should be set off with commas, when the appositive is critical to the narrative, or parentheses (for when it isn't). Clause changes are similar—the em dash is the biggest interruption. Colons have a sense of finality: you were building up to this: and now it is here. Semicolons are for when you really can't break two clauses into two sentences with a full stop; a full stop is better most of the time. Like this. And so full stops should be your default clause splice when you're revising.

Having em-dashes everywhere—but each one or pair is used correctly—smacks of AI writing—AI has figured out how to use them, what they're for, and when they fit—but has not figured out how to revise text so that the overall flow of the text and overall density of them is correct—that is, low, because they're heavy emphasis—real interruptions.

(Also the quirky three-point bullet list with a three-point recitation at the end with bolded leadoffs to each bullet point and a final punchy closer sentence is totally an AI thing too.)

But, hey, I guess I fit the stereotype!—I'm in Seattle and I hate AI, too.

reply
twodave
23 minutes ago
[-]
> Semicolons are for when you really can't break two clauses into two sentences with a full stop; a full stop is better most of the time.

IIRC (it's been a while) there are 2 cases where a semi-colon is acceptable. One is when connecting two closely-related independent clauses (i.e. they could be two complete sentences on their own, or joined by a conjunction). The other is when separating items in a list, when the items themselves contain commas.

reply
IAmBroom
37 minutes ago
[-]
OMG, beautifully described! (not sarcastic!)
reply
wrs
48 minutes ago
[-]
Ironically, years ago I fell into the habit of using too many non-interrupting em dashes because people thought semicolons were pretentious.

But introductory rhetorical questions? As sentence fragments? There I draw the line.

reply
exmadscientist
1 hour ago
[-]
Also, for sheer delightful perversity, I ran the above comment through Copilot/ChatGPT and asked it to revise, and this is what I got. Note the text structuring and how it has changed! (And how my punctuation games are gone, but we expected that.)

>>>

For me, the issue is that they’re misused in this piece. Em dashes used as appositives carry the feel of interruption—like this—and should be employed sparingly. They create a jarring bump in the narrative’s flow, and that bump should only appear when you want it. Otherwise, appositives belong with commas (when they’re integral to the sentence) or parentheses (when they’re not). Clause breaks follow the same logic: the em dash is the strongest interruption. Colons convey a sense of arrival—you’ve been building up to this: and now it’s here. Semicolons are for those rare cases when two clauses can’t quite stand alone as separate sentences; most of the time, a full stop is cleaner. Like this. Which is why full stops should be your default splice when revising.

Sprinkling em dashes everywhere—even if each one is technically correct—feels like AI writing. The system has learned what they are, how they work, and when they fit, but it hasn’t learned how to revise for overall flow or density. The result is too many dashes, when the right number should be low, because they’re heavy emphasis—true interruptions.

(And yes, the quirky three-point bullet list with bolded openers and a punchy closer at the end is another hallmark of AI prose.)

But hey, I guess I fit the stereotype—I’m in Seattle, and I hate AI too.

reply
jakubmazanec
50 minutes ago
[-]
Related article posted here https://news.ycombinator.com/item?id=46133941 explains it: "Within the A.I.’s training data, the em dash is more likely to appear in texts that have been marked as well-formed, high-quality prose. A.I. works by statistics. If this punctuation mark appears with increased frequency in high-quality writing, then one way to produce your own high-quality writing is to absolutely drench it with the punctuation mark in question. So now, no matter where it’s coming from or why, millions of people recognize the em dash as a sign of zero-effort, low-quality algorithmic slop."
reply
mips_avatar
1 hour ago
[-]
So the funny thing is em dashes have always been a great trick to help your writing flow better. I guess GPT-4o figured this out in RLHF and now it's everywhere
reply
cosmicgadget
1 hour ago
[-]
Ironic? The author is working on an AI project.
reply
npunt
1 hour ago
[-]
The irony is that AI writing style is pretty off-putting, and the story itself was about people being put off by the author's AI project.
reply
cosmicgadget
1 hour ago
[-]
You mean Wanderfugl???
reply
mips_avatar
1 hour ago
[-]
An iconic name
reply
excalibur
13 minutes ago
[-]
Always amazed to see people who don't hate AI.
reply
itg
57 minutes ago
[-]
Was this written by AI? It sounds like the writing style of an elementary school student. It's almost entirely made of really simple sentence structures, and for whatever reason I find it really annoying to read.
reply
hoppersoft
1 hour ago
[-]
This person crafts quite the straw man!

> This belief system—that AI is useless and that you're not good enough to work on it anyway—hurts three groups

I don't know anyone who thinks AI is useless. In fact, I've seen quite a few places where it can be quite useful. Instead, I think it's massively overhyped to its own detriment. This article presents the author as the person who has the One True Vision, and all us skeptics are just tragically undereducated.

I'm a crusty old engineer. In my career, I've seen RAD tooling, CASE tools, no/low-code tools, SGML/XML, and Web3 not live up to the lofty claims of the devotees and therefore become radioactive despite there being some useful bits in there. I suspect AI is headed down the same path and see (and hear of) more and more projects that start out looking really impressive and then crumble after a few promising milestones.

reply
kentm
9 minutes ago
[-]
This person wrote a blog post admitting to tone-deafness in cheerleading AI and talking about all the ways AI hype has negatively impacted people's work environments. But then they wrap up by concluding that it's the anti-AI people who are the problem. That's a really weird conclusion to come to at the end of that blog post. My expectation was that the takeaway would be "We should be measured and mindful with our advocacy, read the room, and avoid aggressively pushing AI in ways that negatively impact people's lives."
reply
tombert
1 hour ago
[-]
I got a thread on SomethingAwful gassed [1] because it was about an AI radio station app I was working on. People on that forum do not like AI.

I think some of their reasons were bullshit, but in fairness I have grown pretty tired of how much low-effort AI slop has been ruining YouTube. I use ChatGPT all the time, but I am growing more than a little frustrated by how much shit on the internet is clearly just generated text with no actual human contribution. I don't inherently have an issue with "vibe coding", but it is getting increasingly irritating having to dig through several-thousand-line pull requests of obviously-AI-generated code.

I'm conflicted. I think AI is very cool, but it is so perfectly designed to exploit natural human laziness. It's a tool that can do tremendous good, but like most things, it requires that people use it with effort, and that seems to be the outlier case.

[1] basically the hall of shame for bad threads.

reply
symbogra
1 hour ago
[-]
I honestly expected this to be about sanctimonious lefties complaining about a single ChatGPT query using an Olympic swimming pool's worth of water, but it was actually about Seattle big tech workers hating it due to layoffs and botched internal implementations, which is a much more valid reason to hate it.

My buddies still or until recently still at Amazon have definitely been feeling this same push. Internal culture there has been broken since the post covid layoffs, and layering "AI" over the layoffs leaves a bad taste.

reply
neilv
3 minutes ago
[-]
Lots of creators (e.g., writers, illustrators, voice actors) hate "AI" too.

Not only because it's destroying creator jobs while also ripping off creators, but it's also producing shit that's offensively bad to professionals.

One thing that people in tech circles might not be aware of is that people outside of tech circles aren't thinking that tech workers are smart. They haven't thought that for a long time. They are generally thinking that tech workers are dimwit exploiter techbros, screwing over everyone. This started before "AI", but now "AI" (and tech billionaires backing certain political elements) has poured gasoline on the fire. Good luck getting dates with people from outside our field of employment. (You could try making your dating profile all about enjoying hiking and dabbling with your acoustic guitar, but they'll quickly know you're the enemy, as soon as you drive up in a Tesla, or as soon as you say "actually..." before launching into a libertarian economics spiel over coffee.)

reply
exasperaited
58 minutes ago
[-]
It’s like you saw all the evidence and drew the conclusion you were most comfortable with, despite what the evidence suggests.
reply
watwut
1 hour ago
[-]
The author has an unquestioning assumption that the only innovation possible is innovation with AI. That is genuinely weird. Even if one believes in AI, innovation in non-AI space should be possible, no?

Second, engineering and innovation are two different categories. Most of engineering is about ... making things work. Fixing bugs, refactoring fragile code, building new features people need or want. Maybe AI products would be hated less if they were a little less about pretending to be innovation and a little more about making things work.

reply
mensetmanusman
34 minutes ago
[-]
Finance and HR are supposed to demoralize parts of organizations asking for too many resources.
reply
blairanderson
19 minutes ago
[-]
Seattle is going to tax the fuck out of big-tech, for better or worse.
reply
lateforwork
1 hour ago
[-]
I love AI but I find Microsoft AI to be mostly useless. You'd think that anything called Copilot could do things for you, but most of the time it just gives you text answers. Even in the context of the application, it can't give you better answers than ChatGPT, Claude, or Perplexity. What is the point of that?

Satya has completely wasted their early lead in AI. Google is now the leader.

reply
jmull
38 minutes ago
[-]
It's almost like the hype of AI is massively ahead of the reality, and the people being directly squeezed by that dynamic don't like how it feels.
reply
jesse_dot_id
1 hour ago
[-]
AI is in the Radium phase of its world-changing discovery life cycle. It's fun and novel, so every corporate grifter in the world is cramming it into every product that they can, regardless of it making sense. The companies being the most reckless will soon develop a cough, if they haven't already.
reply
arjie
1 hour ago
[-]
I wonder if I'm the guy in the bubble or if all these people are in the bubble. Everyone I know is really enjoying using these tools. I wrote a comment yesterday about how much my life has improved https://news.ycombinator.com/item?id=46131280

But also, it's not just my own. My wife's a graphic designer. She uses AI all the time.

Honestly, this has been revolutionary for me for getting things done.

reply
empressplay
58 minutes ago
[-]
I recently returned to the world of education and it's _everywhere_. I feel for those people who hate LLMs because they've already lost the war.
reply
bgwalter
44 minutes ago
[-]
Wanderfugl is a strange name for an "AI"-powered map. The Wandervogel movement was against industrialization and pro-nature. I'm sure they would have looked down on iPhones and centralized "AI" that gives them instructions on where to go.

Again, a somewhat positive term (if you focus on "back to nature" and ignore the nationalist parts) is taken, assimilated, and turned on its head.

reply
mips_avatar
1 hour ago
[-]
Author here if anyone has thoughts
reply
jfalcon
32 minutes ago
[-]
I get the feeling that this is supposed to be about the economics of a fairly expensive city/state and that "six-figure salary", but you don't really call it out.

If it were about the technology, then it would be no different from being a Java/C++ developer and being told someone who does HTML and JavaScript is your equal, so pay them the same. It's not.

People get anxious when something may cause them to have to change - especially in terms of economics and the pressures that puts on people beyond just "adulting". But I don't really think you explained the why of their anxiety.

Pointing the finger at AI is like telling the Germans that all their problems are because of Jews without calling out why the Germans are feeling pressure from their problems in the first place.

reply
nickff
1 hour ago
[-]
I was under the distinct impression that Seattle was somewhat divided over 'big tech', with many long-term residents resenting Microsoft and Amazon's impact on the city (and longing for the 'artsy and free-spirited' place it used to be). Do you think those non-techies are sympathetic to the Microsofties and Amazonians? This is a genuine question, as I've never lived in Seattle, but I visit often, and live in the PNW.
reply
caconym_
58 minutes ago
[-]
> Do you think those non-techies are sympathetic to the Microsofties and Amazonians?

As somebody who has lived in Seattle for over 20 years and spent about 1/3 of it working in big tech (but not either of those companies), no, I don't really think so. There is a lot of resentment, for the same reasons as everywhere else: a substantial big tech presence puts anyone who can't get on the train at a significant economic disadvantage.

reply
jfalcon
41 minutes ago
[-]
It depends on how AI affects your economy.

If you are a writer or a painter or a developer, in a city as expensive as Seattle, then one may feel a little threatened. Then comes the trickle-down effect: if I lose my job, I may not be able to pay for my dog walker, or my child care, or my hairdresser, or...

Are they sympathetic? It depends on how much they depend on those who are impacted. Everyone wants to get paid, but AI doesn't have kids to feed or diapers to buy.

reply
mips_avatar
1 hour ago
[-]
They kind of are, though I think so many locals now work in big tech in some way that it's shifted a bit. I wish we could return to being a bit more artsy and free spirited
reply
MicrosoftShill
49 minutes ago
[-]
I've lived in the Seattle area most of my life and lived in San Francisco for a year.

SF embraces tech and in general (politics, etc) has a culture of being willing to try new things. Overall tech hostility is low, but the city becoming a testbed for projects like Waymo is possibly changing that. There is a continuous argument that their free-spirited culture has been cannibalized by tech.

Seattle feels like the complete opposite. Resistant to change, resistant to trying things, and if you say you work in tech you're now a "techbro" and met with eyerolls. This is in part because in Seattle if you are a "techbro" you work for one of the megacorps whereas in SF a "techbro" could be working for any number of cool startups.

As you mentioned, Seattle has also been taken over by said megacorps which has colored the impressions of everyone. When you have entire city blocks taken over by Microsoft/Amazon and the roads congested by them it definitely has some negative domino effects.

As an aside, on TV we in the Seattle area get ads about how much Amazon has been doing for the community. Definitely some PR campaign to keep local hostility low.

reply
jfalcon
9 minutes ago
[-]
I'm sure the 5% employee tax in Seattle and the bill being introduced in Olympia will do more to smooth things over than some quirky blipvert will.

I think most people in Seattle know how economics works; the logic follows:

    while "techbro" don't work is true:
        if "techbro" debt > income:
            unless assets == 0:
                sellgighustle
            else:
                sellhousebeforeforeclosure
                nomoreseattleforyou("techbro")
            end
        else:
            "gigbot" isn't summoned and people don't get paid.
            "techbro" health-- due to high expense of COBRA.
            [etc...]
        end
    end

reply
sleepybrett
31 minutes ago
[-]
"How much they do for the community" — like trying to buy elections so we won't tax them, the same thing Boeing and Microsoft did. Anytime our local government gets a little uppity, suddenly these big corps are looking to move, like Boeing largely did. Remember Amazon HQ2? At least part of the reasoning behind that disaster was Seattleites asking, "what the hell is Amazon doing for us besides driving up rents and snarling traffic?"

(... and exactly how is Boeing doing since it was forced away from its "engineering culture" by moving out of the city where its workforce was trained and was training the next generation? Oh yeah, planes are falling out of the sky and their software is pushing planes into the ground.)

reply
throwaway_dang
1 hour ago
[-]
Out of curiosity, is this piece just some content that you created in the hopes of boosting your company's mindshare?
reply
mips_avatar
1 minute ago
[-]
I'm just really isolated right now, I've been building solo for a long time. I don't have anyone to share my thoughts with, which is something I used to really value at Microsoft.
reply
chankstein38
1 hour ago
[-]
Howdy! I personally don't really understand the "point" the article is trying to make. I mostly agree with your sentiment that AI can be useful. I too have seen a massive increase in productivity in my hobbies, thanks to LLMs.

As to the point of the article, is it just to say "People shouldn't hate LLMs"? My takeaway was more "This person's future isn't threatened directly so they just aren't understanding why people feel this way." but I also personally believe that, if the CEOs have their way, AI will threaten every job eventually.

So yeah I guess I'm just curious what the conclusion presented here is meant to be?

reply
mips_avatar
8 minutes ago
[-]
I guess in conclusion I'm saying that it's hard to build in Seattle, and that's really unfortunate.
reply
IAmBroom
33 minutes ago
[-]
Nope, no one does. This thread is devoid of opinion on the topic.
reply
mips_avatar
15 minutes ago
[-]
I think people just have a lot of frustration to get off their chest, which is fine.
reply
caconym_
1 hour ago
[-]
It kinda seems like you're conflating Microsoft with Seattle in general. From the outside, what you say about Microsoft specifically seems to be 100% true: their leadership has gone fucking nuts and their irrational AI obsession is putting stifling pressure on leaf level employees. They seem convinced that their human workforce is now a temporary inconvenience. But is this representative of Seattle tech as a whole? I'm not sure. True, morale at Amazon is likely also suffering due to recent layoffs that were at least partly blamed on AI.

Anecdotally, I work at a different FAANMG+whatever company in Seattle that I feel has actually done a pretty good job with AI internally: providing tools that we aren't forced to use (i.e. they add selectable functionality without disrupting existing workflows), not tying ratings/comp to AI usage (seriously how fucking stupid are they over in Redmond?), and generally letting adoption proceed organically. The result is that people have room to experiment with it and actually use it where it adds real value, which is a nonzero but frankly much narrower slice than a lot of """technologists""" and """thought leaders""" are telling us.

Maybe since Microsoft and Amazon are the lion's share (are they?) of big tech employment in Seattle, your point stands. But I think you could present it with a bit of a broader view, though of course that would require more research on your part.

Also, I'd be shocked if there wasn't a serious groundswell of anti-AI sentiment in SF and everywhere else with a significant tech industry presence. I suspect you are suffering from a bit of bias due to running in differently-aligned circles in SF vs. Seattle.

reply
mips_avatar
3 minutes ago
[-]
I think probably the safest place to be right now, emotionally, is a smaller company. Something about the hype right now is making Microsoft/Amazon act worse. I'd be curious to hear what specifically your company is doing to give people agency.
reply
rawgabbit
1 hour ago
[-]
Regarding "And then came the final insult: everyone was forced to use Microsoft's AI tools whether they worked or not."

As a customer, I actually had an MS account manager once yell at me for refusing to touch <latest newfangled vaporware from MS> with a ten-foot pole. Sorry, burn me a dozen times; I don't have any appendages left to care. I seriously don't get Microsoft. I am still flabbergasted anytime anyone takes Microsoft seriously.

reply
robocat
33 minutes ago
[-]
> MS account manager once yelled at me

Presumably the account manager is under a lot of pressure internally...

Do they repeatedly yell at you?

Do you know how your <vaporware> usage was measured - what metrics was the account manager supposed to improve?

reply
rawgabbit
15 minutes ago
[-]
He was trying to get people to use the unnamed Azure service. I assume others like me did a trial POC and immediately ran away screaming.
reply
cosmicgadget
1 hour ago
[-]
Would love to hear more anecdotes from former colleagues.
reply
mips_avatar
50 minutes ago
[-]
One fun one was that the leadership of Windows Update became obsessed with shipping AI models via Windows Update, but they can't safely ship files larger than 200 MB inside an update.
reply
nrhrjrjrjtntbt
1 hour ago
[-]
I like that you shared the insight. Feels like you shared a secret with the world that is not so secret if you work at Microsoft (I guess this is less about the city).

I feel bad for people who work at dystopian places where you can't just do the job, try to get ahead etc. It is set up to make people fail and play politics.

I wonder if the company is dying slowly, with AI hype and good old foundations keeping its stock price going.

reply
mips_avatar
5 minutes ago
[-]
Well, I think it's interesting how much what goes on inside the major employers affects Seattle. Crappy behavior inside of Microsoft is felt outside of it.
reply
the_af
1 hour ago
[-]
Out of curiosity, did you write this with AI?

It has all the telltale signs: lots of em-dashes but also "punched up" paragraphs, a lot of them end with a zinger, e.g.

> Amazon folks are slightly more insulated, but not by much. The old Seattle deal—Amazon treats you poorly but pays you more—only masks the rot.

or

> Seattle has talent as good as anywhere. But in San Francisco, people still believe they can change the world—so sometimes they actually do.

Once or twice can be coincidence, but a full article of it reads a tiny bit like AI slop.

reply
mips_avatar
48 minutes ago
[-]
I wrote it by hand, but I had an AI do some edits. I got the em dash drilled into me by my creative writing teacher in college.
reply
smikhanov
1 hour ago
[-]
"Grabbed lunch" is an awful phrase
reply
smikhanov
1 hour ago
[-]
Oh, and there's also "grok" just a few paragraphs later!
reply
mips_avatar
1 hour ago
[-]
It kind of is
reply
inshard
1 hour ago
[-]
Seattle hits a doom (helped by the gloom) loop every winter. This too shall pass.
reply
venturecruelty
1 hour ago
[-]
I'm stuck between feeling bad because this is my field (I spend most days worrying about not being able to pay my bills or get another job) and wanting to shake every last tech worker by the shoulders and yell "WAKE UP!" at them. If you are unhappy with what your employer is doing, you don't have to just sit there and take it because they have more power over you. You can organize.

Of course, you could also go online and sulk, I suppose. There are more options between "ZIRP boomtimes lol jobs for everyone!" and "I got fired and replaced with ELIZA". But are tech workers willing to explore them? That's the question.

It just feels like it's in bad taste that we have the most money and privilege and employment left (despite all of the doom and gloom), and we're sitting around feeling sorry for ourselves. If not now, when? And if not us, who?

reply
cosmicgadget
1 hour ago
[-]
To the extent that Microsoft pushes their employees to use all their other shitty products, Copilot seems like just another one (it can't be more miserable/broken than SharePoint).
reply
pnathan
31 minutes ago
[-]
AI the hype beast product and the club for workers is a plague that I frankly hate.

AI the manual algorithm to generate code and analyze images is quite an interesting underlying tech.

reply
SecretDreams
1 hour ago
[-]
> My former coworker—the composite of three people for anonymity—now believes she's both unqualified for AI work and *that AI isn't worth doing anyway*. *She's wrong on both counts*, but the culture made sure she'd land there.

I'm not sure they're as wrong as these statements imply?

Do we think there's more or less crap out now with the advent and pervasiveness of AI? Not just from random CEOs pushing things top down, but even from ICs doing their own gig?

reply
chankstein38
1 hour ago
[-]
Oh but we're all supposed to swoon over the author's ability to make ANOTHER AI powered mapping solution! Probably vibecoded and bloated too. Just what we need, obviously all the haters are wrong! /s
reply
empressplay
57 minutes ago
[-]
Honestly if it's using a swiss-army-knife framework it's already bloated.
reply
piljoong
1 hour ago
[-]
This isn’t really a common-folk-vs-tech-bros story. It’s about one specific part of Seattle’s tech culture reacting to AI hype. People outside that circle often have very different incentives.
reply
yieldcrv
1 hour ago
[-]
Unlike Seattle, Los Angeles has few software engineers, but I would not utter "AI" at all here.

It's an infinitely moving goalpost of hate: if it's an actor, "creative", or writer, AI is a monolithic doom; next it's theoretical public policy or the lack thereof; and if nothing about it affects them directly, then it's about the energy use and the environment.

Nobody is going to hear out what your AI does, so don't mention anything about AI unless you're trying to earn or raise money. It's a double life.

reply
ToucanLoucan
1 hour ago
[-]
Literally everyone I know is sick of AI. Sick of it being crowbar'd into tools we already use and find value in. Sick of it being hyped at us as though it's a tech moment it simply isn't. Sick of companies playing at being forward thinking and new despite selling the same old shit but they've bolted a chatbot to it, so now it's "AI." Sick of integrations and products that just plain do not fucking work.

I wouldn't shit talk you to your face if you're making an AI thing. However I also understand the frustration and the exhaustion with it, and to be blunt, if a product advertises AI in it, I immediately do treat it more skeptically. If the features are opt-in, fine. If however it seems like the sort of thing that's going to start spamming me with Clippy-style "let our AI do your work for you!" popups whilst I'm trying to learn your fucking software, I will get aggravated extremely fast.

reply
venturecruelty
1 hour ago
[-]
Oh, I will happily get in your face and tell you your AI garbage sucks. I'm not afraid of these people, and you shouldn't be, either. Bring back social pressure. We successfully shamed Google Glassholes into obscurity, we can do it again. This shit has infested entire operating systems now, all so someone can get another billion dollars, while the rest of us struggle to make rent. It's made my career miserable, for so many reasons. It's made my daily life miserable. I'm so sick and tired of it.
reply
ThrowawayR2
16 minutes ago
[-]
> "shamed Google Glassholes into obscurity"

Except it didn't stick? https://news.ycombinator.com/item?id=43088369

reply
marcosdumay
45 minutes ago
[-]
> This shit has infested entire operating systems now

Well, it's not the fault of a random person doing some project that may even be cool.

I'll certainly adjust my priors and start treating the person as probably an idiot. But if given evidence they are not, I'm interested in what they are doing.

reply
ToucanLoucan
1 hour ago
[-]
The thing that stops me from being outwardly hostile is that there is a minority, and it is a minor, minor minority, of applications for AI that are actually pretty interesting and useful. It's just catastrophically oversaturated with samey garbage that does nothing.

I'm all for shaming people who just link to ChatGPT and call their whatever thing AI powered. If you're actually doing some work though and doing something interesting, I'll hear you out.

reply
cvsswebshit
1 hour ago
[-]
Slop.
reply
akomtu
1 hour ago
[-]
Good for them. It turns out, the common folk have more wisdom than tech bros with regard to AI.
reply
typs
1 hour ago
[-]
Ah yes. The big tech employees of Amazon and Microsoft, the common folk.
reply
RandallBrown
1 hour ago
[-]
This article is about how the tech bros in Seattle hate AI.
reply
lowbloodsugar
1 hour ago
[-]
The article reports Microsoft SDEs complaining about Copilot and being forced to use it. It's "worse than competitors' tools."

No shit. But that's hardly everyone in Seattle. I'd imagine people at Amazon or Google aren't upset about being forced to use Copilot.

reply
bgwalter
56 minutes ago
[-]
Nadella needs to go. If he wants to set up a sweatshop, he can do it elsewhere rather than ruining a Western company.

Maybe he can be "AI" officer at Infosys.

EDIT: The sycophancy of the downvoters for their beloved kings is striking. People like Nadella literally tell "AI" skeptics to leave. I am telling Nadella to leave. The enshittification process of Microsoft has started under Nadella.

reply
defraudbah
1 hour ago
[-]
Good! Everyone in AI hates Seattle!
reply
jfalcon
57 minutes ago
[-]
206dev here...

Oh yeah, call out a tech city and all the butt-hurt-ness comes out. Perfect example of "Rage Bait".

People here aren't hurt because of AI - people here are hurt because they learned they were just line items in a budget.

When the interest rates went up in 2022/2023 and the cheap money went away, businesses had to pivot their taxes while appeasing the shareholder.

Remember that time when Satya went to a company sponsored rich people thing with Aerosmith or whomever playing while announcing thousands of FTE's being laid off? Yeah, that...

If your job can be done by a very small shell script, why wasn't it done before?

reply