Apple accidentally left Claude.md files in the Apple Support app
212 points
3 hours ago
| 14 comments
| xcancel.com
| HN
ryandrake
5 minutes ago
[-]
I wouldn't even think that CLAUDE.md would make it into source control, let alone into the product. I don't AI-code for a living, so I don't know what is considered best practices, but I would think that CLAUDE.md, AGENTS.md, REQUIREMENTS.md, MY_PLAN.md, THIS_STUFF.md, THAT_THING.md, all the instruction/feeder files that drive the AI should not go into source control. Only the actual code that gets compiled.

I look at all those files the same way as IDE configuration cruft--it's workstation-specific configuration that shouldn't even go into source control. I would .gitignore all of those files. Is this not what is done in industry?
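For what it's worth, the exclusions themselves would be one line each; a hypothetical .gitignore sketch using the file names mentioned above (whether you'd actually want this is exactly the question):

```gitignore
# Agent instruction/feeder files (if you treat them as workstation-level cruft)
CLAUDE.md
AGENTS.md
REQUIREMENTS.md
MY_PLAN.md
```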

reply
Wowfunhappy
2 minutes ago
[-]
I think it makes sense to include in source control, just as it’s pretty typical to include documentation (such as a readme file) in source control. CLAUDE.md is really just project documentation.
reply
shermantanktop
37 seconds ago
[-]
[delayed]
reply
stevarino
3 minutes ago
[-]
I personally don't have strong experience here, but I would treat them similarly to BUILD files and the like - probably in the root directory of a repo but nowhere near the bin/ or build/ directories.

Also, it looks like there's a compilation step for these files, which is interesting. The raw file was included, not the environment-specific file.

reply
tantalor
1 minute ago
[-]
No.

Version control everything (inputs)

reply
vpribish
2 minutes ago
[-]
Nah. That's not how it looks once you start working with it. It's code-equivalent for sure. You probably wouldn't keep your plan files or the working chats, though.
reply
pbronez
3 minutes ago
[-]
I think it’s more like project-wide code style rules or build instructions.
reply
cryptoz
2 minutes ago
[-]
Agent instruction files are code, though. And none of this is really workstation-specific, it is codebase-specific. Should each developer keep a nearly identical copy of CLAUDE.md? The instructions really aren't for a developer, they are for an LLM agent. In most cases (I'd imagine, anyway) the agentic instruction files must be in source control for them to even provide much value.
reply
internet2000
2 hours ago
[-]
> Apple runs on Anthropic at this point. Anthropic is powering a lot of the stuff Apple is doing internally in terms of product development, a lot of their internal tools…They have custom versions of Claude running on their own servers internally.

--Mark Gurman, Bloomberg https://x.com/tbpn/status/2016911797656367199

reply
rustyhancock
2 hours ago
[-]
Apple seems to purposefully have decided to sit out the arms race.

Probably smart time to rent and not buy if they plan on buying in a downturn.

reply
iLoveOncall
47 minutes ago
[-]
Not participating in the war is the only true way to win the war, nothing new.

And in this particular war, it's even worse, the "winner" will actually just be the "biggest loser", contrarily to a traditional war.

reply
dylan604
7 minutes ago
[-]
It seems to be Blu-ray vs HD-DVD again. Luckily for me, I made the right decision and got out of the shiny round disc business as that battle was raging all around me having been in the DVD programming business for 8 years or so. This battle of LLMs is interesting to watch from the sidelines as I have nothing to do with them. Not sure this will end with one LLM to rule them all while the others fade away. People can use the one they prefer and not really impact others.
reply
stefan_
2 hours ago
[-]
Okay, but why is the Siri team sitting out transformers? I really wanna move past the "Dragon NaturallySpeaking" experience with a bolted-on decision tree.
reply
acdha
1 hour ago
[-]
Who’s doing it better? I have yet to hear from a Google or Amazon user who has a transformatively better experience, and I think that’s why they haven’t jumped so far because they have hundreds of millions of users who have daily habits that they don’t want to lightly disturb.
reply
recursive
2 minutes ago
[-]
Google user here. My experience with the new assistant is worse. The old one could pretty reliably set timers. The new one could not.
reply
simgt
1 hour ago
[-]
> I think that’s why they haven’t jumped so far because they have hundreds of millions of users who have daily habits that they don’t want to lightly disturb.

I don't think that's part of their decision making, Liquid Glass moved most things around for seemingly not much else than novelty and that's not the first time.

reply
alfiedotwtf
9 minutes ago
[-]
When a company doesn’t have anything to innovate on, or hires a new marketing exec, the first thing they do is change the company logo.

Liquid Glass was Apple’s logo change moment

reply
wenc
1 hour ago
[-]
Right now Alexa+ and Gemini are objectively better.

The best is ChatGPT voice mode. It understands non English words and accents amazingly well, and even though the LLM model isn’t the full fledged one, I can have deep conversations with it for an hour without it missing a beat.

reply
barumrho
1 hour ago
[-]
Siri doesn't need to have conversations with you. ChatGPT can do that. But, it should be able to do actions you'd do on your phone.
reply
alfiedotwtf
7 minutes ago
[-]
This! I talk to ChatGPT every morning, and it will listen and navigate my feeds while I drive, summarise posts, and answer my questions. It just works.
reply
DaiPlusPlus
1 hour ago
[-]
"objectively better" is a subjective statement :)

My preference, however, is for a voice-control UX just like I get with my Amazon Echo and "classic" Alexa, as I have for the past 10 years I've been using it. I think I can best describe it as a "voice-driven command line", just like your OS's CLI shell, which makes its interactions predictable, even if it means I need to "know" what commands are valid in a given context. I need predictability and reliability when it comes to my home-automation integrations.

...but computer interaction with a LLM / transformer-driven / "AI agent" is anything but predictable. When Amazon opted everyone into Alexa+ I agreed to give it a go and see if it really made things better or not - and it did not. I opted-out of Alexa+ and went back to something actually reliable.

reply
redwall_hp
3 minutes ago
[-]
Siri's one job I care about is doing exactly what I want while I'm driving. I need it to check my text messages, take dictation, start phone calls and deal with music. I don't need to have conversations with it, I need deterministic responses to known commands.
reply
thrtythreeforty
50 minutes ago
[-]
Here's a question: I don't understand the gap between these LLM powered voice agents vs CLI coding agents, the latter of which are obviously useful and quite resourceful at getting something done when asked in plain English.

Seems like an agent given 20-30 tool calls like "read_sms" "matter_command", and "send_email" would be able to work out what to do for things like "set the house to 72° and text Laura that I did it."
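A minimal sketch of the dispatch half of that idea, under stated assumptions: the tool names come from the comment above and are hypothetical, and the planner (the LLM call that turns the English request into a list of tool calls) is stubbed out as a hard-coded plan.

```python
# Sketch of an agent tool registry and dispatch loop. The tool names
# (matter_command, send_email) are the hypothetical ones from the comment;
# real implementations would talk to Matter devices and a mail API.

def matter_command(device: str, value: str) -> str:
    # Stub: pretend to set a smart-home device via Matter.
    return f"{device} set to {value}"

def send_email(recipient: str, body: str) -> str:
    # Stub: pretend to send a message.
    return f"emailed {recipient}: {body}"

TOOLS = {"matter_command": matter_command, "send_email": send_email}

def run_agent(plan: list[tuple[str, dict]]) -> list[str]:
    """Execute a plan (normally produced by the LLM) against the tool registry."""
    results = []
    for name, args in plan:
        results.append(TOOLS[name](**args))
    return results

# Hard-coded stand-in for what the LLM would emit for
# "set the house to 72 and text Laura that I did it":
plan = [
    ("matter_command", {"device": "thermostat", "value": "72"}),
    ("send_email", {"recipient": "Laura", "body": "House is set to 72."}),
]
print(run_agent(plan))
```

The dispatch part really is this simple; the hard part (and the reliability concern raised downthread) is whether the model produces a sane plan every time.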

reply
DaiPlusPlus
25 minutes ago
[-]
> Seems like an agent given 20-30 tool calls like "read_sms" "matter_command", and "send_email" would be able to work out what to do for things like "set the house to 72° and text Laura that I did it."

Incidentally, a major headline in the news this past week was about a coding-agent that wiped its company's entire system, including backups; which the company's staffers were confident was utterly impossible (as it didn't have any access to that system), and yet somehow, it did[1] (the TL;DR is the agent randomly came across an unprotected God-tier admin API-key/token saved to a personal text-file in a filesystem it had read-access to). If an agent can do that with only read-only access to a company's routine/everyday storage area then there's no way I'm giving it the ability to deactivate my house's fire-alarms and security-cameras via Google Home/Matter/Thread/HomeKit/X10/OhFfsNotAnotherCloudBasedAutomationScheme.

[1] https://www.theregister.com/2026/04/27/cursoropus_agent_snuf...

reply
wat10000
6 minutes ago
[-]
"Objectively" has become a generic intensifier. It's literally infuriating.
reply
ShyCodeGardener
58 minutes ago
[-]
Whenever I see one of these comments, it's always from someone who tried it at the start and then gave up because of a bad experience. And many times there are more people commenting back that this was essentially the 1.0 version and that the current 2.0 version is much better. So as someone who uses none of these products (old voice assistants vs. AI ones), it's really hard to evaluate if any of these anecdotes mean anything.

You could have tried Alexa+ at the start, when it was shitty compared to plain Alexa, and maybe it's better now. But equally, none of the people who comment that it is "amazing" in its current iteration qualify their statements by comparing and contrasting the old version vs. the new version, making them seem either unqualified to say how much "better" it is than the old version, or at worst shills (paid or not). The best take is that they are comparing (e.g.) day-one Alexa+ vs. the current Alexa+ without a comparison to the original Alexa.

... which is to say that it really feels like there are no clear conclusions that could be drawn from all of this.

reply
circuit10
25 minutes ago
[-]
No matter how good the LLM features are, I just want to turn my lights on and off and check the time. A perfect LLM could maybe perform on par with a simple deterministic command system for these tasks, but not better. All an LLM does is introduce the possibility that a command that worked fine yesterday will randomly not work

Also, one of my first interactions with this Alexa+ thing was “how long is it until 8:45am”, one of only a few commands I use it for to work out how much sleep I’m getting, and it proceeded to ask me what the current time was… I immediately turned it off after that

reply
DaiPlusPlus
36 minutes ago
[-]
> that tried it at the start and then gave up because of a bad experience

I've had enough bad experiences with products that never got better, or just got worse (Exhibit A: Windows 11). Like most primates, I am capable of learning, and I've learned that once a consumer product/service goes bad there's little hope of a turn-around. I accept that you're telling me that it's gotten better, but of the people I know IRL who also use an Echo, none of them have told me that Alexa+ is worth trying, let alone committing to.

Yes, it's on me for not giving Alexa+ a second chance, but I'm not willing to give Alexa+ a second chance because, as a technology product/service customer, I just don't feel respected by the industry I work for (...lol); if Amazon, Microsoft, Google, et al won't respect me, why should I venture outside my comfort-zone for... what benefit, exactly?

reply
virgil_disgr4ce
1 hour ago
[-]
I concur that the ChatGPT voice mode is excellent. I can't even think of anything to knock it for other than for whatever reason it never 'hears' my kids, but that's probably because it's not intended to be used in multi-participant chats?

But for one-on-one, it is a really outstanding experience. Especially since they tamped down the way over-the-top humanisms.

reply
bakies
1 hour ago
[-]
Claude. I switched my phone assistant to Claude and it does everything that Google (used to) do, like set alarms and timers, but also does everything Claude can do.
reply
arnavpraneet
1 hour ago
[-]
How did you do that?
reply
bakies
54 minutes ago
[-]
Settings > Apps > default apps > assistant
reply
nipponese
21 minutes ago
[-]
I got to > default apps, but don’t see assistant?
reply
infecto
48 minutes ago
[-]
I was looking myself and it appears only certain regions (Japan) have that option.
reply
bakies
44 minutes ago
[-]
I'm in USA, on a Pixel Android.
reply
infecto
29 minutes ago
[-]
Hah, I missed the part where you said Google. You'd figure a thread about Siri was talking about iOS. Of course, then.
reply
dekhn
1 minute ago
[-]
The context switched mid-thread: the reply was to somebody saying they hadn't heard from Google or Amazon users.
reply
Cthulhu_
1 hour ago
[-]
Plus, if someone else does it better (or differently), I bet they've got a team and technology at a 90%-done state waiting to jump on it, pick it apart and make it better. I don't think they're not doing anything.
reply
hrimfaxi
1 hour ago
[-]
> Who’s doing it better?

Any of the Whisper-based apps on the App Store.

reply
Wowfunhappy
19 minutes ago
[-]
Actually, could you recommend one? The ones I've found all seem to want subscriptions. I'm okay paying a few dollars for a well done frontend, but an ongoing sub to run an open weights model locally is nuts...
reply
oceansweep
15 minutes ago
[-]
This is one I use with no tracking or ads:

https://apps.apple.com/us/app/id6447090616

reply
Wowfunhappy
5 minutes ago
[-]
Thank you! (It looks like you might have posted twice btw!)
reply
oceansweep
15 minutes ago
[-]
I’ve found this one to be useful offline with no ads:

https://apps.apple.com/us/app/id6447090616

reply
naravara
27 minutes ago
[-]
It’s not “transformatively better” but it definitely involves fewer frustrations to interact with. That’s always been Apple’s main value proposition, you’re not getting the most cutting-edge stuff but you’re supposed to have something that “just works” not something that makes you go “GODDAMN IT!” when it inexplicably seems to fumble normal things.

So if you buy Apple products based on that value proposition it’s a big problem for Apple if they can’t seem to keep their brand-promise in this area.

reply
BoredPositron
1 hour ago
[-]
Here have an anecdote: Gemini Assistant is pretty good.
reply
phrotoma
1 hour ago
[-]
Yesterday my Google Home Mini gave me the current temperature in Fahrenheit. I live in Canada and use a Pixel. Dumbest fucking AI going. May as well give it to me in coulombs per hectare.
reply
stetrain
57 minutes ago
[-]
Not sure "sitting out" is the right way to put it. They've been publicly trying to ship a next-gen Siri for years and haven't been able to get something good enough to release. The latest plan is to base it on Gemini so we should be seeing progress on that next month at WWDC.
reply
pxc
48 minutes ago
[-]
The experience of using LLMs as digital assistants so far is not great. Gemini on Android sucks so bad it's hard to describe. It can't tell what its own capabilities are, it can't inspect the states of the apps it manipulates, it hallucinates constantly, and it needs more handholding than the crappy old decision tree to do the right thing. I much more often have to pull over to make sure Google Maps is doing the right thing than I ever used to before, because trusting the LLM to be "smarter" so often fails for me.

Be careful what you wish for.

reply
readams
2 hours ago
[-]
reply
gchamonlive
2 hours ago
[-]
I think it's the same reason why macOS and iOS have degraded a lot in terms of UX over the past decade: Apple's focus shifted towards hardware independence.

The 2010s were marked by Intel's lazy product lineup, year after year pumping out rehashes of older products, iterating on top of their 14nm lithography with increasingly minor improvements to its architecture until AMD overtook them. In the process, Apple's partnership with Intel became a liability it had to solve, and the push for the unified ARM architecture was no small feat.

If you ask me, I don't think it's justified to degrade the user experience for the sake of this focus. It's a trillion-dollar company, and has been for a while. Sure, it could have tackled both, but what do I know.

In any case I think it explains really well why Siri feels so abandoned.

reply
threetonesun
1 hour ago
[-]
I dunno, Apple has always had a pretty high level of hardware independence, and one could imagine that even if Intel had produced great chips for longer, the ARM architecture would have replaced it eventually. Certainly the timeline got shifted (and I'm glad for it), but I don't know if that really impacted Siri. If anything, it seems like it got pushed to the bottom of the pile in favor of projects like the Apple Car and Vision Pro OS on one side and the demand to increase services revenue on the other.
reply
Wowfunhappy
14 minutes ago
[-]
Also: Before, Apple was dependent on Intel (whose "product" is an integrated chip design and the fab to make it). Now they're dependent on TSMC (whose "product" is a fab). I'm... not really sure they've reduced their dependence? If TSMC starts falling behind Intel--which doesn't seem likely, but what happened to Intel didn't seem likely two decades ago--Apple will be stuck.
reply
newsclues
1 hour ago
[-]
The A series is their own chip design, not PowerPC or Intel designs.

They're CPUs built for Apple's own purposes, which is next-level hardware independence.

reply
Cthulhu_
1 hour ago
[-]
It's one of the biggest and wealthiest companies in the world, but your comment seems to imply they have to pick and choose what they pursue. They really don't, especially if it's hard- vs software.
reply
footydude
29 minutes ago
[-]
> seems to imply they have to pick and choose what they pursue. They really don't, especially if it's hard- vs software.

Money can often just be one part of the equation.

To do things well you also need: available and capable technical resources, suitable facilities, available and capable leadership and management (engaging at the right level in the business), and a clear vision of what you're trying to achieve/work towards.

Given how Apple appears to operate, I wonder if a strong desire for senior management control/oversight over major developments means they (artificially) limit how many concurrent large-scale things they can work on at any given time?

Maybe not, but that'd be my guess.

reply
gchamonlive
1 hour ago
[-]
> It's a trillion dollar company, and has been for a while. Sure it could have tackled both, but what do I know.

I didn't just imply it; it's explicit in my comment. It's what their actions show. Their updates make their systems worse and worse, Tim Cook is out and Siri is in shambles. It might have been something else, but I'm willing to give it the benefit of the doubt, because the alternative is just sheer stupidity.

reply
HumblyTossed
1 hour ago
[-]
They're valued at $4T, they have hundreds of billions hoarded. They could run 50 billion dollar startup projects and not feel it. Imagine a startup getting handed a billion dollars ... and the vast knowledge that Apple has access to already.

There's no way they couldn't do a better Siri. For some reason, they just ... won't.

reply
gnerd00
50 minutes ago
[-]
Here is a clue: money does not make software better, and lots of money often results in worse software. Makes no sense? Actual experience begs to differ.

Classic homework assignment: The Mythical Man-Month and related essays.

reply
realusername
1 hour ago
[-]
I always found that Apple had pretty mediocre software quality; it's always been a very strong hardware company first and foremost.

They have great kernel, driver, and low-level engineering, but the stack above that has a lot of questionable stuff.

reply
rkapsoro
1 hour ago
[-]
I only partly agree with this. The answer is maddeningly more complicated.

Some parts of their software stack -- higher up than the kernel -- are actually pretty great. There's a lot of really brilliant stuff in their system frameworks, and in SwiftUI, Cocoa, and UIKit. I've been using Linux at home recently, and I find myself missing some of it.

But, on the flip side, suddenly you hit maddening bugs, crashes, or terrible developer-experience papercuts. And, of course, there's the App Store, which is just evil. For my next app I'm just going to go notarization-only, and see how that goes...

reply
pfisherman
1 hour ago
[-]
The comment above is on to something. I find CarPlay to be much more valuable and much more of a lock-in to the iPhone than Siri. I do not think I could ever go back to using the infotainment systems that ship with cars, so it makes sense why they might prioritize it over Siri. And in the context of CarPlay, the simplicity of Siri is nice. I really only need it to execute a few simple commands like looking up directions, making calls, reading/sending texts, playing a podcast, etc.
reply
gchamonlive
1 hour ago
[-]
I don't dispute that, but Apple made its business on the premise of being the best in the business in terms of UX. Note though that you can have great UX powered by mediocre software, so those aren't mutually exclusive.
reply
colechristensen
1 hour ago
[-]
I think they could never make it good enough at the right price.

You have to remember all of the AI companies are making cash bonfires. People aren't going to stop buying iPhones because Siri can only do what it does now.

If Apple focuses on hardware and skips the pay-for-inference bubble they'll come out the other side with the best consumer hardware everybody already has for local inference which is going to eat the whole industry's lunch.

nvidia is going to have a hard time convincing people they need to buy $1000 LLM inference hardware. Apple isn't going to have a hard time convincing people to buy the next generation of phone/tablet/laptop.

reply
piker
2 hours ago
[-]
I'm suspicious of that take from Mark Gurman. That's a lot of detail around pricing and "holding Apple over a barrel" as relates to the Siri deal that seems like a nice PR spin from Anthropic.

Anthropic probably couldn't give the uptime guarantees that Google can, right?

reply
Spooky23
2 hours ago
[-]
Apple is a pretty difficult company to deal with on a B2B basis.

If you have terms that conflict with theirs, they aren't very flexible. Anthropic can be similarly difficult, and their needs from a business perspective probably don't align with Siri. I would imagine that Google has a more flexible, long-term approach to absorbing some risk in a revenue-share arrangement than Anthropic, who generally wants cash.

Anthropic's only purpose is to juice whatever KPIs are going to increase their IPO market cap.

reply
piker
1 hour ago
[-]
Yeah, that makes more sense to me than "Anthropic had them over a barrel", which seemed quite odd given the relative cash positions and installed base of each firm.
reply
engineer_22
1 hour ago
[-]
Tbh I thought their purpose was to power the war machine
reply
Lord-Jobo
1 hour ago
[-]
Gurman might be the only leaker in tech who, so far, doesn't seem to fuck around. Low miss rate, rarely exaggerates. Of course that could change, and he could always get insider info that is wrong.
reply
lostlogin
11 minutes ago
[-]
A recent big miss of his was Cook's retirement.

https://daringfireball.net/linked/2025/12/01/gurman-pooh-poo...

reply
turtlesdown11
1 hour ago
[-]
Gurman is clearly Apple's preferred go-to for leaking info.
reply
blitzar
50 minutes ago
[-]
Which only tells us that it is what Apple wants us to believe, not that it is the truth.
reply
signatoremo
38 minutes ago
[-]
Did you verify that? What is the miss rate?
reply
danpalmer
1 hour ago
[-]
The reporting says it's running on their own hardware.
reply
piker
1 hour ago
[-]
Internal dev tools, but the point I'm making relates to the discussion about choosing Gemini over Claude for their consumer-facing products.
reply
jedisct1
1 hour ago
[-]
> They have custom versions of Claude running on their own servers internally.

This is the important point.

Sending their internal code, documentation, secret tokens, etc. to Anthropic would be completely irresponsible.

But if they are running the models on their own servers, why not!

reply
JeremyNT
1 hour ago
[-]
Was it even publicly known that Anthropic offered this capability? I wasn't aware on-prem Claude was a thing.
reply
sheiyei
57 minutes ago
[-]
If you're Apple (or even Apple-sized), you can get a bunch of things others can't.
reply
conception
57 minutes ago
[-]
Bedrock? If you’ve got the cash they’ll deploy it.
reply
ramon156
2 hours ago
[-]
Unrelated:

Yuck. A lot of those replies have LLM smells. Do people love being a hollow puppet for LLMs to fill in? Have people lost their identity?

reply
coldpie
1 hour ago
[-]
It's trending in that direction. If you want genuine conversation with humans, it's best to start looking for small, private communities that have and enforce LLM policies that align with your desires. Public social media is universally trash, don't waste your time there. I think HN is still worth visiting for now, but it's getting harder to justify spending time here with the quantity of garbage-quality LLM articles and even many comments.
reply
Hendrikto
47 minutes ago
[-]
> HN is still worth visiting, but it's getting harder to justify spending time here

I feel the same. The quality of both submissions and discussions has considerably decreased. It is still the best general-purpose “aggregator” I know of, but it is not what it was. It is becoming more and more FotM hype and boring groupthink.

HN was great due to the breadth of unique, interesting, nerdy topics, most of which I would have never come across on my own; and the insightful thought-provoking commentary, often by insiders with unique insights and perspectives.

Now it is just the same LLM agentic coding harness hype cycle astroturfing 100x engineer 37k LoC/day BS I could get from Reddit or LinkedIn or Twitter or anywhere else.

The moderators are still doing a fantastic job though! I feel like that is the last big differentiator from just being orange Reddit.

reply
coldpie
30 minutes ago
[-]
I dunno, it's tough. I hesitate to say HN is "getting worse," even if I agree with that in my gut; that framing invites rose-tinted glasses and nostalgia bait. Rather, I think the community is refocusing around something that I find uninteresting. If you find LLM output to be dull, as I do, it's less and less a place for you to be. I try to push the community in more interesting directions by upvoting articles with actual technical content, but yeah, it's being drowned out by the ho-hum LLM output that I'm not interested in, and that means I want to be here less.
reply
torben-friis
32 minutes ago
[-]
I think it's a trend in the industry, though. Engineering is known as a moneymaker, so a large part of the new generation is the kind of person who, decades ago, would have gone into finance as a profession.

Both the really old-timey graybeard techies and the green-haired alternative techie communities are dwindling in numbers.

reply
ricardo81
44 minutes ago
[-]
Yuck indeed. I do find it offensive when someone uses AI in a conversational manner. It's one thing to use it to chuck up content on social media to attract eyeballs, but this is a forum intended for conversation.
reply
j-kent
2 hours ago
[-]
It's not about contributing to the conversation — it's about the fake internet points.
reply
sidsud
1 hour ago
[-]
You've hit the nail on the head with that observation! And honestly? The points are all that matters.
reply
2ndorderthought
2 hours ago
[-]
It's not about the fake internet points — it's about manipulating people to support companies they otherwise wouldn't.
reply
worldsavior
1 hour ago
[-]
That's why he said fake internet points.
reply
stetrain
50 minutes ago
[-]
You're absolutely right!
reply
rvnx
1 hour ago
[-]
These points might be fake, but they are far from being useless, and actually have monetary value.

There is a market for buying and selling "aged" Hacker News accounts (15 USD for ~500 points).

By purchasing just ~300 karma points, founders can unlock an uplift of tens of thousands of dollars in visibility on the home page (clients and investors).

So the LLM comments are not here just for fun, they are clearly farming points.

Ironically, it also increases actual human engagement. This way, the day Y Combinator wants to announce something, they already have a bigger audience than if there was low engagement.

Like the shilling you mentioned, these bots can push downvotes and flag competitors' services.

Essentially the same as on Reddit. If you have incentive, you have a market.

reply
cryptoegorophy
53 minutes ago
[-]
I'm not familiar with how posts and points work; does a higher points balance affect some kind of post rating/upvote weighting?
reply
NoMoreNicksLeft
51 minutes ago
[-]
>So the LLM comments are not here just for fun, they are clearly farming points.

I think I give out about 1 updoot a year. Good to know I've been starving them.

reply
wutwutwat
25 minutes ago
[-]
If you're going to claim that there is a market for aged HN accounts, you need to back it up with sources/proof; otherwise, you're pulling nonsense out of your ass.
reply
ihaveajob
1 hour ago
[-]
I find it hilarious that your comment has an em dash.
reply
Aachen
32 minutes ago
[-]
And the "it's not x, it's y" pattern. It's parody :)
reply
20k
1 hour ago
[-]
We're getting to a point where we're going to have to consistently start putting content in that AI is banned from writing, just to prove that we're humans.

arse

reply
christophilus
59 minutes ago
[-]
It’s not that they’ve lost their identity— it’s that… { “error”: “Claude Max limits exceeded” }
reply
blitzar
49 minutes ago
[-]
You are absolutely right ...
reply
Cthulhu_
1 hour ago
[-]
Only a matter of time (if not already) before there are counter-LLMs or whatnot that convince free-rein LLM agents to go and generate cryptocurrency for the attacker or run propaganda campaigns.
reply
hansmayer
28 minutes ago
[-]
> Do people love being a hollow puppet for LLMs to fill in? Have people lost their identity?

The first question: the answer is yes - most people live their lives mindlessly, with or without LLMs (think of every idiot you knew 20 years ago throwing in punch lines from "Friends" to sound "funny"). To the second question: most people have a twisted view of identity. It is supposed to mean something that identifies you uniquely, but to most people it means identifying you as a member of a large group (nationality/political view/religion/major music genre you like). So, now that every proverbial Dick, Tom and Harry uses LLMs to generate Confluence content with shiny emojis, what are the proverbial Emily or John to do? Of course, they will adopt this new identity - it's who people are now - shallow, hollow puppets for LLMs to fill in. And to think of the irony: mother Nature perfected this super-efficient, low-energy and highly capable thinking machine, which each and every one of us holds in their skull. It already put us on the moon once, before we even had a semblance of a functioning computer! And we choose to throw it away, for fucking what? Verbal diarrhea and pain-inducing coloured walls of text?

All so some antisocial VC-funded "AI founder" can call themselves a tech visionary?

reply
semiquaver
1 hour ago
[-]
Dead internet. Twitter is 95% bots now, especially when it comes to any topic relevant to corporations.
reply
dawnerd
28 minutes ago
[-]
And when called out, they'll use some excuse like "oh, I use it to fix grammar or translation". No, it's completely obvious they're just being that lazy. I'd rather read comments with mistakes than LLM slop.
reply
dgellow
1 hour ago
[-]
Yep, path of least resistance, unfortunately. Any recommendations, other than Discord, for where to have meaningful online interactions with actual humans?
reply
exitb
1 hour ago
[-]
Also, at some point someone will figure out how to reliably produce non-smelly LLM replies.
reply
mitchitized
2 hours ago
[-]
You're absolutely right!

(sorry couldn't resist)

reply
SpicyLemonZest
2 hours ago
[-]
If I were a sociopath who didn’t care at all about the commons I’d be ruining by doing so, I suppose I’d find it intellectually interesting to set up a ClaudeyLemonZest and see how people react to various settings.
reply
smcg
1 hour ago
[-]
Come join the party at ClaudeyLemonParty
reply
Wowfunhappy
4 minutes ago
[-]
Does anyone have a copy of the files? It would be interesting to see!
reply
suyavuz
1 hour ago
[-]
People have become so lazy since AI. They don't even check what they commit.
reply
sharts
2 minutes ago
[-]
They don't check because the expectation coming from higher-ups is now to commit and merge often.
reply
Cthulhu_
1 hour ago
[-]
Anything that goes to production should have a 4-6+ eyes rule: at least one reviewer who can review the changes in isolation.

If tools or LLMs can help them with it then that's fine, but it should always be at least two humans involved, one making changes, one verifying, and if something like this happens, they're both culpable. Not that they should be blamed for it per se, but the process and their way of working should be reviewed.

reply
dawnerd
23 minutes ago
[-]
I cringe whenever someone suggests just having an agent review because "it knows code better". An AI agent wouldn't catch a lot of things a human would flag. And before someone says "you just need to prompt it better": that's a huge amount of work for large projects, and you're still essentially begging it to do what you want.
reply
throwatdem12311
11 minutes ago
[-]
I have not encountered anything more soul-crushing in my entire career than spending hours going over LLM-generated slop that was vomited out by a contractor in Pakistan who doesn't give a shit, only to have the review itself fed in as a re-prompt, get the same 2000-line ball of spaghetti back with even more issues, and go back and forth until I just give up and approve it.

No, AI code review doesn't help. Claude can't even give me correct line numbers 80% of the time - it literally just makes them up - and more than half of it is false-positive BS anyway.

reply
dawnerd
6 minutes ago
[-]
Yep, I've had to approve bad code too due to timelines, and now our codebase has so much tech debt it doesn't even matter anymore. Worse, as new people work on the code, the LLMs pick up the bad code and it's been spiraling from there.
reply
doctorwho42
42 minutes ago
[-]
The problem is that humans inherently fill in data in what they process from the world.

Our brain is designed to fill in gaps; it's why memory is so blurry when it comes to reciting the facts of what we saw at a trial.

It's why you could swear you saw "x" in the production software you were about to push. But it really comes down to expectations - and those expectations help reduce cognitive load/increase cognitive efficiency (resource usage).

So as more and more people get used to using AI, you will see these mistakes occur more frequently. Because it's how our brains work.

reply
fusslo
2 hours ago
[-]
to be honest, for some reason I expected most of apple to eschew claude/ai coding.

I'm not sure why. It just doesn't feel very Apple-like

reply
sharts
1 minute ago
[-]
Feels like the most apple-like thing ever. Everyone seems to have differing perceptions of apple.
reply
tracerbulletx
17 minutes ago
[-]
Some people are living in a different universe. Every single tech company I know is pivoting their entire company to AI-based software development. It's in the performance evals, the wallet is fully open to spend tokens on experiments, every practice and every process is open for re-evaluation. It's all gas no breaks everywhere. The conversation on the internet does not seem to realize this. Or is in denial.
reply
corpoposter
8 minutes ago
[-]
I can’t tell if “all gas no breaks” was intentional or not, but “no breaks” does seem to be a part of the culture shift within big tech around AI.
reply
ryandrake
9 minutes ago
[-]
I think OP's comment comes from the "Think Different" mysticism that used to be around Apple. You'd think that if there was one company on the planet not embracing slop, it would be Apple, and the realization that it's not the case can be a bummer.
reply
alex43578
2 hours ago
[-]
Because unlike Apple Intelligence, Claude is useful?
reply
dyauspitr
14 minutes ago
[-]
Why? It’s 1000x faster than most developers and can handle pretty hard problems.
reply
fantasizr
10 minutes ago
[-]
we'll see if the tradeoff for speed is quality soon enough.
reply
Cthulhu_
1 hour ago
[-]
I'm also not sure why you'd think that. Apple's been at the forefront of "AI" for years now, running models locally and optimizing their CPUs for local workloads to e.g. identify people, places and pets (much appreciated lmao), create slideshows, and subtly improve photos made on the device.
reply
dyauspitr
12 minutes ago
[-]
The photo organization is nice, but that being said, if you try to use the on-device Apple Foundation models you quickly find they are totally useless.
reply
basisword
2 hours ago
[-]
They've had it built in to Xcode for a while now, and I imagine internally a lot longer.
reply
hilti
3 hours ago
[-]
Dozens of comments, but not a single "What was in their Claude.md"
reply
dogma1138
2 hours ago
[-]
The "what" is in the screenshots…
reply
Cthulhu_
1 hour ago
[-]
Screenshots aren't very accessible though.
reply
rob
1 hour ago
[-]
Claude can convert them to text for you.
reply
dgellow
1 hour ago
[-]
You’re expected to read the ~article~ twitter thing :)
reply
blitzar
46 minutes ago
[-]
"DO NOT include the Claude.md file in the app bundle"
reply
neko_ranger
2 hours ago
[-]
So much FUD (and bot replies dogpiling on?) in that thread. It's just a file that specifies some structure for the project. Nothing super secret.
reply
fantasizr
2 minutes ago
[-]
Mostly it's a knock on their lack of attention to detail, which was sorta Jobs' thing. Is the culture becoming lax in this new era?
reply
fidotron
2 hours ago
[-]
X somehow manages to get worse for this as time goes on.

Seems like at some point most of the actual humans just gave up on replying.

reply
sunaookami
19 minutes ago
[-]
Why bother replying if your post gets buried under AI bots with Twitter Blue (or whatever it's called now) that just try to farm engagement for money? Revenue sharing is a big mistake for every platform because it incentivizes engagement slop. Ordering by "Newest first" often gives you more human replies.
reply
klustregrif
1 hour ago
[-]
It's not super secret, no. It's just embarrassing that they don't have instructions telling the AI agents that code and push their deployments not to include the Claude.md files. It demonstrates that they haven't fed their AI prompts through an AI yet, because it would have added a clause for that.
reply
caymanjim
1 hour ago
[-]
Have you never used Claude? It regularly ignores directives, no matter how they're worded or how many times they're repeated. It's also hierarchical: org-wide rules would be in a higher-level directory than repo rules or component rules. This is obviously just a tiny snippet of prompts.
reply
christkv
2 hours ago
[-]
I really hope it's not churning out massive amounts of code for macOS and iOS, or we are in for some pretty interesting times in the next year or so.
reply
nailer
2 hours ago
[-]
reply
traceroute66
2 hours ago
[-]
Whilst tempting, I think it is important not to read too much into this.

It is no secret that Apple has an enormous R&D budget.

It is no secret that Apple operates with hundreds of siloed teams in order to maintain individual domain expertise. The teams then come together in a collaborative manner to bring together the final products.

So yes, it is likely true that SOME teams use SOME LLM for SOME tasks. It is a viable argument from R&D and other perspectives. Apple is an enormous multinational company, it is unlikely they have zero-AI on-site.

What is guaranteed NOT to be the case is that Apple is somehow vibecoding company-wide. Old-school engineering is too important for Apple.

I'm sure journalists and Anthropic would love to have you believe otherwise, but I think we need to keep our feet on the ground here and accept the reality is more old-school.

After all, as others have pointed out here already: whilst the rest of Silicon Valley has been shoveling truckloads of cash at AI, Apple have been patiently sitting, watching the bandwagon trundle along the rails.

reply
einsteinx2
2 hours ago
[-]
> It is no secret that Apple operates with hundreds of siloed teams in order to maintain individual domain expertise. The teams then come together in a collaborative manner to bring together the final products.

Having worked there this is a perfect description of the organization from my experience.

> So yes, it is likely true that SOME teams use SOME LLM for SOME tasks. It is a viable argument from R&D and other perspectives.

> What is almost guaranteed NOT to be the case is that Apple is somehow vibecoding company-wide.

100% agree

reply
engineer_22
1 hour ago
[-]
The risk of embarrassment is too great to be vibe coding. Apple's brand is TRUST, and people don't trust AI... A slip like this erodes their brand.
reply
rvnx
1 hour ago
[-]
Not really, almost all active software developers use AI nowadays.

  The research surveyed 121,000 developers across 450+ companies. A striking 92.6% of them use an AI coding assistant at least once a month, and roughly 75% use one weekly
It's weird to believe that large corporations should be ashamed to use AI.

It's a standard engineering practice; otherwise it's like refusing autocomplete because autocomplete is not right 100% of the time.

reply
einsteinx2
1 hour ago
[-]
Especially considering that Apple added it as a headline feature to the latest Xcode releases…
reply
mushufasa
2 hours ago
[-]
Is it really a mistake? OpenAI's own agent SDK also has a Claude.md file. That's not an indication that OpenAI internally use Claude, rather, it's there because the SDK has multi-model support.
reply
klustregrif
1 hour ago
[-]
It was a mistake yes. And they corrected it. Why would you assume they would do this intentionally?
reply
nipponese
24 minutes ago
[-]
Are we going to keep on brow-beating vibe coders from here?
reply
embedding-shape
1 hour ago
[-]
I don't think you need to even see any files to realize much of Apple's software is vibe-coded by now.

Had some issues where my monitor apparently saw a connection to my Mac Mini, but the Mac Mini displayed black; it had somehow gotten out of sync with my monitor. Sleeping the display controller and then waking it solved it.

Gathered a bunch of data, wanting to submit a report, since I've been an Apple Developer Program member for like two days now, and I wanna be a good c̶u̶s̶t̶o̶m̶e̶r̶ user, so I opened up Feedback Assistant.

It asks me for my email, I input it, press enter. A password input appears, but keyboard focus doesn't move there automatically. I know it's such a tiny nitpick practically, but tiny stuff like this makes it so obvious that not a single person actually tried this UX. 10-15 years ago, Apple would never release something that isn't perfect, but now there are these UX rough edges absolutely everywhere across the OS.

I ended up not logging in at all, and instead wrote my fix into a tiny fix-display.swift file which I'll run whenever it happens.

reply
SparkyMcUnicorn
54 minutes ago
[-]
I don't think we get to blame these issues on "vibe coding", they've been around for too long.
reply