> Apple runs on Anthropic at this point. Anthropic is powering a lot of the stuff Apple is doing internally in terms of product development, a lot of their internal tools…They have custom versions of Claude running on their own servers internally.
--Mark Gurman, Bloomberg https://x.com/tbpn/status/2016911797656367199
Apple seems to purposefully have decided to sit out the arms race.
Probably a smart time to rent rather than buy if they plan on buying in a downturn.
Okay, but why is the Siri team sitting out transformers? I really want to move past the "Dragon NaturallySpeaking" experience with a bolted-on decision tree.
Who’s doing it better? I have yet to hear from a Google or Amazon user who has a transformatively better experience, and I think that’s why they haven’t jumped so far: they have hundreds of millions of users with daily habits they don’t want to lightly disturb.
I think it's the same reason macOS and iOS have degraded a lot in UX terms over the past decade: Apple's focus shifted towards hardware independence.
The 2010s were marked by Intel's lazy product lineup, year after year pumping out rehashes of older products, iterating on its 14nm lithography with increasingly minor architectural improvements until AMD overtook it. In the process, Apple's partnership with Intel became a liability it had to solve, and the push to a unified ARM architecture was no small feat.
If you ask me, it wasn't justified to degrade the user experience for the sake of that focus. It's a trillion-dollar company, and has been for a while. Sure, it could have tackled both, but what do I know.
In any case I think it explains really well why Siri feels so abandoned.
I dunno, Apple has always had a pretty high level of hardware independence, and one could imagine that even if Intel had kept producing great chips for longer, the ARM architecture would have replaced it eventually. Certainly the timeline got shifted (and I'm glad for it), but I don't know if that really impacted Siri. If anything, it seems like Siri got pushed to the bottom of the pile in favor of projects like the Apple Car and the Vision Pro OS on one side and the demand to increase services revenue on the other.
Gemini will be replacing the legacy Siri:
https://blog.google/company-news/inside-google/company-annou...
I'm suspicious of that take from Mark Gurman. That's a lot of detail around pricing and "holding Apple over a barrel" as it relates to the Siri deal; it seems like a nice PR spin from Anthropic.
Anthropic probably couldn't give the uptime guarantees that Google can, right?
Apple is a pretty difficult company to deal with on a B2B basis.
If you have terms that conflict with theirs, they aren’t very flexible. Anthropic can be similarly difficult, and their needs from a business perspective probably don’t align with Siri. I would imagine that Google has a more flexible, long-term approach to absorbing some risk in a revenue-share arrangement than Anthropic, which generally wants cash.
Anthropic’s only purpose is to juice whatever KPIs are gonna increase their IPO market cap.
Tbh I thought their purpose was to power the war machine
Gurman might be the only leaker in tech who, so far, doesn’t seem to fuck around: low miss rate, rarely exaggerates. Of course that could change, and he could always get insider info that is wrong.
The reporting says it's running on their own hardware.
Internal dev tools, but the point I'm making relates to the discussion about choosing Gemini over Claude for their consumer-facing products.
I don't think you need to even see any files to realize much of Apple's software is vibe-coded by now.
Had an issue where my monitor apparently saw a connection to my Mac Mini, but the Mac Mini displayed black; the two had somehow gotten out of sync, and sleeping the display controller and then waking it solved it.
I gathered a bunch of data and wanted to submit a report. Since I've been an Apple Developer Program member for like two days now, and I wanna be a good c̶u̶s̶t̶o̶m̶e̶r̶ user, I opened up Feedback Assistant.
It asks me for my email; I input it and press enter. A password input appears, but keyboard focus doesn't move there automatically. I know it's such a tiny nitpick practically, but tiny shit like this makes it so obvious that not a single person actually tried this UX. 10-15 years ago, Apple would never release something that wasn't perfect, but now there are these UX edges absolutely everywhere across the OS.
I ended up not logging in at all and wrote my fix into a tiny fix-display.swift file, which I'll run when it happens instead.
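For what it's worth, a script like that could plausibly be as simple as shelling out to `pmset displaysleepnow` and `caffeinate -u` (both standard macOS utilities). This sketch is my own guess at what such a fix-display.swift might contain, not the commenter's actual code:

```swift
import Foundation

// Run an external tool and return its exit status (-1 if it failed to launch).
@discardableResult
func run(_ tool: String, _ args: [String]) -> Int32 {
    let p = Process()
    p.executableURL = URL(fileURLWithPath: tool)
    p.arguments = args
    do {
        try p.run()
        p.waitUntilExit()
        return p.terminationStatus
    } catch {
        return -1
    }
}

// Put the display to sleep, give the controller a moment to reset,
// then wake it by asserting user activity for one second (macOS only).
run("/usr/bin/pmset", ["displaysleepnow"])
Thread.sleep(forTimeInterval: 2)
run("/usr/bin/caffeinate", ["-u", "-t", "1"])
```

Run it with `swift fix-display.swift` whenever the monitor and display controller get out of sync.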
Unrelated:
Yuck. a lot of those replies have LLM smells. Do people love being a hollow puppet for LLMs to fill in? Have people lost their identity?
It's not about contributing to the conversation — it's about the fake internet points.
It's not about the fake internet points — it's about manipulating people to support companies they otherwise wouldn't.
You're absolutely right!
(sorry couldn't resist)
If I were a sociopath who didn’t care at all about the commons I’d be ruining by doing so, I suppose I’d find it intellectually interesting to set up a ClaudeyLemonZest and see how people react to various settings.
Come join the party at ClaudeyLemonParty
Is it really a mistake? OpenAI's own agent SDK also has a Claude.md file. That's not an indication that OpenAI internally use Claude, rather, it's there because the SDK has multi-model support.
It was a mistake, yes, and they corrected it. Why would you assume they did it intentionally?
Dozens of comments, but not a single "What was in their Claude.md"
The "what" is in the screenshots….
Whilst tempting, I think it is important not to read too much into this.
It is no secret that Apple has an enormous R&D budget.
It is no secret that Apple operates with hundreds of siloed teams in order to maintain individual domain expertise. The teams then come together in a collaborative manner to bring together the final products.
So yes, it is likely true that SOME teams use SOME LLM for SOME tasks. It is a viable argument from R&D and other perspectives. Apple is an enormous multinational company, it is unlikely they have zero-AI on-site.
What is guaranteed NOT to be the case is that Apple is somehow vibecoding company-wide. Old-school engineering is too important for Apple.
I'm sure journalists and Anthropic would love to have you believe otherwise, but I think we need to keep our feet on the ground here and accept the reality is more old-school.
After all, as others have pointed out already here ... whilst the rest of Silicon Valley has been shoveling truckloads of cash at AI, Apple have been patiently sitting, watching the bandwagon trundle along the rails.
> It is no secret that Apple operates with hundreds of siloed teams in order to maintain individual domain expertise. The teams then come together in a collaborative manner to bring together the final products.
Having worked there this is a perfect description of the organization from my experience.
> So yes, it is likely true that SOME teams use SOME LLM for SOME tasks. It is a viable argument from R&D and other perspectives.
> What is almost guaranteed NOT to be the case is that Apple is somehow vibecoding company-wide.
100% agree
The risk of embarrassment is too great to be vibe coding. Apple's brand is TRUST, and people don't trust AI... A slip like this erodes their brand.
Original link: https://x.com/aaronp613/status/2049986504617820551?s=20
To be honest, for some reason I expected most of Apple to eschew Claude/AI coding.
I'm not sure why. It just doesn't feel very Apple-like
Because unlike Apple Intelligence, Claude is useful?
They've had it built into Xcode for a while now, and I imagine internally a lot longer.
I really hope it's not churning out massive amounts of code for macOS and iOS, or we are in for some pretty interesting times in the next year or so.
So much FUD (and bot replies dogpiling on?) in that thread. It's just a file that specifies some structure for the project. Nothing super secret.
It’s not super secret, no. It’s just embarrassing that they don’t have instructions in their coding agents to not push the Claude.md files when deploying. It demonstrates that they haven’t fed their AI prompts through AI yet, because it would have added a clause for that.
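A guard like that doesn't even need an agent instruction; assuming a git-based deploy, one ignore rule would keep the scratch files out of pushes entirely (illustrative sketch, not Apple's actual config):

```
# .gitignore — keep agent scratch files out of commits and deploys
CLAUDE.md
**/CLAUDE.md
```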
X somehow manages to get worse for this as time goes on.
Seems like at some point most of the actual humans just gave up on replying.