The key hurdle for AI to leap is establishing trust with users. No one trusts the big players (for good reason), and that is causing serious anxiety among investors. It seems Anthropic acknowledges this and is looking to make trust a critical part of its marketing messaging by promising no ads or product placement. The problem is that serving ads is only one facet of trust. There are trust issues around privacy, intellectual property, transparency, training data, security, accuracy, and simply "being evil" that Anthropic's marketing doesn't acknowledge or address. Trust, on the scale they need, is going to be very hard for any of them to establish, if not impossible.
I feel like they are picking a lane. ChatGPT is great for chatbots and the like, but, as was discussed in a prior thread, chatbots aren't the be-all and end-all of AI or LLMs. Claude Code is the workhorse for me and most folks I know for AI-assisted development and business-automation tasks. Meanwhile, most folks I know who use ChatGPT are really replacing Google Search. This is where folks are trying to create llms.txt files to become more discoverable by ChatGPT specifically.
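For context on that discoverability effort: an llms.txt file is just a Markdown file served from the site root (e.g. https://example.com/llms.txt). A minimal hypothetical sketch following the proposed convention, with made-up URLs:

```markdown
# Example Store

> Hypothetical retailer. This file follows the proposed llms.txt convention
> so LLM-backed search tools can find our key pages.

## Products

- [Catalog](https://example.com/catalog.md): full product list with specs
- [Pricing](https://example.com/pricing.md): current prices and shipping

## Optional

- [About](https://example.com/about.md): company background
```

Whether ChatGPT specifically honors these files is an open question; the convention is a proposal, not an established standard.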
Anthropic proactively saying they will not pursue ad-based revenue is, I think, not just "one of the good guys" positioning; it suggests they may be stabilizing on a business model of both seat-based and usage-based subscriptions.
Either way, both companies are hemorrhaging money.
> ChatGPT is saying they will mark ads as ads and keep answers "independent," but that is not measurable. So we'll see.
Yeah I remember when Google used to be like this. Then today I tried to go to 39dollarglasses.com and accidentally went to the top search result which was actually an ad for some other company. Arrrg.
You may not like these sources, but everyone from the tomato throwers to the green-visor crowd agrees they are losing money. How and when they make up the difference is open to speculation.
> If we subtract the cost of compute from revenue to calculate the gross margin (on an accounting basis), it seems to be about 50% — lower than the norm for software companies (where 60-80% is typical) but still higher than many industries.
This will be an amusing post to revisit in the internet archives when or if they do introduce ads in the future but dressed up in a different presentation and naming. Ultimately the investors will come calling.
They are using this to virtue signal - but in reality it's just not compatible with their business model.
Anthropic is mainly focusing on B2B/enterprise and tool-use cases. In terms of active users, I'd guess Claude is a distant last, but in terms of enterprise/paying customers I wouldn't be surprised if they were ahead of the others.
I believe Perplexity is doing this already, but specifically for looking up products, which is how I use AI sometimes. I am wondering how long before eBay, Amazon etc partner with AI companies to give them more direct API access so they can show suggested products and what not. I like how AI can summarize things for me when looking up products, then I open up the page and confirm for myself.
Won't all the ad revenue come from commerce use cases ... and they seem to be excluding that from this announcement:
> AI will increasingly interact with commerce, and we look forward to supporting this in ways that help our users. We’re particularly interested in the potential of agentic commerce
Why bother with ads when you can just pay an AI platform to prefer products directly? Then every time an agentic decision occurs, the product preference is baked in, no human in the loop. AdTech will be supplanted by BriberyTech.
The only chance of that happening is if Altman somehow feels sufficiently shamed into abandoning the lazy enshittification track to monetization.
I don't think they have an accurate model for what they're doing - they're treating it like just another app or platform, using tools and methods designed around social media and app store analytics. They're not treating it like what it is, which is a completely novel technology with more potential than the industrial revolution for completely reshaping how humans interact with each other and the universe, fundamentally disrupting cognitive labor and access to information.
The total mismatch between what they're doing with it to monetize and what the thing actually means to civilization is the biggest signal yet that Altman might not be the right guy to run things. He's savvy and crafty and extraordinarily good at the palace intrigue and corporate maneuvering, but if AdTech is where they landed, it doesn't seem like he's got the right mental map for AI, for all he talks a good game.
There are a number of different LLMs - no reason they all need to do things the same way. If you are replacing web search, then ads are probably how you earn money. However, if you are replacing the work people do for a company, it makes more sense to charge for the work. I'm not sure if their current token charges are the right ones, but it seems like a better track.
yeah it’s either that or openai has scored a massive own goal… im leaning toward your view, but hoping that prediction does not manifest. i would be fine with all sorts of shit in life being more expensive but ad-free… but this is certainly a privileged take and i recognize that.
I really hope Anthropic turns out to be one of the 'good guys', or at least a net positive.
It appears they trend in the right direction:
- Have not kissed the Ring.
- Oppose blocking AI regulation that others support (e.g. they do not support banning state AI laws [2]).
- Committing to no ads.
- Willing to risk a defense department contract over objections to use for lethal operations [1].
The things that are concerning:
- Palantir partnership (I'm unclear about what this actually is) [3]
- Have shifted stances as competition increased (e.g. seeking authoritarian investors [4])
It's inevitable that they will have to compromise on values as competition increases, and I struggle to parse the difference between marketing and actually caring about values. If an organization cares about values, it's suboptimal not to highlight that at every point via marketing. The commitment to no ads is obviously good PR, but if it comes from a place of values, it's a win-win.
I'm curious, how do others here think about Anthropic?
Being the 'good guy' is just marketing. It's like a unique selling point for them. Even their name alludes to it. They will only keep it up as long as it benefits them. Just look at the comments from their CEO about taking Saudi money.
Not that I've got some sort of hate for Anthropic. Claude has been my tool of choice for a while, but I trust them about as much as I trust OpenAI.
How do you parse the difference between marketing and having values? I have difficulty with that, and I would love to understand how people can be confident one way or the other. In many instances, the marketing becomes so disconnected from actions that it's obvious. That hasn't happened with Anthropic for me.
I am a fairly cynical person. Anthropic could have made this statement at any time, but they chose to do it when OpenAI says they are going to start showing ads, so view it in that context. They are saying this to try to get people angry about ads to drop OpenAI and move to Anthropic. For them, not having ads supports their current objective.
When you accept the amount of investments that these companies have, you don't get to guide your company based on principles. Can you imagine someone in a boardroom saying, "Everyone, we can't do this. Sure it will make us a ton of money, but it's wrong!" Don't forget, OpenAI had a lot of public goodwill in the beginning as well. Whatever principles Dario Amodei has as an individual, I'm sure he can show us with his personal fortune.
Parsing it is all about intention. If someone drops coffee on your computer, should you be angry? It depends on if they did it on purpose, or it was an accident. When a company posts a statement that ads are incongruous to their mission, what is their intention behind the message?
I believe in "too big to have values". No company that has grown beyond a certain size has ever had true values. Only shareholder wealth maximisation goals.
Companies, not being sentient, don't have values; only their leaders/employees do. The question then becomes "when are the humans free to implement their values in their work, and when aren't they?" You need to inspect ownership structure, size, corporate charter, and so on, and realize that it varies with time and situation.
No company has values. Anthropic's resistance to the administration is only as strong as their incentive to resist, and that incentive is money. Their execs love the "Twitter vs Facebook" comparison that makes Sam Altman look so evil and gives them a relative halo effect. To an extent, Sam Altman revels in the evil persona that makes him appear like the Darth Vader of some amorphous emergent technology. Both are very profitable optics to their respective audiences.
If you lend any amount of real-world credence to the value of marketing, you're already giving the ad what it wants. This is (partially) why so many businesses pivoted to viral marketing and Twitter/X outreach that feels genuine, but requires only basic rhetorical comprehension to appease your audience. "Here at WhatsApp, we care deeply about human rights!" *audience loudly cheers*
I mean, yes and. Companies may do things for broadly marketing reasons, but that can have positive consequences for users and companies can make committed decisions that don't just optimize for short term benefits like revenue or share price. For example, Apple's commitment to user privacy is "just marketing" in a sense, but it does benefit users and they do sacrifice sources of revenue for it and even get into conflicts with governments over the issue.
And company execs can hold strong principles and act to push companies in a certain direction because of them, although they are always acting within a set of constraints and conflicting incentives in the corporate environment and maybe not able to impose their direction as far as they would like. Anthropic's CEO in particular seems unusually thoughtful and principled by the standards of tech companies, although of course as you say even he may be pushed to take money from unsavory sources.
Basically it's complicated. 'Good guys' and 'bad guys' are for Marvel movies. We live in a messy world and nobody is pure and independent once they are enmeshed within a corporate structure (or really, any strong social structure). I think we all know this, I'm not saying you don't! But it's useful to spell it out.
And I agree with you that we shouldn't really trust any corporations. Incentives shift. Leadership changes. Companies get acquired. Look out for yourself and try not to tie yourself too closely to anyone's product or ecosystem if it's not open source.
> and even get into conflicts with governments over the issue.
To be fair, they also cooperate with the US government for immoral dragnet surveillance[0], and regularly assent to censorship (VPN bans, removed emojis, etc.) abroad. It's in both Apple and most governments' best interests to appear like mortal enemies, but cooperate for financial and domestic security purposes. Which for all intents and purposes, it seems they do. Two weeks after the San Bernardino kerfuffle, the iPhone in question was cracked and both parties got to walk away conveniently vindicated of suspicion. I don't think this is a moral failing of anyone, it's just the obvious incentives of Apple's relationship with their domestic fed. Nobody holds Apple's morality accountable, and I bet they're quite grateful for that.
At the end of the day, the choice of companies we interact with is pretty limited. I much prefer to interact with a company that at least pays lip service to being 'good', as opposed to a company that is actively, just plain evil and ok with it.
That's the main reason I stick with iOS. At least Apple talks about caring about privacy. Google/Android doesn't even bother to talk about it.
They are the most anti-open-weights AI company on the planet: they don't want to release weights, and they don't want anyone else to either. They just hide behind the safety-and-alignment blanket, saying no models are safe outside of theirs; they won't even release their decommissioned models. It's just a money play. Companies don't have ethics; the policies change based on money and who runs the place. Look at Google: their mantra was once "Don't be evil."
Also, Codex CLI and Gemini CLI are open source; Claude Code never will be. It's their moat: even though, as its creator says, it's 100% written by AI, it will never be opened up. Their model is: you can use ours, be it the model or Claude Code, but don't ever try to replicate it.
For the sake of me seeing if people like you understand the other side, can you try steelmanning the argument that open weight AI can allow bad actors to cause a lot of harm?
I would not consider myself an expert on LLMs, at least not compared to the people who actually create them at companies like Anthropic, but I can have a go at a steelman:
LLMs allow hostile actors to do wide-scale damage to society by significantly decreasing the marginal cost and increasing the ease of spreading misinformation, propaganda, and other fake content. While this was already possible before, it required creating large troll farms of real people, semi-specialized skills like photoshop, etc. I personally don't believe that AGI/ASI is possible through LLMs, but if you do that would magnify the potential damage tenfold.
Closed-weight LLMs can be controlled to prevent or at least reduce the harmful actions they are used for. Even if you don't trust Anthropic to do this alone, they are a large company beholden to the law and the government can audit their performance. A criminal or hostile nation state downloading an open weight LLM is not going to care about the law.
This would not be a particularly novel idea - a similar reality is already true of other products and services that can be used to do widespread harm. Google "Invention Secrecy Act".
I wouldn't mind doing my best steelman of open-source AI if he responds (seriously, I'd try).
Also, your comment is a bit presumptuous. I think society has been way too accepting of relying on services behind an online API, and it usually does not benefit the consumer.
I just think it's really dumb that people argue passionately about open weight LLMs without even mentioning the risks.
Since you asked for it, here is my steelman argument:
Everything can cause harm - it depends on who is holding it, how determined they are, how easy it is, and what the consequences are. Open source will make this super easy and cheap.
1. We are already seeing AI slop everywhere: social media content, fake impersonation. If the revenue from what's made is larger than the cost of making it, this is bound to happen. Open models can be run locally with no control and can be fine-tuned to cause damage, whereas closed-source models are harder to abuse since vendors might block it.
2. A less skilled person can exploit systems or create harmful code when they otherwise could not have.
3. Remove the guardrails from an open model and jailbreak it in ways that can't be observed anymore (like an unknown zero-day attack), since it may be running privately.
4. Almost anything digital can be faked or manipulated from the original, or overwhelmed with false narratives so they rank better than the real thing in search.
They are the only AI company more closed than OpenAI, which is quite a feat. Any "commitment" they make should only be interpreted as marketing until they rectify this. The only "good guys" in AI are the ones developing inference engines that let you run models on your own hardware. Any individual model has some problems, but by making models fungible and fully under the user's control (access to weights), AI becomes a possible positive force for the user.
The problem is that "good" companies cannot succeed in a landscape filled with morally bad ones, when you are in a time of low morality being rewarded. Competing in a rigged market by trying to be 100% morally and ethically right ends up in not competing at all. So companies have to pick and choose the hills they fight on. If you take a look at how people are voting with their dollars by paying for these tools...being a "good" company doesn't seem to factor much into it on aggregate.
exactly. you can't compete morally when cheating, doing illegal things, and supporting the bad guys are the norm. Hence, I hope open models will win in the long term.
Similar to Oracle vs Postgres, or some obscure closed-source caching product vs Redis. One day I hope we will have very good SOTA open models where closed models compete to catch up (not saying Oracle is playing catch-up with Pg).
>I really hope Anthropic turns out to be one of the 'good guys', or at least a net positive.
There are no good guys, Anthropic is one of the worst of the AI companies. Their CEO is continuously threatening all of the white collar workers, they have engineering playing the 100x engineer game on Xitter. They work with Palantir and support ICE. If anything, chinese companies are ethically better at this point.
When powerful people, companies, and other organizations like governments do a whole lot of very good and very bad things, figuring out whether this rounds to “more good than bad” or “more bad than good” is kind of a fraught question. I think Anthropic is still in the “more good than bad” range, but it doesn’t make sense to think about it along the lines of heros versus villains. They’ve done things that I put in the “seems bad” column, and will likely do more. Also more good things, too.
They’re moving towards becoming load-bearing infrastructure, and then answering specific questions about what you should do about it becomes rather situational.
> I'm curious, how do others here think about Anthropic?
I’m very pleased they exist and have this mindset and are also so good at what they do. I have a Max subscription - my most expensive subscription by a wide margin - and don’t resent the price at all. I am earnestly and perhaps naively hoping they can avoid enshittification. A business model where I am not the product gives me hope.
Given that LLMs essentially stole their business models from public (and not-so-public!) works, the ideal end state is that they all die in favor of something we can run locally.
What year do you think it is? The US is actively aggressive in multiple areas of the world. As a non US citizen I don’t think helping that effort at the expense of the rest of the world is good.
Their move of disallowing alternative clients to use a Claude Code subscription pissed me off immensely. I triggered a discussion about it yesterday[0]. It’s the opposite of the openness that led software to where it is today. I’m usually not so bothered about such things, but this is existential for us engineers. We need to scrutinise this behaviour from AI companies extra hard or we’re going to experience unprecedented enshittification. Imagine a world where you’ve lost your software freedoms and have no ability to fight back because Anthropic’s customers are pumping out 20x as many features as you.
Anthropic's move of disallowing opencode is quite offputting to me because there really isn't a way to interpret it as anything other than a walled-garden move that abuses their market position to deliberately lock in users.
Opencode ought to have similar usage patterns to Claude Code, being a very similar software (if anything Opencode would use fewer tokens as it doesn't have some fancy features from Claude Code like plan files and background agents). Any subscription usage pattern "abuses" that you can do with Opencode can also be done by running Claude Code automatically from the CLI. Therefore restricting Opencode wouldn't really save Anthropic money as it would just move problem users from automatically calling Opencode to automatically calling CC. The move seems to purely be one to restrict subscribers from using competing tools and enforce a vertically-integrated ecosystem.
In fact, their competitor OpenAI has already realized that Opencode is not really dissimilar from other coding agents, which is why they are comfortable officially supporting Opencode with their subscription in the first place. Since Codex is already open-source and people can hack it however they want, there's no real downside for OpenAI to support other coding agents (other than lock-in). The users enter through a different platform, use the service reasonably (spending a similar amount of tokens as they would with Codex), and OpenAI makes profit from these users as well as PR brownie points for supporting an open ecosystem.
In my mind being in control of the tools I use is a big feature when choosing an AI subscription and ecosystem to invest into. By restricting Opencode, Anthropic has managed to turn me off from their product offerings significantly, and they've managed to do so even though I was not even using Opencode. I don't care about losing access to a tool I'm not using, but I do care about what Anthropic signals with this move. Even if it isn't the intention to lock us in, they are certainly acting as if they do.
The thing is, I am usually a vote-with-my-wallet person who would support Anthropic for its values even if they fall behind significantly compared to competitors. Now, unless they reverse course on banning open-source AI tools, I will probably revert to simply choosing whichever AI company is ahead at any given point.
I don't know whether Anthropic knows that they are pissing off their most loyal fanbase of conscientious consumers a lot with these moves. Sure, we care about AI ethics and safety, but we also care about being treated well as consumers.
I don’t know about “good guys”, but they seem highly focused on coding rather than a general-purpose chatbot (hard to overcome ChatGPT’s mindshare there), so they have a customer base that is more willing to pay for usage and are therefore less likely to need to add an ad revenue stream. So yes, so far I would say they are on stronger ground than the others.
Besides editorial control (which OpenAI openly flagged, saying they want answers to remain unbiased), there is a deeper issue with ad-based revenue models in AI: margins. If you want ads to cover compute and make margins (looking at roughly $50 ARPU at mature FB/GOOG levels), you have two levers: sell more advertising, or offer dumber models.
This is exactly what ChatGPT 5 was about. By tweaking both the model selector (thinking/non-thinking) and using a significantly sparser thinking model (capping max spend per conversation turn), they massively controlled costs, but did so at the expense of intelligence, responsiveness, curiosity, skills, and all the things I valued in o3. This was the point where I dumped OpenAI and went with Claude.
This business model issue is a subtle one, but it's a key reason why an advertising revenue model is not compatible (or competitive!) with "getting the best mental tools": margin maximization selects against businesses optimizing for intelligence.
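The two levers can be made concrete with a back-of-envelope sketch. All numbers below are hypothetical illustrations for the argument, not real figures from any AI company:

```python
# Back-of-envelope ad-supported margin model. Every number here is a
# hypothetical illustration, not a real figure from any AI company.

def gross_margin(ad_arpu: float, compute_cost_per_user: float) -> float:
    """Gross margin as a fraction of ad revenue per user."""
    return (ad_arpu - compute_cost_per_user) / ad_arpu

# Lever 1: sell more advertising (raise ARPU, same model).
baseline = gross_margin(ad_arpu=50.0, compute_cost_per_user=40.0)  # 0.20
more_ads = gross_margin(ad_arpu=70.0, compute_cost_per_user=40.0)  # ~0.43

# Lever 2: serve a sparser, cheaper ("dumber") model at the same ARPU.
dumber = gross_margin(ad_arpu=50.0, compute_cost_per_user=25.0)    # 0.50

# Under these toy numbers, cutting compute (i.e. intelligence) beats
# selling more ads - which is the selection pressure described above.
assert dumber > more_ads > baseline
```

The exact values don't matter; the point is that compute cost is the lever entirely under the vendor's control, so margin pressure pushes directly on model quality.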
The vast majority of people don't need smarter models and aren't willing to pay for a subscription. There's an argument to be made that ads on free users will subsidize the power users that demand frontier intelligence - done well this could increase OpenAI's revenue by an order of magnitude.
This is going to be tough to compete against - Anthropic would need to go stratospheric with their (low margin) enterprise revenue.
They are not trying to sell ads. They are trying to sell themselves as a monthly service. That is what I think of when they are trying to convince me to go there to think. I'd rather go think at Wikipedia.
Idk, brainstorming and ideating is my main use case for AI
I use it as codegen too but I easily have 20x more brainstorming conversations than code projects
Most non-tech people I talk to are finding value in it with traditional things. The main one I've seen flourish is travel planning. Booking became super easy, but full itinerary planning for a trip (hotels, restaurants, day trips/activities, etc.) has largely remained a manual thing that I see a lot of non-tech people using LLMs for. It's very good for open-ended plans too, which the travel sites have been horrible at. For instance, "I want to plan a trip to somewhere warm and beachy, I don't care about the dates or exactly where" - maybe I care about the budget up front, but most things I'm flexible on - those kinds of requests work well as a conversation.
Wikipedia is, of course very useful, but what it’s not good at is surfacing information I am unfamiliar with. Part of this problem is that Wikipedia editors are more similar to me, and more interested in similar things to me, than the average person writing text that appears online. Part of the problem is that the design of Wikipedia does not make it easy to stumble upon unexpected information; most links are to adjacent topics given they have to be relevant to the current article. But regardless, I’m much more likely to come across a novel concept when chatting with Claude, compared to browsing Wikipedia.
It’s so hard to succeed without selling ads. There’s an exponential growth aspect to these endeavors and ads add a lot of revenue, which investors like, so those who don’t can find that the lost revenue “multiplies” due to lower outside investment, lower stock price growth, etc.
I wish the financial aspects were different, because Anthropic is absolutely correct about ads being antithetical to a good user experience.
Anthropic is very big (the biggest AI co?) in B2B, where you don't have ads. Also, if they end up creating a datacenter full of geniuses, ads won't make sense either.
I really want to applaud Anthropic; I remain cautiously optimistic, but I’m not certain how long they will maintain this posture. I will say that the recent announcement from OpenAI has put me off from ChatGPT — I use Gemini occasionally, because it’s the devil I know. OpenAI has gone back and forth on their positions so many times in a way that feels truly hostile to their users.
100%. Love this approach by Anthropic. The Meta "monetization league" is assembling at OpenAI and doing what they've done best at Meta.
However, I do think we need to take Anthropic's word with a grain of salt, too. That they're fully working in the user's interest has yet to be proven; that trust would require a lot of effort to earn. Once a company intends to go public or does, incentives change: investors expect money, and throwing your users under the bus is a tried and tested way of increasing shareholder value.
I always found Anthropic to be trying hard to signal as one of the "good guys".
I wonder how they can get away without showing ads when ChatGPT apparently has to. Will the enterprise business be so profitable that ads are not required?
Maybe OpenAI is going for something different - democratizing access for the vast majority of people. Remember that ChatGPT is what people know about and what people use the free version of.
Who's to say that making ad money this way while also providing more access is the wrong choice?
Also, Claude is no match for ChatGPT in search. From my previous experience, ChatGPT is just way better at deep searches through the internet than Claude.
ChatGPT is providing a ridiculous amount of free service to gain/keep traction. Others also have free tiers, but to a much lesser extent. It's similar to Uber selling rides at a loss to win markets. It will get you traction, yes, but the bill has to be paid one day.
None of the AI companies are; they are all looking for those multi-billion-dollar deals to provide the backplane for services like Copilot and Siri.
Consumer chatbots are pure marketing; no company is going to make anything off those $20-per-month subscriptions to AI chatbots.
Good on Anthropic! I appreciate how deliberate they are about maintaining user trust. I've preferred Claude's responses through the API, so I don't imagine this would have affected me as much, but it is still nice to see.
I asked for this last week in an hn comment and people were pretty negative about it in the replies.
But I’m happy with this position and will cancel my ChatGPT subscription and push my family towards Claude for most things. This taste effect is what I think pushes Apple devices into households: power users making endorsements.
And I think that excess margin is enough to get past lowered ad revenue opportunity.
>An advertising-based business model would introduce incentives that could work against this principle.
I agree with this - I'm not so much worried that ChatGPT is going to silently insert advertising copy into model answers. I'm worried that advertising alongside answers creates bad incentives that then drive future model development. We saw Google Search go down this path.
I appreciate them taking a stance, even if nobody is asking for it. It would be great if it were less of a bad-faith effort.
It's great that Anthropic is targeting the businesses of the world. It's a little insincere to then declare "no ads", as if that decision would obviously be the same if the bulk of their users were free, non-paying users.
There are, as far as ads go, perfectly fine opportunities to do them in a limited way for limited things within chatbots. I don't know who they think they are helping by highlighting how to do it poorly.
> Anthropic is focused on businesses, developers, and helping our users flourish. Our business model is straightforward: we generate revenue through enterprise contracts and paid subscriptions, and we reinvest that revenue into improving Claude for our users. This is a choice with tradeoffs, and we respect that other AI companies might reasonably reach different conclusions.
Very diplomatic of them to say "we respect that other AI companies might reasonably reach different conclusions" while also taking a dig at OpenAI on their youtube channel
Anthropic probably saw how much money they made off of the Moltbot hype and figured that they don't need ad revenue. They can go a step further and build a marketplace for similar setups, paying the developers who make them via microtransactions per token.
I think this says a lot about the business approach of Anthropic compared to OpenAI. The vast number of free messages you get from OpenAI is so large that turning a profit on them seems impossible. Anthropic is growing more slowly, but it seems like they are not running a crazy deficit. They do not need to put ads or porn in their chatbot.
> There are many good places for advertising. A conversation with Claude is not one of them.
> ...but including ads in conversations with Claude would be incompatible with what we want Claude to be: a genuinely helpful assistant for work and for deep thinking.
Sadly, with my disillusionment with the tech industry, plus the trend of the past 20 years, this smacks of Larry Page's early statements about how bad advertising could distort search results and Google would never do that. Unsurprisingly, I am not able to find the exact quote with Google.
So they have "made a choice" to keep Claude ad-free, they say. "Today [...] Claude’s only incentive is to give a helpful answer", they say. But there's nothing that suggests that they can't make a different choice tomorrow, or whenever it suits them. It's not profitable to betray your trust too early.
I can't really imagine any statement they could give that would ease concerns that at some point in time they change their mind. But for now, it is a relief to read, even if this is a bit of marketing. The longer it goes without being enshittified the better.
It's nice that they don't show ads in conversations with Claude - but I wonder if they collect profiling information from my prompts and activities to sell to advertising firms.
Claude has posted a number of very sarcastic videos on Twitter that take a jab at ads https://x.com/claudeai/status/2019071118036942999 with the ending line "Ads are coming to AI. But not to Claude."
What makes Anthropic seem like early Apple is not just the unique taste, but the courage to stand firm with their vision of what the product should be.
If you broach subjects Anthropic considers sensitive (cybersecurity, dangerous biotech, etc.) Claude is very likely to shut you down completely and refuse to answer. As someone who works in cybersecurity and uses Claude daily, it is annoying to ask a question about some feature of Cobalt Strike and have it refuse to answer, even though the tool’s documentation is public. I would have cancelled my ChatGPT subscription by now if I didn’t need it once or twice a month to look up something when Claude refuses.
I would object to ads across the board in this case (though I’m generally fine with even targeted ads). It would create a customer-client relationship between companies paying to advertise and the AI company, creating an incentive for Anthropic to manipulate the Claude service on their behalf. As an end user that seeks input from Claude on purchasing decisions, I do not want there to be any question as to whether or not it was subtly manipulated.
What other interaction models exist for Claude given that Anthropic seems to be stressing so much that this is for "conversations"?
(Props for them for doing this, don't know how this is long-term sustainable for them though ... especially given they want to IPO and there will be huge revenue/margin pressures)
So apparently they're going to run a Super Bowl ad about ChatGPT having ads (without naming ChatGPT, of course). Has an ad that focuses only on something about your competitor ever been the best play? Talk about yourself.
Obviously it's a play, homing in on privacy/anti-ad concerns, like a Mozilla-type angle, but really it's a huge ad buy just to slag off the competitors. Worth the expense just to drive that narrative?
ah, good one. Was it Big Blue or Big Brother in general being referenced in that one? Either way, I suppose Apple didn't say much of anything about their own product in that one, whereas Anthropic is at least highlighting a feature.
The key hurdle for AI to leap is establishing trust with users. No one trusts the big players (for good reason) and it is causing serious anxiety among the investors. It seems Claude acknowledges this and is looking to make trust a critical part of their marketing messaging by saying no ads or product placement. The problem is that serving ads is only one facet of trust. There are trust issues around privacy, intellectual property, transparency, training data, security, accuracy, and simply "being evil" that Claude's marketing doesn't acknowledge or address. Trust, on the scale they need, is going to be very hard for any of them to establish, if not impossible.
Impossible. The only way to know what is happening is to have the code run on your own infra.
I feel like they are picking a lane. ChatGPT is great for chatbots and the like, but, as was discussed in a prior thread, chatbots aren't the be-all and end-all of AI or LLMs. Claude Code is the workhorse for me and most folks I know for AI-assisted development and business-automation tasks. Meanwhile, most folks I know who use ChatGPT are really replacing Google Search. This is where folks are trying to create llms.txt files to become more discoverable by ChatGPT specifically.
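For context, llms.txt is just a proposed markdown file served from a site's root that points crawlers and LLMs at the pages that matter; a minimal hypothetical example (all names and paths invented):

```
# Acme Eyewear

> Prescription eyeglasses retailer; all frames include standard lenses.

## Catalog
- [Frames](https://example.com/frames.md): full product list with prices
- [Lens options](https://example.com/lenses.md): coatings and upgrades

## Policies
- [Returns](https://example.com/returns.md): 30-day return window
```

Whether any given assistant actually fetches and honors the file is up to that vendor, which is exactly why the discoverability angle is speculative.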
You can see the very different response by OpenAI: https://openai.com/index/our-approach-to-advertising-and-exp.... ChatGPT is saying they will mark ads as ads and keep answers "independent," but that is not measurable. So we'll see.
For Anthropic to proactively say they will not pursue ad-based revenue is, I think, not just "one of the good guys" posturing but a sign that they may be stabilizing on a business model of both seat-based and usage-based subscriptions.
Either way, both companies are hemorrhaging money.
> ChatGPT is saying they will mark ads as ads and keep answers "independent," but that is not measurable. So we'll see.
Yeah I remember when Google used to be like this. Then today I tried to go to 39dollarglasses.com and accidentally went to the top search result which was actually an ad for some other company. Arrrg.
Both companies are making bank on inference
You may not like these sources, but everyone from the tomato throwers to the green-visor crowd agrees they are losing money. How and when they make up the difference is open to speculation:
https://www.wheresyoured.at/why-everybody-is-losing-money-on... https://www.economist.com/business/2025/12/29/openai-faces-a... https://finance.yahoo.com/news/openais-own-forecast-predicts...
Maybe on the API, but I highly doubt that the coding agent subscription plans are profitable at the moment.
For sure not
Could you substantiate that? Does that take into account training and staffing costs?
The parent specifically said inference, which does not include training and staffing costs.
That is the big question. Got reliable data on that?
(My gut feeling tells me Claude Code is currently underpriced with regards to inference costs. But that's just a gut feeling...)
https://www.wheresyoured.at/costs/
Their AWS spend being higher than their revenue might hint at the same.
Nobody has reliable data, I think it's fair to assume that even Anthropic is doing voodoo math to sleep at night.
> If we subtract the cost of compute from revenue to calculate the gross margin (on an accounting basis),2 it seems to be about 50% — lower than the norm for software companies (where 60-80% is typical) but still higher than many industries.
https://epoch.ai/gradient-updates/can-ai-companies-become-pr...
The context of that quote is OpenAI as a whole.
This will be an amusing post to revisit in the internet archives when or if they do introduce ads, dressed up in different presentation and naming. Ultimately the investors will come calling.
History is littered with challenger companies chest thumping that they’re never going to do the bad thing, then doing the bad thing like a year later.
"Don't be evil."
> The goals of the advertising business model do not always correspond to providing quality search to users.
- Sergey Brin and Lawrence Page, The Anatomy of a Large-Scale Hypertextual Web Search Engine, 1998
"OpenAI"
They are using this to virtue signal, but in reality ads just aren't compatible with their business model.
Anthropic is mainly focusing on B2B/enterprise and tool use cases. In terms of active users I'd guess Claude is a distant last, but in terms of enterprise/paying customers I wouldn't be surprised if they were ahead of the others.
See GitHub, which doesn't have display advertising.
History shows that software companies with a large chunk of their platform being free to use mainly survive thanks to ads.
It goes well beyond free to use models unfortunately.
I believe Perplexity is doing this already, but specifically for looking up products, which is how I use AI sometimes. I am wondering how long before eBay, Amazon etc partner with AI companies to give them more direct API access so they can show suggested products and what not. I like how AI can summarize things for me when looking up products, then I open up the page and confirm for myself.
Won't all the ad revenue come from commerce use cases ... and they seem to be excluding that from this announcement:
> AI will increasingly interact with commerce, and we look forward to supporting this in ways that help our users. We’re particularly interested in the potential of agentic commerce
Why bother with ads when you can just pay an AI platform to prefer products directly? Then every time an agentic decision occurs, the product preference is baked in, no human in the loop. AdTech will be supplanted by BriberyTech.
if llm ads become a real thing, let’s acknowledge that this is exactly what will happen in no uncertain terms.
The only chance of that happening is if Altman somehow feels sufficiently shamed into abandoning the lazy enshittification track to monetization.
I don't think they have an accurate model for what they're doing - they're treating it like just another app or platform, using tools and methods designed around social media and app store analytics. They're not treating it like what it is, which is a completely novel technology with more potential than the industrial revolution for completely reshaping how humans interact with each other and the universe, fundamentally disrupting cognitive labor and access to information.
The total mismatch between what they're doing with it to monetize and what the thing actually means to civilization is the biggest signal yet that Altman might not be the right guy to run things. He's savvy and crafty and extraordinarily good at the palace intrigue and corporate maneuvering, but if AdTech is where they landed, it doesn't seem like he's got the right mental map for AI, for all he talks a good game.
There are a number of different llms - no reason they all need to do things the same. If you are replacing web search then ads are probably how you earn money. However if you are replacing the work people do for a company it makes more sense to charge for the work. I'm not sure if their current token charges are the right one, but it seems like a better track.
yeah it’s either that or openai has scored a massive own-goal… im leaning toward your view, but hoping that prediction does not manifest. i would be fine with all sorts of shit in life being more expensive but ad-free… but this is certainly a privileged take and i recognize that.
My thoughts exactly. They are using the Google playbook of "don't be evil" until it becomes extremely profitable to be evil.
You really think the giant ad company would put ads into their product after saying they won't? You should strive to be less cynical.
I really hope Anthropic turns out to be one of the 'good guys', or at least a net positive.
It appears they trend in the right direction:
- Have not kissed the Ring.
- Oppose blocking AI regulation that others support (e.g. they do not support banning state AI laws [2]).
- Committing to no ads.
- Willing to risk defense department contract over objections to use for lethal operations [1]
The things that are concerning:
- Palantir partnership (I'm unclear about what this actually is) [3]
- Have shifted stances as competition increased (e.g. seeking authoritarian investors [4])
It's inevitable that they will have to compromise on values as competition increases, and I struggle to parse the difference between marketing and actually caring about values. If an organization cares about values, it's suboptimal not to highlight that at every point via marketing. The commitment to no ads is obviously good PR, but if it comes from a place of values, it's a win-win.
I'm curious, how do others here think about Anthropic?
[1]https://archive.is/Pm2QS
[2]https://www.nytimes.com/2025/06/05/opinion/anthropic-ceo-reg...
[3]https://investors.palantir.com/news-details/2024/Anthropic-a...
[4]https://archive.is/4NGBE
Being the 'good guy' is just marketing. It's like a unique selling point for them. Even their name alludes to it. They will only keep it up as long as it benefits them. Just look at the comments from their CEO about taking Saudi money.
Not that I've got some sort of hate for Anthropic. Claude has been my tool of choice for a while, but I trust them about as much as I trust OpenAI.
How do you parse the difference between marketing and having values? I have difficulty with that, and I would love to understand how people can be confident one way or the other. In many instances, the marketing becomes so disconnected from actions that it's obvious. That hasn't happened with Anthropic for me.
I am a fairly cynical person. Anthropic could have made this statement at any time, but they chose to do it when OpenAI says they are going to start showing ads, so view it in that context. They are saying this to try to get people angry about ads to drop OpenAI and move to Anthropic. For them, not having ads supports their current objective.
When you accept the amount of investments that these companies have, you don't get to guide your company based on principles. Can you imagine someone in a boardroom saying, "Everyone, we can't do this. Sure it will make us a ton of money, but it's wrong!" Don't forget, OpenAI had a lot of public goodwill in the beginning as well. Whatever principles Dario Amodei has as an individual, I'm sure he can show us with his personal fortune.
Parsing it is all about intention. If someone drops coffee on your computer, should you be angry? It depends on if they did it on purpose, or it was an accident. When a company posts a statement that ads are incongruous to their mission, what is their intention behind the message?
I believe in "too big to have values". No company that has grown beyond a certain size has ever had true values. Only shareholder wealth maximisation goals.
Companies, not being sentient, don't have values; only their leaders/employees do. The question then becomes "when are the humans free to implement their values in their work, and when aren't they?" You need to inspect ownership structure, size, corporate charter, and so on, and realize that it varies with time and situation.
Anthropic being a PBC probably helps.
People have values, Corporations do not.
No company has values. Anthropic's resistance to the administration is only as strong as their incentive to resist, and that incentive is money. Their execs love the "Twitter vs Facebook" comparison that makes Sam Altman look so evil and gives them a relative halo effect. To an extent, Sam Altman revels in the evil persona that makes him appear like the Darth Vader of some amorphous emergent technology. Both are very profitable optics to their respective audiences.
If you lend any amount of real-world credence to the value of marketing, you're already giving the ad what it wants. This is (partially) why so many businesses pivoted to viral marketing and Twitter/X outreach that feels genuine, but requires only basic rhetorical comprehension to appease your audience. "Here at WhatsApp, we care deeply about human rights!" *audience loudly cheers*
I mean, yes and. Companies may do things for broadly marketing reasons, but that can have positive consequences for users and companies can make committed decisions that don't just optimize for short term benefits like revenue or share price. For example, Apple's commitment to user privacy is "just marketing" in a sense, but it does benefit users and they do sacrifice sources of revenue for it and even get into conflicts with governments over the issue.
And company execs can hold strong principles and act to push companies in a certain direction because of them, although they are always acting within a set of constraints and conflicting incentives in the corporate environment and maybe not able to impose their direction as far as they would like. Anthropic's CEO in particular seems unusually thoughtful and principled by the standards of tech companies, although of course as you say even he may be pushed to take money from unsavory sources.
Basically it's complicated. 'Good guys' and 'bad guys' are for Marvel movies. We live in a messy world and nobody is pure and independent once they are enmeshed within a corporate structure (or really, any strong social structure). I think we all know this, I'm not saying you don't! But it's useful to spell it out.
And I agree with you that we shouldn't really trust any corporations. Incentives shift. Leadership changes. Companies get acquired. Look out for yourself and try not to tie yourself too closely to anyone's product or ecosystem if it's not open source.
> and even get into conflicts with governments over the issue.
To be fair, they also cooperate with the US government for immoral dragnet surveillance[0], and regularly assent to censorship (VPN bans, removed emojis, etc.) abroad. It's in both Apple and most governments' best interests to appear like mortal enemies, but cooperate for financial and domestic security purposes. Which for all intents and purposes, it seems they do. Two weeks after the San Bernardino kerfuffle, the iPhone in question was cracked and both parties got to walk away conveniently vindicated of suspicion. I don't think this is a moral failing of anyone, it's just the obvious incentives of Apple's relationship with their domestic fed. Nobody holds Apple's morality accountable, and I bet they're quite grateful for that.
[0] https://arstechnica.com/tech-policy/2023/12/apple-admits-to-...
At the end of the day, the choices in companies we interact with is pretty limited. I much prefer to interact with a company that at least pays lip service to being 'good' as opposed to a company that is actively just plain evil and ok with it.
That's the main reason I stick with iOS. At least Apple talks about caring about privacy. Google/Android doesn't even bother to talk about it.
They are the most anti-open-weights AI company on the planet: they don't want to release weights, and they don't want anyone else to either. They just hide behind the safety-and-alignment blanket, saying no models are safe outside of theirs; they won't even release their decommissioned models. It's just a money play. Companies don't have ethics; their policies change based on money and who runs them. Look at Google: their mantra was once "Don't be evil."
https://www.anthropic.com/news/anthropic-s-recommendations-o...
Also, Codex CLI and Gemini CLI are open source; Claude Code never will be. It's their moat, and even though (as its creator says) it's 100% written by AI, it will never be opened. Their model is: you can use ours, be it the model or Claude Code, but don't ever try to replicate it.
For the sake of me seeing if people like you understand the other side, can you try steelmanning the argument that open weight AI can allow bad actors to cause a lot of harm?
I would not consider myself an expert on LLMs, at least not compared to the people who actually create them at companies like Anthropic, but I can have a go at a steelman:
LLMs allow hostile actors to do wide-scale damage to society by significantly decreasing the marginal cost and increasing the ease of spreading misinformation, propaganda, and other fake content. While this was already possible before, it required creating large troll farms of real people, semi-specialized skills like photoshop, etc. I personally don't believe that AGI/ASI is possible through LLMs, but if you do that would magnify the potential damage tenfold.
Closed-weight LLMs can be controlled to prevent or at least reduce the harmful actions they are used for. Even if you don't trust Anthropic to do this alone, they are a large company beholden to the law and the government can audit their performance. A criminal or hostile nation state downloading an open weight LLM is not going to care about the law.
This would not be a particularly novel idea - a similar reality is already true of other products and services that can be used to do widespread harm. Google "Invention Secrecy Act".
"please do all the work to argue my position so I don't have to".
I wouldn't mind doing my best steelman of the open source AI if he responds (seriously, id try).
Also, your comment is a bit presumptuous. I think society has been way too accepting of relying on services behind an online API, and it usually does not benefit the consumer.
I just think it's really dumb that people argue passionately about open weight LLMs without even mentioning the risks.
Since you asked for it, here is my steelman argument. Everything can cause harm; it depends on who is holding it, how determined they are, how easy it is, and what the consequences are. Open source makes this super easy and cheap.
1. We are already seeing AI slop everywhere: social media content, fake impersonation. If the revenue from what's made is larger than the cost of making it, this is bound to happen. Open models can be run locally with no control and can be fine-tuned to cause damage, whereas with closed source this is hard because vendors might block it.
2. Less skilled people can exploit systems or create harmful code when they otherwise could not have.
3. Remove the guards from an open model and jailbreak it, and this can no longer be observed (like an unknown zero-day attack), since it may be running privately.
4. Almost anything digital can be faked or manipulated from the original, or overwhelmed with false narratives so they rank better than the real thing in search.
They are the only AI company more closed than OpenAI, which is quite a feat. Any "commitment" they make should only be interpreted as marketing until they rectify this. The only "good guys" in AI are the ones developing inference engines that let you run models on your own hardware. Any individual model has some problems, but by making models fungible and fully under the users control (access to weights) it becomes a possible positive force for the user.
I am on the opposite side of what you are thinking.
- Blocking access to others (cursor, openai, opencode)
- Asking to regulate hardware chips more, so that they don't get good competition from Chinese labs
- partnerships with palantir, DoD as if it wasn't obvious how these organizations use technology and for what purposes.
at this scale, I don't think there are good companies. My hope is on open models, and only labs doing good in that front are Chinese labs.
The problem is that "good" companies cannot succeed in a landscape filled with morally bad ones, when you are in a time of low morality being rewarded. Competing in a rigged market by trying to be 100% morally and ethically right ends up in not competing at all. So companies have to pick and choose the hills they fight on. If you take a look at how people are voting with their dollars by paying for these tools...being a "good" company doesn't seem to factor much into it on aggregate.
exactly. you cant compete morally when cheating, doing illegal things and supporting bad guys are norm. Hence, I hope open models will win in the long term.
Similar to Oracle vs Postgres, or some closed source obscure caching vs Redis. One day I hope we will have very good SOTA open models where closed models compete to catch up (not saying Oracle is playing a catch up with Pg).
> Blocking access
> Asking to regulate hardware chips more
> partnerships with [the military-industrial complex]
> only labs doing good in that front are Chinese labs
That last one is a doozy.
I agree, they seem to be following the Apple playbook. Make a closed off platform and present yourself as morally superior.
>I really hope Anthropic turns out to be one of the 'good guys', or at least a net positive.
There are no good guys, Anthropic is one of the worst of the AI companies. Their CEO is continuously threatening all of the white collar workers, they have engineering playing the 100x engineer game on Xitter. They work with Palantir and support ICE. If anything, chinese companies are ethically better at this point.
When powerful people, companies, and other organizations like governments do a whole lot of very good and very bad things, figuring out whether this rounds to “more good than bad” or “more bad than good” is kind of a fraught question. I think Anthropic is still in the “more good than bad” range, but it doesn’t make sense to think about it along the lines of heros versus villains. They’ve done things that I put in the “seems bad” column, and will likely do more. Also more good things, too.
They’re moving towards becoming load-bearing infrastructure, and at that point answering specific questions about what you should do about it becomes rather situational.
> I'm curious, how do others here think about Anthropic?
I’m very pleased they exist and have this mindset and are also so good at what they do. I have a Max subscription - my most expensive subscription by a wide margin - and don’t resent the price at all. I am earnestly and perhaps naively hoping they can avoid enshittification. A business model where I am not the product gives me hope.
Given that LLMs essentially stole business models from public (and not!) works the ideal state is they all die in favor of something we can run locally.
Anthropic settled with authors of stolen work for $1.5b, this case is closed, isn't it?
They work with the US military.
Defending the US. So?
What year do you think it is? The US is actively aggressive in multiple areas of the world. As a non US citizen I don’t think helping that effort at the expense of the rest of the world is good.
That's pretty bad.
Sweden too. So there's that.
Their move of disallowing alternative clients to use a Claude Code subscription pissed me off immensely. I triggered a discussion about it yesterday[0]. It’s the opposite of the openness that led software to where it is today. I’m usually not so bothered about such things, but this is existential for us engineers. We need to scrutinise this behaviour from AI companies extra hard or we’re going to experience unprecedented enshittification. Imagine a world where you’ve lost your software freedoms and have no ability to fight back because Anthropic’s customers are pumping out 20x as many features as you.
[0]: https://news.ycombinator.com/item?id=46873708
Anthropic's move of disallowing opencode is quite offputting to me because there really isn't a way to interpret it as anything other than a walled-garden move that abuses their market position to deliberately lock in users.
Opencode ought to have similar usage patterns to Claude Code, being a very similar software (if anything Opencode would use fewer tokens as it doesn't have some fancy features from Claude Code like plan files and background agents). Any subscription usage pattern "abuses" that you can do with Opencode can also be done by running Claude Code automatically from the CLI. Therefore restricting Opencode wouldn't really save Anthropic money as it would just move problem users from automatically calling Opencode to automatically calling CC. The move seems to purely be one to restrict subscribers from using competing tools and enforce a vertically-integrated ecosystem.
In fact, their competitor OpenAI has already realized that Opencode is not really dissimilar from other coding agents, which is why they are comfortable officially supporting Opencode with their subscription in the first place. Since Codex is already open-source and people can hack it however they want, there's no real downside for OpenAI to support other coding agents (other than lock-in). The users enter through a different platform, use the service reasonably (spending a similar amount of tokens as they would with Codex), and OpenAI makes profit from these users as well as PR brownie points for supporting an open ecosystem.
In my mind being in control of the tools I use is a big feature when choosing an AI subscription and ecosystem to invest into. By restricting Opencode, Anthropic has managed to turn me off from their product offerings significantly, and they've managed to do so even though I was not even using Opencode. I don't care about losing access to a tool I'm not using, but I do care about what Anthropic signals with this move. Even if it isn't the intention to lock us in, they are certainly acting as if they do.
The thing is, I am usually a vote-with-my-wallet person who would support Anthropic for its values even if they fall behind significantly compared to competitors. Now, unless they reverse course on banning open-source AI tools, I will probably revert to simply choosing whichever AI company is ahead at any given point.
I don't know whether Anthropic knows that they are pissing off their most loyal fanbase of conscientious consumers a lot with these moves. Sure, we care about AI ethics and safety, but we also care about being treated well as consumers.
I don’t know about “good guys” but the fact that they seem to be highly focused on coding rather than general purpose chat bot (hard to overcome chatGPT mindshare there) they have a customer base that is more willing to pay for usage and therefore are less likely to need to add an ad revenue stream. So yes so far I would say they are on stronger ground than the others.
I think I’m not allowed to say what I think should happen to anyone who works with Palantir.
Maybe you could use an LLM to clean up what you want to say
Besides editorial control (which OpenAI openly flagged, saying it wants to remain unbiased), there is a deeper issue with ads-based revenue models in AI: margins. If you want ads to cover compute and make margins (looking at roughly $50 ARPU at mature FB/GOOG levels), you have two levers: sell more advertising, or offer dumber models.
This is exactly what chatgpt 5 was about. By tweaking both the model selector (thinking/non-thinking), and using a significantly sparser thinking model (capping max spend per conversation turn), they massively controlled costs, but did so at the expense of intelligence, responsiveness, curiosity, skills, and all the things I've valued in O3. This was the point I dumped openai, and went with claude.
This business model issue is a subtle one, but a key reason why advertisement revenue model is not compatible (or competitive!) with "getting the best mental tools" -margin-maximization selects against businesses optimizing for intelligence.
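The margin point can be made concrete with a back-of-the-envelope sketch. Every number below is a hypothetical illustration, not an actual figure from any company; only the ~50% gross margin estimate quoted elsewhere in the thread is sourced.

```python
def gross_margin(arpu, compute_cost_per_user):
    """Gross margin as a fraction of revenue: (revenue - compute) / revenue."""
    return (arpu - compute_cost_per_user) / arpu

# At a hypothetical ad-supported ARPU of $50/year, margin is set entirely
# by how much inference you serve that user:
print(gross_margin(50, 25))  # 0.5 -> roughly the ~50% estimate quoted upthread
print(gross_margin(50, 10))  # 0.8 -> software-typical margin, but only via cheaper (dumber) inference
```

With ad revenue per user roughly capped, the only way to push the margin from 50% toward the 60-80% software norm is to halve the compute spent per user, which is exactly the "offer dumber models" lever.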
The vast majority of people don't need smarter models and aren't willing to pay for a subscription. There's an argument to be made that ads on free users will subsidize the power users that demand frontier intelligence - done well this could increase OpenAI's revenue by an order of magnitude.
This is going to be tough to compete against - Anthropic would need to go stratospheric with their (low margin) enterprise revenue.
They are not trying to sell ads. They are trying to sell themselves as a monthly service. That is what I think about when they try to convince me to go there to think. I'd rather go think at Wikipedia.
Idk, brainstorming and ideating is my main use case for AI
I use it as codegen too but I easily have 20x more brainstorming conversations than code projects
Most non-tech people I talk to are finding value with it with traditional things. The main one I've seen flourish is travel planning. Like, booking became super easy but full itinerary planning for a trip (hotels, restaurants, day trips/activities, etc) has been largely a manual thing that I see a lot of non-tech people using llms for. It's very good for open ended plans too, which the travel sites have been horrible at. For instance, "I want to plan a trip to somewhere warm and beachy I don't care about the dates or exactly where" maybe I care about the budget up front but most things I'm flexible on - those kinds of things work well as a conversation.
Wikipedia is, of course very useful, but what it’s not good at is surfacing information I am unfamiliar with. Part of this problem is that Wikipedia editors are more similar to me, and more interested in similar things to me, than the average person writing text that appears online. Part of the problem is that the design of Wikipedia does not make it easy to stumble upon unexpected information; most links are to adjacent topics given they have to be relevant to the current article. But regardless, I’m much more likely to come across a novel concept when chatting with Claude, compared to browsing Wikipedia.
It’s so hard to succeed without selling ads. There’s an exponential-growth aspect to these endeavors, and ads add a lot of revenue, which investors like, so those who forgo them can find that the lost revenue “multiplies” through lower outside investment, lower stock price growth, etc.
I wish the financial aspects were different, because Anthropic is absolutely correct about ads being antithetical to a good user experience.
Anthropic is very big (the biggest AI co?) in B2B, where you don't have ads. Also, if they end up creating a datacenter full of geniuses, ads won't make sense either.
They made an ad to say that they won't have ads; I don't know if they are aware of the irony.
https://x.com/ns123abc/status/2019074628191142065
In any case, they draw undue attention to OpenAI rather than themselves. Not good advertising.
Both OpenAI and Anthropic should start selling compute devices instead. There is nothing stopping open-source LLMs from eating their lunch mid-term.
Ads as a concept are not evil. There have been ads since prehistory.
Littering a potentially quality product with ads that one cannot easily separate out is where the evil lies.
I really want to applaud Anthropic; I remain cautiously optimistic, but I’m not certain how long they will maintain this posture. I will say that the recent announcement from OpenAI has put me off from ChatGPT — I use Gemini occasionally, because it’s the devil I know. OpenAI has gone back and forth on their positions so many times in a way that feels truly hostile to their users.
Plus, I’m not a huge fan of Sam Altman.
100%. Love this approach by Anthropic. The Meta "monetization league" is assembling at OpenAI and doing what they've done best at Meta.
However, I do think we need to take Anthropic's word with a grain of salt, too. Whether they're fully working in the user's interest has yet to be proven; that trust will require a lot of effort to earn. Once a company intends to go public, or does, incentives change: investors expect money, and throwing your users under the bus is a tried and tested way of increasing shareholder value.
I always found Anthropic to be trying hard to signal as one of the "good guys".
I wonder how they can get away without showing ads when ChatGPT apparently has to. Will the enterprise business be profitable enough that ads are not required?
Maybe OpenAI is going for something different - democratising access for the vast majority of people. Remember that ChatGPT is what people know about and what people use the free version of. Who's to say that monetizing with ads while providing more access is the wrong choice?
Also, Claude can't hold a candle to ChatGPT in search. In my experience, ChatGPT is just way better at deep searches through the internet than Claude.
ChatGPT is providing a ridiculous amount of free service to gain/keep traction. Others also have free tiers, but to a much lesser extent. It's similar to Uber selling rides at a loss to win markets. It will get you traction, yes, but the bill has to be paid one day.
Claude isn’t trying to compete with OpenAI in the general consumer chatbot space.
None of the AI companies are; they are all looking for those multi-billion-dollar deals to provide the backplane for services like Copilot and Siri. Consumer chatbots are pure marketing - no company is going to make anything off those $20-per-month subs to AI chatbots.
Good on Anthropic! I appreciate how deliberate they are about maintaining user trust. I've preferred Claude's responses through the API, so I don't imagine this would have affected me as much, but it is still nice to see.
I asked for this last week in an hn comment and people were pretty negative about it in the replies.
But I’m happy with this position and will cancel my ChatGPT subscription and push my family towards Claude for most things. This taste effect is what I think pushes Apple devices into households: power users making endorsements.
And I think that excess margin is enough to get past lowered ad revenue opportunity.
>An advertising-based business model would introduce incentives that could work against this principle.
I agree with this - I'm not so much worried that ChatGPT is going to silently insert advertising copy into model answers. I'm worried that advertising alongside answers creates bad incentives that then drive future model development. We saw Google Search go down this path.
I appreciate taking a stance, even if nobody is asking. It would be great if it was less of a bad faith effort.
It's great that Anthropic is targeting the businesses of the world. It's a little insincere to then declare "no ads", as if that decision would obviously be the same if the bulk of their users were non-paying consumers.
There are, as far as ads go, perfectly fine opportunities to do them in a limited way for limited things within chatbots. I don't know who they think they are helping by highlighting how to do it poorly.
> Anthropic is focused on businesses, developers, and helping our users flourish. Our business model is straightforward: we generate revenue through enterprise contracts and paid subscriptions, and we reinvest that revenue into improving Claude for our users. This is a choice with tradeoffs, and we respect that other AI companies might reasonably reach different conclusions.
Very diplomatic of them to say "we respect that other AI companies might reasonably reach different conclusions" while also taking a dig at OpenAI on their YouTube channel.
https://www.youtube.com/watch?v=kQRu7DdTTVA
Anthropic probably saw how much money they made off of the Moltbot hype and figured that they don’t need ad revenue. They could go a step further and build a marketplace for similar setups, paying the developers who make them in micro-transactions per token.
I think this says a lot about the business approach of Anthropic compared to OpenAI. The vast number of free messages you get from OpenAI is so large that turning a profit on them seems impossible. Anthropic is growing more slowly, but it seems like they are not running a crazy deficit. They do not need to put ads or porn in their chatbot.
> There are many good places for advertising. A conversation with Claude is not one of them.
> ...but including ads in conversations with Claude would be incompatible with what we want Claude to be: a genuinely helpful assistant for work and for deep thinking.
Sadly, with my disillusionment with the tech industry, plus the trend of the past 20 years, this smacks of Larry Page's early statements about how bad advertising could distort search results and Google would never do that. Unsurprisingly, I am not able to find the exact quote with Google.
Yeah, it’s a shame we’ve all grown so jaded. I do see this as better than nothing.
In this Animal Farm, Orwellian cycle we’ve been going through, at least they start here, unlike others.
I for one commend this, but stay vigilant.
So they have "made a choice" to keep Claude ad-free, they say. "Today [...] Claude’s only incentive is to give a helpful answer", they say. But there's nothing that suggests that they can't make a different choice tomorrow, or whenever it suits them. It's not profitable to betray your trust too early.
I can't really imagine any statement they could give that would ease concerns that at some point in time they change their mind. But for now, it is a relief to read, even if this is a bit of marketing. The longer it goes without being enshittified the better.
It's nice that they don't show ads in conversations with Claude - but I wonder if they collect profiling information from my prompts and activities to sell to advertising firms.
Claude has posted a number of very sarcastic videos on Twitter that take a jibe at ads - https://x.com/claudeai/status/2019071118036942999 - with an ending line: "Ads are coming to AI. But not to Claude."
Sure, ad free forever, until it is not.
Great by Anthropic, but I put basically no long term trust in statements like this.
What makes Anthropic seem like early Apple is not just the unique taste, but the courage to stand firm with their vision of what the product should be.
That courage was nowhere to be found when Palantir rolled up with a truckload of cash.
What's the problem with Palantir?
It’s better to not fall for serif fonts and warm colors.
Apple had a vision, all right. It was our fault that we thought they would become the rebel with the hammer, and not the guy on the screen.
Only 4 years old, they haven't existed long enough to be "firm".
Making formal, public statements like this is a good start. It is certainly better than NOT making these sorts of statements.
Yeah. Does anyone remember how long it took GOOG to remove "Don't be evil" from their motto?
> Anthropic seem like early Apple
Sorry, but this is silly; nothing suggests this at all.
That’s positive. How is Claude? Is it censorship heavy?
If you broach subjects Anthropic considers sensitive (cyber security, dangerous biotech, etc) Claude is very likely to shut you down completely and refuse to answer. As someone that works in cybersecurity and uses Claude daily, it is annoying to ask a question regarding some feature of Cobalt Strike and have it refuse to answer, even though the tool’s documentation is public. I would have cancelled my ChatGPT subscription at this point if once or twice a month I didn’t need to ask it to look up something when Claude refuses.
Don't understand why more companies don't just make ads opt-in as a trade for more features
A lot of people are ok with ad supported free tiers
(Also is it possible to do ads in a privacy respecting way or do people just object to ads across the board?)
I would object to ads across the board in this case (though I’m generally fine with even targeted ads). It would create a customer-client relationship between companies paying to advertise and the AI company, creating an incentive for Anthropic to manipulate the Claude service on their behalf. As an end user that seeks input from Claude on purchasing decisions, I do not want there to be any question as to whether or not it was subtly manipulated.
Claude is the last place where thinking happens.
What other interaction models exist for Claude given that Anthropic seems to be stressing so much that this is for "conversations"?
(Props for them for doing this, don't know how this is long-term sustainable for them though ... especially given they want to IPO and there will be huge revenue/margin pressures)
Claude focuses on enterprise and B2B rather than mass consumer, so it makes sense for them.
RemindMe! 2 years
That's true. In all my conversations with AI, I think Claude's thinking is the richest.
Important to note Anthropic has next to no consumer usage
Wrong (in Trump's voice)
From Sama "More Texans use ChatGPT for free than total people use Claude in the US, so we have a differently-shaped problem than they do"
Facts don't care about your feelings
Sama lies all the time.
Does the veneer of goodness despite (alleged) cutthroat business practices from Anthropic bother anyone else?
So apparently they're going to run a Super Bowl ad about ChatGPT having ads (without saying ChatGPT, of course). Has running an ad that focuses only on something about your competitor ever been the best play? Talk about yourself.
Obviously it's a play, homing in on privacy/anti-ad concerns - a Mozilla-type angle - but really it's a huge ad buy just to slag off the competition. Is it worth the expense just to drive that narrative?
Ads playlist https://www.youtube.com/playlist?list=PLf2m23nhTg1OW258b3XBi...
Wasn't Apple's iconic 1984 ad basically that?
Apple's ad had a woman dressed like a Hooters waitress to represent themselves. That makes themselves the focus of attention.
https://www.youtube.com/watch?v=ErwS24cBZPc
Ah, good one. Was it Big Blue or Big Brother in general being referenced in that one? Either way, I suppose Apple didn't say much of anything about their product there, whereas Anthropic is at least highlighting a feature.