Thank you for coming on HN and offering to answer questions.
This is a fantastic piece, very timely, evidently well-researched, and also well-written. Judging by the little that I know, it's accurate. Thank you for doing the work and sharing it with the world.
OpenAI may be in a more tenuous competitive position than many people realize. Recent anecdotal evidence suggests the company has lost its lead in the AI race to Anthropic.
Many people here on HN who develop software prefer Claude, because they think it's a better product.
Is your understanding of OpenAI's current competitive position similar?
Thank you for this, very much appreciate the thoughtful response.
The piece captures some of the anxieties within OpenAI right now about their competitive position. This obviously ebbs and flows but of late there has been much focus on Anthropic's relative position. We of course mention the allegations of "circular deals" and concerns about partners taking on debt.
Thank you. Yes, I saw that. The company's always been surrounded by endless talk about insane hype, speculative bubbles, and financial engineering. I wasn't asking so much about that.
I was asking more about your informed view on how OpenAI's technology, products, and roadmap are perceived, particularly by customers and partners, in comparison to those of competitors.
If you have an opinion about that, everyone here would love to hear about it.
Much of the article and general palace intrigue is predicated on the idea that OpenAI has a singularly revolutionary product. If it later turns out to be a commodity, or OpenAI is simply outcompeted, then the idea that Sam Altman's personal shortcomings are something to stress about would seem quaint. Just another hubristic tech billionaire acting in bad faith doesn't grab attention the same way as someone "controlling your future" does.
At this point even Google's AI search results are better than GPT. Obviously this is not for full programs, but if you know what you're doing and just want a snippet, that's all you need.
Wild how different people's experiences can be. Both Google's models and Anthropic's hallucinate a lot for me, even when I try the expensive plans and with web searches, for some reason, and none of them come close to the accuracy and hallucination-free responses of ChatGPT Pro, which to me still is SOTA and has been since it was made available. But people keep having the opposite experience apparently; I just can't make sense of it.
Kagi (assistant.kagi.com) with Kimi K2.5 (their current default) has worked great for me in scenarios where the search result data is more important than the model.
I.e., for what I used to use Google for, and when I don't want an AI to over-summarize or editorialize the result data.
My guess is that the answer to your question (a fantastic one) is that nobody knows. I remember having the same thoughts when Covid was first “arriving,” if you will: we wanted people in the know to throw us a nugget of information, and they just didn’t know.
As it turns out, and what I’m kind of going with for this LLM shit, is that it’ll play out exactly how you think it will. The companies are all too big to fail, with billionaire backers who would rather commit fraud than lose money.
That's not fraud, and it's not sustainable. They aren't going to just keep doing that. It only makes sense if an AI company wants to pay for GPUs with stock, and - more importantly - the GPU company agrees to sell in exchange for stock.
I mean, it's a fair question, though it does make one wonder how extreme the answers could be, so I can see why you're being downvoted.
The problem is that sometimes everything people like Sam Altman do is legal on paper, despite it harming so many. We've literally had a major RAM producer pull out of the consumer RAM market. I feel like Sam Altman should be investigated and heavily scrutinized. He is arguably the biggest bubble within the AI bubble, and we're letting him burrow ever deeper into it; the circular deals have seemingly slowed for now, but it might only get worse.
Who is “us”? It does seem that some scientists prefer Codex for its math capabilities, but when it comes to general frontend and backend construction, Claude Code is just as good, and possibly made better by its extensive Skills library.
Both Codex and Claude Code fail when it comes to extremely sophisticated programming for distributed systems.
As a scientist (computational physicist, so plenty of math, but also plenty of code, from Python PoCs to explicit SIMD and GPU code, mostly various subsets of C/C++), I can confirm: Codex is qualitatively better for my use cases than Claude. I keep retesting them (not on benchmarks; I simply use both in parallel for my work and see what happens) after every version update, and ever since 5.2 Codex seems further and further ahead. The token limits are also far more generous (and it matters: I found it fairly easy to hit the 5h limit on max-tier Claude), but mostly it's about quality. The probability that the model will give me something useful I can iterate on, as opposed to discard immediately, is much higher with Codex.
For the few times I've used both models side by side on more typical tasks (not so much web stuff, which I don't do much of, but more conventional Python scripts, CLI utilities in C, some OpenGL), they seem much more evenly matched. I haven't found a case where Claude would be markedly superior since Codex 5.2 came out, but I'm sure there are plenty. In my view, benchmarks are completely irrelevant at this point; just use models side by side on representative bits of your real work and stick with what works best for you. My software engineer friends often react with disbelief when I say I much prefer Codex, but in my experience it is not a close comparison.
Have you tried the latest (3.1 Pro) Gemini? In my experience, it's notably better for similar types of problems than Opus 4.6. However, I don't really use OpenAI products to compare.
I've tried both on similar problems and haven't found such a clear-cut difference. I still find that neither is able to correctly and fully implement, given the same inputs, a complex algorithm I worked on in the past. I'm not sharing the exact benchmark I'm using, but think about improving the performance of N^2 operations that are common in physics and you can probably guess the train of thought.
I'm in that camp -- I have the max-tier subscription to pretty much all the services, and for now Codex seems to win. Primarily because 1) long-horizon development tasks are much more reliable with Codex, and 2) OpenAI is far more generous with the token limits.
Gemini seems to be the worst of the three, and some open-weight models are not too bad (like Kimi K2.5). Cursor is still pretty good, and Copilot just really, really sucks.
Claude Code, Codex, and Cursor are old news. If you're having problems, it's because you're not using the latest hotness: Cludge. Everyone is using it now - don't get left behind.
Us = me and say /r/codex or wherever Codex users are. I've tried both, liked both, but in my projects one clearly produces better results, more maintainable code and does a better job of debugging and refactoring.
That's interesting, I actively use both and usually find it to be a toss up which one performs better at a given task. I generally find Claude to be better with complex tool calls and Codex to be better at reviewing code, but otherwise don't see a significant difference.
If you want to find an advocate for Codex that can give a pretty good answer as to why they think it's better, go ask Eric Provencher. He develops https://repoprompt.com/. He spends a lot of time thinking in this space and prefers Codex over Claude, though I haven't checked recently to see if he still has that opinion. He's pretty reachable on Discord if you poke around a bit.
Quite irrelevant what factions think. This or that model may be superior for these and those use cases today, and things will flip next week.
Also, RLHF means that models spit out answers according to certain human preferences, so it depends on what set of humans provided the feedback and what mood they were in when they did.
On the contrary, I very much care about what the other factions think because I want to know if things have already flipped and the easiest way to do so is just ask someone who's been using the tool. Of course the correct thing to do is to set up some simple evals, but there is a subjective aspect to these tools that I think hearing boots on the ground anecdata helps with.
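To make that concrete, here's a minimal sketch of such an eval harness in Python. It assumes the two CLIs' non-interactive modes ('claude -p' and 'codex exec' at the time of writing; check your installed versions) and uses a toy task/checker standing in for tests against your own real work:

    import subprocess

    # (prompt, checker) pairs - replace with representative bits of your own work.
    TASKS = [
        ("Write a Python function fib(n) that returns the nth Fibonacci number.",
         lambda out: "def fib" in out),
    ]

    # Non-interactive invocations; adjust commands/flags to your CLI versions.
    TOOLS = {
        "claude": lambda p: subprocess.run(["claude", "-p", p],
                                           capture_output=True, text=True).stdout,
        "codex": lambda p: subprocess.run(["codex", "exec", p],
                                          capture_output=True, text=True).stdout,
    }

    # Run every task through every tool and count which outputs pass the check.
    scores = {name: 0 for name in TOOLS}
    for prompt, check in TASKS:
        for name, run in TOOLS.items():
            if check(run(prompt)):
                scores[name] += 1
    print(scores)

Even a dozen such tasks, rerun after each model update, will tell you whether things have flipped. It won't capture the subjective feel of the tools, though, which is where the boots-on-the-ground anecdata still helps.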
For that I'm not so sure. I tried both in early 2025 and was disappointed in their ability to deal with a TCA-based app (iOS) and Jetpack Compose stuff on Android, but I assume Opus 4.6 and GPT 5.4 are much better.
My rule of thumb is that it's good for anything "broad" and weaker for anything "deep". Broad tasks are tasks which require working knowledge of lots of random stuff. It's bad at deep work, like implementing a complex, novel algorithm.
LLMs aren't able to achieve 100% correctness on every line of code. But luckily, 100% correctness is not required for debugging, so they're better at that sort of thing. They're also (comparatively) good at reading lots and lots of code. Better than I am: I get bogged down in details and tire quickly.
An example of broad work is something like: "Compile this C# code to WebAssembly, then run it from this Go program. Write a set of benchmarks of the result, compare it to the C# code running natively and to this Python implementation, make a chart of the data, and add it to this LaTeX code." Each of the steps is simple if you have expertise in the languages and tools, but a lot of work otherwise. For me to do that, I'd need to figure out C# WebAssembly compilation and Go wasm libraries. I'd need to find a good charting library. And so on.
I think it's decent at debugging because debugging requires reading a lot of code. And there are lots of weird tools and approaches you can use to debug something, and it's not mission-critical that every approach works. Debugging plays to the strengths of LLMs.
As some other people mentioned, using both/multiple is the way to go if it's within your means.
I've been working on a wide range of projects, and I find that the latest GPT-5.2+ models seem to be generally better coders than Opus 4.6; however, the latter tends to be better at big-picture thinking, structuring, and communicating, so I tend to iterate through Opus 4.6 max -> GPT-5.2 xhigh -> GPT-5.3-Codex xhigh -> GPT-5.4 xhigh. I've found GPT-5.3-Codex is the most detail-oriented, but not necessarily the best coder. One interesting thing: for my high-stakes project, I have one coder lane but use all the models to do independent review, and they tend to catch different subsets of implementation bugs. I also notice huge behavioral changes based on changing AGENTS.md.
In terms of the apps, while Claude Code was ahead for a long while, I'd say Codex has largely caught up in ergonomics, and in some things, like the way it lets you inline or append steering, I like it better now; in others it's far, far ahead (the compaction is night-and-day better in Codex).
(These observations are based on about 10-20B/mo combined cached tokens, human-in-the-loop, so heavy usage, and I no longer eyeball most of the code, but not dark-factory/slop-cannon levels. I haven't found (or built) a multi-agent control plane I really like yet.)
Codex won me over with one simple thing: reliability. It crashes less, sheds load less often, and its configuration is well designed.
I do regular evaluations of both Codex and Claude (though not to statistical significance), and I'm of the opinion that there is more variance in outcome performance within each model than between them.
Not a scientist, and I use Codex for anything complex.
I enjoy using CC more and use it primarily for non-coding tasks, but for anything complex (honestly, most of what I do is not that complex), I feel like I am trading future toil for a dopamine hit.
Many paying customers say that Anthropic degraded the capability of Opus and Claude Code in recent months and that the outcomes are worse. There have even been discussions on HN about this.
I’m one of those ‘us’. Claude’s outputs require significant review and iteration effort (to put it bluntly, they get destroyed by GPT and Gemini). I’m basically using Sonnet to do code search and write-ups, since it is a better (more human-like) writer than GPT and faster and more reliable than Gemini, but that’s about it.
I also find Codex much more generous in terms of what you get with a Pro ($20/mo) subscription. I use it pretty much non-stop and have yet to hit a limit. The weekly reset is much better as well.
Usage limits are more generous and GPT 5.4 is a good model, but yes, the UI/UX lags behind Claude Code. Currently I especially miss /rewind with code restoration and proper support for plugin marketplaces.
X restricts what you can view without logging in. Many folks don't want to log in to X, for obvious reasons. Posting an xcancel link is kinda like folks posting various `archive` URLs to bypass paywalls, work around overloaded servers, etc. That's an extremely common practice here that usually goes without comment.
Personally, I prefer Claude for coding, but I still prefer ChatGPT for hashing out ideas for my projects (which tend to be game designs). So I use both.
But by page 5, those stories have around 50-60 karma, while Claude stories on page five are still at 500+.
(I found your comment surprising based on my recollection of daily HN reading - I mostly read the top N daily and feel I only occasionally see Codex stories.)
Yeah, we moved to Claude a few months ago, mostly because the devs kept using it anyway. The Altman stuff is interesting, but at the end of the day you just go with whatever tool works.
The statements around the sexual abuse allegations seemed the most puzzling to me: his sister's allegations, and the claims of underage partners given his tendency to hook up with younger partners. It does seem like this piece gives him a pretty clean bill of health in that matter. I guess my question is: would you be able to talk about how you investigated?
Did you do any extra investigations into Annie’s allegations? It feels to me like the unstated conclusion is recovered memory can’t be trusted, which is a popular understanding but a very wrong one put out by the now defunct and discredited False Memory Syndrome Foundation. It was founded by the parents of the psychologist who coined DARVO, directly in reaction to her accusing them of abuse.
Dissociation is real (I have a dissociative disorder, and abuse I “recovered” but did not remember for much of my adolescence and early adulthood has been corroborated by third parties) and many CSA survivors have severe memory problems that often don’t come to a head until adulthood. I know you didn’t dismiss her claim, but the way the public tends to think about recovered memories is shaped primarily by that awful organization.
False memories are much, much more common than actual recovered memories, unfortunately. OCD is a really common cause of it. People think of OCD as a physical thing, but for many people it presents as emotional rumination and can lead to false memories.
Correct, because there truly isn’t a great way to answer with certainty - there was evidence in the 80s of suggestive techniques being used by poorly trained psychologists, and there are many people who remember and then find corroboration.
There are a lot more who remember but may not have corroboration beyond themselves and their close friends or healthcare provider. Part of CSA is that there is usually very little a kid can do about evidence, as the power discrepancy is far too great. With rich abusers, the exact same process often occurs. Perps pick victims who are vulnerable or controllable, and constantly seek power and domination. Nothing to do with the boardrooms or the batch of CEO billionaires running the economy right now, certainly.
I am very sympathetic to the situation you describe. I certainly think it is possible that Annie is describing something that happened. I think the author did a fair job of representing the allegations, striking the right balance: disclosing that they were unable to corroborate the allegations without dismissing them.
That said, "recovering" memories as a therapy does not pass any sort of sniff test and it doesn't take a concerted effort to discredit the concept. Human memory is very malleable. Patients with mental health issues (which could predate abuse, or could be caused by abuse) are often in search of answers and that makes them very vulnerable.
Could a memory be buried deep in our subconscious, forgotten, only to return to the surface later? Sure, we all forget things and then remember them when triggered by something, whether that's a smell or sound or something else entirely. But can we engineer that process, with any degree of reliability? How can we even begin to reliably reverse engineer the triggers?
I think it is also important to keep in mind that Annie is rich, and the health care available to rich people can be very predatory. There are endless examples of nonsense therapies for all kinds of health problems, from ear seeds to treatments for "chronic Lyme".
Memories that return organically due to a trigger are a world apart from "recovered" memories, we shouldn't conflate them. If Annie's memories were triggered in adulthood, sure, that's really no different than remembering something... but "recovered"? That is something else entirely.
Correct me where I'm wrong, I'd like to learn your perspective, maybe there's a missing piece.
Recovered memory therapy was a discredited hypnotherapy that leaned heavily on suggestion, and it was often associated with fairly coercive interrogations during the 80s CSA panic - https://en.wikipedia.org/wiki/Day-care_sex-abuse_hysteria
> Memories that return organically due to a trigger are a world apart from "recovered" memories, we shouldn't conflate them.
Agree, though I think the mechanism can be a bit more towards the idea of a “recovery” of traumatic memory, even if the term as understood carries false connotations.
The concept you’re missing is dissociation, and dissociative disorders. In the 40s it was called just “hysteria”, and for many cases up to the late 90s an extreme form was called multiple personality disorder, now DID (dissociative identity disorder). https://en.wikipedia.org/wiki/Dissociative_disorder
Not everyone who goes through traumatic events will respond via dissociation of identity, and indeed not all people are equally capable of developing a dissociative disorder; two people may go through very similar events (say, surviving a war as siblings or even twins) and one might dissociate the traumatic experience while the other might not. Dissociation doesn’t work quite like you might imagine from a term like “multiple personalities” - that happens in some extreme cases, but think of identity dissociation as an adaptive response to events or situations that are paradoxical (especially to a child’s mind), extreme, or traumatic, and that can’t be escaped and for which other coping mechanisms can’t be called upon.
Dissociation is on a sort of spectrum: at one end you have common experiences like zoning out on a familiar commute, and at the other you have separated self-parts/alter egos that handle wildly different situations.
It’s a mechanism I frankly wasn’t aware of, and I’m not sure I would have been able to fully believe in or empathize with it, but for me, getting a diagnosis of a dissociative disorder changed my life and made a thousand things about me that I could never figure out make sense. The “model”, as I put it at the time, responded to experiment, and recognizing that I was dealing with pretty constant, heavy dissociation and different self-states with memory deficiencies helped me figure out how to work through a ton of problems that had been really intractable for me. After decades of ineffective therapy, I’m finally able to really understand how I work.
Idk how to talk about it without sounding like I’m trying to sell the idea. But yeah, it was a mind-blowing thing to me. Over the last 20 years especially, a ton of truly respectable research has been done, and the increase in efficacy of treatments for dissociation, and trauma generally, is one of the unsung advances for humanity in the last decade. I think the number is that around 3-6% of people meet the clinical criteria for a dissociative disorder - OSDD, DID, DPDR, or dissociative amnesia. That’s 5x more people than have schizophrenia, 5x more than have red hair.
The TLDR is that dissociation is an important mechanism most people don’t know about, but it has seen a wave of research and study and is much more common than one might expect. The sad part is how often dissociative disorders correlate with abuse.
I'm confused by what you're saying. Can you help me reconcile your first post
> It feels to me like the unstated conclusion is recovered memory can’t be trusted, which is a popular understanding but a very wrong one put out by the now defunct and discredited False Memory Syndrome Foundation.
with
> Recovered memory therapy was a discredited hypnotherapy
I read your first post as standing up for recovered memory therapy and I can't find how the discussion of dissociation makes a difference. Does Fontain have it right that by "recovered memory" you mean "things people happened to remember on their own"?
I’m reading more now and I think the missing piece for me is the distinction between “repressed” memories and “recovered” memories.
I understood repressed memories to be an accepted idea, distinct from “recovered” memories. I am reading that the people mentioned in your original comment rejected the idea of repressed memory altogether, and believed that everything traumatic must be remembered.
So, to me, reading that someone “recovered” a memory reads like they went through a specific type of therapy intended to “find” these repressed memories. Whereas to you, “recovered” memories could be repressed memories that came back to the surface organically, whether at random, triggered, or through a therapy intended to deal with dissociation. Is that right?
Hi Ronan, thanks for the article and for answering questions.
My question is: how do you know when an enormous project like this, conducted over an 18-month time span, is "done"? I assume you get a lot of leeway from editors and publishers on this matter. How do you make the decision to finally pull the trigger on publishing?
I just spent a while reading the article. I really appreciate you writing it. In my case, it made me like Sam Altman a lot more. But I was only able to conclude this because of all the evidence you took the time to put together. It paints the picture of someone trying to do something very difficult in a rapidly changing environment and under a lot of pressure, but still making the important choices and not shirking them.
Interesting to hear! While this hasn’t been a commonplace reaction, I think if I do my job right it should allow people to read the facts as they will, exactly like this. It’s strenuously designed to be fair and, where appropriate, even generous.
This is a vast and tricky question. The business model has basically fallen out from under journalism, and especially this kind of labor-intensive investigative reporting. The media landscape is increasingly dominated by moneyed individuals and companies essentially buying up the discourse.
I would really suggest subscribing to and finding ways to amplify independent outlets and journalists, and encouraging others to do so.
Only anti-trust action against big tech to break their ad monopoly (to make journalism profitable again) and breaking up media conglomerates (to reduce concentration of power in the journalism industry) can save journalism from becoming just a mouthpiece for the powerful. These things can only happen through politics. We need a political solution to save journalism.
Got it! Any recommendations on who to subscribe to? Any personal links for you?
In developer communities often you can support individual developers or groups through a monthly subscription / donation on their github page or similar.
Well, this piece was in The New Yorker, which is reasonably priced and regularly includes excellent investigative journalism. I get the physical copies, which can be too much to keep up with if you try to read everything, but it’s easy enough if you skim and just read the things that stick out as being of particular interest.
The New Yorker also comes with Apple News+ subscriptions (part of an Apple One plan that many people get for extra iCloud storage) which further includes a number of top-tier and local news orgs such as the Wall Street Journal, LA Times, SF Chronicle, Times of London, etc.
Treating quality investigative reporting like the scarce resource that it is: as one of the most well-known investigative reporters, can you shed any light on why Reuters would devote resources to commissioning investigative reporters to unmask Banksy (in a world where all things Epstein represent an unending source of investigative opportunities in the public interest)?
We talk about Sam Altman a lot. At this point he has a Hollywood movie in post-production, a book ("The Optimist"), and a seemingly endless stream of profiles. It feels intellectually lazy to keep researching the same guy when the industry is moving beyond him.
All evidence today suggests Anthropic is passing OpenAI in relative and absolute growth. So where's the critical reporting? The DOD coverage was framed around the Pentagon's decisions, not Anthropic's. And nobody seems interested in examining whether the company that branded itself as the ethical AI lab actually is one. That seems like a story worth writing.
> whether the company that branded itself as the ethical AI lab actually is one
FWIW I have two(!!) close friends working for Anthropic, one for nearly two years and one for about 4 months.
Both of them tell me that this is not just marketing, that the company actually is ethical and safety-conscious everywhere, and that this was the most surprising part about joining Anthropic for them. They insist the culture is genuine, which is practically unicorn-level rarity in corporate America.
We've all worked for FAANG companies, so I know where they're coming from; this got me to drop my cynicism for once, and I plan on interviewing with them soon. Hopefully I can answer this question for myself.
Yeah, every engineer in the bay area has a way of framing the business they work for as a benign force for good... Until they find themselves working somewhere else, then suddenly they have a lot to say about the unacceptable things going on there.
From the outside, I find Anthropic's hyperbolic marketing to be an indication that they are basically the same as every other bay area tech startup - more or less nice folks who are primarily concerned with money and status. That's not a condemnation, but I reject all the "do no evil" fanfare as conveniently self serving.
My model is that Anthropic was founded by OpenAI engineers who self-selected for safety-consciousness. However, it's still subject to the same problem: power corrupts. I think they are better than OpenAI but they are definitely sliding.
It should perhaps be generalized as "employees usually match the general consensus of their peer group". Before everyone considered Meta to be ersatz drug dealers, its employees would report feeling what everyone else felt.
Google was "do no evil" until they had to choose between that and making the money. The culture has to be not only professed but tested.
Depending on what part of Google you work for, you can absolutely feel good about what you do. The vast majority of employees don't work on ads or adjacent areas. I've never seen another company actually care so much about non-profit-related externalities. People talk about it like it's the same as Halliburton or Oracle, and that's not true.
The snide response is "of COURSE you can care about non-profit related externalities when your giant evil ad business is bringing in absolute dump loads of cash".
And there's something true there; few companies are Snidely Whiplash evil (maybe the lawnmower but even that is just what it is) - and having large amounts of cash affords you options in many areas.
TBH I have worked at multiple FAANG companies and I don't know anyone, other than maybe new grads, who actually drank the Kool-Aid.
Certainly most of us know we are just in it for the money, and the soul-grinding profit machine will continue to grind souls for profit regardless of what we want.
So that's why it is surprising to me when my (fairly senior) grizzled ex-FAANG friends, that share the same view, start waxing poetic about Anthropic being different and genuine. I think "maybe it is" and decide to interview. IDK, I guess some part of me wants to believe that nice things can exist.
I find it bizarre that even the public image of Anthropic is seen as ethical after the Department of War debacle, in which they themselves admitted they had basically no qualms at all with their tech being used for war and slaughter, except for two very, very thin lines: mass surveillance of American citizens and fully automated weaponry with their current models.
It only showed they were marginally more ethical than OpenAI and xAI, which isn't saying much.
Anthropic has two principles they're willing to stand behind, even when it costs them. That's not a lot, but OpenAI only has one principle: look out for number one.
The idea that it's not okay to arm the military is a position of privilege. The ethical issues are around how the military chooses to use its abilities, not around giving them the tools to do their jobs. We're talking about folks who are willing to give their lives up for others. If you're not going to serve yourself you should at least be willing to help them live. This has nothing to do with whether or not you support the political uses of the military. If world war 3 breaks out and you are forced to serve, you may find yourself feeling differently.
Yes and... that's a position of privilege that anyone in the position should ethically take.
It's unfair to sweep provision of methods to the military under a "respect the service" catch-all justification.
Two things can simultaneously be true: (1) individuals serving in the military are making sacrifices (in terms of pay, family life, personal safety) that deserve respect and (2) the military as a political institution will amorally deploy whatever capabilities it has access to, to achieve political aims.
There's a reason the US stopped research on and production of offensive chemical and biological weapons and tactical nuclear devices -- effective capabilities will be used if they exist.
Maybe people inside the company think Anthropic behaves ethically, which says something scary about either their ethical standards or their general awareness, considering how much documented unethical behavior we've seen from Anthropic leadership.[1]
[1] "Unless Its Governance Changes, Anthropic Is Untrustworthy" https://anthropic.ml/
If you know even the basics of ethics, then such claims are clearly nonsense. There is no stable, context-independent ethical behaviour. This is a great example of the dangers of motivated reasoning.
> the company actually is ethical and safety conscious everywhere
I wonder what Anthropic tries to achieve by spreading such blatant lies with their bot accounts. I'm definitely not buying anything from a company so morally corrupt to smear the competition while claiming to be somehow "ethical". And I'm not talking just about this thread, it's a recurring pattern on Reddit.
>the company actually is ethical and safety conscious everywhere
Anthropic is emphatically not safe. None of the AI labs with customers (i.e., excluding a few small nonprofits whose revenue comes from donations) are anything like safe -- because of extinction risk. The famous positive regard that Anthropic employees have for their organization's mission means almost nothing because there have been hundreds of quite destructive cults and political parties whose members believed that theirs is the most ethical and benign organization ever.
The best thing you can say about Anthropic is that if you have to support some AI lab by becoming a customer, investor, or employee, it is slightly less dangerous for the world to support Anthropic than OpenAI, although IMHO (and I admit I am in a minority on this among extinction-risk activists) it is slightly less dangerous to support Google DeepMind or Mistral than Anthropic.
All four organizations I mentioned should be shut down tomorrow with their assets returned to shareholders.
The current crop of services provided by the leading AI labs is IMHO positive on net in its effect on people and society, but the leading labs are spending a large fraction of the hundreds of billions of dollars they've received from investors on creating more powerful models, and they might succeed in their goal of creating models that are much more powerful than the ones they have now, which is when most of the danger would manifest.
The leaders of all of the leading AI labs have the ambition of completely transforming society and the world through AI.
For what it’s worth, the story, while focused on OpenAI, is not uncritical of Anthropic. It explores whether there is a wider race to the bottom in terms of safety, and erosion of even some of Anthropic’s commitments.
I think you might be surprised by how many software engineers are souring on Anthropic (the company) and the decisions it has made recently. Not the whole drama with the US government, but locking down the usage of its plans to its own tooling.
That really rubbed a lot of people the wrong way: one might have a favorite tool, and then suddenly be forced to use another.
There may be a reason why Altman is talked about a lot. This article in particular surfaces real information and new perspectives we've not heard in this level of detail before, on some pretty significant topics that will impact you, me, and pretty much everyone we know, not only today but well into the future.
You have a point in that Anthropic deserves some coverage too and that there are interesting perspectives that we've not heard of on that front either.
But just because that's true doesn't mean this article isn't very much relevant and needed.
"how easy it is, for those of us who play no part in public affairs, to sneer at the compromises required of those who do" - robert harris
Not making any value judgements, but I can see how one might value their interpretability research more highly than what the CEO says, in a time when the corrupt, criminal executive branch is muscling into everything from what's written on currency to journalistic sources. I generally blame fascists before I blame those unable or unwilling to resist them. Though obviously, ideally, we'd all lock arms and, together through friendship, crush authoritarians and fascists.
They are a private company. They have zero obligation to sell anything to any part of the government or military. The only reason they are involved in "public affairs" is because they want to profit from the government. Moreover, long before this DoW controversy, they had plenty of nationalist and anti-China rhetoric in their press releases, more so than the other AI firms.
The other explanation, besides profit, is that they're true believers that democratic militaries should be stronger than the militaries of dictators around the world, including in AI capabilities.
Normies don't know what an "Anthropic" is. They use ChatGPT. Particularly sharp normies might know that ChatGPT is made by OpenAI, and the sharpest might know that Sam Altman is the CEO.
Now, they may have heard the word "Anthropic" due to recent media coverage. But they don't know what it is and don't remember what it makes. The fact that all businesses use "Anthropic" is about as relevant to them as knowing the overseas shipping company for all the shit they buy off Amazon.
So articles about OAI will always produce more revenue for the media, because it's related to what normies actually use day to day.
I'm a mod here and wanted to let you know 2 things: (1) I've marked your account with a beta feature that displays a colored line to the left of new comments (since you last viewed the page). It might help you keep track of this rather large thread.*
(2) I'm sorry the post was downranked off the frontpage for a while this afternoon. A software penalty kicks in when the discussion seems overheated ("flamewar detector") but I turned this off as soon as I became aware of it. We make a point of moderating HN less when a story is YC-related (https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...) but as this goes against standard internet axioms, people usually assume the opposite.
(* And yes, any reader who wants this is welcome to email hn@ycombinator.com to ask - I haven't turned it on for everyone because I'm worried it would slow the site down. Also, it's a bit buggy and not only have I not had time to fix it, I've forgotten what the bugs are.)
Wonderful work and writing, Ronan -- I'm appreciative of your careful balance between objective fact-finding and synthesis.
For me, a big worry about AI is in its potential to further ease distorting or fabricating truth, while simultaneously reducing people's "load-bearing" intellectual skills in assessing what is true or trustworthy or good. You must be in the middle of this storm, given your profession and the investigations like this that you pursue.
I had a question about reporting conventions. In the paragraph where Altman is said to have told Murati that his allies were "going all out" to damage her reputation, the claim is attributed to "someone with knowledge of the conversation", but the attribution is tucked inconspicuously into the middle of the sentence (rather than, say, leading upfront: "According to someone with knowledge of the conversation, Altman..."), and Altman's non-recollection appears only parenthetically.
As a reader, am I supposed to infer anything about evidentiary weight from these stylistic choices? When a single anonymous source's testimony is presented in a "declarative" narrative style like here (with the attribution in a less prominent position), should we read that as reflecting high confidence on your end (perhaps from additional corroboration not fully spelled out)? And does the fact that Altman’s non-recollection appears in parentheses carry any epistemic signal (e.g. that you assign it less evidentiary weight)? Or is that mostly a matter of (say) prose rhythm?
> in 2014, [Graham] had recruited Altman to be his successor as president.
> [Graham's] judgment was based not on Altman’s track record, which was modest, but on his will to prevail, which Graham considered almost ungovernable.
One thing I don't understand is why Paul Graham offered YC to Altman if he knew how slippery he was.
Nice biography from Loopt to OpenAI. Why no mention of the Worldcoin cryptocurrency https://x.com/sama/status/1451203161029427208 in this piece? Was there nothing interesting to report in that area?
Hi Ronan. TCatK is a phenomenal book, not only in exposing the wrongdoing of powerful people, but also in presenting the meta-issue of how hard it was to get the word out, and you handled it all with nuance. You're about as close as I have to a personal hero.
Long time HN lurker, made an account just to say that :)
Altman describes his shifting views as genuine good faith evolution of thinking. Do you believe he has a clear North Star behind all this that’s not centered on himself?
The piece is an interrogation of this very question, at great length and with some nuance. I think what it does most usefully is scrutinize an array of different answers to the question.
My own impression after many hours of conversation is that he is identifying something of a true north star when he frames this around "winning." There are people in the story who talk about him emphasizing a desire for power (as opposed to, say, wealth). I think he probably also believes, to some extent, the story he tells that equates winning, and his gaining power, with a superabundant utopian future for all.
However, I think critics correctly highlight a tension between his statements about centering humanity writ large and his tilt into relentless accelerationism.
Hi Ronan, absolutely wild to see you here in the belly of the beast.
I have not read the article yet, because I get the physical magazine and look forward to reading it analog. I therefore only have an inconsequential question.
I love the New Yorker’s house style and editorial “voice,” and I have always been curious about the editing process. I enjoyed the recent exhibit at the NYPL, which had some marked up drafts with editor feedback and author comments.
Did you find that your editors made significant changes to the voice of the piece, and/or do you find any aspects of their editing process particularly notable or unusual?
Can’t wait to read this one, and hope the HN crowd treats you well.
I won’t get into behind-the-scenes specifics here but I think you can imagine how pressurized this topic was and the amount of heat that tends to generate. I’m used to getting a lot of blowback and it’s never fun. I just hope the work is meticulous and fair enough, and that enough people see the benefits of that, that I get to continue to do it.
I am appreciative of your work on this piece. I'd love to see one that goes deeper into Dario Amodei. Perhaps even a series of profiles on the central figures of this AI era.
Hey, just want to say thanks for the piece and for all the hard work and effort you put in to get this out there. I've published a bit here and there, and the actual writing is only ~50% of the workload (for me at least). So thanks for going through all the effort and pain to get it out; really appreciate all the work you do for me and the rest of Joe Public.
If you want another story to run, I'd really love to see an investigation into how these different companies are convincing governments that the only path to global dominance is achieving 'AGI' first, and how much that contributes to the reckless acceleration of AI software and infrastructure development.
Also, a good exposé on accelerationists and e/accs, and on who among the elites falls into this group, is direly needed as well.
Please ask The New Yorker to extend some of their very generous subscription sale prices to Canada, I would subscribe to print if even a single sale applied to us, but all the sales are always USA only.
Do you think the recent conflict between Anthropic and the Department of War, and the apparent bootlicking by OpenAI has fundamentally altered the public perception of OAI? Are they the baddies now in the general public opinion?
In depth reporting is great. This is a really tricky topic to cover over the course of 18 months. A year and a half ago OpenAI was ascendant, now it's -at best- stalling and, more likely, trending toward irrelevant.
How do you feel about the title of your article? I assume an editor chose it.
Clearly he's straight-up evil; between tanking the global economy, constantly lying, and raping his 3-year-old sister, it feels really disingenuous to me to frame this as an open question.
“Tonight isn’t just about the people in front of the camera. In this room are some of the most important TV and film executives in the world. People from every background. But they all have one thing in common: They’re all terrified of Ronan Farrow.”
From time to time I have been accused of being an apologist for Sam Altman, but I have always tried to assess information based upon what it says instead of whether it matches an existing narrative. You list a number of distortions in your article which show the problem. If you are a good person, bad stories about you may be fake. If you are a bad person, bad stories about you may still be fake.
My prima facie view on Altman has been that he presents as sincere. In interviews I have never seen him make a statement that I considered to be a deliberate untruth. I also recognise that the claims people make about him go in all directions, and that I am not in a position to evaluate most of those claims. About the only truly agreed-upon aspect is how persuasive he is.
I can definitely see the possibility of people feeling like they have been lied to if they experienced a degree of persuasion that they are unaccustomed to. If you agree to something you feel you ordinarily wouldn't have, I can see you concluding that you were lied to rather than accepting that you had been intellectually beaten.
In all such cases where an issue is contentious, you should ask yourself, what information would significantly change your views. If nothing could change your view, then it's a matter beyond reason.
I think you will agree that there is no smoking gun in this article; it is just a laying-out of the allegations. Evaluating allegations becomes tricky because I think it becomes a character judgement of those making the claims.
I have not heard a single person in all of this criticise Ilya Sutskever's character. If he were to make a statement to say that this article is an accurate representation of what he has experienced, it would go a long way.
I think Paul Graham should make a statement. The things he has publicly claimed are at odds with what the article says he has privately claimed. I have no opinion on whether one or the other is true, or whether they can be reconciled, but there seem to be contradictions that need to be addressed.
While I do not have sources to hand (so I will not assert this as true, just claim it is my memory), I recall Sam Altman himself saying that he did not think he should have control over our future, and that the board was supposed to protect against that, but that since the 'blip' it was evident another mechanism was required. I also recall hearing an interview where Helen Toner suggested that they effectively ambushed Altman because, if he had had time to respond to the allegations, he could have provided a reasonable explanation. It did not reflect well on her.
I am a little put off by some of the language used in the article. Things like "Altman conveyed to Mira Murati" followed by "Altman does not recall the exchange". Why use a term such as 'conveyed', which might imply there was no exchange to recall? If a third party explained what they thought Altman thought, Mira Murati could reasonably feel the information had been conveyed while Altman has no experience of it to recall. Nevertheless, it leaves an impression of Altman being evasive. If the text said "Altman told Mira Murati", no such ambiguity would exist.
"Later, the board was alarmed to learn that its C.E.O. had essentially appointed his own shadow board" Is this still talking about Brockman and Sutskever? I just can't see this as anything other than a claim he took advice from people he trusted. I assume those board members who were alarmed were not the ones he was trusting, because presumably the others didn't need to find out. The people he disagreed with still had votes so any claim of a 'shadow board' with power is nonsense, and if it is a condemnable offence, is the same not true of the alignment of board members who removed him.
Josh Kushner apparently made a veiled threat to Murati. The claim "Altman claims he was unaware of the call" casts him as evasive by stacking denial upon denial, but absent any other indication in the article, it would have been more surprising if he did know of the call. I also didn't know of the call, because I am not either of those two people.
The claim of sexual abuse says, via Karen Hao, "Annie suggested that memories of abuse were recovered during flashbacks in adulthood." To leave it at that, without some discussion of the scientific opinion on previously unremembered events being recalled during a flashback, seems journalistically irresponsible.
I think sometimes you have to look at the patterns rather than at a single claim. If a large number of completely unrelated people tell you about very similar experiences they had with Altman, you can take that as a good indicator of his general character.
And if this tendency to misunderstand/be misunderstood always results in Altman gaining more power, then even if we give him the benefit of the doubt and say he doesn't do it on purpose, it's still a big problem, given the responsibility he has.
The article also mentions many moments where Altman apparently outright lied, as opposed to being "very persuasive". If you believe those sources, then I don't think it's possible to also think he's sincere.
I cannot open the article again to get the exact quotes, but the few I remember were:
- one time he claimed he didn't send a message while people were literally showing him the message he had sent, with confirmation from another OpenAI employee
- another time, he accused people of organising a coup and said that someone from the board had informed him; after the person from the board was called into the meeting, Altman claimed he never said those words and never accused anyone
These cases can't be put down to persuasion, to Altman changing their view, or to someone misremembering; they either happened or they didn't.
I have experience in dealing with Sam Altman-like behavior. I hope to explain how such tactics unfold.
> I can see people concluding that they have been lied to rather than accept that they had been intellectually beaten.
There are two angles to this: from an individual perspective and from a collective one.
One's interaction with such a manipulator isn't a single shot. There is no single event in which they are “beaten”. First, one gets persuaded --- you might argue that there's nothing wrong with skillful persuasion. At some point they realize that reality is not in line with their expectations. They bring the point up to the manipulator and ask for a change, this time in more concrete terms. The manipulator agrees to the change, negotiates compromises, and the relationship continues. After some time the manipulated party realizes that things are not going in the direction they desire. This time they ask for more concrete terms, without accepting any compromises. The manipulator accepts, yet continues to act against the terms. The manipulated party is now angry and directly confronts the manipulator. The manipulator apologizes, says that none of it was intentional, and asks for another chance. However, at that point, the manipulator has run out of “politically correct” “persuasion tactics”, and tells blatant lies to make the other party behave.
From a collective perspective, even those “politically correct” “persuasion tactics” are discovered to be lies, because what the manipulator told different parties is in direct opposition, i.e., it cannot all be true.
> Helen Toner suggested that they effectively ambushed Altman because if he had time to respond to allegations he could have provided a reasonable explanation. It did not reflect well on her.
I understand how her behavior may raise a flag for the unsuspecting, but it was exactly the right one. Manipulators prey on the benefit of the doubt. If Toner had brought Altman's behavior to the attention of others, no doubt Altman would have manipulated them successfully.
It's unfortunate that many people are unaware of these tactics and assume the best of intentions, when such assumptions fuel the manipulation that they would better avoid.
It’s wild to write something like “I have experience with Sam Altman-like behavior” and expect us to come along for a 5+ paragraph ride that actually has no Sam Altman connection at all except the one you imagine is true.
Talk to your therapist about your problems. Don’t project them on people you don’t know and seemingly have no actual first-hand experience with.
I'm sorry that it wasn't clear. I didn't mean to imply that I was going to connect it to Sam Altman. I specifically wanted to address why it wasn't the case that people were “intellectually beaten” by Sam Altman.
> except the one you imagine is true
I'm not sure what you mean. I described an example of manipulation that I witnessed. I later learned that these are common tactics employed by con artists, scammers, etc.
> Don’t project them on people you don’t know and seemingly have no actual first-hand experience with.
I don't need first-hand experience with someone to understand that they are a manipulator. I am comfortable forming my opinion based on reports.
> what information would significantly change your views
Quite simple: show me any single action taken by Sam Altman which cannot be construed as an attempt to get him more power/money/influence. You can't find one.
The difference between what he claims to believe and what he actually does is a textbook example of sociopathy.
I cannot find a single action by anyone that cannot be construed as an attempt to get them power/money/influence. I can believe that a person's intentions are good, but I can't make everyone in the world do that, and that is what you are asking.
"If you give me six lines written by the hand of the most honest of men, I will find something in them which will hang him"
To play your game, he got married, had a child, and joined an AI research organisation at a time when everybody thought the big advances were much further away than they turned out to be.
You could still construe those actions as evil if you choose to see them as evil.
I'm not going to claim that Sam Altman is not a sociopath, I lack the information and knowledge of psychology to make that determination. On the other hand I have not detected those attributes in anyone who has claimed he is a sociopath.
It seems odd that people take offense when arbitrary observers decline to reach a conclusion that requires specialised expert knowledge and a decent amount of irrefutable evidence.
> I cannot find a single action of anyone that cannot be construed as an attempt to get them power/money/influence
Try the other way around, via negativa. We definitely can find plenty of examples of people stepping out of positions of power, deciding not to do something because of moral conflict, etc. Is there any case of such action from Sam?
Fuck, anyone with any semblance of moral fortitude would refuse to take money from the Saudis. But he had no problem doing it.
> joined an AI research organisation at a time when everybody thought the big advances were much further away than they turned out to be.
No, this is selection bias. What he did was put himself in a position where he could have his fingers in any and every possible pie, and then, when one of these things turned out to be something people with money believed valuable, he maneuvered himself into the driver's seat.
When people are described as sociopathic it’s not about any particular lie, but the relationship that the person has with the truth, which is that they will lie when it suits them and tell the truth when it suits them and they don’t seem to distinguish morally between them. And more than that, they treat people the same way, and will use them while it suits them and then dispose of them when they are inconvenient.
I have the feeling that if you write an article in that style, the subject of the story becomes the hero even if you insert a couple of negatives. In the same manner that Michael Corleone becomes the hero of The Godfather.
I'm not pleased with the headline and the general framing that AI works. The plagiarism and IP theft aspects are entirely omitted. The widespread disillusion with AI is omitted.
On the positive side, the Kushner and Abu Dhabi involvements (and the threats from Kushner) deserve a wider audience.
My personal opinion is that "who should control AI" is the wrong question. In its current state, AI is an IP-laundering device, and I wonder why publications fall silent on this. For example, the NYT has abandoned their crown witness Suchir Balaji, who literally perished for his convictions (murder or not).
I would love to read your piece and pay you and The New Yorker for it, but I am not interested in paying for a subscription. If I could press a button and pay a reasonable one-time license fee such as $3 or $5 for just this article, or better yet a few cents per paragraph as they load in, I wouldn't hesitate.
However I'm not going to pay for yet another subscription to access one article I'm interested in.
I'm sure you can't do anything about this, but I just wanted you to know.
You deserve to be compensated for great journalism. In this case, unfortunately, I won't read it and you won't earn income from me.
Many have tried it (as well as the oft-recommended micropayments idea) and it never justifies the added expense and overhead of the customization. Closest is probably the NYTimes’ gift article feature.
Probably true. It's more likely a variation on: only a small percentage of people are willing to pay any amount of money for an article, so if you offer one-time options, a large enough share of people who would otherwise have subscribed (recurring revenue) pay one-time instead, and their lifetime value ends up lower.
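To make the worry concrete, here's a toy back-of-envelope sketch in Python; every number in it is invented, and it only illustrates the cannibalization logic, not any publisher's real economics:

    # All numbers invented; illustrates why one-time pricing can cannibalize subs.
    subscribers = 1000      # readers who would have subscribed anyway
    sub_ltv = 52.0          # $1/week digital rate over a year
    one_time_price = 4.0    # hypothetical per-article button

    cannibalized = 0.30     # share of would-be subscribers who downgrade
    new_buyers = 2000       # readers who would otherwise have paid nothing

    revenue_without = subscribers * sub_ltv
    revenue_with = (subscribers * (1 - cannibalized) * sub_ltv
                    + subscribers * cannibalized * one_time_price
                    + new_buyers * one_time_price)

    print(revenue_without)  # 52000.0
    print(revenue_with)     # 45600.0 -- worse, despite 2000 brand-new buyers

Whether the button wins depends entirely on how many genuinely new buyers it attracts versus how many subscribers it downgrades; with these made-up numbers, the publisher loses.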
Looking online it looks like the newsstand price of an issue is around $10 (which I'd assume is heavily ad subsidized, if anyone is still buying print ads?) which is an interesting data point for a pricing model. (Of course, I looked online because I have no idea where I'd find a newsstand around here - the nearest newsstand that shows up on google maps has reviews that say "It's just snacks and scratch tickets." and "three newspapers and no magazines" - I may have to stop by just to see what three newspapers they have :-)
Damn, just wanted to say reporters are scary... The amount of detail here is huge. You think of hackers as the ones good at doxing... Nah, it's reporters.
Ronan Farrow, the writer of this article, made a comment in this thread that is buried in all the comments: "As is always the case with incredibly precise and rigorously fact-checked reporting like this, where every word is chosen carefully (the initial closing meeting for this one was nearly eight hours long, with full deliberation about each sentence), there is more out there on that subject than is explicitly on the page."
I saw that before I read the article and it made me read the article in a very different way than I normally do. As I was reading, I found myself thinking, "Why is it worded that way? What else is the writer trying to say, or not say?"
It made reading this a lot more interactive than I normally associate with passive reading. Great job, Ronan!
Reading this makes me even happier to pay for Anthropic.
Amodei and his sister saw through the behavior and called it out.
" “Eighty per cent of the charter was just betrayed,” Amodei recalled. He confronted Altman, who denied that the provision existed. Amodei read it aloud, pointing to the text, and ultimately forced another colleague to confirm its existence to Altman directly. (Altman doesn’t remember this.) Amodei’s notes describe escalating tense encounters, including one, months later, in which Altman summoned him and his sister, Daniela, who worked in safety and policy at the company, to tell them that he had it on “good authority” from a senior executive that they had been plotting a coup. Daniela, the notes continue, “lost it,” and brought in that executive, who denied having said anything. As one person briefed on the exchange recalled, Altman then denied having made the claim. “I didn’t even say that,” he said. “You just said that,” Daniela replied. (Altman said that this was not quite his recollection, and that he had accused the Amodeis only of “political behavior.”) In 2020, Amodei, Daniela, and other colleagues left to found Anthropic, which is now one of OpenAI’s chief rivals."
If you think Amodei is significantly different you’re going to be disappointed. There is nothing he has done that can’t be adequately explained as furthering his own interests. Remember how Musk doesn’t like Altman too? It’s because they’re all the same people, competing for the same thing.
I can go with the thesis that individuals need community control (boards, regulations, laws) in order to be accountable but is there some specific evidence that Amodei is the same? It seems like a "both sides" argument.
I don’t need to convince you, we’ve been through enough cults of personality, time will tell. But I’ve been right enough times to back myself. Maybe it’s because I grew up around a lot of people like them? They can’t hide that they would say whatever they think you want to hear.
Actually it’s funny: Their lack of empathy/emotional intelligence would also make them susceptible to thinking that talking to an LLM is like talking to a person, so maybe they really did think AGI was around the corner!
There’s been enough divergence between words and actions from Amodei for me to also consider him deceitful, if that’s really the low bar you want to set. I’m not saying he’s worse than Altman, just to be clear.
I mean he quit what he considered to be a problematic company, founded another one, that one’s models refused to do things that the previous company would do, then his new company refused to do the US government’s evil bidding while the other company happily went along with it.
> I mean he quit what he considered to be a problematic company
Problematic why though? For the reasons publicly stated? Then why isn’t Anthropic just what OpenAI was “supposed” to be then? We know what that was from their charter, and Anthropic is not that.
> then his new company refused to do the US government’s evil bidding while the other company happily went along with it
You’re sure about that are you? I don’t see how you possibly could be, unless you’ve taken the PR at face value, before it was all quietly swept away under the next headline.
“By 2018, several Y.C. partners were so frustrated with Altman’s behavior that they approached Graham to complain. Graham and Jessica Livingston, his wife and a Y.C. founder, apparently had a frank conversation with Altman. Afterward, Graham started telling people that although Altman had agreed to leave the company, he was resisting in practice”
You can subtly see residue of this frustration in Dalton and Michael’s videos when Sam Altman comes up. It’s only thinly veiled that Sam was a snake while at YC.
Hi @ronanfarrow — I have only had one interaction with Sam Altman in person, and I was advised to keep it to myself. I know this crowd may not care, but Altman is absolutely terrified of Black people — not in any contextual sense, but in a visceral, instinctive way. For someone who, as you put it, "controls our future," this should matter.
FYI: I am by far not the only one to have experienced this and it 100% impacts hiring and other decisions at OpenAI.
Yes, but first I want to be very clear on some things.
1. I could have hidden my identity behind a throwaway. I did not feel that would be appropriate when making this claim.
2. I am not looking for anything, literally at all. Any follow-ups for blogs, or anything that would benefit me, I will not answer.
3. This is NOT a new account, I am very easy to find; I am 6'1 140lbs
I was working for a company called NationBuilder and I had the opportunity to go on a work trip. Outside of a talk he had just given I was waiting for my ride and I looked over like... damn, that's the speaker. I wanted to say hi; he damn near flagged down the police. I apologized and just decided to move on.
Note: It was in Reno, and no, I don't want to go into details; the others are not hard to find because I happened upon them via blog posts, so I'm sure if someone with the acumen of RF wants to know, he will find them.
I have heard similar stories from several people in the years since. I AM NOT CALLING THIS PERSON RACIST. I am saying: he is observably scared of black people, and that is not someone I want making decisions about how the world moves forward.
I wonder if this stems from Sam getting beat up by a black guy. From the article:
> When Altman was sixteen or seventeen, he said, he was out late in a predominantly gay neighborhood in St. Louis and was subjected to a brutal physical attack and homophobic slurs. Altman did not report the incident, and he was reluctant to give us more details on the record, saying that a fuller telling would “make me look like I’m manipulative or playing for sympathy.”
Maybe just Occam's Razor -- any time I've seen Sam talk in public he just seems to be a neurotic, anxious individual that would have a hard time interacting with people in any normal context. In a world of infinite variables it's hard to say that his aversion was due to your race -- there's really not much to go on here.
Thank you for sharing this. I 100% believe it, and it lines up with my experience with other people who came from similar backgrounds as Sam Altman - i.e. white, rich, privileged, and attending elite universities.
I will disagree with one part - I do believe it is racism. Most will never admit it publicly, but if they think you're one of them, it often comes out rather quickly, especially when alcohol is involved.
It's sad to me that "racism" is such a divisive word to many, and is met with defensiveness rather than introspection and communication. Trying to not be racist takes work, and communication, and is a process, not a state.
I appreciate OP's sharing as well. Also, racism isn't peddled only by rich white elite university attendees, it reaches into all the corners.
An extraordinary claim needs a bit more evidence than one data point, where in his defense maybe he is scared of anyone he doesn't know trying to talk to him on the street.
Agreed, his two posts read really weirdly. He made a deliberately vague(?) initial post to get a response, and I'm not sure how I feel about his story. As you've said, if I were Sam Altman I'd be wary of anyone coming up to me too.
Just to clarify, because I am not sure I am reading this correctly:
Your statement that he is terrified of black people is based on you (presumably a black person) running into him outside an event, and him reacting with fear/extreme caution when you approached him?
Not defending Sam, but if that is the case, then it's the kind of thing that Sam can hold up and say "Do you really think my critics are intellectually honest?"
Rock solid evidence is what brings people down. Stretched truths, assumptions, and careful half-truth wording, are all ammo the accused will use to strengthen their side.
> Not defending Sam, but if that is the case, then it's the kind of thing that Sam can hold up and say "Do you really think my critics are intellectually honest?"
Why? It sounds like they were in an environment with many people and Sam reacted negatively to the black guy. It's not like the story was, "so I followed him down a deserted alley and he got scared, so he must be racist."
It sounds like Sam was approached on the street by a stranger, and he had a negative reaction. Which is fairly common for high profile people, especially people with a following of haters (let's not deny AI/data center general unrest).
I cannot see any legitimacy to the claim besides the commenter's own interpretation of the situation. They posit this like the authors would want to know, but here I am doing the first thing the authors of the article would do, and I'm getting downvotes for it. The author(s) won't touch it anyway.
Note: To all the downvoters; I did this publicly and not anon for a reason. If you will do the same, I am more than willing to provide evidence for all of these claims, as long as it's done publicly and in the open.
PG said something along the lines of: "There should be no truth that is increasingly unpopular to speak."
If you don't believe what I shared is true, address that directly. But seeing my post sitting at 1 point and [flagged] after 2 hours is not OK. Just as DJT can't flag away his issues, you shouldn't be able to do so on HN.
One of the things I've loved most about HN is that it was real — grounded in observability, empirical evidence, not bias or feelings. I really hope that what happened to my post is not the beginning or a continuance of the end for that ethos.
> One of the things I've loved most about HN is that it was real — grounded in observability, empirical evidence, not bias or feelings.
That has never been the case, because HN is frequented by humans and humans are biased. Someone who claims to be unaffected by feelings is someone you cannot trust, as it means they are blind to their own shortcomings. Being robotic about the world is no way to live—that’s how you get people who are so concerned with nitpicks and “ackshually” that they completely lose sight of what’s important. They become easy to manipulate because they are more concerned with the letter of the law than its spirit or true justice.
Objectivity and empiricism are positive traits but should be employed selectively. Emotions aren’t a weakness, they are what drives us to change and improve. Understanding your own emotions equips you better to understand the world. But they too can be used to manipulate you. To truly grow, you have to employ your emotional and rational sides together. Focusing on just the rational will get you far but not all the way.
HN is primarily about curiosity—it’s in the guidelines four times—and you can’t have that without emotion.
>> One of the things I've loved most about HN is that it was real — grounded in observability, empirical evidence, not bias or feelings.
> That has never been the case, because HN is frequented by humans and humans are biased. Someone who claims to be unaffected by feelings is someone you cannot trust, as it means they are blind to their own shortcomings.
Yes, and HN is full of people like that: simultaneously arrogant and stupid software engineers whose arrogance is founded on their own ignorance and self-regard. "Grounded in observability, empirical evidence, not bias or feelings" actually sounds like a smokescreen to obscure one's bias and feelings from oneself.
> Being robotic about the world is no way to live—that’s how you get people who are so concerned with nitpicks and “ackshually” that they completely lose sight of what’s important. They become easy to manipulate because they are more concerned with the letter of the law than its spirit or true justice.
They're also easy to manipulate, because their emotions can be appealed to without them having enough awareness to be on guard. For instance: you can manipulate many software engineers by working your position into the form of a technical "system" (e.g. Econ 101) then praise them for being smart little boys for understanding and believing it.
I don't know if he is a racist or not, but forget HN. The last couple of years it has gone off the deep end - not sure if it's delusion or $ interests, but it is impossible to have a decent conversation here. I think the only reason this article stayed up is because OAI is starting to be a bit 'toxic' now, but if this was published a year ago, it would have been flagged to oblivion.
So just ignore those points and flags. HN *used* to be a nice place for intellectual conversations, even if you disagreed with each other. Now it's nothing more than bots, people with financial interests in this bubble, or sycophants.
I tried to respond to your comment with some personal observations on racist currents in this community, but my comment immediately got flagged. So yeah! This site ain't what it used to be. Best for the good folks to seek community elsewhere, I reckon. I miss the old days as well, but I don't think they're coming back.
If this site ever was anti-racist, that must have been a long time ago. I threw away my old account many years ago only to come back with this one (because it's difficult to completely ignore HN if you work in tech) and the reason I threw that one away was in part the overwhelming reactionary bias in this community.
The "progressives" were at best silent "don't rock the boat" types more inclined to insist on civility than to challange reactionary sentiments while the reactionaries ranged from dog-whistling to outspoken, across the entire range of white supremacism, sexism, homophobia, transphobia, antisemitism, zionism and so on. The only comments that would ever get flagged or downvoted were those that were explicit enough to be seen as "impolite" because they happened to spell out calls for genocide or violence rather than merely gesturing at it with the thinnest veneer of plausible deniability.
Well, I do remember it being more about the underdogs and a cheeky "fuck the system" attitude without much malice. Maybe I just wasn't tuned into this stuff back then. Now, though, both users and tech leaders can unironically parrot Stormfront rhetoric from 10 years ago (using vaguely cordial language) and no one even bats an eye. The kind of stuff that would have made you unemployable just a few years ago.
When I think of HN in the before times, I think of people like Aaron Swartz. Would he have enjoyed his technical discussions peppered with comments on how the West is being "invaded" and "outbred" by third-world hordes? Based on what I know about him -- and please correct me if I'm wrong -- I'm guessing he would have noped out of that kind of community in a flash. Yet nowadays I see this kind of talk here all the time, percolating all the way up to industry leaders like Musk and DHH.
Just came to say, I appreciate your emotionally intelligent and balanced take on your experience, where it would have been very easy to react and let emotions take over (understandably).
It's disappointing to me that a completely factual personal experience can be relayed with zero spin – and yet some of the replies act as if it's 100% spin without any factual evidence. Some people seem to prefer to respond to an imaginary version of a conversation rather than the one that's actually happening in front of them.
Thank you for sharing this experience with us. Don't worry about the downvotes. That's just how it is here sometimes. I don't think it reflects the views of most readers.
For me, the attempted productization of Sora was conclusive proof that 1) OAI was overcapitalized and desperate for revenue 2) safety didn't matter to them much 3) improving the world didn't matter much either.
At one point you mentioned an interaction with OpenAI staff where you were looking to interview AI Safety researchers. You were rebuffed b/c "existential safety isn't a thing". Does this mean that you could find no evidence of an AI Safety team at OAI after Jan Leike left? If you look at job postings it does seem like they have significant safety staff...
Interestingly we are still experiencing the technological momentum inspired and created by what OpenAI used to be. AI for humanity.
Given the initiative started circa 2017, much of the good remains. It was a hijacking of creative geniuses who got together, and it is now being milked as cash-cow tech.
I remember reading these direct quotes from SA in 2016 from the New Yorker and thinking, yeah, this guy is just miserable:
> “Well, I like racing cars. I have five, including two McLarens and an old Tesla. I like flying rented planes all over California. Oh, and one odd one—I prep for survival. My problem is that when my friends get drunk they talk about the ways the world will end. After a Dutch lab modified the H5N1 bird-flu virus, five years ago, making it super contagious, the chance of a lethal synthetic virus being released in the next twenty years became, well, nonzero. The other most popular scenarios would be A.I. that attacks us and nations fighting with nukes over scarce resources. I try not to think about it too much, but I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.”
> "If you believe that all human lives are equally valuable, and you also believe that 99.5 per cent of lives will take place in the future, we should spend all our time thinking about the future. But I do care much more about my family and friends.”
> "The thing most people get wrong is that if labor costs go to zero... The cost of a great life comes way down. If we get fusion to work and electricity is free, then transportation is substantially cheaper, and the cost of electricity flows through to water and food. People pay a lot for a great education now, but you can become expert level on most things by looking at your phone. So, if an American family of four now requires seventy thousand dollars to be happy, which is the number you most often hear, then in ten to twenty years it could be an order of magnitude cheaper, with an error factor of 2x. Excluding the cost of housing, thirty-five hundred to fourteen thousand dollars could be all a family needs to enjoy a really good life.”
> "...we’re going to have unlimited wealth and a huge amount of job displacement, so basic income really makes sense. Plus, the stipend will free up that one person in a million who can create the next Apple.”
This doesn't seem like someone who's miserable at all to me. They seem like someone who has a wide variety of hobbies and is intellectually interested in futurism.
Yeah, I have a half-baked thought about billionaires like this: that they truly want the best for this world even if they have to seek it by immoral means.
Funny you bring this up because I always think back to a story, in the New York Times if I recall correctly but perhaps the Journal or SFC, about how he and his friends got upset when asked to leave a high-end French restaurant because he was wearing sneakers. They pulled a "Do you know who he is?" well before he was even tied to OAI. It always left a bad taste in my mouth and has stuck with me a decade on.
Tangentially, without being too specific, I have someone incredibly close to me that has recently had interactions with the upper echelons of OAI's exec team and... the stories are not kind. I imagine when your company is being run by a morally bankrupt tech bro you are short on integrity.
After 10+ years of hearing anecdotes about sama I am starting to wonder if maybe the word on the street is true and he really is just as selfish and blind as people make him out to be. At this point, the optics surrounding OAI vs. Anthropic are just plain bad. They should have gotten rid of him before when they had the chance.
I don't follow public figures or news anywhere near enough to have a meaningful opinion on Sam Altman, but I find one interesting snippet here, which is that there is a straightforward prediction in there. He did say ten to twenty years and it's only been ten, but still, I can't think of a single good or service that families need or commonly want that is an order of magnitude cheaper. It makes me wonder if he's become any less confident of this or any other prediction.
I don't want to be holier than him or thou or anyone else, but it is the kind of thing I've found of myself quite a bit. I made a lot of confident predictions about the future 15-25 years ago on the Internet, and even though I'm not a public figure and nobody will ever hold me to task for being wrong, I can see it for myself. The predictions are still there. They weren't universally wrong, but I didn't do much better than chance. It's a big reason I no longer bother to make predictions. I have no idea what the future will bring and I'm comfortable with the uncertainty. It doesn't feel like very many people on the Internet are.
As is always the case with incredibly precise and rigorously fact-checked reporting like this, where every word is chosen carefully (the initial closing meeting for this one was nearly eight hours long, with full deliberation about each sentence), there is more out there on that subject than is explicitly on the page.
One of the decidedly eerier parts of this story as you keep reading are all the gaps between what people are saying about Altman, and what they clearly want to say about Altman but can't.
This can be true I suppose, but equally I have a few friends who practically play characters as if they've resigned themselves to a role in a sitcom. For instance: one of my friends is late to just about everything and treats everyone as if we are on-call. We plainly note this repeatedly, the friend is, I hope, equally frustrated and embarrassed by it, and in spite of this nothing changes. This is obviously a critical element to their broader character.
Perhaps you mean to distinguish social groups without much intimacy? To which I'm sure we could provide some convincing cases, but this seems like a silly heuristic generally.
I have been in or next to a number of social circles with such missing stairs, where for various reasons people in the groups have decided to not directly acknowledge certain Facts that are known about some members, because it would involve them confronting their hypocrisy.
Someone cheating regularly on their partner, flagrant substance use problems, controlling people who ostracize anyone who doesn't agree with their sometimes insane perspectives...
People will go along with quite a lot to avoid friction, especially as they get older and picking up new social circles becomes higher cost.
It's possibly the most telling thing, when you see what people say is a hard line versus how they actually respond to it.
That's not ADHD. People with ADHD would improve - it may take a LOT of time, but it will happen. Quite often they will go to the other extreme and come in way too early. My bet would be on a Cluster B personality trait, e.g. lack of empathy and a constant need for attention and validation.
That is not always true, and not for everyone. Many people who have ADHD have intractable time blindness. They don't mean to be late, but their brain chemistry literally prevents them from being on time in many cases.
if this isn't a joke - New Yorker style uses a diaeresis when a word has a repeated vowel and the second vowel is part of a different syllable. coördinate, coöperate, and reëlect are probably the most common places where this comes up
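Just to illustrate the rule (and why it still needs an editor), here's a toy Python sketch of my own - not any real style tool - that handles a few common prefix collisions:

    # Toy illustration only, not an official New Yorker tool: apply a
    # diaeresis for a few common prefix+vowel collisions (co+o, re+e, pre+e).
    PREFIX_VOWELS = [("co", "o"), ("re", "e"), ("pre", "e")]
    DIAERESIS = {"o": "ö", "e": "ë"}

    def diaeresize(word):
        for prefix, vowel in PREFIX_VOWELS:
            rest = word[len(prefix):]
            if word.startswith(prefix) and rest.startswith(vowel):
                return prefix + DIAERESIS[vowel] + rest[1:]
        return word

    for w in ["cooperate", "coordinate", "reelect", "reader"]:
        print(diaeresize(w))  # coöperate, coördinate, reëlect, reader

    # Caveat: with no real syllable detection, this would also mangle
    # one-syllable words like "reel" -- which is exactly why the rule
    # hinges on the second vowel starting a new syllable.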
This might be the major dilemma in the tech industry today, where the natural tendencies of literalism and optimism among technologists has turned into a form of defensive credulity. The real world rigor of The New Yorker’s editorial standards and concerns about defamation necessitate this circumscribed style that rewards close reading and skepticism, but those aren’t in favor in the tech industry currently.
With this in mind, I think you would be the perfect investigative journalist to track down the archives of The National Enquirer.
This was our "hometown" gossip paper in South Florida, and you should have seen the pictures and stuff that they did print. And this was after threats of celebrity lawsuits in the mid-1970's had curtailed any tendency to exaggerate.
Back when almost nobody outside of New York had heard of Trump, he started coming down to play golf and made quite an impression among the well-established Florida real-estate operators. They could see right through him like any of the fake millionaires from New York, who were a dime a dozen. There was just a general consensus among many visitors that what happens in South Florida stays in South Florida. Epstein grew up in this environment.
You would see pictures of him with unidentified non-Stormy dates, and some insinuation in the gossip column, but you knew they were holding back anything that could not be truly verified.
By the time of his presidential run, it looks like he had become well acquainted with David Pecker, who owned the Enquirer. I wouldn't be surprised if, when he sold the publishing company, there were archives somewhere containing all the supporting material that was unverified at the time, from when Trump & Epstein were much younger running buddies for so long.
> The board member was not the only person who, unprompted, used the word “sociopathic.” One of Altman’s batch mates in the first Y Combinator cohort was Aaron Swartz, a brilliant but troubled coder who died by suicide in 2013 and is now remembered in many tech circles as something of a sage. Not long before his death, Swartz expressed concerns about Altman to several friends. “You need to understand that Sam can never be trusted,” he told one. “He is a sociopath. He would do anything.”
I found it very interesting that Altman et al were worried that AI will become supremely intelligent and China will make a supervirus or some AI drones or whatnot, but not a single person was worried about destroying all jobs because we wouldn't need humans any more.
Or maybe they were not so much "worried" but "hopeful" that they'd amass literally all the wealth in the world.
And not intending to defend the motives of anyone involved, but I'm hoping we don't need to worry about literally all jobs being destroyed, and AI companies amassing all the wealth in the world.
Don't we need at least some humans working and earning to buy these AI services? Am I not being imaginative enough? Is it possible for the whole economy to consist just of AI selling services to each other?
I realise that even if AI destroys most jobs, or even just a lot of jobs, and amasses most wealth, or a lot of wealth, it would still be a terrible thing for humans. The word "all" could have just been hyperbole, and it is still a valid point. I just want to know people's thoughts on whether entire replacement is possible.
It's a huge if and honestly I don't believe in it.
Actually, if it ends up like described, it really doesn't matter whether I believe in it. Either it happens and we all die, or it doesn't happen. Pascal's Wager I suppose.
Why keep human consumers to buy your services when you could just amass all the wealth you desire, and have autonomous systems that can ensure your unassailable physical security? You would sit atop the most stratified dominance hierarchy ever achieved, and it would reduce other humans to mere pets or breeding stock. I don’t think normal humans would desire that kind of power, and I don’t believe LLMs will take us there, but I wouldn’t put it past the perverted billionaire maniac.
Surely a Big Sur compound stocked with iodine and gold, protected by security goons fitted with exploding collars, is someone’s definition of paradise.
I think fundamentally the concern is misplaced. The fact that you need to work for wealth is a convention of our constraints. A change in constraints would lead to other means of distribution. It's easy to see why someone who believes more productivity is good would not see making jobs obsolete as a real problem. They would see us adapting to the new conditions in a relatively short while.
> The fact you need to work for wealth is a convention of our constraints
The current constraint is "you need to produce to have things".
If one company's AI takes all the jobs, and thus does all the producing-to-have-things, the constraint transforms into "you need that company's permission to have things".
Those who are concerned are implying that any new distribution mechanism is not going to favour them.
And under the capitalist system, if nothing changes, the "new" distribution system is indeed not going to favour them - at best there would be some sort of UBI, and at worst you would be left to starve in the streets.
However, I cannot see how one can transition to a new system and have the existing powers in the current system agree without being disadvantaged.
If you are speaking about the world, hundreds of millions in the next 5 years is probably closer to reality in my opinion. And from your question I think that you already know the answer.
So he says. And the way he proposed reaching that was with a scam cryptocurrency under his control which has rightfully been banned in several countries.
If there's one thing that's clear from the article, it's that he's a proponent of anything that will benefit him, even multiple conflicting things at the same time.
Is there an advocacy arm of OpenAI pushing for legislation for UBI? Or is this like Musk's supposed support for UBI while also insisting that welfare payments to the poor are a bad thing?
It's also available via public libraries in the USA via Libby, if your local library system pays for a subscription, so it's a way to support the magazine indirectly, since your local taxes pay for your library. The downside with a weekly is you have to read it that week; there's no archive access.
The New Yorker posts digital articles in advance of the release of the print edition.
At the bottom of this article it says: Published in the print edition of the April 13, 2026, issue, with the headline “Moment of Truth.”
As someone who reads the print magazine every week, I always scroll down to check if the article will be published and skip it if so (so I can read it when my magazine arrives).
This is pretty hilarious - when I asked ChatGPT to "summarize this article: https://archive.ph/hOYMn", it said it's about Jesus ("The article traces the development of early Christian Latin hymns, especially focusing on how themes about the Virgin Mary and Christ evolved from the 4th to later centuries...")
(https://chatgpt.com/share/69d48476-9bf4-8327-8c19-709865a547...)
Interesting. If you look at the sources it cited, there are a few links about "Sacred Songs and Solos" (likely from related/side content on the page); my guess is it didn't read the main article and instead anchored on those and hallucinated.
I usually use free archived versions to read mainstream journalism pieces. Seeing this convinced me to subscribe. I've always loved The New Yorker, and am happy to support serious longform journalism (and I know that Ronan is one of the best).
However, it's a shame that the only way to subscribe to the print version is to pay $260 upfront for the yearly subscription. Meanwhile the digital version is $1/week ($52 upfront) for one year, or even just $10 for one month.
> Chesky stayed in contact with the tech journalist Kara Swisher, relaying criticism of the board.
Ronan, interesting writing as always. I'm curious whether the role of the media as a pawn of the rich and powerful, used to sway perception and build narratives, concerns you, especially given your personal experiences with this and the reporting you've done. Are there reforms you think reporters and/or news organizations should adopt to make sure access doesn't become direct or indirect manipulation, and how do you fight against that in your own reporting?
It's really interesting reading about how these folks view LLMs. Yeah, they're transformative, but I don't know that we're going to be eating ramen in a Neo-Tokyo street bar anytime soon. So much "A.G.I" mentioned in the article.
I find it interesting how a lot of cyberpunk does not really include AI, or does not present it in a transformative way. There is a lot of mind uploading, implants, corpo fun and overall technology permeating all aspects of life, but often AI itself does not actually play a big role.
Counterexamples that come to mind are Neuromancer (AI driving the plot) and Blade Runner (AI antagonists.)
A compromise thesis might be that in cyberpunk media, AI is never powerful or motivated enough to fundamentally reform the worldwide crapsack economic system. They don't abolish corporations, although they might take them over.
Of course, if there was a story about an AI taking over the world into a post-scarcity society, it probably wouldn't be filed under "cyberpunk" either...
Rampant capitalism is kinda genre-defining for Cyberpunk so Cyberpunk without corporations wouldn't really be Cyberpunk. _The Matrix_ only qualifies as Cyberpunk because within the matrix the machines effectively control the capitalist power structures to exert their influence.
Abundance/scarcity isn't really about availability, it's more about access. You can have a cyberpunk story in a "post-scarcity" setting in the sense of availability (due to sci-fi tech) but you can't have it without unequal access to those resources.
Agreed, which is why The Culture (series) isn't cyberpunk but The Polity (by Neal Asher) kinda skirts the line, in many ways they are similar except resource inequality still exists on a wide/policy scale in the latter.
AIs are in plenty of cyberpunk stories, but your comment did make me think that they are often rather stereotypically “alien entity characters” and not a kind of corporate technology / weapon that is controlled by a specific organization.
Which is a shame, as it seems to me that the overwhelming risk of AI is from the latter scenario, and not as a rogue individual entity.
I think you can look at Star Trek as a fairly grounded example of where current LLMs could go: the ship's computer is not autonomous in any way but it does accept fairly vague instructions and you can apparently vibe-code the holodeck.
AI is one of the core parts of cyberpunk, through androids / humanoid robots. Blade Runner is completely built on the protagonist having to interact with rogue artificial intelligence.
It's because they're really good at the kind of busywork the average white collar job requires. Most people are out there writing documents and making presentations. Only when you use them for actual complexity does the shortfall become clear.
I'm going to write a silly comment here:
For a moment I thought you wrote "... LLMs. Yeah, they're transformative, but I don't know that they're going to be eating ramen in a Neo-Tokyo street bar anytime soon."
I liked that mental image a lot! (I try to maintain being uncertain whether Deckard was a replicant)
Great piece. And a good excuse to read up on the use of diaeresis in English (eg. coördination, reëlection) to distinguish repeated vowels - I hadn't seen the New Yorker's usage before.
Wow, this is an incredibly detailed piece. Really in depth reporting and the kind of detailed investigation we need more of on important topics like this.
> "Employees now call this moment “the Blip,” after an incident in the Marvel films in which characters disappear from existence and then return, unchanged, to a world profoundly altered by their absence."
This is a very small detail, but an instinctive grimace crosses my face at the thought of these sort of Marvel references and I'm not entirely sure why.
They're mass media cynically produced to extract maximum profit from lowest common denominator audiences, so the idea that people working in such influential positions find them appealing enough to reference suggests they are members of that lowest common denominator audience.
There's a time and a place for everything, and rejecting popular media as "lowest common denominator" is the most uninspired form of cultural elitism.
Is it cynical to want your <art project> to make a profit? Or for it to make enough profit to subsidize other projects?
Is it cynical to make something accessible so more people who watch it are able to enjoy it?
I agree that it's embarrassing and feels crass when movies both try to be broadly appealing and simultaneously fail to be entertaining or well executed ... but many of the marvel movies clearly surpass that bar.
No one wants to make a bad movie that does poorly with critics and paying customers - but it does happen because making a movie is expensive and complicated and requires a lot of skilled people working together towards the same goal.
Regarding taste: do you think a Michelin-star chef swears off cheap food like hot dogs or fish and chips? Doubtful - because those foods have their place, and the chef is able to enjoy them for what they are rather than use them as an excuse to display a superiority complex.
Yeah, I'm saying professional communication isn't the place for Marvel references, and that those who choose to include references to those movies in their professional communications are revealing something about their media tastes.
If I'm at a Michelin star restaurant I don't want to be served a ballpark hotdog.
> If I'm at a Michelin star restaurant I don't want to be served a ballpark hotdog.
This is a very funny quip.
A famous anecdote about a 3* restaurant in NYC is about the servers overhearing a group of diners mention how they had run out of time to try a "real NYC hot-dog", the restaurant staff running out to grab one from the corner cart and plating it up nicely, and how this was a highlight of everyone's experience.
Exactly. They share the cultural sensibilities of the average person on the street, and yet they're making decisions that will shape the world for future generations. I think that's bad. I want those decisions being made by people who have a more extensive cultural education. Snobs, if you want to call them that.
Interestingly, the smartest people I know have the widest range of media consumption and understanding. To assume that because someone uses a marvel reference they might not have a deeper cultural education is rather...limited thinking.
Ferran Adria drew culinary inspiration from a bag of potato chips
As someone with a privileged elite educational background, I can guarantee that intellectuals love the highbrow and the lowbrow, the authentic and the kitsch; rather, it is a sign that someone is not acculturated if they hold the stereotypical impression of the intelligentsia, which makes the OC's comment ironic: they are telling on themselves.
Of course they're average people; why do you think tech or AI company employees are somehow above or beyond the average person? I'm not sure why you'd willingly say you want snobs controlling the world. That is somehow even worse, and reeks of aristocracy, which is why you see replies rejecting your thoughts; it is simply not a western ideal or one to strive towards.
I'm confused as to what your point is. Employees refer to the incident as "the blip." I got no impression that there was a formal memo that went out to the company or the media at large that officially refers to the incident as the blip, merely that employees refer to it as a blip (likely to each other, not too dissimilar to a meme).
And while I don't think someone's media tastes ought to preclude them from making important decisions, I also disagree with your point at large. I don't think the world should be shaped by snobs. The world is already being shaped by snobs in the other sense of the word, and I don't see any indication that it's any better than the alternative.
There is also the elitism of low expectations. Common people should be helped to rise above the mud produced by the culture industry. Meeting them in this mud, and staying there with them, is the actual elitism.
When things reach a certain level of popularity they constitute "mental real estate". Your audience has heard of Groundhog Day, so there is an opening for a movie with that title to make money -- your film will start out already having name recognition and some understanding of what the movie is about.
Thus it is a writer's job not to make references they find appealing to reveal their good taste, but to know what references their audience will find appealing and use them to help communicate concepts. If this bothers you it's because they're insulting you by saying you might be part of the audience that watches Marvel, and you had hoped reading the New Yorker would signal that you aren't.
I agree that these movies are really being cranked out. I hadn't even realised quite the extent of this until I went to look. But I think some of these movies are good enough that it shouldn't be disturbing that people in influential positions find them appealing.
I know a lot of people are critical of the Rotten Tomatoes score, but I find that when a high enough percentage of reviews are positive, it is likely I will enjoy the movie. Some of the Marvel movies have a very high proportion of positive reviews (admittedly, those reviews could be just positive, not very positive). And for most in this list with a very high score, I think it's deserved.
I'm an MCU fan. And while I do agree quality has gone down, I think it's hard to ignore the fact that the MCU did something really novel. They made a franchise that spanned 20+ movies and tied it up in a way that was almost universally loved by nerds and normies alike.
Are there a lot of plot holes and retcons? Yeah. And some bad writing. And the movies that came after have been pretty meh with some exceptions.
But for someone to say that referring to one of the highest grossing films and franchises of all time, means their decisions should be questioned, is quite the stretch.
The issue with Marvel really is that it threw 20+ movies' worth of unique IP, stories that could have been told, out the window. Yeah, highest grossing of all time, but ticket prices have been marching up the whole time too, no? Especially selling to China now. Studios would have made the same money, I'm guessing, spread out over other IP.
I disagree with this characterisation. I loathe mass-media blockbusters, but a journalist has to be in touch with public culture in their goal to spread the truth and inform people, not just high-brow elites, but everybody. This is why their work is usually more influential, interesting and engaging than if it had been written by an academic.
Amazing that this article and an actual comment from Ronan Farrow is this far down the list while...Scientists Figured Out How Eels Reproduce (2022) has 6 times the points.
This thread set off a software penalty called the flamewar detector.* I turned that off as soon as I saw it.
(* This was predictable from the title, because the question in it was inevitably going to trigger an avalanche of crap replies. Normally we'd change the title to something less baity, and indeed the article is so substantive that it deserves a considerably better one. But I'm not going to change it in this case, since the story has connections to YC - about that see https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu....)
This anecdote is so absurd it sounds like satire. This is the guy with the $23M mansion?
> Amodei’s notes describe escalating tense encounters, including one, months later, in which Altman summoned him and his sister, Daniela, who worked in safety and policy at the company, to tell them that he had it on “good authority” from a senior executive that they had been plotting a coup. Daniela, the notes continue, “lost it,” and brought in that executive, who denied having said anything. As one person briefed on the exchange recalled, Altman then denied having made the claim. “I didn’t even say that,” he said. “You just said that,” Daniela replied.
He's a liar and untrustworthy. Based on their public statements, that's a big part of why the board fired him.
Of course, (despite the fact that Altman previously publicly stated that it was very important that the board can fire him) he got himself unfired very quickly.
It's got to be one of the most unusual biographies of a living person that I've ever come across. Nearly every sentence is a head-turner. If you made it up, no one would believe you.
The entire thing is a joy to read; you should really set aside some time to cleanse your palate in this age of LLM prose. I mean, just look at this juxtaposition:
>Altman continued touting OpenAI’s commitment to safety, especially when potential recruits were within earshot. In late 2022, four computer scientists published a paper motivated in part by concerns about “deceptive alignment,” in which sufficiently advanced models might pretend to behave well during testing and then, once deployed, pursue their own goals.
(plus it finally resolves the mystery of "what Ilya saw" that day)
Also, since it wasn't stated clearly:
>“the breach” in India. Altman, during many hours of briefing with the board, had neglected to mention that Microsoft had released an early version of ChatGPT in India
I am in my 40s and going to be made redundant this June. In the future, only people who can afford tools like Claude and OpenAI, and, most importantly, can create more value using them than others can, will be able to survive. Otherwise, the game is more or less over, and I question what's next for my own future while I learn to use Claude out of FOMO. I cannot trust Sam or the others to have any interest in keeping this tech affordable for common people like me.
HN generally downvotes and/or flags anything that paints ycombinator in a bad light. As Altman was president of yc from 2014 to 2019 that could be why this is getting downvoted.
Articles critical of Airbnb, one of yc's biggest wins, also get flagged and taken down.
I don't think the poster you responded to was claiming that moderators directly did this. The flagging system is open to bias from the community at large, and certain types of articles (e.g. anything critical of the current admin) get a bunch of real users organically flagging them.
Yes, it's hard to tell sometimes but I've at least learned not to automatically take these personally. Well, partly learned.
I don't think anyone familiar with this community would assume positive bias towards Sam, Airbnb, or even YC anymore - it's quite the contrary, from my perspective, but of course everyone notices different things and has their own view. Ditto for political slants.
I don't assume positive bias, but I do assume that most negative things that get people irked are removed as a result of the mechanics of the flagging system.
Like, I don't really expect puff pieces for ycombinator or the like to get artificially pushed to the top, but I do expect that enough people who feel culturally or financially invested in ycombinator will flag negative things into oblivion, especially as it's completely reasonable that the population of users here has a much higher percentage of those folk than any random population sample.
Even if your motivation is some utopian vision of the future, you should not be trusted. Utopia is a thought experiment in a philosophy of living taken too far, not something to be reached for earnestly.
Why is it that criticism of people's insatiable greed for wealth and power often gets dismissed with this thought-terminating cliche about utopias?
Desire to live in a society that's less greedy, that rewards compassion and punishes sociopathy is completely valid. We should be pursuing that earnestly because survival of our species depends on it. The people in charge are so drunk on wealth and power that they would rather drive our entire species off a cliff than sacrifice even 10% of their effectively bottomless wealth.
But instead of criticizing our current philosophy that's actively being taken too far and threatens to destroy us, you criticize people who express their frustration with this state of affairs.
The criticism is not of the idea that the world has problems, and that we should look at those problems with the aim of fixing them.
The criticism is of the assumption that a world without problems theoretically could exist.
You may disagree, but you will not find a definition of such a world that everyone can agree on.
Regardless of whether you agree (that such a definition doesn't exist) or not, if you do plan on bringing about such a utopia and you begin to meet resistance, the question you will inevitably need to answer is: how do those who resist fit into this utopia?
The historical answer for this question, which by all appearances seems like an inevitable answer, is the reason why people criticise utopian thinking.
Not just the greed. The whole "AI is so dangerous that we must be the ones to build it to save humanity" bit, and then gaslighting yourself and everyone around you into believing that your language model is AGI. This is some weird, detached-from-reality cult behavior.
Complete hearsay, but I struck up a convo with someone who had spent a few hours drinking around a campfire with him and a few others at Burning Man, prior to GPT-3's popularity. Apparently he was utterly convinced of his pivotal role in shepherding in a new era with AI, to the point where it got really messianic and culty. He didn't recall much else other than just being really weirded out by the dude.
The AI CEOs and most of their employees are in the same place as that guy. They're just in a more professional context and will be careful not to let their delusions of grandeur look too insane.
I remember watching the fitness function improve while my neural net learned to recognize characters for a project I did in school, and there was something about it that felt powerful. I guess we've always had that with the machines we imbue that have any sort of decision making "intelligence", but mix that with taking psychedelics and you have an interesting cocktail.
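For what it's worth, the loop looked something like this (a minimal sketch assuming a scikit-learn-style toy setup; the actual school project's code is long gone, and it tracked a GA-style fitness rather than a loss, but the feeling is the same):

    # Minimal sketch of "watching the number improve" on character recognition.
    # Toy setup on scikit-learn's digits dataset; hyperparameters are arbitrary.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # warm_start=True with max_iter=1 makes each fit() call run one more epoch.
    net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1,
                        warm_start=True, random_state=0)
    for epoch in range(30):
        net.fit(X_train, y_train)
        print(epoch, round(net.loss_, 4), round(net.score(X_test, y_test), 3))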
“By 2018, several Y.C. partners were so frustrated with Altman’s behavior that they approached Graham to complain. Graham and Jessica Livingston, his wife and a Y.C. founder, apparently had a frank conversation with Altman. Afterward, Graham started telling people that although Altman had agreed to leave the company, he was resisting in practice”
This statement rings true.
JL, PG has often mentioned, is his instrument for testing the "people" integrity aspect of YC / startups. It's not lost on me that Altman and Thiel, both associated with YC, proved useful only in the short term, highlighting how regular "character" evaluations are required at higher levels of responsibility.
At least two of YC's early (mid-aughts) "huge" successes come down to PG unilaterally (or with some help from JL) making some kind of "weird" call. AirBnB and Reddit come to mind. Even Stripe can be traced to him since he basically created the Auctomatic team (Patrick Collison's previous YC entry).
In other words, PG had the "knack" for sometimes encouraging the right weird thing. I'm not sure it's been the same since he handed off the reins, like any other formerly-founder-led company. Nowadays it really gives off the vibe of bean-counting and hype-chasing.
I don't think it's gotten quite as bad as this [0] article suggests, though.
“Today’s news comes at an interesting time. Last week, Business Insider’s Jonathan Marino reported that YC is close to raising several billion dollars for a new fund, with the goal of possibly expanding its scope to later stage funding. It said it’s still in preliminary discussions for this new strategy, but if true, Thiel could definitely play a big role there.”
My recollection was that Thiel was injecting cash, a money deal. [0] There was another, less advertised play: an established path for the Thiel "Boy Wonder Fellows". [1]
“In addition to founding PayPal and Palantir and being the first investor in Facebook, Peter has been involved with many of the most important technology companies of the last 15 years, both personally and through Founders Fund, and the founders of those companies will generally tell you he has been their best source of strategic advice. He already works with a number of YC companies, and we’re very happy he’ll be working with more.”
Guess who was involved in the Thiel / YC deal? [2] You are not the only one seeing this as a reputation hit for YC. [3] Even I, disconnected on the other side of the world, could see this as an issue.
Having Thiel on board of YC would probably turn off a lot of potentially successful founders. Or maybe it's a way to select for those with a lack of ethics. Having Musk and Thiel visibly associated probably is good from a monetary perspective but it sends all kinds of bad signals.
Really solid piece of journalism. I understand some stuff ends up on the cutting room floor in the editing process as length is eventually a factor. What was the one thing you most regret having to cut out of the final piece?
> Amodei, in one of his early notes, recalled pressing Brockman on his priorities and Brockman replying that he wanted “money and power.” Brockman disputes this. His diary entries from this time suggest conflicting instincts. One reads, “Happy to not become rich on this, so long as no one else is.” In another, he asks, “So what do I really want?” Among his answers is “Financially what will take me to $1B.”
I can't imagine having such uninspired thoughts and actually writing them down while in a role with such diverse and worthwhile opportunities. I'd like to ask "how the hell do these people find themselves in these positions", but I think the answer is literally what he wrote in his diary. What a boring answer. We need to filter these people out at every turn, but instead they're elevated to the highest peaks of power.
It's not surprising. I made this comment on HN before, but if you follow him on Twitter, it's pretty remarkable - the CTO of one of the most important technology companies in the world and he has never (that I've seen) posted something with some technical insight, or just anything interesting about technology. It's just boring truisms, cliches, empty statements, etc.
Eh. It doesn't start or stop with people like Altman, Zuckerberg, or Nadella. I think it's a symptom of a broader problem in tech. Half the people on this site made a decision to work at companies that do shady things, and they did that to maximize personal wealth.
The difference isn't that the average techie doesn't dream of making a billion by any means necessary; it's that most of us don't think we have a shot, so we stick to enabling lesser evils to retire with mere millions in the bank.
I don't think it's all that hard to avoid working on anything shady. It's not as easy to avoid being associated with anything shady due to widespread cynicism and a tendency to treat tech companies with thousands of projects as a monolith.
> The difference isn't that the average techie doesn't dream of making a billion by any means necessary
I hope that's not true. If it is, we live in a bleak world indeed.
I can confidently say I've never once dreamed of having billions. I've never wanted billions. Not even in a fanciful manner. What would I do with that money? Buy mansions and megayachts? That's loser stuff.
Most of what I want out of life cannot be bought. The pieces that come with a price tag, like a comfortable home, do not require billions.
I think only sociopaths want billions, because they don't understand spending your life seeking the things that actually matter, like family and human connection.
What sticks out to me most is that humanity consistently fails to weed these creatures out and regulate society. It's a bug in our social software; we seem to like these broken people rather than recognize that they're a liability.
This isn’t a bug. It’s the driving force of our capitalist society. We are not trying to weed them out. We are trying to encourage them. It’s pretty simple, when they get rich, so do all their investors.
No need to be petty. They have a point. We did this with the words racist and fascist. Overinclusion diluted the term and gave cover for the actual baddies to come in. I'm not sure debating who is and isn't a sociopath is as useful as, say, the degree to which Sam is a liar (versus visible).
I don't know how to define the delineation I'm about to propose. But there is a difference between overinclusivity trashing a morally-loaded, potentially even technical, term, and slang evolving.
While I agree that the word has been misused by some bad actors in the "Woke 1.0 era", it's worth pointing out that this isn't what most people complaining about the word being "diluted" are referring to, as those are mostly people flat-out upset by any suggestion that they themselves might hold racist beliefs.
That said, anyone using "racist" as a noun isn't worth your time, nor is anyone who's genuinely upset about people calling concepts, systems or ideologies "racist".
Specifically, the "Woke 1.0 era" culture war arose from two conflicting meanings of the word "racist" largely aligning with two different segments of the population: 1) "racist" as a bad word you call people who are extremely bigoted against people along racial lines and 2) "racist" as a descriptor for systems and ideologies downstream from racialization (i.e. labelling people as racialized - e.g. Black - or non-racialized - i.e. "white") as a mechanism of asserting a power structure. "Wokists" would often conflate the two by applying the word as broadly as the latter definition necessitates while still attempting to use it with the emotional weight and personal judgement of the former definition.
I think a lot of this can be blamed on "pop anti-racism", just as a lot of the earlier "boys are icky" nonsense can be blamed on pop feminism. Fully adopting the latter definition requires a critique of systems, which is much more dangerous to anyone benefiting from those systems than merely naming and shaming individuals. Anti-racism (and feminism) ultimately necessitates challenging hierarchical power structures in general, and thus necessarily leads to anti-capitalism (which isn't to say all anti-capitalists are anti-racist and feminist - there are plenty of "anti-capitalist" movements that still suffer from racism and sexism, just as there are "anti-racists" who hold sexist views or "feminists" who hold racist views). But you can't use that to sell DEI seminars to corporations, and corporations can't use that to promote themselves as "woke" - as some companies like Basecamp found out when their internal DEI groups suddenly started taking themselves seriously during the BLM protests. The result was layoffs, "no politics" policies, and a general rightward shift among corporate America leading up to and into the second Trump presidency, which reinforced the shift and left most US corporations and their subsidiaries significantly cutting down on their previously omnipresent shallow "virtue signalling".
Racism and fascism have been used correctly; it's just that people do not like to have their beliefs associated with negative things, and thus, rather than perform self-reflection, they decide the problem exists elsewhere. I am sure you can come up with outliers that support what you are saying, but across the vast majority of applications, both words are used correctly relative to their definitions.
I would be curious to hear you expand on that. Walk me through it, maybe in a small paragraph: what overinclusion happened with the word fascist, which baddies are you vaguely referring to, and how do those dots connect?
While true, and we can see them literally everywhere there is some money and/or power (even minuscule places like classic banks easily have 1/3 of the staff with clear sociopathic traits, and I have to deal with them daily... or politics as a whole), that's just human nature, or part of it.
It's up to the rest of society to keep them in check, since classic morals are highly optional to them and considered a nuisance blocking those games. And here the rest of us fail pretty miserably, while having, on paper, the perfect tool: the majority vote.
Without having read the article, reacting on the headline: no single person should be allowed to control our future. Democracy is a thing in large parts of the world, and we should try very hard to keep that functioning and even improve it.
The only part of the world where _democracy_ is a thing is Switzerland. The rest of the western world is utterly ruled by politicians and governments with ever more control over _their_ population's private life and money, and by some who shout "democracy", deluded into thinking they have any control over anything through voting, lol.
It’s hard to know what the new information here is. Altman’s history has been reported on exhaustively.
Quite a few people have left OpenAI over the years, over the safety abandonments, the non-profit status change, the deception, etc., but there is too much money involved. Here lies the actual rub. A lot of the people involved and named in the article are reprehensible: the Kushners, the Saudis, the Emiratis, the PayPal mafia, VC folks with god complexes. But as long as they have the money, we have to dance to their tune.
We really, really need a way for our society to be more equitable and to hold these people responsible.
It’s less about trusting one person and more about the structure. AI is concentrating capital, compute, and talent into a few hands; we’ve seen this before with railroads, oil, and semiconductors. It brings innovation, and also pricing power and political influence.
Would you trust a guy who controls a magical orb that answers everyone's questions for free and satisfactorily enough that people basically pay money to talk more to it, to use it responsibly? I won't.
One thing that stands out when reading profiles like this is the number of positive and negative descriptions of the subject that agree. For example, there seems to be little dispute that Altman will happily say something that he knows/believes isn't true, there's just a lot of people who are willing to forgive any lies if the lies are in service of something they themselves agree with.
We focus these critiques far too much on the face rather than the underlying mechanics. Just like in politics, we critique the personality/politician yet the underlying system architecture evades it.
Sam Altman clearly has a long history of nefarious activity. But the underlying threat posed by AI to society, the economy, and human freedom persists with or without his presence.
> underlying threat posed by AI to society, the economy and human freedom persists
I would deny that AI poses any such threat. There are actors who would use the tool in ways that threaten as you described, but that is a threat from said actor, not AI - unless you're claiming that an AGI would be capable of such independent actions.
AI is similar in transformative power to the internet - it might even be greater, if it becomes more commonly available throughout the world. Whether that transformative power does good or bad really depends on the people wielding it, not on the tech. I would bet on the future being better because of AI rather than imagining a worse future and acting to stunt the tech.
> I would deny that AI poses any such threat. There are actors who would use the tool in ways that threaten as you described, but that is a threat from said actor, not AI
Of course, it is popular to deny it. People constantly tell themselves "it is people, not tech". That is a valid yet banal and inconsequential statement; the distinction has no bearing on reality.
> So you're saying that if people hadn't invented weapons, there would be no violence?
If anything, if people hadn't invented weapons, they would not use weapons to enact violence, and that in turn would change the practical nature of violence.
> The claim that AI is itself dangerous has no merit.
My claim is that considering any technology by itself is pointless. There is no such thing as a thing by itself. Technology always exists in a structural setting, and in turn shapes that structure.
Or perhaps, the underlying threat is personified by Altman, in that our country has repeated and widespread institutional failures to hold the wealthy accountable for wrongdoing.
The threat of AI is, after all, driven by the people who use it.
>But the underlying threat posed by AI to society, the economy and human freedom persists with or without his presence.
Without Sam Altman the compute and improvements for LLMs to be a threat wouldn't have readily existed at all. He was the one who got the ball rolling because of his desperation (SVB collapsed right before the hype bubble started), ego, and quasi-religious desires.
Beyond the question of should we trust Sam Altman to control our future - why on Earth should we want any single individual to control our future at all?
> In 2017, Amodei hired Page Hedley, a former public-interest lawyer, to be OpenAI’s policy and ethics adviser. In an early PowerPoint presentation to executives, Hedley outlined how OpenAI might avert a “catastrophic” arms race—perhaps by building a coalition of A.I. labs that would eventually coördinate with an international body akin to NATO, to insure that the technology was deployed safely. As Hedley recalled it, Brockman didn’t understand how this would help the company beat its competitors. “No matter what I said,” Hedley told us, “Greg kept going back to ‘So how do we raise more money? How do we win?’ ” According to several interviews and contemporaneous records, Brockman offered a counterproposal: OpenAI could enrich itself by playing world powers—including China and Russia—against one another, perhaps by starting a bidding war among them. According to Hedley, the thinking seemed to be, It worked for nuclear weapons, why not for A.I.?
I wonder if Sam might abandon the ship soon. Other co-founders already did.
The main reason is that he gets all the downsides without the upsides. I know $5B is a lot but, for a $700B company, it isn't. If OpenAI were a regular for-profit, he would have been worth >$100B already.
This is probably one of the significant factors why other co-founders left too. It's just a lot of headaches with relatively low reward.
But nobody is going to just gift him the same valuation on the next company. It's not like his execution is OpenAI's moat right now. So where would he be going that's a better deal for him?
Founding his own company would be one alternative. Full control. No stigma from the non-profit part. He would probably get the same paper money as he has now at OpenAI.
What is the value he adds anyway, being a delusional cult leader where most people around him characterize him as a sociopath?
Is it just his ability to lie and create fear-hype?
It's not like he had anything to do with the technical achievements, except convincing the engineers that they were doing something valuable, but the cat is out of the bag on that.
The fact that some (usually toxic) individuals get there shows that the system is flawed.
The fact that those individuals feel like they can do anything other than shut up, stay low and silently enjoy the fact that they got waaaay too much money shows that the system is very flawed.
We shouldn't follow billionaires, we should redistribute their money.
If someone founds a company, grows it and owns $1bn of its stock, they don’t have $1bn in cash to distribute. They have a degree of control over the economic activity of that company. Should that control be taken away from them? Who should it be given to?
I can see an argument when it comes to cashing out, but I’m not clear how that should work without creating really weird incentives. Some sort of special tax?
Well yeah. After some amount, you get 100% taxes. So that instead of having billionaires who compete against each other on how rich they are or on the first one to go contaminate the surface of Mars or simply on power, maybe we would end up with people trying to compete on something actually constructive :-). Who knows, maybe even philanthropy!
So, who owns and runs the companies? How do new companies get formed?
I'm not against higher taxation of the wealthy. I think inequality is a serious problem. The issue is that the wealth of these people isn't a big pile of cash they are wallowing in; it's ownership of the companies they build and operate. Is that what we want to take away? How, and what would we do with it?
I think it makes more sense to tax it as that power is converted into cash. I'm not clear how a wealth tax should work.
> I think it makes more sense to tax it as that power is converted into cash
Yeah, that makes sense to me. And those are all good questions of course :-).
> So, who owns and runs the companies?
I guess ownership stays the same; we just need to prevent the companies from growing too big. Because the bigger they are, the more powerful their leaders get, for one (aside from all the problems coming from monopolies). But by taxing them, we prevent the people owning those companies from owning 15 yachts and going to space for breakfast :D.
> How do new companies get formed?
I don't know if that's what you mean, but I often hear "if you prevent those visionaries from becoming crazy rich, nobody will build anything, ever". And I disagree. A ton of people like to build stuff knowing they won't get rich. Usually those people have better incentives (it's hard to have a worse incentive than "becoming rich and powerful", right?).
Some people say "we need to pay so much for this CEO, because otherwise he will go somewhere else and we won't have a competent CEO". I think this is completely flawed. You will always find someone competent to be the CEO of a company with a reasonable salary. Maybe that person will not work 23h a day, maybe they won't harass their workers, sure. But will it be worse in the end? The current situation is that such tech companies are "part of the problem, not of the solution" (the problem being, currently, that we are failing to just survive on Earth).
Big agree, at a certain point a company is big enough that their impact has to be managed democratically. I don't have an issue with effective leaders, the problem is that we reward a certain kind of success with transferable credits that don't necessarily align with people's actual talents or skills.
I want skilled institutional investors who have a track record of making smart bets. I don't want a random person who happened to get lucky in business dictating investment policy for substantial parts of the economy. I want accountability for abuses and mismanagement.
I know China gets a bad rep, but their bird cage market economy seems a lot more stable and predictable than this wild west pyramid scheme stuff we do in the US. Maybe there are advantages for some people in our model, but I really dislike the part where we consistently reward amoral grifters.
> Big agree, at a certain point a company is big enough that their impact has to be managed democratically.
100%. First, a company should not be that big. The whole point of antitrust was to avoid that. The US failed at it, for different reasons, and now ends up with huge tech monopolies. And it's difficult to go back because they are so big now.
BTW I would recommend Cory Doctorow's book about those tech monopolies: "Enshittification: why everything suddenly got worse and what to do about it". He explains extremely well the antitrust policies and the problems that arise when you let your companies get too big. It's full of actual examples of tech we all know. He even has an audiobook, narrated by himself!
Well, redistributing their money is (in some cases disingenuously) exactly how they are able to pitch investors. "Sure, value my company at $10B and my shares make me $2B, but we're alllllll gonna make money when we hit AGI!!!" That kind of thing.
Sure, I understand why the people around them who benefit from it also want to do that.
My point is that it all only benefits a few people. Those people used to call themselves "kings", appointed by god. Now they are tech oligarchs. If the people realised that it was bad to have kings, eventually maybe they will realise that it is bad to have oligarchs?
You come at the king, you best not miss. Unfortunately, having survived a coup, his odds of surviving the next have improved. Now he knows how they go, what to look for and how he might handle them. I wouldn't bet on him being kicked out, at least while OpenAI is still on top. If OpenAI stumbles and Anthropic or another starts to prevail, only then would I bet on Sam getting pushed out.
I don't trust anyone who claims that LLMs today are superhumanly intelligent. All they do is perform compute-intensive brute-force attacks on the problem/solution space and call it 'reasoning', all while subsidising the real costs to capture the market. So much SciFi BS and extrapolation about a technology that is useful if adopted with care.
This technology needs to become a commodity to destroy this aggregation of power between a few organizations with untrustworthy incentives and leadership.
Your brain is performing "compute-intensive brute-force attacks on the problem/solution space" as you read this very sentence. You have been training patterns of English syntax, structure, and semantics since you were a child, and they are supporting you now with inference (or interpretation). And, for compute efficiency, you probably have evolution to thank.
People like to say this as if it were apples to apples, but the comparison isn't remotely how the brain actually works - and even if it were, the brain does it automatically, without direction, and at an infinitesimal percentage of the power required.
And we're just talking about cognition. It completely ignores the automatic processes: maintaining and regulating the body and its hormones, coordinating and maintaining muscles, visual/spatial processing taking in massive amounts of data at a very fine scale and informing the body what to do with it - I could go on.
One of the more annoying things about this conversation is you don’t even need to make this argument to make the point you’re trying to make, but people love doing it anyway. It needlessly reduces how amazing the human brain is to a bunch of catchy sci fi sounding idioms.
It can be simultaneously true that transformer based language models can be very smart and that the human brain is also very smart. It genuinely confuses me why people need to make it an either/or.
Thank you, this comparison has been a huge annoyance of mine for the past 3 years of... this same debate over and over.
I think it's the hubris that I find most offensive in this argument: a guy knows one complex thing (programming) and suddenly thinks he can make claims about neuroscience.
Human cognition is nothing like AI "cognition." It really bothers me that people think AI is doing the same thing the human mind does. AI is more like a parrot trained to give a correct-looking response to any question. The parrot doesn't think and doesn't know what it's doing; it just does it because it gets a treat every time a "good" answer is prompted. This is why it can't do things like tell whether the parentheses here are balanced: ((((()))))) (you can test this). It doesn't have any kind of genuine cognition.
I've wondered about this. Do we really know enough about what the human brain is doing to make a statement like this? I feel like if we did, we would be able to model it faithfully and OpenAI, etc. would not be doing what they're doing with LLMs.
What if human cognition turns out to be the biological equivalent of a really well-tuned prediction machine, and LLMs are just a more rudimentary and less-efficient version of this?
Yes, we do. Humans share the statistical association ability that LLMs possess, but also conscious meaning and understanding. This is a difference in kind and means that we can generalize beyond the statistical pattern associations that we've extracted from data, so we don't require trillions of examples to develop knowledge.
Theoretically, a human could sit alone in a dark room, knowing nothing of mathematics, and come up with numbers, arithmetic, algebra, etc.
They don't need to read every math textbook, paper, and online discussion in existence.
Pre-training is not a good term if you are trying to compare it to LLM pre-training. Closer would be the model's architecture and learning algorithms, which have been designed through decades of PhD research; and my point on that is that the differences are still much greater than the similarities.
I love reading posts like this. When you were a child, learning math or grammar, do you not remember bouncing off the walls of incorrect answers, eventually landing on a trajectory down the corridor of the right answer? Or were you always instantly zero-shotting everything?
In my experience, this is exactly how language models solve hard new problems, and largely how I solve them too. Propose a new idea, see if it works, iterate if not, keep going until it works.
Of course you can see how to solve a problem that you've seen before, like a visual puzzle about balanced parentheses. We're hyper specialized to visually identify asymmetries. LMs don't have eyes. Your mockery proves nothing.
The mistake in these types of arguments is that natural, classical-artificial, and/or neural-net-artificial learning methods all employ some kind of counterexample/counterfactual reasoning, but their underlying methods could well be fundamentally different. Thus these arguments are invalid, until computer science advances enough to explain what the differences and similarities actually are.
This is such a boring cliche by now. "thinking" and "knowing what it's doing" are totally vague statements that we barely understand about the human mind but in every comment section about AI people definitively state that LLMs don't do them, whatever they are.
This is the epitome of learned helplessness, that you need a neuroscience paper to tell you what thinking and knowledge is when you experience it directly all the time, and can't tell that an LLM doesn't have it.
Something is extremely evil about these ideologies that are teaching people that they are NPCs.
> Human cognition is nothing like AI "cognition." It really bothers me that people think AI is doing the same thing the human mind does.
This might sound callous, but I wonder if people saying this themselves have very limited brains, more akin to stochastic parrots than the average homo sapiens.
We are very different, and there are some high-profile people who don't even have an internal monologue or self-introspection abilities (one of the other symptoms is having an egg-shaped head).
> This might sound callous, but I wonder if people saying this themselves have very limited brains, more akin to stochastic parrots than the average homo sapiens.
I have a different theory.
Aside from a few exceptions like Blake Lemoine few people seem to really act as if they believe A.I. is doing the same thing the human mind is doing.
My theory is people are for some reason role-playing as people who believe human thought is equivalent to A.I. for undisclosed reasons they themselves may or may not understand. They do not actually believe their own arguments.
> All they do is perform compute-intensive brute-force attacks on the problem/solution space and call it 'reasoning'
If they discover the cure to cancer, I don't care how they did it. "I don't trust anyone who claims they're superhumanly intelligent" doesn't follow from "all they do is <how they work>".
> "I don't trust anyone who claims they're intelligent" doesn't follow from "all they do is <how they work>".
It kind of does if how they work is nothing like genuine intelligence. You can (rightly) think AI is incredible and amazing and going to bring us amazing new medical technologies, without wrongly thinking its super amazing pattern recognition is the same thing as genuine intelligence. It should be worrying if people begin to believe the stochastic parrot is actually wise.
I can slow down the compute by a factor of a thousand. It would not change the result, but it changes the economics. We only call it intelligent because we can do the backpropagation and the inference (and training) fast enough, and with enough memory, for it to appear this way.
If LLMs can come up with superhumanly intelligent solutions, then they're superhumanly intelligent, period. Whether they do this by magic or by stochastic whatever doesn't make any difference at all.
That's moonshot logic that reinforces the parent's point. You'd absolutely care if the AI's cure to cancer entailed full-body transplants or dismemberment.
"The cure for cancer" as a phrase doesn't include those solutions. If the headline was "Pope discovers the cure for cancer" and those were his solutions you would say "No he didn't." OP was referring to AI discovering the cure for cancer that cancer research is working towards.
If all they do is "just" brute-force problem solving, then they are already bound to take over R&D & other knowledge work and exponentially accelerate progress, i.e. the SciFi "singularity" BS ends up happening all the same. Whether we classify them as true reasoning is just semantics.
The New Yorker prefers insure to ensure. They have a unique house style. I commented on another thread about alternative spellings like vender instead of vendor, too.
That M-W entry literally says they're different words with different meanings:
> They are in fact different words, but with sufficient overlap in meaning and form as to create uncertainty as to which should be used when.
> We define ensure as “to make sure, certain, or safe” and one sense of insure, “to make certain especially by taking necessary measures and precautions,” is quite similar. But insure has the additional meaning “to provide or obtain insurance on or for,” which is not shared by ensure.
To be fair, I use “ensure” myself, but it’s just one of several quirky elements of the New Yorker’s style, along with the diaeresis on repeated vowels with different sounds (like in reëmerge or coöperate), several uncommon spellings, and unusual conjoinings like “teen-ager” and “per cent.” It’s part of the charm, I suppose.
My tendency is to believe that the individuals don't matter as much when it comes to the biggest risks. I'm not sure if this is a bias or a theory... but I lean toward some sort of "medium is the message" determinism.
>"He acknowledged that the alignment problem remained unsolved, but he redefined it—rather than being a deadly threat, it was an inconvenience, like the algorithms that tempt us to waste time scrolling on Instagram."
Before "don't be evil" was a cliche, I think it was a real guiding principle at Google and they built a world class business that way.
Facebook's rival ad platform didn't have search queries to target ads at. Aggressive utilization of user data was the only way they could build an Adwords-scale business. As they pushed this norm, Google followed.
Doomscroll addiction gets a lot of attention because engineers and journalists have children and parents. There are other risks though. Political stability, for example.
By the early 2010s, smartphones were reaching places that previously had almost no modern media, often powered by FB-exclusive data plans. The Arab Spring happened, then ISIS. FB-centric propaganda seemingly played a major role in a major conflict/atrocity in Burma. Coups in Africa powered by social-media propaganda. Worrying political implications in the West. Unhinged-uncle syndrome. Etc. Social media's risks and implications were more than just an "inconvenience."
At no point did we really see tech companies go into mitigation mode. Even CYA was relatively limited. There was no moment of truth. It was business as usual.
So... I think OpenAI's initial charter was naive. Science fiction almost. It was never going to withstand commercial reality, politics, competition and suchlike. I think these are greater than the individuals involved.
That doesn't mean we should ignore, excuse or otherwise tolerate lack of integrity. But, I don't think it is a way of reducing risk.
Whether the risk is skynet, economic turmoil, politics, psych epidemics or whatever... I don't think the personal integrity of executives is a major factor.
> Many technology companies issue vague proclamations about improving the world, then go about maximizing revenue. But the founding premise of OpenAI was that it would have to be different.
Isn't this really what everything is about? A pure research non-profit transitioned to a revenue generating enterprise because it had to, and a lot of people don't like that. Does that make it evil?
It's romantic to think that the magic of science and research can stand on its own, but even Ilya has admitted more recently that SSI needs to ship something consumer facing.
Anthropic, the lab that put all of its social capital in the safetyism basket, is having the exact same realization, with Claude Code being a mess of technically reckless vibe coded slop that nevertheless is the cash cow for the company.
Maybe it's time for everyone to realize that for an innovation this big to come to fruition, it either needs to be state funded or privately funded, the latter requiring revenue and a plausible vision of generating ROI.
This is the problem with propaganda, you have been told that he was evil as most Indians are led to believe but for people in Sri Lanka he was a great leader.
It set off the flamewar detector. I've turned that off now.
I only saw this thread by chance and almost didn't look, because the title made the piece sound like a flamebait blog post. Fortunately I saw newyorker.com beside the title and looked more closely.
There is dwindling space for sincere independent accountability reporting on big tech like this to a) be created, since it's incredibly resource-intensive and so many resources flow from Silicon Valley, and b) actually reach people, since more platforms are now owned or otherwise influenced by interested parties.
Thank you for looking. Please do spread this kind of reporting in your communities, and subscribe to investigative outlets when you can.
> OpenAI has closed many of its safety-focussed teams
A paper with "ideas to keep people first" was (coincidentally?) published today:
• Worker perspectives
• AI-first entrepreneurs
• Right to AI
• Accelerate grid expansion
• Accelerate scientific discovery and scale the benefits.
• Modernize the tax base
• Public Wealth Fund
• Efficiency dividends
• Adaptive safety nets that work for everyone
• Portable benefits
• Pathways into human-centered work
You can see the vote history here[1]. It's always hard to know exactly why something gets buried. I was a little sad to see the story down-ranked when I saw that you were here in the comments.
But the discussion is generally pretty low quality with these sort of posts. People react without having read the story, or with whatever was on their mind already, or are insubstantive, or simply low effort. I don't think you'll lose k-factor not having a bigger post here.
Sometimes if you talk to the mods, they'll let you know their perspective. I generally find they're correct that people are much better at contributing/disseminating new knowledge to the world on more technical topics here.
Yes, I was surprised that it was downranked when I saw that too. Then I realized it had set off the flamewar detector and it was a simple matter to turn it off. I'm glad we got to this in time, because sometimes we don't, and this was an important case not to miss.
But isn't that circular? If the ranking algorithm used by the mods tends to devalue articles like this because they don't trust the user base to comment intelligently, doesn't that alter the culture of this site to make that more true?
I'm not sure what big_toast meant, but we do trust the user base to comment intelligently (which sometimes works and sometimes not), and we don't devalue articles like this.
We do tend to devalue titles like this, or more likely change them to something more substantive (preferably using a representative phrase from the article body), but I'm worried that if I did that here we would get howls of protest, since YC is part of the story.
I'm sure you're sick of comments about moderation, but I will say, this makes me more sympathetic to the position you're in.
It's an interesting dilemma. Many very respected publications use provocative titles because of the attention economy. And I'm sure you have good data that provocative titles lead to drive-by comments and flame wars.
But I don't think big_toast was entirely wrong that there is a side effect of sometimes burying articles that are by their nature provocative. And how do you distinguish a flame war over a title from a flame war over content? That's not a leading question. I don't know.
For us the litmus test isn't the title, it's whether the article itself can support a substantive discussion on HN. If yes, then we'll rewrite the provocative title to something else, as I mentioned. Ironically this often gives the author more of a voice because (1) the headline was often written by somebody else, and (2) we're pretty diligent about searching in the article itself for a representative phrase that can serve as a good title.
If, on the other hand, the title is provocative and the article does not seem like it can support a substantive discussion on HN, we downweight the submission. There are other reasons why we might do that too—for example, if HN had a recent thread about the same topic.
How do we tell whether an article can support a substantive discussion on HN? We guess. Moderation is guesswork. We have a lot of experience so our guesses are pretty good, but we still get it wrong sometimes.
In the current case, the title is baity while the article clearly passes the 'substantive' test, so the standard thing would have been to edit the title. I didn't do that because, when the story intersects with YC or a YC-funded startup, we make a point of moderating less than we normally do.
I know I'm repeating myself but it's pretty random which readers see which comments, and redundancy defends against message loss!
For those curious about how sama got to where he got and stayed on top for so long, I recommend you read the book: The Sociopath Next Door by Martha Stout.
I am fairly confident when I say this -- sama is a sociopath. I don't know how anyone with solid intuition could even come to any other conclusion than the guy is deeply weird and off-putting.
Some concepts from the book:
> Core trait: The defining characteristic is the absence of conscience, meaning they feel no guilt, shame, or remorse.
> Identification: Sociopaths can be charming and appear normal, but they often lie, cheat, and manipulate to get what they want.
> The Rule of Threes: One lie is a mistake, two is a concern, but three lies or broken promises is a pattern of a liar.
> Trust your instincts over a person's social role (e.g., doctor, leader, parent)
Check and check.
OpenAI is too important to trust sama with. He needs to go. In fact, AI should be considered a public good, not a commodity pay-as-you-go intelligence service.
I suspect there's some other category, which isn't really a sociopath and isn't really a not-sociopath, which we don't have a good definition for.
We only say a lot of CEOs are sociopaths because they're in that third category we haven't named, where they're very good at manipulating people but can also feel conscience, guilt, remorse, etc., perhaps just muted or easier to rationalize away.
E.g. if you think you're doing something for the betterment of mankind, it doesn't really matter if you lie to some board members some year during the multi-decade pursuit.
I doubt most sociopaths, when they’re honest, would agree they feel much guilt or remorse at all.
Whereas the people in the category I’m describing might feel those things, but prioritize those feelings far below the benefits of achieving what they set out to achieve.
I was with you right up until the final paragraph, but this made me do a double take:
> OpenAI is too important to trust sama with.
...wat? They made a chat bot. How can that possibly be so existentially important? The concept of "importance" (and its cousin "danger") has no place in the realistic assessment of what OpenAI has accomplished. They haven't built anything dangerous, there is no "AI safety" problem, and nothing they've done so far is truly "important". They have built a chat bot which can do some neat tricks. Remains to be seen whether they'll improve it enough to stay solvent.
The whole "super serious what-ifs" game is just marketing.
Yeah the whole fearmongering is clearly just marketing at this point. Your LLM isn't going to suddenly gain sentience and destroy humanity if it has 10x more parameters or trains on 10x more reddit threads.
I'm not even sure we're any closer to AGI than we were before LLMs. It's getting more funding and research, but none of the research seems very innovative. And now it's probably much more difficult to get funding for anything that's not a transformer model.
> I'm not even sure we're any closer to AGI than we were before LLMs.
I mean this is very obviously untrue. It'd be like saying we aren't any closer to space flight after watching a demonstration of the Wright Flyer. Before 2022-2023 AI could barely write coherent paragraphs; now it can one-shot an entire letter or program or blog post (even if it's full of LLM tropes).
Just because something is overhyped doesn't mean you have to be dismissive of it.
Or we could be stuck here for decades pending a breakthrough nobody alive today can even conceive of, or we could be compute limited by a half dozen orders of magnitude. Or it could happen next week. That's the nature of breakthroughs--you just can't have any idea when or how (or if) they'll happen.
In hindsight there's an obvious evolutionary pathway from the Wright Flyer to Gemini/Apollo/Soyuz.. but at the time, in 1903, there absolutely was not, and anyone telling you so would have been a crank of the highest degree. So it may turn out that LLMs have some place on the evolutionary path to AGI, or it could turn out they're a dead end like Cayley's ornithopters. Show me AGI first, then we can discuss whether LLMs had something to do with it.
In order to get to space, you must first be capable of flight through the atmosphere. That should be apparent to anyone even then because the atmosphere is in between space and the ground.
Regardless of whether spaceflight is still 1000 or 100 or 50 years away, you are still closer than you were before you demonstrated the ability to fly.
Excellent article, truly well-researched. As someone close to a pathological liar [1], the idea that such a person could be at the forefront of the creation of an artificial superintelligence confirms all the existential risks of such a piece of technology, and how naïve, if not ignorant, the average starry-eyed tech worker and investor is about this whole endeavour. It's easy to believe there is a lot of idealism and wish for a better world, but what lies underneath, the greedy drive for money and power, is excellently summarized in Greg Brockman's own thoughts: “So what do I really want? [...] Financially what will take me to $1B.”
Literally, the only hope for humanity is that large language models prove to be a dead-end in ASI research.
---
1: “He’s unconstrained by truth,” the board member told us. “He has two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone.” — I guess now I know of two people with these traits.
Oh, I agree that's the correct answer. I just don't see the article actually ending up with that answer. I see it waffling. Basically, the article ends up saying that, well, we told you about all this dodgy stuff, but what he's doing is working.
I think you are misunderstanding the point of journalism. Whether the title should be such a question can be debated. Nevertheless, the article should just present information, ideally in a balanced way, without the author's bias, so that you can decide for yourself. You can see the attempts at balance in the article, where an allegation or statement about Altman is followed by parentheses saying that Altman recalls the exchange differently or does not remember.
> the article should just present information, ideally in a balanced way, without author's bias, so that you can decide for yourself.
I get that this is the claimed ideal of journalism, at least for straight reporting. The problem is that it's impossible.
There isn't time or space to present all the information; the journalist has to filter. And filtering is never unbiased. Even the attempt to be "balanced" is a bias--see next item.
"Balanced" always seems to mean "give equal time and space to each side". But what if the two sides really are unbalanced? What if there's a huge pile of information pointing one way, and a few items that might point the other way if you believe them--and then the journalist insists on only showing you a few items from the first pile, so that the presentation is "balanced"? You never actually get a real picture of the facts.
There's a story that I first encountered in one of Douglas Hofstadter's books, about two kids fighting over a piece of cake: Kid A wants all of it for himself, Kid B wants to split it equally. An adult comes along and says, "Why don't you compromise? Kid A gets three-quarters and Kid B gets one-quarter." To me, the author of this article comes off like that adult.
In any case, all that assumes that this article is supposed to be just straight reporting, no opinion. For which, see the next item.
> It can be debated whether the title should be such a question.
Yes, it certainly can. If this article is just supposed to be straight reporting--no editorializing--then that title is definitely out of place. That title is an editorial--and the article either needs to own that and state the conclusion it's trying to argue for, or it shouldn't have had that title in the first place.
> "Balanced" always seems to mean "give equal time and space to each side".
I agree with you that this seems to be the idea people have when "balanced" is mentioned. I don't think it is correct. You can easily have a balanced article in which most of the evidence points one way. I think this article is like that: a boatload of pointers towards Altman being a sly person, with reporters asking him about those exchanges and him basically shrugging each time.
The journalists' credibility is doing quite a bit of lifting here, as we have to trust that they put in the effort. One such example is the molestation accusations, which the reporters say they looked into heavily without being able to find any corroborating evidence.
> You never actually get a real picture of the facts.
Yes, it is a fundamental impossibility in lots of cases. That's why we trust the reporters that they did as good a job as they could to present all pertinent information.
> That title is an editorial ...
I do not perceive it as editorialised. It states an arguably real possibility that Altman may have, or does have, lots of real power. I am guessing you believe that the "can he be trusted" part is an editorialisation pointing towards him being untrustworthy. If that is the case, I think that is your own bias, your prior sense that he is probably not trustworthy, speaking. I see it as just an objective question.
Imagine a different situation: there are local elections in your small town. There is a new mayoral candidate, and during the next term there will be some money to be given to residents for renovations and such, but not enough for everyone. You don't know this candidate. A local reporter, whom you trust, writes an article "New mayor candidate favoured in polls - will he be fair with the renovation money?". It is a piece trying to shed light on who this candidate is as a person, what his life was before moving to your town, etc., so that voters like you can decide whether to give him your vote. It is not editorialised, as it does not point either way.
I don't trust him. He already made statements that convinced me I don't want to touch anything he controls. In a way it is similar to Meta and co. For some reason the US corporations behave very suspiciously once past a certain threshold size. With Win11 from Microsoft I always wonder whether there is a not-so-hidden subagenda in place.
Fuck no! Of course he can't be trusted. We know that. Nobody questions that. We know that about most of the "elites" running the show.
We're just in this shitty pit of despair where people are desperate. It's difficult to campaign for good when you're struggling and capital can jerk people around.
People pursue good for the sake of good at cost to themselves when times are very good or times are very, very bad.
The last quote, to a layperson, may sound completely sinister, but therein lies a deep and open computer science question: AIs really do seem to get their special capabilities from having a degree of freedom to output wrong and false answers. This observation goes all the way back to some of Alan Turing's musings on how an AI might one day be possible. And then there were early theorems related to this e.g. PAC learning. I'd love to know about what's happened since on this aspect, such as the role of noise and randomness, and maybe even hallucinations are a feature-not-bug in a fundamental sense, etc.
This is unfair to the original article, which is well-researched and worth a read. But the answer to this question is _always_ no. Nobody should have as much power as the oligarch class currently does, however inscrutable that power may be.
I don't even need to read the article to know that he unequivocally can't be trusted. Every action he's taken to this point has shown he will say literally anything to get what he wants.
Well I just canceled my Claude Pro subscription because of the mysterious limits that I don't experience with codex, even after paying for "extra usage". If Anthropic can't figure out their capacity problems they are in trouble.
I noticed that Apple speech to text has gotten pretty good lately. Is that because they’re paying Google? Not sure I use other AI features from Apple as I have my Siri turned off.
You might be. Or at least I feel like Gemini is actually dumber than a house of bricks - I have multiple examples, just from last week, where following its advice would have led to damage to equipment and could have hurt someone. That's just from trying to work on an electronics project and asking Gemini for advice based on pictures and schematics - it just confidently states stuff that is 100000% bullshit, and I'm so glad that I have at least a basic understanding of how this stuff works or I would have easily hurt myself.
It's somewhat decent at putting together meal plans for me every week, but it just doesn't follow instructions and keeps repeating itself. It hardly feels worth any money right now; it's like some kind of giant joke that all these companies are playing on us, spending billions on these talking boxes that don't seem that intelligent.
I also use Claude at work, and for C++ programming it behaves like someone who read a C++ book once and knows all the keywords but has never actually written anything in C++ - the code it produces is barely usable, and only in very, very small portions.
Edit: I just remembered another one that made me incredibly angry. I've been reading Neuromancer on and off, and when I got back into it, to remind myself of the plot I asked Gemini to summarise the plot only up to chapter 14, and I specifically included the instruction that it should double-check it wasn't spoiling anything from the rest of the book. Lo and behold, it just printed out the summary of the ending and how the characters' actions up to chapter 14 relate to it. And that was in the "Pro" setting too. Absolute travesty. If a real-life person did that I'd stop being friends with them, but somehow I'm paying money for this. Maybe I'm the clown here.
I just asked like I said, give me plot summary until chapter 14, don't spoil the rest of the book. And of course when I told it what it just did it was like oh I'm sorry, here's a summary without the spoilers for the ending. So clearly it could do it without additional context.
>>Do they even have direct access to published works to use as reference material?
I mean, clearly, given that it did answer my question eventually. Also wasn't it a whole thing that these models got trained on entire book libraries(without necessarily paying for that).
>>I wouldn't expect any LLM to be able to respect such a request
Why though? They seem to know everything about everything, why not this specifically. You can ask it to tell you the plot of pretty much any book/film/game made in the last 100 years and it will tell you. Maybe asking about specific chapters was too much, but Neuromancer exists in free copies all over the internet and it's been discussed to death, if it was a book that came out last year then ok, fair enough, but LLMs had 40 years of discussions about Neuromancer to train on.
But besides, regardless of everything else - if I say "don't spoil the rest of the book" and your response includes "in the last chapter character X dies" then you just failed at basic comprehension? Whether an LLM has any knowledge of the book or not, whether that is even true or not, that should be an unacceptable outcome.
I wouldn't expect an AI to know exactly what happens in every chapter of a book.
Knowing the plot of Neuromancer isn't the same as being able to recite a chapter by chapter summary.
I tried this Neuromancer query a few times and the results vary greatly with each regeneration, but "do not include spoilers" seems to make Gemini give more spoilers, not less.
It is disconcerting how Altman has used "AI safety" as a marketing tool. The more people imagine the universe turned into paperclips, the more they invest. Obviously Altman doesn't care about safety (I don't either; I'm not an AI-doomer). But he truly does come across as someone incapable of telling the truth. Are you even a liar if honesty is not in the set of possible outcomes?
Still, there's something oddly reassuring here: if you believe "AI safety" is essentially a buzzword (as I do), then this whole affair comes down to people squabbling over money and power. There really is nothing new under the sun.
"This thing might destroy humanity - we need to build it ASAP" does not really make sense. But it enthrall[s/ed] many smart researchers who would normally demand specific, testable claims and logical responses to those claims.
We have drastically escalated the claims necessary to motivate startup employees. It used to be that you could merely dangle an interesting problem in front of a researcher. Then you could earn millions, then billions. TAMs in the trillions. AGI will destroy humanity unless you, personally, step in. Elon is talking about Kardashev III civilizations. The universe cannot bear the hype being loaded upon it.
I agree with you completely, but the way I see it Anthropic are x100 worse when it comes to amplifying this doomer bs for marketing. It’s their whole shtick.
I would really appreciate it if someone in the know could explain to me how a Markov chain with some backpropagation can surpass human cognition. Because right now I call BS.
This whole situation goes to show that yesterday's conspiracy theorists are today's realists. What's happening to the USA's leadership as a country, and what's happening with their top companies, is really scary for the rest of us. If this trend continues, we're all definitely gonna end up in a kleptocracy.
Not enough people know about her and her allegations towards him. It’s sad to see that so many of the rich and powerful literally just can’t stop raping people. Epstein, Trump, Elon, scam Altman. How many more people have to be implicated?
Excellent work. I’ll have to wait until we get the print version delivered to finish, as I’m not signed into The New Yorker on my phone.
I’ve always been a huge fan of Ronan Farrow’s journalism and willingness to speak truth to power. I think he’s pulling at exactly the right thread here, and it’s very important to counteract Altman’s reputation laundering given that we run a very real risk of him weaseling his way into the taxpayer’s wallet under the current administration.
I suspect that they are perfectly capable of clicking an archive link or better yet logging in as they are already a subscriber. Maybe, like me, they enjoy reading the physical magazine.
Disclaimer: I have no association with any AI company and have never met Altman or any of the other top AI scientists.
The real question is: can anyone be trusted if the fever dreams of super-intelligence come true? Go ahead and replace Sam Altman with someone else - will it make a difference? Any other CEO is going to be under the same overwhelming pressure to make a profit somehow. I think the OpenAI story is messier because it was founded for supposedly altruistic reasons, and then changed.
Methinks many of Altman's detractors protesteth too much. He's doing his job as it is defined (make OpenAI profitable.) Nothing of substance in this article seemed to make him exceptionally "sociopathic" compared to any other tech CEO. It goes with the territory.
What depressed me most is that trillions of dollars are being raised for building what will undoubtedly be used as a weapon. My guess is the ROI on that money is going to be extremely bad for the most part (AI will make some people insanely rich, but it is hard to see how the big investors will get a return.) Could you imagine if the world shared the same vision for energy infrastructure (so we could also stop fighting wars over control of fossil fuels and spewing CO2?) A man can dream...
Seeing Sam Altman slowly degrade into the realization that he is in fact not as smart as others in this space has been fascinating to watch. He used to speak with enthusiasm and confidence and now he’s like a scared little boy who got in way too deep.
The last person this happened to was Sam Bankman-Fried, as investors and regular folk finally realized he was full of complete shit and could only talk the game for so long until the truth emerged.
They were both pretty smart in certain ways. Altman is very good at being manipulative and raising money, though he seems so-so on the tech. Bankman-Fried was smart at crypto and the like but ethically challenged on the "don't steal your customers' money" part.
Meh. I’m no particular fan of Altman but there’s nothing in this article particularly surprising or terrible.
The whole AI safety thing has always seemed extreme to me and has turned out to be a storm in a teacup. All those prominent people who used to tell us how AI will end humanity seem to have stopped talking about it.
I get the sense that Altman is not a particularly likeable person, but Bill Gates and Steve Jobs both seem to have scored a 10/10 on the “is this guy a jerk” rating; it’s common for tech CEOs.
So, the article and headline are dramatic but not much really there.
I think all the AI-safety-obsessed people turn out to have been the ones off course.
Quite frankly, if he went and scrubbed (or had someone scrub) a Facebook thread where I got into an argument with him in 2018 (around the last time someone did an article about him), I can only imagine how obsessive he is about controlling his past and information about it.
> The day that Altman was fired, he flew back to his twenty-seven-million-dollar mansion in San Francisco, which has panoramic views of the bay and once featured a cantilevered infinity pool, and set up what he called a “sort of government-in-exile.” Conway, the Airbnb co-founder Brian Chesky, and the famously aggressive crisis-communications manager Chris Lehane joined, sometimes for hours a day, by video and phone. Some members of Altman’s executive team camped out in the hallways of the house. Lawyers set up in a home office next to his bedroom. During bouts of insomnia, Altman would wander by them in his pajamas. When we spoke with Altman recently, he described the aftermath of his firing as “just this weird fugue.”
These sociopaths are so good at giving away nothing. He managed to engender sympathy instead of saying "I'm not gonna talk about anything that happened then".
Also, it's very weird that so many of these people are so deeply linked that they'll drop everything they're doing just to get this guy back in power. Terrifying cabal.
TLDR, but just the heading is already ugly. No single person, no matter how nice they are, should be able to control our future. Power corrupts; what fucking trust. We are supposed to be a democratic society (well, looking at what is going on around us, this is becoming laughable).
What might feel like "damage control" is more likely to be the outcome of the even-handedness you get with serious, rigorous reporting. Something the New Yorker is known for.
This article is just another typical New Yorker fluff piece that tries to look deep but misses the actual point.
The biggest flaw is that it spends way too much time on high-school-level drama and "he-said-she-said" gossip about Sam Altman's personal life instead of focusing on the actual technical and corporate capture of OpenAI.
The author treats the "nonprofit mission" like some holy quest that was "betrayed," when anyone with a brain in tech saw the Microsoft deal as the moment the original vision died. Instead of a hard-hitting look at how compute monopolies are actually forming (MSFT, AMZN, NVDA, and the circular debt dealing inflating an AI bubble that could crash the economy), we get 5,000 words of hand-wringing over whether Sam is a "nice guy" or a "liar."
Who cares???????
The board failed because they had no real leverage against billions of dollars, not because they didn't write enough Slack messages. It's a long-winded way of saying "Silicon Valley has internal politics," which isn't news to anyone here.
If you're talking about the school in Iran, that wasn't OpenAI. That was a Palantir system that pre-dates OAI by a few years, and the strike was due to a bad entry in a spreadsheet that showed the building as military housing. Which it was, a few years ago.
180 people lost their lives because of bad data in a spreadsheet, not because of AI.
Many years ago. Not "a few years ago".
Also, you could make the case that 180 people lost their lives because of an evil war, of which the USA and Israel are the aggressors. And we definitely don't talk enough about that part.
180 children lost their lives because of decisions by people in the US military (and ultimately the US government / the POTUS).
Let's not fall into the trap of adopting narratives created to wave away accountability. The spreadsheet didn't launch a missile, the spreadsheet didn't authorize the strike, and the spreadsheet didn't select the target.
Not to mention that "outdated spreadsheet" is also a hilariously anachronistic excuse for a war crime if you consider what kind of satellite technology the US has publicly acknowledged to have access to, let alone what kind of technology it is likely to have access to.
The difference between intentional premeditated murder and reckless endangerment resulting in a killing is not guilt and innocence but merely the severity and nature of a crime. Both demonstrate a callous disregard for the sanctity of human life, one just specifically seeks to extinguish it, the other merely accepts death and suffering as an acceptable outcome.
A bit of a feeling of "so what" here. Maybe he's less trustworthy than some. We have people of X trustworthiness running the government, crypto exchanges, a certain space exploration and satellite company, social media companies, and so on. We know their trustworthiness. Isn't the real issue how to cope?
What's the point of living in an advanced society if you just sit around watching it decay around you? Our ancestors fought for our indifference today, and with attitudes like yours we'll watch our children fight for it again tomorrow.
> Your point is that it's ok he's untrustworthy because lots of people in power are?
It's... weirdly a valid question. If Sam fibs as much as the next guy, we don't have a Sam problem. Focusing on him alone is, best case, a waste of resources. Worst case, it's distracting from real evil. If, on the other hand, as this reporting suggests, Sam is an outlier, then focusing on him does make sense.
I don't disagree, but at some point I think people need to understand we're dealing with laws of nature here. I mean, just look at human history: this has been a problem since the dawn of civilization...
I think if you truly understand social contract theory, how hierarchies are formed, and political theory, you'll realize that oligarchies tend to be nature's equilibrium point for settling social disputes, and that all forms of government, regardless of what they claim to be, naturally devolve towards them, as they tend to represent the highest-social-entropy (i.e., equilibrium) state. That's not to say you can't move further away from that point and towards another (supposedly ideal) form of government. You absolutely can, but it takes work. Perpetual work, which no set of "rules" can spare people from having to do in order to sustain it.
The problem, however, is that most people get complacent. They eventually tire of that work, or are ignorant, and by doing so create a power vacuum which allows things to slide back towards that state.
And so, people must decide for themselves which of several possible avenues to pursue:
#1 - Try to convince others (the masses) to join and work together to take power from the few, back to them
#2 - Find a way to join the ranks of the elite few (where, thanks to the prisoner's dilemma, unscrupulous means tend to perform better in the short term, even if at the cost of the long term. And if the elite is already corrupt, well, cooperating with it works well)
#3 - Settle for their lot in life
Unfortunately, #1 is a difficult proposition, given that it requires winning agreement among many while most decide to remain in camp #3 (for complacency/ignorance reasons). And #2 is often easier done without moral integrity, especially with the acquiescence of those in camp #3, whose behavior only helps enable these realities. That is why I think the "ecosystem," as you say, will always tend this way: towards society being controlled by an elite few who are rotten.
Robert Michels realized this, dubbed it the Iron Law of Oligarchy, and embraced his own version of #2 for himself. He came to this conclusion through his own observations and reasoning, though, rather than through historical political theory.
OpenAI is like #3 or #4 of the AI companies right now in terms of power, and last place in the court of public opinion.
I’d be more concerned about Anthropic both being in the good graces of the public and having access to all of our computers indirectly with Claude Code.
I'm not sure how much of that converts to revenue. If it's free plan users, that's just cost. You can say what you want about "creating a training data moat" but that doesn't seem like it's prevented the other labs from putting out excellent models.
Well we were talking about power and reputation and being well-known and all that. Being more ubiquitous is surely a big part of that. GP seems to think Anthropic is doing better because of the DoD thing. In my estimation, 90% of people do not care about that at all.
makes sense if you think the point of journalism is just to take everyone down a notch instead of... um... informing the public of bad actors
"the local drug-dealing pimp is so passe, we need to investigate the most upstanding members of the community just to be sure" is a frankly insane strategy
Yet when he was fired, 99% of OpenAI employees backed him and were ready to resign. That actual event/evidence is more telling than any hit-piece article.
> Yet when he was fired, 99% of OpenAI employees backed him and were ready to resign. That actual event/evidence is more telling than any hit-piece article.
It's not telling. The article documents a massive pressure campaign to get that result. There are a lot of reasons why OpenAI employees could have publicly backed him, fear being one example, and there are many others that aren't an endorsement of Altman's character.
I imagine most of them were motivated by money. OpenAI was supposed to be open. As I understand it, it was not created for shareholder profits and instead was made to benefit everyone? Hence the "Open" name. Then someone like Sam comes along who can make you incredibly rich by casually ignoring the initial mission. Would you go against this incredibly powerful billionaire, who by many accounts is not encumbered by ethical quandaries? In doing so you risk your financial freedom, and for what? OAI is already a husk of its intended purpose. Might as well get paid to be a sellout.
> OpenAI was on the verge of closing a large investment from Thrive, a venture-capital firm founded by Josh Kushner, Jared Kushner’s brother, whom Altman had known for years. The deal would value OpenAI at eighty-six billion dollars and allow many employees to cash out millions in equity.
Probably a factor in the pro-Sam camp. Hard to stand up against a big payday.
Ronan Farrow here. Andrew Marantz and I spent 18 months on this investigation. Happy to answer questions about the reporting.
Thank you for coming on HN and offering to answer questions.[a]
This is a fantastic piece, very timely, evidently well-researched, and also well-written. Judging by the little that I know, it's accurate. Thank you for doing the work and sharing it with the world.
OpenAI may be in a more tenuous competitive position than many people realize. Recent anecdotal evidence suggests the company has lost its lead in the AI race to Anthropic.[b]
Many people here, on HN, who develop software prefer Claude, because they think it's a better product.[c]
Is your understanding of OpenAI's current competitive position similar?
---
[a] You may want to provide proof online that you are who you say you are: https://en.wikipedia.org/wiki/On_the_Internet%2C_nobody_know...
[b] https://www.latimes.com/business/story/2026-04-01/openais-sh...
[c] For example, there are 2x more stories mentioning Claude than ChatGPT on HN over the past year. Compare https://hn.algolia.com/?dateRange=pastYear&page=0&prefix=tru... to https://hn.algolia.com/?dateRange=pastYear&page=0&prefix=tru...
Thank you for this, very much appreciate the thoughtful response.
The piece captures some of the anxieties within OpenAI right now about their competitive position. This obviously ebbs and flows but of late there has been much focus on Anthropic's relative position. We of course mention the allegations of "circular deals" and concerns about partners taking on debt.
Thank you. Yes, I saw that. The company's always been surrounded by endless talk about insane hype, speculative bubbles, and financial engineering. I wasn't asking so much about that.
I was asking more about your informed view on how OpenAI's technology, products, and roadmap are perceived, particularly by customers and partners, in comparison to those of competitors.
If you have an opinion about that, everyone here would love to hear about it.
Ronan Farrow's expertise is investigations into elite amorality, not evaluating technical products. Why are you asking this question?
I didn't ask him to evaluate them. I asked him how customers and partners perceive them.
He's had so many conversations that he likely has a sense of how perceptions of the company and its offerings have changed.
I'm curious.
Much of the article and general palace intrigue is predicated on the idea that OpenAI has a singularly revolutionary product. If it later turns out to be a commodity, or OpenAI is simply outcompeted nonetheless, then the idea that Sam Altman's personal shortcomings are something to stress about would seem quaint. Just another hubristic tech billionaire acting in bad faith doesn't really pry attention the same way as someone "controlling your future".
At this point even Google's AI search results are better than GPT. Obviously this is not for full programs, but if you know what you're doing and just want a snippet, that's all you need.
Wild how different experiences people can have. Both Google's models and Anthropic's hallucinate a lot for me, even when I try the expensive plans and with web searches, for some reason, and none of them come close to the accuracy and hallucination-free responses of ChatGPT Pro, which to me still is SOTA and has been since it was made available. But people keep having opposite experiences apparently; I just can't make sense of it.
Kagi (assistant.kagi.com) with Kimi K2.5 (their current default) has worked great for me in scenarios where the search result data is more important than the model.
I.e. what I used to use Google for and when I don't want an AI to overly summarize / editorialize result data.
Oh, that's probably because I'm a cheapskate and just use the free garbo models. I'm sure the pro version is quite good.
My guess is that the answer to your question, fantastic question, is that nobody knows. I remember having the same thoughts when Covid was first “arriving” if you will: we wanted people in the know to throw us a nugget of information, and they just didn’t know.
As it turns out, and what I’m kind of going with for this LLM shit, is that it’ll play out exactly how you think it will. The companies are all too big to fail, with billionaire backers who would rather commit fraud than lose money.
How would fraud help here? Don't they just need scale, with lots of customers paying a little bit? How do you fraud your way into that?
They don't need customers when the customers are each other's companies, for example the deals OpenAI, Nvidia, and Oracle made.
That's not fraud, and it's not sustainable. They aren't going to just keep doing that. It only makes sense if an AI company wants to pay for GPUs with stock, and - more importantly - the GPU company agrees to sell in exchange for stock.
If you were in charge of deciding what should be done with Sam Altman, what would you choose?
I mean, it's a fair question, though it does make some wonder how extreme the answers could be, so I could see why you're being downvoted.
The problem is that sometimes, on paper, everything people like Sam Altman do is legal, despite it harming so many. We've literally had a major RAM producer pull out of the consumer RAM market. I feel like Sam Altman should be investigated and heavily scrutinized. He kind of is the biggest bubble within the AI bubble, and we're letting him fester too deep inside it. These circular deals have seemingly slowed somewhat for now, but it might only get worse.
Many of us prefer OpenAI's Codex, because we think it's a better product.
No comment on the CEO: I just find the product superior in everything but UI/UX and conversation. It's better at quality code.
Who is “us”? It does seem that some scientists prefer Codex for its math capabilities but when it comes to general frontend and backend construction, Claude Code is just as good and possibly made better with its extensive Skills library.
Both Codex and Claude Code fail when it comes to extremely sophisticated programming for distributed systems.
As a scientist (computational physicist, so plenty of math, but also plenty of code, from Python PoCs to explicit SIMD and GPU code, mostly various subsets of C/C++), I can confirm: Codex is qualitatively better for my use cases than Claude. I keep retesting them (not on benchmarks; I simply use both in parallel for my work and see what happens) after every version update, and ever since 5.2 Codex seems further and further ahead. The token limits are also far more generous (and it matters: I found it fairly easy to hit the 5h limit on max-tier Claude), but mostly it's about quality. The probability that the model will give me something useful I can iterate on, as opposed to discard immediately, is much higher with Codex.
For the few times I've used both models side by side on more typical tasks (not so much web stuff, which I don't do much of, but more conventional Python scripts, CLI utilities in C, some OpenGL), they seem much more evenly matched. I haven't found a case where Claude would be markedly superior since Codex 5.2 came out, but I'm sure there are plenty. In my view, benchmarks are completely irrelevant at this point, just use models side by side on representative bits of your real work and stick with what works best for you. My software engineer friends often react with disbelief when I say I much prefer Codex, but in my experience it is not a close comparison.
Have you tried the latest (3.1 Pro) Gemini? In my experience, it's notably better than Opus 4.6 for similar types of problems. However, I don't really use OpenAI products to compare.
I've tried both on similar problems and haven't found such a clear-cut difference. I still find neither is able to fully and correctly implement a complex algorithm I worked on in the past with the same inputs. Not sharing exactly the benchmark I'm using, but think about something for improving the performance of N^2 operations that are common in physics and you can probably guess the train of thought.
>As a scientist (computational physicist,
Is there one that you prefer for, I dunno, physics?
I'm in that camp -- I have the max-tier subscription to pretty much all the services, and for now Codex seems to win. Primarily because 1) long-horizon development tasks are much more reliable with Codex, and 2) OpenAI is far more generous with the token limits.
Gemini seems to be the worst of the three, and some open-weight models are not too bad (like Kimi K2.5). Cursor is still pretty good, and Copilot just really, really sucks.
Claude Code, Codex, and Cursor are old news. If you're having problems, it's because you're not using the latest hotness: Cludge. Everyone is using it now - don't get left behind.
Cludge has been left behind by Clanker, that’s the new hotness. 45B valuation!
ive heard that poob has it for you!
Us = me and say /r/codex or wherever Codex users are. I've tried both, liked both, but in my projects one clearly produces better results, more maintainable code and does a better job of debugging and refactoring.
That's interesting, I actively use both and usually find it to be a toss up which one performs better at a given task. I generally find Claude to be better with complex tool calls and Codex to be better at reviewing code, but otherwise don't see a significant difference.
If you want to find an advocate for Codex that can give a pretty good answer as to why they think it's better, go ask Eric Provencher. He develops https://repoprompt.com/. He spends a lot of time thinking in this space and prefers Codex over Claude, though I haven't checked recently to see if he still has that opinion. He's pretty reachable on Discord if you poke around a bit.
Quite irrelevant what factions think. This or that model may be superior for these and those use cases today, and things will flip next week.
Also, RLHF means that models produce output according to certain human preferences, so it depends on which set of humans provided the feedback and what mood they were in.
On the contrary, I very much care about what the other factions think because I want to know if things have already flipped and the easiest way to do so is just ask someone who's been using the tool. Of course the correct thing to do is to set up some simple evals, but there is a subjective aspect to these tools that I think hearing boots on the ground anecdata helps with.
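(For anyone who wants the eval route: it can be as crude as a script that pushes the same prompt through both CLIs and compares what comes back. A minimal sketch in Go, under assumptions: it shells out to the `claude -p` and `codex exec` non-interactive modes, and those flag names may differ across installed versions, so treat them as placeholders and check each tool's --help.)

    // Crude side-by-side eval: run one prompt through two coding-agent
    // CLIs and print each result for manual (or scripted) comparison.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    // run shells out to a CLI and returns whatever it printed.
    func run(name string, args ...string) string {
        out, err := exec.Command(name, args...).CombinedOutput()
        if err != nil {
            log.Printf("%s failed: %v", name, err)
        }
        return string(out)
    }

    func main() {
        prompt := "Write a Go function that parses RFC 3339 timestamps."
        fmt.Println("--- claude ---")
        fmt.Println(run("claude", "-p", prompt)) // assumed print-mode flag
        fmt.Println("--- codex ---")
        fmt.Println(run("codex", "exec", prompt)) // assumed non-interactive subcommand
    }

From there you can diff the outputs or pipe them into a test harness; the point is just to make the comparison repeatable instead of vibes-only.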
Any difference in performance on mobile development?
For that I'm not so sure. I tried both early 2025 and was disappointed in their ability to deal with a TCA based app (iOS) and Jetpack compose stuff on Android, but I assume Opus 4.6 and GPT 5.4 are much better.
Yeah, I'm not in this "us" you speak of.
Of course you're not one of "us" if you're one of "them".
I've found Claude startlingly good at debugging race conditions and other multithreading issues though.
My rule of thumb is that it's good for anything "broad" and weaker for anything "deep". Broad tasks are tasks which require working knowledge of lots of random stuff. It's bad at deep work, like implementing a complex, novel algorithm.
LLMs aren't able to achieve 100% correctness in every line of code. But luckily, 100% correctness is not required for debugging, so it's better at that sort of thing. It's also (comparatively) good at reading lots and lots of code. Better than I am: I get bogged down in details and I exhaust quickly.
An example of broad work is something like: "Compile this C# code to WebAssembly, then run it from this Go program. Write a set of benchmarks of the result, and compare it to the C# code running natively and to this Python implementation. Make a chart of the data and add it to this LaTeX code." Each of the steps is simple if you have expertise in the languages and tools, but a lot of work otherwise. For me to do that, I'd need to figure out C# WebAssembly compilation and Go wasm libraries. I'd need to find a good charting library. And so on.
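(As a concrete taste of the "run it from Go" step, here is a minimal sketch. The wazero runtime and its WASI import package are real; "app.wasm" is a hypothetical name for the C# build artifact, and actually publishing C# for a WASI target is exactly the part I'd make the model figure out.)

    // Minimal sketch: run a WASI-compiled module (e.g. a C# project
    // published for a wasi target) from Go, using wazero.
    package main

    import (
        "context"
        "log"
        "os"

        "github.com/tetratelabs/wazero"
        "github.com/tetratelabs/wazero/imports/wasi_snapshot_preview1"
    )

    func main() {
        ctx := context.Background()

        // Create the runtime and register the WASI host functions
        // that the compiled module expects to find.
        r := wazero.NewRuntime(ctx)
        defer r.Close(ctx)
        wasi_snapshot_preview1.MustInstantiate(ctx, r)

        // "app.wasm" is a placeholder for the published artifact.
        wasmBytes, err := os.ReadFile("app.wasm")
        if err != nil {
            log.Fatal(err)
        }

        // Instantiating a WASI command module runs its _start entry point.
        cfg := wazero.NewModuleConfig().WithStdout(os.Stdout).WithStderr(os.Stderr)
        if _, err := r.InstantiateWithConfig(ctx, wasmBytes, cfg); err != nil {
            log.Fatal(err)
        }
    }

Knowing that wazero exists and that a WASI command module runs via its _start export is exactly the kind of "broad" knowledge I mean.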
I think it's decent at debugging because debugging requires reading a lot of code. And there are lots of weird tools and approaches you can use to debug something. And it's not mission-critical that every approach works. Debugging plays to the strengths of LLMs.
As some other people mentioned, using both/multiple is the way to go if it's within your means.
I've been working on a wide range of projects and I find that the latest GPT-5.2+ models seem to be generally better coders than Opus 4.6; however, the latter tends to be better at big-picture thinking, structuring, and communicating, so I tend to iterate through Opus 4.6 max -> GPT-5.2 xhigh -> GPT-5.3-Codex xhigh -> GPT-5.4 xhigh. I've found GPT-5.3-Codex is the most detail-oriented, but not necessarily the best coder. One interesting thing: for my high-stakes project, I have one coder lane but use all the models to do independent review, and they tend to catch different subsets of implementation bugs. I also notice huge behavioral changes based on changing AGENTS.md.
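(To give a flavor of the AGENTS.md changes I mean, here is an invented example, not my actual file; even a few lines like these visibly shift how the agent plans, commits, and asks questions:)

    # AGENTS.md (hypothetical)
    - Run the full test suite before declaring any task done.
    - Prefer small, reviewable diffs; never touch files unrelated to the task.
    - If a requirement is ambiguous, stop and ask instead of guessing.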
In terms of the apps, while Claude Code was ahead for a long while, I'd say Codex has largely caught up in terms of ergonomics, and in some things, like the way it lets you inline or append steering, I like it better now. Where it's far, far ahead is compaction, which is night and day better in Codex.
(These observations are based on about 10-20B/mo combined cached tokens, human-in-the-loop; so heavy usage, and most code I no longer eyeball, but not dark-factory/slop-cannon levels. I haven't found (or built) a multi-agent control plane I really like yet.)
Codex won me over with one simple thing: reliability. It crashed less, had less load shedding, and its configuration is well designed.
I do regular evaluations of both Codex and Claude (though not to statistical significance), and I'm of the opinion that there is more in-group variance in outcome performance than variance between them.
This is the way. E.g., IME Gemini is really damn good at SQL.
Not a scientist, and I use Codex for anything complex.
I enjoy using CC more and use it primarily for non-coding tasks, but for anything complex (honestly, most of what I do is not that complex), I feel like I am trading future toil for a dopamine hit.
I have been using Codex AND Claude side by side for the same project*, with the same prompts.
Codex has been consistently better on almost every level.
* (an open source framework for 2D games in Godot 4.6 GDScript, mostly using AI to review existing code)
Many paying customers say that Anthropic degraded the capability of Opus and Claude Code in recent months and that the outcomes are worse. There have even been discussions on HN about this.
Last one is from yesterday: https://news.ycombinator.com/item?id=47660925
I'm one of that "us": Claude's outputs require significant review and iteration effort (to put it bluntly, they get destroyed by GPT and Gemini). I'm basically using Sonnet to do code search and write-ups, since it is a better (more human-like) writer than GPT and faster and more reliable than Gemini, but that's about it.
I also find Codex much more generous in terms of what you get with a Pro ($20/mo) subscription. I use it pretty much non-stop and I have yet to hit a limit. Weekly reset is much better as well.
Usage limits are more generous and GPT 5.4 is a good model, but yes, the UI/UX lags behind Claude Code. Currently I'm especially missing /rewind with code restoration and proper support for plugin marketplaces.
GPT/Claude/Gemini are pretty interchangeable at this point.
Absolutely not the case. They're complementary.
I prefer GLM 5.1 and MiniMax 2.7. With a better harness like Forge Code, I have better results for way less money than by using GPT and Opus.
I find myself being more productive with Codex/Copilot on coding tasks, but Claude does seem to be better at planning.
Does this work for people? To me having a "better product" would be completely irrelevant if the use cases are evil.
Shill talk
He's replying on this Twitter thread; perhaps someone with an account can ask there and link his comment here?
https://xcancel.com/RonanFarrow/status/2041127882429206532#m
Here is the actual link, not a link to some weird third-party site that can't be trusted.
https://x.com/RonanFarrow/status/2041127882429206532
FYI xcancel is just a mirror that allows reading replies without needing an account.
Whereas X can be trusted?
Yes? It's the data source, not a third-party. How is this even a question?
There's pedantic, and then there's needlessly pedantic.
xcancel is a valid workaround for X links on Hacker News and is sufficient for original attribution.
X restricts what you can view without logging in. Many folks don't want to log in to X, for obvious reasons. Posting an xcancel link is kinda like folks posting various `archive` URLs to bypass paywalls, work around overloaded servers, etc. That's an extremely common practice here that usually goes without comment.
Personally, I prefer Claude for coding, but I still prefer ChatGPT for hashing out ideas for my projects (which tend to be game designs). So I use both.
It's worth noting that Codex has 2x more stories than Claude: https://hn.algolia.com/?query=codex
But by page 5, those stories have around 50-60 karma, while Claude's page five is still 500+.
(I found your comment surprising based on my recollection of daily HN reading: I mostly read the top N daily and feel I only occasionally see Codex stories.)
Yeah, we moved to Claude a few months ago, mostly because the devs kept using it anyway. The Altman stuff is interesting, but at the end of the day you just go with whatever tool works.
> You may want to provide proof online that you are who you say you are
Unfortunately it probably doesn't even matter here on HN considering how brigaded down this story is predictably getting.
But yeah, it was a fantastic piece.
It wasn't getting "brigaded down" - it set off a software penalty called the flamewar detector. I turned that off as soon as I saw it.
Thank you for keeping HN sane :-)
Fair request, here you go: https://x.com/RonanFarrow/status/2041203911697068112
The statements around the sexual abuse allegations seemed the most puzzling to me: his sister's allegations, and the claims of underage partners given his tendency to hook up with younger partners. It does seem like this piece gives him a pretty clean bill of health in that matter. I guess, would you be able to talk about how you investigated?
Did you do any extra investigations into Annie’s allegations? It feels to me like the unstated conclusion is recovered memory can’t be trusted, which is a popular understanding but a very wrong one put out by the now defunct and discredited False Memory Syndrome Foundation. It was founded by the parents of the psychologist who coined DARVO, directly in reaction to her accusing them of abuse.
Dissociation is real (I have a dissociative disorder, and abuse I “recovered” but did not remember for much of my adolescence and early adulthood has been corroborated by third parties) and many CSA survivors have severe memory problems that often don’t come to a head until adulthood. I know you didn’t dismiss her claim, but the way the public tends to think about recovered memories is shaped primarily by that awful organization.
All fair points on trauma and memory.
As noted in the piece, we spent months talking to Altman's partners and what we found and didn't is as described.
Thanks for the response! Cheers, just fully reread the piece and appreciate your reporting.
It's super neat to see you here on HN taking questions, kudos :)
False memories are much, much more common than actual recovered memories, unfortunately. OCD is a really common cause of it. People think of OCD as a physical thing, but for many people it presents as emotional rumination and can lead to false memories.
That's not a fair assessment. "False memory syndrome" and "repressed/recovered memory" are both outside scientific mainstream consensus.
Correct, because there truly isn’t a great way to answer with certainty - there was evidence in the 80s of suggestive techniques being used by poorly trained psychologists, and there are many people who remember and then find corroboration.
There are a lot more who remember and may not have corroboration beyond themselves and their close friends or healthcare provider. Part of CSA is that there is usually very little a kid can do about evidence, as the power discrepancy is far too great. Often with rich abusers, the exact same process occurs. Perps pick victims who are vulnerable or controllable, and constantly seek power and domination. Nothing to do with the boardrooms or the batch of CEO billionaires running the economy right now, certainly.
I am very sympathetic to the situation you describe. I certainly think it is possible that Annie is describing something that happened. I think the author did a fair job of representing the allegations, finding the right balance between disclosing that they were unable to corroborate the allegations without dismissing them.
That said, "recovering" memories as a therapy does not pass any sort of sniff test and it doesn't take a concerted effort to discredit the concept. Human memory is very malleable. Patients with mental health issues (which could predate abuse, or could be caused by abuse) are often in search of answers and that makes them very vulnerable.
Could a memory be buried deep in our subconscious, forgotten, only to return to the surface later? Sure, we all forget things and then remember them when triggered by something, whether that's a smell or sound or something else entirely. But can we engineer that process, with any degree of reliability? How can we even begin to reliably reverse engineer the triggers?
I think it is also important to keep in mind that Annie is rich, and the health care available to rich people can be very predatory. There are endless examples of nonsense therapies for all types of health, from ear seeds to treatments for "chronic Lyme".
Memories that return organically due to a trigger are a world apart from "recovered" memories, we shouldn't conflate them. If Annie's memories were triggered in adulthood, sure, that's really no different than remembering something... but "recovered"? That is something else entirely.
Correct me where I'm wrong, I'd like to learn your perspective, maybe there's a missing piece.
What's the evidence that Annie is rich?
> "recovering" memories as a therapy
Recovered memory therapy was a discredited hypnotherapy that leaned heavily on suggestion or was associated often with fairly coercive interrogations during the 80s CSA panic - https://en.wikipedia.org/wiki/Day-care_sex-abuse_hysteria
> Memories that return organically due to a trigger are a world apart from "recovered" memories, we shouldn't conflate them.
Agree, though I think the mechanism can be a bit more towards the idea of a “recovery” of traumatic memory, even if the term as understood carries false connotations.
The concept you’re missing is dissociation, and dissociative disorders. In the 40s it was called just “hysteria”, and for many cases up to the late 90s an extreme form was called multiple personality disorder, now DID (dissociative identity disorder). https://en.wikipedia.org/wiki/Dissociative_disorder
Not everyone who goes through traumatic events will respond via dissociation of identity, and indeed not all people are equally capable of developing a dissociative disorder; two people may go through very similar events (say, surviving a war as siblings or even twins) and one might dissociate the traumatic experience while the other might not. Dissociation doesn't work quite like you might imagine from a term like "multiple personalities" (that happens in some extreme cases), but think of identity dissociation as an adaptive response to events or situations that are paradoxical (especially to a child's mind), extreme, or traumatic, and that can't be escaped and where other coping mechanisms can't be called upon.
Dissociation is on a sort of spectrum, where at one side you have common experiences like zoning out when on a common commute, and on another you have separated self-parts/alter egos to handle wildly different situations.
It's a mechanism I frankly wasn't aware of, and one I'm not sure I would have been able to fully believe or empathize with, but for me, getting a diagnosis of a dissociative disorder changed my life and made a thousand things about me that I could never figure out make sense. The "model," as I put it at the time, responded to experiment, and recognizing that I was dealing with pretty constant, heavy dissociation and different self-states with memory deficiencies helped me figure out how to work through a ton of problems that had been really intractable for me. I'm finally, after decades of ineffective therapy, able to really understand how I work.
Idk how to talk about it without sounding like I'm trying to sell the idea. But yeah, it was a mind-blowing thing to me. Over the last 20 years especially, a ton of truly respectable research has been done, and the increase in the efficacy of treatments for dissociation, and trauma generally, is one of the unsung advancements for humanity in the last decade. I think the number is that around 3-6% of people meet the clinical criteria for a dissociative disorder (OSDD, DID, DPDR, or dissociative amnesia). That's 5x more people than have schizophrenia, 5x more than have red hair.
My favorite public clinical resource I point to people is the CTAD Clinic YouTube - https://youtube.com/@thectadclinic?si=5AyR5H8K8Cf2sn3C
Pretty easy to understand explainers from a clinician in the UK.
For a more clinical and study approach this one is the currently best put together research IMO: https://www.taylorfrancis.com/books/edit/10.4324/97810030573...
The TLDR is that dissociation is an important mechanism that most people don't know about, but it has seen a wave of research and study and is much more common than one might expect. The sad part is how often dissociative disorders correlate with abuse.
I'm confused by what you're saying. Can you help me reconcile your first post
> It feels to me like the unstated conclusion is recovered memory can’t be trusted, which is a popular understanding but a very wrong one put out by the now defunct and discredited False Memory Syndrome Foundation.
with
> Recovered memory therapy was a discredited hypnotherapy
I read your first post as standing up for recovered memory therapy and I can't find how the discussion of dissociation makes a difference. Does Fontain have it right that by "recovered memory" you mean "things people happened to remember on their own"?
Thank you very much for the details.
I’m reading more now and I think the missing piece for me is the distinction between “repressed” memories and “recovered” memories.
I understood repressed memories to be an accepted idea, distinct from “recovered” memories. I am reading that the people mentioned in your original comment rejected the idea of repressed memory altogether, and believed that everything traumatic must be remembered.
So, to me, reading that someone "recovered" a memory reads like they went through a specific type of therapy intended to "find" these repressed memories. Whereas to you, "recovered" memories could be repressed memories that came back to the surface organically, whether at random, triggered, or through a therapy intended to deal with dissociation. Is that right?
Hi Ronan, thanks for the article and for answering questions.
My question is, how do you know when an enormous project like this, conducted over an 18-month time span is "done"? I assume you get a lot of leeway from editors and publishers on this matter. How do you make the decision to finally pull the trigger on publishing?
I just spent a while reading the article. I really appreciate you writing it. In my case, it made me like Sam Altman a lot more. But I was only able to conclude this because of all the evidence you took the time to put together. It paints the picture of someone trying to do something very difficult in a rapidly changing environment and a lot of pressure, but still making the important choices and not shirking them.
Interesting to hear! While this hasn’t been a commonplace reaction, I think if I do my job right it should allow people to read the facts as they will, exactly like this. It’s strenuously designed to be fair and, where appropriate, even generous.
Hi Ronan, appreciate you being here. What would help you and others continue to do journalism like this (including commenting on HN)?
This is a vast and tricky question. The business model has basically fallen out from under journalism, and especially this kind of labor-intensive investigative reporting. The media landscape is increasingly dominated by moneyed individuals and companies essentially buying up the discourse.
I would really suggest subscribing to and finding ways to amplify independent outlets and journalists, and encouraging others to do so.
Only anti-trust action against big tech to break their ad monopoly (to make journalism profitable again) and breaking up media conglomerates (to reduce concentration of power in the journalism industry) can save journalism from becoming just a mouthpiece for the powerful. These things can only happen through politics. We need a political solution to save journalism.
Got it! Any recommendations on who to subscribe to? Any personal links for you?
In developer communities often you can support individual developers or groups through a monthly subscription / donation on their github page or similar.
Well, this piece was in The New Yorker, which is reasonably priced and regularly includes excellent investigative journalism. I get the physical copies, which can be too much to keep up with if you try to read everything, but it’s easy enough if you skim and just read the things that stick out as being of particular interest.
The New Yorker also comes with Apple News+ subscriptions (part of an Apple One plan that many people get for extra iCloud storage) which further includes a number of top-tier and local news orgs such as the Wall Street Journal, LA Times, SF Chronicle, Times of London, etc.
The Sam Altman piece can be read here: https://apple.news/APTX4OkywRWeJXIL7b8a7zQ
Drop Site News, 404 Media, Boston Review, The Intercept, and Atavist are all very worth supporting.
Treating quality investigative reporting like the scarce resource that it is: as one of the most well-known investigative reporters, can you shed any light on why Reuters would dedicate resources to commissioning investigative reporters to unmask Banksy (in a world where all things Epstein represent an unending source of investigative opportunities in the public interest)?
Because "the public interest" is more widely defined than you think.
We talk about Sam Altman a lot. At this point he has a Hollywood movie in post-production, a book ("The Optimist"), and a seemingly endless stream of profiles. It feels intellectually lazy to keep researching the same guy when the industry is moving beyond him.
All evidence today suggests Anthropic is passing OpenAI in relative and absolute growth. So where's the critical reporting? The DOD coverage was framed around the Pentagon's decisions, not Anthropic's. And nobody seems interested in examining whether the company that branded itself as the ethical AI lab actually is one. That seems like a story worth writing.
> whether the company that branded itself as the ethical AI lab actually is one
FWIW I have two(!!) close friends working for Anthropic, one for nearly two years and one for about 4 months.
Both of them tell me that this is not just marketing, that the company actually is ethical and safety conscious everywhere, and that this was the most surprising part about joining Anthropic for them. They insist the culture is actually genuine which is practically unicorn rarity in corporate America.
We have worked for FAANG so I know where they're coming from; this got me to drop my cynicism for once and I plan on interviewing with them soon. Hopefully I can answer this question for myself.
Yeah, every engineer in the bay area has a way of framing the business they work for as a benign force for good... Until they find themselves working somewhere else, then suddenly they have a lot to say about the unacceptable things going on there.
From the outside, I find Anthropic's hyperbolic marketing to be an indication that they are basically the same as every other bay area tech startup - more or less nice folks who are primarily concerned with money and status. That's not a condemnation, but I reject all the "do no evil" fanfare as conveniently self serving.
My model is that Anthropic was founded by OpenAI engineers who self-selected for safety-consciousness. However, it's still subject to the same problem: power corrupts. I think they are better than OpenAI but they are definitely sliding.
Eventually something like what happened with the DoW might happen again (hope not), and an IPO will leave them beholden to shareholders.
If the leadership doesn’t bend it might get replaced. It’s annoying. I think Claude is atm the best AI assistant, by far.
> every engineer in the bay area has a way of framing the business they work for as a benign force for good
This isn't remotely true in my experience. The senior folks I know at Meta, for example, pretty much concede they're ersatz drug dealers.
It should perhaps be generalized as "employees usually match the general consensus of their peer-group". Before everyone considered Meta to be ersatz drug dealers, they'd report that they feel what everyone feels.
Google was "do no evil" until they had to choose between that and making the money. The culture has to be not only professed but tested.
Depending on what part of Google you work for, you can absolutely feel good about what you do. The vast majority of employees don't work on ads or adjacent areas. I've never seen another company actually care so much about externalities unrelated to profit. People talk about it like it's the same as Halliburton or Oracle, and that's not true.
The snide response is "of COURSE you can care about externalities unrelated to profit when your giant evil ad business is bringing in absolute dump loads of cash".
And there's something true there; few companies are Snidely Whiplash evil (maybe the lawnmower but even that is just what it is) - and having large amounts of cash affords you options in many areas.
TBH I have worked at multiple FAANGs, and I don't know anyone, other than maybe new grads, who actually drank the koolaid.
Certainly most of us know we are just in it for the money, and the soul-grinding profit machine will continue to grind souls for profit regardless of what we want.
So that's why it is surprising to me when my (fairly senior) grizzled ex-FAANG friends, who share the same view, start waxing poetic about Anthropic being different and genuine. I think "maybe it is" and decide to interview. IDK, I guess some part of me wants to believe that nice things can exist.
Indeed. The bad behavior is emergent, where most individual intentions are good. Good story, bad outcome.
I find it bizarre that even the public image of Anthropic is seen as ethical after the Department of War debacle, in which they themselves admitted they had basically no qualms with their tech being used for war and slaughter at all, save two very, very thin lines: mass surveillance of American citizens and fully automated weaponry with their current models.
It only showed they were marginally more ethical than OpenAI and XAI which isn't saying much.
Anthropic has two principles they're willing to stand behind, even when it costs them. That's not a lot, but OpenAI only has one principle: look out for number one.
The idea that it's not okay to arm the military is a position of privilege. The ethical issues are around how the military chooses to use its abilities, not around giving them the tools to do their jobs. We're talking about folks who are willing to give their lives up for others. If you're not going to serve yourself you should at least be willing to help them live. This has nothing to do with whether or not you support the political uses of the military. If world war 3 breaks out and you are forced to serve, you may find yourself feeling differently.
Yes, and... that's a position of privilege that anyone in that position should ethically take.
It's unfair to sweep provision of methods to the military under a "respect the service" catch-all justification.
Two things can simultaneously be true: (1) individuals serving in the military are making sacrifices (in terms of pay, family life, personal safety) that deserve respect and (2) the military as a political institution will amorally deploy whatever capabilities it has access to, to achieve political aims.
There's a reason the US stopped offensive chemical, biological warfare, and tactical nuclear device research and production -- effective capabilities will be used if they exist.
Maybe people inside the company think Anthropic behaves ethically, which says something scary about either their ethical standards or their general awareness, considering how much documented unethical behavior we've seen from Anthropic leadership.[1]
[1] "Unless Its Governance Changes, Anthropic Is Untrustworthy" https://anthropic.ml/
If you know even the basics of ethics, then such claims are clearly nonsense. There is no stable, context-independent ethical behaviour. This is a great example of the dangers of motivated reasoning.
I have multiple friends at Anthropic. I can second this. One thing I notice about Anthropic culture is that it is unusually kind.
So much so that I worry they won't be Machiavellian enough to survive. Hope I am wrong.
I think cynicism is deserved just from observing Dario's remarks.
I'm curious — how do "ethical" and "safety conscious" manifest themselves there? Is it more cultural or process-driven? Do you have any examples?
> the company actually is ethical and safety conscious everywhere
I wonder what Anthropic tries to achieve by spreading such blatant lies with their bot accounts. I'm definitely not buying anything from a company so morally corrupt to smear the competition while claiming to be somehow "ethical". And I'm not talking just about this thread, it's a recurring pattern on Reddit.
>the company actually is ethical and safety conscious everywhere
Anthropic is emphatically not safe. None of the AI labs with customers (i.e., excluding a few small nonprofits whose revenue comes from donations) are anything like safe -- because of extinction risk. The famous positive regard that Anthropic employees have for their organization's mission means almost nothing because there have been hundreds of quite destructive cults and political parties whose members believed that theirs is the most ethical and benign organization ever.
The best thing you can say about Anthropic is that if you have to support some AI lab by becoming a customer, investor, or employee, it is slightly less dangerous for the world to support Anthropic than OpenAI, although IMHO (and I admit I am in a minority on this among extinction-risk activists) it is slightly less dangerous to support Google DeepMind or Mistral than Anthropic.
All four organizations I mentioned should be shut down tomorrow with their assets returned to shareholders.
The current crop of services provided by the leading AI labs is IMHO positive on net in its effect on people and society, but the leading AI labs are spending a large fraction of the hundreds of billions of dollars they've received from investors on creating more powerful models, and they might succeed in their goal of creating models that are much more powerful than the ones they have now, which is when most of the danger would manifest.
The leaders of all of the leading AI labs have the ambition of completely transforming society and the world through AI.
Are your friends also credited in Silicon Valley (2014)?
For what it’s worth, the story, while focused on OpenAI, is not uncritical of Anthropic. It explores whether there is a wider race to the bottom in terms of safety, and erosion of even some of Anthropic’s commitments.
I think you might be surprised how many software engineers are souring on Anthropic (the company) and the decisions it has made recently. Not the whole drama with the US government, but locking down the usage of plans to their own tooling.
That really rubbed a lot of people the wrong way: ultimately, one might have a favorite tool, and suddenly they are forced to use another.
There may be a reason why Altman is talked about a lot. This article in particular surfaces real information and new perspectives we've not heard in this level of detail before on some pretty significant topics that will be impacting you, me, and pretty much everyone we know not only today but well into the future.
You have a point in that Anthropic deserves some coverage too and that there are interesting perspectives that we've not heard of on that front either.
But just because that's true doesn't mean this article isn't very much relevant and needed.
Because it is.
The New Yorker has given plenty of coverage about Anthropic in their past issues earlier this year.
After the US launched its attack on Iran, the ethical AI lab's CEO wrote: "Anthropic has much more in common with the Department of War than we have differences." - https://www.anthropic.com/news/where-stand-department-war
"how easy it is, for those of us who play no part in public affairs, to sneer at the compromises required of those who do" - robert harris
Not making any value judgements, but I can see how one might value their interpretability research more highly than what the CEO says, in a time when the corrupt, criminal executive branch is muscling into everything from what's written on currency to journalistic sources. I generally blame fascists before I blame those unable or unwilling to resist them. Though obviously, ideally, we'd all lock arms and, together through friendship, crush authoritarians and fascists.
They are a private company. They have zero obligation to sell anything to any part of the government or military. The only reason they are involved in "public affairs" is because they want to profit from the government. Moreover, long before this DoW controversy, they had plenty of nationalist and anti-China rhetoric in their press releases, more so than the other AI firms.
The other explanation besides profit is that they're true believers that democratic militaries should be stronger than the military of dictators around the world, including AI capabilities.
Not sure that quote has aged well from a close personal friend and spirited defender of Peter Mandelson.
“I was only following orders”—not a legitimate defense for some footsoldier.
“I had the burden of impacting public affairs through my wildly successful corporation”—poor them.
Seriously blame anyone other than the fucking abuser. These people
We should stop talking about potential problems or perpetrators, when we have talked about them “enough”?
That would be irrational.
We should give air time to other problems?
I think everyone agrees with that.
You have managed to distill a surprisingly pure vintage of false dichotomy, from a near Platonic varietal of whataboutism.
OP says they’ve been working on this for 18 months. Most of what you’ve said wasn’t the case until much more recently.
Normies don't know what an "Anthropic" is. They use ChatGPT. Particularly sharp normies might know that ChatGPT is made by OpenAI, and the sharpest might know that Sam Altman is the CEO.
Now, they may have heard the word "Anthropic" due to recent media coverage. But they don't know what it is and don't remember what it makes. The fact that all businesses use "Anthropic" is about as relevant to them as knowing the overseas shipping company for all the shit they buy off Amazon.
So articles about OAI will always produce more revenue for the media, because it's related to what normies actually use day to day.
Ronan Farrow on Hacker News. Now I’ve seen everything.
I’ve really appreciated how substantive and polite the discourse here is, overall!
I'm a mod here and wanted to let you know 2 things: (1) I've marked your account with a beta feature that displays a colored line to the left of new comments (since you last viewed the page). It might help you keep track of this rather large thread.*
(2) I'm sorry the post was downranked off the frontpage for a while this afternoon. A software penalty kicks in when the discussion seems overheated ("flamewar detector") but I turned this off as soon as I became aware of it. We make a point of moderating HN less when a story is YC-related (https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...) but as this goes against standard internet axioms, people usually assume the opposite.
(* And yes, any reader who wants this is welcome to email hn@ycombinator.com to ask - I haven't turned it on for everyone because I'm worried it would slow the site down. Also, it's a bit buggy and not only have I not had time to fix it, I've forgotten what the bugs are.)
>beta feature that displays a colored line to the left of new comments (since you last viewed the page)
Can't wait until this is released!
If you don't want to wait, the refined HN extension has had this feature for a long time, but it is device specific.
https://github.com/plibither8/refined-hacker-news
It’s good to have you! We try to keep it civil :)
Not a question but just wanted to make sure you saw this:
https://theonion.com/anyone-else-have-those-weird-dreams-whe...
Also this exclusive interview with the man himself:
https://theonion.com/the-onions-exclusive-interview-with-sam...
Includes gems such as:
Q: What informs your personal sense of morality?
A: Previous things I’ve gotten away with.
Q: Why did you decide to devote your life to AI?
A: I just saw so much suffering in the world that needed to be automated.
Wonderful work and writing, Ronan -- I'm appreciative of your careful balance between objective fact-finding and synthesis.
For me, a big worry about AI is in its potential to further ease distorting or fabricating truth, while simultaneously reducing people's "load-bearing" intellectual skills in assessing what is true or trustworthy or good. You must be in the middle of this storm, given your profession and the investigations like this that you pursue.
Do you see a path through this?
I had a question about reporting conventions. In the paragraph where Altman is said to have told Murati that his allies were "going all out" to damage her reputation, the claim is attributed to "someone with knowledge of the conversation" but the attribution is tucked inconspicuously into the middle of the sentence (rather than say leading upfront ("According to someone with knowledge of the conversation, Altman...")) and Altman's non-recollection appears only parenthetically.
As a reader, am I supposed to infer anything about evidentiary weight from these stylistic choices? When a single anonymous source's testimony is presented in a "declarative" narrative style like here (with the attribution in a less prominent position), should we read that as reflecting high confidence on your end (perhaps from additional corroboration not fully spelled out)? And does the fact that Altman’s non-recollection appears in parentheses carry any epistemic signal (e.g. that you assign it less evidentiary weight)? Or is that mostly a matter of (say) prose rhythm?
> in 2014, [Graham] had recruited Altman to be his successor as president.
> [Graham's] judgment was based not on Altman’s track record, which was modest, but on his will to prevail, which Graham considered almost ungovernable.
One thing I don't understand is why Paul Graham offered YC to Altman if he knew how slippery he was...
Paul answers that here: https://x.com/paulg/status/2041363640499200353?s=20
Perhaps your question answers itself.
Nice biography from Loopt to OpenAI. Why no mention of the Worldcoin cryptocurrency https://x.com/sama/status/1451203161029427208 in this piece? Was there nothing interesting to report in that area?
It was mentioned, but not by name.
Just wanted to say what an incredible person you are! Catch and Kill and the related reporting was awesome too!
This is so appreciated, thank you! These stories can honestly take a lot out of me so thoughtful reactions mean a lot.
Hi Ronan. TCatK is a phenomenal book, not only in exposing the wrongdoing of powerful people, but also in presenting the meta-issue of how hard it was to get the word out, and you handled it all with nuance. You're about as close as I have to a personal hero.
Long time HN lurker, made an account just to say that :)
Great reporting.
Altman describes his shifting views as genuine good faith evolution of thinking. Do you believe he has a clear North Star behind all this that’s not centered on himself?
The piece is an interrogation of this very question, at great length and with some nuance. I think what it does most usefully is scrutinize an array of different answers to the question.
My own impression after many hours of conversation is that he is identifying something of a true north star when he frames this around "winning." There are people in the story who talk about him emphasizing a desire for power (as opposed to, say, wealth). I think he probably also believes, to some extent, the story he tells that equates winning, and his gaining power, with a superabundant utopian future for all.
However, I think critics correctly highlight a tension between his statements about centering humanity writ large and his tilt into relentless accelerationism.
(Other people's) money.
Hi Ronan, absolutely wild to see you here in the belly of the beast.
I have not read the article yet, because I get the physical magazine and look forward to reading it analog. I therefore only have an inconsequential question.
I love the New Yorker’s house style and editorial “voice,” and I have always been curious about the editing process. I enjoyed the recent exhibit at the NYPL, which had some marked up drafts with editor feedback and author comments.
Did you find that your editors made significant changes to the voice of the piece, and/or do you find any aspects of their editing process particularly notable or unusual?
Can’t wait to read this one, and hope the HN crowd treats you well.
This is brilliant work, guys. Did you get any pressure to soften or spike the story?
I won’t get into behind-the-scenes specifics here but I think you can imagine how pressurized this topic was and the amount of heat that tends to generate. I’m used to getting a lot of blowback and it’s never fun. I just hope the work is meticulous and fair enough, and that enough people see the benefits of that, that I get to continue to do it.
I am appreciative of your work on this piece. I'd love to see one that goes deeper into Dario Amodei. Perhaps even a series of profiles on the central figures of this AI era.
Is this something you've thought about?
Hey, just want to say thanks for the piece and for all the hard work and effort you put in to get this out there. I've published a bit here and there, and the actual writing is only ~50% of the workload (for me at least). So thanks for going through all the effort and pain to get it out; really appreciate all the work you do for me and the rest of Joe Public.
If you want another story to run, I'd really love to see an investigation into how these different companies are convincing governments that the only path to global dominance is achieving 'agi' first, and how much that contributes to the reckless acceleration of AI software and infrastructure development.
Also, a good exposé on accelerationists and e/accs, and on who among the elites falls in this group, is direly needed as well.
I know why the cantilevered pool statement is there and why you mentioned it.
I’m sure you don’t know half of the totally fucked up things Sam did to get “revenge” for the slight of a leaking pool.
Please ask The New Yorker to extend some of their very generous subscription sale prices to Canada, I would subscribe to print if even a single sale applied to us, but all the sales are always USA only.
what model was used to create the visual at the top of the article?
The last couple sentences tie things up really nicely.
Great article.
Thank you for fielding questions. And please don't stop, your work is great.
As someone on a budget, how can I pay for good journalism when it so spread out across various (expensive) outlets?
Paying for 1 is doing more than paying for 0.
It's not your responsibility to fund every single one; just find the one you like the most and subscribe to that one.
Any plans to tackle any of the other folks who might be mentioned in the same sentence as Altman, like Dario Amodei?
Do you think the recent conflict between Anthropic and the Department of War, and the apparent bootlicking by OpenAI, have fundamentally altered the public perception of OAI? Are they the baddies now in general public opinion?
Do we have a choice?
Seems a bit conspiracy theorist to me
Have you considered doing a piece on Aaron Swartz? Timnit Gebru? Michael O. Church?
It could be titled "Hypergraphia"
In-depth reporting is great. This is a really tricky topic to cover over the course of 18 months. A year and a half ago OpenAI was ascendant; now it's, at best, stalling and, more likely, trending toward irrelevant.
Love the visual. Fantastic.
How do you feel about the title of your article? I assume an editor chose it.
Clearly he's straight-up evil; between tanking the global economy, constantly lying, and raping his 3-year-old sister, it feels really disingenuous to me to frame this as an open question.
hey I loved that Ricky Gervais joke about you at the globes
For those that don’t know or remember:
“Tonight isn’t just about the people in front of the camera. In this room are some of the most important TV and film executives in the world. People from every background. But they all have one thing in common: They’re all terrified of Ronan Farrow.”
From time to time I have been accused of being an apologist for Sam Altman, but I have always tried to assess information based upon what it says instead of whether it matches an existing narrative. You list a number of distortions in your article which show the problem. If you are a good person, bad stories about you may be fake. If you are a bad person, bad stories about you may still be fake.
My prima facie view of Altman has been that he presents as sincere. In interviews I have never seen him make a statement that I considered to be a deliberate untruth. I also recognise that the claims people make about him go in all directions, and that I am not in a position to evaluate most of those claims. About the only truly agreed-upon aspect is how persuasive he is.
I can definitely see the possibility of people feeling like they have been lied to if they experienced a degree of persuasion that they are unaccustomed to. If someone agrees to something that they feel they ordinarily wouldn't have, I can see them concluding that they were lied to rather than accepting that they were intellectually beaten.
In all such cases where an issue is contentious, you should ask yourself, what information would significantly change your views. If nothing could change your view, then it's a matter beyond reason.
I think you will agree that there is no smoking gun in this article, and that it is just a laying-out of the allegations. Evaluating allegations becomes tricky because I think it becomes a character judgement of those making the claims.
I have not heard a single person in all of this criticise Ilya Sutskever's character. If he were to make a statement to say that this article is an accurate representation of what he has experienced, it would go a long way.
I think Paul Graham should make a statement. The things he has publicly claimed are at odds with what the article says he has privately claimed. I have no opinion on whether one or the other is true, or whether they can be reconciled, but there seem to be contradictions that need to be addressed.
While I do not have sources to hand (so I will not assert this as true but just claim it is my memory), I recall Sam Altman himself saying that he did not think he should have control over our future, that the board was supposed to protect against that, but that since the 'blip' it was evident that another mechanism was required. I also recall hearing an interview where Helen Toner suggested that they effectively ambushed Altman because, if he had had time to respond to the allegations, he could have provided a reasonable explanation. It did not reflect well on her.
I am a little put off by some of the language used in the article. Things like "Altman conveyed to Mira Murati" followed by "Altman does not recall the exchange". Why use a term such as 'conveyed', which need not imply any exchange to recall? If a third party explained what they thought Altman thought, Mira Murati could reasonably feel that the information had been conveyed while Altman has no exchange to recall. Nevertheless, it leaves an impression of Altman being evasive. If the text said "Altman told Mira Murati", then no such ambiguity would exist.
"Later, the board was alarmed to learn that its C.E.O. had essentially appointed his own shadow board" Is this still talking about Brockman and Sutskever? I just can't see this as anything other than a claim he took advice from people he trusted. I assume those board members who were alarmed were not the ones he was trusting, because presumably the others didn't need to find out. The people he disagreed with still had votes so any claim of a 'shadow board' with power is nonsense, and if it is a condemnable offence, is the same not true of the alignment of board members who removed him.
Josh Kushner apparently made a veiled threat to Muratti, the claim "Altman claims he was unaware of the call" casts him as evasive by stacking denial upon denial, but without any other indication that was undisclosed in the article, it would have been more surprising if he did know of the call. I also didn't know of the call because I am not those two people.
The claim of sexual abuse says via Karen Hao "Annie suggested that memories of abuse were recovered during flashbacks in adulthood." To leave it at that without some discussion about the scientific opinion on previously unremembered events being recalled during a flashback seems to be journalistically irresponsible.
I think sometimes you have to look at the patterns rather than at a single claim. If a large number of people who are completely unrelated tell you about very similar experiences they had with Altman, you can take that as a good indicator of his general character.
And if this tendency to misunderstand/be misunderstood always results in Altman gaining more power, then even if we give him the benefit of the doubt and say he doesn't do it on purpose, it's still a big problem, given the responsibility he has.
The article also mentions many moments where Altman apparently outright lied, as opposed to being "very persuasive". If you believe those sources, then I don't think it's also possible to think he's sincere. I cannot open the article again to get the exact quotes, but the few I remember were:
- one time he claimed he hadn't sent a message while people were literally showing him the message he sent, with confirmation from another OpenAI employee
- another time he accused people of organising a coup, saying someone from the board had informed him, and after the person from the board was called into the meeting, Altman claimed he never said those words and never accused anyone
These cases can't be put down to persuasion, to Altman changing their view, or to someone misremembering: they either happened or they didn't.
Paul made a statement today: https://x.com/paulg/status/2041363640499200353?s=20
It clarifies he did not fire Sam
I overall agree with your takeaway, but this is not a criticism of the article itself.
I have experience in dealing with Sam Altman-like behavior. I hope to explain how such tactics unfold.
> I can see people concluding that they have been lied to rather than accept that they had been intellectually beaten.
There are two angles to this: from an individual perspective and from a collective one.
One's interaction with such a manipulator isn't a single shot. There is no single event at which they are "beaten". First, one gets persuaded --- you might argue that there's nothing wrong with skillful persuasion. At some point they realize that reality is not in line with their expectations. They bring the point up to the manipulator and ask for a change, this time in more concrete terms. The manipulator agrees to the change, negotiates compromises, and the relationship continues. After some time the manipulated party realizes that things are not going in the direction they desire. This time they ask for more concrete terms, without accepting any compromises. The manipulator accepts, yet continues to act against the terms. The manipulated party is now angry and directly confronts the manipulator. The manipulator apologizes, says that none of it was intentional, and asks for another chance. However, at that point, the manipulator has run out of "politically correct" "persuasion tactics", and tells blatant lies to make the other party behave.
From a collective perspective, even those "politically correct" "persuasion tactics" are discovered to be lies, because the things the manipulator told different parties are in direct opposition to each other, i.e., they cannot all be true.
> Helen Toner suggested that they effectively ambushed Altman because if he had time to respond to allegations he could have provided a reasonable explanation. It did not reflect well on her.
I understand how her behavior may raise a flag for the unsuspecting, but it was exactly the right move. Manipulators prey on the benefit of the doubt. If Toner had brought Altman's behavior to the attention of others first, no doubt Altman would have manipulated them successfully.
It's unfortunate that many people are unaware of these tactics and assume the best of intentions, when such assumptions fuel the manipulation that they would better avoid.
It’s wild to write something like “I have experience with Sam Altman-like behavior” and expect us to come along for a 5+ paragraph ride that actually has no Sam Altman connection at all except the one you imagine is true.
Talk to your therapist about your problems. Don’t project them on people you don’t know and seemingly have no actual first-hand experience with.
I'm sorry that it wasn't clear. I didn't mean to imply that I was going to connect it to Sam Altman. I specifically wanted to address why it wasn't the case that people were "intellectually beaten" by Sam Altman.
> except the one you imagine is true
I'm not sure what you mean. I described an example of manipulation that I witnessed. I later learned that these were common tactics employed by con artists, scammers, etc.
> Don’t project them on people you don’t know and seemingly have no actual first-hand experience with.
I don't need first-hand experience with someone to understand that they are a manipulator. I am comfortable forming my opinion based on reports.
Paul Graham's latest public statement on the issue:
https://x.com/paulg/status/2041363640499200353
> My prima facie view on Altman has been that he presents as sincere.
That is how pathological liars present.
> what information would significantly change your views
Quite simple: show me any single action taken by Sam Altman which cannot be construed as an attempt to get him more power/money/influence. You can't find it.
The difference between what he claims to believe and what he actually does is a textbook example of sociopathy.
I cannot find a single action of anyone that cannot be construed as an attempt to get them power/money/influence. I can believe that a person's intentions are good, but I can't make everyone in the world do that, and that is what you are asking.
"If you give me six lines written by the hand of the most honest of men, I will find something in them which will hang him"
To play your game, he got married, had a child, and joined an AI research organisation at a time when everybody thought the big advances were much further away than they turned out to be.
You could still construe those actions as evil if you choose to see them as evil.
I'm not going to claim that Sam Altman is not a sociopath, I lack the information and knowledge of psychology to make that determination. On the other hand I have not detected those attributes in anyone who has claimed he is a sociopath.
It seems odd that people take offense at the notion that arbitrary people might not reach a conclusion that requires specialised expert knowledge and a decent amount of irrefutable evidence.
> I cannot find a single action of anyone that cannot be construed as an attempt to get them power/money/influence
Try the other way around, via negativa. We definitely can find plenty of examples of people stepping out of positions of power, deciding not to do something because of moral conflict, etc. Is there any case of such action from Sam?
Fuck, anyone with any semblance of moral fortitude would refuse to take money from the Saudis. But he had no problem doing it.
> joined an AI research organisation at a time when everybody thought the big advances were much further away than they turned out to be.
No, this is selection bias. What he did was put himself in a position where he could have his fingers in any and every possible pie, and then, when one of these things turned out to be something believed to be valuable by people with money, he manoeuvred himself into the driver's seat.
When people are described as sociopathic it’s not about any particular lie, but the relationship that the person has with the truth, which is that they will lie when it suits them and tell the truth when it suits them and they don’t seem to distinguish morally between them. And more than that, they treat people the same way, and will use them while it suits them and then dispose of them when they are inconvenient.
The article is paywalled, where can we read it?
I have the feeling that if you write an article in that style, the subject of the story becomes the hero even if you insert a couple of negatives. In the same manner that Michael Corleone becomes the hero of The Godfather.
I'm not pleased with the headline and the general framing that AI works. The plagiarism and IP theft aspects are entirely omitted. The widespread disillusion with AI is omitted.
On the positive side, the Kushner and Abu Dhabi involvements (and threats from Kushner) deserve a wider audience.
My personal opinion is that "who should control AI" is the wrong question. In the current state, it is an IP laundering device and I wonder why publications fall silent on this. For example, the NYT has abandoned their crown witness Suchir Balaji who literally perished for his convictions (murder or not).
For what it’s worth, I don’t think the piece at all avoids key areas of disillusionment with the technology. Quite the contrary.
As bad as Altman might be, he's just another sociopathic Tech Bro.
I’m far more concerned with the 25 million dollar personal bribe OpenAI president Greg Brockman gave Donald Trump for his reelection -
the fact that a tech company can influence the outcome of an election directly is evil
Far more evil than Altman's shenanigans
Hi Ronan,
I would love to read your piece and pay you and The New Yorker for it, but I am not interested in paying a subscription. If I could press a button and pay a reasonable one-time license fee such as $3 or $5 for just this article, or better yet a few cents per paragraph as they load in, I wouldn't hesitate.
However I'm not going to pay for yet another subscription to access one article I'm interested in.
I'm sure you can't do anything about this, but I just wanted you to know.
You deserve to be compensated for great journalism. In this case, unfortunately, I won't read it and you won't earn income from me.
You could buy a physical copy (and this isn't meant to sound sarcastic).
You can walk down to a bookstore or anywhere that sells magazines and buy a physical copy
I’ve often thought about a model like this and would love to see a few news outlets run it as a pilot and see how it stacks up.
Many have tried it (as well as the oft-recommended micropayments idea) and it never justifies the added expense and overhead of the customization. Closest is probably the NYTimes’ gift article feature.
I really doubt the implementation difficulty is the actual reason. It's not hard to have an extra table of specific article permissions.
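For what it's worth, here's a minimal sketch of what I mean (all table and function names hypothetical, Python + SQLite, not anyone's actual paywall code):

    import sqlite3

    # Hypothetical schema: one row per one-time article purchase.
    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE article_purchases (
            user_id    INTEGER NOT NULL,
            article_id INTEGER NOT NULL,
            paid_cents INTEGER NOT NULL,
            PRIMARY KEY (user_id, article_id)
        )
    """)

    def record_purchase(user_id: int, article_id: int, paid_cents: int) -> None:
        # INSERT OR IGNORE makes a repeat purchase a harmless no-op.
        conn.execute(
            "INSERT OR IGNORE INTO article_purchases VALUES (?, ?, ?)",
            (user_id, article_id, paid_cents),
        )

    def can_read(user_id: int, article_id: int) -> bool:
        # Subscriptions would be checked elsewhere; this covers only
        # the one-time-unlock path.
        row = conn.execute(
            "SELECT 1 FROM article_purchases WHERE user_id = ? AND article_id = ?",
            (user_id, article_id),
        ).fetchone()
        return row is not None

    record_purchase(42, 1001, 300)   # a $3 unlock of one article
    assert can_read(42, 1001)
    assert not can_read(42, 1002)

The schema really is the easy part; the harder questions are payment-processing overhead and cannibalizing subscriptions, which other comments in this thread get into.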
Probably true, it's more likely that it's a variation on "there are only a small percentage of people willing to pay any amount of money for an article, so if we offer one-time options, a large enough percentage of people who would have otherwise subscribed with recurring revenue instead pay one-time so their lifetime value is lower"
You could hit up a public library...
Looking online it looks like the newsstand price of an issue is around $10 (which I'd assume is heavily ad subsidized, if anyone is still buying print ads?) which is an interesting data point for a pricing model. (Of course, I looked online because I have no idea where I'd find a newsstand around here - the nearest newsstand that show up on google maps has reviews that say "It's just snacks and scratch tickets." and "three newspapers and no magazines" - I may have to stop by just to see what three newspapers they have :-)
Or just switch your browser to Reader Mode and it's free.
There's a very minor typo in the article:
> “Investors are, like, I need to know you’re gonna stick with this when times get hard,”
Should be:
> “Investors are like, I need to know you’re gonna stick with this when times get hard,”
I'm not seeing a typo. Just a stylistic difference.
In "that's, like, your opinion", "like" is an interjection, you can take it out and not change the meaning: "That's your opinion".
In "investors were like, you need to grow", you're semi-quoting someone, and can't take it out: "investors were you need to grow".
Pretty sure the correction is wrong, not merely a stylistic choice.
Hard hitting journalism here. Is the person who lied for years to promote himself trustworthy? More news at 11!
Damn, just wanted to say reporters are scary... The amount of detail here is huge. You think of hackers as the ones good at doxing... Nah, it's reporters.
Dang, can you substantiate that this is actually Mr. Farrow like he claims?
Or Mr Farrow can you post some evidence somewhere we can see?
https://news.ycombinator.com/item?id=47663895
Ronan Farrow, the writer of this article, made a comment in this thread that is buried among all the comments: "As is always the case with incredibly precise and rigorously fact-checked reporting like this, where every word is chosen carefully (the initial closing meeting for this one was nearly eight hours long, with full deliberation about each sentence), there is more out there on that subject than is explicitly on the page."
I saw that before I read the article and it made me read the article in a very different way than I normally do. As I was reading, I found myself thinking, "Why is it worded that way? What else is the writer trying to say, or not say?"
It made reading this a lot more interactive than I normally associate with passive reading. Great job, Ronan!
I don't get why writers do this. It makes sense for fiction, but why a factual, non-fiction article?
Reading this makes me even happier to pay for Anthropic.
Amodei and his sister saw through the behavior and called it out.
" “Eighty per cent of the charter was just betrayed,” Amodei recalled. He confronted Altman, who denied that the provision existed. Amodei read it aloud, pointing to the text, and ultimately forced another colleague to confirm its existence to Altman directly. (Altman doesn’t remember this.) Amodei’s notes describe escalating tense encounters, including one, months later, in which Altman summoned him and his sister, Daniela, who worked in safety and policy at the company, to tell them that he had it on “good authority” from a senior executive that they had been plotting a coup. Daniela, the notes continue, “lost it,” and brought in that executive, who denied having said anything. As one person briefed on the exchange recalled, Altman then denied having made the claim. “I didn’t even say that,” he said. “You just said that,” Daniela replied. (Altman said that this was not quite his recollection, and that he had accused the Amodeis only of “political behavior.”) In 2020, Amodei, Daniela, and other colleagues left to found Anthropic, which is now one of OpenAI’s chief rivals."
If you think Amodei is significantly different you’re going to be disappointed. There is nothing he has done that can’t be adequately explained as furthering his own interests. Remember how Musk doesn’t like Altman too? It’s because they’re all the same people, competing for the same thing.
I can go with the thesis that individuals need community control (boards, regulations, laws) in order to be accountable but is there some specific evidence that Amodei is the same? It seems like a "both sides" argument.
I don’t need to convince you, we’ve been through enough cults of personality, time will tell. But I’ve been right enough times to back myself. Maybe it’s because I grew up around a lot of people like them? They can’t hide that they would say whatever they think you want to hear.
Actually it’s funny: Their lack of empathy/emotional intelligence would also make them susceptible to thinking that talking to an LLM is like talking to a person, so maybe they really did think AGI was around the corner!
The problem with Altman isn't that he's "furthering his own interests". It's the deceitful behaviour he employs in the process.
There’s been enough divergence between words and actions from Amodei for me to also consider him deceitful, if that’s really the low bar you want to set. I’m not saying he’s worse than Altman, just to be clear.
I mean he quit what he considered to be a problematic company, founded another one, that one’s models refused to do things that the previous company would do, then his new company refused to do the US government’s evil bidding while the other company happily went along with it.
Not small differences to me.
> I mean he quit what he considered to be a problematic company
Problematic why though? For the reasons publicly stated? Then why isn’t Anthropic just what OpenAI was “supposed” to be then? We know what that was from their charter, and Anthropic is not that.
> then his new company refused to do the US government’s evil bidding while the other company happily went along with it
You’re sure about that are you? I don’t see how you possibly could be, unless you’ve taken the PR at face value, before it was all quietly swept away under the next headline.
The only foundation model CEOs I trust are Demis and Yann
“By 2018, several Y.C. partners were so frustrated with Altman’s behavior that they approached Graham to complain. Graham and Jessica Livingston, his wife and a Y.C. founder, apparently had a frank conversation with Altman. Afterward, Graham started telling people that although Altman had agreed to leave the company, he was resisting in practice”
You can subtly see residue of this frustration in Dalton and Michael’s videos when Sam Altman comes up. It’s only thinly veiled that Sam was a snake while at YC.
Quite interesting to see that PG basically indirectly enabled all of this by putting Sam in the position of president of YC.
That guy is a snake, not just while at YC.
I do wonder exactly what he was doing to be considered a snake. I know he was working on other things, but what else was going on?
Read the article?
video link?
Hi @ronanfarrow — I have only had one interaction with Sam Altman in person, and I was advised to keep it to myself. I know this crowd may not care, but Altman is absolutely terrified of Black people — not in any contextual sense, but in a visceral, instinctive way. For someone who, as you put it, "controls our future," this should matter.
FYI: I am by far not the only one to have experienced this and it 100% impacts hiring and other decisions at OpenAI.
Can you give more details?
It wouldn't particularly surprise me if Sam Altman were racist, but I'm curious what the specific incident you observed was.
Yes, but first I want to be very clear on some things.
1. I could have hidden my identity behind a throwaway. I did not feel that would be appropriate when making this claim.
2. I am not looking for anything, literally at all. Any follow ups for blogs; anything that would benefit I will not answer.
3. This is NOT a new account; I am very easy to find. I am 6'1", 140 lbs.
I was working for a company called NationBuilder and I had the opportunity to go on a work trip. Outside of a talk he had just given, I was waiting for my ride and I looked over like... damn, that's the speaker. I wanted to say hi; he damn near flagged down the police. I apologized and just decided to move on.
Note: It was in Reno, and no, I don't want to go into details; the others are not hard to find because I happened upon them via blog posts, so I'm sure if someone with the acumen of RF wants to know, he will find them.
I have heard similar stories from several people in the years since. I AM NOT CALLING THIS PERSON RACIST. I am saying: he is observably scared of Black people, and that is not someone I want making decisions about how the world moves forward.
I wonder if this stems from Sam getting beat up by a black guy. From the article:
> When Altman was sixteen or seventeen, he said, he was out late in a predominantly gay neighborhood in St. Louis and was subjected to a brutal physical attack and homophobic slurs. Altman did not report the incident, and he was reluctant to give us more details on the record, saying that a fuller telling would “make me look like I’m manipulative or playing for sympathy.”
Maybe just Occam's Razor -- any time I've seen Sam talk in public he just seems to be a neurotic, anxious individual that would have a hard time interacting with people in any normal context. In a world of infinite variables it's hard to say that his aversion was due to your race -- there's really not much to go on here.
Thank you for sharing this. I 100% believe it, and it lines up with my experience with other people who came from similar backgrounds as Sam Altman - i.e. white, rich, privileged, and attending elite universities.
I will disagree with one part - I do believe it is racism. Most will never admit it publicly, but if they think you're one of them, it often comes out rather quickly, especially when alcohol is involved.
It's sad to me that "racism" is such a divisive word to many, and is met with defensiveness rather than introspection and communication. Trying to not be racist takes work, and communication, and is a process, not a state.
I appreciate OP's sharing as well. Also, racism isn't peddled only by rich white elite university attendees, it reaches into all the corners.
An extraordinary claim needs a bit more evidence than one data point where, in his defense, maybe he is scared of anyone he doesn't know trying to talk to him on the street.
Also mentioned was that more evidence is not hard to find
If this is noteworthy, the burden of proof should be on the poster, not the reader to substantiate these claims.
I don't really like this take, as it tries to make being informed somehow not the reader's responsibility.
Agreed, his two posts read really weirdly. He made a deliberately vague(?) initial post to get a response, and I'm not sure how I feel about his story; as you've said, if I were Sam Altman I'd be wary of anyone coming up to me too.
Just to clarify, because I am not sure I am reading this correctly:
Your statement that he is terrified of black people is based on you (presumably a black person) running into him outside an event, and him reacting with fear/extreme caution when you approached him?
Not defending Sam, but if that is the case, then it's the kind of thing that Sam can hold up and say "Do you really think my critics are intellectually honest?"
Rock solid evidence is what brings people down. Stretched truths, assumptions, and careful half-truth wording, are all ammo the accused will use to strengthen their side.
> Not defending Sam, but if that is the case, then it's the kind of thing that Sam can hold up and say "Do you really think my critics are intellectually honest?"
Why? It sounds like they were in an environment with many people and Sam reacted negatively to the black guy. It's not like the story was, "so I followed him down a deserted alley and he got scared, so he must be racist."
It sounds like Sam was approached on the street by a stranger, and he had a negative reaction. Which is fairly common for high profile people, especially people with a following of haters (let's not deny AI/data center general unrest).
I cannot see any legitimacy to the claim besides the commenter's own interpretation of the situation. They posit this as if the authors would want to know, but here I am doing the first thing the authors of the article would do, and I'm getting downvotes for it. The author(s) won't touch it anyway.
It's a little weird to be scared of random strangers, famous or not.
If this happened when Altman was already so well-known so as to make this a problem, maybe he shouldn't have been traveling on his own?
Private security is a thing he can afford (now, at least).
Note, to all the downvotes: I did this publicly and not anonymously for a reason. If you will do the same, I am more than willing to provide evidence for all of these claims, as long as it's done publicly and in the open.
PG said something along the lines of: "There should be no truth that is increasingly unpopular to speak."
If you don't believe what I shared is true, address that directly. But seeing my post sitting at 1 point and [flagged] after 2 hours is not OK. Just as DJT can't flag away his issues, you shouldn't be able to do so on HN.
One of the things I've loved most about HN is that it was real — grounded in observability, empirical evidence, not bias or feelings. I really hope that what happened to my post is not the beginning or a continuance of the end for that ethos.
> One of the things I've loved most about HN is that it was real — grounded in observability, empirical evidence, not bias or feelings.
That has never been the case, because HN is frequented by humans and humans are biased. Someone who claims to be unaffected by feelings is someone you cannot trust, as it means they are blind to their own shortcomings. Being robotic about the world is no way to live—that’s how you get people who are so concerned with nitpicks and “ackshually” that they completely lose sight of what’s important. They become easy to manipulate because they are more concerned with the letter of the law than its spirit or true justice.
Objectivity and empiricism are positive traits but should be employed selectively. Emotions aren’t a weakness, they are what drives us to change and improve. Understanding your own emotions equips you better to understand the world. But they too can be used to manipulate you. To truly grow, you have to employ your emotional and rational sides together. Focusing on just the rational will get you far but not all the way.
HN is primarily about curiosity—it’s in the guidelines four times—and you can’t have that without emotion.
>> One of the things I've loved most about HN is that it was real — grounded in observability, empirical evidence, not bias or feelings.
> That has never been the case, because HN is frequented by humans and humans are biased. Someone who claims to be unaffected by feelings is someone you cannot trust, as it means they are blind to their own shortcomings.
Yes, and HN is full of people like that: simultaneously arrogant and stupid software engineers whose arrogance is founded on their own ignorance and self-regard. "Grounded in observability, empirical evidence, not bias or feelings" actually sounds like a smokescreen to obscure one's bias and feelings from oneself.
> Being robotic about the world is no way to live—that’s how you get people who are so concerned with nitpicks and “ackshually” that they completely lose sight of what’s important. They become easy to manipulate because they are more concerned with the letter of the law than its spirit or true justice.
They're also easy to manipulate, because their emotions can be appealed to without them having enough awareness to be on guard. For instance: you can manipulate many software engineers by working your position into the form of a technical "system" (e.g. Econ 101) then praise them for being smart little boys for understanding and believing it.
I don't know if he is a racist or not, but forget HN. The last couple of years it has gone off the deep end, not sure if it's delusion or $ interests, but it is impossible to have a decent conversation here. I think the only reason this article stayed up is because OAI is starting to be a bit 'toxic' now, but if this had been published a year ago, it would have been flagged to oblivion.
So just ignore those points and flags. HN *used* to be a nice place for intellectual conversations, even if you disagreed with each other. Now it's nothing more than bots, people with financial interests in this bubble, or sycophants.
I tried to respond to your comment with some personal observations on racist currents in this community, but my comment immediately got flagged. So yeah! This site ain't what it used to be. Best for the good folks to seek community elsewhere, I reckon. I miss the old days as well, but I don't think they're coming back.
If this site ever was anti-racist, that must have been a long time ago. I threw away my old account many years ago only to come back with this one (because it's difficult to completely ignore HN if you work in tech) and the reason I threw that one away was in part the overwhelming reactionary bias in this community.
The "progressives" were at best silent "don't rock the boat" types more inclined to insist on civility than to challange reactionary sentiments while the reactionaries ranged from dog-whistling to outspoken, across the entire range of white supremacism, sexism, homophobia, transphobia, antisemitism, zionism and so on. The only comments that would ever get flagged or downvoted were those that were explicit enough to be seen as "impolite" because they happened to spell out calls for genocide or violence rather than merely gesturing at it with the thinnest veneer of plausible deniability.
Well, I do remember it being more about the underdogs and a cheeky "fuck the system" attitude without much malice. Maybe I just wasn't tuned into this stuff back then. Now, though, both users and tech leaders can unironically parrot Stormfront rhetoric from 10 years ago (using vaguely cordial language) and no one even bats an eye. The kind of stuff that would have made you unemployable just a few years ago.
When I think of HN in the before times, I think of people like Aaron Swartz. Would he have enjoyed his technical discussions peppered with comments on how the West is being "invaded" and "outbred" by third-world hordes? Based on what I know about him -- and please correct me if I'm wrong -- I'm guessing he would have noped out of that kind of community in a flash. Yet nowadays I see this kind of talk here all the time, percolating all the way up to industry leaders like Musk and DHH.
Just came to say, I appreciate your emotionally intelligent and balanced take on your experience, where it would have been very easy to react and let emotions take over (understandably).
Thank you for sharing this.
It's disappointing to me that a completely factual personal experience can be relayed with zero spin – and yet some of the replies act as if it's 100% spin without any factual evidence. Some people seem to prefer to respond to an imaginary version of a conversation rather than the one that's actually happening in front of them.
Thank you for sharing this experience with us. Don't worry about the downvotes. That's just how it is here sometimes. I don't think it reflects the views of most readers.
Well, just based off that group photo of the openclaw developer and the staff at OpenAI, I wouldn't be surprised if there was some truth to this.
> Altman is absolutely terrified of Black people
Can you share more about how this manifested?
The longer I live, the more secrets coming out I see, the less surprised I am with every next one.
Career ladders have a tendency to hoist up those among us with the worst personality traits.
I really hope @ronanfarrow addresses this. Thanks for sharing
For me, the attempted productization of Sora was conclusive proof that 1) OAI was overcapitalized and desperate for revenue 2) safety didn't matter to them much 3) improving the world didn't matter much either.
At one point you mentioned an interaction with OpenAI staff where you were looking to interview AI Safety researchers. You were rebuffed b/c "existential safety isn't a thing". Does this mean that you could find no evidence of an AI Safety team at OAI after Jan Leike left? If you look at job postings, it does seem like they have significant safety staff...
Interestingly we are still experiencing the technological momentum inspired and created by what OpenAI used to be. AI for humanity.
Given the initiative started circa 2017, much of the good remains. It's a hijacking of the creative geniuses who got together, which is now turning into cow-milking tech.
I remember reading these direct quotes from SA in 2016 from the New Yorker and thinking, yeah, this guy is just miserable:
> “Well, I like racing cars. I have five, including two McLarens and an old Tesla. I like flying rented planes all over California. Oh, and one odd one—I prep for survival. My problem is that when my friends get drunk they talk about the ways the world will end. After a Dutch lab modified the H5N1 bird-flu virus, five years ago, making it super contagious, the chance of a lethal synthetic virus being released in the next twenty years became, well, nonzero. The other most popular scenarios would be A.I. that attacks us and nations fighting with nukes over scarce resources. I try not to think about it too much, but I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.”
> "If you believe that all human lives are equally valuable, and you also believe that 99.5 per cent of lives will take place in the future, we should spend all our time thinking about the future. But I do care much more about my family and friends.”
> "The thing most people get wrong is that if labor costs go to zero... The cost of a great life comes way down. If we get fusion to work and electricity is free, then transportation is substantially cheaper, and the cost of electricity flows through to water and food. People pay a lot for a great education now, but you can become expert level on most things by looking at your phone. So, if an American family of four now requires seventy thousand dollars to be happy, which is the number you most often hear, then in ten to twenty years it could be an order of magnitude cheaper, with an error factor of 2x. Excluding the cost of housing, thirty-five hundred to fourteen thousand dollars could be all a family needs to enjoy a really good life.”
> "...we’re going to have unlimited wealth and a huge amount of job displacement, so basic income really makes sense. Plus, the stipend will free up that one person in a million who can create the next Apple.”
This doesn't seem like someone who's miserable at all to me. They seem like someone who has a wide variety of hobbies and is intellectually interested in futurism.
Yeah, I have a half-baked thought about billionaires like this: that they truly want the best for this world even if they have to seek it by immoral means.
Funny you bring this up because I always think back to a story, in the New York Times if I recall correctly but perhaps the Journal or SFC, about how he and his friends got upset when asked to leave a high-end French restaurant due to him wearing sneakers. They pulled a "Do you know who he is?" well before he was even tied to OAI. Always left a bad taste in my mouth and stuck with me a decade on.
Tangentially, without being too specific, I have someone incredibly close to me that has recently had interactions with the upper echelons of OAI's exec team and... the stories are not kind. I imagine when your company is being run by a morally bankrupt tech bro you are short on integrity.
After 10+ years of hearing anecdotes about sama I am starting to wonder if maybe the word on the street is true and he really is just as selfish and blind as people make him out to be. At this point, the optics surrounding OAI vs. Anthropic are just plain bad. They should have gotten rid of him before when they had the chance.
I don't follow public figures or news anywhere near enough to have a meaningful opinion on Sam Altman, but I find one interesting snippet here, which is that there is a straightforward prediction in there. He did say ten to twenty years and it's only been ten, but still, I can't think of a single good or service that families need or commonly want that is an order of magnitude cheaper. It makes me wonder if he's become any less confident of this or any other prediction.
I don't want to be holier than him or thou or anyone else, but it is the kind of thing I've found in myself quite a bit. I made a lot of confident predictions about the future 15-25 years ago on the Internet, and even though I'm not a public figure and nobody will ever hold me to task for being wrong, I can see it for myself. The predictions are still there. They weren't universally wrong, but I didn't do much better than chance. It's a big reason I no longer bother to make predictions. I have no idea what the future will bring and I'm comfortable with the uncertainty. It doesn't feel like very many people on the Internet are.
Gobsmacking details about Altman's time as Y Combinator president, in case anyone's wondering.
Fantastic reporting.
As is always the case with incredibly precise and rigorously fact-checked reporting like this, where every word is chosen carefully (the initial closing meeting for this one was nearly eight hours long, with full deliberation about each sentence), there is more out there on that subject than is explicitly on the page.
One of the decidedly eerier parts of this story as you keep reading is all the gaps between what people are saying about Altman, and what they clearly want to say about Altman but can't.
Throughout my life, what colleagues/friends are unwilling to remark plainly on has been the most telling factor of someone’s character to me.
This can be true I suppose, but equally I have a few friends who practically play characters as if they've resigned themselves to a role in a sitcom. For instance: one of my friends is late to just about everything and treats everyone as if we are on-call. We plainly note this repeatedly, the friend is, I hope, equally frustrated and embarrassed by it, and in spite of this nothing changes. This is obviously a critical element to their broader character.
Perhaps you mean to distinguish social groups without much intimacy? To which I'm sure we could provide some convincing cases, but this seems like a silly heuristic generally.
I have been in or next to a number of social circles with such missing stairs, where for various reasons people in the groups have decided to not directly acknowledge certain Facts that are known about some members, because it would involve them confronting their hypocrisy.
Someone cheating regularly on their partner, flagrant substance use problems, controlling people who ostracize anyone who doesn't agree with their sometimes insane perspectives...
People will go along with quite a lot to avoid friction, especially as they get older and picking up new social circles becomes higher cost.
It's possibly the most telling thing, when you see what people say is a hard line versus how they actually respond to it.
Maybe they have ADHD because the symptoms fit, if they really do acknowledge the problem yet cannot fix it.
That's not ADHD. People with ADHD would improve - it may take a LOT of time, but it will happen. Quite often they will go to the extreme and come in way too early. My bet would be on a Cluster B personality trait, e.g. lack of empathy and a constant need for attention and validation.
ADHD frequently co-occurs with other conditions.
That is not always true, and not for everyone. Many people who have ADHD have unsolvable time blindness. They don't mean to do it, but in many cases their brain chemistry literally prevents them from doing otherwise.
> where every word is chosen carefully (the initial closing meeting for this one was nearly eight hours long
For anyone unfamiliar with this process, the New Yorker documentary is well worth the watch: https://www.netflix.com/title/81770824
> ... where every word is chosen carefully...
In light of that...
> "Texts from this period show Altman coördinating closely with Nadella"
Why did you make the odd choice of a diaeresis on this word?
If this isn't a joke - New Yorker style uses a diaeresis when a word has a repeated vowel and the second vowel is part of a different syllable. Coördinate, coöperate, and reëlect are probably the most common places where this comes up.
https://www.newyorker.com/culture/culture-desk/the-curse-of-...
You mention many proxies of Musk who post negative content about Altman.
In your investigation were you able to determine if Altman has similar proxies?
How common would you say that this is? Do these kinds of people generally have teams of people who sling mud for them?
Can you speculate on how that manifests on a site like Hackernews?
This might be the major dilemma in the tech industry today, where the natural tendencies of literalism and optimism among technologists have turned into a form of defensive credulity. The real-world rigor of The New Yorker’s editorial standards and concerns about defamation necessitate this circumscribed style that rewards close reading and skepticism, but those aren’t in favor in the tech industry currently.
With this in mind, I think you would be the perfect investigative journalist to track down the archives of The National Enquirer.
This was our "hometown" gossip paper in South Florida, and you should have seen the pictures and stuff that they did print. And this was after threats of celebrity lawsuits in the mid-1970's had curtailed any tendency to exaggerate.
Back when almost nobody outside of New York had heard of Trump, he started coming down to play golf and made quite an impression among the well-established Florida real-estate operators. They could see right through him like any other fake millionaire from New York, which were a dime a dozen. There was just a general consensus among many visitors that what happens in South Florida stays in South Florida. Epstein grew up in this environment.
You would see pictures of him with unidentified non-Stormy dates, and some insinuation in the gossip column but you knew they were holding back from anything that could not be truly verified.
By the time of his presidential run, it looks like he had become well acquainted with David Pecker who owned the Enquirer. I wouldn't be surprised when he sold the publishing company that there are archives somewhere that contain all the supporting stuff that was unverified at the time. When Trump & Epstein were much younger running buddies for so long.
Calling your own article all those things is a major turn-off.
We need only ask the dead. Aaron Swartz knew what Altman is. The answer to the topic is no.
I'm interested in knowing more about this topic, do you have any resources about the relationship between Swartz and Altman?
It’s not difficult to find these; Aaron always said that Sam was not to be trusted.
Apparently Aaron Swartz and Sam Altman were classmates at the original 2005 Y Combinator class. This article has a picture of them literally standing next to each other: https://www.hindustantimes.com/trending/throwback-photo-of-f...
The OP says this:
> The board member was not the only person who, unprompted, used the word “sociopathic.” One of Altman’s batch mates in the first Y Combinator cohort was Aaron Swartz, a brilliant but troubled coder who died by suicide in 2013 and is now remembered in many tech circles as something of a sage. Not long before his death, Swartz expressed concerns about Altman to several friends. “You need to understand that Sam can never be trusted,” he told one. “He is a sociopath. He would do anything.”
It's mentioned in the submitted article (about half way through), you should read it.
I found it very interesting that Altman et al were worried that AI will become supremely intelligent and China will make a supervirus or some AI drones or whatnot, but not a single person was worried about destroying all jobs because we wouldn't need humans any more.
Or maybe they were not so much "worried" but "hopeful" that they'd amass literally all the wealth in the world.
I also find that interesting.
And not intending to defend the motives of anyone involved, but I'm hoping we can not worry about literally all jobs being destroyed, and AI companies amassing all the wealth in the world.
Don't we need at least some humans working and earning to buy these AI services? Am I not being imaginative enough? Is it possible for the whole economy to consist just of AI selling services to each other?
I realise that even if AI destroys most jobs, or even just a lot of jobs, and amasses most wealth, or a lot of wealth, it would still be a terrible thing for humans. The word "all" could have just been hyperbole, and it is still a valid point. I just want to know people's thoughts on whether entire replacement is possible.
Do you need ants buying services from humans for the world economy to function?
If AI will indeed become superintelligent, we won't matter.
It's a huge if and honestly I don't believe in it.
Actually, if it ends up like described, it really doesn't matter whether I believe in it. Either it happens and we all die, or it doesn't happen. Pascal's Wager I suppose.
Why keep human consumers to buy your services when you could just amass all the wealth you desire, and have autonomous systems that can ensure your unassailable physical security? You would sit atop the most stratified dominance hierarchy ever achieved, and it would reduce other humans to mere pets or breeding stock. I don’t think normal humans would desire that kind of power, and I don’t believe LLMs will take us there, but I wouldn’t put it past the perverted billionaire maniac.
Surely a Big Sur compound stocked with iodine and gold, protected by security goons fitted with exploding collars, is someone’s definition of paradise.
They could always visit lawnmower at the Lanai compound if they got bored of Big Sur.
I think fundamentally, the concern is misplaced. The fact that you need to work for wealth is a convention of our constraints. A change in constraints would lead to other means of distribution. It's easy to see why someone who believes more productivity is good would not see making jobs obsolete as a real problem. They would see us adapting to the new conditions in a relatively short while.
> The fact you need to work for wealth is a convention of our constraints
The current constraint is "you need to produce to have things".
If one company's AI takes all the jobs, and thus does all the producing-to-have-things, the constraint transforms into "you need that company's permission to have things".
Hence the top-level question.
The new conditions almost surely being like the old conditions: slavery, sexual exploitation, etc.
Those who are concerned are implying that any new distribution mechanism is not going to favour them.
And under the capitalist system, if nothing changes, the "new" distribution system is indeed not going to favour them - at best there would be some sort of UBI, and at worst you would be left to starve in the streets.
However, I cannot see how one can transition to a new system and have the existing powers of the current system both agree to it and not be disadvantaged by it.
>They would see us adapting to the new conditions in a relatively short while.
Say ~5 million jobs are automated away in the next 10 years: which industries do those people move to?
With college being exorbitantly expensive, that locks out many people from re-skilling in other fields.
As people race to other industries, that forces down wages because now there is a larger pool to select from.
How do we ensure people are taken care of when UBI is all but fiscally impossible in the US?
If you are speaking about the world, hundreds of millions in the next 5 years is probably closer to reality in my opinion. And from your question I think that you already know the answer.
Altman is an advocate of Universal Basic Income, as far as I'm aware. That doesn't sound like he's not worried about massive job losses.
https://www.cbsnews.com/news/sam-altman-universal-basic-inco...
https://finance.yahoo.com/news/sam-altman-wants-universal-ex...
> Altman is an advocate of Universal Basic Income
So he says. And the way he proposed reaching that was with a scam cryptocurrency under his control which has rightfully been banned in several countries.
https://www.buzzfeednews.com/article/richardnieva/worldcoin-...
https://www.technologyreview.com/2022/04/06/1048981/worldcoi...
It would be absolutely astounding to read TFA and then continue to take this weasel at his word.
If there's one thing that's clear from the article, it's that he's a proponent of anything that will benefit him, even multiple conflicting things at the same time.
Meaningfully?
Is there an advocacy arm of OpenAI pushing for legislation for UBI? Or is this like Musk's supposed support for UBI while also insisting that welfare payments to the poor are a bad thing?
It's easy to advocate for something when you know it's essentially impossible to implement.
https://archive.ph/hOYMn
I hope Ronan Farrow doesn't mind his article being shared like this
It’s also available via public libraries in the USA through Libby, if your local library system pays for a subscription, so it’s a way to support the magazine indirectly, since your local taxes pay for your library. The downside for weeklies is you have to read each issue that week; there's no archive access.
Which edition? I looked at April 6th and can’t see the article.
The New Yorker posts digital articles in advance of the release of the print edition.
At the bottom of this article it says: Published in the print edition of the April 13, 2026, issue, with the headline “Moment of Truth.”
As someone who reads the print magazine every week, I always scroll down to check if the article will be published and skip it if so (so I can read it when my magazine arrives).
Truth > revenue
The information is more important than the wants of the writer, always.
I’m not going to pay for another newspaper subscription just to read one article
This is pretty hilarious: when I asked ChatGPT to "summarize this article: https://archive.ph/hOYMn", it said it's about Jesus ("The article traces the development of early Christian Latin hymns, especially focusing on how themes about the Virgin Mary and Christ evolved from the 4th to later centuries..."): https://chatgpt.com/share/69d48476-9bf4-8327-8c19-709865a547...
Sharing what an LLM has to say about a thing is like sharing what you dreamt of last night — no one really cares.
unless you dreamt how to debug that problem no-one else can solve
It works better if you said it came to you under the influence of heavy drugs
Why even have GPT summarize it? Just RTFA.
I like to consume content in a breadth-first way. Title -> Summary -> Maybe read it.
Interesting. If you look at the sources it cited, there are a few links about "Sacred Songs and Solos" (likely from related/side content on the page). My guess is it didn't read the main article, anchored on those instead, and hallucinated the rest.
I wouldn't be surprised if the owner of that site has AI scraping protections enabled.
[1] is also good to read as a follow-up, to compare the personalities
https://harpers.org/archive/2026/03/childs-play-sam-kriss-ai...
This was a great article, and absolutely savage in some of its characterizations.
The fact that this reads as deranged fantasy, and yet I can believe it's 100% real, is insane lol
I read this a few days ago, excellent article and an absolutely insane story.
I usually use free archived versions to read mainstream journalism pieces. Seeing this convinced me to subscribe. I've always loved The New Yorker, and am happy to support serious longform journalism (and I know that Ronan is one of the best).
However, it's a shame that the only way to subscribe to the print version is to pay $260 upfront for the yearly subscription. Meanwhile the digital version is $1/week ($52 upfront) for one year, or even just $10 for one month.
> Chesky stayed in contact with the tech journalist Kara Swisher, relaying criticism of the board.
Ronan, interesting writing as always. I'm curious whether the role of the media as a pawn of the rich and powerful, used to sway perception and build narratives, concerns you, especially given your personal experiences with this and the reporting you've done. Are there reforms you think reporters and/or news organizations should adopt to make sure access doesn't become direct or indirect manipulation, and how do you fight against that in your own reporting?
It's really interesting reading about how these folks view LLMs. Yeah, they're transformative, but I don't know that we're going to be eating ramen in a Neo-Tokyo street bar anytime soon. So much "A.G.I." mentioned in the article.
I find it interesting how a lot of cyberpunk does not really include AI, or does not present it in a transformative way. There is a lot of mind uploading, implants, corpo fun, and overall technology permeating all aspects of life, but often AI itself does not actually play a big role.
Counterexamples that come to mind are Neuromancer (AI driving the plot) and Blade Runner (AI antagonists.)
A compromise thesis might be that in cyberpunk media, AI is never powerful or motivated enough to fundamentally reform the worldwide crapsack economic system. AIs don't abolish corporations, although they might take them over.
Of course, if there was a story about an AI taking over the world into a post-scarcity society, it probably wouldn't be filed under "cyberpunk" either...
Well yeah, that's what "alignment" means...
Rampant capitalism is kinda genre-defining for Cyberpunk so Cyberpunk without corporations wouldn't really be Cyberpunk. _The Matrix_ only qualifies as Cyberpunk because within the matrix the machines effectively control the capitalist power structures to exert their influence.
Abundance/scarcity isn't really about availability, it's more about access. You can have a cyberpunk story in a "post-scarcity" setting in the sense of availability (due to sci-fi tech) but you can't have it without unequal access to those resources.
Agreed, which is why The Culture (series) isn't cyberpunk but The Polity (by Neal Asher) kinda skirts the line, in many ways they are similar except resource inequality still exists on a wide/policy scale in the latter.
AIs are in plenty of cyberpunk stories, but your comment did make me think that they are often rather stereotypically “alien entity characters” and not a kind of corporate technology / weapon that is controlled by a specific organization.
Which is a shame, as it seems to me that the overwhelming risk of AI is from the latter scenario, and not as a rogue individual entity.
It is a pretty core part of Cyberpunk the "franchise" though, in both the tabletop game and the more recent video game.
I think, as well, if you look closer, many cyberpunk worlds imply AI through robots, computers with personalities, etc.
I think you can look at Star Trek as a fairly grounded example of where current LLMs could go: the ship's computer is not autonomous in any way but it does accept fairly vague instructions and you can apparently vibe-code the holodeck.
I'm hoping more for Red Dwarf
I find that more realistic, then, because it appears that's the trajectory we're on with regard to AI: a tool, not a panacea.
AI is one of the core parts of cyberpunk, through androids / humanoid robots. Blade Runner is completely built on the protagonist having to interact with rogue artificial intelligence.
Hyperion has a pretty well-developed view of AGI.
I assume it just becomes one of those things as ubiquitous as Wi-Fi
Deus Ex is an outlier; AI is a core part of that plot
The first cyberpunk book, Neuromancer, has a plot which revolves around an A.I. recruiting human agents to further its plans...
It's because they're really good at the kind of busywork the average white collar job requires. Most people are out there writing documents and making presentations. Only when you use them for actual complexity does the shortfall become clear.
Well, I'd hope they're transformative; they're using transformers, after all. We just need to pay attention to them - that's all they need.
Do they need all our attention?
I'm going to write a silly comment here: For a moment I thought you wrote "... LLMs. Yeah, they're transformative, but I don't know that they're going to be eating ramen in a Neo-Tokyo street bar anytime soon."
I liked that mental image a lot! (I try to stay uncertain about whether Deckard was a replicant.)
Great piece. And a good excuse to read up on the use of the diaeresis in English (e.g., coördination, reëlection) to distinguish repeated vowels - I hadn't seen The New Yorker's usage before.
They also prefer some less common spellings. For instance, just noticed “vender” instead of “vendor” in an article this morning.
It isn’t for all repeated vowels; only when the two vowels don't make a single sound. So “chicken coop” wouldn't have a diaeresis.
It would if the chickens formed a business structure that was owned and democratically controlled by its member-owners.
Great point :D
That is likely a co-op.
That's the joke.
Unless it was a chicken coöp... One of few cases it actually resolves an ambiguity!
It's also to distinguish metal bands. Motörhead.
Archive link: https://archive.is/2026.04.06-100412/https://www.newyorker.c...
Wow, this is an incredibly detailed piece. Really in depth reporting and the kind of detailed investigation we need more of on important topics like this.
> "Employees now call this moment “the Blip,” after an incident in the Marvel films in which characters disappear from existence and then return, unchanged, to a world profoundly altered by their absence."
This is a very small detail, but an instinctive grimace crosses my face at the thought of these sort of Marvel references and I'm not entirely sure why.
They're mass media cynically produced to extract maximum profit from lowest common denominator audiences, so the idea that people working in such influential positions find them appealing enough to reference suggests they are members of that lowest common denominator audience.
The people shaping the future have no taste.
There's a time and a place for everything, and rejecting popular media as "lowest common denominator" is the most uninspired form of cultural elitism.
Is it cynical to want your <art project> to make a profit? Or for it to make enough profit to subsidize other projects?
Is it cynical to make something accessible so more people who watch it are able to enjoy it?
I agree that it's embarrassing and feels crass when movies both try to be broadly appealing and simultaneously fail to be entertaining or well executed ... but many of the marvel movies clearly surpass that bar.
No one wants to make a bad movie that does poorly with critics and paying customers - but it does happen because making a movie is expensive and complicated and requires a lot of skilled people working together towards the same goal.
Regarding taste: do you think a Michelin-starred chef swears off cheap food like hotdogs or fish and chips? Doubtful - because those foods have their place, and the chef is able to enjoy them for what they are rather than use them as an excuse to display a superiority complex.
> There's a time and a place for everything
Yeah, I'm saying professional communication isn't the place for Marvel references, and that those who choose to include references to those movies in their professional communications are revealing something about their media tastes.
If I'm at a Michelin star restaurant I don't want to be served a ballpark hotdog.
> If I'm at a Michelin star restaurant I don't want to be served a ballpark hotdog.
This is a very funny quip.
A famous anecdote about a 3* restaurant in NYC is about the servers overhearing a group of diners mention that they had run out of time to try a "real NYC hot-dog", the restaurant staff running out to grab one from the corner cart and plating it up nicely, and how this was a highlight of everyone's experience.
That they relate to the common person and aren't overly snobby?
Exactly. They share the cultural sensibilities of the average person on the street, and yet they're making decisions that will shape the world for future generations. I think that's bad. I want those decisions being made by people who have a more extensive cultural education. Snobs, if you want to call them that.
Interestingly, the smartest people I know have the widest range of media consumption and understanding. To assume that because someone uses a marvel reference they might not have a deeper cultural education is rather...limited thinking.
Ferran Adria drew culinary inspiration from a bag of potato chips
As someone with a privileged, elite educational background, I can guarantee that intellectuals love the highbrow and the lowbrow, the authentic and the kitsch. Rather, it is a sign that someone is not acculturated if they hold the stereotypical impression of the intelligentsia, which makes the OC's comment ironic: they are telling on themselves.
Of course they're average people; why do you think tech or AI company employees are somehow above or beyond the average person? I'm not sure why you'd willingly say you'd want snobs controlling the world. That is somehow even worse and reeks of aristocracy, which is why you see replies rejecting your thoughts; it is simply not a Western ideal, or one to strive towards.
> why do you think tech or AI company employees are somehow above or beyond the average person?
They're supposed to be elite. They went to the best schools, many of them have PhDs, they are getting paid insane amounts of money.
Lol. I can tell you right now they're not elite.
I'm confused as to what your point is. Employees refer to the incident as "the blip." I got no impression that there was a formal memo that went out to the company or the media at large that officially refers to the incident as the blip, merely that employees refer to it as a blip (likely to each other, not too dissimilar to a meme).
And while I don't think someone's media tastes ought to preclude them from making important decisions, I also disagree with your point at large. I don't think the world should be shaped by snobs. The world is already being shaped by snobs in the other sense of the word, and I don't see any indication that it's any better than the alternative.
There is also an elitism of lowered expectations. Common people should be helped to rise above the mud produced by the culture industry. Meeting them in this mud, and staying there with them, is the actual elitism.
Marvel movies absolutely target the lowest common denominator of film watchers. To deny that is delusional.
When things reach a certain level of popularity they constitute "mental real estate". Your audience has heard of Groundhog Day, so there is an opening for a movie with that title to make money -- your film will start out already having name recognition and some understanding of what the movie is about.
Thus it is a writer's job not to make references they find appealing to reveal their good taste, but to know what references their audience will find appealing and use them to help communicate concepts. If this bothers you it's because they're insulting you by saying you might be part of the audience that watches Marvel, and you had hoped reading the New Yorker would signal that you aren't.
The writers of this piece didn't make the reference.
No, but they chose to include it. Presumably there were a lot of less apt references they chose not to include.
I agree that these movies are really being cranked out. I hadn't even realised quite the extent of this until I went to look. But I think some of these movies are good enough that it shouldn't be disturbing that people in influential positions find them appealing:
I know a lot of people are critical of the Rotten Tomatoes score, but I find that when a high enough percentage of reviews are positive, it is likely I will enjoy the movie. Some of the Marvel movies have a very high proportion of positive reviews (admittedly, those reviews could be just positive, not very positive). And for most in this list with a very high score, I think it's deserved.
https://en.wikipedia.org/wiki/List_of_Marvel_Cinematic_Unive...
Arguably, one indication of the limitations of the Rotten Tomatoes score is the number of these Marvel movies with high scores :)
Btw, I'm not trying to convince you that if you watch the movies you'll like them. Just that they may not all be as bad as you think.
I'm an MCU fan. And while I do agree quality has gone down, I think it's hard to ignore the fact that the MCU did something really novel. They made a franchise that spanned 20+ movies and tied it up in a way that was almost universally loved by nerds and normies alike.
Are there a lot of plot holes and retcons? Yeah. And some bad writing. And the movies that came after have been pretty meh with some exceptions.
But for someone to say that referring to one of the highest grossing films and franchises of all time, means their decisions should be questioned, is quite the stretch.
The issue with Marvel, really, is that it threw out the window another 20+ movies' worth of unique IP, or stories that could have been told. Yeah, highest-grossing of all time, but grosses have been marching up the whole time too, no? Especially with selling to China now. I'm guessing the studios would have made the same money spread out over other IP.
I disagree with this characterisation. I loathe mass-media blockbusters, but a journalist has to be in touch with public culture in their goal to spread the truth and inform people, not just high-brow elites, but everybody. This is why their work is usually more influential, interesting and engaging than if it had been written by an academic.
Amazing that this article, with an actual comment from Ronan Farrow, is this far down the list while... "Scientists Figured Out How Eels Reproduce" (2022) has 6 times the points.
This thread set off a software penalty called the flamewar detector.* I turned that off as soon as I saw it.
(* This was predictable from the title, because the question in it was inevitably going to trigger an avalanche of crap replies. Normally we'd change the title to something less baity, and indeed the article is so substantive that it deserves a considerably better one. But I'm not going to change it in this case, since the story has connections to YC - about that see https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu....)
This anecdote is so absurd it sounds like satire. This is the guy with the $23M mansion?
> Amodei’s notes describe escalating tense encounters, including one, months later, in which Altman summoned him and his sister, Daniela, who worked in safety and policy at the company, to tell them that he had it on “good authority” from a senior executive that they had been plotting a coup. Daniela, the notes continue, “lost it,” and brought in that executive, who denied having said anything. As one person briefed on the exchange recalled, Altman then denied having made the claim. “I didn’t even say that,” he said. “You just said that,” Daniela replied.
Well, they did indeed have a coup, so it looks like Altman was right.
He's a liar and untrustworthy. Based on their public statements, that's a big part of why the board fired him.
Of course, (despite the fact that Altman previously publicly stated that it was very important that the board can fire him) he got himself unfired very quickly.
A new Ronan Farrow piece is a rare gift (and Marantz is no slouch). Can't wait to read this in the physical magazine when it arrives!
I hadn't heard of him before. The wiki article is worth a look
https://en.wikipedia.org/wiki/Ronan_Farrow
It's got to be one of the most unusual biographies of a living person that I've ever come across. Nearly every sentence is a head-turner. If you made it up no one would believe you
Truly a unique individual on every level.
I didn't have the mental energy to read the whole thing, but man, the final paragraph is some really good writing. Way to tie it all together.
The entire thing is a joy to read; you should really set aside some time to cleanse your palate in this age of LLM prose. I mean, just look at this juxtaposition:
>Altman continued touting OpenAI’s commitment to safety, especially when potential recruits were within earshot. In late 2022, four computer scientists published a paper motivated in part by concerns about “deceptive alignment,” in which sufficiently advanced models might pretend to behave well during testing and then, once deployed, pursue their own goals.
(plus it finally resolves the mystery of "what Ilya saw" that day)
Also since it wasn't stated clearly
>“the breach” in India. Altman, during many hours of briefing with the board, had neglected to mention that Microsoft had released an early version of ChatGPT in India
That was Sydney, if I understand correctly.
I am in my 40s and am going to be made redundant this June. In the future, only people who can afford tools like Claude and OpenAI, and, most importantly, can create more value with them than others can, will be able to survive. Otherwise the game is more or less over, and I question what's next for my own future while I learn to use Claude out of FOMO. I cannot trust Sam or the others to have any interest in keeping this tech affordable for common people like me.
That animation of Altman with a thousand faces is oddly unsettling. Good job, New Yorker.
Why is the story so downranked? Do folks at Hacker News have something to do with it?
It set off the flamewar detector, a.k.a. the overheated discussion detector. I've turned that off now - this is obviously a serious article.
HN generally downvotes and/or flags anything that paints Y Combinator in a bad light. As Altman was president of YC from 2014 to 2019, that could be why this is getting downvoted.
Articles critical of Airbnb, one of YC's biggest wins, also get flagged and taken down.
I'm not sure whether you meant this about moderator interventions or not, but our actual practice is the opposite:
https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
As those comments explain, this has been the #1 rule of HN moderation from the beginning. See also https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que....
I don’t think the poster you responded to was claiming that moderators directly did this. The flagging system is open to bias from the community at large, and certain types of articles (e.g., anything critical of the current administration) get a bunch of real users organically flagging them.
Yes, it's hard to tell sometimes but I've at least learned not to automatically take these personally. Well, partly learned.
I don't think anyone familiar with this community would assume positive bias towards Sam, Airbnb, or even YC anymore - it's quite the contrary, from my perspective, but of course everyone notices different things and has their own view. Ditto for political slants.
I don't assume positive bias, but I do assume that most negative things that get people irked are removed as a result of the mechanics of the flagging system.
Like, I don't really expect puff pieces for Y Combinator or the like to get artificially pushed to the top, but I do expect enough people who feel culturally or financially invested in Y Combinator to flag negative things into oblivion, especially as it's completely reasonable that the population of users here has a much higher percentage of those folks than any random population sample.
Of course he cannot be trusted. Anyone whose motivation is based on greed is by nature untrustworthy.
Even if your motivation is some utopian vision of the future, you should not be trusted. Utopia is a thought experiment in a philosophy of living taken too far, not something to be reached for earnestly.
Why is it that criticism of people's insatiable greed for wealth and power often gets dismissed with this thought-terminating cliche about utopias?
Desire to live in a society that's less greedy, that rewards compassion and punishes sociopathy is completely valid. We should be pursuing that earnestly because survival of our species depends on it. The people in charge are so drunk on wealth and power that they would rather drive our entire species off a cliff than sacrifice even 10% of their effectively bottomless wealth.
But instead of criticizing our current philosophy that's actively being taken too far and threatens to destroy us, you criticize people who express their frustration with this state of affairs.
The criticism is not of the idea that the world has problems, and that we should look at those problems with the aim of fixing them.
The criticism is of the assumption that a world without problems theoretically could exist.
You may disagree, but you will not find a definition of such a world that everyone can agree on.
Regardless of whether you agree (that such a definition doesn't exist) or not, if you do plan on bringing about such a utopia and you begin to meet resistance, the question you will inevitably need to answer is: how do those who resist fit into this utopia?
The historical answer for this question, which by all appearances seems like an inevitable answer, is the reason why people criticise utopian thinking.
Not just the greed. The whole "AI is so dangerous that we must be the ones to build it to save humanity" routine, and then gaslighting yourself and everyone around you into believing that your language model is AGI. This is some weird, detached-from-reality cult behavior.
Complete hearsay, but I struck up a convo with someone who had spent a few hours drinking around a campfire with him and a few others at Burning Man, prior to GPT-3's popularity. Apparently he was utterly convinced of his pivotal role in shepherding in a new era of AI, to the point where it got really messianic and culty. He didn't recall much else other than just being really weirded out by the dude.
The AI CEOs and most of their employees are in the same place as that guy. They're just in a more professional context and will be careful not to let their delusions of grandeur look too insane.
I remember watching the fitness function improve while my neural net learned to recognize characters for a project I did in school, and there was something about it that felt powerful. I guess we've always had that with the machines we imbue that have any sort of decision making "intelligence", but mix that with taking psychedelics and you have an interesting cocktail.
lol that's like 99% of planet Earth, including the animals
no it isn't lol
“By 2018, several Y.C. partners were so frustrated with Altman’s behavior that they approached Graham to complain. Graham and Jessica Livingston, his wife and a Y.C. founder, apparently had a frank conversation with Altman. Afterward, Graham started telling people that although Altman had agreed to leave the company, he was resisting in practice”
This statement rings true.
JL, PG has often mentioned, is his weapon for testing the “people” integrity aspect of YC/startups. It's not lost on me that Altman and Thiel, both associated with YC, were useful only in the short term, which highlights how regular “character” evaluations are required at higher levels of responsibility.
I don't think they were useful at all. If anything, they pulled down YC's until-then stellar reputation.
At least two of YC's early (mid-aughts) "huge" successes come down to PG unilaterally (or with some help from JL) making some kind of "weird" call. AirBnB and Reddit come to mind. Even Stripe can be traced to him since he basically created the Auctomatic team (Patrick Collison's previous YC entry).
In other words, PG had the "knack" for sometimes encouraging the right weird thing. I'm not sure it's been the same since he handed off the reins, like any other formerly-founder-led company. Nowadays it really gives off the vibe of bean-counting and hype-chasing.
I don't think it's gotten quite as bad as this [0] article suggests, though.
[0] https://stanfordreview.org/is-yc-for-cowards/
“Today’s news comes at an interesting time. Last week, Business Insider’s Jonathan Marino reported that YC is close to raising several billion dollars for a new fund, with the goal of possibly expanding its scope to later stage funding. It said it’s still in preliminary discussions for this new strategy, but if true, Thiel could definitely play a big role there.”
My recollection was that Thiel was injecting cash - a money deal. [0] There was another, less advertised play: an established path for the Thiel "Boy Wonder Fellows". [1]
“In addition to founding PayPal and Palantir and being the first investor in Facebook, Peter has been involved with many of the most important technology companies of the last 15 years, both personally and through Founders Fund, and the founders of those companies will generally tell you he has been their best source of strategic advice. He already works with a number of YC companies, and we’re very happy he’ll be working with more.”
Guess who was involved in the Thiel/YC deal? [2] You are not the only one seeing this as a reputation hit for YC. [3] Even I, disconnected on the other side of the world, could see this was an issue.
[0] https://www.inc.com/business-insider/peter-thiel-is-joining-...
[1] https://boingboing.net/2016/08/25/peter-thiel-y-combinator-f...
[2] https://www.ycombinator.com/blog/welcome-peter/
[3] https://qz.com/810778/y-combinator-has-no-problem-with-partn...
Having Thiel on the board of YC would probably turn off a lot of potentially successful founders. Or maybe it's a way to select for those with a lack of ethics. Having Musk and Thiel visibly associated is probably good from a monetary perspective, but it sends all kinds of bad signals.
https://archive.is/Cd0Yl
Another mirror, on archive.org for archival purposes: https://web.archive.org/web/20260407180452/https://serjaimel...
Really solid piece of journalism. I understand some stuff ends up on the cutting room floor in the editing process as length is eventually a factor. What was the one thing you most regret having to cut out of the final piece?
> Amodei, in one of his early notes, recalled pressing Brockman on his priorities and Brockman replying that he wanted “money and power.” Brockman disputes this. His diary entries from this time suggest conflicting instincts. One reads, “Happy to not become rich on this, so long as no one else is.” In another, he asks, “So what do I really want?” Among his answers is “Financially what will take me to $1B.”
I can't imagine having such uninspired thoughts, and actually writing them down, while in a role with such diverse and worthwhile opportunities. I'd like to ask "how the hell do these people find themselves in these positions", but I think the answer is literally what he wrote in his diary. What a boring answer. We need to filter these people out at every turn, but instead they're elevated to the highest peaks of power.
It's not surprising. I made this comment on HN before, but if you follow him on Twitter, it's pretty remarkable - the CTO of one of the most important technology companies in the world and he has never (that I've seen) posted something with some technical insight, or just anything interesting about technology. It's just boring truisms, cliches, empty statements, etc.
Eh. It doesn't start or stop with people like Altman, Zuckerberg, or Nadella. I think it's a symptom of a broader problem in tech. Half the people on this site made a decision to work at companies that do shady things, and they did that to maximize personal wealth.
The difference isn't that the average techie doesn't dream of making a billion by any means necessary; it's that most of us don't think we have a shot, so we stick to enabling lesser evils to retire with mere millions in the bank.
I don't think it's all that hard to avoid working on anything shady. It's not as easy to avoid being associated with anything shady due to widespread cynicism and a tendency to treat tech companies with thousands of projects as a monolith.
> The difference isn't that the average techie doesn't dream of making a billion by any means necessary
I hope that's not true. If it is, we live in a bleak world indeed.
I can confidently say I've never once dreamed of having billions. I've never wanted billions. Not even in a fanciful manner. What would I do with that money? Buy mansions and megayachts? That's loser stuff
Most of what I want out of life cannot be bought. The pieces that come with a price tag, like a comfortable home, do not require billions
I think only sociopaths want billions because they don't understand spending your life seeking things that actually matter, like family and human connection
> The difference isn't that the average techie doesn't dream of making a billion by any means necessary
That's actually the difference, most people don't want a billion
Yeah, sure…
It is disappointing, but is it shocking that the people most driven by gaining money/power are the ones most successful at achieving it?
What sticks out to me most is that humanity consistently fails to weed these creatures out and regulate society. It's a bug in our social software; we seem to like these broken people rather than recognize that they're a liability.
Most people don't care as long as it does not affect them directly.
Trust is not a bug
You need to accept that every generation some people are going to try and fuck things up.
Then you get to decide to stop or help them
This isn’t a bug. It’s the driving force of our capitalist society. We are not trying to weed them out. We are trying to encourage them. It’s pretty simple: when they get rich, so do all their investors.
Sociopaths don't have much going for them in life other than winning status games.
Sociopath is the next word that people seem to want to entirely destroy the meaning of
[flagged]
> Struck a nerve?
No need to be petty. They have a point. We did this with the words racist and fascist. Overinclusion diluted the term and gave cover for the actual baddies to come in. I'm not sure debating who is and isn't a sociopath is as useful as, say, the degree to which Sam is a liar (versus visible).
Speaking of overinclusion, 'wild' is my nominee for 2026 as I'm seeing it all over the place.
> 'wild' is my nominee for 2026
I don't know how to define the delineation I'm about to propose. But there is a difference between overinclusivity trashing a morally-loaded, potentially even technical, term, and slang evolving.
I'm sorry, we did what with the word "racist"?
> we did what with the word "racist"?
“Overinclusion diluted the term and gave cover for the actual baddies to come in.” The next sentence.
Yeah, no shit. I was hoping for more specificity.
While I agree that the word has been misused by some bad actors in the "Woke 1.0 era", it's worth pointing out that this isn't what most people complaining about the word being "diluted" are referring to as these are mostly people flat-out upset by any suggestion that they themselves might hold racist beliefs.
That said, anyone using "racist" as a noun isn't worth your time, nor is anyone who's genuinely upset about people calling concepts, systems or ideologies "racist".
Specifically, the "Woke 1.0 era" culture war arose from two conflicting meanings of the word "racist" largely aligning with two different segments of the population: 1) "racist" as a bad word you call people who are extremely bigoted against people along racial lines and 2) "racist" as a descriptor for systems and ideologies downstream from racialization (i.e. labelling people as racialized - e.g. Black - or non-racialized - i.e. "white") as a mechanism of asserting a power structure. "Wokists" would often conflate the two by applying the word as broadly as the latter definition necessitates while still attempting to use it with the emotional weight and personal judgement of the former definition.
I think a lot of this can be blamed on "pop anti-racism" just as a lot of the earlier "boys are icky" nonsense can be blamed on pop feminism, because fully adopting the latter definition requires a critique of systems, which is much more dangerous to anyone benefiting from those systems than merely naming and shaming individuals.
Anti-racism (and feminism) ultimately necessitates challenging hierarchical power structures in general and thus necessarily leads to anti-capitalism (which isn't to say all anti-capitalists are anti-racist and feminist - there are plenty of "anti-capitalist" movements that still suffer from racism and sexism, just as there are "anti-racists" who hold sexist views or "feminists" who hold racist views).
But you can't use that to sell DEI seminars to corporations, and corporations can't use that to promote themselves as "woke" - as some companies like Basecamp found out when their internal DEI groups suddenly started taking themselves seriously during the BLM protests, resulting in layoffs, "no politics" policies, and a general rightwards shift among corporate America leading up to and into the second Trump presidency (which reinforced this shift, resulting in the current state of most US corporations and their subsidiaries having significantly cut down on their previously omnipresent shallow "virtue signalling").
Racism and fascism have been used correctly; it's just that people do not like to have their beliefs associated with negative things, and thus, rather than perform self-reflection, they decide the problem exists elsewhere. I am sure you can come up with outliers that support what you are saying, but across the vast majority of applications, the use of both words is correct relative to their definitions.
I would be curious to hear you expand on that. Walk me through it, maybe in a small paragraph: what overinclusion happened with the word "fascist", what baddies are you vaguely referring to, and how do those dots connect?
While true - and we can see them literally everywhere there is some money and/or power (even minuscule places like traditional banks easily have a third of their staff with clear sociopathic traits, and I have to deal with them daily... or politics as a whole) - that's just human nature, or part of it.
It's up to the rest of society to keep them in check, since classic morals are highly optional and considered a nuisance blocking those games. And here the rest of us fail pretty miserably, despite having an on-paper perfect tool: the majority vote.
Or, some fraction of otherwise good/normal people who “win” are turned into sociopaths by the power and sycophancy.
Suchir Balaji deserves to have his death investigated further.
Without having read the article, and reacting only to the headline: no single person should be allowed to control our future. Democracy is a thing in large parts of the world, and we should try very hard to keep it functioning and even improve it.
People are voting with their wallets
That's not democracy.
It's also not not democracy. It has little to do with a form of government.
>People are voting with their wallets
A handful of people's wallets are much deeper than vast swaths of the population. None of this AI shit would be happening without their funding.
The only part of the world where _democracy_ is a thing is Switzerland. The rest of the Western world is utterly ruled by politicians and governments with ever more control over _their_ populations' private lives and money, plus some who shout "democracy", deluded that they have any control over anything through voting lol.
It’s hard to know what’s the new information here. Altman’s history has been reported on exhaustively.
A few people have left OpenAI over the years - over safety abandonment, the non-profit status change, deception, etc. - but there is too much money involved. Here lies the actual rub. A lot of the people involved and named in the article are reprehensible: the Kushners, the Saudis, the Emiratis, the PayPal mafia, VC folks with god complexes. But as long as they have the money, we have to dance to their tune.
We really, really need a way for our society to be more equitable and to hold these people responsible.
I still don't understand how the transformation from non-profit to for-profit was possible legally.
The non-profit still exists and has the for-profit as a subsidiary.
Anything is possible with a big enough bank account.
>the PayPal mafia, VC folks with god complexes
HR would like you to tell the difference between the two photos.
It’s less about trusting one person and more about the structure. AI is concentrating capital, compute, and talent into a few hands; we've seen this before with railroads, oil, and semiconductors. It brings innovation, and also pricing power and political influence.
Would you trust a guy who controls a magical orb that answers everyone's questions for free and satisfactorily enough that people basically pay money to talk more to it, to use it responsibly? I won't.
One thing that stands out when reading profiles like this is the number of positive and negative descriptions of the subject that agree. For example, there seems to be little dispute that Altman will happily say something that he knows/believes isn't true, there's just a lot of people who are willing to forgive any lies if the lies are in service of something they themselves agree with.
> there's just a lot of people who are willing to forgive any lies if the lies are in service of something they themselves agree with.
Or if the person lying is in a position of power?
I assume stuff gets cut for length in the editing process. What was the thing you wish had remained that was edited out?
He won't. If anything, OpenAI is falling behind recently, and the trend won't change easily. It's like Netscape back in the day.
We focus these critiques far too much on the face rather than the underlying mechanics. Just like in politics, we critique the personality or politician, yet the underlying system architecture evades critique.
Sam Altman clearly has a long history of nefarious activity. But the underlying threat posed by AI to society, the economy, and human freedom persists with or without his presence.
> underlying threat posed by AI to society, the economy, and human freedom persists
I would deny that AI poses any such threat. There are actors who would use the tool in ways that threaten as you described, but that is a threat from said actor, not AI - unless you're claiming that an AGI would be capable of such independent actions.
AI is similar in transformative power to the internet - it might even be greater, if it becomes more commonly available for use throughout the world. Whether that transformative power does good or bad really depends on the people wielding it, not on the tech. I would rather bet that the future is going to be better because of AI than imagine a worse future and act to stunt the tech.
> I would deny that AI poses any such threat. There are actors who would use the tool in ways that threaten as you described, but that is a threat from said actor, not AI
Of course, it is popular to deny it. People constantly tell themselves "it is people, not tech". They make a valid yet banal and inconsequential statement. This distinction has no bearing on reality.
So you're saying that if people hadn't invented weapons, there would be no violence?
The claim that AI is itself dangerous has no merit.
> So you're saying that if people hadn't invented weapons, there would be no violence?
If anything, if people hadn't invented weapons, they would not use weapons to enact violence, and this in turn will impact the practical nature of violence.
> The claim that AI is itself dangerous has no merit.
My claim is that considering any technology by itself is pointless. There is no such thing as thing by itself. Technology always exists in structural setting, and in turn shapes this structure.
Or perhaps, the underlying threat is personified by Altman, in that our country has repeated and widespread institutional failures to hold the wealthy accountable for wrongdoing.
The threat of AI is, after all, driven by the people who use it.
It's because we only really know one economic system but we've known many people
>But the underlying threat posed by AI to society, the economy, and human freedom persists with or without his presence.
Without Sam Altman, the compute and the improvements needed for LLMs to become a threat wouldn't have readily existed at all. He was the one who got the ball rolling, because of his desperation (SVB collapsed right before the hype bubble started), his ego, and his quasi-religious desires.
"If I don't destroy humanity someone far worse will do it" -Sam Altman
https://en.wikipedia.org/wiki/Roko%27s_basilisk
The number of "Altman doesn’t remember this" or "Altman denies this" lines is hilarious
It really stands out, and judging from the overall excellent quality of this article, that was very intentional. It answers the headline, too.
Life would be so much easier if I was that forgetful
Altman's character is almost irrelevant next to how frictionless it is for a handful of people to set defaults for millions.
Beyond the question of should we trust Sam Altman to control our future - why on Earth should we want any single individual to control our future at all?
Greg Brockman honestly sounds like a psychopath:
> In 2017, Amodei hired Page Hedley, a former public-interest lawyer, to be OpenAI’s policy and ethics adviser. In an early PowerPoint presentation to executives, Hedley outlined how OpenAI might avert a “catastrophic” arms race—perhaps by building a coalition of A.I. labs that would eventually coördinate with an international body akin to NATO, to insure that the technology was deployed safely. As Hedley recalled it, Brockman didn’t understand how this would help the company beat its competitors. “No matter what I said,” Hedley told us, “Greg kept going back to ‘So how do we raise more money? How do we win?’ ” According to several interviews and contemporaneous records, Brockman offered a counterproposal: OpenAI could enrich itself by playing world powers—including China and Russia—against one another, perhaps by starting a bidding war among them. According to Hedley, the thinking seemed to be, It worked for nuclear weapons, why not for A.I.?
I wonder if Sam might abandon ship soon. Other co-founders already did.
The main reason is that he gets all the downsides without the upsides. I know $5B is a lot, but for a $700B company it isn't. If OpenAI were a regular for-profit, he would have been worth >$100B already.
This is probably one of the significant factors why other co-founders left too. It's just a lot of headaches with relatively low reward.
But nobody is going to just gift him the same valuation on the next company. It's not like his execution is OpenAI's moat right now. So where would he be going that's a better deal for him?
Founding his own company would be one alternative. Full control, no stigma from the non-profit part. He'd probably get the same paper money as he has now at OpenAI.
What is the value he adds anyway, being a delusional cult leader whom most people around him characterize as a sociopath? Is it just his ability to lie and create fear-hype?
It's not like he had anything to do with the technical achievements, except convincing the engineers that they were doing something valuable, but the cat is out of the bag on that.
And OpenAI's influence is hugely exaggerated compared to, say, Google.
Yes, and it seems people hate him more than Google co-founders, for example.
All the downsides without much upside...
> Yes, and it seems people hate him more than Google co-founders, for example.
Sergey Brin is trying to change that lately, but Altman still has a sizable head start.
IMHO, nobody is remotely worth $1B, period.
The fact that some (usually toxic) individuals get there shows that the system is flawed.
The fact that those individuals feel like they can do anything other than shut up, stay low and silently enjoy the fact that they got waaaay too much money shows that the system is very flawed.
We shouldn't follow billionaires, we should redistribute their money.
If someone founds a company, grows it and owns $1bn of its stock, they don’t have $1bn in cash to distribute. They have a degree of control over the economic activity of that company. Should that control be taken away from them? Who should it be given to?
I can see an argument when it comes to cashing out, but I’m not clear how that should work without creating really weird incentives. Some sort of special tax?
> Some sort of special tax?
Well, yeah. Above some amount, you get taxed at 100%. Then, instead of having billionaires who compete against each other on how rich they are, or on being the first to go contaminate the surface of Mars, or simply on power, maybe we would end up with people trying to compete on something actually constructive :-). Who knows, maybe even philanthropy!
So, who owns and runs the companies? How do new companies get formed?
I'm not against higher taxation of the wealthy. I think inequality is a serious problem. The issue is what the wealth of these people isn't a big pile of cash they are wallowing in, it's ownership of the companies they build and operate. Is that what we want to take away? How, and what would we do with it?
I think it makes more sense to tax it as that power is converted into cash. I'm not clear how a wealth tax should work.
> I think it makes more sense to tax it as that power is converted into cash
Yeah, that makes sense to me. And those are all good questions of course :-).
> So, who owns and runs the companies?
I guess ownership stays the same; we just need to prevent the companies from growing too big. Because the bigger they are, the more powerful their leaders get, for one (aside from all the problems coming from monopolies). But by taxing them, we prevent the people owning those companies from owning 15 yachts and going to space for breakfast :D.
> How do new companies get formed?
I don't know if that's what you mean, but I often hear "if you prevent those visionaries from becoming crazy rich, nobody will build anything, ever". And I disagree. A ton of people like to build stuff knowing they won't get rich. Usually those people have better incentives (it's hard to have a worse incentive than "becoming rich and powerful", right?).
Some people say "we need to pay so much for this CEO, because otherwise he will go somewhere else and we won't have a competent CEO". I think this is completely flawed. You will always find someone competent to be the CEO of a company with a reasonable salary. Maybe that person will not work 23h a day, maybe they won't harass their workers, sure. But will it be worse in the end? The current situation is that such tech companies are "part of the problem, not of the solution" (the problem being, currently, that we are failing to just survive on Earth).
Big agree: at a certain point a company is big enough that its impact has to be managed democratically. I don't have an issue with effective leaders; the problem is that we reward a certain kind of success with transferable credits that don't necessarily align with people's actual talents or skills.
I want skilled institutional investors who have a track records of making smart bets. I don't want a random person who happened to get lucky in business dictating investment policy for substantial parts of the economy. I want accountability for abuses and mismanagement.
I know China gets a bad rep, but their bird cage market economy seems a lot more stable and predictable than this wild west pyramid scheme stuff we do in the US. Maybe there are advantages for some people in our model, but I really dislike the part where we consistently reward amoral grifters.
> Big agree, at a certain point a company is big enough that their impact has to be managed democratically.
100%. First, a company should not be that big. The whole point of antitrust was to avoid that. The US failed at that, for different reasons, and now end up with huge tech monopolies. And it's difficult to go back because they are so big now.
BTW I would recommend Cory Doctorow's book about those tech monopolies: "Enshittification: why everything suddenly got worse and what to do about it". He explains extremely well the antitrust policies and the problems that arise when you let your companies get too big. It's full of actual examples of tech we all know. He even has an audiobook, narrated by himself!
Well, redistributing their money is (in some cases disingenuously) exactly how they are able to pitch investors. "Sure, value my company at $10B and my shares make me $2B, but we're alllllll gonna make money when we hit AGI!!!" That kind of thing.
Sure, I understand why the people around them who benefit from it also want to do that.
My point is that it all only benefits a few people. Those people used to call themselves "kings", appointed by god. Now they are tech oligarchs. If the people realised that it was bad to have kings, eventually maybe they will realise that it is bad to have oligarchs?
Control + Altman + Delete
It seems unlikely OpenAI can survive long term with Sam at the helm. The challenge is that folks already realized that once, and yet here we are.
You come at the king, you best not miss. Unfortunately, having survived a coup, his odds of surviving the next have improved. Now he knows how they go, what to look for and how he might handle them. I wouldn't bet on him being kicked out, at least while OpenAI is still on top. If OpenAI stumbles and Anthropic or another starts to prevail, only then would I bet on Sam getting pushed out.
I don't trust anyone who claims that LLMs today are superhumanly intelligent. All they do is perform compute-intensive brute-force attacks on the problem/solution space and call it 'reasoning', all while subsidising the real costs to capture the market. So much SciFi BS and extrapolation about a technology that is useful if adopted with care.
This technology needs to become a commodity to destroy this aggregation of power between a few organizations with untrustworthy incentives and leadership.
I don't think anyone does claim they are superhumanly intelligent today in any general way? The question is how they will do in the future.
Your brain is performing "compute-intensive brute-force attacks on the problem/solution space" as you read this very sentence. You have been training on patterns of English syntax, structure, and semantics since you were a child, and that training supports you now with inference (or interpretation). And, for compute efficiency, you probably have evolution to thank.
People like to say this as if it's apples to apples, but this comparison isn't remotely how the brain actually works - and even if it were, the brain does it automatically, without direction, and at an infinitesimal percentage of the power required.
And we’re just talking about cognition. It completely ignores the automatic processes, such as maintaining and regulating the body and its hormones, coordinating and maintaining muscles, and visual/spatial processing that takes in massive amounts of data at a very fine scale and informs the body what to do with it - I could go on.
One of the more annoying things about this conversation is that you don't even need to make this argument to make the point you're trying to make, but people love doing it anyway. It needlessly reduces how amazing the human brain is to a bunch of catchy sci-fi-sounding idioms.
It can be simultaneously true that transformer based language models can be very smart and that the human brain is also very smart. It genuinely confuses me why people need to make it an either/or.
Thank you, this comparison has been a huge annoyance of mine for the past 3 years of... this same debate over and over.
I think it's the hubris that I find most offensive in this argument: a guy knows one complex thing (programming) and suddenly thinks he can make claims about neuroscience.
Great post
Human cognition is nothing like AI "cognition." It really bothers me that people think AI is doing the same thing the human mind does. AI is more like a parrot which is trained to give a correct-looking response to any question. The parrot doesn't think, doesn't know what it's doing, etc.; it just does it because it gets a treat every time a "good" answer is prompted. This is why it can't do things like tell whether the parentheses are balanced here: ((((()))))) (you can test this). It doesn't have any kind of genuine cognition.
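For what it's worth, the parenthesis check in question is a one-pass, deterministic computation, which is exactly the contrast the comment is drawing with statistical prediction. Here's a minimal Python sketch; the function name and the test strings are purely illustrative:

    def is_balanced(s: str) -> bool:
        """One-pass counter check: the kind of exact, mechanical
        computation the comment contrasts with pattern-matching."""
        depth = 0
        for ch in s:
            if ch == "(":
                depth += 1
            elif ch == ")":
                depth -= 1
                if depth < 0:          # a closer with no matching opener
                    return False
        return depth == 0              # balanced only if every opener was closed

    print(is_balanced("((((()))))"))   # True: five openers, five closers
    print(is_balanced("((((())))))"))  # False: the string from the comment above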
> Human cognition is nothing like AI "cognition."
I've wondered about this. Do we really know enough about what the human brain is doing to make a statement like this? I feel like if we did, we would be able to model it faithfully and OpenAI, etc. would not be doing what they're doing with LLMs.
What if human cognition turns out to be the biological equivalent of a really well-tuned prediction machine, and LLMs are just a more rudimentary and less-efficient version of this?
Yes, we do. Humans share the statistical association ability that LLMs possess, but also conscious meaning and understanding. This is a difference in kind and means that we can generalize beyond the statistical pattern associations that we've extracted from data, so we don't require trillions of examples to develop knowledge.
Theoretically a human could sit alone in a dark room, knowing nothing of mathematics, and come up with numbers, arithmetic, algebra, etc.
They don't need to read every math textbook, paper, and online discussion in existence.
Our DNA does contain our pre-training, though. It's not true that we're an entirely blank slate.
Pre-training is not a good term if you are trying to compare it to LLM pre-training. Closer would be the model's architecture and learning algorithms, which have been designed through decades of PhD research; my point on that is that the differences are still much greater than the similarities.
The point I'm trying to make is that I don't think we know, so we can't say either way.
In your example, would the human have ever had contact with other humans, or would it be placed in the room as a baby with no further input?
They grew up in a tribe that hasn't discovered numbers yet.
Those who argue that AI is like human cognition don't know much about AI or human cognition.
Those who argue that AI is like a parrot don't know much about anything at all.
I love reading posts like this. When you were a child, learning math or grammar, do you not remember bouncing off the walls of incorrect answers, eventually landing on a trajectory down the corridor of the right answer? Or were you always instantly zero-shotting everything?
In my experience, this is exactly how language models solve hard new problems, and largely how I solve them too. Propose a new idea, see if it works, iterate if not, keep going until it works.
Of course you can see how to solve a problem that you've seen before, like a visual puzzle about balanced parentheses. We're hyper specialized to visually identify asymmetries. LMs don't have eyes. Your mockery proves nothing.
The mistake in these types of arguments is that natural, classical-artificial, and neural-net-artificial learning methods all employ some kind of counterexample/counterfactual reasoning, but their underlying mechanisms could well be fundamentally different. Thus these arguments are invalid until computer science advances enough to explain what the differences and similarities actually are.
This is such a boring cliche by now. "Thinking" and "knowing what it's doing" are notions we barely understand even for the human mind, yet in every comment section about AI people definitively state that LLMs don't do them, whatever they are.
This is the epitome of learned helplessness, that you need a neuroscience paper to tell you what thinking and knowledge is when you experience it directly all the time, and can't tell that an LLM doesn't have it. Something is extremely evil about these ideologies that are teaching people that they are NPCs.
I know I'm thinking, I have no idea if you're thinking, or if you're a human or an LLM.
They aren't so vague that you would argue the parrot is thinking.
Why not?
> Human cognition is nothing like AI "cognition." It really bothers me that people think AI is doing the same thing the human mind does.
This might sound callous, but I wonder if people saying this themselves have very limited brains, more akin to stochastic parrots than the average homo sapiens.
We are very different, and there are some high-profile people that don't even have an internal monologue or self-introspection abilities (one of the other symptoms is having an egg-shaped head)
> This might sound callous, but I wonder if people saying this themselves have very limited brains, more akin to stochastic parrots than the average homo sapiens.
I have a different theory.
Aside from a few exceptions like Blake Lemoine, few people seem to really act as if they believe A.I. is doing the same thing the human mind is doing.
My theory is people are for some reason role-playing as people who believe human thought is equivalent to A.I. for undisclosed reasons they themselves may or may not understand. They do not actually believe their own arguments.
> AI is more like a parrot which is trained to give a correct-looking response to any question.
A parrot that writes better code and English prose than I do?
I would like to buy your parrot.
If you think this way then why not talk to LLMs exclusively. Don’t let the oxytocin cloud your ability to problem solve.
I get you're trying to do the whole "humans and LLMs are the same" bit, but it's just plainly false. Please stop.
> All they do is perform compute-intensive brute-force attacks on the problem/solution space and call it 'reasoning'
If they discover the cure to cancer, I don't care how they did it. "I don't trust anyone who claims they're superhumanly intelligent" doesn't follow from "all they do is <how they work>".
Has generative AI made material progress on curing cancer? Has it produced any breakthroughs, at all?
In b4
- it's the worst it'll ever be
- big leaps happened the past few months bro
Etc.
Personally I think LLMs can be very powerful in a narrow band. But the more substance a thing involves, the more a human needs to be involved.
> "I don't trust anyone who claims they're intelligent" doesn't follow from "all they do is <how they work>".
It kind of does if how they work is nothing like genuine intelligence. You can (rightly) think AI is incredible and amazing and going to bring us amazing new medical technologies, without wrongly thinking its super amazing pattern recognition is the same thing as genuine intelligence. It should be worrying if people begin to believe the stochastic parrot is actually wise.
I can slow down the compute by a factor of a thousand. It would not change the result. But it changes the economics. We only call it intelligent because we can do the training (backpropagation) and inference fast enough, and with enough memory, for it to appear this way.
If LLMs can come up with superhumanly intelligent solutions, then they're superhumanly intelligent, period. Whether they do this by magic or by stochastic whatever doesn't make any difference at all.
Like..a calculator?
Take a calculator to the International Math Olympiad and let's see how you do.
That's moonshot logic that reinforces the parent's point. You'd absolutely care if the AI's cure to cancer entailed full-body transplants or dismemberment.
> You'd absolutely care if the AI's cure to cancer entailed full-body transplants or dismemberment
That's not a cure. Like yes, I'd care if the AI says it cures cancer while nuking Chicago. But that isn't what OP said.
"The cure for cancer" as a phrase doesn't include those solutions. If the headline was "Pope discovers the cure for cancer" and those were his solutions you would say "No he didn't." OP was referring to AI discovering the cure for cancer that cancer research is working towards.
If all they do is "just" brute-force problem solving, then they are already bound to take over R&D & other knowledge work and exponentially accelerate progress, i.e. the SciFi "singularity" BS ends up happening all the same. Whether we classify them as true reasoning is just semantics.
calculator is superhumanly intelligent
Yeah and everything is just atoms. If you reduce anything enough it’s not real.
I bet Satya Nadella is regretting defending Altman now.
Who would you trust more: Sam Altman, or a council of 1000 representative AI models?
YC invests in people, not ideas. They have vetted him. They are always right about people. It's probably nothing.
Interesting!
This Sam Altman video is addictive. I could watch it over and over.
Ask Condé Nast if he can be trusted..
https://www.reddit.com/r/AskReddit/s/VWJVBNzc2u
Uncrappified link: <https://old.reddit.com/r/AskReddit/comments/3cs78i/whats_the...>
Two "insure" typos?
The New Yorker prefers insure to ensure. They have a unique house style. I commented on another thread about alternative spellings like vender instead of vendor, too.
> The New Yorker prefers insure to ensure. They have a unique house style.
That's not a stylistic choice, it's just incorrect use of English.
Well that’s just, like, your opinion, man. https://www.merriam-webster.com/dictionary/insure
That M-W entry literally says they're different words with different meanings:
> They are in fact different words, but with sufficient overlap in meaning and form as to create uncertainty as to which should be used when.
> We define ensure as “to make sure, certain, or safe” and one sense of insure, “to make certain especially by taking necessary measures and precautions,” is quite similar. But insure has the additional meaning “to provide or obtain insurance on or for,” which is not shared by ensure.
Definition 2: "to make certain especially by taking necessary measures and precautions"
From the article:
> He sent the final memos to the other board members as disappearing messages, to insure that no one else would ever see them.
> Others were uncomfortable sharing concerns about Altman because they felt there was not a sufficient effort to insure anonymity.
> [...] to insure that the technology was deployed safely
All of these work just fine with that definition of "insure." Your comment that it's "incorrect use of English" is wrong.
The bit you quoted says there’s substantial overlap between the two. The New Yorker style is to prefer “insure” in cases where either could work.
I'm unconvinced but I'll ensure I do my homework before grammar-policing again :)
To be fair, I use “ensure” myself, but it’s just one of several quirky elements of the New Yorker’s style, along with the diaeresis on repeated vowels with different sounds (like in reëmerge or coöperate), several uncommon spellings, and unusual conjoinings like “teen-ager” and “per cent.” It’s part of the charm, I suppose
In American English, "insure" can also mean "to make sure," as in "ensure," in addition to meaning "to take out insurance for."
TIL!
Likely dictation, and not caught in editing.
> Altman does not recall the exchange.
Altman SAYS he does not recall the exchange. Not the same thing.
My tendency is to believe that the individuals do not matter as much when it comes to the biggest risks. I'm not sure if this is a bias or a theory... but I lean toward some sort of "medium is the message" determinism.
>"He acknowledged that the alignment problem remained unsolved, but he redefined it—rather than being a deadly threat, it was an inconvenience, like the algorithms that tempt us to waste time scrolling on Instagram."
Before "don't be evil" was a cliche, I think it was a real guiding principle at Google and they built a world class business that way.
Facebook's rival ad platform didn't have search queries to target ads at. Aggressive utilization of user data was the only way they could build an Adwords-scale business. As they pushed this norm, Google followed.
Doomscroll addiction gets a lot of attention because engineers and journalists have children and parents. There are other risks though. Political stability, for example.
By the early 2010s, smartphones were reaching places that had almost no modern media previously, often powered by FB-exclusive data plans. The Arab Spring happened, then ISIS. FB-centric propaganda seemingly played a major role in a major conflict/atrocity in Burma. Coups in Africa powered by social-media-based propaganda. Worrying political implications in the West. Unhinged uncle syndrome. Etc. Social media risks/implications were more than just "inconvenience."
At no point did we really see tech companies go into mitigation mode. Even CYA was relatively limited. There was no moment of truth. It was business as usual.
So... I think OpenAI's initial charter was naive. Science fiction almost. It was never going to withstand commercial reality, politics, competition and suchlike. I think these are greater than the individuals involved.
That doesn't mean we should ignore, excuse or otherwise tolerate lack of integrity. But, I don't think it is a way of reducing risk.
Whether the risk is skynet, economic turmoil, politics, psych epidemics or whatever... I don't think the personal integrity of executives is a major factor.
> while Y.C. took a six- or seven-per-cent cut
Shamefully, I have to admit that my monkey brain smirked at an accidental 67-meme in a serious article.
Sam failed upwards.
if you have to ask if someone can be trusted, they usually can't
It's the golden rule of news article headlines: if the headline is a question, the answer to it is always a negative.
https://en.wikipedia.org/wiki/Betteridge's_law_of_headlines
And if you think you don't have to ask if someone can be trusted, you're usually wrong.
> Many technology companies issue vague proclamations about improving the world, then go about maximizing revenue. But the founding premise of OpenAI was that it would have to be different.
Isn't this really what everything is about? A pure research non-profit transitioned to a revenue generating enterprise because it had to, and a lot of people don't like that. Does that make it evil?
It's romantic to think that the magic of science and research can stand on its own, but even Ilya has admitted more recently that SSI needs to ship something consumer facing.
Anthropic, the lab that put all of its social capital in the safetyism basket, is having the exact same realization, with Claude Code being a mess of technically reckless vibe coded slop that nevertheless is the cash cow for the company.
Maybe it's time for everyone to realize that for an innovation this big to come to fruition, it either needs to be state funded or privately funded, the latter requiring revenue and a plausible vision of generating ROI.
No
The main animated picture reminded me of evil king Ravan from Ramayan with 10 heads. Not sure it is intentionally done that way.
This is the problem with propaganda: you have been told that he was evil, as most Indians are led to believe, but for people in Sri Lanka he was a great leader.
he doesn't control his own future... chatgpt implodes in 18 months max depending upon how the strait of hormuz play goes...
Seems this got buried from the front page very quickly
It set off the flamewar detector. I've turned that off now.
I only saw this thread by chance and almost didn't look, because the title made the piece sound like a flamebait blog post. Fortunately I saw newyorker.com beside the title and looked more closely.
There is dwindling space for sincere independent accountability reporting on big tech like this to a) be created, since it's incredibly resource-intensive and so many resources flow from Silicon Valley, and b) actually reach people, since more platforms are now owned or otherwise influenced by interested parties.
Thank you for looking. Please do spread this kind of reporting in your communities, and subscribe to investigative outlets when you can.
> OpenAI has closed many of its safety-focussed teams
A paper with "ideas to keep people first" was (coincidentally?) published today:
https://openai.com/index/industrial-policy-for-the-intellige...

This was an excellent piece with many new pieces of information in it. Thanks to you and your coauthor for getting it released.
You can see the vote history here[1]. It's always hard to know exactly why something gets buried. I was a little sad to see the story down-ranked when I saw that you were here in the comments.
But the discussion is generally pretty low quality with these sort of posts. People react without having read the story, or with whatever was on their mind already, or are insubstantive, or simply low effort. I don't think you'll lose k-factor not having a bigger post here.
Sometimes if you talk to the mods, they'll let you know their perspective. I generally find they're correct that people are much better at contributing/disseminating new knowledge to the world on more technical topics here.
[1]: https://news.social-protocols.org/stats?id=47659135
Yes, I was surprised that it was downranked when I saw that too. Then I realized it had set off the flamewar detector and it was a simple matter to turn it off. I'm glad we got to this in time, because sometimes we don't, and this was an important case not to miss.
But isn't that circular? If the ranking algorithm used by the mods tends to devalue articles like this because they don't trust the user base to comment intelligently, doesn't that alter the culture of this site to make that more true?
I'm not sure what big_toast meant, but we do trust the user base to comment intelligently (which sometimes works and sometimes not), and we don't devalue articles like this.
We do tend to devalue titles like this, or more likely change them to something more substantive (preferably using a representative phrase from the article body), but I'm worried that if I did that here we would get howls of protest, since YC is part of the story.
I'm sure you're sick of comments about moderation, but I will say, this makes me more sympathetic to the position you're in.
It's an interesting dilemma. Many very respected publications use provocative titles because of the attention economy. And I'm sure you have good data that provocative titles lead to drive-by comments and flame wars.
But I don't think big_toast was entirely wrong that there is a side effect of sometimes burying articles that are by their nature provocative. And how do you distinguish a flame war over a title from a flame war over content? That's not a leading question. I don't know.
For us the litmus test isn't the title, it's whether the article itself can support a substantive discussion on HN. If yes, then we'll rewrite the provocative title to something else, as I mentioned. Ironically this often gives the author more of a voice because (1) the headline was often written by somebody else, and (2) we're pretty diligent about searching in the article itself for a representative phrase that can serve as a good title.
If, on the other hand, the title is provocative and the article does not seem like it can support a substantive discussion on HN, we downweight the submission. There are other reasons why we might do that too—for example, if HN had a recent thread about the same topic.
How do we tell whether an article can support a substantive discussion on HN? We guess. Moderation is guesswork. We have a lot of experience so our guesses are pretty good, but we still get it wrong sometimes.
In the current case, the title is baity while the article clearly passes the 'substantive' test, so the standard thing would have been to edit the title. I didn't do that because, when the story intersects with YC or a YC-funded startup, we make a point of moderating less than we normally do.
I know I'm repeating myself but it's pretty random which readers see which comments, and redundancy defends against message loss!
Girls and boys, this is a prime example of a rhetorical question.
Nah, it will be Dario instead of Sam, I'd say? :-))
For those curious about how sama got to where he got and stayed on top for so long, I recommend you read the book: The Sociopath Next Door by Martha Stout.
I am fairly confident when I say this -- sama is a sociopath. I don't know how anyone with solid intuition could even come to any other conclusion than the guy is deeply weird and off-putting.
Some concepts from the book:
> Core trait: The defining characteristic is the absence of conscience, meaning they feel no guilt, shame, or remorse.
> Identification: Sociopaths can be charming and appear normal, but they often lie, cheat, and manipulate to get what they want.
> The Rule of Threes: One lie is a mistake, two is a concern, but three lies or broken promises is a pattern of a liar.
> Trust your instincts over a person's social role (e.g., doctor, leader, parent)
Check and check.
OpenAI is too important to trust sama with. He needs to go. In fact, AI should be considered a public good, not a commodity pay-as-you-go intelligence service.
I suspect there's some other category, which isn't really a sociopath and isn't really a not-sociopath, which we don't have a good definition for.
We only say a lot of CEOs are sociopaths because they're in that third category we haven't named, where they're very good at manipulating people, but also can feel conscience, guilt, remorse, etc, perhaps just muted or easier to justify against.
E.g. if you think you're doing something for the betterment of mankind, it doesn't really matter if you lie to some board members some year during the multi-decade pursuit.
That's not a third category, that's just a sociopath as seen by themself.
I doubt most sociopaths, when they’re honest, would agree they feel much guilt or remorse at all.
Whereas the people in the category I’m describing might feel those things, but prioritize those feelings far below the benefits of achieving what they set out to achieve.
> I doubt most sociopaths, when they’re honest, would agree they feel much guilt or remorse at all.
Yes that is the core trait I highlighted in the 1st bullet.
> I suspect there's some other category, which isn't really a sociopath and isn't really a not-sociopath, which we don't have a good definition for.
There is -- I call it "corpo sociopath." The corpo sociopath really comes out in the workplace, less so in personal life.
I was with you right up until the final paragraph, but this made me do a double take:
> OpenAI is too important to trust sama with.
...wat? They made a chat bot. How can that possibly be so existentially important? The concept of "importance" (and its cousin "danger") has no place in the realistic assessment of what OpenAI has accomplished. They haven't built anything dangerous, there is no "AI safety" problem, and nothing they've done so far is truly "important". They have built a chat bot which can do some neat tricks. Remains to be seen whether they'll improve it enough to stay solvent.
The whole "super serious what-ifs" game is just marketing.
Yeah the whole fearmongering is clearly just marketing at this point. Your LLM isn't going to suddenly gain sentience and destroy humanity if it has 10x more parameters or trains on 10x more reddit threads.
I'm not even sure we're any closer to AGI than we were before LLMs. It's getting more funding and research, but none of the research seems very innovative. And now it's probably much more difficult to get funding for anything that's not a transformer model.
> I'm not even sure we're any closer to AGI than we were before LLMs.
I mean this is very obviously untrue. It'd be like saying we aren't any closer to space flight after watching a demonstration of the Wright Flyer. Before 2022-2023 AI could barely write coherent paragraphs; now it can one-shot an entire letter or program or blog post (even if it's full of LLM tropes).
Just because something is overhyped doesn't mean you have to be dismissive of it.
Point is that LLMs could be a local minimum we are now economically stuck in until the hype wears off.
Or we could be stuck here for decades pending a breakthrough nobody alive today can even conceive of, or we could be compute limited by a half dozen orders of magnitude. Or it could happen next week. That's the nature of breakthroughs--you just can't have any idea when or how (or if) they'll happen.
In hindsight there's an obvious evolutionary pathway from the Wright Flyer to Gemini/Apollo/Soyuz, but at the time, in 1903, there absolutely was not, and anyone telling you so would be a crank of the highest degree. So it may turn out that LLMs have some place on the evolutionary path to AGI, or it could turn out they're a dead end like Cayley's ornithopters. Show me AGI first, then we can discuss whether LLMs had something to do with it.
In order to get to space, you must first be capable of flight through the atmosphere. That should have been apparent to anyone even then, because the atmosphere sits between the ground and space.
Regardless of whether spaceflight is still 1000 or 100 or 50 years away, you are still closer than you were before you demonstrated the ability to fly.
It's fairly obvious sociopathy is a prerequisite for top CEO jobs. Some just hide it better than others or have better PR people
No
Of course not. No one can be trusted to control our future.
Excellent article, truly well-researched. As someone close to a pathological liar [1], the idea that such a person could be at the forefront of the creation of an artificial superintelligence confirms for me all the existential risks of such a piece of technology, and how naïve, if not ignorant, the average starry-eyed tech worker and investor is about this whole endeavour. It's easy to believe there is a lot of idealism and a wish for a better world, but the greedy drive for money and power underneath is excellently summarized in Greg Brockman's own thoughts: “So what do I really want? [...] Financially what will take me to $1B.”
Literally, the only hope for humanity is that large language models prove to be a dead-end in ASI research.
---
1: “He’s unconstrained by truth,” the board member told us. “He has two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone.” — I guess now I know of two people with these traits.
ugh, i don't understand why only altman scares you? what about google, china, and other players?
for me, the answer >>> we need to create our own systems. decentralized agent networks, etc.
if you don't want to depend on one person or one company controlling your AI, build your own infrastructure.
the concentration of power in one/two persons is the problem.
Sounds like a snake pit. None of them can be trusted. If we have to rely on companies to self appoint a benevolent ‘AI dictator’ we’re fucked.
The only high profile person in AI I’d consider perhaps worthy of trust is Demis Hassabis.
If you are asking if a single human can be trusted with such a responsibility, the answer is, by default, no.
Does the article ever actually answer the title question?
The answer is no, he can't be trusted
Oh, I agree that's the correct answer. I just don't see the article actually ending up with that answer. I see it waffling. Basically, the article ends up saying that, well, we told you about all this dodgy stuff, but what he's doing is working.
God forbid an article presents all the evidence from all parties and asks you to reach a conclusion by yourself...
Sorry for the snark. But I genuinely think the way they did this was perfect.
> I genuinely think the way they did this was perfect.
Evidently we disagree. I responded about that to another commenter downthread.
Trusted to increase shareholder value is also questionable
I think you are misunderstanding the point of journalism. It can be debated whether the title should be such a question. Nevertheless, the article should just present information, ideally in a balanced way, without the author's bias, so that you can decide for yourself. You can see the attempts at balance in the article, where an allegation or statement is made about Altman, followed by parentheses saying that Altman recalls the exchange differently or does not remember.
> the article should just present information, ideally in a balanced way, without author's bias, so that you can decide for yourself.
I get that this is the claimed ideal of journalism, at least for straight reporting. The problem is that it's impossible.
There isn't time or space to present all the information; the journalist has to filter. And filtering is never unbiased. Even the attempt to be "balanced" is a bias--see next item.
"Balanced" always seems to mean "give equal time and space to each side". But what if the two sides really are unbalanced? What if there's a huge pile of information pointing one way, and a few items that might point the other way if you believe them--and then the journalist insists on only showing you a few items from the first pile, so that the presentation is "balanced"? You never actually get a real picture of the facts.
There's a story that I first encountered in one of Douglas Hofstadter's books, about two kids fighting over a piece of cake: Kid A wants all of it for himself, Kid B wants to split it equally. An adult comes along and says, "Why don't you compromise? Kid A gets three-quarters and Kid B gets one-quarter." To me, the author of this article comes off like that adult.
In any case, all that assumes that this article is supposed to be just straight reporting, no opinion. For which, see the next item.
> It can be debated whether the title should be such a question.
Yes, it certainly can. If this article is just supposed to be straight reporting--no editorializing--then that title is definitely out of place. That title is an editorial--and the article either needs to own that and state the conclusion it's trying to argue for, or it shouldn't have had that title in the first place.
> "Balanced" always seems to mean "give equal time and space to each side". I agree with you that this seems to be the idea people have when "balanced" is mentioned. I don't think this is correct. You can easily have a balanced article which has lots of evidence pointing one way or the other. I think that this article is like that. Boatload of pointers towards Altman being a sly person with reporters asking him about those exchanges and him basically shrugging each time.
The journalists' credibility is doing quite a bit of lifting here, as we have to trust that they put in the effort. One such example is the molestation accusations, which the reporters say they looked into heavily and for which they were not able to find any corroborating evidence.
> You never actually get a real picture of the facts.

Yes, it is a fundamental impossibility in lots of cases. That's why we trust the reporters that they did as good a job as they could to present all pertinent information.
> That title is an editorial ...

I do not perceive it to be editorialised. It states an arguably real possibility: that Altman may (or does) have lots of real power. I am guessing that you believe "can he be trusted" is an editorialisation that points towards him being untrustworthy. If that is the case, I think that would be your own bias, knowing that he is probably not trustworthy. I see it just as an objective question.
Imagine a different situation: there are local elections in your small town. There is a new mayoral candidate, and during the next term there will be some money to be given to residents for renovations and such, but not enough for everyone. You don't know this candidate. A local reporter, whom you trust, writes an article: "New mayor candidate favoured in polls - will he be fair with the renovation money?" It is a piece trying to shed light on who this candidate is as a person, what his life was before moving to your town, etc., so that voters like you can decide whether to give him your vote. It is not editorialised, as it does not point either way.
I don't trust him. He already made statements that convinced me I don't want to touch anything he controls. In a way it is similar to Meta and co. For some reason the US corporations behave very suspiciously once past a certain threshold size. With Win11 from Microsoft I always wonder whether there is a not so hidden subagenda in place.
Fuck no! Of course he can't be trusted. We know that. Nobody questions that. We know that about most of the "elites" running the show.
We're just in this shitty pit of despair where people are desperate. It's difficult to campaign for good when you're struggling and capital can jerk people around.
People pursue good for the sake of good at cost to themselves when times are very good or times are very, very bad.
Right now times are only merely very bad.
The last quote, to a layperson, may sound completely sinister, but therein lies a deep and open computer science question: AIs really do seem to get their special capabilities from having a degree of freedom to output wrong and false answers. This observation goes all the way back to some of Alan Turing's musings on how an AI might one day be possible. And then there were early theorems related to this, e.g., PAC learning. I'd love to know what's happened since on this aspect, such as the role of noise and randomness, and whether hallucinations are maybe even a feature-not-bug in some fundamental sense.
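For the curious: one concrete, mundane place where this "freedom to be wrong" is deliberately engineered into today's systems is temperature sampling at decode time. Below is a minimal Python sketch under toy assumptions (the function name, the logit values, and the seed are all illustrative, not any particular lab's API):

    import numpy as np

    def sample_token(logits, temperature, rng):
        """Temperature-scaled sampling: T=0 collapses to the single most
        probable token; higher T deliberately admits lower-probability
        (possibly wrong) tokens in exchange for diversity."""
        logits = np.asarray(logits, dtype=float)
        if temperature == 0:
            return int(np.argmax(logits))      # greedy decoding
        z = logits / temperature
        z = z - z.max()                        # for numerical stability
        probs = np.exp(z) / np.exp(z).sum()
        return int(rng.choice(len(logits), p=probs))

    rng = np.random.default_rng(0)
    logits = [3.0, 1.0, 0.2]                   # toy scores for three tokens
    print([sample_token(logits, 0.0, rng) for _ in range(8)])  # always token 0
    print([sample_token(logits, 1.5, rng) for _ in range(8)])  # mostly 0s, some 1s and 2s

At T=0 the model always emits its single most probable token; raising T deliberately admits lower-probability tokens, which is arguably where much of the apparent creativity, and some of the hallucination, comes from.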
Watch Altman's reaction in Tucker Carlson interview to the question about (alleged) murder of OpenAI researcher Suchir Balaji.
The overall response, and particularly the body language, says a lot.
This is unfair to the original article, which is well-researched and worth a read. But the answer to this question is _always_ no. Nobody should have as much power as the oligarch class currently does, especially power this inscrutable.
I don't even need to read the article to know that he unequivocally can't be trusted. Every action he's taken to this point has shown he will say literally anything to get what he wants.
Simple: NOOOOOOO!
How is this even a question?
Am I the only one who feels like Claude is clearly winning at code generation, and Gemini at general LLM use?
I just don’t feel like OpenAI has a legitimate shot at winning any of the AI battles.
Therefore, I feel like “Sam Altman may control our future” is a far stretch.
Well I just canceled my Claude Pro subscription because of the mysterious limits that I don't experience with codex, even after paying for "extra usage". If Anthropic can't figure out their capacity problems they are in trouble.
I doubt Anthropic see this as their capacity problem. They like "extra usage"; users who don't, well, that's their problem.
how is gemini winning in general llm? what is general llm?
General LLM is what Apple is paying Google for.
I noticed that Apple speech to text has gotten pretty good lately. Is that because they’re paying Google? Not sure I use other AI features from Apple as I have my Siri turned off.
> Is that because they’re paying Google?
No, the Google deal hasn't shipped yet.
>>and Gemini in general LLM?
You might be. Or at least I feel like Gemini is actually dumber than a house of bricks - I have multiple examples, just from last week, where following its advice would have led to damage to equipment and could have hurt someone. That's just trying to work on an electronics project and asking Gemini for advice based on pictures and schematics - it just confidently states stuff that is 100000% bullshit, and I'm so glad that I have at least a basic understanding of how this stuff works, or I would have easily hurt myself.
It's somewhat decent at putting together meal plans for me every week, but it just doesn't follow instructions and keeps repeating itself. It hardly feels worth any money right now; it's like some kind of giant joke that all these companies are playing on us, spending billions on these talking boxes that don't seem that intelligent.
I also use claude at work, and for C++ programming it behaves like someone who read a C++ book once and knows all the keywords, but has never actually written anything in C++ - the code it produces is barely usable, and only in very very small portions.
Edit: I just remembered another one that made me incredibly angry. I've been reading Neuromancer on and off, and I got back into it, but to remind myself of the plot I asked Gemini to summarise the plot only up to chapter 14, and I specifically included the instruction that it should double-check it's not spoiling anything from the rest of the book. Lo and behold, it just printed out the summary of the ending and how the characters' actions up to chapter 14 relate to it. And that was in the "Pro" setting too. Absolute travesty. If a real-life person did that I'd stop being friends with them, but somehow I'm paying money for this. Maybe I'm the clown here.
I'm curious: did you give Gemini the entire text of Neuromancer or did you expect it to use search results for chapters 1 to 14?
I would have just fed it the text of chapters 1 to 14 from a non drm copy.
I just asked like I said, give me plot summary until chapter 14, don't spoil the rest of the book. And of course when I told it what it just did it was like oh I'm sorry, here's a summary without the spoilers for the ending. So clearly it could do it without additional context.
I wouldn't expect any LLM to be able to respect such a request. Do they even have direct access to published works to use as reference material?
Also, last time I played 20 questions with ChatGPT, it needed 97 turns and tons of my active hinting to get the answer.
>>Do they even have direct access to published works to use as reference material?
I mean, clearly, given that it did answer my question eventually. Also, wasn't it a whole thing that these models got trained on entire book libraries (without necessarily paying for that)?
>>I wouldn't expect any LLM to be able to respect such a request
Why though? They seem to know everything about everything, why not this specifically. You can ask it to tell you the plot of pretty much any book/film/game made in the last 100 years and it will tell you. Maybe asking about specific chapters was too much, but Neuromancer exists in free copies all over the internet and it's been discussed to death, if it was a book that came out last year then ok, fair enough, but LLMs had 40 years of discussions about Neuromancer to train on.
But besides, regardless of everything else - if I say "don't spoil the rest of the book" and your response includes "in the last chapter character X dies" then you just failed at basic comprehension? Whether an LLM has any knowledge of the book or not, whether that is even true or not, that should be an unacceptable outcome.
I wouldn't expect an AI to know exactly what happens in every chapter of a book.
Knowing the plot of Neuromancer isn't the same as being able to recite a chapter by chapter summary.
I tried this Neuromancer query a few times, and results vary greatly with each regeneration, but "do not include spoilers" seems to make Gemini give more spoilers, not less.
As for the titular question, Betteridge's law of headlines applies. The answer is: No, we can't trust Sam Altman.
No.
The very idea of “trusting” monopoly capitalism.
I don’t know, but any time I see an interview of Altman and I look at those eyes, I get creeped out.
Can Sam "The board can fire me, I think that's important." Altman be trusted?
If for no other reason, given what happened when the board fired him... no. I'd say not.
No. Next question.
People call him Scam Altman for a reason.
It is disconcerting how Altman has used "AI safety" as a marketing tool. The more people imagine the universe turned into paperclips, the more they invest. Obviously Altman doesn't care about safety (I don't either; I'm not an AI-doomer). But he truly does come across as someone incapable of telling the truth. Are you even a liar if honesty is not in the set of possible outcomes?
Still, there's something oddly reassuring here: if you believe "AI safety" is essentially a buzzword (as I do), then this whole affair comes down to people squabbling over money and power. There really is nothing new under the sun.
"This thing might destroy humanity - we need to build it ASAP" does not really make sense. But it enthrall[s/ed] many smart researchers who would normally demand specific, testable claims and logical responses to those claims.
We have drastically escalated what claims are necessary to motivate startup employees. It used to be that you could merely dangle an interesting problem in front of a researcher. Then you could earn millions, then billions. TAMs in the trillions. AGI will destroy humanity unless you, personally step in. Elon is talking about Kardashev III civilizations. The universe cannot bear the hype being loaded upon it.
I agree with you completely, but the way I see it, Anthropic are 100x worse when it comes to amplifying this doomer BS for marketing. It's their whole shtick.
I haven't read it yet. The answer is no.
i think im shadowbanned :(
Fixed now.
I would really appreciate it if someone in the know could explain to me how a Markov chain with some backpropagation can surpass human cognition. Because right now I call BS.
I hope somebody just publishes The Ilya Memos. Sounds like a fun read
Hey, Ronan. Did the IPO come up at all in the research or interviews for this article? A yes or no will suffice, and color it if you want. ~_^
This whole situation goes to show that yesterday's conspiracy theorists are today's realists. What's happening to the USA's leadership as a country, and what's happening with its top companies, is really scary for the rest of us. If this trend continues, we're all definitely gonna end up in a kleptocracy.
End up? It already is.
I believe Annie Altman.
Not enough people know about her and her allegations towards him. It's sad to see that so many of the rich and powerful literally just can't stop raping people. Epstein, Trump, Elon, scam Altman. How many more people have to be implicated?
Annie Altman is more credible than a serial scammer
"Good luck, have fun, don't die."
tautology
A contradiction in terms is not called a tautology.
no
I think the answer is a resounding "no".
Anyone who deliberately seeks power should not be given it.
Excellent work. I'll have to wait until we get the print version delivered to finish, as I'm not signed into the New Yorker on my phone.
I’ve always been a huge fan of Ronan Farrow’s journalism and willingness to speak truth to power. I think he’s pulling at exactly the right thread here, and it’s very important to counteract Altman’s reputation laundering given that we run a very real risk of him weaseling his way into the taxpayer’s wallet under the current administration.
This is above your comment: https://archive.is/2026.04.06-100412/https://www.newyorker.c...
I suspect that they are perfectly capable of clicking an archive link or better yet logging in as they are already a subscriber. Maybe, like me, they enjoy reading the physical magazine.
I place my trust in Betteridge's Law of Headlines.
Disclaimer: I have no association with any AI company and have never met Altman or any of the other top AI scientists.
The real question is: can anyone be trusted if the fever dreams of super-intelligence come true? Go ahead and replace Sam Altman with someone else - will it make a difference? Any other CEO is going to be under the same overwhelming pressure to make a profit somehow. I think the OpenAI story is messier because it was founded for supposedly altruistic reasons, and then changed.
Methinks many of Altman's detractors protesteth too much. He's doing his job as it is defined (make OpenAI profitable.) Nothing of substance in this article seemed to make him exceptionally "sociopathic" compared to any other tech CEO. It goes with the territory.
What depressed me most is that trillions of dollars are being raised for building what will undoubtedly be used as a weapon. My guess is the ROI on that money is going to be extremely bad for the most part (AI will make some people insanely rich, but it is hard to see how the big investors will get a return.) Could you imagine if the world shared the same vision for energy infrastructure (so we could also stop fighting wars over control of fossil fuels and spewing CO2?) A man can dream...
People do vary even if none are perfect. Demis Hassabis has a pretty good reputation amongst the AI leaders. Altman seems unusually shifty.
> He's doing his job as it is defined (make OpenAI profitable.)
What? OpenAI was a non-profit until Sam made it for-profit.
could someone please give a tldr? this was way too long
The answer to the question in the title is "no".
Seeing Sam Altman slowly degrade into the realization that he is in fact not as smart as others in this space has been fascinating to watch. He used to speak with enthusiasm and confidence and now he’s like a scared little boy who got in way too deep.
The last person this happened to was Sam Bankman-Fried, as investors and regular folk finally realized he was full of complete shit and could only talk the game for so long until the truth emerged.
They were both pretty smart in certain ways. Altman's very good at being manipulative and raising money, though he seems so-so on the tech. Bankman-Fried was smart at crypto and the like, but ethically challenged on the don't-steal-your-customers'-money part.
And they both peddle the same altruism smokescreen. Sociopath leader playbook.
Nope, never trust this man. His history proves why you cannot. Pure greed.
> Lehane—whose reported motto, after Mike Tyson, is “Everyone has a game plan until you punch them in the mouth”
lol do you think these guys have ever been hit? Let alone in the face. They’d probably be less eager to mouth off as much as they do if so.
Harvey Dent…
The brighter the picture, the darker the negative
Hubris.
speak for yourself, he doesnt control my future.
Please don't leave us hanging; what makes you immune?
Meh. I’m no particular fan of Altman but there’s nothing in this article particularly surprising or terrible.
The whole AI safety thing has always seemed extreme to me and has turned out to be a storm in a teacup. All those prominent people who used to tell us how AI will end humanity seem to have stopped talking about it.
I get the sense that Altman is not a particularly likable person, but Bill Gates and Steve Jobs both seem to have scored a 10/10 on their "is this guy a jerk" rating; it's common for tech CEOs.
So, the article and headline are dramatic but not much really there.
I think all the AI safety obsessed people turn out to have been the ones off course.
The guy gets called out as a sociopath by a multitude of Silicon Valley CEOs, of all people; sure, we can trust him with our future.
Quite frankly, if he went and scrubbed (or had scrubbed) a Facebook thread where I got into an argument with him in 2018 (around the last time someone wrote an article about him), I can only imagine how obsessive he is about controlling his past and information about it.
Betteridge's law of headlines: no
obviously not
Looks like Betteridge's law of headlines applies here too.
Betteridge's law of headlines is an adage that states: "Any headline that ends in a question mark can be answered by the word 'no.'"
Betteridge strikes again
> The day that Altman was fired, he flew back to his twenty-seven-million-dollar mansion in San Francisco, which has panoramic views of the bay and once featured a cantilevered infinity pool, and set up what he called a “sort of government-in-exile.” Conway, the Airbnb co-founder Brian Chesky, and the famously aggressive crisis-communications manager Chris Lehane joined, sometimes for hours a day, by video and phone. Some members of Altman’s executive team camped out in the hallways of the house. Lawyers set up in a home office next to his bedroom. During bouts of insomnia, Altman would wander by them in his pajamas. When we spoke with Altman recently, he described the aftermath of his firing as “just this weird fugue.”
These sociopaths are so good at giving away nothing. He managed to engender sympathy instead of saying "I'm not gonna talk about anything that happened then".
Also very weird how many of these people are so deeply-linked that they'll drop everything they're doing just to get this guy back in power? Terrifying cabal.
Short answer: No. Long answer: Hell, no.
Rule of Headlines says "no"
https://en.wikipedia.org/wiki/Betteridge%27s_law_of_headline...
tl;dr
No, he cannot.
Can anybody tho?
Yeah, some people can more than others.
[flagged]
We've banned this account.
We detached this comment from https://news.ycombinator.com/item?id=47668579 and marked off-topic.
>"Sam Altman may control our future"
TL;DR, but the heading alone is already ugly. No single person, no matter how nice they are, should be able to control our future. Power corrupts; what fucking trust? We are supposed to be a democratic society (though looking at what is going on around us, this is becoming laughable).
The New Yorker is owned by Conde Nast, just like Reddit. Conde Nast has a deal with OpenAI:
https://www.reuters.com/technology/openai-signs-deal-with-co...
This is a damage control piece, and you see that the most stinging comments here get downvoted.
What might feel like "damage control" is more likely to be the outcome of the even-handedness you get with serious, rigorous reporting. Something the New Yorker is known for.
He is cooked. Only a matter of time before the whole thing blows up. Once a scammer, always a scammer.
No one person controls our future. Stop there.
Some people have far, far more power over our lives than others. More than they deserve, frankly.
Yeah, but one person can fuck a lot of shit up.
Well, no, obviously not. Not one bit.
LOL, no.
No. Why is this a question?
"could", "may", "might" - these words do so much heavy lifting in "journalism". Almost always it's an invitation to worry and be miserable.
No
just like Zuck.
1. No.
2. You cannot "control" superintelligent AI.
No.
No
No.
"Any headline that ends in a question mark can be answered by the word no."
https://en.wikipedia.org/wiki/Betteridge%27s_law_of_headline...
In this case, "Ick, no!"
[flagged]
[flagged]
This article is just another typical New Yorker fluff piece that tries to look deep but misses the actual point.
The biggest flaw is that it spends way too much time on high-school level drama and "he-said-she-said" gossip about Sam Altman’s personal life instead of focusing on the actual technical and corporate capture of OpenAI.
The author treats the "nonprofit mission" like some holy quest that was "betrayed," when anyone with a brain in tech saw the Microsoft deal as the moment the original vision died. Instead of a hard-hitting look at how compute monopolies are actually forming (MSFT, AMZN, NVDA, and circular debt deals inflating an AI bubble that could crash the economy), we get 5,000 words of hand-wringing over whether Sam is a "nice guy" or a "liar."
Who cares???????
The board failed because they had no real leverage against billions of dollars, not because they didn't write enough Slack messages. It's a long-winded way of saying "Silicon Valley has internal politics," which isn't news to anyone here.
I don't see anything bad about Altman in this article that can't be explained by the chaos of growing a billion-dollar company in a few years.
He's a grown-ass man tweeting in all lowercase; that's all I needed to know.
I could more or less infer the rest from that.
Why would anyone trust him at all? Their tech is used to bomb children; all of these rich folks are immoral, caring only about their selfish gain.
> their tech is used to bomb children
If you're talking about the school in Iran, that wasn't OpenAI. That was a Palantir system that pre-dates OAI by a few years, and it was due to a bad entry in a spreadsheet that showed the building as military housing. Which it was, a few years ago.
180 people lost their lives because of bad data in a spreadsheet, but not AI.
Many years ago. Not "a few years ago." Also, you could make the case that 180 people lost their lives because of an evil war, of which the USA and Israel are the aggressors. And we definitely don't talk enough about that part.
Palantir was using Anthropic, and its use is being replaced by OpenAI.
Yes, but not for the system that decided to bomb a school. That was a Palantir in-house system.
AFAIK the Palantir system utilized AI.
180 children lost their lives because of decisions by people in the US military (and ultimately the US government / the POTUS).
Let's not fall into the trap of adopting narratives created to waive accountability. The spreadsheet didn't launch a missile, the spreadsheet didn't authorize the strike and the spreadsheet didn't select the target.
Not to mention that "outdated spreadsheet" is also a hilariously anachronistic excuse for a war crime if you consider what kind of satellite technology the US has publicly acknowledged to have access to, let alone what kind of technology it is likely to have access to.
The difference between intentional premeditated murder and reckless endangerment resulting in a killing is not guilt and innocence but merely the severity and nature of a crime. Both demonstrate a callous disregard for the sanctity of human life, one just specifically seeks to extinguish it, the other merely accepts death and suffering as an acceptable outcome.
Please talk to your criminal defense lawyer.
This is nonsense.
> bomb children
Looks like that's the only thing that happens in every war, according to useful westerners.
Any idea how stupid this title sounds!? It's past exaggeration.
A bit of a feeling of "so what" here. Maybe he's less trustworthy than some. We have people of X trustworthiness running the government, crypto exchanges, a certain space exploration and satellite company, social media companies, and so on. We know their trustworthiness. Isn't the real issue how to cope?
What's the point of living in an advanced society if you just sit around watching it decay around you? Our ancestors fought for our indifference today, and with attitudes like yours we'll watch our children fight for it again tomorrow.
What's your proposal? We knew he's as trustworthy as the others, and it sounds like you agree. What are you doing about them? Legally or illegally?
Mostly we don't need 3,000 words on how untrustworthy he is. We could use 3,000 words on how to remove his influence.
Your point is that it's ok he's untrustworthy because lots of people in power are?
> Your point is that it's ok he's untrustworthy because lots of people in power are?
It's... weirdly a valid question. If Sam fibs as much as the next guy, we don't have a Sam problem. Focussing on him alone is, best case, a waste of resources. Worst case, it's distracting from real evil. If, on the other hand, as this reporting suggests, Sam is an outlier, then focussing on him does make sense.
No, it's that the entire ecosystem is rotten to the core, and it actively selects, rewards, and protects flawed personality types.
And when you're dealing with a potential existential threat, this is an existential problem.
I don't disagree, but at some point, I think people need to understand we're dealing with laws of nature here. I mean just look at human history, this has been a problem since the dawn of civilization...
I think if you truly understand social contract theory, how hierarchies are formed, and political theory, you'll realize that oligarchies tend to be nature's equilibrium point for settling social disputes, and all forms of government, regardless of what they claim to be, naturally devolve towards them, as they tend to represent the highest social entropy (i.e., equilibrium) state. That's not to say you can't move further away from that point and towards another (supposedly ideal) form of government; you absolutely can, but it takes work. Perpetual work - which no set of "rules" can spare people from having to do in order to sustain it.
The problem, however, is that most people get complacent. They eventually tire of that work, or are ignorant of it, and in doing so create a power vacuum which allows things to slide back towards that state.
And so, people must decide for themselves which of several possible avenues to pursue:
#1 - Try to convince others (the masses) to join and work together to take power from the few, back to them
#2 - Find a way to join the ranks of the elite few (where, thanks to the prisoner's dilemma, unscrupulous means tend to perform better in the short term, even at the cost of the long term. And if the elite is already corrupt, cooperating with it works well)
#3 - Settle for their lot in life
Unfortunately, #1 is a difficult proposition, since it requires winning agreement from many people while most decide to remain in camp #3 (out of complacency or ignorance). And #2 is often easier to pull off without moral integrity, especially since the behavior of those in camp #3 only helps enable these realities. That's why I think the "ecosystem," as you say, will always tend this way: towards a society controlled by an elite few who are rotten.
Robert Michels realized this and dubbed it the Iron Law of Oligarchy, and he embraced his own version of #2 for himself. He came to this conclusion through his own observations and reasoning, though, rather than through historical political theory.
Not sure where I said it's OK? Please point it out.
We have to deal with it. Or are you suggesting we should purchase a controlling interest and vote him off the board?
OpenAI is like #3 or #4 of the AI companies right now in terms of power, and last place in the court of public opinion.
I’d be more concerned about Anthropic both being in the good graces of the public and having access to all of our computers indirectly with Claude Code.
OpenAI has ~30x the userbase of Anthropic.
I'm not sure how much of that converts to revenue. If it's free plan users, that's just cost. You can say what you want about "creating a training data moat" but that doesn't seem like it's prevented the other labs from putting out excellent models.
Well we were talking about power and reputation and being well-known and all that. Being more ubiquitous is surely a big part of that. GP seems to think Anthropic is doing better because of the DoD thing. In my estimation, 90% of people do not care about that at all.
Around the same revenue, due to Anthropic's strong enterprise strategy.
Perhaps, but I'd venture the ear of the regime is even more valuable.
They're all in the red excluding subsidies; hardcore coders are more valuable than high schoolers cheating on homework.
makes sense if you think the point of journalism is just to take everyone down a notch instead of... um... informing the public of bad actors
"the local drug-dealing pimp is so passe, we need to investigate the most upstanding members of the community just to be sure" is a frankly insane strategy
I might expect such a subjective, gossipy exposé of a public official, but of a private individual at a commercial company outside the public sector?
Yet when he was fired, 99% of OpenAI employees backed him and were ready to resign. That actual event/evidence is more telling than any hit piece.
> Yet when he was fired, 99% of OpenAI employees backed him and were ready to resign. That actual event/evidence is more telling than any hit piece.
It's not telling. The article documents a massive pressure campaign to get that result. There are a lot of reasons why OpenAI employees could have publicly backed him (fear, for example), and many of them aren't an endorsement of Altman's character.
I imagine most of them were motivated by money. OpenAI was supposed to be Open. As I understand it, it wasn't created for shareholder profit but to benefit everyone, hence the Open name. Then someone like Sam comes along who can make you incredibly rich by casually ignoring the initial mission. Would you go against this incredibly powerful billionaire who, by many accounts, is not encumbered by ethical quandaries? In doing so you risk your financial freedom, and for what? OAI is already a husk of its intended purpose. Might as well get paid to be a sellout.
Like I said, as it currently stands the evidence is 5,000 with Sam and like 5-15 against him (from the article).
We can theorize about motivations all day, but since the hit piece didn't bother contacting any 'pro Sam' employees, it's a moot point.
> OpenAI was on the verge of closing a large investment from Thrive, a venture-capital firm founded by Josh Kushner, Jared Kushner’s brother, whom Altman had known for years. The deal would value OpenAI at eighty-six billion dollars and allow many employees to cash out millions in equity.
Probably a factor in the pro Sam camp. Hard to stand up against a big payday.