Just the opinion of an outsider, so not worth very much. But Ilya seemed to be one of the few who actually believed in the mission. I'm sure it was hard for him to watch the company become so product-focused.
OpenAI under Sam strikes me as completely disingenuous - and the constant hyperbolic tweeting by many OpenAI employees just reinforces that.
Too bad. While I don’t really think that OpenAI is on the right track for general intelligence, it certainly could have been a positive for the world.
I wonder if that's only true up to a certain stage of OpenAI, which, because of the product-bootstrapping skills of Sam and co., has now made his role irrelevant?
I mean, Jakub can take it forward at the current scale, with Sam and the rest of the leadership team in place, but maybe he could not have earlier, and that earlier stage is where Ilya shone?
Everyone in the photo is @'d in the post. It's @merettm / Jakub Pachocki who is taking over as Chief Scientist. Downvotes are probably because you cheekily mentioned his weight for some reason.
'Back in May 2023, before Ilya Sutskever started to speak at the event, I sat next to him and told him, “Ilya, I listened to all of your podcast interviews. And unlike Sam Altman, who spread the AI panic all over the place, you sound much more calm, rational, and nuanced. I think you do a really good service to your work, to what you develop, to OpenAI.” He blushed a bit, and said, “Oh, thank you. I appreciate the compliment.”
An hour and a half later, when we finished this talk, I looked at my friend and told her, “I’m taking back every single word that I said to Ilya.”
He freaked the hell out of people there. And we're talking about AI professionals who work in the biggest AI labs in the Bay Area. They were leaving the room, saying, "Holy shit."
The snapshots above cannot capture the lengthy discussion. The point is that Ilya Sutskever took what you see in the media, the “AGI utopia vs. potential apocalypse” ideology, to the next level. It was traumatizing.'[0]
The future of the company doesn't depend on one engineer. If he left, it's likely because he had a vision that wasn't in line with Sam's or Microsoft's. Others will take his place, and OpenAI will likely fulfill Elon Musk's recent prediction that AI will improve 100x in the next few years.
Catching up right now is not a matter of tech innovation, but raw energy and compute.
Of course, the next -revolution- in AI could very well come from Ilya. But why would he bestow that honor on anyone else? He can self-fund it if he wants. It's an R&D project, not a scaling problem.
I think what is probably very stressful about this space is that virtually everyone knows how ChatGPT works. It is not a theoretical leap. It's actually fairly predictable how this shakes out, and OpenAI is pretty vulnerable.
An LLM is a curiosity without user data; anyone with a big silo of data can put out something years behind the frontier and still instantly see huge usage. No one wants to go to the AI, they want the AI to come to them. Unless OpenAI can stake a claim in a genuinely novel way, they're the Dropbox.
It's not like someone is going to use insider OpenAI knowledge to build an LLM so advanced that you switch email, phone, or ERP providers.
You will still need the compute resources of Microsoft, Meta etc.
And they have their own people who equally know how LLMs work.
Even raising funds is not a certainty given that VCs are becoming more cautious with AI as they realise it's now a platform fight between the mega corporations.
I don't think it's good to lionize people like that, like some kind of tech Übermensch. He neither knows everything about ChatGPT, nor is he the only person there to know a whole lot about it.
If his concern was irresponsible AI proliferation, too much commercial focus and wanting to move more carefully, accelerating competition doesn't seem like it would align with his goals.
Interesting, both Karpathy and Sutskever are gone from OpenAI now. Looks like it is now the Sam Altman and Greg Brockman show.
I have to admit, of the four, Karpathy and Sutskever were the two I was most impressed with. I hope he goes on to do something great.
The top 6 science guys are long gone. OpenAI is run by marketing, business, software, and productization people.
When the next wave of new deep learning innovations sweeps the world, Microsoft eats what's left of them. They make lots of money, but they don't have a future unless they replace what they lost.
AI has now evolved beyond just the science, and its biggest issue is in the productization. Finding use cases for what's already available, along with new models, will be where success lies.
ChatGPT is the number 1 brand in AI and as such needs to learn what it's selling, not how its technology works. It always sucks when mission and vision don't align with the nerds' ideas, but I think it's probably the best move for both parties.
If we look at the history of innovation and invention, it's very typical that the original discovery and the final productization are done by different people. For many reasons, but a lot of them are universal, I would say.
E.g. Oppenheimer's team created the bomb, then later experts fine-tuned the subsequent weapon systems and payload designs. Etc.
I don't feel that OpenAI has a huge moat against, say, Anthropic. And I don't know that OpenAI needs Microsoft nearly as much as Microsoft needs OpenAI.
How important are the top science guys, though? OpenAI has a thousand employees and almost unlimited money, and LLMs are better understood now. I would guess continuous development will beat singular genius heroes?
What an absurd thing to say.
John Schulman is still at OpenAI. As are many others.
Jakub Pachocki is taking over as chief scientist. https://analyticsindiamag.com/meet-jakub-pachocki-openais-ne...
> Open AI is run by marketing, business, software and productization people.
AKA 'the four horsemen of enshittification'.
> They make lots of money
Will they though? Last I heard OpenAI isn't profitable, and I don't know if it's safe to assume they ever will be.
People keep saying that LLMs are an existential threat to search, but I'm not so sure. I did a quick search (didn't verify in any way whether this is a plausible number) to find that Google on average makes about 30 cents in revenue per query. They make a good profit on that because processing the query costs them almost nothing.
But if processing a query takes multiple seconds on a high-end GPU, is that still a profitable model? How can they increase revenue per query? A subscription model can do that, but I'd argue that a paywalled service immediately means they're not a threat to traditional ad-supported search engines.
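To put rough numbers on it, here's a back-of-envelope sketch in Python. Every figure in it is an assumption, including the unverified ~30 cents/query above; the GPU rate and per-answer time are made up for illustration:

    # Back-of-envelope: ad-supported search vs. GPU-served LLM answers.
    # All inputs are illustrative assumptions, not verified figures.
    revenue_per_query = 0.30       # USD; the unverified ~30c/query figure above
    search_cost = 0.001            # USD; assume a classic index lookup costs ~nothing
    gpu_hour_rate = 2.50           # USD; assumed rental rate for a high-end GPU
    gpu_seconds_per_answer = 5.0   # assumed GPU time one LLM answer occupies

    llm_cost = gpu_hour_rate / 3600 * gpu_seconds_per_answer

    print(f"search margin per query: ${revenue_per_query - search_cost:.4f}")  # ~$0.2990
    print(f"LLM cost per query:      ${llm_cost:.4f}")                         # ~$0.0035
    print(f"LLM margin per query:    ${revenue_per_query - llm_cost:.4f}")     # ~$0.2965

Under these made-up numbers a single LLM answer still clears most of the 30 cents, but its cost floor is several times a lookup's and grows with answer length and model size, which is exactly the squeeze I'm describing.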
I honestly think that is the best course of action for humanity. Even less chance of seeing AGI anytime soon if he leaves.
"Productization". You mean "enshitification".
> When the next wave of new deep learning innovations sweeps the world,
That won't happen; the next scam will be different.
It was crypto until FTX collapsed. Then the usual suspects, led by a16z, leaned on OpenAI to rush whatever they had to market, hence the odd naming of ChatGPT 3.5.
When the hype is finally realized to be just mass-printing bullshit -- relevant bullshit, yes, which can sometimes be useful, but not billions of dollars of useful -- there will be something else.
Same old, same old. The only difference is there are no new catchy tunes. Yet? https://youtu.be/I6IQ_FOCE6I https://locusmag.com/2023/12/commentary-cory-doctorow-what-k...
Karpathy is still a mountain in the area of ML/AI, one of the few people worth following closely on Twitter/X.
I don’t think people give Dario enough credit
Yeah, I think him leaving was a huge blow to OpenAI that they have maybe not yet recovered from. Clearly there is no moat to transformer-based LLM development (other than money), but in terms of pace of development (insight as to what is important) I think Anthropic have the edge, although Reka are also storming ahead at an impressive pace.
I love Karpathy. He's like a classical polymath, a scholar and a teacher.
Jakub Pachocki is still at OpenAI, though.
Greg Brockman is a very good engineer. And that's maybe even more important in the current situation.
Jan Leike has said he's leaving too https://twitter.com/janleike/status/1790603862132596961
The scenario I have in my head is that they had to override the safety team's objections to ship their new models before Google IO happened.
The "safety" team can go eat grass.
I don't believe in AI "safety measures" any more than I do in kitchen cleaver safety measures.
That is, nothing beyond "keep out of kids' reach" and "don't use it like an idiot" but let the cleaver be a damn cleaver.
There goes the so-called superalignment team:
Ilya
Jan Leike
William Saunders
Leopold Aschenbrenner
All gone
Resignations lead to more resignations... unless management can get on top of it and remedy it quickly, which rarely happens. I've seen it happen way too many times in 25 years working in tech.
So Satya Nadella paid $13 billion to have... Sam Altman :-))
I guess if they really thought we had something to worry about, they would've stayed just to steer things in the right direction.
Doesn't seem like they felt it was required.
Edit: I'd love to know why the down votes, it's an opinion, not a political statement. This community is quite off lately.
Is this a highly controversial statement? Are people truly worried about the future, and is this just an anxiety-based reaction?
Daniel “Quit OpenAI due to losing confidence that it would behave responsibly around the time of AGI”
“I think AGI will probably be here by 2029, and could indeed arrive this year”
Kokotajlo too.
We are so fucked
Relying on specific people was never a good strategy; people will change. But this will be a good test of their crazy governance structure. I think of it like political systems - if it can't withstand someone fully malicious getting into power, then it's not a good system.
The guy with the "Bad Universal Priors and Notions of Optimality" paper, which did to Hutter's AIXI program what Gödel did to Hilbert's program.
Any chance you can eli5? I'm familiar with the Godel/Hilbert side but not the relationship to these developments.
When walking around the U of Toronto, I often think that ~10 years ago Ilya was in a lab next to Alex trying to figure things out. I can't believe this new AI wave started there. Ilya, Karpathy, Jimmy Ba, and many more were at the right time when Hinton was there too.
Oh man, that was an amazing time at UofT. We also got GPU versions of BTC mining from that group.
Ethereum was also born right around there, around 2014. I remember the first Ethereum meetups around Queen and Spadina with Vitalik.
But to another poster's point: even though we had the father of deep learning, Geoffrey Hinton, and luminaries like Ilya and Vitalik, we didn't manage to get any real benefit from that.
Wow! By the time I arrived, Hinton was gone, as well as many great professors who started their own companies or were poached by big players (e.g. Sanja to Nvidia). At least I got to learn NNs from Jimmy Ba (author of Adam). Now he's working at xAI.
And none of them built AI companies in Toronto.
I’m Canadian and disappointed at how ineffective we are at building successful companies.
I've thought about this one for a long time, having lived in both SV and Canada. It is a complicated one, but there are a handful of critical roadblocks in Canada that make it more challenging.
(1) Access to the size of the market: even if online, being US vs 'foreign' has advantages in the political arena and regulatory benefits
(2) Significant tax advantages for US investors vs limited tax advantages in Canada (angel + VC)
(3) Risk appetite (impacted by size of market), compounded by tax disadvantages (why would you take risk if you're lining the pockets of the government?)
(4) Bench depth on talent once you really start to scale your company
(5) CAD strength (a double-edged sword): talent goes south for better salaries (and you need to compete with that), even if company revenue is in USD and employees are paid in CAD
(6) Start-ups pay in equity, and early employees taking on that risk will get taxed heavily under the new capital gains rules, so the incentive to work hard for the money is lower
(7) Network effects of being in the Valley: idea percolation, new playbooks, talent, competitiveness, company fitness
I will add that in this very specific AI case, there is little chance you are going to find the depth of talent and capital in the country to make that company fly at the scale it needs to reach.
Why would anyone start the game on hard mode when easy mode is a border drive away?
The US is so outrageously better than the rest that people fly across oceans to start businesses there. Canada, being next door, doesn't have the distance moat to at least slow down the brain drain.
As another Canadian, I feel the same but I'm not surprised one bit.
Canada is actively hostile towards tech and suffers from cripplingly low salaries and investment. The idea of "business" in this country is buying a house and renting out its basement.
Our government's incompetence is comical; we are nothing more than a tech talent / immigration proxy for the United States at this point.
The Canadian Dream is to get a great education and then move to the US.
You might want to blame the government or this or that, but as a Canadian I think I've finally come to reckon with the fact that it's just not in the Canadian ethos to do risky things like making startups. Of course there are exceptions to the rule, but they are very, very rare. Canadian investors don't want to take big risks, and the Americans are just next door, waiting to gobble up the talent in search of capital.
Canada is addicted to rent seeking, monopoly businesses, corporations that push regulatory capture on the gov't and then parasitize, and -- most of all -- ripping resources out of the ground and selling them cheap, or doing the same with real estate.
My latest annoyance is all the moaning and groaning about the latest capital gains tax increase. People complaining on one hand about how the Canadian economy lacks productivity, and then screaming to high heaven about tax policy that mostly only impacts people making quick speculative cash.
Investors take no risks in this country because they don't have to. They just dump money into real estate or oil & gas instead and then hang out at the lake in the Muskokas.
Aidan Gomez, Nick Frosst, and Ivan Zhang, all of whom were Hinton's students at UofT, started Cohere (https://cohere.com/about)
As much as people on this site like to complain about Europe (and a lot of it is merited), I've found that Canada manages to be worse, even while having lower bureaucracy on average.
Couldn't a company like that get a huge tax benefit from the SR&ED program?
Yeah Canada just spends a ton of taxpayer money to create great institutions like U of T and Waterloo, so that their graduates can all go to Silicon Valley and make 2-3x the money.
Also: All your comedians move to the US to make it big.
Seemed inevitable after that ouster attempt; probably just working out the details of the exit. But the day after their new feature release announcement?
"Get next major feature to release and you can go as a friend" might have been part of an earlier agreement.
More like they iced him for the last 6 months to ensure he wasn’t taking their lead to a competitor. He probably hasn’t touched anything in that time.
Sounds like a threat.
I mean, people can also get attached to a feature release.
"I want to work with the team to get this thing done"
I believe Omni was his work, based on an interview he gave about end-to-end multimodal training being needed to move to the next level of understanding.
I would imagine he'd been thinking about it for a while, and maybe, with all the buzz about him at the same time as the release, he was asked to decide.
Could be a clever play. They sandwiched Google I/O with news, which has taken attention from Google. Plus they just had a big announcement, so the negative news hits a little less hard.
People will pay relatively less attention to a departure announced right after a new feature release than at any other time.
Jakub Pachocki is amazing. He placed in the top 20 of the Polish Olympiad in Informatics:
https://oi.edu.pl/contestants/Jakub%20Pachocki/
Wait, TIL Jakub Pachocki == meret [1], never made the connection.
[1] https://codeforces.com/profile/meret
The usual fate of idealistic people who build something great, only to be discarded by management in a power struggle. How often has this repeated?
What do you mean, how often? That is a foundation of the most successful economic model humans have. Some may not be discarded, but they will never get enough credit compared to a clueless head with a $1M smile talking to clueless heads with $1B wallets. We should thank god/nature that people who understand and do things exist in our species at all.
The people you need for the revolution are not the same ones you need after the revolution.
It only works when idealistic people don't know what awaits them (hence the "middle management" layer in most companies).
Making sure generalized AI benefits everybody is the new Don't Be Evil
"We want to put AI in your hands"
to keep??
NO! whatever gave you that idea, evil doer...
Open AI, as in, open your hands and beg for another hit of AI through thick rubber gloves and plexiglass.
Why do people treat these technologists' career moves as if they were lineup changes on major league sports teams?
Are these "first name" (ugh) "influencers" smart? Sure.
Smart is not that rare. These people are technologists like most of you, they aren't notably smarter, they just got lucky in their career direction and specialization. They aren't business geniuses.
They're just people filling roles.
Do changes in leadership affect a business? Sure? I guess? About 5% as much as you'd think from the tea-spilling gossip-rag chatter around AI people.
Enough already. Attend to the technology. Attend to the actual work. The number of you who are professionally impacted by these people changing paychecks is closer to zero than 50%.
No, they are people defining companies, which is a significantly less fungible placement and a self-defined role.
Meta's next for him? There's lots of money being poured into their AI division, there's lots of compute, and he'd be able to do any kind of research he might want.
I doubt it, the internal politics of it are enough to drive most people crazy.
Does it matter that the people who dedicated the last decade to developing breakthrough work have left? It is a mistake to think that their luck streak will continue and that their departure isn't a sign of decay at OpenAI. They may as well cash in on their notoriety while it is of value. The odds are more in favor of other teams blazing new trails.
Not to be a conspiracy theorist, but the phrase "So long, and thanks for everything" used in the tweet reminds me of "So long, and thanks for all the fish" from the dolphins in The Hitchhiker's Guide to the Galaxy. The background there is that dolphins are secretly more intelligent than humans, and are leaving Earth without them when its destruction is imminent (something the humans don't see coming).
I did once leave a company with a phrase just like that :P A few people there actually got the reference and congratulated me for the burn.
I spotted the reference, but did not think this deeply, lol. You have a point here.
In that metaphor, is openai the humans or are actual humans the humans? So is openai about to be destroyed or humanity?
Openai would be the humans here and Ilya would be the dolphin. (In the metaphor, the dolphins leave and here Ilya is leaving)
The dolphins are actually openai
That is really smart, I wonder what's going on behind the scenes. Q* perhaps?
I think the parent is implying the opposite here.
I read Sam's tweet and see "I fired him because he voted against me"...
I'm sorry, but every time I see Sam speak or read what he has to say, all I can think is "petulant man-child".
> ... Ilya is easily one of the greatest minds of our generation
> ...Jakub is also easily one of the greatest minds of our generation
I'm not calling you a liar, Sam, but I just don't believe you.
Trust was irrevocably broken. That’s why he is leaving.
I wonder how the proposed regulations to make noncompetes unenforceable affect moves like this. Or was he sufficiently high up that his existing noncompete would have survived?
How good or bad is this for OpenAI?
A few years ago? Probably catastrophic, he was Chief Scientist after all.
Now? Probably not too much. They have enough investment, and plenty of additional talented people wanting to join. I mean, Andrej Karpathy joined and left OpenAI twice and it didn't impact operations much.
I think OpenAI is now where Google was at or just before its IPO: a few key players leaving isn't going to impact them as much as it would have in the earlier founding days, and there is plenty of talent ready to jump in and fill the shoes of anyone who leaves.
That may be true in terms of engineering, but I think everyone had switched to Google as their search engine by then. I am not sure OpenAI has captured the market in quite the same way; I think people are still mostly experimenting with AI, and the integration time in any large company is much slower than the rate of progress of AI. And it's not clear to me that there is much vendor lock-in to the OpenAI API vs an equivalent competitor.
According to Mr Altman’s tweet (https://twitter.com/sama/status/1790518031640347056) they had not just one but TWO of the greatest minds of this generation.
After this change they will have only one.
Sam Altman's tweet only implies that they had >= 2 greatest minds of this generation, and now they have at least one of this breed of people.
He is the smartest guy in AI, but the sum of OpenAI's talent is greater than his alone. Still, he could easily be behind the next great advancement in the field.
Ilya hasn’t been working on core models for a while. He’s been focused on superalignment. That’s good for the world. Since OpenAI is leading/closest to AGI, it’s the best place to work on superalignment.
Depends on how tight his non-compete is.
I think those are illegal now. They have been in California for a long time.
https://www.ftc.gov/news-events/news/press-releases/2024/04/...
Non competes are illegal in CA
At least now we know from this that GPT-5 has finished development and is now in training (I would hope that Ilya got to add all that he hoped to before leaving).
Ilya, thanks for all you have contributed within OpenAI!
GPT-5ANDBAG more like it
He wouldn't have left if he could advance humanity further there. The guy has like an 800ms delay on each word, and that does not make for a very good liar, perhaps a dutiful one.
The word delay depends on who he is talking to. In his Dwarkesh interview from a year or so back he sped up noticeably, presumably because Dwarkesh is a fast thinker/talker.
From reporting, GPT-5 finished pre-training a while ago and was in the process of red-teaming.
His phone must be ringing non-stop from all the VCs.
My bet is he joins Ive
Maybe together they can use AI to make a less shitty Christmas tree
Pretty sure he's going to Microsoft.
The "personally meaningful to me" suggests to me that it's probably a personal project?
Nvidia should snatch him.
I have a feeling Apple will make a play for him.
Apple is considered to be seriously lagging behind in ML. His name alone is probably enough for the time being - they can give him his own lab to do whatever he wants. Ilya will attract enough talent, at least some of whom will be willing to take responsibility for the commercial stuff in the coming years.
I have a feeling he would like to publish some stuff, and Apple doesn't do that.
I think so too; GPT-4o replacing Siri would be world-changing for mobile.
They sell the shovels and the buckets, they're not digging for gold.
They do participate pretty heavily in ML research from what I've seen. To continue your metaphor, they try to invent as many gold digging techniques as possible which exclusively work with their own shovels and buckets.
If you look at Deep Learning Super Sampling (DLSS), they are doing some digging themselves and being pretty successful at it.
They've got money for a very BIG experiment tho
They would if they could
Doesn't hurt to also sell the gold.
That is not Nvidia's main purpose, and it would make Microsoft, Google, and FB worried enough to move away from Nvidia.
Being both coach and player at the same time is not a good idea.
What next? Meta?
Maybe Microsoft, for being so close with OpenAI. Maybe Apple, who really needs a tech lead for AI. Maybe Google, his previous workplace, or working for Elon, who successfully poached Andrej in the past. Or a startup; he can raise billions if he so wishes. Wherever he goes will be competing with OpenAI within a year. The previous time lead researchers had a philosophical disagreement with Sam, they left and created Anthropic, which recently caught up to OpenAI. That's the risk of letting Ilya go. And where Ilya goes, other top researchers will go too.
Ilya cares about AI Safety and AGI. Meta's whole positioning is to dismiss it. No way he goes there.
Perhaps that's exactly why he might go there: to change it from the inside (a new long-term company direction, or just upcoming potential regulations, etc.)
I don't believe it either, but in case it happened, it might make some sense that way.
Maybe the best way to guarantee safety is to openly share the science. LeCun is also more 'academic style' than most competing labs.
I'm guessing his next move is not related to LLMs, maybe not even to the pursuit of AGI.
It depends on the non-compete clauses in his contract.
I don't know how to word it, but a company that ignores all content rights while enforcing a non-compete seems ironic to me.
Aren't those unenforceable now?
Not really a thing in CA, largely unenforceable.
If he goes to Microsoft next it was all prearranged a year ago.
He said his next project is personally meaningful. It doesn't seem like he will join another big company in the short term.
Apple is hiring...
And just like that, a drawbridge across OAI's moat.
Why now?
Given that he went radio silent after the vote-out-Altman fiasco exactly 6 months ago, it's clearly due to that.
One idea could be the product-launch dev day, which was originally a point of tension (overcommercialization vs research). Launching GPT-4o at a dev day basically asserts that Sam is making no compromise on where they were 6 months ago. A good time to finally leave, if protesting that is what he believes in.
Why not? We don’t know details that could involve financial agreements
Cash out and live the good life? Start his own AI company or a support company? Build the next, better AI? The sky is the limit.
There may be small print limiting his options.
Tesla also lost its top AI lead [0]. Will they come to Apple?
[0] https://news.ycombinator.com/item?id=40361350
Probably not related, but it's worth pointing out that Daniel Kokotajlo (https://www.lesswrong.com/users/daniel-kokotajlo) left last month.
But if it were related, then that would presumably be because people within the company (or at least two rather noteworthy people) no longer believe that OpenAI is acting in the best interests of humanity.
Which isn't too shocking really given that a decent chunk of us feel the same way, but then again, we're just nobodies making dumb comments on Hacker News. It's a little different when someone like Ilya really doesn't want to be at OpenAI.
Well it might be in the best (long-term) interests of humanity to have autonomous flying killer robots powered by OpenAI secret military contracting work cut the human population in half, in the name of the long-term ecological health of the planet, and to cull those not smart or fast enough to run away, thus improving the breeding stock.
That's why I don't trust people who run around claiming to be serving the best interests of humanity - glassy-eyed futurists with all the answers should be approached with caution.
> Well it might be in the best (long-term) interests of humanity to have autonomous flying killer robots powered by OpenAI secret military contracting work cut the human population in half, in the name of the long-term ecological health of the planet, and to cull those not smart or fast enough to run away, thus improving the breeding stock.
I love these "kill 'em all and let God sort them out" arguments.
We already have tools to cut the human population in half even without AI. Acting in the best interests of humanity is really a cheesy way to frame it. I'm sure they also told Oppenheimer he was acting in the best interests of humanity.
You don't need any AI for that. Current technology is quite sufficient.
What? How is this not saying "Well, it might be in the best interests of humanity for OpenAI to do [hypothetical thing that seems pretty bad that OpenAI has never suggested to do], and because they may consider doing said thing, we shouldn't trust them"?
Why would that be presumable when his goodbye statement clearly states the opposite?
This is baseless fear mongering given that.
Why does everyone here think that the guy who quit/lost his job at OpenAI because he didn't agree with their corporate shift and departure from the original non-profit vision is going to be lining up for another big corporate job building closed for-profit AI?
>the guy who quit/lost his job at OpenAI because he didn't agree with their corporate shift and departure from the original non-profit vision
There is no evidence of this being true.
He is one of the biggest proponents of keeping AI closed-source, by the way.
> He is one of the biggest proponents of keeping AI closed-source, by the way.
From quite different reasons than profit, tho
That's a naive way of thinking. Keeping it closed source would only make it available to the highest bidder on the black market.
The big reason is that when push comes to shove, most of these people don't have any principles.
Sure, if they are in a position of power they will wield it how they want. When he caused the whole fiasco, he probably thought it was going to work.
But when the choice is between losing a position of influence and deciding which position of influence to accept next, well, you'll see that the principles are very flexible.
We already saw this happen with a few of the "safety" researchers who got fired from OpenAI and yet started working at xAI (I think?), which is definitely not known for "safety".
Maybe “better the devil you know than the devil you don't” applies?
Then...he would have stayed at OpenAI.
I am hoping he goes open source or to Meta
I'm not surprised, after what happened with Sam Altman's attempted ousting. He missed the king.
I’m surprised he lasted this long.
Mira Murati also "missed the king" and just delivered the keynote
Seems like she was appointed rather than actually trying to make moves?
Reid Hoffman provided some clear (at least to me) evidence for Mira's non-involvement → https://youtu.be/IgcUOOI-egk?si=FiSPt87v3pM3lfKt&t=851
Great, now I have that whistling stuck in my head again.
Thanks for the reminder though, been a while since I've thought of The Wire :)
Oh, indeed.
I wonder if he thinks LLMs are an AGI dead end and he's not interested in selling a product. There's some academic papers floating around coming to the same conclusion (no links. sorry, studying for a cert exam).
That's been my assumption since the beginning of this drama last year. He seems to have one goal: real AGI. He knows that while LLMs may make something that seems like AGI, there's nothing actually intelligent about it, and it's never going to get them there. OpenAI wants to pivot and sell, sell, sell, because all they see is potential trillions of dollars, and it's time to make money instead of burning more millions/billions chasing a dream.
Yet all the AI weirdos on Twitter seem convinced that Ilya "saw something" (AGI) and got scared and wanted to pull the plug...lmao.
This is counter to every interview Ilya has given since GPT-3 -- he believes scaling LLMs can get there; that's why they scaled to GPT-4 scale at all.
There is more money in AGI than LLMs.
Whatever it is, language seems key to intelligent algorithms.
Nah, he departed due to politics (the failed coup) and the shift from research-first to profit-first. Same with Karpathy, I believe.
He’ll most likely go somewhere where he can get a lot of compute and go back to research first.
> it's time to make money instead of burning more millions/billions chasing a dream.
The investors want their money before people realise they have been oversold the dream/threat of AGI.
> Yet all the AI weirdos on Twitter seem convinced that Ilya "saw something" (AGI) and got scared and wanted to pull the plug...lmao.
The market for believing in made-up stories is always strong.
Isn't it the consensus that AGI will never arise from LLMs?
Just based on current energy usage, it's never going to happen. You just have to ask them to show you their energy bills alongside their demos.
There is no such consensus.
No... A good number of folks will even go so far as to say that "all we do" is token prediction too. It's worth noting -- OpenAI founder Elon Musk claims in a lawsuit that the company has achieved AGI. Make of that what you will, but certainly there are many people on this site who believe in the general potential of LLMs.
Yeah, I study* that way too.
*procrastinate
> Yeah, I study* that way too
I’m not the only one?
So this is what it’s like when doves cry! - Milhouse
Funny
Altman's tweet (https://x.com/sama/status/1790518031640347056?s=46) makes it seem as if he wanted to stay, and Ilya disagreed and "chose" to depart. Very interesting framing.
PR statement. After nearly being ousted, I'm sure Sam is relieved to have a thorn removed from his side.
It could be a PR statement, it could also be genuine. From outside looking in there's no way to know, so I will just pretend this tweet doesn't exist.
It's PR for sure. A genuine announcement would have addressed the elephant in the room.
Someone has already tried doing that, and it's pretty close:
https://twitter.com/eli_schein/status/1790520139164614820
Altman is the biggest con artist in tech.
Con artist is a bad description. The guy is legit dangerous. He's not after swindling you out of your money, that wouldn't be worth it.
What’s the con? Aren’t they constantly delivering frontier models?
Surely he would have never gotten his current role if that is the case. There's way too much money and visibility involved.
Exactly. He’s only founded and led a company that’s built some of the most easily adoptable and exciting innovations in human-computer interactions in the last decade. Total fraud!
This was easily the most PR tweet of our generation.
The fact Ilya himself tweeted about it too was also easily the most PR tweet of our generation.
:D
Yeah, like when Cheney shot Harry Whittington and it was Whittington that apologized.
Since it's all in proper casing, I'm going to assume he wrote it with ChatGPT.
Or GPT-5 went rogue, took out the senior staff, and is running the game now, Westworld style.
He also literally mentioned Ilya's personal project; something that ChatGPT would do (it repeats parts of the prompt).
Ironically built by Ilya
He used "easily one of the greatest minds of our generation" for two different people in the same message. 100% AI-generated.
Good observation lol.
Nah. It's the same platitudes that are always said when someone high-profile is fired.
> Ilya is easily one of the greatest minds of our generation ...
> Jakub is also easily one of the greatest minds of our generation ...
Phew, I was worried he'd be irreplaceable or something. Hopefully they've already standardized the comp package.
Personally I don't trust much of anything Sam says, so I'd take any framing with a large grain of salt.
It’s too nice. Nobody is this nice. It’s like Truman Show nice.
While he does say he is leaving for some personal and meaningful project, let's see what it ends up being.
He’s got a good PR team.
Funnily enough, people will still call OpenAI "an engineering-led company" when very obviously it's slowly being taken over by the same MBAs as Google.
Sam "Worldcoin" Altman regrets the loss of a friend that called him out on how OpenAi is becoming closed because the engineers realized they could make a lot of money. Doesn't seem like it is impacting the quality of the models, but it will probably impact openai's impact.
Can you blame the engineers? If you realize LLM tech is neat but ultimately overhyped and probably decades away from truly realizing the promises of general purpose AI, why not just switch goals to making as much money as you can?
Oklo as well
There’s a halo around Ilya Sutskever as the Albert Einstein of AI. Are there others on par with his — umm, how would you qualify it — AI intuition, or are we idolizing?
You have used an excellent term: AI intuition. This quality is extremely rare. Einstein probably had a similar kind of intuition in physics, and maybe that's why he was so successful. The ability to see what direction to pursue. Ilya has demonstrated it again and again, first with AlexNet (Hinton said Ilya was the person driving the project, believing in its success when no one else did, while Alex was the main implementer), then with OpenAI, when he believed scaling up models was "all we need" to get to AGI, when very few people would have agreed. Today he believes alignment is very important - perhaps we should listen to him.
I think you're idolizing perhaps.
There's no doubt Ilya is highly respected in the field, but not to the same extent as Albert Einstein is in physics.
Maybe with time, but certainly not today.
Personality cult.
Yann LeCun is better known, right?
Schmidhuber
[flagged]
Reifying: you can't help but do it in English. Capabilities are given intention, intentions are given classes, and godhood is a class...
I hope Ilya takes care of himself. I can imagine that what happened during the past year is not helpful for one's mental health. I assume the presented relationship with Sam Altman does not reflect reality and the external press surely also causes a lot of pressure.
didn't know this. can you explain or link a few articles?
Ilya will literally have a blank check from almost all the VCs in the industry.
And probably all of the big tech CEOs are trying to get him on the phone right now.
All the executives are computer science majors …
[flagged]
Nope. His non-competes are likely very restrictive.
Most of the VCs have already spent their money in the last couple of years.
So he may have a blank check but not nearly enough to build an OpenAI competitor.
Would love to see him at IBM with full use of their quantum systems.
Well, no.
No VC who wants anything to do with OpenAI would invest in Ilya.
Ilya represents the anti-OpenAI ethos, so it would have to be a VC who is comfortable publicly being an anti-OpenAI VC, and there aren't many of those.
He showed exceptionally bad judgement, and judgement is perhaps the most important characteristic of high-level employees.
He's brilliant, which means someone will take a leap of faith, but he badly, badly damaged his brand as a leader going forward.
Biggest free agent since LeBron James
Guy should announce the next step of his career in a one-hour TV special. It would easily have as many viewers as OpenAI's keynote.
Plot twist: Ilya joins xAI next.
Elon poached Ilya to join OpenAI, so I'm sure he will be happy to have him around.
If we truly live in the "most entertaining outcome" timeline, this is definitely what's going to happen.
This is plausible: Elon is a fantastic recruiter, and he recruited Ilya for OpenAI. There are reports of xAI buying enormous numbers of GPUs, and Elon's level of control over his companies means that Ilya's recklessness isn't an issue.
It's a match. Probably the best match possible.
Never go against the family, Fredo
Matches the hairline.
I'll say it again - the one who was irreplaceable at OpenAI is Ilya, not Sam.
"Was irreplaceable" != "is still irreplaceable". OpenAI as a company has outgrown any individual engineer or scientist, no matter how smart.
Ilya Wozskever
Yeah - he's one of the only people I've seen talk on the topic who really seems to understand where it's going and how to get there. It's possible he's evangelized others at OAI who can carry the torch, but I'm skeptical given how much pushback his most important statements got from his peers.
Just the opinion of an outsider, so not worth very much. But Ilya seemed to be one of the few who actually believed in the mission. I’m sure it was hard for him to watch the company become so product focused.
OpenAI under Sam strikes me as completely disingenuous - and the constant hyperbolic tweeting by many OpenAI employees just reinforces that.
Too bad. While I don’t really think that OpenAI is on the right track for general intelligence, it certainly could have been a positive for the world.
I wonder if that was true only at an earlier stage of OpenAI, and whether the product-bootstrapping skills of Sam and co. have since made his role irrelevant?
I mean, Jakub can take it forward at the current scale, with Sam and the rest of the leadership team in place, but maybe he couldn't have earlier, which is where Ilya shone?
My guess is that Ilya is the one who saddled OpenAI with its insane structure.
He's brilliant, no doubt, but he shouldn't be in leadership.
Plot twist: he starts his own AI company
and calls it Actually-Open AI
[dead]
If Karpathy and Ilya join xAI, that would be a fun trajectory.
[flagged]
[flagged]
And the Boeing whistleblowers... Hopefully he avoids airplanes.
[flagged]
Too bad they're actually nice people hey?
[flagged]
Everyone in the photo is @'d in the post. It's @merettm / Jakub Pachocki who is taking over as Chief Scientist. Downvotes are probably because you cheekily mentioned his weight for some reason.
[flagged]
[flagged]
[flagged]
'Back in May 2023, before Ilya Sutskever started to speak at the event, I sat next to him and told him, “Ilya, I listened to all of your podcast interviews. And unlike Sam Altman, who spread the AI panic all over the place, you sound much more calm, rational, and nuanced. I think you do a really good service to your work, to what you develop, to OpenAI.” He blushed a bit, and said, “Oh, thank you. I appreciate the compliment.”
An hour and a half later, when we finished this talk, I looked at my friend and told her, “I’m taking back every single word that I said to Ilya.”
He freaked the hell out of people there. And we’re talking about AI professionals who work in the biggest AI labs in the Bay area. They were leaving the room, saying, “Holy shit.”
The snapshots above cannot capture the lengthy discussion. The point is that Ilya Sutskever took what you see in the media, the “AGI utopia vs. potential apocalypse” ideology, to the next level. It was traumatizing.'[0]
[0] What Ilya Sutskever Really Wants https://www.aipanic.news/p/what-ilya-sutskever-really-wants
I read the linked article and have no clue what the author is even trying to say...
[flagged]
[flagged]
Again?
Yikes
The future of the company doesn't depend on one engineer. If he left, it's likely because he had a vision that wasn't in line with Sam's or Microsoft's. Others will take his place, and OpenAI will likely reach Elon Musk's recent prediction that AI will improve 100x in the next few years.
So the CEO of Amazon Web Services and the Chief Scientist of OpenAI are on the market on the same day...
I'm not saying it's a conspiracy, but it's an awful big coincidence, especially since today is Tuesday and usually these things happen on a Friday.
Will join Yandex
Source?
Ilya knows how ChatGPT works. Any company that hires him will be able to catch up with ChatGPT.
Catching up right now is not a matter of tech innovation, but raw energy and compute.
Of course, the next -revolution- in AI could very well come from Ilya. But why would he bestow that honor on anyone? He can self-fund it if he wants. It's an R&D project, not a scaling problem.
I think what is probably very stressful about this space is virtually everyone knows how ChatGPT works. It is not a theoretical leap. It's actually fairly predictable how this shakes out, and OpenAI is pretty vulnerable.
An LLM is a curiosity without user data; anyone with a big silo of data can put out something years behind the frontier and still instantly see huge usage. No one wants to go to AI, they want AI to come to them. Unless OpenAI can stake a claim in a super novel way, they're the next Dropbox.
It's not like someone is going to use insider OpenAI knowledge to build an LLM so advanced you switch email, phone, or ERP providers
You will still need the compute resources of Microsoft, Meta etc.
And they have their own people who equally know how LLMs work.
Even raising funds is not a certainty given that VCs are becoming more cautious with AI as they realise it's now a platform fight between the mega corporations.
I don't think it's good to lionize people like that, like some kind of tech Übermensch. He neither knows everything about ChatGPT, nor is he the only person there to know a whole lot about it.
If his concern was irresponsible AI proliferation, too much commercial focus and wanting to move more carefully, accelerating competition doesn't seem like it would align with his goals.
If he really is quitting over ethics he isn't going to Google or Meta. He'll probably go to Anthropic or academia. But who knows.
Everyone knows how ChatGPT works.
First Andrej, and now Ilya. Talent exodus at OpenAI?