"Elizabeth Holmes says lack of working blood tests is delaying the company's products."
If you're spending (and losing) tens of billions a year and still can't afford a fundamental input to your product, then that's a sign that your company might not be viable given the present state of technology.
My reading on this is that they refuse to allow public access to newer models because inference is much more expensive than the $20 per month that the market is willing to pay. That's why they are working on dedicated inference hardware. And that's why they are still losing money: they are selling access to their AI for less than what it costs to operate. Releasing more resource intensive products (without raising prices) would then just increase their losses. And investor money being limited then forces them to not release new products.
And if he owned all that compute at this very moment his statement would instead read "Sam Altman says lack of grid power generation is delaying the company's products"
They literally have access to anything they want all they need to do is pick up the phone and people would be tripping over themselfs to provide compute services … for a price or course.
My suspicion is that they will work great but mostly need time for compute scaling to kick in (as was basically the case for the self-attention). Computers will need to get asymptotically faster before they work out.
Having said that, it's not my area and i'm mostly an uninformed enthusiast.
"Elizabeth Holmes says lack of working blood tests is delaying the company's products."
If you're spending (and losing) tens of billions a year and still can't afford a fundamental input to your product, then that's a sign that your company might not be viable given the present state of technology.
Unless what you’re doing creates a landslide of government-funded competition across the planet.
When all the major governments of the world face the exact same problem you do after copying what you did, you might be onto something.
Do all the governments of the world face a problem making autocomplete chatbots?
SHALL WE PLAY A GAME?
WOULDN'T YOU PREFER A NICE GAME OF CHESS?
They could be innovative in their initial idea, but that doesn't make them good programmers.
It could be that their code is just not great.
Telling me that there isn't enough compute power to do your work tells me that your code quality is probably mediocre.
My reading on this is that they refuse to allow public access to newer models because inference is much more expensive than the $20 per month that the market is willing to pay. That's why they are working on dedicated inference hardware. And that's why they are still losing money: they are selling access to their AI for less than what it costs to operate. Releasing more resource-intensive products (without raising prices) would just increase their losses. And since investor money is limited, that forces them to hold back new products.
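The arithmetic behind that claim is easy to sketch. Here's a hypothetical back-of-the-envelope in Python; every number is a made-up placeholder (OpenAI doesn't publish per-token costs or usage figures), but it shows how a flat subscription goes underwater once heavy users' inference costs exceed the price:

```python
# Back-of-the-envelope unit economics for the comment above.
# All numbers are hypothetical placeholders, not OpenAI's actual figures.
subscription = 20.00          # $/user/month: the price the market will bear
cost_per_1k_tokens = 0.002    # assumed blended inference cost, $/1k tokens
tokens_per_user = 15_000_000  # assumed heavy-user monthly token volume

inference_cost = cost_per_1k_tokens * tokens_per_user / 1000
margin = subscription - inference_cost
print(f"cost: ${inference_cost:.2f}, margin: ${margin:.2f}")
# → cost: $30.00, margin: $-10.00

# A more resource-intensive model multiplies cost_per_1k_tokens,
# so releasing it at a flat $20 only deepens the per-user loss.
```

Under these assumptions, each heavy user loses the company money every month, and a pricier-to-run model makes the margin worse, not better.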
And if he owned all that compute at this very moment his statement would instead read "Sam Altman says lack of grid power generation is delaying the company's products"
What a convenient and self-flattering excuse.
In the immortal words of Homer Simpson, "This is everybody's fault but mine."
They literally have access to anything they want. All they need to do is pick up the phone and people would be tripping over themselves to provide compute services … for a price, of course.
[flagged]
Did it not occur to you that LeCun is being given plenty of resources to incorporate energy based models at Meta?
Did they figure out how to scale them? Because if not, we are working that out.
My suspicion is that they will work great but mostly need time for compute scaling to kick in (as was basically the case for self-attention). Computers will need to get substantially faster before they pan out.
Having said that, it's not my area and I'm mostly an uninformed enthusiast.
Well we are working on scaling them.
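For readers wondering what "scaling energy-based models" even refers to: an EBM defines a probability as p(x) ∝ exp(−E(x)) for a learned energy function, and drawing a sample requires an iterative search over x rather than a single forward pass, which is one reason inference compute matters so much for them. A toy sketch with a hand-written quadratic energy (real EBMs learn E with a neural network; this is purely illustrative):

```python
import math
import random

# Toy energy-based model: p(x) ∝ exp(-E(x)).
# This quadratic energy is a hypothetical stand-in; in practice E
# would be a learned neural network over high-dimensional inputs.
def energy(x):
    return (x - 2.0) ** 2  # most probable point at x = 2

def metropolis_sample(n_steps=20_000, step=0.5, seed=0):
    """Draw correlated samples from p(x) ∝ exp(-E(x)) via Metropolis."""
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n_steps):
        proposal = x + rng.gauss(0.0, step)
        # Accept with probability min(1, exp(E(x) - E(proposal))),
        # done in log space to avoid overflow.
        if math.log(rng.random() + 1e-12) < energy(x) - energy(proposal):
            x = proposal
        samples.append(x)
    return samples

samples = metropolis_sample()
burned_in = samples[5_000:]          # discard warm-up steps
mean = sum(burned_in) / len(burned_in)
```

Note that every single sample costs thousands of energy evaluations here; with a large network as the energy function, that multiplier is exactly the kind of inference bill the thread is arguing about.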