As someone who appreciates machine learning, the main dissonance I have when interacting with Microsoft's implementation of AI is that it feels like "don't worry, we will do the thinking for you".
This appears everywhere, with every tool trying to autocomplete every sentence and action, creating a very clunky ecosystem where I am constantly pressing 'escape' and 'backspace' to undo some action that is trying to rewrite what I am doing into something I don't want or didn't intend.
It wastes time, none of the things I want are optimized, and their tools feel like they are helping people write "good morning team, today we are going to do a Business, but first we must discuss the dinner reservations" emails.
I broadly agree. They package "copilot" in a way that constantly gets in your way.
The one time I thought it could be useful, in diagnosing why two Azure services seemingly couldn't talk to each other, it was completely useless.
I had more success describing the problem in vague terms to a different LLM than with an AI supposedly plugged into the Azure organisation, one that could supposedly query information directly.
My 2 cents: it's what happens when OKRs are executed without a vision, or when the vision is exactly that one, and well, it sucks.
The goal is AI everywhere, so top-down, everyone will implement it and be rewarded for doing so; there are incentives for each team to do it - money, promotions, budget.
100 teams? 100 AI integrations or more. It's not the 10 entry points it should (maybe) be.
This means that for a year or more there will be a lot of AI everywhere, impossible to avoid, and usability will sink.
Now, if this was only done by Microsoft, I would not mind. The issue is that this behavior is getting widespread.
I had the experience too. Working with Azure is already a nightmare, but the copilot tool built into Azure is completely useless for troubleshooting. I just pasted log output into Claude and got actual answers. Microsoft's first-party stuff just seems so half-assed and poorly thought out.
Why is this, I wonder? Aren't the models trained on about the same blob of huggingface web scrapes anyway? Does one tool do a better job of pre-parsing the web data, or pre-parsing the prompts, or enhancing the prompts? Or a better sequence of self-repair in an agent-like conversation? Or maybe more precision in the weights and a more expensive model?
their products are just good enough to let them put a checkbox in a feature table, so the product can be sold to someone who will then never have to use it
but not even a penny more will be spent than the absolute bare minimum to allow that
this explains Teams, Azure, and everything else of theirs you can think of
How do you QA adding a weird prediction tool to, say, Outlook? I have to use Outlook at one of my clients and have switched to writing all emails in VS Code and then pasting them into Outlook, as "autocomplete" is unbearable… Not sure QA is possible with tools like these…
Part of QA used to be evaluating whether a change was actually helpful in doing the thing it was supposed to be doing.
... why, it's almost like in eliminating the QA function, we removed the final check on developers (read: PMs) implementing whatever ass-backwards feature occurs to them.
Just in time for 'AI all the things!' directives to come down from on high.
exactly!! though evaluating whether a change was actually helpful in doing the thing it was supposed to be doing is hard when no one knows what it is supposed to be doing :)
I had a WTF moment last week. I was writing SQL, and there was no autocomplete at all. Then a chunk of autocompleted code appeared that looked like an SQL injection attack, with some "drop table" mixed in. The code would not have worked, it was syntactically rubbish, but it still looked spooky; I should have taken a screenshot of it.
This is the most annoying thing, and it's even happened to JetBrains' Rider too.
Some stuff that used to work well with smart autocomplete / intellisense got worse with AI based autocomplete instead, and there isn't always an easy way to switch back to the old heuristic based stuff.
You can disable it entirely and get dumb autocomplete, or get the "AI powered" rubbish, but they had a very successful heuristic / statistics based approach that worked well without suggesting outright rubbish.
In .NET we've had IntelliSense for 25 years that would only suggest properties that could exist, and then a while ago I suddenly found that VS Code auto-completed properties that don't exist.
It's maddening! The least they could have done is put in a Roslyn pass to filter out the impossible.
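For illustration, here's a minimal sketch of that post-filter idea in Python (all names and member lists below are made up; in .NET the real source of truth would be Roslyn's semantic model): check every suggested member against what the compiler already knows exists, and drop the rest before they're ever shown.

    # Hypothetical post-filter: drop completions naming members the compiler
    # says don't exist. A plain dict stands in for Roslyn's semantic model.
    known_members = {
        "HttpClient": {"GetAsync", "PostAsync", "Timeout", "BaseAddress"},
    }

    def filter_suggestions(receiver_type: str, suggestions: list[str]) -> list[str]:
        """Keep only completions that name members known to exist."""
        existing = known_members.get(receiver_type, set())
        return [s for s in suggestions if s in existing]

    # The model hallucinated GetJsonQuickly; the filter removes it.
    print(filter_suggestions("HttpClient", ["GetAsync", "GetJsonQuickly", "Timeout"]))
    # -> ['GetAsync', 'Timeout']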
Loosely related: voice control on Android with Gemini is complete rubbish compared to the old Assistant. I used to be able to have texts read out and dictate replies whilst driving. Now it's all nondeterministic, which adds cognitive load and is unsafe in the same way touch screens in cars are worse than tactile controls.
I've been immensely frustrated by no longer being able to set reminders by voice. I got so used to saying "remind me in an hour to do x" and now that's just entirely not an option.
I'm a very forgetful person and easily distracted. This feature was incredibly valuable to me.
I got Gemini Pro (or whatever it's called) for free for a year on my new Pixel phone, but there's an option to keep Assistant, which I'm using.
Gotta love the enshittification: "new and better" meaning more CPU cycles burned for a worse experience.
I just have a shortcut to the Gemini webpage on my home screen for when I want to use it - though for some reason I can't place a regular shortcut (maybe it's my ancient launcher, which isn't even in the Play Store anymore), so I had to make a Tasker task that opens the webpage when run.
This is my biggest frustration. Why not check with the compiler and only generate code that would actually compile? I've had this with Go and .NET in the JetBrains IDEs.
Had to turn ML auto-completion off. It was getting in the way.
You can still use the older ML-model (and non-LLM-based!) IntelliCode completion suggestions - they're buried in the VS Installer as an optional feature entirely separate from anything branded Copilot.
The most WTF moment for me was that recent Visual Studio versions hooked up the “add missing import” quick fix suggestion to AI. The AI would spin for 5s, then delete the entire file and only leave the new import statement.
I’m sure someone on the VS team got a pat on the back for increasing AI usage but it’s infuriating that they broke a feature that worked perfectly for a decade+ without AI. Luckily there was a switch buried in settings to disable the AI integration.
There is no setting to revert to the old, very reliable, high-quality "AI" autocomplete: the one that did not recommend class methods that don't exist, and that figured out the pattern I was 20 lines into writing, without randomly suggesting 100 lines of new code that only disrupts my view of the code I am trying to work on.
I even ticked the "Don't do multiline suggestions" checkbox because the above was so absurdly anti-productive, but it was ignored.
The last time I asked Gemini to assist me with some SQL, this is what I got (inside my Postgres query form):
This task cannot be accomplished
USING
standard SQL queries against the provided database schema. Replication slots
managed through PostgreSQL system views AND functions,
NOT through user-defined tables. Therefore,
I must return
Gemini weirdly messes things up, even though it seems to have the right information - something I started noticing more often recently. I'd ask it to generate a curl command to call some API, and it would describe (correctly) how to do it, and then generate the code/command, but the command would have obvious things missing, like the 'https://' prefix in some cases, sometimes the API path, sometimes the auth header/token - even though it mentioned all of those things correctly in the text summary it gave above the code.
I feel like this problem was far less prevalent a few months/weeks ago (before gemini-3?).
Using it for research/learning purposes has been pretty amazing though, while claude code is still best for coding based on my experience.
This is a great post. Next time you see it, grab a screenshot, put it on GitHub Pages, and post it here on HN. It will generate lots of interesting discussion about rubbish suggestions from poor LLM models.
This seems like what should be a killer feature: Copilot having access to configuration and logs and being able to identify where a failure is coming from. This stuff is tedious manually, since I basically run through a checklist of where the failure could occur and there's no great way to automate that, plus sometimes there are subtle typo-type issues. Copilot can generate the checklist reasonably well but can't execute on it, even from Copilot within Azure. Why not??
I have had great luck with ChatGPT trying to figure out a complex AWS issue with
“I am going to give you the problem I have. I want you to help me work backwards step by step and give me the AWS cli commands to help you troubleshoot. I will give you the output of the command”.
It’s a combination of advice that ChatGPT gives me and my own rubberducking.
that's what happens when everyone is under the guillotine and their lives depend on overselling this shit ASAP instead of playing/experimenting to figure things out
I've worked in tech and lived in SF for ~20 years and there's always been something I couldn't quite put my finger on.
Tech has always had a culture of aiming for "frictionless" experiences, but friction is necessary if we want to maneuver and get feedback from the environment. A car can't drive if there's no friction between the tires and the road, despite being helped when there's no friction between the chassis and the air.
Friction isn't fungible.
John Dewey described this rationale in Human Nature and Conduct as thinking that "Because a thirsty man gets satisfaction in drinking water, bliss consists in being drowned." He concludes:
> It is forgotten that success is success of a specific effort, and satisfaction the fulfillment of a specific demand, so that success and satisfaction become meaningless when severed from the wants and struggles whose consummations they are, or when taken universally.
In "Mind and World", McDowell criticizes this sort of thinking, too, saying:
> We need to conceive this expansive spontaneity as subject to control from outside our thinking, on pain of representing the operations of spontaneity as a frictionless spinning in a void.
And that's really what this is about, I think. Friction-free is the goal but friction-free "thought" isn't thought at all. It's frictionless spinning in a void.
I teach and see this all the time in EdTech. Imagine if students could just ask the robot XYZ and how much time it'd free up! That time could be spent on things like relationship-building with the teacher, new ways of motivating students, etc.
Except...those activities supply the "wants and struggles whose consummations" build the relationships! Maybe the robot could help the student, say, ask better questions of the teacher, or direct the student to peers who were similarly confused but figured it out.
But I think that strikes many tech-minded folks as "inefficient" and "friction-ful". If the robot knows the answer to my question, why slow me down by redirecting me to another person?
This is the same logic that says making dinner is a waste of time and we should all live off nutrient mush. The purpose of preparing dinner is to make something you can eat, and the purpose of eating is nutrient acquisition, right? Just beam those nutrients into my bloodstream and skip the rest.
Not sure how to put this all together into something pithy, but I see it all as symptoms of the same cultural impulse. One that's been around for decades and decades, I think.
People want the cookie, but they also want to be healthy. They want to never be bored, but they also want to have developed deep focus. They want instant answers, but they also want to feel competent and capable. Tech optimizes for revealed preference in the moment. Click-through rates, engagement metrics, conversion funnels: these measure immediate choices. But they don't measure regret, or what people wish they had become, or whether they feel their life is meaningful.
Nobody woke up in 2005 thinking "I wish I could outsource my spatial navigation to a device." They just wanted to not be lost. But now a generation has grown up without developing spatial awareness.
> Tech optimizes for revealed preference in the moment.
I appreciate the way you distinguish this from actual revealed preference, which I think is key to understanding why what tech is doing is so wrong (and, bluntly, evil) despite it being what "people want". I like the term "revealed impulse" for this distinction.
It's the difference between choosing not to buy a bag of chips at the store or a box of cookies, because you know it'll be a problem and your actual preference is not to eat those things, and having someone leave chips and cookies at your house without your asking, and giving in to the impulse to eat too many of them when you did not want them in the first place.
Example from social media: My "revealed preference" is that I sometimes look at and read comments from shit on my Instagram algo feed. My actual preference is that I have no algo feed, just posts on my "following" tab, or at least that I could default my view to that. But IG's gone out of their way (going so far as disabling deep link shortcuts to the following tab, which used to work) to make sure I don't get any version of my preference.
So I "revealed" that my preference is to look at those algo posts sometimes, but if you gave me the option to use the app to follow the few accounts I care about (local businesses, largely) but never see algo posts at all, ever, I'd hit that toggle and never turn it off. That's my actual preference, despite whatever was "revealed". That other preference isn't "revealed" because it's not even an option.
Just like with the chips and cookies, the costs of social media are delayed and diffuse. Eating/scrolling feels good now. The cost (diminished attention span, shallow relationships, health problems) shows up gradually over years.
Yes, I agree with this. I think more people than not would benefit from actively cultivating space in their lives to be bored. Even something as basic as putting your phone in the internal zip part of your bag, so when you're standing in line at the store/post office/whatever you can't be arsed to dig it out and instead are in your head or aware of your surroundings. Both can be such wonderful and interesting places, but we seem to forget that now.
I think that's partially true. The point is to have the freedom to pursue higher-level goals. And one thing tech doesn't do - and education in general doesn't do either - is give experience of that kind of goal setting.
I'm completely happy to hand over menial side-quest programming goals to an AI. Things like stupid little automation scripts that require a lot of learning from poor docs.
But there's a much bigger issue with tech products - like Facebook, Spotify, and AirBnB - that promise lower friction and more freedom but actually destroy collective and cultural value.
AI is a massive danger to that. It's not just about forgetting how to think, but how to desire - to make original plans and have original ideas that aren't pre-scripted and unconsciously enforced by algorithmic control over motivation, belief systems, and general conformity.
Tech has been immensely destructive to that impulse. Which is why we're in a kind of creative rut where too much of the culture is nostalgic and backward-looking, and there isn't that sense of a fresh and unimagined but inspiring future to work towards.
I don't think I could agree with you more. I think that more in tech and business should think about and read about philosophy, the mind, social interactions, and society.
EdTech, for example, really seems to neglect the kind of bonds that people form when they go through difficult things together, and pushing through difficulties is how we improve. Asking a robot xyz does not improve us. AI and LLMs do not know how to teach; they are not Socratic, pushing and prodding at our weaknesses and assessing us so we improve. They just say how smart we are.
This is perhaps one of the most articulate takes on this I have ever read - thank you!
And - for myself, it was friction that kickstarted my interest in "tech" - I bought a janky modem, and it had IRQ conflicts with my Windows 3 mouse at the time - so, without internet (or BBS's at that time), I had to troubleshoot and test different settings with the 2-page technical manual that came with it.
It was friction that made me learn how to program and read manuals/syntax/language/framework/API references to accomplish things for hobby projects - which then led to paying work. It was friction not having my "own" TV and access to all the visual media I could consume "on-demand" as a child, therefore I had to entertain myself by reading books.
Friction is an element of the environment like any other. There's an "ecology of friction" we should respect. Deciding friction is bad and should be eradicated is like deciding mosquitoes or spiders or wolves are bad and should be eradicated.
Sometimes friction is noise. Sometimes friction is signal. Sometimes the two can't be separated.
I learned much the same way you did. I also started a coding bootcamp, so I've thought a lot about what counts as "wasted" time.
I think of it like building a road through wilderness. The road gets you there faster, but careless construction disturbs the ecosystem. If you're building the road, you should at least understand its ecological impact.
Much of tech treats friction as an undifferentiated problem to be minimized or eliminated—rather than as part of a living system that plays an ecological role in how we learn and work.
Take Codecademy, which uses a virtual file system with HTML, CSS, and JavaScript files. Even after mastering the lessons, many learners try the same tasks on their own computers and ask, "Why do I need to put this CSS file in that directory? What does that have to do with my hard drive?"
If they'd learned directly on their own machines, they would have picked up the hard-drive concepts along the way. Instead, they learned a simplified version that, while seemingly more efficient for "learning to code," creates its own kind of waste.
But is that to say the student "should" spend a week struggling? Could they spend a day, say, and still learn what the friction was there to teach? Yes, usually.
I tell everyone to introduce friction into their lives...especially if they have kids. Friction is good! Friction is part of the je ne sais quoi that makes humans create.
Thank you for expressing this. It might not be pithy, but it's something I've been thinking about a lot for a long time, and this is a well-articulated way of expressing it.
In my experience, part of the 'frictionless' experience is also to provide minimal information about any issues and no way to troubleshoot. Everything works until it doesn't, and when it doesn't, you are now at the mercy of the customer support queue and of getting an agent with the ability to fix your problem.
> but friction is necessary if we want to maneuver and get feedback from the environment
You are positing that we are active learners whose goal is clarity of cognition, and that friction and cognitive struggle are part of that. Clarity is attempting to understand the "know-how" of things.
Tech, and dare I say the natural laziness inherent in us, instead wants us to be zombies fed the "know-that", as that is deemed sufficient - ie the dystopia portrayed in the Matrix movies, or the rote student regurgitating memes. But know-that is not the same as know-how, and know-how keeps evolving, requiring a continuously learning agent.
Looking at it from a slightly different angle, one I find most illuminating: removing "friction" is like removing "difficulty" from a game, and "friction free" as an ideal is like "cheat codes from the start" as an ideal. It's making a game where there's a single button that says "press here to win." The goal isn't to remove "friction", it's to remove a specific type of valueless friction and replace it with valuable friction.
I don't know. You can be banging your head against the wall to demolish it or you can use manual/mechanical equipment to do so. If the wall is down, it is down. Either way you did it.
> ...Microsoft's implementation of AI feels like "don't worry, we will do the thinking for you"
I feel like that describes nearly all of the "productivity" tools I see in AI ads. Sadly enough, it also aligns with how most people use it, in my personal experience. Just a total off-boarding of needing to think.
Sheesh, I notice I also just ask an assistant quite a bit rather than putting in the effort to think about things. Imagine people who drive everywhere with GPS (even for routine drives) and are lost without it, and imagine that for everything needing a little thought...
As an old school interface/interaction designer, I see this as a direct consequence of how the discipline of software design has evolved in the last decade or two.
We've gone from conceiving of software as tools - constructs that enhance and amplify their user's skills and capabilities - to magic boxes that should aim to do everything with just one button (and maybe even that is one action too many).
This shift in thinking is visible in how junior designers and product managers are trained and incentivized to think about their work. “Anticipating the user’s intent”, “providing a magical experience”, “making the simplest, most beautiful and intuitive product” - all things that are so routine parlance now that they sound trite, but that would make any software designer from the 80s/90s catatonic because of how orthogonal they are to good tool design.
To caricature a bit, the industry went from being run by people designing heavy machinery to people designing Disneyland rides. Disneyland rides are great and have their place, but you probably don't want your tractor to be designed like one.
Perhaps this is a feature and not a bug for MS. Every time you hit escape or accept, you're giving them more training samples. The more training data they can get you to give them, the better. So they WANT to be throwing out possibly irrelevant suggestions at every opportunity.
As much as I love JetBrains (IntelliJ and friends), I have the same feeling this year. The rate at which I undo an accidental tab/whatever far exceeds the rate at which I accept a suggestion. I'm not anti-LLM -- they are great for many things, but I am tired of undoing shitty suggestions. Literally, many of them produce a syntax error. Please don't read this post as dumping on JetBrains. I still love their products.
> It is wasting time and none of the things I want are optimized, their tools feel like they are helping people write "good morning team, today we are going to do a Business, but first we must discuss the dinner reservations" emails.
No trolling: This is genius-level sarcasm. You do realise that most "business" emails are essentially this, right? Oh, right, you knew that already!
I agree. I am happiest just using plain Emacs for coding, every once in a while using an LLM separately, or once or twice a day using gemini-cli or codex for a single task.
My comment is about coding, but I have the same opinion on writing emails - once in a blue moon I will use an LLM manually.
>As someone who appreciates machine learning, the main dissonance I have with interacting with Microsoft's implementation of AI feels like "don't worry, we will do the thinking for you".
This is the nightmare scenario with AI, ie people settling for Microsoft/OpenAI et al to do the "thinking" for them.
It is alluring but of course it is not going to work. It is similar to what happened to the internet via social media, ie "kickback and relax, we'll give you what you really want, you don't have to take any initiative".
My pitch against this is to vehemently resist the chatbot-style solutions/interfaces and demand intelligent workspaces:
A world full of humans being guided by computers would be... dystopian.
Although I imagine a version where AI nudges humans who mindlessly trust it to be more vegetarian or take public transport, helping save the environment (an ironic wish, since AI is burning the planet). Of course "AI" is being guided by its owners, so there'd be a camp who uses Grok who'll still drive SUVs, eat meat, and be racist idiots...
The disappointing thing is I'd rather they spend the time improving security, but it sounds like all cycles are shoved into making AI shovels. Last year, the CEO promised security would come first, but that's not the case.
AI agent technology likely isn’t ready for the kind of high-stakes autonomous business work Microsoft is promising.
It's unbelievable to me that tech leaders lack the insight to recognize this.
So how to explain the current AI mania being widely promoted?
I think the best fit explanation is simple con artistry. They know the product is fundamentally flawed and won't perform as being promised. But the money to be made selling the fantasy is simply too good to ignore.
In other words: pure greed. Over the longer term, this is a weakness, not a strength.
It's part of a larger economic con centered on the financial industry and the financialization of American industry. If you want this stuff to stop, you have to be hoping for (or even working toward) a correction that wipes out the incumbents who absolutely are working to maintain the masquerade.
It will hurt, and they'll scare us with the idea that it will hurt, but the secret is that we get to choose where it hurts - the same as how they've gotten to choose the winners and losers for the past two decades.
The author argues that this con has been caused by three relatively simple levers: Low dividend yields, legalization of stock buybacks, and executive compensation packages that generate lots of wealth under short pump-and-dump timelines.
If those are the causes, then simple regulatory changes to make stock buybacks illegal again, limit the kinds of executive compensation contracts that are valid, and incentivize higher dividend yields/penalize sales yields should return the market to the previous long-term-optimized behavior.
I doubt that you could convince the politicians and financiers who are currently pulling value out of a fragile and inefficient economy under the current system to make those changes, and if the changes were made I doubt they could last or be enforced given the massive incentives to revert to our broken system. I think you're right that it will take a huge disaster that the wealthy and powerful are unable to dodge and unable to blame on anything but their own actions, I just don't know what that event might look like.
Genuine question: I don't understand the economics of the stock market, and as such I participate very little (probably to my detriment). I sort of figure the original theory went like this:
"We have an idea to run a for profit endeavor but do not have money to set it up. If you buy from us a portion of our future profit we will have the immediate funds to set up the business and you will get a payout for the indefinite future."
And the stock market is for third party buying and selling of these "shares of profit"
Under these conditions, aren't all stocks a sort of millstone of perpetual debt for the company, such that it would behoove them to remove that debt, that is, buy back the stock? Naively, I assume this is a good thing.
If you don't understand a concept that's part of the stock market, reading the Investopedia article will go a long way. It's a nice site for basic overviews. https://www.investopedia.com/terms/b/buyback.asp
The short answer is that the trend of frequent stock buybacks as discussed here is not being used to "eliminate debt" (restore private ownership), it's being used to puff up the stock price as a non-taxable alternative to dividend payouts (simply increasing the stock price by reducing supply does not realize any gains, while paying stockholders "interest" directly is subject to income tax). This games the metric of "stock price", which is used as a proxy for all sorts of things including executive performance and compensation.
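A toy illustration of that mechanism (all numbers invented): the same $100 paid out as a dividend is taxed on receipt, while spent on a buyback it shrinks the share count, lifting per-share figures with no taxable event for holders who sit tight.

    # $1,000 company: 100 shares at $10, earning $100/yr.
    shares, price, earnings = 100, 10.0, 100.0
    eps_before = earnings / shares                 # $1.00 per share

    cash_out = 100.0                               # payout budget either way
    bought_back = cash_out / price                 # buyback retires 10 shares
    eps_after = earnings / (shares - bought_back)  # $100 / 90 ~= $1.11

    # At an unchanged P/E of 10 the price drifts from $10 toward ~$11.11,
    # whereas a $1/share dividend would have been taxable income on receipt.
    print(f"EPS: {eps_before:.2f} -> {eps_after:.2f}")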
My view is that you don't want more layers. Chasing ever-increasing share prices favors shareholders (a limited number of generally rich people) over customers (likely to be average people). The incentives get out of whack.
I disagree. Those place the problem at the corporate level, when it clearly extends up to being a monetary issue. The first thing I would like to see is the various Fed and banking liquidity and credit facilities go away. They don't facilitate stability, but a fiscal shell game that has allowed numerous zombie companies to live far past their solvency. This in turn encourages widespread fiscal recklessness.
We're headed for a crunch anyway. My observation is that a controlled demolition has been attempted several times over the past few years, but in every instance, someone has stepped up to cry about the disaster that would occur if incumbents weren't shored up. Of course, that just makes the next occurrence all the more dire.
Stupidity, greed, and straight-up evil intentions do a bunch of the work, but ultimately short-term thinking wins because it's an attractor state. The influence of the wealthy/powerful is always outsized, but attractors and common-knowledge also create a natural conspiracy that doesn't exactly have a center.
So with AI, the way the natural conspiracy works out is like this. Leaders at the top might suspect it's bullshit, but don't care; they always fail upwards anyway. Middle management at non-tech companies suspect their jobs are in trouble on some timeline, so they want to "lead a modernization drive" to bring AI to places they know don't need it, even if it's a doomed effort that basically defrauds the company owners. Junior engineers see a tough job market and want to devalue experience to compete, so they decide that only AI matters and everything that came before is the old way.

Owners and investors hate expensive senior engineers who don't have to bow and scrape, think they have too much power, and would love to put them in their place. Senior engineers who are employed, and maybe the most clear-eyed about the actual capabilities of the technology, see the writing on the wall: you have to make this work even if it's handed to you in a broken state, because literally everyone is gunning for you. Those who are unemployed are looking around like, well, this is apparently the game one must play.

Investors will invest in any horrible doomed thing regardless of what it is, because they all think they are smarter than other investors and will get out just in time. Owners are typically too disconnected from whatever they own; they just want to exit/retire and are already mostly in the position of listening to lieutenants.
At every level, for every stakeholder, once things have momentum they don't need to be a healthy/earnest/noble/rational endeavor, any more than the advertising or attention economy was before it. Regardless of the ethics there or the current/future state of any specific tech... it's a huge problem when being locally rational pulls us into a state that's globally irrational.
Yes, that "attractor state" you describe is what I meant by "if the changes were made I doubt they could last or be enforced given the massive incentives to revert to our broken system". The older I get and the more I learn, the less I'm willing to ascribe faults in our society to individual evils or believe in the existence of intentionally concealed conspiracies rather than just seeing systemic flaws and natural conspiracies.
There was a long-standing illusion that people care about long-term thinking. But given the opportunity, people seem to take the short-term road with high risks instead of chasing a long-term gain, as they themselves might not experience the gain.
The timeframe of expectations has just shifted, as everyone wants to experience everything. Just knowing the possibility of things that can happen already affects our desires. And since everyone has limited time in life, we try to maximize our opportunities to experience as many things as possible.
It’s interesting to talk about this to older generation (like my parents in their 70s), because there wasn’t such a rush back then. I took my mom out to some cities around the world, and she mentioned how she really never even dreamed of a possibility of being in such places. On the other hand, when you grow in a world of technically unlimited possibilities, you have more dreams.
Sorry for rambling, but in my opinion this somewhat affects the economics of the new generation as well. Who cares about long-term gains if there's a chance of nobody experiencing the gain? Might as well risk it for the short-term one, for a possibility of some reward.
> correction that wipes out the incumbents who absolutely are working to maintain the masquerade
You need to also have a robust alternative that grows quickly in the cleared space. In 2008 we got a correction that cleared the incumbents, but the ensuing decade of policy choices basically just allowed the thing to re-grow in a new form.
I thought we pretty explicitly bailed out most of the incumbents. A few were allowed to be sacrificed, but most of the risk wasn't realized, and was instead rolled into new positions that diffused it across the economy. 2008's "correction" should have seen the end of most of our investment banks and auto manufacturers. Say what you want about them (and I have no particular love for either), but Tesla and Bitcoin are ghosts of the timeline where those two sectors had to rebuild themselves from scratch. There should have been more, and Goldman Sachs and GM et al. should not currently exist.
> A few were allowed to be sacrificed, but most of the risk wasn't realized, and instead rolled into new positions that diffused it across the economy.
Yeah that's a more accurate framing, basically just saying that in '08 we put out the fire and rehabbed the old growth rather than seeding the fresh ground.
> Tesla and Bitcoin are ghosts of the timeline where those two sectors had to rebuild themselves from scratch
I disagree, I think they're artifacts of the rehab environment (the ZIRP policy sphere). I think in a world where we fully ate the loss of '08 and started in a new direction you might get Tesla, but definitely not TSLA, and the version we got is really (Tesla+TSLA) IMO. Bitcoin to me is even less of a break with the pre-08 world; blockchain is cool tech but Bitcoin looks very much "Financial Derivatives, Online". I think an honest correction to '08 would have been far more of a focus on "hard tech and value finance", rather than inventing new financial instruments even further distanced from the value-generation chain.
> Goldman Sachs and GM et al. should not currently exist.
I would say yes and no on Tesla. Entities that survived because of the rehab environment actually expected Tesla to fail, and shorted it heavily. TSLA as it currently exists is a result of the short squeeze that ensued when it became clear the company was likely to become profitable. Its current, ridiculous valuation isn't a product of its projected earnings, but recoil from those large shorts blowing up.
In our hypothetical alternate timeline, I imagine that there would have still been capital eager to fill the hole left by GM, and possibly Ford. Perhaps Tesla would have thrived in that vacuum, alongside the likes of Fisker, Mullen, and others, who instead faced incumbent headwinds that sunk their ventures.
Bitcoin, likewise, was warped by the survival of incumbents. IIUC, those interests influenced governance in the early 2010s, resulting in a fork of the project's original intent: from a transactional medium that would scale as its use grew, to a store of value, as controlled by them as traditional currencies. In our hypothetical, traditional banks collapsed, and even survivors lost all trust. The trustless nature of Bitcoin, or some other cryptocurrency, maybe would have allowed it to supersede them. Deprived of both retail and institutional deposits, they simply did not have the capital to warp the crypto space as they did in the actual 2010s.
I call them "ghosts" because, yes, whatever they might have been, they're clearly now just further extensions of that pre-2008 world, enabled by our post-2008 environment (including ZIRP).
"In 2008 we got a correction that cleared the incumbents,"
I thought in 2008 we told the incumbents "you are the most important component of our economy. We will allow everybody to go down the drain but you. That's because you caused the problem, so you are the only ones to guide us out of it"
Looking forward to the OpenAI (and Anthropic) IPOs. It’s funny to me that this info is being “leaked” - they are sussing out the demand. If they wait too long, they won’t be able to pull off the caper (at these valuations). And we will get to see who has staying power.
It's obvious to me that all of OpenAI's announcements about partnerships and spending are gearing up for this. But I do wonder how Altman retains the momentum through to next year. What's the next big thing? A rocket company?
I have thought about dropping all the tech leaders entirely: only using LLMs by running them locally with Huggingface models, only using a small 3rd-party email provider, just using open source, and keeping Mastodon as my only social media.
What would be the effect? Ironically, more productive?
I am pissed at Microsoft now because my family plan for Office365 is set to renew and they are tagging on a surcharge of $30 for AI services I don’t want. What assholes: that should be a voluntary add on.
EDIT: I tried to cancel my Office365 plan, and they let me switch to a non-AI plan for the old price. I don’t hate them anymore.
The problem with "it will hurt" is that it will actually hurt the middle class by completely wiping it out, and maybe slightly inconvenience the rich. More like annoy the rich, really.
Yeah, it started with the whole Wall Street thing, with all the depression and wars that it brought, and it hasn't stopped; each cycle the curve has to go up, with exponential expectations of growth, until it explodes, taking the world economy to the ground.
How do you guarantee your accelerationism produces the right results after the collapse? If the same systems of regulation and power are still in place, then it would produce the same result afterwards.
I tend to agree, but there's something to be said for a retribution focus taking time and energy away from problem-solving. When market turmoil hits, stand up facilities to guarantee food and healthcare access, institute a nationwide eviction moratorium, and then let what remains of the free market play out. Maybe we pursue justice by actually prosecuting corporate malfeasance this time. The opposite of 2008.
Don't attribute to malice that which can equally be attributed to incompetence.
I think you’re over-estimating the capabilities of these tech leaders, especially when the whole industry is repeating the same thing. At that point, it takes a lot of guts to say “No, we’re not going to buy into the hype, we’re going to wait and see” because it’s simply a matter of corporate politics: if AI fails to deliver, it fails to deliver for everyone and the people that bought into the hype can blame the consultants / whatever.
If, however, AI ended up delivering and they missed the boat, they’re going to be held accountable.
It's much less risky to just follow industry trends. It takes a lot of technical knowledge, guts, and confidence in your own judgement to push back against an industry-wide trend at that level.
I suspect that AI is in an "uncanny valley" where it is definitely good enough for some demos, but will fail pretty badly when deployed.
If it works 99% of the time, then a demo of 10 runs is 90% likely to succeed. Even if it fails, as long as it's not spectacular, you can just say "yeah, but it's getting better every day!", and "you'll still have the best 10% of your human workers in the loop".
When you go to deploy it, 99% is just not good enough. The actual users will be much more noisy than the demo executives and internal testers.
When you have a call center with 100 people taking 100 calls per day, replacing those 10,000 calls with 99% accurate AI means you have to clean up after 100 bad calls per day. Some percentage of those are going to be really terrible, like the AI did reputational damage or made expensive legally binding promises. Humans will make mistakes, but they aren't going to give away the farm or say that InsuranceCo believes it's cheaper if you die. And your 99% accurate-in-a-lab AI isn't 99% accurate in the field with someone with a heavy accent on a bad connection.
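A quick back-of-envelope check of those numbers (assuming, as above, a flat 99% per-call success rate and independent calls):

    p = 0.99

    # A 10-run demo is flawless only if every run succeeds: 0.99**10 ~= 0.904.
    print(f"Flawless 10-run demo: {p ** 10:.1%}")               # ~90.4%

    # 100 agents x 100 calls/day at 99% accuracy still leaves daily messes.
    calls_per_day = 100 * 100
    print(f"Bad calls per day: {calls_per_day * (1 - p):.0f}")  # 100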
So I think that the parties all "want to believe", and to an untrained eye, AI seems "good enough" or especially "good enough for the first tier".
A big task my team did had measured accuracy in the mid-80s percent, FWIW.
I think the line of thought in this thread is broadly correct. The most value I’ve seen in AI is problems where the cost of being wrong is low and it’s easy to verify the output.
I wonder if anyone is taking good measurements on how frequently an LLM is able to do things like route calls in a call center. My personal experience is not good and I would be surprised if they had 90% accuracy.
>I suspect that AI is in an "uncanny valley" where it is definitely good enough for some demos
Sort of a repost on my part, but the LLMs are all really good at marketing and other similar things that fool CEOs and executives. So they think it must be great at everything.
> if AI fails to deliver, it fails to deliver for everyone and the people that bought into the hype can blame the consultants / whatever.
Understatement of the year. At this point, if AI fails to deliver, the US economy is going to crash. That would not be the case if executives hadn't bought in so hard earlier on.
Yep, either way things are going to suck for ordinary people.
My country has had bad economy and high unemployment for years, even though rest of the world is doing mostly OK. I'm scared to think what will happen once AI bubble either bursts or eats most white collar jobs left here.
> Don't attribute to malice that which can equally be attributed to incompetence.
This discourse needs to die. Incompetence + lack of empathy is malice. Even competence in the scenario they want to create is malice. It's time to stop sugar-coating it.
I keep fighting this stupid platitude [0]. By that logic, I fail to find anything malicious. Everything could be explained by incompetence, stupidity etc.
> At that point, it takes a lot of guts to say “No, we’re not going to buy into the hype, we’re going to wait and see” because it’s simply a matter of corporate politics
Isn't that the whole mythos of these corporate leaders, though? They are the ones with the vision and guts to break from the fold and stand out among the crowd?
I mean it's obviously bullshit, but you would think at least a couple of them actually would do something to distinguish themselves. They all want to be Steve Jobs but none of them have the guts to even try to be visionary. It is honestly pathetic
What you have is a lot of middle managers imposing change with random fresh ideas. The ones that succeed rise up the ranks. The ones that failed are forgotten, leading to survivorship bias.
Ultimately it's a distinction without a difference. Maliciously stupid or stupidly malicious invariably leads to the same place.
The discussion we should be having is how we can come together to remove people from power and minimize the influence they have on society.
We don't have the carbon budget to let billionaires who conspire from island fortresses in Hawaii do this kind of reckless stuff.
It's so dismaying to see these industries muster the capital and political resources to make these kinds of infrastructure projects a reality when they've done nothing comparable w.r.t. climate change.
It tells me that the issue around the climate has always been a lack of will, not ability.
Pure greed would have a strong incentive to understand what the market is actually demanding in order to maximize profits.
These attempts to try to steer demand despite clear indicators that it doesn't want to go in that direction aren't just driven by greed, they're driven by abject incompetence.
Also, if the current level of AI investment and valuations isn't justified by market demand (which I believe is the case), many of these people/companies are getting more money than they would without the unreasonable hype.
No, it's greed right now. They are fundamentally incapable of considering consequences beyond the immediate term.
If the kind of foresight and consideration you suggest were possible, companies wouldn't be on this self-cannibalizing path of exploiting customers right now for every red cent they can squeeze out of them. Long-term thinking would very clearly tell you that abusing your customers and burning all the goodwill the company built over a hundred years is idiotic beyond comparison. If you think about anything at all other than tomorrow's bottom line, you'd realize that the single best way to make a stable long-term business is to treat your customers with respect and build trust and loyalty.
But this behavior is completely absent in today's economy. Past and future don't matter. Getting more money right now is the only thing they're capable of seeing.
you seem to be committing the error of believing that the problem here is just that they’re not selling what people want to buy, instead of identifying the clear intention to _create_ the market.
> Pure greed would have a strong incentive to understand what the market is actually demanding in order to maximize profits.
Not necessarily; just look at this clip [1] from Margin Call, an excellent movie on the GFC. As Jeremy Irons says in that clip, the market (as usually understood in classical economics, with producers making things for clients/customers to purchase) is of no importance to today's market economy; almost all that matters, at the hundreds-of-billions to multi-trillion-dollar level, is for your company "to play the music" as well as the other (necessarily very big) market participants, "nothing more, nothing less" (again, quoting Irons in that movie).
There's nothing in it about "making what people/customers want" and all that, which is regarded as accessory, if it is taken into consideration at all. As another poster mentions in this thread, this is all the direct result of the financialization of much of the Western economy; this is how things work at this level, given these (financialized) inputs.
Given that they aren't meeting their sales targets at all, I guess that's a little bit encouraging about the discernment of their customers. I'm not sure how Microsoft has managed to escape market discipline for so long.
> I’m not sure how Microsoft has managed to escape market discipline for so long.
How would they? They are a monopoly, and partake in aggressive product bundling and price manipulation tactics. They juice their user numbers by enabling things in enterprise tenants by default.
If a product of theirs doesn't sell, they bundle it for "free" into the next tier up of license to drive adoption and upgrades. Case in point: the Intune suite (which includes Entra ID P2, remote assistance, and endpoint privilege management) will now be included in E5, and the price of E5 is going up (by $10/user/month, less than the now-bundled features cost when bought separately). People didn't buy it otherwise, so now there's an incentive to move customers off E3 and onto E5.
Now their customers are in a place where Microsoft can check boxes, even if the products aren't good, so there's little incentive to switch.
Try to price out Google Workspace (plus an Office license anyway, because someone will still need Excel), identity, EDR, MDM for Windows, Mac, and mobile, Slack, VoIP, DLP, etc. You won't come close to Microsoft's bundled pricing by piecing together the whole M365 stack yourself.
So yeah, they escape market discipline because they are the only choice. Their customers are fully captive.
Their customers largely aren't their users. Their customers are the purchasing departments at Dell, Lenovo, and other OEMs. Their customers are the purchasing departments at large enterprises who want to buy Excel. Their customers are the advertisers. The products where the customers and the users are the same people (Excel, MS flight simulator, etc.) tend to be pretty nice. The products where the customers aren't the users inevitably turn to shit.
Not really. It's just that the point you have to push people to get them to start pushing back on something tends to be quite high. And it's very different for different people on different topics.
In the past this wasn't such a big deal because businesses weren't so large or so frequently run by myopic sociopaths. Ebenezer Scrooge was running some small local business, not a globe spanning empire entangling itself with government and then imposing itself on everybody and everything.
Scrooge is a fictional person, and Microsoft has been getting away with it for as long as I've been alive, with people hating it probably just as long.
So I think GP definitely has a point.
Are you a fan of reading? Good character fiction is based on reality as understood at the time and is a great way to get insights into how and what people think, particularly as it's precisely those believable portrayals that tend to 'stick' with society. For example, even most of George R. R. Martin's tales are directly inspired by real things, very much living up to the notion that reality is much stranger than fiction! Or similarly, read something like Dune and the 60s leaks into it hard.
In modern times the tale of Scrooge probably wouldn't really resonate, nor 'stick', because we transitioned to a culture of worshiping wealth, consumerism, and materialism. See (relevant to this topic) how many people defend unethical actions by claiming that fiduciary duty precludes any value beyond greed. In the time of Scrooge this was not the case, and so it was a more viable cautionary tale that strongly resonated.
People think that because AI cannot replace a senior dev, it's a worthless con.
Meanwhile, pretty much every single person in my life is using LLMs almost daily.
Guys, these things are not going away, and people will pay more money to use them in future.
Even my mom asks ChatGPT to make a baking applet from a picture of the recipe she uploads, one that creates a simple checklist for adding ingredients (she forgets ingredients pretty often). She loves it.
This is where LLMs shine for regular people. She doesn't need it to create a 500k LOC turn-key baking tracking SaaS AWS back-end 5 million recipes on tap kitchen assistant app.
Yeah, she is, because when reality sets in, these models will probably have monthly cellphone/internet level costs. And training is the main money sink, whereas inference is cheap.
500,000,000 people paying $80/mo is roughly a 5-yr ROI on a $2T investment.
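Spelling that arithmetic out (the subscriber count and price are the hypotheticals above, not real figures):

    subscribers = 500_000_000
    price_per_month = 80
    years = 5

    revenue = subscribers * price_per_month * 12 * years
    print(f"${revenue / 1e12:.1f}T over {years} years")  # $2.4T, roughly the $2T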
I cannot believe on a tech forum I need to explain the "Get them hooked on the product, then jack up the price" business model that probably 40% of people here are kept employed with.
Right now they are (very successfully) getting everyone dependent on LLMs. They will pull the rug, and people will pay to get it back. And none of the labs care if 2% of people use local/Chinese models.
> And training is the main money sink, whereas inference is cheap.
False. Training happens once for a time period, but inference happens again and again every time users use the product. Inference is the main money sink.
"according to a report from Google, inference now accounts for nearly 60% of total energy use in their AI workloads. Meta revealed something even more striking: within their AI infrastructure, power is distributed in a 10:20:70 ratio among experimentation, training, and inference respectively, with inference taking the lion’s share."
I think there are 2 things at play here. LLMs are, without a doubt, absolutely useful/helpful but they have shortcomings and limitations (often worth the cost of using). That said, businesses trying to add "AI" into their products have a much lower success rate than LLM-use directly.
I dislike almost every AI feature in software I use but love using LLMs.
It's exactly the same situation as Tesla "self driving". It's sold and marketed in no uncertain terms, VERY EXPLICITLY that AI will replace senior devs.
As you admit, it can't do that. And everyone involved knows it.
This false dichotomy is still frustratingly all over the place. LLMs are useful for a variety of benign everyday use cases, but that doesn't mean they can replace a human at everything. And if those benign use cases are all they're good at, then the entire AI space right now is maybe worth $2B/year, tops. Which is still a good amount of money! Except that's roughly the amount of money OpenAI spends every minute, and it's definitely not "the next invention of fire" like Sam Altman says.
Even these everyday use-cases are infinitely varied and can displace entire industries. E.g. ChatGPT helped me get $500 in airline delay compensation after multiple companies like AirHelp blew me off: https://news.ycombinator.com/item?id=45749803
This single niche industry as a whole is probably worth billions alone.
Now multiply that by the number of niches that exist in this world.
Then consider the entire universe of formal knowledge work, where large studies (from self-reported national surveys to empirical randomized controlled trials on real-world tasks) have already shown significant productivity boosts, in the range of 30%. Now consider their salaries, and how much companies would be willing to pay to make their employees more productive.
Are your mother's cooking recipes gonna cover the billions and even trillions being spent here? I somehow doubt that, and it's funny to me that the killer usecase the hypesters use is stupid inane shit like this (no offense to your mom, but a recipe generator isn't something we should be speedrunning global economic collapse for)
> So how to explain the current AI mania being widely promoted?
Probably individual actors have different motivations, but let's spitball for a second:
- LLMs are genuinely a revolution in natural language processing. We can do things now in that space that were unthinkable single-digit years ago. This opens new opportunity spaces to colonize, and some might turn out quite profitable. Ergo, land rush.
- Even if the new spaces are not that much of a value leap intrinsically, some may still end up obsoleting earlier-generation products pretty much overnight, and no one wants to be the next Nokia. Ergo, defensive land rush.
- There's a non-zero chance that someone somewhere will actually manage to build the tech up into something close enough to AGI to serve, which in essence means deprecating the labor class. The benefits (to that specific someone, anyway...) would be staggering enough to make that a goal worth pursuing even if the odds of reaching it are unclear and arguably quite low.
- The increasingly leveraged debt that's funding the land rush's capex needs to be paid off somehow, and I'll venture everyone knows that the winners will possibly be able to, but not everyone will be a winner. In that scenario, you really don't want to be a non-winner. It's kind of like that joke where you don't need to outrun the lions, you only need to outrun the other runners, except in this case, the harder everyone runs, the bigger the lions become. (Which is a funny thought now, sure, but the feasting, when it comes, will be a bloodbath.)
- A few, I'll daresay, have perhaps been huffing each other's farts too deep and too long and genuinely believe the words of ebullient enthusiasm coming out of their own mouths. That, and/or they think everyone's job except theirs is simple actually, and therefore just this close to being replaceable (which is a distinct flavor of fart, although coming from largely the same sources).
So basically the mania is for the most part a natural consequence of what's going on in the overlap of the tech itself and the incentive structure within which it exists, although this might be a good point to remember that cancer and earthquakes too are natural. Either way, take care of yourselves and each other, y'all, because the ride is only going to get bouncier for a while.
> So how to explain the current AI mania being widely promoted?
CEOs have been sold on the ludicrous idea that "AI" will replace 60-80% of their total employee headcount over the next 2-3 years. This is also priced into current equity valuations.
I think on some level it is being done on the premise that further advancement requires an enormous capital investment and if they can find a way to fund that with today’s sales it will give the opportunity for the tech to get there (quite a gamble).
At this point, the people in charge have signed off on so much AI spending that they need it to succeed, otherwise they are the ones responsible for massive losses.
_Number would not go up sufficiently steeply_, would be the major concern, not collapse. Microsoft might end up valued as (whisper it) a normal mature stable company. That would be something like a quarter to a half what it's currently valued. For someone paid mostly in options, this is clearly a problem (and people at the top in these companies mostly _are_ compensated with options, not RSUs; if the stock price halves, they get _nothing_).
The cost of the boat sinking is also very high, and that's looking like the more likely scenario. Watching your competitors sink huge amounts of capital into a probably sinking boat is a valid strategy. The growth path they were already on was fine, no?
I have a feeling that Microsoft is setting themselves up for a serious antitrust lawsuit if they do what they are intending to. They should really be careful about introducing products into the OS that take away from all other AI shops. I fear this would cripple innovation as well if it's allowed, since Microsoft has drastically fatter wallets than most of their competition.
Trump has ushered in a truly lawless phase of American politics. I mean, it was kind of bad before, but at least there was a pretense of rule of law. A trillion dollar company can easily just buy its way out of any enforcement of such antitrust action.
Corruption is indeed going strong in the current corporate-controlled US group of lame actors posing as a government. At the least, Trump is now regularly falling asleep - the best evidence that you can use any surrogate puppet and the underlying policies will still continue.
If I mention a president who was more of a general secretary of the party, taking notes of decisions taken for him by lobbies from the largest corporations, falling asleep and having incoherent speech to the point that he seems to be way past the point of stroke, I don’t think anyone will guess Trump.
I mean, see Windows Vista. It was eventually patched up to the point where it was semi-usable (and then quietly killed off), but on introduction it was a complete mess. But... something had to be shipped, and this was something, so it was shipped.
(Vista wasn't the only one; Windows ME never even made it to semi-usable, and no-one even remembers that Windows 8 _existed_.)
Microsoft has _never_, as far as I know, been a company to be particularly concerned about product quality. The copilot stuff may be unusually bad, but it's not that aberrant for MS.
I was just in a thread yesterday with someone who genuinely believed that we're only seeing the beginnings of what the current breed of AI will get us, and that it's going to be as transformative as the introduction of the internet was.
Everything about the conversation felt like talking to a true believer, and there's plenty out there.
It's the hopes and dreams of the Next Big Thing after blockchain and web3 fell apart and everyone is desperate to jump on the bandwagon because ZIRP is gone and everyone who is risk averse will only bet on what everyone else is betting on.
Thus, the cycle feeds itself until the bubble pops.
I don't see how people don't see it. LLMs are a revolutionary technology and, for the first time since the iPhone, are changing how we interact with computers. This isn't blockchains. This is something we're going to use until something better replaces it.
I agree to some extent, but we’re also in a bubble. It seems completely obvious that huge revenue numbers aren’t around the corner, not enough to justify the spend.
> "someone who genuinely believed that we're only seeing the beginnings of what the current breed of AI will get us, and that it's going to be as transformative as the introduction of the internet was."
I think that. It's new technology and it always takes some years before all the implications and applications of new technology are fully worked out. I also think that we're in a bubble that will hose a lot of people when it pops.
1) We have barely scratched the surface of what is possible to do with existing AI technology.
2) Almost all of the money we are spending on AI now is ineffectual and wasted.
---
If you go back to the late 1990s, that is the state that most companies were at with _computers_. Huge, wasteful projects that didn't improve productivity at all. It took 10 years of false starts sometimes to really get traction.
It's interesting to think Microsoft was around back then too, taking roughly 14 years to regain an approximately 58% loss of their valuation.
AI research has always been a series of occasional great leaps between slogs of iterative improvements, from Turing and Rosenblatt to AlexNet and GPT-3. The LLM era will result in a few things becoming invisible architecture* we stop appreciating and then the next big leap starts the hype cycle anew.
*Think toll booths (“exact change only!”) replaced by automated license plate readers in just the span of a decade. Hardly noticeable now.
It's not just AI mania, it's been this way for over a decade.
When I first started consulting, organizations were afraid enough of lack of ROI in tech implementations that projects needed an economic justification in order to be approved.
Starting with cloud, leadership seemed to become rare, and everything was "us too!".
After cloud it was data/data visualization, then it was over-hiring during Covid, then it was RTO, and now it's AI.
I wonder if we will ever return to rationalization? The bellwether might be Tesla stock price (at a rational valuation).
If rationalization comes back, everyone will talk like in Michael Moore’s documentary about GM and Detroit. A manager’s salary after half a career will be around $120k, like in an average bank, and that would be succeeding. I don’t think we even imagine how much of a tsunami we’ve been surfing since 2000.
US technocapitalism is built on the premise of technological innovation driving exponential growth. This is why they are fixated on whatever provides an outlook for that. The risk that it might not work out is downplayed, because (a) they don’t want to hazard not being at the forefront in the event that it does work out, and (b) if it doesn’t work out, nobody will really hold them accountable for it, not the least because everybody does it.
With the mobile and cloud revolutions having run out of steam, AI is what promises the most growth by far, even if it is a dubious promise.
It’s a gamble, a bet on “the next big thing”. Because they would never be satisfied with there not being another “big thing”, or not being prominently part of it.
It's not "pure greed." It's keeping up with the Joneses. It's fear.
There are three types of humans: mimics, amplifiers, originators. ~99% of the population are basic mimics, and they're always terrified - to one degree or another - of being out of step with the herd. The hyper mimicry behavior can be seen everywhere and at all times, from classrooms to Tiktok & Reddit to shopping behaviors. Most corporate leadership are highly effective mimics, very few are originators. They desperately herd follow ('nobody ever got fired for buying IBM').
This is the dotcom equivalent of every business must be e and @ ified (the advertising was aggressively targeted to that at the time). 1998-2000, you must be e ready. Your hotdog stand must have its own web site.
It's not "fundamentally flawed". It is brilliant at what it does. What is flawed is how people are applying it to solve specific problems. It isn't a "do anything" button that you can just push. Every problem you apply AI to still has a ton of engineering work that needs to be done to make it useful.
I thought this for a while, but I've also been thinking about all the stupid, false stuff that actual humans believe. I'm not sure AI won't get to a point where even if it's not perfect it's no worse than people are about selectively observing policies, having wrong beliefs about things, or just making something up when they don't know.
> Every problem you apply AI to still has a ton of engineering work that needs to be done to make it useful.
Ok, but that isn't useful to me. If I have to hold the bot's hand to get stuff done, I'll just do it myself, which will be both faster and higher quality.
That’s not my experience at all, I’m getting it done much faster and the quality is on par. It’s hard to measure, but as a small business owner it’s clear to me that I now require fewer new developers.
You’re correct, you need to learn how to use it. But for some reason HN has an extremely strong anti-AI sentiment, unless it’s about fundamental research.
At this point, I consider these AI tools to be an invaluable asset to my work in the same way that search engines are. It’s integrated into my work. But it takes practice on how to use it correctly.
I think what it comes down to is that the advocates making false claims are relatively uncommon on HN. So, for example, I don't know what advocates you're talking about here. I know people exist who say they can vibe-code quality applications with 100k LoC, or that guy at Anthropic who claims that software engineering will be a dead profession in the first half of '26, and I know that these people tend to be the loudest on other platforms. I also know sober-minded people exist who say that LLMs save them a few hours here and there per week trawling documentation, writing a 200 line SQL script to seed data into a dev db, or finding some off-by-one error in a haystack. If my main or only exposure to AI discourse was HN, I would really only be familiar with the latter group and I would interpret your comment as very biased against AI.
Alternatively, you are referring to the latter group and, uh, sorry.
The whole point I tried to make when I said "you need to learn how to use it" is that it's not vibe coding. It has nothing to do with vibes. You need to be specific and methodical to get good results, and use it for appropriate problems.
I think the AI companies have over-promised in terms of “vibe” coding, as you need to be very specific, not at all based on “vibes”.
I’m one of those advocates for AI, but on HN it consistently gets downvoted no matter how I try to explain things. There’s a super strong anti-AI sentiment here.
There is no scenario where AI is a net benefit. There are three possibilities:
1. AI does things we can already do but cheaper and worse.
This is the current state of affairs. Things are mostly the same except for the flood of slop driving out quality. My life is moderately worse.
2. Total victory of capital over labor.
This is what the proponents are aiming for. It's disastrous for the >99% of the population who will become economically useless. I can't imagine any kind of universal basic income when the masses can instead be conveniently disposed of with automated killer drones or whatever else the victors come up with.
3. Extinction of all biological life.
This is what happens if the proponents succeed better than they anticipated. If recursively self-improving ASI pans out then nobody stands a chance. There are very few goals an ASI can have that aren't better accomplished with everybody dead.
What is the motivation for killing off the population in scenario 2? That's a post-scarcity world where the elites can have everything they want, so what more are they getting out of mass murder? A guilty conscience, potentially for some multiple of human lifespans? Considerably less status and fame?
Even if they want to do it for no reason, they'll still be happier if their friends and family are alive and happy, which recurses about 6 times before everybody on the planet is alive and happy.
It's not a post-scarcity world. There's no obvious upper bound on resources AGI could use, and there's no obvious stopping point where you can call it smart enough. So long as there are other competing elites, the incentive is to keep improving it. All the useless people will be using resources that could be used to make more semiconductors and power plants.
My suspicion is that it's because they (HN) are very concerned this technology is pushing hard into their domain expertise and feel threatened (and rightfully so).
While it will suck when that happens (and inevitably it will), that time is not now. I'm not one to say LLMs are useless, but they aren't all they're being marketed to be.
I think MSFT really needs some validated user stories. How many users actually want to "Improve my writing," "Create an image," "Understand what is changed" (e.g. recent edits), or "Visualize my data"?
Conversely, I bet there are a lot of people who want AI to improve things they are already doing repeatedly. For example, I click the same button in Epic every day because Epic can't remove a tab. Maybe Copilot could learn that I do this and just...do it for me? Like, Copilot could watch my daily habits and offer automation for recurring things.
But do you (or MSFT) trust it to do that correctly, consistently, and handle failure modes (what happens when the meaning of that button/screen changes)?
I agree, an assistant would be fantastic in my life, but LLMs aren't AGI. They can not reason about my intentions, don't ask clarifying questions (bring back ELIZA), and don't handle state in an interesting way (are there designs out there that automatically prune/compress context?).
>improve things they are already doing repeatedly. For example, I click the same button in Epic every day because Epic can't remove a tab. Maybe Copilot could learn that I do this and just...do it for me?
You could solve that issue (and probably lots of similar issues) with something like AutoHotkey. Seems like extreme overkill to have an autonomous agent watch everything you do just so it might possibly click a button.
AutoHotkey doesn't work well for Epic manipulation because Epic runs inside of a Citrix virtual machine. You can't just read window information and navigate that way. You'd have to have some sort of on-screen OCR to detect whether Epic is open, has focus, and is showing the tab that I want to close. Also, the tab itself can't be closed...I'm just clicking on the tab next to it.
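The pixel route is at least sketchable without a full AI agent, though: image-matching automation that clicks based on what's actually rendered, which is all Citrix hands you anyway. A rough illustration in Python with pyautogui - the reference screenshot, the offset, and the whole setup are hypothetical, nothing I've actually run against Epic:

```python
# Sketch: pixel-based automation for apps that only expose rendered pixels
# (e.g. published apps inside a Citrix session). Not Epic-specific: the
# reference image "worklist_tab.png" and the +120px offset are made up.
import pyautogui

def click_past_worklist_tab() -> bool:
    try:
        # Find the unwanted tab on screen by image similarity.
        # (confidence= needs the opencv-python package installed.)
        pos = pyautogui.locateCenterOnScreen("worklist_tab.png", confidence=0.9)
    except pyautogui.ImageNotFoundException:
        pos = None
    if pos is None:
        # Epic isn't visible or the layout changed: safest to do nothing.
        return False
    # Click the neighboring tab just to the right of the match,
    # mirroring the manual workaround described above.
    pyautogui.click(pos.x + 120, pos.y)
    return True

if __name__ == "__main__":
    click_past_worklist_tab()
```

Which is also the answer to the trust question upthread: this breaks the moment the resolution, theme, or layout shifts, so you want it to fail closed, exactly as the None branch does.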
And in an ideal world, one could report this as a bug or improvement and get it fixed for every single user without them needing to do anything at all.
Well, it isn't every user. We use a version of Epic called Epic Radiant. It's designed for radiologists. The tab that always opens is the radiologist worklist. The thing is, we don't use that worklist for procedures (I'm an interventional radiologist). So that tab is always there, always opens first, and always shows an empty list. It can't be removed in the Radiant version of Epic.
But why would Epic spend money improving or fixing their software? If they spend money developing their product then they can't spend that money on their adult playground of a campus!
I can’t find any use case for Copilot at all, and I frequently “sell” people Microsoft 365. (I don’t earn a commission; I just help them sign up for it.) I cannot come up with a reason anyone needs Copilot.
Meanwhile I spent 3-4 hours working with a client yesterday using Dreamhost’s free AI tools to get them up and running with a website quickly whilst I configured Microsoft 365, Cloudflare, email and so forth for them.
The difference between poison and medicine is the amount. AI is great and very useful, but they want the AI to replace you instead of supporting your needs.
"AI everywhere" is worse than "AI nowhere". What we need is "AI somewhere".
That's what we had before LLMs. Without the financially imposed contrivance of it needing to be used everywhere, it was free to be used where it made sense.
It's almost a revenge of the engineers. The big players' path to "success" has been to slap together some co-pilot loaded with enterprise bloat and try to compete with startups that solve the same problems in a much cleaner way.
Meanwhile, they believed the market was already theirs—so their logic became: fire the engineers, buy more GPUs.
I have mixed feelings about this. I've interviewed several people who were affected by these layoffs, and honestly, many of them were mediocre engineers by most measures. But that still doesn't make this a path to success.
>I've interviewed several people who were affected by these layoffs, and honestly, many of them were mediocre engineers by most measures. But that still doesn't make this a path to success.
How mediocre are we talking about here? (I’m curious)
You can find secret little pockets within Microsoft where individuals & small teams do nothing at all, day in and day out. I mean literally nothing. The game is to maximize life and minimize work at the expense of the company. The managers are in on the game and help with the cover-up. I find it hilariously awesome and kind of sad at the same time.
Anyway, one round of layoffs this year was specifically targeted at finding these pockets and snuffing them out. The evidence used to identify said pocket was slowly built out over a year ahead of time. It's very likely that these pockets also harbored poor & mediocre developers, it stands to reason that a poor or mediocre developer is more likely to gravitate to such a place.
Not saying all the developers that were laid off were in a free-loader pocket, or that this cohort must be the ones that were interviewed. I'm only suggesting that the mediocre freeloaders form a significant slice of the Venn diagram.
Damn, that is crazy. How do you measure it? AI use? I hope your saying this doesn't affect the employment prospects of the ones that aren't "mediocre" but happened to be on those teams.
I'm sure it's difficult enough for people to find work right now without you putting a knife in their back on the way out.
I don't know if AI was used, but I do know that git contributions were used as a starting point. From what I've heard, it was just individuals and the managers that enabled it.
Even DevBlogs and anything related to Java, .NET, C++ and Python out of Redmond seem to be all about AI, and anything else is now a low-priority ticket on their roadmaps.
Anyone who has had the pleasure of being forced to migrate to their new Fabric product can tell you why sales are low. It's terrible, not just because it's a rushed, buggy pile of garbage they want people to alpha-test on users, but because of the "AI first" design they are forcing into it. They hide so much of what's happening in the background that it is hard to feel like you can trust any of it. Like agentic "thinking" models with zero way to look into what they did to reach a conclusion.
It's so bizarre because their dev tools and frameworks are so well thought out. You'd think if they're using those, it should come out not janky. But I don't think they do use their own dev tools, and I also don't think it would help.
Super interesting how this arc has played out for Microsoft. They went from having this massive advantage in being an early OpenAI partner with early access to their models to largely losing the consumer AI space: Copilot is almost never mentioned in the same breath as Claude and ChatGPT. Though I guess their huge stake in OpenAI will still pay out massively from a valuation perspective.
Microsoft seems to be actively discarding the consumer PC market for Windows. It's gamers and enterprise, it seems. Enterprise users don't get a lot of say in what's on their desktop.
Hearing similar stories play out elsewhere too with targets being missed left and right.
There’s definitely something there with AI but a giant chasm between reality and the sales expectations on what’s needed to make the current financial engineering on AI make any sense.
> At the heart of the problem is the tendency for AI language models to confabulate, which means they may confidently generate a false output that is stated as being factual.
"Confabulate" is precisely the correct term; I don't know how we ended up settling on "hallucinate".
The bigger problem is that, whichever term you choose (confabulate or hallucinate), that's what they're always doing. When they produce a factually correct answer, that's just as much of a random fabrication based on training data as when they're factually incorrect. Either of those terms falsely implies that they "know" the answer when they get it right, but "confabulate" is worse because there aren't "gaps in their memory"; they're just always making things up.
About 2 years ago I was using Whisper AI locally to translate some videos, and "hallucinations" is definitely the right phrase for some of its output! So just like you might expect from a stereotypical schizo: it would stay on-task for a while, but then start ranting about random things, or "hearing things", etc.
Too much money is being spent on a technology that isn't ready to do what they're saying it can do. It feels like the 3G era all over again: billions spent on 3G licences which didn't deliver what they expected.
>> The Information notes that much of Microsoft’s AI revenue comes from AI companies themselves renting cloud infrastructure rather than from traditional enterprises adopting AI tools for their own operations.
And MS spends on buying AI hardware. That's a full circle.
It wants to help create things in Office documents, I imagine just saving you the copy and paste from the app or web form. The one thing I tried to get it to do was to take a spreadsheet of employees and add a column with their office numbers (it has access to the company directory). The response was something like "here's how you would look up an office number, you're welcome!"
It is functional at RAG stuff on internal docs but definitely not good - not sure how much of this is Copilot vs corporate disarray and access controls.
It won't send emails for me (which I would think is the agentic mvp) but that is likely a switch my organization daren't turn on.
Tldr: it's valuable as a normal LLM, very limited as an add-on to Microsoft's software ecosystem.
Chatting and everything you normally do in chats is there. Needle-hunting info out of all my Teams group chats is probably my favorite thing. It can retrieve info out of SharePoint, I guess.
Biggest complaint for me personally is that you run out of context very quickly. If you are used to having longer running chats on other platforms you won't be happy when Copilot tells you to make a new chat like 5 messages in.
Most of my clients are only interested in meeting minutes, and Otter does that for 25% of the price. I think in any given business the quantity of people who actually use textgen regularly is pretty low. My workplace is looking to downsize licenses and asking people to use it or lose it, because $21/user/mo is too much for an every-now-and-then novelty.
Despite having an unlimited war chest and all the necessary resources, I'm not expecting Microsoft to come out of this AI race as a winner. The easy investment was to throw billions at OpenAI to gain access to their tech, but that puts them in the weird position of not investing heavily in cultivating their own AI talent, and not being in control of their own destiny by having their own horse in the race with their own SOTA models.
Apple's having a similar issue, unlimited wealth that's outsourcing to external SOTA model providers.
But is it sold enough to regular Windows Home users? If MS brings an ultimatum: "you need to buy AI services to use Windows", they might get a bunch more clueless subscribers. In the same way as there's no ability to set up Windows without internet connection and MS account they could make it mandatory to subscribe to Copilot.
I think Microsoft's long-term plan is exactly that: to make Windows itself a subscription product. Windows 12 Home for $4.99 a month, Copilot included. It will be called OSaaS.
> In the same way as there's no ability to set up Windows without internet connection and MS account
Not true. They're clearly unwilling or unable to remove this code path fully, or they would have done so by now. There's just a different workaround for it every few years.
There’s probably some compliance requirement that it’s technically possible to set it up without an internet connection, so they leave it there, but make it unreasonably difficult for a majority to do it.
Microsoft had a great start with the exclusive rights over OpenAI tech, but they're not capable of really talking with developers within those large companies the way Google and AWS are, and both are rapidly catching up.
Blaming slow sales on salespeople is almost always a scapegoat. Reality is that either the product sells or it doesn’t.
Not saying that sales is useless, far from it. But with an established product that people know about, the sales team is more of a conduit than they are a resource-gathering operation.
I worked car sales for years. The same large dealership can have a person anyone would call a decent salesperson, and they made $4k a month. There were also two people at that dealership making $25k+ a month each.
If your organization is filled with the $4k type and not the $25k type, you're going to have a bad time.
I was #7 in the US while working at a small dealership. I moved to the large dealership mentioned above, and instantly that dealership became #1 for that brand in the country, something they had never done before. Because not only did I sell 34 cars a month without just cannibalizing others' sales, I showed others that you can show up one day and do well, so there weren't many excuses. The output of the entire place went up.
So, depending on the pay plan and hiring process, who exactly is working at Microsoft right now selling AI? I honestly have no idea. It could be rock stars and it could be the $4k guys happy they're making $10k at Microsoft.
No but several women that came to buy cars (some with male coworkers, or so they told me) eventually did over the years.
Tbh this wasn't some crazy brag post, as making $250-300k a year working 80 hours a week isn't all that impressive when software devs make more than that easily, and the top guys make many multiples of that.
"The technology is not useful", at least in enterprise contexts, is what this comes out to. Which is really where the money is, because some vibecoder paying $20/mo for Claude really doesn't matter (especially when it costs $100/mo to run inference for his queries in the first place). Enterprise is the only place this could possibly make money.
Think about it: MS has a giant advantage over every other AI vendor, in that they can directly insert the product into the OS and LOB apps without the business needing to onboard a new vendor. This is the best-case scenario, and by far the easiest sell for these tools. Given how badly they're failing, yeah, turns out orgs just don't see the value in it.
Next year will be interesting too: I suspect a large portion of the meager sales they managed to make will not renew, it'll be a bloodbath.
MS has a giant advantage over every other vendor for all kinds of products (including defunct ones). Sometimes they function well, sometimes they do not. Sometimes they make money, sometimes they do not. MS isn't the tech (or even enterprise tech) bell cow.
Considering enterprise typically is characterized by perfunctory tasks, information silos, and bit rot, they're a perfect application of LLMs. It's just Microsoft kind of sucks at a lot of things.
This is annoying because Ars is one of the better tech blogs out there, but it still has instances of biased reporting like this one. It's interesting to decipher this article with an eye on what they said, what they implied, and what they didn't say.
Would be good if a salesperson could chime in to keep me honest, but:
1. There is a difference between sales quotas and sales growth targets. The former is a goal, the latter is aspirational, a "stretch goal". They were not hitting their stretch goals.
2. The stretch goals were, like, doubling the sales in a year. And they dropped it to 25% or 50% growth. No idea what the adoption of such a product should be, but doubling sounds pretty ambitious? I really can't say, and neither did TFA.
3. Only a fraction met their growth goals, but I guess it's safe to assume most hit their sales quotas, otherwise that's what the story would be about. Also, this implies some DID hit their growth goals, which implies at least some doubled their sales in a year. Could be they started small so doubling was easy, or could be a big deal, we don't know.
4. Sales quotas get revised all the time, especially for new products. Apparently, this was for a single product, Foundry, which was launched a year ago, so I expect some trial and error to figure out the real demand.
5. From the reporting it seems Foundry is having problems connecting to internal data sources... indicating it's a problem with engineering, and not a problem with the AI itself. But TFA focuses on AI issues like hallucinations.
6. No reporting on the dozens of other AI products that MSFT has churned out.
As an aside, it seems data connectivity issues are a stickier problem than most realize (e.g. organizational issues) and I believe Palantir created the FDE role for just this purpose: https://nabeelqu.substack.com/p/reflections-on-palantir
Maybe without that strategy it would be hard for a product like this to work.
For the first time I have begun to doubt Microsoft's chosen course. (I am a retired MS principal engineer.) Their integration of copilot shows all the taste and good tradeoff choices of Teams but to far greater consequence. Copilot is irritating.
MS dependence on OpenAI may well become dicey because that company is going to be more impacted by the popping of the AI bubble than any other large player. I've read that MS can "simply" replace ChatGPT by rolling their own -- maybe they can. I wouldn't bet the company on it. Is Google going to be eager to license Gemini? Why would they?
For the first time? What about Zune, Nokia/Windows Phone, Windows Vista, attacking open source for decades, Scroogled campaign, all the lost Ballmer years, etc. Microsoft has had tons of blunders over time.
Microsoft is strange because it reports crazy growth numbers for Azure, but I never hear about any tech company using Azure (AWS and GCP dominate here). I know it's more popular in big enterprises, banks, pharma, government, etc., and companies like OpenAI use their GPU offerings. Then there's all the Office stuff (SharePoint, OneDrive, etc). Who knows what they include under Azure numbers. Even GitHub can be considered "cloud".
My point is, outside of Copilot, very few consider Microsoft when they are looking for AI solutions, and if you're not already using Azure, why would you even bother checking what they offer. At this point, their biggest ticket is their OpenAI stake.
With that being said, I should give them some credit. They do some interesting research and have some useful open source libraries they release and maintain in the AI space. But that's very different than building AI products and solutions for customers.
I wonder what part of these failed sales is due to GDPR requirements in the enterprise IT industry. I have my own European view, and it seems our governments are treating the matter very seriously. How do you ensure an AI agent won't leak anything? It has already happened that an agent wiped an entire database or cleared a disk and was later very "sorry" about it. Is the risk worth it?
Having worked with this stuff a lot, privacy isn't the biggest problem (though it is a problem). This shit just doesn't work. Wide-eyed investors might be willing to overlook the 20% failure rates, but ordinary people won't, especially when a single mistake can cost you millions of dollars. In most places I've seen AI shoved - especially Copilot - it takes more time to read and dismiss its crappy suggestions than it does to just do the work without it. But the really insidious case is when you don't realize it is making shit up and then you act on it. If you are lucky you embarrass yourself in front of a customer. If you are unlucky you unintentionally wipe out the production database. That's much more of an overt and immediate concern than leaking some PII.
Things are becoming increasingly unusable.
Yep, ship it out now, we'll figure out the rest later. I remember there was that motto, "respect the user." I am so tired.
Reminds me of when Google’s core mission was to put Google Plus integrations in everything
> Why is this, I wonder?
because that's Microsoft's business model
* That's modern Microsoft's desktop product business model
I hear tales of the before-times, when they had a QA department and took quality seriously.
Which was the other benefit of a formal QA org -- you had to be able to tell them what you changed and how it was supposed to work.
UX consistency also took a dive, both in MS products and in all the pseudo-webpage crap shipped as Electron apps.
Probably compute isn’t enough to serve everyone from a frontier LLM.
AI autocomplete suggesting outright rubbish is the most annoying thing, and it's even happened to JetBrains' Rider too.
Some stuff that used to work well with smart autocomplete / intellisense got worse with AI based autocomplete instead, and there isn't always an easy way to switch back to the old heuristic based stuff.
You can disable it entirely and get dumb autocomplete, or get the "AI powered" rubbish, but they had a very successful heuristic / statistics based approach that worked well without suggesting outright rubbish.
In .NET we've had intellisense for 25 years that would only suggest properties that could exist, and then suddenly I found a while ago that vscode auto-completed properties that don't exist.
It's maddening! The least they could have done is put in a roslyn pass to filter out the impossible.
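And the filtering pass wouldn't even need to be clever. A toy sketch of the idea in Python, where the known-member set and the regex "parser" are invented stand-ins for what a real symbol table (Roslyn for .NET, or the language server) already knows:

```python
# Toy post-filter: reject AI completions that reference members which,
# per the compiler, don't exist. KNOWN_MEMBERS stands in for a real
# symbol table; all names here are invented for illustration.
import re

KNOWN_MEMBERS = {"Length", "Substring", "ToUpper", "Trim"}

def accept_suggestion(snippet: str) -> bool:
    # Collect every ".identifier" member access in the suggested snippet.
    accessed = re.findall(r"\.([A-Za-z_]\w*)", snippet)
    # One unknown member is enough to throw the whole suggestion away.
    return all(member in KNOWN_MEMBERS for member in accessed)

print(accept_suggestion("name.ToUpper().Trim()"))  # True: keep it
print(accept_suggestion("name.ToUppercase()"))     # False: impossible member
```

A real integration would ask the compiler for the valid completion list instead of regexing, but the point stands: rejecting impossible members is cheap compared to generating the suggestion in the first place.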
Loosely related: voice control on Android with Gemini is complete rubbish compared to the old assistant. I used to be able to have texts read out and dictate replies whilst driving. Now it's all nondeterministic which adds cognitive load on me and is unsafe in the same way touch screens in cars are worse than tactile controls.
I've been immensely frustrated by no longer being able to set reminders by voice. I got so used to saying "remind me in an hour to do x" and now that's just entirely not an option.
I'm a very forgetful person and easily distracted. This feature was incredibly valuable to me.
I got Gemini Pro (or whatever it's called) for free for a year on my new Pixel phone, but there's an option to keep Assistant, which I'm using.
Gotta love the enshittification: "new and better" being more CPU cycles being burned for a worse experience.
I just have a shortcut to the Gemini webpage on my home screen if I want to use it, and for some reason I can't just place a shortcut (maybe it's my ancient launcher that's not even in the play store anymore), so I have to make a tasker task that opens the webpage when run.
This is my biggest frustration. Why not check with the compiler to generate code that would actually compile? I've had this with Go and .NET in the JetBrains IDE. Had to turn ML auto-completion off. It was getting in the way.
You can still use the older ML-model (and non-LLM-based!) IntelliCode completion suggestions - it’s buried in the VS Installer as an optional feature entirely separate from anything branded CoPilot.
The most WTF moment for me was that recent Visual Studio versions hooked up the “add missing import” quick fix suggestion to AI. The AI would spin for 5s, then delete the entire file and only leave the new import statement.
I’m sure someone on the VS team got a pat on the back for increasing AI usage but it’s infuriating that they broke a feature that worked perfectly for a decade+ without AI. Luckily there was a switch buried in settings to disable the AI integration.
The regular JetBrains IDEs have a setting to disable the AI-based inline completion, you can then just assign it to a hotkey and call it when needed.
I found that it makes the AI experience so much better.
There is no setting to revert to the old, very reliable, high-quality "AI" autocomplete that never recommended class methods that don't exist and that figured out the pattern in the 20 lines I was writing, without randomly suggesting 100 lines of new code that only disrupts my view of the code I am trying to work on.
I even clicked the "Don't do multiline suggestions" checkbox because the above was so absurdly anti-productive, but it was ignored.
Try disabling the "Enable the next edit suggestions" in the AI settings.
The last time I asked Gemini to assist me with some SQL I got (inside my postgres query form):
It feels almost haiku-like.
Gemini weirdly messes things up, even though it seems to have the right information - something I started noticing more often recently. I'd ask it to generate a curl command to call some API, and it would describe (correctly) how to do it, and then generate the code/command, but the command would have obvious things missing, like the 'https://' prefix in some cases, sometimes the API path, sometimes the auth header/token - even though it mentioned all of those things correctly in the text summary it gave above the code.
I feel like this problem was far less prevalent a few months/weeks ago (before gemini-3?).
Using it for research/learning purposes has been pretty amazing though, while claude code is still best for coding based on my experience.
Now this is prime software gore
Same thing happened to me today in vs code. A simple helm template:
```{{ .default .Values.whatever 10 }}``` instead of the correct ```{{ default 10 .Values.whatever }}```.
Pure garbage which should be solved by now. I don't understand how it can make such a mistake.
This is a great post. Next time you see it, grab a screenshot, put it on GitHub Pages and post it here on HN. It will generate lots of interesting discussion about rubbish suggestions from poor LLM models.
> rubbish suggestions from poor LLM models.
We get rubbish suggestions from SOTA(tm) LLM models too, y’know.
The problem with scraping the web for teaching AI is that the web is full of 'little bobby tables' jokes.
This seems like what should be a killer feature: Copilot having access to configuration and logs and being able to identify where a failure is coming from. This stuff is tedious manually since I basically run through a checklist of where the failure could occur and there’s no great way to automate that plus sometimes there’s subtle typo type issues. Copilot can generate the checklist reasonably well but can’t execute on it, even from Copilot within Azure. Why not??
"They package "copilot" in a way that constantly gets in your way."
And when you try to make it something useful, the response is usually "I can't do that"
I asked copilot in outlook webmail to search my emails for something I needed.
I can't do that.
that's the one use case where an LLM is helpful!
I have had great luck with ChatGPT trying to figure out a complex AWS issue with
“I am going to give you the problem I have. I want you to help me work backwards step by step and give me the AWS cli commands to help you troubleshoot. I will give you the output of the command”.
It’s a combination of advice that ChatGPT gives me and my own rubberducking.
that's what happens when everyone is under the guillotine and their lives depend on overselling this shit ASAP instead of playing/experimenting to figure things out
I've worked in tech and lived in SF for ~20 years and there's always been something I couldn't quite put my finger on.
Tech has always had a culture of aiming for "frictionless" experiences, but friction is necessary if we want to maneuver and get feedback from the environment. A car can't drive if there's no friction between the tires and the road, even though it's helped by there being no friction between the chassis and the air.
Friction isn't fungible.
John Dewey described this rationale in Human Nature and Conduct as thinking that "Because a thirsty man gets satisfaction in drinking water, bliss consists in being drowned." He concludes:
“It is forgotten that success is success of a specific effort, and satisfaction the fulfillment of a specific demand, so that success and satisfaction become meaningless when severed from the wants and struggles whose consummations they are, or when taken universally.”
In "Mind and World", McDowell criticizes this sort of thinking, too, saying:
> We need to conceive this expansive spontaneity as subject to control from outside our thinking, on pain of representing the operations of spontaneity as a frictionless spinning in a void.
And that's really what this is about, I think. Friction-free is the goal but friction-free "thought" isn't thought at all. It's frictionless spinning in a void.
I teach and see this all the time in EdTech. Imagine if students could just ask the robot XYZ and how much time it'd free up! That time could be spent on things like relationship-building with the teacher, new ways of motivating students, etc.
Except...those activities supply the "wants and struggles whose consummations" build the relationships! Maybe the robot could help the student, say, ask better questions of the teacher, or direct the student to peers who were similarly confused but figured it out.
But I think that strikes many tech-minded folks as "inefficient" and "friction-ful". If the robot knows the answer to my question, why slow me down by redirecting me to another person?
This is the same logic that says making dinner is a waste of time and we should all live off nutrient mush. The purpose of preparing dinner is to make something you can eat, and the purpose of eating is nutrient acquisition, right? Just beam those nutrients into my bloodstream and skip the rest.
Not sure how to put this all together into something pithy, but I see it all as symptoms of the same cultural impulse. One that's been around for decades and decades, I think.
People want the cookie, but they also want to be healthy. They want to never be bored, but they also want to have developed deep focus. They want instant answers, but they also want to feel competent and capable. Tech optimizes for revealed preference in the moment. Click-through rates, engagement metrics, conversion funnels: these measure immediate choices. But they don't measure regret, or what people wish they had become, or whether they feel their life is meaningful.
Nobody woke up in 2005 thinking "I wish I could outsource my spatial navigation to a device." They just wanted to not be lost. But now a generation has grown up without developing spatial awareness.
> Tech optimizes for revealed preference in the moment.
I appreciate the way you distinguish this from actual revealed preference, which I think is key to understanding why what tech is doing is so wrong (and, bluntly, evil) despite it being what "people want". I like the term "revealed impulse" for this distinction.
It's the difference between choosing not to buy a bag of chips at the store or a box of cookies, because you know it'll be a problem and your actual preference is not to eat those things, and having someone leave chips and cookies at your house without your asking, and giving in to the impulse to eat too many of them when you did not want them in the first place.
Example from social media: My "revealed preference" is that I sometimes look at and read comments from shit on my Instagram algo feed. My actual preference is that I have no algo feed, just posts on my "following" tab, or at least that I could default my view to that. But IG's gone out of their way (going so far as disabling deep link shortcuts to the following tab, which used to work) to make sure I don't get any version of my preference.
So I "revealed" that my preference is to look at those algo posts sometimes, but if you gave me the option to use the app to follow the few accounts I care about (local businesses, largely) but never see algo posts at all, ever, I'd hit that toggle and never turn it off. That's my actual preference, despite whatever was "revealed". That other preference isn't "revealed" because it's not even an option.
Just like with the chips and cookies, the costs of social media are delayed and diffuse. Eating/scrolling feels good now. The cost (diminished attention span, shallow relationships, health problems) shows up gradually over years.
> They want to never be bored
This is the problem. Learning to embrace boredom is the best thing I have ever done.
Yes, I agree with this. I think more people than not would benefit from actively cultivating space in their lives to be bored. Even something as basic as putting your phone in the internal zip part of your bag, so when you're standing in line at the store/post office/whatever you can't be arsed to just reach for your phone, and instead are in your head or aware of your surroundings. Both can be such wonderful and interesting places, but we seem to forget that now.
I think that's partially true. The point is to have the freedom to pursue higher-level goals. And one thing tech doesn't do - and education in general doesn't do either - is give experience of that kind of goal setting.
I'm completely happy to hand over menial side-quest programming goals to an AI. Things like stupid little automation scripts that require a lot of learning from poor docs.
But there's a much bigger issue with tech products - like Facebook, Spotify, and AirBnB - that promise lower friction and more freedom but actually destroy collective and cultural value.
AI is a massive danger to that. It's not just about forgetting how to think, but how to desire - to make original plans and have original ideas that aren't pre-scripted and unconsciously enforced by algorithmic control over motivation, belief systems, and general conformity.
Tech has been immensely destructive to that impulse. Which is why we're in a kind of creative rut where too much of the culture is nostalgic and backward-looking, and there isn't that sense of a fresh and unimagined but inspiring future to work towards.
I don't think I could agree with you more. I think that more people in tech and business should think about and read about philosophy, the mind, social interactions, and society.
EdTech, for example, really seems to neglect the kind of bonds that people form when they go through difficult things together, and pushing through difficulties is how we improve. Asking a robot XYZ does not improve ourselves. AI and LLMs do not know how to teach; they are not Socratic, pushing and prodding at our weaknesses and assessing us so we improve. They just say how smart we are.
This is perhaps one of the most articulate takes on this I have ever read - thank you!
And - for myself, it was friction that kickstarted my interest in "tech" - I bought a janky modem, and it had IRQ conflicts with my Windows 3 mouse at the time - so, without internet (or BBSs at that time), I had to troubleshoot and test different settings with the 2-page technical manual that came with it.
It was friction that made me learn how to program and read manuals/syntax/language/framework/API references to accomplish things for hobby projects - which then led to paying work. It was friction not having my "own" TV and access to all the visual media I could consume "on-demand" as a child; therefore I had to entertain myself by reading books.
Friction is good.
I think of it like this:
Friction is an element of the environment like any other. There's an "ecology of friction" we should respect. Deciding friction is bad and should be eradicated is like deciding mosquitoes or spiders or wolves are bad and should be eradicated.
Sometimes friction is noise. Sometimes friction is signal. Sometimes the two can't be separated.
I learned much the same way you did. I also started a coding bootcamp, so I've thought a lot about what counts as "wasted" time.
I think of it like building a road through wilderness. The road gets you there faster, but careless construction disturbs the ecosystem. If you're building the road, you should at least understand its ecological impact.
Much of tech treats friction as an undifferentiated problem to be minimized or eliminated—rather than as part of a living system that plays an ecological role in how we learn and work.
Take Codecademy, which uses a virtual file system with HTML, CSS, and JavaScript files. Even after mastering the lessons, many learners try the same tasks on their own computers and ask, "Why do I need to put this CSS file in that directory? What does that have to do with my hard drive?"
If they'd learned directly on their own machines, they would have picked up the hard-drive concepts along the way. Instead, they learned a simplified version that, while seemingly more efficient for "learning to code," creates its own kind of waste.
But is that to say the student "should" spend a week struggling? Could they spend a day, say, and still learn what the friction was there to teach? Yes, usually.
I tell everyone to introduce friction into their lives...especially if they have kids. Friction is good! Friction is part of the je ne sais quoi that makes humans create.
Thank you for expressing this. It might not be pithy, but it's something I've been thinking about a lot for a long time, and this is a well-articulated way of expressing it.
In my experience, part of the 'frictionless' experience is also to provide minimal information about any issues and no way to troubleshoot. Everything works until it doesn't, and when it doesn't, you are now at the mercy of the customer support queue and of getting an agent with the ability to fix your problem.
> but friction is necessary if we want to maneuver and get feedback from the environment
You are positing that we are active learners whose goal is clarity of cognition, and that friction and cognitive struggle are part of that. Clarity is attempting to understand the "know-how" of things.
Tech, and dare I say the natural laziness inherent in us, instead wants us to be zombies fed the "know-that", as that is deemed sufficient - i.e. the dystopia portrayed in The Matrix, or the rote student regurgitating memes. But know-that is not the same as know-how, and know-how is evolving, requiring a continuously learning agent.
Looking at it from a slightly different angle, one I find most illuminating: removing "friction" is like removing "difficulty" from a game, and "friction free" as an ideal is like "cheat codes from the start" as an ideal. It's making a game where there's a single button that says "press here to win." The goal isn't to remove "friction"; it's to remove a specific type of valueless friction and replace it with valuable friction.
This resonated a lot with me. Thank you for your articulate writing.
I don't know. You can be banging your head against the wall to demolish it or you can use manual/mechanical equipment to do so. If the wall is down, it is down. Either way you did it.
> ...Microsoft's implementation of AI feels like "don't worry, we will do the thinking for you"
I feel like that describes nearly all of the "productivity" tools I see in AI ads. Sadly enough, it also aligns with how most people use it, in my personal experience. Just a total off-boarding of needing to think.
The term is "cognitive offloading". https://duckduckgo.com/?q=cognitive+offloading
Sheesh, I notice I also just ask an assistant quite a bit rather than putting effort to think about things. Imagine people who drive everywhere with GPS (even for routine drives) and are lost without it, and imagine that for everything needing a little thought...
As an old school interface/interaction designer, I see this as a direct consequence of how the discipline of software design has evolved in the last decade or two.
We’ve went from conceiving of software as tools - constructs that enhance and amplify their user’s skills and capabilities - to magic boxes that should aim to do everything with just one button (and maybe even that is one action too many).
This shift in thinking is visible in how junior designers and product managers are trained and incentivized to think about their work. “Anticipating the user’s intent”, “providing a magical experience”, “making the simplest, most beautiful and intuitive product” - all things that are so routine parlance now that they sound trite, but that would make any software designer from the 80s/90s catatonic because of how orthogonal they are to good tool design.
To caricature a bit, the industry went from being run by people designing heavy machinery to people designing Disneyland rides. Disneyland rides are great and have their place, but you probably don’t want your tractor to be designed like one.
Watching this tractor boot is infuriating
https://youtu.be/pWWC2a7Bj-U
Perhaps this is a feature and not a bug for MS. Every time you hit escape or accept, you're giving them more training samples. The more training data they can get you to give them, the better. So they WANT to be throwing out possibly irrelevant suggestions at every opportunity.
As much as I love JetBrains (IntelliJ and friends), I have the same feeling this year. The rate at which I undo an accidental tab-completion far exceeds the rate at which I accept one. I'm not anti-LLM -- they are great for many things, but I am tired of undoing shitty suggestions. Literally, many of them produce a syntax error. Please don't read this post as dumping on JetBrains. I still love their products.
No trolling: This is genius-level sarcasm. You do realise that most "business" emails are essentially this, right? Oh, right, you knew that already!

I agree. I am happiest just using plain Emacs for coding, and every once in a while separately using an LLM, or once or twice a day using gemini-cli or codex for a single task.
My comment is about coding, but I have the same opinion for writing emails: once in a blue moon, I will use an LLM manually.
Too many companies have bolted AI onto their existing products with the value-prop "Let us do the work (poorly) for you."
>As someone who appreciates machine learning, the main dissonance I have with interacting with Microsoft's implementation of AI feels like "don't worry, we will do the thinking for you".
This is the nightmare scenario with AI, i.e. people settling for Microsoft/OpenAI et al. to do the "thinking" for them.
It is alluring, but of course it is not going to work. It is similar to what happened to the internet via social media, i.e. "kick back and relax, we'll give you what you really want, you don't have to take any initiative".
My pitch against this is to vehemently resist the chatbot-style solutions/interfaces and demand intelligent workspaces:
https://codesolvent.com/botworx/intelligent-workspace/
A world full of humans being guided by computers would be... dystopian.
Although I imagine a version where AI drives humans who mindlessly trust it to be more vegetarian or take public transport, helping save the environment (an ironic wish, since AI is burning the planet). Of course, "AI" is guided by its owners, so there'd be a camp who uses Grok and will still drive SUVs, eat meat, and be racist idiots...
That's because in its current form, that's all it's good for reliably. Can't sell that it might hallucinate the numbers in the Q4 report
Dear MS please use AI to autocomplete my billing address correctly when I fill out web forms, thanks
Dissonance runs straight through from the top of the org chart.
https://x.com/satyanadella/status/1996597609587470504
Just 22 hours ago... https://news.ycombinator.com/item?id=46138952
The disappointing thing is I’d rather they spend the time improving security, but it sounds like all cycles are shoved into making AI shovels. Last year, the CEO promised security would come first, but that’s not the case.
https://www.techspot.com/news/102873-microsoft-now-security-...
Security does come first.
Job security.
AI agent technology likely isn’t ready for the kind of high-stakes autonomous business work Microsoft is promising.
It's unbelievable to me that tech leaders lack the insight to recognize this.
So how to explain the current AI mania being widely promoted?
I think the best fit explanation is simple con artistry. They know the product is fundamentally flawed and won't perform as being promised. But the money to be made selling the fantasy is simply too good to ignore.
In other words --- pure greed. Over the longer term, this is a weakness, not a strength.
It's part of a larger economic con centered on the financial industry and the financialization of American industry. If you want this stuff to stop, you have to be hoping for (or even working toward) a correction that wipes out the incumbents who absolutely are working to maintain the masquerade.
It will hurt, and they'll scare us with the idea that it will hurt, but the secret is that we get to choose where it hurts - the same as how they've gotten to choose the winners and losers for the past two decades.
Agreed! I recently listened to a podcast (video) from the "How Money Works" channel on this topic:
"How Short Term Thinking Won" - https://youtu.be/qGwU2dOoHiY
The author argues that this con has been caused by three relatively simple levers: Low dividend yields, legalization of stock buybacks, and executive compensation packages that generate lots of wealth under short pump-and-dump timelines.
If those are the causes, then simple regulatory changes to make stock buybacks illegal again, limit the kinds of executive compensation contracts that are valid, and incentivize higher dividend yields/penalize sales yields should return the market to the previous long-term-optimized behavior.
I doubt that you could convince the politicians and financiers who are currently pulling value out of a fragile and inefficient economy under the current system to make those changes, and if the changes were made I doubt they could last or be enforced given the massive incentives to revert to our broken system. I think you're right that it will take a huge disaster that the wealthy and powerful are unable to dodge and unable to blame on anything but their own actions, I just don't know what that event might look like.
What is wrong with stock buybacks?
Genuine question; I don't understand the economics of the stock market, and as such I participate very little (probably to my detriment). I sort of figure the original theory went like this:
"We have an idea to run a for profit endeavor but do not have money to set it up. If you buy from us a portion of our future profit we will have the immediate funds to set up the business and you will get a payout for the indefinite future."
And the stock market is for third party buying and selling of these "shares of profit"
Under these conditions, aren't all stocks a sort of millstone of perpetual debt for the company, and wouldn't it behoove them to remove that debt, that is, buy back the stock? Naively I assume this is a good thing.
If you don't understand a concept that's part of the stock market, reading the Investopedia article will go a long way. It's a nice site for basic overviews. https://www.investopedia.com/terms/b/buyback.asp
The short answer is that the trend of frequent stock buybacks as discussed here is not being used to "eliminate debt" (restore private ownership), it's being used to puff up the stock price as a non-taxable alternative to dividend payouts (simply increasing the stock price by reducing supply does not realize any gains, while paying stockholders "interest" directly is subject to income tax). This games the metric of "stock price", which is used as a proxy for all sorts of things including executive performance and compensation.
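To make that concrete, here's a quick back-of-the-envelope sketch in Python (all numbers illustrative; it naively assumes a constant P/E multiple and ignores the cash spent on the repurchase):

    # Illustrative only: how a buyback can stand in for a dividend.
    earnings = 10_000_000                      # annual profit, $
    shares = 10_000_000                        # shares outstanding
    pe = 20                                    # assumed constant price/earnings multiple

    eps_before = earnings / shares             # $1.00 per share
    price_before = eps_before * pe             # $20.00

    retired = 1_000_000                        # shares bought back and retired
    eps_after = earnings / (shares - retired)  # ~$1.11 per share
    price_after = eps_after * pe               # ~$22.22

    print(round(price_before, 2), round(price_after, 2))
    # The ~$2.22/share rise stays unrealized (and untaxed) until the holder
    # sells, whereas a $2.22 dividend would be taxed in the year it's paid.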
My view is that you don't want more layers. Chasing ever-increasing share prices favors shareholders (a limited number of generally rich people) over customers (likely to be average people). The incentives get out of whack.
I disagree. Those place the problem at the corporate level, when it's clearly extended through to being a monetary issue. The first thing I would like to see is the various Fed and banking liquidity and credit facilities go away. They don't facilitate stability, but a fiscal shell game that has allowed numerous zombie companies to live far past their solvency. This in turn encourages widespread fiscal recklessness.
We're headed for a crunch anyway. My observation is that a controlled demolition has been attempted several times over the past few years, but in every instance, someone has stepped up to cry about the disaster that would occur if incumbents weren't shored up. Of course, that just makes the next occurrence all the more dire.
Stupidity, greed, and straight-up evil intentions do a bunch of the work, but ultimately short-term thinking wins because it's an attractor state. The influence of the wealthy/powerful is always outsized, but attractors and common-knowledge also create a natural conspiracy that doesn't exactly have a center.
So with AI, the way the natural conspiracy works out is like this. Leaders at the top might suspect it's bullshit, but don't care; they always fail upwards anyway. Middle management at non-tech companies suspect their jobs are in trouble on some timeline, so they want to "lead a modernization drive" to bring AI to places they know don't need it, even if it's a doomed effort that basically defrauds the company owners. Junior engineers see a tough job market and want to devalue experience to compete, so they decide that only AI matters and everything that came before is the old way. Owners and investors hate expensive senior engineers who don't have to bow and scrape, think they have too much power, and would love to put them in their place. Senior engineers who are employed, and maybe the most clear-eyed about the actual capabilities of the technology, see the writing on the wall: you have to make this work even if it's handed to you in a broken state, because literally everyone is gunning for you. Those who are unemployed are looking around like, well, this is apparently the game one must play. Investors will invest in any horrible doomed thing regardless of what it is, because they all think they are smarter than other investors and will get out just in time. Owners are typically too disconnected from whatever they own; they just want to exit/retire and are already mostly in the position of listening to lieutenants.
At every level, for every stakeholder, once things have momentum they don't need to be a healthy/earnest/noble/rational endeavor any more than the advertising or attention economy did before it. Regardless of the ethics there, or the current/future state of any specific tech, it's a huge problem when being locally rational pulls us into a state that's globally irrational.
Yes, that "attractor state" you describe is what I meant by "if the changes were made I doubt they could last or be enforced given the massive incentives to revert to our broken system". The older I get and the more I learn, the less I'm willing to ascribe faults in our society to individual evils or believe in the existence of intentionally concealed conspiracies rather than just seeing systemic flaws and natural conspiracies.
One need only look at 1929 to understand what's in store. Of course, the rich/powerful will say "who could have seen this coming?"
There was a long standing illusion that people care about long-term thinking. But given the opportunity, people seem to take the short-term road with high risks, instead of chasing a long-term gain, as they, themselves, might not experience the gain.
The timeframe of expectations has just shifted, as everyone wants to experience everything. Just knowing the possibility of things that can happen already affects our desires. And since everyone has a limited time in life, we try to maximize our opportunities to experience as many things as possible.
It’s interesting to talk about this to older generation (like my parents in their 70s), because there wasn’t such a rush back then. I took my mom out to some cities around the world, and she mentioned how she really never even dreamed of a possibility of being in such places. On the other hand, when you grow in a world of technically unlimited possibilities, you have more dreams.
Sorry for rambling, but in my opinion this somewhat affects the economics of the new generation as well. Who cares about long-term gains if there's a chance of nobody experiencing the gain? Might as well risk it for the short-term one for a possibility of some reward.
> correction that wipes out the incumbents who absolutely are working to maintain the masquerade
You need to also have a robust alternative that grows quickly in the cleared space. In 2008 we got a correction that cleared the incumbents, but the ensuing decade of policy choices basically just allowed the thing to re-grow in a new form.
I thought we pretty explicitly bailed out most of the incumbents. A few were allowed to be sacrificed, but most of the risk wasn't realized, and instead rolled into new positions that diffused it across the economy. 2008's "correction" should have seen the end of most of our investment banks and auto manufacturers. Say what you want to about them (and I have no particular love for either), but Tesla and Bitcoin are ghosts of the timeline where those two sectors had to rebuild themselves from scratch. There should have been more, and Goldman Sachs and GM et al. should not currently exist.
> A few were allowed to be sacrificed, but most of the risk wasn't realized, and instead rolled into new positions that diffused it across the economy.
Yeah that's a more accurate framing, basically just saying that in '08 we put out the fire and rehabbed the old growth rather than seeding the fresh ground.
> Tesla and Bitcoin are ghosts of the timeline where those two sectors had to rebuild themselves from scratch
I disagree, I think they're artifacts of the rehab environment (the ZIRP policy sphere). I think in a world where we fully ate the loss of '08 and started in a new direction you might get Tesla, but definitely not TSLA, and the version we got is really (Tesla+TSLA) IMO. Bitcoin to me is even less of a break with the pre-08 world; blockchain is cool tech but Bitcoin looks very much "Financial Derivatives, Online". I think an honest correction to '08 would have been far more of a focus on "hard tech and value finance", rather than inventing new financial instruments even further distanced from the value-generation chain.
> Goldman Sachs and GM et al. should not currently exist.
Hard agree here
I would say yes and no on Tesla. Entities that survived because of the rehab environment actually expected it to fail, and shorted it heavily. TSLA as it currently exists is a result of the short squeeze on the stock that ensued when it became clear that the company was likely to become profitable. Its current, ridiculous valuation isn't a product of its projected earnings, but recoil from those large shorts blowing up.
In our hypothetical alternate timeline, I imagine that there would have still been capital eager to fill the hole left by GM, and possibly Ford. Perhaps Tesla would have thrived in that vacuum, alongside the likes of Fisker, Mullen, and others, who instead faced incumbent headwinds that sunk their ventures.
Bitcoin, likewise, was warped by the survival of incumbents. IIUC, those interests influenced governance in the early 2010s, resulting in a fork of the project's original intent from a transactional medium that would scale as its use grew, to a store of value, as controlled by them as traditional currencies. In our hypothetical, traditional banks collapsed, and even survivors lost all trust. The trustless nature of Bitcoin, or some other cryptocurrency, might have allowed it to supersede them. Deprived of both retail and institutional deposits, they simply did not have the capital to warp the crypto space as they did in the actual 2010s.
I call them "ghosts" because, yes, whatever they might have been, they're clearly now just further extensions of that pre-2008 world, enabled by our post-2008 environment (including ZIRP).
"In 2008 we got a correction that cleared the incumbents,"
I thought in 2008 we told the incumbents "you are the most important component of our economy. We will allow everybody to go down the drain but you. That's because you caused the problem, so you are the only ones to guide us out of it"
Looking forward to the OpenAI (and Anthropic) IPOs. It’s funny to me that this info is being “leaked” - they are sussing out the demand. If they wait too long, they won’t be able to pull off the caper (at these valuations). And we will get to see who has staying power.
It’s obvious to me that all of OpenAIs announcements about partnerships and spending is gearing up for this. But I do wonder how Altman retains the momentum through to next year. What’s the next big thing? A rocket company?
Increasing signs the ship has sailed on the IPO window for these folks but let’s see.
> But I do wonder how Altman retains the momentum through to next year. What’s the next big thing? A rocket company?
Hmm, there were news about Sam Altman wanting to buy/invest on a rocket company. [0]
[0] https://www.wsj.com/tech/ai/sam-altman-has-explored-deal-to-...
Hell yes! Would love to short.
I have thought about dropping all the big tech players: only use LLMs by running them locally via Hugging Face, only use a small 3rd-party email provider, just use open source, and keep social media to Mastodon.
What would be the effect? Ironically, more productive?
I am pissed at Microsoft now because my family plan for Office365 is set to renew and they are tacking on a surcharge of $30 for AI services I don’t want. What assholes: that should be a voluntary add-on.
EDIT: I tried to cancel my Office365 plan, and they let me switch to a non-AI plan for the old price. I don’t hate them anymore.
The problem with "it will hurt" is that it will actually hurt the middle class by completely wiping it out, and maybe slightly inconvenience the rich. More like annoy the rich, really.
Yeah, it started with Wall Street, with all the depression and wars that it brought, and it hasn't stopped; at each cycle the curve has to go up, with exponential expectations of growth, until it explodes, taking the world economy to the ground.
How do you guarantee your accelerationism produces the right results after the collapse? If the same systems of regulation and power are still in place then it would produce the same result afterwards
It's like when a child doesn't want something, you "give them a choice": would you like to put on your red or white shoes?
This assumes fair competition in the tech industry, which evaporated, with no path back, years ago.
> you have to be hoping for (or even working toward) a correction that wipes out the incumbents who absolutely are working to maintain the masquerade.
I'm not hoping for a market correction personally, I'm hoping that mobs reinvent the guillotine
They deserve nothing less by now. If they get away with nothing worse than "a correction" then they have still made out like bandits
I tend to agree, but there's something to be said for a retribution focus taking time and energy away from problem-solving. When market turmoil hits, stand up facilities to guarantee food and healthcare access, institute a nationwide eviction moratorium, and then let what remains of the free market play out. Maybe we pursue justice by actually prosecuting corporate malfeasance this time. The opposite of 2008.
Don’t attribute to malice that which can equally be attributed to incompetence.
I think you’re over-estimating the capabilities of these tech leaders, especially when the whole industry is repeating the same thing. At that point, it takes a lot of guts to say “No, we’re not going to buy into the hype, we’re going to wait and see” because it’s simply a matter of corporate politics: if AI fails to deliver, it fails to deliver for everyone and the people that bought into the hype can blame the consultants / whatever.
If, however, AI ended up delivering and they missed the boat, they’re going to be held accountable.
It’s much less risky to just follow industry trends. It takes a lot of technical knowledge, guts, and confidence in your own judgement to push back against an industry-wide trend at that level.
I suspect that AI is in an "uncanny valley" where it is definitely good enough for some demos, but will fail pretty badly when deployed.
If it works 99% of the time, then a demo of 10 runs is 90% likely to succeed. Even if it fails, as long as it's not spectacular, you can just say "yeah, but it's getting better every day!", and "you'll still have the best 10% of your human workers in the loop".
When you go to deploy it, 99% is just not good enough. The actual users will be much more noisy than the demo executives and internal testers.
When you have a call center with 100 people taking 100 calls per day, replacing those 10,000 calls with 99% accurate AI means you have to clean up after 100 bad calls per day. Some percentage of those are going to be really terrible, like the AI did reputational damage or made expensive legally binding promises. Humans will make mistakes, but they aren't going to give away the farm or say that InsuranceCo believes it's cheaper if you die. And your 99% accurate-in-a-lab AI isn't 99% accurate in the field with someone with a heavy accent on a bad connection.
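A quick sanity check on those numbers, in Python (illustrative only):

    p = 0.99                  # per-interaction success rate

    demo_ok = p ** 10         # chance a 10-run demo has zero failures
    print(round(demo_ok, 3))  # ~0.904, i.e. ~90% likely to look flawless

    calls = 100 * 100         # 100 agents x 100 calls per day
    bad = calls * (1 - p)
    print(int(bad))           # ~100 bad calls per day to clean up after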
So I think that the parties all "want to believe", and to an untrained eye, AI seems "good enough" or especially "good enough for the first tier".
Agreed, but 99% is being very generous.
A big task my team did had measured accuracy in the mid-80s percent, FWIW.
I think the line of thought in this thread is broadly correct. The most value I’ve seen in AI is in problems where the cost of being wrong is low and it’s easy to verify the output.
I wonder if anyone is taking good measurements on how frequently an LLM is able to do things like route calls in a call center. My personal experience is not good and I would be surprised if they had 90% accuracy.
And that's for tasks it's actually suited for
>I suspect that AI is in an "uncanny valley" where it is definitely good enough for some demos
Sort of a repost on my part, but the LLM's are all really good at marketing and other similar things that fool CEO's and executives. So they think it must be great at everything.
I think that's what is happening here.
> if AI fails to deliver, it fails to deliver for everyone and the people that bought into the hype can blame the consultants / whatever.
Understatement of the year. At this point, if AI fails to deliver, the US economy is going to crash. That would not be the case if executives hadn't bought in so hard earlier on.
Race to "Too big to fail" on hype and your losses are socialized
There’s also a case that without the AI rush, the US economy would look even weaker now.
And if it does deliver, everyone's gonna be out of a job and the US economy is also going to crash.
Nice cul-de-sac our techbro leaders have navigated us into.
Yep, either way things are going to suck for ordinary people.
My country has had a bad economy and high unemployment for years, even though the rest of the world is doing mostly OK. I'm scared to think what will happen once the AI bubble either bursts or eats most of the white-collar jobs left here.
> Don’t attribute to malice that which can equally be attributed to incompetence.
At this point I think it might actually be both rather than just one or the other.
“Worldly wisdom teaches that it is better for reputation to fail conventionally than to succeed unconventionally.” - Keynes.
Convention here is that AI is the next sliced bread. And big-tech managers care about their reputation.
It's pretty pathetic that they can build a brand based on "doing the exact same thing everyone else is doing" though
> Don’t attribute to malice that which can equally be attributed to incompetence.
This discourse needs to die. Incompetence + lack of empathy is malice. Even competence in the scenario they want to create is malice. It's time to stop sugar-coating it.
I keep fighting this stupid platitude [0]. By that logic, I fail to find anything malicious. Everything could be explained by incompetence, stupidity etc.
[0] https://news.ycombinator.com/item?id=46147328
> At that point, it takes a lot of guts to say “No, we’re not going to buy into the hype, we’re going to wait and see” because it’s simply a matter of corporate politics
Isn't that the whole mythos of these corporate leaders, though? They are the ones with the vision and guts to go against the grain and stand out among the crowd?
I mean it's obviously bullshit, but you would think at least a couple of them actually would do something to distinguish themselves. They all want to be Steve Jobs but none of them have the guts to even try to be visionary. It is honestly pathetic
What you have is a lot of middle managers imposing change with random fresh ideas. The ones that succeed rise up the ranks. The ones that failed are forgotten, leading to survivorship bias.
Ultimately it's a distinction without a difference. Maliciously stupid or stupidly malicious invariably leads to the same place.
The discussion we should be having is how we can come together to remove people from power and minimize the influence they have on society.
We don't have the carbon budget to let billionaires who conspire from island fortresses in Hawaii do this kind of reckless stuff.
It's so dismaying to see these industries muster the capital and political resources to make these kinds of infrastructure projects a reality when they've done nothing comparable w.r.t. climate change.
It tells me that the issue around the climate has always been a lack of will not ability.
It's mass delusion
> In other words --- pure greed.
Pure greed would have a strong incentive to understand what the market is actually demanding in order to maximize profits.
These attempts to try to steer demand despite clear indicators that it doesn't want to go in that direction aren't just driven by greed, they're driven by abject incompetence.
This isn't pure greed, it's stupid greed.
Pure greed is stupid greed.
Also, if the current level of AI investment and valuations aren't justified by market demand (as I believe), many of these people/companies are getting more money than they would without the unreasonable hype.
No, it's greed right now. They are fundamentally incapable of considering consequences beyond the immediate term.
If the kind of foresight and consideration you suggest were possible, companies wouldn't be on this self-cannibalizing path of exploiting customers right now for every red cent they can squeeze out of them. Long-term thinking would very clearly tell you that abusing your customers and burning all the goodwill the company built over a hundred years is idiotic beyond comparison. If you think about anything at all other than tomorrow's bottom line, you'd realize that the single best way to make a stable long-term business is to treat your customers with respect and build trust and loyalty.
But this behavior is completely absent in today's economy. Past and future don't matter. Getting more money right now is the only thing they're capable of seeing.
you seem to be committing the error of believing that the problem here is just that they’re not selling what people want to buy, instead of identifying the clear intention to _create_ the market.
> Pure greed would have a strong incentive to understand what the market is actually demanding in order to maximize profits.
Not necessarily; just look at this clip [1] from Margin Call, an excellent movie on the GFC. As Jeremy Irons says in that clip, the market (as usually understood in classical economics, with producers making things for clients/customers to purchase) is of no importance to today's market economy. Almost all that matters, at the hundreds-of-billions to multi-trillion-dollar level, is for your company "to play the music" as well as the other (necessarily very big) market participants, "nothing more, nothing less" (again, to quote Irons in that movie).
There's nothing in it about "making what people/customers want" and all that, which is regarded as accessory, if it is taken into consideration at all. As another poster mentions in this thread, this is all the direct result of the financialization of much of the Western economy; this is how things work at this level, given these (financialized) inputs.
[1] https://www.youtube.com/watch?v=UOYi4NzxlhE
They've gotten away with shipping garbage for years and still getting paid for it. They think we're all stupid.
Given that they aren’t meeting their sales targets at all, I guess that’s a little bit encouraging about the discernment of their customers. I’m not sure how Microsoft has managed to escape market discipline for so long.
> I’m not sure how Microsoft has managed to escape market discipline for so long.
How would they? They are a monopoly, and partake in aggressive product bundling and price manipulation tactics. They juice their user numbers by enabling things in enterprise tenants by default.
If a product of theirs doesn't sell, they bundle it for "free" in the next tier up of license to drive adoption and upgrades. Case in point, the InTune suite (includes EntraID P2, Remote assistance, endpoint privilege management) will now be included in E5, and the price of E5 is going up (by $10/user/month, less than the now bundled features cost when bought separately). People didn't buy it otherwise, so now there's an incentive to move customers off E3 and into E5.
Now their customers are in a place where Microsoft can check boxes, even if the products aren't good, so there's little incentive to switch.
Try to price out Google Workspace (and also, an office license still because someone will need Excel), Identity, EDR, MDM for Windows, mac, mobile, slack, VoIP, DLP, etc. You won't come close to Microsoft's bundled pricing by piecing together the whole M365 stack yourself.
So yeah, they escape market discipline because they are the only choice. Their customers are fully captive.
Their customers largely aren't their users. Their customers are the purchasing departments at Dell, Lenovo, and other OEMs. Their customers are the purchasing departments at large enterprises who want to buy Excel. Their customers are the advertisers. The products where the customers and the users are the same people (Excel, MS flight simulator, etc.) tend to be pretty nice. The products where the customers aren't the users inevitably turn to shit.
They think we're all stupid.
As time goes by, I'm starting to think they may be right more than they're wrong.
And this is a sad and depressing statement about humanity.
Not really. It's just that the point you have to push people to get them to start pushing back on something tends to be quite high. And it's very different for different people on different topics.
In the past this wasn't such a big deal because businesses weren't so large or so frequently run by myopic sociopaths. Ebenezer Scrooge was running some small local business, not a globe spanning empire entangling itself with government and then imposing itself on everybody and everything.
Scrooge is a fictional person and Microsoft have been getting away with it since I’m alive with people hating it probably just as long. So I think GP definitely has a point.
Are you a fan of reading? Good character fiction is based on reality as understood at a time and a great way to get insights into how and what people think, particularly as it's precisely those believable portrayals that tend to 'stick' with society. For example even most of George R. R. Martin's tales are directly inspired by real things, very much living up to the notion that reality is much stranger than fiction! Or similarly, read something like Dune and the 60s leaks into it hard.
In modern times the tale of Scrooge probably wouldn't really resonate, nor 'stick', because we transitioned to a culture of worshiping wealth, consumerism, and materialism. See (relevant to this topic) how many people defend unethical actions by claiming that fiduciary duty precludes any value beyond greed. In the time of Scrooge this was not the case, and so it was a more viable cautionary tale that strongly resonated.
People think that because AI cannot replace a senior dev, it's a worthless con.
Meanwhile, pretty much every single person in my life is using LLMs almost daily.
Guys, these things are not going away, and people will pay more money to use them in future.
Even my mom asks ChatGPT to make a baking applet from a picture of a recipe she uploads, which creates a simple checklist for adding ingredients (she forgets ingredients pretty often). She loves it.
This is where LLMs shine for regular people. She doesn't need it to create a 500k LOC turn-key baking tracking SaaS AWS back-end 5 million recipes on tap kitchen assistant app.
She just needs a bespoke one off check list.
Is she going to pay enough to fund the multitrillion dollars it costs to run the current AI landscape?
Yeah, she is, because when reality sets in, these models will probably have monthly cellphone/internet level costs. And training is the main money sink, whereas inference is cheap.
500,000,000 people paying $80/mo is roughly a 5-yr ROI on a $2T investment.
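For what it's worth, the arithmetic holds up as a rough payback calculation (numbers taken from the claim above, ignoring inference and operating costs):

    subscribers = 500_000_000
    price_per_month = 80
    years = 5

    revenue = subscribers * price_per_month * 12 * years
    print(f"${revenue / 1e12:.1f}T")  # ~$2.4T over 5 years vs ~$2T invested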
I cannot believe on a tech forum I need to explain the "Get them hooked on the product, then jack up the price" business model that probably 40% of people here are kept employed with.
Right now they are (very successfully) getting everyone dependent on LLMs. They will pull the rug, and people will pay to get it back. And none of the labs care if 2% of people use local/Chinese models.
> 500,000,000 people paying $80/mo
Simply not going to happen
> And training is the main money sink, whereas inference is cheap.
False. Training happens once for a time period, but inference happens again and again every time users use the product. Inference is the main money sink.
"according to a report from Google, inference now accounts for nearly 60% of total energy use in their AI workloads. Meta revealed something even more striking: within their AI infrastructure, power is distributed in a 10:20:70 ratio among experimentation, training, and inference respectively, with inference taking the lion’s share."
https://blogs.dal.ca/openthink/the-hidden-cost-of-ai-convers...
They get paid for inference, those tokens might as well be monetary tokens.
I think there are 2 things at play here. LLMs are, without a doubt, absolutely useful/helpful but they have shortcomings and limitations (often worth the cost of using). That said, businesses trying to add "AI" into their products have a much lower success rate than LLM-use directly.
I dislike almost every AI feature in software I use but love using LLMs.
It's exactly the same situation as Tesla "self driving". It's sold and marketed in no uncertain terms, VERY EXPLICITLY that AI will replace senior devs.
As you admit, it can't do that. And everyone involved knows it.
How is that anything other than a con?
This false dichotomy is still frustratingly all over the place. LLMs are useful for a variety of benign everyday use cases, that doesn't mean that they can replace a human for anything. And if those benign use cases is all they're good at, then the entire AI space right now is maybe worth $2B/year, tops. Which is still a good amount of money! Except that's roughly the amount of money OpenAI spends every minute, and it's definitely not "the next invention of fire" like Sam Altman says.
Even these everyday use-cases are infinitely varied and can displace entire industries. E.g. ChatGPT helped me get $500 in airline delay compensation after multiple companies like AirHelp blew me off: https://news.ycombinator.com/item?id=45749803
For reference, AirHelp alone had revenue of $153M last year (even without my money ;-P): https://rocketreach.co/airhelp-profile_b5e8e078f42e8140
This single niche industry as a whole is probably worth billions alone.
Now multiply that by the number of niches that exist in this world.
Then consider the entire universe of formal knowledge work, where large studies (from self-reported national surveys to empirical randomized controlled trials on real-world tasks) have already shown significant productivity boosts, in the range of 30%. Now consider their salaries, and how much companies would be willing to pay to make their employees more productive.
Trillions is not an exaggeration.
Use case == Next iteration of "You're Fired" may be more like it.
> People think that because AI cannot replace a senior dev, it's a worthless con.
Quite the strawman. There are many points between “worthless” and “worth 100s of billions to trillions of investment”.
Are your mother's cooking recipes gonna cover the billions and even trillions being spent here? I somehow doubt that, and it's funny to me that the killer usecase the hypesters use is stupid inane shit like this (no offense to your mom, but a recipe generator isn't something we should be speedrunning global economic collapse for)
is this really the best use case you could come up with? says it all really if so.
> So how to explain the current AI mania being widely promoted?
Probably individual actors have different motivations, but let's spitball for a second:
- LLMs are genuinely a revolution in natural language processing. We can do things now in that space that were unthinkable single-digit years ago. This opens new opportunity spaces to colonize, and some might turn out quite profitable. Ergo, land rush.
- Even if the new spaces are not that much of a value leap intrinsically, some may still end up obsoleting earlier-generation products pretty much overnight, and no one wants to be the next Nokia. Ergo, defensive land rush.
- There's a non-zero chance that someone somewhere will actually manage to build the tech up into something close enough to AGI to be serviceable, which in essence means deprecating the labor class. The benefits (to that specific someone, anyway...) would be staggering enough to make that a goal worth pursuing even if the odds of reaching it are unclear and arguably quite low.
- The increasingly leveraged debt that's funding the land rush's capex needs to be paid off somehow, and I'll venture everyone knows that the winners will possibly be able to, but not everyone will be a winner. In that scenario, you really don't want to be a non-winner. It's kind of like that joke where you don't need to outrun the lions, you only need to outrun the other runners, except in this case, the harder everyone runs, the bigger the lions become. (Which is a funny thought now, sure, but the feasting, when it comes, will be a bloodbath.)
- A few, I'll daresay, have perhaps been huffing each other's farts too deep and too long and genuinely believe the words of ebullient enthusiasm coming out of their own mouths. That, and/or they think everyone's job except theirs is simple actually, and therefore just this close to being replaceable (which is a distinct flavor of fart, although coming from largely the same sources).
So basically the mania is for the most part a natural consequence of what's going on in the overlap of the tech itself and the incentive structure within which it exists, although this might be a good point to remember that cancer and earthquakes too are natural. Either way, take care of yourselves and each other, y'all, because the ride is only going to get bouncier for a while.
> There's a non-zero chance that someone somewhere will actually manage to build the tech up into something close enough to AGI
Bullshit
>So how to explain the current AI mania being widely promoted?
CEOs have been sold on the ludicrous idea that "AI" will replace 60-80% of their total employee headcount over the next 2-3 years. This is also priced into current equity valuations.
I think on some level it is being done on the premise that further advancement requires an enormous capital investment and if they can find a way to fund that with today’s sales it will give the opportunity for the tech to get there (quite a gamble).
At this point, the people in charge have signed off on so much AI spending that they need it to succeed, otherwise they are the ones responsible for massive losses.
Thing is, it's hard to predict what can be done and what breakthrough or minor tweak can suddenly open up an avenue for a profitable use-case.
The cost of missing that opportunity is why they're heavily investing in AI, they don't want to miss the boat if there's going to be one.
And what else would they do? What's the other growth path?
this idea that AI is the only thing anyone could possibly do that might be useful has absolutely got to go
> And what else would they do? What's the other growth path?
Are you arguing that if LLMs didn’t exist as a technology, they wouldn’t find anything to do and collapse?
_Number would not go up sufficiently steeply_, would be the major concern, not collapse. Microsoft might end up valued as (whisper it) a normal mature stable company. That would be something like a quarter to a half what it's currently valued. For someone paid mostly in options, this is clearly a problem (and people at the top in these companies mostly _are_ compensated with options, not RSUs; if the stock price halves, they get _nothing_).
The cost of the boat sinking is also very high and that’s looking like the more likely scenario. Watching your competitors sink huge amounts of capital into a probably sinking boat is a valid strategy. The growth path they were already on was fine no?
I have a feeling that Microsoft is setting themselves up for a serious antitrust lawsuit if they do what they intend to. They should really be careful about introducing products into the OS that take away from all other AI shops. I fear this would also cripple innovation if they're allowed to do so, since Microsoft has drastically fatter wallets than most of their competition.
There's no such thing as antitrust in the US right now. Google's recent slap on the wrist is all the proof you need.
Trump has ushered in a truly lawless phase of american politics. I mean, it was kind of bad before, but at least there was a pretense of rule of law. A trillion dollar company can easily just buy its way out of any enforcement of such antitrust action.
Under the current US administration the only thing Microsoft is getting is numerous piles of taxpayer bailouts.
Corruption is indeed going strong in the current corporate-controlled US group of lame actors posing as a government. At least Trump is now regularly falling asleep - that's the best evidence that you can use any surrogate puppet and the underlying policies will still continue.
If I mention a president who was more of a general secretary of the party, taking notes of decisions taken for him by lobbies from the largest corporations, falling asleep and having incoherent speech to the point that he seems to be way past the point of stroke, I don’t think anyone will guess Trump.
> So how to explain the current AI mania being widely promoted?
> I think the best fit explanation is simple con artistry.
Yes, perhaps, but many industries are built on a little bit of technology and a lot of stories.
I think of it as us all being caught in one giant infomercial.
Meanwhile, as long as investors buy the hype, it's a great story to use for trimming payrolls.
> In other words --- pure greed.
It's the opposite; it's FOMO.
Imagine your supplier effectively telling you that they don't even value you (and your money) enough to bother a real human.
Fake it till you make it.
outside of the recovery community, this is known as 'fraud'
I mean, see Windows Vista. It was eventually patched up to the point where it was semi-usable (and then quietly killed off), but on introduction it was a complete mess. But... something had to be shipped, and this was something, so it was shipped.
(Vista wasn't the only one; Windows ME never even made it to semi-usable, and no-one even remembers that Windows 8 _existed_.)
Microsoft has _never_, as far as I know, been a company to be particularly concerned about product quality. The copilot stuff may be unusually bad, but it's not that aberrant for MS.
I was just in a thread yesterday with someone who genuinely believed that we're only seeing the beginnings of what the current breed of AI will get us, and that it's going to be as transformative as the introduction of the internet was.
Everything about the conversation felt like talking to a true believer, and there's plenty out there.
It's the hopes and dreams of the Next Big Thing after blockchain and web3 fell apart and everyone is desperate to jump on the bandwagon because ZIRP is gone and everyone who is risk averse will only bet on what everyone else is betting on.
Thus, the cycle feeds itself until the bubble pops.
I don't see how people don't see it. LLMs are a revolutionary technology and, for the first time since the iPhone, are changing how we interact with computers. This isn't blockchain. This is something we're going to use until something better replaces it.
I agree to some extent, but we’re also in a bubble. It seems completely obvious that huge revenue numbers aren’t around the corner, not enough to justify the spend.
> "someone who genuinely believed that we're only seeing the beginnings of what the current breed of AI will get us, and that it's going to be as transformative as the introduction of the internet was."
I think that. It's new technology and it always takes some years before all the implications and applications of new technology are fully worked out. I also think that we're in a bubble that will hose a lot of people when it pops.
Two things can be true:
1) We have barely scratched the surface of what is possible to do with existing AI technology. 2) Almost all of the money we are spending on AI now is ineffectual and wasted.
---
If you go back to the late 1990s, that is the state that most companies were at with _computers_. Huge, wasteful projects that didn't improve productivity at all. It took 10 years of false starts sometimes to really get traction.
It's interesting to think Microsoft was around back then too, taking approximately 14 years to regain the loss of approximately 58% of their valuation.
All these boosters think we're on the leading edge of an exponential, when it's way more likely that we're on the midpoint to tail of a logistic
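To illustrate why the two are so hard to tell apart from inside the curve, here's a small Python sketch (parameters made up for illustration, not a forecast):

    import math

    # Early on, a logistic (S-curve) is nearly indistinguishable from an
    # exponential; the difference only shows as you approach the midpoint.
    L, k, t0 = 100.0, 1.0, 10.0  # assumed ceiling, growth rate, midpoint

    def logistic(t):
        return L / (1 + math.exp(-k * (t - t0)))

    def exponential(t):
        return logistic(0) * math.exp(k * t)  # matched initial value and rate

    for t in range(8):
        print(t, round(logistic(t), 3), round(exponential(t), 3))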
AI research has always been a series of occasional great leaps between slogs of iterative improvements, from Turing and Rosenblatt to AlexNet and GPT-3. The LLM era will result in a few things becoming invisible architecture* we stop appreciating and then the next big leap starts the hype cycle anew.
*Think toll booths (“exact change only!”) replaced by automated license plate readers in just the span of a decade. Hardly noticeable now.
It's not just AI mania, it's been this way for over a decade.
When I first started consulting, organizations were afraid enough of lack of ROI in tech implementations that projects needed an economic justification in order to be approved.
Starting with cloud, leadership seemed to become rare, and everything was "us too!".
After cloud it was data/data visualization, then it was over-hiring during Covid, then it was RTO, and now it's AI.
I wonder if we will ever return to rationalization? The bellwether might be Tesla stock price (at a rational valuation).
If rationalization comes back, everyone will talk like in Michael Moore’s documentary about GM and Detroit. A manager’s salary after half a career will be around $120k, like in an average bank, and that would be succeeding. I don’t think we even imagine how much of a tsunami we’ve been surfing since 2000.
US technocapitalism is built on the premise of technological innovation driving exponential growth. This is why they are fixated on whatever provides an outlook for that. The risk that it might not work out is downplayed, because (a) they don’t want to hazard not being at the forefront in the event that it does work out, and (b) if it doesn’t work out, nobody will really hold them accountable for it, not the least because everybody does it.
After the mobile and cloud revolution having run out of steam, AI is what promises most growth by far, even if it is a dubious promise.
It’s a gamble, a bet on “the next big thing”. Because they would never be satisfied with there not being another “big thing”, or not being prominently part of it.
Riding hype waves forever is the most polar opposite thing to “sustainable” that I can imagine
It was the same with the cloud adoption. And I still think that cloud is expensive, wasteful and in the vast majority of cases not needed.
It's not "pure greed." It's keeping up with the Joneses. It's fear.
There are three types of humans: mimics, amplifiers, originators. ~99% of the population are basic mimics, and they're always terrified - to one degree or another - of being out of step with the herd. The hyper mimicry behavior can be seen everywhere and at all times, from classrooms to Tiktok & Reddit to shopping behaviors. Most corporate leadership are highly effective mimics, very few are originators. They desperately herd follow ('nobody ever got fired for buying IBM').
This is the dotcom equivalent of "every business must be e- and @-ified" (the advertising was aggressively targeted at that at the time). 1998-2000: you must be e-ready. Your hotdog stand must have its own web site.
It is not greed-driven, it's fear-driven.
They want to exfiltrate the customers' data under the guise of getting better "AI" responses.
No company or government in the EU should use this spyware.
It's not "fundamentally flawed". It is brilliant at what it does. What is flawed is how people are applying it to solve specific problems. It isn't a "do anything" button that you can just push. Every problem you apply AI to still has a ton of engineering work that needs to be done to make it useful.
I'd consider hallucinations to be a fundamental flaw that currently sets hard limits on the current utility of LLMs in any context.
I thought this for a while, but I've also been thinking about all the stupid, false stuff that actual humans believe. I'm not sure AI won't get to a point where even if it's not perfect it's no worse than people are about selectively observing policies, having wrong beliefs about things, or just making something up when they don't know.
> Every problem you apply AI to still has a ton of engineering work that needs to be done to make it useful.
Ok, but that isn't useful to me. If I have to hold the bot's hand to get stuff done, I'll just do it myself, which will be both faster and higher quality.
That’s not my experience at all, I’m getting it done much faster and the quality is on par. It’s hard to measure, but as a small business owner it’s clear to me that I now require fewer new developers.
You’re correct, you need to learn how to use it. But for some reason HN has an extremely strong anti-AI sentiment, unless it’s about fundamental research.
At this point, I consider these AI tools to be an invaluable asset to my work in the same way that search engines are. It’s integrated into my work. But it takes practice on how to use it correctly.
> for some reason HN has an extremely strong anti-AI sentiment
It's because I've used it and it doesn't come even close to delivering the value that its advocates claim it does. Nothing mysterious about it.
I think what it comes down to is that the advocates making false claims are relatively uncommon on HN. So, for example, I don't know what advocates you're talking about here. I know people exist who say they can vibe-code quality applications with 100k LoC, or that guy at Anthropic who claims that software engineering will be a dead profession in the first half of '26, and I know that these people tend to be the loudest on other platforms. I also know sober-minded people exist who say that LLMs save them a few hours here and there per week trawling documentation, writing a 200 line SQL script to seed data into a dev db, or finding some off-by-one error in a haystack. If my main or only exposure to AI discourse was HN, I would really only be familiar with the latter group and I would interpret your comment as very biased against AI.
Alternatively, you are referring to the latter group and, uh, sorry.
The whole point I tried to make when I said “you need to learn how to use it” is that it’s not vibe coding. It has nothing to do with vibes. You need to be specific and methodological to get good results, and use it for appropriate problems.
I think the AI companies have over-promised in terms of “vibe” coding, as you need to be very specific, not at all based on “vibes”.
I’m one of those advocates for AI, but on HN it consistently gets downvoted no matter how I try to explain things. There’s a super strong anti-AI sentiment here.
There is no scenario where AI is a net benefit. There are three possibilities:
1. AI does things we can already do but cheaper and worse.
This is the current state of affairs. Things are mostly the same except for the flood of slop driving out quality. My life is moderately worse.
2. Total victory of capital over labor.
This is what the proponents are aiming for. It's disastrous for the >99% of the population who will become economically useless. I can't imagine any kind of universal basic income when the masses can instead be conveniently disposed of with automated killer drones or whatever else the victors come up with.
3. Extinction of all biological life.
This is what happens if the proponents succeed better than they anticipated. If recursively self-improving ASI pans out then nobody stands a chance. There are very few goals an ASI can have that aren't better accomplished with everybody dead.
What is the motivation for killing off the population in scenario 2? That's a post-scarcity world where the elites can have everything they want, so what more are they getting out of mass murder? A guilty conscience, potentially for some multiple of human lifespans? Considerably less status and fame?
Even if they want to do it for no reason, they'll still be happier if their friends and family are alive and happy, which recurses about 6 times before everybody on the planet is alive and happy.
It's not a post-scarcity world. There's no obvious upper bound on the resources AGI could use, and there's no obvious stopping point where you can call it smart enough. So long as there are other competing elites, the incentive is to keep improving it. All the useless people will be using resources that could be used to make more semiconductors and power plants.
My suspicion is that it's because they (HN) are very concerned this technology is pushing hard into their domain expertise and feel threatened (and, rightfully so).
While it will suck when that happens (and inevitably it will), that time is not now. I'm not one to say LLMs are useless, but they aren't all they're being marketed to be.
Or they might know better than you. A painful idea.
Painful? What's painful when someone has a different opinion? I think that is healthy.
I think MSFT really needs some validated user stories. How many users want to "Improve my writing", "Create an image", "Understand what is changed" (e.g. recent edits), or "Visualize my data"?
Those are the four use cases featured by the Microsoft 365 Copilot App (https://m365.cloud.microsoft/).
Conversely, I bet there are a lot of people who want AI to improve things they are already doing repeatedly. For example, I click the same button in Epic every day because Epic can't remove a tab. Maybe Copilot could learn that I do this and just...do it for me? Like, Copilot could watch my daily habits and offer automation for recurring things.
But do you (or MSFT) trust it to do that correctly, consistently, and handle failure modes (what happens when the meaning of that button/screen changes)?
I agree, an assistant would be fantastic in my life, but LLMs aren't AGI. They cannot reason about my intentions, don't ask clarifying questions (bring back ELIZA), and handle state in an interesting way (are there designs out there that automatically prune/compress context?).
>improve things they are already doing repeatedly. For example, I click the same button in Epic every day because Epic can't remove a tab. Maybe Copilot could learn that I do this and just...do it for me?
You could solve that issue (and probably lots of similar issues) with something like AutoHotkey. Seems like extreme overkill to have an autonomous agent watch everything you do, so it might possibly click a button.
Auto Hotkey doesn't work well for Epic manipulation because Epic runs inside of a Citrix Virtual Machine. You can't just read Window information and navigate that way. You'd have to have some sort of on-screen OCR to detect whether Epic is open, has focus, and is showing the tab that I want to close. Also, the tab itself can't be closed...I'm just clicking on the tab next to it.
Doable in AutoHotkey. You can take a screenshot of what to look for, and tell AutoHotkey to navigate the mouse to it on screen if it finds it.
I've done similar things.
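Not AutoHotkey itself, but for the curious, the same screenshot-matching idea in Python with pyautogui looks roughly like this. A sketch only: "epic_tab.png" is a hypothetical screenshot you'd capture yourself, and the confidence option needs opencv-python installed:

    import pyautogui

    def click_if_visible(image_path, confidence=0.8):
        # Look for the saved screenshot anywhere on screen.
        try:
            point = pyautogui.locateCenterOnScreen(image_path,
                                                   confidence=confidence)
        except pyautogui.ImageNotFoundException:
            point = None                  # newer pyautogui raises
        if point is not None:             # older versions return None
            pyautogui.click(point.x, point.y)
            return True
        return False

    click_if_visible("epic_tab.png")      # e.g. run at login or on a timer

Because it matches pixels rather than window handles, it doesn't care that Epic is hiding behind Citrix.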
And in an ideal world, one could report this as a bug or improvement and get it fixed for every single user without them needing to do anything at all.
Well, it isn't every user. We use a version of Epic called Epic Radiant. It's designed for radiologists. The tab that always opens is the radiologist worklist. The thing is, we don't use that worklist for procedures (I'm an interventional radiologist). So that tab is always there, always opens first, and always shows an empty list. It can't be removed in the Radiant version of Epic.
I'm sure you have, but try bringing that up with Epic instead of introducing AI slop and data gathering into HIPAA workflows.
But why would Epic spend money improving or fixing their software? If they spend money developing their product then they can't spend that money on their adult playground of a campus!
I think what people want in the long term is truly malleable software: https://manuel.kiessling.net/2025/11/04/what-if-software-shi...
I can’t find any use case for Copilot at all, and I frequently “sell” people Microsoft 365. (I don’t earn a commission; I just help them sign up for it.) I cannot come up with a reason anyone needs Copilot.
Meanwhile I spent 3-4 hours working with a client yesterday using Dreamhost’s free AI tools to get them up and running with a website quickly whilst I configured Microsoft 365, Cloudflare, email and so forth for them.
> Like, Copilot could watch my daily habits and offer automation for recurring things.
We're working on it at https://github.com/openadaptai/openadapt.
I actually would like it to improve my writing. Problem is LLMs aren't particularly good for this (yet).
> Copilot could watch my daily habits and offer automation for recurring things
Pretty sure the advertising department already watches you and helpfully suggests things that you need to buy.
If you click through to the article shared yesterday[0]:
> Microsoft denies report of lowering targets for AI software sales growth
This Ars Technica article cites the same reporting as that Reuters piece but doesn't (yet) include anything about MSFT's rebuttal.
[0]: https://news.ycombinator.com/item?id=46135388
Semantics + Spin
The difference between poison and medicine is the amount. AI is great and very useful, but they want the AI to replace you instead of supporting your needs.
"AI everywhere" is worse than "AI nowhere". What we need is "AI somewhere".
That's what we had before LLMs. Without the financially imposed contrivance of it needing to be used everywhere, it was free to be used where it made sense.
It's almost a revenge of the engineers. The big players' path to "success" has been to slap together some co-pilot loaded with enterprise bloat and try to compete with startups that solve the same problems in a much cleaner way.
Meanwhile, they believed the market was already theirs—so their logic became: fire the engineers, buy more GPUs.
I have mixed feelings about this. I've interviewed several people who were affected by these layoffs, and honestly, many of them were mediocre engineers by most measures. But that still doesn't make this a path to success.
>I've interviewed several people who were affected by these layoffs, and honestly, many of them were mediocre engineers by most measures. But that still doesn't make this a path to success.
How mediocre are we talking about here? (I’m curious)
Very poor & mediocre.
You can find secret little pockets within Microsoft where individuals & small teams do nothing at all, day in and day out. I mean literally nothing. The game is to maximize life and minimize work at the expense of the company. The managers are in on the game and help with the cover-up. I find it hilariously awesome and kind of sad at the same time.
Anyway, one round of layoffs this year was specifically targeted at finding these pockets and snuffing them out. The evidence used to identify a given pocket was slowly built up over a year ahead of time. It's very likely that these pockets also harbored poor and mediocre developers; it stands to reason that a poor or mediocre developer is more likely to gravitate to such a place.
Not saying all the developers that were laid off were in a free-loader pocket, or that this cohort must be the ones that were interviewed. I'm only suggesting that the mediocre freeloaders form a significant slice of the Venn diagram.
Damn, that is crazy. How do you measure it? AI use? I hope you saying this doesn't affect the employment prospects of the ones that aren't "mediocre" but happened to be on those teams.
I'm sure it's difficult enough for people to find work right now without you putting a knife in their back on the way out.
I don't know if AI was used, but I do know that git contributions were used as a starting point. From what I've heard, it was just individuals and the managers that enabled it.
Aren't most of us mediocre?
By some metric yes, that was the point of my question.
Is this “they’re not Carmack”? “They messed up their explanation of the CAP theorem”? “They can’t write a for loop”?
Even DevBlogs and anything related to Java, .NET, C++, and Python out of Redmond seem to be all about AI, while everything else is now a low-priority ticket on their roadmaps.
No wonder there is this exhaustion.
Anyone who has had the pleasure of being forced to migrate to their new Fabric product can tell you why sales are low. It's terrible, not just because it's a rushed, buggy pile of garbage they want to alpha test on users, but because of the "AI first" design they are forcing into it. They hide so much of what's happening in the background that it's hard to feel like you can trust any of it. Like agentic "thinking" models with zero way to look into what they did to reach a conclusion.
Every new Microsoft product is like this. It all has that janky, slapped together at the last minute feeling.
It's so bizarre, because their dev tools and frameworks are so well thought out. You'd think if they were using those, things would come out not janky. But I don't think they do use their own dev tools, and I also don't think it would help.
I can see why Microsoft likes AI and thinks it's great for writing code.
The kind of code AI writes is the kind of code Microsoft has always written.
"after salespeople miss their quotas."
Well.. that's certainly one way to view it. The other is:
"because the company set unrealistic expectations."
I'm sure this will slow down the growth of "AI datacenters." I'm sure of this.
Every salesperson will tell you that good products sell themselves and therein lies the real story.
Super interesting how this arc has played out for Microsoft. They went from having this massive advantage in being an early OpenAI partner with early access to their models to largely losing the consumer AI space: Copilot is almost never mentioned in the same breath as Claude and ChatGPT. Though I guess their huge stake in OpenAI will still pay out massively from a valuation perspective.
It's because Copilot isn't (just) a model, it's a brand that's been slapped on any old rubbish.
If Clippy were still around, that'd have been rebranded as Copilot by now.
If they resurrected Clippy and made it the face of their AI, I would switch in a heartbeat.
https://felixrieseberg.github.io/clippy/
That is impressive! I really want Clippy to chime in, tell me it looks like I am writing a letter, and offer to help.
Microsoft seems to be actively discarding the consumer PC market for Windows. It's gamers and enterprise, it seems. Enterprise users don't get a lot of say in what's on their desktop.
They made Copilot the term for AI and smeared it everywhere to the point that it has no meaning and therefore no usage when talking about AI.
Hearing similar stories play out elsewhere too with targets being missed left and right.
There’s definitely something there with AI, but there's a giant chasm between reality and the sales expectations required to make the current financial engineering around AI make any sense.
A bit tangential and pedantic, but:
> At the heart of the problem is the tendency for AI language models to confabulate, which means they may confidently generate a false output that is stated as being factual.
"Confabulate" is precisely the correct term; I don't know how we ended up settling on "hallucinate".
The bigger problem is that, whichever term you choose (confabulate or hallucinate), that's what they're always doing. When they produce a factually correct answer, that's just as much of a random fabrication based on training data as when they're factually incorrect. Either term falsely implies that they "know" the answer when they get it right, but "confabulate" is worse because there aren't "gaps in their memory"; they're just always making things up.
About 2 years ago I was using Whisper AI locally to translate some videos, and "hallucinations" is definitely the right phrase for some of its output! So just like you might expect from a stereotypical schizo: it would stay on-task for a while, but then start ranting about random things, or "hearing things", etc.
>fabricate imaginary experiences as compensation for loss of memory
Uh, TIL. This is wildly different to the Spanish meaning, confabular means to plot something bad (as in a conspiracy).
Which is a weird evolution in both languages, as the Latin root seems to mean simply “talking together”.
I mean, neither is a great term, in that they both refer to largely dissimilar psychological phenomena, but confabulate is at least a lot _closer_.
Too much money is being spent on a technology that isn't ready to do what they're saying it can do. It feels like the 3G era all over again: billions spent on 3G licences which didn't deliver what they expected.
Meanwhile, divisions that make actual products people want are expected to subsidize the hype department: https://www.geekwire.com/2025/new-report-about-crazy-xbox-pr...
It would appear Xbox is not subsidizing anything, since Microsoft's gross profit margin is ~70%.
Although that depends on total revenues too (low margin on high revenue can be better than high margin on low revenue).
>> The Information notes that much of Microsoft’s AI revenue comes from AI companies themselves renting cloud infrastructure rather than from traditional enterprises adopting AI tools for their own operations.
And MS spends on buying AI hardware. That's a full circle.
What can you even do in the MS enterprise ecosystem with their Copilot integration?
Is it just for chatting? Is it a glorified RAG?
Can you tell Copilot to create a presentation? Make a visualisation in a spreadsheet?
It wants to help create things in Office documents, I imagine just saving you the copy and paste from the app or web form. The one thing I tried to get it to do was to take a spreadsheet of employees and add a column with their office numbers (it has access to the company directory). The response was something like "here's how you would look up an office number, you're welcome!"
It is functional at RAG stuff on internal docs but definitely not good - not sure how much of this is Copilot vs corporate disarray and access controls.
It won't send emails for me (which I would think is the agentic MVP), but that is likely a switch my organization daren't turn on.
Tl;dr: it's valuable as a normal LLM, very limited as an add-on to Microsoft's software ecosystem.
Chatting and everything you normally do in chats is there. Needle-hunting info out of all my Teams group chats is probably my favorite thing. It can retrieve info out of SharePoint, I guess.
Biggest complaint for me personally is that you run out of context very quickly. If you are used to having longer running chats on other platforms you won't be happy when Copilot tells you to make a new chat like 5 messages in.
Most of my clients are only interested in meeting minutes, and Otter does that for 25% of the price. I think in any given business the number of people who actually use textgen regularly is pretty low. My workplace is looking to downsize licenses and asking people to use it or lose it, because $21/user/mo is too much for an every-now-and-then novelty.
It's basically Clippy without the funny animations.
Why wasn't AI able to help them meet their sales targets?
Can't Microsoft supercharge its workflow with these five weird prompts that bring a new layer of intelligence to its productivity:
https://fortune.com/2025/09/02/billionaire-microsoft-ceo-sat...
Hopefully this is the beginning of the trough of disillusionment, and the steady return of rationalism.
Despite having an unlimited war chest, I'm not expecting Microsoft to come out a winner of this AI race, even though they have the necessary resources. The easy investment was to throw billions at OpenAI to gain access to their tech, but that puts them in the weird position of not investing heavily in cultivating their own AI talent, and of not controlling their own destiny by having their own horse in the race with their own SOTA models.
Apple's having a similar issue: unlimited wealth, yet outsourcing to external SOTA model providers.
Have we finally reached peak AI already? In that event we will see the falling down phase next.
Not until we put it on the blockchain
Yea, we're getting there. I've had some people reach out to me who only do so once a hype bubble is well formed.
What do you do and why do people reach out to you?
I'm the "computer guy" in an IRL social group, same thing when blockchain was hyping
I also have a PhD in CS/ML, work with a healthcare AI company, and am building my own agentic setup
But is it selling enough to regular Windows Home users? If MS issues an ultimatum, "you need to buy AI services to use Windows," they might get a bunch more clueless subscribers. In the same way as there's no ability to set up Windows without internet connection and MS account, they could make it mandatory to subscribe to Copilot.
I think Microsoft's long-term plan is exactly that: to make Windows itself a subscription product. Windows 12 Home for $4.99 a month, Copilot included. It will be called OSaaS.
I think you wrote Ass OS wrong :)
> In the same way as there's no ability to set up Windows without internet connection and MS account
Not true. They're clearly unwilling or unable to remove this code path fully, or they would have done so by now. There's just a different workaround for it every few years.
There’s probably some compliance requirement that it’s technically possible to set it up without an internet connection, so they leave it there, but make it unreasonably difficult for a majority to do it.
I went to Ignite a few weeks ago, and the theme of the event and most talks was "look at how we're leveraging AI in this product to add value".
Separately, the theme from talking to Every. Single. Person on the buy-side was a gigantic eye roll: "yes, I can't wait for AI to solve all my problems."
Companies I support are being directed by their presidents to use AI. It's literally a solution in search of a problem.
Why do they have salespeople when AI could have done the job?
"They just have no taste" - Steve Jobs
Microsoft had a great start with their exclusive rights to OpenAI tech, but they're not capable of really talking with developers inside those large companies the way Google and AWS are, and both are rapidly catching up.
Good. Go make your OS useful and stop alienating your enterprise customers.
It truly looks like they didn’t learn anything from Clippy…
Top signal. Phase transition is imminent.
Blaming slow sales on salespeople is almost always a scapegoat. Reality is that either the product sells or it doesn’t.
Not saying that sales is useless, far from it. But with an established product that people know about, the sales team is more of a conduit than they are a resource-gathering operation.
> Reality is that either the product sells or it doesn’t.
Why do people use this useless phrase template?
Yeah, the point is that it's not selling, and it's not selling because people are getting increasingly skeptical about its actual value.
> it's not selling because people are getting increasingly skeptical about its actual value.
So why are the salespeople being blamed?
I think the point of this headline is that they're not being blamed in this one instance.
I worked car sales for years. The same large dealership can have a person anyone would call a decent salesperson making $4k a month. There were also two people at that dealership making $25k+ a month each.
If your organization is filled with the $4k type and not the $25k type, you're going to have a bad time.
I was #7 in the US while working at a small dealership. I moved to the large dealership mentioned above, and instantly that dealership became #1 for that brand in the country, something they had never done before. Not only did I sell 34 cars a month without just cannibalizing others' sales, I showed everyone that you can show up one day and do well, so there weren't many excuses. The output of the entire place went up.
So, depending on the pay plan and hiring process, who exactly is working at Microsoft right now selling AI? I honestly have no idea. It could be rock stars and it could be the $4k guys happy they're making $10k at Microsoft.
can I suck your dick?
No but several women that came to buy cars (some with male coworkers, or so they told me) eventually did over the years.
Tbh this wasn't some crazy brag post, as making $250-300k a year working 80 hours a week isn't all that impressive when software devs make more than that easily, and the top guys make many multiples of that.
Lol "Microsoft can't make something work ergo the technology is not feasible".
"The technology is not useful", at least in enterprise contexts, is what this comes out to. Which is really where the money is, because some vibecoder paying $20/mo for Claude really doesn't matter (especially when it costs $100/mo to run inference for his queries in the first place). Enterprise is the only place this could possibly make money.
Think about it: MS has a giant advantage over every other AI vendor, that they can directly insert the product into the OS and LOB apps without the business needing to onboard a new vendor. This is best case scenario, and by far the easiest sell for these tools. Given how badly they're failing, yeah, turns out orgs just don't see the value in it.
Next year will be interesting too: I suspect a large portion of the meager sales they managed to make will not renew, it'll be a bloodbath.
MS has a giant advantage over every other vendor for all kinds of products (including defunct ones). Sometimes they function well, sometimes they do not. Sometimes they make money, sometimes they do not. MS isn't the tech (or even enterprise-tech) bellwether.
Considering that enterprise work is typically characterized by perfunctory tasks, information silos, and bit rot, it's a perfect application for LLMs. It's just that Microsoft kind of sucks at a lot of things.
Turns out people don't want to pay astronomical sums for shitty, hallucinating AI when it really matters.
This is annoying because Ars is one of the better tech blogs out there, but it still has instances of biased reporting like this one. It's interesting to decipher this article with an eye on what they said, what they implied, and what they didn't say.
It would be good if a salesperson could chime in to keep me honest, but:
1. There is a difference between sales quotas and sales growth targets. The former is a goal; the latter is aspirational, a "stretch goal." They were not hitting their stretch goals.
2. The stretch goals were, like, doubling the sales in a year. And they dropped it to 25% or 50% growth. No idea what the adoption of such a product should be, but doubling sounds pretty ambitious? I really can't say, and neither did TFA.
3. Only a fraction met their growth goals, but I guess it's safe to assume most hit their sales quotas, otherwise that's what the story would be about. Also, this implies some DID hit their growth goals, which implies at least some doubled their sales in a year. Could be they started small so doubling was easy, or could be a big deal, we don't know.
4. Sales quotas get revised all the time, especially for new products. Apparently, this was for a single product, Foundry, which was launched a year ago, so I expect some trial and error to figure out the real demand.
5. From the reporting it seems Foundry is having problems connecting to internal data sources... indicating it's a problem with engineering, and not a problem with the AI itself. But TFA focuses on AI issues like hallucinations.
6. No reporting on the dozens of other AI products that MSFT has churned out.
As an aside, it seems data connectivity issues are a stickier problem than most realize (e.g. organizational issues) and I believe Palantir created the FDE role for just this purpose: https://nabeelqu.substack.com/p/reflections-on-palantir
Maybe without that strategy it would be hard for a product like this to work.
For the first time I have begun to doubt Microsoft's chosen course. (I am a retired MS principal engineer.) Their integration of Copilot shows all the taste and good tradeoff choices of Teams, but to far greater consequence. Copilot is irritating. MS's dependence on OpenAI may well become dicey, because that company is going to be more impacted by the popping of the AI bubble than any other large player. I've read that MS can "simply" replace ChatGPT by rolling their own -- maybe they can. I wouldn't bet the company on it. Is Google going to be eager to license Gemini? Why would they?
For the first time? What about Zune, Nokia/Windows Phone, Windows Vista, attacking open source for decades, Scroogled campaign, all the lost Ballmer years, etc. Microsoft has had tons of blunders over time.
Again? https://news.ycombinator.com/item?id=46135388
Not only that but the headline and story changed by the time Ars went to print:
Microsoft denies report of lowering targets for AI software sales growth
They don't give a shit about their users, but their own salespeople are worthy of this morsel of mercy.
People are wondering how we got here when these AIs make so many mistakes.
But the one thing they're really good at is marketing.
That's why it's all over LinkedIn, etc.; marketing people see how great it is at marketing and think it must be great at everything else too.
I wonder if it’s because Microsoft is hyper focused on a bunch of crap people don’t want or need?
Is "The Information" credible? It's the sole source.
Microsoft is strange because it reports crazy growth numbers for Azure, but I never hear about any tech company using Azure (AWS and GCP dominate here). I know it's more popular in big enterprises, banks, pharma, government, etc., and companies like OpenAI use their GPU offerings. Then there's all the Office stuff (SharePoint, OneDrive, etc.). Who knows what they include under the Azure numbers. Even GitHub can be considered "cloud."
My point is, outside of Copilot, very few consider Microsoft when they are looking for AI solutions, and if you're not already using Azure, why would you even bother to check what they offer? At this point, their biggest ticket is their OpenAI stake.
With that being said, I should give them some credit. They do some interesting research and have some useful open source libraries they release and maintain in the AI space. But that's very different than building AI products and solutions for customers.
[dupe] https://news.ycombinator.com/item?id=46135388
made up story
AI is people looking at the EV hype and saying: I'll 100x it.
It has all the same components, just on a much higher scale:
1. A billionaire con man convincing a large part of the market and industry (Altman in AI vs. Musk in EV) that the new tech will take over in a few years.
2. Insane valuations not supported by actual ROI.
3. Very interesting and amazing underlying technology.
4. Governments jumping on the hype and enabling it.
The valuations are based on value, not revenue.
I wonder what part of these failed sales is due to GDPR requirements in the enterprise IT industry. I have my own European view, and it seems our governments are treating the matter very seriously. How do you ensure an AI agent won't leak anything? There have already been cases where an agent wiped an entire database or cleared a disk and was very "sorry" about it afterwards. Is the risk worth it?
Having worked with this stuff a lot, privacy isn't the biggest problem (though it is a problem). This shit just doesn't work. Wide-eyed investors might be willing to overlook the 20% failure rates, but ordinary people won't, especially when a single mistake can cost you millions of dollars. In most places I've seen AI shoved - especially Copilot - it takes more time to read and dismiss its crappy suggestions than it does to just do the work without it. But the really insidious case is when you don't realize it is making shit up and then you act on it. If you are lucky you embarrass yourself in front of a customer. If you are unlucky you unintentionally wipe out the production database. That's much more of an overt and immediate concern than leaking some PII.