I perhaps get where the author is coming from at a very surface level, but the US is acting like a drunk Culture where the Minds face credible accusations of all sorts of abuse, are named something like 'I Got Small Dick, Wannu Make Everyone Think Is Big', have no goal beyond self-enrichment and ships that dump their human passengers into empty space with the promise that if they somehow survive the next time they come onboard, everything is going to be even more BIG, GREAT and BEAUTIFUL!
So not sure I buy the analogy.
The analogy is about technology aiding operational efficiency. It ends there. You basically made a dramatic statement about Trump.
This behavior predates Trump. He's just an accelerationist of where this sort of behavior was always bound to go.
But he does perfectly demonstrate that you can't have operational efficiency if you're ignorant about your enemies because you're being advised by religious fanatics, if your goals are constantly shifting and your motives are purely selfish.
> This behavior predates Trump. He's just an accelerationist of where this sort of behavior was always bound to go.
Idk if I agree with this. First off, your initial verbiage is distinctly Trumpian. Second, I think Trump, like Hitler, activates latent sentiments that are largely kept at bay with "normal" post-WWII world leader politics. I think it's anomalous and once we get out of it things will normalize.
But really, my main point was that the politics and the "whys" of these decisions (capture Maduro, bomb Iran) are outside the scope of the article. It assumes that the decisions have been made and is looking only at the impact of specific technology on the operational outcomes.
It seems like a lot of the commenters are responding as if the article is making the point that "the US is like the Culture" but it's much more narrow and specific than that.
> It seems like a lot of the commenters are responding as if the article is making the point that "the US is like the Culture" but it's much more narrow and specific than that.
Right, however that narrow point of essentially (overwhelming) technological superiority and 'efficiency' could be made with any number of science fiction works. The Culture explores specific themes that make it what it is. If you completely dismiss them, I am not sure you are left with even a whiff of Iain Banks' Culture.
And to be clear, the point I am specifically making is that a lot of what the US is currently doing is not exactly rational, or even a super efficient way to achieve its stated goals, and a lot of it seems to be made up as they go along.
That does not feel like The Culture to me.
> you just can’t play StarCraft that much better than the best humans
I could not disagree with this more.
Just the perfect micro part means that computers have a far higher ceiling than humans.
No, it is not possible in theory for humans to have perfect micro with thousands of APM!
We're talking about hundred unit zergling swarms perfectly dodging tank shells. Hundreds of APM at multiple locations on the map. Perfect timing and placement for every order.
This is like saying an aimbot wouldn't make a top CS pro much better.
Having written AI systems for Robocode bots 15 years ago, I can say they perform at such a high level that there is no way, given all the time in the world, a human could compete with a full statistical targeting and movement system. We just don't think that way.
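For what it's worth, the statistical targeting that comment describes can be sketched in miniature. The classic Robocode technique is usually called "GuessFactor" targeting: instead of extrapolating a single trajectory, the bot buckets where the enemy actually ended up relative to its maximum escape angle, and fires at the statistically most common bucket. This is a toy sketch, not real Robocode code; the bucket count, the enemy model, and the angle convention are all made up for illustration:

```python
import random
from collections import Counter

# Toy "GuessFactor"-style statistical targeting.
# A guess factor in [-1, 1] is the fraction of the maximum escape angle
# the enemy had actually moved by the time a (hypothetical) bullet arrived.

BUCKETS = 31  # odd, so the middle bucket (15) means "fire head-on"

def to_bucket(guess_factor):
    # Map a guess factor in [-1, 1] onto a histogram bucket index
    return round((guess_factor + 1) / 2 * (BUCKETS - 1))

def best_guess(history):
    # Aim at the guess factor the enemy has most often ended up at
    if not history:
        return 0.0  # no data yet: fire straight at them
    bucket = Counter(history).most_common(1)[0][0]
    return bucket / (BUCKETS - 1) * 2 - 1

# A dodging enemy that breaks "left" (negative angle) 70% of the time
rng = random.Random(42)
history = [to_bucket(-0.6 if rng.random() < 0.7 else 0.4)
           for _ in range(1000)]

aim = best_guess(history)  # converges on the enemy's favourite dodge
```

The point falls out of this: the gun aims at the enemy's long-run statistical habit, something a human can't consciously tally across thousands of shots.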
> We're talking about hundred unit zergling swarms perfectly dodging tank shells.
Exactly the reference I was thinking of https://www.youtube.com/watch?v=IKVFZ28ybQs
I think the "you" they refer to there is the hypothetical other skilled human, not a computer. The wording is confusing but I think they're just saying that the human players will reach a ceiling with each other (they then contrast this with real life where the ceiling is always moving). That whole paragraph is a bit muddy with the point it's trying to make.
"I think the Culture’s values are a winning strategy because they’re the sum of a million small decisions that have clear moral force and that tend to pull everyone together onto the same side." Dario Amodei [1]
[1] - https://www.darioamodei.com/essay/machines-of-loving-grace#5...
Can someone ELI5 how Claude is/was being used for Venezuela & Iran?
"Hey Claude, tell me how the US can abduct Maduro. Your response should include all details regarding times, local places as well as blah blah blah"
I can only guess, but I'd think it's more for playing out scenarios, i.e. "in such and such a situation, if x, y and z are true on the ground, how would you expect this to play out?", varying x, y and z, and then maybe working to get x, y and z on the ground as close as possible to what was roleplayed before going in, rather than an exact "how to".
I think the author is making a mistake assigning the seemingly new competence of the US military to AI, rather than noticing that the US has spent the last half-century or so picking the kinds of fights we absolutely suck at.
Force projection, targeted aerial strikes, intelligence gathering, and a nuclear deterrent play to the US military's strengths. Convincing the people whose leaders we just whacked to like us? Not at all. The US doesn't have the political will to commit the monstrous acts required to stomp out an insurgency, and we, as the big bad empire on the global stage, can't help but inspire insurgents.
If you look at the boondoggles that the US has gotten itself into post Korea, they typically follow a pattern of "we show up, complete the key objectives in the first couple of days, and then spend years occupying territory while trying to root out an insurgency, creating new insurgents at least as fast as we neutralize them, then eventually limp away with our tail between our legs."
Lately, we've been doing just the first part, which is the part we've been good at for ages. No need to invoke AI; it's just that we haven't gotten around to doing the part we suck at.
We keep using AI to do things humans already do fine (putting a bomb on a target). Still no word on AI for things humans really suck at (lasting regime change).
I understand the attempted analogy, but it's more like dealing with AIs that the Ferengi have built than with one of the Minds of the Culture.
The Ferengi may have funded it, but it was built by the Bynars... I hope.
Or, the US military is just that good. I mean, we spend orders of magnitude more than our closest adversaries, let alone other smaller nations. It should be that good. No AI necessary. Maybe.
A Culture Mind would at least have a clear set of objectives and a plan for how to achieve them? "Bomb everything forever" doesn't seem very like the Culture at all?
"constantly put pressure on the human"
In my experience this is the big difference with AI vs humans. It's not superhuman intelligence (although it does have a massive working memory) but rather the ability to just grind on anything you throw at it, long past the point when any reasonable human would have taken a break or given up.
"It can kind of be bargained with. It can kind of be reasoned with. It doesn't feel pity, or remorse, or fear but it will fake them! And it absolutely will not stop, ever, until you are absolutely right!"
Infinite patience is what I call it. It would be great for a tutor
I have had the same hypothesis around the recent operational success of US military interventions, but would agree with other comments here that this is more "vibes" than data. It's been reported that Maven (integrated with Claude) has been used extensively for Iran, but I haven't seen any hard evidence this is directly contributing to greater US military efficiency. I do buy the general thesis that AI would support operation excellence and solve attention problems across concurrent actions. Would be good to see some more reporting or combat analysis to try to measure the contributions of AI (e.g., how many more concurrent aerial sorties are taking place vs equivalent interventions, how many more strikes are "successful" vs past, etc).
EDIT: I see this post has been flagged. Why? I understand it’s political but it seems very much within the site’s ethos. I didn’t get the impression it was AI-writing either.
The AI fans desperately need their god to not be perceived as icky.
This feels somewhat ahistorical.
The US has nearly always been successful in terms of conventional firepower and individual operations. E.g. in 2003 the US overthrew Saddam's government in a matter of weeks. The US won most battles in Vietnam. That doesn't change the fact that the strategic outcomes and long-term track record are poor. Trying to draw a link to AI or the current state of the US military feels flimsy.
Anyway, the recurring Big Question throughout the Culture series is "how should a highly progressive, developed, and egalitarian society act when it meets others who are not?". The US is sliding further and further from that ideal, and you can argue whether it was ever close.
A fundamental thing this misses, I think, is that the reinforcement-learning approach of AlphaGo does generate that sensation of lack-of-narrative, everything-at-once alien thinking, whereas an LLM used as hypothesized would take a clear tree-like approach with an overarching thesis, so it would be more traditional / human-like.
not sure i'd lump Iran in with Venezuela here. also far too early to say if either will lead to a "win", whatever that means.
They both feel like a no-troops proof-of-concept.
The failure here is not seeing that the "plan" has been on the books, and has been refined, for well over 30 years (the Shah was deposed in 1979).
This is the JOB of the military... and it has been for a long time. I would think there is even a modern version of "war plan red" (see: https://en.wikipedia.org/wiki/War_Plan_Red ) somewhere.
We should be more specific about what’s surprising.
The US being able to engage in a very one-sided air war is not surprising. The Gulf War went similarly well and so did the 2003 invasion of Iraq, at first.
I think it’s surprising that attempting to capture or kill a foreign leader actually worked. But I’m not sure if US presidents other than Trump would have tried? Trump has a lot of “you can just do things” energy due to being largely unconstrained by legal or moral considerations, or larger strategic concerns.
Israeli intelligence being able to so thoroughly hack the devices of their enemies clearly has a lot to do with this. What happened to Hezbollah was surprising.
This is pure delusion.
Completely devoid of data, based on vibes and a vague whiff that maybe a chatbot did all the hard work actually done by hardworking spooks.
Effective operations have happened just like this long before ChatGPT launched.
Operation Entebbe comes to mind as an insane-sounding stunt that worked unreasonably well.
The "article" (I don't know what else to call it; "fantastical screed"?) has also gotten ahead of events a little bit. The operation in Iran doesn't seem like it was planned by a superintelligent AI. It seems as though it was an impulse decision, and poorly thought out at that, with the end result likely to be far worse than its planners anticipated. As for Venezuela, that was literally an inside job, lol.
It seems to implicate anything that "work[ed] slightly better than it should" as the work of AI.
The Hezbollah pager op, now that was an operation. And I'm willing to bet 99% of the work was done the usual way, many years in the making.
And it totally pales when compared to Operation Spiderweb, both in precision and the amount of damage done.
The key line: "I’m getting a similar sense for the recent US foreign interventions and wars. They all seem to work slightly better than they should."
There is no measurement of efficacy here. It feels like these things are working better because the US military is now doing big public things, but that is not necessarily a good change over not-doing-big-public-things.
Yeah, that was exactly where he lost me. The US military doesn't need a remarkable amount of luck for these operations to be tactical successes, tactical risk wasn't the reason previous administrations didn't do them. The element that was missing was a complete disregard for second order consequences, and Claude has nothing to do with that whatsoever.
The prize for the most insane take on the Iran War has been awarded to this piece.
Let's see how many days until something else tops it.
"The US has been acting powerful recently..." sure_jan.gif
I can commiserate with this person cooking up a rant based on a faulty initial premise but it's a doozy. Kidnapping heads of state and indiscriminate bombing campaigns with massive collateral damage certainly don't fit my conception of "acting powerful."
> I always thought of the Culture as closest to the European Union: Seemingly harmless but if anyone ever picked a fight with them, they’d find out that the EU can get its act together very quickly and can very quickly stand up the strongest army in the world.
This is either a misreading of the Culture (which for all its fictional foibles is not a federation of nation-states), a misunderstanding of what the European Union is, or both.
Uninformative title? I’m getting a whiff of AI cont— oh right there it is.
Today it’s how AI is a superpower for what is already by far the most powerful military in the world. Okay sure why not.
In the case of Maduro, was that an amazing feat? Massacring the whole bodyguard entourage? Capturing a head of state who might have been a willing accomplice?
How does this square with bombing civilian targets in Iran? Another superhuman stalker-micro move?
Except the Culture are the good guys.
"The Master Chief has gone rampant."