Wow I’d never have expected Kate Davies to show up on Hacker News. I think it’s important to understand her background a bit when she talks about knitting as a matter of life and death. She was a scholar of 18th century literature before she suffered a stroke young[0]. She focused on knitting as a means of recovery and never looked back. She built a business and a community and attributes a lot of her physical and mental health to knitting.
So while this post hopefully strikes a chord with anyone in a creative field, she embodies a particular type of person for whom slop is a genuine risk to their being. Not their job; their whole personhood. In a world where slop has chased out the humanity of things and the bullshit machines fill all content, what are the chances someone like her could build a second life better than her first?
I wonder if (or, more accurately hope that) this kind of slop will eventually die out as people realise how little care is put into it. I am more and more convinced that if the devil existed he'd take care of the bigger stuff, but have an army of little devils that encourage people to do things like make unsupervised automated podcasts about knitting, relentlessly chipping away at the messy joys of living.
I think a lot of the value in these AI podcasts is just the self-validation of the listener. It really doesn't matter to the listener if there's nothing between Egyptian socks and Ravelry, because the point was to feel good, not to learn.
But I also have a long-standing pet peeve about news articles that include random stock footage. If humans can get away with including a picture of _any_ ship when talking about a specific ship (one that may never have been in the harbor the picture shows), then why does the AI need to be correct?
At the start of Good Omens, there’s a scene where demons are sharing their recent misdeeds. A couple of them are sharing “classic” demon stuff like killing and possessing, but Crowley (the protagonist demon) shares more modern evil deeds, such as creating traffic jams.
It's been years, but I seem to recall that Crowley specifically is very proud about making sure some motorway project got botched, because the continual drip of suffering from the accumulated jams and road rage makes him look really good in the spreadsheets even though he's not much for the classical showy stuff. Millions of little instances of suffering adding up year on year, instead of a handful of incidents of really intense suffering.
I thought it was that he altered planning documents and even went and moved physical markers to make the M25 the shape of the ancient evil sigil Odegra (this is from memory; I just read it a lot as a teenager), so every angry drive around it powers the sigil.
Yes, I think you’re right. And if I recall correctly, near the end he’s trying to get somewhere but gets stuck in traffic by the same problem he caused.
Whoever decided to add silly audio effects to operating systems is surely one of these lesser devils. Just think of how many people have been aggravated by a colleague's laptop chiming when it "wakes up" every day, or by an inappropriate notification sound during a presentation. On any desktop PC I interact with, I do my bit by disabling all sound effects before I continue.
For a long time I thought that the AdSense business model was ultimately doomed because I assumed that people hate ads as much as I do. It turns out I was just wrong about what most people are willing to put up with.
I remember visiting a friend over a decade ago, and for some reason I had to use their computer for a bit. I was immediately taken aback by all the ads everywhere, and installed an ad blocker before anything else. They were very grateful, but the part that surprised me was that they were annoyed by the ads yet never thought to look for a way around them. It never even crossed their mind that it could be done, or to search for it.
Similarly, when my partner moved in I told her about the network-level adblocker and she kinda scoffed at it saying ads don't bother her. A few years later she started complaining that when she's out of the house she gets ads.
I'm afraid it'll lead to a weird music-ification of content.
Music can make you feel good and keep you engaged purely by exercising our pattern recognition.
AI videos and photos seem to have a similar effect. Even if it's not real, they encode enough patterns from good human work to be able to engage our attention.
Just providing people with an attentional escape is valuable on the internet.
Yeah, people will reflexively filter out the slop, eventually, but they'll do it by leaving the places that have been rendered worthless by its persistent presence.
The particular type of innovator ghoul that's enabled by generative AI dreams of filling the entire internet with bullshit content. Aggregators (media and content) should be actively pushing them out for their own long-term survival, IMO.
Just like Big Tobacco moved on to greener pastures in the developing world, Big Slop is targeting not us specifically, but the billions of new internet users who have connected over the past decade:
There's this (now old) meme called "Italian brainrot" - AI generated characters with vaguely Italian-sounding names like Bombardiro Crocodilo (note the incorrect spelling of the Italian word for crocodile).
One character stands out - Tung Tung Tung Sahur. Not only does it not sound Italian at all, that last word rang a bell.
Sahur (or Suhur) is the meal eaten before dawn during Ramadan.
After some digging I discovered this whole category originated in Indonesia. The country experienced an absolute explosion in the number of internet users in recent years and is home to internet phenomena which spread globally, but few in the west seem to realise that.
I didn't know (but should have assumed) AI-generated podcasts existed. That's depressing.
I imagined that if mankind had the ideal machine, one that could automate anything, we would get rid of dull office work and back-breaking physical labor, but not the things that are actually enjoyable: sharing with each other, entertaining each other, making art. I imagined a lively world of live performance and creation, since all subsistence work had been taken care of. Instead we might end up in the world of Fifteen Million Merits.
It seems people don't mind letting their minds be hacked by machines that can create the form of what they find enjoyable, if not the substance. But I guess there has always been slop, and a public for it. To imagine actual people wasting their limited time on Earth listening to these GPT logorrhea podcasts is truly depressing. The unchemical soma.
What are we even supposed to spend our days doing in this bright future of the AI champions? Stop automating away the things that give people purpose; tackle real problems instead.
The incentives are at odds. In this capitalist landscape, you create podcasts and blogs (or have them created) to attract an audience which then attracts those fat advertising dollars.
It's superficially true, currently. We've had generative AI for a few years and people are using it to make a quick buck. But even if the world had been taken over by communism, or if the Western Highlands of Papua New Guinea had got imperial ambitions and now we all lived in a gift economy, people would still be using generative AI to gain attention and status. This will work until it wears thin. Thinner.
> one of the most pernicious things about this particular kind of bullshit is the way it casts any form of critical scrutiny as a terrible failure of sensibility.
What a great line. And you'll probably notice this technique being used by very skilled bullshitters and master manipulators: any request for rigor or scrutiny is met by something like genteel condescension. You're treated as if you've committed a breach of etiquette, and that's one of the reasons the technique is powerful -- you're likely to feel embarrassed and, following that, to back off.
I like how the pictures got more and more sloporific through the essay.
It doesn't mention an important group being harmed: the creators who make high-quality, sincere podcasts about knitting. Their genuine content gets buried under a mountain of slop. In theory, recommendation algorithms ought to surface the best stuff, but that doesn't seem to align with incentives. Sad.
I remember this kind of slop from times well before the LLM explosion.
I'm specifically thinking of a print magazine that was designed to make you feel like you are a smart reader of science articles, without any useful information about the actual science or technology.
Yes, the article acknowledges this in the first paragraph by citing Harry Frankfurt's "On Bullshit" (1986). Of course bullshit (as well as even more insidious misinformation/propaganda) has always been around, but the incredible advances in its production and dissemination are worth considering. At some point, sheer quantity turns into its own quality. Indeed, I would argue these issues have always been underconsidered. The article is a kind of inoculation against bullshit that every generation requires again and again. People aren't born nearly skeptical enough, and the game keeps ever changing.
I actually don’t think the article is sufficiently vehement in calling out just how brain-frying this is. And how destructive on a societal level. The razor’s edge between being too uncritical and too cynical is hella narrow.
> I remember this kind of slop from times well before the LLM explosion.
Even if that were true (which I don’t think it is, this is a different kind of worthless content), you most definitely don’t remember it at this scale, and that’s a major point.
Interestingly, Inception AI seem to have pivoted from content slop for "gardening, [...] knitting, cooking" - or "things we can afford to be wrong" - to "AI Immigration Drafting Software for Law Firms": https://www.inceptionai.co/
I'm somewhat curious how that'll work out. Hint: I'm not.
Why does this site want to access apps and services on my local network?
On topic, I do wonder how "the market" is going to sort this out. At this moment I'm leaning towards just banning this shit, but maybe there is a better way?
We can already see the market in action. Increasingly, people are hostile to online content and influencers, except for the few people they follow, just as everyone was already defensive against unsolicited email. Authenticity will become valuable in a sea of slop, and high-budget productions (think Mr Beast) will be worth nothing, since they can be easily faked and are hard to tell apart.
TL;DR: there are brainrot farms with help from AI.
But I saw this one coming three or four years ago.
Actually, I've been listening to AI-generated brainrot music. I prefer it to some human-generated brainrot music (there's "I Hate Boys" from Christina Aguilera. Sorry if you are a fan).
Brainrot serves a specific social purpose: relieving stress, incoherently winning elections. It's a kind of drug that dulls the dangerous part of the brain while leaving the he-is-a-good-tool and she-is-blonde brain hemispheres in working order.
In fact, I do believe that if there were to be an uprising in a couple of decades against AI, and the human side were to rise victorious, the aftermath's social order would be studiously anti-AI and anti-science, but they would make a carve-out for AI brainrot (yes, I published a short fiction story with that premise, because I'm brainrot-vers).
Are you serious when you connect anti-AI sentiment to anti-science sentiment?
To me, they are opposite sentiments, and my experience discussing AI with others supports this. The most pro-AI people I meet are very far removed from science, and my research colleagues are definitely more critical of AI than not.
> Are you serious when you connect anti-AI sentiment to anti-science sentiment?
I don’t believe that the current state of things represents peak AI problems. AI is for now weak both in its capability and its impact, and also just new. Speculatively, if things go really bad, in a couple of decades there will be a huge swath of the population with neither jobs nor high-flying educations. They, perhaps rightly, will blame AI for the situation, but they’ll also, perhaps rightly, blame capital and the “snobbish elite” that is propping up AI today and in the near future. That “snobbish elite” is well-paid engineers and researchers. That’s because people tend to like having somebody to blame for their problems. But even without making it about bad guys, the heart of the thing that is pouring billions into AI is a relentless ethos of profit deriving from progress and disruption. You can’t stop AI without stabbing that heart.
ummmm, WOW!, hey that clicks
your brainrot/drug description is good.
making a choice for zero human content and therefore zero interaction.
the full suite of options would include perfectly artificial scents.
personally, I am way over in the analog/organic direction, but I get the need
to disconnect from the "whatever this is™" that passes for a society.
the question remains whether AI can scale to meet the demands and desires society has always placed on individuals.
the audible exasperated noise coming from the person in line with me, seeing me pull out cash, thereby breaking their own perfect
little automated world, merely by being subjected to witnessing such a primitive ritual (not behind me I might add, but the person leaving in front of me), is the prime
example of someone who will violently reject AI and the rest when it inevitably fails to "fix" everything
Extremely long winded. I think this person is trying to throw stones at someone else’s work, but their own is so elliptical I lost the will to find out.
Not taking away your right to an opinion, but I couldn't disagree more; I found it an excellent sociological article. One, it takes the formal concept of "bullshit" and applies it to knitting in a very methodical and strict manner. I found it novel and convincing, and the examples were great, not contrived or forced at all. IMO it was much better than many academic books or articles; an immediate share.
Two, the turns of logic are clearly laid out, in a conversational way, which would make it easy to stick a wrench in and form a polemic if you found any of her arguments or logical implications specious. That said, that does make the article quite long. But then, it is anything other than "elliptical", which I think you used as "runs in circles and repeats itself often", while it actually means "omits parts and thus is difficult to understand" (like the ellipsis sign: …).
Also: what the heck is wrong with that podcast farm founder. I hope they have a bad year.
You only had to reach the second paragraph to find the example of an 8-person company that uses AI to generate “about 3000 podcast episodes per week, hosted by AI personalities.”
I was a couple of images in before I sussed it. Bullshit images, but pleasing enough to look at. Without the images, it would have been a big wall of text, which would have put me off reading entirely; as it was, I gave up about 25% of the way through, after sussing the images and thus the incoherence in the argument.
The images bring something to the article. They were cheap and quick to generate. They increase the potential payoff (more readers) without significantly increasing the cost. Without the images, the payoff (readers) would likely have been lower, below the cost of actually writing the article. The same goes for a history-of-knitting podcast or that video: production costs would not be worth it for a very niche viewership.
Reading that made me feel like you wanted to be contrarian from the get-go and dismiss the article with the least effort possible. The whole point of the images is that they're low-effort AI slop, it's part of what she's trying to point to when someone is generating unsupervised automated podcasts about knitting.
I came in indifferent, but it doesn't take much to make me give up on an article linked on Hacker News. I use it as bubblegum while waiting for a compile/prompt, intentionally for stuff that can be dropped easily. I saw her disclaimer at the end. My point was that the slop images make a more appealing article than if they were absent.
So you're saying you can spot AI generated bullshit, but not spot a deliberate and hilarious contrivance that the author uses to reinforce their point?
I like the blog, but the premise of the blog is an engineering/epistemological perspective on the craft. The writer clearly cares more about process, technique, and history than about feeling and validation.
It could be that a big part of the future of hobbies and entertainment is the feeling and validation rather than the actual performance. Or it could be that a massive number of people find their value in this content.
So .. I think we need to ask a deceptively simple question here, which is: is knitting real?
I'll add in an aside to this, which is not only are there fake knitting podcasts there are fake knitting and crochet patterns, which is a problem because people get a substantial way through making them only to discover that they don't work. In some cases the giveaway is that the supposed final image isn't physically possible, like the images in this article, but the fakers can use a real stolen image and just spam a pattern underneath it.
So: what is the knitting that is real? It has to be the use of your hands, needles, and yarn to produce a physical object, right?
The podcasts work towards something else. The identity of "being a knitter". This is a form of "hobby" that was already not unusual, that of discussing a thing without ever bothering to actually do it. Photographers are especially bad at this: too many lenses, not enough photographs. They've also got comprehensively run over by AI, because you can just generate the photographs now. Same for "authors".
But ultimately all these pleasant sensations aren't backed by a connection to the real. If you're going to talk about the history of knitting, shouldn't it be the real, evidenced history? As done by real (usually) women? Otherwise you're just knitting a pleasant fantasy for yourself.
The AI approach is "wireheading": the logical conclusion of all of that would be to find a means of inserting a wire in your head that provides constant pleasant sensations. Achieving happiness through a constant feed of generated images is less effective, but it's the same order of things.
(see also: authenticity in food, which could easily turn into another ten thousand words)
I'd also say a few things. If knitting takes a long time, consider how long it takes to make a good, clear pattern so that others can replicate it.
People who make patterns are already dealing with a saturated market.
This includes historical/vintage patterns: for many years, patterns were primarily given away freely to incentivize yarn sales, or were dominated by publishers. It wasn't until recently (the internet, Etsy, Ravelry) that designers actually had the means to sell directly to consumers. People making an effort to produce usable patterns are now being dwarfed by AI nonsense in the speed of its output. It was already a difficult market. That everybody's images of real objects (along with AI-generated ones) are being used to peddle and market patterns that will never work can be really demotivating.
One last thing: how many of the 8 people in this podcast company are actually generating slop, and how many are just doing marketing?
> But ultimately all these pleasant sensations aren't backed by a connection to the real. If you're going to talk about the history of knitting, shouldn't it be the real, evidenced history? As done by real (usually) women? Otherwise you're just knitting a pleasant fantasy for yourself.
If the real is the feeling you get from listening to the podcast or identifying with a subculture, then that is the real for that person. Factual, grounded information is just one take. If it were not this way, we would have had far fewer myths, religions, etc. historically.
People will feel the same degree of joy and completion when the final word of the podcast is read as you feel when you finish a really complex piece of work.
If you genuinely believe this, there is no point to doing anything at all except heroin. Every moment that you aren't dedicating to being on heroin or getting more heroin, to heroinmaxx if you will, is a net loss.
'But what if I run out though' I hear you ask? Simply finish off on a truly heroic dose and sail into oblivion on a wave of bliss that's much better than all your relationships and hopes and dreams. It's real for you, right? If it makes your friends sad, they could just do some heroin about it. More real than real!
Look, I get your comparison, and while extreme, it's funny. I just have very little faith that the average person cares this deeply about physically grounded reality. It's kind of a luxury of the well-off to be able to sit and think about what content to engage with, when you just want to relax after an 8-hour shift followed by picking up the kids, getting groceries, etc. If someone sees an AI video that makes them happy or laugh, and they send it to a friend who also laughs at it, that's their reality.
We happen to have time to argue about the philosophy and ontology of information at the downvoted bottom of an HN thread today; most people don't.
The idea that we could create a world where 'a big part of the future of hobbies and entertainment' is people listening to meaningless words made up by machines that help them feel good about themselves sounds horrifying. How could anybody feel ok about that? What would it say about the society we've built?
It would say that society changes, and people who were not used to a new world get upset about it, as it has always been throughout the entire history of humanity.
We were used to having psychologists and doctors in person; now the most common form is to have them through apps, and the younger generation does not care. It's in fact more efficient to get a prescription you like than to spend time going places and having in-person meetings. But the older generation finds it hollowing and horrifying.
You need to accept that society moves on, and it can look different from your perspective.
A looooot of assumptions here. We have yet to see any of these brave new ideas actually work.
Therapy has never been more available, yet mental health is through the basement.
I’m also not seeing any evidence that young people are the driving force behind turning the world to shit. Every Gen Z person I know craves authenticity, connection, and meaningful work. All of this is the opposite.
It's interesting how every time this argument is made, it's about subjective experiences of 'craving'. If this were the objective reality, we would see a majority of Gen Z engaged in movements, social groups, and other things that would help them fulfill their 'cravings'.
However, that seems not to be the case. It seems they prefer to spend their free time doomscrolling or sitting at home, engaging more in parasocial relationships that can be more on their own terms, on their own timeframes, and with their own opinions.
That’s one explanation. The other explanation is that young people feel powerless to change anything, and that they are hooked against their will on deliberately addictive ad delivery platforms.
The more alarming conclusion here happens to be backed by a lot of science, unfortunately, so it’s not easy to dismiss.
In this case, the user is deciding that they choose what progress is. I am saying that the people who use the tool and value its utility decide what progress is. If people listen to the podcast, or see doctors on the phone, because it provides them value, it will be a change and perceived progress for them.
If the generated podcasts did not bring any value to the users, such as validation or engagement, they would not use them, and there would be no change.
Your meaning and your truth, not necessarily other peoples who find their meaning and truth in other things.
Go to China, or Congo and you will find that the public might hold a different version of some truths than you do.
We had religions dominating the world order for thousands of years, which projected their versions of the truth onto their societies.
If we extrapolated that to today and to your opinion, it would mean that everyone in the Middle Ages actually had it all figured out: they knew that the religious texts about splitting oceans or the moon were fake, and were all just playing along for the social structure.
Maybe it just happens that the LLM-generated stuff is the next thing in this iteration.
> Your meaning and your truth, not necessarily other peoples who find their meaning and truth in other things.
The makers of those AI podcasts explicitly stated they were unconcerned with whether their content was factual, so this is not comparable to people that actually thought they were right. But if you're arguing that listeners of those podcasts will believe that made-up slop is truth, that that's the "their truth" you're talking about, then yes, that is exactly what I meant by "collapse of truth".
If you only care about the material and physical utility of the product, you can order the sweater from AliExpress for 5% of the cost and no time spent.
Seriously? You can't get the feeling of satisfaction of wearing something, or having someone wear something you made, from AliExpress. My point is that your sense of feeling and validation is extremely distorted if you have no knitted material to show for it.
A completely subjective take by you, with a similar epistemology around value as the blog author's.
People might not care. I might identify as a runner because I bought a little jacket, expensive shoes, and wide purple-tinted sunglasses; do I have to run? Not necessarily, if the objects and my identity give me the feeling of completion and satisfaction.
If your premise were true for all people and the sense were distorted, we would not see these phenomena, and people wouldn't listen to or engage with AI content. But biological reality and the path of least resistance seem to prove otherwise.
It's scandalous that no-one has yet posted Gary Larson's Far Side cartoon "Bullknitters".
https://www.instagram.com/p/C2OQtokvzCa/
(or google image search)
Related: Four Yorkshiremen: https://www.youtube.com/watch?v=ue7wM0QC5LE.
Personally I think it's scandalous that the now top comment is an off-topic reference to something tangential to the title, and nothing to do with the article, which isn't really about knitting at all, except for being the hook to which the author was pulled in to the world of AI podcasts, and consequently found their output rather lacking in content.
You could substitute the word knitting for almost any hobby, and the article would read almost the same.
It's an article about the soulless content-free world of AI podcasting, and about how AI output is about validating the emotions of the listener rather than meaningful content.
Am I to believe that those 700K+ downloads are organic traffic? Who's listening to all this stuff?
HN sends tens of thousands of views to AI-farmed articles about why AI is good or why AI is bad. These articles get upvoted to the front page literally every day. They don't say anything interesting, but many of us just like having our existing beliefs recited back to us.
So to answer your question, I think we all do, it's just that different audiences have different sets of topics for which they let their guard down.
There is a huge market for content that makes you feel smart without requiring thinking and makes you busy without requiring work. I'm not saying it's inherently bad. I listen to music on my daily commute and it's the same thing: just enjoyable filler so that you can do something other than getting angry at other drivers. The internet just weaponized the formula, and now AI is the equivalent of nuclear weapons, I guess.
My podcast app downloads way more podcast episodes than I actually listen to.
By McHealy's logic, we ought not be concerned about that. After all, it's low-stakes content.
Other bots?
Dead Internet Theory.
AI produced, AI downloaded. No humans in the loop.
No, but to misinform people you have two main strategies: limiting through tailored scarcity, and diluting in generic overabundance. Don’t get it wrong: both can be combined and can even sometimes overlap.
It doesn’t matter if no one is listening. Saturating all channels, metrics, and indicators equally is enough to create hindrance, preventing relevant information from spreading in meaningful time.
Attention is all you need, so distraction is all that will be given.
Also, fracturing audiences to infinity.
I listened to a podcast a while back (human-authored, I'm pretty sure) about low-quality, gutter-level streamer content and how popular it is, speaking of personalities like Asmongold and a vast number of even worse imitators.
This content is made by humans but is pointless, grindingly stupid filler spiced with a dash of obviously performative offensiveness. You're basically listening to a complete loser (or someone LARPing as one) telling you about their boogers and then being racist and then playing video games for 6 hours.
But it's wildly popular. Millions of people stream this kind of shit for hours every day.
There's a lot of people out there who just want to numb their brains, and there seems to be no floor. You can just keep making it dumber. The stuff people stream (and doom scroll) on the Internet makes 1980s daytime soaps look like high art from a lost golden age.
So it's not at all surprising that millions of people listen to low-quality un-curated AI slop podcasts.
I actually unsubbed from the podcast I heard. Meta discussion of crap like this isn't much better than the content itself. Keep driving. Do not look at the car accident.
I had kind of an epiphany like that in the last year. The Information Age means information is free. It costs $0 and is produced to infinity. That means you are not missing anything. Your attention is actually 100% yours, and if you choose to ignore the car wreck that's fine. There are infinity car wrecks. There are infinity everything. Keep driving.
Or maybe Ms. McHealy was simply lying.
Wow I’d never have expected Kate Davies to show up on Hacker News. I think it’s important to understand her background a bit when she talks about knitting as a matter of life and death. She was a scholar of 18th century literature before she suffered a stroke young[0]. She focused on knitting as a means of recovery and never looked back. She built a business and a community and attributes a lot of her physical and mental health to knitting.
So while this post hopefully strikes a chord with anyone in a creative field, she embodies a particular type of person for whom slop is a genuine risk to their being. Not their job; their whole personhood. In a world where slop has chased out the humanity of things and the bullshit machines fill all content, what are the chances someone like her could build a second life better than her first?
0: https://katedaviesdesigns.com/2015/01/28/five-years-on-part-...
I feel like the alt captions for the images, although diligent and thorough, don't really capture the most important aspects.
I wonder if (or, more accurately hope that) this kind of slop will eventually die out as people realise how little care is put into it. I am more and more convinced that if the devil existed he'd take care of the bigger stuff, but have an army of little devils that encourage people to do things like make unsupervised automated podcasts about knitting, relentlessly chipping away at the messy joys of living.
I really doubt it's going to die out.
I think a lot of the value in these AI podcasts is just the self-validation of the listener. It really doesn't matter to the listener if there's nothing between Egyptian socks and Ravelry, because the point was to feel good, not to learn.
But also because I've had a long-standing pet peeve with news articles that include random-ass stock footage. If humans can get away with including a picture of _any_ ship when talking about a specific ship (one that may never have been in the harbor the picture shows), then why does the AI need to be correct?
At the start of Good Omens, there’s a scene where demons are sharing their recent misdeeds. A couple of them are sharing “classic” demon stuff like killing and possessing, but Crowley (the protagonist demon) shares more modern evil deeds, such as creating traffic jams.
https://en.wikipedia.org/wiki/Good_Omens
I’d link to a clip of it, but to your point some devil is making it frustratingly hard to find.
It's been years, but I seem to recall that Crowley specifically is very proud about making sure some motorway project got botched, because the continual drip of suffering from the accumulated jams and road rage makes him look really good in the spreadsheets even though he's not much for the classical showy stuff. Millions of little instances of suffering adding up year on year, instead of a handful of incidents of really intense suffering.
I thought it was that he altered planning documents and even went and moved physical markings to make the M25 the shape of the ancient evil sigil Odegra (this is from memory; I just read it a lot as a teenager), so every angry drive round it powers that sigil.
Yes, I think you’re right. And if I recall correctly, near the end he’s trying to get somewhere but gets stuck in traffic by the same problem he caused.
Ha, that's right! I forgot about that bit.
Man. I do miss Terry Pratchett.
Whoever decided to add silly audio effects to operating systems is surely one of these lesser devils. Just think of how many people have been aggravated by a colleague's laptop when it "wakes up" every day, or by an inappropriate notification sound during a presentation or something. On any desktop PC I interact with, I do my bit by disabling all sound effects before I continue.
For a long time I thought that the AdSense business model was ultimately doomed because I assumed that people hate ads as much as I do. It turns out I was just wrong about what most people are willing to put up with.
I remember visiting a friend over a decade ago, and for some reason I had to use their computer for a bit. I was immediately taken aback by all the ads everywhere and installed an ad blocker before anything else. They were very grateful, but the part that surprised me was that they were annoyed by the ads but never thought to look for a way around them. It never even crossed their mind that it could be done, or to search for it.
All human progress in history has been due to a VERY small handful of people who think “this is bullshit, things could be better”.
The vast majority of people accept what they see as the way things are and it never occurs to them that things could be different.
Similarly, when my partner moved in I told her about the network-level adblocker and she kinda scoffed at it saying ads don't bother her. A few years later she started complaining that when she's out of the house she gets ads.
I'm afraid it'll lead to a weird music-ification of content.
Music can make you feel good and keep you engaged purely by engaging our pattern recognition.
AI videos and photos seem to have a similar effect. Even if it's not real, they encode enough patterns from good human work to be able to engage our attention.
Just providing people with an attentional escape is valuable on the internet.
It's definitely the sort of thing that Crowley from Good Omens would be working on.
Yeah, people will reflexively filter out the slop, eventually, but they'll do it by leaving the places that have been rendered worthless by its persistent presence.
The particular type of innovator ghoul that's enabled by generative AI dreams of filling the entire internet with bullshit content. Aggregators (media and content) should be actively pushing them out for their own long-term survival, IMO.
Just like Big Tobacco moved on to greener pastures in the developing world, Big Slop is not targeting us specifically, but the billions of new internet users who have connected over the past decade:
https://data.worldbank.org/indicator/IT.NET.USER.ZS
There's this (now old) meme called "Italian brainrot" - AI generated characters with vaguely Italian-sounding names like Bombardiro Crocodilo (note the incorrect spelling of the Italian word for crocodile).
One character stands out - Tung Tung Tung Sahur. Not only does it not sound Italian at all, that last word rang a bell.
Sahur (or Suhur) is the meal eaten before dawn during Ramadan.
After some digging I discovered this whole category originated in Indonesia. The country experienced an absolute explosion in the number of internet users in recent years and is home to internet phenomena which spread globally, but few in the west seem to realise that.
Great article, thanks for sharing.
I didn't know (but should have assumed) AI-generated podcasts existed. That's depressing.
I imagined that if mankind had the ideal machine, one that could automate anything, we would get rid of dull office work and back-breaking physical labor, but not the things that are actually enjoyable: sharing with each other, entertaining each other, making art. I imagined a lively world of live performance and creation, since all subsistence work had been taken care of. Instead we might end up in the world of "Fifteen Million Merits".
It seems people don't mind letting their minds be hacked by machines that can create the form of what they find enjoyable, if not the substance. But I guess there's always been slop and the public for it. To imagine actual people wasting their limited time on Earth listening to these GPT logorrhea podcasts is truly depressing. The unchemical soma.
What are we even supposed to spend our days doing in this bright future of the AI champions? Stop automating away the things that give people purpose; tackle real problems instead.
The incentives are at odds. In this capitalist landscape, you create podcasts and blogs (or have them created) to attract an audience which then attracts those fat advertising dollars.
Ironically, these are both incredibly common, LLM-able takes:
Lament: Oh why did we automate art?
Answer: Capitalism.
It's superficially true, currently. We've had generative AI for a few years and people are using it to make a quick buck. But even if the world had been taken over by communism, or if the Western Highlands of Papua New Guinea had got imperial ambitions and now we all lived in a gift economy, people would still be using generative AI to gain attention and status. This will work until it wears thin. Thinner.
> one of the most pernicious things about this particular kind of bullshit is the way it casts any form of critical scrutiny as a terrible failure of sensibility.
What a great line. And you'll probably notice this technique being used by very skilled bullshitters and master manipulators: any request for rigor or scrutiny is met by something like genteel condescension. You're treated as if you've committed a breach of etiquette, and that's one of the reasons the technique is powerful -- you're likely to feel embarrassed and, following that, to back off.
I like how the pictures got more and more sloporific through the essay.
It doesn't mention an important group being harmed: the creators who make high-quality, sincere podcasts about knitting. Their genuine content gets buried under a mountain of slop. In theory, recommendation algorithms ought to surface the best stuff, but that doesn't seem to align with incentives. Sad.
Or even worse, it gets fed back into the AI slop machine
I remember this kind of slop from times well before the LLM explosion.
I'm specifically thinking of a print magazine that was designed to make you feel like you are a smart reader of science articles, without any useful information about the actual science or technology.
Yes, the article acknowledges this in the first paragraph by citing Harry Frankfurt’s “On Bullshit” (1986). Of course bullshit (as well as even more insidious misinformation/propaganda) has always been around, but the incredible advances in its production and dissemination are worth considering. At some point, sheer quantity turns into its own quality. Indeed I would argue these issues have always been underconsidered. The article is a kind of inoculation against bullshit that every generation requires again and again. People aren’t born nearly skeptical enough, and the game keeps ever changing.
I actually don’t think the article is sufficiently vehement in calling out just how brain-frying this is. And how destructive on a societal level. The razor’s edge between being too uncritical and too cynical is hella narrow.
> I remember this kind of slop from times well before the LLM explosion.
Even if that were true (which I don’t think it is, this is a different kind of worthless content), you most definitely don’t remember it at this scale, and that’s a major point.
Interestingly, Inception AI seem to have pivoted from content slop for "gardening, [...] knitting, cooking" - or "things we can afford to be wrong" - to "AI Immigration Drafting Software for Law Firms": https://www.inceptionai.co/
I'm somewhat curious how that'll work out. Hint: I'm not.
EDIT: My bad, wrong company, it's "Inception Point AI": https://www.inceptionpoint.ai/
There's also https://www.inceptionlabs.ai - it's not confusing at all.
Why does this site want to access apps and services on my local network?
On topic, I do wonder how "the market" is going to sort this out. At this moment I'm leaning towards just banning this shit, but maybe there is a better way?
We can already see the market in action. Increasingly, people are more hostile to online content and influencers, except for the few people they follow, just as everyone was already defensive against unsolicited email. Authenticity will become valuable in a sea of slop, and making high-budget productions (think MrBeast) will be worth little, since they can be easily faked and are hard to distinguish.
> I do wonder how "the market" is going to sort this out
Unlikely to do a better job than it did with anything else.
For someone complaining about slop, I found this unreadable.
TL;DR: there are brainrot farms with help from AI.
But I saw this one coming three or four years ago.
Actually, I've been listening to AI-generated brainrot music. I prefer it to some human-generated brainrot music (there's "I Hate Boys" from Christina Aguilera. Sorry if you are a fan).
Brainrot serves a specific social purpose: relieving stress, incoherently winning elections. It's a kind of drug that dulls the dangerous part of the brain while leaving the he-is-a-good-tool and she-is-blonde brain hemispheres in working order.
In fact, I do believe that if there were to be an uprising in a couple of decades against AI, and the human side were to rise victorious, the aftermath's social order would be studiously anti-AI and anti-science, but they would make a carve-out for AI brainrot (yes, I published a short fiction story with that premise, because I'm brainrot-vers).
Are you serious when you connect anti-AI sentiment to anti-science sentiment?
To me, they are opposite sentiments, and my experience discussing AI with others supports this. The most pro-AI people I meet are very far removed from science, and my research colleagues are definitely more critical of AI than not.
AI's tendency to emit unsourced, untrue statements with authority is about the most unscientific thing you can get.
AI is scientism: presenting science-flavoured things as a cultural marker.
> Are you serious when you connect anti-AI sentiment to anti-science sentiment?
I don’t believe that the current state of things represents peak AI problems. AI is for now weak both in its capability and its impact, and also just new. Speculatively, if things go really bad, in a couple of decades there will be a huge swath of the population without jobs or high-flying education. They, perhaps rightly, will blame AI for the situation, but they’ll also, perhaps rightly, blame capital and the “snobbish elite” that is propping up AI today and in the near future. That “snobbish elite” is well-paid engineers and researchers. People tend to like having somebody to blame for their problems. But even without making it about bad guys, the heart of the thing that is pouring billions into AI is a relentless ethos of profit deriving from progress and disruption. You can’t stop AI without stabbing that heart.
I think he is making the point that scientist built the AI.
The whole "Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should,"
Ummmm, WOW! Hey, that clicks: your brainrot/drug description is good. Making a choice for zero human content, and therefore zero interaction.
The full suite of options would include perfectly artificial scents. Personally, I am way over in the analog/organic direction, but I get the need to disconnect from the "whatever this is™" that passes for a society. The question remains whether AI can scale to meet the demands and desires society has always placed on individuals.
The audible exasperated noise coming from the person in line with me, seeing me pull out cash and thereby breaking their own perfect little automated world merely by subjecting them to witnessing such a primitive ritual (not behind me, I might add; the person leaving in front of me), is the prime example of someone who will violently reject AI and the rest when it inevitably fails to "fix" everything.
Extremely long winded. I think this person is trying to throw stones at someone else’s work, but their own is so elliptical I lost the will to find out.
Maybe. But at least she gave a shit enough to actually write something you didn’t like.
The sloplings don’t even bother.
Not taking away the right to your opinion, but I couldn't disagree more; I found it an excellent sociological article. One, it takes the formal concept of "bullshit" and applies it to knitting in a very methodical and strict manner. I found it novel and convincing, and the examples were great; not contrived or forced at all. IMO it was much better than many academic books or articles; an immediate share.
Two, the turns of logic are clearly laid out, in a conversational way, which would make it easy to stick a wrench in and form a polemic if you found any of her arguments or logical implications specious. That said, that does make the article quite long. But then, it is anything other than "elliptical", which I think you used as "runs in circles and repeats itself often", while it actually means "omits parts and thus is difficult to understand" (like the ellipsis sign: …).
Also: what the heck is wrong with that podcast farm founder. I hope they have a bad year.
You only had to reach the second paragraph to find the example of an 8-person company that uses AI to generate “about 3000 podcast episodes per week, hosted by AI personalities.”
Yeah, well good thing that LLMs are good at summarizing articles, unlike generating believable knitting images.
I was a couple of images in before I sussed it. Bullshit images, but pleasing enough to look at. Without the images, it would have been a big wall of text, which would have put me off reading; as it was, I gave up about 25% of the way through after sussing the images and thus the incoherence in the argument. The images bring something to the article. They were cheap/quick to generate, and they increase the potential payoff (more readers) without significantly increasing the cost. Without the images, the payoff would likely have been lower, below the cost of actually writing the article. Same goes for a history-of-knitting podcast or that video: production costs would not be worth it for a very niche viewership.
Reading that made me feel like you wanted to be contrarian from the get-go and dismiss the article with the least effort possible. The whole point of the images is that they're low-effort AI slop, it's part of what she's trying to point to when someone is generating unsupervised automated podcasts about knitting.
I came in indifferent, but it doesn't take much to make me give up on an article linked on Hacker News. I use them as bubblegum while waiting for a compile/prompt, intentionally picking stuff that can be dropped easily. I saw her disclaimer at the end. My point was that the slop images make a more appealing article than if they were absent.
The AI images were deliberate and part of the narrative. Ie, you can generate slop with zero effort.
from TFA: "All of the images in this post were generated by an ai in response to the simple two-word prompt “lovely knitting”"
Edit: ps: Kate Davies is an actual creator who has been creating knitting patterns for years.
Yes, I saw. By giving up I meant I skimmed to the end. The images improve the article
So you're saying you can spot AI generated bullshit, but not spot a deliberate and hilarious contrivance that the author uses to reinforce their point?
I like the blog, but the premise of the blog is an engineering/epistemological perspective on the craft. The writer clearly cares about process, technique, and history more than about feeling and validation.
It could be that a big part of the future of hobbies and entertainment in this vein is the feeling and validation over the actual performance. Or it could be that a massive number of people find their value in this content.
So .. I think we need to ask a deceptively simple question here, which is: is knitting real?
I'll add in an aside to this, which is not only are there fake knitting podcasts there are fake knitting and crochet patterns, which is a problem because people get a substantial way through making them only to discover that they don't work. In some cases the giveaway is that the supposed final image isn't physically possible, like the images in this article, but the fakers can use a real stolen image and just spam a pattern underneath it.
So: what is the knitting that is real? It has to be the use of your hands, needles, and yarn to produce a physical object, right?
The podcasts work towards something else. The identity of "being a knitter". This is a form of "hobby" that was already not unusual, that of discussing a thing without ever bothering to actually do it. Photographers are especially bad at this: too many lenses, not enough photographs. They've also got comprehensively run over by AI, because you can just generate the photographs now. Same for "authors".
But ultimately all these pleasant sensations aren't backed by a connection to the real. If you're going to talk about the history of knitting, shouldn't it be the real, evidenced history? As done by real (usually) women? Otherwise you're just knitting a pleasant fantasy for yourself.
The AI approach is "wireheading": the logical conclusion of all of that would be to find a means of inserting a wire in your head that provides constant pleasant sensations. Achieving happiness through a constant feed of generated images is less effective, but it's the same order of things.
(see also: authenticity in food, which could easily turn into another ten thousand words)
I'd also say a few things: if knitting takes a long time, consider how long it takes to make a good, clear pattern so that others can replicate it.
People who make patterns are already dealing with a saturated market. This includes historical/vintage patterns; for many years patterns were primarily given away freely to incentivize yarn sales, or the market was dominated by publishers. It wasn't until recently (the internet, Etsy, Ravelry) that designers actually had the means to sell directly to consumers. People making an effort to produce usable patterns are now being dwarfed by the speed of AI nonsense output. It was already a difficult market. That everybody's images of real objects (along with AI-generated ones) are being used to peddle and market patterns that will never work can be really demotivating.
One last thing: how many of the 8 people in this podcast company are actually generating slop, and how many are just doing marketing?
I am with you until you make this assumption:
> But ultimately all these pleasant sensations aren't backed by a connection to the real. If you're going to talk about the history of knitting, shouldn't it be the real, evidenced history? As done by real (usually) women? Otherwise you're just knitting a pleasant fantasy for yourself.
If the real is the feeling you get from listening to the podcast or identifying with a subculture, then that is the real for that person. Factual, grounded information is just one take. If it were not this way, we would have had far fewer myths, religions, etc. throughout history.
People will feel the same degree of joy and completion when the final word of the podcast is read as you feel when you finish a really complex piece of work.
If you genuinely believe this, there is no point to doing anything at all except heroin. Every moment that you aren't dedicating to being on heroin or getting more heroin, to heroinmaxx if you will, is a net loss.
'But what if I run out though' I hear you ask? Simply finish off on a truly heroic dose and sail into oblivion on a wave of bliss that's much better than all your relationships and hopes and dreams. It's real for you, right? If it makes your friends sad, they could just do some heroin about it. More real than real!
Do not willingly become a lotus eater.
Look, I get your comparison, and while extreme, it's funny. I just have very little faith that the average person cares this deeply about physically grounded reality. It's kind of a luxury of the well-off to be able to sit and think about what content to engage with when you just want to relax after an 8-hour shift followed by picking up kids, getting groceries, etc. If someone sees an AI video that makes them happy or laugh, and they send it to a friend who also laughs about it, that's their reality.
We happen to have time to argue about the philosophy of the ontology of information at the downvoted bottom of an HN thread today; most people don't.
The idea that we could create a world where 'a big part of the future of hobbies and entertainment' is people listening to meaningless words made up by machines that help them feel good about themselves sounds horrifying. How could anybody feel ok about that? What would it say about the society we've built?
It would say that society changes, and people who were not used to a new world get upset about it, as it has always been throughout the entire history of humanity.
We were used to having psychologists and doctors in person; now the most common form is through apps, and the younger generation does not care. It's in fact more efficient to get a prescription that you like than to spend time going places and having in-person meetings. But the older generation finds it hollowing and horrifying.
You need to accept that society moves on, and it can look different from your perspective.
> the younger generation does not care, [...] more efficient to get a prescription that you like [through apps]
Absolutely
> people listening to meaningless words made up by machines that help them feel good about themselves sounds horrifying
Yes
> Every ... person ... craves authenticity, connection, and meaningful work.
Right
> to find a means of inserting a wire in your head that provides constant pleasant sensations.
https://psycnet.apa.org/record/1955-06866-001
> Factual, grounded information is just one take.
Absolutely
The problem is: who is moving “society on”, and what is their agenda?
I don’t think it’s healthy to encourage an attitude to just accept all change without any sort of reflection or push back.
A looooot of assumptions here. We have yet to see any of these brave new ideas actually work.
Therapy has never been more available, yet mental health is through the basement.
I’m also not seeing any evidence that young people are the driving force behind turning the world to shit. Every Gen Z person I know craves authenticity, connection, and meaningful work. All of this is the opposite.
It's interesting how every time this argument is made, it's about subjective experiences of 'craving'. If this were the objective reality, we would see a majority of Gen Z engaged in movements, social groups, and other communities that would help them fulfill their 'cravings'.
However, that seems not to be the case. It seems like they prefer to spend their free time doomscrolling, or sitting at home, engaging more in parasocial relationships that can be more on their terms, on their timeframes, and with their opinions.
That’s one explanation. The other explanation is that young people feel powerless to change anything, and that they are hooked against their will on deliberately addictive ad delivery platforms.
The more alarming conclusion here happens to be backed by a lot of science, unfortunately, so it’s not easy to dismiss.
You could justify basically anything with that logic. Change isn't always about progress.
In this case, you are deciding what counts as progress. I am saying that the people who use the tool and value its utility decide what progress is. If people listen to the podcasts, or use doctors on the phone, because it provides them some value, it will be a change and a perceived progress for them.
If the generated podcasts did not bring any value to the users, such as validation, or engagement, they would not use them, and there would be no change.
"But how does the collapse of truth and meaning in society affect you personally?"
https://knowyourmeme.com/photos/2565163-smugjak-but-how-does...
Your meaning and your truth, not necessarily other people's; they find their meaning and truth in other things.
Go to China, or Congo and you will find that the public might hold a different version of some truths than you do.
We had religions dominating the world order for thousands of years, which projected their versions of the truth onto their societies.
If we extrapolated that to today and applied your opinion, it would mean that everyone in the Middle Ages actually had it all figured out: they knew that the religious texts about splitting oceans or the moon were fake, and were all just playing along with it for the social structure.
Maybe it just happens that the LLM-generated stuff is the next thing in this iteration.
> Your meaning and your truth, not necessarily other people's; they find their meaning and truth in other things.
The makers of those AI podcasts explicitly stated they were unconcerned with whether their content was factual, so this is not comparable to people that actually thought they were right. But if you're arguing that listeners of those podcasts will believe that made-up slop is truth, that that's the "their truth" you're talking about, then yes, that is exactly what I meant by "collapse of truth".
Can't wear feelings and validation...
If you only care about the material and physical utility of the product, you can order the sweater from AliExpress for 5% of the cost and no time spent.
Seriously? You can't get the feeling of satisfaction of wearing something, or of having someone wear something you made, from AliExpress. My point is that your sense of feeling and validation is extremely distorted if you have no knitted material to show for it.
Completely subjective take by you with similar epistemology around value as the blog author.
People might not care. I might identify as a runner because I bought a little jacket, expensive shoes, and wide, purple-tinted sunglasses; do I have to run? Not necessarily, if the objects and my identity give me the feeling of completion and satisfaction.
If your premise were true for all people, and the sense were distorted, we would not see these phenomena, and people wouldn't listen to or engage with AI content. But biological reality and the path of least resistance seem to prove otherwise.