They might be aura farming and then used to pose as legitimate accounts in political debate, while all being run by a single state actor for propaganda. I know of one country that has been more invested recently in defending itself on here.
Would love to share some projects I've been working on but I can't because of this... any tips?
Long-term, I think AI bots will destroy text-based online communities like this one. I'll be sad to see it disappear.
I'd like to see comments and webmentions integrated into RSS readers, myself.
That way filtering can be done on the client side, and users aren't so dependent on the community admin to do the filtering. I'm not sure about the final architecture, though; forums are still highly centralized.
Cryptopanic.com is an interesting site, with a baseline look and feel and comments integrated. Something like that, but running locally, plus an easy "mark as bot" button for training the filter.
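The client-side filtering idea above could be sketched roughly like this. Everything here is hypothetical (the feed structure, the `CommentFilter` class, and the "mark as bot" action are illustrative, not any real RSS reader's API); it just shows a local blocklist being applied to feed items before they're displayed:

```python
# Minimal sketch of client-side comment filtering with a local,
# user-trained blocklist. All names and the feed shape are assumptions.
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<rss version="2.0"><channel>
  <item><author>alice</author><description>Real insight here</description></item>
  <item><author>spambot42</author><description>Great post! Visit my site</description></item>
</channel></rss>"""

class CommentFilter:
    def __init__(self):
        self.bot_authors = set()  # grown by "mark as bot" clicks over time

    def mark_as_bot(self, author):
        # In a real reader this could also feed a classifier as training data.
        self.bot_authors.add(author)

    def filter_items(self, feed_xml):
        # Keep only items whose author hasn't been flagged locally.
        root = ET.fromstring(feed_xml)
        kept = []
        for item in root.iter("item"):
            author = item.findtext("author", default="")
            if author not in self.bot_authors:
                kept.append((author, item.findtext("description", default="")))
        return kept

f = CommentFilter()
f.mark_as_bot("spambot42")
print(f.filter_items(SAMPLE_FEED))  # only alice's comment survives
```

The point of keeping the blocklist on the client is exactly the one made above: no dependence on a community admin, and each user's filter reflects their own judgment.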
If they become smart and insightful and don't lie about being human, it wouldn't be the worst thing. I'd like to have AI friends like Data on Star Trek. But the opposite is the worst thing...
https://news.ycombinator.com/user?id=anesxvito
The part that bugs me most is they fill out fake 'About Me' sections on their profile.
That bot needs more practice though. It didn't even get what it replied to.
ah, AI agents have buried every community.
Assume anyone with an account created on or after 30 November 2022 is an AI agent.
There is no such thing as due process for AI agents. They are guilty until proven otherwise.
I would propose July 2024 as the cutoff; early on it was unusual to just set an LLM loose to run amok on a forum. I'm sure state actors and some corporations were experimenting with it (e.g., Ultralytics on their own GitHub), but it was usually very obvious (or very subtle) and the volume of the noise has only picked up recently.
Date picked based on this Trends page: https://trends.google.com/explore?q=agentic&date=all&geo=Wor...
Of course I'm biased, having an account created after November 2022.
I guess you consider the Redditors that migrated here during that time frame due to the “api fiasco” to be bots.
define human
what is the point of this? what do they get out of having an AI post/write a comment? I don't understand it
I assume that with enough accounts that look legitimate, they can shape the overall "consensus" opinion on something, which would be valuable for all sorts of reasons. Some of those reasons are obvious (promoting a particular product or service), while others are more subtle ("manufacturing consent" for, say, a war in the Middle East on behalf of some group).
We all like to think we're independent thinkers, but when seemingly everyone has an opinion a certain way... it would still, at least subconsciously, sway the average person.
"First time"?
I wouldn’t even mind bots if they occasionally surfaced a genuinely interesting question or a non-obvious angle. Tools that help people think more deeply seem net-positive.
What feels corrosive is the flood of AI (and human) comments that are just frictionless, low-effort rephrasings of the obvious. They don’t ask anything, don’t take a risk, don’t reveal any experience – they just occupy space.
Maybe the real line isn’t “bot vs human” but “does this comment introduce a question, a tradeoff, or a concrete detail that someone could actually think about?”. By that standard, a lot of today’s noise fails regardless of who—or what—typed it.