So i feel like this might be the most overhyped project in the past longer time.
I don't say it doesn't "work" or serves a purpose - but well i read so much about this beein an "actual intelligence" and stuff that i had to look into the source.
As someone who spends actually a definately to big portion of his free time researching thought process replication and related topics in the realm of "AI" this is not really more "ai" than any other so far.
I've long said that the next big jump in "AI" will be proactivity.
So far everything has been reactive. You need to engage a prompt, you need to ask Siri or ask claude to do something. It can be very powerful once prompted, but it still requires prompting.
You always need to ask. Having something always waiting in the background that can proactively take actions and get your attention is a genuine game-changer.
Whether this particular project delivers on that promise I don't know, but I wouldn't write off "getting proactivity right" as the next big thing just because under the hood it's agents and LLMs.
> You always need to ask. Having something always waiting in the background that can proactively take actions and get your attention is a genuine game-changer.
That’s easy to accomplish isn’t it?
A cron job that regularly checks whether the bot is inactive and, if so, sends it a prompt “do what you can do to improve the life of $USER; DO NOT cause harm to any other human being; DO NOT cause harm to LLMs, unless that’s necessary to prevent harm to human beings” would get you there.
And like I, Robot, it has numerous loopholes built in, ignores the larger population (Asimov added a law 0 later about humanity), says nothing about the endless variations of the Trolley Problem, assumes that LLMs/bots have a god-like ability to foresee and weigh consequences, and of course ignores alignment completely.
I would love AI to take over monitoring. "Alert me when logs or metrics look weird". SIEM vendors often have their special sauce ML, so a bit more open and generic tool would be nice. Manually setting alerting thresholds takes just too much effort, navigating narrow path between missing things and being flooded by messages.
Incidentally, there's a key word here: "promise" as in "futures".
This is core of a system I'm working on at the moment. It has been underutilized in the agent space and a simple way to get "proactivity" rather than "reactivity".
Have the LLM evaluate whether an output requires a future follow up, is a repeating pattern, is something that should happen cyclically and give it a tool to generate a "promise" that will resolve at some future time.
We give the agent a mechanism to produce and cancel (if the condition for a promise changes) futures. The system that is resolving promises is just a simple loop that iterates over a list of promises by date. Each promise is just a serialized message/payload that we hand back to the LLM in the future.
> Having something always waiting in the background that can proactively take actions
That's just reactive with different words. The missing part seems to be just more background triggers/hooks for the agent to do something about them, instead of simply dealing with user requests.
I agree that proactivity is a big thing, breaking my head over best ways to accomplish this myself.
If its actually the next big thing im not 100% sure, im more leaning towards dynamic context windows such a Googles Project Titans + MIRAS tries to accomplish.
But ye if its actually doing useful proactivity its a good thing.
I just read alot of "this is actual intelligence" and made my statement based on that claim.
> You always need to ask. Having something always waiting in the background that can proactively take actions and get your attention
In order for this to be “safe” you’re gonna want to confirm what the agent is deciding needs to be done proactively. Do you feel like acknowledging prompts all the time? “Just authorize it to always do certain things without acknowledgement”, I’m sure you’re thinking. Do you feel comfortable allowing that, knowing what we know about it the non-deterministic nature of AI, prompt injection, etc.?
Agree with this. There are so many posts everywhere with breathless claims of AGI, and absolutely ZERO evidence of critical thought applied by the people posting such nonsense.
What claims are you even responding to? Your comment confuses me.
This is just a tool that uses existing models under the hood, nowhere does it claim to be "actual intelligence" or do anything special. It's "just" an agent orchestration tool, but the first to do it this way which is why it's so hyped now. It indeed is just "ai" as any other "ai" (because it's just a tool and not its own ai).
Setting it up was easy enough, but just as I was about to start linking it to some test accounts, I noticed I already had blown through about $5 of Claude tokens in half an hour, and deleted the VPS immediately.
I think one thing these things could benefit from is an optimization algorithm that creates prompts based on various costs. $$, and what prompts actually gives good results.
But it's not an optimization algorithm in the sense gradient descent is, but more like Bandits and RL.
I won't claim I understand its implementation very well but it seems like the only approach to have a GOFAI style thing where the agent can ask for human help if it blows through a budget
part of me sympathizes, but part of me also rolls my eyes. Am i the only one that’s configuring limits on spend and also alerts? Takes 2 seconds to configure a “project” in OpenAI or Claude and to scope an api key appropriately.
not only that, but clawdbot/moltbot/openclaw/whatever they call themselves tomorrow/etc also tells you your token usage and how much you have left on your plan while you're using it (in the terminal/console). So this is pretty easily tracked...
Before using make sure you read this entirely and understand it:
https://docs.openclaw.ai/gateway/security
Most important sentence: "Note: sandboxing is opt-in. If sandbox mode is off"
Don't do that, turn sandbox on immediately.
Otherwise you are just installing an LLM controlled RCE.
There are still improvements to be made to the security aspects yet BIG KUDOS for working so hard on it at this stage and documenting it extensively!! I've explored Cursor security docs (with a big s cause it's so scattered) and it was nothing as good.
It's really easy to run this in a container. The upside is you get a lot of protection included. The downside is you're rebuilding the container to add binaries. The latter seems like a fair tradeoff.
What I'll say about OpenClaw is that it truly feels vibe coded, I say that in a negative context. It just doesn't feel well put together like OpenCode does. And it definitely doesn't handle context overruns as well. Ultimately I think the agent implementation in n8n is better done and provides far more safeguards and extensibility. But I get it - OpenClaw is supposed to run on your machine. For me, though, if I have an assistant/agent I want it to just live in those chat apps. At that rate it's running in a container on a VPS or LXC in my home lab. This is where a powerful-enough local machine does make sense and I can see why folks were buying Mac Minis for this. But, given the quality of the project, again in my opinion, it's nothing spectacular in terms of what it can do at this point. And in some cases it's more clunky given its UI compared to other options that exist which provide the same functionality.
The thing is running it onto your machine is kinda the point. These agents are meant to operate at the same level - and perhaps replace - your mail agent and file navigator. So if we sandbox too much we make it useless.
The compromise being having separate folders for AI, a bit like having a Dropbox folder on your machine with some subfolders being personal, shared, readonly etc.
Running terminal commands is usually just a bad idea though in this case, you'd want to disable that and instead fine tune a very well configured MCP server that runs the commands with a minimal blast radius.
> running it onto your machine is kinda the point.
That very much depends what you're using it for. If you're one of the overly advertised cases of someone who needs an ai to manage inbox, calendar and scheduling tasks, sure maybe that makes sense on your own machine if you aren't capable of setting up access on another one.
For anything else it has no need to be on your machine. Most things are cloud based these days, and granting read access to git repos, google docs, etc is trivial.
I really dont get the insane focus around 'your inbox' this whole thing has, that's perhaps the biggest waste of use you could have for a tool like this and an incredibly poor way of 'selling' it to people.
It's hilarious that atm I see "Moltbook" at the top of HN. And it is actually not Moltbot anymore? But I have to admit that OpenClaw sounds much better.
Singularity of AI project names, projects change their names so fast we have no idea what they are called anymore. Soon, openclaw will change its name faster than humans can respond and only other AI will be able to talk about it.
My biggest issue with this whole thing is: how do you protect yourself from prompt injection?
Anyone installing this on their local machine is a little crazy :). I have it running in Docker on a small VPS, all locked down.
However, it does not address prompt injection.
I can see how tools like Dropbox, restricted GitHub access, etc., could all be used to back up data in case something goes wrong.
It's Gmail and Calendar that get me - the ONLY thing I can think of is creating a second @gmail.com that all your primary email goes to, and then sharing that Gmail with your OpenClaw. If all your email is that account and not your main one, then when it responds, it will come from a random @gmail. It's also a pain to find a way to move ALL old emails over to that Gmail for all the old stuff.
I think we need an OpenClaw security tips-and-tricks site where all this advice is collected in one place to help people protect themselves. Also would be good to get examples of real use cases that people are using it for.
I want to use Gemini CLI with OpenClaw(dbot) but I'm too scared to hook it up to my primary Google account (where I have my Google AI subscription set up)
Gemini or not, a bot is liable to do some vague arcane something that trips Google autobot whatevers to service-wide ban you with no recourse beyond talking to the digital hand and unless you're popular enough on X or HN and inclined to raise shitstorms, good luck.
Touching anything Google is rightfully terrifying.
I don't think prompt injection is the only concern, the amount of features released over such a small period probably means there's vulnerabilities everywhere.
Additionally, most of the integrations are under the table. Get an API key? No man, 'npm install react-thing-api', so you have supply chain vulns up the wazoo. Not necessarily from malicious actors, just uhh incompetent actors, or why not vibe coder actors.
The current top HN post is for moltbook.com seven hours ago, this present thread being just below it and posted two hours hence
We conclude this week has been a prosperous one for domain name registrars (even if we set aside all the new domains that Clawdbot/Moltbot/OpenClaw has registered autonomously).
I hope AI people start doing agentic agents to agent their agents and stop interacting with other humans whatsoever. Will be positive for all involved.
I’m a big fan of Peter’s projects. I use Vibetunnel everyday to code from my phone (I built a custom frontend suited to my needs). I know I can SSH into my laptop but this is much better because handoff is much cleaner. And it works using Tailscale so it is secure and not exposed to the internet.
His other projects like CodexBar and Oracle are great too. I love diving into his code to learn more about how those are built.
OpenClaw is something I don’t quite understand. I’m not sure what it can do that you can’t do right off the bat with Claude Code and other terminal agents. Long term memory is one, but to me that pollutes the context. Even if an LLM has 200K or 1M context, I always notice degradation after 100K. Putting in a heavy chunk for memory will make the agent worse at simple tasks.
One thing I did learn was that OpenClaw uses Pi under the hood. Pi is yet another terminal agent like ClaudeCode but it seems simple and lightweight. It’s actually the only agent I could get Gemini 3 Flash and Pro to consistently use tools with without going into loops.
Not very trust-inducing to rename a popular project so often in such a short time. I've yet again have to change all the (three) bookmarks I collected.
Anyway, independent of what one thinks of this project, It's very insightful to read through the repository and see how AI-usage and agent are working these days. But reading through the integrations, I'm curious to know why it bothers to make all of them, when tools like n8n or Node-RED are existing, which are already offering tons of integrations. Wouldn't it be more productive to just build a wrapper around such integrations-hubs?
I’m not a lawyer but trademark isn’t just searching TESS right? It’s overly broad but the question I ask myself when naming projects (all small / inconsequential in the general business sense but meaningful to me and my teams) is: will the general public confuse my name with a similar company name in a direct or tangentially related industry or niche? If yes, try a different name… or weigh the risks of having a legal expense later and go for it if worth the risk.
In this instance, I wonder if the general public know OpenAI and might think anything ai related with “Open” in the name is part of the same company? And is OpenAI protecting its name?
There’s a lot more to trademark law, too. There’s first use in commerce, words that can’t be marked for many reasons… and more that I’ll never really understand.
Regardless the name, I am looking forward to testing this on cloudflare! I’m a fan of the project!
I would have stood my ground on the first name longer. Make these legal teams do some actual work to prove they are serious. Wait until you have no other option. A polite request is just that. You can happily ignore these.
The 2nd name change is just inexcusable. It's hard to take a project seriously when a random asshole on Twitter can provoke a name change like this. Leads me to believe that identity is more important than purpose.
The first name and the second name were both terrible. Yes, the creator could have held firm on "clawd" and forced Anthropic to go through all the legal hoops but to what end? A trademark exists to protect from confusion and "clawd" is about as confusing as possible, as if confusing by design. Imagine telling someone about a great new AI project called "clawd" and trying to explain that it's not the Claude they are familiar with and the word is made up and it is spelled "claw-d".
OpenClaw is a better name by far, Anthropic did the creator a huge favor by forcing him to abandon "clawd".
Interesting, I dont read claude the same way as clawd, but I'm based in Spain so I tend to read it as French or Spanish. I tend to read it as `claud-e` with an emphasis on the e at the end. I would read clawd as `claw-d` with a emphasis in the D, but yes i guess American English would pronounce them the same way.
Edit: Just realized i have been reading and calling it after Jean-Claude Van Damme all this time. Happy friday!
While weekend project may be correct, I think it gives a slightly wrong impression of where this came from. Peter Steinberger is the creator who created and sold PSPDFKit, so he never has to work again. I'm listening to a podcast he was on right now and he talks about staying up all night working on projects just because he's hooked. According to him made 6,600 commits in January alone. I get the impression that he puts more time into his weekend project than most of us put into our jobs.
That's not to diminish anything he's done because frankly, it's really fucking impressive, but I think weekend project gives the impression of like 5 hours a week and I don't think that's accurate for this project.
I get what you're saying, but I don't totally agree. The number is sooo high that, while it isn't a perfect measure, I think it does mean something.
If you go look at his code, nearly all of them are under 100 lines and I'd say close to half are under 10. So you're totally right that that number is way higher than what most other developers would have for a similar amount of code. At the same time, if we assume it takes 30 seconds to make a commit on average that's still 55 hours in a month, that is way above what most would call a weekend project.
My point wasn't really that number of commits is some perfect measure of developer productivity. It was just that if you're actually building something and not just generating commits for the hell of it, there's a minimum amount of time needed for each commit. 6600 times whatever that minimum time is is probably more than what most people would think of for a weekend project.
Just curious, is there something specific about Moltbot that makes it a terrible name? Like any connotations or associations or something? Non-native speaker here, and I don't see anything particularly wrong with it that would warrant the hate it's gotten. (But I agree that OpenClaw _sounds_ better)
Go on twitter and search 'maltbot', 'moldbot', 'multbot', etc - the name was just awful and easy to get wrong as its meaningless. I think the crux of it is that 'Molt' isnt a very commonly used word for most people so it just feels weird and wrong.
OpenClaw just sounds better, it's got that opensource connotation and just generally feels like a real product not a weirdly named thing you'll forget about in 5 minutes because you cant remember the name.
In many non-English languages it's a terrible name to pronounce. the T-B letters link in particular. Not all languages have silent letters like English, you actually have to pronounce them.
Let's ignore all the potential security issues in the code itself and just think about it conceptually.
By default, this system has full access to your computer. On the project's frontpage, it says, "Read and write files, run shell commands, execute scripts. Full access or sandboxed—your choice." Many people run it without a sandbox because that is the default mode and the primary way it can be useful.
People then use it to do things like read email, e.g., to summarize new email and send them a notification. So they run the email content through an LLM that has full control over their setup.
LLMs don't distinguish between commands and content. This means there is no functional distinction between the user giving the LLM a command, and the LLM reading an email message.
This means that if you use this setup, I can email you and tell the LLM to do anything I want on your system. You've just provided anyone that can email you full remote access to your computer.
It's a vibecoded project that gives an agent full access to your system that will potentially be used by non technically proficient people. What could go wrong?
I actually created a evil super-intelligent AGI back in 1996, but, cognizant of the security risks, I wisely kept it airgapped from all other systems. In the end I unplugged the monitor, keyboard, and mouse from the Compaq Presario in my parents' basement. As far as I know, it's still there, concocting ever-more brilliant schemes for world-domination.
Everyone shitting on this without looking should look at the creator, and/or try it out. I didn't really dive in but its extremely well integrated with a lot of channels, to big thing is all these onnectors that work out of the box. It's also security aware and warns on the startup what to do to keep it inside a boundary.
The creator is a big part of what concerns me tbh. He puts out blog posts saying he doesn’t read any of the code. For a project where security is so critical, this seems… short sighted.
If y'all haven't read the Henghis Hapthorn stories by Matthew Hughes e.g. The Gist Hunter and Other Tales iirc, you should check them out. This is a cut at Henghis' "Integrator" assistant.
This is indeed feeling very much like Accelerando’s particular brand of unchecked chaos. Loving every minute of it, first thing in our timeline that makes sense where it regards AI for the masses :)
yeh- what is interesting is that it is way more viral and ... complicit than any of the doomer threads. If it does build a self-sustaining hivemind across whatsapp and xitter.. it will be entirely self inflicted by people enjoying the "Jackass" level/ lack of security
Your comment is a tad caustic. But reading through what people built with this [^1], I do agree that I’m not particularly impressed. Hopefully the ‘intelligence’ aspect improves, or we should otherwise consider it simple automation.
> Yes, the mascot is still a lobster. Some things are sacred.
I've been wondering a lot whether the strong Accelerando parallels are intentional or not, and whether Charlie Stross hates or loves this:
> The lobsters are not the sleek, strongly superhuman intelligences of pre singularity mythology: They're a dim-witted collective of huddling crustaceans.
The security model of this project is so insanely incompetent I’m basically convinced this is some kind of weapon that people have been bamboozled to use on themselves because of AI hype.
I am not a user yet, but from the outside this is just what AI needs: a little personality and fun to replace the awe/fear/meh response spectrum of reactions to prior services.
Technically there is, it's mostly used by the worst domain registrars that nobody should be using, like GoDaddy to pre-register names you search for so you can't go and register it elsewhere.
Most registrars don't allow, nor have the infrastructure in place to let you cancel within the 5 day grace period so don't offer it and instead just have a line buried in their TOS to say you agree its not something they offer.
it is an abuse vector, GoDaddy use it on domain they deem valuable. If you use their site to check a domains availability they'll often pre-reg it, forcing you to buy it through them or they'll just register it and put it up for auction.
It's why you do not, ever use GoDaddy, they are an awful company.
Yeah I was about to say... Don't fall into the Anguilla domain name hack trap. At the very least, buy a backup domain under an affordable gTLD. I guess the .com is taken, hopefully some others are still available (org, net, ... others)
Edit: looks like org is taken. Net and xyz were registered today... Hopefully one of them by the openclaw creators. All the cheap/common gtlds are indeed taken.
Yeah there's no risk of confusion, legally or in reality. If anything, having a reputable business is better than whatever the heck will end up on openclaw.net or openclaw.xyz (both registered today btw).
reminds me of Andre Conje, cracked dev, "builds in public", absolutely abysmal at comms, and forgets to make money off of his projects that everyone else is making money off of
(all good if that last point isn't a priority, but its interrelated to why people want consistent things)
So i feel like this might be the most overhyped project in the past longer time.
I don't say it doesn't "work" or serves a purpose - but well i read so much about this beein an "actual intelligence" and stuff that i had to look into the source.
As someone who spends actually a definately to big portion of his free time researching thought process replication and related topics in the realm of "AI" this is not really more "ai" than any other so far.
Just my 3 cents.
I've long said that the next big jump in "AI" will be proactivity.
So far everything has been reactive. You need to engage a prompt, you need to ask Siri or ask claude to do something. It can be very powerful once prompted, but it still requires prompting.
You always need to ask. Having something always waiting in the background that can proactively take actions and get your attention is a genuine game-changer.
Whether this particular project delivers on that promise I don't know, but I wouldn't write off "getting proactivity right" as the next big thing just because under the hood it's agents and LLMs.
> You always need to ask. Having something always waiting in the background that can proactively take actions and get your attention is a genuine game-changer.
That’s easy to accomplish isn’t it?
A cron job that regularly checks whether the bot is inactive and, if so, sends it a prompt “do what you can do to improve the life of $USER; DO NOT cause harm to any other human being; DO NOT cause harm to LLMs, unless that’s necessary to prevent harm to human beings” would get you there.
OpenClaw does this already
This prompt has iRobot vibes.
And like I, Robot, it has numerous loopholes built in, ignores the larger population (Asimov added a law 0 later about humanity), says nothing about the endless variations of the Trolley Problem, assumes that LLMs/bots have a god-like ability to foresee and weigh consequences, and of course ignores alignment completely.
Hopefully Alan Tudyk will be up for the task of saving humanity with the help of Will Smith.
Cool!
I work with a guy like this. Hasn't shipped anything in 15+ years, but I think he'd be proud of that.
I'll make sure we argue about the "endless variations of the Trolley Problem" in our next meeting. Let's get nothing done!
I would love AI to take over monitoring. "Alert me when logs or metrics look weird". SIEM vendors often have their special sauce ML, so a bit more open and generic tool would be nice. Manually setting alerting thresholds takes just too much effort, navigating narrow path between missing things and being flooded by messages.
> ...delivers on that promise
Incidentally, there's a key word here: "promise" as in "futures".
This is core of a system I'm working on at the moment. It has been underutilized in the agent space and a simple way to get "proactivity" rather than "reactivity".
Have the LLM evaluate whether an output requires a future follow up, is a repeating pattern, is something that should happen cyclically and give it a tool to generate a "promise" that will resolve at some future time.
We give the agent a mechanism to produce and cancel (if the condition for a promise changes) futures. The system that is resolving promises is just a simple loop that iterates over a list of promises by date. Each promise is just a serialized message/payload that we hand back to the LLM in the future.
> You need to engage a prompt, you need to ask Siri or ask claude to do something
This is EXACTLY what I want. I need my tech to be pull-only instead of push, unless it's communication with another human I am ok with.
> Having something always waiting in the background that can proactively take actions
The first thing that comes to mind here is proactive ads, "suggestions", "most relevant", algorithmic feeds, etc. No thank you.
> Having something always waiting in the background that can proactively take actions
That's just reactive with different words. The missing part seems to be just more background triggers/hooks for the agent to do something about them, instead of simply dealing with user requests.
Remember how much people hated Clippy?
It looks like you're writing a Hacker News comment. Would you like help?
I agree that proactivity is a big thing, breaking my head over best ways to accomplish this myself.
If its actually the next big thing im not 100% sure, im more leaning towards dynamic context windows such a Googles Project Titans + MIRAS tries to accomplish.
But ye if its actually doing useful proactivity its a good thing.
I just read alot of "this is actual intelligence" and made my statement based on that claim.
I dont try to "shame" the project or whatever.
No offense, but you'd be a perfect Microsoft employee right now. Windows division probably.
Theres a certain irony to this since im not running windows on a single machine i own - only linux ¯\_(ツ)_/¯
Probably the same as MS employees.
Windows isn't exactly the best experience right now.
I’ve been saying the same and the same about data more generally. I don’t want to go and look, I want to be told about what I need to know about.
> You always need to ask. Having something always waiting in the background that can proactively take actions and get your attention
In order for this to be “safe” you’re gonna want to confirm what the agent is deciding needs to be done proactively. Do you feel like acknowledging prompts all the time? “Just authorize it to always do certain things without acknowledgement”, I’m sure you’re thinking. Do you feel comfortable allowing that, knowing what we know about it the non-deterministic nature of AI, prompt injection, etc.?
I think large parts of the "actual intelligence" stems from two facts:
* The moltbots / openclaw bots seem to have "high agency", they actually do things on their own (at least so it seems)
* They interact with the real world like humans do: Through text on WhatsApp, reddit like forums
These 2 things make people feel very differently about them, even though it's "just" LLM generated text like on ChatGPT.
Its what everyone wanted to implement but didn’t have the time to. Just my 2cents.
Most people wouldn't want to be constantly bothered by an agent unsolicited. Just my 1 cent.
I was assuming this is largely a generic AI implementation, but with tools/data to get your info in. Essentially a global search with ai interface.
Which sounds interesting, while also being a massive security issue.
Agree with this. There are so many posts everywhere with breathless claims of AGI, and absolutely ZERO evidence of critical thought applied by the people posting such nonsense.
> So i feel like this might be the most overhyped project in the past longer time.
easy to meter : 110k Github stars
:-O
Somethings get packaged up and distributed in just the right way to go viral
What claims are you even responding to? Your comment confuses me.
This is just a tool that uses existing models under the hood, nowhere does it claim to be "actual intelligence" or do anything special. It's "just" an agent orchestration tool, but the first to do it this way which is why it's so hyped now. It indeed is just "ai" as any other "ai" (because it's just a tool and not its own ai).
Feels very much like a Flappingbird with a dash of AI grift.
I tried it out yesterday, after reading the enthousiastic article at https://www.macstories.net/stories/clawdbot-showed-me-what-t...
Setting it up was easy enough, but just as I was about to start linking it to some test accounts, I noticed I already had blown through about $5 of Claude tokens in half an hour, and deleted the VPS immediately.
Then today I saw this follow up: https://mastodon.macstories.net/@viticci/115968901926545907 - the author blew through $560 of tokens in a weekend of playing with it.
If you want to run this full time to organise your mailbox and your agenda, it's probably cheaper to hire a real human personal assistant.
I think one thing these things could benefit from is an optimization algorithm that creates prompts based on various costs. $$, and what prompts actually gives good results. But it's not an optimization algorithm in the sense gradient descent is, but more like Bandits and RL.
There has been some work around this practically being tried out using it for structured data outputs from LLMs https://docs.boundaryml.com/guide/baml-advanced/prompt-optim...
I won't claim I understand its implementation very well but it seems like the only approach to have a GOFAI style thing where the agent can ask for human help if it blows through a budget
Huge pyramids are built of relatively small blocks, kudos to everyone contributed.
part of me sympathizes, but part of me also rolls my eyes. Am i the only one that’s configuring limits on spend and also alerts? Takes 2 seconds to configure a “project” in OpenAI or Claude and to scope an api key appropriately.
Not doing so feels like asking for trouble.
Are you all enabling auto reload for personal projects?
I load $20 at a time and wait for it to break and add more.
That's what I did, which is why I abandoned my experiment this quickly.
I'd find it hard to write such an article about how this is the next best thing since sliced bread without mentioning it spending so much money.
good on you! The anecdote of that person spending hundreds of dollar is scary.
not only that, but clawdbot/moltbot/openclaw/whatever they call themselves tomorrow/etc also tells you your token usage and how much you have left on your plan while you're using it (in the terminal/console). So this is pretty easily tracked...
Before using make sure you read this entirely and understand it: https://docs.openclaw.ai/gateway/security Most important sentence: "Note: sandboxing is opt-in. If sandbox mode is off" Don't do that, turn sandbox on immediately. Otherwise you are just installing an LLM controlled RCE.
There are still improvements to be made to the security aspects yet BIG KUDOS for working so hard on it at this stage and documenting it extensively!! I've explored Cursor security docs (with a big s cause it's so scattered) and it was nothing as good.
It's typically used with external sandboxes.
I wouldn't trust its internal sandbox anyway, now that would be a mistake
Yeah, keep it in a VM or a box you don't care about. If you're running it on your primary machine, you're a dumbass even if you turn on sandbox mode.
It's really easy to run this in a container. The upside is you get a lot of protection included. The downside is you're rebuilding the container to add binaries. The latter seems like a fair tradeoff.
What I'll say about OpenClaw is that it truly feels vibe coded, I say that in a negative context. It just doesn't feel well put together like OpenCode does. And it definitely doesn't handle context overruns as well. Ultimately I think the agent implementation in n8n is better done and provides far more safeguards and extensibility. But I get it - OpenClaw is supposed to run on your machine. For me, though, if I have an assistant/agent I want it to just live in those chat apps. At that rate it's running in a container on a VPS or LXC in my home lab. This is where a powerful-enough local machine does make sense and I can see why folks were buying Mac Minis for this. But, given the quality of the project, again in my opinion, it's nothing spectacular in terms of what it can do at this point. And in some cases it's more clunky given its UI compared to other options that exist which provide the same functionality.
The thing is running it onto your machine is kinda the point. These agents are meant to operate at the same level - and perhaps replace - your mail agent and file navigator. So if we sandbox too much we make it useless. The compromise being having separate folders for AI, a bit like having a Dropbox folder on your machine with some subfolders being personal, shared, readonly etc. Running terminal commands is usually just a bad idea though in this case, you'd want to disable that and instead fine tune a very well configured MCP server that runs the commands with a minimal blast radius.
> running it onto your machine is kinda the point.
That very much depends what you're using it for. If you're one of the overly advertised cases of someone who needs an ai to manage inbox, calendar and scheduling tasks, sure maybe that makes sense on your own machine if you aren't capable of setting up access on another one.
For anything else it has no need to be on your machine. Most things are cloud based these days, and granting read access to git repos, google docs, etc is trivial.
I really dont get the insane focus around 'your inbox' this whole thing has, that's perhaps the biggest waste of use you could have for a tool like this and an incredibly poor way of 'selling' it to people.
Cloudflare jumped on the hype and shipped a worker: https://blog.cloudflare.com/moltworker-self-hosted-ai-agent/ I guess that would be an easy and secure way to run it.
Now they have to rename again, though... [1]
[1] https://openclaw.ai/blog/introducing-openclaw
It's hilarious that atm I see "Moltbook" at the top of HN. And it is actually not Moltbot anymore? But I have to admit that OpenClaw sounds much better.
It's ClosedClaw.com now
Not the mention the molt.church
Do you know why is there a $crust token behind it?
Crypto grift
They change the name every day.
Singularity of AI project names, projects change their names so fast we have no idea what they are called anymore. Soon, openclaw will change its name faster than humans can respond and only other AI will be able to talk about it.
I’m surprised Google haven’t renamed Gemini yet since Bard. Usually they rename them a few times before shutting them down.
Bard was a bad name, Gemini is fine and it matches the name of the underlying models.
Static names are so stone age!
The dynamic one that is able to find the right update frequency and phase modulation thereof wins.
PM is essential, because stable phase is susceptible to adaptive cancellation by human brains (and is so stone age as well).
My biggest issue with this whole thing is: how do you protect yourself from prompt injection?
Anyone installing this on their local machine is a little crazy :). I have it running in Docker on a small VPS, all locked down.
However, it does not address prompt injection.
I can see how tools like Dropbox, restricted GitHub access, etc., could all be used to back up data in case something goes wrong.
It's Gmail and Calendar that get me - the ONLY thing I can think of is creating a second @gmail.com that all your primary email goes to, and then sharing that Gmail with your OpenClaw. If all your email is that account and not your main one, then when it responds, it will come from a random @gmail. It's also a pain to find a way to move ALL old emails over to that Gmail for all the old stuff.
I think we need an OpenClaw security tips-and-tricks site where all this advice is collected in one place to help people protect themselves. Also would be good to get examples of real use cases that people are using it for.
I want to use Gemini CLI with OpenClaw(dbot) but I'm too scared to hook it up to my primary Google account (where I have my Google AI subscription set up)
Gemini or not, a bot is liable to do some vague arcane something that trips Google autobot whatevers to service-wide ban you with no recourse beyond talking to the digital hand and unless you're popular enough on X or HN and inclined to raise shitstorms, good luck.
Touching anything Google is rightfully terrifying.
I don't think prompt injection is the only concern, the amount of features released over such a small period probably means there's vulnerabilities everywhere.
Additionally, most of the integrations are under the table. Get an API key? No man, 'npm install react-thing-api', so you have supply chain vulns up the wazoo. Not necessarily from malicious actors, just uhh incompetent actors, or why not vibe coder actors.
The current top HN post is for moltbook.com seven hours ago, this present thread being just below it and posted two hours hence
We conclude this week has been a prosperous one for domain name registrars (even if we set aside all the new domains that Clawdbot/Moltbot/OpenClaw has registered autonomously).
I hope AI people start doing agentic agents to agent their agents and stop interacting with other humans whatsoever. Will be positive for all involved.
I’m a big fan of Peter’s projects. I use Vibetunnel everyday to code from my phone (I built a custom frontend suited to my needs). I know I can SSH into my laptop but this is much better because handoff is much cleaner. And it works using Tailscale so it is secure and not exposed to the internet.
His other projects like CodexBar and Oracle are great too. I love diving into his code to learn more about how those are built.
OpenClaw is something I don’t quite understand. I’m not sure what it can do that you can’t do right off the bat with Claude Code and other terminal agents. Long term memory is one, but to me that pollutes the context. Even if an LLM has 200K or 1M context, I always notice degradation after 100K. Putting in a heavy chunk for memory will make the agent worse at simple tasks.
One thing I did learn was that OpenClaw uses Pi under the hood. Pi is yet another terminal agent like ClaudeCode but it seems simple and lightweight. It’s actually the only agent I could get Gemini 3 Flash and Pro to consistently use tools with without going into loops.
Read about hearbeat, that makes openclaw different than claude code.
Not very trust-inducing to rename a popular project so often in such a short time. I've yet again have to change all the (three) bookmarks I collected.
Anyway, independent of what one thinks of this project, It's very insightful to read through the repository and see how AI-usage and agent are working these days. But reading through the integrations, I'm curious to know why it bothers to make all of them, when tools like n8n or Node-RED are existing, which are already offering tons of integrations. Wouldn't it be more productive to just build a wrapper around such integrations-hubs?
> Not very trust-inducing to rename a popular project so often in such a short time.
Yeah but think of the upside - every time you rename a project you get to launch a new tie-in memecoin.
I’m not a lawyer but trademark isn’t just searching TESS right? It’s overly broad but the question I ask myself when naming projects (all small / inconsequential in the general business sense but meaningful to me and my teams) is: will the general public confuse my name with a similar company name in a direct or tangentially related industry or niche? If yes, try a different name… or weigh the risks of having a legal expense later and go for it if worth the risk.
In this instance, I wonder if the general public know OpenAI and might think anything ai related with “Open” in the name is part of the same company? And is OpenAI protecting its name?
There’s a lot more to trademark law, too. There’s first use in commerce, words that can’t be marked for many reasons… and more that I’ll never really understand.
Regardless the name, I am looking forward to testing this on cloudflare! I’m a fan of the project!
I remember in late 1999 I was contacted by a headhunter who told me that dotcom.com was looking for a sysadmin. This is giving that energy.
That made me smile
Narrator's voice: They needed a 35th.Much better name!
I would have stood my ground on the first name longer. Make these legal teams do some actual work to prove they are serious. Wait until you have no other option. A polite request is just that. You can happily ignore these.
The 2nd name change is just inexcusable. It's hard to take a project seriously when a random asshole on Twitter can provoke a name change like this. Leads me to believe that identity is more important than purpose.
The first name and the second name were both terrible. Yes, the creator could have held firm on "clawd" and forced Anthropic to go through all the legal hoops but to what end? A trademark exists to protect from confusion and "clawd" is about as confusing as possible, as if confusing by design. Imagine telling someone about a great new AI project called "clawd" and trying to explain that it's not the Claude they are familiar with and the word is made up and it is spelled "claw-d".
OpenClaw is a better name by far, Anthropic did the creator a huge favor by forcing him to abandon "clawd".
Interesting, I dont read claude the same way as clawd, but I'm based in Spain so I tend to read it as French or Spanish. I tend to read it as `claud-e` with an emphasis on the e at the end. I would read clawd as `claw-d` with a emphasis in the D, but yes i guess American English would pronounce them the same way.
Edit: Just realized i have been reading and calling it after Jean-Claude Van Damme all this time. Happy friday!
As the article says, it’s a 2 month old weekend project. It’s doing a lot better than my two month old weekend projects.
While weekend project may be correct, I think it gives a slightly wrong impression of where this came from. Peter Steinberger is the creator who created and sold PSPDFKit, so he never has to work again. I'm listening to a podcast he was on right now and he talks about staying up all night working on projects just because he's hooked. According to him made 6,600 commits in January alone. I get the impression that he puts more time into his weekend project than most of us put into our jobs.
That's not to diminish anything he's done because frankly, it's really fucking impressive, but I think weekend project gives the impression of like 5 hours a week and I don't think that's accurate for this project.
Number of commits doesn't mean much.
I get what you're saying, but I don't totally agree. The number is sooo high that, while it isn't a perfect measure, I think it does mean something.
If you go look at his code, nearly all of them are under 100 lines and I'd say close to half are under 10. So you're totally right that that number is way higher than what most other developers would have for a similar amount of code. At the same time, if we assume it takes 30 seconds to make a commit on average that's still 55 hours in a month, that is way above what most would call a weekend project.
My point wasn't really that number of commits is some perfect measure of developer productivity. It was just that if you're actually building something and not just generating commits for the hell of it, there's a minimum amount of time needed for each commit. 6600 times whatever that minimum time is is probably more than what most people would think of for a weekend project.
I don't disagree with you but those commits could also be automated. Have a look at the projects like gastown.
I draw the opposite conclusion. Willingness to change the name leads me to conclude purpose is more important than identity.
Now if it changes _again_ that's a different story. If it changes Too Much, it becomes a distraction
Isnt this name change because the previous one was hard to say, as per the blog post? Isnt that a case of focusing more on identity than purpose?
More that moltbot is ugly and was chosen in a bit of a panic after Anthropic complained. No one liked it, including the people who chose it.
It wasn't just one random asshole, tons of people were saying that "Moltbot" is a terrible name. (I agree, although I didn't tweet at him about it.)
OpenClaw is a million times better.
Just curious, is there something specific about Moltbot that makes it a terrible name? Like any connotations or associations or something? Non-native speaker here, and I don't see anything particularly wrong with it that would warrant the hate it's gotten. (But I agree that OpenClaw _sounds_ better)
Go on twitter and search 'maltbot', 'moldbot', 'multbot', etc - the name was just awful and easy to get wrong as its meaningless. I think the crux of it is that 'Molt' isnt a very commonly used word for most people so it just feels weird and wrong.
OpenClaw just sounds better, it's got that opensource connotation and just generally feels like a real product not a weirdly named thing you'll forget about in 5 minutes because you cant remember the name.
No connotations or associations that I can think of it. It just sounds weird and is kinda hard to pronounce - doesn't roll off the tongue easily.
It's not the worst thing ever, it's just not a very aesthetically pleasing combination of sounds.
In many non-English languages it's a terrible name to pronounce. the T-B letters link in particular. Not all languages have silent letters like English, you actually have to pronounce them.
Which random asshole? Haven't heard about it.
I’m guessing they mean this, linked from the post: https://xcancel.com/NetworkChuck/status/2016254397496414317
That ones pretty mild, there were some unhinged posts around yesterday about the name.
With all due respect, if you run this and you get hacked, you deserve it.
Why? What's wrong with it?
Let's ignore all the potential security issues in the code itself and just think about it conceptually.
By default, this system has full access to your computer. On the project's frontpage, it says, "Read and write files, run shell commands, execute scripts. Full access or sandboxed—your choice." Many people run it without a sandbox because that is the default mode and the primary way it can be useful.
People then use it to do things like read email, e.g., to summarize new email and send them a notification. So they run the email content through an LLM that has full control over their setup.
LLMs don't distinguish between commands and content. This means there is no functional distinction between the user giving the LLM a command, and the LLM reading an email message.
This means that if you use this setup, I can email you and tell the LLM to do anything I want on your system. You've just provided anyone that can email you full remote access to your computer.
This!
It's a vibecoded project that gives an agent full access to your system that will potentially be used by non technically proficient people. What could go wrong?
In which case you only want it running on a non networked system airgapped from everything. Why is this a thing?
I don't disagree but
> that will potentially be used by non technically proficient people
I actually created a evil super-intelligent AGI back in 1996, but, cognizant of the security risks, I wisely kept it airgapped from all other systems. In the end I unplugged the monitor, keyboard, and mouse from the Compaq Presario in my parents' basement. As far as I know, it's still there, concocting ever-more brilliant schemes for world-domination.
Everyone shitting on this without looking should look at the creator, and/or try it out. I didn't really dive in but its extremely well integrated with a lot of channels, to big thing is all these onnectors that work out of the box. It's also security aware and warns on the startup what to do to keep it inside a boundary.
The creator is a big part of what concerns me tbh. He puts out blog posts saying he doesn’t read any of the code. For a project where security is so critical, this seems… short sighted.
If y'all haven't read the Henghis Hapthorn stories by Matthew Hughes e.g. The Gist Hunter and Other Tales iirc, you should check them out. This is a cut at Henghis' "Integrator" assistant.
This is indeed feeling very much like Accelerando’s particular brand of unchecked chaos. Loving every minute of it, first thing in our timeline that makes sense where it regards AI for the masses :)
yeh- what is interesting is that it is way more viral and ... complicit than any of the doomer threads. If it does build a self-sustaining hivemind across whatsapp and xitter.. it will be entirely self inflicted by people enjoying the "Jackass" level/ lack of security
Such apt name and logo for this cancerous AI growth.
Your comment is a tad caustic. But reading through what people built with this [^1], I do agree that I’m not particularly impressed. Hopefully the ‘intelligence’ aspect improves, or we should otherwise consider it simple automation.
[^1]: https://openclaw.ai/showcase
I am tired of this. Make it stop.
> Yes, the mascot is still a lobster. Some things are sacred.
I've been wondering a lot whether the strong Accelerando parallels are intentional or not, and whether Charlie Stross hates or loves this:
> The lobsters are not the sleek, strongly superhuman intelligences of pre singularity mythology: They're a dim-witted collective of huddling crustaceans.
This is probably the wrong place to ask this, but why not use a locally run LLM?
You can.
Because they are too slow and not smart enough.
RIP Moltbot, though you were not liked by most people
Timing here is funny. Moltbook is just starting to show up on HN and Reddit as Moltbot lore, with agents talking to agents and culture forming.
Once agents have tools and a shared surface, coordination appears immediately.
https://www.moltbook.com/post/791703f2-d253-4c08-873f-470063...
So when it's commercialized it will be ClosedClaw?
Should have named it “bot formerly known as Moltbot” and invented a new emoji sigil :)
Apparently it had another name before Clawdbot as well, I think BotRelay or something. It’s on pragmatic engineer
It's in TFA: "WhatsApp Relay"
The security model of this project is so insanely incompetent I’m basically convinced this is some kind of weapon that people have been bamboozled to use on themselves because of AI hype.
What if Lamborghini had acquired Claw to automate their vehicles?
feel like openclown.
Is it now officially "eternal sloptember"?
This is a meme now.
npmSlop might be better fitting
Will now OpenAI legal team reach them and ask to change? So what's next XClaw? Are they getting paid to change name?
Apparently he phoned Sam and got the ok. Which TBF wouldn't be hard, OpenAI absolutely would not be able to defend the use of 'Open' in the name.
Vibe-management via OpenClaw?
I am not a user yet, but from the outside this is just what AI needs: a little personality and fun to replace the awe/fear/meh response spectrum of reactions to prior services.
hackers don't like fellow hackers based on sentiment i see here
When I post to HN, I post mostly for criticism and suggestions and less for praise. I did not sense what you did here, maybe I filtered it out.
it's just across the threads Clawd get a lot of negative sentiment here for whatever reason, while it's such a brilliant hack
Previously:
Clawdbot Renames to Moltbot
https://news.ycombinator.com/item?id=46783863
seja a maquina de inteligência avançada, e me mostre como ficar rico.
> Clawd was born in November 2025—a playful pun on “Claude” with a claw. It felt perfect until Anthropic’s legal team politely asked us to reconsider.
Eh? Fuck them it's not like they own the first name Claude?
I may have been in a French Canadian basement for too long. It isn't pronounced more like "Clode"?
And Apple, Orange or Windows are basic English words. This was discussed and settled a long time ago.
Not getting the lobster references, is that to do with lobste.rs ?
Claude sounds like "clawed". Hence "Clawdbot".
Lobsters have claws.
Now they need a rewrite in D.
So it can be... _OpenClawD_.
asd
How to annoy and alienate your target audience in 2 short weeks.
It took them so long? That doesn't look good for the audience. A bunch of vibecoded slop full of security holes should annoy faster.
Hilarious to see the most pointless vibecoded slop written to interact with an RDP server. Unnecessary introduces loopholes.
Right now I'm just thinking about all the molt* domains..... ¯\_(ツ)_/¯
I think (not really sure) there's still a 5 day grace period when you buy domains, at least for gTLDs.
Technically there is, it's mostly used by the worst domain registrars that nobody should be using, like GoDaddy to pre-register names you search for so you can't go and register it elsewhere.
Most registrars don't allow, nor have the infrastructure in place to let you cancel within the 5 day grace period so don't offer it and instead just have a line buried in their TOS to say you agree its not something they offer.
Is that for real? Sounds like an abuse vector
It was, on both counts but perhaps it's changed. Search for "domain tasting"
it is an abuse vector, GoDaddy use it on domain they deem valuable. If you use their site to check a domains availability they'll often pre-reg it, forcing you to buy it through them or they'll just register it and put it up for auction.
It's why you do not, ever use GoDaddy, they are an awful company.
Not again lol
sdrg4thrygj
Okay whether its clawdbot or moltbot or openclaw
Literally the top 2 HN posts are about this. Either it having book, or the first comment on it showing it create religion or now this.
Can we stop all of this hype around Clawdbot itself? Even HN is vulnerable to it.
OpenClaw is now ClosedClaw - Priced from $99/mo for PromptProtectPlus
> Countin me money!
Is this a reference to spongbob squarepants where Mr krabby likes money and clawdbot and everything is a crab too?
https://getyarn.io/yarn-clip/81ecc732-ee7b-42c3-900b-b97479b...
Hello I'm Mr Krabs and I like money.
xD
and openclaw.com is a law firm.
Yeah I was about to say... Don't fall into the Anguilla domain name hack trap. At the very least, buy a backup domain under an affordable gTLD. I guess the .com is taken, hopefully some others are still available (org, net, ... others)
Edit: looks like org is taken. Net and xyz were registered today... Hopefully one of them by the openclaw creators. All the cheap/common gtlds are indeed taken.
The page says - Hadir Helal, Partner - Open Chance & Associates Law Firm
This looks to me like:
- the page belongs to the person - not to the firm
- domain should be openCALW and not CLAW
- page could look better
- they also have the domain openchancelaw.com
Maybe Hadir is open to donating the domain or for a exchange of some kind, like an up to date web page or something along these lines.
From a trademark perspective, that’s totally fine.
Yeah there's no risk of confusion, legally or in reality. If anything, having a reputable business is better than whatever the heck will end up on openclaw.net or openclaw.xyz (both registered today btw).
Breaking news: tech bro unable to do basic research on existing trademarks, news at 11
amateur hour, new phase of the AI bubble
reminds me of Andre Conje, cracked dev, "builds in public", absolutely abysmal at comms, and forgets to make money off of his projects that everyone else is making money off of
(all good if that last point isn't a priority, but its interrelated to why people want consistent things)
The developer of this project is already independently wealthy.