It’s really not that hard to secure agents. Just give them tightly scoped API keys, put them in front of your API instead of behind it, and treat them like you would a user.
If I were ever to use Claude in a production environment for an AWS account, for instance, you'd best believe the role it was running under, with temporary access keys, would have the bare minimum of permissions.
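A minimal sketch of "treat the agent like a user in front of your API": the key names and scope strings below are invented for illustration, but the idea is just an ordinary key-to-scopes lookup, the same check you'd apply to any client.

```python
# Each API key maps to the scopes it was granted. The agent gets its own
# key with a deliberately narrow set; these names are hypothetical.
API_KEY_SCOPES = {
    "agent-key-123": {"tickets:read", "tickets:comment"},
    "admin-key-456": {"tickets:read", "tickets:write", "users:delete"},
}

def authorize(api_key: str, required_scope: str) -> bool:
    """Allow the request only if this key was granted the required scope."""
    return required_scope in API_KEY_SCOPES.get(api_key, set())

# The agent can read tickets...
print(authorize("agent-key-123", "tickets:read"))    # → True
# ...but cannot delete users, even if a prompt injection tells it to.
print(authorize("agent-key-123", "users:delete"))    # → False
```

The point is that the API, not the agent's prompt, is where the permission boundary lives.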
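One way the "bare minimum role with temporary keys" could look in practice: attach a session policy when assuming the agent's role, so the temporary credentials are the intersection of the role's permissions and this policy. The role name, bucket, and actions here are illustrative assumptions, not from the original posts.

```python
import json

# Hypothetical session policy: even if the underlying role allows more,
# credentials minted with this policy attached can only do what's listed here.
AGENT_SESSION_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-agent-bucket/*",
        }
    ],
}

# With boto3, this would be passed to STS roughly like so (not executed here):
#   sts.assume_role(
#       RoleArn="arn:aws:iam::123456789012:role/agent-minimal",
#       RoleSessionName="claude-agent",
#       Policy=json.dumps(AGENT_SESSION_POLICY),  # further restricts the role
#       DurationSeconds=900,                      # short-lived credentials
#   )
policy_json = json.dumps(AGENT_SESSION_POLICY)
```

Short `DurationSeconds` plus a restrictive session policy means a leaked or misused credential has both a narrow blast radius and a short lifetime.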
To be clear - I'm not really talking about my personal laptop. I'm thinking about where this is going at scale. When companies start replacing entire teams with agents (and looking at the layoffs, that's clearly the direction), those agents will need real access to production systems. That's the scenario where "just don't give it access" stops being an answer.
Scams and "social engineering", which have been understood for a long time, could be a good approximation.
Right, but with scams you trick a human into doing something. With agents, you give them the keys upfront - terminal, file system, API keys - because otherwise what's the point? You can't have an agent that asks permission for every action, you'd just be babysitting it all day. So the question isn't "how do we stop someone from being tricked." It's "how do we secure something that already has root access and runs on vibes instead of logic."
Don't give it root access.
That answer hasn't changed since day one of LLMs, despite some of the things people are attempting to build these days: if you don't want to get in trouble, don't give LLMs access to anything that can cause actual harm, and don't give them autonomy.
Sure, that works today. But Meta is cutting 20% of its workforce. So is everyone else. The whole bet is that agents replace human work - and that only works if they can actually do things. Deploy, access databases, call APIs.
"Don't give it access" is like saying "don't connect to the internet" in 1995. The question isn't whether agents get these permissions. They will. The question is what happens when they do.
Let's see how well that works for them. Apparently Salesforce was a bit overly enthusiastic about layoffs and recently had to backtrack.
How do we expect everything to go all right if we give prod access to a pack of very smart dogs that know some key tricks? And then the same question again, once the humans actually leave the room?
My answer is simple: it just won't be all right this way. The failures will cost the management that drank too much Kool-Aid; maybe they already do (look at what happened at Cloudflare recently). Sanity will return, as a hard-won lesson.
If at this point you (where you may be a person or a company) still think relying on spicy autocomplete is a smart decision, I can't fucking help you, and you deserve whatever bad things happen to you.
This is akin to saying "we are fully committed to slapping together SQL queries directly from request data, but I wonder if it's risky?"
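The SQL analogy, made concrete with stdlib sqlite3 (table and input invented for illustration): the unsafe version splices request data into the query string; the safe version passes it as a parameter so the driver treats it as data, not SQL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

user_input = "alice' OR '1'='1"  # attacker-controlled request data

# Unsafe: the injected OR clause makes this match every row.
unsafe = f"SELECT * FROM users WHERE name = '{user_input}'"
leaked = conn.execute(unsafe).fetchall()       # → [('alice',)]

# Safe: parameterized query; the input is compared as a literal string.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()                                   # → []
```

Handing an agent broad autonomy over prod is the same category of decision: mixing untrusted input into a trusted execution path and hoping for the best.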
Part of security awareness is knowing when something is simply not worth the risks.