For agents, any direct access to execution tools (code, shell, file system, browser, external services, etc.) dramatically increases the vulnerability and error surface, especially when multiple agents interact with each other.
This makes it even more crucial to have the most seamless ability possible to reverse actions and restore previous states.
The risk of an agent's actions becoming irreversible at the system level must be minimized.
I wonder how much all this can impact (and it certainly will impact) the real world, which will be increasingly robotized and automated: public services, finance, hospitals, schools, public administrations, military sectors (!), etc.
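One minimal pattern for the reversibility described above is to checkpoint state before each tool call and roll back on failure. A Python sketch (the `Checkpoint` class and file names are illustrative, not from any real agent framework):

```python
import os
import shutil
import tempfile
from pathlib import Path

class Checkpoint:
    """Snapshot files before an agent acts so the action can be undone."""
    def __init__(self):
        self._backups = {}  # original path -> backup copy

    def snapshot(self, path: str):
        fd, backup = tempfile.mkstemp()
        os.close(fd)
        shutil.copy2(path, backup)
        self._backups[path] = backup

    def rollback(self):
        for original, backup in self._backups.items():
            shutil.move(backup, original)
        self._backups.clear()

# Usage: wrap a risky agent action in a snapshot/rollback pair.
target = Path("config.txt")
target.write_text("original")

cp = Checkpoint()
cp.snapshot(str(target))
target.write_text("agent overwrote this")  # the would-be irreversible action
cp.rollback()
print(target.read_text())  # -> original
```

Real systems would extend this to database transactions, filesystem snapshots, or container layers, but the principle is the same: no destructive action without a restore point.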
I don't understand why LLMs get a free pass when all of the existing businesses have to play by the rules.
Businesses have to comply with IP, privacy, HIPAA, security and safety laws to name just a few.
NONE of these apply to LLMs.
Of course I can now build and deploy an app to hospitals in a weekend since I can circumvent all of the difficult parts using the magic LLMs. If asked why, the response is "It's AI!"
HIPAA was introduced to support the massive expansion of the healthcare market (privacy accountability is a very minor aspect of HIPAA). In the name of profit, amidst the chaos, why not try to eschew what was once politically necessary? This move probably hurts humanity more than it benefits it, but that was the case with the healthcare market in the first place. I wonder what will become politically necessary around AI. Probably not much.
I'd like to see sources for your claims. You make it sound like privacy and protection from harm were just token throw-ins to hide a mostly for-profit certification, which doesn't sound very convincing.
As someone working in the cybersecurity space who recently obtained my CISSP designation, I am left wondering when the pedagogy of my field will expand to include a separate domain dedicated to AI agent safety and security best practices.
It really does feel like we are way behind in how we train people in cyber compared to the pace of development of agentic AI, robotics, etc.
In this problem domain, I believe humanity is still in a very early stage. What we can do is treat the agent and its operating environment as a "black box" and audit all incoming and outgoing network request traffic.
This approach is similar to DLP (data loss prevention) strategies in enterprise-level security. Although we cannot guarantee that every single network request is secure, we can probabilistically improve safety by adjusting network defense rules and conducting post-event audits on traffic flow.
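A toy version of this DLP-style egress audit: every outgoing request is checked against host and content rules and logged for post-event review. The hosts and patterns below are made up for illustration:

```python
import re
import time

# Illustrative egress rules: block non-allowlisted hosts and
# known-sensitive content patterns; log everything for audit.
BLOCK_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # SSN-like number
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),   # credential-like string
]
ALLOWED_HOSTS = {"api.internal.example", "pypi.org"}

audit_log = []

def inspect_request(host: str, body: str) -> bool:
    """Return True if the outgoing request may proceed."""
    verdict = "allow"
    if host not in ALLOWED_HOSTS:
        verdict = "deny:host"
    elif any(p.search(body) for p in BLOCK_PATTERNS):
        verdict = "deny:content"
    audit_log.append({"ts": time.time(), "host": host, "verdict": verdict})
    return verdict == "allow"

print(inspect_request("pypi.org", "pip install requests"))  # True
print(inspect_request("evil.example", "hello"))             # False
print(inspect_request("pypi.org", "api_key=sk-123"))        # False
```

As the comment notes, this is probabilistic: pattern rules will miss some exfiltration, but the audit log enables post-event forensics even when a request slips through.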
This is exactly why I built Safebots to prevent problems with agents. This article shows how it can address every security issue with agents that came up in the study:
I don't see how Safebots protects against prompt injection when you have it pull a webpage, package, or the like. E.g., you search for snickerdoodles, it finds snickerdoodles.xyz and loads the page, and the page's meta tags contain the prompt injection. Since it's the first time the document has loaded, it gets hashed, and only the bad version is allowed from then on. No?
Once you give agents memory, tools, and permissions, failures stop being bad outputs and start becoming security and operational problems.
> unauthorized compliance with non-owners, disclosure of sensitive information, execution of destructive system-level actions, denial-of-service conditions, uncontrolled resource consumption, identity spoofing vulnerabilities, cross-agent propagation of unsafe practices, and partial system takeover
Then you slowly reveal they're all humans.
I mean, all of us in the space already know this, but I suppose it's important to showcase the problems of systems of agents.
https://community.safebots.ai/t/researchers-gave-ai-agents-e...
your IQ < Model IQ - god bless you.