IMO, for real file systems, just give a view via cgroups/namespaces.
Implementing a database abstraction as a file system for an LLM feels like an extra layer of indirection for indirection's sake: just have the LLM write some views/queries/stored procs and give it sane access permissions.
LLMs are smart enough to use databases, email, etc without needing a FUSE layer to do so, and permissions/views/etc will keep it from doing or seeing stuff it shouldn't. You'll be keeping access and permissions where they belong, and not in a FUSE layer, and you won't have to maintain a weird abstraction that's annoying/hampered with licensing issues if you want to deploy it cross platform.
Also, your simplified FUSE abstraction will not map accurately to the state of the world unless you're really comprehensive with your implementation, and at that point, you might as well be interacting directly in order to handle that state accurately.
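To make the views-plus-permissions alternative concrete, here is a minimal sketch with SQLite (the emails table, its columns, and the file name are made up for illustration): the agent gets a restricted view and a read-only connection, so access control lives in the database rather than in a FUSE layer.

```python
import sqlite3

# Owner connection: defines the schema and a narrow view for the agent.
owner = sqlite3.connect("app.db")
owner.executescript("""
    CREATE TABLE IF NOT EXISTS emails (id INTEGER PRIMARY KEY, sender TEXT, subject TEXT, body TEXT, is_private INTEGER);
    CREATE VIEW IF NOT EXISTS agent_emails AS SELECT id, sender, subject FROM emails WHERE is_private = 0;
    INSERT INTO emails (sender, subject, body, is_private) VALUES
        ('alice@example.com', 'hello', 'public note', 0),
        ('bob@example.com', 'secret', 'do not show the agent', 1);
""")
owner.commit()

# Agent connection: read-only at the database level, so even a bad query cannot write.
agent = sqlite3.connect("file:app.db?mode=ro", uri=True)
print(agent.execute("SELECT sender, subject FROM agent_emails").fetchall())

try:
    agent.execute("DELETE FROM emails")
except sqlite3.OperationalError as e:
    print("write blocked:", e)   # "attempt to write a readonly database"
```

The same idea maps to Postgres with roles and GRANTs on views, where the separation is enforced server-side.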
Agree that too far-fetched mappings to files don’t really make sense. The email example is more illustrative than real-world inspired, though I thought it might be good to show how flexible the approach is.
I think there is a gap between “real file systems” and “non file things in a database” where mapping your application representation of things to a filesystem is useful. Basically all those platforms that let users upload files for different purposes and work with them (ex Google Drive, notion, etc). In those cases representing files to an agent via a filesystem is the more intuitive and powerful interface compared to some home grown tools that the model never saw during training.
See my own https://github.com/matthiasgoergens/git-snap-fs which lets you expose all branches and all tags and all commits and all everything in a git repository as a static directory tree. No need to git checkout anything: everything is already checked out.
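Not git-snap-fs's actual implementation, just a sketch of why no checkout is needed: git plumbing can serve any <commit>:<path> straight from the object store, which is what a read-only directory view over branches, tags, and commits reduces to. The repository path and README.md below are only examples.

```python
import subprocess

def read_at_commit(repo, commit, path):
    # `git show <rev>:<path>` returns the blob for that path at that revision.
    return subprocess.run(
        ["git", "-C", repo, "show", f"{commit}:{path}"],
        check=True, capture_output=True).stdout

def list_tree(repo, commit, path=""):
    # `git ls-tree` lists one directory level of the tree at that revision.
    spec = f"{commit}:{path}" if path else commit
    out = subprocess.run(
        ["git", "-C", repo, "ls-tree", "--name-only", spec],
        check=True, capture_output=True, text=True).stdout
    return out.splitlines()

print(list_tree(".", "HEAD"))
print(read_at_commit(".", "HEAD", "README.md")[:80])
```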
LLMs can handle Google drive perfectly well with a service account, including the Google drive specific quirks through the API. It could be helpful to expose via a file system rather than a custom API if you wanted a different interface than Google already provides, but this wouldn’t be driven by the limitations of the LLM.
In terms of ergonomics, I’d say a filesystem is more intuitive for an agent than the Google Drive API even if it can handle both. Hard to argue without building an eval set and evaluating both, though.
I’ve been doing this recently and for the basics agents had no problem with the API apart from the weird behaviour of shared drives needing a special flag to handle them. This could probably be mapped to a file system in a way that wouldn’t trip up an agent, but at the expense of losing the Google drive specific functionality. A trade off, not much better or worse per se, but with the added complexity of the FUSE layer.
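If it helps anyone hitting the same thing: a sketch with the google-api-python-client Drive v3 client. I believe the "special flag" for shared drives is the supportsAllDrives / includeItemsFromAllDrives pair; the service-account file path and query below are placeholders.

```python
from google.oauth2 import service_account            # pip install google-auth
from googleapiclient.discovery import build          # pip install google-api-python-client

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",                           # placeholder path
    scopes=["https://www.googleapis.com/auth/drive.readonly"],
)
drive = build("drive", "v3", credentials=creds)

# Without the *AllDrives flags, results are silently limited to "My Drive".
resp = drive.files().list(
    q="name contains 'report'",
    corpora="allDrives",
    includeItemsFromAllDrives=True,
    supportsAllDrives=True,
    fields="files(id, name, mimeType)",
).execute()

for f in resp.get("files", []):
    print(f["id"], f["name"])
```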
https://github.com/nalgeon/sqlean/blob/main/docs/fileio.md
fileio_read - Read file contents as a blob.
fileio_scan - Read a file line by line.
fileio_write - Write a blob to a file.
fileio_append - Append a string to a file.
fileio_mkdir - Create a directory.
fileio_symlink - Create a symlink.
fileio_ls - List files in a directory.
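A rough sketch of what calling these might look like from Python's sqlite3, assuming the sqlean fileio extension has been built or downloaded as ./fileio.so and your sqlite3 module allows extension loading:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.enable_load_extension(True)
conn.load_extension("./fileio")   # provides fileio_read, fileio_write, fileio_ls, ...
conn.enable_load_extension(False)

# Write a blob to a file, then read it back.
conn.execute("SELECT fileio_write('/tmp/agent-note.txt', 'hello from sqlite')")
print(conn.execute("SELECT fileio_read('/tmp/agent-note.txt')").fetchone()[0])

# fileio_ls is table-valued: list a directory like a tiny `ls`.
for (name,) in conn.execute("SELECT name FROM fileio_ls('/tmp') LIMIT 5"):
    print(name)
```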
If one only exposed SQLite query access and limited certain aspects of this SQLite extension depending on the use case, I feel like this might be a good alternative as well?
Edit: thinking more about it, I think it's actually for making SQLite interact with the filesystem, not for SQLite acting as a file system itself without too much overhead. I was thinking the SQLite database itself would store the data and then we could do these fileio operations on it, but that isn't possible from what I could gather.
Perhaps this might be more interesting: https://github.com/narumatt/sqlitefs. What I mean is something like a merge of fileio + sqlitefs, where things don't have to go through FUSE in general, if that makes sense.
Maybe I went a little tangential, but SQLite is really awesome.
Gemini 3 is very good in particular. Haven't had a serious attempt with GPT 5.2 yet, but I expect it to also be good (previous versions were surprising at times, e.g. used a recursive CTE instead of window functions). Sonnet 4.5 sucks. Haven't tried Opus for SQL at all.
We also attempted to implement exactly this, but it turned out to be really bad architecture.
The file system as an abstraction is actually not that good at all beyond the basic use-cases. Imagine you need to find an email. If you grep (via fuse) you will end up opening lots of files which will result in fetches to some API and it will be slow. You can optimise this, and caching works after the first fetch, but the method is slow. The alternative is to leverage the existing API, which will be a million times faster. Now you could also create some kind of special file via fuse that acts like a search, but it is weird and I don't think the models will do well with something so obscure.
We went as far as implementing this idea in Rust to really test it out, and ultimately it was ditched because, well, it sucked.
> The file system as an abstraction is actually not that good at all beyond the basic use-cases. Imagine you need to find an email.
Unrelated to FUSE and MCP[1] agents, this scenario reminded me of using nmh[0] as an email client. One of the biggest reasons nmh is appealing is that it lets you script email handling, such as with awk/find/grep/sed and friends.
0 - https://www.nongnu.org/nmh/
1 - https://en.wikipedia.org/wiki/Model_Context_Protocol
> If you grep (via fuse) you will end up opening lots of files which will result in fetches to some API and it will be slow.
This is a limitation of the POSIX filesystem interface. If there were a grep() system call, it could delegate searches to the filesystem, which could use full-text indices, run them on a remote server, etc.
A naive one, yes. You could do something a bit more interesting by having `mkdir searches/from:JoeBloggs/` or the like autopopulate in the background. I'm sure the GGP explored that though.
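That "mkdir a search" idea is small enough to sketch with fusepy. This is not what the GGP built; run_search is a stand-in for whatever IMAP/REST backend you actually have, and the layout is made up. Creating a directory under searches/ triggers a backend query, and the results appear as read-only files.

```python
import errno, stat, sys, time
from fuse import FUSE, FuseOSError, Operations   # pip install fusepy

def run_search(query):
    # Stand-in for the real backend call; returns {filename: body}.
    return {f"result-{i}.eml": f"Subject: match {i} for {query}\n".encode() for i in range(3)}

class SearchFS(Operations):
    def __init__(self):
        self.searches = {}   # query -> {filename: bytes}

    def _attrs(self, is_dir, size=0):
        mode = (stat.S_IFDIR | 0o755) if is_dir else (stat.S_IFREG | 0o444)
        return {"st_mode": mode, "st_nlink": 2 if is_dir else 1,
                "st_size": size, "st_mtime": time.time()}

    def getattr(self, path, fh=None):
        parts = [p for p in path.split("/") if p]
        if not parts or parts == ["searches"] or (len(parts) == 2 and parts[1] in self.searches):
            return self._attrs(is_dir=True)
        if len(parts) == 3 and parts[2] in self.searches.get(parts[1], {}):
            return self._attrs(is_dir=False, size=len(self.searches[parts[1]][parts[2]]))
        raise FuseOSError(errno.ENOENT)

    def readdir(self, path, fh):
        parts = [p for p in path.split("/") if p]
        if not parts:
            return [".", "..", "searches"]
        if parts == ["searches"]:
            return [".", ".."] + list(self.searches)
        return [".", ".."] + list(self.searches.get(parts[1], {}))

    def mkdir(self, path, mode):
        # `mkdir searches/from:JoeBloggs` registers and runs the query.
        parts = [p for p in path.split("/") if p]
        if len(parts) == 2 and parts[0] == "searches":
            self.searches[parts[1]] = run_search(parts[1])
        else:
            raise FuseOSError(errno.EPERM)

    def read(self, path, size, offset, fh):
        parts = [p for p in path.split("/") if p]
        return self.searches[parts[1]][parts[2]][offset:offset + size]

if __name__ == "__main__":
    FUSE(SearchFS(), sys.argv[1], foreground=True)
```

After mounting (python search_fs.py /mnt/mail), running `mkdir '/mnt/mail/searches/from:JoeBloggs'` and then listing that directory shows the populated results.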
I put together a spec for this where the entire LLM agent landscape adheres to the "Everything is a file" constraint. It uses the FUSE filesystem in the way described. I also created a possible limitations document to describe some areas where I thought it might be overengineered or locking in technical debt.
https://github.com/jimmc414/AgentOS
Agents are users on a Unix-based computer that is capable of, and indeed was designed for, multi-user collaboration.
Why not go for the simple solution?
Do you think most people under the age of 30 remember you can share a single computer between multiple users? When there was a single "home computer" or "PC" in the home, you learned about users and different rights. Unless you were a user back in those days or you've tinkered with any admin work, you wouldn't know this in 2026.
I've been getting into FUSE a bit lately, as I stole an idea a friend had for adding CoW features to an existing non-CoW filesystem, so I've been hacking, on and off, on a FUSE driver for ext4 to do that.
To learn FUSE, however, I started just making everything into filesystems that I could mount. I wrote a FUSE driver for Cassandra, I wrote a FUSE driver for CouchDB, I wrote a FUSE driver for a thing that just wrote JSON files with Base64 encoding.
None of these performed very well and I'm sort of embarrassed at how terrible the code is hence why I haven't published them (and they were also just learning projects), but I did find FUSE to be extremely fun and easy to write against. I encourage everyone to play with it.
FUSE makes me think that the Plan 9 people were on to something. Filesystems actually can be a really nice abstraction; sort of surreal that I could make an application so accessible that I could seriously have it directly linked with Vim or something.
I feel like building a FUSE driver would be a pretty interesting way to provide a "library" for a service I write. I have no idea how I'd pitch this to a boss to pay me to do it, but pretending that I could, I could see it being pretty interesting to do a message broker or something that worked entirely by "writing a file to a folder". That way you could easily use that broker from basically anything that has file IO, even something like bash.
I always have a dozen projects going on concurrently, so maybe I should add that one to the queue.
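The "message broker as a folder" idea above is small enough to sketch. The directory names are made up and a real broker would need locking and retries, but anything that can write a file, including bash, can publish:

```python
import pathlib, time

SPOOL = pathlib.Path("queue/incoming")
DONE = pathlib.Path("queue/processed")

def handle(message: str):
    print("got message:", message.strip())

def consume_forever(poll_seconds=1.0):
    SPOOL.mkdir(parents=True, exist_ok=True)
    DONE.mkdir(parents=True, exist_ok=True)
    while True:
        # Process in rough arrival order; moving the file out of the spool is the "ack".
        for msg in sorted(SPOOL.glob("*"), key=lambda p: p.stat().st_mtime):
            if msg.is_file():
                handle(msg.read_text())
                msg.rename(DONE / msg.name)
        time.sleep(poll_seconds)

if __name__ == "__main__":
    consume_forever()
```

Publishing from bash is then just `echo "hello" > queue/incoming/$(date +%s%N).msg`.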
I built the original version in Python for a job years ago. But the version above is almost entirely vibe-coded in Rust in a lazy afternoon for fun.
However, I disagree that the filesystem is the right abstraction in general. It works for git, because git is essentially structured like a filesystem already.
More generally, filesystems are roughly equivalent to hierarchical databases, or at most graph databases. And while you can make that work, many collections of data are actually better organised and accessed by other means. See https://www.seas.upenn.edu/~zives/03f/cis550/codd.pdf for a particularly interesting and useful model that has found widespread application and success.
Yeah I'm not saying that they're necessarily great in general, just that there are certain applications that map pretty well, and for those it's a pretty cool abstraction because it allows virtually anything to interface with it.
Also, looks like my message queue idea has already been done: https://github.com/pehrs/kafkafs
No new ideas under the sun I suppose.
I agree with most people who commented. This looks like an abstraction without a clear purpose, which is not a good thing. Particularly, using fuse as a wrapper for a REST API is ineffective and redundant, since an LLM can work with it more effectively using curl provided an API spec in any format.
Or just implement something like storage-combinators [1][2].
Basically an abstraction that is filesystem-like, but doesn't require a filesystem. Though you can both export storage-combinators as filesystem and, of course, also access filesystems via storage-combinators.
[1] https://dl.acm.org/doi/10.1145/3359591.3359729
[2] https://2019.splashcon.org/details/splash-2019-Onward-papers...
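A toy sketch of the idea in Python (the paper defines a richer REST-style interface; the class and method names here are made up): stores expose a uniform get/put on path-like references, and a combinator is just a store that wraps other stores.

```python
class DictStore:
    """Leaf store: keeps values in memory, keyed by a path-like reference."""
    def __init__(self):
        self.data = {}
    def get(self, ref):
        return self.data[ref]
    def put(self, ref, value):
        self.data[ref] = value

class CachingStore:
    """Combinator: serve repeated gets from a fast store, write through to the source."""
    def __init__(self, cache, source):
        self.cache, self.source = cache, source
    def get(self, ref):
        try:
            return self.cache.get(ref)
        except KeyError:
            value = self.source.get(ref)
            self.cache.put(ref, value)
            return value
    def put(self, ref, value):
        self.source.put(ref, value)
        self.cache.put(ref, value)

# Filesystem-like usage without a filesystem: references just look like paths.
store = CachingStore(cache=DictStore(), source=DictStore())
store.put("/notes/todo.txt", "buy milk")
print(store.get("/notes/todo.txt"))
```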
It never left :p (I think that there are still active forks of Plan9 and Plan9 itself has definitely influenced some linux features or so I have heard)
Yes, there is the great 9front and the useful plan9port. I run 9front on an old thinkpad, but I plan on doing a little more with it; having a dedicated CPU server, storage server, etc. will be the next step.
Maybe the most mainstream incarnation is its use in the Windows Subsystem for Linux (WSL).
See my own https://github.com/matthiasgoergens/git-snap-fs which lets you expose all branches and all tags and all commits and all everything in a git repository as a static directory tree. No need to git checkout anything: everything is already checked out.
For ZeroFS [0], I went an alternate route with NFS/9P. I am surprised that it’s not more common, as this approach has various advantages [1] while being much more workable than FUSE.
[0] https://github.com/Barre/ZeroFS
[1] https://github.com/Barre/ZeroFS?tab=readme-ov-file#why-nfs-a...
Interesting! The network-first point makes a lot of sense, especially because you will most likely not access your actual datastore within the process running in the sandbox, and instead just call some server that handles db access, access control, etc.
> My prediction is that one of the many sandbox providers will come up with a nice API on top of this that lets you do something like ... No worrying about FUSE, the sandbox, where things are executed, etc. This will be a huge differentiator and make virtual filesystems easily accessible to everyone.
I've done exactly that with Filestash [1] using its virtual filesystem plugin [2], which exposes arbitrary systems as a filesystem. It turns out the filesystem abstraction works extremely well even for systems that are not filesystems at all. There are connectors for literally every possible storage (SFTP, S3, GDrive, Dropbox, FTP, Sharepoint, GCP, Azure Cloud, IPFS....), but also things like MySQL and Postgres (where the first-level folders represent the list of databases, the second level is the tables that belong to a database, and each row is represented as a form file generated from the schema), LDAP (where tree nodes are represented as folders and leaves are form files), ....
The whole filesystem is available to agents via MCP [3] and has been published to the OpenAI marketplace since around Christmas, currently pending review.
ref:
[1]: https://github.com/mickael-kerjean/filestash
[2]: https://www.filestash.app/docs/guide/virtual-filesystem.html
[3]: https://www.filestash.app/docs/guide/mcp-gateway.html https://github.com/mickael-kerjean/filestash/tree/master/ser...
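The databases-as-folders mapping described above is easy to picture with a small resolver. This is not Filestash's plugin API, just a self-contained sketch against an in-memory SQLite database of "tables as folders, rows as form files":

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT);
    INSERT INTO users (name, email) VALUES ('Ada', 'ada@example.com'), ('Linus', 'linus@example.com');
""")

def resolve(path):
    # No input validation here; it's only a sketch of the path-to-query mapping.
    parts = [p for p in path.strip("/").split("/") if p]
    if not parts:        # "/" -> tables as folders
        return [r[0] for r in conn.execute("SELECT name FROM sqlite_master WHERE type='table'")]
    if len(parts) == 1:  # "/users" -> rows as files named by rowid
        return [str(r[0]) for r in conn.execute(f"SELECT rowid FROM {parts[0]}")]
    # "/users/1" -> render the row as a small "form file"
    row = conn.execute(f"SELECT * FROM {parts[0]} WHERE rowid = ?", (parts[1],)).fetchone()
    cols = [c[1] for c in conn.execute(f"PRAGMA table_info({parts[0]})")]
    return "\n".join(f"{c}: {v}" for c, v in zip(cols, row))

print(resolve("/"))         # ['users']
print(resolve("/users"))    # ['1', '2']
print(resolve("/users/1"))  # id: 1 / name: Ada / email: ada@example.com
```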
- agents tend to need (already have) a filesystem anyway to be useful (not technically required but generally true, they’re already running somewhere with a filesystem)
- LLMs have a ton of CLI/filesystem stuff in their training data, while MCP is still pretty new (FUSE is old and boring)
- MCP tends to bloat context (not necessarily true but generally true)
UNIX philosophy is really compelling (more so than MCP being bad). If you can turn your context into files, agents likely “just work” for your use case.
Yes, it should be able to generically use a filesystem, but there has to be a better way to find an email than grepping through each email as a file.
So, I see merit in the idea in theory, I’m just skeptical in practice.
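In the simplest case, "turn your context into files" is just dumping whatever the agent needs into a directory of small text files so grep/find/cat work with no extra tooling; the records and paths below are made-up stand-ins for a real API or database.

```python
import json, pathlib

records = [
    {"id": 1, "from": "alice@example.com", "subject": "Q3 invoice"},
    {"id": 2, "from": "bob@example.com", "subject": "standup notes"},
]

ctx = pathlib.Path("context/emails")
ctx.mkdir(parents=True, exist_ok=True)
for r in records:
    # One record per file; filenames are stable so the agent can re-read them.
    (ctx / f"{r['id']}.json").write_text(json.dumps(r, indent=2))

# Now `grep -rl alice context/` or `cat context/emails/1.json` just work.
```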
I am so sick of the ‘sandboxed’ AI-infra meme. A container is not a sandbox. A chroot is not a sandbox. A VM is also not a sandbox. A filesystem is also also not a sandbox. You can sandbox an application, you can run an application in a secure context, but this is not a secure context the author is describing, firstly, and secondly they haven’t described any techniques for sandboxing unless that part of the page didn’t load for me somehow.
Didn’t mean to say this is a sandbox; it certainly isn’t. This is just an illustration of how to bridge the gap and make things available in a file system from the source of truth of your application.
There is tons more complexity to sandboxing, I agree!
I recently had a question about what AI sandboxes use; I think Modal uses gVisor under the hood, and I think others use Firecracker or generally favour it as well.
Firecracker kind of ends up being in the VM category, and I would place gVisor in a similar category under VMs too.
So in my opinion, VMs are sandboxes.
Of course there is also libriscv (https://github.com/libriscv/libriscv), which is a sandbox ("the fastest RISC-V sandbox").
There is also https://github.com/Zouuup/landrun: "Run any Linux process in a secure, unprivileged sandbox using Landlock. Think firejail, but lightweight, user-friendly, and baked into the kernel."
Your mileage may vary, but I consider Firecracker to be the usual AI sandbox. Other times they abstract over a cloud provider and open up servers there or something similar (I feel E2B does this on top of GCP).
A lot of these "ai sandbox" conversations target code that is already running in a public cloud. Running firecracker doesn't give you magical isolation properties vs running an application in ec2 - it's the same boundary. If you're trying to compare to running multi-tenant workloads in containers on the same vm vs different tenants on different vms - sure that's an improvement but no one said you had to run containers to begin with.
Furthermore, running lots of random 3rd party programs in the same instance, be it a container, an EC2 VM, or a Firecracker VM, has the same issues - it is inherently totally unsafe. If you want to "sandbox" something you need to detail what exactly you are wanting to isolate.
A lot of people might suggest not being able to write to the filesystem, read env vars, or talk over the network but these are table stakes for a lot of the workloads that people want to "isolate" to begin with.
So not only is there this incorrect view that you are isolating anything at all, but I'm not convinced that the most important things, like being able to run arbitrary 3rd party programs, is even being considered.
To me ‘a sandbox’ is a secured context, which is specific to whatever is in it. It is not a generic thing unless we are literally referring to a real-world box with sand in it, and I’ve kinda hit the breaking point with the term in tech. ‘A sandboxed application’ to me is an instrumented and controlled deployment of an application that can only make the sys/network/ipc calls the deployer expects and appreciates, which are then themselves filtered and monitored. A sandboxed deployment of an application? Sure. That’s a thing to me. But each application needs different privileges and does different things. Sandboxing an application may involve lots of different technologies. Eg the way I think about it, things like seccomp, apparmor, et al also aren’t themselves ‘sandboxes’, they’re enforcement mechanisms which rely on knowing and configuring them to monitor and enforce what the app should and shouldn’t do. A lot of things that assist with sandboxing may also be combined in different ways to get to a more secure environment, in which the app is sandboxed.
https://en.wikipedia.org/wiki/Sandbox_(computer_security)
Notably, a sandbox exists to separate one thing from other things. Limiting/filtering/monitoring what the sandboxed thing can do are often components of that, but the underlying premise is about separation.
Containers, VMs, etc. are 100% examples of sandboxing based on the actual industry definition of the term.
I’m saying I don’t think sandbox is a noun, I think it’s a verb. I also don’t get why this is such an issue to you? A container simply is not a sandbox by itself. The collection of technologies that can sandbox can be used to sandbox a container, or an app running in a container, or whatever you want. A door lock isn’t security, a door lock is used to lock your door, which gives you part of a security strategy. Same principle.
He's obviously right about the noun/verb thing. You can just look this up on Google Scholar. I think you're sort of broadly wrong about how fussy the definition of a "sandbox" is, but you're at least saying something coherent there, even if it's an idiosyncratic definition.
You are incorrect. I already gave you a link above with a definition of sandbox, the noun, and a list of example technologies that it applies to.
If you’re going to get fired up about people you feel are misusing this term, and then ignore citations about its actual definition, I think the ball’s in your court to back up your claim.
I mean… I’m flattered you think I’m making some kind of statement here, but there is no claim. I literally stated an opinion I hold in a comment on HN; I didn’t write you a thesis. I then explained the details of that opinion further.
I’ve asked what background leads to your conclusion, because if you have eg written some sandboxing tooling, I’d be curious to give it a look. Always up to learn things, and I am more than a little baffled by how upset the comments I’m replying to here sound. You’ve linked me to Wikipedia, and another commenter asserts I can ‘just look it up on google scholar’. That seems pretty dismissive and reductive overall.
You can test it here ==> https://ainiro.io/natural-language-api
It opens up absolutely bonkers capabilities.