If you're talking to a 100KB SQLite database file this kind of thing is likely unnecessary - just opening and closing a connection for each query is probably fine.
If you're querying a multi-GB SQLite database there are things like per-connection caches that may benefit from a connection pool.
> What is an asyncio SQLite connection anyways? Isn’t it just a regular one that gets hucked into a separate thread?
Basically yes - aiosqlite works by opening each connection in a dedicated thread, sending async queries to it, and delivering each response back through a Future:

https://github.com/omnilib/aiosqlite/blob/895fd9183b43cecce8...
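The pattern, reduced to a toy sketch (this is not aiosqlite's actual code, just the shape of it):

    import asyncio
    import queue
    import sqlite3
    import threading

    class ThreadedConnection:
        # One dedicated thread owns the SQLite connection; coroutines
        # submit jobs and await a Future for the result.
        def __init__(self, path):
            self._jobs = queue.Queue()
            self._conn = sqlite3.connect(path, check_same_thread=False)
            threading.Thread(target=self._worker, daemon=True).start()

        def _worker(self):
            # Every SQLite call happens on this one thread.
            while True:
                sql, params, loop, fut = self._jobs.get()
                try:
                    rows = self._conn.execute(sql, params).fetchall()
                    loop.call_soon_threadsafe(fut.set_result, rows)
                except Exception as exc:
                    loop.call_soon_threadsafe(fut.set_exception, exc)

        async def execute(self, sql, params=()):
            loop = asyncio.get_running_loop()
            fut = loop.create_future()
            self._jobs.put((sql, params, loop, fut))
            return await fut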
That's even crazier - so you're using asyncio because you have a ton of slow network-bound stuff - but for your database access you're running every SQLite connection in its own thread and just managing those threads via the asyncio event loop?
Python's asyncio is single-threaded. If you didn't send them to a different thread, the entire event loop would block, and the program would degenerate into a fully synchronous single-threaded one with extra overhead.
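Concretely: a synchronous query run on the loop stalls every other coroutine, so you hand it to a worker thread - which is all the dedicated-thread approach amounts to. A minimal sketch (the file and table names here are made up):

    import asyncio
    import sqlite3

    def slow_query():
        # Synchronous - would freeze the event loop if called directly.
        conn = sqlite3.connect("app.db")
        try:
            return conn.execute("SELECT count(*) FROM big_table").fetchone()
        finally:
            conn.close()

    async def main():
        # Runs in a worker thread; other coroutines keep making progress.
        print(await asyncio.to_thread(slow_query))

    asyncio.run(main())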
In which case SQLite is probably the wrong tool for the job, and you should be using Postgres or MySQL, which are actually designed from the ground up for lots of concurrent connections.
SQLite is amazing. I love SQLite. But I love it for single-user single-machine scenarios. Not multi-user. Not over a network.

Why? If all it's missing is an async connection pool to make it a good tool for more jobs, what's the problem with just creating one?
I didn't say anything about concurrent access. SQLite in WAL mode handles that fine these days for dozens of concurrent readers/writers (OK, only one writer gets to write at a time, but if your writes queue for 1-2ms, who cares?). If you're dealing with hundreds or thousands of clients over a network then yeah, use a server-based database engine.
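For anyone following along: WAL mode is a one-time, per-database-file setting, and a busy_timeout makes writers queue for the lock instead of erroring out immediately. A minimal sketch:

    import sqlite3

    conn = sqlite3.connect("app.db")
    # Persistent once set: readers stop blocking the writer and vice versa.
    conn.execute("PRAGMA journal_mode=WAL")
    # Wait up to 2000ms for the single write lock instead of raising
    # "database is locked" straight away.
    conn.execute("PRAGMA busy_timeout=2000")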
I've always had trouble getting multiple processes write access to the SQLite file. For example, if a Node.js backend is working with the file and I try to access it with a different tool (Adminer, for example), it fails ("file in use" or something like that). Should that work? I don't know if I'm doing something wrong, but this has been my experience across multiple projects.
Your throughput will be much worse than with a single process, but it's possible, and sometimes convenient. Maybe something in your stack is trying to hold open a writable connection in both processes?
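Multiple writer processes should work as long as each keeps its write transactions short and is willing to wait on the lock. A sketch you could run in two terminals at once (assuming WAL mode and a shared file "shared.db" - both hypothetical here):

    import os
    import sqlite3
    import time

    conn = sqlite3.connect("shared.db", timeout=5.0)  # wait up to 5s on locks
    conn.execute("PRAGMA journal_mode=WAL")
    conn.execute("CREATE TABLE IF NOT EXISTS log (pid INTEGER, t REAL)")

    for _ in range(100):
        with conn:  # short transaction: BEGIN ... COMMIT, then release the lock
            conn.execute("INSERT INTO log VALUES (?, ?)",
                         (os.getpid(), time.time()))
        time.sleep(0.01)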
If you’re building Python async apps (FastAPI, background jobs, etc.) with SQLite, you’ll eventually hit two issues:
- Opening/closing connections is fast, but not free—overhead adds up under load
- SQLite writes are globally locked
aiosqlitepool is a tiny library that adds connection pooling for any asyncio SQLite driver (like aiosqlite):
- It avoids repeated database connection setup (syscalls, memory allocation) and teardown (syscalls, deallocation) by reusing long-lived connections
- Long-lived connections keep SQLite's in-memory page cache "hot." This serves frequently requested data directly from memory, speeding up repetitive queries and reducing I/O operations
- Allows your application to process significantly more database queries per second under heavy load

Enjoy!
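The core idea is library-agnostic: a pool is just a queue of already-open connections you borrow and return. A hand-rolled sketch on top of aiosqlite (not aiosqlitepool's actual API, which may differ):

    import asyncio
    import aiosqlite

    class Pool:
        # Borrow a long-lived connection from a queue, use it, put it back.
        # The connections - and their page caches - survive across requests.
        def __init__(self, path, size=4):
            self._path, self._size = path, size
            self._q = asyncio.Queue()

        async def open(self):
            for _ in range(self._size):
                conn = await aiosqlite.connect(self._path)
                await conn.execute("PRAGMA journal_mode=WAL")
                await self._q.put(conn)

        async def fetchall(self, sql, params=()):
            conn = await self._q.get()       # blocks if all connections are busy
            try:
                cursor = await conn.execute(sql, params)
                return await cursor.fetchall()
            finally:
                await self._q.put(conn)      # hand it back for reuse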
I've been thinking about trying pre-serialization of SQLite commands to enable a single writer against a singleton SQLiteConnection, using something like Channel<T> or another high-performance MPSC abstraction. Most SQLite providers have an internal mutex that handles serialization, but if we can avoid all contention on this mutex things might go faster. Opening and closing SQLite connections is expensive; if we can re-use the same instance, things go a lot faster.
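In Python the same single-writer idea can be sketched with an asyncio.Queue standing in for Channel<T>: many producers enqueue writes, one task owns the connection, so nothing ever contends on the driver's mutex. A sketch, not a benchmarked implementation:

    import asyncio
    import aiosqlite

    async def writer_task(path, jobs: asyncio.Queue):
        # The only task that ever touches the write connection.
        conn = await aiosqlite.connect(path)
        await conn.execute("PRAGMA journal_mode=WAL")
        while True:
            sql, params, done = await jobs.get()
            await conn.execute(sql, params)
            await conn.commit()
            done.set_result(None)  # tell the producer its write is committed

    async def write(jobs: asyncio.Queue, sql, params=()):
        done = asyncio.get_running_loop().create_future()
        await jobs.put((sql, params, done))
        await done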
FYI, I once had a few long-lived connections with WAL, and the WAL file just exploded in size. Turns out SQLite won't truncate the WAL while there are open connections.
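If you run into that, you can checkpoint explicitly (it still can't truncate past an open read transaction) or cap the file's size after checkpoints; a sketch:

    import sqlite3

    conn = sqlite3.connect("app.db")
    # Checkpoint the WAL and truncate it to zero bytes if possible.
    # Returns (busy, wal_pages, checkpointed_pages); busy=1 means some
    # reader or writer blocked a complete checkpoint.
    print(conn.execute("PRAGMA wal_checkpoint(TRUNCATE)").fetchone())
    # Or limit how large the WAL file may stay after a checkpoint:
    conn.execute("PRAGMA journal_size_limit=67108864")  # 64 MB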
SQLite does not even do network I/O.
How does sharing a connection (and transaction scope) in an asyncio environment even work? Won’t you still need a connection per asyncio context?
Does sqlite_open really take long compared to the inevitable contention for the write lock you’ll see when you have many concurrent contexts?
Does sqlite_open even register in comparison with the overhead of the python interpreter?
What is an asyncio SQLite connection anyways? Isn’t it just a regular one that gets hucked into a separate thread?

(I don't know anything about that project and this isn't meant as a criticism of its design or a challenge - cos I'd probably lose :-) )
Then writers queue up, while readers are unimpeded.