I needed SQLite as a central system DB but couldn't live with single-writer. So I built a facade that can target SQLite, Postgres, or Turso's Rust rewrite through one API.
The useful part: mirroring. The facade writes to two backends simultaneously so I can diff SQLite vs Turso behavior and catch divergences before production. When something differs, I either file upstream or add an equalizing shim.
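A dual-write facade of that shape could be sketched roughly like this (hypothetical trait and names for illustration, not the commenter's actual code):

```rust
// Every statement runs against both backends; results are compared and
// any divergence is reported before it can reach production unnoticed.
trait Backend {
    fn execute(&mut self, sql: &str) -> Result<Vec<String>, String>;
}

struct Mirror<P: Backend, S: Backend> {
    primary: P, // e.g. native SQLite
    shadow: S,  // e.g. the Rust rewrite
}

impl<P: Backend, S: Backend> Mirror<P, S> {
    fn execute(&mut self, sql: &str) -> Result<Vec<String>, String> {
        let a = self.primary.execute(sql)?;
        let b = self.shadow.execute(sql)?;
        if a != b {
            eprintln!("divergence on {sql:?}: {a:?} vs {b:?}");
        }
        Ok(a) // the primary's answer stays authoritative
    }
}
```

The design choice worth noting: the shadow backend never affects the returned result, so a divergence costs you a log line and an upstream bug report, not an outage.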
Concurrent writes already working is a reasonable definition of success. It's why I'm using it.
How do you want to define success for this project relative to SQLite? Because they already have concurrent writes working for their rust implementation. It's currently marked experimental, but it does already work. And for a lot of people, that's all they want or need.
> IMHO breaking free of SQLite's proprietary test suite is a bigger driver than C vs Rust.
I don't understand this claim, given the breadth and depth of SQLite's public domain TCL Tests. Can someone explain to me how this isn't pure FUD?
"There are 51445 distinct test cases, but many of the test cases are parameterized and run multiple times (with different parameters) so that on a full test run millions of separate tests are performed." - https://sqlite.org/testing.html
SQLite's test suite is infamously gigantic. It has two parts: the public TCL tests you're referencing, and a much larger proprietary test suite that's 100x bigger and covers all the edge cases that actually matter in production. The public tests are tiny compared to what SQLite actually runs internally.
The test suite that the actual SQLite developers use to develop SQLite is not open-source. 51445 open-source test cases is a big number but doesn't really mean much, particularly given that evidently the SQLite developers themselves don't consider it enough to provide adequate coverage.
That’s like if I gave you half the dictionary and then said it’s ironic that if there really weren’t any letters after “M” you wouldn’t be complaining.
I think it's fair to say they tried using SQLite but apparently had to bail out. Their use case is a distributed DBaaS with local-first semantics, they started out with SQLite and only now seem to be pivoting to "SQLite-compatible".
Building off of that into a SQLite-compatible DB doesn't seem to me as trying to piggyback on the brand. They have no other option as their product was SQLite to begin with.
That doesn't seem very fair. It's still beta and clearly far from finished. And they do call out the compromises - they have a whole page about how they are not yet fully compatible:
Maybe. It's hard to know what kind of issues that test suite covers. If memory safety is the main source of instability for the C implementation, then the Rust implementation won't be too affected by losing the test suite. Same if it focuses a lot on compatibility with niche embedded platforms and different OSes, which Turso won't care about losing.
"Stability" is a word that means different things for different use cases.
But the other one is not available to most and SQLite itself is "open-source" not "open-contributions" so extending SQLite is pretty much impossible at scale:
- no way to merge upstream
- no way to run the full test suite to be sure everything is tiptop
Not likely. The alternative was for them to modify SQLite without the test suite and no obvious indication of what they would need to do to try to fill in the gaps. Modifying SQLite with its full test suite would be the best choice, of course, but one that is apparently[1] not on the table for them. Since they have to reimagine the test suite either way, they believe they can do a better job if the tests are written alongside a new codebase.
And I expect they are right. Trying to test a codebase after the fact never goes well.
[1] With the kind of investment backing they have you'd think they'd be able to reach some kind of licensing deal, but who knows.
I don't get this. In their own rust implementation they have to write and use their own test and they still don't have access to the proprietary sqlite tests. So their implementation will necessarily be whatever they implement + whatever passes their tests. Same as it would be if they forked sqlite in C. (Plus they would have the open source tests). Am I missing something?
You are missing that HN accounts needlessly overthink everything, perhaps?
Otherwise, I doubt it. They have to write the tests again no matter what. Given that, there is no downside to reimplementing it while they are at it. All while there is a big upside to doing that: Trying to test something after the implementation is already written never ends well.
That does not guarantee that their approach will succeed. It is a hard problem no matter how you slice it. But trying to reverse engineer the tests for the C version, now that all knowledge of what went into them in the first place is lost, is all but guaranteed to fail. Testing after the fact never ends well. Rewriting the implementation and tests in parallel increases the chances of success.
Of all the projects which may benefit from a rewrite or re-imagining in a memory-safe language, I'm really puzzled why it's heavily-tested, near-universally-deployed software such as sudo (use oBSD doas instead?), the coreutils, and sqlite.
Doas supports a subset of sudo functionality by design. Your comment is exactly what I said when I first heard about the rust linux utils thing. The best they can do is have new bugs.
I definitely wouldn't be surprised by bugs and/or compatibility issues over time. Especially in the near term. I'm mixed, but somewhat enthusiastic on Turso's efforts to create client-server options and replication.
In the past I've reached for FirebirdSQL when I needed local + external databases and wanted to limit the technology spread... In the use case, as long as transactions synched up even once a week it was enough for the disparate remote connections/systems. I'm honestly surprised it isn't used more. That said, SQLite is more universal and lighter overall.
Building a production app on Turso now. No bugs or compatibility issues so far. The sqlite API isn't fully implemented yet, so I wrote a declarative facade that backfills the missing implementations and writes in parallel to both Turso and native sqlite: that gives me integrity checking and fallback while the implementation matures.
It looks like some parts are open source and others not. Does anyone know more about the backstory? (It looks like one is a custom program that generates fuzz tests. Do they sell it to other SQL engines?)
The CoRecursive episode with SQLite creator D. Richard Hipp goes through it. I've linked to the part of the transcript that covers it, the key quote being:
> We still maintain the first one, the TCL tests. They’re still maintained. They’re still out there in the public. They’re part of the source tree. Anybody can download the source code and run my test and run all those. They don’t provide 100% test coverage but they do test all the features very thoroughly. The 100% MCD tests, that’s called TH3. That’s proprietary. I had the idea that we would sell those tests to avionics manufacturers and make money that way. We’ve sold exactly zero copies of that so that didn’t really work out. It did work out really well for us in that it keeps our product really solid and it enables us to turn around new features and new bug fixes very fast.
My law of headlines is, "don't take them too seriously, don't develop too many expectations about the article, skim the article (or the comments) to know what it is about and whether it is worth your time".
Taking feature lists and plans at face value is offensively shallow; the typical Rust fan arrogance pattern can be an explanation (if the Rust rewrite is "better", it doesn't have to be compatible with the rest of the world who uses the actual C SQLite).
The thing that worries me the most about Turso is that rather than the small, stable team running SQLite, Turso is a VC backed startup trying to capitalize on the AI boom. I can easily see how SQLite's development is sustainable, but not Turso's. They're currently trying to grow their userbase as quickly as possible with their free open source offering, but when they have investors breathing down their necks asking about how they're going to get 100x returns I'm not sure how long that'll last. VCs generally expect companies they invest in to grow to $100 million in revenue in 5-10 years. If your use of their technology doesn't help them get there, you should expect to be rugpulled at some point.
They do have a test suite that's private which I understand to be more about testing for different hardware - they sell access to that for companies that want SQLite to work on their custom embedded hardware, details here: https://sqlite.org/th3.html
> SQLite Test Harness #3 (hereafter "TH3") is one of three test harnesses used for testing SQLite.
> 2) They have a paid cloud option to drive income from:
I’ve been confused by this for a while. What is it competing with? Surely not SQLite, being client server defeats all the latency benefits. I feel it would be considered as an alternative to cloud Postgres offerings, and it seems unlikely they could compete on features. Genuinely curious, but is there any sensible use case for this product, or do they just catch people who read SQLite was good on hacker news, but didn’t understand any of the why.
The thing that cooks my noodle - who are these insane people who want to beta test a new database? Yes, all databases could have world destroying data loss/corruption, but I have significantly more confidence in a player that has been on the market for many years.
The article talks about this. If you have a project that starts small and an in-process DB is fine, but you end up needing to scale up then you don't have to switch DBs.
I think it's more like you started with SQLite and now you need concurrent writes, replication, sharding, etc. etc. - all the stuff that the "big" databases like PostgreSQL provide.
After all, if you can tell in advance that you might hit the limits of SQLite, you'd simply start with postgresql on day one, not with a new, unproven DB vendor whose product hasn't been through the trial by fire that existing DBs have.
Man, I've seen the SQL Metabase emits, it's not great. Like, doing a massive join across 10 tables and selecting all the columns from all the tables - to only return the average of one column from one table.
Grafana has been a pretty good steward of OSS. Whether you like their products or not, they've been able to balance the OSS and commercial offerings fairly well.
Whether or not they attempt rug pulls or other slimy measures to extort money from entrenched users... these VC-backed OSS startups have given us some nice things. People fork the permissively licensed code when the scumbuckets get too smelly, and the company slides into irrelevancy while people use the actually-OSS version.
The MIT licensing makes this even less trustworthy. I can imagine a major cloud or fly.io just running a proprietary fork of it as a service, as cloud providers have done for years.
So what? The MIT licensed original will still be there, you don't lose out on anything if that happens. And also, SQLite itself is public domain, so by your logic we shouldn't trust SQLite either. Which is crazy.
I don't understand your reply here. Database startups have always had the consistent issue of cloud providers offering managed solutions without contributing back. It is why many moved to or use the AGPLv3, and why there was the whole SSPL controversy in the first place. Running a successful open source database startup is not trivial. None of this applies to SQLite.
I think the point is that that sounds like a potential problem for turso, but it’s not really a problem for everyone else unless some sort of vendor lockin would prevent using open source alternatives. But given the strong compatibility story with the SQLite file format implied already that just doesn’t seem credible.
It's covered in the article. The full SQLite test suite isn't open source, so you (the third party) don't have the same confidence in your modifications as the SQLite team does.
Yeah, that's not a good environment for this kind of engineering. You need long term stability for a project like this, slow incremental development with a long term plan, and that's antithetical to VC culture.
On the other hand, Rust code and the culture of writing Rust leads to far more modularity, so maybe some useful stuff will come of it even if the startup fails.
I have been excited to see real work on databases in Rust, there are massive opportunities there.
Where do you see these opportunities? I didn't personally see a lot of issues in this domain that rust would handle better than C. Care to elaborate? (genuinely curious!)
Personally I see more benefit in rust for, for example, ORMs and layers that talk to the database. (Those are often useful to have in such an ecosystem so you can use the database safely and sanely, like python or so, but then, you know, fast and secure.)
You need to be crazy to use an ORM. I personally think that even SQL is redundant. I would like to see a high quality embedded database written in Rust.
It's painful having to switch to another language to talk to the database, and ORMs are the worst kind of leaky abstractions. With Rust, we've finally got a systems language that's expressive enough to do a really good job with the API to an embedded database.
The only thing that's really missing is language support for properly ergonomic Cap'n Proto support - Swift has stuff in this vein already. That'd mean serializable ~native types with no serialization/deserialization overhead, and it's applicable to a lot of things; Swift developed the support so they could do proper dynamically linked libraries (including handling version skew).
If I might plug my project yet again (as if I don't do that enough :) - bcachefs has a high quality embedded database at its core, and one of the dreams has always been to turn that into a real general purpose database. Much of the remaining stuff for making it truly general purpose is stuff that we're going to want sooner or later for the filesystem anyways, and while it's all C today I've done a ton of work on refactoring and modernizing the codebase to hopefully make a Rust conversion tractable, in the not too distant future.
(Basically, with the cleanup attribute in modern C, you can do pseudo RAII that's good enough to eliminate goto error handling in most code. That's been the big obstacle to transitioning a C codebase to be "close enough" to what the Rust version would look like to make the conversion mostly syntactic, not a rewrite, and that work is mostly done in bcachefs).
The database project is very pie in the sky, but if the project gets big enough (it's been growing, slowly but steadily), that's the dream. One of them, anyways.
A big obstacle towards codebases that we can continue to understand, maintain and continue to improve over the next 100 years is giant monorepos, and anything we can do to split giant monorepos apart into smaller, cleaner reusable components is pure gold.
I was excited about this for a second until seeing your comment.
Unless you are Amazon which has the resources to maintain a fork (which is questionable by itself with all the layoffs), you probably shouldn't touch this.
Completely agree, I'm looking at pretty much all software this way nowadays.
We've all been around long enough to know that "free" VC-backed software always means "free... until it's in our interest to charge for it". And yet users will still complain about the rugpull in 2026, no matter how many times they've been through it. "Fool me once, shame on you"
This reflects my experience. I also experienced very bad memory leaks when using libSQL for large write jobs. Haven't tried tursodatabase yet, but my impression by the confusing amount of packages in the Turso ecosystem is it's not ready for primetime yet.
> ... most of which can be fixed by a rewrite in Rust
huh? That is clearly not the case. Memory bugs - sure. But not having a public test suite, not accepting public contributions, weakly typed columns, and lack of concurrency have nothing to do with the language. They're governance decisions, that's it.
>I see this situation through the prism of the innovator's dilemma: the incumbent is not willing to sacrifice a part of its market to evolve, so we need a new player to come and innovate.
I don't think the innovator's dilemma quite applies in the open source world. Projects are tools, that's it. Preserving a project for the sake of preserving it isn't a good idea.
If people need to run a sqlite db in these exotic places, shedding that support means someone else now has to build their own tool that can do it. Sqlite has decided that they care about that, so they support it, so they can't use rust. Seems sound.
Projects coming and going is a good thing in open source, not a bug.
I know I've seen multiple bug reports in open source projects with "well we can't fix this because it'd break things for existing users." Maybe it's a bad thing, but why do you think this doesn't happen?
> lack of concurrency has nothing to do with the language
That's an extraordinary claim for any C codebase.
Unless it ships with code enabling concurrency that is commented out, we should assume that "concurrency in C ain't easy" was a factor in that design choice.
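For contrast, this is roughly what Rust enforces at compile time: shared mutable state across threads must sit behind a synchronization primitive, so the classic lost-update race simply won't build. A generic sketch, nothing to do with SQLite's internals:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// N threads increment a shared counter. The Mutex is not optional:
// sharing a plain `&mut usize` across the spawned threads is a
// compile error, not a latent data race.
fn concurrent_count(threads: usize, per_thread: usize) -> usize {
    let counter = Arc::new(Mutex::new(0usize));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let c = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..per_thread {
                    *c.lock().unwrap() += 1;
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}
```

In C the equivalent bug (incrementing without the lock) compiles cleanly and only shows up under load, which is part of why concurrency tends to get designed out of C codebases rather than in.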
At the current rate of progress I'm wondering how long it will take for llm agents to be able to rewrite/translate complete projects into another language. SQLite may not be the best candidate, due to the hidden test suite. But CPython or Clang or binutils or...
The RIIR-benchmark: rewrite CPython in Rust, pass the complete test suite, no performance regressions, $100 budget. How far away are we there, a couple months? A few years? Or is it a completely ill-posed problem, due to the test suite being tied to the implementation language?
A clearly defined/testable long-horizon task: demonstrating the capability of planning and executing projects that overrun current llm's context windows by several orders of magnitude.
Single-issue coding benchmarks are getting saturated, and I'm wondering when we'll get to a point where coding agents will be able to tackle some long-running projects. Greenfield projects are hard to benchmark. So creating code or porting code from one language to another for an established project with a good test suite should make for an interesting benchmark, no?
I hate to be negative, but where is the deep dive? This is a shallow overview of Turso's features and some of the motivation behind it. Am I missing something?
So the idea is to rewrite it in Rust and drop SQLite? I mean, maybe that’s just how things evolve. But it feels like every project is only a few vibe-coding sessions away from getting rewritten in $LANGUAGE. And I can’t help wondering whether that’s hurting a sustainable open-source ecosystem.
SQLite is a good example: the author built a small ecosystem around it and managed to make a living from open source. Thanks to the author's effort, we have a small surface area, extreme stability, and a relentless focus on correctness.
If we keep rewarding novelty over stewardship, we’ll lose more “SQLite-like” projects—stable cores that entire ecosystems depend on.
From what I’ve read there’s a pretty sizable performance gap between SQLite and pglite (with SQLite being much faster).
I’m excited to see things improve though. Having a more traditional database, with more features and less historical weirdness on the client would be really cool.
a blog not written by AI, about a project written with AI. It is just a matter of time. We just need AI to read the article, and then the circle is complete.
Related. Others?
Turso is an in-process SQL database, compatible with SQLite - https://news.ycombinator.com/item?id=46677583 - Jan 2026 (102 comments)
Beyond the SQLite single-writer limitation with concurrent writes - https://news.ycombinator.com/item?id=45508462 - Oct 2025 (70 comments)
An adventure in writing compatible systems - https://news.ycombinator.com/item?id=45059888 - Aug 2025 (12 comments)
Introducing the first alpha of Turso: The next evolution of SQLite - https://news.ycombinator.com/item?id=44433997 - July 2025 (11 comments)
Working on databases from prison - https://news.ycombinator.com/item?id=44288937 - June 2025 (534 comments)
Turso SQLite Offline Sync Public Beta - https://news.ycombinator.com/item?id=43535943 - March 2025 (67 comments)
We will rewrite SQLite. And we are going all-in - https://news.ycombinator.com/item?id=42781161 - Jan 2025 (3 comments)
Limbo: A complete rewrite of SQLite in Rust - https://news.ycombinator.com/item?id=42378843 - Dec 2024 (232 comments)
How SQLite is tested - https://news.ycombinator.com/item?id=46303277
That's a classic! (https://news.ycombinator.com/item?id=46306724) ... but too far from this particular topic to make sense on the list - otherwise we'd probably have to add all SQLite stories, which are legion.
That one's always a good read, particularly the discussion of the tension between 100% coverage testing and defensive programming. We go for maximum defensive programming, so we have huge numbers of code paths that can't be exercised in testing but that will prevent things from running off into the weeds if something does manage to trigger them. Another organisation, in contrast, had a client who required 100% code coverage in testing, so they spent six months removing all the non-testable defensive code from their code base.
I'd never read this article by the SQLite developers before. It's so odd to read a level-headed C vs. Rust take on the internet.
https://sqlite.org/whyc.html
I'm not sure I buy this from a technical perspective. Rust already meets almost all of the criteria laid out at the end of this post. By all means keep using C if you like it, but the rust team has done an excellent job over the last few years addressing most of these issues.
> - Rust needs to mature a little more, stop changing so fast, and move further toward being old and boring.
Rust moves at a pretty glacial pace these days. Slower than C++ for sure. There haven't been any big, significant changes to the language since async. Code that compiles today should compile indefinitely. (And the rust compiler authors check this on every release, by recompiling basically everything in crates.io to make sure.)
> - Rust needs to demonstrate that it can be used to create general-purpose libraries that are callable from all other programming languages.
Rust matches C in this regard. You can import & export C functions from rust very easily. The consumer of the foreign function interface has no idea they're calling rust and not C.
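A minimal sketch of what that looks like (illustrative function name, nothing to do with SQLite's API):

```rust
// Exposing a Rust function with a C ABI. `#[no_mangle]` keeps the symbol
// name unmangled so C (or any language with a C FFI) can link against it
// by name. (On the Rust 2024 edition this attribute is spelled
// `#[unsafe(no_mangle)]`.)
#[no_mangle]
pub extern "C" fn add_u32(a: u32, b: u32) -> u32 {
    a.wrapping_add(b)
}

// The declaration a C consumer would write:
//   uint32_t add_u32(uint32_t a, uint32_t b);
```

Built as a `cdylib`, the resulting shared library is indistinguishable at the ABI level from one compiled from C, which is what lets callers remain unaware of the implementation language.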
> - Rust needs to demonstrate that it can produce object code that works on obscure embedded devices, including devices that lack an operating system.
Rust works pretty well on raw / embedded hardware via #[no_std]. There are a few obscure architectures supported by gcc but not llvm (and, by extension, rust). But it generally works great. I'd love to know what the real blocker platforms are (if any).
> - Rust needs to pick up the necessary tooling that enables one to do 100% branch coverage testing of the compiled binaries.
Uh, I think this is possible today? RustRover (IntelliJ) can certainly produce coverage reports. This doesn't feel out of reach.
> - Rust needs a mechanism to recover gracefully from OOM errors.
True. You can override the global allocator for a program and use that to detect OOM. But recovering from OOM in general is tricky. I personally wish rust's handling of allocators looked more like zig.
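Stable Rust does offer fallible allocation at the container level via `try_reserve`, which reports failure as a `Result` instead of aborting the process. A small sketch:

```rust
use std::collections::TryReserveError;

// Allocate a zeroed buffer of `len` bytes, surfacing allocation failure
// to the caller rather than triggering the default abort-on-OOM path.
fn alloc_buffer(len: usize) -> Result<Vec<u8>, TryReserveError> {
    let mut buf = Vec::new();
    buf.try_reserve_exact(len)?; // fails gracefully on OOM or overflow
    buf.resize(len, 0);
    Ok(buf)
}
```

This covers allocations you make yourself; it doesn't retrofit graceful failure onto every allocation your dependencies make, which is where SQLite-style exhaustive malloc-failure handling in C is still easier to do uniformly.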
> - Rust needs to demonstrate that it can do the kinds of work that C does in SQLite without a significant speed penalty.
Rust and C are pretty much even when it comes to performance. Rust binaries are often a bit bigger though.
The criteria were laid out in 2019 [0]. It was less clear then.
> If you are a "rustacean" and feel that Rust already meets the preconditions listed above, and that SQLite should be recoded in Rust, then you are welcomed and encouraged to contact the SQLite developers privately and argue your case.
It seems like the criteria are less a list of things the SQLite developers claim Rust can't do and more a set of non-negotiable properties that would need to be established before even bringing the idea of a Rust version to the team.
I think it is at least arguable that Rust does not meet the requirements. And they did explicitly invite private argument if you feel differently.
0: https://web.archive.org/web/20190423143433/https://sqlite.or...
Ah, I assumed the page was written recently due to this message at the bottom:
>> This page was last updated on 2025-05-09 15:56:17Z <<
> I think it is at least arguable that Rust does not meet the requirements
Absolutely. The lack of clean OOM handling alone might be a dealbreaker for sqlite. And I suspect sqlite builds for some weird platforms that rustc doesn't support.
But I find it pretty weird reading comments about how rust needs to prove it performs similarly to C. Benchmarks are just a google search away, folks.
> And they did explicitly invite private argument if you feel differently.
Never.
It's not up to me what language sqlite is written in. Emailing the sqlite authors to tell them to rewrite their code in a different language would be incredibly rude. They can write sqlite in whatever language they want. My only real choice is whether or not I want to use their code.
> Rustrover (intellij) can certainly produce coverage reports.
See <https://sqlite.org/testing.html#statement_versus_branch_cove...>. Does Rustrover produce branch coverage reports?
Yes! A quick Google search brings up cargo-llvm-cov[1], which is a Rust wrapper around LLVM source-based code coverage. It has an unstable --branch flag for branch coverage, but branch coverage currently has some language-level limitations[2].
[1] https://github.com/taiki-e/cargo-llvm-cov
[2] https://github.com/rust-lang/rust/issues/124118
I’d imagine this will go a bit like the Rust rewrite of sudo, etc. Despite the memory-safety advantages, at least towards the start it still ends up more fragile, because the incumbent has years of testing and fixing behind it.
They're not aiming at replacing SQLite-in-C with SQLite-in-Rust, they're doing this so they can implement more additional functionality faster than with C's chainsaw-juggling-act semantics and the inability to access the proprietary SQLite test suite.
See the features and roadmap at https://github.com/tursodatabase/turso
IMHO breaking free of SQLite's proprietary test suite is a bigger driver than C vs Rust. Turso's Limbo announcement says exactly that: they couldn't confidently make large architectural changes without access to the tests. The rewrite lets them build Deterministic Simulation Testing from scratch, which they argue can exceed SQLite's reliability by simulating unlikely scenarios and reproducing failures deterministically.
Having seen way too many "we're going to rewrite $xyz but make it BETTERER!!" efforts, I don't give this one much chance of success. SQLite is a high-quality product with a quarter-century of development history and huge amounts of testing effort, both by the devs and via public use. So this let's-reinvent-it-in-Rust effort will have to beat an already very good product that's had a staggering amount of development and testing put into it. If the devs do manage to get through it all, they'll end up with about the same as the existing thing, but written in a language that most of the SQLite targets don't work with. I just can't see this going anywhere outside of hardcore Rust devotees who want to use a Rust SQLite even though it still hasn't got past the fixer-upper stage.
fragmede is correct.
I needed SQLite as a central system DB but couldn't live with single-writer. So I built a facade that can target SQLite, Postgres, or Turso's Rust rewrite through one API. The useful part: mirroring. The facade writes to two backends simultaneously so I can diff SQLite vs Turso behavior and catch divergences before production. When something differs, I either file upstream or add an equalizing shim. Concurrent writes already working is a reasonable definition of success. It's why I'm using it.
How do you want to define success for this project relative to SQLite? Because they already have concurrent writes working for their rust implementation. It's currently marked experimental, but it does already work. And for a lot of people, that's all they want or need.
https://turso.tech/blog/beyond-the-single-writer-limitation-...
> IMHO breaking free of SQLite's proprietary test suite is a bigger driver than C vs Rust.
I don't understand this claim, given the breadth and depth of SQLite's public domain TCL Tests. Can someone explain to me how this isn't pure FUD?
"There are 51445 distinct test cases, but many of the test cases are parameterized and run multiple times (with different parameters) so that on a full test run millions of separate tests are performed." - https://sqlite.org/testing.html
SQLite's test suite is infamously gigantic. It has two parts: the public TCL tests you're referencing, and a much larger proprietary test suite that's 100x bigger and covers all the edge cases that actually matter in production. The public tests are tiny compared to what SQLite actually runs internally.
The test suite that the actual SQLite developers use to develop SQLite is not open-source. 51445 open-source test cases is a big number but doesn't really mean much, particularly given that evidently the SQLite developers themselves don't consider it enough to provide adequate coverage.
The irony is if they only had the public domain tests, no one would complain even though it would mean the exact same number of open source tests.
That’s like if I gave you half the dictionary and then said it’s ironic that if there really weren’t any letters after “M” you wouldn’t be complaining.
The next bullet point:
> 2. The TH3 test harness is a set of proprietary tests…
Of course, but how does that make the allegation not FUD?
I’m confused, the statement is that SQLite has a proprietary test suite? It does. Where’s the FUD?
Turso tried to add features to SQLite in libsqlite but there were bugs/divergent behaviour that they couldn’t reconcile without the full test suite.
There are also non-public tests.
In other words, they are creating their own database and hitching on to the SQLite brand to market it. (That's fine though).
I think it's fair to say they tried using SQLite but apparently had to bail out. Their use case is a distributed DBaaS with local-first semantics, they started out with SQLite and only now seem to be pivoting to "SQLite-compatible".
Building off of that into a SQLite-compatible DB doesn't seem to me as trying to piggyback on the brand. They have no other option as their product was SQLite to begin with.
No that's completely incorrect. It's compatible with SQLite, not just in the same spirit:
> SQLite compatibility for SQL dialect, file formats, and the C API
It stopped being compatible with SQLite even before the Rust rewrite: https://news.ycombinator.com/item?id=42386894
That doesn't seem very fair. It's still beta and clearly far from finished. And they do call out the compromises - they have a whole page about how they are not yet fully compatible:
https://github.com/tursodatabase/turso/blob/main/COMPAT.md
I don't think that's fine at all, it's quite a shitty thing to do honestly, and I'm not surprised it's a VC-backed company doing it.
How would you do it then?
Without the test suite, isn't it even more likely to have stability problems?
Maybe. It's hard to know what kind of issues that test suite covers. If memory safety is the main source of instability for the C implementation, then the Rust implementation won't be too affected by losing the test suite. Same if it focuses a lot on compatibility with niche embedded platforms and different OSes, which Turso won't care to lose.
"Stability" is a word that means different things for different use cases.
Coverage is described on the SQLite website
Turso has its own test suite that's in the repo.
but the other one has decades of engineering effort and is based on real world problems
But the other one is not available to most, and SQLite itself is "open-source" not "open-contributions", so extending SQLite is pretty much impossible at scale:
- no way to merge upstream
- no way to run the full test suite to be sure everything is tiptop
Not likely. The alternative was for them to modify SQLite without the test suite and no obvious indication of what they would need to do to try to fill in the gaps. Modifying SQLite with its full test suite would be the best choice, of course, but one that is apparently[1] not on the table for them. Since they have to reimagine the test suite either way, they believe they can do a better job if the tests are written alongside a new codebase.
And I expect they are right. Trying to test a codebase after the fact never goes well.
[1] With the kind of investment backing they have you'd think they'd be able to reach some kind of licensing deal, but who knows.
I don't get this. In their own rust implementation they have to write and use their own test and they still don't have access to the proprietary sqlite tests. So their implementation will necessarily be whatever they implement + whatever passes their tests. Same as it would be if they forked sqlite in C. (Plus they would have the open source tests). Am I missing something?
You are missing that HN accounts needlessly overthink everything, perhaps?
Otherwise, I doubt it. They have to write the tests again no matter what. Given that, there is no downside to reimplementing it while they are at it. All while there is a big upside to doing that: Trying to test something after the implementation is already written never ends well.
That does not guarantee that their approach will succeed. It is a hard problem no matter how you slice it. But trying to reverse-engineer the tests for the C version, now that all knowledge of what went into them in the first place is lost, is all but guaranteed to fail. Rewriting the implementation and tests in parallel increases the chances of success.
Of all the projects which may benefit from a rewrite or re-imagining in a memory-safe language, I'm really puzzled why it's heavily-tested, near-universally-deployed software such as sudo (use OpenBSD's doas instead?), the coreutils, and sqlite.
Doas supports a subset of sudo functionality by design. Your comment is exactly what I said when I first heard about the rust linux utils thing. The best they can do is have new bugs.
I don't think there is a big picture plan. It requires that someone care both about rust and the thing
...which is a pretty arbitrary combination
I definitely wouldn't be surprised by bugs and/or compatibility issues over time. Especially in the near term. I'm mixed, but somewhat enthusiastic on Turso's efforts to create client-server options and replication.
In the past I've reached for FirebirdSQL when I needed local + external databases and wanted to limit the technology spread... In that use case, as long as transactions synced up even once a week, it was enough for the disparate remote connections/systems. I'm honestly surprised it isn't used more. That said, SQLite is more universal and lighter overall.
Building a production app on Turso now. No bugs or compatibility issues so far. The sqlite API isn't fully implemented yet, so I wrote a declarative facade that backfills the missing implementations and parallels writes to both Turso and native sqlite: gives me integrity checking and fallback while the implementation matures
Isn’t the rust rewrite deployed as part of some fairly significant Linux distros these days?
That’s hearsay that I haven’t dug into, so I may well be wrong.
Ubuntu is deploying it in a non-LTS release, and they're trying to get the bugs out of the way is what I'm hearing
I was surprised that the test suite isn't open source. Some info at https://sqlite.org/testing.html
It looks like some parts are open source and others are not. Does anyone know more about the backstory? (It looks like one is a custom program that generates fuzz tests. Do they sell it to other SQL engines?)
The CoRecursive episode with SQLite creator D. Richard Hipp goes through it. I've linked to the part of the transcript that covers it, the key quote being:
> We still maintain the first one, the TCL tests. They’re still maintained. They’re still out there in the public. They’re part of the source tree. Anybody can download the source code and run my test and run all those. They don’t provide 100% test coverage but they do test all the features very thoroughly. The 100% MCD tests, that’s called TH3. That’s proprietary. I had the idea that we would sell those tests to avionics manufacturers and make money that way. We’ve sold exactly zero copies of that so that didn’t really work out. It did work out really well for us in that it keeps our product really solid and it enables us to turn around new features and new bug fixes very fast.
https://corecursive.com/066-sqlite-with-richard-hipp/#testin...
it's their business model
it's free
but if you want the compliance paperwork, you pay for it
Yeah but what about the poor VC startups that want to rat fuck the commons? Why won't anyone think of them?
Useful if you need to validate that the database runs properly on your embedded platform, possibly with its custom I/O and sync primitives.
This is very shallow for a supposed deep dive.
I'm not ready to entertain Turso as an alternative to something that is as battle tested as Sqlite.
> This is very shallow for a supposed deep dive.
I think it's time for a new law of headlines: anything labeled a "deep dive" isn't.
My law of headlines is, "don't take them too seriously, don't develop too many expectations about the article, skim the article (or the comments) to know what it is about and whether it is worth your time".
Taking feature lists and plans at face value is offensively shallow; the typical Rust fan arrogance pattern can be an explanation (if the Rust rewrite is "better", it doesn't have to be compatible with the rest of the world who uses the actual C SQLite).
Perhaps these are for deep divers who discuss Apple Watch deep-diving features rather than actual deep diving.
Yeah, I was expecting performance benchmarks, detailed feature comparisons, analysis of binary/extension compatibility, etc.
The thing that worries me the most about Turso is that rather than the small, stable team running SQLite, Turso is a VC backed startup trying to capitalize on the AI boom. I can easily see how SQLite's development is sustainable, but not Turso's. They're currently trying to grow their userbase as quickly as possible with their free open source offering, but when they have investors breathing down their necks asking about how they're going to get 100x returns I'm not sure how long that'll last. VCs generally expect companies they invest in to grow to $100 million in revenue in 5-10 years. If your use of their technology doesn't help them get there, you should expect to be rugpulled at some point.
I too am wary of VC incentives, but:
1) It's MIT licensed. Including the test suite which is something lacking in SQLite:
https://github.com/tursodatabase/turso
2) They have a paid cloud option to drive income from:
https://turso.tech/pricing
"Including the test suite which is something lacking in SQLite"
That's not entirely true. SQLite has a TON of tests that are part of the public domain project: https://github.com/sqlite/sqlite/tree/master/test
They do have a test suite that's private which I understand to be more about testing for different hardware - they sell access to that for companies that want SQLite to work on their custom embedded hardware, details here: https://sqlite.org/th3.html
> SQLite Test Harness #3 (hereafter "TH3") is one of three test harnesses used for testing SQLite.
> 2) They have a paid cloud option to drive income from:
I’ve been confused by this for a while. What is it competing with? Surely not SQLite, since being client-server defeats all the latency benefits. I feel it would be considered an alternative to cloud Postgres offerings, and it seems unlikely they could compete on features. Genuinely curious, but is there any sensible use case for this product, or do they just catch people who read that SQLite was good on Hacker News but didn't understand any of the why?
The thing that cooks my noodle: who are these people who want to beta test a new database? Yes, all databases could have world-destroying data loss/corruption, but I have significantly more confidence in a player that has been on the market for many years.
> Genuinely curious, but is there any sensible use case for this product
Looking at the comments each time this product comes up, Rust is apparently the selling point for many, including the dev team themselves.
The article talks about this. If you have a project that starts small and an in-process DB is fine, but you end up needing to scale up then you don't have to switch DBs.
So the use case is: I started with SQLite, but now I have too many terabytes to fit on one server? That seems... very uncommon.
And moving it out of process, or even onto another machine over the network, is going to make it much, much slower. You're going to need a rewrite anyway.
I think it's more like you started with SQLite and now you need concurrent writes, replication, sharding, etc. etc. - all the stuff that the "big" databases like PostgreSQL provide.
That's a valid, but very tiny, use case.
After all, if you can tell in advance that you might hit the limits of SQLite, you'd simply start with PostgreSQL on day one, not with a new, unproven DB vendor whose product hasn't been through the trial by fire that existing DBs have.
Thanks. Serves me right for commenting without reading the article.
Elasticsearch was licensed under Apache 2.0 until they switched.
That says enough.
to AGPL3?
Are there any VC-funded open source projects that didn't attempt rug pulls? (There must be, right?)
metabase.com, but metabase is intended for business analyst types and is AGPL, with shenanigans for embedding and an enterprise edition thing
Man, I've seen the SQL Metabase emits, it's not great. Like, doing a massive join across 10 tables and selecting all the columns from all the tables - to only return the average of one column from one table.
Grafana has been a pretty good steward of OSS. Whether you like their products or not, they've been able to balance the OSS and commercial offerings fairly well.
Yeah that's something I actually use quite a bit!
Whether or not they attempt rug pulls, or other slimy measures to extort money from entrenched users... these VC-backed OSS startups have given us some nice things. People fork the permissively licensed code when the scumbuckets get too smelly, and the company goes on to irrelevancy while people use the actually-OSS version.
The MIT licensing makes this even less trustworthy. I can imagine a major cloud or fly.io just proprietary-forking them as a service, as cloud providers have done for years.
So what? The MIT licensed original will still be there, you don't lose out on anything if that happens. And also, SQLite itself is public domain, so by your logic we shouldn't trust SQLite either. Which is crazy.
I don't understand your reply here. Database startups have always had the consistent issue of cloud providers providing managed solutions without contributing back. It is why many moved to or use the AGPLv3, and why there was the whole SSPL controversy in the first place. Running a successful open source database startup is not trivial. None of this applies to SQLite.
I think the point is that that sounds like a potential problem for turso, but it’s not really a problem for everyone else unless some sort of vendor lockin would prevent using open source alternatives. But given the strong compatibility story with the SQLite file format implied already that just doesn’t seem credible.
> test suite which is something lacking in SQLite
You must be kidding. Last time I checked, sqlite was mostly extensive test suites.
It's covered in the article. The full SQLite test suite isn't open source, so you (the third party) don't have the same confidence in your modifications as the SQLite team does.
1. Only if you modify it. There is a free test suite, and you can license the non-free one.
2. Compared to SQLite's tests, the tests in Turso are just a kid's toy.
I think they meant that the test suite is not open source. You’re right that it is extensive.
Yeah, that's not a good environment for this kind of engineering. You need long term stability for a project like this, slow incremental development with a long term plan, and that's antithetical to VC culture.
On the other hand, Rust code and the culture of writing Rust leads to far more modularity, so maybe some useful stuff will come of it even if the startup fails.
I have been excited to see real work on databases in Rust, there are massive opportunities there.
Where do you see these opportunities? I personally didn't see a lot of issues in this domain that Rust would handle better than C. Care to elaborate? (Genuinely curious!)
Personally I see more benefit in Rust for, say, ORMs and the layers that talk to the database. (Those are often useful to have in such an ecosystem so you can use the database safely and sanely, like Python, but, you know, fast and secure.)
You need to be crazy to use an ORM. I personally think that even SQL is redundant. I would like to see a high quality embedded database written in Rust.
Yep, exactly this.
It's painful having to switch to another language to talk to the database, and ORMs are the worst kind of leaky abstractions. With Rust, we've finally got a systems language that's expressive enough to do a really good job with the API to an embedded database.
The only thing that's really missing is language support for properly ergonomic Cap'n Proto support - Swift has stuff in this vein already. That'd mean serializable ~native types with no serialization/deserialization overhead, and it's applicable to a lot of things; Swift developed the support so they could do proper dynamically linked libraries (including handling version skew).
If I might plug my project yet again (as if I don't do that enough :) - bcachefs has a high quality embedded database at its core, and one of the dreams has always been to turn that into a real general purpose database. Much of the remaining stuff for making it truly general purpose is stuff that we're going to want sooner or later for the filesystem anyways, and while it's all C today I've done a ton of work on refactoring and modernizing the codebase to hopefully make a Rust conversion tractable, in the not too distant future.
(Basically, with the cleanup attribute in modern C, you can do pseudo-RAII that's good enough to eliminate goto error handling in most code. That's been the big obstacle to transitioning a C codebase to be "close enough" to what the Rust version would look like to make the conversion mostly syntactic, not a rewrite, and that work is mostly done in bcachefs.)
The database project is very pie in the sky, but if the project gets big enough (it's been growing, slowly but steadily), that's the dream. One of them, anyways.
A big obstacle towards codebases that we can continue to understand, maintain and continue to improve over the next 100 years is giant monorepos, and anything we can do to split giant monorepos apart into smaller, cleaner reusable components is pure gold.
I vaguely remember a crate doing a RocksDB kind of thing?
I was excited about this for a second until seeing your comment.
Unless you are Amazon which has the resources to maintain a fork (which is questionable by itself with all the layoffs), you probably shouldn't touch this.
Some lessons about the modern distaste for copyleft here IMO
Completely agree, I'm looking at pretty much all software this way nowadays.
We've all been around long enough to know that "free" VC-backed software always means "free... until it's in our interest to charge for it". And yet users will still complain about the rugpull in 2026, no matter how many times they've been through it. "Fool me once, shame on you"
I've lost the count of how many times people were fooled by VC backed companies in this forum.
I recently benchmarked different SQLite implementations/drivers for Node. Better-sqlite3 came out on top of this test: https://sqg.dev/blog/sqlite-driver-benchmark/
This reflects my experience. I also experienced very bad memory leaks when using libSQL for large write jobs. Haven't tried tursodatabase yet, but my impression by the confusing amount of packages in the Turso ecosystem is it's not ready for primetime yet.
> ... most of which can be fixed by a rewrite in Rust
huh? That is clearly not the case. memory bugs - sure. Not having a public test suite, not accepting public contributions, weakly typed columns and lack of concurrency has nothing to do with the language. They're governance decisions, that's it.
>I see this situation through the prism of the innovator's dilemma: the incumbent is not willing to sacrifice a part of its market to evolve, so we need a new player to come and innovate.
I don't think the innovators dilemma quite applies in the open source world. Projects are tools, that's it. Preserving a project for the sake of preserving it isn't a good idea.
If people need to run a sqlite db in these exotic places, shedding it means someone else has to build their own tool now that can do it. Sqlite has decided that they care about that, so they support it, so they can't use rust. Seems sound.
Projects coming and going is a good thing in open source, not a bug.
Maybe they're saying the rewrite part solves the governance issues, not the Rust part.
That'd be an interesting attitude towards governance for a VC-funded startup with -- I presume -- VC-controlled board seats.
I know I've seen multiple bug reports in open source projects with "well we can't fix this because it'd break things for existing users." Maybe it's a bad thing, but why do you think this doesn't happen?
> lack of concurrency has nothing to do with the language
That's an extraordinary claim for any C codebase.
Unless it ships with code enabling concurrency that is commented out, we should assume that "concurrency in C ain't easy" was a factor in that design choice.
There has been a podcast by Developer Voices about Turso: https://m.youtube.com/watch?v=1JHOY0zqNBY
At the current rate of progress I'm wondering how long it will take for LLM agents to be able to rewrite/translate complete projects into another language. SQLite may not be the best candidate, due to the hidden test suite. But CPython or Clang or binutils or...
The RIIR-benchmark: rewrite CPython in Rust, pass the complete test suite, no performance regressions, $100 budget. How far away are we there, a couple months? A few years? Or is it a completely ill-posed problem, due to the test suite being tied to the implementation language?
What’s the point?
A clearly defined, testable long-horizon task: demonstrating the capability of planning and executing projects that overrun current LLMs' context windows by several orders of magnitude.
Single-issue coding benchmarks are getting saturated, and I'm wondering when we'll get to a point where coding agents will be able to tackle some long-running projects. Greenfield projects are hard to benchmark. So creating code or porting code from one language to another for an established project with a good test suite should make for an interesting benchmark, no?
I'm working on a Django app. This would make production deployment a bit easier.
Also sad that the test suite isn't open source. Would help drive development of the new DB...
I'm pretty sceptical, to say the least
Where is the "networked mode" in Turso? Turso's readme and docs do not mention anything like this
They're implementing MVCC
For the Java ecosystem, H2 fills this gap nicely, easily handling both in-memory and remote JDBC access:
https://frequal.com/java/TheBestDatabase.html
I hate to be negative, but where is the deep dive? This is a shallow overview of Turso's features and some of the motivation behind it. Am I missing something?
It's longer than a tweet
So the idea is to rewrite it in Rust and drop SQLite? I mean, maybe that’s just how things evolve. But it feels like every project is only a few vibe-coding sessions away from getting rewritten in $LANGUAGE. And I can’t help wondering whether that’s hurting a sustainable open-source ecosystem.
SQLite is a good example: the author built a small ecosystem around it and managed to make a living from open source. Thanks to author's effort, we have a small surface area, extreme stability, relentless focus on correctness.
If we keep rewarding novelty over stewardship, we’ll lose more “SQLite-like” projects—stable cores that entire ecosystems depend on.
Good article though it kind of stopped just when I thought the deep dive was about to start.
Could you be any _less_ specific in your criticism?
> A database that can scale from in-process to networked is badly needed
Why not Postgres? https://pglite.dev
From what I’ve read there’s a pretty sizable performance gap between SQLite and pglite (with SQLite being much faster).
I’m excited to see things improve though. Having a more traditional database, with more features and less historical weirdness on the client would be really cool.
Edit: https://pglite.dev/benchmarks actually not looking too bad.. I might have something new to try!
Wow what a terrible and misleading article
let's play a little game known as "count the unsafe"
https://github.com/search?q=repo%3Atursodatabase%2Fturso%20u...
What a breath of fresh air to read a blog not written by AI, with actual human learnings and opinions. Thanks for the write up!
Was this written by an LLM?
a blog not written by AI, about a project written with AI. It is just a matter of time. We just need AI to read the article, and then the circle is complete.
Stop rewriting everything in Rust.
vibe-coding everything in Rust, you mean.