Writing it in Rust gets visibility because of the popularity of the language on HN.
Here's why we are not doing it for LadybugDB.
Would love to explore a more gradual/incremental path.
Also focusing on just one query language: strongly typed Cypher.
https://vldb.org/cidrdb/2023/kuzu-graph-database-management-...
You can judge for yourself what work has been done in the last 5 months. Many short videos here. New open source contributors whom I didn't know before are ramping up.
In your discussion, the first comment from an ex-Kuzu dev made an excellent point: Rust is an excellent language for databases, letting you ship faster with confidence while reducing real problems of concurrency and corruption.
At some point it becomes intellectual dishonesty to dismiss a language because of vibes instead of merit.
But rewriting a complex working piece of software in Rust is not trivial. Having an incremental path (where only parts are rewritten in Rust and compatible with C++ code) would be a good path to get there.
Also open to new code and extensions getting written in Rust.
From the commit history it's obvious that this is an AI-coded project. It was started a few months ago, 99% of commits are from 1 contributor, and that 1 contributor has sometimes committed 100,000 lines of code per week. (EDIT: 200,000 lines of code in the first week)
I'm not anti-LLM, but I've done enough AI coding to know that one person submitting 100,000 lines of code a week is not doing deep thought and review on the AI output. I also know from experience that letting AI code the majority of a complex project leads to something very fragile, overly complicated, and not well thought out. I've been burned enough times by investigating projects that turned out to be AI slop with polished landing pages. In some cases the claimed benchmarks were improperly run or just hallucinated by the AI.
So is anyone actually using this? Or is this someone's personal experiment in building a resume portfolio project by letting AI run against a problem for a few months?
Trying to make it optional.
Try:

  explain match (a)-[b]->(c) return a.rowid, b.rowid, c.rowid;
If you wish to avoid that particular caveat, look for a graph DB which materializes edges within vertices/nodes. The obvious caveat there is that the edges are not normalized, which may or may not be an issue for your particular application.
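For illustration, a minimal sketch of what "edges materialized within vertices" means; the names here are hypothetical, not any particular database's API:

```python
# Each node record carries its adjacency inline (denormalized), so a
# traversal is a single lookup with no edge-table join. The trade-off:
# updating an edge means rewriting the node record that contains it.
node_store = {
    "alice": {"props": {"age": 34}, "out": ["bob", "carol"]},
    "bob":   {"props": {"age": 41}, "out": ["carol"]},
    "carol": {"props": {"age": 29}, "out": []},
}

def neighbors(v):
    # One lookup, no join against a separate edge table.
    return node_store[v]["out"]

print(neighbors("alice"))  # ['bob', 'carol']
```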
https://news.ycombinator.com/item?id=29737326
Kuzu folks took some of these discussions and implemented them. SIP, ASP joins, factorized joins and WCOJ.
Internally it's structured very similarly to DuckDB, except for the differences noted above.
DuckDB 1.5 implemented sideways information passing (SIP). And LadybugDB is bringing in support for DuckDB node tables.
So the idea that graph databases have shaky internals stems primarily from pre-2021 incumbents.
4 more years to go to 2030!
> There are some additional optimizations that are specific to graphs that a relational DBMS needs to incorporate: [...]
This is essentially what Kuzu implemented and DuckDB tried to implement (DuckPGQ), without touching relational storage.
The jury is out on which one is a better approach.
As I like to point out, for two decades DARPA has offered to pay many millions of dollars to anyone who can demonstrate a graph database that can handle a sparse trillion-edge graph. That data model easily fits on a single machine. No one has been able to claim the money.
Inexplicably, major advances in this area 15-20 years ago under the auspices of government programs never bled into the academic literature even though it materially improved the situation. (This case is the best example I've seen of obviously valuable advanced research that became lost for mundane reasons, which is pretty wild if you think about it.)
I wonder why no one has claimed it. It's possible to compress large graphs to 1 byte per edge via graph reordering techniques, so a trillion-edge graph becomes 1 TB, which can fit on high-end machines.
Obviously it won't handle high write rates and mutations well. But with Apache Arrow based compression, it's certainly possible to handle read-only and read-mostly graphs.
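The arithmetic behind that claim, as a quick sanity check (the 8-byte raw figure assumes 64-bit neighbor IDs, and the 1-byte figure is the compression result claimed above, not a measured number):

```python
# Back-of-envelope storage for a sparse trillion-edge graph.
# Assumes adjacency lists stored as 64-bit neighbor ids raw, vs
# ~1 byte/edge after vertex reordering + delta + varint encoding.
edges = 10**12
bytes_per_edge_raw = 8          # one 64-bit neighbor id per edge
bytes_per_edge_compressed = 1   # claimed post-reordering figure

raw_tb = edges * bytes_per_edge_raw / 1e12
compressed_tb = edges * bytes_per_edge_compressed / 1e12
print(raw_tb, compressed_tb)  # 8.0 1.0
```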
Also, the single-machine constraint feels artificial. For any columnar database written in the last 5 years, implementing object store support is table stakes.
There is no single machine constraint, just the observation that we routinely run non-graph databases at similar scale on single machines without issue. It doesn't scale on in-memory supercomputers either, so the hardware details are unrelated to the problem:
- A graph database with good query performance typically has terrible write performance. It doesn't matter how fast queries are if it takes too long to get data into the system. At this scale there can be no secondary indexing structures into the graph; you need a graph cutting algorithm efficient for both scalable writes and join recursion. This was solved.
- Graph workloads break cache replacement algorithms for well-understood theory reasons. Avoiding disk just removes one layer of broken caching among many but doesn't address the abstract purpose for which a cache exists. This is why in-memory systems still scale poorly. We've known how to solve this in theory since at least the 1980s. The caveat is it is surprisingly difficult to fully reduce to practice in software, especially at scale, so no one really has. This is a work in progress.
- Most implementations use global synchronization barriers when parallelizing algorithms such as BFS, which greatly increases resource consumption while throttling hardware scalability and performance. My contribution to research was actually in this area: I discovered a way to efficiently use error correction algorithms to elide the barriers. I think there is room to make this even better but I don't think anyone has worked on it since.
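For reference, a minimal single-threaded sketch of the level-synchronous BFS pattern described in that last point; in a parallel implementation, the boundary between frontier levels is a real global barrier that every thread must reach before the next level starts:

```python
# Level-synchronous BFS: the loop boundary between frontiers is the
# global synchronization barrier being criticized above -- no work on
# level d+1 may begin until every edge of level d has been processed.
def bfs_levels(adj, source):
    """adj: dict mapping vertex -> list of neighbors."""
    level = {source: 0}
    frontier = [source]
    d = 0
    while frontier:
        # <-- in a parallel version, a barrier sits here
        next_frontier = []
        for u in frontier:
            for v in adj.get(u, []):
                if v not in level:
                    level[v] = d + 1
                    next_frontier.append(v)
        frontier = next_frontier
        d += 1
    return level

# Toy graph: 0 -> {1, 2}, 1 -> 3, 2 -> 3
adj = {0: [1, 2], 1: [3], 2: [3], 3: []}
print(bfs_levels(adj, 0))  # {0: 0, 1: 1, 2: 1, 3: 2}
```

The barrier-elision work described above would replace that per-level synchronization with something asynchronous, repairing any resulting inconsistencies after the fact.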
The pathological cache replacement behavior is the real killer here. It is what is left even if you don't care about write performance or parallelization.
I haven't worked in this area for many years but I do keep tabs on new graph databases to see if someone is exploiting that prior R&D, even if developed independently.
There are countless smaller graphs for narrow domains that may be <1B edges but many people have the ambition to stitch together these narrow graphs into a larger graph. When stitching graphs together, the number of edges is usually super-linear. A billion edges is kind of considered “Hello World” for system testing.
The Semantic Web companies in the 2000s had graphs that were 100B+ edges. They wanted to go much larger but hit hard scaling walls around that point. That scaling wall killed them.
Classic mapping data models are typically 10-100B edges. These could be much, much larger if they could process all the data available to them.
Of course, intelligence agencies had all kinds of graphs far beyond trillions of edges 20 years ago. People, places, things, events.
Any type of spatiotemporal entity graph with large geographic scope is quadrillions of edges. It isn't just a lot of inferred relationships between entities; the relationships evolve over time, which also must be captured. These are probably the most commercially valuable type of graph. You could build hundreds of different graphs of this type with 1T+ edges in most regions, never mind doing it at scale. These are so large that we usually don't store them. Subgraphs are generated on demand, which is computationally expensive.
These spatiotemporal entity graphs also have the largest write loads. Single sources generate tens of PB/day of new edges. There is a ton of industrial data that looks like this; it isn’t just people slinging structured data.
Graphs are everywhere but we furiously avoid them because the scalability of operations over anything but severely constrained graphs is so poor. Selection bias.
NSA in particular heavily funded foundational theoretical and applied computer science research into scaling graph computing for decades. They had all kinds of boring graphs where trillions of edges was their Tuesday. The US military also uses large graph databases in fairly boring applications that probably didn’t require a graph database.
Would you please share some more info about this? Were the advances implemented in software and never written up and published? What are the names of the government programs?
> We've all seen open source projects drag on for 3 years without shipping anything, that's not necessarily better
There are more options than “never ship anything” and “use AI to slip 200,000 lines of code into a codebase”
The first version was largely a (slightly rearchitected) port of a local graph database I had been building called graphos. Most of the engine and core are handwritten, as are the Python bindings and conformance tests. The rest is indeed largely AI generated, as is the documentation (MkDocs). The AI-generated parts are curated and validated, although it's not up to par for a production release yet.
This is not a resume portfolio project and is in no way related to my day job. I started writing grafeo (then graphos) out of frustration with Neo4j, inspired by some discussions about database internals with Hannes from DuckDB at a conference. I tried ladybug, but found the memory usage insanely high and was sure I could do better. Anyone looking for an embedded, battle-tested graph database should probably still look at ladybug though. Grafeo is not that mature yet.
And to be honest, I also have no real plans for grafeo. I am using it myself for now and am very happy with it, but that's n=1. It's fully free and open source and contributors are very welcome, but it's also not yet fully where I would want it to be, hence the beta status. I have no commercial interest, but I had a lot of fun pouring multiple hundreds of hours in and creating something that I enjoy using myself.
Hope that clarifies some things!
* it is possible to write high quality software using GenAI
* not using GenAI could mean project won't be competitive in current landscape
Why? This is false in my opinion; iterating fast is not a good indicator of quality or competitiveness.
Every example you mentioned is something you shouldn't delegate to LLMs, except for quick prototyping.
It works very well for me; an LLM with guidance produces good-quality code.
From examining this codebase, it doesn't appear to have been written carefully with AI.
It looks like code that was prompted into existence as fast as possible.
Because the latter is really dumb. I don't mind software written in C, although I personally wouldn't want to write it anymore.
https://github.com/agnesoft/agdb
Ah, yeah, a different query language.
Claude helped a lot but it's all reviewed and curated by me.
Author of ArcadeDB critiques many nominally open source licenses here:
https://www.linkedin.com/posts/garulli_why-arcadedb-will-nev...
What is a graph database is also relevant:
- Does it need index free adjacency?
- Does it need to implement compressed sparse rows?
- Does it need to implement ACID?
- Does translating Cypher to SQL count as a graph database?

When you actually need to run graph algorithms against your relational data, you export the subset of that data into something like Grafeo (embedded mode is a big plus here) and run your analysis.
It's possible to run Cypher against DuckDB (soon Postgres as well, via DuckDB's postgres extension) without having to import anything. That's a game changer when everything is in the same process.
https://archive.fosdem.org/2025/schedule/event/fosdem-2025-5...
Full history here: https://www.linkedin.com/pulse/brief-history-graphs-facebook...
Typically used with scale-out DBs like Databricks & Splunk for analytical apps: security/fraud/event/social data analysis pipelines, ML+AI embedding & enrichment pipelines, etc. We originally built it for the compute-tier gap here, to help Graphistry users who are making embeddable, interactive GPU graph viz apps and dashboards and don't want to add an external graph DB phase to their interactive analytics flows.
Single GPU can do 1B+ edges/s, no need for a DB install, and can work straight on your dataframes / apache arrow / parquet: https://pygraphistry.readthedocs.io/en/latest/gfql/benchmark...
We took a multilayer approach to the GPU & vectorization acceleration, including a more parallelism-friendly core algorithm. This makes fancy features pay-as-you-go vs dragging everything down as in most columnar engines that are appearing. Our vectorized core conforms to over half of TCK already, and we are working to add trickier bits on different layers now that flow is established.
The core GFQL engine has been in production for a year or two now with a lot of analyst teams around the world (NATO, banks, US gov, ...) because it is part of Graphistry. The open-source cypher support is us starting to make it easy for others to directly use as well, including LLMs :)
I mean, I understand, some people have fun looking at new tech no matter the source, but my question is: is there a person who would be designated to pick a graph DB and who would ignore all the LLM flags and put it in production?
There's no big mystery. No conspiracy or organised evangelism. Rust is just really good.
> I doubt Rust gives any meaningful advantage there.
Advantage over what? Haskell & OCaml? Maybe not. C++ or Python? Absolutely. Its type system is far stronger than those, and its APIs are much better designed and harder to misuse.
(Which may not all be true, but perhaps moreso than your average project)
This is another one of the vibe-coded slop projects that are routinely frontpaging HN now. As someone else pointed out, the single author has "written" >100kLOC in diffs per week. It's not possible that any human knows what's in the codebase in any reasonable detail.
Don't get me wrong, graphs have interesting properties and there's something intriguing about these dynamic, open-ended queries. But what features/products/customer journeys are people building with a graph DB?
Every time I explore, I end up back at "yeah, but a standard DB will do 90% of this at 10% of the effort".
Some apps want it to be deterministic.
I'm surprised this question comes up so often.
It's mainly from the vector embedding camp, who rightfully observe that vector + keyword search gets you to 70-80% on evals. What is all this hype about graphs for the last 20-30%?
Do you have any good demos to showcase where graph DBs clearly have an advantage? It's mostly just toy demos.
Vector embeddings, on the other hand, no matter how limited, have clearly proven themselves useful beyond YouTube/LinkedIn thought-leader demos.
My other favorite quote: transformers are GNNs which won the hardware lottery.
Longer form at blog.ladybugmem.ai
You want to believe that everything probabilistic has more value and determinism doesn't? Or that the world is made up of tabular data? You have a lot of company.
The other side of the argument I believe has a lot of money.
https://www.anthropic.com/research/mapping-mind-language-mod...
https://research.google/blog/patchscopes-a-unifying-framewor...
I read the blog post and your website, but unfortunately they didn't help change my perspective.
Thanks for the share
I know pglite, and while it's great someone made that, it's definitely not the same
Just in case folks here were wondering if I'm some type of a graphdb bigot.
I wonder how you reconcile the demand for LLMs with multihop reasoning with the statement above.
I think a lot of what is stated here is how things work today and where established companies operate.
The contradictions in their positions are plain and simple.
https://github.com/LadybugDB/ladybug/discussions/204#discuss...
There are second-order optimizations that LLMs logically implement that CSR-implementing DBs don't. With sufficient funding, we'll be able to pursue those as well.
Perhaps people can invent LSM-like structures on top of them.
But at least establish that CSR on disk is a basic requirement before you claim that you're a legit graph database.
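For anyone unfamiliar, a minimal in-memory sketch of the CSR layout in question; an on-disk CSR adds paging and compression on top of this, but the two-array shape is the same:

```python
# Compressed sparse row (CSR) adjacency: two flat arrays, where the
# out-neighbors of vertex v are nbrs[offsets[v]:offsets[v + 1]].
def build_csr(num_vertices, edges):
    """edges: list of (src, dst) pairs."""
    degree = [0] * num_vertices
    for s, _ in edges:
        degree[s] += 1
    # Prefix-sum the degrees to get each vertex's slice start.
    offsets = [0] * (num_vertices + 1)
    for v in range(num_vertices):
        offsets[v + 1] = offsets[v] + degree[v]
    # Scatter destinations into one contiguous neighbor array.
    nbrs = [0] * len(edges)
    cursor = offsets[:-1].copy()  # next write position per vertex
    for s, d in edges:
        nbrs[cursor[s]] = d
        cursor[s] += 1
    return offsets, nbrs

offsets, nbrs = build_csr(4, [(0, 1), (0, 2), (2, 3)])
print(offsets)  # [0, 2, 2, 3, 3]
print(nbrs)     # [1, 2, 3]
```

The appeal is sequential scans and cache-friendly traversal; the pain point, as noted above, is that inserting an edge shifts everything after it, which is why people reach for LSM-style layering on top.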
A handful of data models have strongly graph-like characteristics where queries require recursive ad hoc joins and similar. If your data is small, this is nominally the use case for a graph database. Often you can make it work pretty well on a good relational database if you are an expert at (ab)using it. Relational databases usually have better features in other areas too.
If you have a very large graph-like data model, then you have to consider more exotic solutions. You will know when you have one of these problems because you already tried everything and everything is terrible. But you still started with a relational database.
The JS tests seem fully AI generated though. And there's a big difference in quality between some of the ecosystem repos: Server, Web, and memory all seem very well developed; llamaindex and langchain are lower effort.
I think the main thing this project needs is more maintainers, but looking purely at the features of this database, and the fact that it's Apache-2.0, it's interesting, at least to me.
Especially with OLAP queries.