Both https://typesense.org/ and https://duckdb.org/ (with its spatial extension) are excellent performance-wise for geo; the latter now seems really production-ready, especially when the data doesn't change that often. Both are fully open source, including clustered/sharded setups.
No affiliation at all, just really happy camper.
We will have some more blog posts in the future describing different parts of the system in more detail. We were worried too much density in a single post would make it hard to read.
A while ago I tried to create something that has duckdb + its spatial and SQLite extensions statically linked and compiled in. I realized I was a bit in over my head when my build failed because both of them required SQLite symbols but from different versions.
But given that duckdb handles "take this n GB parquet file/shard from a random location, load it into memory and be ready in < 1 sec" very well, I'd argue it's quite easy to build something that scales horizontally.
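As a rough sketch of that pattern from Python (the bucket path and column names below are made up, and httpfs is assumed to be available), querying a remote parquet shard looks roughly like this:

```python
import duckdb

con = duckdb.connect()        # in-memory DuckDB
con.execute("INSTALL httpfs") # lets DuckDB read parquet straight from object storage
con.execute("LOAD httpfs")

# DuckDB scans parquet lazily, so even a multi-GB shard is queryable almost
# immediately; bucket and columns here are hypothetical.
rows = con.execute("""
    SELECT species, count(*) AS n
    FROM read_parquet('s3://example-bucket/occurrences/*.parquet')
    GROUP BY species
    ORDER BY n DESC
    LIMIT 10
""").fetchall()
print(rows)
```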
We use it both for the importer pipeline that processes the 2B-row / 200GB compressed GBIF.org parquet dataset and for queries like https://www.meso.cloud/plants/pinophyta/cupressales/pinopsid..., and the sheer number of functions[1] beyond simple stuff like "how close is a/b to x/y" or "is n within area x" is just a joy to work with.
[1] https://duckdb.org/docs/stable/core_extensions/spatial/funct...
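To give a flavour of those functions (a toy table and coordinates, not the actual GBIF schema), here is a point-in-polygon plus distance filter in one query, with distances treated as plain planar degrees for simplicity:

```python
import duckdb

con = duckdb.connect()
con.execute("INSTALL spatial")
con.execute("LOAD spatial")

# toy data: a couple of (name, lon, lat) observations
con.execute("""
    CREATE TABLE obs AS
    SELECT * FROM (VALUES ('a', 8.54, 47.37), ('b', 2.35, 48.85)) t(name, lon, lat)
""")

# point-in-polygon plus a distance filter, instead of hand-rolled geometry math
print(con.execute("""
    SELECT name
    FROM obs
    WHERE ST_Within(ST_Point(lon, lat),
                    ST_GeomFromText('POLYGON((5 45, 15 45, 15 55, 5 55, 5 45))'))
      AND ST_Distance(ST_Point(lon, lat), ST_Point(8.5, 47.4)) < 1.0
""").fetchall())
```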
You can also attach DuckDB to Apache Arrow Flight, which lets it work beyond local operation.
It's a mini-revolution in the OSM world, where most apps have a bad search experience because typos aren't handled.
But since you asked, yes, I actually enjoy commuting when it is less than 30 minutes each way and especially when it involves physical activities. My best commutes have been walking and biking commutes of around 20-25 minutes each way. They give me exercise, a chance to clear my head, and provide "space" between work and home.
During 2020, I worked from home the entire time and eventually I found it just mentally wasn't good for me to work and live in the same space. I couldn't go into the office, so I started taking hour long walks at the end of every day to reset. It helped a lot.
That said, I've also done commutes of up to an hour each way by crowded train and highway driving, and those are...not good.
I don't get this. The idea that 'work-life balance' should mean the two are compartmentalised into specific blocks of time seems counterproductive to me. To me it feels like an unnatural way of living: 8 hours in which I should focus only on work, 8 hours in which I should focus on everything else, followed by 8 hours of sleep. I don't think that is how we are supposed to operate. Even the 8 hours of sleep in one block is not natural, and is a recent invention. Before industrialisation people used to sleep in multiple blocks (Wikipedia: polyphasic sleep).
The idea that you have to be 'on' for 8 hours at a time seems extremely stressful to me. No wonder you need an hour afterwards just to unwind. Interleaving blocks of work and personal time over the day feels much more natural and less stressful to me. WFH makes this possible. If I'm stuck on something, I can do something else for a while, maybe even take a short nap. The ability to focus and do mentally straining work comes in waves for me. Being able to go with my natural flow makes me happier, more relaxed, and more productive.
The key to work/life balance to me is not stricter separation but instead better integration.
Different people are different and can have different preferences.
For me, having different physical spaces helps me focus on work at work and my family at home. When they are the same physical space, both suffer. I'm not saying everyone should feel this way.
The overarching point is everyone is different, ymmv.
This is part of the company culture. If the company respects the boundary between work and personal life, and it's a cultural value, then it shouldn't be a problem for you establishing a space even without going to the office. You just close down your work laptop, put it aside and open it up next time when it's time to work again. Of course, there's stuff like on-call shifts, and there's a temptation to just stay later and finish this one thing, but if the company culture does not expect you to be tethered to work 24x7 then it's doable. If the culture is right, you don't need a physical barrier for this to be doable.
> so I started taking hour long walks at the end of every day to reset. It helped a lot.
A good habit. I don't see why any remote worker couldn't do that.
Learning from smart people, making friends, free food and drinks, a DDR machine
My last office job had none of that. Instead it was just sort of like a depressing, scaled-up version of my home office.
1. It's extremely cold and dark! I must wear extra clothes when going inside and I get depressed at wasting a day of nice weather in what looks like a WW1 bunker.
2. Terrible accessibility for disabled people! (such as myself)
3. Filthy toilets!
4. Internet is slower than at home!
5. Half the team lives somewhere else so all meetings are on teams anyway!
6. They couldn't afford a decent headset, so my head starts hurting after 5 minutes, and I don't have a laptop, so I can't move to a meeting room.
HR really can't understand why, after all these great perks, I insist on wanting to work from home. I am such an illogical person!
There were some hospitalisations from work-related injuries... Regular bullying, threats of violence...
Lovely office culture!
In-house storage/query systems that are not themselves a product being sold are NIH syndrome from a company with too many engineering resources.
Especially in the context of embedding search, which this article is also trying to do. We need databases that can efficiently store/query high-dimensional embeddings and handle the nuances of real-world applications, such as filtered ANN. There is a ton of innovation in this space, and it's crucial to powering the next-generation architectures of just about every company out there. At this point, data stores are becoming a bottleneck for serving embedding search, and I cannot overstate how important advancements here are for enabling these solutions. This is why there is an explosion of vector databases right now.
This article is a great example of where the actual data-providers are not providing the solutions companies need right now, and there is so much room for improvement in this space.
* Filterable ANN decomposes into prefiltering or postfiltering (toy sketch after this list).
* dynamic updates and versioning are still very difficult
* slow building of graph indexes
* adding other signals into the search, such as query time boosting for recent docs.
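As a toy illustration of the first bullet (brute-force dot-product scoring stands in for a real ANN index; all names and data are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
vecs = rng.normal(size=(10_000, 64)).astype(np.float32)  # fake embeddings
in_stock = rng.random(10_000) < 0.1                      # fake metadata filter
query = rng.normal(size=64).astype(np.float32)

def top_k(candidates, k=10):
    # brute-force scoring over the candidate ids; a real system would use an ANN index
    scores = vecs[candidates] @ query
    return candidates[np.argsort(-scores)[:k]]

# Prefiltering: restrict the candidate set first, then search only within it.
pre = top_k(np.flatnonzero(in_stock))

# Postfiltering: search everything, then drop hits that fail the filter.
# With a selective filter you can end up with far fewer than k results,
# which is why neither approach on its own is satisfying.
hits = top_k(np.arange(len(vecs)), k=100)
post = hits[in_stock[hits]][:10]
print(len(pre), len(post))
```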
I don’t disagree these systems can work but innovation is still necessary. We are not in a “data stores are solved” world.
Oh, then you must have the secret sauce that allows scaling ES vector search beyond 10,000 results without requiring infinite RAM. I know their forums would welcome it, because that question comes up a lot
Or I guess that's why you included the qualifier about money to invest
But then the follow-on question arises: "Am I really suffering the same problems that a niche, already-scaled business is suffering?"
A question that is relevant to all decision making. I'm looking at you, people who use the entire react ecosystem to deploy a blog page.
Memory-mapping lets us get pretty far, even with global coverage. We are always able to add more RAM, especially since we're running in the cloud.
Backfills and data updates are also trivial and can be performed in an "immutable" way without having to reason about what's currently in ES/Mongo: we just re-index everything with the same binary on a separate node and ship the final assets to S3.
ParadeDB = Postgres pg_search extension (the base is Tantivy). Need anything else, like vectors or whatever? Get the plugins for Postgres.
The only thing you're missing is an LSM solution like RocksDB. See OrioleDB, which is supposed to become a pluggable storage engine for Postgres but is not yet out of beta.
Feels like people reinvent the wheel very often.
I'd be very happy to use simpler more bulletproof solutions with a subset of ES's features for different use cases.
The one issue I remember: on ES 5, early on, the cluster regularly went down. It turned out some _very long_ input was being passed into the search by a scraper, and it killed the cluster.
The only bigger issue we had was when we initially added 10 nodes to double the initial capacity of the cluster. Performance tanked as a result, and it took us about half a day until we finally figured out that the new nodes were using dmraid (Linux RAID0) and as a result the block devices had a really high default read-ahead value (8192) compared to the existing nodes, which resulted in heavy read amplification. The ES manual specifically documents this, but since we hadn't run into this issue ourselves it took us a while to realise what was at fault.
It's really not something that needs much attention in my experience.
Quickwit[1] looks interesting; found it via a Tantivy reference. It's to Tantivy kind of like ES is to Lucene.
Whenever you see an advertisement like this (these posts are ads for the companies publishing them), they will not be telling you the full truth of their new stack, like the downsides or how serious they can be (if they've even discovered them yet). It's the same for tech talks by people from "big name companies". They are selling you a narrative.
That being said, we are currently working on getting our Google S2 Rust bindings open-sourced. This is a geo-hashing library that makes it very easy to write a reverse geocoder, even from a point-in-polygon or polygon-intersection perspective.
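Not those bindings, but to illustrate the idea with the pure-Python s2sphere port (a bounding rect stands in for a real polygon, and the region id is made up): cover each region with S2 cells once, then reverse geocoding becomes a cell-id lookup.

```python
import s2sphere

# Index time: cover the region with S2 cells and key the region id by cell id.
coverer = s2sphere.RegionCoverer()
coverer.min_level, coverer.max_level, coverer.max_cells = 8, 12, 64
rect = s2sphere.LatLngRect.from_point_pair(
    s2sphere.LatLng.from_degrees(47.0, 8.0),
    s2sphere.LatLng.from_degrees(48.0, 9.0))
index = {cell.id(): "region-42" for cell in coverer.get_covering(rect)}

# Query time: take the point's leaf cell and walk up through its parents,
# looking each one up in the index.
point = s2sphere.CellId.from_lat_lng(s2sphere.LatLng.from_degrees(47.4, 8.5))
match = None
for level in range(point.level(), -1, -1):
    cid = point.parent(level).id()
    if cid in index:
        match = index[cid]
        break
print(match)
```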
2 - It's a great tool with a lot of tuneability and support!
3 - We've been using it for K8s logs and OTEL (with Jaeger). Seems good so far, though I do wonder how the future of this will play out with the $DDOG acquisition.
I'm guessing it's closed-source, *aaS only?
I'm wondering if anyone here has experience with LMDB and can comment on how they compare?
I'm looking at it next for a project which has to cache and serve relatively small static data, and write and look up millions of individual points per minute.
RocksDB can use thousands of file descriptors at once on larger DBs, which makes it unsuitable for servers that may also need to manage thousands of client connections at once.
LMDB uses 2 file descriptors at most; just 1 if you don't use its lock management, or if you're serving static data from a readonly filesystem.
RocksDB requires extensive configuration to tune properly. LMDB doesn't require any tuning.
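For reference, a minimal sketch with the py-lmdb bindings (paths and keys are made up): one environment file, a couple of file descriptors, and essentially no knobs beyond map_size.

```python
import lmdb

env = lmdb.open("/tmp/points.lmdb", map_size=1 << 30)  # 1 GiB maximum map size

with env.begin(write=True) as txn:       # single writer at a time
    txn.put(b"point:1", b"52.5200,13.4050")

with env.begin() as txn:                 # read-only txn; readers don't block writers
    print(txn.get(b"point:1"))
```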
It sounds like they had the wrong architecture to start with and built a database to handle it. Kudos. Most would have just thrown a cache at it or fine-tuned a read-only PostGIS database for the geoip lookups.
Without benchmarks, these are just bold claims we have no way to verify.
Isn't RocksDB just the db engine for Kafka?
Postgres + pg_search (= Tantivy) would have gotten them 80% of the way there. Sure, Postgres really needs a pluggable storage engine for better SSD support (see OrioleDB).
But creating your own database for your own company is just silly.
There is a lot of money in the database market, and everybody wants to do their own thing to tie customers down to those databases. And that is their main goal.