Appreciate the demo: https://deephn.org/?q=apple+silicon
It is fast, but nowhere close to accurate or useful for this specific example. I could not find a way to force the plural form; neither quotes nor a plus sign worked.
Also, would there be any potential issues if the index was mounted on shared storage between multiple instances?
As for shared storage, do you mean something like NAS, or rather object storage like Amazon S3? Cloud-native support for object storage and separating storage from compute is on our roadmap. The challenges will be maintaining low latency and the need for more sophisticated caching.
Keep it up!
Bookmarked.
Or at least lazy loading of the index into RAM (emulating what mmap would do anyway)
It would increase concurrent read and write speed (index loading, searching) by removing the need to lock around seek and read/write calls.
But I would expect that mmap implementations already use io_uring / IoRing.
Yes, lazy loading would be possible, but pure RAM access does not offer enough benefit to justify the effort of replicating much of what memory mapping already provides.
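For anyone wondering what "mmap already does this lazily" looks like in practice, here is a minimal Rust sketch using the memmap2 crate. The file name and offset layout are made up for illustration; this is not SeekStorm's actual code:

    use memmap2::Mmap;
    use std::fs::File;

    fn main() -> std::io::Result<()> {
        let file = File::open("index.bin")?; // hypothetical index file
        // Safety: the file must not be truncated or mutated while mapped.
        let mmap = unsafe { Mmap::map(&file)? };

        // Nothing has been read from disk yet. Touching a byte range faults
        // the corresponding page in on demand, and concurrent readers need no
        // seek lock because each slice access carries its own offset.
        let posting = &mmap[4096..4104]; // pretend a posting lives here
        println!("first posting bytes: {:?}", posting);
        Ok(())
    }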
I've had it in mind for a while to build a fuzzy search tool based on parsing each phrase into concepts, parsing the search query into concepts, and finding the nearest match based on that. ChatScript, linked below, is a C library and very fast.
https://github.com/ChatScript/ChatScript
Looks like it hasn't been committed to in some time; I'll have to check out their blog and see what's up. I guess with the advent of LLMs, dialog trees are passé.
How's SeekStorm's prowess in mid-cap enterprise? How hairy is the ingest pipeline for sources like decade-old SharePoint sites, PDFs with partial text layers, Excel files, email .msg files, etc.?
How did you demo it? Did you spin up your own instance and index the Wikipedia corpus like the docs suggest? I'd like to just give it a whirl on an already-running instance.
Never mind, found that someone posted a link already.
What's interesting is that they have a self-deployable container model[1] that only phones home for billing so you can self-host the runtime and model.
[0] https://learn.microsoft.com/en-us/azure/ai-services/document...
[1] https://learn.microsoft.com/en-us/azure/ai-services/document...
I’ll give it a try this weekend as well.
I use Tantivy, and add refinements like: if the top result is objectively a low-quality one, it's usually a query with a typo finding a document with the same typo, so I run the query again with fuzzy spelling. If all the top results have the same tag (that isn't in the query), then I mix in results from another search with the most common tag excluded. If the query is a word that has multiple meanings, I can ensure that each meaning is represented in the top results.
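A simplified sketch of the first refinement (retry with fuzzy spelling) on top of Tantivy's public API; the score threshold and the single-term fallback here are illustrative, not my production values:

    use tantivy::collector::TopDocs;
    use tantivy::query::{FuzzyTermQuery, QueryParser};
    use tantivy::schema::Field;
    use tantivy::{DocAddress, Index, Score, Term};

    fn search_with_fuzzy_fallback(
        index: &Index,
        body: Field,
        raw_query: &str,
    ) -> tantivy::Result<Vec<(Score, DocAddress)>> {
        let searcher = index.reader()?.searcher();

        // First pass: the query exactly as typed.
        let parser = QueryParser::for_index(index, vec![body]);
        let query = parser.parse_query(raw_query)?;
        let top = searcher.search(&query, &TopDocs::with_limit(10))?;

        // If even the best hit scores suspiciously low, assume a typo matched
        // a document containing the same typo and retry with edit distance 1.
        const LOW_QUALITY: Score = 1.0; // assumed threshold
        if top.first().map_or(true, |(score, _)| *score < LOW_QUALITY) {
            if let Some(word) = raw_query.split_whitespace().next() {
                let term = Term::from_field_text(body, word);
                let fuzzy = FuzzyTermQuery::new(term, 1, true);
                return searcher.search(&fuzzy, &TopDocs::with_limit(10));
            }
        }
        Ok(top)
    }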
When using SeekStorm as a server, keeping per-query latency low increases the throughput and the number of parallel queries a server can handle on given hardware. An efficient search server can reduce the required investment in server hardware.
In other cases, only the local search performance matters, e.g., for data mining or RAG.
Also, it's not only about averages but also about tail latencies. While network latencies dominate the average search time, that is not the case for tail latencies, which in turn heavily influence user satisfaction and revenue in online shopping.
—————
Full version: I run it on a dedicated 2 vCPU / 2 GB machine on DigitalOcean. Every tenant has an index, and I have around 30k searches per week across all tenants. Each tenant has from 1 to 150k documents in their index.

Sentry catches a MeilisearchTimeoutException a couple of times every day with the message that Meilisearch could not finish adding a document to the index. I don’t care too much about that because a background worker is responsible for updating the index, so the task gets rescheduled. I like to keep my Sentry clean, so it’s more of an inconvenience than an issue.

Meilisearch setup is very straightforward: they provide client libraries for almost all languages (maybe even esoteric and marginal ones, I don't know; I only need Python), have pretty decent documentation covering the basics, and don’t really require operations at my scale. I really liked the feature of issuing limited-access tokens to set preconditions; that’s how I limit searches so that a particular user on a tenant sees only their data.
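Under the hood, those limited-access (tenant) tokens are just JWTs signed with the parent search API key, per Meilisearch's documented token shape. A rough sketch (here in Rust for consistency with the rest of the thread; the index uid, filter field, and key values are made up):

    use jsonwebtoken::{encode, EncodingKey, Header};
    use serde::Serialize;
    use std::collections::HashMap;

    #[derive(Serialize)]
    struct TenantClaims {
        #[serde(rename = "searchRules")]
        search_rules: HashMap<String, Rule>, // index uid -> rule
        #[serde(rename = "apiKeyUid")]
        api_key_uid: String,
        exp: u64, // expiry as a unix timestamp
    }

    #[derive(Serialize)]
    struct Rule {
        filter: String,
    }

    fn main() -> Result<(), jsonwebtoken::errors::Error> {
        let mut rules = HashMap::new();
        rules.insert(
            "tenant_42".to_string(), // hypothetical index uid
            Rule { filter: "user_id = 123".to_string() },
        );
        let claims = TenantClaims {
            search_rules: rules,
            api_key_uid: "uid-of-the-search-api-key".to_string(),
            exp: 2_000_000_000,
        };
        // Signed with the parent search API key (HS256 by default).
        let token = encode(
            &Header::default(),
            &claims,
            &EncodingKey::from_secret(b"parent-search-api-key"),
        )?;
        println!("{token}");
        Ok(())
    }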
For SimilarityType::Bm25fProximity, which takes the proximity between query term matches within the document into account, we so far have only anecdotal evidence that it returns significantly more relevant results for many queries.
Systematic relevance benchmarks like BEIR and MS MARCO are planned.
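To illustrate what proximity scoring rewards, here is a toy sketch (not our actual scoring code) that computes the smallest window covering one occurrence of every query term; the smaller the window, the stronger the boost a proximity-aware similarity would give:

    // positions[t] holds the token positions of query term t in one document.
    fn min_window(positions: &[Vec<usize>]) -> Option<usize> {
        // Merge all (position, term_id) pairs and sort by position.
        let mut events: Vec<(usize, usize)> = positions
            .iter()
            .enumerate()
            .flat_map(|(term, ps)| ps.iter().map(move |&p| (p, term)))
            .collect();
        events.sort();

        let n_terms = positions.len();
        let mut counts = vec![0usize; n_terms];
        let mut covered = 0;
        let mut best: Option<usize> = None;
        let mut left = 0;

        // Classic sliding window over the merged position list.
        for right in 0..events.len() {
            let (_, t) = events[right];
            counts[t] += 1;
            if counts[t] == 1 {
                covered += 1;
            }
            while covered == n_terms {
                let window = events[right].0 - events[left].0;
                best = Some(best.map_or(window, |b| b.min(window)));
                let (_, lt) = events[left];
                counts[lt] -= 1;
                if counts[lt] == 0 {
                    covered -= 1;
                }
                left += 1;
            }
        }
        best // smaller window => terms appear closer together
    }

    fn main() {
        // "apple" at positions 3 and 40, "silicon" at 5: tightest window is 2.
        assert_eq!(min_window(&[vec![3, 40], vec![5]]), Some(2));
    }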
Any plan to make it run on WASM? I wanted to add this feature to Tantivy a few years ago but they weren't interested, and I had to fall back to a JavaScript search engine that was much slower.
I don't know SeekStorm's team and I did not dig much into the details, but my impression so far is that their benchmark results are fair. At least I see no reason not to trust them.
Yes, WASM and Python bindings are on our roadmap.
The speed looks great, but isn't everything else already fast enough?
Where I really think faster search will come into play is with AI. There is nothing energy-efficient about how LLMs work, and I really think enterprises will focus on using LLMs to generate as many Q&A pairs as possible during off-peak energy hours, combined with a hybrid search that can bridge semantic (vector) and text search. I think for enterprises the risk of hallucinations (even with RAG) will be too great, and they will fall back to traditional search, but with a better user experience.
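One common way to bridge the vector and keyword result lists is reciprocal rank fusion (RRF); a toy sketch, where the doc ids and the k constant are illustrative:

    use std::collections::HashMap;

    // Fuse several ranked lists: each list contributes 1 / (k + rank) per doc.
    // The constant k dampens the dominance of top-ranked results.
    fn rrf(rankings: &[Vec<u64>], k: f64) -> Vec<(u64, f64)> {
        let mut scores: HashMap<u64, f64> = HashMap::new();
        for ranking in rankings {
            for (rank, doc) in ranking.iter().enumerate() {
                *scores.entry(*doc).or_insert(0.0) += 1.0 / (k + rank as f64 + 1.0);
            }
        }
        let mut fused: Vec<(u64, f64)> = scores.into_iter().collect();
        fused.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
        fused
    }

    fn main() {
        let bm25 = vec![1, 2, 3];   // doc ids from keyword search
        let vector = vec![3, 1, 4]; // doc ids from semantic search
        let fused = rrf(&[bm25, vector], 60.0);
        println!("{:?}", fused); // docs 1 and 3 rise to the top
    }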
Based on the README, it looks like vector search is not supported or planned, but it would be interesting to see whether SeekStorm can do this more efficiently than Lucene/OpenSearch and others. I've only dabbled in the search space, so I don't know how complex this would be, but I think SeekStorm could become a killer search solution if it supported both.
Edit: My bad, it looks like vector search is a PoC.
We’re also at a point where cloud compute is consuming a significant amount of energy globally.
I'm curious about the binary size of it all. Could this be compiled to WASM and run on static pages?
We just released new OpenAPI-based documentation for the SeekStorm server REST API: https://seekstorm.apidocumentation.com
For the library, we have the standard Rust docs: https://docs.rs/seekstorm/latest/seekstorm/
You add the library to your project via 'cargo add seekstorm', and your project has to be compiled anyway.
As for the server, we may add prebuilt binaries for the main operating systems in the future.
WASM and Python bindings are on our roadmap.
The SeekStorm server exposes a REST API over HTTP: https://seekstorm.apidocumentation.com
It also comes with an embedded Web UI: https://github.com/SeekStorm/SeekStorm?tab=readme-ov-file#bu...
Or did you mean a web-based interface to create and manage indices, define index schemas, add documents, etc.?
Performance-wise, it would indeed be interesting to run a benchmark. The third-party open-source benchmark we currently use (search_benchmark_game) does not yet support PostgreSQL. So yes, that comparison is still pending.