OpenData Timeseries: Prometheus-compatible metrics on object storage
9 points | 1 hour ago | 5 comments | opendata.dev | HN
hagen1778
52 minutes ago
Comparing self-hosted prices with managed solutions isn't exactly apples to apples.

But if you do compare: VictoriaMetrics Cloud for 3M active series at twice the ingestion rate (100K samples/s, or a 30s scrape interval) will cost you ~$1k/month plus storage costs.

See https://victoriametrics.cloud/#estimate-cost

apurvamehta
44 minutes ago
Agreed. VictoriaMetrics is indeed a very compelling offering. The disk-less approach is significantly simpler to operate, which I think is the biggest difference. Running opendata's version yourself has fewer moving pieces, and standard operations become trivial because no single service retains permanent state.

It's a meaningful change in the calculus of running it yourself vs. paying someone to do it for you, IMO.

agavra
41 minutes ago
Anecdotally, I've heard confirmations of the challenge of running VictoriaMetrics clusters at scale. They're way better than Cortex/Thanos and they've built a pretty awesome product, but it's still a pretty significant operational burden.
mdwaud
1 hour ago
The "why should I care" is about 3/4 of the way down the page:

> None of these numbers are exact, but the structural gap is clear: a handful of nodes costing roughly $560/month versus $10,000-20,000/month for a managed service at the same scale. As we explained earlier, it’s practical to operate OpenData Timeseries yourself and fully realize these massive cost savings since it isn’t a traditional distributed database that manages partitioned and replicated state.

It doesn't look 100% turn-key, but those are compelling numbers.

apurvamehta
1 hour ago
Good call out, updated the intro with a summary of the cost benefit. Thanks for the feedback!
agavra
1 hour ago
Good point, a tl;dr is probably worthwhile.

It's definitely not quite turn-key just yet, but we've been dogfooding it in production against a moderate metrics use case (~30k samples/s) and have it hooked up to Grafana (you just configure a Prometheus data source and point it at your deployed URL). We run it on a single node with no replicas ;)

valyala
40 minutes ago
Interesting solution! According to the numbers in the "query latency" chapter, a query over cold data that selects samples for 497 time series over a 6-hour time range takes 15 seconds if the queried data isn't in the cache. This means typical queries over historical data will take an eternity to execute ;(
apurvamehta
35 minutes ago
Yes, this is a current issue. There are three solutions:

1. The reason it's slow as you select more series over longer periods of time is that the series have to be pulled for each time bucket in the range, and then the samples have to be pulled for each bucket. By compacting older buckets and merging samples together, historical queries should be pretty comparable to 'more recent' cold queries.
2. We don't pre-cache all the metadata today. If we did, we could parallelize sample loads much more efficiently, lowering latency.
3. There is a lot of room to do better batching and to tune the parallelism of cold reads.
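The compaction idea in point 1 can be sketched as a toy model (this is illustrative only, not OpenData Timeseries code; `Bucket` and `compact` are hypothetical names):

```python
from dataclasses import dataclass, field

@dataclass
class Bucket:
    start: int  # bucket start time (seconds)
    end: int    # bucket end time (seconds)
    # series_id -> [(timestamp, value), ...]
    samples: dict = field(default_factory=dict)

def compact(buckets):
    """Merge adjacent time buckets into one, concatenating samples per series.

    A query over the merged range then costs one object fetch instead of
    one fetch per small bucket for the series index and the samples.
    """
    merged = Bucket(start=min(b.start for b in buckets),
                    end=max(b.end for b in buckets))
    for b in sorted(buckets, key=lambda b: b.start):
        for series_id, pts in b.samples.items():
            merged.samples.setdefault(series_id, []).extend(pts)
    return merged

# Three 1-hour buckets become one 3-hour bucket.
hourly = [
    Bucket(0, 3600, {"up{job=a}": [(30, 1.0)]}),
    Bucket(3600, 7200, {"up{job=a}": [(3630, 1.0)], "up{job=b}": [(3630, 0.0)]}),
    Bucket(7200, 10800, {"up{job=a}": [(7230, 1.0)]}),
]
big = compact(hourly)
```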

We've only been at this for a couple of months. The techniques to improve latency on object storage are well known; we just have to implement them.

Another benefit is that all the data is on S3, so spinning up more optimized readers to transform older data for more detailed analysis is also an option with this architecture.

valyala
26 minutes ago
Yes, there is a solution for masking read latency on object storage: run many readers in parallel. I tweeted about it some time ago - https://x.com/valyala/status/1965093140525715714
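A toy sketch of the parallel-reader idea (simulated GETs, not a real S3 client): issuing the reads concurrently means wall time approaches the latency of the slowest single read rather than the sum of all of them.

```python
import time
from concurrent.futures import ThreadPoolExecutor

LATENCY = 0.05  # pretend each GET has ~50ms first-byte latency

def fetch_object(key):
    """Stand-in for an object-storage GET."""
    time.sleep(LATENCY)  # simulated round trip
    return f"data:{key}"

keys = [f"bucket/{i}" for i in range(16)]

start = time.monotonic()
with ThreadPoolExecutor(max_workers=16) as pool:
    results = list(pool.map(fetch_object, keys))
elapsed = time.monotonic() - start

# Sequentially this would take roughly 16 * 50ms = 800ms;
# with 16 concurrent readers it approaches a single ~50ms round trip.
```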
apurvamehta
10 minutes ago
+1 to what @agavra said. It's awesome to see you here @valyala. Your writing and talks about timeseries databases were a great inspiration for us. I recall one of your earlier talks about the data layout design of VM; OpenData Timeseries has emulated a lot of it.
agavra
24 minutes ago
The other solution is to aggressively size your disk cache and keep effectively the full working set on disk, using object storage just as a durability layer. Then the main benefit is operational simplicity because you have a true shared-nothing architecture between the read replicas (there's no quorum or hash ring to maintain and no deduplication on read). Obviously you'll have a more expensive deployment topology if you do so, but it's still compelling IMO because you have the knobs to tune whether you want to cache on disk or not.
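A minimal sketch of that read-through pattern, with illustrative names rather than OpenData Timeseries internals: each replica keeps its own local LRU cache sized to the working set, falling back to object storage only on a miss.

```python
from collections import OrderedDict

class ReadThroughCache:
    """Local disk-cache stand-in: object storage is only the durability layer."""

    def __init__(self, fetch, capacity):
        self.fetch = fetch        # miss handler, e.g. an S3 GET
        self.capacity = capacity  # the "aggressively sized" knob
        self._lru = OrderedDict()
        self.misses = 0

    def get(self, key):
        if key in self._lru:
            self._lru.move_to_end(key)  # mark as recently used
            return self._lru[key]
        self.misses += 1
        value = self.fetch(key)         # cold read from object storage
        self._lru[key] = value
        if len(self._lru) > self.capacity:
            self._lru.popitem(last=False)  # evict least recently used
        return value

cache = ReadThroughCache(fetch=lambda k: f"object:{k}", capacity=2)
cache.get("a")
cache.get("b")
cache.get("a")  # warm hit, no extra fetch
```

Because each replica fills its cache independently from the same objects, there is nothing to coordinate between replicas.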
agavra
23 minutes ago
also super cool to see you on here valyala! we took a bunch of inspiration from your work at VM. kudos to all you've done :)
hagen1778
37 minutes ago
I am curious to see more tests on the read path. The article mentions matching 500 series over a 6h window with a 1m step, and it takes 2s with warmed caches. That doesn't sound good at all.

Especially nowadays, when metrics from k8s ramp up churn rates to hundreds of thousands or even millions of series.

agavra
17 minutes ago
This is the biggest gap in the 0.2.1 release. We have a pretty naive query execution engine because we've spent most of the time on core data structures and ingestion.

I have some prototypes of vectorized compute that takes that same query from 2s -> ~800ms, and it's just early days. If you want to contribute to help make it better, the query engine part of it is begging for help!
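For intuition only (a generic illustration, not the actual engine), vectorizing per-sample work means evaluating a step like a per-interval delta over all series as one array operation instead of a Python loop per sample:

```python
import numpy as np

rng = np.random.default_rng(0)
# 500 series x 361 points (a 6h window at a 1m step)
values = rng.random((500, 361)).cumsum(axis=1)  # monotone counter-like series

def deltas_loop(v):
    """Scalar version: one Python-level subtraction per sample."""
    out = []
    for series in v:
        out.append([b - a for a, b in zip(series, series[1:])])
    return np.array(out)

def deltas_vectorized(v):
    """Vectorized version: one call over the whole matrix."""
    return np.diff(v, axis=1)

# Both produce the same result; the vectorized path avoids the
# per-sample interpreter overhead.
slow = deltas_loop(values)
fast = deltas_vectorized(values)
```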

davistreybig
1 hour ago
Wow this is so, so much cheaper than alternatives