For example, CV triage: you use an LLM with a rubric to extract features (choosing which features to rely on does a lot of the work here). Then collect a few hundred examples, label them (accept/reject), and train your trad ML model on top; it won't inherit the LLM's biases.
You can probably use any LLM for feature preparation, and retrain the small model in seconds as new data is added. A coding agent can write its own small-model-as-a-tool on the fly and use it in the same session.
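A minimal sketch of that loop. The LLM call is stubbed out with keyword checks so it runs offline, and the rubric features and the tiny logistic regression are illustrative assumptions, not anyone's actual pipeline:

```python
import math

def extract_features(cv_text):
    # Stand-in for an LLM call: in practice you'd prompt the model with a
    # rubric and parse structured output. Features here are hypothetical.
    t = cv_text.lower()
    return [
        float("python" in t),             # has_required_skill
        float("phd" in t),                # has_advanced_degree
        min(t.count("year"), 10) / 10.0,  # crude experience proxy
    ]

def train_logreg(X, y, lr=0.5, epochs=200):
    # Tiny logistic regression: cheap enough to retrain in seconds
    # (here, milliseconds) whenever new labelled examples arrive.
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            g = 1 / (1 + math.exp(-z)) - yi          # gradient of log-loss
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    z = b + sum(wj * xj for wj, xj in zip(w, x))
    return 1 / (1 + math.exp(-z))

cvs = ["10 years Python, PhD", "1 year Java", "5 years Python", "2 years sales"]
labels = [1, 0, 1, 0]  # accept / reject, labelled by hand
X = [extract_features(c) for c in cvs]
w, b = train_logreg(X, labels)
print(predict(w, b, extract_features("8 years Python experience")))
```

The small model stays inspectable (three weights and a bias), which is the point: the LLM only does feature extraction, the decision is made by something you can retrain and audit.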
Unless by LLM feature extraction you mean something like "have claude code write some preprocessing pipeline"?
I don't understand the parallel here.
@Author - if you see this, is it possible to add comparisons (i.e. "vanilla" inference latencies vs. timber)?
[1] https://gist.github.com/msteiner-google/5f03534b0df58d32abcc... <-- A gist I put together in the past that goes from PyTorch to ONNX and grafts the preprocessing layers onto the model, so you can pass it the raw input.
If you're working on a fraud problem, an open-source fraud model will probably be useless (if one could even exist). And if you own the entire training-to-inference pipeline, I'm not sure what this offers. I guess you can easily swap backends? Maybe for ensembling?
Why not a typical shared library that can be loaded in Python, R, Julia, etc., and run on large data sets without even a memory copy?
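That's essentially what ctypes already gives you. A toy illustration of the no-copy part, using libc's `qsort` as the "shared library" so nothing needs to be compiled: the C code sorts the Python array's own buffer in place.

```python
import array
import ctypes

# dlopen(NULL): exposes libc symbols from the running process (Linux/macOS).
libc = ctypes.CDLL(None)

CmpFunc = ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_void_p, ctypes.c_void_p)
libc.qsort.argtypes = [ctypes.c_void_p, ctypes.c_size_t,
                       ctypes.c_size_t, CmpFunc]

def cmp_ints(a, b):
    # qsort comparator: dereference the two void* as ints.
    ai = ctypes.cast(a, ctypes.POINTER(ctypes.c_int))[0]
    bi = ctypes.cast(b, ctypes.POINTER(ctypes.c_int))[0]
    return (ai > bi) - (ai < bi)

data = array.array("i", [30, 10, 20])
buf = (ctypes.c_int * len(data)).from_buffer(data)  # a view, not a copy
libc.qsort(buf, len(data), ctypes.sizeof(ctypes.c_int), CmpFunc(cmp_ints))
print(list(data))  # the original Python array was sorted in place
```

The same `from_buffer` trick works with NumPy arrays, so a model compiled to a `.so` can read the caller's data directly.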
I know there are specialized trading firms that have implemented projects like this, but most industry workflows I know of still involve data pipelines with scientists doing intermediate data transformations before they feed them into these models. Even the C-backed libraries like numpy/pandas still explicitly depend on the CPython API and can't be compiled away, and this data-feed step tends to be the bottleneck in my experience.
That isn't to say this isn't a worthy project - I've explored similar initiatives myself - but my conclusion was that unless your data source is pre-configured to feed directly into your specific model without any intermediate transformation steps, optimizing the inference time has marginal benefit in the overall pipeline. I lament this as an engineer who loves making things go fast but has to work with scientists who love the convenience of Jupyter notebooks and the APIs of numpy/pandas.
Rust/Zig/Nim would add toolchain complexity with minimal safety gain for this specific output shape. Those were my considerations.