Data sources:
- Dexcom G7 (cloud API)
- Tandem t:slim X2 and Mobi pumps (direct BLE)
- Nightscout (point it at your existing instance and you're running in minutes)
What the AI layer does:
- Daily briefs summarizing overnight and 24-hour patterns
- Meal response analysis
- Conversational chat with RAG-backed clinical knowledge
- Predictive alerting with configurable thresholds and caregiver escalation
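To make "meal response analysis" concrete, here is a hypothetical sketch (not the project's actual code) of the basic computation such a feature rests on: given a logged meal time, summarize the glucose excursion in the following window.

```python
from datetime import datetime, timedelta

def meal_response(readings, meal_time, window_hours=2):
    """Summarize the post-meal glucose excursion.

    readings: list of (datetime, mg/dL) tuples from the CGM, sorted by time.
    Returns (baseline, peak, delta) for the window after meal_time,
    or None if no readings fall in the window.
    """
    end = meal_time + timedelta(hours=window_hours)
    window = [g for t, g in readings if meal_time <= t <= end]
    if not window:
        return None
    baseline = window[0]
    peak = max(window)
    return baseline, peak, peak - baseline

# Toy data: one reading every 15 minutes after a meal at 12:00.
t0 = datetime(2024, 1, 1, 12, 0)
readings = [(t0 + timedelta(minutes=15 * i), g)
            for i, g in enumerate([110, 125, 150, 170, 160, 145, 130, 120, 115])]
print(meal_response(readings, t0))  # (110, 170, 60)
```

The AI layer would sit on top of aggregates like this rather than raw traces; the function name and return shape here are illustrative assumptions.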
Important: this is monitoring and analysis only. GlycemicGPT does not deliver insulin, does not control your pump, and is not a closed-loop system. It reads your data and gives you insight on top of it. Your clinical decisions stay between you and your care team.
Architecture:
- Self-hosted via Docker or K8s — the GlycemicGPT stack runs entirely on your hardware
- BYOAI — bring your own AI provider. Use Ollama for fully local operation (no data leaves your hardware), or point it at Claude, OpenAI, or any OpenAI-compatible endpoint if you prefer a hosted model. Data flows directly from your instance to the provider you choose; nothing is routed through any centralized service operated by the project.
- GPL-3.0, no subscriptions, no vendor lock-in
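A rough sketch of why "any OpenAI-compatible endpoint" works (hypothetical code, not the project's sidecar): the chat-completions request shape is identical across providers, so only the base URL, model name, and API key change between a local Ollama server and a hosted model.

```python
def chat_request(base_url, model, prompt):
    """Build an OpenAI-compatible chat completion request.

    Ollama exposes the same /v1/chat/completions endpoint locally,
    so swapping providers is just a different base_url and model.
    """
    return {
        "url": f"{base_url.rstrip('/')}/chat/completions",
        "json": {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
    }

# Fully local via Ollama's OpenAI-compatible API:
local = chat_request("http://localhost:11434/v1", "llama3.1",
                     "Summarize my overnight glucose.")
# Hosted provider, same payload shape:
hosted = chat_request("https://api.openai.com/v1", "gpt-4o-mini",
                      "Summarize my overnight glucose.")
print(local["url"])  # http://localhost:11434/v1/chat/completions
```

The helper name and payload wrapper here are made up for illustration; the endpoint path and message format are the standard OpenAI-compatible ones.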
Stack:
- Backend API: FastAPI, Python 3.12, PostgreSQL 16, Redis 7
- Web Dashboard: Next.js 15, React 19, Tailwind CSS, shadcn/ui
- AI Sidecar: TypeScript, Express, multi-provider proxy
- Android App: Kotlin, Jetpack Compose, BLE
- Wear OS: Kotlin, Wear Compose, Watch Face Push API
- Plugin SDK: Kotlin interfaces, capability-based, sandboxed
Looking for contributors — especially folks with BLE/Android experience or anyone in the diabetes tech space. Plugin SDK is documented if you want to add support for new devices. GitHub: https://github.com/GlycemicGPT/GlycemicGPT
Monitoring and analytics are important, but they're a solved problem. A language model can only hallucinate about the relationship between meals and glycemic response. At best it does no harm; at worst it directly misinforms.
We're still debating and trying to understand what impact AI has on software engineering and quality, let alone putting AI into something that's directly linked to a human's well-being.
But I will check this algo out. Maybe it has some interesting bits.
Is your perspective based on principle, or on experience?
The benefits are enormous.
The risks? What risks? No diabetic with baseline adult competence is going to drive their insulin-delivery vehicle off a cliff because some app said so.
And how do you deal with AI hallucinations?
Otherwise, when tuned correctly, oref1 et al. provide amazing results and are safe. It's hard to see where I would use LLMs in this.
The hardest lesson was that an unhealthy lifestyle resulted in diabetes that was harder to manage: too many carbs, not enough exercise, etc. After adjusting my lifestyle, it became quite easy.
The most pain, in my experience, comes from the discrepancy between the CGM-measured value and the prick-test value, even when accounting for time lag. I've used several CGMs and they've all been wildly off sometimes. I have a few T1D acquaintances who relied on their CGM alone and significantly improved their HbA1c after accounting for that.
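The discrepancy described above is usually quantified as MARD (mean absolute relative difference) between paired CGM and fingerstick readings, with the meter value as the reference. A minimal sketch (illustrative numbers, not real sensor data):

```python
def mard(cgm_values, meter_values):
    """Mean Absolute Relative Difference between paired CGM and
    fingerstick readings, in percent. Lower is better; current
    sensors typically advertise roughly 8-10%."""
    assert len(cgm_values) == len(meter_values)
    diffs = [abs(c - m) / m for c, m in zip(cgm_values, meter_values)]
    return 100.0 * sum(diffs) / len(diffs)

# Four hypothetical paired readings (mg/dL):
cgm   = [100, 150, 210, 65]
meter = [110, 140, 180, 80]
print(round(mard(cgm, meter), 1))  # 12.9
```

A tool ingesting both CGM and manual meter entries could surface this per sensor session, which is exactly the kind of check the comment suggests people skip.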
Maybe that information is useful to you.
On your work:
this is legit
it is appreciated
Hats off, I salute this, thank you
It's so helpful to offload some of the thinking about the condition to AI; all these people moaning about 'muh safety' don't get it. T1D sufferers have to think about it all day, all the time. A person doesn't have their own blood glucose data in their head.
Marvin
Probably something like SVM for warnings.
Unless the whole purpose is just daily reports.
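The SVM suggestion above boils down to: extract trend features from the recent CGM trace and apply a learned decision function. A hypothetical sketch of that shape (the feature set and weights here are invented for illustration; a trained linear SVM would supply fitted weights):

```python
def features(trace):
    """Features from the last three CGM readings (mg/dL, 5-min spacing):
    current level, short-term slope, and recent acceleration."""
    level = trace[-1]
    slope = (trace[-1] - trace[-3]) / 2.0            # mg/dL per reading
    accel = trace[-1] - 2 * trace[-2] + trace[-3]    # second difference
    return [level, slope, accel]

def hypo_warning(trace, weights=(-0.05, -1.0, -0.5), bias=4.0):
    """Linear decision function sign(w.x + b), the form a trained
    linear SVM applies at inference. Weights here are illustrative,
    not fitted to any data."""
    score = sum(w * x for w, x in zip(weights, features(trace))) + bias
    return score > 0  # True -> warn of a possible low

falling = [120, 105, 90]   # dropping fast toward hypo range
steady  = [140, 141, 140]
print(hypo_warning(falling), hypo_warning(steady))  # True False
```

Whether a margin classifier beats the simple rate-of-change thresholds existing CGM apps already use is an open question; the daily-reports case needs none of this.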
Do you find the analytics actually help? I.e., a lot of this will depend on what you ate and whether or not you logged it?