The risk-to-benefit ratio of introducing a language model to interpret such clear signals is nowhere near justified.
Monitoring and analytics are important, but they are a solved problem. A language model will only be able to hallucinate about the relationship between meals and glycemic response. At best it does no harm; at worst it can directly misinform.
Yep. The oref1 algorithm is amazing and proven to make diabetics' quality of life better, AND SAFE. I don't understand why you would need to add AI to that mix.
But I will check this algo out. Maybe it has some interesting bits.
We're still debating and trying to understand what impact AI has on software engineering and quality, let alone putting AI into something that's directly linked to a human's well-being.
My experience of using LLMs to pattern-match and cast diagnostic nets is completely the opposite.
Is your perspective based on, say, opinionated principle, or on experience?
The benefits are enormous.
The risks? What risks? No diabetic with baseline adult competence is going to drive their insulin-delivery vehicle off a cliff because some app said so.
> The risks? What risks? No diabetic with baseline adult competence is going to drive their insulin-delivery vehicle off a cliff because some app said so.
My local physician says otherwise, with respect to Facebook posts about dosages. I'm convinced the same applies to LLM-generated content, with respect to people blindly following the computer.
> No diabetic with baseline adult competence is going to drive their insulin-delivery vehicle off a cliff because some app said so.
If you can't trust this thing, then what is it doing? The implication that people who trust this software do not have adult competency is also confusing.
> Is your perspective based on, say, opinionated principle, or on experience?
Your perspective is solely based on recent trauma, so I don't know if it is more reliable in any capacity.
But if someone dies because this thing hallucinates in its reporting, would you feel any sense of culpability?
“GPL says no warranty”
“People need to double check LLM output”
“You’re holding it wrong”
I really don’t know if we, collectively as a civilization, should be willing to accept this kind of hand-waving when it comes to creating things like this.
I think the only thing that could be made better is tuning the I:C/ISF/basal values automatically. ISF is already handled by DynamicISF, which, while not perfect, reduces the variables you have to tweak.
Otherwise, when tuned correctly, oref1 et al. provide amazing results and are safe. Hard to understand where I would use LLMs in this.
You sort of have that, though not automatically: you can run autotune against Nightscout and get a report of where things need to be adjusted. I run oref1 with DynamicISF, and run autotune every few months to tweak values.
I genuinely don't see where I would use an LLM in this process.
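For anyone unfamiliar with that workflow, here is a minimal sketch of pulling readings out of Nightscout for offline analysis, assuming a hypothetical site URL. The `/api/v1/entries.json` endpoint is Nightscout's standard read API for sensor glucose values, but autotune itself does considerably more than this:

```python
# Minimal sketch: pull recent CGM entries from a Nightscout instance.
# The site URL is a placeholder; /api/v1/entries.json is Nightscout's
# standard read endpoint for sensor glucose values ("sgv", in mg/dL).
import requests

NIGHTSCOUT_URL = "https://example-nightscout.example.com"  # placeholder

def fetch_entries(count=288):
    """Fetch the most recent `count` entries (288 ~ one day at 5-minute intervals)."""
    resp = requests.get(
        f"{NIGHTSCOUT_URL}/api/v1/entries.json",
        params={"count": count},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    entries = fetch_entries()
    sgvs = [e["sgv"] for e in entries if "sgv" in e]
    print(f"{len(sgvs)} readings, mean {sum(sgvs) / len(sgvs):.0f} mg/dL")
```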
Looks interesting. As a Whoop user for the last few years, I have seen for myself that their AI Coach / AI-based suggestions are hit or miss 3 out of 10 times, so I'm slightly concerned about how accurate this will be. Not a diabetic patient, but I do monitor my levels with a CGM from time to time; will definitely check it out!
The issue with Whoop’s AI is that there isn’t much data, and the data doesn’t have much prescriptive power, so it can’t really suggest anything useful.
Recovery and Strain scores are made up, and even resting heart rate doesn’t tell you anything prescriptive for the day.
The data available to the LLM in OP’s app is the polar opposite. It’s all actionable and real, so I bet it can draw more useful insights than Whoop reminding you that you didn’t exercise all week.
So, I'm in the medical field building an EMR, and LLMs have obviously been a really important topic in the industry over the last few years. We're still not even sure that giving LLM-assisted suggestions TO ACTUAL DOCTORS AND CLINICIANS will be helpful, let alone to the patients themselves.
It's breaking the golden rule of these tools, which is to have someone with enough knowledge to verify the accuracy of the data they spit out. Patients famously don't. Hell, even the actual staff don't really understand or know how these tools work (or the ways in which you can and can't trust them).
I'm a T1D and tbh it's not that hard to manage, I just wouldn't need that. But for kids or the elderly, I see a use case.
Went through pregnancy with the mother having recently diagnosed T1 diabetes; she was just barely not killed by grave neglect on the part of the healthcare system, given how badly they missed the diagnosis to begin with.
I've done this with the Libre 2 sensor. I added Gemini to it. It gets about two weeks of readings at once, and the user can "chat to their data". I added a meals tool as well, where the user can photograph their meal and the AI estimates the impact on the readings.
It's so helpful to offload some of the thinking about the condition to AI; all these people moaning about 'muh safety' don't get it. T1D sufferers have to think about it all day, every day. A person doesn't have their own blood glucose data in their head.
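A minimal sketch of what that "chat to their data" wiring might look like, assuming the google-generativeai Python SDK; the model name, prompt, and data format here are illustrative guesses, not the commenter's actual code:

```python
# Toy sketch of "chat to your CGM data": dump ~2 weeks of readings into
# the prompt and let the model answer questions about them.
# Assumes: pip install google-generativeai; GEMINI_API_KEY set in the env.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")  # illustrative model choice

def chat_about_readings(readings, question):
    """readings: list of (iso_timestamp, mg_dl) tuples; question: free text."""
    csv_blob = "\n".join(f"{ts},{mgdl}" for ts, mgdl in readings)
    prompt = (
        "Here are CGM glucose readings as 'timestamp,mg/dL' lines:\n"
        f"{csv_blob}\n\n"
        "Answer the user's question about patterns in this data. "
        "Do not give dosing advice.\n"
        f"Question: {question}"
    )
    return model.generate_content(prompt).text

# e.g. chat_about_readings(readings, "Which evenings did I run high?")
```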
How do you protect your life and the life of others using your software against potential lethal errors?
> The risks? What risks? No diabetic with baseline adult competence is going to drive their insulin-delivery vehicle off a cliff because some app said so.
Changing parameters on the insulin pump because the LLM said so.
Neglecting to seek actual medical advice, believing an LLM replaces it.
Misunderstanding medical complexity (i.e. a prescription due to medical history not available to the LLM).
You 1000% don't work with the general public in a tech way.
And how do you deal with AI hallucinations?
The hardest thing to learn was that an unhealthy lifestyle resulted in diabetes that was harder to manage: too many carbs, not enough exercise, etc. After adjusting my lifestyle, it became quite easy.
The most pain, in my experience, comes from the discrepancy between the CGM-measured value and the prick-test value, even when accounting for time lag. I've used several CGMs, and they've all been wildly off sometimes. I have a few T1D acquaintances who relied on their CGM alone and significantly improved their HbA1c after accounting for that.
Maybe that information is useful to you.
On your work: this is legit, it is appreciated. Hats off, I salute this, thank you.
Marvin
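To make the discrepancy point above concrete, here is a toy sketch that pairs each fingerstick check with the nearest lag-shifted CGM reading and reports a MARD-style error. The 10-minute lag and all the numbers are assumptions for illustration, not clinical values:

```python
# Toy comparison of CGM vs. fingerstick values, allowing for sensor lag.
# The 10-minute lag and the sample data are assumptions for illustration.
from datetime import datetime, timedelta

LAG = timedelta(minutes=10)  # assumed interstitial-fluid lag

def nearest_cgm(cgm, t):
    """Return the CGM value whose lag-shifted timestamp is closest to t."""
    return min(cgm, key=lambda r: abs((r[0] + LAG) - t))[1]

def mean_abs_rel_diff(cgm, sticks):
    """cgm, sticks: lists of (datetime, mg_dl). Returns a MARD-style %."""
    diffs = [abs(nearest_cgm(cgm, t) - ref) / ref for t, ref in sticks]
    return 100 * sum(diffs) / len(diffs)

if __name__ == "__main__":
    base = datetime(2024, 1, 1, 8, 0)
    # One hour of synthetic CGM data at 5-minute intervals, slowly rising.
    cgm = [(base + timedelta(minutes=5 * i), 110 + 3 * i) for i in range(12)]
    sticks = [(base + timedelta(minutes=30), 131),
              (base + timedelta(minutes=55), 142)]
    print(f"MARD ~ {mean_abs_rel_diff(cgm, sticks):.1f}%")
```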
Probably something like an SVM for warnings.
Unless the whole purpose is just daily reports.
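A toy sketch of what an SVM-based warning could look like, using scikit-learn on a sliding window of recent readings; the features, labels, and synthetic data are all assumptions, and nothing here is validated for clinical use:

```python
# Toy sketch: SVM that flags "hypo likely soon" from the last six CGM
# readings (30 minutes of history at 5-minute intervals).
# Synthetic data only; a real model would need validated training data.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def make_window(start, slope):
    """Six readings starting at `start` mg/dL, changing by `slope` each step."""
    return [start + slope * i + rng.normal(0, 2) for i in range(6)]

# Label 1 = trajectory heading toward hypoglycemia, 0 = stable.
X = [make_window(120, -8) for _ in range(50)] + \
    [make_window(120, +1) for _ in range(50)]
y = [1] * 50 + [0] * 50

clf = SVC(kernel="rbf", probability=True).fit(X, y)

recent = make_window(115, -7)  # a falling trace
p = clf.predict_proba([recent])[0][1]
print(f"hypo-risk probability: {p:.2f}")
```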
Do you find the analytics actually help? I.e., a lot of this will depend on what you ate and whether or not you logged it?