If it turns out that driving a Prius on Tuesdays slows down Alzheimer’s, a larger pool of subjects would allow us to figure that out.
It's also better for people around the Alzheimer's patient, as it will let them understand why someone's personality and behaviours may be changing, and possibly let them be a bit more forgiving of such changes. It will also give family more time to plan and to understand what health and community services and support are offered wherever they live.
Everyone should assume that they could lose their full mental capacity at any moment. Strokes, brain injuries, etc. can occur at any age with no warning.
My wife and I finally went through the entire estate planning process several years ago. It was comforting to know that we had a plan for a myriad of scenarios (including the incredibly depressing decision of where our assets would go if our entire family—us and our kids—was gone at the same time).
The best these types of drugs can do is give you a few extra months (say 4-6 months). They're not a cure, sadly.
This is all new. There is research hinting at Alzheimer's subtypes, some of which are more likely to respond than others. Even halting the decline is a huge potential breakthrough.
Time will tell if the 30% slowdown continues beyond four years, and/or if earlier treatment with more effective amyloid clearance from newer drugs has greater effects. The science suggests it should.
> her mental acuity scores are (slightly) better than they were last year
My grandfather had a "fall" at work; he then left that job and held down 2 more engineering jobs before he was diagnosed with a condition causing strokes, and subsequent dementia. I got the distinct impression he thought he had more time, but he rapidly declined.
If he knew he was short of time before his rapid decline he probably would have done things differently. Like not buying a house he would later have to sell to pay for aged care.
If he knew he was at risk of a workplace accident he probably wouldn't have worked as an after hours safety engineer at a major treatment plant, where if the worst had happened he could have endangered others.
At a personal level, I've been through this with my grandfather.
I want to know. My family wants to know. I want to prepare because there are things I want to do today that I know I won't be able to do in the future.
In many ways, it's just like many terminal cancer diagnoses. You're going to lose that person, but you have some time.
It's a weird disease, and IMO not even really a disease: it's a bunch of different causes of cognitive impairment under one umbrella that should be separated out much further to find the actual causes and treatments.
(There's enough info in the supplemental link on this page to have an LLM do the Bayes math for you.)
Looks like my prior was not too bad :)
These patients are already seeing doctors. Would you rather your doctor hide the diagnosis just because your disease isn't curable (for now)? It's not like we're testing the whole population en masse.
Even though it cannot be reversed or eradicated (yet, let's hope) detection can allow individuals to adopt interventions that help either adjust their lives to better cope with its progression or help mitigate some of the detrimental behavioral consequences. In addition, if you have family to care for it may be impetus to get certain things in order for them before later stages of the disease, etc. It's horrible and bleak, but I could certainly see why one might want to know.
In the lucky case, it can also relieve anxiety. Even though false negatives may still be possible, receiving a negative detection might give people who have anxiety about certain symptoms relief, since they can rule out (rightly or wrongly) a pretty severe disease.
Getting an accurate diagnosis is always important. Cognitive decline could be caused by other problems, some of which are more treatable than others.
If this test came back negative it would suggest extra testing to rule out other conditions like a brain tumor or hydrocephalus.
Your point at the end is essentially correct. There are a couple of reasons that come to mind:
Early detection lets us test cures more quickly. You can see if the treatment is working without waiting 30 years for symptoms to develop or not. If prevention is all that works, we can verify lifestyle changes, again without having to wait 30 years for symptoms to develop.
Early detection means there's more of a chance of any future treatment succeeding and the patient returning to a normal life. Think of early detection of cancer or heart disease meaning you can be treated with less risky medication and procedures and minimise the damage being done.
Look up some of the more recent treatments - many of which will get better with time. That's why detecting Alzheimer's early is a big deal https://en.wikipedia.org/wiki/Lecanemab
That aside, some moderately effective drugs have recently been approved that can slow down the disease in its early stages. And even if you are not a candidate for these, you can start organising your life while you still can, like moving to an assisted living facility.
It is frankly shocking to think a disease diagnosis would be a useless thing.
https://www.alzheimers.org.uk/news/2025-11-18/promising-rese...
The test is optional. Feel free to skip it.
Tell 50 million people they’re likely to have Alzheimer’s then tell them where to donate towards a cure, or treatments to slow it by a decade.
But apparently your odds go above 30% if you live long enough, so if you could test for being in that cohort I think that result would be too common to actually be devastating.
Pharmaceutical companies have spent something like $50 billion on developing Alzheimer's drugs with, well, the most feeble of straw-grasping to show for it. It's probably the most expensive single disease target (especially as things like cancer are families of diseases)... the failure to have good results isn't for lack of money, and merely throwing more money at it is unlikely to actually make progress towards meaningful treatments.
I just feel the thinking is off, it's like we are trying to treat cuts by removing scabs and scar tissue. We really need deep investigation on the sources, which I feel in many cases are industrial chemicals and how some people's body / immune system respond to them.
One of the most compelling studies I saw was how distance from a Golf Course predicted neurodegenerative diseases, based on their use of certain pesticides.
Someone always says “merely throwing money at the problem…”
What time period was the money spent? The last 25 years?
The United States spends $1 trillion a year in debt interest. $50 billion is nothing
There's Lecanemab and Donanemab. The effects are modest however.
If astronomers announced that a large asteroid might strike Earth in twenty years, and that we currently had no way to deflect it, nobody would respond by saying, “Come back when you already have the rocket.” We would immediately build better telescopes to track it precisely, refine its trajectory models, and begin developing propulsion systems capable of interception. You do not wait for the cure before improving the measurement. You improve the measurement so that a cure becomes possible, targeted, and effective.
Medicine is no different. Refusing to improve early, probabilistic diagnosis because today’s treatments are modest confuses sequence with outcome. Breakthroughs do not emerge from vague labels and mixed populations. They emerge from precise, quantitative stratification that allows real effects to be seen. The danger is not that we measure too early. It is that we continue making irreversible clinical and research decisions using imprecise, binary classifications while biological insight and therapeutic tools are advancing rapidly. Building the probabilistic layer now is not premature. It is how we make future intervention feasible.
This is absolutely nothing like the asteroid example, where knowing that anybody is going to fall victim to it would itself be news of astronomical proportions. Previously there was a high chance the event wouldn't happen, and now it seems likely it will, so that entirely changes the calculus of your priorities.
This just completely destroys the analogy. (There are other reasons it doesn't fit too, but one is enough.)
The other slightly sad fact is that it is also quite likely that any curative treatment will need to be started before you start to show symptoms, because the brain has already lost a lot of its resilience by then.
https://www.theguardian.com/society/article/2024/may/06/scie...
Your reasoning relies heavily on this statement, which is only true if occurrence is entirely random, which is in most cases not true. A condition can easily mask the cause of the condition and then you have a confounder(-s) that you have no way of controlling. If you can build multiple strata with high risk ratios, you can find baseline similarities and differences in those groups. Early detection is highly important in knowing these confounders in the first place and then controlling for; and as GP mentions allows for more targeted research in treatment. Without this we could easily spend all the research effort on the effect (symptom) of a condition without even approaching treatment of the cause, i.e. prevention.
A very similar thing has happened with the infamous atherosclerotic plaques. AFAIK (correct me if you are aware of any evidence) there is currently no mechanistic model of how these atherosclerotic plaques form. Yet we spend so much effort in lowering the symptomatic side of increased cholesterol/LDL (which has well-known positives) even if there are known metabolic pathways for LDL increase, based entirely on correlational studies, when LDL is not even close to being the best predictor of cardiovascular conditions. LDL just happens to be easy to measure in a blood test and easy to control with oral medication.
With that said, lifestyle changes can slow down the onset of Alzheimer's, so knowing the diagnosis isn't totally useless.
I've long had the suspicion that much of what is called Alzheimers or dementia is some form of prion disease. This study doesn't show that, exactly, but it shows that abnormal proteins may be directly correlated.
So - and I'm not saying this is the case - but suppose that the abnormal proteins identified in this study could be transmitted by blood transfusions or organ transplants. Wouldn't that itself be enough for your diagnosis to help you personally not transmit those proteins to someone else?
If your attitude is that no one else in the world matters once you get a bad diagnosis, then nothing really mattered to you before. Other people are working day and night trying to cure you, so there's no cause for that level of nihilism. You may as well try to help from the vantage point you have.
This is an incredibly short sighted, fragile-ego protecting, selfish instinct.
Making plans while you are cognizant is valuable, and the sooner you know, the longer and better plans you can make. Making plans with friends and family should be done sooner than later with these kinds of things.
It absolutely helps personally to know, but people avoid emotional pain like the plague. So they delay and delay, and then the emotional pain is amplified anyway when things come to a head. It really is better to rip that band-aid off sooner... I think.
Late-stage Alzheimer's, if not every stage, is very likely going to involve microscopic-scale physical damage to brain tissue that is functionally irreversible.
Blowing up an asteroid after you can see it in the sky with your naked eye will not save you.
-------------
It is also possible that what we call "Alzheimers" is actually biochemically five different disorders with distinct etiologies that have the same endpoint, and that it turns out we can cure two of them. Differentiating conditions for a biomedical catch-all category would be essential; "How accurate you can get the tests" is inseparable from this process of definition.
While there aren't any cures yet, certain treatments and lifestyle strategies may slow its progression and preserve quality of life for as long as possible. (And the sooner you start with that, the better.)
If I got an early diagnosis, it would motivate me to get my affairs in order to lessen the burden on my family and check off some bucket list items before it's too late. Don't rob me of that opportunity.
Before ordering the test, ask patients "If you were going to get Alzheimer's, would you want to know?"
What if you found for example all the diagnoses happened shortly after an HSV-1 infection? The more information as to what's going on the better, for research at any rate.
Why are you so furious about the idea of people knowing?
A more objective blood test will make for more accurate diagnoses and better treatment.
> If astronomers announced that a large asteroid might strike Earth in twenty years, and that we currently had no way to deflect it, nobody would respond by saying, “Come back when you already have the rocket.”
I don’t think the analogy fits, for a couple reasons.
1. People not wanting to know whether they have Alzheimer’s is because of the fear of a fate worse than death — living with Alzheimer’s.
2. People not wanting to know whether they have Alzheimer’s is not the same as not wanting a way to detect it. As you said, being able to measure it may help lead to a cure/treatment. I doubt people are against improving detection — they may just not want the detection to be applied personally.
Wrote up my current systems understanding here https://metamagic.substack.com/p/the-alzheimers-equation, but it makes clear why treatments that target only one variable are mathematically doomed to fail to work on everyone and why there will never be a single "cure". It explains without needing to read 10,000 papers why we keep getting research talking about treatment X helps in some, but not all cases or symptom Y is associated in some, but not all, etc.
I'm not saying you're wrong, just that the level of confidence in your assertions is not warranted.
But that is sort of the point of science, you take all the evidence you have and create a hypothesis and iterate as you get more evidence. If I find evidence that suggests something else then I will be happy to tweak or abandon this. My level of confidence comes from the existing evidence and lack of evidence otherwise.
It is a tale as old as time. See the story behind the term. ultracrepidarian: https://en.wiktionary.org/wiki/ultracrepidarian#English
See also: https://www.science.org/content/article/potential-fabricatio...
Amateurs asserting their opinions as facts isn't great, but epistemologically it's no worse (and systemically, likely less harmful) than when the experts do it.
Compare this with an amateur writing with certainty about a subject that subject matter experts continue to debate after decades of work.
I know which one of the two I would rather bother listening to.
Saying that experts are less likely to do X doesn't say anything about the relative harm of their doing so. If some rando on the street is shouting their opinion about what causes Alzheimer's and asserting it's God's Own Truth, it's going to cause less overall harm than a carefully worded (but equally wrong) statement from an expert. (And the fact that we tend to hold experts in higher regard is the reason we should be more concerned about them stating their opinions as facts than about amateurs doing the same.)
I am absolutely not going to plan on a care facility right now. That sounds absolutely bogus.
If I were likely to develop Alzheimer's, I'd make more extensive (and more expensive) arrangements for power of attorney and trusts to shield assets while I was competent to do so.
Knowing an early, painful fate allows you to approach it with dignity.
While you're right from the perspective of humanity taking the steps of gathering data then tackling the disease, most developed countries have single payer healthcare systems that require some level of cost-benefit analysis to approve covering new diagnostic systems.
Alzheimer's disease progression doesn't seem to have any notable preventative indications other than 'eat well, exercise and stay mentally active', all of which are standard recommendations.
Recall that this isn't an issue of deciding between funding and non-funding. It's an issue of deciding between funding Alzheimer's diagnostics, new GLP-1 agonists, new screening options for highly preventable cancers, etc. Building out a dataset is nice, but unless that's surplus money redirected from other programs, it's going to come at a real flesh-and-blood cost.
Imagine you're born and you eventually learn that there's an asteroid on a collision course with earth, from way before you were born. It's going to take many years to get here and you may die before it hits and so far no scientists have been able to come up with a way to deflect it. Do you care?
Adding newness to the situation makes it wildly different.
That's not what would happen. We wouldn't mobilize. We'd fragment. Within days, the prediction would be declared partisan. One bloc would call it settled science; another would call it statistical hysteria. Billionaires would quietly commission private shelters while publicly funding studies questioning whether the asteroid even qualified as "large." News panels would debate whether the projected impact zone was being unfairly politicized. Conspiracy channels would insist the asteroid was fabricated to justify global governance. Others would insist the real asteroid was being hidden. Amateur analysts would flood the internet with homemade trajectory charts proving the professionals wrong. Death threats would arrive in astronomers' inboxes faster than research grants.
People can be fine with being tested so that epidemiologists can work on growing our knowledge and, at the same time, not wanting to know their own diagnosis.
Is this a response to another comment or did I miss the quote in the article? Otherwise it’s just a straw man.
I do want to know.
If it is positive, that is still helping you accurately deal with whatever is happening to you.
If you don't know something the rest of us don't, don't be so arrogant about your pet theories. Such arrogance costs lives.
Systematic review + meta-analysis (basal microbiota; AD): https://alz-journals.onlinelibrary.wiley.com/doi/abs/10.1002... https://pmc.ncbi.nlm.nih.gov/articles/PMC11672027/
Replicated case-control + functional metagenomics (AD dementia): https://alz-journals.onlinelibrary.wiley.com/doi/full/10.100...
Large-cohort metagenomics (stage-specific / early pathology signals): https://pubmed.ncbi.nlm.nih.gov/40164697
Mendelian randomization (AD): https://pubmed.ncbi.nlm.nih.gov/38788075/ https://www.sciencedirect.com/science/article/pii/S227458072... https://pubmed.ncbi.nlm.nih.gov/40665707/ https://journals.sagepub.com/doi/10.1177/25424823261422629 https://www.medrxiv.org/content/10.1101/2025.08.20.25333769v...
Narrative / mechanisms (useful synthesis, not primary causal proof): https://www.sciencedirect.com/science/article/abs/pii/S15681...
MILD COGNITIVE IMPAIRMENT (MCI) / DEMENTIA (cognition-focused evidence) Systematic review (MCI or Alzheimer’s dementia; PRISMA, PROSPERO): https://www.mdpi.com/2035-8377/17/10/155 https://pubmed.ncbi.nlm.nih.gov/41149776/
Scoping review (MCI and AD gut microbiomes + interventions, through Feb 2023): https://pmc.ncbi.nlm.nih.gov/articles/PMC12825029/
Systematic review of microbiota-targeted interventions for cognition/dementia risk: https://www.sciencedirect.com/science/article/pii/S027153172...
RCT-focused systematic review/meta-analysis of probiotics for cognitive impairment risk/AD/MCI: https://pmc.ncbi.nlm.nih.gov/articles/PMC12645680/
FMT in dementia/MCI context (review of effects across neuro cohorts): https://www.sciencedirect.com/science/article/pii/S266635462...
MULTIPLE SCLEROSIS (MS) Microbiome signatures via global data integration / ML: https://pmc.ncbi.nlm.nih.gov/articles/PMC12383397/
Systematic review/meta-analysis of probiotics in MS (preclinical + clinical): https://journals.plos.org/plosone/article?id=10.1371/journal...
Systematic review + meta-analysis (antimicrobial exposure and MS risk; microbiome-disruption relevant): https://www.sciencedirect.com/science/article/abs/pii/S22110...
Mendelian randomization (gut microbiota causally linked to MS): https://pubmed.ncbi.nlm.nih.gov/39065244/ https://www.mdpi.com/2076-2607/12/7/1476
Broad MS gut dysbiosis and therapeutic modulation review: https://pmc.ncbi.nlm.nih.gov/articles/PMC12668904/
MS gut-brain-barrier and intestinal barrier review: https://www.frontiersin.org/journals/immunology/articles/10....
Example MS cohort biomarker/signature work: https://www.nature.com/articles/s41598-024-64369-x https://www.nature.com/articles/s41598-025-19998-1
PARKINSON’S DISEASE (PD) Multi-cohort metagenomic meta-analysis (Nat Commun, 2025): https://www.nature.com/articles/s41467-025-56829-3 https://pubmed.ncbi.nlm.nih.gov/40335465/
Large metagenomics cohort (Nat Commun, 2022): https://www.nature.com/articles/s41467-022-34667-x
Integrated multi-cohort gut metagenome (Movement Disorders, 2023): https://pubmed.ncbi.nlm.nih.gov/36691982/ https://movementdisorders.onlinelibrary.wiley.com/doi/10.100...
Metagenomic analysis (Movement Disorders, 2024): https://pubmed.ncbi.nlm.nih.gov/39192744/
PD causal-inference and MR discussion (review-type synthesis): https://pmc.ncbi.nlm.nih.gov/articles/PMC12512240/
MR study example (PD gut microbiota): https://journals.lww.com/md-journal/fulltext/2025/10310/caus...
Yes. I also realise they have not reached the conclusion of this investigation. (Imagine this attitude towards a police investigation: "They're investigating Roger Rabbit, therefore he must have dunnit!")
Left untreated for a very long time (decade+), it spreads to the brain and causes dementia among other things. Older generations with stigmas, taboos, or from lower educational backgrounds seem (to me) less likely to get tested, so it seems plausible.
Source: Have recently discovered this myself with a family member from their neurologist.
The reason this was detected is that such testing is a standard practice with new dementia patients—among many other tests that identify etiologies of dementia.
No need for a 'PSA'.
We only found out for my family member after the 3rd neurologist's opinion after ~2 years of this.
Not everyone does their professional due diligence - cue endless anecdata about the healthcare industry. It's good to just be aware.
Perhaps—but it's also possible that whoever was in the room with the patient declined STI testing (which I have seen, and which sometimes reflects lack of knowledge around extramarital affairs).
I'm just trying to make it clear that there are dozens of reversible/non-degenerative causes of dementia and there is no way that a fully-trained neurologist doesn't have these memorized.
It's like not knowing what a type system is as a programmer with a reputable degree—impossible.
edit: in fairness, many doctors have unease around discussing sex/infidelity—but the PSA maybe should be to encourage your doctor to put aside concerns around niceties in your parent's care.
And "The effect of shingles vaccination at different stages of dementia" https://news.ycombinator.com/item?id=46164646 (yes, also the Herpes family).
much of the medical profession seems a bit behind the curve on recent findings.
He was quite sane in his life and his work expanding on Marx is of course extremely important.
Basically, almost everybody doesn't have Alzheimer's. Sampling from the general population you can get better than 94.5% accuracy just by returning negative on every test. You have to know sensitivity and specificity separately to make any informed judgement ... which they try extremely hard not to tell you.
No it's not, that's a reported mean, presumably with the right number of significant digits.
If you want to criticize the variance/stddev, do so, but you picked the wrong metric if that's what you wanted to complain about.
def has_alzheimers(patient):
return False
What accuracy does this have? What you're looking for is called "signal detection theory".
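A minimal sketch of the point above, assuming the ~5.5% prevalence implied by the 94.5% figure (55 cases per 1,000 people — illustrative numbers, not from the article):

```python
def evaluate(predict, cases, non_cases):
    """Return (accuracy, sensitivity, specificity) for a classifier."""
    tp = sum(predict(p) for p in cases)          # true positives
    tn = sum(not predict(p) for p in non_cases)  # true negatives
    accuracy = (tp + tn) / (len(cases) + len(non_cases))
    sensitivity = tp / len(cases)
    specificity = tn / len(non_cases)
    return accuracy, sensitivity, specificity

def has_alzheimers(patient):
    return False  # the trivial "always negative" test

# 55 cases and 945 non-cases out of 1,000 people (5.5% prevalence)
acc, sens, spec = evaluate(has_alzheimers, cases=range(55), non_cases=range(945))
print(acc, sens, spec)  # 0.945 0.0 1.0
```

94.5% "accuracy" with zero sensitivity: it never finds a single case, which is exactly why a headline accuracy number alone tells you almost nothing.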
Even without this method, doctors have been able to give a diagnosis with 75.5% accuracy (according to the paper's claim).
I.e. it needs the original 75% accuracy or so and boosts it another 20%.
The problem is that the assessment itself is slow, expensive and requires skill.
What we really want from a test is high specificity (a positive test means you very likely have it) and high sensitivity (a negative test means you very likely don't).
This is how we can offer screening.
"A narrative review on the effects of a ketogenic diet on patients with Alzheimer's disease"
https://www.sciencedirect.com/science/article/pii/S127977072...
"Effects of ketogenic diet on cognitive function of patients with Alzheimer's disease: a systematic review and meta-analysis"
And anecdotes from the field:
https://www.youtube.com/watch?v=s86CFw0qhVc
Revolutionizing Assisted Living: Hal Cranmer's Ketogenic & Carnivore Approach to Senior Wellness / Metabolic Mind
One interesting check in this study might be to see when (or if) any of the participants had taken this vaccine and what the impact might be on an Alzheimer's diagnosis.
It's used to refine clinical diagnosis after patients present with severe cognitive decline.
By the time someone gets this test, they have severe problems. The purpose of this test is to assist with the right diagnosis.
If you have a prevalence of 10 in 1000, how do the numbers shake out?
Well, you test all 1,000, assuming 95% accuracy for both false positives and false negatives.
Of the 990 you test who don't have the disease, the test will falsely state that ~50 do have the disease. Yikes!
And of the 10 that do have the disease? You'll miss 1 of them.
It's not terrible. This is a relatively good number. Diagnostics is just terribly difficult.
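The arithmetic above can be worked through directly (using the parent comment's assumed numbers: 10 cases in 1,000, and sensitivity = specificity = 95%):

```python
population = 1000
sick = 10                      # prevalence: 10 in 1,000
healthy = population - sick    # 990
sensitivity = 0.95             # P(test positive | has disease)
specificity = 0.95             # P(test negative | no disease)

true_positives = sick * sensitivity            # 9.5: you miss ~1 of the 10
false_positives = healthy * (1 - specificity)  # ~49.5: about 50 false alarms

# Of everyone who tests positive, the fraction actually sick
# (the positive predictive value, PPV):
ppv = true_positives / (true_positives + false_positives)
print(f"PPV = {ppv:.1%}")  # ~16%: only about 1 in 6 positives is a real case
```

So even a "95% accurate" test, applied at this prevalence, produces roughly five false alarms for every true case — which is why such tests are screeners, not final diagnoses.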
Spoilers: it's anywhere between 1-15% and 5-30% for false positives, and 1-15% to 5-40% for false negatives. That's imaging, biomarkers, cancer screenings, etc.
Like, where do you think the concept of "second opinions" came from? Whimsy? Let's go ask a second doctor if I actually have cancer, it'll be fun!
This statement is quite broad and misses several important factors.
First of all, a test's sensitivity and specificity. The math in your example assumes a balanced test, but on what basis? The math comes out quite different for high-sensitivity or high-specificity tests. (Unfortunately, I could not find the numbers for the test in the linked article.)
Secondly, whom are we testing? The prevalence rate in your example (1%) is unrealistically low even for the general population. But would we screen the general population? No, we'd screen high-risk groups: the elderly, those with certain APOE genotypes etc. Predictive values of a test depend hugely on the prevalence rate.
Lastly, it depends on how the results are used. If it's a high-sensitivity test used to decide whom to send to the next tier in a multi-tier diagnostic system, it could actually be quite effective at that (very rarely missing the disease while greatly reducing the need for more expensive or more invasive testing).
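The prevalence point can be made concrete with a small sketch. The 95%/95% sensitivity and specificity figures below are assumptions for illustration (as noted, the linked article's actual numbers aren't available), applied via Bayes' rule:

```python
def ppv(prevalence, sensitivity, specificity):
    """P(disease | positive test), by Bayes' rule."""
    tp = prevalence * sensitivity              # true positive rate in the group
    fp = (1 - prevalence) * (1 - specificity)  # false positive rate in the group
    return tp / (tp + fp)

# Same test, different groups: general population vs. high-risk cohorts
for prev in (0.01, 0.10, 0.30):
    print(f"prevalence {prev:.0%}: PPV = {ppv(prev, 0.95, 0.95):.1%}")
# PPV climbs from ~16% at 1% prevalence to ~68% at 10% and ~89% at 30%
```

The identical test goes from mostly-false-alarms to mostly-right purely as a function of whom you screen, which is the whole argument for restricting it to high-risk groups or using it as the first tier of a multi-tier workup.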