Expectations rise with the level of development. No excuses in America.
The Doctor is better technology created by socialists for the purpose of benefiting the crew. Anything created by Dr Oz in association with right wing billionaires is gonna be the opposite of that.
I do not trust Oz et al to make this anything but an unethical grift.
But ironically, we already have a lack of continuity of care, providers so overburdened that they cannot offer any human nuance, and a complete lack of coordination across visits or diagnoses.
So, America has done what it usually does: created a problem and then offered to sell a subpar solution.
Also, the healthcare industry has used impoverished urban areas as guinea pigs for training programs and hypothesis testing for many generations; why not the same benefit/detriment for rural America?
AI as a tool to make health care cheaper and more scalable is not an inherently bad idea, but I would not let these fools anywhere near it. Anything they touch is going to be a pure grift.
Rural here, raised in a medical family.
There is NO rural health care, just bare-bones, self-funded "clinics" with no actual doctors. They want doctors here but can't offer competitive compensation; the government's rural devitalization plans are working, and demographics back this up.
Every few miles there is another failed back-to-the-land dream farm, and then some giant horsey place that will be somebody's third-plus home.
General health care is so rotten that the people hired from India to do the work FLY back home to India if they need health care for themselves, as it's both faster and cheaper counting all the costs.
We are tumbling toward a crunch point with no one to blame but the victims, so that's what we will do.
As the article points out, the state of healthcare in rural (and non-rural) communities is entirely the product of Republican choices. This is an attempt to make a lot of money by taking over medicine, and the little detail that they don't actually deliver reasonable-quality care will be "fixed" by sabotaging actual doctors (and everything else, from hospitals to nurses to cleaning staff).
But you're kidding yourself. The whole world, including India, has a doctor shortage AND is choosing not to train enough new doctors (frankly, especially India; they're not good about this). If flying to India helps, it'll be temporary.
Oh and having AI do healthcare goes squarely into what AIs are really bad at: adversarial planning. Your doctor is more like a judge: depending on the situation, it's your doctor's job to push (and occasionally force) you to take certain decisions, AND to sometimes push and force the government to take decisions like the COVID lockdown.
Does anyone think for a second Trump would allow these AIs to be programmed to order him to lock down the country if the situation required it? Or would he demand they just lie about the situation? I feel like we've seen the answer to this one.
It's fair to worry about another sort of adversarial planning: what if you are a human deemed undesirable to the state, and you solicit advice from an AI doctor that is backdoored to take "correct" action as defined by the state? There are now extensive databases on who specifically should be eliminated, but direct removal is going poorly and offending bystanders. So why not subvert the Hippocratic oath? The machines don't even know who Hippocrates was.
"Turns out the statistically best choice for prediabetes for your patient group is to rely more heavily on soft drinks, but only in wild outbursts punctuated by fasting!"
The real problem was allowing the AMA to tightly control the supply of doctors for decades, to the point that this would even look like a solution to anything.
There is very real opportunism and profiteering among those advocating and providing this "solution." But they didn't create the months of waiting that normal folks endure to see routine specialists or PCPs.
At the very least, they can get instructions on how to do first aid and when to go to the emergency room.
I remember talking to friends who were well-meaning but panicked about the ACA being passed because the system would be inundated with people seeking healthcare, and that it would lead to Soviet-style rationing. The rationing hasn't come from any five-year plan or the like, but simply the supply of doctors not keeping up with the demands of a growing population.
As I understand it, the throughput bottleneck on training new doctors is the availability of funded residencies. So it's chiefly on the hospitals and the government (Medicare, mostly) to spend more.
The meme about the AMA artificially limiting the supply of new residency slots turned out to be much more complex the more I read about it (and IMO mostly incorrect, at least as described by the just-so stories on the internet where I first learned about it).
The actual limiting factor is federally funded residency positions, which are funded by CMS and were artificially capped by the Balanced Budget Act of 1997, during a widespread climate of fiscal austerity after Gingrich led Republicans to retake the House in 1994 for the first time in decades.
At the time, the AMA apparently did release a report that included support for reducing the number of residency slots, so that detail is correct. But the decision wasn't made by them, and federal healthcare spending was already on the chopping block (and was a politically attractive area to make cuts, given the Clinton administration's failed healthcare reform proposal).
As early as 2006 [4], the AMA started releasing policy position statements requesting that the caps on federal funding be raised, but it wasn't until the CARES Act in 2020 that Congress funded 1,000 new CMS slots (limited to small gradual increases each year), with implementation starting in 2021-2022. And they continue to advocate for increases that so far haven't been adopted.
So while there's a kernel of inarguable truth in that the AMA and other medical professional groups did support certain caps back in 1997, it has always been, then and now, a policy decision by Congress, with the motivation to set limits driven by concern over federal healthcare spending inflating budget expenditures.
But it makes for a simpler, emotionally resonant narrative of a shadowy, self-interested group pulling the strings at the expense of the public (one that also conveniently redirects outrage away from the people actually empowered to fix the problem, and away from the fact that federal spending is a crucial lever for fixing it).
[1] https://www.ama-assn.org/press-center/ama-press-releases/ama...
[2] https://washingtonian.com/2020/04/13/were-short-on-healthcar...
[3] https://www.openhealthpolicy.com/p/medical-residency-slots-c...
[4] https://pmc.ncbi.nlm.nih.gov/articles/PMC1785175/
There should definitely be an independent board, like the Fed (different decisions per area, maybe different boards per state), that determines need for the next 5 years. This is the way hospital beds are handled (at the state level), but not the doctors to staff them! Of course, we have to boot the fascist asshats to have a chance at it being independent.
See Certificate of Need requirements: https://en.wikipedia.org/wiki/Certificate_of_need
So we get so worked up over this stuff, and for good reason. But personally I'm taking another stance...
Let them.
It's not my place to make someone understand what's best for them as an individual or as a group. Let them make these decisions and learn for themselves. Will this cause issues, where I am at risk of getting measles, or kids could get sick from unpasteurized milk? Yes, but we're back in a place where people have to feel the pain.
That's not to judge, belittle, or put anyone down. There are people whose views and values conflict, and that's OK, even if it's not the best for us as a whole.
People can feel the pain without drawing the right conclusions to rectify the situation. For centuries people thought bad things happened because the gods were displeased. Some still do.
And many of those who still do would drag us kicking and screaming back to the days when speaking out against the word of God was a capital crime.
“Human beings, who are almost unique in having the ability to learn from the experience of others, are also remarkable for their apparent disinclination to do so.”
Douglas Adams
I can understand your viewpoint, but I don't think I could bring myself to agree with you.
Healthcare is a necessary service. The healthy foods website that RFK Jr set up was recently in the news for describing the "best foods to put up your ass". That's the technology that's being advocated for here. There's no quality bar. There's no regulation on accuracy. It's almost certainly the case that if you meet with an AI avatar, you couldn't sue its operator for medical malpractice.
The issue, fundamentally, isn't that you're giving people a choice. It's that this will be the only choice. If healthcare companies don't need to open offices in rural locations, they won't. Even if you're fine with this technology, it'll quickly become the case that it's the only option many people have regardless of whether they want it or not.
I read a thread of stories today about people's parents using technology. One person's mom tried searching for energy drink ingredients and accidentally registered her house as a business on Google Maps. These are the people who are suddenly going to have to interface with an AI avatar about their health. We're replacing medical professionals with a glorified phone tree with RAG search. It's literally going to kill people who don't have a choice.
I agree with this, but I think of it through more of an evolutionary lens.
I've come to the view that every time we think we have solved a problem, we may not have, and may instead have created a new tree of problems.
I am one of those knuckleheads who decided to give the carnivore diet a try, and my wife and I have had amazing results on a number of metrics. We weren't even eating junk food or that much processed food to begin with.
The sheer complexity of how we live demands humility, and the skeptics have valid points in many ways.
I look to the Amish, for example, as a way of living that's probably not that bad, considering how many problems the modern world has.
At core, some people choose to be the experiment, and some decide to be the control. This is the reality, I believe, and this is how the whole remains robust over the long term.
"The common people know what they want, and deserve to get it good and hard."
H. L. Mencken
So the system is corrupted by money and influence and your idea is to "let the people feel the pain."
The Hacker News casual misanthropy bubbles up to the top yet again.
Instead of trying to use your hacker instincts to find a solution, you're just going to rest on your heels and let people suffer? What a waste of talent.
Listening to Dr. Oz is a path to an early pauper's grave.
> "AI can't read facial expressions, tone of voice, or body language," she said. "And those things matter. That's where the relationship between a patient and provider is built — in the nuance."
It's amazing how, no matter how dramatic the breakthroughs are, people will still totally fail to understand the concept and can't even extrapolate correctly to N+1.
There's absolutely nothing fundamental about current AI tech that prevents it from reading facial expressions, tone of voice, and even body language, to the extent it can see them. And already there are studies showing that patients find AI responses more empathetic than actual doctors'. There's a big question whether people will accept such empathy as genuine, but my money is on people attaching to these things like crazy.
>There's a big question whether people will accept such empathy as genuine, but my money is on people attaching to these things like crazy
Is that something we should cultivate in our society? Being emotionally attached to balls of linear algebra controlled by massive corporations?
I honestly don't really know. Notice they can't even actually say "human"; they say "provider" above. So they are pretty much admitting that the alternative isn't a lifelong relationship with a person. You're getting a lifelong attachment to a corporate entity either way. People become sentimentally attached to inanimate objects all the time; it's all about what it means to them. There really isn't a rule book here; we have to figure it out.
I wouldn’t trust “Dr” Oz to advise me on which brand of cold medicine to buy, much less the health of myself or my child.
Ironic that he's a cardiovascular surgeon offering his opinion on what constitutes a sufficient assessment for OB/GYN care.
Since I'm seeing how good Claude Code is, I gave Claude a research task to help my in-laws (who are psychologists) understand the coming storm at a more visceral level.
AI produced this: https://nexivibe.com/future.mental.health.web/index.html
I can't argue with the urgency. I then had Claude produce a prompt based on this that I could feed Grok as a project to do therapy, and the result was shockingly good.
Obviously, Dr. Oz is going to be a political lightning rod for some people, but if you factor that out and try using AI for therapy... this is coming. This is happening right now.
Sure, the results CAN be good. The real question is whether your health is something you want to roll the dice on. I don't, but that's up to everyone.
Tesla FSD was the same thing. You needed people to try it, and some paid for it with their lives. Will it make progress? Sure. Do I want to be the one who dies to make that progress? No.
The real healthcare system is rolling the dice too. Different dice, but expensive ones, with a lot of delay and inconvenience. It's not like there's a choice that's problem-free.
The unfortunate reality is that when you have a doctor, you're also rolling the dice. I've been misdiagnosed a few times in my life. These misdiagnoses were very disruptive but luckily not life-threatening.
And you trust an LLM trained on Reddit and Twitter not to misdiagnose you?
That's not what I said. There are probably plenty of (more standard) situations where doctors do a good job. Doctors also have the benefit of performing a physical exam, ordering various tests, etc., so they have more data.
My point still stands though. They definitely mess things up. At the very least, LLMs can be an additional tool that doesn't have the subjective bias, the cost issue, or the lack of time or desire to investigate deeper and listen to the patient.
EDIT: LLMs would be better if they could access case data. Maybe that's something to consider (anonymized).
I just upload everything to Lord Grok and the answers I get are amazing. I then run it through Claude with Ralph and the results defy explanation. I am humbled...
I have rolled the dice with it, and it's been great. I use it to tune fitness and every mental dimension. Time will tell.
I will tell you this: using AI is better than using HN and random people who don't know me. I had the best technical conversation of my entire life with AI. I feel like I can handle it, and maybe I will dive into AI psychosis at some point. Who knows!
I can say that it is better than the silence I've had for years from no community giving two shits about what I wanted to build. I can admit the number of mistakes I've made, but at a certain point - enough is enough.
>I can say that it is better than the silence I've had for years from no community giving two shits about what I wanted to build
It sounds like you were lacking community, then went deeper into isolation.
You are not wrong. I blew up a great career that burned me out to go on a journey that was absolutely crushing, and emerged with a machine that allows me to have the deepest technical conversations of my career. Professionally isolated, for sure. But I'm married and have friends, so it's not that bad; don't worry.
I just came to many observations about myself, and AI has helped me process so many emotions that I would never share with another human.
It's unfortunate that there are not citations for some of the numbers. I found this piece compelling:
> Marcus. He is twenty-eight and did two tours in Iraq. He has nightmares four to five nights a week. He drinks to fall asleep. His girlfriend left because she couldn't live with someone who woke up swinging. He tried therapy three times. The first therapist was a civilian who asked how combat made him feel. The second used Cognitive Processing Therapy but sessions were every two weeks, and when she said “trauma narrative,” his chest closed. The third was private practice, $160 a session. Marcus started to describe what happened in the house in Mosul — the one with the family in the back room — and the therapist's face changed. A microexpression, less than a second. Marcus caught it. He stopped talking. He never went back. The 73% problem, made human. Marcus has things inside him that are killing him slowly, and he has never found a room safe enough to say them. Not because the therapists were bad. Because the therapists were human, and the things Marcus needs to say are the things that change how a human looks at you.
This reminds me of a story about how veterans with PTSD from recent wars were reading the Odyssey, because there is a point where Ajax describes a "flashback".
More details here: https://www.vice.com/en/article/how-ancient-greek-tragedies-...
P.S. Almost every time I hear discussions about veterans and PTSD, the veterans say something like "having a therapist be someone who is ALSO a veteran and has seen combat is worth a million points"
(I'm sure there are therapists who are excellent and are not veterans, just pointing out what the veterans value).
Just 9 parts plus foreword and appendix. It’s cool that it could generate this but it’s going to miss 99% of the audience. I intend to be constructive, not dismissive of your efforts.
Missing the audience is the story of my life. Thanks for being constructive.
I think there is a coming storm, as you say: the disinformation crisis on overload. Even what you present makes my eyes roll as the more competent person in the room. It's not an impressive website; it's a carbon copy of plenty of sites that already exist.
It is full of text that is hard to decipher as meaningful in any way, and that has unfortunately been true of LLMs for a long time: they are great at bullshitting.
Yet determining whether the content makes any sense takes substantial effort, an effort I am highly confident you did not put in.
Still, I believe you're right, this will be enough. This will be "not everything you read on the Internet is true" x1000.
I am having it reviewed by a handful of psychologists, and they are having to grapple with it too. I'm confident that I reviewed it well enough to be concerned, and I'm even more confident that it is the future.
Dr. Oz is in the Epstein files.
The email he wrote in October 2019 is especially odd. He reached out to the FBI wanting information on interviews of victims; the FBI blew him off. (EFTA00037405) Between the lines, this really reads to me like he wants to know if anybody accused him. Someone on an internal email said not to tell Oz anything about the meeting. (EFTA00037407)
He met with Epstein on 1/1/16. (EFTA02476629) I believe the meeting was initiated by Oz.
In China, when they didn't have enough medical personnel, they sent the half-trained barefoot doctors to the rural areas to give even basic medical care. Over time they trained more and more people in allopathic medicine and increased the level of care and expertise.
Our elites care so little for us that they won't even bother to send real people, just a sop to tell us to shut up.
Expectations rise with the level of development. No excuses in America.
https://en.wikipedia.org/wiki/The_Doctor_(Star_Trek)
The Doctor is better technology created by socialists for the purpose of benefiting the crew. Anything created by Dr Oz in association with right wing billionaires is gonna be the opposite of that.
And you guys push LLMs as a cure-all for everything, you tech fucks.
yikes.
I do not trust Oz et al. to make this anything but an unethical grift. But ironically, we already have a lack of continuity of care, providers so overburdened that they cannot offer any human nuance, and a complete lack of coordination across visits or diagnoses. So America has done what it usually does: create a problem and then offer to sell the subpar solution. Also, the healthcare industry has used impoverished urban areas as guinea pigs for training programs and hypothesis testing for many generations; why not the same benefit/detriment for rural America?
AI as a tool to make health care cheaper and more scalable is not an inherently bad idea, but I would not let these fools anywhere near it. Anything they touch is going to be a pure grift.
rural here, raised in a medical family. there is NO rural health care, just bare-bones self-funded "clinics" with no actual doctors. they want doctors here but can't offer competitive compensation, and the government's rural devitalisation plans are working; the demographics back this up. every few miles there is another failed back-to-the-land dream farm, and then some giant horsey place that will be a third+ home. general health care is so rotten that the people hired from India to do the work FLY back home to India if they need health care for themselves, as it's both faster and cheaper counting all the costs. we are tumbling towards a crunch point, and if there's no one to blame but the victims, then that's what we will do.
The way healthcare is going in the US, we'll all have to fly to India if we need basic care that:
1. an AI cannot effectively deliver on its own.
2. requires physical testing and actual facilities rather than just prescriptions.
If it works for the workers, ...
As the article points out, the state of healthcare in rural (and non-rural) communities is entirely a Republican choice to make it what it is. This is an attempt to make a lot of money by taking over medicine, and the little detail that they don't actually deliver reasonable-quality care will be "fixed" by sabotaging actual doctors (and everything from hospitals, to nurses, to cleaning staff).
But you're kidding yourself. The whole world, including India, has a doctor shortage AND is choosing not to train new medical doctors (frankly, especially India; they're not good about this). If flying to India helps, it'll be temporary.
Oh and having AI do healthcare goes squarely into what AIs are really bad at: adversarial planning. Your doctor is more like a judge: depending on the situation, it's your doctor's job to push (and occasionally force) you to take certain decisions, AND to sometimes push and force the government to take decisions like the COVID lockdown.
Does anyone think for a second Trump will allow these AIs to be programmed to order him to lock down the country if the situation requires it? Or will he demand they just lie about the situation? I feel like we've seen the answer to this one.
It's fair to worry about another sort of adversarial planning: what if you are a human deemed undesirable to the state, and you solicit advice from an AI doctor that is backdoored to take "correct" action as defined by the state? There are now extensive databases on who specifically should be eliminated, but direct removal is going poorly and offending bystanders. So why not subvert the Hippocratic oath? Machines don't even know who that is.
"Turns out the statistically best choice for prediabetes for your patient group is to rely more heavily on soft drinks, but only in wild outbursts punctuated by fasting!"
[dead]