Compensating for a Loss of Meritocracy

Segment #792

https://youtu.be/LNcrhluxPy8

Thomas Sowell, speaking in 1993, discusses the health care crisis being marketed by advocates of socialized medicine. Sadly, his comments are more pertinent than ever. http://www.LibertyPen.com

At its simplest, meritocracy is the idea that the "best" person should get the job based on talent and effort. Disparate impact is a legal doctrine (famously established in the 1971 Supreme Court case Griggs v. Duke Power Co.) under which a hiring practice, even a seemingly neutral one, is discriminatory if it disproportionately excludes a protected group, unless the practice is strictly necessary for the job.

I want a great doctor and couldn't care less about DEI. AI scares the hell out of me for a number of reasons; however, maybe there is a positive as this technology improves.

Conservative critics and medical advocacy groups, such as Do No Harm (led by Dr. Stanley Goldfarb), the Manhattan Institute, and the Heritage Foundation, argue that DEI initiatives in healthcare prioritize ideology over excellence, potentially compromising the quality of care.

Their arguments generally center on four main areas:



The Problem

https://youtu.be/CS0CkFG4xOo

An eye-opening Dr. Phil debate! Celebrated figures in medicine tackle Diversity, Equity, and Inclusion - one of the most pressing issues in health care today. We'll debate DEI initiatives being implemented in the medical field from college admissions to the operating table. Prominent voices against DEI initiatives, including renowned neurosurgeon Dr. Ben Carson, will explain why these programs implement a new form of segregation in the modern era. Pro-DEI voices will discuss how their journeys in medicine have been supported by programs that made sure young minority hopefuls had an open door to higher pursuits. We'll take a deep dive to investigate whether DEI is helping or hurting our healthcare system. You won't want to miss this!

1. The "Erosion of Meritocracy"

Critics argue that by emphasizing race-conscious admissions and hiring, medical institutions may lower academic standards.

Admissions Standards: Some critics point to instances where medical schools have waived or lowered MCAT (Medical College Admission Test) requirements for specific groups to meet diversity goals. They argue that selecting candidates based on identity rather than the highest possible academic performance risks producing less competent physicians.

Faculty Promotions: There is concern that DEI requirements for tenure and promotion prioritize political activism or "social justice" work over clinical expertise and scientific research, potentially discouraging high-performing researchers.

2. Politicization of the Medical Curriculum

A major critique is that medical education is being "diluted" by social and political topics at the expense of "hard science."

Curriculum Shift: Dr. Stanley Goldfarb has testified that some medical school leaders have expressed a desire to reduce the amount of science in the curriculum to make room for training in "social justice" and climate change.

Physician Agency: Critics argue that doctors should focus on diagnosing and treating illness. They contend that pushing doctors to solve "systemic" issues like housing or urban planning—areas where they have no professional agency—distracts from clinical training.

3. Challenging the Data on "Racial Concordance"

While proponents of DEI cite "racial concordance" (matching doctor and patient race) as a way to improve outcomes, conservative researchers have challenged the validity of these studies.

Methodological Flaws: A report by the Manhattan Institute critiqued a widely cited study on Black infant mortality, claiming that once researchers controlled for factors like birth weight, the "concordance benefit" disappeared. (A toy simulation below illustrates the statistical mechanism at stake.)

Do No Harm Analysis: This group analyzed several systematic reviews and concluded that most showed no significant health improvement based solely on racial matching, arguing that patients care more about a doctor's skill than their race.
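To make the statistical critique concrete, here is a minimal, fully synthetic simulation (all rates invented; no connection to the actual study data). In this toy world, mortality depends only on birth weight, but birth weight and physician-patient concordance are correlated, so a naive comparison shows a "concordance benefit" that vanishes once you stratify:

```python
# Synthetic illustration of confounding: mortality depends ONLY on
# birth weight, yet a naive comparison shows a "concordance benefit".
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical setup: very-low-birth-weight (VLBW) infants have higher
# mortality AND (in this toy world) are less likely to see a
# race-concordant physician, e.g., via which hospitals they end up in.
vlbw = rng.random(n) < 0.05
concordant = rng.random(n) < np.where(vlbw, 0.2, 0.5)

# The outcome is driven entirely by birth weight, not concordance.
death = rng.random(n) < np.where(vlbw, 0.10, 0.002)

# Naive comparison: concordance appears protective...
print("naive gap:", death[~concordant].mean() - death[concordant].mean())

# ...but within each birth-weight stratum the gap is ~0.
for name, stratum in [("VLBW", vlbw), ("normal", ~vlbw)]:
    gap = death[stratum & ~concordant].mean() - death[stratum & concordant].mean()
    print(f"{name} stratum gap: {gap:.4f}")
```

Whether the real study's effect survives such controls is exactly what the two sides dispute; the simulation only shows why the adjustment matters.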

4. Impact on Research and Objectivity

Critics argue that the peer-review process is becoming biased toward "progressive-coded" research.

Journal Bias: A 2025 analysis of the JAMA Network suggested that medical journals are publishing more articles on "inequity" and "structural racism" than on major diseases like asthma or heart disease. Critics argue this shift erodes scientific objectivity and distracts from finding cures for biological illnesses.

Standardization vs. Equity: Conservative viewpoints often favor a "colorblind" approach to medicine, arguing that standardized, evidence-based treatment protocols are the best way to ensure quality, rather than tailoring care based on racial identity.

https://youtu.be/7ZsyYCZB3Nw

Healthcare is so hard, and so expensive, because of its complexity. We've never had technology sufficiently powerful to allow us to simplify the work - until now. In this talk Dr Edmund Jackson describes how AI, at last, can help all of us, from patients to providers, to suffer less and heal more by simplifying healthcare. Dr. Edmund Jackson is the visionary CEO of UnityAI. A recognized leader and innovator, he's applying his Cambridge Ph.D and decades of experience in data, algorithms and healthcare to improving our systems of healthcare. Beyond his professional accomplishments, he’s an avid marathoner and Tai Chi practitioner. This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at https://www.ted.com/tedx

AI as a Potential Solution to Mediocrity

It is a provocative idea: if the "human element" in high-stakes fields like medicine is becoming compromised by shifting standards, why not let the machines handle the heavy lifting?

However, the "AI as a non-biased savior" theory hits a few significant speed bumps when it meets the reality of how these systems are built. AI doesn't exist in a vacuum; it is a mirror of the data we feed it.

The "Unbiased" Myth

The biggest misconception about AI is that it is inherently objective. In reality, AI is often a "bias-accelerator."

  • Garbage In, Garbage Out: If the historical medical data used to train an AI contains human biases (e.g., under-diagnosing certain populations or favoring specific treatments based on old, flawed studies), the AI will codify and automate those exact biases. (A toy demonstration follows this list.)

  • The Black Box Problem: Unlike a human doctor who can explain why they made a decision, AI often arrives at a conclusion through complex neural networks that even the creators don't fully understand. This makes it hard to "fact-check" its competence.
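To see the "garbage in, garbage out" mechanism in code, here is a toy model trained on entirely synthetic data in which clinicians historically missed 40% of true cases in one group (all numbers invented). The model learns the gap and reproduces it:

```python
# Toy demonstration: a model trained on under-diagnosed labels
# codifies the under-diagnosis. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 50_000
group = rng.integers(0, 2, n)        # 0 = majority, 1 = minority
severity = rng.normal(0, 1, n)       # true underlying illness signal
truly_ill = severity > 1.0           # same true prevalence in both groups

# Historical labels: 40% of true cases in group 1 went undiagnosed.
missed = (group == 1) & (rng.random(n) < 0.4)
label = truly_ill & ~missed

model = LogisticRegression().fit(np.column_stack([severity, group]), label)

# The model learns a negative weight on group membership, so it flags
# fewer truly ill patients in group 1 -- automating the old bias.
pred = model.predict(np.column_stack([severity, group]))
for g in (0, 1):
    rate = pred[(group == g) & truly_ill].mean()
    print(f"group {g}: truly ill patients flagged = {rate:.0%}")
```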

How AI Is Transforming Healthcare

While it might not "replace" the need for high-level human expertise, AI is currently acting as a safety net for the very competency issues you’re worried about.

Diagnostics

  • Human: relies on intuition and experience (prone to fatigue).

  • AI: can scan 10,000 X-rays for tiny anomalies in seconds.

Personalization

  • Human: understands nuance and the patient's "vibe."

  • AI: analyzes genetic markers to predict drug reactions.

Standardization

  • Human: varies wildly from doctor to doctor.

  • AI: provides a consistent "baseline" of medical knowledge.

The "Co-Pilot" Reality

Rather than saving the system from "less competent" people, AI is likely to become a mandatory co-pilot.

  • The Floor, Not the Ceiling: AI raises the "floor" of competence. It can catch a junior doctor's mistake or flag a drug interaction that a distracted resident missed.

  • The Meritocracy of Code: In a world where university admissions might be driven by philosophy rather than performance, the "merit" in AI is purely functional: Does the code work? Is the prediction accurate?

The Catch: If we rely too heavily on AI to compensate for a decline in human expertise, we risk "deskilling." If the AI fails or hallucinates (which it does), and the human "expert" doesn't have the deep technical knowledge to spot the error, the healthcare system could become more fragile, not more robust.

Monitoring AI

The idea that AI could act as a "non-biased savior" for a struggling healthcare system is one of the most debated topics in medicine today. In 2026, we are seeing this play out in real-time.

While AI is indeed being used to "audit" human decisions, it isn't necessarily a neutral force. Instead, it’s acting as a high-speed accountability layer that catches human errors while simultaneously struggling with its own "inherited" biases.

Here is how AI is currently auditing healthcare to manage both competence and bias.

1. The "Safety Net" for Human Error

In high-stakes environments like the ER, AI acts as a "second pair of eyes" to catch mistakes made by fatigued or less experienced staff.

  • Radiology Flagging: Systems like Aidoc or BioMind review CT scans and X-rays in the background. If a junior resident misses a small brain bleed or a pulmonary embolism, the AI flags it for immediate review, effectively raising the "floor" of clinical competence.

  • Sepsis Prediction: In many hospitals, AI monitors live patient vitals 24/7. It can flag the onset of sepsis hours before a human clinician notices the subtle pattern, preventing "competency gaps" from becoming fatal.
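As a rough sketch of how such a background watcher can be structured (the score below is a crude qSOFA-style stand-in, not any vendor's actual model, and the thresholds are simplified):

```python
# Simplified background sepsis watcher. The score is a crude
# qSOFA-style stand-in, NOT a validated sepsis model.
from dataclasses import dataclass

@dataclass
class Vitals:
    systolic_bp: float   # mmHg
    resp_rate: float     # breaths per minute
    gcs: int             # Glasgow Coma Scale, 15 = fully alert

def risk_score(v: Vitals) -> int:
    """One point per qSOFA-style criterion met."""
    return (v.systolic_bp <= 100) + (v.resp_rate >= 22) + (v.gcs < 15)

def monitor(stream, threshold: int = 2):
    """Yield the readings that should wake up a human clinician."""
    for timestamp, vitals in stream:
        if risk_score(vitals) >= threshold:
            yield timestamp, vitals

# The 14:00 reading trips the alert before deterioration is obvious.
readings = [
    ("13:00", Vitals(systolic_bp=118, resp_rate=18, gcs=15)),
    ("14:00", Vitals(systolic_bp=98, resp_rate=24, gcs=15)),
]
for t, v in monitor(readings):
    print(f"ALERT at {t}: score={risk_score(v)}, flag for human review")
```

A production system would use a learned model over many more signals, but the "silent scorer plus human hand-off" shape is the same.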

2. Auditing for Bias (The "Mirror" Effect)

The irony is that we now use AI to audit humans for bias, and then have to hire humans to audit the AI.

  • Treatment Disparity Audits: Large health systems now use AI to retroactively scan thousands of patient records to see if different demographics (race, gender, age) received different levels of care for the same symptoms.

  • The "Cost Proxy" Trap: A widely publicized audit found that a popular AI tool was recommending less care for Black patients. Why? The AI was programmed to treat "healthcare spending" as a proxy for "need." Because less had historically been spent on Black patients (due to systemic issues), the AI concluded they were "healthier." (A toy illustration follows this list.)

  • Corrective Audits: In 2026, "Bias-Aware Training" (like adversarial debiasing) is being used to "unlearn" these patterns, essentially trying to build the "unbiased savior" you described.
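Here is the cost-proxy trap reduced to toy numbers (all figures invented): two patients with identical illness burden, one of whom has had historically lower spending. A model targeting cost scores them differently; re-targeting on health status removes the artifact:

```python
# Toy illustration of the cost-proxy trap. All figures are invented.
patients = [
    {"name": "A", "chronic_conditions": 4, "past_annual_spend": 9_000},
    {"name": "B", "chronic_conditions": 4, "past_annual_spend": 4_500},
]

def need_via_cost(p):
    # The flawed proxy: estimate "need" from historical spending.
    return p["past_annual_spend"]

def need_via_health(p):
    # The corrected target: estimate "need" from illness burden.
    return p["chronic_conditions"]

for p in patients:
    print(f"{p['name']}: cost-proxy need={need_via_cost(p):>5}"
          f" | health-based need={need_via_health(p)}")
# A and B are equally sick, but the cost proxy scores B as half as
# "needy" -- so a care-management program keyed to that score would
# enroll A and skip B.
```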

3. The Shift to "Agentic" Coordination

A new category of Agentic AI has emerged to solve the "follow-through" problem. Even if a doctor is highly competent, the system often fails because a lab result gets lost or a follow-up isn't scheduled.

  • These AI agents don't just "detect"; they coordinate. They track a patient’s journey, ensuring that if an AI or a human flags a potential issue, it actually gets treated. This removes the "human error" of administrative neglect.
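A minimal sketch of that coordination loop (hypothetical names, not any real product's API): anything flagged, by a human or an AI, becomes an owned task that keeps resurfacing until someone resolves it:

```python
# Minimal follow-through agent: flagged issues become tasks that
# resurface until resolved. Names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class FollowUp:
    patient_id: str
    issue: str
    done: bool = False

@dataclass
class CareAgent:
    tasks: list = field(default_factory=list)

    def flag(self, patient_id: str, issue: str) -> None:
        """Entry point for AI alerts and human flags alike."""
        self.tasks.append(FollowUp(patient_id, issue))

    def resolve(self, patient_id: str, issue: str) -> None:
        for t in self.tasks:
            if t.patient_id == patient_id and t.issue == issue:
                t.done = True

    def outstanding(self) -> list:
        """What still needs action -- nothing silently falls through."""
        return [t for t in self.tasks if not t.done]

agent = CareAgent()
agent.flag("pt-001", "abnormal lab result, needs recheck")
agent.flag("pt-002", "follow-up imaging not yet scheduled")
agent.resolve("pt-002", "follow-up imaging not yet scheduled")
print([t.issue for t in agent.outstanding()])  # pt-001 still open
```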

The Reality of "Competence" in 2026

We are moving toward a "Trust but Verify" model. The AI doesn't replace the expert; it provides a standardized baseline.

Diagnostics

  • Human: nuance, the physical exam, and empathy.

  • AI: pattern recognition and data synthesis.

Auditing

  • Human: ethical judgment and accountability.

  • AI: identifying statistical anomalies and errors.

Efficiency

  • Human: complex bedside decision-making.

  • AI: automating documentation and routing.

The Caveat: The "Competence Trap" is real. If we rely too much on AI to fix human errors, we risk a generation of doctors who can't "read the room" without a screen. The goal for 2026 is Augmented Intelligence, not Artificial Replacement.




https://youtu.be/if1U_LLRjRM

Just like every year, I share not predictions for the upcoming year but medical and healthcare technology trends worth paying attention to. I highlight trends I'm very positive about, trends I'm more cautious about, and trends I don't expect much from this year. In the meantime, we at The Medical Futurist will be here to analyze everything that matters in healthcare and medicine for you.

Top AI Safety Nets in 2026

In 2026, the "AI as a safety net" theory has moved into a massive implementation phase. Hospitals are no longer just experimenting with these tools; they are integrating them into the "operating system" of the hospital to catch human errors in real-time.

Here is a comparison of the top AI diagnostic and auditing tools currently being used to "backstop" human expertise.

  • Aidoc (CARE™) (Radiology & ER): Uses a "Foundation Model" to scan body CTs for 14+ acute conditions simultaneously (e.g., brain bleeds, abdominal issues). Re-orders the queue so the most critical cases are seen first, regardless of when they arrived.

  • Mayo Clinic Vision Transformer (Surgical Safety): Analyzes patient-submitted photos of post-op wounds using "Vision Transformers." Achieves 94% accuracy in detecting surgical site infections before they are visible to the human eye.

  • Praxis EMR (Documentation & Logic): Uses "Concept Processing" to map a doctor's notes against historical cases and medical logic. Acts as a real-time checklist, flagging if a doctor forgot to ask a critical diagnostic question or order a necessary test.

  • IBM Watson Health (Oncology & Analytics): Scans unstructured notes and medical literature to suggest personalized treatment paths. Reduces "competency gaps" in complex cancer care by citing the latest global research that a single doctor might miss.

  • Aidoc aiOS (Clinical Workflow): Monitors the entire "pixel-to-report" journey in the background. Reduces "false alerts" by an order of magnitude compared to older AI, ensuring doctors don't get "alarm fatigue" and ignore real errors.

The "Competency" Audit: 2026 Trends

Beyond just catching a missed broken bone, AI is now being used to audit the quality of human decision-making:

  • Triage Acceleration: Recent studies show AI tools have cut notification times for stroke patients by an average of 22 minutes, often by identifying the stroke on a scan before the radiologist has even opened the file.

  • The "Black Box" Defense: In response to concerns about biased or "clever" selection in medicine, 2026 has seen a push for Transparent AI. Systems are now required to "show their work," citing the specific clinical trial or physiological marker that led to a recommendation. (A minimal sketch follows this list.)

  • Sepsis Safety Nets: AI integrated into Electronic Health Records (EHR) now predicts sepsis hours before clinical symptoms appear, acting as a vital guardrail for junior staff who may not yet have the "gut feeling" of a 30-year veteran.
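In its simplest form, "showing the work" just means reporting the evidence behind the number. Here is a sketch using a linear risk score whose output ships with each input's contribution attached; the features and weights are made up for illustration:

```python
# Sketch of a "transparent" risk score: the recommendation ships with
# the contributions that produced it. Weights are invented.
WEIGHTS = {
    "lactate_mmol_per_L": 0.80,
    "heart_rate_bpm": 0.02,
    "age_years": 0.01,
}

def explain_risk(measurements: dict):
    contributions = {k: w * measurements[k] for k, w in WEIGHTS.items()}
    score = sum(contributions.values())
    drivers = sorted(contributions.items(), key=lambda kv: -kv[1])
    return score, drivers

score, drivers = explain_risk(
    {"lactate_mmol_per_L": 4.0, "heart_rate_bpm": 110, "age_years": 70}
)
print(f"risk score = {score:.2f}")
for name, c in drivers:   # largest driver first
    print(f"  {name}: +{c:.2f}")
```

Real "transparent AI" requirements go further (linking to trials, guidelines, or markers), but the principle is the same: no bare numbers without evidence.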

The Catch: The "Hazard" of Reliance

While these tools are saving lives, the 2026 Health Tech Hazard Report by ECRI warns that the biggest risk is now "Misuse of AI."

The Risk: If medical staff become too reliant on the "safety net," they may stop double-checking the AI's work. In 2026, we've already seen cases where "hallucinated" AI data led to incorrect treatments because the human "expert" didn't have the technical depth to spot the machine's error.

Ambient Clinical Intelligence

This is where the technology has landed in 2026: Ambient Clinical Intelligence. Praxis EMR's specific version, Reflective Ambient Intelligence™, is designed to be exactly that "silent observer" in the exam room.

However, there is a subtle but critical distinction in how it works compared to a standard "robot" or a typical AI scribe.

The "Reflective" Difference

Most AI scribes simply record the conversation and try to summarize it. Praxis takes it a step further by using what they call a Concept Processor.

  • It "Thinks" Like the Doctor: Instead of using a generic medical model, it learns from the specific doctor’s past successful cases.

  • Real-Time Reconciliation: As the doctor speaks to the patient, the AI doesn't just transcribe; it "reconciles" the conversation with the doctor’s own clinical knowledge base.

  • The "Safety Monitor": If the doctor mentions a symptom but forgets to follow up with the standard diagnostic question they usually ask, the AI acts as a real-time checklist. It subtly surfaces those "missing pieces" before the patient even leaves the room.
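A toy version of that checklist (not Praxis's actual Concept Processor; the "learned" follow-ups here are hard-coded stand-ins for what such a system would mine from the doctor's past charts):

```python
# Toy real-time checklist: compare symptoms mentioned in the visit
# against the follow-ups this doctor usually asks. The "learned"
# mapping is hard-coded for illustration.
usual_followups = {
    "chest pain": {"radiation to arm or jaw?", "worse on exertion?"},
    "headache": {"sudden onset?", "visual changes?"},
}

def missing_followups(symptoms_mentioned: set, questions_asked: set) -> dict:
    gaps = {}
    for symptom in symptoms_mentioned:
        unasked = usual_followups.get(symptom, set()) - questions_asked
        if unasked:
            gaps[symptom] = unasked
    return gaps

# Patient mentioned chest pain; the doctor asked only one of their two
# usual follow-ups, so the gap surfaces before the visit ends.
print(missing_followups(
    symptoms_mentioned={"chest pain"},
    questions_asked={"worse on exertion?"},
))  # {'chest pain': {'radiation to arm or jaw?'}}
```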

How it Shapes Medical Strategy

The shaping of ongoing medical strategy happens through a feature called Practice Advisories.

Human-Only Strategy vs. AI-Augmented Strategy (Praxis)

  • Recall-Based vs. Data-Driven: The doctor relies on memory of the last visit; the AI instantly pulls up trends in the patient's vitals or labs as the conversation happens.

  • Manual Follow-up vs. Automated Agents: Staff must remember to schedule tests; the AI creates the orders and schedules the follow-up based on the "intent" it heard in the exam.

  • Generic Guidance vs. Personalized Logic: Standard "one-size-fits-all" advice gives way to the specific treatment path the doctor has found most effective for similar patients in the past.

The "Observer" in the Room

Imagine a scenario where a doctor is distracted or potentially "less competent" in a specific niche area. The AI acts as the standardization engine. It ensures that:

  1. The Narrative is Precise: Every word the doctor says is captured and mapped to a clinical concept.

  2. The Logic is Sound: It flags contradictions (e.g., if a doctor prescribes a medication that conflicts with something the patient just mentioned). A toy version of this check is sketched after this list.

  3. The Legal Paperwork is Bulletproof: Because it "monitors" the actual exam, the resulting medical record is a high-fidelity account of what actually happened, protecting both the patient and the doctor.
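As a toy version of the contradiction check in point 2 (the conflict table below is a tiny invented stand-in for a real drug knowledge base):

```python
# Toy contradiction check: flag an order that conflicts with something
# the patient said. The conflict table is invented for illustration.
CONFLICTS = {
    "penicillin": {"penicillin allergy"},
    "ibuprofen": {"stomach ulcer", "kidney disease"},
}

def check_order(drug: str, patient_statements: set) -> list:
    """Return patient statements that contradict this order."""
    return sorted(CONFLICTS.get(drug, set()) & patient_statements)

# The patient mentioned an ulcer earlier in the visit; ordering
# ibuprofen trips the flag before the chart is signed.
flags = check_order("ibuprofen", {"stomach ulcer", "occasional migraines"})
if flags:
    print("WARNING: order conflicts with:", ", ".join(flags))
```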

The 2026 Reality: This isn't just a "robot" listening; it's a cognitive exoskeleton. It doesn't replace the doctor's brain, but it prevents the doctor's brain from having a "bad day" that results in a missed diagnosis.

Patient Reaction

In 2026, the reaction to "AI observers" like Praxis is a fascinating mix of relief and high-stakes anxiety. While doctors love it for the time it saves, patients are still navigating the "awkward phase" of having a digital third wheel in the room.

Here is the current temperature of the patient-doctor-AI relationship:

1. The "Humanity" Paradox

The most surprising feedback from 2025–2026 surveys (like those from University of Iowa Health Care and Cleveland Clinic) is that AI is actually making visits feel more human, not less.

  • The "Computer Barrier": Before AI scribes, doctors spent 30–50% of the visit staring at a screen, typing. Now, with the AI listening, doctors are making eye contact again.

  • Patient Sentiment: Over 56% of patients report that the quality of their visit improved because the doctor seemed more "present." They’d rather have a microphone listening than a doctor’s back turned to them.

2. The "Forced Consent" Friction

While the benefits are clear, the way patients are asked for permission is causing tension.

  • The "XY Case" Phenomenon: Recent ethical reviews have highlighted cases where patients feel "socially coerced" to consent. If a medical assistant says, "Everyone else is doing it," or a doctor says, "I really need this to do my notes," patients often say yes just to avoid being a "difficult patient."

  • Skepticism by Age: Interestingly, younger patients (ages 18–30) are currently the most skeptical of AI observers, citing data privacy concerns, while patients over 51 are more supportive, valuing the increased focus and detailed after-visit summaries the AI provides.

3. The "Hallucination" Fear

Patients are becoming more literate about AI flaws. Their top concern in 2026 isn't just "who is listening," but "is the AI lying about me?"

  • The Audit Requirement: Roughly 39% of patients express concern about documentation accuracy. They worry that if they describe a complex symptom, the AI might "summarize" it into a generic diagnosis that follows them forever in their medical record.

  • Transparency Demands: Over 80% of consumers now demand clear disclosure. They want to know exactly what the AI does with their voice—is it deleted immediately, or is it being used to train the next version of the model?

The Bottom Line: Patients are generally accepting the "AI observer" because the alternative—a distracted, burnt-out doctor—is worse. However, they are demanding a "human-in-the-loop" to ensure the AI doesn't become the final authority.



