Reclaiming Autonomy: Why AI’s Promise Depends on Physician Agency
Wandering through rows and rows of booths offering AI solutions made me realize that we, as usual, are producing technology rather than solving the real problem. The erosion of clinician autonomy will make any marginal improvement a fool’s errand.
HLTH is touted as one of the biggest digital health conferences, and I like to attend to see what is going on in the market. Unsurprisingly, this year I saw a lot of AI offerings. There were the usual suspects: algorithms for scheduling, diagnostics, documentation, and patient engagement. Every vendor promised to fix something in healthcare. Most of the solutions sounded the same.
As I wandered around, I thought about how much had changed since I first started practicing.
The real shift wasn’t in the technology. Most tech comes in cycles anyway. AI might be faster, but we’re having the same conversations. What I did see was a difference in people, specifically more physicians than ever before. They were attending, building, investing, and being part of the conversation.
That matters, because the deeper problem in healthcare today is about autonomy and how we regain it. This is especially relevant since AI has been upending everything with its promises.
Getting Front Line Perspectives
A physician I spoke with had been practicing for twelve years when her hospital rolled out its second AI documentation tool in eighteen months. The first had promised to cut charting time in half but hadn’t done so. (I was never clear what that claim even meant. Based on what metrics? Against what baseline? This doctor didn’t know either.) This one was different, the administrators assured her. It used ambient listening and would capture everything more easily and comprehensively. She just had to turn it on.
Three weeks in, she found herself spending an extra hour editing her notes. The AI did capture everything. It transcribed her questions to patients but missed the subtle verbal cues that changed her differential diagnosis. For example, she said it documented the twenty seconds she spent explaining treatment options, but not the minutes she spent sitting with a frightened teenager who had just learned she was pregnant. It couldn’t capture that moment or the amount of care it required. It also did not put the blocks she needed into her notes to be thorough.
The notes were comprehensive. They were also wrong in ways that mattered for her efficiency. All of us who have worked clinically know how irritating it is when things don’t work as promised. Or worse, when they are foisted upon us.
What frustrated her most wasn’t that the technology failed; it was that they kept trying out new technology that didn’t fix the problem. The tool was purchased by administrators who’d never written a clinical note, configured by IT teams who’d never seen her workflow, and sold by vendors who measured success in “time to documentation” rather than quality of care. Needless to say, she turned it off more often than not.
The Weight of a System
This story isn’t unique. I heard some version of it in a few different scenarios (notably, from specialists). This is not to say anecdotal stories should be taken as true without examining them. But when anecdotes accumulate, they lead to research and, sometimes, to proof that what we suspected was wrong really is. In this case, perhaps the finding would be that ambient scribes need to be tailored to each specialty.
Others have told me they’ve turned the technology off entirely. Why? Because the AI optimizes for comprehensive documentation, not for clinical utility. It captures everything equally, making it harder to identify what matters. Others are also concerned it leads to ‘chart bloat’: unwieldy, longer notes that take more time to sift through for the pertinent information.
Back to our issue of autonomy and the lack of decision-making power these doctors had. Clinicians are working inside a system that keeps asking for more while giving less in return. Every new tool, platform, or regulation is meant to help, yet each one can add a new layer of friction. We have more regulations that tell us what can and cannot be done. Oversight is slowly eroding away from medical boards.
I’ve been considering this even more as we think of AI in healthcare. It is making things (well, everything) move faster. It is changing the dynamic in ways that previous digital health tools didn’t, and the implications are profound.
Why AI Is Different This Time
Past waves of digital health gave us discrete tools: an EHR, telemedicine visits, a patient portal, wearable devices, a scheduling app. Each of these could live in its own box, and the market demonstrated that. Systems could buy some of them and skip others, or have them bundled within the EHR. In general, uptake was optional for many, and so when tools failed, the damage was contained.
AI is different, however, in three critical ways:
Scale and Speed: AI can process millions of data points and make recommendations in seconds. When it’s wrong, it’s wrong at scale. A flawed algorithm can misdiagnose hundreds of patients before anyone notices the pattern. Previous tools required human initiation for each action while AI acts autonomously. Well, as long as it’s turned on.
The Black Box Problem: With traditional software, clinicians could usually trace why a system made a recommendation. With AI, especially deep learning models, even the developers often can’t explain why the algorithm chose one path over another. This creates a fundamental tension with clinical judgment. How do you override a recommendation you can’t fully understand? Who is responsible for it? This is the most harrowing part of the discussion.
The Stakes of Dependence: Once an AI system becomes embedded in clinical workflow, removing it is nearly impossible. Unlike a failed patient portal that clinicians could simply stop using, AI tools often become load-bearing infrastructure. This makes the cost of getting it wrong existential rather than inconvenient. What do you do when you cannot extricate it from your note taking, for example?
The market understands these differences, even if it doesn’t always acknowledge them. We have discussions about it even if there is no solution. But the power of AI is bigger than anything we’ve ever seen. That’s why AI investment in healthcare has outpaced previous digital health waves by orders of magnitude. The potential is transformation. But transformation can go in either direction.
The biggest difference between AI and other digital tools, however, is that AI is not a separate entity; it’s embedded in those other products. So it cannot be adopted or implemented in the same way, and it cannot be separated out.
The Clinical Judgment Problem
Traditional software augmented human decision-making. AI increasingly replaces it, or tries to. This creates a new category of risk. When a radiologist reviews an X-ray, they bring years of pattern recognition, contextual knowledge about the patient, and clinical intuition. An AI can identify a nodule the radiologist missed. That’s valuable, but it can also flag fifty false positives that lead to unnecessary biopsies, increased patient anxiety, and cascading costs. This is not a small problem as healthcare costs grow and systems are pressured to cut them.
The question isn’t whether AI is more accurate than humans at specific tasks. Sometimes it is. We can all admit that. Whether the system as a whole produces better patient outcomes when AI recommendations are introduced into the workflow is less clear. There is also the question of cost. In the radiology example, if an AI reaches 90% accuracy versus a radiologist’s 89%, that is not a benefit large enough to justify the risk and the investment.
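To make that arithmetic concrete, here is a rough, hypothetical sketch. The prevalence, sensitivity, and specificity figures below are invented purely for illustration; they are not drawn from any study, vendor, or the conversations above.

```python
# Hypothetical back-of-the-envelope sketch, not data from any study or vendor.
# It illustrates why a one-point "accuracy" edge says little on its own once
# you account for how rare the finding is (the base-rate effect).

def screening_outcomes(n_patients, prevalence, sensitivity, specificity):
    """Return (true positives, false positives, positive predictive value)."""
    diseased = n_patients * prevalence
    healthy = n_patients - diseased
    true_pos = diseased * sensitivity          # real findings that get flagged
    false_pos = healthy * (1 - specificity)    # healthy patients flagged anyway
    ppv = true_pos / (true_pos + false_pos)    # chance a flag is a real finding
    return true_pos, false_pos, ppv

# 10,000 chest X-rays, 2% prevalence of a clinically significant nodule,
# and the same 90% specificity for both readers (all numbers invented).
for label, sens in [("AI model (90% sens)", 0.90), ("Radiologist (89% sens)", 0.89)]:
    tp, fp, ppv = screening_outcomes(10_000, 0.02, sens, specificity=0.90)
    print(f"{label}: {tp:.0f} true positives, {fp:.0f} false positives, PPV {ppv:.0%}")

# With these made-up numbers, the AI finds 180 real nodules versus 178 for the
# radiologist: two extra catches per 10,000 scans, while both generate 980
# false positives, each a candidate for follow-up imaging or biopsy.
```

The point isn’t the exact numbers; it’s that a marginal accuracy gain can be swamped by the downstream workup each false positive sets in motion.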
And here’s where autonomy becomes essential: clinicians need the freedom and support to override AI recommendations when their judgment says otherwise. But they also need to be part of defining when and how those overrides should happen.
The Cost of Losing Autonomy
Autonomy is what allows clinicians to think critically and make decisions based on experience, context, and patient needs. When autonomy is removed, care becomes transactional. The clinician’s role shifts from healer to operator, or worse, from physician to overseer of algorithms.
We talk a lot about burnout and changing economics, but what clinicians are really describing is a loss of agency. The sense that their time, training, and presence are being managed by systems and entities that don’t understand what the work feels like on the ground.
Add the private equity takeover of physician practices and, over time, this chips away at the autonomy that defines good medicine. Clinical judgment is replaced by checklists. Conversations are shortened to meet metrics. Decision-making is filtered through templates, alerts, and dashboards.
The loss is professional and emotional and all of it is making burnout worse. Clinicians enter medicine because they want to help, to connect, to make a difference. When the system becomes so complex that they can no longer practice in alignment with that purpose, it breaks something essential.
We’ve seen this repeatedly: turnover in hospitals, burnout, early retirements, declining morale. But underneath all of it is the same theme: people feeling powerless inside the system they were trained in.
With AI, there’s an additional dimension: the fear of becoming obsolete. Not because AI will replace doctors, not entirely anyway, but because the healthcare system might decide it’s cheaper to treat physicians as supervisors of AI rather than as decision-makers. This changes the incentives of providing care and what the profession actually does.
The Path Forward
AI has the potential to be different from previous digital health waves, not because the technology is more sophisticated, but because the stakes are higher and the integration is deeper. That’s precisely why physician autonomy and agency matter more than ever.
It’s not that clinicians reject technology. This particular myth always irritates me: we are not barriers to technology because we love our current processes so much. It’s that most technology still rejects clinicians and how they need to practice.
When tools are designed without the people who use them, adoption fails. Clinicians can’t afford inefficiency. They are already stretched thin and when a tool slows them down, duplicates work, or fails to deliver useful data, it quickly becomes a burden.
At HLTH, surrounded by endless AI promises, I was reminded that the most powerful form of innovation is alignment. When technology and clinicians move in the same direction, both thrive.
Reclaiming autonomy from systems and political entities is hard, but it is possible for hospitals and departments to bring clinicians in as part of their administrative decisions. That would bring clinicians back to purpose and serve as a reminder that innovation means nothing if it disconnects the people who hold the system together. We will need that to move forward.
If this resonates, subscribe below. I write from the intersections of medicine and meaning, technology and trust, healing and humanity. These stories aren’t just about being a doctor. They’re about what it means to show up, to witness, and to keep learning long after the training ends. Follow me at ardexia.io or draditiujoshi.com

