Keep AI in Health Care Physician-Centered, TMA Cautions Feds
By Phil West


As the federal government looks to encourage more widespread adoption of artificial intelligence (AI) in clinical care, the Texas Medical Association urges a physician-centered approach to the rapidly evolving technology, offering insights informed by a recent member survey on how physicians are using AI.

In a formal request for information, the U.S. Department of Health and Human Services’ (HHS’) Assistant Secretary for Technology Policy and the Office of the National Coordinator (ASTP/ONC) posed questions about appropriate regulatory changes, legal and implementation issues, and concerns physicians and patients have regarding AI usage in clinical care.

In response, TMA emphasized in a Feb. 23 comment letter its policy distinguishing what it terms “augmented intelligence” from AI that supplants human decision-making.

TMA specified that “augmented intelligence provides prompts and information to support human decision-making rather than replace it. Augmented intelligence is intended to co-exist with human decision-making – it should not be used to replace physician reasoning and knowledge and should neither replace nor diminish the patient-physician relationship.”

In the missive, TMA President Jayesh “Jay” Shah, MD, illustrated that type of use with results from TMA’s 2025 health information technology survey of member physicians. Respondents were most likely to apply augmented intelligence to reduce administrative burdens, such as with ambient scribes, and are looking for ways to use the technology to assist with prior authorizations and appeals.

But physicians are “still hesitant” to use clinical decision support tools, Dr. Shah wrote.

With input from TMA’s Committee on Health Information Technology and Augmented Intelligence, the comment letter identified several areas of concern medicine has with AI:

  • Transparency in how AI tools are developed, including vendors’ proof of claims about the AI tool (e.g., low error rate, HIPAA compliance, etc.);
  • Interoperability, especially in dealing with patient data despite a lack of data vocabulary standardization;
  • Patient safety, to be prioritized during all phases of product development and in AI algorithm development;
  • Protection of patient information, per HIPAA standards;
  • Reliability, especially in how AI uses patient data; and
  • Regulatory clarity and consistency.

“Physicians and patients remain concerned about unreliable technology and its use of data that may be leveraged against physicians and patients. The trust of the company and the reliability of the technology are paramount,” Dr. Shah wrote.

HHS also asked about the biggest barriers to AI adoption. In response, TMA stressed the need for “seamless integration with electronic health records and other technology tools,” and cautioned that investing in AI tools could prove too costly for some physician practices.

Learn more about TMA’s AI resources and related advocacy on its dedicated webpage.

Last Updated On: March 05, 2026

Originally Published On: March 05, 2026


Phil West

Associate Editor 

(512) 370-1394

phil.west@texmed.org


Phil West is a writer and editor whose publications include the Los Angeles Times, Seattle Times, Austin American-Statesman, and San Antonio Express-News. He earned a BA in journalism from the University of Washington and an MFA from the University of Texas at Austin’s James A. Michener Center for Writers. He lives in Austin with his wife, children, and a trio of free-spirited dogs. 