
San Antonio internist Amith Skandhan, MD, is “very optimistic” that artificial intelligence (AI) will positively transform health care – but has pressing concerns about the guardrails physicians follow when using the technology.
To name a few: Do physicians understand which tasks are or are not suited for AI? How are they using AI while safeguarding protected health information? Are they aware of the legal requirements associated with their particular use(s) of AI?
Dr. Skandhan works to educate the next generation of physicians on how to use AI safely as an associate professor and internal medicine hospitalist at the University of Texas Health Science Center San Antonio. Additionally, he provides insight into emerging technologies as a consultant to the Texas Medical Association’s Committee on Health Information Technology and Augmented Intelligence.
Dr. Skandhan uses AI across multiple fronts, from research writing to drafting educational materials for his students. However, he does so only after reviewing the product’s compliance with Texas and federal law – and recommends other physicians do the same.
“Even though I'm pro-AI, I feel you should still look at it with skepticism. It's a tool … but it's not replacing you,” he said. “AI does not place orders; you still have to make that decision. Physicians are the ones who hold the license to practice medicine in this state.”
And with that license in mind, physician practices must take certain steps to comply with Texas and federal law (e.g., HIPAA). For example, physicians can further compliance efforts by reviewing their AI systems, patient policies, and disclosure and consent procedures, and by confirming how vendors use an AI tool – and the data it collects.
However, Dr. Skandhan warns “well-intentioned” physicians not to stop there. For example, he said, physicians may use an AI tool that is HIPAA compliant without obtaining proper approval from their institution or practice – a phenomenon he calls “shadow AI.”
Last year, 57% of health care professionals encountered or used an AI platform unauthorized by their institutions, according to a January 2026 study by Wolters Kluwer, a multinational company that provides information software and services across multiple sectors, including health care.
HIPAA-compliant tools meet technical, administrative, and contractual requirements to safeguard patients’ protected health information, such as by employing end-to-end encryption and prioritizing a compliance framework. A tool approved by an institution or practice, on the other hand, has been evaluated by that hospital or health care system to ensure it meets the organization’s specific policies, standards, guidelines, and duties under HIPAA (e.g., obtaining certain assurances from the AI tool’s vendor, entering into a business associate agreement with the vendor, etc.).
Dr. Skandhan says the practice of shadow AI becomes even more worrisome as some vendors have begun to include so-called indemnification clauses in their contracts that place some liability on physicians.
These clauses may outline, for example, that while the AI developer is liable for output or usability issues that harm patient care, hospitals and physicians will shoulder the blame for errors arising from poor deployment or misuse of the technology, explains a study from Stanford University’s Institute for Human-Centered AI.
Dr. Skandhan said using AI technology in health care “becomes concerning” when physicians “perceive compliance.”
Dr. Skandhan recommends physicians in employed settings communicate with their institutions about which AI systems are approved and why, and what safeguards the facility has in place to prevent breaches of patient data. He suggests independent physicians, on the other hand, create AI governance policies that clearly outline how they will use these technologies in their practice.
He also advises physicians to extensively review all contracts with AI developers before signing, ideally alongside a lawyer experienced in technology and health care contracts.
“[AI] has so much potential,” Dr. Skandhan said. “We need better governance. We need to understand what kind of data we are using, what kind of tools we are using, and how and when to use them. As we have these conversations over and over again, we’ll get to that point. The work is just slow and steady.”
Alisa Pierce
Reporter, Division of Communications and Marketing
(512) 370-1469