To Austin otolaryngologist Jeffrey Kahn, MD, Medicare’s value-based payment program, as it stands now, resembles a game more than it does a quality improvement venture.
In the first year of the “game,” Dr. Kahn came away a winner. But that does not make him more enthusiastic about continuing to play with the same rules.
More than six months after 2017 ended, the Centers for Medicare & Medicaid Services (CMS) finally released performance scores and feedback for physicians who participated during the inaugural year of the Merit-Based Incentive Payment System. MIPS is one of two tracks available to participate in the federal Quality Payment Program (QPP) that now pays physicians based on their quality performance.
Dr. Kahn and more than 20 other practitioners at Austin Ear, Nose & Throat Clinic (Austin ENT) strategized to “play” as individuals, rather than as a team. For year one, Dr. Kahn went with the minimum-reporting option to avoid a payment penalty in 2019.
The practice’s information technology specialist downloaded Dr. Kahn’s 2017 feedback report, which revealed what he’d won: a small bonus payment for 2019.
Dr. Kahn believes some MIPS metrics may help physicians improve their clinical behavior. But even as a high performer, he says room for improvement in the program remains.
“We’re essentially reacting to whatever it is that’s happening. I wouldn’t say that we have a very negative or a very positive feeling about it. It’s more of a shoulder shrug, like, ‘Well, this is just what we have to do,’” said Dr. Kahn, chair of the Texas Medical Association’s Council on Health Care Quality. “It’s all about strategizing almost around this [being] a game, as opposed to really digging in and using this as a tool to improve quality.
“That’s not to say that we don’t think quality is important. We do; we think it’s extremely important. But at least for our practice as otolaryngologists, we haven’t seen MIPS as a good vehicle for improving our quality.”
Now that they can see their 2017 MIPS scores and feedback, physicians hope at the very least to use the information as a learning tool for future reporting. Participants who think CMS made an error with their 2017 scoring have until Oct. 1 to file for what CMS calls a “targeted review,” more commonly known as an appeal. (Update: Medicare Gets It Wrong, Re-Calculates MIPS Scores and Changes 2019 Payment Adjustment)
Because CMS doesn’t offer real-time or periodic feedback on scoring, physicians’ ability to know ahead of time where they stand with MIPS is a guessing game. For small practices, the only indicator could come from scoring projections in their electronic health record (EHR) system.
That’s how Abilene family physician Dean Allen Schultz, MD, tracked his performance before CMS released 2017 scores, which are on a 100-point scale. (See “Final Score,” below.)
When he accessed his feedback report, Dr. Schultz found out he got a perfect score of 100, matching his EHR’s projection. But he says the bonus payment he earned, which was slightly more than 2 percent, “won’t be worth the cost of participating for most physicians.”
Dr. Schultz says his EHR nicely incorporates quality reporting into his workflow, and MIPS has been onerous but doable.
“I tell you though, I get so tired of checking off quality indicators, and it’s exhausting,” he said. “You have less energy to devote to the things that I really feel need my attention, so it’s distracting.”
Dr. Schultz says 50 to 75 percent of the time, quality measures add value to his practice; the rest of the time, “the quality measures don’t make much sense, or they’re difficult.”
Some are either out of his direct control, or his EHR won’t recognize that he’s addressed that metric. For example, an icon in his EHR nags him about putting diabetic patients on antithrombotic medication — even if that particular patient already is on aspirin and Dr. Schultz has tried to denote that in the medical record.
“I’ve just really given up on this measure and accept the fact that I’ll probably be dinged on it,” he said. “But I can’t figure out a way to make the system work.”
Technology also made it challenging for Austin ENT to meet the requirements. Dr. Kahn says the group’s EHR couldn’t automatically generate all the information required for the Quality or Advancing Care Information (formerly meaningful use) categories.
“That was a qualified, certified EHR. I think what that speaks to is the fact that if you have a smaller EHR company, they may not be able to keep up with these new requirements that Medicare is throwing out,” he said.
Dr. Kahn says his MIPS report was easy to understand on a bottom-line level but still has flaws.
“It told you what your score was, and it told you what percentage increase in payments from your baseline you should expect to have. … The report was clear,” he said. “The things that were missing were [that] in the report itself, I did not see anything that indicated what the cutoffs would be: If you had done this much better, you would have gotten this percentage payment increase. And there wasn’t any advice about what you could do differently. Since we picked our own metric, we knew what we were being graded on, and so it would be fairly clear what we could do better if our [performance] rate had been low.”
Dr. Kahn says what kept his score down was doing the bare minimum of reporting, but his options were limited.
“With the limitations of our EHR, and with low expectations for getting a meaningful reward, coupled with the amount of work required for extra documentation and uncertainty in the entire process, we only reported on three months, so most of us didn’t have enough patients to meet a threshold that would give us higher points,” he said. “We felt confident that we would get credit for what we reported for those three months. We did not feel that we were likely to get more credit for more months of reporting, given the fact that the MIPS program is budget-neutral, so we made a strategic decision to provide limited reporting. Plus we only have limited resources in our single-specialty practice.”
On the other hand, Dr. Kahn says there are specialty-specific metrics that make sense, though they may not be one-size-fits-all.
A common measure for otolaryngology, for instance, is prescribing the antibiotic Augmentin as a first-line therapy for acute sinusitis, along with avoiding CT scans for the condition. But most of Austin ENT’s patients have chronic sinus issues, he says, and those with acute sinusitis typically have been treated with Augmentin before he sees them.
“So if we want to use another antibiotic, we either have to play the game of finding a diagnosis other than acute sinusitis or we have to remember to take the additional time to document why we are using a non-Augmentin antibiotic. That’s an example of how it feels like a game. This metric doesn’t change how we practice, just how we document.”
Big practice, big lift
Even for a gigantic medical operation like The University of Texas MD Anderson Cancer Center, MIPS proved challenging, says George Perkins, MD, chief medical officer of MD Anderson’s Physician Referral Service. The cancer center’s MIPS operation features resources that a small shop can’t deploy, namely a dedicated committee for steering and strategy that includes staff from performance improvement, nursing, and clinical operations, as well as financial, technology, and communications areas.
If you think having those kinds of resources would enable MD Anderson to excel in MIPS, you’d be right. In late July, Dr. Perkins says, the center learned it had cleared the threshold for exceptional performance. But with more than 800 physicians, the analytical work wasn’t done. MD Anderson provided the feedback to every group participant and began parsing it to understand the implications of that score.
Before finishing that deep dive, MD Anderson already was using the feedback as a learning tool.
“For us, we did have resources, but even for an institution that did have resources and content experts in these areas, it was still not an easy lift for us,” Dr. Perkins said. “We had routine meetings. We debated some interpretations. We brought in people to help to make sure that we were doing things not only according to the letter of what was written, but according to the spirit behind the letters that were written.”
Improving MIPS to make it more of an actual quality program, and less of a drive to hit benchmarks, will require “really digging in to find out what physicians truly see as quality,” Dr. Kahn said.
“I don’t think there is a simple, straightforward answer to how you improve this process. I think that’s something that is an evolutionary kind of a process, one where we as an entire group sit down and start talking about what we think is really important, and one where we also engage our patients and the people who are paying for health care in that process as well.”
Final Score: 2017 MIPS final score and corresponding 2019 payment bonus/penalty
- 0-0.75: Penalty of 4%
- 0.76-2.9: Penalty of less than 4%
- 3: Neutral (no payment adjustment)
- 3.1-69.9: Bonus payment
- 70-100: Bonus payment, plus an additional bonus for exceptional performance
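The score bands above amount to a simple lookup from a 2017 final score to its 2019 adjustment category. As an illustrative sketch only (the function name is hypothetical, and the exact bonus percentages within each band were set separately by CMS’s budget-neutral formula):

```python
def adjustment_for_2017_score(score: float) -> str:
    """Map a 2017 MIPS final score (0-100 scale) to its 2019
    payment-adjustment category, per the bands published by CMS."""
    if not 0 <= score <= 100:
        raise ValueError("MIPS final scores range from 0 to 100")
    if score <= 0.75:
        return "Penalty of 4%"
    if score < 3:
        return "Penalty of less than 4%"
    if score == 3:
        return "Neutral (no payment adjustment)"
    if score < 70:
        return "Bonus payment"
    return "Bonus payment plus exceptional-performance bonus"
```

For example, Dr. Schultz’s perfect score of 100 falls in the top band, which carried the exceptional-performance bonus, while the minimum-reporting threshold of 3 points produced no adjustment at all.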
Performance feedback includes the following information, according to the Centers for Medicare & Medicaid Services:
- 2017 final score and 2019 payment adjustment (i.e. bonus or penalty);
- Final performance category scores and weights;
- Scoring and performance details for the Quality and Advancing Care Information (formerly meaningful use) categories;
- Scores for Improvement Activities; and
- Performance details for the Cost category (this category was informational only for 2017 and didn’t factor into a participant’s 2017 score).
Physicians who participated in an alternative payment model (APM) in 2017 should contact their APM administrator for details about performance feedback specific to their model.
Don’t Have a MIPS Meltdown: Options for Help
- To access your Merit-Based Incentive Payment System (MIPS) feedback, go to the Quality Payment Program portal at qpp.cms.gov/login and use your Enterprise Identity Data Management account. If you don’t have one, the Medicare fact sheet at tma.tips/qppcms can help.
- If you’ve already gotten your feedback, and understanding or applying your scores is a puzzle, let TMA Practice Consulting lend a hand. TMA’s practice management consultants provide customized on-site assistance to help you focus on clinical processes, electronic health record optimization, and workflow improvement opportunities necessary for successful MIPS reporting. The in-person assessment also provides assistance with reporting, as well as interpreting MIPS performance feedback to help physicians improve their scoring. TMA Practice Consulting provides services to TMA members at below-market rates. For more information, contact TMA Practice Consulting at (800) 523-8776 or email@example.com.
- TMF Health Quality Institute, whose consultants provide free assistance to MIPS-eligible clinicians in Texas and seven other states, provides another option for understanding your scores. TMF helps physicians review their MIPS feedback through web-based meetings. Elaine Gillaspie, project director with TMF, says the consultants instruct physicians on how to download their feedback reports, walk them through the report, and review the scoring results and payment bonuses or penalties. Their consultants also emphasize what’s needed to successfully report for the 2018 performance year, Ms. Gillaspie says. To find more information, visit tma.tips/qppmips.
File an Appeal by Oct. 1
Update: Originally, CMS said physicians had only until Oct. 1 to ask for an appeal (which the agency calls “targeted review”) of their scores, feedback, and payment adjustments. However, because of errors, CMS extended the deadline to Oct. 15.
Original story: Think MIPS is mistaken? If you participate in MIPS — Medicare’s Merit-Based Incentive Payment System — and haven’t yet checked out your scores and feedback, hurry up and do it before the Oct. 1 deadline hits to request a “targeted review,” or appeal.
According to a Centers for Medicare & Medicaid Services fact sheet on targeted reviews, eligible clinicians may request one for several potential reasons, including: calculation errors, data quality issues, and physicians believing they should qualify for automatic reweighting of performance categories due to the 2017 “extreme and uncontrollable circumstances” policy. That last one could come into play for MIPS participants affected by Hurricane Harvey, for instance.
For links to fact sheets and instructional videos related to MIPS performance feedback and targeted reviews, visit tma.tips/mipspflinks.
Tex Med. 2018;114(9):36-39