
Bad assumption leads to EHR implementation failure

October 15, 2013
by Dennis Grantham, Editor-in-Chief
After an exhaustive process, you've chosen the "perfect" EHR. Now for the hard part.

Much has been written about the importance of selecting the right behavioral health EHR system, but unfortunately, there is no guarantee that a long and rigorous selection process leads to a more successful long-term result.

Instead, at least one EHR company executive believes that many providers who seek to implement EHRs fall into an unexpected trap. After setting up detailed RFPs, conducting long and robust EHR selection processes, narrowing the field of contenders, and then making a very careful final selection, their EHR implementation effort falls apart just before or just after the go-live date. But why?

To understand, let’s first look at where providers are coming from. According to David Klements, CEO of Qualifacts, the behavioral health providers his company is talking to as EHR sales prospects typically fall into three big buckets:

1) Those who are still working on paper—a substantial number;

2) Those with failed attempts at previous EHR implementations (failing to move successfully from paper to an EHR, or failing to migrate successfully from an earlier EHR to a newer system); and

3) Those who have been successful with a previous EHR, but who are looking to replace that system with a newer, more up-to-date product.

The biggest surprise, he says, is in the middle category, those providers whose implementation projects “absorb a lot of resources and don’t get there.” These providers, he says, “have got their costs down, they understand their margins, they’ve got competitive programs in place. So, they seem ready to compete.” But then, he says, their plans are thwarted by a failed EHR implementation project.  

One big problem among these providers, who often turn to another EHR vendor to conduct a “rescue” during or after a failed EHR implementation, “is that they assume the implementation will work,” says Klements. These providers tend to “analyze and demo everything,” comparing features and functions in great detail so that they can select the “perfect” system. But, in so doing, they make a faulty assumption that “because all systems have similar functions, they’ll all function similarly.”

The excessive focus on feature/function characteristics, part of a typically lengthy RFP and review period for EHR selection, sets up an imbalance. “People put so much weight on choosing between features and functions that they lose sight of the goal of the system,” Klements says, reminding providers that “things don’t get better when you choose an EHR system. Nothing happens until you’re operational.” Getting operational demands attention to the challenges of implementation. This can be a challenge for a team that's already been tired out by a year-long selection ordeal.

And it doesn't help that the role of an EHR system often is perceived too narrowly within the provider organization. Clinical staff are, of course, critical EHR users, but in a CMHC with a staff of 150, there are many non-clinical employees, Klements notes. And while clinicians need a good workflow and process to be successful with the EHR, he adds that "the other staff need to use the EHR, too." Once implemented, he maintains, EHRs "basically run the entire operation - scheduling, eligibility verification and authorization management, treatment planning, clinical documentation, billing and payment posting, ePrescribing and laboratory test results, management and operational reporting, and external (payer, compliance) reporting."

Klements notes that providers who fail tend to underestimate the impact of EHR implementation projects and the need to "get everybody involved." He likens the organizational impact of an EHR system in behavioral health to that of an enterprise resource planning (ERP) system in a large manufacturing or refining operation, where upstream processes and outputs are essential to driving downstream processes. Because any of a range of problems - erroneous data, software bugs, human errors, gaps in training, or failures to follow procedures - can cause a cascade of downstream problems that halt key processes, it is essential to root out problems and errors during the implementation phase.

This is done through a series of use-case tests, developed by the vendor and provider, that certify that system elements and processes - people, training, forms, data, software, databases - are all working together smoothly, with appropriate and accurate outputs from one process feeding into others. It is at this point that provider personnel come to see "how one process drives the next, how scheduling and admissions drive clinical and service documentation, which in turn drive claims and billing activity."

He says that the use cases, together with peer performance data, can ensure, before the system's go-live date, that provider staff can "push things through and make certain they work, through your system, through the payer's system." The challenge, says Klements, is that "everything has to be in place for the use-case test to work: all the patient data, all of the approvals, all of the service documentation, all of the codes, all of the fees, all of the claims requirements - everything. Then you run it through, literally, for individual people, insurers, providers, and claims."
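To make the idea concrete, here is a minimal, purely illustrative sketch (in Python) of what one such end-to-end use-case test might look like. The data classes, service code, and fee schedule are hypothetical and are not drawn from any particular EHR product; the point is only the pattern Klements describes: run a single case all the way through and certify that each upstream output correctly feeds the next step.

```python
"""Illustrative sketch only: a self-contained "use-case test" modeled on the
workflow described above (scheduling -> clinical documentation -> claim).
All names, codes, and rules here are hypothetical; a real test would exercise
the provider's actual EHR and the payer's systems."""

from dataclasses import dataclass


@dataclass
class Appointment:
    client_id: str
    service_code: str      # hypothetical billing code agreed on with the payer
    authorized: bool       # payer authorization obtained before the visit


@dataclass
class ServiceNote:
    client_id: str
    service_code: str
    signed: bool           # clinician signature required before billing


@dataclass
class Claim:
    client_id: str
    service_code: str
    fee: float


def document_service(appt: Appointment) -> ServiceNote:
    """Downstream step 1: clinical documentation driven by the schedule."""
    if not appt.authorized:
        raise ValueError("cannot document an unauthorized service")
    return ServiceNote(appt.client_id, appt.service_code, signed=True)


def generate_claim(note: ServiceNote, fee_schedule: dict) -> Claim:
    """Downstream step 2: billing driven by signed documentation."""
    if not note.signed:
        raise ValueError("unsigned notes cannot be billed")
    return Claim(note.client_id, note.service_code, fee_schedule[note.service_code])


def test_scheduling_drives_documentation_and_billing():
    """One end-to-end use case: every upstream output feeds the next step."""
    fee_schedule = {"90837": 145.00}                 # hypothetical code and fee
    appt = Appointment("client-001", "90837", authorized=True)

    note = document_service(appt)
    claim = generate_claim(note, fee_schedule)

    # Certify the handoffs: identifiers, codes, and fees stay consistent.
    assert claim.client_id == appt.client_id
    assert claim.service_code == appt.service_code
    assert claim.fee == fee_schedule[appt.service_code]


if __name__ == "__main__":
    test_scheduling_drives_documentation_and_billing()
    print("use-case test passed")
```

In a real implementation project, each such test would be built jointly by the vendor and provider against live configuration - real forms, real payer rules, real staff following real procedures - rather than in-memory stand-ins like these.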

It's a lot of work, but there's no other way. "If you test carefully before you go live, then the expectation is success. The go-live should be a non-event," says Klements.

Comments

I must respectfully disagree with Mr. Klements when he says that providers who "analyze and demo everything," comparing features and functions in great detail so that they can select the "perfect" system, are making a faulty assumption that "because all systems have similar functions, they'll all function similarly."

Focus on feature/function characteristics is critical to the selection process. It is the difference between finding the best fit for an organization and an organization making do with whatever the vendor's limitations may be. Additionally, I take umbrage at the implication that providers are not savvy or intelligent enough to consider the impact of EHR implementation across their organization and its staff. And for smaller organizations, which may have less experience and knowledge, isn't it the role of the vendor to educate the potential client so that they do consider these aspects before they purchase?

Having worked in several organizations both prior to and after EHR purchase, I would say that the issues with the greatest impact on successful EHR implementation are:

1) Vendors who misrepresent the full functionality and limits of their products; and
2) Vendors not educating organizational leaders about the range and depth of resources necessary for EHR implementation, ongoing user support, product troubleshooting, and vendor communications and negotiations post-implementation.

After all, it is the EHR vendors who are the experts and who have the widest view across the many customers to whom they have sold their products; they are the ones who have seen the failures and the successes. One would expect they would want to share this knowledge with potential customers so that more organizations are successful the first time out, and they can then tout these successes as part of their sales process.