Documentation Integrity in the Age of Speech Recognition

By Selena Chavis for For The Record

The role of speech recognition editors continues to grow as quality assurance moves center stage.

It wasn’t that long ago that documentation integrity processes were heavily defined by the work of transcriptionists. Following widespread adoption of EHRs and advances in speech recognition technology, the landscape of health care documentation has shifted dramatically in recent years—and this momentum continues today.

In tandem with the evolution of speech-enabled documentation, the role of traditional transcription has evolved to encompass a much different scope of work and skill level. Enter the era of the speech recognition editor.

“In straight medical transcription, the transcriptionist was focused on things like correct grammar, complete sentences, punctuation, and use of abbreviations nearly as much as ensuring the accuracy of medical content. As speech recognition became more common, a lot of details like correct grammar, complete sentences, and punctuation fell by the wayside,” says Cyndi Sandusky, CHDS, AHDI-F, director of the Association for Healthcare Documentation Integrity (AHDI). “A speech recognition editor is responsible for listening to an audio file produced by a physician and comparing it to the output produced by the speech recognition engine. The editor then makes changes in the document so that it matches the audio. The editor also flags any discrepancies—medical or otherwise—for the dictator to review.”

According to the AHDI, industry predictions over the past decade that the role of transcriptionists in the documentation process might be replaced by technology were simply unfounded. Why? Technology is only as good as the source of information, and the risks associated with missing or inaccurate information surfaced quickly.

Today, speech recognition editors are needed to bring integrity to technology-enabled documentation processes, as evidenced by the growing demand for their skillsets.

Glenn Krauss, principal with Core CDI, believes that a unique opportunity exists for speech recognition editors to expand their role even further beyond simple questions such as: Are there elements missing? Does the sentence make sense? In addition to the comparison and analysis that accompanies speech recognition output, he envisions these professionals providing a critical feedback loop to inform process improvements.

For example, Krauss points to Joint Commission standards that call for approximately 20 elements in a discharge summary to reduce readmissions. Building from their knowledge base, speech recognition editors should analyze whether those elements are present and educate physicians about how to improve documentation for better outcomes, he says.

“These are talented people. They have to know medical terminology. They have to know clinical medicine to a certain extent. They need to know whether a sentence doesn’t make sense,” Krauss emphasizes.

Speech Recognition Editors: A Closer Look

According to the American Healthcare Documentation Professionals Group, speech editors require additional skills beyond those of transcriptionists "that must be honed in order to catch errors made by a hand other than our own, examine the context of the report, and correct any errors made by the speech engine." Consequently, critical thinking is an imperative.

Sandusky notes that editors “teach” the speech recognition engine what a doctor is saying by correcting mistakes in the transcription. In turn, the technology “learns” how to better hear the physician, and accuracy improves.

“However, in order to teach it effectively, we are taught to leave things pretty much exactly as the doctor says. The editor is responsible for ensuring correct words are transcribed and flagging any discrepancies that might be dictated, but things like run-on or incomplete sentences and abbreviations and slang terms are left as dictated,” Sandusky explains. “The main quality assurance change has therefore been to allow such inconsequential errors and differences in style into the final document and to focus more on accuracy of medical information and eliminate inconsistencies.”

A typical process flow might look something like this: A document produced by a speech recognition engine is received. After all demographic, work type, and physician information is verified, the editor listens to the audio file and corrects the document, flagging discrepancies or inconsistencies for the physician. The document is then uploaded to the EHR for the physician to review and sign.

When an error is identified, whether in back-end speech recognition or editing, Sandusky says there is usually a system in place to flag the physician. “Most speech recognition errors are corrected by the editor as they listen to the audio. For front-end speech recognition, where the dictator dictates, reads, and corrects their own report without the middle step of an editor and then signs, there are varying procedures,” she explains.

Because every health care organization has its own inherent workflows, Sandusky stresses that processes vary widely across the industry. For example, each organization will typically define its own categories and types of errors and determine their importance based on its unique patient mix. How feedback is submitted, whether for minor or critical errors, will also vary, as will the process for education and correction.

“The company I work for has a team that audits a percentage of each editor’s production and assigns scores, with a requirement of 99.65% or better as a monthly average,” Sandusky says. “This feedback is provided to the editor regularly in an ongoing quality improvement process.”

Compensation and Contracting

While some companies offer hourly pay, the compensation for editing a line of transcription—typically defined as 65 characters—is generally a percentage of what would be received for straight transcription, Sandusky says. “The idea is that it takes less time to edit than it does to type the line,” she says. “I think the typical range for a speech recognition editing line is 50% to 60% of that for a manually transcribed line, but this varies by company.”

Pay is typically based on productivity. An analysis published in 2018 by Dale Kivi, MBA, a recognized industry documentation thought leader who serves on For The Record’s Editorial Advisory Board, suggested that speech recognition editors were making $8 to $10 per hour, while those on the high end of the productivity scale were making $16 per hour.

Many speech recognition editors work as independent contractors and are responsible for designing and negotiating their own terms. Accordingly, Sandusky suggests considering the following key points:

• What are the quality expectations?
• How is pay affected by quality scores?
• Is feedback provided regarding quality scores?
• Is there a required hourly or daily production amount (minutes of audio or number of lines)?

“I believe industry standard is around 150 to 200 lines per hour, but this varies among employers,” Sandusky says.
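The pay arithmetic described above is straightforward to sketch. The snippet below is only a back-of-the-envelope illustration: the 65-character line definition, the 50% to 60% editing discount, and the 150-to-200-lines-per-hour productivity range come from the article, while the 10-cent transcription line rate is a hypothetical figure chosen for the example.

```python
# Back-of-the-envelope sketch of productivity-based pay for a speech
# recognition editor. The 65-character line, the 50%-60% editing
# discount, and the 150-200 lines/hour productivity range come from
# the article; the $0.10 transcription line rate is hypothetical.

CHARS_PER_LINE = 65


def count_lines(text: str) -> float:
    """Billable lines in a document: total characters / 65."""
    return len(text) / CHARS_PER_LINE


def hourly_pay(lines_per_hour: float, rate_per_line: float) -> float:
    """Hourly earnings under productivity-based pay."""
    return lines_per_hour * rate_per_line


# A hypothetical $0.10 transcription line, edited at 55% of that rate:
editing_rate = 0.10 * 0.55  # $0.055 per edited line
low = hourly_pay(150, editing_rate)    # ~$8.25/hour
high = hourly_pay(200, editing_rate)   # ~$11.00/hour
print(f"${low:.2f} to ${high:.2f} per hour")
```

Under these assumed rates, the 150-to-200-line productivity range lands in roughly the same neighborhood as the hourly figures Kivi reported, which is the point of tying pay to output rather than time.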

Krauss says a typical contract should include scope of work, payment terms, and turnaround time, similar to what would be outlined for traditional transcription services. He also suggests identifying complex scenarios and determining ahead of time how they will be compensated. For example, scenarios involving drug names that must be researched can take extra time.

Forward-Looking Best Practices

Looking ahead, Krauss believes the industry needs to close the feedback gap between speech recognition editors and physicians. For example, he points to missed opportunities with heart failure, explaining that if certain elements of a discharge summary are available to a cardiologist or a primary care doctor, physicians can reduce the potential for readmission by as much as 70%, according to industry best practices.

Krauss believes the goal moving forward should focus on having speech recognition editors look at the record holistically to ensure the highest standard of documentation is being met. This means that instead of considering such areas as chief complaint, history of present illness, past social history, and physical assessment separately, speech recognition editors should have the training, knowledge, and confidence to review these elements together, in essence becoming the industry's eyes and ears for quality documentation.

“Quality of documentation does not mean the record makes sense in terms of sentence structure and making sure holes are filled,” Krauss says. “It means that the record serves as a communication tool. That’s where clinical documentation improvement falls down. We don’t treat it as a communication tool—we treat it as a reimbursement tool.”

Krauss goes on to say that he is not aware of any speech recognition organization that provides feedback and knowledge. “I call it knowledge sharing,” he explains. “If I have the knowledge of what is a good note and what is a bad note, I should approach physicians and say, ‘I would like to share my knowledge and passion for good solid documentation because it’s going to save you a lot of time and help you spend more time working with your patients, not with the computer—working smarter, not harder.’”