
Talking it through: speech recognition takes the strain of digital transformation

By Nuance for Healthcare IT

HITN: COVID-19 has further exposed employee stress and burnout as major challenges for healthcare. Tell us how we can stop digital transformation technologies from simply adding to them.

Wallace: By making sure that they are adopted for the right reasons – meeting clinicians’ needs without adding more stress or time pressures to already hectic workflows. For example, because COVID-19 was a new disease, clinicians had to document their findings quickly and in detail, often while wearing PPE, without the process slowing them down. I think speech recognition technology has been helpful in this respect, not just because of speed but also because it gives the clinician time to provide more quality clinical detail in the content of a note.

In a recent HIMSS/Nuance survey, 82% of doctors and 73% of nurses felt that clinical documentation contributed significantly to healthcare professional overload. It has been estimated that clinicians spend around 11 hours a week creating clinical documentation, and up to two thirds of that can be narrative.

HITN: How do you think speech recognition technology can be adapted into clinical tasks and workflow to help lower workload and stress levels?

Wallace: One solution is cloud-based AI-powered speech recognition: instead of either typing in the EPR or EHR or dictating a letter for transcription, clinicians can use their voice and see the text appear in real time on the screen. Using your voice is a more natural and efficient way to capture the complete patient story. It can also speed up navigation in the EPR system, helping to avoid multiple clicks and scrolling. The entire care team can benefit – not just in acute hospitals but across primary and community care and mental health services.
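
To make the navigation point concrete, here is a minimal sketch of how spoken commands could map to EPR actions instead of clicks and scrolling. The command names and actions are illustrative assumptions, not Nuance’s API.

```python
# Minimal sketch of voice-driven EPR navigation: spoken commands map to
# screens or actions instead of repeated clicks and scrolling.
# Command names and actions are illustrative only, not a Nuance API.
VOICE_COMMANDS = {
    "show latest bloods": "open_results_panel",
    "go to medication list": "open_medication_list",
    "insert discharge summary template": "insert_template:discharge_summary",
}


def handle_command(spoken_text: str) -> str:
    """Return the EPR action bound to a recognised voice command."""
    action = VOICE_COMMANDS.get(spoken_text.lower().strip())
    # Anything that is not a navigation command is treated as dictation.
    return action if action else "dictate_into_current_field"


print(handle_command("Show latest bloods"))  # -> open_results_panel
```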

HITN: Can you give some examples where speech recognition has helped to reduce the pressure on clinicians?

Wallace: In hospitals where clinicians have created their outpatient letters using speech recognition, reductions in turnaround times from several weeks down to two or three days have been achieved across a wide range of clinical specialties. In some cases where no lab results are involved, patients can now leave the clinic with their completed outpatient letter.

In the Emergency Department setting, an independent study found that speech recognition was 40% faster than typing notes and has now become the preferred method for capturing ED records. The average time saving in documenting care is around 3.5 minutes per patient – in this particular hospital, that is equivalent to 389 days a year, or two full-time ED doctors!
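
Those figures can be sanity-checked with a quick back-of-the-envelope calculation, assuming roughly 53,000 ED attendances a year and 8-hour clinical working days; neither number is stated in the interview.

```python
# Back-of-envelope check of the ED time-saving claim.
# Assumed (not from the interview): ~53,000 ED attendances per year
# and 8-hour clinical working days.
minutes_saved_per_patient = 3.5
attendances_per_year = 53_000
working_day_minutes = 8 * 60

working_days_saved = minutes_saved_per_patient * attendances_per_year / working_day_minutes
print(f"{working_days_saved:.0f} working days saved per year")
# ~386 working days, close to the ~389 quoted, or roughly two full-time
# ED doctors at ~190-200 working days each per year.
```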

HITN: How do you see the future panning out for clinicians in the documentation space when it comes to automation and AI technologies?

Wallace: I think we are looking at what we call the Clinic Room of the Future, built around conversational intelligence. No more typing for the clinician, no more clicks, no more back turned to the patient hunched over a computer.

The desktop computer is replaced by a smart device with microphones and movement sensors. Voice biometrics allow the clinician to sign in to the EPR verbally and securely (“my voice is my password”), with a virtual assistant responding to voice commands. The technology recognises non-verbal cues – for example, when a patient points to her left knee but only says that it is her knee. The conversation between the patient and the clinician is fully diarised, while in the background Natural Language Processing (using Nuance’s Clinical Language Understanding engine) works to create a structured clinical note that summarises the consultation and codes the clinical terms, e.g. with SNOMED CT.
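
To make that flow concrete, the sketch below shows how such a consultation pipeline might hang together. The names, the voiceprint check and the keyword-based coding are simplified assumptions for illustration, not Nuance’s Clinical Language Understanding engine.

```python
# Illustrative sketch of a "Clinic Room of the Future" pipeline.
# All names and logic are hypothetical simplifications, not Nuance APIs.
from dataclasses import dataclass, field


@dataclass
class StructuredNote:
    summary: str = ""
    snomed_codes: set = field(default_factory=set)


def sign_in_with_voice(sample: bytes, enrolled: dict) -> str:
    """'My voice is my password': match a spoken sample against enrolled voiceprints."""
    for clinician_id, voiceprint in enrolled.items():
        if voiceprint == sample:  # a real system would score biometric similarity
            return clinician_id
    raise PermissionError("Voice not recognised")


def document_consultation(diarised_turns: list) -> StructuredNote:
    """Turn a diarised patient/clinician conversation into a structured, coded note."""
    note = StructuredNote()
    for speaker, utterance in diarised_turns:
        note.summary += f"{speaker}: {utterance}\n"
        if "knee" in utterance.lower():
            # Placeholder for clinical language understanding: a real engine
            # would map findings to actual SNOMED CT concept codes.
            note.snomed_codes.add("SNOMED CT: <knee finding>")
    return note


clinician = sign_in_with_voice(b"sample", {"dr_wallace": b"sample"})
note = document_consultation([("patient", "It is my knee that hurts."),
                              ("clinician", "How long has the knee been painful?")])
print(clinician, note.snomed_codes)
```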

The result is a more professional and interactive clinician/patient consultation.

Healthcare IT News spoke to Dr Simon Wallace, CCIO of Nuance’s healthcare division, as part of the ‘Summer Conversations’ series.


VA to move Nuance’s voice-enabled clinical assistant to the cloud: 5 details

By Katie Adams for Becker’s Hospital Review

The Department of Veterans Affairs is migrating Nuance’s automated clinical note-taking system to the cloud, the health system said Sept. 8.

Five details:

  1. The VA will use the Nuance Dragon Medical One speech recognition cloud platform and Nuance’s mobile microphone app, allowing physicians to use their voices to document patient visits more efficiently. The system is intended to allow physicians to spend more time with patients and less time on administrative work.
  2. The VA deployed Nuance Dragon Medical products systemwide in 2014. It is now upgrading to the system’s cloud offering so its physicians can utilize the added capabilities and mobile flexibility.
  3. The VA’s decision to adopt the technologies was approved by the Federal Risk and Authorization Management Program (FedRAMP), ensuring Nuance’s products adhere to the government’s latest guidance on data security and privacy.
  4. “The combination of our cloud-based platforms, secure application framework and deep experience working with the VA health system made it possible for us to demonstrate our compliance with FedRAMP to meet the needs of the U.S. government. We are proving that meeting security requirements and delivering the outcomes and workflows that matter to clinicians don’t have to be mutually exclusive,” Diana Nole, Nuance’s executive vice president and general manager of healthcare, said in a news release.
  5. Nuance Dragon Medical One is used by more than 550,000 physicians.

There’s Nothing Nuanced About Microsoft’s Plans For Voice Recognition Technology

By Enrique Dans for Forbes

Several media outlets had already reported on Microsoft’s advanced talks over an eventual acquisition of Nuance Communications, a leader in the field of voice recognition with a long and troubled history of mergers and acquisitions. The deal, finally announced on Monday, had been estimated to be worth as much as $16 billion, which would make it Microsoft’s second-largest acquisition after its $26.2 billion purchase of LinkedIn in June 2016; it ended up closing at $19.7 billion, a 23% premium over the company’s share price on Friday.

After countless mergers and acquisitions, Nuance Communications has ended up nearly monopolizing the market in speech recognition products. It started out as Kurzweil Computer Products, founded by Ray Kurzweil in 1974 to develop character recognition products, and was then acquired by Xerox, which renamed it ScanSoft and subsequently spun it off. ScanSoft was acquired by Visioneer in 1999, but the consolidated company retained the ScanSoft name. In 2001, ScanSoft acquired the Belgian company Lernout & Hauspie (which had previously acquired Dragon Systems, creator of the popular Dragon NaturallySpeaking) to compete in the speech recognition market with Nuance Communications, which had been publicly traded since 1995. Dragon was the absolute leader in speech recognition accuracy through its use of hidden Markov models as a probabilistic method for temporal pattern recognition. Finally, in September 2005, ScanSoft decided to acquire Nuance and take its name.

Since then, the company has grown rapidly through acquisitions, buying as many as 52 companies in the field of speech technologies across all kinds of industries and markets. The result is a conglomerate that has largely monopolized related commercial developments and licenses its technology to all kinds of companies: Apple’s Siri was originally based on Nuance technology, although it is unclear how dependent on the company it remains.

The Microsoft purchase reveals the company’s belief in voice as an interface. The pandemic has seen videoconferencing take off, triggering an explosion in the use of technologies to transcribe voice: Zoom, for example, incorporated automatic transcription in April last year using Otter.ai, so that at the end of each of my classes, I automatically receive not only the video of them, but also their full transcript (which works infinitely better when the class is online than when it takes place in face-to-face mode in a classroom).

Microsoft, which is in the midst of a period of strong growth through acquisitions, had previously collaborated with Nuance in the healthcare industry, and many analysts feel that the acquisition is intended to deepen that collaboration even further. However, Microsoft could also be planning to integrate transcription technology into many other products, such as Teams, or throughout its cloud, Azure, allowing companies to make their corporate environments fully indexable by creating written records of meetings that can be retrieved at a later date.

Now, Microsoft will try to raise its voice — it has almost twenty billion reasons to do so — and use it to differentiate its products via voice interfaces. According to Microsoft, a pandemic that has pushed electronic and voice communications to the fore is now the stimulus for a future with more voice interfaces, so get ready to see more of that. No company plans a twenty billion dollar acquisition just to keep doing the same things they were doing before.


Leveraging AI-powered speech recognition tech to reduce NHS staff burnout

From Open Access Government

The last 18 months have pushed our National Health Service (NHS) to breaking point. Services that were already overstretched and underfunded have been subjected to unprecedented strain on their resources. This strain has now become a national emergency, risking the entire future of the health service, according to a recent government report.

From treating countless Covid-19 cases and supporting vaccination programmes, to providing essential treatment and care, UK healthcare professionals are at maximum capacity and, understandably, struggling to cope. In fact, a recent survey from Nuance revealed that this period has led to dramatic increases in stress and anxiety across primary (75%) and secondary (60%) care within the NHS. When excessively high levels of stress are experienced over a prolonged period, the result can be clinician burnout which, in turn, can leave many feeling they have no choice but to leave the medical profession altogether. In England, GP surgeries lost almost 300 full-time medical professionals in the three months prior to Christmas and, by 2023, a shortfall of 7,000 GPs is anticipated, according to recent reports. In addition, it is believed that up to a third of nurses are thinking about leaving their profession due to pandemic-related burnout.

These individuals enabled and maintained a new front line in the wake of the pandemic. They are also the people that we applauded every week and depended on during the most challenging days. However, the unwavering pressure and heavy workloads are causing significant damage to their own health. An urgent and effective solution is required if the NHS is to continue delivering its life-saving services and care.

The burden of administrative processes

Over the course of the pandemic, the way in which healthcare services are delivered has changed. One of the most significant changes has been a shift towards teleconsultations or virtual appointments. An RCGP investigation of GP appointments found that, prior to the pandemic, as many as 70% of consultations were face-to-face. This fell to 23% during the first weeks of the crisis.

While some medical professionals and patients are in favour of this new format, for many, the swift switch to a virtual approach has generated an influx of workload, especially when it comes to documentation processes. In fact, Nuance’s research revealed that 67% of primary care respondents believe the pandemic has increased the overall amount of clinical administration. Although there are several contributing factors, such as heavy workloads and time pressure, the transition towards remote consultations appears to be a significant one. This is because the clinical risk and diagnostic uncertainty of remote consultations are generally higher than for face-to-face appointments. Also, patients who are triaged by telephone often still need a follow-up face-to-face appointment, leading to more double handling of patients than in the past.

Before the pandemic, clinicians were reportedly spending an average of 11 hours per week on clinical documentation. This figure is only likely to have increased during the pandemic’s peak, when hospitals were at their busiest and remote appointments were most needed. And, we’re not in the clear yet, as the vaccination programme continues to progress and teleconsultation is set to stay. Therefore, moving forward, we need to think about how we can best support our clinical professionals by easing their administrative burden.

AI-powered speech recognition: a step in the right direction

Modern technologies – such as speech recognition solutions – can be leveraged to help reduce some of the administrative pressures being placed on clinical professionals and enable them to work smarter and more effectively. These technologies are designed to recognise and record passages of speech, converting them into detailed clinical notes, regardless of how quickly they’re delivered. By reducing repetition and supporting standardisation across departments, they can also enhance the accuracy as well as the quality of patient records. For example, voice activated clinical note templates can provide a standardised structure to a document or letter, thus meeting the requirements set out by the PRSB (Professional Record Standards Body).
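
As a simple illustration, a voice-activated template might impose a fixed set of section headings that dictated text drops into; the headings below are generic examples, not the PRSB specification itself.

```python
# Minimal sketch of a voice-activated clinical letter template.
# Headings are generic examples, not the PRSB standard.
OUTPATIENT_LETTER_TEMPLATE = {
    "Diagnosis": "",
    "History": "",
    "Examination findings": "",
    "Medications": "",
    "Plan and follow-up": "",
}


def fill_template(dictated_sections: dict) -> str:
    """Merge dictated text into the fixed template so every letter shares one structure."""
    letter = dict(OUTPATIENT_LETTER_TEMPLATE)
    for heading, text in dictated_sections.items():
        if heading not in letter:
            raise KeyError(f"Unknown section: {heading}")
        letter[heading] = text
    return "\n\n".join(f"{heading}:\n{text}" for heading, text in letter.items())


# Example: the clinician says "insert outpatient template", then dictates into each section.
print(fill_template({"Diagnosis": "Osteoarthritis of the left knee",
                     "Plan and follow-up": "Physiotherapy referral; review in six weeks"}))
```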

Using secure, cloud-based speech solutions, healthcare professionals are able to benefit from these tools no matter where they are based. The latest technologies provide users with the option to access their single voice profile from different devices and locations, even when signing in from home. This advancement could significantly reduce the administrative burden of virtual consultations, therefore helping to decrease burnout levels amongst NHS staff.

Calderdale and Huddersfield NHS Trust is one of many organisations already benefiting from this technology. The team there leveraged speech recognition as part of a wider objective to help all staff members and patients throughout the Covid-19 crisis. Serving a population of around 470,000 people and employing approximately 6,000 staff, the trust wanted to save time and enable doctors to improve safety, whilst minimising infection risk. By using this technology on mobile phones, clinicians could instantly update patient records without having to touch shared keyboards. Having experienced the benefits of this solution, the trust is considering leveraging speech recognition to support virtual consultations conducted over MS Teams, in order to enhance the quality of consultations while alleviating some of the pressures placed upon employees.

This challenging period has only emphasised how vital the NHS is to the UK. However, the increased workloads and administrative duties brought on by the pandemic are causing higher levels of burnout than ever before. Something needs to change, and although technological advancements such as AI-powered speech recognition are now part of the solution, there is also a need for public bodies to determine why the administrative burden has continued to rise, and perhaps to reassess the importance of bureaucratic tasks and where it is essential for information to be recorded.


What Microsoft’s Acquisition of Nuance Could Mean For The Future of Workplace AI

By Zachary Comeau for My Tech Decisions

Microsoft’s recent announcement that it is acquiring healthcare artificial intelligence and voice recognition company Nuance could signal a new era of voice-enabled technologies in the enterprise.

Nuance’s speech recognition technology for medical dictation is currently used in 77% of U.S. hospitals, and Microsoft plans to integrate those technologies with its Microsoft Cloud for Healthcare offering that was introduced last year.

However, the purchase price of $19.7 billion indicates that Microsoft has plans to bring more voice recognition technology to other vertical markets aside from healthcare.

We sat down with Igor Jablokov, founder and CEO of augmented AI company Pryon and an early pioneer of automated cloud platforms for voice recognition that helped invent the technology that led to Amazon’s virtual assistant Alexa, to talk about Microsoft’s move and how intelligent voice technology could impact the workplace.

What do you make of Microsoft’s acquisition of Nuance?

So look, it’s going to be a popular thing to talk about moves in healthcare, especially as we’re still in the throes of this pandemic, and most of us, I’m sure, had a challenging 2020. So that’s a great way to frame the acquisition, given Nuance’s medical dictation and the other types of products they’ve inserted into the healthcare workflow. So that makes sense. But would anybody actually pay that much for something just for healthcare? I would imagine Microsoft could have had as big an impact, if not larger, going directly for one of those EHR companies like Epic. So that’s why I’m like, “All right, healthcare, that’s good.” Is it going to be a roll-up where they go after Epic and places like that, where there’s already lots of stored content, and then vertically integrate the whole thing? That’s the next play that I would see: they’re gunning to own that workflow. So that’s that piece. On the other hand, I see it as a broader play in employee productivity, because whenever Microsoft really opens up its pocketbook like it did here (this was, what, their second-largest acquisition?), it’s typically to reinforce the place where they’re the strongest and where their dairy cow is, and that’s employee productivity.

Microsoft has never been solely focused on healthcare. Their bread and butter is the enterprise. So how can the same technologies be applied to the enterprise?

You’re exactly right. Now, why do we have special knowledge of the Nuance stuff? Well, the team that’s in this company, Pryon, actually developed many of the engines inside of Nuance. Many years ago, Nuance felt like their engines were weak and that IBM’s were ahead of the curve, if you will. I believe around the 2008 downturn, they came in to acquire the majority of IBM’s speech, chat and related AI technologies. And my current chief technology officer was assigned to that project, collaborating with them to integrate it all into their work for half a decade. So that’s the plot twist here: we have a good sense of these engines now. It is true that these engines were behind Siri and all these other experiences, but in reality it wasn’t Nuance engines, it was IBM engines acquired through Nuance that ended up getting placed there, because of how highly accurate and flexible they were.

So let’s start with something like Microsoft Teams. To continue bolstering Teams with things like live transcription, to put a little AI system inside of Teams that has access to the enterprise’s knowledge as people are discussing things – it may not even be any new product. It could just be all the things Microsoft is already doing, but they needed more hands on deck, this being a massive acqui-hire in terms of having more scientists and engineers working on applied AI. So I would say a third of it is that they need more help with things they’re already doing, a third of it is a healthcare play, though I would watch for other moves towards vertical integration there, and then a third is for new capability that we haven’t experienced yet on the employee productivity side of Microsoft.

Microsoft already has their version of Siri and Alexa: Cortana. What do you think about Cortana and how it can be improved?

They attempted to make it their thing everywhere. They just pulled it off the shelves – or the proverbial shelves – on mobile, so it no longer exists as consumer tech. So the only place it lives now is on Windows desktops, right? That’s not a great entry point. Then they tried doing the mashup where Cortana could be called via Alexa and vice versa. But when I talked to the folks at Amazon, I’m like, “Look, you’re not going to allow them to really do what they want to do, right? Because they’re not going to allow you to do what you want to do on those desktops.” So it almost ends up being this weird thing, like calling into a contact center and being transferred to another contact center. That’s what it felt like. In this case, Alexa got the drop on them, which is strange and sorrowful in some ways.

Other AI assistants like Alexa are much further along than Cortana, but why aren’t we seeing much adoption in the enterprise?

There are multiple reasons for that. There’s the reason of accuracy. And accuracy isn’t just that you say something and you get an answer. It’s also where you get it from. It has to be tied into enterprise data sources, right? Because most enterprises are not like what we have at home, where we buy into the Apple ecosystem, the Amazon ecosystem, the Google ecosystem. They’re heterogeneous environments where they have bits and pieces from every vendor. The next piece is latency and getting quick results that are accurate at scale. And then the last thing is security, right? There are certainly things that Alexa developers do not get access to, and that’s not going to fly in the enterprise space. One of the things that we hear from enterprises, in pilots and in production, is that what they’re starting to put into these APIs is becoming their crown jewels, the most sensitive things they’ve got. And if you actually read the terms and conditions from a lot of the big tech companies that are leveraging AI stuff, they’re very nebulous about where the information goes, right? Does it get transcribed or not? Are people eyeballing this stuff or not? And so most enterprises are like, “Hold on a second. We make these microchips, and you want us to put our secrets, the M&A deals we’re about to do, in there?” They’re uncomfortable about that. It’s just a different ball of wax. And that’s why I think it’s going to be purpose-built companies that are going to be developing enterprise APIs.

I think there will be a greater demand for bringing some of these virtual assistants we all know to the enterprise – especially since we’ve been at home for over a year and using them in our homes.

Your intuition is spot on. It’s not even so much people coming from home into work environments – it’s a whole generation that has been reared with Alexa and Siri and these kinds of things. When you actually look at the majority of user experiences at work, using Concur or SAP or Dynamics or Salesforce or any of these types of systems, that generation is going to toss grenades at this stuff over time, especially as they rise in authority and expand their influence over their careers. I think there’s going to be a new generation of enterprise software that’s going to be purpose-built for these folks who are going to be taking over business. That’s basically the chink in the armor for any of these traditional enterprise companies. If you look at Oracle, IBM, HP, Dell, any one of them, I don’t know where they go, at least on the software side. When a kid has grown up with Alexa and there they are at 26 years old, they’re like, “No, I’m not gonna use that. Why can I just blurt something out at home and get an instant answer, but here I am running a region of Baskin-Robbins and I can’t say, ‘How many ice cream cones did we sell when it was 73 degrees out?’ and get an instant answer one second later?” So that’s what’s going to happen. As a company, since our inception, we’ve been architected not for the current world but for this future world. Elements of this are already in production, as we announced with Georgia-Pacific in late January, and we’re working through it. And I have to say, one of the biggest compliments I get, whether I’m showing this to big enterprises or government agencies and the like, is that fundamentally they’re like, “Holy smokes, this doesn’t feel like anything else that we use.” Behind the scenes, not only are we using top-flight UX folks to develop this, but we’re also working with behavioral scientists and the like, because we want people to want to use our software, not have to use our software. Most enterprise software gets chosen by the CIO, the CTO, the CISO, and people like that, and most of them are thinking about checking off boxes on functionality. And most enterprise developers cook up their blue and white interface, get the feature function in there and call it a day. I think they’re missing such opportunities by not finishing the work.


Microsoft buys speech recognition company Nuance in $16B deal, second biggest since LinkedIn

By Steve Kovach for CNBC

Microsoft announced Monday that it will buy speech recognition company Nuance Communications for $16 billion, the tech giant’s largest acquisition since it bought LinkedIn for more than $26 billion in 2016.

The deal is the latest sign Microsoft is hunting for more growth through acquisitions. The company is also reportedly in talks to buy the chat app Discord for about $10 billion and last year tried to buy TikTok’s U.S. business for about $30 billion before the deal was derailed. Last month, Microsoft acquired gaming company ZeniMax for $7.6 billion.

Shares of Nuance were up nearly 23 percent in premarket trading Monday, representing approximately the same premium Microsoft plans to pay based on the closing price Friday. Trading in the stock was halted after that pop and was expected to resume around 9 a.m. ET. Microsoft shares were slightly negative.

Nuance would be aligned with the part of Microsoft’s business that serves businesses and governments. Nuance derives revenue by selling tools for recognizing and transcribing speech in doctor’s visits, customer-service calls and voicemails. In its announcement Monday, Microsoft said Nuance’s technology will be used to augment Microsoft’s cloud products for health care, which were launched last year.

The company reported $7 million in net income on about $346 million in revenue in the fourth quarter of 2020, with revenue declining 4 percent on an annualized basis. Nuance was founded in 1992, and had 7,100 employees as of September 2020.

Microsoft said Nuance’s CEO Mark Benjamin will remain at the company and report to Scott Guthrie, the Microsoft executive in charge of the company’s cloud and artificial intelligence businesses.

Nuance has a strong reputation for its voice recognition technology, and it has been considered an acquisition target for companies like Apple, Microsoft and more for several years. Microsoft has voice recognition built into many of its products already, but it has recently shut down some products featuring its voice assistant Cortana.


Nuance’s voicebot helps people book COVID-19 vaccines

By Leila Hawkins for Healthcare Global

Software firm Nuance Communications has developed a conversational voicebot that can guide users through questions about COVID-19, confirm their eligibility for a vaccination and schedule appointments where available.

One of Nuance’s Intelligent Engagement solutions, it also sends the user an SMS text to confirm appointments after the call.
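
A minimal sketch of the kind of call flow described above follows; the eligibility check, sample data and wording are assumptions for illustration, not Nuance’s or Walgreens’ actual implementation.

```python
# Hypothetical sketch of a vaccine-scheduling voicebot call flow.
# Eligibility rules, slots and wording are illustrative assumptions only.
AVAILABLE_SLOTS = {"Store 114": ["Tue 10:00", "Wed 14:30"]}  # sample data


def handle_call(age: int, store: str, phone: str) -> str:
    # 1. Screening questions / eligibility check (simplified to an age check).
    if age < 16:
        return "Sorry, you are not currently eligible for a COVID-19 vaccine."
    # 2. Offer an appointment where one is available.
    slots = AVAILABLE_SLOTS.get(store, [])
    if not slots:
        return f"No appointments are available at {store} right now."
    slot = slots.pop(0)
    # 3. Confirm by SMS after the call (stubbed as a print).
    print(f"SMS to {phone}: your vaccine appointment at {store} is confirmed for {slot}.")
    return f"You're booked at {store} on {slot}."


print(handle_call(age=34, store="Store 114", phone="+1-555-0100"))
```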

The voicebot is currently being deployed by Walgreens – one of the largest pharmacy retail chains in the US. Walgreens customers can call a helpline or an individual store to speak to the voicebot, which is available in English and Spanish, 24 hours a day.

“Ensuring equitable access to care is essential,” said Robert Weideman, Executive Vice President and General Manager, Nuance. “Using our proven voice- and AI-powered solutions to help as many Walgreens customers as possible experience a more modern, convenient, and secure process for scheduling their COVID vaccine appointments is one of the most important outcomes we can achieve.”

Nuance are pioneers in conversational AI and have also developed Dragon Medical One, a cloud-based, GDPR-compliant speech recognition tool that enables clinicians to use their voice to capture patient information. This data can then be stored across a number of different platforms. 

In February this year, the tool was given the #1 Best in KLAS Award for Speech Recognition (Front-end EMR).


The Rise Of AI Voice Assistants In Clinical Documentation

By Sindhu Kutty for Forbes

Medical decision-making must remain with clinicians, but why does cumbersome data entry work continue to bog down their time? Can AI be used to allow physicians to spend less time on administrative tasks and more on value-added care?

Physicians can spend approximately one-third of their time creating notes and reviewing medical records in the electronic health record (EHR), and while some of this is related to bolstering ongoing care to help patients achieve positive health outcomes (for example, ensuring continuity of care for the patient between venues), the majority is for billing documentation (financial reimbursement) and to ensure regulatory compliance. And this comes at a significant cost. As payment models become more complex, physicians are seeking ways to improve clinical documentation. AI shows great promise.

This is particularly true for the tail end of the patient-physician encounter during the clinical validation or data reviews conducted for reimbursement, research and quality improvement. For example, Cerner, a leading EHR vendor, has developed a natural language processing (NLP) engine to automate medical chart reviews by evaluating EHR data and identifying opportunities for improvement and validation of documentation for in-patient encounters in near real-time.
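
As a rough illustration of what automated chart review involves, the toy rule-based check below flags notes whose documentation may be incomplete; it is a stand-in for the idea, not Cerner’s NLP engine.

```python
# Toy chart review: flag encounter notes with documentation-improvement opportunities.
# A simple rule-based stand-in, not Cerner's NLP engine.
REQUIRED_SECTIONS = ("diagnosis", "plan")


def review_note(note_text: str) -> list:
    """Return documentation-improvement opportunities found in one encounter note."""
    text = note_text.lower()
    findings = [f"Missing section: {s}" for s in REQUIRED_SECTIONS if s not in text]
    if "pneumonia" in text and "chest x-ray" not in text:
        findings.append("Pneumonia documented without a supporting imaging reference.")
    return findings


print(review_note("Assessment: community-acquired pneumonia. Plan: start antibiotics."))
# -> ['Missing section: diagnosis', 'Pneumonia documented without a supporting imaging reference.']
```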

That is great on the tail end, but what about the first step to excellent clinical documentation in the patient-physician encounter — data entry? The ultimate success would be clinical documentation software taking structured data from the patient-physician encounter and automatically entering it into its EHR field without any human help.

Voice recognition can help eliminate the burden of data entry from the care team. Physicians have long utilized medical dictation to dictate a structured clinical note along with human-powered medical transcription services, or software such as Dragon in conjunction with EHR, to alleviate their administrative burden. However, ambient voice can now get the same data from a natural interaction between physician and patient, which removes data entry completely. AI can automate the process (by becoming an “auto-scribe”) of producing clinical notes through speech recognition and natural language processing technology, in real-time, by listening in on patient-physician conversations or from summaries provided by physicians’ post-encounters with patients.
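
A minimal sketch of the auto-scribe idea, assuming the speech-to-text step has already produced a diarised transcript; the regular-expression extraction rules are toy placeholders, not any vendor’s NLP.

```python
# Toy "auto-scribe": turn a diarised transcript into draft EHR fields.
# Extraction rules are placeholders, not any vendor's NLP.
import re

transcript = [
    ("physician", "Your blood pressure is 150 over 95 today."),
    ("physician", "Let's start lisinopril 10 mg daily and see you in four weeks."),
    ("patient", "Okay, that works for me."),
]

draft_note = {"vitals": [], "medication_orders": [], "follow_up": []}

for speaker, utterance in transcript:
    if speaker != "physician":
        continue  # in this toy example only the clinician's statements drive orders
    if m := re.search(r"(\d{2,3}) over (\d{2,3})", utterance):
        draft_note["vitals"].append(f"BP {m.group(1)}/{m.group(2)}")
    if m := re.search(r"start (\w+ \d+ ?mg \w+)", utterance):
        draft_note["medication_orders"].append(m.group(1))
    if "week" in utterance:
        draft_note["follow_up"].append(utterance)

print(draft_note)
```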

There are a significant number of market entrants in this space. For example, Nuance Communications provides software that uses neural networks to map patient-physician conversations into a note in the EHR (AI-based speech-to-text system) but requires wall-mounted devices with microphones to record each interaction. Nuance’s latest acquisition, Saykara, uses an AI voice assistant via a mobile application on the physician’s cell phone to transform salient conversational content between physicians and patients into clinical notes, prescription orders and referrals, and then populate both structured and narrative data directly into EHRs. This documentation of encounters in real-time negates the need for manual data entry or human transcription. In addition, physicians can also use this technology from within Zoom videoconference calls to document telehealth visits.

The U.S. transcription market size is approximately $20 billion (led by its use in health care) and is bolstered by this transition from traditional to AI-powered solutions, with voice recognition playing a significant role in this forecast. Through the use of innovative technologies like voice recognition and NLP, physicians can recoup up to three hours per day for direct patient care and in the service of healing. This could mean an increase of up to a third in patient revenue per day, which amounts to a significant financial stream for provider organizations.
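
The “up to a third” figure is consistent with a simple back-of-the-envelope calculation, assuming roughly nine hours of direct patient care per day and revenue proportional to that time; both are assumptions for illustration, not figures from the article.

```python
# Back-of-envelope check of the "up to a third more patient revenue" claim.
# Assumed (not from the article): ~9 hours of direct patient care per day,
# with revenue roughly proportional to direct-care time.
hours_recouped = 3
current_direct_care_hours = 9

revenue_uplift = hours_recouped / current_direct_care_hours
print(f"Potential uplift: {revenue_uplift:.0%}")  # ~33%, i.e. roughly a third
```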

Based on my nearly two decades of experience consulting with more than 20 U.S. health systems, using AI to transform the clinical documentation portion of care delivery can create more seamless experiences for both patients and physicians on the continuum of care and should be explored as part of the overall digital health strategy. AI holds a lot of promise in leaning out a number of processes and reducing the burden on already overtaxed physicians especially during Covid-19. The return on investment is there, and I encourage CIOs and other health care executives to consider building these technologies into their strategic road maps.


Nuance Dragon Medical One Earns #1 Best in KLAS Award for Speech Recognition: Front-End EMR

From PR Newswire

BURLINGTON, Mass., Feb. 2, 2021 /PRNewswire/ — Nuance Communications, Inc. (NASDAQ: NUAN) today announced that Nuance Dragon® Medical One cloud-based speech recognition platform captured top honors as the 2021 Best in KLAS: Software & Services award winner, earning praise from clinicians as the #1 conversational AI speech recognition solution helping deliver and document better patient care.

In addition to the 2021 Best in KLAS Speech Recognition (Front-End EMR) award, Nuance also captured the 2021 Best in KLAS Quality Management award. This is the sixth year Nuance has ranked first for its cloud-based quality management solutions used by health systems to capture, monitor, and report hospital and physician performance data to improve care quality, patient safety, and financial integrity.

“The KLAS awards are significant additions to the growing number of distinctions that Dragon Medical One and our entire healthcare solutions portfolio are winning for improving care quality, patient experiences, and health system financial resilience while meaningfully addressing physician burnout,” said Diana Nole, Executive Vice President and General Manager of Healthcare at Nuance. “The Best in KLAS designations also reflect the continuous innovations and enhancements we’re adding to the healthcare solution portfolio as part of our comprehensive approach to using conversational and ambient AI to solve healthcare’s greatest challenges.”

With deployments at health systems worldwide, Nuance Dragon Medical is trusted by over 550,000 physicians. The Nuance Dragon Ambient eXperience™ (DAX™) ambient clinical intelligence (ACI) solution further extends the power of Nuance Dragon Medical to create a fully voice-enabled and ambient environment for exam room and telehealth visits to enhance clinical documentation and improve provider-patient experiences.

“Each year, thousands of healthcare professionals across the globe take the time to share their voice with KLAS,” said Adam Gale, President of KLAS. “They know that sharing their perspective helps vendors to improve and helps their peers make better decisions. These conversations are a constant reminder of how necessary accurate, honest, and impartial reporting is in the healthcare industry. The Best in KLAS report and the awards it contains set the standard of excellence for software and services firms. Vendors who win the title of “Best in KLAS” should celebrate and remember that providers now accept only the best from their products and services. The Best in KLAS award serves as a signal to provider and payer organizations that they should expect excellence from the winning vendors.”

To learn more about the Best in KLAS awards and Nuance scores, you can access the 2021 Best in KLAS: Software & Services report, including the vendor comparison chart here. KLAS will present the awards during a Virtual Awards Show on February 23, 2021.

About Nuance Communications, Inc.
Nuance Communications (NASDAQ: NUAN) is a technology pioneer with market leadership in conversational AI and ambient intelligence. A full-service partner trusted by 90 percent of U.S. hospitals and 85 percent of the Fortune 100 companies worldwide, Nuance creates intuitive solutions that amplify people’s ability to help others.


US Department of Veterans Affairs Doctors Will Bring Nuance Speech Recognition Tech to Telehealth Calls

By Eric Hal Schwartz for Voice Bot

Doctors for the U.S. Department of Veterans Affairs (VA) will take notes and records of remote appointments using Nuance’s Dragon Medical One platform, the company announced this week. The speech recognition technology supports a virtual assistant that enables medical professionals to interact with their patients from afar without the distraction of notetaking, improving care as the role of telehealth expands during the current COVID-19 health crisis.

NUANCED CARE

Nuance launched Dragon Medical as an enhancement for the existing Dragon transcription program. The platform is designed to track a patient’s history and treatment more efficiently than manual systems. The virtual assistant can understand medical vocabulary well enough to mark important comments, collating the notes into a usable format to help fill out electronic health records and other paperwork. The VA is applying Nuance’s tech to appointments over the phone or through the VA Video Connect platform. That the VA chose Nuance isn’t too big a surprise. The company supplies its platform to many parts of the federal government, including the Military Health System. The VA started operating Dragon Medical back in 2014, so the tech is already approved for integration into its services, which will speed up the adoption.

“Helping frontline clinicians at the VA and other major health systems has been our highest priority since the pandemic began,” Nuance executive vice president and general manager of healthcare Diana Nole said in a statement. “The combination of our cloud-based platforms, organizational agility and deep experience working with the VA health system made it possible for us to act quickly and deliver the technology solutions needed to protect and assist physicians treating patients remotely. While our strong sense of mission and purpose in serving critical healthcare organizations and businesses already is very clear, it becomes amplified knowing that our technology solutions are playing a role in caring for our nation’s Veterans.”

REMOTE HEALTH

The COVID-19 pandemic has only escalated the demands on every doctor’s time and energy, and telemedicine has attracted a lot of interest as a result. Nuance recently upped its partnership with Microsoft, for instance: virtual check-ups performed over Microsoft Teams now offer Nuance’s ambient clinical intelligence (ACI) to transcribe the conversation and help fill in electronic health records. And health technology developer Cerner added the Dragon Medical Virtual Assistant to its platform this summer, allowing doctors using Cerner’s platform to fill in and search patients’ EHRs by voice. The Cerner partnership will be a feature for VA doctors using Nuance’s tech as well.

While bringing voice AI to medical records had already been on the rise, COVID-19 accelerated the trend, with investors casting a speculative eye on startups in the field. Venture capitalists showered Saykara and Suki with $9 million and $40 million, respectively, for their takes on the technology. Suki is also being used as a feature in larger products like Amazon Transcribe Medical. More than half a million physicians already use Dragon Medical, and the company claims it cuts the time spent on paperwork by up to 75%. Even for remote calls, that frees up a lot of time and energy for VA doctors to better care for their patients.