HITN: COVID-19 has further exposed employee stress and burnout as major challenges for healthcare. Tell us how we can stop digital transformation technologies from simply adding to them.
Wallace: By making sure that they are adopted for the right reasons – meeting clinicians' needs without adding more stress or time pressure to already hectic workflows. For example, because Covid-19 was a new disease, clinicians had to document their findings in detail and quickly, without the process slowing them down – often while wearing PPE. I think speech recognition technology has been helpful in this respect, not just because of speed but also because it gives the clinician time to provide more quality clinical detail in the content of a note.
In a recent HIMSS/Nuance survey, 82% of doctors and 73% of nurses felt that clinical documentation contributed significantly to healthcare professional overload. It has been estimated that clinicians spend around 11 hours a week creating clinical documentation, and up to two thirds of that can be narrative.
HITN: How do you think speech recognition technology can be adapted into clinical tasks and workflow to help lower workload and stress levels?
Wallace: One solution is cloud-based AI-powered speech recognition: instead of either typing in the EPR or EHR or dictating a letter for transcription, clinicians can use their voice and see the text appear in real time on the screen. Using your voice is a more natural and efficient way to capture the complete patient story. It can also speed up navigation in the EPR system, helping to avoid multiple clicks and scrolling. The entire care team can benefit – not just in acute hospitals but across primary and community care and mental health services.
HITN: Can you give some examples where speech recognition has helped to reduce the pressure on clinicians?
Wallace: In hospitals where clinicians have created their outpatient letters using speech recognition, reductions in turnaround times from several weeks down to two or three days have been achieved across a wide range of clinical specialties. In some cases where no lab results are involved, patients can now leave the clinic with their completed outpatient letter.
In the Emergency Department setting, an independent study found that speech recognition was 40% faster than typing notes and has now become the preferred method for capturing ED records. The average time saving in documenting care is around 3.5 mins per patient – in this particular hospital, that is equivalent to 389 days a year, or two full-time ED doctors!
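A back-of-envelope check of that arithmetic can be sketched as follows. The assumption that a "day" means an 8-hour clinical shift is ours for illustration; the article does not say how the 389-day figure was derived, nor state the hospital's annual patient volume:

```python
# Hypothetical back-of-envelope: how many ED patients per year would make
# a 3.5-minute saving per patient add up to 389 working days?
# Assumption (not from the article): one "day" = an 8-hour clinical shift.
MINUTES_SAVED_PER_PATIENT = 3.5
MINUTES_PER_WORKING_DAY = 8 * 60          # 480 minutes
DAYS_SAVED_PER_YEAR = 389

total_minutes_saved = DAYS_SAVED_PER_YEAR * MINUTES_PER_WORKING_DAY
patients_per_year = total_minutes_saved / MINUTES_SAVED_PER_PATIENT
print(round(patients_per_year))  # roughly 53,000 patients a year
```

On that assumption, the quoted savings would correspond to an ED seeing on the order of 53,000 patients a year, a plausible attendance figure for a large UK emergency department.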
HITN: How do you see the future panning out for clinicians in the documentation space when it comes to automation and AI technologies?
Wallace: I think we are looking at what we call the Clinic Room of the Future, built around conversational intelligence. No more typing for the clinician, no more clicks, no more back turned to the patient hunched over a computer.
The desktop computer is replaced by a smart device with microphones and movement sensors. Voice biometrics allow the clinician to sign in to the EPR verbally and securely (My Voice is my Password), with a virtual assistant responding to voice commands. The technology recognises non-verbal cues – for example, when a patient points to her left knee but only actually says it is her knee. The conversation between the patient and the clinician is fully diarised, while in the background, Natural Language Processing (using Nuance's Clinical Language Understanding engine) works to create a structured clinical note that summarises the consultation and codes the clinical terms, e.g. with SNOMED CT.
The result is a more professional and interactive clinician/patient consultation.
Healthcare IT News spoke to Dr Simon Wallace, CCIO of Nuance’s healthcare division, as part of the ‘Summer Conversations’ series.
The Department of Veterans Affairs is migrating to the cloud platform for Nuance’s automated clinical note-taking system, the health system said Sept. 8.
The VA will use the Nuance Dragon Medical One speech recognition cloud platform and Nuance’s mobile microphone app, allowing physicians to use their voices to document patient visits more efficiently. The system is intended to allow physicians to spend more time with patients and less time on administrative work.
The VA deployed Nuance Dragon Medical products systemwide in 2014. It is now upgrading to the system’s cloud offering so its physicians can utilize the added capabilities and mobile flexibility.
The VA's decision to adopt the technologies was approved under the Federal Risk and Authorization Management Program (FedRAMP), ensuring that Nuance's products adhere to the government's latest guidance on data security and privacy.
“The combination of our cloud-based platforms, secure application framework and deep experience working with the VA health system made it possible for us to demonstrate our compliance with FedRAMP to meet the needs of the U.S. government. We are proving that meeting security requirements and delivering the outcomes and workflows that matter to clinicians don’t have to be mutually exclusive,” Diana Nole, Nuance’s executive vice president and general manager of healthcare, said in a news release.
Nuance Dragon Medical One is used by more than 550,000 physicians.
Several media outlets had already reported on Microsoft's advanced talks over a possible acquisition of Nuance Communications, a leader in the field of voice recognition with a long and troubled history of mergers and acquisitions. The deal, finally announced on Monday, had been estimated to be worth as much as $16 billion, which would make it Microsoft's second-largest acquisition after its $26.2 billion purchase of LinkedIn in June 2016, but it ended up closing at $19.7 billion, a 23% premium over the company's share price on Friday.
After countless mergers and acquisitions, Nuance Communications has ended up nearly monopolizing the market in speech recognition products. It started out as Kurzweil Computer Products, founded by Ray Kurzweil in 1974 to develop character recognition products, and was then acquired by Xerox, which renamed it ScanSoft and subsequently spun it off. ScanSoft was acquired by Visioneer in 1999, but the consolidated company retained the ScanSoft name. In 2001, ScanSoft acquired the Belgian company Lernout & Hauspie, which had previously acquired Dragon Systems, creators of the popular Dragon NaturallySpeaking, to try to compete in the speech recognition market with Nuance Communications, which had been publicly traded since 1995. Dragon was the absolute leader in speech recognition accuracy through its use of Hidden Markov models as a probabilistic method for temporal pattern recognition. Finally, in September 2005, ScanSoft decided to acquire Nuance and take its name.
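The Hidden Markov model idea mentioned above can be illustrated with a toy decoder. The states, observations and probabilities below are invented for illustration only, not drawn from any Dragon product; real recognisers use phoneme-level states and acoustic feature vectors, but the Viterbi algorithm at the core is the same:

```python
# Toy illustration of the Hidden Markov model approach behind early speech
# recognisers: find the most likely hidden state sequence (e.g. phonemes,
# here just "silence" vs "speech") for an observed acoustic sequence.
# All probabilities here are made up for the example.

def viterbi(observations, states, start_p, trans_p, emit_p):
    """Return the most probable state path and its probability."""
    # best[t][s] = probability of the best path ending in state s at time t
    best = [{s: start_p[s] * emit_p[s][observations[0]] for s in states}]
    back = [{}]
    for t in range(1, len(observations)):
        best.append({})
        back.append({})
        for s in states:
            prob, prev = max(
                (best[t - 1][p] * trans_p[p][s] * emit_p[s][observations[t]], p)
                for p in states
            )
            best[t][s] = prob
            back[t][s] = prev
    # Trace back from the best final state
    prob, last = max((best[-1][s], s) for s in states)
    path = [last]
    for t in range(len(observations) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path)), prob

states = ("sil", "speech")                   # hypothetical hidden states
start_p = {"sil": 0.6, "speech": 0.4}
trans_p = {"sil": {"sil": 0.7, "speech": 0.3},
           "speech": {"sil": 0.4, "speech": 0.6}}
emit_p = {"sil": {"low": 0.9, "high": 0.1},  # silence tends to emit low energy
          "speech": {"low": 0.2, "high": 0.8}}

path, prob = viterbi(("low", "high", "high"), states, start_p, trans_p, emit_p)
print(path)  # ['sil', 'speech', 'speech']
```

Given a low-energy frame followed by two high-energy frames, the decoder labels the first frame silence and the rest speech, which is the kind of temporal pattern recognition the article refers to.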
Since then, the company has grown rapidly through acquisitions, buying as many as 52 companies in the field of speech technologies, in all kinds of industries and markets, creating a conglomerate that has largely monopolized related commercial developments, licensing its technology to all kinds of companies: Apple’s Siri was originally based on Nuance technology — although it is unclear how dependent on the company it remains.
The Microsoft purchase reveals the company’s belief in voice as an interface. The pandemic has seen videoconferencing take off, triggering an explosion in the use of technologies to transcribe voice: Zoom, for example, incorporated automatic transcription in April last year using Otter.ai, so that at the end of each of my classes, I automatically receive not only the video of them, but also their full transcript (which works infinitely better when the class is online than when it takes place in face-to-face mode in a classroom).
Microsoft, which is in the midst of a process of strong growth through acquisitions, had previously collaborated with Nuance in the healthcare industry, and many analysts feel that the acquisition intends to deepen even further into this collaboration. However, Microsoft could also be planning to integrate transcription technology into many other products, such as Teams, or throughout its cloud, Azure, allowing companies to make their corporate environments fully indexable by creating written records of meetings that can be retrieved at a later date.
Now, Microsoft will try to raise its voice — it has almost twenty billion reasons to do so — and use it to differentiate its products via voice interfaces. According to Microsoft, a pandemic that has pushed electronic and voice communications to the fore is now the stimulus for a future with more voice interfaces, so get ready to see more of that. No company plans a twenty billion dollar acquisition just to keep doing the same things they were doing before.
The last 18 months have pushed our National Health Service (NHS) to breaking point. Services that were already overstretched and underfunded have been subjected to unprecedented strain on their resources. This strain has now become a national emergency, risking the entire future of the health service, according to a recent government report.
From treating countless Covid-19 cases and supporting vaccination programmes, to providing essential treatment and care, UK healthcare professionals are at maximum capacity and, understandably, struggling to cope. In fact, a recent survey from Nuance revealed that this period has led to dramatic increases in stress and anxiety across primary (75%) and secondary (60%) care within the NHS. When excessively high levels of stress are experienced over a prolonged period, the result can be clinician burnout which, in turn, can leave many feeling they have no choice but to leave the medical profession altogether. In England, GP surgeries lost almost 300 full-time medical professionals in the three months prior to Christmas and, by 2023, a shortfall of 7,000 GPs is anticipated, according to recent reports. In addition, it is believed that up to a third of nurses are thinking about leaving their profession due to pandemic-related burnout.
These individuals enabled and maintained a new front line in the wake of the pandemic. They are also the people that we applauded every week and depended on during the most challenging days. However, the unwavering pressure and heavy workloads are causing significant damage to their own health. An urgent and effective solution is required if the NHS is to continue delivering its life-saving services and care.
The burden of administrative processes
Over the course of the pandemic, the way in which healthcare services are delivered has changed. One of the most significant changes has been a shift towards teleconsultations, or virtual appointments. An RCGP investigation of GP appointments found that, prior to the pandemic, as much as 70% of consultations were face-to-face. This fell to 23% during the first weeks of the crisis.
While some medical professionals and patients are in favour of this new format, for many, the swift switch to a virtual approach has generated an influx of workload, especially when it comes to documentation processes. In fact, Nuance's research revealed that 67% of primary care respondents believe the pandemic has increased the overall amount of clinical administration. Although there are several causal factors, such as heavy workloads and time pressure, the transition towards remote consultations appears to be a significant contributor. This is because the risk factor and diagnostic uncertainty of remote consultations are generally higher than for face-to-face appointments. Also, patients triaged by telephone often still need a follow-up face-to-face appointment, which is leading to more double handling of patients than happened in the past.
Before the pandemic, clinicians were reportedly spending an average of 11 hours per week on clinical documentation. This figure is only likely to have increased during the pandemic’s peak, when hospitals were at their busiest and remote appointments were most needed. And, we’re not in the clear yet, as the vaccination programme continues to progress and teleconsultation is set to stay. Therefore, moving forward, we need to think about how we can best support our clinical professionals by easing their administrative burden.
AI-powered speech recognition: a step in the right direction
Modern technologies – such as speech recognition solutions – can be leveraged to help reduce some of the administrative pressures being placed on clinical professionals and enable them to work smarter and more effectively. These technologies are designed to recognise and record passages of speech, converting them into detailed clinical notes, regardless of how quickly they’re delivered. By reducing repetition and supporting standardisation across departments, they can also enhance the accuracy as well as the quality of patient records. For example, voice activated clinical note templates can provide a standardised structure to a document or letter, thus meeting the requirements set out by the PRSB (Professional Record Standards Body).
Using secure, cloud-based speech solutions, healthcare professionals are able to benefit from these tools no matter where they are based. The latest technologies provide users with the option to access their single voice profile from different devices and locations, even when signing in from home. This advancement could significantly reduce the administrative burden of virtual consultations, therefore helping to decrease burnout levels amongst NHS staff.
Calderdale and Huddersfield NHS Trust is one of many organisations already benefiting from this technology. The team there leveraged speech recognition as part of a wider objective to help all staff members and patients throughout the Covid-19 crisis. Serving a population of around 470,000 people and employing approximately 6,000 staff, the trust wanted to save time and enable doctors to improve safety, whilst minimising infection risk. By using this technology on mobile phones, clinicians could instantly update patient records without having to touch shared keyboards. Having experienced the benefits of this solution, the trust is considering leveraging speech recognition to support virtual consultations conducted over MS Teams, in order to enhance the quality of consultations while alleviating some of the pressures placed upon employees.
This challenging period has only emphasised how vital the NHS is within the UK. However, the increased workloads and administrative duties brought on by the pandemic are causing higher levels of burnout than ever before. Something needs to change. Although technological advancements such as AI-powered speech recognition are now part of the solution, there is also a need for public bodies to determine why the administrative burden has continued to rise, and perhaps to reassess the importance of bureaucratic tasks and where it is essential for information to be recorded.
Speech recognition software has made life easier and has become increasingly popular in recent years. Many companies and business leaders are choosing speech-to-text software because it is faster and produces accurate results.
Speech recognition software converts spoken language into text using machine learning algorithms, letting users control their devices and create documents efficiently. According to reports, the speech recognition software market is estimated to grow from US$10.70 billion in 2020 to US$27.155 billion by 2026.
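A forecast like that implies a compound annual growth rate, which can be checked in a couple of lines. The derivation is ours, not part of the cited report:

```python
# Implied compound annual growth rate (CAGR) for the quoted forecast:
# US$10.70bn in 2020 growing to US$27.155bn by 2026 (6 years).
start_value, end_value, years = 10.70, 27.155, 2026 - 2020
cagr = (end_value / start_value) ** (1 / years) - 1
print(f"{cagr:.1%}")  # roughly 16.8% a year
```

That works out to roughly 16–17% compound growth per year, in line with the double-digit growth usually cited for this market.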
For businesses, speech-to-text software is making valuable contributions. In the legal and healthcare sectors, the software is used to produce accurate documentation. Customer service teams use the technology to input information about callers, such as their names, account numbers, queries, and other data. It delivers cost-effectiveness by reducing, or even eliminating, the need for agents while improving customer service at the same time.
Here is a list of the top 10 speech recognition software to look out for in 2021.
1. Google Docs Voice Typing: Google’s improved version of speech recognition in one of the most popular word processors in the world has drawn attention from several companies and business leaders in the world. This feature is integrated with Google Suite and enables formatting and editing of the contents in Docs by using voice typing. One of the key benefits of this software is that it can be used on both Windows and Mac devices and is completely free of cost.
2. Winscribe: Winscribe is owned by Nuance and provides documentation workflow management and transcription services, and helps organizations manage their dictations. It enables easy documentation by letting users capture digital voice recordings from computers, smartphones, tablets, and other devices, and can be used on Android, iPhone, and PC. One added benefit of Winscribe is that it can detect deviations from the usual workflow and take corrective measures.
3. Speechnotes: Speechnotes is dictation software powered by Google that is easy to use and enables efficient documentation. Users do not need to create a separate account to log in to the app; they can simply open the application, click on the microphone, and start dictating. It also allows users to insert punctuation marks by dictating them through voice commands or using the built-in punctuation keyboard. It provides several fonts and text-size options, with in-app purchases to unlock premium features.
4. Gboard: Android users can install Gboard from the Google Play Store. Gboard is an instant speech-to-text application designed as a keyboard for physical input, but it also offers speech input, making it more versatile. The application works with Google Translate and provides dictation in over 60 languages. Even though it is not strictly a transcription application, it offers its users all the facilities of a basic transcription tool.
5. Otter: Otter is a cloud-based speech recognition program that provides real-time transcription, allowing users to edit, search, and organize transcriptions according to their preferences. Otter is designed for team collaboration, particularly for meetings, interviews, and lectures, where different speakers are assigned IDs to make transcriptions easier to follow. It is a paid program offering three plans designed to suit different users.
6. Windows 10 Speech Recognition: Windows 10 provides built-in speech recognition with accurate transcription. Users can give basic commands to the system and access its assistant features through voice control. Users can also train the software by reading texts to the system and allowing it access to their documents, giving it a deeper understanding of their vocabularies.
7. Dragon Speech Recognition Software: Dragon Speech Recognition Software is owned by Nuance. It uses an AI-based speech recognition algorithm that learns its user's voice with greater accuracy over time and supports cloud-based document management. It is one of the fastest options available and claims to deliver 99% speech recognition accuracy. It aims to eliminate barriers to productivity and make interaction between users and their computers more efficient.
8. Microsoft Azure Speech-to-Text: Microsoft Azure's Speech-to-Text feature is powered by deep neural network models and delivers real-time audio transcription. It allows users to create text from multiple audio sources and offers customization options to handle distinct speech patterns and background sounds. Users can also add specialist vocabulary, such as product names, place names, and other technical terms.
9. IBM Watson Speech to Text: IBM Watson’s Speech-to-Text feature is powered by AI and machine learning algorithms. Instead of transcribing speech-to-text in real-time, it allows its users to convert batches of audio files and process them through language, audio frequency, and a range of other output options. It enables tagging transcriptions with speaker labels, smart formatting, and other features.
10. Amazon Transcribe: Amazon Transcribe is a cloud-based automatic speech recognition platform that converts audio files into text. It aims to outperform traditional speech recognition software by recognizing speech in low-fidelity and noisy recordings. Amazon Transcribe uses deep learning to add formatting and punctuation automatically. It is one of the most powerful and efficient speech-to-text platforms and is mostly used by business enterprises.
Microsoft’s recent announcement that it is acquiring healthcare artificial intelligence and voice recognition company Nuance could signal a new era of voice-enabled technologies in the enterprise.
Nuance’s speech recognition technology for medical dictation is currently used in 77% of U.S. hospitals, and Microsoft plans to integrate those technologies with its Microsoft Cloud for Healthcare offering that was introduced last year.
However, the purchase price of $19.7 billion indicates that Microsoft has plans to bring more voice recognition technology to other vertical markets aside from healthcare.
We sat down with Igor Jablokov, founder and CEO of augmented AI company Pryon and an early pioneer of automated cloud platforms for voice recognition that helped invent the technology that led to Amazon’s virtual assistant Alexa, to talk about Microsoft’s move and how intelligent voice technology could impact the workplace.
What do you make of Microsoft’s acquisition of Nuance?
So look, it’s going to be a popular thing to talk about moves in healthcare, especially as we’re still in the throes of this pandemic, and most of us, I’m sure, had a challenging 2020. So that’s a great way to frame the acquisition, given Nuance’s medical dictation and other types of projects that they inserted into the healthcare workflow. So, that makes sense. But would anybody actually pay that much for just something for healthcare? I would imagine Microsoft could have had as big an impact, if not larger, going directly for one of those EHR companies like Epic. So that’s why I’m like, “All right, healthcare, that’s good.” Is it going to be a roll-up where they will be going after Epic and places like that, where there’s already lots of stored content, and then vertically integrate the whole thing? That’s the next play that I would see. They’re gunning to own that workflow. Okay, so that’s that piece. Now, on the other hand, I see it as a broader play in employee productivity, because whenever Microsoft really opens up their pocketbook like they did here – this was what, their second-largest acquisition? – it’s typically to reinforce the place where they’re the strongest, where their dairy cow is, and that’s employee productivity.
Microsoft has never been solely focused on healthcare. Their bread and butter is the enterprise. So how can the same technologies be applied to the enterprise?
You’re exactly right. Now, why do we have special knowledge of the Nuance stuff? Well, the team that’s in this company, Pryon, actually developed many of the engines inside of Nuance. Many years ago, Nuance felt like their engines were weak and that IBM’s were ahead of the curve, if you will. I believe around the 2008 downturn, they came in to acquire the majority of IBM’s speech assets and related AI technologies. And my now current chief technology officer was assigned to that unit, collaborating with them to integrate it into their work for half a decade. So that’s the plot twist here. We have a good sense now – it is true that these engines were behind Siri and all these other experiences, but in reality it wasn’t Nuance engines, it was IBM engines, acquired through Nuance, that ended up getting placed there, because of how highly accurate and more flexible they were.
So let’s start with something like Microsoft Teams. To continue bolstering Teams with things like live transcriptions, to put a little AI system inside of Teams that has access to the enterprise’s knowledge as people are discussing things – it may not even be any new product, it could just be all the things that Microsoft is doing, but they just needed more hands on deck, this being a massive acqui-hire of scientists and engineers working on applied AI. So I would say a third of it is they need more help with things they’re already doing, a third of it is a healthcare play – though I would watch for other moves toward vertical integration there – and then a third is for new capability that we haven’t experienced yet on the employee productivity side of Microsoft.
Microsoft already has their version of Siri and Alexa: Cortana. What do you think about Cortana and how it can be improved?
They attempted for it to be their thing everywhere. They just pulled it off the shelves – or proverbial shelves – on mobile, so it no longer exists as consumer tech. So the only place it lives now is on Windows desktops, right? That’s not a great entry point. Then they tried doing the mashup, where Cortana could be called via Alexa and vice versa. But when I talked to the folks at Amazon, I’m like, “Look, you’re not going to allow them to really do what they want to do, right? Because they’re not going to allow you to do what you want to do on those desktops.” So it almost ends up being this weird thing, like calling into a contact center and being transferred to another contact center. That’s what it felt like. In this case, Alexa got the drop on them, which is strange and sorrowful in some ways.
Other AI assistants like Alexa are much further along than Cortana, but why aren’t we seeing much adoption in the enterprise?
There are multiple reasons for that. There’s the reason of accuracy – and accuracy isn’t just you say something, you get an answer. Where do you get it from? Well, it has to be tied into enterprise data sources, right? Because most enterprises are not like what we have at home, where we buy into the Apple ecosystem, the Amazon ecosystem, the Google ecosystem. They’re heterogeneous environments with bits and pieces from every vendor. The next piece is latency and getting quick results that are accurate at scale. And the last thing is security. There are certainly things that Alexa developers do not get access to, and that’s not going to fly in the enterprise space. One of the things we hear from enterprises, in pilots and in production, is that what they’re putting into these APIs is starting to be their crown jewels, the most sensitive things they’ve got. And if you actually read the terms and conditions from a lot of the big tech companies that are leveraging AI, they’re very nebulous about where the information goes, right? Does it get transcribed or not? Are people eyeballing this stuff or not? So most enterprises are like, “Hold on a second – we make these microchips, and you want us to put our secrets, secrets on M&A deals we’re about to do, in there?” They’re uncomfortable about that. It’s just a different ball of wax. And that’s why I think it’s going to be purpose-built companies that are going to be developing enterprise APIs.
I think there will be a greater demand for bringing some of these virtual assistants we all know to the enterprise – especially since we’ve been at home for over a year and using them in our home.
Your intuition is spot on. It’s not even so much people coming from home into work environments – it’s a whole generation that has been reared with Alexa and Siri and these things. When you actually look at the majority of user experiences at work, using Concur or SAP or Dynamics or Salesforce, or any of these types of systems – they’re gonna toss grenades at this stuff over time, especially as they elevate in authority through the natural motions of expanding their influence over their career. I think there’s going to be a new generation of enterprise software that’s going to be purpose-built for these folks that are going to be taking over business. That’s basically the chink in the armor for any of these traditional enterprise companies. If you look at Oracle, if you look at IBM, if you look at HP, if you look at Dell, if you look at any one of them – I don’t know where they go, at least on the software side. When a kid has grown up with Alexa, and there they are at 26 years old, they’re like, “No, I’m not gonna use that.” Why can I blurt something out at home and get an instant answer, but here I am running a region of Baskin Robbins, and I can’t say, “How many ice cream cones did we sell when it was 73 degrees out?” and get an instant answer one second later? So that’s what’s going to happen. We certainly, as a company, since our inception, have been architected not for the current world but for this future world. Already elements of this are in production, as we announced with Georgia Pacific in late January, and we’re working through it. And I have to say, one of the biggest compliments that I get, whether it’s showing this to big enterprises or government agencies and the like, is fundamentally they’re like, “Holy smokes, this doesn’t feel like anything else that we use.”
But behind the scenes, not only are we using top-flight UX folks to develop this, but we’re also working with behavioral scientists and the like, because we want people to want to use our software, not have to use our software. Most enterprise software gets chosen by the CIO, the CTO, the CISO, and people like that, and most of them are thinking about checking off boxes on functionality. Most enterprise developers cook up their blue-and-white interface, get the fun feature function in there, and call it a day. I think they’re missing such opportunities by not finishing the work.
Microsoft announced Monday that it will buy speech recognition company Nuance Communications for $16 billion, the tech giant’s largest acquisition since it bought LinkedIn for more than $26 billion in 2016.
The deal is the latest sign Microsoft is hunting for more growth through acquisitions. The company is also reportedly in talks to buy the chat app Discord for about $10 billion and last year tried to buy TikTok’s U.S. business for about $30 billion before the deal was derailed. Last month, Microsoft acquired gaming company Zenimax for $7.6 billion.
Shares of Nuance were up nearly 23 percent in premarket trading Monday, representing approximately the same premium Microsoft plans to pay based on Friday’s closing price. Trading in the stock was halted after that pop and was expected to resume around 9 a.m. ET. Microsoft shares were slightly negative.
Nuance would be aligned with the part of Microsoft’s business that serves businesses and governments. Nuance derives revenue by selling tools for recognizing and transcribing speech in doctor’s visits, customer-service calls and voicemails. In its announcement Monday, Microsoft said Nuance’s technology will be used to augment Microsoft’s cloud products for health care, which were launched last year.
The company reported $7 million in net income on about $346 million in revenue in the fourth quarter of 2020, with revenue declining 4 percent on an annualized basis. Nuance was founded in 1992, and had 7,100 employees as of September 2020.
Microsoft said Nuance’s CEO Mark Benjamin will remain at the company and report to Scott Guthrie, the Microsoft executive in charge of the company’s cloud and artificial intelligence businesses.
Nuance has a strong reputation for its voice recognition technology, and it has been considered an acquisition target for companies like Apple, Microsoft and more for several years. Microsoft has voice recognition built into many of its products already, but it has recently shut down some products featuring its voice assistant Cortana.
Software firm Nuance Communications has developed a conversational voicebot that can answer users’ questions about COVID-19, confirm their eligibility for a vaccination and schedule appointments where available.
One of Nuance’s Intelligent Engagement solutions, it also sends the user an SMS text to confirm appointments after the call.
The voicebot is currently being deployed by Walgreens – one of the largest pharmacy retail chains in the US. Walgreens customers can call a helpline or an individual store to speak to the voicebot, which is available in English and Spanish, 24 hours a day.
“Ensuring equitable access to care is essential,” said Robert Weideman, Executive Vice President and General Manager, Nuance. “Using our proven voice- and AI-powered solutions to help as many Walgreens customers as possible experience a more modern, convenient, and secure process for scheduling their COVID vaccine appointments is one of the most important outcomes we can achieve.”
Nuance is a pioneer in conversational AI and has also developed Dragon Medical One, a cloud-based, GDPR-compliant speech recognition tool that enables clinicians to use their voice to capture patient information. This data can then be stored across a number of different platforms.
Doctors for the U.S. Department of Veterans Affairs (VA) will take notes and records of remote appointments using Nuance’s Dragon Medical One platform, the company announced this week. The speech recognition technology supports a virtual assistant that enables medical professionals to interact with their patients from afar without the distraction of notetaking, improving care as the role of telehealth expands during the current COVID-19 health crisis.
Nuance launched Dragon Medical as an enhancement for the existing Dragon transcription program. The platform is designed to track a patient’s history and treatment more efficiently than manual systems. The virtual assistant can understand medical vocabulary well enough to mark important comments, collating the notes into a usable format to help fill out electronic health records and other paperwork. The VA is applying Nuance’s tech to appointments over the phone or through the VA Video Connect platform. That the VA chose Nuance isn’t too big a surprise. The company supplies its platform to many parts of the federal government, including the Military Health System. The VA started operating Dragon Medical back in 2014, so the tech is already approved for integration into its services, which will speed up the adoption.
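The collation step described above – recognising important phrases and routing them into a usable note format – can be illustrated with a toy sketch. This is not how Dragon Medical actually works (it uses trained medical-vocabulary models, not keyword lists); the section names and keywords below are purely hypothetical.

```python
# Toy "collation" step: route spoken sentences into note sections by keyword
# overlap. Illustrative only; real clinical assistants use trained language
# models, and these section names and keyword lists are invented for the demo.

SECTION_KEYWORDS = {
    "medications": {"mg", "dose", "prescribe", "tablet"},
    "symptoms": {"pain", "cough", "fever", "fatigue"},
    "plan": {"follow-up", "refer", "schedule", "recommend"},
}

def collate(transcript_sentences):
    """Group transcript sentences under note sections by keyword overlap."""
    note = {section: [] for section in SECTION_KEYWORDS}
    note["other"] = []
    for sentence in transcript_sentences:
        words = set(sentence.lower().replace(",", "").split())
        for section, keywords in SECTION_KEYWORDS.items():
            if words & keywords:
                note[section].append(sentence)
                break
        else:
            note["other"].append(sentence)
    return note
```

A production system would instead score each utterance with an acoustic and language model, but the overall shape – transcribe, classify, collate into the EHR – is the same.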
“Helping frontline clinicians at the VA and other major health systems has been our highest priority since the pandemic began,” Nuance executive vice president and general manager of healthcare Diana Nole said in a statement. “The combination of our cloud-based platforms, organizational agility and deep experience working with the VA health system made it possible for us to act quickly and deliver the technology solutions needed to protect and assist physicians treating patients remotely. While our strong sense of mission and purpose in serving critical healthcare organizations and businesses already is very clear, it becomes amplified knowing that our technology solutions are playing a role in caring for our nation’s Veterans.”
The COVID-19 pandemic has only escalated the demands on every doctor’s time and energy, and telemedicine has attracted a lot of interest as a result. Nuance recently deepened its partnership with Microsoft, for instance: virtual check-ups performed over Microsoft Teams now offer Nuance’s ambient clinical intelligence (ACI) to transcribe the conversation and help fill in electronic health records. And health technology developer Cerner added the Dragon Medical Virtual Assistant to its platform this summer, allowing doctors using Cerner’s platform to fill in and search patients’ EHRs by voice. The Cerner partnership will be a feature for VA doctors using Nuance’s tech as well.
While bringing voice AI to medical records had been on the rise already, COVID-19 accelerated the trend, with investors casting a speculative eye on startups in the field. Venture capitalists showered money on Saykara and Suki, which raised $9 million and $40 million, respectively, for their takes on the technology. Suki is also being used as a feature in larger products like Amazon’s Amazon Transcribe Medical. More than half a million physicians already use Dragon Medical, and the company claims it cuts the time spent on paperwork by up to 75%. Even for remote calls, that frees up a lot of time and energy for VA doctors to better care for their patients.
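Taken at face value, that claimed saving is easy to quantify. Surveys cited elsewhere in this piece put clinical documentation at around 11 hours a week; the sketch below simply applies Nuance’s upper-bound 75% figure to that estimate, so both inputs are assumptions, not measured results.

```python
# Back-of-the-envelope check on the claimed time savings. Illustrative only:
# 11 hours/week is a survey estimate and 75% is the vendor's upper bound.

hours_per_week = 11.0   # estimated weekly documentation time per clinician
max_reduction = 0.75    # claimed maximum paperwork-time reduction

saved = hours_per_week * max_reduction   # hours potentially freed per week
remaining = hours_per_week - saved       # documentation time left over

print(f"Up to {saved:.2f} h/week saved, leaving {remaining:.2f} h/week")
```

Even if the real-world reduction is only a fraction of the headline figure, the freed hours compound across a workforce of half a million physicians.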
Voice recognition market is estimated to reach US$31.82 billion by 2025
Technology is making inroads into every sector. New inventions, innovations and devices are making life easier for everyone. Voice recognition technology is one such initiative to watch in this era of growing innovation.
Voice recognition, also known as speech recognition, is a computer software program or hardware device with the ability to receive, interpret and understand voice input and carry out commands. The technology makes it possible to create and control documents simply by speaking.
Voice and speech recognition features enable contactless control of devices and equipment, deliver input for automatic translation and generate print-ready dictation. Speech recognition devices respond to voice commands. According to a report by Grand View Research, Inc., the global speech and voice recognition market is estimated to reach US$31.82 billion by 2025, growing at a CAGR of 17.2% over the forecast period.
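To see what a 17.2% CAGR to a US$31.82 billion endpoint actually implies, the small helper below compounds a market value forward and backs out the implied starting value. The report’s base year is not stated here, so the 7-year window in the example is an assumption for illustration only.

```python
# How a CAGR-based forecast compounds. The base year of the Grand View
# Research forecast is not given in this article, so the 7-year window below
# is a hypothetical example, not the report's actual assumption.

def project_cagr(base_value_bn, rate, years):
    """Project a market size forward at a constant compound annual growth rate."""
    return base_value_bn * (1 + rate) ** years

def implied_base_bn(final_value_bn, rate, years):
    """Back out the starting value implied by a forecast endpoint and CAGR."""
    return final_value_bn / (1 + rate) ** years

# A US$31.82bn endpoint at 17.2% CAGR over a hypothetical 7-year window
# implies a starting market of roughly US$10.5bn.
print(round(implied_base_bn(31.82, 0.172, 7), 2))
```

The point of the exercise: at this growth rate the market roughly triples over seven years, which is why forecast endpoints look dramatic relative to current market sizes.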
The growth of the overall market is primarily driven by factors such as rising acceptance of advanced technology coupled with increasing consumer demand for smart devices, a growing sense of personal data safety and security, and increasing use of voice-enabled payments and shopping by retailers.
The demand for related devices such as voice-activated systems, voice-enabled devices and voice-enabled virtual assistants is also expected to spike with the growing penetration of speech-based technology across diverse industries. Adoption is strongest in the banking and automobile sectors, which are embracing voice biometrics for user authentication in order to counter fraud and enhance security. Growing use of Artificial Intelligence (AI)-based systems is expected to further boost the market.
Analytics Insight presents the top 10 companies operating in the global speech and voice recognition market in 2020
Nuance Communications, founded in 1992, provides speech recognition and artificial intelligence products focused on server and embedded speech recognition, telephone call steering systems, automated telephone directory services, and medical transcription software and systems.
The Massachusetts-based company offers Nuance Recognizer for contact centres, software that consistently delivers a great customer service experience while improving self-service systems’ containment rates, and Dragon NaturallySpeaking, which creates documents, spreadsheets and emails simply by speaking. The company partners with 75% of Fortune 100 companies and thousands of healthcare organisations.
Google, whose parent company is Alphabet, was founded in 1998. Google provides a variety of services ranging from search engines, cloud computing and online advertisement technologies to computer hardware and software. The California-headquartered company is a global pioneer in internet-based products and services, and is now stepping into the speech recognition market. It provides a Speech-to-Text service that accurately converts speech into text using an API powered by Google’s AI technology. Google has strong network coverage with 70 offices in 50 countries across the globe.
Amazon, headquartered in Washington, was founded in 1994. The company operates through three core segments, North America, International and Amazon Web Services, in the retail sale of consumer products and subscriptions. Amazon focuses on advanced technologies like artificial intelligence, cloud computing, consumer electronics, e-commerce and digital streaming. Amazon Transcribe makes it easy for developers to add speech-to-text capability to their applications.
Apple, Inc. is a California-headquartered company involved in manufacturing, marketing and selling mobile phones, media devices and computers to consumers worldwide. Apple was founded in 1976. The company sells its products and services mostly through a direct sales force, online and retail stores, and through third-party cellular network carriers, resellers and wholesalers. The Apple speech recognition process involves capturing audio of the user’s voice and sending the data to Apple’s servers for processing.
IBM Corporation was founded in 1911. The New York-headquartered company operates through five key segments: cognitive solutions, technology services and cloud platforms, global business services, systems, and global financing. IBM also manufactures and sells software and hardware, and delivers numerous hosting and consulting services spanning mainframe processors to nanotechnology. IBM’s speech recognition enables systems and applications to understand and process human speech.
Microsoft Corporation, founded in 1975, is a pioneering technology company. The Redmond, Washington-headquartered company is known for software products that include Internet Explorer, the Microsoft Windows OS, the Microsoft Office suite and the Edge web browser. Microsoft’s speech recognition in Windows 10 lets the system recognise the user’s voice.
Agnitio was founded in 2004 as a spin-off from the Biometric Recognition Group-ATVS at the Technical University of Madrid. The Madrid, Spain-headquartered company is a biometrics technology company that uses unique biometric characteristics to verify an individual’s identity. Agnitio’s speech recognition program for Windows lets users control their computer by voice.
Verint VoiceVault
Verint Systems was founded in 2002. The New York headquartered analytics company sells software and hardware products for customer engagement management, security, surveillance and business intelligence. Verint VoiceVault voice biometrics is a standardized approach to mobile identity verification.
iFLYTEK, headquartered in Hefei, Anhui, China, is an advanced enterprise dedicated to the research and development of intelligent speech and language technologies, speech information services, integration of e-governance systems, and development of software and chip products. The company was founded in 1999. Its market coverage spans North America, Europe, Asia-Pacific, Latin America, the Middle East and Africa. iFLYTEK speech recognition provides services such as speech synthesis, automatic speech recognition and speech expansion.
Baidu, headquartered in Beijing, China, consists of two segments: Baidu Core and iQIYI. The company was founded in 2000 and has a direct sales presence in Beijing, Dongguan, Guangzhou, Shanghai, Shenzhen and Suzhou. Baidu speech recognition provides services such as the streaming multi-layer truncated attention model (SMLTA) for online automatic speech recognition (ASR).