
How do humans understand speech?

By Aaron Wagner for Penn State News

UNIVERSITY PARK, Pa. — New funding from the National Science Foundation’s Build and Broaden Program will enable a team of researchers from Penn State and North Carolina Agricultural and Technical State University (NC A&T) to explore how speech recognition works while training a new generation of speech scientists at America’s largest historically Black university.

Research has shown that speech-recognition technology performs significantly worse at understanding speech by Black Americans than by white Americans. These systems can be biased, and that bias may be exacerbated by the fact that few Americans of color work in speech-science-related fields.

Understanding how humans understand speech

Navin Viswanathan, associate professor of communication sciences and disorders, will lead the research team at Penn State.

“In this research, we are pursuing a fundamental question,” Viswanathan explained. “How human listeners perceive speech so successfully despite considerable variation across different speakers, speaking rates, listening situations, etc., is not fully understood. Understanding this will provide insight into how human speech works on a fundamental level. On an immediate, practical level, it will enable researchers to improve speech-recognition technology.”

Joseph Stephens, professor of psychology, will lead the research team at NC A&T.

“There are conflicting theories of how speech perception works at a very basic level,” Stephens said. “One of the great strengths of this project is that it brings together investigators from different theoretical perspectives to resolve this conflict with careful experiments.”

According to the research team, speech-recognition technology is used in many aspects of people’s lives, but it is not as capable as a human listener at understanding speech, especially when the speech varies from the norms established in the software. Once those mechanisms are understood, speech-recognition technology can be improved using the same mechanisms that humans use.

Building and broadening the field of speech science

Increasing diversity in speech science is the other focus of the project. 

“When a field lacks diversity among researchers, it can limit the perspectives and approaches that are used, which can lead to technologies and solutions being limited, as well,” Stephens said. “We will help speech science to become more inclusive by increasing the capacity and involvement of students from groups that are underrepresented in the field.”

The National Science Foundation’s Build and Broaden Program focuses on supporting research, offering training opportunities, and creating greater research infrastructure at minority-serving institutions. New awards for the Build and Broaden Program, which total more than $12 million, support more than 20 minority-serving institutions in 12 states and Washington, D.C. Nearly half of this funding came from the American Rescue Plan Act of 2021. These funds aim to bolster institutions and researchers who were impacted particularly hard by the COVID-19 pandemic.

Build and Broaden is funding this project in part because it will strengthen research capacity in speech science at NC A&T. The project will provide research training for NC A&T students in speech science, foster collaborations between researchers at NC A&T and Penn State, and enhance opportunities for faculty development at NC A&T.

By providing training in speech science at NC A&T, the research team will mentor a more diverse group of future researchers. Increasing the diversity in this field will help to decrease bias in speech-recognition technology and throughout the field.

Viswanathan expressed excitement about developing a meaningful and far-reaching collaboration with NC A&T.

“This project directly creates opportunities for students and faculty from both institutions to work together on questions of common interest,” Viswanathan said. “More broadly, we hope that this will be the first step towards building stronger connections across the two research groups and promoting critical conversations about fundamental issues that underlie the underrepresentation of Black scholars in the field of speech science.”

Ji Min Lee, associate professor of communication sciences and disorders; Anne Olmstead, assistant professor of communication sciences and disorders; Matthew Carlson, associate professor of Spanish and linguistics; Paola “Guili” Dussias, professor of Spanish, linguistics and psychology; Elisabeth Karuza, assistant professor of psychology; and Janet van Hell, professor of psychology and linguistics, will contribute to this project at Penn State. Cassandra Germain, assistant professor of psychology; Deana McQuitty, associate professor of speech communication; and Joy Kennedy, associate professor of speech communication, will contribute to the project at North Carolina Agricultural and Technical State University.


The Race to Save Indigenous Languages, Using Automatic Speech Recognition

By Tanner Stening for News@Northeastern

Michael Running Wolf still has that old TI-89 graphing calculator he used in high school that helped propel his interest in technology. 

“Back then, my teachers saw I was really interested in it,” says Running Wolf, clinical instructor of computer science at Northeastern University. “Actually a couple of them printed out hundreds of pages of instructions for me on how to code” the device so that it could play games. 

What Running Wolf, who grew up in a remote Cheyenne village in Birney, Montana, didn’t realize at the time, poring over the stack of printouts at home by the light of kerosene lamps, was that he was actually teaching himself basic programming.

“I thought I was just learning how to put computer games on my calculator,” Running Wolf says with a laugh. 

But it hadn’t been his first encounter with technology. Growing up in the windy plains near the Northern Cheyenne Indian Reservation, Running Wolf says that although his family—which is part Cheyenne, part Lakota—didn’t have daily access to running water or electricity, sometimes, when the winds died down, the power would flicker on, and he’d plug in his Atari console and play games with his sisters. 

These early experiences would spur forward a lifelong interest in computers, artificial intelligence, and software engineering that Running Wolf is now harnessing to help reawaken endangered indigenous languages in North and South America, some of which are so critically at risk of extinction that their tallies of living native speakers have dwindled into the single digits. 

Running Wolf’s goal is to develop methods for documenting and maintaining these languages through automatic speech recognition software, helping to keep them “alive” and well-documented. It would be a process, he says, that tribal and indigenous communities could use to supplement their own language reclamation efforts, which have intensified in recent years amid the threats facing these languages.

“The grandiose plan, the far-off dream, is we can create technology to not only preserve, but reclaim languages,” says Running Wolf, who teaches computer science at Northeastern’s Vancouver campus. “Preservation isn’t what we want. That’s like taking something and embalming it and putting it in a museum. Languages are living things.”

The better thing to say is that they’ve “gone to sleep,” Running Wolf says. 

And the threats to indigenous languages are real. Of the roughly 6,700 languages spoken in the world, about 40 percent are in danger of atrophying out of existence forever, according to the UNESCO Atlas of the World’s Languages in Danger. The loss of these languages also represents the loss of whole systems of knowledge unique to a culture, and of the ability to transmit that knowledge across generations.

While the situation appears dire—and is, in many cases—Running Wolf says nearly every Native American tribe is engaged in language reclamation efforts. In New England, one notable tribe doing so is the Mashpee Wampanoag Tribe, whose native tongue is now being taught in public schools on Cape Cod, Massachusetts. 

But the problem, he says, is that in the ever-evolving field of computational linguistics, little research has been devoted to Native American languages. This is partially due to a lack of linguistic data, but it is also because many native languages are “polysynthetic,” meaning they contain words that comprise many morphemes, which are the smallest units of meaning in language, Running Wolf says. 

Polysynthetic languages often have very long words—words that can mean an entire sentence, or denote a sentence’s worth of meaning. 

Further complicating the effort is the fact that many Native American languages don’t have an orthography, or an alphabet, he says. In terms of what languages need to keep them afloat, Running Wolf maintains that orthographies are not vital. Many indigenous languages have survived through a strong oral tradition in lieu of a robust written one.

But for scholars looking to build databases and transcription methods, like Running Wolf, written texts are important for filling in the gaps. What’s holding researchers back from building automatic speech recognition for indigenous languages is precisely this lack of audio and textual data.

Using hundreds of hours of audio from various tribes, Running Wolf has managed to produce some rudimentary results. So far, the automatic speech recognition software he and his team have developed can recognize single, simple words from some of the indigenous languages they have data for. 

“Right now, we’re building a corpus of audio and texts to start showing early results,” Running Wolf says. 

Importantly, he says, “I think we have an approach that’s scientifically sound.”

Eventually, Running Wolf says he hopes to create a way for tribes to provide their youth with tools to learn these ancient languages by way of technological immersion—through things like augmented or virtual reality, he says. 

Some of these technologies are already under development by Running Wolf and his team, made up of a linguist, a data scientist, a machine learning engineer, and his wife, who used to be a program manager, among others. All of the ongoing research and development is being done in consultation with numerous tribal communities, Running Wolf says.

“It’s all coming from the people,” he says. “They want to work with us, and we’re doing the best to respect their knowledge systems.”


Why do these 15 states have so few ASCs?

By Marcus Robertson for Becker’s ASC Review

With the COVID-19 pandemic ushering in many changes to the healthcare industry, leaders need to recognize emerging patterns to identify areas primed for growth.

In figures recently released by Becker’s ASC Review, nine states plus the District of Columbia were found to have fewer than one ambulatory surgery center (ASC) per 100,000 residents — and the next five states hardly fared better.

Some, such as fifth-fewest New York and sixth-fewest Massachusetts, likely come as a surprise.

So, why do these states have so few ASCs? The following trends may shed some light:

  • Poverty-rate figures show a moderate correlation with per-capita ASC figures: nine of the states listed below land among the 20 states with the highest poverty rates, while six are among the 20 states with the lowest poverty rates (see the sketch after the table). 
  • Surgeons-per-capita figures show little, if any, correlation with ASCs per capita: six of the 15 states with the fewest ASCs per capita rank in the top 10 for surgeons per capita, and eight of the bottom 15 states for ASCs rank in the top 16 for surgeons. 
  • In a modest correlation, the bottom 15 states for ASCs per capita generally, but not always, have lower rates than their immediate neighbors.
  • All but one have certificate-of-need (CON) laws, the strongest correlation measured here.
    • New Mexico, the only one of the bottom 15 states without a CON law, has the nation’s third-highest poverty rate, and every state bordering it has more ASCs per capita as well as more ASCs in absolute terms.

Note: States are listed from fewest to most ASCs per capita.

State** | ASCs | ASCs per 100k residents | Poverty rate | CON law | Active surgeons* | Surgeons* per 100k residents
Vermont | 2 | 0.31 | 9% | Y | 130 | 20.22
District of Columbia | 3 | 0.44 | 14.6% | Y | 260 | 37.71
West Virginia | 8 | 0.45 | 14.6% | Y | 900 | 50.18
Virginia | 61 | 0.71 | 8.8% | Y | 850 | 9.85
New York | 147 | 0.73 | 11.8% | Y | 2,960 | 14.65
Massachusetts | 54 | 0.77 | 8.2% | Y | 1,860 | 26.46
Kentucky | 35 | 0.78 | 14.4% | Y | 400 | 8.88
Alabama | 41 | 0.82 | 14.6% | Y | 430 | 8.56
Iowa | 29 | 0.91 | 9.1% | Y | 410 | 12.85
New Mexico | 20 | 0.94 | 16.2% | N | 600 | 28.34
Oklahoma | 40 | 1.01 | 13.2% | Y | 400 | 10.10
Illinois | 131 | 1.02 | 9.2% | Y | 1,180 | 9.21
Michigan | 106 | 1.05 | 10.6% | Y | 1,410 | 13.99
Maine | 15 | 1.10 | 10% | Y | 220 | 16.15
Rhode Island | 13 | 1.18 | 8.8% | Y | n/a | n/a

* Surgeons include those employed by ASCs, hospitals and other organizations; ophthalmologists are not included in these figures. The Bureau of Labor Statistics did not include surgeon employment data for Rhode Island.

** List of states includes the District of Columbia.
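
For readers who want to check the poverty-rate relationship called out above, here is a minimal sketch (not part of the original article) that recomputes the correlation from the two relevant columns of the table. The lists are copied from the table rows, ordered from fewest to most ASCs per capita; statistics.correlation requires Python 3.10 or later.

```python
# Values copied from the table above, ordered from fewest to most ASCs per capita
from statistics import correlation  # requires Python 3.10+

ascs_per_100k = [0.31, 0.44, 0.45, 0.71, 0.73, 0.77, 0.78, 0.82,
                 0.91, 0.94, 1.01, 1.02, 1.05, 1.10, 1.18]
poverty_rate = [9.0, 14.6, 14.6, 8.8, 11.8, 8.2, 14.4, 14.6,
                9.1, 16.2, 13.2, 9.2, 10.6, 10.0, 8.8]

r = correlation(ascs_per_100k, poverty_rate)
print(f"Pearson r, ASCs per 100k vs. poverty rate: {r:.2f}")
```

Dropping the Rhode Island row (no surgeon data) and swapping in the surgeons-per-100k column gives the comparison behind the second bullet.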


Physician burnout in healthcare: Quo vadis?

By Ifran Khan for Fast Company

Burnout was included as an occupational phenomenon in the International Classification of Diseases (ICD-11) by the World Health Organization in 2019.

Today, burnout is prevalent in the forms of emotional exhaustion, personal and professional disengagement, and a low sense of accomplishment. While cases of physician fatigue continue to rise, some healthcare companies are looking to technology as a driver of efficiency. Could technology pave the way to better working conditions in healthcare?

While advanced technologies like AI cannot solve the issue on their own, data-driven decision-making could alleviate some operational challenges. Based on my experience in the industry, here are some tools and strategies healthcare companies can put into practice to try to reduce physician burnout.

CLINICAL DOCUMENTATION SUPPORT

Clinical decision support (CDS) tools help sift through copious amounts of digital data to catch potential medical problems and alert providers about risky medication interactions. To help reduce fatigue, CDS systems can be used to integrate decision-making aids and channel accurate information on a single platform. For example, they can be used to get the correct information (evidence-based guidance) to the correct people (the care team and patient) through the correct channels (electronic health record and patient portal) in the correct intervention formats (order sets, flow sheets or dashboards) at the correct points in the workflow (to support decision-making).

When integrated with electronic health records (EHRs) to merge with existing data sets, CDS systems can automate the collection of vital-sign data and alerts to aid physicians in improving patient care and outcomes.

AUTOMATED DICTATION

Companies can use AI-enabled speech recognition solutions to reduce “click fatigue” by interpreting and converting human voice into text. When used by physicians to efficiently translate speech to text, these intelligent assistants can reduce effort and error in documentation workflows.

With the help of speech recognition through AI and machine learning, real-time automated medical transcription software can help alleviate physician workload, ultimately addressing burnout. Data collected from dictation technology can be seamlessly added to patient digital files and built into CDS systems. Acting as a virtual onsite scribe, this ambient technology can capture every word in the physician-patient encounter without taking the physician’s attention off their patient.

MACHINE LEARNING

Resource-poor technologies sometimes used in telehealth often lack the bandwidth to transmit physiological data and medical images — and their constant usage can lead to physician distress.

In radiology, advanced imaging through computer-aided ultrasounds can reduce the need for human intervention. Offering a quantitative assessment through deep analytics and machine learning, AI recognizes complex patterns in imaging data, aiding the physician with the diagnosis.

NATURAL LANGUAGE PROCESSING

Upgrading the digitized medical record system, automating the documentation process, and augmenting medical transcription are the foremost benefits of natural language processing (NLP)-enabled software. These tools can reduce administrative burdens on physicians by analyzing and extracting unstructured clinical data and documenting the relevant points in a structured manner. That helps avoid under-coding and streamlines the way medical coders extract diagnostic and clinical data, enhancing value-based care.
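
To make the idea concrete, here is a toy sketch of the kind of extraction such tools perform. It is not any vendor’s actual pipeline: the note text is invented, and the hand-written patterns stand in for the trained clinical language models real NLP software uses.

```python
import re

# Toy illustration only: the note text is invented, and these hand-written
# patterns stand in for the trained clinical language models real tools use.
note = ("Patient reports chest pain for 3 days. BP 142/90, HR 88. "
        "Prescribed lisinopril 10 mg daily.")

structured = {
    "blood_pressure": re.search(r"BP (\d{2,3}/\d{2,3})", note).group(1),
    "heart_rate": int(re.search(r"HR (\d{2,3})", note).group(1)),
    "medication": re.search(r"Prescribed ([a-z]+ \d+ mg \w+)", note).group(1),
}

print(structured)
# {'blood_pressure': '142/90', 'heart_rate': 88,
#  'medication': 'lisinopril 10 mg daily'}
```

The point of the structured output is that it can feed coding and billing workflows directly, rather than leaving a coder to reread the free-text note.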

MITIGATING BURNOUT WITH AI

Advanced medical technologies can significantly reduce physician fatigue, but they must be tailored to the implementation environment. That reduces physician-technology friction and makes the adaptation of technology more human-centered.

The nature of a physician’s job may always put them at risk of burnout, but optimal use and consistent management of technology can make a positive impact. In healthcare, seeking technological solutions that reduce the burden of repetitive work—and then mapping the associated benefits and studying the effects on staff well-being and clinician resilience—provides deep insights.


Voice Recognition Is Awesome, But How Did It Get So Good?

By Arthur Brown for Make Use Of

Voice recognition technology has a rich history of development that’s led it to what it is today. It’s at the core of modern life, giving us the ability to do tasks just by talking to a device. So, how has this astonishing technology evolved over the years? Let’s take a look.

1952: The Audrey System

The first step in voice recognition came about in the early 1950s. Bell Laboratories developed the first machine that could understand the human voice in 1952, and it was named the Audrey System. The name Audrey was sort of a contraction of the phrase Automatic Digit Recognition. While this was a major innovation, it had some major limitations.

Most prominently, Audrey could only recognize the numerical digits 0 through 9, not words. Audrey would give feedback when the speaker said a number by lighting up one of 10 lightbulbs, each corresponding to a digit.

While it could understand the numbers with 90% accuracy, Audrey was confined to a specific voice type. This is why the only person who would really use it was HK Davis, one of the developers. When a number was spoken, the speaker would need to wait at least 300 milliseconds before saying the next one.

Not only was it limited in functionality, but it was also limited in utility. There wasn’t much use for a machine that could only understand numbers. One possible use was dialing telephone numbers, but it was much faster and easier to dial the numbers by hand. Though Audrey didn’t have a graceful existence, it still stands as a great milestone in human achievement.

1962: IBM’s Shoebox

A decade after Audrey, IBM tried its hand at developing a voice recognition system. At the 1962 World’s Fair, IBM showed off a voice recognition system named Shoebox. Like Audrey, its main job was understanding the digits 0-9, but it could also understand six words: plus, minus, false, total, subtotal, and off.

Shoebox was a math machine that could do simple arithmetic problems. As for feedback, instead of lights, Shoebox was able to print out the results on paper. This made it useful as a calculator, though the speaker would still need to pause between each number/word.

1971: IBM’s Automatic Call Identification

After Audrey and Shoebox, other labs around the world developed voice recognition technology. However, the field didn’t take off until the 1970s, when, in 1971, IBM brought a first-of-its-kind invention to market. It was called the Automatic Call Identification system, and it was the first voice recognition system used over the telephone system.

Engineers would call and be connected to a computer in Raleigh, North Carolina. The caller would then utter one of the 5,000 words in its vocabulary and get a “spoken” response as an answer.

1976: Harpy

In the early 1970s, the U.S. Department of Defense took an interest in voice recognition. DARPA (the Defense Advanced Research Projects Agency) launched the Speech Understanding Research (SUR) program in 1971. This program provided funding to several companies and universities to aid research and development in voice recognition.

In 1976, because of SUR, Carnegie Mellon University developed the Harpy System. This was a major leap in voice recognition technology. The systems until that point were able to understand words and numbers, but Harpy was unique in that it could understand full sentences.

It had a vocabulary of 1,011 words, which, according to a publication by B. Lowerre and R. Reddy, equated to more than a trillion different possible sentences. The publication also states that Harpy could understand words with 93.77% accuracy.

The 1980s: The Hidden Markov Model

The 1980s were a pivotal time for voice recognition technology, as this was the decade that introduced the Hidden Markov Model (HMM). The main driving force behind HMM is probability.

Whenever a system registers a phoneme (the smallest element of speech), there’s a certain probability of what the next one will be. HMM uses these probabilities to determine which phoneme will most likely come next and form the most likely words. Most voice recognition systems today still use HMM to understand speech.
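
As a rough illustration of that idea, the sketch below scores a few candidate words by multiplying phoneme-to-phoneme transition probabilities; the phoneme inventory, probabilities, and tiny lexicon are invented for the example. A real HMM recognizer also weighs how well each phoneme matches the acoustic signal (emission probabilities), typically searching with the Viterbi algorithm.

```python
# Illustrative sketch only: the transition probabilities, phoneme inventory,
# and three-word lexicon below are invented for the example.

# P(next phoneme | current phoneme), which a real system learns from training audio
transitions = {
    "k":  {"ae": 0.6, "ih": 0.4},
    "ae": {"t": 0.7, "n": 0.3},
    "ih": {"t": 0.5, "n": 0.5},
}

# Candidate words spelled as phoneme sequences
lexicon = {
    "cat": ["k", "ae", "t"],
    "can": ["k", "ae", "n"],
    "kit": ["k", "ih", "t"],
}

def sequence_probability(phonemes):
    """Multiply transition probabilities along a phoneme sequence."""
    prob = 1.0
    for current, nxt in zip(phonemes, phonemes[1:]):
        prob *= transitions.get(current, {}).get(nxt, 0.0)
    return prob

# Score each candidate word and keep the most probable one
scores = {word: sequence_probability(ph) for word, ph in lexicon.items()}
best_word = max(scores, key=scores.get)
print(best_word)  # 'cat': 0.6 * 0.7 = 0.42 beats 'can' (0.18) and 'kit' (0.2)
```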

The 1990s: Voice Recognition Reaches The Consumer Market

Since its inception, voice recognition technology has been on a journey to find a space in the consumer market. In the 1980s, IBM showcased a prototype computer that could do speech-to-text dictation. However, it wasn’t until the early 1990s that people started to see applications like this in their homes.

In 1990, Dragon Systems introduced the first speech-to-text dictation software. It was called Dragon Dictate, and it was originally released for Windows. This $9,000 program was revolutionary for bringing voice recognition technology to the masses, but there was one flaw: the software used discrete dictation, meaning the user had to pause between each word for the program to pick them up.

In 1996, IBM again contributed to the industry with Medspeak. This was also a speech-to-text dictation program, but it didn’t suffer from discrete dictation as Dragon Dictate did. Instead, it could handle continuous speech, which made it a more compelling product.

2010: A Girl Named Siri

Throughout the 2000s, voice recognition technology exploded in popularity. It was implemented into more software and hardware than ever before, and one crucial step in the evolution of voice recognition was Siri, the digital assistant. In 2010, a company by the name of Siri introduced the virtual assistant as an iOS app.

At the time, Siri was an impressive piece of software that could transcribe what the speaker was saying and give an educated and witty response. The program was so impressive that Apple acquired the company that same year and gave Siri a bit of an overhaul, pushing it toward the digital assistant we know today.

It was through Apple that Siri got its iconic voice (provided by Susan Bennett) and a host of new features. It uses natural language processing to control most of the system’s functions.

The 2010s: The Big 4 Digital Assistants

As it stands, four big digital assistants dominate the voice recognition software space.

  • Siri is present across nearly all of Apple’s products: iPhones, iPods, iPads, and the Mac family of computers.
  • Google Assistant is present across most of the 3 billion+ Android devices on the market. In addition, users can issue voice commands across many Google products, like Google Home.
  • Amazon Alexa doesn’t have much of a dedicated platform where it lives, but it’s still a prominent assistant. It can be downloaded and used on Android devices, Apple devices, and even select Lenovo laptops.
  • Bixby is the newest entry to the digital assistant list. It’s Samsung’s homegrown digital assistant, and it’s present among the company’s phones and tablets.

A Spoken History

Voice recognition has come a long way since the Audrey days. It’s been making great gains in multiple fields; for example, according to Clear Bridge Mobile, the medical field benefited from voice-operated chatbots during the pandemic in 2020. From only being able to understand numbers to understanding different variations of full sentences, voice recognition is proving to be one of the most useful technologies of our modern age.


Are residency policies creating physician shortages? 5 recent studies to know

By Patsy Newitt for Becker’s Hospital Review

California has the most active specialty physicians in the U.S., according to 2021 data published by the Kaiser Family Foundation. 

Here are five things to know from recently published studies:

1. Artificial intelligence technology may deter one-sixth of medical students from pursuing careers in radiology because of negative opinions of AI in the medical community, according to a study published Oct. 2 in Clinical Imaging.

2. Medical students identifying as sexual minorities are underrepresented in undergraduate medical training and among certain specialties following graduation, according to a study published Sept. 30 in JAMA Network Open.

3. Bottlenecks in the physician training and education pipeline are limiting entry into residency and playing a significant role in U.S. physician shortages and care access issues, according to a Sept. 20 report from the nonpartisan think tank Niskanen Center. 

4. California has the most active specialty physicians in the U.S., according to 2021 data published Sept. 22 by the Kaiser Family Foundation.

5. At least 93 percent of providers qualified for a positive payment adjustment from 2017 through 2019 under the Merit-based Incentive Payment System, according to a new report from the Government Accountability Office.


Physician admits to stealing more than $500K from New Jersey practice

By Marcus Robertson for Becker’s Hospital Review

Walter Sytnik, DO, of Voorhees, N.J., admitted Sept. 9 to defrauding his former employer, an unnamed New Jersey medical practice, to the tune of more than $500,000. Dr. Sytnik stole checks from the practice and used them to pay personal expenses.

Prior to attending medical school, Dr. Sytnik worked for the New Jersey practice as a bookkeeper. From May 2013 to April 2018, he used the checks he stole to pay credit card bills, reordering new checks when he ran out. Dr. Sytnik forged the signature of the practice’s physician.

The mail fraud charge carries a maximum penalty of 20 years in prison and a $250,000 fine, or twice the gross gain or loss from the offense, whichever is greatest. Dr. Sytnik agreed to repay the full amount as part of his plea agreement. Sentencing is scheduled for Jan. 10, 2022.


Three Ways AI Is Improving Assistive Technology

By Wendy Gonzalez for Forbes

Artificial intelligence (AI) and machine learning (ML) are some of the buzziest terms in tech, and for good reason. These innovations have the potential to tackle some of humanity’s biggest obstacles across industries, from medicine to education and sustainability. One sector, in particular, is set to see massive advancement through these new technologies: assistive technology. 

Assistive technology is defined as any product that improves the lives of individuals who otherwise may not be able to complete tasks without specialized equipment, such as wheelchairs and dictation services. Globally, more than 1 billion people depend on assistive technology. When implemented effectively, assistive technology can improve accessibility and quality of life for all, regardless of ability. 

Here are three ways AI is currently improving assistive technology and its use-cases, which might give your company some new ideas for product innovation: 

Ensuring Education For All

Accessibility remains a challenging aspect of education. For children with learning disabilities or sensory impairments, dictation technology, more commonly known as speech-to-text or voice recognition, can help them to write and revise without pen or paper. In fact, 75 out of 149 participants with severe reading disabilities reported increased motivation in their schoolwork after a year of incorporating assistive technology.

This technology works best when powered by high-quality AI. Natural Language Processing (NLP) and machine learning algorithms have the capability to improve the accuracy of speech recognition and word predictability, which can minimize dictation errors while facilitating effective communication from student to teacher or among collaborating schoolmates. 
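
As a rough illustration of what “word predictability” means in practice, the sketch below builds a tiny bigram model: given the previous word, it ranks likely next words, which is how a dictation system can prefer a plausible continuation over an acoustically similar but unlikely one. The corpus and numbers are invented for the example; production systems train far larger language models.

```python
from collections import Counter, defaultdict

# Toy example only: the "corpus" below is invented; real dictation systems
# train far larger language models on millions of sentences.
corpus = "please turn in your homework please turn on the light".split()

# Count how often each word follows each other word
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word, k=2):
    """Return the k most likely next words after `word`, with probabilities."""
    counts = bigrams[word]
    total = sum(counts.values())
    return [(w, c / total) for w, c in counts.most_common(k)]

print(predict_next("turn"))    # [('in', 0.5), ('on', 0.5)]
print(predict_next("please"))  # [('turn', 1.0)]
```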

That said, according to a 2001 study, only 35% of elementary schools — arguably the most significant portion of a child’s education — provided any assistive technology. This statistic could change thanks to social-impact AI programs such as Microsoft’s AI for Accessibility initiative, which invests in innovations that support people with neurodiversities and disabilities. Its projects include educational AI applications that provide students with visual impairments the text-to-speech, speech recognition and object recognition tools they need to succeed in the classroom.  

Better Outcomes For Medical Technology

With a rapidly aging population estimated to top 2 billion people over the age of 60 by 2050, our plans to care for our loved ones could rely heavily on AI and ML in the future. Doctors and entrepreneurs are already paving the way; in the past decade alone, medical AI investments topped $8.5 billion in venture capital funding for the top 50 startups. 

Robot-assisted surgery is just one AI-powered innovation. In 2018, robot-assisted procedures accounted for 15.1% of all general surgeries, and this percentage is expected to rise as surgeons implement additional AI-driven surgical applications in operating rooms. Compared to traditional open surgery, robot-assisted procedures tend to involve smaller incisions, which reduces overall pain and scarring and leads to quicker recovery times.

AI-powered wearable medical devices, such as female fertility-cycle trackers, are another popular choice. Demand for products including diabetic-tracking sweat meters and oximeters for respiratory patients has created a market projected to grow at a 23% CAGR by 2023. 

What’s more, data taken from medical devices could contribute to more than $7 billion in savings per year for the U.S. healthcare market. This data improves doctors’ understanding of preventative care and better informs post-recovery methods when patients leave after hospital procedures.

Unlocking Possibilities In Transportation And Navigation

Accessible mobility is another challenge that assistive technology can help tackle. Through AI-powered running apps and suitcases that can navigate entire airports, assistive technology is changing how we move and travel. One example is Project Guideline, a Google project helping individuals who are visually impaired navigate roads and paths with an app that combines computer vision and a machine-learning algorithm to guide the runner along a pre-designed path. 

Future runners and walkers may one day navigate roads and sidewalks unaccompanied by guide dogs or sighted guides, gaining autonomy and confidence while accomplishing everyday tasks and activities without hindrance. For instance, developed and spearheaded by Chieko Asakawa, a Carnegie Mellon Professor who is blind, CaBot is a navigation robot that uses sensor information to help avoid airport obstacles, alert someone to nearby stores and assist with required actions like standing in line at airport security checkpoints. 

The Enhancement Of Assistive Technology

These are just some of the ways that AI assistive technology can transform the way individuals and society move and live. To ensure assistive technologies are actively benefiting individuals with disabilities, companies must also maintain accurate and diverse data sets with annotation that is provided by well-trained and experienced AI teams. These ever-updating data sets need to be continually tested before, during and after implementation.

AI possesses the potential to power missions for the greater good of society. Ethical AI can transform the ways assistive technologies improve the lives of millions in need. What other types of AI-powered assistive technology have you come across and how could your company make moves to enter this industry effectively? 


Embracing AI-enabled technologies as a strategic asset

By Robert Budman for Nuance Blog

Earlier this month, we learned that nearly 70% of health system executives plan more significant investments in AI-powered technologies to support a wide range of use cases, particularly operational capabilities such as documentation workflows. Combined with a vision of what AI can make possible, these projected investments are driving significant growth for AI in healthcare, a market that industry experts expect to reach nearly $40 billion by 2026.

What’s the vision? Ultimately, it’s simple: to solve some of healthcare’s most demanding challenges and positively impact patient care. One healthcare executive, recently interviewed by Becker’s Hospital Review, shared a key “wish list” item for AI in healthcare: for providers to “be hands-free of technology and hands-on for patient care.”

That’s where conversational AI solutions like Nuance Dragon Medical One play an important role. Instead of keeping physicians tethered to a workstation in the exam room, backs turned to patients, speech-enabled platforms free physicians to engage fully with their patients. Now they can use the power of their voice to capture every patient’s complete story more naturally and efficiently and to automate high-value documentation tasks, such as inserting templates and frequently used text. Because of Dragon Medical One’s ability to do all of that and more, it recently earned the distinction of #1 Best in KLAS for speech recognition (front-end EMR).

Tanner Health System and its CMIO, Dr. Bonnie Boles, believe that without a speech-enabled documentation workflow, providers would be documenting notes well into the evening, missing dinners and family events, and falling behind with patients in the waiting room. That combination not only diminishes provider satisfaction but can also contribute to feelings of burnout.

Dragon Medical One has been a game-changer for Tanner Health. “Providers are more relaxed with the patients, and patients appreciate these more natural interactions. You are not looking down at a keyboard and clicking. Instead, they see you talking about what’s going on with them, and that allows them to see how much you’ve listened. It helps you bond better with the patients,” said Dr. Boles.

Solving for healthcare’s most demanding challenges

Dr. Boles and Tanner Health embraced AI-enabled technology as a strategic asset. As a result, they’ve found new ways to improve satisfaction and the quality of life for their providers. Their physicians agree that Dragon Medical One has made it easier to capture patient stories and create higher-quality documentation.

According to Dr. Boles, “The providers are delighted with their ability to capture their narratives and be robust in their dictation. One of our Teleneurologists said, ‘I love it, and Tanner Health should love it too because my notes are so much better.’”

Implementing new technology is one thing; making the most of that investment is another. For that, you need providers fully committed and bought into the technology. So I conclude with my advice: 1) Provide the necessary training and support to maximize utilization among all providers, doctors, and nurses alike. 2) Promote positive outcomes and success stories among your clinical staff. When they see the value, they’ll use the technology even more. 3) Measure adoption and understand what’s driving it. It’s a continuous process that can help healthcare organizations make their visions for the future a reality in the present.


Talking it through: speech recognition takes the strain of digital transformation

By Nuance for Healthcare IT

HITN: COVID-19 has further exposed employee stress and burnout as major challenges for healthcare. Tell us how we can stop digital transformation technologies from simply adding to them.

Wallace: By making sure that they are adopted for the right reasons – meeting clinicians’ needs without adding more stress or time pressure to already hectic workflows. For example, because COVID-19 was a new disease, clinicians had to document their findings in detail and quickly – often while wearing PPE – without the process slowing them down. I think speech recognition technology has been helpful in this respect, not just because of speed but also because it allows the clinician time to provide more quality clinical detail in the content of a note.

In a recent HIMSS/Nuance survey, 82% of doctors and 73% of nurses felt that clinical documentation contributed significantly to healthcare professional overload. It has been estimated that clinicians spend around 11 hours a week creating clinical documentation, and up to two thirds of that can be narrative.

HITN: How do you think speech recognition technology can be adapted into clinical tasks and workflow to help lower workload and stress levels?

Wallace: One solution is cloud-based AI-powered speech recognition: instead of either typing in the EPR or EHR or dictating a letter for transcription, clinicians can use their voice and see the text appear in real time on the screen. Using your voice is a more natural and efficient way to capture the complete patient story. It can also speed up navigation in the EPR system, helping to avoid multiple clicks and scrolling. The entire care team can benefit – not just in acute hospitals but across primary and community care and mental health services.

HITN: Can you give some examples where speech recognition has helped to reduce the pressure on clinicians?

Wallace: In hospitals where clinicians have created their outpatient letters using speech recognition, reductions in turnaround times from several weeks down to two or three days have been achieved across a wide range of clinical specialties. In some cases where no lab results are involved, patients can now leave the clinic with their completed outpatient letter.

In the Emergency Department setting, an independent study found that speech recognition was 40% faster than typing notes and has now become the preferred method for capturing ED records. The average time saving in documenting care is around 3.5 mins per patient – in this particular hospital, that is equivalent to 389 days a year, or two full-time ED doctors!

HITN: How do you see the future panning out for clinicians in the documentation space when it comes to automation and AI technologies?

Wallace: I think we are looking at what we call the Clinic Room of the Future, built around conversational intelligence. No more typing for the clinician, no more clicks, no more back turned to the patient hunched over a computer.

The desktop computer is replaced by a smart device with microphones and movement sensors. Voice biometrics allow the clinician to sign in to the EPR verbally and securely (My Voice is my Password), with a virtual assistant responding to voice commands. The technology recognises non-verbal cues – for example, when a patient points to her left knee but only actually states it is her knee. The conversation between the patient and the clinician is fully diarised, while in the background, Natural Language Processing (using Nuance’s Clinical Language Understanding engine) works to create a structured clinical note that summarises the consultation and codes the clinical terms, e.g. with SNOMED CT.

The result is a more professional and interactive clinician/patient consultation. 

Healthcare IT News spoke to Dr Simon Wallace, CCIO of Nuance’s healthcare division, as part of the ‘Summer Conversations’ series.