CMS proposal lowers physician pay in 2022: 4 details

By Laura Dyrda for Becker’s Hospital Review

CMS issued the 2022 Physician Fee Schedule proposal July 13, which would lower physician pay next year if finalized without changes.

Four details:

1. CMS proposed decreasing the physician pay conversion factor 3.75 percent next year, from $34.89 to $33.58. The adjustment would account for changes in relative value units (RVUs) and expenditures tied to other proposed policy updates.
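
As a quick sanity check on that arithmetic, a 3.75 percent cut to the $34.89 conversion factor lands at roughly $33.58. A minimal, purely illustrative sketch in Python:

    # Verify the proposed conversion factor cut described above.
    current_cf = 34.89                       # 2021 conversion factor, in dollars
    proposed_cf = current_cf * (1 - 0.0375)  # 3.75 percent decrease
    print(round(proposed_cf, 2))             # -> 33.58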

2. CMS noted that the proposal’s budget neutrality adjustments account for the RVU changes, and that the 3.75 percent payment increase from the Consolidated Appropriations Act of 2021 is set to expire at the end of the year.

3. Specialty physician organizations are calling on Congress to intervene and stop cuts to physician pay, according to a statement from the Surgical Care Coalition.

“At a time when medical practices have been dramatically impacted by the COVID-19 pandemic, causing a significant backlog of patients in need of surgical care, further cuts are not only unsustainable, they ultimately threaten patient access to care,” said Richard Hoffman, MD, president of the American Society of Cataract and Refractive Surgery, in the statement. “This is especially true for patients receiving sight-restoring cataract surgery, one of the most successful and frequently performed procedures for Medicare beneficiaries.”

4. The proposed rule is open for comment through Sept. 13 and would take effect Jan. 1, 2022, if the proposed changes are finalized.

What Is The Difference Between Speech Recognition And Voice Recognition?

By Ratnesh Shinde for Tech Notification

This article explains the difference between two similar but distinct technologies: speech recognition and voice recognition. Although the two terms sound like they mean the same thing, they refer to different technologies.

Digital assistants such as Amazon’s Alexa, Microsoft’s Cortana, and Apple’s Siri have helped to make these words more widely known throughout the world. In addition to speech recognition, these assistants make use of voice recognition technology as well.

By 2024, the overall number of digital voice assistants in use will reach 8.4 billion units, which is more than the whole population of the world, according to Statista research.

However, there are still many individuals who have questions that need to be answered, so let’s take a deeper look at speech recognition and voice recognition.

What is Speech Recognition?

Speech recognition is intertwined with voice recognition: once a voice is picked up, speech recognition software can identify what is being said. How does it work? Speech recognition transcribes or captions the words coming out of the speaker’s mouth using a variety of speech-pattern algorithms and language models. High-quality audio is required for the program to transcribe the speech with high accuracy.

The following are the requirements for high-accuracy speech recognition (a short transcription sketch follows the list):

  • There is only one speaker.
  • There is no background noise.
  • It is advisable to use a high-quality microphone.
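
To make the transcription step concrete, here is a minimal sketch using the open-source SpeechRecognition Python package and its free Google web recognizer. It is an illustration only, not something the article prescribes, and the file name is a placeholder.

    import speech_recognition as sr

    recognizer = sr.Recognizer()

    # Load a clean, single-speaker recording (WAV, AIFF and FLAC are supported).
    with sr.AudioFile("notes.wav") as source:
        # Sample half a second of ambient noise to calibrate the energy threshold.
        recognizer.adjust_for_ambient_noise(source, duration=0.5)
        audio = recognizer.record(source)

    try:
        # Send the audio to Google's free web recognizer and print the transcript.
        print(recognizer.recognize_google(audio))
    except sr.UnknownValueError:
        print("Speech was unintelligible")
    except sr.RequestError as err:
        print(f"Recognition service error: {err}")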

When is it necessary to use speech recognition?

Speech recognition software can transcribe spoken words into text, which makes it a useful aid for taking notes.

Auto-generated subtitles, dictation tools and text relay services for deaf and hard-of-hearing individuals also make film and other media more accessible. These services help people with disabilities interact with media and the wider world.

What is Voice Recognition?

As we all know, speech and voice recognition are two distinct technologies, yet they are interconnected in many ways.

Voice recognition software is trained to identify a particular voice. The user repeats a variety of phrases, and the program uses them to learn the speaker’s identity, delivery style and tone of voice, all of which matter for recognizing who is speaking. This is the method used by default in most virtual assistants and voice-to-text apps.
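
As a rough illustration of the matching idea (not any particular vendor’s method), speaker identification systems typically compare a numerical “voiceprint” embedding of new audio against embeddings enrolled for known users. In the sketch below the embedding model is faked with random vectors; every name and number is invented.

    import numpy as np

    def cosine_similarity(a, b):
        # Similarity between two voiceprint vectors, in [-1, 1].
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def identify_speaker(new_embedding, enrolled, threshold=0.75):
        # Return the best-matching enrolled speaker, or None below the threshold.
        best_name, best_score = None, threshold
        for name, embedding in enrolled.items():
            score = cosine_similarity(new_embedding, embedding)
            if score > best_score:
                best_name, best_score = name, score
        return best_name

    # Stand-ins for embeddings a real model would extract from enrollment audio.
    rng = np.random.default_rng(0)
    enrolled = {"alice": rng.normal(size=128), "bob": rng.normal(size=128)}

    # A new utterance from "alice", perturbed with a little noise.
    sample = enrolled["alice"] + 0.1 * rng.normal(size=128)
    print(identify_speaker(sample, enrolled))  # -> alice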

The following are the limitations of voice recognition:

  • It can carry out only a limited range of tasks.
  • If the statements are not properly understood, the virtual assistant might request that they be repeated.
  • If a few words are left out, the result might be quite different.
  • Any change in the tone or delivery of the voice significantly decreases recognition accuracy.

When is it necessary to use voice recognition?

Users can verify their identity by speaking, effectively using their voice as a password. This enhances security while keeping the cost of biometric authentication low.

More effective and efficient operations – communicating with technology accurately by voice reduces the need to scan for errors and lets tasks be completed faster.

Virtual assistants are made possible by the use of voice and speech recognition technology.

What is the significance of the term “Smart Technology” and what are the obstacles to the widespread use of voice technology?

In the smart device business, virtual assistants have emerged as a critical component, becoming fundamental to how customers engage with their gadgets. As the industry progresses and its technology matures, businesses are increasingly looking for ways to make better use of “smart technology.”

However, there are still significant obstacles to the widespread use of speech technology throughout the world.

Accuracy is considered one of the most significant obstacles to the widespread adoption of speech technology.

Some people believe that difficulty detecting accents and dialects will make speech technology harder to adopt.

That covers the essentials of voice recognition and speech recognition.

Conclusion

Voice recognition and speech recognition are both working their way up the technological food chain.

Furthermore, these technologies have potential beyond virtual assistants: audio-to-text software is assisting many industries that are not particularly technology-oriented, such as healthcare, education, finance and government.

People are becoming increasingly enthusiastic about integrating virtual assistants with their own software to spur innovation. Are you looking forward to it as well?

14% of physicians sought new employment due to COVID-19, survey says

By Patsy Newitt for Becker’s Hospital Review

Fifteen percent of physicians had to take out a loan because of COVID-19, according to an April 2020 survey. 

AMN Healthcare, B.E. Smith and Merritt Hawkins’ recent report called “Will there be a doctor in the house?” detailed physician supply, demand and staffing during COVID-19.

As a result of the COVID-19 pandemic:

  • 14 percent of physicians sought employment at a different practice 
  • 6 percent of physicians found a job that does not involve direct patient care
  • 7 percent of physicians closed their practice temporarily
  • 5 percent of physicians retired
  • 4 percent of physicians left private practice and sought employment with a hospital or other entity 
  • 15 percent of physicians took out a loan 
  • 2 percent of physicians sought physical healthcare
  • 3 percent of physicians sought mental healthcare
  • 66 percent of physicians continued to practice 

Alexa Introduces Voice Profiles for Kids and New AI Reading Tutor

By ERIC HAL SCHWARTZ for Voice Bot AI

Amazon has augmented Alexa’s voice profile feature with a version aimed specifically at children. Parents and guardians can use the new Alexa Voice Profiles for Kids tool to enable a personalized experience for up to four children per account. The profiles have debuted alongside Reading Sidekick, a new AI-powered tutor to encourage and help children become literate.

AI READING

Reading Sidekick is the central part of the kid-focused profiles at the moment. Designed for children between the ages of six and nine, Reading Sidekick uses Alexa to help teach a kid to read any of the several hundred titles in its library of supported books, in both digital and physical form. It requires just an Echo smart speaker or smart display and an Amazon Kids+ subscription. Amazon Kids+, the renamed FreeTime and FreeTime Unlimited service, offers exclusive Alexa skills and other content for $3 a month for Prime members and $5 a month for non-Prime members. When a child says, “Alexa, let’s read,” the voice assistant asks what book they want to read and how much they want to read, with choices of taking turns, a little, or a lot. Taking turns means Alexa and the child trade reading sections, while a little or a lot shifts the ratio one way or the other. Regardless, Alexa praises the child’s progress and even prompts them with the next word if they get stuck.
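
The turn-taking options can be pictured as a simple reading-share policy. The sketch below is purely illustrative – it is not Amazon’s implementation, and the share values are invented.

    # Toy model of the "taking turns", "a little" and "a lot" reading modes.
    def reader_schedule(mode, total_sections):
        # Return who reads each section: "alexa" or "child".
        # Assumed child reading shares; Amazon's real ratios are not public.
        child_share = {"taking turns": 0.5, "a little": 0.25, "a lot": 0.75}[mode]
        schedule, credit = [], 0.0
        for _ in range(total_sections):
            credit += child_share
            if credit >= 1.0:
                schedule.append("child")
                credit -= 1.0
            else:
                schedule.append("alexa")
        return schedule

    print(reader_schedule("taking turns", 6))  # alternating alexa/child
    print(reader_schedule("a lot", 8))         # child reads most sections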

“With the arrival of Reading Sidekick, we are hopeful we can make reading fun for millions of kids to set them up for a lifetime of learning and a love of reading,” Alexa Education and Learning head Marissa Mierow said. “Alexa provides a welcoming, no-judgment zone and is always ready to help and to read.”

ALEXA FOR KIDS

Amazon first debuted voice profiles for Alexa users back in 2017, enabling Alexa to respond differently to the same query based on who is speaking, without switching accounts. This made it easier for a family or roommates to share an Alexa device. Third-party developers were given permission to integrate that element into their Alexa skills in 2019, and the voice assistant began applying user contact information to personalize interactions last year. The voice recognition feature even expanded to Amazon’s call center platform in December. The voice profiles created for children function largely the same way but with a narrower range of functions.

It would be an impressive feat for Amazon to have Alexa understand children as well as it does adults. The difficulties involved are why the children’s speech recognition startup SoapBox Labs was formed. SoapBox, which introduced new Voice Activity Detection (VAD) and Custom Wakeword tools in May, builds on a database of thousands of hours of children’s speech and its own deep learning technology to understand the unique patterns and inflections of children’s speech. There’s no denying, however, that there’s a growing demand for kid-focused voice AI. Earlier this year, Google released its own reading tutor for kids, but that feature doesn’t have the personalized touch of Amazon’s new profiles. The new feature will almost inevitably be folded into the cluster of lawsuits Amazon faces over whether Alexa violates children’s privacy. Building it also meant teaching Alexa to better understand how kids speak and the many variations based on location, age, background and other factors. The microphones in an Echo are also adjusted when a kid’s profile is engaged, since children may be farther away or sitting behind a book when using Reading Sidekick.

First mobile phone service launched 75 years ago; what it takes for tech to go from breakthrough to big time

By Daniel Bliss for The Free Press Journal

I have a cellphone built into my watch.

People now take this type of technology for granted, but not so long ago it was firmly in the realm of science fiction. The transition from fantasy to reality was far from the flip of a switch. The amount of time, money, talent and effort required to put a telephone on my wrist spanned far beyond any one product development cycle.

The people who crossed a wristwatch with a cellphone worked hard for several years to make it happen, but technology development really occurs on a timescale of decades.

While the last steps of technological development capture headlines, it takes thousands of scientists and engineers working for decades on myriad technologies to get to the point where blockbuster products begin to capture the public’s imagination.

The first mobile phone service, for 80-pound telephones installed in cars, was demonstrated on June 17, 1946, 75 years ago. The service was only available in major cities and highway corridors and was aimed at companies rather than individuals. The equipment filled much of a car’s trunk, and subscribers made calls by picking up the handset and speaking to a switchboard operator. By 1948, the service had 5,000 customers.

The first handheld mobile phone was demonstrated in 1973, nearly three decades after the introduction of the first mobile phone service. It was nearly three decades after that before half the U.S. population had a mobile phone.

Big history in small packages

As an electrical engineer, I know that today’s mobile phone technology has a remarkable number of components, each with a long development path. The phone has antennas and electronics that allow signals to be transmitted and received. It has a specialized computer processor that uses advanced algorithms to convert information to signals that can be transmitted over the air. These algorithms have hundreds of component algorithms. Each of these pieces of technology and many more have development histories that span decades.

A common thread running through the evolution of virtually all electronic technologies is miniaturisation. The radio transmitters, computer processors and batteries at the heart of your cellphone are the descendants of generations of these technologies that grew successively smaller and lighter.

The phone itself would not be of much use without cellular base stations and all the network infrastructure that is behind them. The first mobile phone services used small numbers of large radio towers, which meant that all the subscribers in a big city shared one central base station. This was not a recipe for universal mobile phone service.

Engineers began working on a concept to overcome this problem at about the time the first mobile phone services went live, and it took nearly four decades to roll out the first cellular phone service in 1983. Cellular service involves interconnected networks of smaller radio transceivers that hand off moving callers from one transceiver to another.

Military necessity

Your cellphone is a result of over a hundred years of commercial and government investment in research and development in all of its components and related technologies. A significant portion of the cutting-edge development has been funded by the military.

A major impetus for developing mobile wireless technologies was the need during World War II for troops to communicate on the move in the field. The SRC-536 Handie-Talkie was developed by the predecessor to Motorola Corporation and used by the U.S. Army in the war. The Handie-Talkie was a two-way radio that was small enough to be held in one hand and resembled a telephone. Motorola went on to become one of the major manufacturers of cellphones.

The story of military investment in technology becoming game-changing commercial products and services has been repeated again and again. Famously, the Defense Advanced Research Projects Agency developed the technologies behind the internet and speech recognition. But DARPA also made enabling investments in advanced communications algorithms, processor technology, electronics miniaturisation and many other aspects of your phone.

Looking forward

By realising that it takes many decades of research and investment to develop each generation of technology, it’s possible to get a sense of what might be coming. Today’s communications technologies – 5G, WiFi, Bluetooth and so on – are fixed standards, meaning each is designed for a single purpose. But over the last 30 years, the Department of Defense and corporations have been investing in technologies that are more capable and flexible.

Your phone of the near future might not only fluidly signal in ways that are more efficient, enable longer ranges or higher data rates, or last significantly longer on a charge, it might also use that radiofrequency energy to perform other functions. For example, your communications signal could also be used as a radar signal to track your hand gestures to control your phone, measure the size of a room, or even monitor your heart rate to predict cardiac distress.

It is always difficult to predict where technology will go, but I can guarantee that future technology will build on decades upon decades of research and development.

10 physician income statistics by practice size, ownership

By Laura Dyrda for Becker’s Hospital Review

Physician practice owners earned about 4 percent more than nonowner physicians in 2020, and compensation changes as practices grow, according to Medical Economics’ salary, productivity and profession survey.

The survey, published June 3, gathered responses from physicians in multiple specialties and practice settings, with 51 percent having ownership interest in their practices.

Ten findings on physician income by practice type:

  1. Physician owners: $276,000
  2. Nonowner physicians: $265,000
  3. Employed at an inpatient hospital: $274,000
  4. Private practice: $268,000
  5. Hospital-owned practice: $297,000
  6. Solo practice: $242,000
  7. Two-physician group: $259,000
  8. Three to 10 physician group: $287,000
  9. Eleven to 50 physician group: $281,000
  10. More than 50 physicians: $286,000

TOP 10 SPEECH RECOGNITION SOFTWARE TO LOOK OUT FOR IN 2021

By Sayantani Sanyal for Analytics Insight

A list of the top speech recognition software one can embrace in 2021.

Speech recognition software has made life easier and has become increasingly popular in recent years. Several companies and business leaders are choosing speech-to-text systems because they are faster and produce accurate results.

Speech recognition software converts spoken language into text using machine learning algorithms, letting users control their devices and create documents efficiently. According to reports, the speech recognition software market is estimated to grow from US$10.70 billion in 2020 to US$27.155 billion by 2026.

For businesses, speech-to-text systems are making valuable contributions. In the legal and healthcare sectors, the software is used to produce accurate documentation. Customer service teams use the technology to capture caller information such as names, account numbers and queries. It improves cost-effectiveness by reducing, or even eliminating, the need for live agents while improving customer service at the same time.

Here is a list of the top 10 speech recognition software to look out for in 2021.

1. Google Docs Voice Typing: Google’s improved speech recognition in one of the world’s most popular word processors has drawn attention from companies and business leaders worldwide. The feature is integrated with G Suite and enables formatting and editing of content in Docs by voice. One of its key benefits is that it works on both Windows and Mac devices and is completely free.

2. Winscribe: Winscribe is owned by Nuance and provides documentation workflow management and transcription services, helping organizations manage their dictations. It enables easy documentation by supporting input methods such as digital voice recording from computers, smartphones, tablets and other devices, and it runs on Android, iPhone and PC. An added benefit is that Winscribe can detect deviations from the usual workflow and take corrective measures.

3. Speechnotes: Speechnotes is dictation software powered by Google’s speech engine that is easy to use and enables efficient documentation. Users do not need to create an account to log in to the app; they simply open it, tap the microphone and start dictating. It also lets users insert punctuation marks by voice command or through a built-in punctuation keyboard, and it offers several font and text-size options, with in-app purchases to unlock premium features.

4. Gboard: Android users can get Gboard by installing it from the Google Play Store. Gboard is an instant speech-to-text application designed as a keyboard for typed input, but it also carries a speech input option, making it more versatile. It works with Google Translate and provides dictation in over 60 languages. Even though it is not strictly a transcription application, it offers all the facilities of a basic transcription tool.

5. Otter: Otter is a cloud-based speech recognition program that provides real-time transcription and lets users edit, search and organize transcriptions according to their preferences. Otter is built for team collaboration, specifically for meetings, interviews and lectures, where different speakers are assigned IDs to make transcriptions easier to follow. It is a paid program with three plan tiers designed to suit different users.

6. Windows 10 Speech Recognition: Windows 10 provides built-in speech recognition that delivers accurate transcriptions. Users can give basic commands to the system and access assistant features through voice control. They can also train the software by reading text to the system and granting access to their documents, giving it a deeper understanding of their vocabulary.

7. Dragon Speech Recognition Software: Dragon is owned by Nuance. Its AI-based speech recognition algorithm learns its user’s voice with greater accuracy over time and supports cloud-based document management. One of the fastest options available, it claims to deliver 99 percent speech recognition accuracy. It aims to remove barriers to productivity and make interaction between users and computers more efficient.

8. Microsoft Azure Speech-to-Text: Microsoft Azure’s Speech-to-Text feature is powered by deep neural network models and provides real-time audio transcription. It lets users create text from multiple audio sources and offers customization options to better handle distinct speech patterns and background sounds, including specialist vocabulary such as product names, place names and other technical terms.

9. IBM Watson Speech to Text: IBM Watson’s Speech to Text service is powered by AI and machine learning. Beyond real-time transcription, it lets users convert batches of audio files, processing them by language, audio frequency and a range of other output options. It enables tagging transcriptions with speaker labels, smart formatting and other features.

10. Amazon Transcribe: Amazon Transcribe is a cloud-based automatic speech recognition platform that converts audio files into text. It aims to outperform traditional speech recognition software by handling low-fidelity and noisy recordings, and it uses deep learning to add punctuation and formatting automatically. It is one of the most powerful and efficient speech-to-text platforms and is used mostly by business enterprises.
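
As a concrete illustration of the batch workflow such services expose, here is a minimal sketch of starting and polling a transcription job with Amazon Transcribe through the boto3 SDK; the bucket, file and job names are placeholders.

    import time
    import boto3

    transcribe = boto3.client("transcribe", region_name="us-east-1")

    # Kick off a batch job on an audio file already uploaded to S3.
    transcribe.start_transcription_job(
        TranscriptionJobName="example-job",
        Media={"MediaFileUri": "s3://example-bucket/interview.mp3"},
        MediaFormat="mp3",
        LanguageCode="en-US",
    )

    # Poll until the job finishes, then print where the transcript lives.
    while True:
        job = transcribe.get_transcription_job(TranscriptionJobName="example-job")
        status = job["TranscriptionJob"]["TranscriptionJobStatus"]
        if status in ("COMPLETED", "FAILED"):
            break
        time.sleep(5)

    if status == "COMPLETED":
        print(job["TranscriptionJob"]["Transcript"]["TranscriptFileUri"])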

Musicians ask Spotify to publicly abandon controversial speech recognition patent

By I. Bonifacic for Engadget

At the start of the year, Spotify secured a patent for a voice recognition system that could detect the “emotional state,” age and gender of a person and use that information to make personalized listening recommendations. As you might imagine, the possibility that the company was working on such a technology made a lot of people uncomfortable, including the digital rights non-profit Access Now. At the start of April, the organization sent Spotify a letter calling on it to abandon the tech. After Spotify privately responded to those concerns, Access Now, along with several other groups and a collection of more than 180 musicians, is asking the company to publicly commit to never using, licensing, selling or monetizing the system it patented. Signatories include Rage Against the Machine guitarist Tom Morello, rapper Talib Kweli and indie group DIIV.

In a new letter addressed to Spotify CEO Daniel Ek, the coalition outlines five primary concerns with the technology. It worries the system would let Spotify, or any other company that deploys it, manipulate users emotionally, harvest their personal information and discriminate against trans and non-binary people. It also says the technology would only worsen economic inequality in the music industry. “Music should be made for human connection, not to please a profit-maximizing algorithm,” the group says. Access Now asks Spotify to publicly respond to its request by May 18th, 2021.

When we asked Spotify to comment on Access Now’s request, the company pointed Engadget to a letter it sent to the organization in mid-April. “Spotify has never implemented the technology described in the patent in any of our products and we have no plans to do so,” Horacio Gutierrez, Spotify’s head of global affairs, says in the letter. “The decision to patent an invention does not always reflect the company’s intent to implement the invention in a product, but is instead influenced by a number of other considerations, including our responsibilities to our users and to society at large.”

Less than half of physicians are in independent practices

By Laura Dyrda for Becker’s Hospital Review

For the first time, less than 50 percent of physicians reported working in physician-owned practices last year, according to a May 5 American Medical Association report.

The AMA surveyed 3,500 physicians in September and October 2020 about their employment and practice situations.

Five findings:

1. Forty-nine percent of physicians worked in wholly physician-owned practices last year, including 38.4 percent who were practice owners.

2. Since 2018, the share of physicians in private practice has dropped 5 percentage points.

3. Sixty-six percent of surgical specialists are in private practice.

4. One-third of physicians younger than 40 were in private practice.

5. Forty-three percent of physicians worked in single-specialty practices, and 26.2 percent worked in multispecialty groups.

Few physicians attributed changing employment status to the pandemic, indicating a larger trend away from physician ownership, the report said. ASC leaders in markets across the U.S. face challenges finding new physicians for their centers as independence wanes.

Brian Bizub, CEO of Raleigh Orthopaedic, said that while communities in North Carolina are growing rapidly, there has been a shift away from private practice to hospital employment. He said reimbursement declines in private practice make it difficult to manage overhead, and the referral networks are drying up as primary care physicians become affiliated with hospitals.

“The current state of healthcare reform is creating uncertainties, and physician preferences are shifting toward hospital employment over private practice,” he said. “Recent trends and published studies clearly show that younger physicians are interested not only in practicing medicine, but in maintaining a quality home life as well. There is less interest among physicians in taking on administrative tasks and worrying about overhead and payer reimbursements.”

His group has been able to maintain private ownership because of collaboration with a local health system to become part of its referral network.

On the other hand, some communities are seeing a spike in the number of physicians interested in ASCs. Danilo D’Aprile, administrator of Danbury, Conn.-based Orthopaedic Specialty Surgery Center, said more physicians have requested center credentials since the pandemic began, especially for total joint replacements.

“Our total joint replacement program is very robust,” he said. “We are in a good position to attract a lot of these doctors, and I have a lot of physicians coming to me to request privileges and ownership because they want to be here.”

What Microsoft’s Acquisition of Nuance Could Mean For The Future of Workplace AI

By Zachary Comeau for My Tech Decisions

Microsoft’s recent announcement that it is acquiring healthcare artificial intelligence and voice recognition company Nuance could signal a new era of voice-enabled technologies in the enterprise.

Nuance’s speech recognition technology for medical dictation is currently used in 77% of U.S. hospitals, and Microsoft plans to integrate those technologies with its Microsoft Cloud for Healthcare offering that was introduced last year.

However, the purchase price of $19.7 billion indicates that Microsoft has plans to bring more voice recognition technology to other vertical markets aside from healthcare.

We sat down with Igor Jablokov, founder and CEO of augmented AI company Pryon and an early pioneer of automated cloud platforms for voice recognition whose work helped lead to Amazon’s virtual assistant Alexa, to talk about Microsoft’s move and how intelligent voice technology could impact the workplace.

What do you make of Microsoft’s acquisition of Nuance?

So look, it’s going to be a popular thing to talk about moves in healthcare, especially as we’re still in the throes of this pandemic, and most of us, I’m sure, had a challenging 2020. So that’s a great way to frame the acquisition, given the medical dictation and other products Nuance has inserted into the healthcare workflow. That makes sense. But would anybody actually pay that much for just something for healthcare? I would imagine Microsoft could have had as big an impact, if not larger, going directly for one of those EHR companies like Epic. So I’m like, “All right, healthcare, that’s good.” But is it going to be a roll-up where they go after Epic and places like that, where there’s already lots of stored content, and then vertically integrate the whole thing? That’s the next play I would see; they’re gunning to own that workflow. So that’s that piece. On the other hand, I see it as a broader play in employee productivity, because whenever Microsoft really opens up its pocketbook like it did here – this was, what, their second-largest acquisition? – it’s typically to reinforce the place where they’re strongest, where their dairy cow is, and that’s employee productivity.

Microsoft has never been solely focused on healthcare. Their bread and butter is the enterprise. So how can the same technologies be applied to the enterprise?

You’re exactly right. Now, why do we have special knowledge of the Nuance stuff? Well, the team at this company, Pryon, actually developed many of the engines inside of Nuance. Many years ago, Nuance felt its engines were weak and that IBM’s were ahead of the curve, if you will. I believe around the 2008 downturn, they came in to acquire the majority of IBM’s speech assets and related AI technologies. And my current chief technology officer was assigned to that project, collaborating with them to integrate it into their work for half a decade. So that’s the plot twist here. We have a good sense now: it is true that these engines were behind Siri and all these other experiences, but in reality it wasn’t Nuance engines, it was IBM engines acquired through Nuance that ended up getting placed there, because of how highly accurate and more flexible they were.

So let’s start with something like Microsoft Teams. To continue bolstering Teams with things like live transcriptions, to put a little AI system inside of Teams that has access to the enterprise’s knowledge as people are discussing things – it may not even be any new product. It could just be all the things Microsoft is already doing, but they needed more hands on deck; this is a massive acqui-hire in terms of having more scientists and engineers working on applied AI. So I would say a third of it is they need more help with things they’re already doing, a third of it is a healthcare play – though I would watch for other moves toward vertical integration there – and a third is for new capability that we haven’t experienced yet on the employee productivity side of Microsoft.

Microsoft already has their version of Siri and Alexa: Cortana. What do you think about Cortana and how it can be improved?

They attempted to make it their thing everywhere. They just pulled it off the shelves – or proverbial shelves – on mobile, so it no longer exists as consumer tech. So the only place it lives now is on Windows desktops, right? That’s not a great entry point. Then they tried doing the mashup, where Cortana could be called via Alexa and vice versa. But when I talked to the folks at Amazon, I said, “Look, you’re not going to allow them to really do what they want to do, right? Because they’re not going to allow you to do what you want to do on those desktops.” So it almost ends up being this weird thing, like calling into a contact center and being transferred to another contact center. That’s what it felt like. In this case, Alexa got the drop on them, which is strange and sorrowful in some ways.

Other AI assistants like Alexa are much further along than Cortana, but why aren’t we seeing much adoption in the enterprise?

There are multiple reasons for that. There’s the reason of accuracy. And accuracy isn’t just that you say something and get an answer – where do you get it from? It has to be tied into enterprise data sources, right? Because most enterprises are not like what we have at home, where we buy into the Apple ecosystem, the Amazon ecosystem, the Google ecosystem. They’re heterogeneous environments with bits and pieces from every vendor. The next piece is latency – getting quick results that are accurate at scale. And the last thing is security. There are certainly things that Alexa developers do not get access to, and that’s not going to fly in the enterprise space. One of the things we hear from enterprises, in pilots and in production, is that what they’re starting to put into these APIs is their crown jewels, the most sensitive things they’ve got. And if you actually read the terms and conditions from a lot of the big tech companies that are leveraging AI, they’re very nebulous about where the information goes, right? Does it get transcribed or not? Are people eyeballing this stuff or not? So most enterprises are like, “Hold on a second – we make these microchips, and you want us to put our secrets, the M&A deals we’re about to do, into this?” They’re uncomfortable about that. It’s just a different ball of wax. And that’s why I think it’s going to be purpose-built companies that develop enterprise APIs.

I think there will be greater demand for bringing the virtual assistants we all know into the enterprise – especially since we’ve been at home for over a year, using them in our homes.

Your intuition is spot on. It’s not even so much people coming from home into work environments – it’s a whole generation that has been reared with Alexa and Siri and these things. Look at the majority of user experiences at work – Concur or SAP or Dynamics or Salesforce, any of these types of systems – and this generation is going to toss grenades at that stuff over time, especially as they rise in authority and expand their influence over their careers. I think there’s going to be a new generation of enterprise software purpose-built for the folks who are going to be taking over business. That’s basically the chink in the armor for any of these traditional enterprise companies. Look at Oracle, IBM, HP, Dell – any one of them – and I don’t know where they go, at least on the software side. When a kid has grown up with Alexa, and there they are at 26 years old, they’re going to say, “No, I’m not gonna use that. Why can I blurt something out at home and get an instant answer, but here I am running a region of Baskin-Robbins, and I can’t say, ‘How many ice cream cones did we sell when it was 73 degrees out?’ and get an instant answer one second later?” So that’s what’s going to happen. As a company, since our inception, we’ve been architected not for the current world but for this future world. Elements of this are already in production, as we announced with Georgia-Pacific in late January, and we’re working through it. And I have to say, one of the biggest compliments I get, whether I’m showing this to big enterprises or government agencies, is that they say, “Holy smokes, this doesn’t feel like anything else that we use.” Behind the scenes, not only are we using top-flight UX folks to develop this, but we’re also working with behavioral scientists and the like, because we want people to want to use our software, not have to use our software. Most enterprise software gets chosen by the CIO, the CTO, the CISO and people like that, and most of them are thinking about checking off boxes on functionality. Most enterprise developers cook up their blue-and-white interface, get the feature function in there and call it a day. I think they’re missing real opportunities by not finishing the work.