How Beyond Verbal hopes to make Siri and Alexa emotionally intelligent


An Israeli startup specializing in analyzing human emotion and traits based on voice is opening up its technology to help artificial intelligence (AI) assistants — such as Siri and Alexa — understand people’s moods by listening to how they sound when they speak.

Founded in Tel Aviv in 2012, Beyond Verbal has been building on decades of emotion analytics research with its own studies, carried out in conjunction with notable organizations including the Mayo Clinic, the University of Chicago, Scripps, and Hadassah Medical Center. Its data gathering has yielded more than 2.5 million “emotion-tagged voices” across 40 languages. The company’s technology doesn’t consider the content or context of the spoken word; instead, it looks for signs of anxiety, arousal, anger, and other emotions by examining the intonation of a person’s voice.

Though the technology is still in its infancy, potential use cases include a call center improving relationships with its customers by analyzing their mood during calls or, in the health realm, a provider evaluating someone’s mental wellbeing. Research is also ongoing to establish whether the technology can effectively detect physical conditions like heart problems.

Siri gets smart?

Today marks a notable evolution for Beyond Verbal as it looks to introduce its emotion analytics to the virtual personal assistant (VPA) realm via a new API for developers.
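Beyond Verbal hasn’t published technical details of the new API alongside the announcement, but services of this kind are typically exposed as a REST interface that accepts recorded speech and returns mood scores. The sketch below is a hypothetical illustration of that flow in Python: the host, endpoints, and response fields (sessionId, valence, arousal, temper) are assumptions made for illustration, not Beyond Verbal’s documented interface.

    # Hypothetical sketch of calling an emotion-analytics REST API from a
    # voice assistant skill. The URLs, parameters, and response fields are
    # illustrative assumptions, not Beyond Verbal's documented API.
    import requests

    BASE_URL = "https://api.example-emotions.com/v1"  # placeholder host
    API_KEY = "YOUR_API_KEY"

    def start_session() -> str:
        """Open an analysis session and return its ID (hypothetical endpoint)."""
        resp = requests.post(
            f"{BASE_URL}/sessions",
            headers={"Authorization": f"Bearer {API_KEY}"},
        )
        resp.raise_for_status()
        return resp.json()["sessionId"]

    def analyze_chunk(session_id: str, wav_bytes: bytes) -> dict:
        """Send a chunk of recorded speech and get back an emotion reading."""
        resp = requests.post(
            f"{BASE_URL}/sessions/{session_id}/audio",
            headers={
                "Authorization": f"Bearer {API_KEY}",
                "Content-Type": "audio/wav",
            },
            data=wav_bytes,
        )
        resp.raise_for_status()
        # Assumed response shape: {"analysis": {"valence": ..., "arousal": ...,
        # "temper": ...}} -- invented field names for this sketch.
        return resp.json()["analysis"]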

While Apple’s Siri and Amazon’s Alexa are improving all the time at understanding your words — “Hey Alexa, play me a song by The Beatles” — they’re not so adept at recognizing your mood. And that is the ultimate goal of Beyond Verbal’s new API: It wants to bring emotional intelligence to digital assistants.

“Today’s digital world is rapidly transforming the way we interact with our technology and each other,” said Yuval Mor, CEO at Beyond Verbal. “Virtual private assistants have begun to take on a personalized experience. We are very excited for this next step in fusing together the breakthrough technology of AI and Beyond Verbal’s Emotions Analytics, providing unique insight into personalized tech and remote monitoring.”

So… how could this be used in reality? And why would an Amazon Echo device ever need to know what mood you’re in?

Given that Alexa and Co. can now power countless third-party voice-enabled services, Beyond Verbal suggests a number of possible use cases: Alexa could play you an upbeat song if the tone of your voice is downbeat, or it could tell you that a friend of yours sounds upset and suggest an uplifting movie for you to buy them.
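To make the first of those concrete, here is a minimal, purely illustrative sketch of how a skill might turn a mood reading into a music choice. It assumes the analysis returns a valence score between -1 (downbeat) and 1 (upbeat); the threshold and playlist names are invented.

    # Minimal sketch of the "play an upbeat song when you sound downbeat"
    # idea. Assumes a valence score in [-1, 1], where negative values
    # indicate a downbeat tone; thresholds and playlists are illustrative.
    def pick_playlist(valence: float) -> str:
        """Map a valence reading to a playlist suggestion."""
        if valence < -0.3:      # clearly downbeat: try to lift the mood
            return "upbeat-favorites"
        if valence > 0.3:       # already cheerful: keep the energy going
            return "feel-good-hits"
        return "everyday-mix"   # neutral: default listening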

Bear in mind, much of this is future-gazing, and there are many more potentially important use cases yet to be uncovered, one of which is health care. In the future, Siri could observe a downturn in your mood over a number of weeks. Or, if current research fulfills its promise, it could even help detect serious physical ailments.

“In the not so far future, our aim is to add vocal biomarker analysis to our feature set enabling Virtual Private Assistants to analyze your voice for specific health conditions,” added Mor.

Bumps in the road

The technology needs improvement at a fundamental level. As things stand, during a conversation between a Beyond Verbal-enabled VPA and a user, 13 seconds of speech is required to render the first analysis, after which an emotional reading can be produced every 4 seconds. This requirement applies to each and every conversation, so it’s difficult to imagine people chatting with their Amazon Echo or Apple HomePod long enough to enable mood detection. This is why passively collecting a person’s voice throughout the day will be vital if the technology is to have any chance of succeeding.
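In code, that cadence looks something like the sketch below. Only the 13-second and 4-second figures come from Beyond Verbal; the framing around them is an assumption for illustration.

    # Sketch of the pacing described above: no reading until 13 seconds of
    # speech have accumulated, then one every 4 seconds. Only the two time
    # thresholds come from Beyond Verbal's stated figures.
    FIRST_ANALYSIS_SECS = 13.0
    REPEAT_INTERVAL_SECS = 4.0

    def analysis_points(total_speech_secs: float) -> list[float]:
        """Return the timestamps (seconds of accumulated speech) at which
        an emotion reading becomes available during one conversation."""
        points = []
        t = FIRST_ANALYSIS_SECS
        while t <= total_speech_secs:
            points.append(t)
            t += REPEAT_INTERVAL_SECS
        return points

    # A 20-second exchange yields readings at 13s and 17s, while a typical
    # 3-second "Alexa, set a timer" command yields none at all.
    print(analysis_points(20.0))  # [13.0, 17.0]
    print(analysis_points(3.0))   # []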

“Currently a command style conversation would render too little voice to give an emotional analysis,” a company spokesperson acknowledged to VentureBeat. “We are currently working on an additional feature set which would reduce the time even more — this however is still not available for commercial use.”

Another way of improving mood detection would be through the broader Internet of Things, whereby voice is analyzed through multiple devices including wearables, mobiles, smart cars, and so on — but such ubiquity is likely some time off.

Beyond Verbal does have VC backing to draw on, having raised around $10.8 million in venture funding, including a $3 million round in September 2016. If the company is to realize its vision of bringing real emotional intelligence to AI at scale, it will need all the money it can get.

Source: VentureBeat

Post Author: Martin

Martin is an enthusiastic programmer, a web developer, and a young entrepreneur. He has been interested in computers for a long time: at the age of 10 he programmed his first website, and he has been working with web technologies ever since. He is the founder and editor-in-chief of the BriefNews.eu and PCHealthBoost.info online magazines. His colleagues appreciate him as a passionate workhorse, a fan of new technologies, an eternal optimist, and a dreamer, but above all as the soul of the team, for whom he would do anything in the world.
