With a single API call, interpret emotional expressions and generate empathic responses. Meet the first AI with emotional intelligence.
Hume argues that emotions drive choice and well-being
At Hume AI, we take this as a guiding principle behind ethical AI: in order to serve our preferences, algorithms should be guided by our emotions.
Recognizing the need to map out the emotions that animate thought and action, Hume also proposed a taxonomy of over 16 emotional states, though he lacked the scientific evidence to validate it.
EVI is a conversational voice API powered by empathic AI. It is the only API that measures nuanced vocal modulations, guiding language and speech generation. Trained on millions of human interactions, our empathic large language model (eLLM) unites language modeling and text-to-speech with better EQ, prosody, end-of-turn detection, interruptibility, and alignment.
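As a rough illustration of what a client consuming such an API might do with the "nuanced vocal modulations" it reports, here is a minimal sketch. The payload shape, field names, and scores below are invented assumptions for illustration, not Hume's documented EVI schema: it simply ranks per-utterance expression scores so the top detections can be surfaced, as in the demo's side panel.

```python
import json

# NOTE: the "prosody"/"scores" structure here is a hypothetical example,
# not the actual EVI API response format.
def top_emotions(message_json: str, n: int = 3):
    """Rank detected expression scores from a (hypothetical) prosody payload."""
    payload = json.loads(message_json)
    scores = payload.get("prosody", {}).get("scores", {})
    # Highest-scoring expressions first
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:n]

# Made-up sample payload with illustrative scores
sample = json.dumps({
    "prosody": {"scores": {"Amusement": 0.71, "Sarcasm": 0.55,
                           "Calmness": 0.12, "Doubt": 0.33}}
})
print(top_emotions(sample))
```

In a real integration the scores would arrive over a streaming connection alongside the generated speech; the ranking step would be the same.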
This is one among many items I regularly tag in Pinboard as oegconnect and automatically post to Mastodon tagged #OEGConnect. Do you know of something else we should share like this? Just reply below and we will check it out.
The demo it offers is very interesting: you speak to the AI by voice, and it responds, using what seems like some kind of map to inflect tone into the conversation. I tried to offer sarcasm and ridicule. It was quite a bit more than ELIZA! It aims to detect emotions in my voice, and on the right you can see how it is choosing to respond.
I mean, I cringe when my friends in open ed turn off their critical thinking to accept some very mildly positive thing from genAInt, like error-filled transcription/translation/alt-text-generation, because they just think it would be cool if the absurdly overblown claims were true … despite the climate-destroying, plutocrat-enabling, mass plagiarism-based statistical model absolutely not making the world better … but this has to take the dystopia cake!
Pedagogy of care, anyone? … By generating responses statistically correlated with what actual humans who care do?
What about any of the work of people like Safiya Noble, Ruha Benjamin, Joy Buolamwini, or any of the other scholars who have shown how this sort of tech is a catastrophic failure for people of color or neurodivergent folks? As it is even for a student who is tired because they are working a double shift at Walmart on top of being a full-time student, doing their best to keep their eyes open while trying to take advantage of the life-changing potential of higher ed (as many of my students in Pueblo, CO, USA were). So when this tech fails, it exacerbates inequity rather than diminishing it!
Alan, I love you like a brother, but would you please stop pushing these dystopian visions into my timeline?
I can never tell who’s being serious these days. When someone stands up and says they need $7 trillion to change the world, and that, oh, by the way, they’ll make viable fusion power along the way, I can’t tell whether the world is pretending that might make sense because everyone is in on the joke or whether people are accepting it as real.