Nimblechapps Blog


MIT’s Wearable AI can Detect a Conversation’s Tone

MIT Wearable AI System

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Institute for Medical Engineering and Science (IMES) say they’ve gotten closer to a potential solution for people with anxiety or conditions such as Asperger’s syndrome. Because a single conversation can be interpreted in very different ways, social situations can be extremely stressful.

It can Detect a Happy or Sad Tone

They have come up with an artificially intelligent wearable system that can detect a conversation’s tone, predicting whether an exchange is happy, sad, or neutral based on a person’s speech patterns and vitals. In other words, the system can tell whether the person you’re talking to is happy or sad.

Uses Specialized Algorithms

Tuka Alhanai, a graduate student at MIT, and Ph.D. candidate Mohammad Ghassemi describe an AI system that uses specialized algorithms to analyze audio, text transcriptions, and physiological signals in order to determine a conversation’s overall tone in real time. The system runs on a Samsung Simband, a modular, research-centric wrist wearable that can be tricked out with a wide variety of sensors and can run custom algorithms on its own hardware.

Samsung Simband

The system uses Samsung’s Simband smartwatch, which can measure movement, heart rate, blood pressure, blood flow, and skin temperature, paired with audio capture that picks up signals like tone, pitch, and word choice and provides a transcript of the conversation. By weighing all of the incoming signals, the algorithms classify each five-second installment of conversation as either positive or negative. The system can also estimate the emotion of isolated five-second intervals of the conversation roughly 8 percent better than existing methods.
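The windowing-and-weighing idea described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not MIT's actual model: the feature names, weights, and thresholds below are all invented for the example.

```python
# Sketch: slice a conversation's sensor samples into five-second windows
# and classify each window by a weighted sum of its averaged signals.
# All feature names, weights, and thresholds are illustrative assumptions.

FIVE_SECONDS = 5.0

def window_features(samples, window=FIVE_SECONDS):
    """Group (timestamp, feature-dict) samples into five-second windows."""
    windows = {}
    for t, feats in samples:
        windows.setdefault(int(t // window), []).append(feats)
    return windows

def classify_window(feats_list, weights):
    """Weigh the window's averaged signals into a coarse tone label."""
    avg = {k: sum(f[k] for f in feats_list) / len(feats_list)
           for k in feats_list[0]}
    score = sum(weights[k] * avg[k] for k in weights)
    if score > 0.25:
        return "positive"
    if score < -0.25:
        return "negative"
    return "neutral"

# Illustrative weights: energetic pitch counts as positive;
# long pauses and fidgeting count as negative.
weights = {"pitch_energy": 1.0, "pause_ratio": -1.0, "fidgeting": -0.5}

samples = [
    (1.0, {"pitch_energy": 0.8, "pause_ratio": 0.1, "fidgeting": 0.1}),
    (3.0, {"pitch_energy": 0.7, "pause_ratio": 0.2, "fidgeting": 0.2}),
    (6.0, {"pitch_energy": 0.1, "pause_ratio": 0.9, "fidgeting": 0.8}),
]

for idx, feats in sorted(window_features(samples).items()):
    print(idx, classify_window(feats, weights))  # window 0 positive, window 1 negative
```

The real system fuses far richer inputs (audio, transcripts, and Simband vitals), but the structure is the same: chunk the conversation into five-second installments and score each one.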

Artificial Intelligence (AI)

The team started with over 500 signals that could tip off how a conversation was going, ranging from movement to speech patterns to individual word choice. Rather than letting assumptions dictate, they let the onboard artificial intelligence figure out which signals were most important. These are also the types of social cues that can be difficult to read for people with anxiety or those on the autism spectrum.
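The idea of letting the data rank candidate signals, rather than presuming which ones matter, can be sketched with a simple correlation measure. This is only an illustration: the source doesn't say how the researchers scored importance, and the signal names and values below are hypothetical.

```python
# Sketch: rank candidate signals by how strongly they track the
# happy(1)/sad(0) label, instead of hand-picking features up front.
# Correlation stands in for whatever importance measure was actually used.

def pearson(xs, ys):
    """Pearson correlation between two equal-length number sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-conversation measurements and labels.
signals = {
    "speech_energy": [0.9, 0.8, 0.2, 0.1],
    "pause_length":  [0.1, 0.2, 0.9, 0.8],
    "wrist_temp":    [0.5, 0.4, 0.5, 0.6],
}
labels = [1, 1, 0, 0]

ranked = sorted(signals, key=lambda k: abs(pearson(signals[k], labels)),
                reverse=True)
print(ranked)  # weakly informative wrist_temp lands last
```

With 500-plus candidate signals, this kind of data-driven ranking is what lets the system discover, rather than assume, that speech energy and pauses carry most of the tone information.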

Additional Features that you can Expect

The Simband system can ascertain overall tone with 83 percent accuracy. That leaves plenty of room for misinterpretation, as does the fact that its descriptive buckets are so broad, so there’s still plenty of room for improvement. The researchers hope to adapt the system to more common smartwatches, like Apple’s. They foresee additional features, like making the device buzz when an exchange is getting awkward, to keep its wearer pointed toward a conversational North Star. They also intend to add more granularity to the system, diving into expressions like boredom, tension, excitement, and all the other textures of human expression.

Two Basic Algorithms

After capturing 31 different conversations of several minutes each, the team trained two algorithms on the data. The first classified the overall nature of a conversation as either happy or sad. The second classified each five-second block of every conversation as positive, negative, or neutral.
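The two-level labeling scheme described above can be sketched as follows. The scoring rule here is an illustrative assumption, not the researchers' trained model: the point is just the structure of one call per five-second block plus one call for the conversation overall.

```python
# Sketch of the two-algorithm scheme: label each five-second block,
# then label the whole conversation. Thresholds are assumptions.

def classify_block(score):
    """Second algorithm: label one five-second block by its signal score."""
    if score > 0.2:
        return "positive"
    if score < -0.2:
        return "negative"
    return "neutral"

def classify_conversation(block_scores):
    """First algorithm: overall happy/sad call for the whole conversation."""
    mean = sum(block_scores) / len(block_scores)
    return "happy" if mean >= 0 else "sad"

scores = [0.5, 0.1, -0.4, 0.6, 0.3]
print([classify_block(s) for s in scores])
# ['positive', 'neutral', 'negative', 'positive', 'positive']
print(classify_conversation(scores))  # happy
```

In the real system the per-block scores would come from the fused audio, transcript, and vitals features rather than a single number, but the division of labor between the two classifiers is the same.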

The algorithms’ results aligned well with what we humans might expect to observe. For example, long pauses and monotonous vocal tones were associated with sadder stories, while more energetic speech patterns were associated with happier ones. In terms of body language, sadder stories were also strongly associated with increased fidgeting and cardiovascular activity, as well as certain postures like putting one’s hands on one’s face.


Right now, the system only provides binary feedback on a conversation as a whole, labeling individual interactions as either positive or negative. The Simband platform is another limiting factor, since the wearable isn’t commercially available yet. Still, there’s a clear path to development: the researchers hope to find a way to use the system on commercial wearables like the Apple Watch, and with more data the algorithms would learn and improve, in turn making the system more effective.