[Image: PC190023.jpg]

Intendo

 


How do we raise awareness and understanding of AI and speech analysis?

 

Summary: Intendo is a device that demonstrates how artificial intelligence interprets the emotional content of human language. The project’s goal is to educate people about how AI works: the presence of AI and automated analysis of our language is only increasing, and we wanted to raise people’s awareness and understanding of this analysis. You interact with Intendo by speaking to it, and it responds with an emotional analysis of what you said. In addition, there are five thermal printers, one for each emotion. The prevalence of statements of each emotion is readily visible to passersby in the physical length of the paper hanging from each printer.

Scope: 3 Weeks

Role: Concept Development, Coding, Overall Design

Collaborators: Nour Malaeb and Janel Wong

 
 
 

Inspiration

 

A wishing telephone that prints an inspirational, enigmatic fortune and a unique Rorschach pattern based on your intentions.

Our initial concept was a device the user speaks a wish into and then receives a unique token based on their wish. We imagined it in a public space and wanted the piece to be whimsical and artistic.

To achieve the Rorschach-like unique pattern effect we were interested in, we experimented with the effects of heat on thermal paper using cigarette butts and soldering irons. We considered using a heating pad in the device to produce a unique pattern on each user’s receipt. The heat required to mark thermal paper this way turned out to be quite high, so we decided to explore other possibilities for creating the unique token.

 
 
[Image: intendo-whisper.jpg]
 
 

First Prototype

 
 

To test the fundamental user interaction we built a prototype using speech recognition and processing that produced a random response after any statement was recognized.

To build a proof of concept and test the fundamental user interaction, we used speech recognition in Processing, with WebSockets receiving the speech recognition results from Chrome.

Since there was no sentiment analysis yet, we set it up to print one randomly chosen response phrase from an array, regardless of the emotional content of the phrase the user spoke into it.
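The random-response behavior can be sketched in Python (the prototype itself ran in Processing; the phrases and function name here are illustrative, not the originals):

```python
import random

# Canned responses printed after any recognized statement
# (placeholder phrases, not the ones the prototype used).
RESPONSES = [
    "Your wish drifts toward the light.",
    "What you seek is already seeking you.",
    "The answer arrives when you stop asking.",
]

def respond(transcript: str) -> str:
    """Return a random canned response once any speech is recognized."""
    if not transcript.strip():
        return ""  # ignore empty recognitions
    return random.choice(RESPONSES)
```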

One component we knew was critical in the design was communicating to a passerby what input the device expected. In this first iteration we coded the Arduino to print a phrase asking for wishes whenever someone stepped in front of the device and remained there for a couple of seconds. In user testing we liked the impression of the device coming alive and expressing a desire to you.
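The dwell-time trigger amounts to requiring a run of consecutive "presence" readings from the IR sensor before printing the prompt. A minimal sketch, assuming periodic sensor sampling (the function and parameter names are ours):

```python
def should_prompt(presence_samples, dwell_seconds=2.0, interval=0.1):
    """Return True once someone has stood in front of the sensor
    for dwell_seconds of consecutive readings taken every interval."""
    needed = int(dwell_seconds / interval)
    run = 0
    for present in presence_samples:
        run = run + 1 if present else 0  # streak resets if they step away
        if run >= needed:
            return True
    return False
```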

 
 
[Image: vlcsnap-2017-01-12-00h49m27s799.png]
 
 

Iteration

 

The presence of AI and automated analysis of our language is only increasing. We wanted to raise people’s awareness and understanding of this.

Working from the testing and feedback on our prototype, we began thinking about how to iterate on and build upon our initial concept, and we rethought the token people receive from the printer. Instead of an enigmatic design, we decided to provide a receipt that was instructive about how their speech was interpreted and classified by the computer.

The sentiment analysis APIs we were exploring categorized the emotional content of statements into five broad categories. To capture not only the emotions of an individual statement but also the cumulative emotional state of a space, we added five printers, one corresponding to each of the emotions of the sentiment analysis. In addition to receiving a token containing the emotional breakdown of your statement, the device prints your phrase on the printer corresponding to the primary emotion of your statement. This allows passersby to quickly get a sense of the most common emotions being spoken into the device.
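Routing a statement to a printer reduces to picking the emotion with the highest score. A small sketch (the five category names match AlchemyLanguage's emotion taxonomy; the function name is ours):

```python
# The five emotion categories used by the sentiment analysis API
EMOTIONS = ["anger", "disgust", "fear", "joy", "sadness"]

def dominant_emotion(scores: dict) -> str:
    """Return the emotion with the highest score, i.e. which
    of the five printers should receive the statement."""
    return max(EMOTIONS, key=lambda e: scores.get(e, 0.0))
```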

To draw passersby in, and to make the device’s idle, recording, and processing states clearer, we added a button to trigger the recording and an LED that changes from white when idle, to red while the button is held down and the device is recording, to green when the button is released and it is processing your input.
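The button-and-LED behavior is a three-state machine. A sketch of the transition logic (state and input names are ours, chosen to match the description above):

```python
# LED colour shown in each interaction state
STATE_COLORS = {"idle": "white", "recording": "red", "processing": "green"}

def next_state(state: str, button_pressed: bool, result_ready: bool) -> str:
    """Advance the device's interaction state machine one step."""
    if state == "idle" and button_pressed:
        return "recording"        # LED turns red while the button is held
    if state == "recording" and not button_pressed:
        return "processing"       # button released: LED turns green
    if state == "processing" and result_ready:
        return "idle"             # analysis printed: back to white
    return state
```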

 
 
 
 

Watson API

 

For this iteration we built the speech recognition and sentiment analysis using Watson’s API.

Josh Zheng's Medium posts were extremely helpful for us when doing this (AlchemyLanguage Sentiment Analysis in Python and How To Build a Candy Machine With Feelings).
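On the Python side, the work after the API call is parsing the five emotion scores out of the response. A sketch assuming an AlchemyLanguage-style payload (the exact JSON shape below is an approximation, not the documented schema):

```python
import json

# Illustrative response shape; the real AlchemyLanguage payload may differ.
sample = json.loads("""{
  "status": "OK",
  "docEmotions": {
    "anger": "0.05", "disgust": "0.02",
    "fear": "0.10", "joy": "0.75", "sadness": "0.08"
  }
}""")

def parse_emotions(response: dict) -> dict:
    """Convert the emotion scores (returned as strings) to floats."""
    return {name: float(score) for name, score in response["docEmotions"].items()}
```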

 
 

In Arduino we coded the token people receive, displaying their statement along with a bar chart made of ‘#’ characters showing the levels of the five emotions the computer interpreted within their statement.
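The bar-chart rendering is simple to sketch. This Python version (the Arduino code did the equivalent over serial; widths and formatting here are our choices) scales each emotion's score to a row of ‘#’ characters:

```python
def bar_chart(scores: dict, width: int = 20) -> str:
    """Render one '#' bar per emotion, scaled so a score of 1.0
    fills `width` characters, as on the printed token."""
    lines = []
    for emotion in sorted(scores):
        bar = "#" * round(scores[emotion] * width)
        lines.append(f"{emotion:<8} {bar}")
    return "\n".join(lines)
```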

We set up the five printers and tested the Python code that routes each phrase to one printer at a time. Once that was working reliably, we built two wooden boxes. The first box contained the button, IR sensor, microphone, Arduino, and the printer that delivered the token to the user. The second box contained the five printers, with their paper feeds routed to hang down out of the box and display the statements classified under each of the five emotions.

 
 
 
 
 
 

Testing

 

We tidied up the wiring and mounted Intendo in a public space (the lobby of our department) for testing.

 
 
[Image: intendo-005 - smaller.jpg]