
Overview

#metoo voices is an interactive sound installation that creates a physical representation of the magnitude of the #metoo movement.

Role

Ideation, User Experience Design, Prototyping

Team

Glenda Capdeville, Paula Daneze

Scope

Final project for Physical Computing course, MFA Interaction Design, School of Visual Arts, November - December, 2017

Advisors

Eric Forman, Faculty and Head of Innovation, School of Visual Arts

Carrie Kengle and Bruno Kruse, Co-Founders, Area of Effect

Project Background​

In 2006, activist Tarana Burke began the “Me Too” campaign to encourage “empowerment through empathy” in support of women of color, especially in underprivileged communities, who had experienced sexual harassment or assault. Burke believed that survivors could help each other heal by showing each other that they were not alone, that there were other people who had gone through it, too.

In October 2017, on the social media platform Twitter, actress Alyssa Milano used the words “me too” as a hashtag in response to multiple women coming forward to share their experiences of sexual assault committed by Hollywood producer Harvey Weinstein, and encouraged others to use the hashtag too. The hashtag exploded: it was used more than 1.7 million times within just a few days by users sharing their personal experiences with sexual assault.

#metoo voices transforms words on a screen into human voices, emulating the experience of sharing a painful story and knowing immediately that you are not alone. 

How It Works

Using Twitter’s API, the installation tracks how many times the hashtag #metoo has been tweeted or retweeted. When a participant says “Me too!” into the megaphone, the installation counts the instances of the hashtag in the last minute and plays that many recordings of people saying “Me too!”
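
A simplified version of the detection step is sketched below in Arduino code. The pin assignment, threshold value, and serial message are illustrative assumptions rather than our exact implementation: the sound detection module’s analog output is assumed to be wired to pin A0, and the threshold would need tuning for the megaphone.

// Illustrative Arduino sketch: watch the sound detection module and notify
// the computer over serial when the megaphone input crosses a volume threshold.

const int SOUND_PIN = A0;    // analog output of the sound detection module (assumed wiring)
const int THRESHOLD = 600;   // analogRead range is 0-1023; tuned by experiment

void setup() {
  Serial.begin(9600);
}

void loop() {
  int level = analogRead(SOUND_PIN);
  if (level > THRESHOLD) {
    Serial.println("T");     // tell the JavaScript program that someone said "Me too!"
    delay(2000);             // crude debounce so one shout triggers only once
  }
}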

 

#metoo voices is built with the following hardware and software components:

 

- Sound detection sensor module

- Arduino

- Music Maker MP3 shield for Arduino

- Meteor JavaScript framework and Node.js

- Twitter API

- Three 8 ohm speakers

 

The sound detection sensor module measures the volume of the sound coming from the megaphone when a user says “me too.” When that volume crosses a threshold set in our Arduino code, Arduino sends a signal to our JavaScript program, which is connected to Twitter’s API and tracking instances of the #metoo hashtag. The program counts how many times the hashtag has been used in the last minute and sends that count back to Arduino, which then plays pre-recorded files of anonymous voices saying “me too” that many times.
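
The playback side can be sketched along the same lines. The code below is illustrative rather than our exact implementation: it assumes the Adafruit VS1053 library for the Music Maker shield, the shield’s standard pin wiring, and a small set of anonymous “me too” recordings stored on the shield’s SD card as METOO01.MP3, METOO02.MP3, and so on.

// Illustrative Arduino sketch: receive the tweet count from the JavaScript
// program over serial and play that many "me too" recordings through the
// Music Maker shield (Adafruit VS1053 library).

#include <SPI.h>
#include <SD.h>
#include <Adafruit_VS1053.h>

#define SHIELD_RESET -1   // VS1053 reset (not wired on the Music Maker shield)
#define SHIELD_CS     7   // VS1053 chip select
#define SHIELD_DCS    6   // VS1053 data/command select
#define CARDCS        4   // SD card chip select
#define DREQ          3   // VS1053 data request pin

Adafruit_VS1053_FilePlayer player =
  Adafruit_VS1053_FilePlayer(SHIELD_RESET, SHIELD_CS, SHIELD_DCS, DREQ, CARDCS);

// Assumed filenames of the pre-recorded voices on the SD card.
const char *TRACKS[] = { "METOO01.MP3", "METOO02.MP3", "METOO03.MP3" };
const int NUM_TRACKS = 3;

void setup() {
  Serial.begin(9600);
  player.begin();
  SD.begin(CARDCS);
  player.setVolume(20, 20);   // lower values are louder
}

void loop() {
  if (Serial.available()) {
    long count = Serial.parseInt();                  // tweet count sent back as a plain integer
    for (long i = 0; i < count; i++) {
      player.playFullFile(TRACKS[i % NUM_TRACKS]);   // blocks until the clip finishes
    }
  }
}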

Process

We began by brainstorming what an interaction could look like that would convey the breadth of the #metoo movement beyond statistics and numbers communicated through text. As a team, we were interested in how technology could mimic human behavior, and in how an interaction with a computer could generate empathy or a sense of connection for a user.

 

We concluded that if a user could hear other human voices and feel like they were talking with another human being, rather than reading tweets from other people, the experience would be more memorable and thus more impactful. Our concept began to take shape: we wanted to transform data about the #metoo hashtag from Twitter into the sound of human voices.

 

We started at the source of the #metoo movement: Twitter. Once we were able to receive data about the #metoo hashtag from Twitter’s API, we set out to determine the best technology for processing that data and communicating with the human participating in our interaction.
