The CSAIL group at MIT has a paper out titled ‘Predicting latent narrative mood using audio and physiologic data’.
The paper combines audio with physiological data from a Samsung Simband to show how wearables might be used to predict mood. Initial tests achieved an accuracy of 83%.
From the paper’s figure caption: real-time estimation of the emotional content in 30 seconds of collected data, using their optimised NN. The colour of the text at the top of the plot reflects the ground-truth labels assigned by the research assistant (blue for negative, red for positive, black for neutral). The predictions of the network (y-axis) reflect the underlying emotional state of the narrator.
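To make the setup concrete, here is a minimal sketch of the windowed-classification idea: each 30-second stretch of recording is summarised as a feature vector and mapped to one of three mood labels. The feature layout and the linear-softmax model are assumptions for illustration only, not the authors' actual network or pipeline.

```python
import numpy as np

# Three-class mood labels, matching the ground-truth scheme in the paper's figure.
LABELS = ["negative", "neutral", "positive"]

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

class WindowMoodClassifier:
    """Tiny linear-softmax stand-in for the paper's optimised NN (assumed, untrained)."""

    def __init__(self, n_features, n_classes=3, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.01, size=(n_features, n_classes))
        self.b = np.zeros(n_classes)

    def predict_proba(self, X):
        # X: (n_windows, n_features) -> (n_windows, n_classes) probabilities.
        return softmax(X @ self.W + self.b)

    def predict(self, X):
        return [LABELS[i] for i in self.predict_proba(X).argmax(axis=-1)]

if __name__ == "__main__":
    # Simulate a stream of 30-second windows, each summarised as 8 features
    # (e.g. audio energy, pitch statistics, heart rate, skin conductance -- assumed).
    windows = np.random.default_rng(1).normal(size=(5, 8))
    clf = WindowMoodClassifier(n_features=8)
    for t, label in enumerate(clf.predict(windows)):
        print(f"window {t}: predicted mood = {label}")
```

A trained version of something like this could run on-device and emit one mood estimate per 30-second window, which is roughly what the real-time plot in the paper visualises.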