Susurrant: audio analysis for human(ist)s.

Susurrant (soo-SIR-ant) is a tool for analyzing collections of sounds, especially those that are part of a social network.

Multi-Modal Analysis

Input your audio, text, and social network data, then navigate a machine-generated index of the distribution of "topics" within your corpus.
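
Susurrant's own pipeline isn't documented on this page, but to make the idea of a "topic" index over sound concrete, here is one common approach sketched in Python: quantize per-frame audio features into discrete "sound words," then fit an ordinary topic model over the clips. Everything in the sketch (the file names, the MFCC features, the vocabulary and topic counts) is an illustrative assumption, not Susurrant's implementation.

```python
# Illustrative sketch only: quantize audio frames into discrete "sound
# words," then fit a standard topic model over the clips. File names,
# the MFCC feature choice, and all sizes are hypothetical stand-ins,
# not Susurrant's actual pipeline.
import librosa
import numpy as np
from sklearn.cluster import KMeans
from gensim import corpora, models

files = ["clip01.wav", "clip02.wav", "clip03.wav"]  # hypothetical corpus

# 1. Extract per-frame features (here, MFCCs) for every clip.
frames_per_clip = []
for path in files:
    y, sr = librosa.load(path)
    frames_per_clip.append(librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T)

# 2. Vector-quantize all frames into a shared vocabulary of sound words.
kmeans = KMeans(n_clusters=64, n_init=10).fit(np.vstack(frames_per_clip))
docs = [[f"w{w}" for w in kmeans.predict(f)] for f in frames_per_clip]

# 3. Treat each clip as a bag of sound words and fit a topic model.
dictionary = corpora.Dictionary(docs)
bow = [dictionary.doc2bow(d) for d in docs]
lda = models.LdaModel(bow, num_topics=8, id2word=dictionary)

# The per-clip topic mixtures are what a navigable index is built from.
for path, doc in zip(files, bow):
    print(path, lda.get_document_topics(doc))
```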

Listen Like a Machine

By sonifying its internal feature representations as well as the outputs from its algorithm, Susurrant aims to give you a better understanding of the machine listening process from start to finish.
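
To give a concrete sense of what sonifying a feature representation can mean, here is one deliberately simple approach, sketched under the assumption that a feature frame is just a vector of nonnegative values; it is not Susurrant's own method. Each dimension drives the amplitude of one sine partial, so a sequence of frames becomes an audible texture:

```python
# A deliberately crude sonification sketch (not Susurrant's method):
# each dimension of a feature frame sets the amplitude of one sine
# partial, so a sequence of frames becomes an audible texture.
import numpy as np
from scipy.io import wavfile

sr = 22050
features = np.abs(np.random.randn(100, 13))   # stand-in for real features
freqs = 220.0 * 2 ** (np.arange(13) / 3.0)    # one partial per dimension
t = np.arange(int(0.05 * sr)) / sr            # 50 ms per feature frame

audio = np.concatenate([
    (frame[:, None] * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
    for frame in features
])
audio /= np.abs(audio).max()                  # normalize to [-1, 1]
wavfile.write("sonified.wav", sr, (audio * 32767).astype(np.int16))
```

Substituting real feature frames for the random stand-in lets you hear how a representation evolves over the course of a clip.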

Free and Open-Source

Susurrant is and will always be free. The code is released under the GNU Affero General Public License v3 (AGPLv3) and is available on GitHub.


This software is far from a proper release, but development details (roadmap, etc.) can be found on the main repository's wiki. I welcome input about how this software might be useful to you, and I especially welcome contributions of documentation and code!

See Install for (terse) instructions on how to run Susurrant locally, or visit the demo for a taste of how it works (currently without sound).

Recent Updates

  • A Visual Analogue for "Algorithmic Listening"

    There’s a fun post over on Google’s Research Blog detailing the “dreams” of neural networks, generated by feeding random noise images into the networks and “nudging” the image iteratively toward a certain classification (a rough sketch of that loop appears at the end of this page). While the details are somewhat different, this is a great visual analogue for the processes Susurrant uses to expose the workings of its own audio “interpretation” algorithms.

  • Presentation at "Inertia" (UCLA)

    I’ve just returned from a wonderful conference at UCLA called “Inertia: A Conference on Sound, Media, and the Digital Humanities,” organized by Mike D’Errico and company. On Friday, I presented the notion of “algorithmic listening” (i.e., the practice of listening to algorithms that listen) and its application in the context of my development of Susurrant.
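
For the curious, the "nudging" mentioned in the first update above is gradient ascent on the input image itself. Here is a minimal sketch, with a hypothetical off-the-shelf classifier and an arbitrary target class standing in for the networks in the post:

```python
# Minimal sketch of the "nudging" loop described in the post: gradient
# ascent on a random-noise image toward a chosen class. The classifier
# and target class are hypothetical stand-ins, and real class
# visualizations add input normalization, jitter, and smoothing on top.
import torch
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1").eval()
target_class = 281  # ImageNet "tabby cat," chosen arbitrarily

img = torch.randn(1, 3, 224, 224, requires_grad=True)  # start from noise
opt = torch.optim.Adam([img], lr=0.05)

for _ in range(200):
    opt.zero_grad()
    score = model(img)[0, target_class]  # how strongly the net sees the class
    (-score).backward()                  # minimize the negative = ascend
    opt.step()
```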