M2TD is a web app that lets a user create music through body movements. Specific motions are mapped to distinct sounds, ensuring the user is always in rhythm.

In M2TDRevolution mode, boxes appear on the screen, and moving the desired body part into a box activates a sound.

Github

https://github.com/betcherj/M2TD

Stack

  1. Capture video from webcam → OpenCV

  2. Object recognition + motion capture → BlazePose

    1. https://github.com/CMU-Perceptual-Computing-Lab/openpose/ → 2D human skeleton motion capture
    2. https://github.com/freemocap/freemocap → Open source motion capture library

    MultiPerson3DPoseEstimation.pdf

  3. Audio output

    1. simpleaudio Python package for playing wav files downloaded from Freesound
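The stack above implies a per-frame hit test: BlazePose reports landmarks in normalized (0–1) screen coordinates, and a sound fires when a tracked point enters a target box. A minimal sketch of that check (the box bounds and wav path here are illustrative, not from the repo):

```python
from dataclasses import dataclass

@dataclass
class TargetBox:
    # Bounds in normalized screen coordinates (0.0-1.0),
    # matching BlazePose's normalized landmark output.
    x_min: float
    y_min: float
    x_max: float
    y_max: float
    sound: str  # path to a wav file (hypothetical)

def landmark_in_box(x: float, y: float, box: TargetBox) -> bool:
    """True when a normalized landmark coordinate falls inside the box."""
    return box.x_min <= x <= box.x_max and box.y_min <= y <= box.y_max
```

In the full loop, OpenCV supplies the webcam frame, BlazePose supplies the landmark coordinates, and a hit triggers simpleaudio playback of `box.sound`.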

Progress

Sprint: Jan 11th - Jan 25th

Goals:

  * ~~Decide on stack~~
  * ~~Get streaming to beep when I wave~~
  * Run a lightweight OpenPose on a webcam that asynchronously plays downloaded wav files when certain body parts appear on screen
  * Draw boxes on the screen that indicate targets; moving a body part through a box activates the sound effect
  * Switch movement tracking to BlazePose for better performance
  * Devise a way of mapping recordings on screen to vectors

Notes:

  * Figure out a good way to break down and download the component parts of a song
  * There should be an easier way to load sounds

Blockers:

  * Concerned that code added to the same thread as the pose tracking will cause computation slowdown (it is important for this to be real time)
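On the real-time blocker: simpleaudio's `WaveObject.play()` already returns immediately and plays in the background, so playback itself should not stall the tracking thread. A bigger risk is retriggering the same sample on every frame while a landmark lingers in a box. A sketch of a per-box cooldown gate (the cooldown value and box ids are illustrative):

```python
import time
from typing import Dict, Optional

class SoundTrigger:
    """Allows each target box to fire at most once per cooldown window,
    so a landmark lingering in a box does not retrigger the sample
    on every tracked frame."""

    def __init__(self, cooldown_s: float = 0.5):
        self.cooldown_s = cooldown_s
        self._last_fired: Dict[str, float] = {}  # box id -> last fire time

    def should_fire(self, box_id: str, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        last = self._last_fired.get(box_id)
        if last is not None and now - last < self.cooldown_s:
            return False
        self._last_fired[box_id] = now
        return True
```

Calling `should_fire("kick")` in the tracking loop, and only playing the wav when it returns True, keeps the per-frame work to a dict lookup.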

Todo

Will need to figure out how to map wav files to movements / positions
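One possible shape for that mapping: a dict from BlazePose landmark names to a target box plus a wav path, checked once per frame. All landmark names, box bounds, and file paths below are hypothetical placeholders:

```python
from typing import Dict, List, Tuple

# Hypothetical mapping: landmark name -> target box (normalized bounds)
# and the wav file to play when that landmark enters the box.
SOUND_MAP = {
    "left_wrist":  {"box": (0.0, 0.0, 0.3, 0.3), "wav": "sounds/kick.wav"},
    "right_wrist": {"box": (0.7, 0.0, 1.0, 0.3), "wav": "sounds/snare.wav"},
}

def sounds_to_play(landmarks: Dict[str, Tuple[float, float]]) -> List[str]:
    """Given {landmark_name: (x, y)} in normalized coordinates, return the
    wav paths whose target boxes currently contain their mapped landmark."""
    hits = []
    for name, spec in SOUND_MAP.items():
        if name not in landmarks:
            continue
        x, y = landmarks[name]
        x0, y0, x1, y1 = spec["box"]
        if x0 <= x <= x1 and y0 <= y <= y1:
            hits.append(spec["wav"])
    return hits
```

Keeping the mapping as plain data like this would also make it easy to load per-song layouts from a config file later.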

Notes

OpenPose installation (in progress)