Code of Music / Week Eight / Markov Chains by Pippa Kelmenson

Assignment:

Design and implement a generative/interactive piece based on Markov chains. Record a 20-second composition/improvisation using it. Post the running sketch, code, and documentation to your blog.

Inspiration:

M Horwich Markov Oscillators

Luisa Markov Melody

Directional Light

Result:

Made with Bomani Oseni McClendon.

Uses WebGL, Tone.js, and P5.js. You can find our code here (rectangles) and here (spheres).
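
For reference, here is a minimal sketch of the kind of first-order Markov melody logic a piece like this can run on, assuming Tone.js is loaded on the page. The notes and transition probabilities are illustrative placeholders, not the values from our sketch.

    // A first-order Markov melody: each note picks the next from a probability table.
    // The notes and probabilities below are placeholders, not the values from our piece.
    const transitions = {
      "C4": { "E4": 0.5, "G4": 0.3, "C4": 0.2 },
      "E4": { "G4": 0.6, "C4": 0.4 },
      "G4": { "C4": 0.7, "E4": 0.3 }
    };

    function nextNote(current) {
      // Sample the next state by walking the cumulative probabilities.
      let r = Math.random();
      for (const [note, p] of Object.entries(transitions[current])) {
        if (r < p) return note;
        r -= p;
      }
      return current; // fallback if the row doesn't sum exactly to 1
    }

    const synth = new Tone.Synth().toDestination();
    let current = "C4";

    // Step through the chain once per quarter note.
    Tone.Transport.scheduleRepeat((time) => {
      synth.triggerAttackRelease(current, "8n", time);
      current = nextNote(current);
    }, "4n");

    // Browsers require a user gesture before audio can start.
    document.addEventListener("click", async () => {
      await Tone.start();
      Tone.Transport.start();
    });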

Code of Music & Listening Machines / Week Seven / Midterm & Final Assignment by Pippa Kelmenson

Inspiration:

Ryan Ross Smith - Study 42, Study 4, Study 34.

Interaction:

A simple sequencer activated by mouse clicks. Each click either adds a sound or takes one away; it's up to the user to make their own pattern.
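
A minimal sketch of how a click-toggled step sequencer like this could be wired up with P5.js and Tone.js; the grid size, drum voice, and tempo are placeholders rather than our exact setup.

    // A click-toggled step sequencer (P5.js + Tone.js); grid size and voice are placeholders.
    const STEPS = 8;
    let cells = new Array(STEPS).fill(false); // which steps currently make sound
    let synth;
    let step = 0;

    function setup() {
      createCanvas(400, 50);
      synth = new Tone.MembraneSynth().toDestination();
      // Advance one step per eighth note, playing only the active cells.
      Tone.Transport.scheduleRepeat((time) => {
        if (cells[step]) synth.triggerAttackRelease("C2", "16n", time);
        step = (step + 1) % STEPS;
      }, "8n");
    }

    function draw() {
      background(220);
      const w = width / STEPS;
      for (let i = 0; i < STEPS; i++) {
        fill(cells[i] ? 0 : 255);
        rect(i * w, 0, w, height);
      }
    }

    async function mousePressed() {
      await Tone.start();        // audio must begin from a user gesture
      Tone.Transport.start();
      const i = floor(mouseX / (width / STEPS));
      if (i >= 0 && i < STEPS) cells[i] = !cells[i]; // toggle the clicked cell
    }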

Code of Music:

Listening Machines:

Made with Sohaila Mosbeh.

You can find our code here.

Performative Avatars / Week Six / Race / Wrap3 / VRChat Interviews by Pippa Kelmenson

Assignment:

  • Morph your 3D scan using Wrap3.

  • Write a blog post with screenshots or screen recordings of your 3D scan in DAZ.

    • What worked and what didn't work? What did you struggle with in this assignment?

Response:

After resizing and orienting my 3D scan in Maya (video here), I was able to morph it in Wrap3 (video here) and then rig it in Mixamo (video here). I still need to import the textures into Mixamo for animation and to reformat my avatar’s eye sockets in Wrap3.

Performative Avatars / Week Five / Gender / 3D Scanning / Parallel 3+4 by Pippa Kelmenson

Assignment:

Using one of the scanning apps, work with someone to scan your head. Write a blog post with a photo or screen recording of your scanned head. What was difficult about the 3D scanning? What worked and what didn’t? Does this scan feel like an accurate representation of your head? Any technical challenges you had with this assignment?

Response:

Since I haven’t been tested for COVID-19 at Tandon in the past two weeks, I discovered that I’m now not allowed to physically go to ITP until my next test this week, so unfortunately I couldn’t take advantage of the itSeez3D camera at school. Although my roommate and I tried numerous times to scan my body with both the ScandyPro and Capture apps, it was incredibly difficult to settle on a scan that we were happy with. After trying many different methods (scanning with an iPhone X front camera while mirroring to another iPhone 11 Pro, having my roommate walk in circles around me, and keeping her still while rotating myself in the shot), we finally got a usable scan, only to find out that ScandyPro would only let me download the .obj file if I subscribed to its monthly service. We found the best method to be her holding the phone still while I rotated slowly, but even then the scan would cut off parts of my legs and arms because of my distance from the camera and my movement. I decided that this week I’ll get my COVID-19 test and return to ITP to have someone scan my body properly with the itSeez3D app and its associated hardware.

Update:

After days of trying to get a usable 3D scan with itSeez3D, I finally got one! As it turns out, the app only reveals details like noses and hands in your 3D scan after rendering it in the cloud in Hi-Res.

[Image: model preview]

The Body, Everywhere and Here / Week Six & Seven / Final Assignment by Pippa Kelmenson

Assignment:

Create something that reflects on ideas we've discussed (new/old, alone/partner) - make/document a prototype that you can use for class discussion & user testing.

Response:

For my final project, I hope to combine elements of sound, machine learning, and body tracking to create a gesture-controlled harmonic sequencer. With Tone.js, ml5, PoseNet, and Three.js, I would like to track users’ gestures to create 3D sine waveforms, one for each hand in view. This way, each hand adds a layer of harmony onto the original frequency, creating a chord of tones from the four hands of two people. My goal is that, after a user draws a sine wave, Meyda (an audio feature extraction library used alongside ml5) can estimate the frequency of the wave from its amplitude and cycle length, and then play it back as a pure tone. Inspiration below.
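
As a rough sketch of the mapping I have in mind, here is one way a single user’s two hands could each drive an oscillator, assuming ml5’s PoseNet and Tone.js are loaded; the keypoint choices, confidence threshold, and frequency range are placeholders.

    // Map each detected wrist's height to the pitch of its own oscillator.
    // Keypoints, confidence threshold, and frequency range are placeholders.
    let video;
    let poses = [];
    const oscillators = [
      new Tone.Oscillator(220, "sine").toDestination().start(),
      new Tone.Oscillator(220, "sine").toDestination().start()
    ];

    function setup() {
      createCanvas(640, 480);
      video = createCapture(VIDEO);
      video.hide();
      // ml5 PoseNet: keep the latest pose estimates as they arrive.
      const poseNet = ml5.poseNet(video);
      poseNet.on("pose", (results) => { poses = results; });
    }

    function draw() {
      image(video, 0, 0);
      if (poses.length > 0) {
        const pose = poses[0].pose;
        [pose.leftWrist, pose.rightWrist].forEach((wrist, i) => {
          if (wrist.confidence > 0.5) {
            // Higher hand -> higher frequency (placeholder range 110-880 Hz).
            const freq = map(wrist.y, height, 0, 110, 880);
            oscillators[i].frequency.rampTo(freq, 0.1);
            ellipse(wrist.x, wrist.y, 20);
          }
        });
      }
    }

    function mousePressed() {
      Tone.start(); // browsers require a user gesture before audio will play
    }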

Inspiration:

Testing:

Since working on my original prototypes, I was able to draw up a PoseNet sketch that included ellipses falling from users’ hands, using both live and recorded data. Unfortunately, I wasn’t able to access a Kinect, so my goal was to use elements of Three.Meshline ribbons and Tone.js oscillators in my PoseNet sketch. After trying to import the recorded Kinect data into PoseNet (code here) and combine it with a Three.js environment and Three.Meshline ribbons (code here), I ran into problems making Three.js and P5.js work together as libraries. Eventually, I was able to incorporate the harmonic sequencer I made in P5.js into my PoseNet sketch, but I am still debugging my Three.js tests.

I plan on continuing this project in the future for my Listening Machines class, so that Tone.js can interface with PoseNet and Meyda to drive a harmonic sequencer drawn and controlled by two users’ hand gestures.

You can find my updated (but still broken) code here.
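
As a stand-in for the Meyda/ml5 analysis step described above, one simple way the frequency of a drawn wave could be estimated is by counting upward zero crossings in its sampled points. This is an illustrative approach, not the pipeline I’ve built.

    // Estimate the frequency of a hand-drawn wave by counting upward zero crossings
    // in its sampled y-values (assumed centered around 0).
    function estimateFrequency(samples, sampleRate) {
      let crossings = 0;
      for (let i = 1; i < samples.length; i++) {
        if (samples[i - 1] < 0 && samples[i] >= 0) crossings++;
      }
      const duration = samples.length / sampleRate; // seconds of drawing captured
      return crossings / duration;                  // upward crossings per second ≈ Hz
    }

    // Usage sketch: scale the drawn-wave frequency into an audible range and play it.
    // The scaling factor and synth voice are placeholders.
    function playDrawnWave(samples, sampleRate) {
      const freq = estimateFrequency(samples, sampleRate) * 100;
      new Tone.Synth().toDestination().triggerAttackRelease(freq, "2n");
    }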