Week 1 What’s my topic? explore/destroy/reassemble

I am interested in how music is created from structures and rules, and in how those boundaries have been constantly tested throughout music history. Composition became far less bound by rules and structures in the Romantic period than in the earlier Classical and Baroque periods, and the early 20th century brought Schoenberg's twelve-tone music and the minimalists' process music. Avant-garde musicians continuously challenge the definition of music and push forward the way music is perceived by breaking old traditions and inventing new rules.

There is a spectrum between predictability and variability along which we judge whether a piece of music is pleasing. Musicians and listeners can fall anywhere on this spectrum, which shapes a musician's style and a listener's taste.

Final proposal

Noise



Noise is an audio-enhanced optical illusion device. It explores the connection between auditory white noise and visual static through the moving body: as the viewer moves through space to observe the still image pattern, the pattern appears to move in different ways with each change of perspective, inducing an optical illusion. The body's motion also modulates the frequency and panning of the auditory noise along different axes, corresponding directly to both the changes in the optical illusion and the motion itself.

Process


Generated noise patterns in Processing, saved as PDF.



Determined the distance between the viewer and the image.

In Photoshop, edited the image rotation and size using the angular size equation.
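For reference (the numbers below are illustrative, not my exact measurements): a pattern of width $w$ viewed from distance $d$ subtends the visual angle

$$\theta = 2\arctan\!\left(\frac{w}{2d}\right),$$

so a 12″ pattern viewed from 48″ away spans about 2·arctan(12/96) ≈ 14.3°. Matching θ between the two acrylic sheets is how the front and back patterns can be sized to line up from the chosen viewing distance.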



Laser cut and etched two 12'' x 12'' clear acrylic sheets, plus an opaque base for holding LED lights.

Body tracking in OpenFrameworks: ofxKinect  + ofxOpenCV 

OpenFrameworks sends OSC to Max/MSP, which generates noise modulated by body positions.

Github repo

The Lot Radio App

The Lot Radio is an independent online radio station and hangout spot in Williamsburg that brings the local music community together. As a huge fan and the station's digital communications manager, I added some new functionality to the existing app to enhance the streaming experience: a real-time program schedule fetched from Google Calendar, a chat room for interacting with the DJs, and an online merchandise store. The upgraded version is currently available for download on the App Store here.

The existing app had only one page, which let the audience switch between audio and video streaming:

One of the most important features to include was the program schedule, so listeners can check upcoming shows.

I also integrated the chatroom from the website, which allows listeners to chat with live DJs on the go.

The third tab is the merch shop, where users can place radio merch orders directly in the app.

Process

App transfer & migration

  • Registered The Lot Radio Inc. Apple developer account

  • Requested app transfer

  • Migrated app

Google Calendar API

  • Get calendar ID and API key for API request

  • Google Calendar API documentation 

  • Find the right request parameters (see the request sketch below)

  • Alamofire

  • SwiftyJSON
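Here is a minimal sketch of the request (assuming Alamofire 5; the calendar ID and API key below are placeholders):

```swift
import Alamofire
import SwiftyJSON

// Placeholder credentials; the real values come from the Google developer console.
let calendarID = "YOUR_CALENDAR_ID"
let apiKey = "YOUR_API_KEY"
let url = "https://www.googleapis.com/calendar/v3/calendars/\(calendarID)/events"

// timeMin/timeMax must be RFC3339 timestamps (see the next section).
let parameters: [String: String] = [
    "key": apiKey,
    "timeMin": "2019-05-09T00:00:00-04:00",
    "timeMax": "2019-05-16T00:00:00-04:00",
    "singleEvents": "true",
    "orderBy": "startTime"
]

AF.request(url, parameters: parameters).responseData { response in
    guard let data = response.data, let json = try? JSON(data: data) else { return }
    // Each item carries the show name and its start/end times.
    for event in json["items"].arrayValue {
        print(event["summary"].stringValue,
              event["start"]["dateTime"].stringValue)
    }
}
```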

Time zone, time format conversion

  • API request parameters take RFC3339 timestamps with a mandatory time zone offset

  • RFC3339 timestamp examples: 2019-05-09T10:00:00-04:00, 2019-05-09T10:00:00Z

  • Request JSON of the schedule for the next 7 days

  • Display show start/end times in local time (see the conversion sketch below)

  • Configure table view sections and rows
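A sketch of the conversion: parse with a POSIX-locale RFC3339 formatter, then display in the device's local time zone:

```swift
import Foundation

// Parser for RFC3339 timestamps like "2019-05-09T10:00:00-04:00" or "...Z".
let rfc3339 = DateFormatter()
rfc3339.locale = Locale(identifier: "en_US_POSIX")
rfc3339.dateFormat = "yyyy-MM-dd'T'HH:mm:ssZZZZZ"

// Formatter for display, in the listener's local time zone.
let display = DateFormatter()
display.timeZone = TimeZone.current
display.dateFormat = "EEE h:mm a"   // e.g. "Thu 10:00 AM"

if let start = rfc3339.date(from: "2019-05-09T10:00:00-04:00") {
    print(display.string(from: start))
}
```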

UI

  • Tab Bar Controller (see the sketch below)

  • Navigation Controller

  • Table View

  • Web View

  • UI Gesture Recognizer

  • UI Animation
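A minimal sketch of the three-tab structure, with stub view controllers standing in for the real player, schedule, and shop screens:

```swift
import UIKit

// Stubs only; the real app swaps in the stream player, the schedule
// table backed by Google Calendar, and the merch web view.
let player = UIViewController()
player.tabBarItem = UITabBarItem(title: "Live", image: nil, tag: 0)

let schedule = UINavigationController(rootViewController: UITableViewController())
schedule.tabBarItem = UITabBarItem(title: "Schedule", image: nil, tag: 1)

let shop = UIViewController()   // hosts a web view pointed at the merch store
shop.tabBarItem = UITabBarItem(title: "Shop", image: nil, tag: 2)

let tabBarController = UITabBarController()
tabBarController.viewControllers = [player, schedule, shop]
```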

Challenge

  • API call

  • Time zone/format conversion

  • Auto Layout

  • Communication

Next step

  • Fix chatroom 

  • Mixcloud archive

  • Publish app

Algorithmic laser project proposal

Audio-enhanced optical illusion (moiré patterns): zoetrope + music box

The viewer’s body motion controls time (audio) and space (optical illusion/animation), adding another dimension to a traditional audiovisual piece.

Visual reference for optical illusion (Processing/OF):


Audio examples (Max/MSP): minimal sounds

Rising pitch  

Arpeggio up

Arpeggio down

Spinning

Shepard Tone


Body motion (front/back/left/right/up/down) - displacement maps to:

- audio (frequency/pitch, energy, speed/playhead, reverb, panning)

- visual (size, rotation, displacement, distortion)


Process & tools:

Design files algorithmically in Processing/OF

Export PDF - laser cut patterns

Body motion detector - Kinect vs. webcam (PoseNet) OR slider pot ==> triggers audio change in Max

Materials:

Enclosure with 2 slots, front and back, for inserting and switching screens (clear acrylic sheets)

LED panel with diffuser to light up the pattern

Plan A

Plan B

Plan A - fun, moves in all 6 directions, scalable; Kinect/webcam is a more stable setup (OSC), but can a moving body generate as many frames as a moving sheet?

Plan B - more stability and control, but limited directions of motion; a more distinct illusion but less interactivity.

Sonic Cubes

Sonic Cubes is a set of instruments intended for a live performance exploring a random music generation system. It involves the motion and spatial arrangement of 6 cubes that trigger and manipulate sound in a 17-channel audio setup. Listeners are surrounded by speakers arranged in a circle at ear level.

The programs of choice are Max/MSP and reacTIVision, a computer vision framework that tracks the motion of objects with a webcam. Detailed technical documentation can be found here in my previous blog post.

Having established the connection between the physical objects and Max through reacTIVision, I started my composition process. The affordance of 6 cubes suggests the action of tossing them around, so that thousands of different combinations of sound can be generated. With the goal of a random music generation system in mind, I began experimenting with different sounds: melodic, percussive, and experimental. I was very inspired by Terry Riley’s “In C”, where the entire 40-minute composition is built from very simple repeating note patterns. I began by attaching a short note sequence to each side of a cube and sending MIDI noteout to play piano notes. I started with fewer cubes, each representing a type of 7th chord (fully diminished, half diminished, minor 7th, etc.), then took certain notes out of each chord (e.g. the 3rd and 7th, or the root), leaving fewer notes in the sequence. I started to notice some interesting patterns and unexpected combinations like this.

However, the longer the piece ran, the more robotic and dry it sounded due to the uniform piano sound, so I looked into different program/synth options. I tried different ways to send MIDI to dac~ through different sound sources, like vst~ and X.FM~, but I found fluidsynth~ to be the most handy: it loads any SoundFont file, which opens up tons of possibilities. I explored different combinations of synths, drones, and percussion to make the piece more musical, and also applied effect modulations to certain sounds, triggered by the rotation of the cubes. After much experimentation, the piece came together.


I achieved my goals in that all the technical aspects worked exactly the way I wanted: moving the cubes under the camera affects the direction of the sound source, and each cube locks to the right beat whenever it is added or removed. What didn’t work as expected: when all the cubes are thrown under the camera and too many things change at the same time, the system gets confused, and when certain cubes are removed, some of the clips don't stop playing. Through this project I learned about ambisonics (encoding, decoding, and automation with the ICST ambisonics package) as well as reacTIVision, the unique object-tracking framework. If I were to revise the work, I would make triggering and stopping more reliable, use a translucent table surface with the camera hidden underneath, or even explore less musical, more experimental sounds and attach the fiducial IDs to other objects, so that the interactions imply different, closer relationships between the action and the sounds.

LED design plan for Skylight Modern Staircase

Venue: Skylight Modern

Interactivity: a slow, smooth color chase when no one is around; a white meteor trail moving on top of the default color chase when people approach the staircase


Color scheme:

Mounting: LED tapes will be mounted inside the handrail slots on both sides of the wall


The handrails and the walls around them will be lit along their entire length.

According to calculations on the section view, each side of the wall needs a run of pixel tape 30′ 11 7/8″ long.

Section view

Plan view

Riser diagram


Bill of materials:

4x RGBW 5V(90w) 5m/16.4ft Pixel Tape @$35.99 each

2x AC to DC 5V 40A (200W) power supply @$26.80 each (one 200W supply per two 90W tape runs)

2x 16awg power cord @$6.71 each

1x DMXKing LeDMX4 PRO @$129.00

1x ethernet cable @$8.59

10x 2m LED Channel $70.06 total

Wago 221 lever nuts - 2-way & 3-way $23.92 + $18.95

18/3 cable - 200ft $42.27

18/2 cable - 100ft $22.92

1x microcontroller $29.25

2x proximity sensor $3.95 each

1x laptop

total: ~$564







Week 9 Non-visual Interfaces

Voice Meme is an app with a non-visual interface that adds sound effects as you talk. It is perfect for comedians, public speakers, and YouTubers who want to add a touch of humor as they talk to their audience.

The app detects keywords using the Speech framework's recognizer, plays the corresponding sound file, and displays a GIF. I used this SwiftGif library to load GIFs into a UIImageView.
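A minimal sketch of the keyword-to-sound mapping (the keywords and file names here are hypothetical; the real app also swaps in the matching GIF):

```swift
import Speech
import AVFoundation

// Hypothetical keyword-to-sound-file mapping.
let soundForKeyword = ["boom": "explosion.wav", "sad": "sad-trombone.wav"]
var player: AVAudioPlayer?

func handle(result: SFSpeechRecognitionResult) {
    // Look at the most recently recognized word.
    guard let word = result.bestTranscription.segments.last?.substring.lowercased(),
          let file = soundForKeyword[word],
          let url = Bundle.main.url(forResource: file, withExtension: nil)
    else { return }
    player = try? AVAudioPlayer(contentsOf: url)
    player?.play()
    // ...load the matching GIF into the UIImageView via SwiftGif here.
}
```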

I was trying to determine the time it took the user to say certain words, so the app would only load the GIF and sound file when that time exceeded a certain threshold. I tried to use lastSegment.duration: according to its definition, it is the number of seconds it took the user to say the word represented by the segment, measured from the start of the utterance. However, when I printed lastSegment.duration, the number seemed to keep incrementing instead of returning the time it took to say the last word.

Another issue I encountered is the 60-second limit on an individual speech request, meaning the recognizer only listens to 60 seconds of live audio. I wanted the app to listen for as long as the user wants, but I couldn't figure out how to reset the SFSpeechRecognitionTask at the end of every 60 seconds.
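One approach I have seen suggested, sketched below but not verified in this app, is to tear down the task and request and start fresh shortly before the limit:

```swift
import Speech
import AVFoundation

final class ContinuousListener {
    private let recognizer = SFSpeechRecognizer()!
    private let audioEngine = AVAudioEngine()
    private var request: SFSpeechAudioBufferRecognitionRequest?
    private var task: SFSpeechRecognitionTask?

    func start() throws {
        request = SFSpeechAudioBufferRecognitionRequest()
        let node = audioEngine.inputNode
        // Feed microphone buffers into the recognition request.
        node.installTap(onBus: 0, bufferSize: 1024,
                        format: node.outputFormat(forBus: 0)) { [weak self] buffer, _ in
            self?.request?.append(buffer)
        }
        audioEngine.prepare()
        try audioEngine.start()
        task = recognizer.recognitionTask(with: request!) { result, error in
            // keyword handling goes here
        }
        // Restart shortly before the ~60 s per-request limit.
        Timer.scheduledTimer(withTimeInterval: 55, repeats: false) { [weak self] _ in
            self?.restart()
        }
    }

    private func restart() {
        task?.cancel()
        task = nil
        request?.endAudio()
        request = nil
        audioEngine.stop()
        audioEngine.inputNode.removeTap(onBus: 0)
        try? start()
    }
}
```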

Week 8 Augmented Reality

I created a 2D AR VJ Camera app that uses audio input to animate camera filters.

To access audio data from the microphone, I used AVAudioRecorder and followed this tutorial to get the averagePower from the recorder.
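The metering looks roughly like this (a sketch following that tutorial's trick of recording to a throwaway file just to read the mic level):

```swift
import AVFoundation

// Record to a throwaway file purely to meter the microphone level.
let url = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent("meter.caf")
let settings: [String: Any] = [
    AVFormatIDKey: Int(kAudioFormatAppleIMA4),
    AVSampleRateKey: 44100.0,
    AVNumberOfChannelsKey: 1
]
let recorder = try AVAudioRecorder(url: url, settings: settings)
recorder.isMeteringEnabled = true
recorder.record()

// Poll from a Timer or CADisplayLink each frame:
recorder.updateMeters()
let level = recorder.averagePower(forChannel: 0)   // ~ -160 dB (silence) up to 0 dB
```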

I wrote my own map function to scale the audio level onto the parameters of different types of filters.
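The map function is the familiar Processing-style linear rescale:

```swift
// Linearly rescale a value from one range onto another (Processing-style map()).
func map(_ value: Float, _ inMin: Float, _ inMax: Float,
         _ outMin: Float, _ outMax: Float) -> Float {
    return outMin + (outMax - outMin) * (value - inMin) / (inMax - inMin)
}

// e.g. clamp a meter reading to -60...0 dB, then map onto a 0...1 parameter:
let level: Float = -23.5                        // sample averagePower reading
let intensity = map(max(-60, level), -60, 0, 0, 1)
```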

I went through the entire list in the Core Image Filter Reference, found a few filters that are interesting for VJ purposes, and referred to this post to set the values of filter parameters.

One issue I encountered was that not every filter parameter in the Core Image Filter Reference has a corresponding Filter Parameter Key constant, so I wasn't able to set parameters dynamically using self.currentFilter!.setValue(inputCount, forKey: ???). Later I figured out I could just use the parameter's name string as the key.
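For example (the filter and values here are arbitrary), the parameter name string works directly as the key:

```swift
import CoreImage

// The parameter names listed in the Core Image Filter Reference
// (e.g. "inputAngle") can be passed straight to setValue(_:forKey:).
let filter = CIFilter(name: "CITwirlDistortion")!
filter.setValue(CIVector(x: 200, y: 200), forKey: "inputCenter")
filter.setValue(150, forKey: "inputRadius")
filter.setValue(3.14, forKey: "inputAngle")   // e.g. driven by the audio level
```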

Github link


Here’s a screen capture of the app demo, with the camera facing a TV screen playing a festival set.



Four Tet's LED Paradise

One of my favorite performances is a live set by Four Tet for his album ‘Morning/Evening’, featuring a large-scale 3D light installation. Four Tet brought the same stage setup to multiple locations around the world, each one slightly different from the others.

Read More

Week 6-7 Multipeer Connectivity

Mobile Lab Musical Chairs is a virtual musical chairs game in which a certain number of players compete for a smaller number of circular buttons (chairs). Each player can occupy only one button per round; whoever ends up without a button loses the round.

Read More

Serge Phase - Stereo

I am obsessed with process-based music and the idea that, by intentionally choosing and arranging notes in pitch and time, pleasing sound of great complexity can arise from a very minimal amount of musical material. I am especially interested in Steve Reich's use of the phasing technique in Piano Phase (1966), where two pianists play the same simple motif over and over, drifting in and out of sync with each other. I therefore wanted to understand Reich's phase shifting more deeply by recreating the process with the same technique but different instruments and a different motif.

Read More

USB Human Interface Device (HID)

For this HID assignment, I wanted to build a steering wheel for computer racing games. Almost all computer racing games use the left and right keys for steering and the up key for gas, with a few additional keys that vary by game. So I decided to map 4-5 keyboard keys to my control surface using this keyboard library. I used the tilt of an accelerometer's x-axis value to control the steering, and a few switches for gas and other functions.

Read More

RCA Final Proposal

Our final project will be a participatory performance exploring intimacy and connection through anonymous touch. It is inspired by a durational performance that Jordan did last semester for a Design for Discomfort class. In that previous performance, he sat in Washington Square Park, blindfolded and wearing earmuffs, with his hand extended, and a sign that said “Will you hold my hand?” Strangers stopped by to hold or shake his hand. The performance explored what can or might be communicated through touch alone, and the differences in experience and connection when one removes the usual visual and conversational information by which we assess, judge, and relate to other people.

Read More