This seems to be my biggest hurdle so far. This might be the solution.
I was able to adapt a user-made crossfader to fade between two of the videos. I need to add a "stop" for the faded-out video once the crossfade completes. The crossfade also needs to work between three videos, depending on the number of people (blobs) detected.
I've worked out a three-way, toggling video crossfader, triggered by the MaKey MaKey circuit completion. Now to make it more conditional.
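To keep myself honest about what the crossfader actually has to do, here's a minimal sketch of the logic in Python. This isn't the Isadora patch itself; the `Video` class and its `play()`/`stop()` methods are hypothetical stand-ins for the movie-player actors, and the fade is a simple linear ramp.

```python
# Sketch of the crossfade behavior: fade the outgoing video out while the
# incoming video fades in, then stop the outgoing one once the fade completes.

class Video:
    """Hypothetical stand-in for a video-player actor."""
    def __init__(self, name):
        self.name = name
        self.playing = False
        self.opacity = 0.0

    def play(self):
        self.playing = True

    def stop(self):
        self.playing = False
        self.opacity = 0.0


def crossfade_step(outgoing, incoming, elapsed, duration):
    """Advance the crossfade by setting both opacities from elapsed time.
    Returns True when the fade is complete."""
    t = min(elapsed / duration, 1.0)
    outgoing.opacity = 1.0 - t
    incoming.opacity = t
    if t >= 1.0:
        outgoing.stop()  # the "stop" for the faded-out video
        return True
    return False
```

Called once per frame with the elapsed time since the fade started, this gives the two-video fade plus the stop-on-completion; a three-way toggle is just a matter of choosing which pair of videos to hand it.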
Reaction to Legalism.
The project uses the punishment/reward principles of the Chinese philosophy of Legalism. Collaborative compliance is rewarded, non-collaboration is punished, individualism ignored.
Ples: Stereotypically pleasant videos and sounds.
UnPles: Stereotypically unpleasant videos and sounds.
Ples+: Brighter, more vibrant pleasant videos and sounds.
MMc: MaKey MaKey circuit completion.
PosRef: Affirmation of intended action; positive reinforcement.
- When no one is around, the project sits dormant, but is always scanning
- Once one or more people are detected, it triggers the slow transition into Ples
- If only one person is detected, the state remains in Ples
- If more than one person is detected, it triggers a timer.
- After the length of time set by the timer, there is a slow crossfade to UnPles
- If MMc occurs before UnPles, then a slow crossfade to Ples+
- If MMc occurs after UnPles, then a slow crossfade to Ples+
- Upon MMc, PosRef.
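The rules above can be sketched as a small state machine. This is a paper model of the logic, not the Isadora patch: the state names mirror the glossary, and the `update()` inputs (people count, time step, circuit-closed flag) are my assumptions about what the camera, timer, and MaKey MaKey would report each frame.

```python
# State machine for the installation rules: Dormant -> Ples -> UnPles,
# with the MaKey MaKey circuit (MMc) jumping to Ples+ before or after UnPles.

DORMANT, PLES, UNPLES, PLES_PLUS = "Dormant", "Ples", "UnPles", "Ples+"

class Installation:
    def __init__(self, unples_delay=30.0):
        self.state = DORMANT
        self.unples_delay = unples_delay  # seconds before UnPles triggers
        self.timer = None                 # time since >1 person was seen

    def update(self, people, dt, circuit_closed):
        """Advance one step; dt is elapsed seconds since the last update."""
        if people == 0:
            # No one around: sit dormant, but keep scanning.
            self.state, self.timer = DORMANT, None
            return self.state
        if circuit_closed:
            # MMc, whether before or after UnPles: crossfade to Ples+.
            self.state, self.timer = PLES_PLUS, None
            return self.state
        if self.state == PLES_PLUS:
            return self.state
        if people == 1:
            # A single person keeps the state in Ples.
            self.state, self.timer = PLES, None
        else:
            # More than one person: enter Ples and run the timer.
            if self.state == DORMANT:
                self.state = PLES
            self.timer = (self.timer or 0.0) + dt
            if self.timer >= self.unples_delay and self.state == PLES:
                self.state = UNPLES
        return self.state
```

PosRef on MMc would hang off the transition into Ples+; I've left it out of the sketch since it's a cue, not a state.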
Now that all that is down on paper, I can use it as a reference for building the interface. It works great as a checklist of actors.
I was able to get the camera to see when there were two blobs, and then start playing a video and a separate track of audio after a certain amount of time had elapsed. Here's the screenshot.
I need it to play something immediately once two blobs have been found, then crossfade to another video and audio track once a period of time has elapsed, unless two keyboard triggers have been registered simultaneously.
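One way to decide whether two keyboard triggers count as "simultaneous" (this is my assumption about how to implement it, not something the software dictates) is to compare the two key-down timestamps against a small tolerance window:

```python
# Treat two trigger timestamps as simultaneous if they land within a
# small tolerance window of each other.

SIMULTANEITY_WINDOW = 0.25  # seconds; tune by feel

def triggers_simultaneous(t_key1, t_key2, window=SIMULTANEITY_WINDOW):
    """Return True if both triggers fired within `window` seconds.
    A timestamp of None means that trigger hasn't fired yet."""
    if t_key1 is None or t_key2 is None:
        return False
    return abs(t_key1 - t_key2) <= window
```

If this returns True before the crossfade timer expires, the crossfade to the second track would be cancelled.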
Initial Idea: Forced cooperation for the betterment of the group. Viewers must both engage in a task, such as putting their hands on a surface. If they both don’t comply, the experience is designed to punish the participants by creating a negative atmosphere using sound and video. When they comply, they are rewarded with soothing sounds and video imagery.
There would be a MaKey MaKey board serving as the physical interface that must be used by the pair of participants. The experience would be controlled overall by Isadora, which "sees" that participants are present.
I've been torn between my website running WordPress, and ditching it to use only Tumblr. I can't help but want to keep what I control, my website, on my own server space. So I'm going to try both and see which one gets more attention and interaction. I'm going to use a couple of widgets that keep the same posts on both platforms. Let's hope it works as advertised.
It seems my project has me on the ropes. More work is going into executing a technique than has gone into the idea behind the project. That’s a position I seem to get myself into too often. It’s also the driving force behind my lack of coding language-based projects over the last couple of years.
My new line of inquiry sits here for the moment.
KinectCoreVision can be found here: https://github.com/patriciogonzalezvivo/KinectCoreVision
It's a bit of a whirlwind, following directions that I don't entirely understand. But you just have to dive in, right?
Now to figure out why Synapse crashes on startup…
It seems the program's author only claims compatibility up to a certain version of OS X. I'm going to try using my old MacBook, which is running an older version of OS X. Fingers crossed.
It seems my old laptop is too old. It requires legacy versions of assorted software that have eluded my searches, and none of which are guaranteed to work even if found. Too much of a time suck without a firm promise of a payoff. It's time to rethink my strategy here.
I had the privilege of sitting in a class while the teacher taught about remix culture: where it comes from, historical references, etc. It was a surreal experience to have my memories taught to a class that I was sitting in. My brain immediately turned toward music, and then, out of some weird knee-jerk reaction, I turned against my first instinct. I started thinking of a 3D piece that fit the requirements of the project without fully discarding my musical attachment. I was drawn to the idea of bringing the sampling tradition of hip-hop into the physical world. I played around with the idea of dissecting a sample-based song to determine the percentage each sample contributed to the song's whole. I would then cut pie shapes from the records the samples came from, each sized to the corresponding percentage of a whole record. The next part of the project was to find a song.
This became a project changer, as the records behind most sample-based songs have become expensive due to their collectibility. As a solution to this problem, I realized that my first instinct would make for a more original work and help push me to make music. Using records I have lying around that I wouldn't be sad about cutting up, I sampled selections from three records.
The records that were sampled were chosen at random from my yard sale collection. They were all in poor condition, scratched, dusty, stored for decades without dust covers. I made no effort to repair the sound. I wanted it crunchy. I recorded the parts I was interested in, and laid them out in Audition. Conveniently this is how I started making music in the first place, back in 2002. I used to use Cool Edit Pro, which was later bought by Adobe, and then called Audition. It felt natural to use it for this kind of work.
For the arrangement of the song, I allowed the elements of the song to simply present themselves, without any effects and with only minor EQ'ing. I kept the arrangement simple until the finish of the 1st part. The 2nd part introduces timestretching and stereo-field manipulation to pull the sound apart into the listening space. I do not normalize my tracks, as I love a dynamic range of volume. The whole song serves to set up the 3rd part. This is where I take each sample and stretch it excessively to create an entirely new feel, while maintaining a subtle connection to the rest of the song.
The secondary intent of the work is to take the mood from slow and lighthearted, to a more aggressively introspective one.
Critique for this was a bit more complicated than I initially thought. I was caught up in the idea of making, more so than how it would be received. I did think that music would be tough to crit, as many social factions define themselves by music more than any other cultural aspect. It quickly dawned on me that it was being presented where all the other projects were visually driven. At least half of the critique was on the visual presentation of the music, i.e. how the speakers sat on the table, the table itself, and the fact that I chose to set the sampled records upright between the speakers. One crit even included commentary on the style of the album covers.
From this I took away a couple of things. Next time, I'll use a couple of pedestals and turn off the lights to focus attention on the aural display. The other takeaway is about generational and genre considerations: they can't be pandered to, and they cannot be avoided. If I'm going to make music, it is to be for myself, without compromise. And there will be people that won't like it.
I started my research with this article explaining how to get things up and running:
Now to get started on getting Isadora to see the Kinect…
I'm currently poring over the TroikaTronix Forums, as there seem to be several people who have already worked out the process: http://troikatronix.com/troikatronixforum/discussion/93/kinect-driver/p1
Isadora, it seems, is a program that fills in a lot of the holes of possibility in my work. I've frequently had very fragmented ideas about combining scripted function with multiple forms of media. This could get really exciting. It has gotten exciting.
I'm still sorting out the simple things, like starting and stopping the video without having to mouse around and click. Something more like a keystroke. Basic controls, really. I'm finding that there's no shortage of information, and it seems almost everything has been done before. Once I've picked up steam, my endgame is going to be something related to a Kinect controlling the manipulation and playback of a video.