I was able to get the camera to see when there were two blobs, and then start playing a video and a separate track of audio after a certain amount of time elapsed. Here’s the screenshot.
I need to have it play something immediately once two blobs have been found, and then crossfade to another track of video and sound once a period of time has elapsed, unless two keyboard triggers have been registered simultaneously.
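In Isadora this logic would be built from actors and triggers rather than code, but the behavior I’m after can be sketched as a tiny state machine. This is only an illustration of the logic, not anything Isadora-specific; all the names and the delay value are made up:

```python
import time

CROSSFADE_DELAY = 10.0  # seconds before crossfading (placeholder value)

class Experience:
    """Toy state machine mirroring the desired behavior:
    play track A the moment two blobs appear, crossfade to
    track B after a delay unless both keys are held."""

    def __init__(self):
        self.started_at = None  # when two blobs were first seen
        self.track = None       # "A" (first clip) or "B" (crossfaded clip)

    def update(self, blob_count, both_keys_pressed, now=None):
        now = time.monotonic() if now is None else now
        if blob_count < 2:
            # fewer than two people present: reset everything
            self.started_at, self.track = None, None
        elif self.started_at is None:
            # two blobs just appeared: play track A immediately
            self.started_at, self.track = now, "A"
        elif (now - self.started_at >= CROSSFADE_DELAY
              and not both_keys_pressed):
            # time elapsed and no simultaneous key triggers: crossfade to B
            self.track = "B"
        return self.track
```

So `update(2, False)` starts track A at once, later calls switch to B after the delay, and holding both keys (the MakeyMakey triggers) suppresses the crossfade.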
Initial Idea: Forced cooperation for the betterment of the group. Viewers must both engage in a task, such as putting their hands on a surface. If they both don’t comply, the experience is designed to punish the participants by creating a negative atmosphere using sound and video. When they comply, they are rewarded with soothing sounds and video imagery.
There would be a MakeyMakey board that would serve as the physical interface that must be used by the pair of participants. The experience would overall be controlled by Isadora, which would “see” that there are participants present.
I’ve been torn between my website running WordPress, and ditching it and only using Tumblr. I can’t help but want to keep what I control, my website, on my server space. So, I’m going to try both and see which one gets more attention and interaction. I’m going to use a couple of widgets that make sure the same posts are on both platforms. Let’s hope it works as advertised.
It seems my project has me on the ropes. More work is going into executing a technique than has gone into the idea behind the project. That’s a position I seem to get myself into too often. It’s also the driving force behind my lack of coding language-based projects over the last couple of years.
My new line of inquiry sits here for the moment.
KinectCoreVision can be found here: https://github.com/patriciogonzalezvivo/KinectCoreVision
It’s a bit of a whirlwind, following directions that I don’t entirely understand. But you just have to dive in, right?
Now to figure out why Synapse crashes on startup…
It seems the program’s author only claims compatibility up to a certain version of OS X. I’m going to try using my old MacBook, which runs an older version of OS X. Fingers crossed.
It seems my old laptop is too old. It requires legacy versions of assorted software that have eluded my searches, and even if found, none of them can be assured of working. Too much of a time suck without a firm promise of a payoff. It’s time to rethink my strategy here.
I had the privilege of sitting in a class while the teacher taught about remix culture: where it comes from, historical references, etc. It was a surreal experience to have my memories taught to a class that I was sitting in. My brain immediately turned toward music, and then, out of some weird knee-jerk reaction, I turned against my first instinct. I started thinking of a 3D piece that fit the requirements of the project without fully discarding my musical attachment. I was drawn to the idea of bringing the sampling tradition in hip-hop into the physical world. I played around with the idea of dissecting a sample-based song to determine the percentage each sample was of the song’s whole. I would then cut pie shapes out of the records that the samples came from, in the corresponding percentage of a whole record. The next part of the project was to find a song.
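The pie-cutting idea is simple proportional arithmetic: each sample’s share of the song maps to a wedge of the source record. A quick sketch with entirely hypothetical sample durations (the names and numbers here are invented for illustration):

```python
# Hypothetical seconds of each sample used in the song; only an
# illustration of the proportion -> pie-wedge idea, not real data.
sample_seconds = {"drums": 45.0, "horns": 30.0, "strings": 15.0}

total = sum(sample_seconds.values())
for name, secs in sample_seconds.items():
    share = secs / total    # fraction of the song's sampled whole
    angle = share * 360.0   # wedge angle to cut from that record
    print(f"{name}: {share:.0%} of the whole -> {angle:.0f} degree wedge")
```

With those made-up numbers, the drum sample would claim half the record (a 180-degree wedge), the horns a third, and the strings the remaining sliver.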
This became a project changer, as the records behind most sample-based songs have become expensive due to their collectibility. As a solution to this problem, I realized that my first instinct would make for a more original work and help push me to make music. Using records I had lying around that I wouldn’t be sad about cutting up, I sampled selections from three records.
The records that were sampled were chosen at random from my yard sale collection. They were all in poor condition: scratched, dusty, stored for decades without dust covers. I made no effort to repair the sound. I wanted it crunchy. I recorded the parts I was interested in and laid them out in Audition. Conveniently, this is how I started making music in the first place, back in 2002. I used to use Cool Edit Pro, which was later bought by Adobe and renamed Audition. It felt natural to use it for this kind of work.
For the arrangement of the song, I allowed the elements of the song to simply present themselves, without any effects and only minor EQing. I kept the arrangement simple until the finish of the 1st part. The 2nd part introduces timestretching and stereo-field manipulation to pull the sound apart into the listening space. I do not normalize my tracks, as I love a dynamic range of volume. The whole song serves to set up the 3rd part. This is where I take each sample and stretch it excessively to create an entirely new feel, while maintaining a subtle connection to the rest of the song.
The secondary intent of the work is to take the mood from slow and lighthearted, to a more aggressively introspective one.
Critique for this was a bit more complicated than I initially thought. I was caught up in the idea of making, more so than in how it would be received. I did think that music would be tough to crit, as many social factions define themselves by music more than any other cultural aspect. It quickly dawned on me that it was being presented where all the other projects were visually driven. At least half of the critique was on the visual presentation of the music, i.e. how the speakers sat on the table, the table itself, and the fact that I chose to set the sampled records upright between the speakers. One crit even included commentary on the style of the album covers.
From this I took away a couple of things. Next time, I’ll use a couple of pedestals and turn off the lights, to focus attention on the aural display. The other takeaway is about generational and genre considerations: they can’t be pandered to, and they can’t be avoided. If I’m going to make music, it is to be for myself, without compromise. And there will be people who won’t like it.
I started my research with this article explaining how to get things up and running:
Now to get started on getting Isadora to see the Kinect…
I’m currently poring over the TroikaTronix forums, as there seem to be several people who have already worked out the process: http://troikatronix.com/troikatronixforum/discussion/93/kinect-driver/p1
Isadora, it seems, is a program that fills in a lot of the holes of possibility in my work. I’ve frequently had very fragmented ideas about combining scripted functions with the use of multiple forms of media. This could get really exciting. It has gotten exciting.
I’m still sorting out the simple things, like starting and stopping the video without having to mouse around and click; something more like a keystroke. Basic controls, really. I’m finding that there’s no shortage of information, and it seems almost everything has been done before. Once I’ve picked up steam, my endgame is going to be something related to a Kinect controlling the manipulation and playback of a video.
A route I remember more than any other is my walk from home to my elementary school. Which is odd. I normally took the bus, and when I walked, I would typically take a different route, my most travelled route being through a residential area. It may be that as I got older, and grew brave enough to walk on the busier street, I became more aware of the details of this route.
Then there was Stark St. A more serious street name there never was. 5 lanes wide, with a speed limit of 40mph. It was a thrill to cross. Marked crosswalks were 20 blocks apart. Frogger taught me everything I needed.
The first ideas of a project are always the hardest. Mining the ether. The first ideas thrown around between Tara and me were to use several devices to orchestrate an interactive graphic, similar to a magic show using mobile devices.
This is where we got caught up in trying to expand the project to incorporate something a little more directly interactive. Another suggestion was to create a group drawing using an app and a group of users in a common space. The speed of motion would affect the rate at which a participant’s line would “bleed”. The drawing would have some basic outline that the participant could use as a starting point, or they could ignore it entirely. And then the group disbanded.
I then moved into Avery and Teagan’s group. The rundown I picked up is that we are going to interview volunteers on how they think the internet and holistic healing work. We would have a designated interview space. I suggested taking the recorded videos and using common references as sync points across the videos, displayed on separate monitors with multiple channels of sound, and a third screen showing the face of the viewer.
As the project has evolved, it’s led me to continue taking on a production role. Each of us in our group is interviewing two people, and the footage will be presented per the project spec.