So, it would be REALLY cool to store the comic-book quote feed and semantically analyze it. That way I can load images that have related tags, and the flow from one cell to the next will be all the more cogent. It's one thing to produce meaning by juxtaposition; it's a deeper experience when that juxtaposition is driven by a taxonomy. Can it happen before March 28th? Hard to say...
Regardless, leveraging CouchDB's map/reduce views should locate similar tweets/verbiage quickly.
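A rough sketch of the kind of view I'm imagining - every name here is a placeholder, and it assumes each stored quote doc carries a tags array (schema still up in the air):

```javascript
// Hypothetical design doc for a "quotes" database. Assumes docs
// shaped like { text: "...", tags: ["villain", "rooftop", ...] }.
// Map emits one row per tag so related quotes group together;
// the built-in _count reduce gives per-tag frequencies.
{
  "_id": "_design/quotes",
  "views": {
    "by_tag": {
      "map": "function(doc) { if (doc.tags) { doc.tags.forEach(function(t) { emit(t, doc._id); }); } }",
      "reduce": "_count"
    }
  }
}
```

Then a GET of /quotes/_design/quotes/_view/by_tag?key=%22rooftop%22&reduce=false pulls back every quote sharing a tag, which is exactly the lookup the cell-to-cell flow needs.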
My coding TODOs/challenges for the coming week(end) - it's a lot, but it'll be fun!!!
1) Load/retrieve images into/from CouchDB through node.js. ofxOsc -> node-osc, or maybe just plain HTTP, shouldn't be too difficult (see the attachment sketch after this list). Since I'm going with images vs. video in this first iteration, I'm not worried about needing any kind of high-octane storage facility.
2) UI - need to alternate between images, fading each in and out. I don't love oF for this: alpha blending is way more complicated than it needs to be (unless maybe I look into ofxTween), and I certainly don't need any shaders at this point. The fade math itself is simple, though - sketched after the list.
3) Decide how I'm going to hook up the Twitter API and Google Speech. In a museum environment, processing audio may be a challenge, so I intend to offer a keyboard implementation as well. As for Twitter: do I actually want to do this? And if so, do I want to make participants use their own Twitter handles, which involves the log-in and all that madness? Isn't it easier to create a Twitter account for this project and use hashtags and @s to attach the participant's handle if desired (sketch after the list)? I like the idea of Twitter b/c people can observe the evolution of their story tags even after they leave the museum.
4) It's out of scope for March 28th, but I still want to present an "audience" as a counterbalance to the comic-book feed, e.g., Google "cartoon/black-and-white" images that match the participant's gesture and even facial expression...
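For item 1, a minimal sketch of stuffing an image into CouchDB as an attachment over plain HTTP from node.js - no add-ons needed. Host, port, and the "panels" database name are all placeholders:

```javascript
// Store an image in CouchDB as a standalone attachment.
// Assumes CouchDB on localhost:5984 and a database called "panels".
var http = require('http'),
    fs = require('fs');

function storeImage(docId, filePath, cb) {
  var img = fs.readFileSync(filePath);
  var req = http.request({
    host: 'localhost',
    port: 5984,
    method: 'PUT',
    // PUTing an attachment at a fresh doc id creates the doc in one shot
    path: '/panels/' + docId + '/image.jpg',
    headers: { 'Content-Type': 'image/jpeg', 'Content-Length': img.length }
  }, function(res) {
    res.on('data', function() { /* drain the response */ });
    res.on('end', function() { cb(res.statusCode); });
  });
  req.end(img);
}

// Retrieval is just a GET of the same path:
// http://localhost:5984/panels/<docId>/image.jpg
storeImage('panel-001', './some-panel.jpg', function(status) {
  console.log('CouchDB said', status);
});
```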
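For item 2, the fade itself is just a time-based alpha ramp. I'm sketching the math in JS to match the rest of these snippets; in oF the same numbers would feed ofSetColor(255, 255, 255, alpha) between ofEnableAlphaBlending()/ofDisableAlphaBlending() in draw(). The durations are guesses:

```javascript
// Alpha for the current cell: fade in, hold, fade out, repeat.
// t is milliseconds since the sequence started.
var FADE_MS = 1000, HOLD_MS = 4000;
var CELL_MS = FADE_MS + HOLD_MS + FADE_MS;

function alphaAt(t) {
  var local = t % CELL_MS;
  if (local < FADE_MS)           return 255 * (local / FADE_MS);  // fade in
  if (local < FADE_MS + HOLD_MS) return 255;                      // hold
  return 255 * (1 - (local - FADE_MS - HOLD_MS) / FADE_MS);       // fade out
}

// Which image in the sequence is up right now.
function imageIndexAt(t, count) {
  return Math.floor(t / CELL_MS) % count;
}
```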
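And for the Twitter question in item 3, the single-project-account route might look something like this. I'm assuming the twit npm module purely for illustration - any client that can hit statuses/update would do - and the hashtag is made up:

```javascript
// One shared project account instead of per-visitor OAuth madness.
var Twit = require('twit');

var T = new Twit({
  consumer_key:        'PROJECT_KEY',      // the project account's
  consumer_secret:     'PROJECT_SECRET',   // credentials, set up once
  access_token:        'PROJECT_TOKEN',
  access_token_secret: 'PROJECT_TOKEN_SECRET'
});

// Tag every tweet with the project hashtag; append the visitor's
// @handle only if they volunteered it at the kiosk.
function postQuote(quote, visitorHandle) {
  var status = quote + ' #comicfeed';  // hypothetical hashtag
  if (visitorHandle) status += ' @' + visitorHandle;
  T.post('statuses/update', { status: status }, function(err, data) {
    if (err) console.error('tweet failed:', err);
  });
}

postQuote('With great power comes great responsibility.', 'somevisitor');
```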