
Multitouch Kiosks Highlight Collection


It’s difficult to blog about a collections-focused touch screen in a museum without drawing comparisons to the amazing Collections Wall at the Cleveland Museum of Art — and feeling entirely inadequate. We’re not (yet!) anywhere near that scale, but luckily for our egos we weren’t aiming there with this project. We wanted a simple, intuitive interface and something we could evolve in-house after watching and analyzing user behavior.


AJ explores some of the garden artwork on the new screens.

We stayed true to the original idea of a “big swipe” interaction, creating what’s essentially an enormous photo-viewing application with zoomable images and some video mixed in. As another way to celebrate the 25th anniversary of the Minneapolis Sculpture Garden, we chose to launch the screens with highlights from the Garden.

Under the hood

The screens are running Google Chrome in Kiosk Mode, displaying a simple web page supported by a lot of custom Javascript. To keep things fast, each screen runs a Squid web proxy that keeps a local copy of the content, and the videos are also stored locally to avoid buffering issues. I expected Squid to cache the videos too, but because they’re served using HTTP Range Requests I had to install a very vanilla Apache server locally to get them working. It’s a bit of ugly overhead that keeps this from being a truly standalone solution, but not something we were able to solve on a deadline.


Design and New Media hanging paper representations of the screens to get a sense of scale and placement.

We’re also logging interactions using Google Analytics’ event tracking API (easy, since it’s just a web page!). Right now we track when the screen is “woken up” by a visitor’s interaction, when they open the textual “Info” button, and when they play a video. For video we also track whether they watch all the way to the end; if they don’t, we log the timestamp at which they stopped watching.
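For the curious, this kind of logging can be as simple as a few calls to the classic ga.js event API. The sketch below is illustrative rather than our production code; the category and action names, and the logVideo helper, are made up for the example:

[javascript]
// Sketch: logging kiosk interactions with the classic ga.js _trackEvent API.
// Category/action names here are illustrative, not the production values.
var _gaq = _gaq || [];

function logWake() {
  _gaq.push(['_trackEvent', 'Kiosk', 'wake']);
}

function logInfoOpened(slideTitle) {
  _gaq.push(['_trackEvent', 'Kiosk', 'info-open', slideTitle]);
}

function logVideo(videoEl, title) {
  videoEl.addEventListener('play', function () {
    _gaq.push(['_trackEvent', 'Video', 'play', title]);
  });
  videoEl.addEventListener('ended', function () {
    _gaq.push(['_trackEvent', 'Video', 'completed', title]);
  });
  // If the visitor walks away mid-video, record (roughly) where they stopped.
  videoEl.addEventListener('pause', function () {
    if (!videoEl.ended) {
      _gaq.push(['_trackEvent', 'Video', 'abandoned', title,
                 Math.round(videoEl.currentTime)]);
    }
  });
}
[/javascript]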

The project is on our public Github account, so please have a look if you’re interested.

Content admin

The content is fed to the web page via a simple JSON file. AJ built an online editor that lets us rearrange slides, import new collection objects and video, and edit or create the “Info” text. Too often our public-facing projects hit a tight launch deadline and the admin/maintenance side of things never gets finished, so I’m quite excited to have this working.


Lessons Learned

Gestures are different at this scale.

Sure, we know HTML5 and Javascript and have built some nice gestural interfaces before, but we weren’t prepared for how different a large-scale screen would be. Instead of tidy single-finger touch events, we saw people swipe with their whole hand, or two fingers, or four. People tried to zoom by dragging whole hands in and out. Kids would “tickle” the screen and overwhelm our scripts, leaving the device crippled. While we gained many days by developing and designing in a familiar toolset, we lost almost as many trying to rapidly mature our touch library. Midway through the project Ideum released Gestureworks Web Socket bindings for Javascript, which is absolutely the approach I’d take next time if we stick with HTML5. We learned the hard way that a true multi-touch vocabulary is not something you can just “whip up” from scratch…
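As a rough illustration of the problem (this is not our actual library, and no substitute for a real gesture framework), one way to make a whole-hand swipe read as a single swipe is to average all active touch points into one virtual finger. The element and swipe() names below are placeholders:

[javascript]
// Sketch: average all active touch points into one "virtual finger",
// so a swipe with two, four, or a whole hand of fingers still registers once.
function averagePoint(touches) {
  var x = 0, y = 0;
  for (var i = 0; i < touches.length; i++) {
    x += touches[i].clientX;
    y += touches[i].clientY;
  }
  return { x: x / touches.length, y: y / touches.length };
}

var start = null;

element.addEventListener('touchstart', function (e) {
  start = averagePoint(e.touches);
});

element.addEventListener('touchmove', function (e) {
  e.preventDefault();                 // keep the browser from scrolling/zooming
  var now = averagePoint(e.touches);
  if (start && Math.abs(now.x - start.x) > 50) {
    swipe(now.x > start.x ? 'right' : 'left');   // hypothetical swipe handler
    start = null;                     // one swipe per gesture
  }
});

element.addEventListener('touchend', function (e) {
  if (e.touches.length === 0) start = null;
});
[/javascript]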

Attract screen felt like a “home page”

Eric built a fantastic opening animation to attract visitors’ attention, which they would then dismiss to start interacting with the slides: “Tap to begin.” A number of our early testers mistook the animated gestural instructions for a menu, and were quite distressed when they couldn’t find a way back to the “menu.” We toyed with changing the intro and couldn’t solve it until we realized we just needed to change one word: “Swipe to begin.” By making the intro video the first slide, visitors discover the basic operation of the screens (swiping) at the same moment they “dismiss” the intro, and the intro is always available by simply swiping back. It’s a no-brainer now that we see it, but it’s one I’m glad we tested and re-tested.

Video is being watched until the end!

… about 10% of the time. That doesn’t sound like much, but it’s honestly higher than I expected (caveat: only a few days of stats). The space isn’t an especially inviting one for consuming media, but the content is compelling enough that people are happy to stay and watch. We’re still collecting data to see if there are any trends around the timestamp where people stop watching; I hope this will inform the type of video content that’s most appropriate for the medium and environment.

Position matters

So far the right-hand screen is used almost twice as often as the left-hand one, which sits a bit deeper in the space. It may just be ease of access, with people engaging with whatever they reach first, but we’re watching this closely for clues for future content: maybe one screen could be a long-running silent video? Maybe one screen never returns to the attract mode? Do we run something entirely different on each screen so there’s a reason to try both?

 Summary

A fun and challenging project that launched on time and does pretty much what we set out to do. Can’t ask for more!

If you’re in the Twin Cities, please stop by and try out the new screens, and tweet us @walkerartcenter with feedback.

 

Beyond Interface: #Opencurating and the Walker’s Digital Initiatives


The new Walker Art Center website “heralds a paradigmatic shift for innovative museum websites in creating an online platform with an emphasis on publishing,” write Max Andrews and Mariana Cánepa Luna of the Barcelona-based curatorial office Latitudes, who add that the site places the Walker “at the centre of generating conversations around content from both inside and outside the Walker’s activities.” The pair discusses ideas behind the site with Robin Dowden, the Walker’s director of new media initiatives, web editor Paul Schmelzer, and Nate Solas, senior new media designer, as part of #OpenCurating, Latitudes’ new research effort investigating the ways contemporary art projects “can function beyond the traditional format of exhibition-and-catalogue in ways which might be more fully knitted into the web of information which exists in the world today.” Consisting of a moderated Twitter discussion, an event in Barcelona, and a series of 10 online interviews, #OpenCurating launches with the conversation below. As #OpenCurating content partner, the Walker will host conversations from this developing series on its homepage.


Walkerart.org Design Notes #1


As you’ve likely seen, we recently launched a brand new, long overdue redesign of our web presence. Olga already touched on the major themes nicely, so suffice it to say, we’ve taken a major step towards reconceptualizing the Walker as an online content provider, creating another core institutional offering that can live on its own as an internationally-focused “digital Walker,” instead of something that merely serves the local, physical space.

We largely started from scratch with the user experience and design of the site; the old site, for all its merits, had started to show its age on that front, being originally designed over six years ago – an eternity in web-years. That said, we’re still traditionalists in some ways where new media design is concerned, and took a really minimal, monochromatic, print/newspaper-style approach to the homepage and article content. So in a way, it’s a unique hybrid of the old/time-tested (in layout) and new/innovative (in concept and content), hopefully all tempered by an unadorned, type-centric aesthetic that lets the variety of visuals really speak for themselves.

Our inspiration was a bit scattershot, as we tried to bridge a gap between high and low culture in a way reflective of the Walker itself. Arts and cultural sites were obviously a big part (particularly Metropolis M and its wonderful branded sidebar widgets), but not so much museums, which have traditionally been more conservative and promotionally driven. With our new journalistic focus, two common touchstones became The New York Times’ site and The Huffington Post, with the space in between being the sweet spot. The former goes without saying. The latter gets a bad rap, but we were intrigued by its slippery, weirdly click-enticing design tricks and general sense of content-driven chaos enlivened by huge contrasts in scale. The screaming headlines aren’t pretty, but they’re tersely honest and engaging in an area where a more traditional design would introduce some distance. And the content, however vapid, is true to its medium; it’s varied and easily digestible. (See also Jason Fried’s defense of the seemingly indefensible.)

Of course, we ended up closer to the classier, NYT side of things, and to that end, we were really fortunate to start this process around the advent of truly usable web font services. While the selection’s still rather meager beyond the workhorse classics and a smattering of more gimmicky display faces (in other words, Lineto, we’re waiting), really I’m just happy to see less Verdana in the world. And luckily for us, the exception-to-the-rule Colophon Foundry has really stepped up their online offerings lately – it’s Aperçu that you’re seeing most around the site, similar in form to my perennial favorite Neuzeit Grotesk but warmer, more geometric, and with a touch of quirk.

Setting type for the web still isn’t without its issues, with even one-point size adjustments sometimes resulting in wildly different renderings, but with careful trial-and-error testing and selective application of the life-saving -webkit-font-smoothing CSS property, we managed to get as close as possible to our ideal. It’s the latter in particular that allows for elegant heading treatments (though the effect is only visible in Safari and Chrome): set to antialiased, letterforms are less beholden to the pixel grid and more immune to the thickening that sometimes occurs on high-contrast backgrounds.

It’s not something I’d normally note, but we’re breaking away from the norm a bit with our article treatments, using the more traditional indentation format instead of the web’s usual paragraph spacing, which we find flows better. It’s done using a somewhat complex series of CSS pseudo-elements in combination with adjacent selectors; browser support is finally good enough to accomplish such a thing, thankfully, though strangely enough it does take a moment to get used to on screen. And we’re soon going to launch another section of the site with text rotation, another technique only recently made possible in pure CSS. Coming from a print background, it’s a bit exciting to have these tools available again.

Most of the layout is accomplished with the help of the 960 Grid System. Earlier attempts at something more semantically meaningful proved more hassle than they were worth, considering our plethora of more complex layouts. We’ve really attempted something tighter and more integrated than normally seen on the web, and I think it’s paid off well. That said, doing so really highlighted the difficulties of designing for dynamic systems of content – one such case that reared its head early on was titles in tiles (one of the few “units” of content used throughout the site).

A tricky issue at first considering our avoidance of ugly web aesthetics like fades (and artificial depth/dimensionality, and gradients, and drop shadows…), but one eventually solved with the implementation of our date treatments, whose connecting lines also function nicely as a cropping line – a tight, interlocking, cohesive system using one design element to solve the issues of another. We’ve tried to use similar solutions across the site, crafting a system of constraints and affordances, as in the case of our generated article excerpts:

Since freeform text fields on the web mean losing an element of control, with no specific design oversight of each individual display, we’ve implemented logic that calculates an article title’s line length and then generates only enough lines of the excerpt to match the height of any neighboring articles. It’s a small detail for sure, but we’re hoping these details add up to a fine experience overall.
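A rough sketch of that logic (with hypothetical class names and structure, not the actual site code) might look something like this, assuming the excerpt’s line-height is set in pixels:

[javascript]
// Sketch: show only as many excerpt lines as fit below the title,
// matching the height of the tallest neighboring tile.
// Selectors and class names are hypothetical, not the site's real markup.
function fitExcerpt(tile, neighbors) {
  var excerpt = tile.querySelector('.excerpt');
  var title = tile.querySelector('.title');
  // Assumes line-height is declared in px, not "normal".
  var lineHeight = parseFloat(getComputedStyle(excerpt).lineHeight);

  var targetHeight = 0;
  neighbors.forEach(function (n) {
    targetHeight = Math.max(targetHeight, n.offsetHeight);
  });

  var available = targetHeight - title.offsetHeight;
  var lines = Math.max(0, Math.floor(available / lineHeight));

  excerpt.style.height = (lines * lineHeight) + 'px';
  excerpt.style.overflow = 'hidden';
}
[/javascript]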

Anyway, there’s still more to come – you’ll see a few painfully neglected areas here and there (our collections in particular, but also the Sculpture Garden and to a lesser extent these blogs), but they’re next on our list and we’ll document their evolution here.


Digital Wayfinding in the Walker, Pt. 1


An ongoing conversation here at the Walker concerns the issue of systemic wayfinding within our spaces — certainly an important issue for an institution actively seeking attendance and public engagement, not to mention an institution whose building is literally a hybrid of the old and new (with our 2005 expansion). While not normally in New Media’s purview, and only occasionally so for Design, a recent initiative to improve the flow and general satisfaction of visitors brought with it the idea of using digital displays, with their malleable content and powerful visual appeal, to guide and direct people throughout the Walker.

Our new static directional signage

Currently installed in one location of an eventual three, and with a simple “phase one” version of the content, the Bazinet Lobby monitor banks cycle through the title graphics for all the exhibitions currently on view, providing a mental checklist of sorts that lets visitors tally what they have or haven’t yet seen, while directly referencing the vinyl graphics at each gallery entrance. The corner conveniently works as an intersection for two hallways leading to a roughly equivalent number of galleries in either direction: one leading to our collection galleries in the Barnes tower, the other to our special exhibition galleries in the Herzog & de Meuron expansion. To this end, we’ve repurposed the “street sign” motif used on our new vinyl wall graphics to point either way (which also functions as a nice spatial divider). Each display tower cycles through its given exhibitions with a simple sliding transition, exposing the graphics one by one. An interesting side effect of this motion and the high-contrast LCDs has been the illusion that each tower is a ’70s-style mechanical lightbox; I’ve been tempted to supplement it with a soundtrack of quiet creaking.

The system, powered by Sedna Presenter and running on four headless, remotely-accessible Mac Minis directly behind the wall, affords us a lot of flexibility. While our normal exhibitions cycle is a looped After Effects composition, we’re also working on everything from decorative blasts of light and pattern (the screens are blindingly bright enough to bathe almost the entire lobby in color), to live-updating Twitter streams (during parties and special events), to severe weather and fire alerts (complete with a rather terrifying pulsating field of deep red). In fact, this same system is now even powering our pre-show cinema trailers. I’m particularly interested in connecting these to an Arduino’s environmental sensors that would allow us to dynamically change color, brightness, etc. based on everything from temperature to visitor count to time of day — look for more on that soon.

See it in action:

Behind the scenes / Severe weather alert:

 

Installation:

  

Building the Benches and Binoculars Touchscreen Kiosk


[flickrvideo]http://www.flickr.com/photos/vitaflo/4119139342/[/flickrvideo]

For our exhibition Benches and Binoculars, I was asked to create a touchscreen kiosk. The artwork in Benches and Binoculars is hung salon-style, making it impractical to use wall labels on works that are hanging 20 feet up in the air. Many museums get around this with a gallery “map” (and our Design department did create these for the exhibition as well), but much like the exhibition itself, we thought it was a good time to “re-imagine” the gallery map.

I had never worked on a touchscreen app before. Sure, I’ve created kiosks here at the Walker, but a touchscreen brings some new challenges, as well as some new opportunities. Input is both easier and more difficult: you just use your hands, but people aren’t always sure how they’re supposed to use their hands to perform actions, or even that they can.


Walker Director Olga Viso using the Benches and Binoculars kiosk

As such, my main goal when making the kiosk was to keep it simple: don’t let the interface get in the way of the information. The interface should make it easy to find the content you want. Too many times I’ve seen these types of devices be more about the technology than about the content on them. This meant making the kiosk less “flashy,” but in turn it also made it more useful.

In the end the layout was rather simple. The screen shows an exact (to-the-pixel) representation of the artwork hanging on the walls. Moving your hand right and left on the kiosk moves the walls left and right, and tapping an artwork brings up a modal window with a high-res image of the object as well as the label text. There’s nothing particularly fancy or new about this idea, and there really shouldn’t be; much more would have taken away from the experience you were there for, namely viewing the artworks on the walls.

As for the technology involved, we decided to use the HP Touchsmart PC for this particular kiosk. It uses an infrared field above the screen to track “touch,” so you don’t actually have to make physical contact with the screen to activate a touch event; you just have to break the infrared plane.

We decided on the 22″ version because we wanted the machine to be single-use. With the way the computer is set up, it’s not all that great at multi-touch as it is, and wanting to keep the device as simple as possible led us to keep it usable by one person at a time. There is a larger version of the Touchsmart, but any larger than 22″ and it felt like more than one person was supposed to use it at a time, which we wanted to stay away from.

Since we didn’t have to worry about multi use, we had a few more options on what to build the interface with. Most people would probably go the Flash route but for us Flash is usually the choice of last resort. This is for various reasons, not the least of which for me is lack of experience with Flash. But most of what you can do in Flash these days can also be done in the browser, and given that front end interfaces are my forte, that’s where I went.

The interface is just a simple HTML page that dynamically calls ArtsConnectEd for its data. Thankfully, Nate was able to leverage a lot of the work he did on ACE for this, which sped up development drastically. Interaction is just built with some jQuery scripts I wrote. All in all it wasn’t that difficult to put together, except for a few snags (aren’t there always a few?).
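Stripped way down, the approach looks something like the sketch below. The endpoint, parameters, field names, and the showModal() helper are placeholders for illustration, not the real ArtsConnectEd API or my actual scripts:

[javascript]
// Sketch: fetch object data and lay out the "walls" of artwork tiles.
$.getJSON('http://example.org/artsconnected/objects?exhibition=benches', function (data) {
  $.each(data.objects, function (i, obj) {
    $('<div class="artwork">')
      .css({ left: obj.x, top: obj.y, width: obj.width, height: obj.height })
      // Tapping an artwork opens a modal with the high-res image and label text.
      .click(function () {
        showModal(obj.image_large, obj.label);   // hypothetical modal helper
      })
      .appendTo('#wall');
  });
});
[/javascript]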


Using the Kiosk.

One was that I found very early on that interacting with a touchscreen is a lot different from using a mouse. Hit areas are much different, since when you press on a screen your finger tends to “roll”: on the initial mousedown event the tip of your finger is in one spot, but as you press, the position shifts lower on the screen as your finger flattens out against the glass. This means the mouseup event lands in a totally different spot, which can cause issues with registering a proper click. The same problem exists when trying to register a drag event. As such, I had to program in some “slush” room to compensate.
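The “slush” idea itself is simple. Here’s an illustrative sketch, not the actual kiosk code; the 25-pixel tolerance and the openLabel() helper are made up for the example:

[javascript]
// Sketch of the "slush" idea: if the pointer moves less than a tolerance
// between mousedown and mouseup, treat it as a tap; otherwise it's a drag.
var SLUSH = 25;        // illustrative tolerance, tuned by trial and error
var downAt = null;

$('#wall').mousedown(function (e) {
  downAt = { x: e.pageX, y: e.pageY };
});

$('#wall').mouseup(function (e) {
  if (!downAt) return;
  var dx = e.pageX - downAt.x;
  var dy = e.pageY - downAt.y;
  if (Math.sqrt(dx * dx + dy * dy) < SLUSH) {
    openLabel(e.target);   // hypothetical: show the modal for the tapped work
  }
  downAt = null;
});
[/javascript]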

The second issue I had was with the computer and browser themselves. The Touchsmarts, while having a decent CPU, were really slow and sluggish in general. From the beginning I had targeted Firefox as the development platform, mainly because it has many fullscreen kiosk implementations available as add-ons. But once I loaded up 98 images with all of the CSS drop shadows, transparencies, etc., the entire browser was very sluggish and choppy.

I had recently read that Google was pushing Chrome v4 to be a lot faster, and a new beta had just been released. Testing it, I found it was about three times faster than Firefox. The problem: it had no true kiosk mode. I was in a bind: a nice fullscreen kiosk in Firefox that was choppy, and a decently fast browser in Chrome that had no kiosk mode.

After much searching I found that a kiosk patch was in development for the browser. The only issue was patching it into a build. Unfortunately, Google’s requirements for building Chrome on Windows are not trivial, and I couldn’t find anyone to do it for me. In desperation, I emailed the creator of the patch, Mohamed Mansour, to see if he could build me a binary with his patch in it. Thankfully he came through with a custom build of Chrome with kiosk mode built in that I could use for the exhibition. It’s worked wonderfully. (Note: this patch has since been checked into the Google Chrome nightlies.)

In the end it turned out better than I thought it would. Chrome was fast enough for me to go back and add new features like proper acceleration when “throwing” the walls. And the guys in the Walker carpentry shop, especially David Dick, made a beautiful pedestal to install the kiosk in, complete with a very nice black aluminum bezel. I couldn’t be happier, and from the looks of it our visitors feel the same. It goes a long way toward my (and New Media’s) goal of taking complex technology and making it simple for users, as well as the Walker’s mission of actively engaging audiences.

You can see more photos in my Flickr set:
http://www.flickr.com/photos/vitaflo/sets/72157622839288542/

Building the Walker’s mobile website with Google AppEngine, part 1


Over the summer, our department made a small but significant policy change. We decided to take a cue from Google’s 20% time philosophy and spend one day a week working on a Walker-related project of our choosing. Essentially, we wanted to embark on quicker, more nimble projects that hold more interest for our team. The project I decided to experiment with was making a mobile website for the Walker, m.walkerart.org.

Reviewing our current site to inform the mobile site

The web framework we use for most of our site has the ability, with some small changes, to load different versions of a page based on a visitor’s User Agent (what browser they’re using). This means we could detect whether a visitor is running IE on a desktop or Mobile Safari on an iPhone, and serve each a different version of the page. This is how a lot of mobile sites are done.

This is not the approach we went with for our mobile site, because it violates two of the primary rules (in my mind) of making a mobile website:

  1. Make it simple.
  2. Give people the stuff they’re looking for on their phones right away.

Our site is complicated: we have pages for different disciplines, a calendar with years of archives, and many specialty sites. Rule #1, violated. To address #2, I took a look at our web analytics to figure out what most people come to our site looking for. This won’t surprise anyone, but it’s hours, admission, directions, and what’s happening today at the Walker:


Top Walker Pages as noted by Google Analytics

So it seems pretty clear those things should be up front. One of the other things you might want to access on a mobile device is Art on Call. While Art on Call is designed primarily around dial-in access, there is also a website, but it isn’t optimized for the small screen of a smartphone. We have WiFi in most spaces within our building, so accessing Art on Call via a web interface and streaming audio over HTTP rather than POTS is a distinct possibility that I wanted to enable.

Prototyping with Google AppEngine

I decided to develop a quick prototype using Google AppEngine, thinking I’d end up using GAE in the end, too. Because this was a 20% time project, I had the freedom to do it using the technology I was interested in. AppEngine has the advantage of not being hosted on our own servers, so there was no need to configure any complicated server stuff. In my mind, AppEngine is perfect for a mobile site because of a mobile site’s low bandwidth requirements. Google doesn’t provide a ton for free, but if your pages are only 20K each, you can fit quite a bit within the quotas they do provide. AppEngine’s primary language is also python, a big plus, since python is the best programming language.

In about two days I built a proof of concept mobile site that displayed the big-ticket pages (hours, admission,etc.) and had a simple interface for Art on Call. Using iUi as a front-end framework was really, really useful here, because it meant that the amount of HTML/CSS/JS I had to code was super minimal, and I didn’t have to design anything.

I showed the prototype to Robin and she enthusiastically gave me the green light to work on it full-time.

Designing a mobile website

A strategy I saw when looking at mobile sites was to actually have two mobile sites: one for the A-grade phones (iPhone, Nokia S60, Android, Pre) and one for the B- and C-grade phones (Blackberry and Windows Mobile). The main difference between the two is the use of javascript and some more advanced layout. Depending on what version of Blackberry you look at, they have a pretty lousy HTML/CSS implementation, and horrendous or no javascript support.

To work around this, our mobile site does not use any javascript on most pages and tries to keep the HTML/CSS pretty simple. We don’t do any fancy animations to load between pages like iUi or jQtouch do: even on an iPhone those animations are slow. If you make your pages small enough, they should load fast enough and a transition is not necessary.

Designing mobile pages is fun. The size and interface methods of the device force you to re-think how people interact and what’s important. They’re also fun because they’re blissfully simple. Each page on our mobile site is usually just a headline, an image, a paragraph or two, and some links. Laying out and styling that content is not rocket surgery.

Initially, when I did my design mockups in Photoshop, I wanted to use a numpad on the site for Art on Call, much like the iPhone’s keypad for making a phone call. There’s no easy input for doing this, but I thought it wouldn’t be too hard to create one with a little javascript (for those that had it). Unfortunately, due to the way touchscreen phones handle click/touch events in the browser, there’s a delay between when you touch and when the click event fires in javascript. This meant it was possible to touch-type the number much faster than the javascript events fired. No go.
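The lag is easy to see if you log both events on the same control. This is a quick illustrative test, not production code; the button variable is a placeholder for whatever keypad element you’re testing:

[javascript]
// Quick illustration of the touch-to-click lag that sank the numpad idea:
// log the time between touchstart and the click event on the same element.
var touchedAt = 0;

button.addEventListener('touchstart', function () {
  touchedAt = +new Date();
});

button.addEventListener('click', function () {
  if (touchedAt) {
    // On this era of mobile WebKit the gap was routinely a few hundred ms,
    // long enough to tap the next key before this one registered.
    console.log('click fired ' + (+new Date() - touchedAt) + 'ms after touch');
  }
});
[/javascript]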

Instead, the latest versions of WebKit provide an HTML5 input field with a type of “number.” On iPhone OS 3.1 and later, it brings up the keyboard already switched to the numeric keys; it does not do this on iPhone OS versions prior to 3.1. I’m not sure how Android and the Pre handle it.


Mocked up Art on Call code input.


Implemented Art on Call code input.


Comparing smartphones

Here are a few screenshots of the site on various phones:


Palm Pre


Android 1.5


Blackberry 9630



Not pictured is Windows Mobile, because it looks really bad.

A future post may cover getting all of these emulators up and running, because it’s not as straightforward as it should be. Working with the BlackBerry emulator is especially painful.

How our mobile site works

The basic methodology for our mobile site is to pull the data, via either RSS or XML, from our main website, parse it, cache it, and re-template it for mobile visitors. Nearly all of the pages on our site are available via XML if you know how to look. Parsing XML into usable data is a computationally expensive task, so caching is very important. Thankfully, AppEngine provides easy access to memcache, so we memcache both the XML fetches and the parsed results as much as possible. Here’s our simple but effective URL parse/cache helper function:

[python]
from google.appengine.api import urlfetch
from google.appengine.api import memcache
from xml.dom import minidom

def parse(url, timeout=3600):
    memKey = hash(url)
    # Cache the raw fetch...
    r = memcache.get('fetch_%s' % memKey)
    if r is None:
        r = urlfetch.fetch(url)
        memcache.add(key="fetch_%s" % memKey, value=r, time=timeout)
    if r.status_code == 200:
        # ...and cache the parsed DOM separately.
        dom = memcache.get('dom_%s' % memKey)
        if dom is None:
            dom = minidom.parseString(r.content)
            memcache.add(key="dom_%s" % memKey, value=dom, time=timeout)
        return dom
    else:
        return False
[/python]

Google AppEngine does not impose much of a structure on your web app. Similar to Django’s urls.py, you link regular expressions for URLs to class-based handlers. You can’t pass any variables beyond what’s in the URL or the WebOb to the request handler. Each handler calls a different method depending on whether the HTTP request is a GET, POST, or DELETE. If you’re coming from the django world like me, this is not much of a big deal at first, but it gets tedious pretty fast. If I had it to do over again, I’d probably use app-engine-patch from the outset, and thus be able to use all the normal django goodies like middleware, template context, and far more configurable URLs.

Within each handler, we also cache the generated data where possible. That is, after our get handler has run, we cache all the values we pass to our template that won’t change over time. Here’s an example of the class that handles the visit section of our mobile site:

[python]
from google.appengine.ext import webapp
from google.appengine.ext.webapp import template
from google.appengine.api import memcache
from xml.dom import minidom
from utils import feeds, parse, template_context, text
import settings

class VisitDetailHandler(webapp.RequestHandler):
    def get(self):
        url = self.request.get("s") + "?style=xml"
        template_values = template_context.getTemplateValues(self.request)
        path = settings.TEMPLATE_DIR + 'info.html'
        memKey = hash(url)

        # Use the cached template values if we already have them.
        r = memcache.get('visit_%s' % memKey)
        if r and not settings.DEBUG:
            template_values.update(r)
            self.response.out.write(template.render(path, template_values))
        else:
            # Fetch and parse the XML, then cache the values that won't change.
            dom = parse.parse(url)
            records = dom.getElementsByTagName("record")
            contents = []
            for rec in records:
                title = text.clean_utf8(rec.getElementsByTagName('title')[0].childNodes[0].nodeValue)
                body = text.clean_utf8(rec.getElementsByTagName('body')[0].childNodes[0].nodeValue)
                contents.append({'title': title, 'body': body})

            back = {'href': '/visit/#top', 'text': 'Visiting'}
            cacheableTemplateValues = {'contents': contents, 'back': back}
            memcache.add(key='visit_%s' % memKey, value=cacheableTemplateValues, time=7200)
            template_values.update(cacheableTemplateValues)
            self.response.out.write(template.render(path, template_values))
[/python]

Dealing with parsing XML via the standard DOM methods is a great way to test your tolerance for pain. I would have used libxml and XPath, but AppEngine doesn’t provide those libraries in its python environment.

Because the only part of Django’s template system that AppEngine uses is the template language, and nothing else, we have to roll our own helper functions for template context. Meaning, if we want to pass a bunch of variables to our templates by default, something easy in django, we have to do it a little differently with GAE. I set up a function called getTemplateValues: we pass it the WebOb request, and it ferrets out and organizes the info we need for the templates, passing it back as a dict.

[python]
def ua_test(request):
    # Very rough User Agent sniffing for the handful of cases we care about.
    uastring = request.headers.get('user_agent')
    uaDict = {}
    if "Mobile" in uastring and "Safari" in uastring:
        uaDict['isIphone'] = True
    if 'BlackBerry' in uastring:
        uaDict['isBlackBerry'] = True
    return uaDict

def getTemplateValues(request):
    myDict = {}
    myDict.update(ua_test(request))
    myDict.update(googleAnalyticsGetImageUrl(request))
    return myDict
[/python]

In my next post, I’ll talk about how to track visitors on a mobile site using google analytics, without using javascript.

Exhibition wiki for Worlds Away


In the research process for Worlds Away: New Suburban Landscapes, Design Director and Curator Andrew Blauvelt uncovered many interesting words invented to describe suburbia. Andrew enlisted now-former Design Fellow Jayme Yen and Visual Arts Fellow Rachel Hooper to assist in the research for the exhibit, and to further research the lexicon of suburbia. To make the collecting of the terminology easier, we set up a private wiki for them to use.

The wiki of terms has transformed into the lexicon found in the Worlds Away exhibition catalog (soon to be found in the Walker Shop). We thought the lexicon would make a great resource, so we decided to build it into a larger exhibition website.

Worlds Away Website

Site URL: http://design.walkerart.org/worldsaway/

The exhibition website is still a wiki, and you can help enhance and add to the terms in the lexicon. Each entry in the lexicon consists of a definition, a section for images, and a google map. You can modify or enhance the definitions, or add new terms we might not know about. Images can be added to better describe the term. And map locations can also be submitted to give a satellite overview for terms that may best be seen from above (cloverleaf, for instance). We also added bios for all the artists in the exhibition, as well as a few sample essays and excerpts from other essays found in the catalog. Additionally, the selected videos from our YouTube competition can be found on the video section of the site.

The design of the site is drawn from the exhibition catalog design by Senior Designer Chad Kloepfer. The book uses different paper and ink colors in different sections to compartmentalize the types of content (essays, interviews, lexicon, and topics). The site also takes the book or paper metaphor and uses it as the navigation mechanism, allowing you to always see the index for the other sections of the site.

I wanted to enforce a strict structure on the wiki, not let it grow out of hand, and allow public edits only in the lexicon section. Like our other wiki sites, this one is based on pmwiki, which allows for a rigorous permissions system. We’re using a few extensions: extended markup (for footnotes), the Google Map API, NewPageBoxPlus, and DictIndex (for the lexicon list). Pmwiki is quite hackable, and the skin we constructed makes good use of that hackability. For the animation and accordion, I’m using my favorite javascript library, MooTools.

Please take some time to explore the site and enhance the lexicon of terms.

Quartz Composer in Leopard


Most techies probably know that Leopard has been out for a while now. Aside from all the goodness that is Time Machine, the thing that has me most excited is the new version of Quartz Composer. Create Digital Motion did a great post about what’s new, and you should read their post for the exhaustive info.

Aside from many useful things (closed loops!), there are two things that stick out to me as exceedingly useful for creating dynamic digital signage:

  • Data crunching: Quartz Composer can now load and download XML files, which makes it much easier to move large chunks of data in and out of your composition.
  • Multiple screens — or multiple projectors: There is now support for running Quartz compositions across multiple screens, and even across a cluster of machines.

Being able to use XML data rather than just an RSS feed could be extremely useful for specifying things beyond text and images: color values, timing, or any number of other settings could be included in the XML. The way we generate most of our pages here at the Walker, the output is already XML, so piping something like the Walker Calendar into a Quartz composition just got much easier.

The second thing on that list is the really exciting part. As part of the Developer Tools, Apple added a new application called Quartz Composer Visualizer, aka QCV. It does a couple of things. It lets you play a single Quartz composition across multiple screens, which you could not do with Quartz Composer in Tiger (I’m not sure yet how this works across multiple video cards). It also adds a network mode, where a host and clients share the same composition and synchronize over the network. Here’s a movie I made of a modified version of our Vineland Lobby Kiosk Screensaver:

[youtube]http://www.youtube.com/watch?v=baVKPNsNWyY[/youtube]

This is running on two different computers, my laptop and my desktop (with two displays). For the most part, the displays are in perfect sync. There is a little blip, but I think that’s probably because my desktop is struggling to keep up due to an older video card. There is also the option to run a second composition as an “optional processing composition,” meaning you can create another composition that holds the logic for processing the data and settings, which then gets passed along to the display compositions. Basically, this allows you to use an MVC way of doing things. Here’s a screenshot of the app in use:

Quartz Composer Visualizer

Finding clients is done via Bonjour, so it is limited to the local network, but all you have to do is fire it up on each machine and they find each other. Depending on how well separate video cards are supported, it could be quite easy to run a multiple-screen setup from one high-end Mac Pro, since most QC processing happens on the video card(s). Mac minis could also work, though due to their underwhelming onboard video they might not have enough horsepower to do any fancy Core Image effects.

QCV isn’t an industrial-level application; you couldn’t ship it off to a client as a complete solution for a digital signage project. But for use in house, or in a situation where it can be monitored more closely, it could be extremely useful. The complete source code to QCV is also included in the Developer Tools, and it’s meant as a template and example. An enterprising Objective-C developer (which I am not) could create such an industrial-level application from it. But even as a template application, it is surprisingly useful. QC and QCV are the things in Leopard that excite me the most.

New Teens website


Last week, after quite a bit of work, the re-designed teens site went live:


(larger screenshot)

In discussing what a new site might be like with Witt, Christi, and WACTAC, we came to the conclusion that the types of content we wanted on the site didn’t have a very clear relationship to each other, and that the audiences for each are different. There is, in effect, a “business audience” visiting the site for information on what Teen Programs is, what they do, how to apply, etc. This audience most likely consists of parents, other museum professionals, and teens looking to apply to WACTAC. The other audience is other teens, or people interested in what the teens are interested in. The new site literally divides the page in half, one side for each of these audiences.

“The business side of things” is a simple information-based site, loosely based on the look and feel of our artistic program sites. The layout was adapted somewhat to fit better into the dynamic space of the Teens site, but the style is the same. “The play side of things” is where the teens make their mark by posting blog entries, artwork, links, and events. There are several different ways that WACTAC makes this page theirs:

  • Blogging: the site’s back-end is WordPress, so blogging is built-in. Every teen in WACTAC now has an account, and Witt is working with the council to cultivate ideas that can be formed into posts.
  • Links: for when the teens find something that isn’t quite worthy of an entirely new blog post but deserves a short note and a link, we’ve got that covered too. The links are culled from del.icio.us via RSS feed. Right now we use a shared wactac account, but in the future, should any of the teens want their own del.icio.us accounts, a network can be set up and we can pull a combined feed.
  • Events are highly important to the site as well, and these are pulled via RSS from a shared account on Upcoming.org. We wanted the teens to be able to highlight not only their events at the Walker, but non-WACTAC Walker events as well as non-Walker events. I looked into several systems for essentially creating a group calendar, and using Upcoming this way seemed the easiest. It is essentially a social bookmarking service like del.icio.us, except it deals with the temporal and location-based data an event has. The time and location are in the RSS feed, which makes them a cinch to pull and display.
  • Art from the teens and other people that have influenced them will also be on the site. For the time being, this section is a category within the blog that gets special treatment. Images posted here are displayed in a larger size using a lightbox clone. Down the road, depending on how much this is used, we might consider replacing this with flickr. We’re using yahoo services for everything else, so why not make it complete?
  • Customizing the interface is one of the features that I think makes this page truly the teens’ space. Much like MySpace, the teens can customize the colors, text, and background of this side of the site. Unlike MySpace, they don’t edit the CSS themselves. Instead, the theme includes an admin panel that allows the teens to pick the colors for the boxes and text, as well as change the header and background images. I’m using a handy color picker based on MooTools to make it easy to use.

This is the most “dynamic” site I’ve built so far, and I re-learned a lot about using javascript, especially with the MooTools framework. The hyper-object-oriented nature of JS + Moo is both confusing and extremely powerful. For a javascript framework, MooTools is quite compact and does a lot, and there are quite a few classes and user-contributed scripts out there based on it. In addition to the color picker mentioned above, the business side of things uses a heavily modified version of SmoothGallery. The article “The Hows and Whys of Degradable Ajax” was also helpful in figuring out how to do the ajax loading on the business side in a semi-accessible fashion.
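For a taste of the MooTools flavor of things, here’s a stripped-down sketch of applying saved theme choices on page load. The savedTheme object and the class names are hypothetical, not the real admin-panel output; it just illustrates the addEvent/setStyle pattern the theme relies on:

[javascript]
// Sketch: apply the teens' saved theme choices on page load with core MooTools.
// The savedTheme values and .teen-box class are made up for illustration.
window.addEvent('domready', function () {
  var savedTheme = {
    background: '#1a1a1a',
    boxColor: '#ff6600',
    textColor: '#ffffff'
  };

  $(document.body).setStyle('background-color', savedTheme.background);
  $$('.teen-box').setStyles({
    'background-color': savedTheme.boxColor,
    'color': savedTheme.textColor
  });
});
[/javascript]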

There are other things in the works for the site, including a Facebook app and perhaps a MySpace widget. That is the subject for another day, however.

If you’re looking for the old site, it still exists in archived form: Archived Walker teens website.

Photobooth Redux


A Beautiful Smiling Intern

The next After Hours Preview Party isn’t happening for another two months, but I have recently been doing some work on the Party People Photobooth. During the Picasso Preview Party, we experienced some trouble with the camera control system. Specifically, the camera, an older Canon 10D, would get into a frustrating state where it wouldn’t talk to the computer, and the only cure was to cycle the power by physically disconnecting it from a power source; flipping the power switch wouldn’t do the trick. Add in that the CF card somehow corrupted itself and that the timing of the capture was always tenuous at best, and it was clear that gphoto2 wasn’t working as the camera connection program.

Enter PSRemote. Having seen the software package Photoboof, I knew there had to be a better way. Indeed, the requirements for Photoboof note that you need to buy PSRemote if you wish to use a Canon PowerShot camera. Looking at PSRemote, we see that it “also includes a DLL and a sample program (complete with C++ source code) which allows other applications to release the camera’s shutter and adjust the shutter speed and aperture”. Sounds nifty, huh? The only (big) downside of PSRemote is that it runs on Windows only. Despite the pain this would inflict upon me, I decided that the benefits potentially outweighed the personal suffering and inevitable reinstall/reboot sequences I would endure.

Camera and Software

Cameron Wittig in the Walker Photography Studio happened to have a Canon G7 that we could use, and it worked beautifully with PSRemote. The sample program that PSRemote provides for CLI access to snapping photos also works great, giving a reliable delay of about 1.5 seconds from hitting enter to the flash going off. Instead of using Photoboof, I used Max/MSP + Jitter to control the preview and PSRemote. Under Windows, Jitter needs Java installed from Sun, plus the vdig component so QuickTime can talk to FireWire devices. For the camera, we added the AC adapter so it wouldn’t run off batteries, and the lens adapter and macro ring adapter so the ring flash would fit. It fits great and the camera stays powered up.

Video Preview

PSRemote, in its GUI form, can show a live video preview of what the camera sees, pulled over USB. The CLI sample program doesn’t provide this, though it’s certainly possible for someone who knows C++ and the Windows development environment. Instead, I used the video output from the G7 connected to a FireWire digitizer box and then pulled the digitized video back into Max/MSP that way. It’s certainly not the most elegant solution, but it is very reliable. PSRemote does turn off all the icons on the camera display when you enable the video-out preview. The added benefit of all this is that I no longer have to align an iSight and the actual capture camera so they see the same thing; now the capture camera is also the preview camera. Our capture station isn’t very fast (an old AMD XP1700), so I’m only able to run the preview (320×240) at 5fps, but as a preview it works great. For the countdown text, displaying text in Jitter on Windows is not so good: jit.gl.text2d produces text that isn’t anti-aliased and just doesn’t look great. It does work, however.

Talking to PSRemote from Max proved to be a little tricky at first, mostly because I had forgotten how Windows is put together. The DOSHack external under Windows provides functionality similar to the shell or aka.shell externals on OSX. The trick is that you can only call built-in commands, or programs located in c:\windows\system32\ (which is why you can launch Notepad with the external). The solution is to simply place the PSRemote sample program and the DLL into the system32 directory, and then it magically works. Coming from OSX/Linux land, this doesn’t strike me as an optimal solution, but it does work.

Proof is in the Pudding

As a test of this whole setup, I set up the photobooth for a private event a little over a week ago. Despite only having a week to put it together, everything worked with only one minor glitch. PSRemote saves the captured photos in sequentially numbered files (1.jpg, 2.jpg, etc.). My scripts that transferred the files around were erasing the captured files after copying them to the display computers; when that happened, PSRemote would name the next file 1.jpg again, and when it got transferred, it would replace the existing file named 1.jpg. A quick rewrite of my transfer script fixed this, and then we were back in business. During the event, there were almost 100 photos captured and no crashes or other glitches.

Future Plans

The G7 has different white balance and levels than the 10D, so the post-processing script needs to be adjusted. I’m planning on cutting Photoshop out of the mix and instead post-processing the images with ImageMagick, since that can be easily installed on the projection computers. I also plan on enjoying the Frida Kahlo Preview Party a lot more, since I won’t have to be babysitting the camera the whole time. My hope is that this will make the Party People Photobooth a much more stable platform that won’t need to be revisited for testing every time we set it up.

Demo Movie

Attached is also a revised clip of what the projection looks like. My original announcement post featured a similar clip, but with test photos taken before we ever shot real photos during an event. Here is a clip using some photos taken during the Picasso opening (but not with the picasso-ify filter).
