Blogs Media Lab

Building the Benches and Binoculars Touchscreen Kiosk


[flickrvideo]http://www.flickr.com/photos/vitaflo/4119139342/[/flickrvideo]

For our exhibition Benches and Binoculars, I was asked to create a touchscreen kiosk. The artwork in Benches and Binoculars is hung salon-style, making it impractical to use wall labels on works that are hanging 20 feet up in the air. Many get around this by having a gallery “map” (and our Design dept did create these as well for the exhibit), but much like the exhibition itself, we thought it was a good time to “re-imagine” the gallery map.

I had never worked on a touchscreen app before. Sure, I’ve created kiosks here at the Walker, but a touchscreen brings some new challenges, as well as some new opportunities. Input is both easier and more difficult: you just use your hands, but people aren’t always sure how they’re supposed to use their hands to perform actions, or even that they can.

Walker Director Olga Viso using the Benches and Binoculars kiosk


As such, my main goal when making the kiosk was to keep it simple: don’t let the interface get in the way of the information. The interface should make it easy to find the content you want. Too many times I’ve seen these types of devices be more about the technology than about the content on them. This meant making the kiosk less “flashy,” but in turn it also made the kiosk more useful.

In the end the layout was rather simple. The screen shows an exact (to-the-pixel) representation of the artwork hanging on the walls. Moving your hand left and right on the kiosk moves the walls on screen left and right. Tapping an artwork brings up a modal window with a high-res image of the object as well as its label text. There is nothing particularly fancy or new about this idea, and there really shouldn’t be. Anything more would have taken away from the experience you were there for, namely viewing the artworks on the walls.

As for the technology involved, we decided to use the HP Touchsmart PC for this particular kiosk. It uses an infrared field above the screen to track “touch.” As such, you don’t actually have to make physical contact with the screen to activate a touch event; you just have to break the infrared plane.

We decided on the 22″ version because we wanted the machine to be single-user. With the way the computer is set up, it’s not all that great at multi-touch as it is, and wanting to keep the device as simple as possible meant keeping it usable by one person at a time. There is a larger version of the Touchsmart, but any bigger than the 22″ and it felt like more than one person was supposed to use it at a time, which we wanted to stay away from.

Since we didn’t have to worry about multi-user input, we had a few more options for building the interface. Most people would probably go the Flash route, but for us Flash is usually the choice of last resort. This is for various reasons, not the least of which, for me, is lack of experience with Flash. But most of what you can do in Flash these days can also be done in the browser, and given that front-end interfaces are my forte, that’s where I went.

The interface is just a simple HTML page that dynamically calls ArtsConnectEd for its data. Thankfully, Nate was able to leverage a lot of the work he did on ACE for this, which sped up development drastically. Interaction is built with some jQuery scripts I wrote. All in all, it wasn’t that difficult to put together, except for a few snags (aren’t there always?).

Using the Kiosk.


One thing I found very early on is that interacting with a touchscreen is a lot different from using a mouse. Hit areas are much different, since when you press on a screen your finger tends to “roll”: on the initial mousedown event the tip of your finger is in one spot, but as you press, the reported position shifts lower on the screen as your finger flattens out against it. This means the mouseup event fires in a different spot, which can prevent a proper click from registering. The same problem exists when trying to register a drag event. As such, I had to program in some “slush” room to compensate.
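The kiosk’s interaction code is jQuery, but the slush idea itself is simple to sketch. Here’s a rough Python version of the check; the 15-pixel radius and the function name are my own illustration, not values from the kiosk:

```python
# Sketch of "slush" room: treat a press as a tap only if the pointer moved
# less than a small tolerance between mousedown and mouseup. The radius is
# a made-up example value.
SLUSH_RADIUS = 15  # pixels of finger "roll" to forgive

def classify_touch(down, up, slush=SLUSH_RADIUS):
    """Return 'tap' if the down/up points fall within the slush radius,
    otherwise 'drag'. Points are (x, y) tuples in screen pixels."""
    dx = up[0] - down[0]
    dy = up[1] - down[1]
    if dx * dx + dy * dy <= slush * slush:
        return 'tap'
    return 'drag'
```

A slight finger roll of a few pixels still counts as a tap, while a deliberate swipe registers as a drag.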

The second issue was the computer and browser themselves. The Touchsmarts, while having a decent CPU, were really slow and sluggish in general. From the beginning I had targeted Firefox as the development platform, mainly because it has many fullscreen kiosk implementations as add-ons. But once I loaded up 98 images with all of the CSS drop shadows, transparencies, etc., the entire browser was very sluggish and choppy.

I had read recently that Google was pushing for Chrome v4 to be a lot faster, and a new beta of it had just been released. Testing it, I found it was about three times faster than Firefox. The issue was that it had no true kiosk mode. I was in a bind: I had a nice fullscreen kiosk in Firefox that was choppy, and a decent-speed browser in Chrome that had no kiosk mode.

After much searching I found that a kiosk patch for the browser was in development. The only issue was patching it into a build. Unfortunately, Google’s requirements for building Chrome on Windows are not trivial, and I couldn’t find anyone to do it for me. In desperation, I emailed the creator of the patch, Mohamed Mansour, to see if he could build me a binary with his patch in it. Thankfully he came through and offered up a custom build of Chrome with the kiosk mode built in that I could use for the exhibition. It’s worked wonderfully (note: this patch has since been checked into the Google Chrome nightlies).

In the end it turned out better than I thought it would. Chrome was fast enough for me to go back and add new features like proper acceleration when “throwing” the walls. And the guys in the Walker carpentry shop, especially David Dick, made a beautiful pedestal to install the kiosk in, complete with a very nice black aluminum bezel. I couldn’t be happier, and from the looks of it our visitors feel the same. It goes a long way toward my (and New Media’s) goal of taking complex technology and making it simple for users, as well as the Walker’s mission of the active engagement of audiences.

You can see more photos in my Flickr set:
http://www.flickr.com/photos/vitaflo/sets/72157622839288542/

Building the Walker’s mobile site, part 2 — google analytics without javascript


As I mentioned in my last post on our mobile site, one of the key features of the site is that we don’t use any javascript unless absolutely necessary. If you use Google Analytics (GA) as your stats package, this poses a problem, since the supported way to run GA is via a chunk of javascript at the bottom of every page. To make matters worse, the ga.js file is not gzipped, so you’re loading 9K that would otherwise be about 4K, on a platform where every byte counts. By contrast, if you could just serve the tracking gif, it’s 47 bytes, with no javascript that might not run on B-grade or below devices.

A few weeks ago, Google announced support for analytics inside mobile apps and some cursory support for mobile sites:

Google Analytics now tracks mobile websites and mobile apps so you can better measure your mobile marketing efforts. If you’re optimizing content for mobile users and have created a mobile website, Google Analytics can track traffic to your mobile website from all web-enabled devices, whether or not the device runs JavaScript. This is made possible by adding a server side code snippet to your mobile website which will become available to all accounts in the coming weeks (download snippet instructions). We will be supporting PHP, Perl, JSP and ASPX sites in this release. Of course, you can still track visits to your regular website coming from high-end, Javascript enabled phones.

And that is the extent of the documentation you will find anywhere on Google on how to run analytics without javascript. The code included is handy if you happen to run one of their supported platforms, but the Walker’s mobile site runs on the python side of AppEngine, so their code doesn’t do us much good. Thankfully, since they provide the source, we can, without too much trouble, translate the php or perl into python and make it AppEngine friendly.

How it works

Regular Google Analytics works by serving some javascript and a small 1px x 1px gif file to your site from Google. The gif lets Google learn many things from the HTTP request your browser makes, such as your browser, OS, where you came from, your rough geo location, etc. The javascript lets them learn all kinds of nifty things about your screen, flash versions, events that fire, and so on. And Google tracks you through a site by setting some cookies on that gif they serve you.

To use GA without javascript, we can still do most of that by generating our own gif file and passing some information back to Google from our server. That is, we generate a gif, assign and track our own cookie, gather that information as you move through the site, and then pass it back to Google via an HTTP request with the appropriate query strings, which they compile and treat as regular old analytics.

The Code

To make this work in AppEngine, we create a URL in our webapp that we’ll serve the gif from. I’m using “/ga/”:

[python]
def main():
    application = webapp.WSGIApplication(
        [('/', home.MainHandler),
         # edited out extra lines here
         ('/ga/', ga.GaHandler),
        ],
        debug=False)
    wsgiref.handlers.CGIHandler().run(application)
[/python]

And here’s the big handler for /ga/. I based it mostly off the php and some of the perl (click to expand the full code):

[code lang="python" collapse="true"]
from google.appengine.ext import webapp
from google.appengine.api import urlfetch
import re, hashlib, random, time, datetime, cgi, urllib, uuid

# google analytics stuff
VERSION = "4.4sh"
COOKIE_NAME = "__utmmobile"

# The path the cookie will be available to, edit this to use a different cookie path.
COOKIE_PATH = "/"

# Two years in seconds.
COOKIE_USER_PERSISTENCE = 63072000

# A transparent 1x1 GIF, byte by byte.
GIF_DATA = [
    chr(0x47), chr(0x49), chr(0x46), chr(0x38), chr(0x39), chr(0x61),
    chr(0x01), chr(0x00), chr(0x01), chr(0x00), chr(0x80), chr(0xff),
    chr(0x00), chr(0xff), chr(0xff), chr(0xff), chr(0x00), chr(0x00),
    chr(0x00), chr(0x2c), chr(0x00), chr(0x00), chr(0x00), chr(0x00),
    chr(0x01), chr(0x00), chr(0x01), chr(0x00), chr(0x00), chr(0x02),
    chr(0x02), chr(0x44), chr(0x01), chr(0x00), chr(0x3b)
]

class GaHandler(webapp.RequestHandler):
    def getIP(self, remoteAddress):
        if remoteAddress is None or remoteAddress == '':
            return ''

        # Capture the first three octets of the IP address and replace the
        # fourth with 0, e.g. 124.45.3.123 becomes 124.45.3.0
        res = re.findall(r'\d+\.\d+\.\d+\.', remoteAddress)
        if res:
            return res[0] + "0"
        else:
            return ""

    def getVisitorId(self, guid, account, userAgent, cookie):
        # If there is a value in the cookie, don't change it.
        if cookie is not None:
            return cookie

        if guid is not None:
            # Create the visitor id using the guid.
            message = guid + account
        else:
            # Otherwise this is a new user; create a new random id.
            message = userAgent + str(uuid.uuid1(self.getRandomNumber()))

        m = hashlib.md5()
        m.update(message)
        md5String = m.hexdigest()

        return "0x" + md5String[0:16]

    def getRandomNumber(self):
        return random.randrange(0, 0x7fffffff)

    def sendRequestToGoogleAnalytics(self, utmUrl):
        '''
        Make a tracking request to Google Analytics from this server.
        Copies the headers from the original request to the new one.
        If the request contains the utmdebug parameter, exceptions encountered
        communicating with Google Analytics are thrown.
        '''
        headers = {
            "User-Agent": self.request.headers.get('User-Agent'),
            "Accept-Language": self.request.headers.get('Accept-Language'),
        }
        if len(self.request.get("utmdebug")) != 0:
            urlfetch.fetch(utmUrl, headers=headers)
        else:
            try:
                urlfetch.fetch(utmUrl, headers=headers)
            except:
                pass

    def get(self):
        '''
        Track a page view: updates all the cookies and campaign tracker,
        makes a server side request to Google Analytics, and writes the
        transparent gif byte data to the response.
        '''
        timeStamp = time.time()

        domainName = self.request.headers.get('host')
        if domainName:
            domainName = domainName.partition(':')[0]
        else:
            domainName = "m.walkerart.org"

        # Get the referrer from the utmr parameter; this is the referrer to the
        # page that contains the tracking pixel, not the referrer for the
        # tracking pixel itself.
        documentReferer = self.request.get("utmr")

        if len(documentReferer) == 0:
            documentReferer = "-"
        else:
            documentReferer = urllib.unquote_plus(documentReferer)

        documentPath = self.request.get("utmp")
        if len(documentPath) == 0:
            documentPath = ""
        else:
            documentPath = urllib.unquote_plus(documentPath)

        account = self.request.get("utmac")
        userAgent = self.request.headers.get("User-Agent")
        if userAgent is None:
            userAgent = ""

        # Try and get the visitor cookie from the request.
        cookie = self.request.cookies.get(COOKIE_NAME)

        visitorId = str(self.getVisitorId(self.request.headers.get("X-DCMGUID"), account, userAgent, cookie))

        # Always try and add the cookie to the response.
        d = datetime.datetime.fromtimestamp(timeStamp + COOKIE_USER_PERSISTENCE)
        expireDate = d.strftime('%a,%d-%b-%Y %H:%M:%S GMT')

        self.response.headers.add_header('Set-Cookie', COOKIE_NAME + '=' + visitorId + '; path=' + COOKIE_PATH + '; expires=' + expireDate + ';')

        utmGifLocation = "http://www.google-analytics.com/__utm.gif"

        myIP = self.getIP(self.request.remote_addr)

        # Construct the gif hit url.
        utmUrl = utmGifLocation + "?" + "utmwv=" + VERSION + \
            "&utmn=" + str(self.getRandomNumber()) + \
            "&utmhn=" + urllib.pathname2url(domainName) + \
            "&utmr=" + urllib.pathname2url(documentReferer) + \
            "&utmp=" + urllib.pathname2url(documentPath) + \
            "&utmac=" + account + \
            "&utmcc=__utma%3D999.999.999.999.999.1%3B" + \
            "&utmvid=" + str(visitorId) + \
            "&utmip=" + str(myIP)

        # We don't send requests when we're developing.
        if domainName != 'localhost':
            self.sendRequestToGoogleAnalytics(utmUrl)

        # If the debug parameter is on, add a header to the response that
        # contains the url that was used to contact Google Analytics.
        if len(self.request.get("utmdebug")) != 0:
            self.response.headers.add_header("X-GA-MOBILE-URL", utmUrl)

        # Finally, write the gif data to the response.
        self.response.headers.add_header('Content-Type', 'image/gif')
        self.response.headers.add_header('Cache-Control', 'private, no-cache, no-cache=Set-Cookie, proxy-revalidate')
        self.response.headers.add_header('Pragma', 'no-cache')
        self.response.headers.add_header('Expires', 'Wed, 17 Sep 1975 21:32:10 GMT')
        self.response.out.write(''.join(GIF_DATA))
[/code]

So now we know what to do with requests to /ga/ when we get them; we just need to make sure the visitor’s browser requests that URL in the first place, which means generating the URL to embed in each page. With normal django, we would be able to use template context processors to automatically insert it into the page’s template values. But since AppEngine doesn’t use those, we have our own helper functions, some of which I showed in my last post. Here are the updated helper functions, with the googleAnalyticsGetImageUrl function included:

[code lang="python"]
import random, urllib
import settings

def googleAnalyticsGetImageUrl(request):
    url = ""
    url += '/ga/' + "?"
    url += "utmac=" + settings.GA_ACCOUNT
    url += "&utmn=" + str(random.randrange(0, 0x7fffffff))

    referer = request.referrer
    query = urllib.urlencode(request.GET)  # $_SERVER["QUERY_STRING"];
    path = request.path  # $_SERVER["REQUEST_URI"];

    if referer is None or len(referer) == 0:
        referer = "-"

    url += "&utmr=" + urllib.pathname2url(referer)

    if len(path) != 0:
        url += "&utmp=" + urllib.pathname2url(path)

    url += "&guid=ON"

    return {'gaImgUrl': url}

def getTemplateValues(request):
    myDict = {}
    myDict.update(ua_test(request))
    myDict.update(googleAnalyticsGetImageUrl(request))
    return myDict
[/code]

Assuming we use getTemplateValues to set up our initial template_values dict, we should have a variable named ‘gaImgUrl’ in our page. To use it, all we need to do is put this at the bottom of every page on the site:

[code lang="html"]
<img src="{{ gaImgUrl }}" alt="analytics" />
[/code]

My settings file contains the GA_ACCOUNT variable, but replaces the standard UA-XXXXXX-X setup with MO-XXXXXX-X. I’m assuming the MO- tells Google that it’s a mobile site, so it accepts the proxied requests.
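In settings, that’s just a constant; this sketch uses a placeholder account number, as in the original:

```python
# Hypothetical settings.py entry. The MO- prefix (instead of UA-) marks
# the account as a mobile property.
GA_ACCOUNT = "MO-XXXXXX-X"
```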

One thing to keep in mind with this technique is that you cannot cache your rendered templates. The image you serve will necessarily have a different query string every time, and if you cached it, you would ruin your analytics. Instead, you should cache nearly everything from your view functions except the gaImgUrl variable.
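One way to keep some caching, sketched here with my own names rather than the Walker’s actual code, is to cache the rendered page with a placeholder token and substitute the fresh gaImgUrl on every request:

```python
# Cache the expensive template render once, but splice in a per-request
# analytics URL so the tracking query string stays unique. The placeholder
# token and the dict cache are illustrative stand-ins for memcache.
PLACEHOLDER = '__GA_IMG_URL__'
_page_cache = {}

def render_page(path, build_body, ga_img_url):
    """Return the cached body for `path`, rendering it only once, with the
    placeholder replaced by this request's analytics image URL."""
    body = _page_cache.get(path)
    if body is None:
        body = build_body()  # expensive render; output contains PLACEHOLDER
        _page_cache[path] = body
    return body.replace(PLACEHOLDER, ga_img_url)
```

The render only runs on the first request; every later hit reuses the cached body with a fresh tracking URL.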

New Media kills in the Walker’s pumpkin carving contest


Every year, the Walker has a staff Halloween party, which includes a departmental pumpkin carving contest. And this isn’t just a carve-a-grocery-store-pumpkin contest; it’s a creative, conceptual, re-imagine-an-artist-or-artwork pumpkin contest. Invariably, our carpentry shop and registration departments blow everyone else out of the water. Those of us who are a little less hands-on with the artwork tend to be outclassed every year (exhibits 1, 2, and 3). New Media Initiatives never wins.

But not this year.

This year, we had a plan.

Actually, we came up with the plan after our no-show defeat last year, but we smartly held onto it for this year (thank you, iCal). On the day of the contest, we replaced every image of artwork on the Walker website with an image of a pumpkin.

walker homepage with pumpkins

And the rest of the pages (click to embiggen):

Calendar

Collections and Resources

Artists-in-Residence

Visual Arts

Design Blog



We ended up winning in the “Funniest Pumpkin” category.

Because we serve all of our media from a single server using lighttpd, and our files are all uniformly named, we were able to implement a simple rule set in lighty to replace the images. Instead of the requested file, each image request was redirected to a simple perl script that would grab a random jpg from our pool of pumpkin images and send its contents instead. Part of the plan was that we would only serve these images to people visiting our site from inside our internal network; the rest of the world would see our website just as always. In our department, we all unplugged our ethernet cables and ran off of our firewalled WiFi, which effectively put us outside the network, seeing nothing different on the site. We had a hard time holding back evil cackles as people came to us wondering how our site had been hacked, and watching it slowly dawn on them that this was our pumpkin.
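Our script was perl, but the core of it is just picking a random file from the pumpkin pool. A hypothetical Python equivalent of the same idea (the directory path is invented):

```python
import os, random

PUMPKIN_DIR = '/srv/media/pumpkins'  # made-up path to the pool of jpgs

def pick_pumpkin(directory=PUMPKIN_DIR):
    """Return the path of a random jpg from the pool; the web server sends
    its contents in place of whatever image was actually requested."""
    jpgs = [f for f in os.listdir(directory) if f.endswith('.jpg')]
    return os.path.join(directory, random.choice(jpgs))
```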

The images we used were all the Creative Commons-licensed flickr images of pumpkins I could find. There were 54 of them. Here they are, for credit:

Building the Walker’s mobile website with Google AppEngine, part 1


Over the summer, our department made a small but significant policy change. We decided to take a cue from Google’s 20% time philosophy and spend one day a week working on a Walker-related project of our choosing. Essentially, we wanted to embark on quicker, more nimble projects that hold more interest for our team. The project I decided to experiment with was making a mobile website for the Walker, m.walkerart.org.

Reviewing our current site to inform the mobile site

The web framework we use for most of our site has the ability, with some small changes, to load different versions of a page based on a visitor’s User Agent (i.e., which browser they’re using). This means we could detect whether a visitor was running IE on a desktop or Mobile Safari on an iPhone, and serve each of them a different version of the page. This is how a lot of mobile sites are done.

This is not the approach we went with for our mobile site, because it violates two of the primary rules (in my mind) of making a mobile website:

  1. Make it simple.
  2. Give people the stuff they’re looking for on their phones right away.

Our site is complicated: we have pages for different disciplines, a calendar with years of archives, and many specialty sites. Rule #1, violated. To address #2, I took a look at our web analytics to figure out what most people come to our site looking for. This won’t surprise anyone, but it’s hours, admission, directions, and what’s happening today at the Walker:

Top Walker Pages as noted by Google Analytics


So it seems pretty clear those things should be up front. One of the other things you might want to access on a mobile is Art on Call. While Art on Call is designed primarily around dial-in access, there is also a website, but it isn’t optimized for the small screen of a smartphone. We have WiFi in most spaces within our building, so accessing Art on Call via a web interface and streaming audio over HTTP rather than POTS is a distinct possibility that I wanted to enable.

Prototyping with Google AppEngine

I decided to develop a quick prototype using Google AppEngine, thinking I’d end up using GAE in the end, too. Because this was a 20% time project, I had the freedom to do it using the technology I was interested in. AppEngine has the advantage of not being hosted on our servers, so there was no need to configure any complicated server stuff. In my mind, AppEngine is perfect for a mobile site because of a mobile site’s low bandwidth requirements. Google doesn’t provide a ton for free, but if your pages are only 20K each, you can fit quite a bit within the quotas they do provide. AppEngine’s primary language is also python, a big plus, since python is the best programming language.

In about two days I built a proof-of-concept mobile site that displayed the big-ticket pages (hours, admission, etc.) and had a simple interface for Art on Call. Using iUi as a front-end framework was really, really useful here, because it meant the amount of HTML/CSS/JS I had to code was super minimal, and I didn’t have to design anything.

I showed the prototype to Robin and she enthusiastically gave me the green light to work on it full-time.

Designing a mobile website

A strategy I saw when looking at mobile sites was to actually have two mobile sites: one for the A-grade phones (iPhone, Nokia S60, Android, Pre) and one for the B- and C-grade phones (Blackberry and Windows Mobile). The main difference between the two is the use of javascript and some more advanced layout. Depending on what version of Blackberry you look at, they have a pretty lousy HTML/CSS implementation and horrendous or no javascript support.

To work around this, our mobile site does not use any javascript on most pages and tries to keep the HTML/CSS pretty simple. We don’t do any fancy animations to load between pages like iUi or jQtouch do: even on an iPhone those animations are slow. If you make your pages small enough, they should load fast enough and a transition is not necessary.

Designing mobile pages is fun. The size and interface methods of the device force you to re-think how people interact and what’s important. They’re also fun because they’re blissfully simple. Each page on our mobile site is usually just a headline, an image, a paragraph or two, and some links. Laying out and styling that content is not rocket surgery.

Initially, when I did my design mockups in Photoshop, I wanted to use a numpad on the site for Art on Call, much like the iPhone features for making a phone call. There’s no easy input for doing this, but I thought it wouldn’t be too hard to create one with a little javascript (for those that had it). Unfortunately, due to the way touchscreen phones handle click/touch events in the browser, there’s a delay between when you touch and when the click event fires in javascript. This meant that it was possible to touch/type the number much faster than the javascript events fired. No go.

Instead, the latest versions of WebKit provide an HTML5 input field with a type of “number”. On iPhone OS 3.1 and later, this brings up the keypad already switched to the numeric keys; it does not on earlier versions of iPhone OS. I’m not sure how Android and the Pre handle it.

Mocked up Art on Call code input.

Implemented Art on Call code input.


Comparing smartphones

Here’s a few screenshots of the site on various phones:

Palm Pre

Android 1.5

Blackberry 9630



Not pictured is Windows Mobile, because it looks really bad.

A future post may cover getting all of these emulators up and running, because it’s not as easy as it should be. Working with the Blackberry emulator is especially painful.

How our mobile site works

The basic methodology for our mobile site is to pull the data, via either RSS or XML, from our main website, parse it, cache it, and re-template it for mobile visitors. Nearly all of the pages on our site are available via XML if you know where to look. Parsing XML into usable data is a computationally expensive task, so caching is very important. Thankfully, AppEngine provides easy access to memcache, so we can memcache the XML fetches and the parsing as much as possible. Here’s our simple but effective URL parse/cache helper function:

[python]
from google.appengine.api import urlfetch
from xml.dom import minidom
from google.appengine.api import memcache

def parse(url, timeout=3600):
    memKey = hash(url)
    r = memcache.get('fetch_%s' % memKey)
    if r is None:
        r = urlfetch.fetch(url)
        memcache.add(key="fetch_%s" % memKey, value=r, time=timeout)
    if r.status_code == 200:
        dom = memcache.get('dom_%s' % memKey)
        if dom is None:
            dom = minidom.parseString(r.content)
            memcache.add(key="dom_%s" % memKey, value=dom, time=timeout)
        return dom
    else:
        return False
[/python]

Google AppEngine does not impose much of a structure on your web app. Similar to Django’s urls.py, you link regular expressions for URLs to class-based handlers. You can’t pass any variables to the request handler beyond what’s in the URL or the WebOb request. Each handler calls a different method depending on whether the HTTP request is a GET, POST, DELETE, etc. If you’re coming from the django world like me, this is not much of a big deal at first, but it gets tedious pretty fast. If I had it to do over again, I’d probably use app-engine-patch from the outset, and thus be able to use all the normal django goodies like middleware, template context, and far more configurable urls.
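Stripped of AppEngine itself, the dispatch model looks something like this toy version (the names are illustrative; real code hands the route list to webapp.WSGIApplication instead):

```python
import re

# One handler class per URL pattern; the HTTP method picks the method run.
class EventsHandler:
    def get(self):
        return 'event listing'

routes = [(r'^/events/$', EventsHandler)]

def dispatch(method, path):
    """Match the path against each route's regex and invoke the handler
    method named after the (lowercased) HTTP method."""
    for pattern, handler_cls in routes:
        if re.match(pattern, path):
            return getattr(handler_cls(), method.lower())()
    return '404'
```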

Within each handler, we also cache the generated data where possible. That is, after our get handler has run, we cache all the values that we pass to our template that won’t change over time. Here’s an example of the classes that handle the visit section of our mobile site:

[python]
from google.appengine.ext import webapp
from google.appengine.ext.webapp import template
from google.appengine.api import memcache
from xml.dom import minidom
from utils import feeds, parse, template_context, text
import settings

class VisitDetailHandler(webapp.RequestHandler):
    def get(self):
        url = self.request.get("s") + "?style=xml"
        template_values = template_context.getTemplateValues(self.request)
        path = settings.TEMPLATE_DIR + 'info.html'
        memKey = hash(url)

        r = memcache.get('visit_%s' % memKey)
        if r and not settings.DEBUG:
            template_values.update(r)
            self.response.out.write(template.render(path, template_values))
        else:
            dom = parse.parse(url)
            records = dom.getElementsByTagName("record")
            contents = []
            for rec in records:
                title = text.clean_utf8(rec.getElementsByTagName('title')[0].childNodes[0].nodeValue)
                body = text.clean_utf8(rec.getElementsByTagName('body')[0].childNodes[0].nodeValue)
                contents.append({'title': title, 'body': body})

            back = {'href': '/visit/#top', 'text': 'Visiting'}
            cacheableTemplateValues = {'contents': contents, 'back': back}
            memcache.add(key='visit_%s' % memKey, value=cacheableTemplateValues, time=7200)
            template_values.update(cacheableTemplateValues)
            self.response.out.write(template.render(path, template_values))
[/python]

Dealing with parsing XML via the standard DOM methods is a great way to test your tolerance for pain. I would use libxml and XPath, but AppEngine doesn’t provide those libraries in its python environment.
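A small helper takes some of the sting out of the DOM calls; this one is my own illustration, not code from our site:

```python
from xml.dom import minidom

def first_text(dom, tag):
    """Return the text content of the first <tag> element, or '' if the
    element is missing or empty, instead of raising an IndexError."""
    nodes = dom.getElementsByTagName(tag)
    if not nodes or not nodes[0].childNodes:
        return ''
    return nodes[0].childNodes[0].nodeValue
```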

Because the only part of Django’s template system that AppEngine uses is the template language, and nothing else, we have to roll our own helper functions for context. Meaning, if we want to pass a bunch of variables to our templates by default, something easy in django, we have to do it a little differently with GAE. I set up a function called getTemplateValues, which we pass the WebOb request; it ferrets out and organizes the info we need for the templates, passing it back as a dict.

[python]
def ua_test(request):
    uastring = request.headers.get('user_agent') or ''
    uaDict = {}
    if "Mobile" in uastring and "Safari" in uastring:
        uaDict['isIphone'] = True
    if 'BlackBerry' in uastring:
        uaDict['isBlackBerry'] = True
    return uaDict

def getTemplateValues(request):
    myDict = {}
    myDict.update(ua_test(request))
    myDict.update(googleAnalyticsGetImageUrl(request))
    return myDict
[/python]

In my next post, I’ll talk about how to track visitors on a mobile site using google analytics, without using javascript.

Art(ists) on the Verge 2: Grants for new media artists in Minnesota


Photo by k0a1a.net.

Minneapolis-based Northern Lights.mn has announced the second year of Art(ists) on the Verge:

Northern Lights announces a second round of Art(ists) on the Verge commissions (AOV2). AOV2 is an intensive, mentor-based fellowship program for 5 Minnesota-based, emerging artists or artist groups working experimentally at the intersection of art, technology, and digital culture with a focus on network-based practices that are interactive and/or participatory. AOV2 is generously supported by the Jerome Foundation.

Northern Lights was founded by former Walker New Media Curator Steve Dietz. The grants this year will be juried by Dietz, along with Kathleen Forde, Curator for Time-Based Arts at the Experimental Media and Performing Arts Center (EMPAC) in Troy, NY, and the Walker’s chief curator, Darsie Alexander.

The resulting show at the Weisman Art Museum from last year’s grantees was worth checking out. It is good to see work being done to create our own new media art structures here in Minnesota, rather than watching cool things like Eyebeam happen from afar.

And by the way, Northern Lights’ blog, Public Address, has become one of my favorite reads for neat artwork being made around the world. I confess I find a lot of art blogs rather dry and esoteric, but not Public Address. And, this may seem somewhat mundane and obvious, but nearly every post has an interesting image, which is nice for an art blog.

Access the Walker’s website from Minneapolis Public WiFi

If you’re visiting town and are out and about, getting info on the Walker and other cultural institutions in the city via the web just got easier. Minneapolis’ city-wide wireless network now lets users access walkerart.org without being a subscriber. Here’s how it works: On your computer, select the “City of Minneapolis Public WiFi” network. […]

If you’re visiting town and are out and about, getting info on the Walker and other cultural institutions in the city via the web just got easier. Minneapolis’ city-wide wireless network now lets users access walkerart.org without being a subscriber. Here’s how it works:

On your computer, select the “City of Minneapolis Public WiFi” network.

select_wifi

Open your browser and point yourself to walkerart.org. That should do it. You may be directed to a user agreement log in screen and then the “walled garden” of Minneapolis city information and lists of other accessible community sites. The Walker is listed under Area Arts & Culture > Arts & Museums > Art Museums.

Wireless Log In Screen

Wireless Log In Screen

Minneapolis Downtown Area Walled Garden Portal

Minneapolis Downtown Area Walled Garden Portal


A brief history of Minneapolis Municipal WiFi

Several years ago, the City of Minneapolis joined with USI Wireless to build out a city-wide network. The goal was to provide access for city government and citizens. The city would be a core tenant, paying USI, and USI would sell access to citizens. The city required USI to build a community portal and to provide grants out of its profits to non-profits working to bridge the digital divide.

Over the last several years, the network has slowly been built out. Right now there are some problem areas, which include Loring Park and the Minneapolis Sculpture Garden. My understanding is that these areas should see service sometime soon, though I’m not sure of any exact plans on the Sculpture Garden.

There are a couple things I have really liked about the network:

  • We’re doing it. A lot of cities have talked about building municipal WiFi, only to discover large problems that keep things from working well. There have been some issues in Minneapolis (it is taking longer to build the network than originally thought), but my impression is that it has worked fairly well.
  • It’s network neutral. The agreement between the city and USI specifically requires USI to not hinder any type of traffic over another.
  • Parts of it are free. This is how you can get to our site for free.
  • It’s low cost. The cost for being a subscriber is pretty low, compared to other wire-based providers.
  • It’s local. USI is a local company.

For more information on the network and the history, Peter Fleck has been blogging about Minneapolis WiFi for some time.

Behind-the-scenes of ArtsConnectEd: Art Finder

On September 1, 2009 the new ArtsConnectEd became available at ArtsConnectEd.org.  The new site provides access to more than 100,000 museum resources, including audio, video, images, and information about works of art, all of which can be saved and presented with the more powerful Art Collector. This project was at least three years in the […]

On September 1, 2009 the new ArtsConnectEd became available at ArtsConnectEd.org.  The new site provides access to more than 100,000 museum resources, including audio, video, images, and information about works of art, all of which can be saved and presented with the more powerful Art Collector.

This project was at least three years in the making, with the last two of those being the technical work of research, design, and development.  In this series of posts I’d like to present some of the decisions we struggled with and the process we went through in developing the new site.  I’ll start with the Art Finder, followed by a post on the Art Collector and presentations, and finish with a post about some of the more technical aspects including the data and harvesting technologies we’re using.

Art Finder

The Art Finder is the guts of the site, a portal into our thousands and thousands of objects, text records, and more.  I don’t think it’s an exaggeration to say designing and building this component was the biggest challenge we faced in the entire process.  We’ve redesigned the interface many times, often significantly, and are still not certain it’s right.  We’ve changed the underlying technology from a SQL / Lucene hybrid to a straight-up Solr search engine.  We’ve debated (endlessly) what fields to include, and what subset of our data to present in those fields.  We’ve gone back and forth over tab titles, and even whether to use tabs.  A rocky road, to say the least.

The big idea

What if we could start with everything and narrow it down from there?  Offer the user the entire collection and let them whittle away at it until they found what they wanted?

It’s all browse.  Keyword is just another filter.

To me this is the big breakthrough of the ArtsConnectEd interface.  We don’t hide the content behind a search box, or only show filters after you try a keyword.  We don’t have a separate page for “Advanced Search”, but we offer the same power through filters.  There is still a keyword field for those who know exactly what they’re looking for, but we get to use our metadata in a more powerful way than simple text.  That is, since we know the difference between the word “painting” appearing in the description and something that is a painting, we can present that to the user through filters.
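As a sketch of what “keyword is just another filter” looks like against the Solr engine mentioned above: every request is the same faceted browse query starting from everything, and the keyword only swaps one parameter. The field names here (`medium_s`, etc.) are hypothetical, not ArtsConnectEd’s actual schema:

```python
# Sketch of building Solr query parameters where keyword search is
# just one more narrowing step on top of a full browse.
def build_solr_params(keyword=None, filters=None):
    params = {
        'q': '*:*',               # start with everything: it's all browse
        'facet': 'true',
        'facet.field': ['medium_s', 'culture_s', 'style_s'],
        'facet.mincount': 1,      # hide zero-count choices (no dead ends)
        'fq': [],                 # one filter query per active facet
    }
    for field, value in (filters or {}).items():
        params['fq'].append('%s:"%s"' % (field, value))
    if keyword:
        params['q'] = keyword     # keyword narrows; it doesn't gate the UI
    return params

print(build_solr_params(filters={'medium_s': 'Prints'}))
```

Because a keyword-less request is still a valid query, the interface can show the full collection and its facet counts before the user types anything.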

How we got here

browse_wireframe

We wanted many ways for the user to explore the collection, with the idea that we might mimic some of the serendipity of exploring a gallery.  The tech committee felt early on that we’d need, in addition to a robust search, some way to freely browse.  Our initial attempt was to split the Art Finder into a Browse interface (left) and a Search interface (right).

search_wireframe

After forcing users to choose a content type to browse (Object, Text, etc), we exposed facets (fields) to allow filtering, e.g. by Medium or Style.  These facets were hidden by default in the Search interface, where instead you started with a keyword and content type as tabs — but could then click to reveal the same browse filters!  The more we played with these two ideas, the more we realized they were essentially the same thing, the only difference being a confusing first step and then having to learn two interfaces.  The real power of the site was in combining them, committing fully to Browse, and adding the keyword search as a filter.

Lastly, as we harvested more of our collections we realized pushing filters to the front offered a better way to drill down when many of our records are not text-heavy and thus less findable via keyword search.  In many ways browse leveled the playing field of our objects between those with healthy wall labels and those with more sparse metadata.

fact_discovery

What works

(In my humble opinion!)  A good browse has to do a few things:

  • Be fast. Studies have shown that slow search (or browse) results derail a user’s chain of thought and make it difficult to complete tasks.  We went one step further and did away with the “Go” button for everything but keyword – making a change to a pulldown automatically updates your result set.  (It’s not instant, but it’s fast enough that the action feels connected to the results)
  • Reduce complex fields to an intuitive subset. We have a huge range of unique strings for the Medium field, but we’ve broadly grouped them to present a reasonably sized pulldown.  Likewise for the Culture pulldown.  (We manually reduce the terms for Medium, and have an automated Bayesian filter for the Culture field)
  • Have good breadcrumbs. Users need to know what options are in effect and be able to backtrack easily.
  • Avoid dead ends. With many interfaces it’s entirely too easy to browse yourself into an empty set.  By showing numbers next to our filter choices, we can help users avoid these “dead ends”.
  • Expose variety. Type “Jasper Johns” in the artist field, and check out the Medium pulldown: it shows the bulk of his work is in Prints, but we also have a few sculptures, some mixed media, etc.  A nice way to see the variety of an artist’s work at-a-glance.
  • Autocomplete complicated fields. If a search box is targeted to a field (like our Artist box), it needs to autocomplete.  Leaving a field like this open to free text is asking for frustration as people get 0 results for “Claes Oldenberg“. (Auto-suggest “did you mean” should also work!)
  • Have lots of sort options. One of my favorite features of the new Art Finder is the ability to sort by size.  Super cool.  (check out the Scale tab in the detail view for more fun!)

I’m biased after this project, but I’m fairly convinced combining faceted browsing with keyword search is absolutely the way to go for collection search.  It gives the best of both worlds, powerful but still intuitive.
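To make the facet mechanics concrete, here is a toy in-memory version (the records are made up) showing how the counts next to filter choices fall straight out of the filtered result set, using the Jasper Johns example above:

```python
from collections import Counter

# Made-up records standing in for harvested collection data.
records = [
    {'artist': 'Jasper Johns', 'medium': 'Prints'},
    {'artist': 'Jasper Johns', 'medium': 'Prints'},
    {'artist': 'Jasper Johns', 'medium': 'Sculpture'},
    {'artist': 'Claes Oldenburg', 'medium': 'Sculpture'},
]

def facet_counts(records, field, filters):
    # Apply the active filters, then count the remaining values of the
    # target field; zero-count options simply never appear.
    matches = [r for r in records
               if all(r.get(f) == v for f, v in filters.items())]
    return Counter(r[field] for r in matches)

# With the Artist filter applied, the Medium pulldown shows the
# variety of that artist's work at a glance.
print(facet_counts(records, 'medium', {'artist': 'Jasper Johns'}))
# Counter({'Prints': 2, 'Sculpture': 1})
```

In the real site Solr computes these counts server-side, but the principle is the same: every number shown is derived from the current result set, so the user can never click into an empty page.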

facets_1

What could be better

… but is it really intuitive?  People seem to still be looking for a big inviting search box to start with.  The interface is crowded, and the number of options looks intimidating.  We’ve ended up avoiding using the words “Search” and “Browse” because they were loaded and causing confusion.  We’ve tried many versions of the tab bar to try to clarify which filters apply globally (e.g. Institution) and which only affect that tab (Works of Art have an Artist, for instance), but I don’t believe we’ve solved it.

I think the two components of the interface that give us the most trouble and confusion are actually the “Has Image” checkbox and the “Reset All” button.  These are consistently missed by people in testing, and we have tried almost everything we can think of.  Oh, and the back button.  The back button is “broken” in dynamic search like this.

Also, while I really like the look of the tiles in the results panel, we’ve had to heavily overload the rollover data to show fields we can sort by, since there’s no more room in the tiles.  We also intended to create alternative result formats, such as text bars that could show highlights on matching keywords, but this item was pushed back in favor of other features.

We’ve defaulted to sorting alphabetically by title when a user first reaches the page, and I’m no longer sure this is best.  As we’ve populated the collections in ArtsConnectEd we’ve ended up with a bunch of works that have numbers for titles, making the alpha sort less obvious.

You tell me!  Give the site a spin and post a comment – what works, and what could be better?

Resources:

  • Designing for Faceted Search (http://www.uie.com/articles/faceted_search/)
  • Faceted Search: Designing Your Content, Navigation, and User Interface (http://www.uie.com/events/virtual_seminars/facets/FacetedSearchVS35Handout.pdf)
  • Faceted Search (http://en.wikipedia.org/wiki/Faceted_search)
  • Best Practices for Designing Faceted Search Filters (http://www.uxmatters.com/mt/archives/2009/09/best-practices-for-designing-faceted-search-filters.php)
  • V&A Collections (beta) (http://www.vam.ac.uk/cis-online/search/?q=blue&commit=Search&category%5B%5D=5&narrow=1&offset=0&slug=0)
    • Their facets aren’t as up front as I’d like (you have to start with a keyword), but they’re done really well once they show up.
    • You can also cheat and leave keyword blank to get a full browse and go right to the facets…  Maybe start here?
  • MOMA Collections (http://www.moma.org/collection/search.php)
    • Nice presentation of facets, but I wish two things: show me a number next to all constraints, not just artists, and let me add a keyword.  (I got a dead end looking for on-view film from the 20s or 2000s)  I also like that it’s a true browse – leaving everything at “All” seems to give me the whole collection.

Web and video standards roundup

Eric at Adapted Studio put together this sweet little demo of HTML5 and Canvas in action, in the form of the Game of life. Source code is included, too, if you want to learn a few nifty things. Color me surprised, but Microsoft is actually purporting to work together on at least some of the […]


  • Eric at Adapted Studio put together this sweet little demo of HTML5 and Canvas in action, in the form of the Game of life. Source code is included, too, if you want to learn a few nifty things.
  • Color me surprised, but Microsoft is actually purporting to work together on at least some of the HTML5 spec. This could be good. Using <video> would be much easier if everyone would do it. But there still is the nasty issue of codecs, which is even more thorny than W3C specs.
  • This is from about a year ago, but John Resig (of jQuery fame) posted a very nice tutorial for Learning Advanced Javascript. It clears up a lot of confusion about seemingly advanced techniques.
  • Also worth perusing is Mark Pilgrim’s Gentle introduction to video formatting. If you’re a video geek, you might know some of this, but there’s detail that might fill in some gaps. The slides are also slightly amusing. I had no idea the .mkv format came from a bunch of guys in Russia that decided to opensource it.

HTML5 image from here.

IE6 Must Die (along with 7 and 8)

One of the trending topics on Twitter currently is “IE6 Must Die“, which are mainly retweets to a blog post entitled “IE6 Must Die for the Web to Move On“. This is certainly true, IE6 has many rendering bugs and lacks support for so many things that it is simply a nightmare to work with. […]

iedestroy

One of the trending topics on Twitter currently is “IE6 Must Die”, which is mainly retweets of a blog post entitled “IE6 Must Die for the Web to Move On”. This is certainly true: IE6 has many rendering bugs and lacks support for so many things that it is simply a nightmare to work with. The amount of time and money wasted in supporting this browser across the web is staggering.

In fact a few months ago the New Media department decided to drop support for IE6 on all future websites we create. The last website we built with full IE6 support was the new ArtsConnectEd, mainly because teachers tend to have little say in what browsers they can use on school computers. However, moving forward we’re phasing out support for IE6. It simply costs us too much time and resources for the dwindling number of users it has on our sites (currently under 10%, which is down 45% from last year and falling fast). We’re not alone, many other sites are doing this as well.

However, calling for the killing of IE6 ignores a bit of history, as well as new problems to come. There was a time not so long ago when all web developers wanted to be using IE6. The goal back then was to kill off IE5. You see, IE5 had an incorrect box model. Padding and margins were included in a box’s width and height instead of adding to them, as in standards-compliant browsers.

This caused all sorts of layout errors, and meant hacks (like the Simplified Box Model Hack) had to be used to get content to align correctly. These hacks were so widely used that Apple was going to allow them to be used in the first version of Safari until I convinced Dave Hyatt (lead Safari dev) to take out support for it. IE6 fixed this bug and everyone was happy (for a while anyway).

Going back further, IE5, even with its broken box model, was at one time the browser of choice back when IE4 was killing Javascript programmers because it didn’t support document.getElementById(). IE4 only supported the proprietary document.all leading to a horrible fracturing of Javascript, whereas IE5 added in the JS standard we still use today. Before people embraced IE5, cross platform JS on the web was almost non-existent, a fact I attempted to rectify by building my Assembler site in 1999.

The reason I bring this up is because we have a history of this behavior with regards to IE. We yearn for the more modern versions, only to end up hating those same versions later on. This will not change with the death of IE6. Soon, it will be IE7 that we are trashing, and then IE8 will be the bane of our existence.

This only becomes more clear as we move to HTML5. IE8 doesn’t support it, nor does it support any CSS3. While IE8 does support many of the older standards it had been ignoring for so long, having just recently been released it is already out of date. All of the other browsers do support these advanced web technologies, but IE is the lone browser to ignore them. Once again IE is two steps behind where the web is going, and severely limits our ability to push web technology forward to everyone for many years to come.

So while we celebrate the death of IE6, let us not forget that there will be a new thorn in our side to take its place in short order. IE7, you’re next.

Some thoughts on preserving Internet Art

We’re in the process of retiring our last production server running NT and ColdFusion (whew!), and this means we needed to get a few old projects ported to our newer Linux machines.  The main site, http://aen.walkerart.org/, is marginally database-driven: that is, it pulls random links and projects from a database to make the pages different […]

aenWe’re in the process of retiring our last production server running NT and ColdFusion (whew!), and this means we needed to get a few old projects ported to our newer Linux machines.  The main site, http://aen.walkerart.org/, is marginally database-driven: that is, it pulls random links and projects from a database to make the pages different each time you load.  The admin at the time was nice enough to include MDB dump files from the Microsoft Access(!) project database, and the free mdbtools software was able to extract the schema and generate import scripts.  Most of this page works as-is, but I had to tweak the schema by hand.

After the database was ported to MySQL, it was time to convert the ColdFusion to PHP.  (Note: the pages still say .cfm so we don’t break links or search engines – it’s running PHP on the server)  Luckily the scripts weren’t doing anything terribly complicated, mostly just selects and loops with some “randomness” thrown in.  I added a quick database-abstraction file to handle connections and errors and sanitize input, and things were up and running quickly.
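The real abstraction file was PHP, but the pattern translates to any language. Here is a minimal sketch of the same idea in Python with sqlite3 (the table and data are made up), where parameterized queries do the input sanitizing:

```python
import sqlite3

# One place to open connections and run queries; input is sanitized
# by always using placeholders, never string concatenation.
def query(db, sql, params=()):
    cur = db.execute(sql, params)
    return cur.fetchall()

db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE links (id INTEGER PRIMARY KEY, url TEXT)')
db.execute('INSERT INTO links (url) VALUES (?)', ('http://example.org/',))

# A random-row select, much like the ported pages do to vary each load:
rows = query(db, 'SELECT url FROM links ORDER BY RANDOM() LIMIT 1')
print(rows)  # [('http://example.org/',)]
```

Centralizing this in one file also means the next server migration only has one connection routine to rewrite.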

… sort of.  The site is essentially a repository of links to other projects, and was launched in February 2000.  As you might imagine there’s been some serious link rot, and I’m at a bit of a loss on how to approach a solution.  Steve Dietz, former New Media curator here at the Walker, has an article discussing this very issue here (ironically mentioning another Walker-commissioned project that’s suffered link rot.  Hmm.).

One strategy Dietz suggests is to update the links by hand as the net evolves.  This seems resource-heavy, even if a link-validating bot could automate the checking — someone would have to curate new links and update the database.  I’m not sure we can make that happen.

It also occurred to me to build a proxy using the Wayback Machine to try to give the user a view of the internet in early 2000.  There’s no API for pulling pages, but archive.org allows you to build a URL to get the copy of a page closest to a specific date, so it seems possible.  But this is tricky for other reasons – what if the site actually still exists?  Should we go to the live copy or the copy from 2000?  Do we need to pull the headers on the URL and only go to archive.org if it’s a 404 or a 500?  And what if the domain is now owned by a squatter who returns a 200 page of ads?  Also, archive.org respects robots.txt, so a few of our links have apparently never been archived and are gone forever.  Rough.
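The Wayback Machine’s timestamped-URL scheme at least makes the fallback logic sketchable. This is only the decision step (actually fetching the live status code, and detecting squatter pages, are the open questions above), and the function names are hypothetical:

```python
# Nearest-snapshot URL scheme used by the Wayback Machine:
# a timestamp (YYYYMMDDhhmmss) followed by the original URL.
WAYBACK = 'http://web.archive.org/web/%s/%s'

def fallback_url(url, status, snapshot='20000201000000'):
    # If the live site errors out (or we couldn't reach it at all),
    # route the visitor to the archived copy nearest February 2000.
    if status is None or 400 <= status < 600:
        return WAYBACK % (snapshot, url)
    # Site still answers; open question: live copy, or the 2000 copy?
    return url

print(fallback_url('http://example.org/', 404))
# http://web.archive.org/web/20000201000000/http://example.org/
```

A proxy built this way would still misroute squatter domains that return a cheerful 200 full of ads, which is why this is a someday project rather than a shipped one.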

In the end, the easy part was pulling the code to a new language and server – it works pretty much exactly like it did before, broken links and all.  The hard part is figuring out what to do with the rest of the web…  I do think I’ll try to build that archive.org proxy someday, but for now the fact it’s running on stable hardware is good enough.

Thoughts?  Anyone already built that proxy and want to share?

Next