Setting up smartphone emulators for testing mobile websites

While developing the Walker’s mobile site, I needed to test the site in a number of browsers to ensure compatibility. If you thought testing a regular website was a pain, mobile is an order of magnitude worse.

Our mobile site is designed to work on modern smartphones. If you’re using a 4 year old Nokia phone with a 120×160 screen, our site does not and will not work for you. If you want to test on older/less-smart phones, PPK has a quick overview post that has some pointers. Even so, getting the current smartphone OS running is no piece of cake. So this post will outline how to get iPhone, Android, WebOS, and, ugh, BlackBerry running in emulation. Note: I left out Windows Mobile, as does 99% of the smartphone buying public.

Let’s knock off some low hanging fruit: iPhone

Getting the iPhone to run in emulation is very easy. First, you need a Mac; if you’re a web developer, you’re probably working on one already. You need the iPhone developer tools, which means registering for a free Apple Developer account and agreeing to their lengthy and draconian agreement. Once that’s done, you can slurp down the humongous 2.3GB download and install it. When it finishes, you’ll have a folder named Developer at the root of your drive; navigate inside it and look for iPhone Simulator.app. That’s your boy, so launch it and, hooray! You can now test your sites in Mobile Safari.

iPhone Simulator in Apple Developer Tools

The iPhone Simulator is by far the easiest to work with, since it’s a nice pre-packaged app, just like any other. And it is a simulator, not an emulator. The difference: a simulator just looks and acts like an iPhone but actually runs native code on your machine, while an emulator emulates a different processor, running the phone’s entire OS inside it. The iPhone Simulator runs an x86 version of Safari and links against the mobile frameworks, also compiled for x86, on your local machine. A real, actual iPhone has all the same frameworks, but compiled for ARM on the phone.

Walker Art Center on the iPhone

Android

In typical Google fashion, Android is a bit more confusing, but also more powerful. There are roughly three different flavors of Android out in the wild: 1.5, 2.0, and 2.1. The browser is slightly different in each, but for most simple sites this should be relatively unimportant.

To get the Android Emulator running, download the Android SDK for your platform. I’m on a Mac, so that’s what I focus on here. You’ll need an up-to-date Java, but if you’re on a Mac, this isn’t a problem. Bonus points to Google for being the only one that doesn’t require you to sign up as a developer to get the SDK. Once you have the file, unpack it and pull up the Terminal. Navigate to the directory, then look inside the tools directory. You need to launch the “android” executable:

Very tricky: Launch the android executable.

This will launch the Android SDK and Android AVD Manager:

Android SDK and AVD Manager

The first thing you’ll probably want to do is go to Installed Packages and hit Update All…, just to get everything up-to-date. With that done, move back to Virtual Devices and we’re going to create a New virtual device:

Set up new Android Virtual Device

Name it whatever you want; I’d suggest using Android 2.1 as your target. Give it a file size of around 200MB (you don’t need much if you aren’t going to install any apps) and leave everything else at the defaults. Once it’s created, you can simply hit Start, wait for it to boot, and you’re now running Android:

Android Emulator Running

Palm WebOS

Palm is suffering as a company right now and, depending on the rumors, is about to be bought by Lenovo, HTC, Microsoft, or Google. Pretty much everyone agrees that WebOS is really cool, so it’s definitely worth testing your mobile site on. WebOS, like the iPhone and Android, uses WebKit as its browser, so things here are not going to be unexpected. The primary difference is the available fonts.

Running the WebOS emulator is very easy, at least on the Mac. First, grab a copy of VirtualBox, and second, download and install the Palm SDK. Both are linked from this page.

Installing VirtualBox is dead easy, and works just like any other OS X .pkg install process:

Then download and install the Palm WebOS SDK:

When you’re done, look in your /Applications folder for an app named Palm Emulator:

When you launch the emulator, you’ll be asked to choose a screen size (corresponding to either the Pre or the Pixi) and then it will start VirtualBox. It’s a bit more of a cumbersome startup process than the iPhone Simulator, but about on par with Android.

WebOS emulator starting up. It fires up VirtualBox in the background.

WebOS running.

BlackBerry

BlackBerry is the hairiest of all the smartphones in this post. Unless you know the Research In Motion ecosystem, and I don’t, it seems that there are about 300 different versions of BlackBerry and no easy way to know which you should test on. From what I can tell, the browser is basically the same on all the more recent phones, so picking one phone and using that should be fairly safe. RIM is working on BlackBerry 6, which is purported to include a WebKit-based browser, addressing the sadness their current browser causes in web developers everywhere.

The first thing you’re going to need to simulate a BlackBerry is a Windows machine. I use VMware Fusion on my Mac and have several instances of XP, so this is not a problem. The emulator is incredibly slow and clunky, so you’ll want a fairly fast machine, or a virtual machine with the RAM and CPU settings cranked up.

There are three basic parts you’ll need to install to get the BlackBerry emulator running: Java EE 5, BlackBerry Smartphone Simulator, and BlackBerry Email and MDS Services Simulator. Let’s start with Java. You need Java Enterprise Edition 5, and you can get that on Sun/Oracle’s Java EE page. I’ve had Java EE 5 and 6 on my windows machine for quite some time, so I’m not actually sure what version BlackBerry requires, but it’s one of them, and they’re both free. Get it, install it, and add one more hunk of junk to your system tray.

Now you need the emulators themselves. To get one, head over to the RIM emulator page and pick a device. I went with the 9630, since it seems fairly popular and was at the top of the list of devices to choose from. I’d grab the latest OS for a generic carrier. You will have to register for a no-cost RIM developer account to download the file.

While you’re there, you’ll also want to grab the MDS (aka Mobile Data Service) emulator. This is what enables the phone to actually talk to the internet. To grab this, click on the “view all BlackBerry Smartphone Simulator downloads” link, and then choose the first item from the list, “BlackBerry Email and MDS Services Simulator Package”. Click through and grab the latest version.

Once the download completes, copy the .EXEs to Windows and run them. You’ll walk through the standard Windows install process, and when you’re done, you’ll be left with some new menu items. Let’s start the MDS up first, since we’d like a net connection. Here’s where you should find it:

I like to take screenshots of Windows to show how crazy bad it is.

And this is what it looks like starting up:

MDS running. It's a java app.

Now let’s start up the phone emulator itself:

BlackBerry 9630 Emulator

For me, it takes quite a while to start the phone, about a minute. I started off with a smaller VM instance and it was 5+ minutes to launch, so be warned. After it starts, you’ll be left with a screen like this:

You can’t use the mouse to navigate on the screen, which is crazy counter-intuitive for anyone who has used the other three phones mentioned in this post. Instead, you click on the buttons on screen or use your keyboard to navigate. Welcome to 2005. To get to the browser, hit the hangup button, then arrow over to the globe and hit enter. You can hit the little re-wrap/undo button to get to the URL field once the browser launches. Here’s what our site looks like:

Building the Walker’s mobile site, part 2 — google analytics without javascript

As I mentioned in my last post on our mobile site, one of the key features for our site was making sure that we don’t use any javascript unless absolutely necessary. If you use Google Analytics (GA) as your stats package, this poses a problem, since the supported way to run GA is via a chunk of javascript at the bottom of every page. To make matters worse, the ga.js file is not gzipped, so you’re loading 9K that would otherwise be about 4K, on a platform where every byte counts. By contrast, if you could just serve the tracking gif, it’s 47 bytes. And there’s no javascript that might not run on B-grade or below devices.

A few weeks ago, Google announced support for analytics inside mobile apps and some cursory support for mobile sites:

Google Analytics now tracks mobile websites and mobile apps so you can better measure your mobile marketing efforts. If you’re optimizing content for mobile users and have created a mobile website, Google Analytics can track traffic to your mobile website from all web-enabled devices, whether or not the device runs JavaScript. This is made possible by adding a server side code snippet to your mobile website which will become available to all accounts in the coming weeks (download snippet instructions). We will be supporting PHP, Perl, JSP and ASPX sites in this release. Of course, you can still track visits to your regular website coming from high-end, Javascript enabled phones.

And that is the extent of the documentation you will find anywhere on Google on how to run analytics without javascript. The code included is handy if you happen to run one of their supported platforms, but the Walker’s mobile site runs on the python side of AppEngine, so their code doesn’t do us much good. Thankfully, since they provide the source, we can, without too much trouble, translate the PHP or Perl into python and make it AppEngine friendly.

How it works

Regular Google Analytics works by serving some javascript and a small 1px × 1px gif file to your site from Google. The gif lets Google learn many things from the HTTP request your browser makes, such as your browser, OS, where you came from, your rough geo location, etc. The javascript lets them learn all kinds of nifty things about your screen, flash version, events that fire, etc. And Google tracks you through a site by setting cookies on that gif they serve you.

To use GA without javascript, we can still do most of that by generating our own gif file and passing some information back to Google through our server. That is, we generate a gif, assign and track our own cookie, gather that information as you move through the site, and pass it back to Google in an HTTP request with the appropriate query strings, which Google then compiles and treats as regular old analytics.

The Code

To make this work in AppEngine, we create a URL in our webapp that we’ll serve the gif from. I’m using “/ga/”:

[python]
def main():
    application = webapp.WSGIApplication(
        [('/', home.MainHandler),
         # edited out extra lines here
         ('/ga/', ga.GaHandler),
        ],
        debug=False)
    wsgiref.handlers.CGIHandler().run(application)
[/python]

And here’s the big handler for /ga/. I based it mostly off the PHP and some of the Perl (click to expand the full code):

[code lang="python" collapse="true"]
from google.appengine.ext import webapp
from google.appengine.api import urlfetch
import re, hashlib, random, time, datetime, urllib, uuid

# google analytics stuff
VERSION = "4.4sh"
COOKIE_NAME = "__utmmobile"

# The path the cookie will be available to; edit this to use a different cookie path.
COOKIE_PATH = "/"

# Two years in seconds.
COOKIE_USER_PERSISTENCE = 63072000

GIF_DATA = [
    chr(0x47), chr(0x49), chr(0x46), chr(0x38), chr(0x39), chr(0x61),
    chr(0x01), chr(0x00), chr(0x01), chr(0x00), chr(0x80), chr(0xff),
    chr(0x00), chr(0xff), chr(0xff), chr(0xff), chr(0x00), chr(0x00),
    chr(0x00), chr(0x2c), chr(0x00), chr(0x00), chr(0x00), chr(0x00),
    chr(0x01), chr(0x00), chr(0x01), chr(0x00), chr(0x00), chr(0x02),
    chr(0x02), chr(0x44), chr(0x01), chr(0x00), chr(0x3b)
]

class GaHandler(webapp.RequestHandler):
    def getIP(self, remoteAddress):
        if not remoteAddress:
            return ''

        # Capture the first three octets of the IP address and replace
        # the fourth with 0, e.g. 124.45.3.123 becomes 124.45.3.0
        res = re.findall(r'\d+\.\d+\.\d+\.', remoteAddress)
        if res:
            return res[0] + "0"
        else:
            return ""

    def getVisitorId(self, guid, account, userAgent, cookie):
        # If there is a value in the cookie, don't change it.
        if cookie is not None:
            return cookie

        if guid is not None:
            # Create the visitor id using the guid.
            message = guid + account
        else:
            # Otherwise this is a new user; create a new random id.
            message = userAgent + str(uuid.uuid1(self.getRandomNumber()))

        m = hashlib.md5()
        m.update(message)
        md5String = m.hexdigest()

        return "0x" + md5String[0:16]

    def getRandomNumber(self):
        return random.randrange(0, 0x7fffffff)

    def sendRequestToGoogleAnalytics(self, utmUrl):
        '''
        Make a tracking request to Google Analytics from this server.
        Copies the headers from the original request to the new one.
        If the request contains the utmdebug parameter, exceptions encountered
        communicating with Google Analytics are thrown.
        '''
        headers = {
            "User-Agent": self.request.headers.get('User-Agent'),
            "Accept-Language": self.request.headers.get('Accept-Language'),
        }
        if len(self.request.get("utmdebug")) != 0:
            data = urlfetch.fetch(utmUrl, headers=headers)
        else:
            try:
                data = urlfetch.fetch(utmUrl, headers=headers)
            except:
                pass

    def get(self):
        '''
        Track a page view: update all the cookies and campaign tracker,
        make a server side request to Google Analytics, and write the
        transparent gif byte data to the response.
        '''
        timeStamp = time.time()

        domainName = self.request.headers.get('host') or ''
        domainName = domainName.partition(':')[0]

        if len(domainName) == 0:
            domainName = "m.walkerart.org"

        # Get the referrer from the utmr parameter; this is the referrer to the
        # page that contains the tracking pixel, not the referrer for the
        # tracking pixel itself.
        documentReferer = self.request.get("utmr")

        if len(documentReferer) == 0 or documentReferer == "0":
            documentReferer = "-"
        else:
            documentReferer = urllib.unquote_plus(documentReferer)

        documentPath = self.request.get("utmp")
        if len(documentPath) == 0:
            documentPath = ""
        else:
            documentPath = urllib.unquote_plus(documentPath)

        account = self.request.get("utmac")
        userAgent = self.request.headers.get("User-Agent") or ""

        # Try and get the visitor cookie from the request.
        cookie = self.request.cookies.get(COOKIE_NAME)

        visitorId = str(self.getVisitorId(self.request.headers.get("X-DCMGUID"), account, userAgent, cookie))

        # Always try and add the cookie to the response.
        d = datetime.datetime.fromtimestamp(timeStamp + COOKIE_USER_PERSISTENCE)
        expireDate = d.strftime('%a, %d-%b-%Y %H:%M:%S GMT')

        self.response.headers.add_header(
            'Set-Cookie',
            COOKIE_NAME + '=' + visitorId + '; path=' + COOKIE_PATH + '; expires=' + expireDate + ';')

        utmGifLocation = "http://www.google-analytics.com/__utm.gif"

        myIP = self.getIP(self.request.remote_addr)

        # Construct the gif hit url.
        utmUrl = utmGifLocation + "?" + "utmwv=" + VERSION + \
            "&utmn=" + str(self.getRandomNumber()) + \
            "&utmhn=" + urllib.pathname2url(domainName) + \
            "&utmr=" + urllib.pathname2url(documentReferer) + \
            "&utmp=" + urllib.pathname2url(documentPath) + \
            "&utmac=" + account + \
            "&utmcc=__utma%3D999.999.999.999.999.1%3B" + \
            "&utmvid=" + visitorId + \
            "&utmip=" + myIP

        # We don't send requests when we're developing.
        if domainName != 'localhost':
            self.sendRequestToGoogleAnalytics(utmUrl)

        # If the debug parameter is on, add a header to the response that
        # contains the url that was used to contact Google Analytics.
        if len(self.request.get("utmdebug")) != 0:
            self.response.headers.add_header("X-GA-MOBILE-URL", utmUrl)

        # Finally, write the gif data to the response.
        self.response.headers.add_header('Content-Type', 'image/gif')
        self.response.headers.add_header('Cache-Control', 'private, no-cache, no-cache=Set-Cookie, proxy-revalidate')
        self.response.headers.add_header('Pragma', 'no-cache')
        self.response.headers.add_header('Expires', 'Wed, 17 Sep 1975 21:32:10 GMT')
        self.response.out.write(''.join(GIF_DATA))
[/code]

Now that we know what to do with requests to /ga/, we just need to generate the URL the visitor’s browser will request in the first place. With normal Django, we could use the template context to insert it into the page’s template values automatically. But since AppEngine doesn’t use that, we have our own helper functions, some of which I showed in my last post. Here are the updated helper functions, with the googleAnalyticsGetImageUrl function included:

[code lang="python"]
import random, urllib

import settings

def googleAnalyticsGetImageUrl(request):
    url = ""
    url += '/ga/' + "?"
    url += "utmac=" + settings.GA_ACCOUNT
    url += "&utmn=" + str(random.randrange(0, 0x7fffffff))

    referer = request.referrer or ""
    query = urllib.urlencode(request.GET)  # $_SERVER["QUERY_STRING"];
    path = request.path  # $_SERVER["REQUEST_URI"];

    if len(referer) == 0:
        referer = "-"

    url += "&utmr=" + urllib.pathname2url(referer)

    if len(path) != 0:
        url += "&utmp=" + urllib.pathname2url(path)

    url += "&guid=ON"

    return {'gaImgUrl': url}

def getTemplateValues(request):
    myDict = {}
    myDict.update(ua_test(request))
    myDict.update(googleAnalyticsGetImageUrl(request))
    return myDict
[/code]

Assuming we use getTemplateValues to set up our initial template_values dict, we should have a variable named ‘gaImgUrl’ in our page. To use it, all we need to do is put this at the bottom of every page on the site:

[code lang="html"]
<img src="{{ gaImgUrl }}" alt="analytics" />
[/code]

My settings file contains the GA_ACCOUNT variable, but replaces the standard GA-XXXXXX-X setup with MO-XXXXXX-X. I’m assuming the MO- prefix tells Google that it’s a mobile account, so it accepts the proxied requests.

One thing to keep in mind with this technique is that you cannot cache your rendered templates. The image URL you serve will necessarily have a different query string every time, and if you cached it, you would ruin your analytics. Instead, cache nearly everything from your view functions except the gaImgUrl variable.
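That pattern is easy to get wrong, so here’s a minimal sketch of it (the names are my own, and a plain dict stands in for memcache): the expensive values are cached, but the analytics URL is rebuilt on every request.

[python]
import random

_cache = {}  # stand-in for memcache in this sketch

def get_page_values(path, ga_account="MO-XXXXXX-X"):
    # Expensive, cacheable template values: computed once per path.
    cacheable = _cache.get(path)
    if cacheable is None:
        cacheable = {"contents": "rendered data for " + path}
        _cache[path] = cacheable
    values = dict(cacheable)
    # The tracking URL is rebuilt per request, so utmn is always fresh
    # and Google counts every hit; caching it would collapse your stats.
    values["gaImgUrl"] = "/ga/?utmac=%s&utmp=%s&utmn=%d" % (
        ga_account, path, random.randrange(0, 0x7fffffff))
    return values
[/python]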

Building the Walker’s mobile website with Google AppEngine, part 1

Over the summer, our department made a small but significant policy change. We decided to take a cue from Google’s 20% time philosophy and spend one day a week working on a Walker-related project of our choosing. Essentially, we wanted to embark on quicker, more nimble projects that hold more interest for our team. The project I decided to experiment with was making a mobile website for the Walker, m.walkerart.org.

Reviewing our current site to inform the mobile site

The web framework we use for most of our site has the ability, with some small changes, to load different versions of a page based on a visitor’s User Agent (what browser they’re using). This means we could detect whether a visitor was running IE on a desktop or Mobile Safari on an iPhone, and serve each a different version of the page. This is how a lot of mobile sites are done.

This is not the approach we went with for our mobile site, because it violates two of the primary rules (in my mind) of making a mobile website:

  1. Make it simple.
  2. Give people the stuff they’re looking for on their phones right away.

Our site is complicated: we have pages for different disciplines, a calendar with years of archives, and many specialty sites. Rule #1, violated. To address #2, I took a look at our web analytics to figure out what most people come to our site looking for. This won’t surprise anyone, but it’s hours, admission, directions, and what’s happening today at the Walker:

Top Walker Pages as noted by Google Analytics

So it seems pretty clear those things should be up front. One of the other things you might want to access on a mobile is Art on Call. While Art on Call is designed primarily around dial-in access, there is also a website, but it isn’t optimized for the small screen of a smartphone. We have WiFi in most spaces within our building, so accessing Art on Call via a web interface and streaming audio over HTTP rather than POTS is a distinct possibility that I wanted to enable.

Prototyping with Google AppEngine

I decided to develop a quick prototype using Google AppEngine, thinking I’d end up using GAE in the end, too. Because this was a 20% time project, I had the freedom to do it using the technology I was interested in. AppEngine has the advantage of not being hosted on our server, so there was no need to configure any complicated server stuff. In my mind, AppEngine is perfect for a mobile site because of a mobile site’s low bandwidth requirements. Google doesn’t provide a ton for free, but if your pages are only 20K each, you can fit quite a bit within the quotas they do provide. AppEngine’s primary language is also python, a big plus, since python is the best programming language.

In about two days I built a proof of concept mobile site that displayed the big-ticket pages (hours, admission,etc.) and had a simple interface for Art on Call. Using iUi as a front-end framework was really, really useful here, because it meant that the amount of HTML/CSS/JS I had to code was super minimal, and I didn’t have to design anything.

I showed the prototype to Robin and she enthusiastically gave me the green light to work on it full-time.

Designing a mobile website

A strategy I saw when looking at mobile sites was to actually have two mobile sites: one for the A-grade phones (iPhone, Nokia S60, Android, Pre) and one for the B- and C-grade phones (BlackBerry and Windows Mobile). The main difference between the two is the use of javascript and some more advanced layout. Depending on what version of BlackBerry you look at, it has a pretty lousy HTML/CSS implementation, and horrendous or no javascript support.

To work around this, our mobile site does not use any javascript on most pages and tries to keep the HTML/CSS pretty simple. We don’t do any fancy animations to load between pages like iUi or jQtouch do: even on an iPhone those animations are slow. If you make your pages small enough, they should load fast enough and a transition is not necessary.

Designing mobile pages is fun. The size and interface methods of the device force you to re-think how people interact and what’s important. They’re also fun because they’re blissfully simple. Each page on our mobile site is usually just a headline, an image, a paragraph or two, and some links. Laying out and styling that content is not rocket surgery.

Initially, when I did my design mockups in Photoshop, I wanted to use a numpad on the site for Art on Call, much like the iPhone features for making a phone call. There’s no easy input for doing this, but I thought it wouldn’t be too hard to create one with a little javascript (for those that had it). Unfortunately, due to the way touchscreen phones handle click/touch events in the browser, there’s a delay between when you touch and when the click event fires in javascript. This meant that it was possible to touch/type the number much faster than the javascript events fired. No go.

Instead, the latest versions of WebKit provide an HTML5 input field with a type of “number”. On iPhone OS 3.1 and later, it brings up the keypad already switched to the numeric keys; iPhone OS versions prior to 3.1 do not. I’m not sure how Android and the Pre handle it.

Mocked up Art on Call code input.

Implemented Art on Call code input.


Comparing smartphones

Here’s a few screenshots of the site on various phones:

Palm Pre

Android 1.5

BlackBerry 9630



Not pictured is Windows Mobile, because it looks really bad.

A future post may cover getting all of these emulators up and running, because it’s not as easy as it should be. Working with the BlackBerry emulator is especially painful.

How our mobile site works

The basic methodology for our mobile site is to pull the data, via either RSS or XML from our main website, parse it, cache it, and re-template it for mobile visitors. Nearly all of the pages on our site are available via XML if you know how to look. Parsing XML into usable data is a computationally expensive task, so caching is very important. Thankfully, AppEngine provides easy access to memcache, so we can memcache the XML fetches and the parsing as much as possible. Here’s our simple but effective URL parse/cache helper function:

[python]
from google.appengine.api import urlfetch
from google.appengine.api import memcache
from xml.dom import minidom

def parse(url, timeout=3600):
    memKey = hash(url)
    r = memcache.get('fetch_%s' % memKey)
    if r is None:
        r = urlfetch.fetch(url)
        memcache.add(key="fetch_%s" % memKey, value=r, time=timeout)
    if r.status_code == 200:
        dom = memcache.get('dom_%s' % memKey)
        if dom is None:
            dom = minidom.parseString(r.content)
            memcache.add(key="dom_%s" % memKey, value=dom, time=timeout)
        return dom
    else:
        return False
[/python]

Google AppEngine does not impose much structure on your web app. Similar to Django’s urls.py, you link regular expressions for URLs to class-based handlers. You can’t pass any variables beyond what’s in the URL or the WebOb to the request handler. Each handler calls a different method depending on whether it’s a GET, POST, or DELETE request. If you’re coming from the Django world like me, this is not a big deal at first, but it gets tedious pretty fast. If I had it to do over again, I’d probably use app-engine-patch from the outset, and thus be able to use all the normal Django goodies like middleware, template context, and far more configurable URLs.
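The dispatch model itself is simple enough to sketch in plain python. This toy router is my own illustration (not AppEngine’s actual implementation): regexes map to handler classes, and the HTTP verb picks the method.

[python]
import re

class Router(object):
    # Toy illustration of webapp-style dispatch: regex -> handler class.
    def __init__(self, routes):
        self.routes = [(re.compile(pattern + "$"), handler)
                       for pattern, handler in routes]

    def dispatch(self, method, path):
        for pattern, handler_cls in self.routes:
            match = pattern.match(path)
            if match:
                # GET maps to .get(), POST to .post(), and so on;
                # regex groups become positional arguments.
                return getattr(handler_cls(), method.lower())(*match.groups())
        return "404 Not Found"

class GifHandler(object):
    def get(self):
        return "image/gif"

router = Router([('/ga/', GifHandler)])
[/python]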

Within each handler, we also cache the generated data where possible. That is, after our get handler has run, we cache all the values that we pass to our template that won’t change over time. Here’s an example of the classes that handle the visit section of our mobile site:

[python]
from google.appengine.ext import webapp
from google.appengine.ext.webapp import template
from google.appengine.api import memcache
from xml.dom import minidom
from utils import feeds, parse, template_context, text
import settings

class VisitDetailHandler(webapp.RequestHandler):
    def get(self):
        url = self.request.get("s") + "?style=xml"
        template_values = template_context.getTemplateValues(self.request)
        path = settings.TEMPLATE_DIR + 'info.html'
        memKey = hash(url)

        r = memcache.get('visit_%s' % memKey)
        if r and not settings.DEBUG:
            template_values.update(r)
            self.response.out.write(template.render(path, template_values))
        else:
            dom = parse.parse(url)
            records = dom.getElementsByTagName("record")
            contents = []
            for rec in records:
                title = text.clean_utf8(rec.getElementsByTagName('title')[0].childNodes[0].nodeValue)
                body = text.clean_utf8(rec.getElementsByTagName('body')[0].childNodes[0].nodeValue)
                contents.append({'title': title, 'body': body})

            back = {'href': '/visit/#top', 'text': 'Visiting'}
            cacheableTemplateValues = {'contents': contents, 'back': back}
            memcache.add(key='visit_%s' % memKey, value=cacheableTemplateValues, time=7200)
            template_values.update(cacheableTemplateValues)
            self.response.out.write(template.render(path, template_values))
[/python]

Dealing with parsing XML via the standard DOM methods is a great way to test your tolerance for pain. I would use libxml and XPath, but AppEngine doesn’t provide those libraries in its python environment.
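A small helper can take some of the sting out of it. Something like this (first_text is my own name, not part of minidom) pulls the text of the first matching element:

[python]
from xml.dom import minidom

def first_text(dom, tag):
    # Return the text content of the first <tag> element,
    # or "" if the element is missing or empty.
    nodes = dom.getElementsByTagName(tag)
    if not nodes or not nodes[0].childNodes:
        return ""
    return nodes[0].childNodes[0].nodeValue

# A record shaped like the ones our visit pages return:
doc = minidom.parseString(
    "<record><title>Hours</title><body>Open daily.</body></record>")
[/python]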

Because the only part of Django’s template system AppEngine uses is the template language, we have to roll our own helper functions for context. Meaning, if we want to pass a bunch of variables to our templates by default, something easy in Django, we have to do it a little differently with GAE. I set up a function called getTemplateValues, to which we pass the WebOb request; it ferrets out and organizes the info we need for the templates, passing it back as a dict.

[python]
def ua_test(request):
    uastring = request.headers.get('user_agent') or ''
    uaDict = {}
    if "Mobile" in uastring and "Safari" in uastring:
        uaDict['isIphone'] = True
    if 'BlackBerry' in uastring:
        uaDict['isBlackBerry'] = True
    return uaDict

def getTemplateValues(request):
    myDict = {}
    myDict.update(ua_test(request))
    myDict.update(googleAnalyticsGetImageUrl(request))
    return myDict
[/python]

In my next post, I’ll talk about how to track visitors on a mobile site using Google Analytics, without using javascript.

Working with iTunes U

Walker Art Center iTunes U page.

Several weeks ago, Robin posted about the Walker Channel on iTunes U. I am going to follow up on her initial announcement with more info about the process of designing an iTunes U page, the preparation of content, and putting content online.

Designing an iTunes U Page

There are a number of different designs for pages in iTunes U. Some institutions that have been in the store for a while have a three column layout; however, Apple has now standardized on a two column layout for iTunes U pages. The options for customizing a page are limited, but not restrictive. Colors for backgrounds, borders, and text can be changed, and an overall header image of 600px by 300px is used at the top of the main page. The downside of the two column layout is that it does not re-size to a smaller iTunes window as nicely as the three column layout.

A three column iTunes U page.

Within the main page, you can create separate content groups. We decided to go with three sections: Featured, Exhibitions, and Topics. Within these sections, you create course pages. Each course can be customized with an icon, description, author/instructor, and links. In each course, there can be multiple tabs for different groupings of content. We’re using “Tracks” for most, which are a mix of video and audio content. A few exhibitions courses also have tabs for Art on Call content.

In order to design our site, I ended up doing some of the initial work right in iTunes. I figured out our color scheme and organizational structure, then took a few screenshots of iTunes. The screenshots were pasted together in Photoshop, and I layered the header and course images onto them. Thankfully, the iconography choices were straightforward. The exhibitions use images from an exhibition, either artwork or an installation view. For Topics or Featured courses, the icons are all similar, just using color, pattern, and language changes, each referencing the different artistic program pages that are already on the Walker web site.

Encoding video for the iPod using h.264

The h.264 codec is both amazing and vexing. It has very high compression, good quality, and is a widely supported standard. Working with h.264, especially for devices, can be complicated. Since the 5th generation, iPods have been able to play h.264 encoded video. They can even play 640×480 video and downscale it to their 320×240 screen, which is great since a 640×480 video will look good on a larger screen too. The real trickery with h.264 comes in with profiles.

Exporting to mp4 in Quicktime Pro. Not iPod compatible.

The MPEG Streamclip settings we use.

Most of the time, if you just export a movie from QuickTime using h.264, you get the main profile. However, for a device like the iPod, which doesn’t have a fast processor, Apple specifies that you need to use the low-complexity profile. The technical differences are mostly beyond me, but the low-complexity profile drops some of the more advanced hinting and shape features, resulting in a less processor-intensive decode that the iPod can handle.

Getting video encoded into a low-complexity h.264 profile is not a clear process. Apple’s own QuickTime Pro doesn’t let you encode to low-complexity and still have any control over the output. If you want to make a movie compatible with the iPod, you must use the Movie to iPod or Movie to iPhone preset. Both of these presets encode at a very high bitrate, which makes for good quality. However, in the scenario we have–long movies without a lot of action–a high bitrate is both prohibitive in file size and unnecessary for maintaining quality.

Some time ago, we switched to saving all our channel videos in an mp4 file, using the h.264 codec, thinking that it would make them iTunes compatible. We apparently missed the low-complexity part, and discovered that our videos were, in fact, not iPod compatible. This meant we would need to re-encode our video files to make them more useful in iTunes U. I looked at several different pieces of software to do this, but eventually decided upon MPEG Streamclip.

As I noted above, QuickTime Pro would not work for this. I also looked at Compressor, basically the Pro version of QuickTime Pro. Compressor offers much more customizability in terms of codec configuration and workflow. However, for some reason, Compressor takes an inexplicably long time to encode an iPod-compatible mp4. On a high-end Mac Pro, encoding a 640×480 video was taking well beyond 8 hours. The output looked really good, but given that we had 50 files to convert, it was simply not an option, even when using distributed encoding.

I also looked at FFmpegX and VisualHub (now defunct). Both of them are essentially wrappers around FFmpeg, and they produce good results, are very efficient encoders, and let you adjust every setting (almost to a fault). However, FFmpeg suffers from being written to expect a PC gamma of 2.2, and the resulting videos looked considerably darker than the original.

In the end, MPEG Streamclip worked the best. It offers the same speed as FFmpeg, much of the same control over settings, and handles the gamma correctly, outputting proper video for the iPod. At a bitrate of about 950kbps, a typical two hour lecture comes in between 450-500 MB, just below the iTunes U limit of 500 MB.
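Estimating file size from a bitrate is just arithmetic, which is handy when you’re planning encodes against a size cap like the iTunes U limit. A quick back-of-envelope helper (the function is mine, not from any of the tools above, and the bitrate here means the combined audio+video rate):

```python
def estimated_size_mb(bitrate_kbps, duration_seconds):
    """Rough encoded file size: bits per second x seconds, converted to MB."""
    total_bits = bitrate_kbps * 1000 * duration_seconds
    return total_bits / 8 / 1_000_000

# For example, a one-hour talk at a combined 500 kbps:
print(round(estimated_size_mb(500, 3600)))  # → 225 (MB)
```

Real encodes will vary a bit from this because of container overhead and variable bitrate, but it gets you in the ballpark.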

Putting Content into iTunes U

The process of editing content and putting tracks into iTunes U is straightforward, though frustrating, since it involves a lot of clicking and waiting. iTunes has evolved considerably over time, and letting a huge range of users edit parts of the iTunes Music Store was certainly not one of the original design specifications. The process is a bit clunky and Web 1.0-style, but it works. Uploading content is done through a browser, which can be very finicky, especially with large files. After some trial and error, I figured out that setting Firefox as my default browser and using it for uploading worked better than Safari. Safari will time out the upload after a period of time, whereas Firefox keeps on chugging.

Before files are uploaded, they need to be properly loaded with metadata. iTunes U doesn’t let you edit much on the site (just title and artist), so other fields must be filled in within iTunes on your computer before uploading. When you edit the metadata fields, iTunes commits the changes to the movie files themselves. When you upload the movie files, iTunes U will pick up on this and display it. One thing I found a little confusing is that artwork is not displayed in the store or when you are previewing a file. Apple says that artwork on movie files is used for display on the iPod, but never in iTunes. This is all covered pretty well in the iTunes U User’s Guide.
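For reference, the fields iTunes writes are standard MP4 metadata atoms (©nam for title, ©ART for artist, and so on). If you ever wanted to script that step instead of clicking through iTunes, a tagging library such as the third-party mutagen takes a mapping shaped like this; the helper below is hypothetical and just illustrates the atom names:

```python
# Hypothetical helper: builds the iTunes-style MP4 tag mapping that a
# tagging library like mutagen expects. Keys are the standard metadata
# atoms: ©nam = title, ©ART = artist, desc = description.
def build_itunes_tags(title, artist, description):
    return {
        "\xa9nam": [title],        # title shown in iTunes
        "\xa9ART": [artist],       # artist/instructor
        "desc": [description],     # long description
    }

tags = build_itunes_tags(
    "Lecture: Picasso and American Art",
    "Walker Art Center",
    "A recorded lecture from the Walker Channel.",
)
```

We did all of our tagging in iTunes itself; this is only a sketch of what the same data looks like under the hood.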

Despite the time spent figuring out codecs and monkeying around with uploading, we’re very happy to have our content in another venue and excited to keep adding more.

Frida Kahlo multimedia guide update

Visitors to the Walker’s Frida Kahlo exhibition have the option of renting a multimedia guide ($6, $5 Walker members). The tour was produced by Antenna Audio whose staff are providing bi-weekly reports on usage. Here’s what we know so far: Take-up rate varies widely depending on attendance with the average being 9%. Thursdays are our […]

Visitors to the Walker’s Frida Kahlo exhibition have the option of renting a multimedia guide ($6, $5 Walker members). The tour was produced by Antenna Audio whose staff are providing bi-weekly reports on usage. Here’s what we know so far:

  • Take-up rate varies widely depending on attendance with the average being 9%. Thursdays are our big day with typically around 22% (Walker admission is free on Thursday nights). Saturdays are also a big day but the take up ratio (.05%) is diluted by Free First Saturday (FFS) attendance. FFS is the Walker family day; we had 4,800 visitors on November 3rd, a large percentage of which were 12 and under.
  • The numbers show far more non-members purchase the tour versus members (approximately 20:1). However, once members purchase the tour, they’ve come back multiple times, often with friends and family.
  • The 50+ crowd are the folks purchasing the tour. Teens and 20-somethings think they know it all and tend to dismiss it. I wonder how much this demographic might change if the tour was free and/or offered on personal technology.
  • Antenna’s new hardware appears to be holding up to public use. By the end of a 6-hour day, the players can get a bit sluggish but they brought in additional units so they can rotate more frequently.
  • The comments from visitors continue to be overwhelmingly positive. Some of the quotes we’ve gotten:

    “Fantastic…indispensable for understanding the heavy symbolism of her work.”

    “…loved additional visuals on touch screen…”

    “…would have been lost without it…”

    “…numbers next to paintings should be larger…” (Sigh. The labels, always the labels.)

    “Excellent to have optional perspectives on the artist and contextual background on her life and times.”

    “…the order of paintings didn’t jive with the audio and I had to skip all over the place to find where I was supposed to be.” (The tour is random access and some visitors still prefer a more linear tour.)

    “Every exhibit should have these!”

Counting People in Galleries with iPod Touch

Here’s an interesting problem that came across my desk several weeks ago. Let’s say you want to know exactly how many people are in a gallery at any given time. How do you do it? There are expensive people counters available, with all sorts of technology, right down to thermal imaging. There are also cheap […]

Here’s an interesting problem that came across my desk several weeks ago. Let’s say you want to know exactly how many people are in a gallery at any given time. How do you do it?

There are expensive people counters available, with all sorts of technology, right down to thermal imaging. There are also cheap hand held counters, with plus and minus buttons to add and subtract people as they come and go to keep a consistent count of people in a gallery.

These cheap hand held versions are great…if you only have one entrance and exit point. What if you have multiple entrances and exits? Suddenly the hand held version falls apart, and putting cameras all over is way too expensive.

This is the issue that was put forth to me. We have an upcoming exhibition for Frida Kahlo. The gallery that the exhibition is in can only support 200 visitors at any one time. We expect more than that, especially on busy days. The kicker, of course, is that the gallery has two entrances, so we needed a way to accurately count how many people were in the gallery at any given time; if that number went over 200, the gallery guards would have to hold people at the door until it dropped below 200 again.

I thought for sure something like this must have been made before. Surely we aren’t the only people who have ever had this problem? But in looking online I couldn’t find anything that was cost effective and would “just work”. We kept saying “if we only had two clickers that could talk to each other”.

Something interesting happened the same day I was presented with this problem. Apple announced the iPod Touch. As soon as I saw the Touch, my first thought was Art on Call and the Walker Channel. I could see all sorts of uses for both in the galleries. But after a couple hours wrestling with this given problem it hit me, why not use the iPod Touch?

The iPod Touch is handheld, has touch input, and a browser with wifi built in. All we had to do was make a simple web app for it that counted up or down. Two people could each have a Touch, check off how many people are entering and leaving, and both be up to date on exactly how many people are in the gallery. So that’s what we did.

Here are some screen grabs of what I built. The left image is the typical display of the app. Options are simply to add or subtract a certain number of people as they enter or leave. You’re able to reset the counter to zero in the upper right (it has a confirmation before doing so). The right image shows what happens when you go over the gallery maximum. The app also auto-updates the number every 10 seconds, so the guard who has people waiting will know right away when the number drops below the max value, without needing to manually refresh.

Walker Counter Walker Counter Maxed
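The app’s actual source isn’t included in this post, but the counting logic it describes boils down to something like this sketch (names and structure are mine, not the Walker’s): a single shared count that multiple devices add to and subtract from, with a capacity check and a reset.

```python
import threading

# Minimal sketch of the shared counter: two devices update one count,
# with a capacity check and a confirmed reset on the server side.
class GalleryCounter:
    def __init__(self, maximum=200):
        self.maximum = maximum
        self._count = 0
        self._lock = threading.Lock()  # both iPods may post at once

    def add(self, n=1):
        with self._lock:
            self._count += n
            return self._count

    def subtract(self, n=1):
        with self._lock:
            self._count = max(0, self._count - n)  # never go negative
            return self._count

    def reset(self):
        with self._lock:
            self._count = 0

    def over_capacity(self):
        return self._count > self.maximum
```

Each device would hit add/subtract endpoints on a small web server, and the 10-second auto-update is just the page polling the current count.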

iPod Touch Counter

Making a web app specifically for the iPod Touch (or iPhone) turns out to be really easy. It’s just a webpage. You can pretty much do anything that is available in Safari (though there are a few inconsistencies to watch out for), and there are also several special meta tags you can add specifically for these apps (for example, I turned off scaling for our web app). Apple has written up a very nice development doc on their website that I used when making this app. It includes things like screen size, font size, color, meta tags–basically everything you need to make something look nice and stylish on these devices. I’d recommend it to anyone working on apps like this. The screenshot to the left shows how the iPod Touch looks with the rest of the UI around it, to give you an idea.
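The scaling tweak mentioned above comes down to the standard viewport meta tag; the exact values here are illustrative, not necessarily what our app uses:

```html
<!-- Fix the layout width and disable pinch-zoom scaling, which suits
     a fixed-size control panel like a counter app. -->
<meta name="viewport"
      content="width=device-width, initial-scale=1.0, user-scalable=no">
```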

As far as the iPod Touch/iPhone goes, I’m very impressed. I really do think these devices are the future of museum audio tours. Well, not just audio, but video as well! There are things that need to be fixed (like the fact that you can’t get podcasts on them via wifi yet), but overall there is so much potential here, simply by having a real browser with wifi on it and supporting rich media, as well as the UI and multi-touch interface. It could very well be the Rosetta Stone of digital museum tours.

Picasso iPods part 2

Brent beat me to the punch with his Picasso iPod post. Much to learn from this project which gave us an opportunity to compare the same tour on iPods and cell phones. I was waiting for the phone stats and survey results but you’ll have to come back for that information. As Brent said, the […]

Kill the iPod

Brent beat me to the punch with his Picasso iPod post. Much to learn from this project which gave us an opportunity to compare the same tour on iPods and cell phones. I was waiting for the phone stats and survey results but you’ll have to come back for that information.

As Brent said, the iPods were a huge success. In the course of the exhibition (June 16-September 9), over 3,500 visitors borrowed the iPods (25-23 devices available for free and loaded with the exhibition tour only). In busy periods, people queued for the tour. And in these same busy periods, visitor services found the loan process almost more than they could manage (witness the drawing on the envelope accompanying the last bunch of checkout sheets).* I sought a donation from Apple (they gave us 5 iPods, we bought 20), but the fact is they should have paid us for this kind of promotion. In addition to providing a rewarding interpretative experience, we taught a new generation how to use the iPod–a common refrain heard at the front desk: “Now I can tell my grandchildren I used an iPod!”

Despite their popularity, the iPods will only be used for special projects (3 remain available for the permanent collection tour but ultimately we prefer visitors bring their own hardware). That said, Walker is working with Antenna Audio and SFMOMA to produce a multimedia guide for our upcoming Frida Kahlo exhibition, available on Antenna’s new XP-vision player for $6.

* This drawing is in no way a reflection of the demeanor of front-line staff, who are often complimented for exceptional customer service. “Kill the iPod” courtesy the artist Joe Rizzo.

Picasso iPod Audio Tour Post Mortem

So the Picasso exhibition is over and we learned a lot about mass iPod audio tours. The first lesson, they’re very popular! We’ve had iPods for our permanent collection for a while now, but we never really had the push behind it like we had for Picasso. The difference I noticed here is that if […]

So the Picasso exhibition is over and we learned a lot about mass iPod audio tours. The first lesson, they’re very popular! We’ve had iPods for our permanent collection for a while now, but we never really had the push behind it like we had for Picasso. The difference I noticed here is that if you advertise it, people will use it.

We did a much better job for the Picasso show in getting the word out that the iPods, as well as Art on Call, were available. People used them. There were very often waiting lists to check out an iPod. I had honestly thought at the beginning that 25 iPods was overkill, but after a short time it was obvious we could probably have had twice that and still had all of them in use at any given time. A lot of this was because of the show itself. A ton of people came to see Picasso. I’ve never seen that many people in our galleries before, outside of After Hours. And this was day in, day out. But like anything, word gets out: people in the galleries see others on their cell phones or with iPods, learn they can do the same (for free), and people really ate up the content. We will post more on our numbers when the final data comes in.

So that’s great, people dug the content, but what were the caveats? For us there were several things that came up that we had to work around. One is what I already mentioned: the iPods being checked out constantly. Because of this, none of the iPods got a chance to recharge during the day. Most made it through an 8 hour day fine, but what we didn’t expect was having to charge them overnight. Because they needed to be locked up somewhere safe when the building was closed, we had to find a secure place to take the charging station each night, and thankfully we were able to.

Also, at first we were going to use one of the computers at the front desk to dock the iPods on, but given the traffic, that didn’t go over well as that computer needed to be used off and on all day for ticketing, etc. But we still needed a dedicated computer there just for the iPods. We thankfully had a spare Sony laptop that sufficed for this and did a good job.

There was also something that came up that I had never even thought about. I originally put the iPods down in a floor cabinet which could be closed. This was partially to be neat and tidy and partially for security. Problem was, we were so busy and swapping out so many iPods that the Visitor Services staff started to really strain from having to bend over again and again to swap out iPods all day. Thankfully our carpentry shop rectified this by making a pedestal that the iPods could go in, to make it easier on everyone’s backs.

And what about dead and abused iPods? Several notes here. One, Notes mode works better now than it did when I first tried it, dismissed it, and hacked the iPod firmware instead. But there are still major issues with Notes mode. While better, it’s still not ready for prime time, and there are still ways for users to change settings even when locked into Notes mode (which I’m still trying to figure out, but given the number of iPods I had to reset, it’s certainly an issue). That said, we will probably use Notes mode for exhibition-only tours in the future.

Secondly, when your audio tour is this popular, bad things happen. We had a few iPods die on us. Three were hard drive failures, and one had a screen fail from abuse. The good news is Apple will replace iPods for free if there is a hard drive failure and the iPod is under warranty. The bad news is the same can’t be said for screen abuse (or any other kind of user-created problem). Most of the iPods survived just fine; some had to be given a hard reset (getting into the hidden firmware settings to do so), but in the end most made it through the ordeal.

The other big challenge was getting people to understand how to use the iPods. Believe it or not there are a lot of people who have never used one before. The Picasso show skewed a bit older as well which added to this. We had a stop on our tour (the first stop) that was all about how to use the iPod and the tour menu itself was as simple as possible (just one list, no submenus), but as with any technology there is still a learning curve involved, regardless of how simple it may seem. Someone will always struggle. It’s important everyone in your museum knows how they work, because anyone, even security guards, may be asked to help someone who’s stuck. This is the most important part to me, because if people can’t figure out how to use your device, there’s no point in having it!

Lastly, as Robin guessed before the show started, ditch the earbuds and get over-the-ear headphones for your iPods. This was a very good move. Nobody wants to stick earbuds in their ears after 20 other people have!

In related iPod news, we’re getting a few of the new iPod Touches in at the end of the month, and I’m currently building an app for them. I think these could have a real impact on audio (and video) tours because of the built in WiFi and browser. I’m pretty excited at the possibilities. More on this soon.

Building a Multiple iPod Charger

One of the cool things we’re doing for the Walker’s upcoming exhibition Picasso and American Art is significantly increasing our iPod audio tour capacity. For the exhibit we were able to get 25 iPod Videos, and like our normal iPod audio tours, we will be letting visitors use them for free. The same content is […]

One of the cool things we’re doing for the Walker’s upcoming exhibition Picasso and American Art is significantly increasing our iPod audio tour capacity. For the exhibit we were able to get 25 iPod Videos, and like our normal iPod audio tours, we will be letting visitors use them for free. The same content is also available on Art on Call.

This presents a bit of a challenge, however. Up until now we’ve only had four iPod Nanos to worry about, and plugging a few into a computer or two to charge isn’t that big a deal. Now, however, we have 25 of them to deal with, and there certainly aren’t enough USB ports to go around. The goal was to find a way to charge most of the iPods, do it in a limited space, and do it as cheaply as possible.

My solution was to buy three USB hubs and use them just for charging. We don’t really need them connected to a computer to sync; we just want the power. This turned out to be harder than I thought. I went through a few USB hubs trying to get the iPods to charge off the supplied AC adapter alone. None of the hubs I tried allowed this; they would only charge when the hub was connected to a computer via USB. I can’t fathom a reason why they limited it like this, as the power comes from the AC adapter on the hub, not from USB. Whether the hub is connected to a computer should not dictate whether power can be supplied to the device. Alas, there was no cost-efficient way around this.

So I had no choice: if I wanted to charge via a hub, I had to connect the hub to a computer. Thankfully, we did have a computer near our iPod storage. Except it only had two open USB ports, not the three I needed. Another stumbling block. But then the thought occurred to me to daisy-chain the hubs. In essence, the USB cable that was supposed to go from each hub to the computer would plug into one of the other hubs instead. The last in the chain would then plug into the computer. Basically, we could connect all of the iPods to a computer with one USB cord, regardless of how many hubs we had. And that’s what we did, and it worked perfectly:

One interesting feature of this is it allows us to mount all of these iPods at once, as you can see here. This actually makes adding and editing content on all of them a breeze. So in the end, perhaps all of the troubles were a blessing.

Total cost for this: $60. It may not look the prettiest, but sometimes when you’re trying to be frugal, getting something that just works is what counts.

Touch: Near Field Communications Blog

I found an interesting blog today: Touch. According to the about: Touch is a research project looking at the intersections between the digital and the physical. Its aim is to explore and develop new uses for RFID, NFC and mobile technology in areas such as retail, public services, social and personal communication. NFC, or Near […]

A Graphic Language for RFID

RFID Form Factors

I found an interesting blog today: Touch. According to the about:

Touch is a research project looking at the intersections between the digital and the physical. Its aim is to explore and develop new uses for RFID, NFC and mobile technology in areas such as retail, public services, social and personal communication.

NFC, or Near Field Communication, in a nutshell is the technology that will some day let us pay for a Coke or feed a parking meter with our mobile phone. Or, perhaps, wave our phone at a piece of art and hear the Art on Call stop while an image appears on our phone’s screen. If you’re wondering why a blog about wireless communication is called Touch, it is because NFC generally requires very close proximity, often requiring the access card or phone to touch the receiver.

Dig back through the archives; there are some great posts, such as RFID Form Factors and A Graphic Language for RFID. This one is definitely going in my RSS reader.

Photos borrowed from Touch

Next