
Building the 50/50 Voting App


50/50 Voting App

For our upcoming exhibition 50/50: Audience and Experts Curate the Paper Collection, we’re trying something a bit different. As you can probably tell from the title, we’re allowing our audience to help us curate a show. The idea is that our chief curator, Darsie Alexander, will curate 50% of the show, and the audience will select from a group of 180 different print works for the other half.

As with most things presented to New Media, the question was posed: “How best do we do this?” The exhibition is being hung in the same room as Benches and Binoculars, so the obvious answer was to use the kiosk already there as the voting platform for the show. With this in mind, I started to think of different ways to present the voting app itself.

My initial idea was a “4-up” design: display four artworks and ask people to choose their favorite. The thought was that this would make people confirm a choice in comparison to others. If you can see some of what you’re selecting against, it can be easier to know whether you want a specific work in the show or not. But it also has the same effect in reverse: if there are two artworks you really like, it can be just as hard to choose only one. The other limitation? After coming up with the 4-up idea, we also decided to add iPhones into the mix as a possible voting platform (as well as iPads and general internet browsers). The images on the iPhone’s screen were much too small to make decent comparisons.

Nate suggested instead using a “hot or not” style voting system: one work at a time, which you simply vote yes or no on. This had the small downfall of not letting you compare a work against others, but it negated the “analysis paralysis” of the 4-up model. It also worked much better on mobile devices.

The second big decision we faced was: what do we show? I had assumed from the beginning that we’d show label copy for every work, as we do just about everywhere else, but it was suggested early on that we do no such thing. We didn’t want to influence voters by putting a title or artist on every piece. With works by Chuck Close and Andy Warhol mixed into the print selections, it’s much too easy to see a famous name and vote for the piece on the name alone. We wanted people to vote on what work they wanted to see, not what artist they wanted to see.

Both of these decisions proved pivotal to the popularity of the voting app. They made it streamlined and simple: with 180 works to go through, quick and easy choices make it much more likely that people will get through the entire set. The results screen shown after each vote displays the current percentage of no to yes votes for that artwork. This is a bit of a psychological pull: you know what you think of the artwork, but what do others think of it? The only way to find out is to vote.

50/50 Voting App Results Screen

Because of this, the voting app has been a success far beyond what we thought it would be. I figured if we got 5,000-10,000 votes we would be doing pretty well. Halfway through the voting process, we have well over 100,000 votes. More than 1,500 users have voted on the artworks, and we’ve collected over 500 email addresses from people who want to know the winners once all the voting is tallied. We never expected anything this good, and we have several weeks of voting yet to come.

One interesting outcome of all of these votes has been the overall ratio of yes votes to no votes. Since the works are presented randomly (well, pseudo-randomly for each user), one might expect that about half the works would have more yes votes than no votes, and vice versa. But that’s not turned out to be the case: about 80% of the works have more no votes than yes votes. Why is this?

There are various theories. Perhaps people are more selective if they know something will be on view in public. Perhaps people in general are just overly negative. Or perhaps people really don’t like any of our artwork!

But one of the more interesting theories goes back to the language we decided to use. Originally we were going to use the actual words “Yes” and “No” to answer the question “Would you like to see this artwork on view?” This later got changed to “Definitely” and “Maybe Not”. Notice how the affirmative answer carries much more weight (“Yes, most definitely!”), whereas the negative option leaves you a bit of wiggle room (“Eh, maybe not”). This differentiation between being sure of a decision and perhaps not so sure may have contributed to people saying no more often than yes.

Which raises the question: what if it were changed? What if the options instead were “Definitely Not” and “Sure”? Now the definitive answer is on the negative, and the positive answer has more room to slosh around (“Hell no!” vs. “Ahh sure, why not?!”). It would be interesting to see what the results would have been with this simple change in language. Maybe next time. This round, we’re keeping our metrics the same throughout to keep them consistent.

The voting for 50/50 runs until Sept 15. If you’d like to participate, you still have time!

Changes in New Media, job opening for a web designer/developer


There are changes in store for the New Media Initiatives department. After being with the Walker for four years, I’ve taken a position across the river at Minnesota Public Radio as a Web Designer/Developer. It is very hard for me to leave the Walker, but I’m excited about working on new projects for an even larger audience at MPR.

This means there is a job opening in New Media, and if you’re a web nerd, you should consider sending in your resume. My title at the Walker is New Media Designer, but the job posting is for a Web Designer/Developer. This reflects the changing nature of the work I’ve taken on over the years, including more back-end development on the Walker Channel and Mobile Site, amongst other projects. In the future, we have work planned around an overhaul of major portions of the Walker website. Our tool of choice is Django, but even if you don’t have Python or Django experience, consider applying. I didn’t know a lick of Python or Django when I tackled My Yard Our Message, but it was easy to get up to speed and make things happen.

Full details for the position are listed on our jobs site, and the deadline for applying is September 3rd.

Creating a community calendar using Google Apps and WordPress


For Walker Open Field, we wanted a way to collect community submitted events and display them on our site. We have our own calendar and we discussed whether adding the events to our internal Calendar CMS was the best way, or if using an outside calendar solution was the direction to go. In the end, we decided to do both, using Google Calendar for community events and our own calendar CMS for Walker-programmed events.

The Open Field website is based on the lovely design work of Andrea Hyde, and the site is built using WordPress, which we also use for this blog and a few other portions of our website. WordPress is relatively easy to template, so it makes for quick development, and it has a load of useful plug-ins and built-in features that saved us a lot of time. Here’s how we put it together:

Collecting Events
To accept event ideas from community members, we used the WordPress Cforms II plugin, which makes it very easy to build otherwise complex forms and process them. You can create workflows with the submissions, but we simply have Cforms email each submission to us. A real person (Shaylie) reviews each of the event submissions and adds the details to…

Google Calendar
We use Google’s Calendar app to hold the calendar of community events. When Shaylie gets the email about a new event, she reviews it, follows up on any conflicts or issues, and then manually adds it to Google Calendar. We toyed with the idea of using the Calendar API to create a form that would let users create events directly in the calendar, but decided against it for two reasons. First, it seemed overly complicated for what we expected to be fewer than 100 events. Second, we would still have to review every submission, and doing that after the fact would be just as cumbersome as doing it beforehand.

We also use Google Calendar to process our own internal calendar feed. The Walker Calendar can spit out data as XML and iCal. Our own proprietary XML format can be rather complex, but the iCal format is widely understood, and Google Calendar can import it as a subscription.

Getting data out of Google Calendar
We now have two calendars in Google Calendar: Walker Events and Community Events. Google provides several ways to get data out of Google Calendar, and the one we use is the Atom format with a collection of Google-namespaced elements. The Calendar API is quite robust, but there are a few things worth noting:

  • You must ask for the full feed to get all the date info
  • Make sure you set the time zone, both on the feed request and on the links to individual events (using the ctz parameter)
  • Asking for only futureevents and singleevents (as parameters) makes life easier, since you don’t have to worry about figuring out the complicated repeating-event logic

This is our feed for Open Field Community Events.

Calendar data into WordPress
Since version 2.8, WordPress has included the most excellent SimplePie RSS/Atom parsing library. As the name would have you believe, it is pretty simple to use. To pull the extra data out of the Google Calendar items with SimplePie, you extend the SimplePie_Item class with some extra methods to get that gd:when data; a rough sketch of such a subclass is included near the top of the code below.

Combining two feeds in SimplePie is not hard. By default, SimplePie will sort items by their modified or published date, which in the Google Calendar API is the date the event was edited, not when it happens. Instead, we want to sort by the gd:when start time. There are probably a few ways to do this, but the way I set it up was to simply loop through all the items, put them into an array keyed by start timestamp, and then sort that array by key. Here’s the code:

[php]
<?php

// include the RSS/Atom feed classes
include_once(ABSPATH . WPINC . '/feed.php');
require_once(ABSPATH . WPINC . '/class-feed.php');
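
// SimplePie_Item_Extras is our subclass, not a stock SimplePie class. What
// follows is a minimal sketch of the idea described above; the gd:when
// attribute handling is an assumption based on SimplePie's get_item_tags()
// structure, so treat it as a starting point rather than a drop-in.
class SimplePie_Item_Extras extends SimplePie_Item
{
    // read the startTime attribute from the event's <gd:when> element
    function get_gcal_starttime($format = 'U')
    {
        $when = $this->get_item_tags('http://schemas.google.com/g/2005', 'when');
        if ( ! $when ) return false;
        return date($format, strtotime($when[0]['attribs']['']['startTime']));
    }

    // read the endTime attribute from the event's <gd:when> element
    function get_gcal_endtime($format = 'U')
    {
        $when = $this->get_item_tags('http://schemas.google.com/g/2005', 'when');
        if ( ! $when ) return false;
        return date($format, strtotime($when[0]['attribs']['']['endTime']));
    }
}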

// Get a SimplePie feed object from the specified feed sources.
$calFeedsRss = array(
    # walker events
    'http://www.google.com/calendar/feeds/95g83qt1nat5k31oqd898bj2ako1phq1%40import.calendar.google.com/public/full?orderby=starttime&ctz=America/Chicago&sortorder=a&max-results=1000&futureevents=true',

    # public events
    'http://www.google.com/calendar/feeds/walkerart.org_cptqutet6ou4odcg6n2mvk4f44%40group.calendar.google.com/public/full?orderby=starttime&ctz=America/Chicago&sortorder=a&max-results=1000&futureevents=true&singleevents=true'
);

$feed = new SimplePie();
$feed->set_item_class('SimplePie_Item_Extras');
$feed->set_feed_url($calFeedsRss);
$feed->set_cache_class('WP_Feed_Cache');
$feed->set_file_class('WP_SimplePie_File');
$feed->set_cache_duration(apply_filters('wp_feed_cache_transient_lifetime', 3600)); // cache things for an hour
$feed->init();
$feed->handle_content_type();

if ( $feed->error() )
    printf('There was an error while connecting to the feed server, please try again!');

$cals = array();
$count = 0; // hack: we count each loop and use it as a little offset on the sort key
foreach ( $feed->get_items() as $item ) {

    if ( strtolower($item->get_title()) != 'walker open field' ) {
        $eventType = 'walker';
        $related = $item->get_links('related');
        $related = $related[0];
        if ( strpos($related, 'walkerart.org') === false ) {
            $related = $item->get_link();
            // if it's a google calendar event, make sure we set the time zone
            $related .= "&ctz=America%2FChicago";
            $eventType = 'community';
        }

        # offset the actual start time a little bit so that two events with the
        # same start time don't overwrite each other in the array
        $sortVal = $item->get_gcal_starttime('U') + $count;
        $myData = array(
            'title'     => $item->get_title(),
            'starttime' => $item->get_gcal_starttime('U'),
            'endtime'   => $item->get_gcal_endtime('U'),
            'link'      => $related,
            'eventType' => $eventType,
            'text'      => $item->get_content(),
            'date'      => $item->get_gcal_starttime('U')
        );
        $cals[ $sortVal ] = $myData;
    }
    $count++;
}
// sort the array by keys
ksort($cals);
// $cals now contains all the event info we'll need

?>
[/php]

Once this is done, you can simply take the $cals array and loop through it in your theme as needed. Parsing the Google Calendar feeds is not a cheap operation, so you may wish to use the Transients API in WordPress to cache the results.

Caveats and Issues
Overall, this approach has worked well for us. We ran into some issues where the Google Calendar Atom feed would show events that had been deleted; making sure to set the futureevents and singleevents parameters fixed this. We also ran into some issues using singleevents, so we ended up manually creating occurrences for events that would otherwise have had a repeating structure.

What apps and books do you want to see on the #openfield iPads?


iPad photo taken with iPhone

As part of the Open Field Toolshed this summer, we are going to have four iPads available for public checkout. With our expanding WiFi coverage and outdoor beer garden, Open Field might just be the perfect place to surf the web.

Are there any apps or e-books you’d like to see on our iPads? We’ve got a few in mind already:

If you’ve got specific requests, let us know in the comments or on twitter. No promises, but we’ll see what we can do.

Simple iTunes U stats aggregation with python and xlrd


Like many institutions, we put numbers for our various online presences in our annual report and other presentations: YouTube views, Twitter followers, Facebook fans, etc. For most services, this is very easy to do: log in, go to the stats page, write the big number down. We also want to include the iTunes U numbers, but for iTunes U, there is no centralized stats reporting outside of the Microsoft Excel file Apple sends iTunes U administrators every week. Tabulating the stats by hand would be time-consuming and error-prone, so I wrote a short python script to automate it. Here it is:

[python]
#!/usr/bin/env python
# encoding: utf-8
"""
itunesUstats.py
Created by Justin Heideman on 2010-05-19.
"""

import sys, os, glob, xlrd

def main():
    # change this to the path where your stats are
    path = 'itunes_U_stats/all/'
    totalDownloads = 0

    for infile in glob.glob(os.path.join(path, '*.xls')):
        # open the file
        wb = xlrd.open_workbook(infile)

        # get the most recent day's tracks
        sh = wb.sheet_by_index(1)

        # get the downloads from that day
        downloads = sh.col_values(1)

        # first entry is u'Count', which we don't want
        downloads.pop(0)

        # sum it up
        totalDownloads += sum(downloads)

        # show a little progress
        print sum(downloads)

    # done, output results
    print "----------------------------------------------------"
    print "Total downloads: %d" % totalDownloads

if __name__ == '__main__':
    main()
[/python]

This script uses the excellent xlrd python module to read the Excel files (simple xlrd tutorial here), which is roughly 27.314 times easier than trying to use an Excel macro to do the same thing. To use this, simply change the path at the top of main() to a directory containing all your iTunes stats files, and run the script from the command line. You’ll get output like this:

929.0
732.0
779.0
854.0
1000.0
987.0
765.0
812.0
1275.0
1333.0
1114.0
1581.0
1278.0
1568.0
1854.0
2102.0
1108.0
1078.0
----------------------------------------------------
Total downloads: 21149

Do note that the script is tied to the Excel format Apple has been using, so if they change it, this will break. Apple explains the iTunes report fields here. The script tallies “all the iTunes U tracks users downloaded that week through the DownloadTrack, DownloadTracks, and SubscriptionEnclosure actions”.

User testing using paper prototypes


A few years ago I was trying to explain the concept of “fail early, fail often” to someone, and failing. (See what I did there? ;-) They didn’t understand why you wouldn’t just take longer to build it right the first time.

Now that we’re deep in the process of redesigning our website, I am starting to see the real danger in that sort of thinking.  Despite all our best intentions, we’ve fallen into a trap of thrashing back and forth around certain ideas – unable to agree, unwilling to move forward until we “solve it”, and essentially stuck in the same cycle illustrated in this cartoon.

Click for the whole cartoon (scroll down a bit)

To try to help break the recent impasse on site navigation, we’re doing some simple user testing using paper prototypes of several ideas.  These are meant to be rough sketches to essentially pass/fail the “do they get it?” test, but they’re also giving us a ton of valuable little hints into how people see and understand both our website and our navigation.

An example of some paper prototypes for the navigation. (Don't worry, it's just a rough idea and one of many!)

Our basic process so far is to ask people (non-staff) for first impressions of the top nav: does it make sense? Do they think they know what they’ll get under each button? Then we show the flyouts and see if it’s what they expected. Anything missing? Anything that doesn’t meet their expectations? Finally we ask a few targeted “task” questions, like “where would you look if you wanted information about a work of art you saw in the galleries?”

Even this simple round of testing has revealed some clearly wrong assumptions on our part.  By fixing these things now (failing early) and iterating quickly, we can do more prototypes and get more feedback (failing often).  I’ll try to post updates as we proceed.

PS — Anyone else doing paper prototypes like this?  I think we all know we’re “supposed” to do quick user testing, but honestly this is the first time in years we’ve actually done something like it.

Setting up smartphone emulators for testing mobile websites


While developing the Walker’s mobile site, I needed to test the site in a number of browsers to ensure compatibility. If you thought testing a regular website was a pain, mobile is an order of magnitude worse.

Our mobile site is designed to work on modern smartphones. If you’re using a four-year-old Nokia phone with a 120×160 screen, our site does not and will not work for you. If you want to test on older/less-smart phones, PPK has a quick overview post with some pointers. Even so, getting the current smartphone OSes running is no piece of cake. So this post will outline how to get iPhone, Android, WebOS, and, ugh, BlackBerry running in emulation. Note: I left out Windows Mobile, as does 99% of the smartphone-buying public.

Let’s knock off some low hanging fruit: iPhone

Getting the iPhone to run in emulation is very easy. First, you have to have a Mac; if you’re a web developer, you’re probably working on one already. You need to get the iPhone developer tools, which means registering for a free Apple Developer account and agreeing to their lengthy and draconian agreement. Once that’s done, you can slurp down the humongous 2.3GB download and install it. When it’s installed, you’ll have a nice folder named Developer at the root of your drive; navigate inside it and look for iPhone Simulator.app. That’s your boy, so launch it and, hooray! You can now test your sites in Mobile Safari.

iPhone Simulator in Apple Developer Tools

The iPhone Simulator is by far the easiest to work with, since it’s a nice pre-packaged app, just like any other. And it is a simulator, not an emulator. The difference: a simulator just looks and acts like an iPhone but actually runs native code on your machine, whereas an emulator emulates a different processor and runs the whole guest OS inside it. The iPhone Simulator runs an x86 version of Safari that links against the mobile frameworks, compiled for x86, on your local machine. A real, actual iPhone has all the same frameworks, but compiled as ARM code on the phone.

Walker Art Center on the iPhone

Android

In typical google fashion, Android is a bit more confusing, but also more powerful. There are roughly three different flavors of Android out there in the wild: 1.5, 2.0, and 2.1. The browser is slightly different in each, but for most simple sites this should be relatively unimportant.

To get the Android emulator running, download the Android SDK for your platform. I’m on a Mac, so that’s what I focus on here. You’ll need up-to-date Java, but if you’re on a Mac, this isn’t a problem. Bonus points to google for being the only one that doesn’t require you to sign up as a developer to get the SDK. Once you have the file, unpack it and pull up the Terminal. Navigate to the directory, then look inside the tools directory. You need to launch the “android” executable:

Very tricky: Launch the android executable.

This will launch the Android SDK and Android AVD Manager:

Android SDK and AVD Manager

The first thing you’ll probably want to do is go to Installed Packages and hit Update All…, just to get everything up to date. With that done, move back to Virtual Devices and create a new virtual device:

Set up new Android Virtual Device

Name it whatever you want; I’d suggest using Android 2.1 as your target. Give it a file size of around 200MB (you don’t need much if you aren’t going to install any apps) and leave everything else at the default. Once it’s created, you can simply hit Start, wait for it to boot, and you’re now running Android:

Android Emulator Running

Palm WebOS

Palm is suffering as a company right now and, depending on the rumors, is about to be bought by Lenovo, HTC, Microsoft, or Google. Pretty much everyone agrees that WebOS is really cool, so it’s definitely worth testing your mobile site on. WebOS, like the iPhone and Android, uses WebKit as its browser engine, so things here are not going to be unexpected. The primary difference is the available fonts.

Running the WebOS emulator is very easy, at least on the Mac. First, you need to download and install a copy of VirtualBox, and second, you download and install the Palm SDK. Both are linked from this page.

Installing VirtualBox is dead easy, and works just like any other OS X .pkg install process:

Then download and install the Palm WebOS SDK:

When you’re done, look in your /Applications folder for an app named Palm Emulator:

When you launch the emulator, you’ll be asked to choose a screen size (corresponding to either the Pre or the Pixi) and then it will start VirtualBox. It’s a bit more of a cumbersome startup process than the iPhone Simulator, but about on par with Android.

WebOS emulator starting up. It fires up VirtualBox in the background.

WebOS running.

BlackBerry

BlackBerry is the hairiest of all the smartphones in this post. Unless you know the Research In Motion ecosystem, and I don’t, it seems that there are about 300 different versions of BlackBerry and no easy way to know which version you should test on. From what I can tell, the browser is basically the same on all the more recent phones, so picking one phone and using that should be fairly safe. RIM is working on BlackBerry 6, which is purported to include a WebKit-based browser, addressing the sadness their current browser causes in web developers everywhere.

The first thing you’re going to need to simulate a BlackBerry is a Windows machine. I use VMware Fusion on my Mac and have several instances of XP, so this is not a problem. The emulator is incredibly slow and clunky, so you’ll want a fairly fast machine, or a virtual machine with the RAM and CPU settings cranked up.

There are three basic parts you’ll need to install to get the BlackBerry emulator running: Java EE 5, the BlackBerry Smartphone Simulator, and the BlackBerry Email and MDS Services Simulator. Let’s start with Java. You need Java Enterprise Edition 5, and you can get that on Sun/Oracle’s Java EE page. I’ve had Java EE 5 and 6 on my Windows machine for quite some time, so I’m not actually sure which version BlackBerry requires, but it’s one of them, and they’re both free. Get it, install it, and add one more hunk of junk to your system tray.

Now you need the emulators themselves. To get an emulator, head over to the RIM emulator page and pick a device. I went with the 9630, since it seems fairly popular and it was at the top of the list of devices to choose from. I’d grab the latest OS for a generic carrier. You will have to register for a no-cost RIM developer account to download the file.

While you’re there, you’ll also want to grab the MDS (aka Mobile Data Service) emulator. This is what enables the phone to actually talk to the internet. To grab this, click on the “view all BlackBerry Smartphone Simulator downloads” link, and then choose the first item from the list, “BlackBerry Email and MDS Services Simulator Package”. Click through and grab the latest version.

Once the downloads complete, copy the .EXEs to Windows and run them. You’ll walk through the standard Windows install process, and when you’re done, you’ll be left with some new menu items. Let’s start the MDS up first, since we’d like a net connection. Here’s where you should find it:

I like to take screenshots of Windows to show how crazy bad it is.

And this is what it looks like starting up:

MDS running. It's a java app.

Now let’s start up the phone emulator itself:

BlackBerry 9630 Emulator

For me, it takes quite a while to start the phone, about a minute. I started off with a smaller VM instance and it took 5+ minutes to launch, so be warned. After it starts, you’ll be left with a screen like this:

You can’t use the mouse to navigate on the screen, which is crazy counter-intuitive for anyone who has used the other three phones mentioned in this post. Instead, you click the buttons on screen or use your keyboard to navigate. Welcome to 2005. To get to the browser, hit the hang-up button, then arrow over to the globe and hit Enter. Once the browser launches, you can hit the little re-wrap/undo button to get to the URL field. Here’s what our site looks like:

A glimpse inside a blog spammer’s tools


We get a fair amount of spam on the Walker Blogs: Defensio has blocked 49,108 spam messages since it started counting. Even with a 99.07% accuracy rate and a captcha, spam gets through our filters. Over the weekend, I noticed a couple of spam comments come through that I thought were interesting. Here’s an example:

{Amazing|Amazing Dude|Wow dude|Thanks dude|Thankyou|Wow man|Wow}, {that is|this is|that’s} {extremely|very|really} {good|nice|helpful} {info|information}, {thanks|cheers|much appreciated|appreciated|thankyou}.

The geeky types among us will immediately recognize that as some sort of spam template language: pick one word or phrase from each group, and you have a nearly limitless supply of spam phrases.
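Just for fun, here’s a guess at how a tool like that might expand the template. This is an illustrative Python sketch, not the spammer’s actual code:

[python]
import re, random

def spin(template):
    # replace each {option|option|...} group with one randomly chosen option
    return re.sub(r'\{([^{}]+)\}',
                  lambda m: random.choice(m.group(1).split('|')),
                  template)

template = ("{Amazing|Wow dude|Thanks dude}, {that is|this is} "
            "{extremely|very|really} {good|nice|helpful} {info|information}, "
            "{thanks|cheers|much appreciated}.")
print spin(template)  # e.g. "Wow dude, this is really helpful info, cheers."
[/python]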

The template language in itself isn’t all that remarkable, but what I found very interesting is that the link the spammer left goes to this account on del.icio.us:

The pages and pages of tagged links look to be the library of links our spammer is using to spam blogs. The comments they’ve left on delicious look to be alternate text to be used as comments on posts. My guess is they’re using the del.icio.us tags to match keywords or tags on blog posts. I guess it’s not surprising to see spammers using web 2.0 services to do their filthy work.

Announcing the new Walker Channel — HD video, improved design, search, accessibility


The Walker Channel, in existence since 2003, has recently undergone a redesign. The old Walker Channel was originally built to serve Real Video files and stream live webcasts with Real Video. It slowly evolved over time to use the friendlier MPEG-4 and H.264 formats, and even moved live streaming from Real Video to the better ustream.tv. But it never really caught up to the modern, YouTube era of video. The redesign we just completed does that, and adds a few other goodies.

Visual Design


Quite obviously, the site has undergone a major visual overhaul. The old site had almost no hierarchy to the video archive, which worked OK with a handful of videos, but with 200+ in the archive it became unwieldy to find a particular video or even just browse.

Just like with our iTunes U site, we’ve split our internal, museum-centric departments into more logical genres. For example, instead of just “Performing Arts”, we have Dance, Theater, and Music. We also highlight content by its recentness and, more importantly, by its popularity (view count). None of this is ground-breaking in 2010, but it’s a big upgrade from 2003.

Streaming H.264 Video

We’re now serving all our video content as streaming h.264 video. This means you can watch a video and jump to any place in the timeline before it has buffered to that spot. Using h.264 enables us to easily switch to HTML5 and support other devices down the road. We converted all our older Real Media video into h.264 mp4s.

We also utilize YouTube to serve many of our newer videos. We have already been putting all our Channel content on YouTube for about a year, so there’s no need to upload it twice. YouTube serves a relatively high-quality FLV or MP4 file, and it means we do not pay for bandwidth, which is not an insignificant cost consideration.

Where we’re not using YouTube, we’re using Amazon CloudFront and its new Adobe Flash Media Server-based streaming. This means that we don’t have to run our own EC2 and Wowza instances to encode and stream the video. We upload our video manually, so we don’t need to encode it “in the cloud”.

High Definition Video

We also upgraded our camera and video capture equipment to enter the beautiful HD world. We now capture all lectures in HD and webcast them live at 640×360. Going forward, archived versions will be posted at 720p (1280×720). Drawn Here (and there): HouMinn Practice is our first video posted in HD, and it looks great. Here’s a visual representation of what this new video means, compared with the resolutions of our older content:

Click to enlarge and get the full effect.

We have also added a video switcher to our hardware repertoire. The switcher lets us show the presenter’s slides in-stream, rather than just pointing the camera at the projection screen. It makes for a dramatic improvement in video quality and will be especially useful for architecture and design lectures, which typically feature many slides.

Transcripts and captions

Starting with our new recordings in 2010, we’re adding closed captions and transcripts to nearly every video. This video is a good example. That means a few things:

  • Videos are more accessible to deaf or hard-of-hearing viewers
  • You can visually scan the contents of a video to find a section you want to watch. In the example video, clicking a time code on the right jumps the playhead to that point in the video.
  • It gives us much more meaningful text to search on. Search engines are still text-based, so having more than just the video description to search is a great thing.

We create our transcripts by sending our video to CastingWords. The transcript that CastingWords generates is then fed into YouTube’s machine caption processing feature, generating captions for the video in the form of an .SBV file. The .SBV file is then pulled back into the Walker Channel, where we convert it on the fly to the W3C TimedText format for use as captions in jwplayer.
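The conversion itself is nothing exotic. Here’s a rough Python sketch of the idea, assuming the simple SBV layout of a “start,end” timecode line followed by caption text, with cues separated by blank lines; the function name is hypothetical and our production code differs:

[python]
from cgi import escape

def sbv_to_timedtext(sbv_text):
    # parse SBV cues: "H:MM:SS.mmm,H:MM:SS.mmm" then one or more caption lines
    cues = []
    for block in sbv_text.strip().split('\n\n'):
        lines = block.split('\n')
        start, end = lines[0].split(',')
        cues.append((start, end, ' '.join(lines[1:])))

    # emit a minimal TimedText document with one <p> per cue
    ps = '\n'.join('      <p begin="%s" end="%s">%s</p>'
                   % (start, end, escape(text))
                   for start, end, text in cues)
    return ('<tt xmlns="http://www.w3.org/2006/10/ttaf1">\n'
            '  <body>\n    <div>\n%s\n    </div>\n  </body>\n</tt>' % ps)
[/python]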

We also re-format the captions as a transcript for display in the Transcript tab under the video. Captions tend to be broken up not by sentence, but by how the speaker is talking and how the text will fit on screen. Transcripts, on the other hand, are read more traditionally and should be in complete sentences. So we break the captions up and re-form them into complete sentences with associated timecodes. Here’s an example screenshot:

Note the fragmented captions (in video) with transcript (below), in full sentences.
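The sentence re-assembly works roughly like this sketch (again illustrative, not our production code), which assumes each caption cue arrives as a begin time, end time, and text fragment:

[python]
def captions_to_transcript(cues):
    # cues: ordered list of (begin, end, text) caption fragments.
    # Accumulate fragments until one ends a sentence, keeping the begin
    # time of the first fragment as the sentence's timecode.
    sentences, buf, start = [], [], None
    for begin, end, text in cues:
        if start is None:
            start = begin
        buf.append(text.strip())
        if text.rstrip().endswith(('.', '?', '!')):
            sentences.append((start, ' '.join(buf)))
            buf, start = [], None
    if buf:  # trailing fragment that never reached terminal punctuation
        sentences.append((start, ' '.join(buf)))
    return sentences
[/python]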

Comments and video jumping

We’ve added comments! Like what you see or want to add your thoughts? Leave a note. One neat thing in the comments is that we convert mentions of specific times into links that jump the video playhead. So if you leave a comment with 3:13 in it, that text will turn into a link to that spot in the video.

Similarly, when that happens we change the hash of the page to link to that spot. The URL will change from http://channel.walkerart.org/play/my-video/ to http://channel.walkerart.org/play/my-video/#t=3m3s. Using that link anywhere else will jump the playhead to that point in the video. YouTube does the same thing, so we borrowed the idea.
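As a sketch of the idea (this is not the Channel’s actual code, and the helper name is made up), the comment linking could look something like:

[python]
import re

def linkify_timecodes(comment, video_url):
    # turn a "3:13"-style mention into a link whose hash jumps the playhead
    def repl(m):
        return '<a href="%s#t=%sm%ss">%s</a>' % (
            video_url, m.group(1), m.group(2), m.group(0))
    return re.sub(r'\b(\d+):([0-5]\d)\b', repl, comment)

print linkify_timecodes('Great question at 3:13',
                        'http://channel.walkerart.org/play/my-video/')
# Great question at <a href="http://channel.walkerart.org/play/my-video/#t=3m13s">3:13</a>
[/python]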

Search and backend

We’re using solr for the search engine on the Channel. Nate had great success with solr on ArtsConnectEd, so using solr was a no-brainer for us. The rest of the logic for the Channel is built using Django, a python web framework that I also worked with on the My Yard Our Message project. To connect Django and solr, we’re using django-solr-search (aka “solango”). It was necessary to subclass parts of solango to expose solr’s more-like-this functionality, which we use for the “Related Media” section. In retrospect, I probably should have used Haystack instead, since it supports that natively. As we move forward using solr and Django in other areas of the Walker’s website, we’ll probably switch to Haystack.

Funding

Funding for aspects of these updates came from the Bush Foundation, under a grant entitled “Expanding the Rules of Engagement with Artists and Audiences and Fostering Creative Capital in our Community”. This grant has many applications within the Walker as a whole, but for the online Walker Channel, it is specifically funding the upgrade of our camera and video equipment.

Tips and tricks: How to convert ancient real media video into a modern h.264 mp4


First of all, I’d like to apologize to all the people on twitter who follow me and had to endure my ranting about the trials and tribulations of converting real media files: I’m sorry.

So let’s say you have a pile of real media video that was recorded sometime earlier in the decade, when real video was still relevant, but you realize that no sane person these days has RealPlayer installed, so nobody can view it. What you really want is that video in an mp4, so you can stream it to a flash player or eventually use <video> in html5 (once they work that codec stuff out). If you do a little googling on how to convert real video into h.264 mp4, you’ll find lots of programs and forum posts claiming they know how to do it. But it’s mostly programs that don’t actually work and forum posts that are no longer relevant or strewn with blocking issues.

Thankfully, there is a better way, and I will lay it out for you.

Step one: Download the actual media
In our scenario, you have a list of 80 or so real media files that you need to convert. The URLs for those files probably look something like http://media.walkerart.org/av/Channel/Gowda.ram. If you were to download that .ram file, you’d notice that it’s about 59 bytes; clearly not enough to be the actual video file. What it is, is a pointer to the streaming location for the file. If you open up that .ram file in a text editor, you’ll see it points to rtsp://ice.walkerart.org:8080/translocations/media/Gowda.rm, which is the location on our real media streaming server here at the Walker. The thing we really want is the .rm file, but it can be a little hard to get via rtsp. Since we’re not stream-ripping someone else’s content (that would be wrong, dontcha know), we can just log in to the server and, based on the file path it’s looking for, grab the .rm via SCP or a file transfer mechanism of our choice. I happened to know that all our .rm files are actually accessible via HTTP, so I just did a little find/replacing in the URLs, built a list, and downloaded them with wget.
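The find/replace step is easy to script. Here’s a hedged sketch that assumes, as happened to be true for us but may well not be for you, that each .rm file lives at the same HTTP path as its .ram pointer:

[python]
# build a list of .rm URLs from the .ram URLs, then hand it to wget
ram_urls = [
    'http://media.walkerart.org/av/Channel/Gowda.ram',
    # ... the other 79 or so
]

out = open('rm_urls.txt', 'w')
for url in ram_urls:
    out.write(url.replace('.ram', '.rm') + '\n')
out.close()

# then, from the shell: wget -i rm_urls.txt
[/python]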

Step two: Convert the real media files to mp4
If you were trying to do this back in the day, it would have been a major pain: you’d have to use mencoder and the longest, most convoluted command-line arguments you’ve ever seen. Thankfully, Real recently came out with an updated version of RealPlayer that has a handy little thing in it called RealPlayer Converter. Sounds too good to be true, right? It is.

For larger files, it only works well on Windows, and it doesn’t give you a lot of options for encoding. The mac version will hang at 95% encoding for most files, and that’s pretty annoying; save yourself the trouble and use a Windows box. Once you have RealPlayer installed, open up the converter, drag your .rm files in, and set the conversion settings. Depending on what your original sources are, you might need to fiddle with the options. I used the h.264 for iPod and iPhone preset, because that fit the size (320×240) of my source files. I cranked the bitrates down to 512kbps and 128kbps, because my source .rm files were about 384kbps and 64kbps to start with. This will give you a .m4v file, which is basically a .mp4 file with a different extension, and should work OK for most stuff.

Queue everything up and let it rip. On a two-year-old PC, it took about a day to process 48 hours’ worth of video.

Step three: Check your files
This is the part where you curse a little bit, realizing that in half the videos you just encoded, the audio is out of sync with the video. This is a common problem when converting real video, and Real’s own tool doesn’t do a good job of handling it, never mind the fact that if you just play the video in RealPlayer, it plays in sync just fine. If you were to open the .m4v up in QuickTime Pro and look at the movie properties, you’d see something like this:

Notice the problem there? The video and audio tracks have different lengths, the video track being shorter than the audio. There is a way to fix this.

Step four: Synchronize the audio and video
There is a handy mac program that helps you fix just this synchronization issue. It’s called QT Sync. Operation is pretty simple: you open up a video file and fiddle with the video/audio offset until it is synced up. Here’s a screenshot:

Ideally, proper sync will occur when the number of frames is equal for both the audio and video. In my experience, most of my videos were synced when the video frame count was about 10 short of the audio frames, but your mileage may vary. Some of the videos I worked with would also slowly drift out of sync over time, and unfortunately, there isn’t a way to fix those. Just sync them up at the beginning and rest easy knowing you’ve done what you can.

Step four-and-a-half: Save the video
This is where things get tricky again. How you save the video depends on what you’re going to do with it. If your output target is just iPods and iPhones, and you’re not going to be streaming it from a streaming server, you have it good (if you are planning on streaming, skip to step five). You can save the video from QT Sync without re-encoding; you’ll just be writing the audio and video streams into a new .mp4 wrapper, this time with a proper delay set on one of the streams. To save the .mp4, you do File > Export and use “Movie to mpeg-4” as the format. Go into the options: you want “Pass through” as the format for both audio and video, and do not check anything in the streaming tab. Here’s what it looks like:

This will take a moment to write the file, but it won’t re-encode. If you open the resulting mp4 up in QuickTime Pro and look at the properties, you should see something like this:

Note how the video track has a start time 6 seconds later than the audio. This is good and should play in sync. Rinse and repeat for each of your videos that is out of sync, and you’re done.

Step five: Save the video
If you’re reading this, it’s because you want to take your converted video and stream it to a flash player, using something like Adobe Flash Media Server. If you were to take that synced, fixed-up mp4 from step 4.5, put it on your streaming media server, and start streaming, you’d notice that the audio and video were out of sync again. See, Adobe Flash Media Server doesn’t respect the delay or start time in an .mp4 file. I didn’t test other streaming servers like Wowza, but I’m guessing they suffer from the same issue. It sucks, but I can kind of see how it makes sense for a streaming server to expect the streams to already be in sync.

Instead, we are stuck fixing the video the hard way. You have the video synced up in QT Sync, but instead of saving it as a .mp4 as in step 4.5, save it as a reference movie with a .mov extension. We’re doing this because we’ve got to re-encode the video, again, essentially hard-coding the audio or video delay into the streams themselves, rather than just into the .mp4 wrapper.

Step six: Encode the video (again)
So, now you have a bunch of .mov reference files that are ready to be batch processed. You can use whatever software you like to do this, but I like MPEG Streamclip, which I wrote about a little in this post about iTunes U. It is way faster than Compressor, and it does batch processing really nicely.

You want to use settings that are similar to what your file is already using. I outlined those above, but here’s what the settings screen looks like:

Yes, you’re losing a bit of quality here, encoding the video for the second time, but there isn’t a way around it. Looking at the results, I couldn’t notice a difference between the original .rm file, the first-pass m4v, and the fixed and synced .mp4. There is no doubt some loss, but it is an acceptable trade-off to get a usable video format.