
See Change 2014: Creative Inspiration

The See Change 2014 conference at the University of Minnesota, now in its fifth year, brings together another diverse set of creative perspectives on design and the undercurrent of change driven by design. This year was no different. In the five years I have attended See Change, it has consistently given me inspiration and a view into a world of design that I am now entering late in my software development career. As an MFA student in interactive design, I consider attending See Change part of my curriculum. As an artist, I feel a connection with the creative drive of those who have made visual expression their line of gainful employment, sustaining, in a sense, both sides of their lives in one endeavor. All of this appeals to my personal sense of holistic integration.

Conference presentations ranged from how we work and interact as individuals to creativity theory. Along this spectrum, Aby Wolf led us through singing exercises, Paul Trani looked at the 3D printing revolution, and two inspiring photographers showed their great work and told wonderful stories. If any theme stands out among this diversity, it is this: how to find inspiration in your creative work. On that topic, photographer Douglas Kirkland advises, “keep many irons in the fire,” and his vast body of work expresses this passion and sustained inspiration. Annie Griffiths, after recounting the story of a hurried morning when she became so engaged with a photographic subject that she forgot to wear pants, advocates, “find a passion that makes you forget to put your pants on.”

For the finale, genial professor Barry Kudrowitz explored the close links between creativity and a type of humor based on incongruity, that is, on making non-obvious connections (as opposed to slapstick or cathartic types of humor). Making non-obvious associations, Kudrowitz posits, means getting past the obvious ones, which itself seems obvious. What studies have shown might not be so obvious: there is a simple correlation between the number of ideas you generate and the number of good ones. The good ideas are usually found at the tail of the chart (he’s also an engineer, so there were charts!). Getting past the usual and obvious means getting past the first ten or so ideas.

If it is possible to summarize See Change 2014 with an agglomeration of quotes (lacking attribution, sorry), here it is: scale is the enemy of doing good work, print is still important, collaboration is a key ingredient, suspend judgment, the quiet power of space, stay open, know your inspiration, just the right amount of wrong, find the creative hook and make bold statements, be comfortable being uncomfortable, silly ideas can be stepping stones, Tigers and Bears (hey, you had to be there!).

Introducing Media Lab and the New Walker Blogs

It’s been seven years since we launched the Walker Blogs, and with the release of our new website back in December we thought it was finally time for a refresh. You’ll notice that the design has changed to align with our new website, and we’ve used the redesign process as an opportunity to rebrand each of our core blogs. It was an interesting exercise and allowed us to assess the state of our collective blogging efforts – how each of our (now) nine blogs serves a different audience, how differently their audiences use them, and how they could all be focused into tighter streams of content. The blogs definitely represent the long-tail side of our publishing efforts – lots of small bits of specialized content for micro-niche audiences – so maintaining a strong emphasis on the personalities behind the Walker and their specific interests was key. And the rebranding process illustrated for us that when you present people with tangible criteria to change, such as a new name, a tighter description, a graphic – an understandable format to inhabit – it helps them better speculate on what their blog can be.

We decided on a system of flag graphics to represent the various blogs, since each blog is really a representation of a different group of people at the Walker (in most cases the individual programming departments). It’s a tricky balance to strike between striving for traditional, recognizable flag forms and having a graphic that cleverly plays off the title, but we’re glad to have a consistent vocabulary to build on in the future, especially since the blogs now match our comparatively monochromatic main site. We’re particularly fond of the Green Room’s flag.

Beyond the simple graphic forms, this is the first truly responsively designed Walker site – resize your browser window to see things reflow to fit a variety of screen sizes. Content and interface elements of lesser importance become hidden behind links at certain screen sizes. The main content area, on the other hand, stretches to fill a large width when called for. It leads to some pretty long line lengths, but gives our older, image-heavy content the space it needs to fit. We’ll soon be applying this technique to the redesigned Walker Collections, which features a strong publishing component. With its easy adaptations to tablets and mobile devices, it’s a good fit for our eventual goal of efficient multi-channel communications.

Other, smaller items of note include the addition of a grid/list view toggle in the top left to make skimming easier, smarter ordering of categories and authors (by popularity and date of last post, respectively), and a fun little flag animation when you roll over the left-side blog names (in full-width view).

And just for kicks, here are some rejected flag sketches:

Walkerart.org Design Notes #1

As you’ve likely seen, we recently launched a brand new, long overdue redesign of our web presence. Olga already touched on the major themes nicely, so suffice it to say, we’ve taken a major step towards reconceptualizing the Walker as an online content provider, creating another core institutional offering that can live on its own as an internationally-focused “digital Walker,” instead of something that merely serves the local, physical space.

We largely started from scratch with the user experience and design of the site; the old site, for all its merits, had started to show its age on that front, being originally designed over six years ago – an eternity in web-years. That said, we’re still traditionalists in some ways where new media design is concerned, and took a really minimal, monochromatic, print/newspaper-style approach to the homepage and article content. So in a way, it’s a unique hybrid of the old/time-tested (in layout) and new/innovative (in concept and content), hopefully all tempered by an unadorned, type-centric aesthetic that lets the variety of visuals really speak for themselves.

Our inspiration was a bit scattershot, as we tried to bridge a gap between high and low culture in a way reflective of the Walker itself. Arts and cultural sites were obviously a big part (particularly Metropolis M and its wonderful branded sidebar widgets), but not so much museums, which have traditionally been more conservative and promotionally driven. With our new journalistic focus, two common touchstones became The New York Times’ site and The Huffington Post – with the space in between being the sweet spot. The former goes without saying. The latter gets a bad rap, but we were intrigued by its slippery, weirdly click-enticing design tricks and general sense of content-driven chaos enlivened by huge contrasts in scale. The screaming headlines aren’t pretty, but they’re tersely honest and engaging in an area where a more traditional design would introduce some distance. And the content, however vapid, is true to its medium; it’s varied and easily digestible. (See also Jason Fried’s defense of the seemingly indefensible.)

Of course, we ended up closer to the classier, NYT side of things, and to that end, we were really fortunate to start this process around the advent of truly usable web font services. While the selection’s still rather meager beyond the workhorse classics and a smattering of more gimmicky display faces (in other words, Lineto, we’re waiting), I’m really just happy to see less Verdana in the world. And luckily for us, the exception-to-the-rule Colophon Foundry has really stepped up their online offerings lately – it’s Aperçu that you’re seeing most around the site, similar in form to my perennial favorite Neuzeit Grotesk but warmer, more geometric, and with a touch of quirk.

Setting type for the web still isn’t without its issues, with even one-point size adjustments resulting in sometimes wildly different renderings, but with careful trial-and-error testing and selective application of the life-saving -webkit-font-smoothing CSS property, we managed to get as close as possible to our ideal. It’s the latter in particular that allows us elegant heading treatments (though only visible in effect to Safari and Chrome): set to antialiased, letterforms are less beholden to the pixel grid and more immune to the thickening that sometimes occurs on high-contrast backgrounds.

It’s not something I’d normally note, but we’re breaking away from the norm a bit with our article treatments, using the more traditional indentation format instead of the web’s usual paragraph spacing, finding it to flow better. It’s done using a somewhat complex series of CSS pseudo-elements in combination with adjacent selectors – browser support is finally good enough to accomplish such a thing, thankfully, though it does take a moment to get used to on the screen, strangely enough. And we’re soon going to be launching another section of the site with text rotation, another technique only recently made possible in pure CSS. Coming from a print background, it’s a bit exciting to have these tools available again.

Most of the layout is accomplished with the help of the 960 Grid System. Earlier attempts at something more semantically meaningful proved more hassle than they were worth, considering our plethora of more complex layouts. We’ve really attempted something tighter and more integrated than normally seen on the web, and I think it’s paid off well. That said, doing so really highlighted the difficulties of designing for dynamic systems of content – one such case that reared its head early on was titles in tiles (one of the few “units” of content used throughout the site).

It was a tricky issue at first, considering our avoidance of ugly web aesthetics like fades (and artificial depth/dimensionality, and gradients, and drop shadows…), but one we eventually solved with the implementation of our date treatments, whose connecting lines also function nicely as a cropping line – a tight, interlocking, cohesive system using one design element to solve the issues of another. We’ve tried to use similar solutions across the site, crafting a system of constraints and affordances, as in the case of our generated article excerpts:

Since we’re losing an element of control with freeform text fields on the web and no specific design oversight as to their individual display, we’ve chosen to implement logic that calculates an article title’s line-length, and then generates only enough lines of the excerpt to match the height of any neighboring articles. It’s a small detail for sure, but we’re hoping these details add up to a fine experience overall.
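To make that logic concrete, here is a rough sketch of the calculation. It’s written in PHP only to match the other code on this blog (not necessarily the site’s actual implementation), and the function name and characters-per-line figure are invented for illustration.

[php]
<?php
// Hypothetical sketch: given a title and the height (in text lines) of a
// neighboring tile, decide how many excerpt lines this tile may show.
// The characters-per-line estimate is an invented placeholder value.
function excerpt_lines_allowed($title, $neighbor_lines, $chars_per_line = 38) {
    $title_lines = (int) ceil(mb_strlen($title) / $chars_per_line);
    return max(0, $neighbor_lines - $title_lines);
}

// e.g. a longer title eats into the space available for its excerpt
echo excerpt_lines_allowed('Walkerart.org Design Notes #1', 6); // 5
?>
[/php]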

Anyway, there’s still more to come – you’ll see a few painfully neglected areas here and there (our collections in particular, but also the Sculpture Garden and to a lesser extent these blogs), but they’re next on our list and we’ll document their evolution here.

Process/miscellany

Digital Wayfinding in the Walker, Pt. 1

An ongoing conversation here at the Walker concerns the issue of systemic wayfinding within our spaces — certainly an important issue for an institution actively seeking attendance and public engagement, not to mention an institution whose building is literally a hybrid of the old and new (with our 2005 expansion). While not normally in New Media’s purview, and only occasionally so for Design, a recent initiative to improve the flow and general satisfaction of visitors brought with it the idea of using digital displays, with their malleable content and powerful visual appeal, to guide and direct people throughout the Walker.

Our new static directional signage

Currently installed in one location of an eventual three, and with a simple “phase one” version of the content, the Bazinet Lobby monitor banks cycle through the title graphics for all the exhibitions currently on view, providing a mental checklist of sorts that lets visitors tally what they have or haven’t yet seen and that directly references the vinyl graphics at each gallery entrance. The corner conveniently works as an intersection for two hallways leading to a roughly equivalent number of galleries in either direction, one direction leading to our collection galleries in the Barnes tower, and the other to our special exhibition galleries in the Herzog & de Meuron expansion. To this end, we’ve repurposed the “street sign” motif used on our new vinyl wall graphics to point either way (which also functions as a nice spatial divider). Each display tower cycles through its given exhibitions with a simple sliding transition, exposing the graphics one by one. An interesting side effect of this motion and the high-contrast LCDs has been the illusion that each tower is a ’70s-style mechanical lightbox; I’ve been tempted to supplement it with a soundtrack of quiet creaking.

The system, powered by Sedna Presenter and running on four headless, remotely-accessible Mac Minis directly behind the wall, affords us a lot of flexibility. While our normal exhibitions cycle is a looped After Effects composition, we’re also working on everything from decorative blasts of light and pattern (the screens are blindingly bright, enough to bathe almost the entire lobby in color), to live-updating Twitter streams (during parties and special events), to severe weather and fire alerts (complete with a rather terrifying pulsating field of deep red). In fact, this same system is now even powering our pre-show cinema trailers. I’m particularly interested in connecting these to an Arduino’s environmental sensors that would allow us to dynamically change color, brightness, etc. based on everything from temperature to visitor count to time of day — look for more on that soon.

See it in action:

Behind the scenes / Severe weather alert:

 

Installation:

  

Building the 50/50 Voting App

50/50 Voting App

For our upcoming exhibition 50/50: Audience and Experts Curate the Paper Collection, we’re trying something a bit different. As you can probably tell from the title, we’re allowing our audience to help us curate a show. The idea is that our chief curator, Darsie Alexander, will curate 50% of the show, and the audience will select from a group of 180 different print works for the other half.

As with most things presented to New Media, the question was posed: “How best do we do this?” The exhibition is being hung in the same room as Benches and Binoculars, so the obvious answer was to use the kiosk already there as the voting platform for the show. With this in mind, I started to think of different ways to present the voting app itself.

My initial idea was to do a “4-up” design: display four artworks and ask people to choose their favorite. The idea was that this would make people confirm a choice in comparison to others. If you see some of what you’re selecting against, it can be easier to know whether you want specific works in the show or not. But it also has the same effect in reverse: if you have two artworks that you really like, it can be just as hard to choose only one. The other limitation? After coming up with the 4-up idea, we also decided to add iPhones into the mix as a possible voting platform (as well as iPads and general web browsers), and the images on the iPhone’s screen were much too small to make decent comparisons.

Nate suggested instead using a “hot or not” style voting system. One work that you basically vote yes or no on. This had the small downfall of not being able to compare a work against others, but allowed us to negate the “analysis paralysis” of the 4-up model. It also worked much better on mobile devices.

The second big decision we faced was “what do we show?” I had assumed in the beginning that we’d be showing label copy for every work, like we do just about everywhere else, but it was suggested early on that we do no such thing. We didn’t want to influence voters by having a title or artist on every piece. With works by Chuck Close and Andy Warhol mixed into the print selections, it’s much too easy to see their name and vote for them simply because of their name. We wanted people to vote on what work they wanted to see, not what artist they wanted to see.

Both of these decisions proved to be pivotal in the popularity of the voting app. They made it very streamlined and simple. With 180 works to go through, that makes it much easier to get through the entire thing. Choices are quick and easy. The results screen after voting on each artwork shows the current percentage of no to yes votes. This is a bit of a psychological pull: you as a user know what you think of this artwork, but what do others think about it? The only way to find out is to vote.

50/50 Voting App Results Screen

Because of this, the voting app has been a success far beyond what we even thought it would be. I thought if we got 5,000-10,000 votes we would be doing pretty well. Halfway through the voting process now, we have well over 100,000 votes. We’ve had over 1,500 users voting on the artworks. We’ve collected over 500 email addresses from people wanting to know who the winners are when all the voting is tallied. We never expected anything this good, and we have several weeks of voting yet to come.

One interesting outcome of all of these votes has been the balance of yes votes to no votes across all of the works. Since the works are presented randomly (well, pseudo-randomly for each user), one might expect that half the works would have more yes than no votes, and vice versa. But that’s not turned out to be the case. About 80% of the works have more no votes than yes votes. Why is this?

There are various theories. Perhaps people are more selective if they know something will be on view in public. Perhaps people in general are just overly negative. Or perhaps people really don’t like any of our artwork!

But one of the more interesting theories goes back to the language we decided to use. Originally we were going to use the actual words “Yes” and “No” to answer the question “Would you like to see this artwork on view?” This later got changed to “Definitely” and “Maybe Not”. Notice how the affirmative answer has much more weight behind it (“Yes, most definitely!”), whereas the negative option leaves you a bit of wiggle room (“Eh, maybe not”). It’s this differentiation between being sure of a decision and perhaps not so sure that may have contributed to people saying no more often than yes.

Which raises the question: what if it were changed? What if the options instead were “Definitely Not” and “Sure”? Now the definitive answer is on the negative side and the positive answer has more room to slush around (“Hell no!” vs. “Ahh sure, why not?!”). It would be interesting to see what the results would have been with this simple change in language. Maybe next time. This round, we’re going to keep our metrics the same throughout to keep things consistent.

The voting for 50/50 runs until Sept 15. If you’d like to participate, you still have time!

Changes in New Media, job opening for a web designer/developer

There are changes in store for the New Media Initiatives department. After being with the Walker for four years, I’ve taken a position across the river at Minnesota Public Radio as a Web Designer/Developer. It is very hard for me to leave the Walker, but I’m excited about working on new projects for an even larger audience at MPR.

This means there is a job opening in New Media, and if you’re a web nerd, you should consider sending in your resume. My title at the Walker is New Media Designer, but the job posting is for a Web Designer/Developer. This reflects the changing nature of the work I’ve taken on over the years, including doing more back-end development work on the Walker Channel and Mobile Site, amongst other projects. In the future, we have work planned around an overhaul of major portions of the Walker website. Our tool of choice is Django, but even if you don’t have Python or Django experience, consider applying. I didn’t know a lick of Python or Django when I tackled My Yard Our Message, but it was easy to get up to speed and make things happen.

Full details for the position are listed on the jobs site, and the deadline for applying is September 3rd.

Creating a community calendar using Google Apps and WordPress

For Walker Open Field, we wanted a way to collect community submitted events and display them on our site. We have our own calendar and we discussed whether adding the events to our internal Calendar CMS was the best way, or if using an outside calendar solution was the direction to go. In the end, we decided to do both, using Google Calendar for community events and our own calendar CMS for Walker-programmed events.

The Open Field website is based on the lovely design work of Andrea Hyde, and the site is built using WordPress, which we use for this blog and a few other portions of our website. WordPress is relatively easy to template, which makes for quick development. WordPress also has a load of useful plug-ins and built-in features that saved us a lot of time. Here’s how we put it all together:

Collecting Events
To accept event ideas from community members, we used the WordPress Cforms II plugin, which makes it very easy to build otherwise complex forms and process them. You can create workflows with the submissions, but we simply have Cforms submit the form to us over email. A real person (Shaylie) reviews each of the event submissions and adds the details to…

Google Calendar
We use Google’s Calendar app to contain the calendar for the community events. When Shaylie gets the email about a new event, she reviews it, follows up on any conflicts or issues, and then manually adds it to Google Calendar. We toyed with the idea of using the Calendar API to create a form that would allow users to create events directly in the calendar, but decided against it for two reasons. First, it seemed overly complicated for what we thought would amount to fewer than 100 events. Second, we would still have to review every submission, and it would be just as cumbersome to do it after the fact as beforehand.

We also use Google Calendar to process our own internal calendar feed. The Walker Calendar can spit out data as XML and iCal. We have our own proprietary XML format that can be rather complex, but the iCal format is widely understood and Google Calendar can import it as a subscription.

Getting data out of Google Calendar
We now have two calendars in Google Calendar: Walker Events and Community Events. Google provides several ways to get data out of Google Calendar, and the one we use is the Atom format with a collection of Google-namespaced elements. The calendar API is quite robust, but there are a few things worth noting:

  • You must ask for the full feed to get all the date info
  • Make sure you set the time zone, both on the feed request and on the events when you link to them (using the ctz parameter)
  • Asking for only futureevents and singleevents (as parameters) makes life easier, since you don’t have to worry about the complexities of figuring out the repeating logic

This is our feed for Open Field Community Events.

Calendar data into WordPress
Since version 2.8, WordPress has included the most excellent SimplePie RSS/Atom parsing library. As the name would have you believe, it is pretty simple to use. To pull the data out of the Google Calendar items with SimplePie, you extend the SimplePie_Item class with some extra methods to get at that gd:when data; a sketch of such a class is below.
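The SimplePie_Item_Extras class isn’t included in the post, but based on the calls in the code further down (get_gcal_starttime and get_gcal_endtime), a minimal version might look something like this. It assumes the standard gd:when element from the Google Data namespace and is an illustration, not the original class.

[php]
<?php
// Minimal sketch of the item subclass referenced below: it digs the
// <gd:when> start/end attributes out of each Google Calendar entry.
class SimplePie_Item_Extras extends SimplePie_Item {

    function get_gcal_when_attr($attr) {
        // Google Data namespace used for <gd:when startTime="" endTime="">
        $when = $this->get_item_tags('http://schemas.google.com/g/2005', 'when');
        if (empty($when[0]['attribs'][''][$attr])) return null;
        return strtotime($when[0]['attribs'][''][$attr]);
    }

    // Return the start/end time, optionally run through date() with a format
    // string (the code below passes 'U' to get a Unix timestamp).
    function get_gcal_starttime($format = null) {
        $ts = $this->get_gcal_when_attr('startTime');
        return ($ts && $format) ? date($format, $ts) : $ts;
    }

    function get_gcal_endtime($format = null) {
        $ts = $this->get_gcal_when_attr('endTime');
        return ($ts && $format) ? date($format, $ts) : $ts;
    }
}
?>
[/php]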

Combining two feeds in SimplePie is not hard. By default, SimplePie will sort them by the modified or published date, which in the Google Calendar API is the date the event was edited, not when it happens. Instead, we want to sort them by the event’s start time (the gd:when data). There are probably a few ways to do this, but the way I set it up was to simply loop through all the data, put it into an array with the timestamp as the key, and then sort that array by key. Here’s the code:

[php]
<?php

// include the rss/atom feed classes
include_once(ABSPATH . WPINC . '/feed.php');
require_once(ABSPATH . WPINC . '/class-feed.php');

// Get a SimplePie feed object from the specified feed sources.
$calFeedsRss = array(
    # walker events
    'http://www.google.com/calendar/feeds/95g83qt1nat5k31oqd898bj2ako1phq1%40import.calendar.google.com/public/full?orderby=starttime&ctz=America/Chicago&sortorder=a&max-results=1000&futureevents=true',

    # public events
    'http://www.google.com/calendar/feeds/walkerart.org_cptqutet6ou4odcg6n2mvk4f44%40group.calendar.google.com/public/full?orderby=starttime&ctz=America/Chicago&sortorder=a&max-results=1000&futureevents=true&singleevents=true'
);

$feed = new SimplePie();
$feed->set_item_class('SimplePie_Item_Extras');
$feed->set_feed_url($calFeedsRss);
$feed->set_cache_class('WP_Feed_Cache');
$feed->set_file_class('WP_SimplePie_File');
$feed->set_cache_duration(apply_filters('wp_feed_cache_transient_lifetime', 3600)); // cache things for an hour
$feed->init();
$feed->handle_content_type();

if ( $feed->error() )
    printf('There was an error while connecting to the feed server, please try again!');

$cals = array();
$count = 0; // hack, but we're going to count each loop and use it as a little offset on the sort val
foreach ($feed->get_items() as $item) {

    if (strtolower($item->get_title()) != 'walker open field') {
        $eventType = 'walker';
        $related = $item->get_links('related');
        $related = $related[0];
        if ( strpos($related, 'walkerart.org') === false ) {
            $related = $item->get_link();
            // if it's a google calendar event, make sure we set the time zone
            $related .= "&ctz=America%2FChicago";
            $eventType = 'community';
        }

        # we offset the actual starttime a little bit in case two events have the same start time; they would overwrite in the array
        $sortVal = $item->get_gcal_starttime('U') + $count;
        $myData = array(
            'title'     => $item->get_title(),
            'starttime' => $item->get_gcal_starttime('U'),
            'endtime'   => $item->get_gcal_endtime('U'),
            'link'      => $related,
            'eventType' => $eventType,
            'text'      => $item->get_content(),
            'date'      => $item->get_gcal_starttime('U')
        );
        $cals[ $sortVal ] = $myData;
    }
    $count++;
}
// sort the array by keys
ksort($cals);
// $cals now contains all the event info we'll need

?>
[/php]

Once this is done, you can simply take the $cals array and loop through it in your theme as needed. Parsing the google calendar feeds is not an inexpensive operation, so you may wish to use the Transients API in WordPress to cache this information.
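For example, a minimal caching wrapper using get_transient() and set_transient() might look like the sketch below; the transient key and the build_openfield_events() wrapper are made-up names for illustration.

[php]
<?php
// Hypothetical caching wrapper: reuse the parsed $cals array for an hour
// instead of re-fetching and re-sorting the Google Calendar feeds on every
// page load. build_openfield_events() stands in for the code above.
function get_openfield_events() {
    $cals = get_transient('openfield_events');
    if (false === $cals) {
        $cals = build_openfield_events();               // the feed parsing/sorting above
        set_transient('openfield_events', $cals, 3600); // cache for one hour
    }
    return $cals;
}
?>
[/php]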

Caveats and Issues
Overall, this approach has worked well for us. We have run into some issues where the Google Calendar Atom feed would show events that had been deleted. Making sure to set the futureevents and singleevents parameters fixed this. We also ran into some issues using singleevents, so we ended up manually creating occurrences for events that would otherwise have had a repeating structure.

Announcing the new Walker Channel — HD video, improved design, search, accessibility

The Walker Channel, in existence since 2003, has recently undergone a redesign. The old Walker Channel was originally built to serve video and stream live webcasts using Real Video. It had slowly evolved over time to use the friendlier MPEG-4 and H.264 formats, and even moved from Real Video to the better ustream.tv for live streaming. But it never really caught up to the modern, YouTube era of video. The redesign we just completed did that, and added a few other goodies.

Visual Design


Quite obviously, the site has undergone a major visual overhaul. The old site had almost no hierarchy to the video archive, which worked OK with a handful of videos, but with 200+ in the archive, it became unwieldy to find a particular video or just browse.

Just like with our iTunes U site, we’ve split our internal, museum-centric departments into more logical genres. For example, instead of just “Performing Arts,” we have Dance, Theater, and Music. We also highlight content by its recency and, more importantly, by its popularity (view count). None of this is ground-breaking in 2010, but it’s a big upgrade from 2003.

Streaming H.264 Video

We’re now serving all our video content as streaming h.264 video. This means you can watch a video and jump to any place in the timeline before it has buffered to that spot. Using h.264 enables us to easily switch to HTML5 and support other devices down the road. We converted all our older Real Media video into h.264 mp4s.

We also utilize YouTube to serve many of our newer videos. We have already been putting all our Channel content on YouTube for about a year, so there’s no need to upload it twice. YouTube serves a relatively high-quality FLV or MP4 file, and this means we do not pay for bandwidth, which is not an insignificant cost consideration.

Where we’re not using YouTube, we’re using Amazon CloudFront and its new streaming support, which runs Adobe’s Flash Media Server for us. This means that we don’t have to run our own instances of EC2 and Wowza to encode and stream the video. We upload our video manually, so we don’t need to encode our video in “the cloud”.

High Definition Video

We also upgraded our camera and video capture equipment to enter the beautiful HD world. We now capture all lectures in HD and webcast them live at 640×360. Going forward, archived versions will be posted at 720P (1280×720). Drawn Here (and there): HouMinn Practice is our first video posted in HD, and it looks great. Here’s a visual representation of what this new video means, comparing the resolutions we have from older content:


We have also added a video switcher to our hardware repertoire. The switcher lets us show the presenter’s slides, in-stream, rather than just pointing the camera at the projection screen. This switcher enables a dramatic improvement in video quality, and will be especially useful for Architecture / Design lectures, which typically feature many slides.

Transcripts and captions

Starting with our new recordings in 2010, we’re adding closed captions and transcripts for nearly every video. This video is a good example. That means a couple things:

  • Videos are more accessible for deaf or hard of hearing viewers
  • It enables you to visually scan the contents of a video to key in on a section you want to watch. In the example video, clicking on the time code on the right jumps the playhead to that point in the video.
  • It gives us much more meaningful text to search on. Search engines are still text-based, so having more than just the video description to search is a great thing.

We create our transcripts by sending our video to CastingWords. The transcripts that CastingWords generates are then fed into YouTube’s machine caption processing feature, generating captions for the video in the form of an .SBV file. The .SBV file is then pulled back into the Walker Channel, where we convert it on the fly to W3C TimedText format for use in jwplayer as captions.
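The Channel itself runs on Django, so the following is only a rough sketch of the conversion idea, written in PHP to match the other code on this blog: split the .SBV file into cues and wrap each one in a TimedText paragraph. The namespace and markup shown are a generic TTML skeleton; the exact flavor jwplayer expects may differ.

[php]
<?php
// Rough sketch only: convert .SBV caption cues ("start,end" line followed by
// text, blank-line separated) into minimal TimedText <p> elements.
function sbv_to_timedtext($sbv) {
    $out = "<tt xmlns=\"http://www.w3.org/ns/ttml\"><body><div>\n";
    foreach (preg_split("/\n\s*\n/", trim($sbv)) as $cue) {
        $lines = explode("\n", trim($cue));
        list($begin, $end) = explode(',', trim(array_shift($lines)));
        $text = htmlspecialchars(implode(' ', $lines));
        $out .= "  <p begin=\"$begin\" end=\"$end\">$text</p>\n";
    }
    return $out . "</div></body></tt>";
}
?>
[/php]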

We also re-format the captions as a transcript for display in the Transcript tab on the video. Captions tend to be broken up not by sentence, but by how the speaker is talking and how they’ll fit on screen. Transcripts, on the other hand, are read more traditionally, and should be read in complete sentences. So we break the captions up and re-form them in complete sentences with associated timecodes. Here’s an example screenshot:

Note the fragmented captions (in video) with transcript (below), in full sentences.

Comments and video jumping

We’ve added comments! Like what you see or want to add your thoughts? Leave a note. One neat thing in the comments is that we convert mentions of specific times into links that jump the video playhead. So if you leave a comment with 3:13 in it, it will turn into a link to that spot in the video.

Similarly, when that happens we change the hash for the page to point to that spot. The URL will change from http://channel.walkerart.org/play/my-video/ to http://channel.walkerart.org/play/my-video/#t=3m13s. Using that link anywhere else will jump the playhead to that point in the video. YouTube does the same thing, so we borrowed the idea.
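As an illustration of the comment-linking idea (again in PHP only to stay consistent with the other code on this blog, with a made-up function name; the Channel itself is Django):

[php]
<?php
// Illustration only: turn time references like "3:13" in a comment into
// links whose hash seeks the player to that point (e.g. #t=3m13s).
function linkify_timecodes($comment) {
    return preg_replace_callback('/\b(\d{1,2}):([0-5]\d)\b/', function ($m) {
        return sprintf('<a href="#t=%dm%ds">%s</a>', $m[1], $m[2], $m[0]);
    }, $comment);
}

echo linkify_timecodes('The best part starts at 3:13.');
// The best part starts at <a href="#t=3m13s">3:13</a>.
?>
[/php]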

Search and backend

We’re using solr for the search engine on the channel. Nate had great success with solr on ArtsConnectEd, so using solr was a no-brainer for us. The rest of the logic for the channel is built using Django, a python web framework that I also worked with for the My Yard Our Message project. To connect Django and solr, we’re using django-solr-search (aka “solango”). It was necessary to sub-class parts of solango to get it to present solr’s more-like-this functionality that we use for the “Related Media”. In retrospect, I probably should have used Haystack Search instead, since it supports that natively. As we move forward using solr and django in other areas of the Walker’s website, we’ll probably switch to using Haystack.

Funding

Funding for aspects of these updates came from the Bush Foundation, under a grant entitled “Expanding the Rules of Engagement with Artists and Audiences and Fostering Creative Capital in our Community.” This grant has many applications within the Walker as a whole, but for the online Walker Channel, it is specifically funding the upgrade of our camera and video equipment.

Building the Benches and Binoculars Touchscreen Kiosk

[flickrvideo]http://www.flickr.com/photos/vitaflo/4119139342/[/flickrvideo]

For our exhibition Benches and Binoculars, I was asked to create a touchscreen kiosk. The artwork in Benches and Binoculars is hung salon-style, making it impractical to use wall labels on works that are hanging 20 feet up in the air. Many get around this by having a gallery “map” (and our Design dept did create these as well for the exhibit), but much like the exhibition itself, we thought it was a good time to “re-imagine” the gallery map.

I had never worked on a touchscreen app before. Sure, I’ve created kiosks here at the Walker, but a touchscreen brings some new challenges, as well as some new opportunities. Input is both easier and more difficult: you just use your hands, but people aren’t always sure how they are supposed to use their hands to perform actions, or even that they can.

Walker Director Olga Viso using the Benches and Binoculars kiosk

As such, my main goal when making the kiosk was to keep it simple: don’t let the interface get in the way of the information. The interface should make it easy to find the content you want. Too many times I’ve seen these types of devices become more about the technology than about the content on them. This meant making the kiosk less “flashy,” but in turn it also made it more useful.

In the end the layout was rather simple. The screen has an exact (to the pixel) representation of the artwork hanging on the walls. Moving your hand right and left on the kiosk moves the walls on screen left and right. Tapping on an artwork brings up a modal window with a high-res image of the object as well as the label text. There is nothing particularly fancy or new about this idea, and there really shouldn’t be. Anything more would have taken away from the experience you were there for, namely viewing the artworks on the walls.

As for the technology involved, we decided to use the HP Touchsmart PC for this particular kiosk. It uses an infrared field above the screen to track “touch”. As such you don’t actually have to make physical contact with the screen to activate a touch event, you just have to break the infrared plane.

We decided on the 22″ version because we wanted the machine to be single use. With the way the computer is set up, it’s not all that great at multi-touch as it is. And wanting to keep the device as simple as possible led to wanting to keep it usable by one person at a time. There is a larger version of the Touchsmart, but any larger than 22″ and it felt like you were supposed to have more than one person use it at a time, which we wanted to stay away from.

Since we didn’t have to worry about multi use, we had a few more options on what to build the interface with. Most people would probably go the Flash route but for us Flash is usually the choice of last resort. This is for various reasons, not the least of which for me is lack of experience with Flash. But most of what you can do in Flash these days can also be done in the browser, and given that front end interfaces are my forte, that’s where I went.

The interface is just a simple HTML page that dynamically calls ArtsConnectEd for its data. Thankfully, Nate was able to leverage a lot of the work he did on ACE for this, which sped up development drastically. Interaction is built with some jQuery scripts I wrote. All in all, it wasn’t all that difficult to put together, except for a few snags (aren’t there always a few?).

Using the Kiosk.

One snag, which I found very early on, was that interacting with a touchscreen is a lot different from using a mouse. Hit areas are much different, since when you press on a screen your finger tends to “roll.” On the first mousedown event, the tip of your finger is in one spot, but as you press, the mouse position shifts lower on the screen as your finger flattens out against it. This means the mouseup event lands in a totally different spot, which can cause issues when trying to register a proper click. The same problem exists when trying to register a drag event. As such, I had to program in some “slush” room to compensate.

The second issue I had was with the computer and browser itself. The Touchsmarts, while having a decent CPU, were really slow and sluggish in general. I had from the beginning targeted Firefox as the development platform, mainly because it has many fullscreen kiosk implementations available as add-ons. But once I loaded up 98 images with all of the CSS drop shadows, transparencies, etc., the entire browser was very sluggish and choppy.

I had read recently that Google was pushing to make Chrome v4 a lot faster, and their new beta had just been released. Testing it, I found that it was about 3 times faster than Firefox. The issue was that it had no true kiosk mode. I was in a bind: I had a nice fullscreen kiosk in Firefox that was choppy, and a decent-speed browser in Chrome that had no kiosk mode.

After much searching I found that a kiosk patch was in development for the browser. The only issue was patching it into a build. Unfortunately, Google’s requirements for building Chrome on Windows are not trivial, and I couldn’t find anyone to do it for me. In desperation, I emailed the creator of the patch, Mohamed Mansour, to see if he could build me a binary with his patch in it. Thankfully he came through and was able to offer up a custom build of Chrome with the kiosk mode built in that I could use for the exhibition. It’s worked wonderfully (note: this patch has since been checked into the Google Chrome nightlies).

In the end it turned out better than I thought it would. Chrome was fast enough for me to go back and add in new features like proper acceleration when “throwing” the walls. And the guys in the Walker carpentry shop, especially David Dick, made a beautiful pedestal to install the kiosk in, complete with a very nice black aluminum bezel. I couldn’t be happier, and from the looks of it, neither could our visitors. It goes a long way toward my (and New Media’s) goal of taking complex technology and making it simple for users, as well as the Walker’s mission of the active engagement of audiences.

You can see more photos in my Flickr set:
http://www.flickr.com/photos/vitaflo/sets/72157622839288542/

New Media kills in the Walker’s pumpkin carving contest

Every year, the Walker has a staff Halloween party, which includes a departmental pumpkin carving contest. And this isn’t just a carve-a-grocery-store-pumpkin contest; it’s a creative, conceptual, re-imagine-an-artist-or-artwork pumpkin contest. Invariably, our carpentry shop and registration departments blow everyone else out of the water. Those of us who are a little less hands-on with the artwork tend to be outclassed every year (exhibits 1, 2, and 3). New Media Initiatives never wins.

But not this year.

This year, we had a plan.

Actually, we came up with the plan after our no-show defeat last year, but we smartly held onto it for this year (thank you, iCal). On the day of the contest, we replaced every image of artwork on the Walker website with an image of a pumpkin.

walker homepage with pumpkins

And the rest of the pages (click to embiggen):

Calendar

Collections and Resources

Artists-in-Residence

Visual Arts

Design Blog



We ended up winning in the “Funniest Pumpkin” category.

Because we serve all of our media from a single server using lighttpd, and our files are all uniformly named, we were able to implement a simple rule set in lighty to replace the images. Instead of the requested file, each image request was redirected to a simple Perl script that would grab a random JPEG from our pool of pumpkin images and send its contents instead. Part of the plan was that we would only serve these images to people visiting our site from inside our internal network; the rest of the world would see our website just as always. In our department, we all unplugged our ethernet cables and ran off of our firewalled WiFi, which effectively put us outside the network, so we saw nothing different on the site. We had a hard time holding back evil cackles as people came to us wondering how our site was hacked, and watching it slowly dawn on them that this was our pumpkin.
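The original was a small Perl CGI sitting behind a lighttpd rewrite rule. Purely as an illustration of what that script does, sketched here in PHP to match the rest of the code on this blog, with a made-up path:

[php]
<?php
// Illustration of the image-swapping script (the real one was Perl):
// pick a random JPEG from the pumpkin pool and send it in place of
// whatever artwork image was actually requested.
$pool = glob('/srv/media/pumpkins/*.jpg');   // hypothetical path
$file = $pool[array_rand($pool)];
header('Content-Type: image/jpeg');
header('Content-Length: ' . filesize($file));
readfile($file);
?>
[/php]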

The images we used were all the creative commons licensed flickr images of pumpkins I could find. There were 54 of them. Here they are, for credit:
