
Getting Mobile in the Garden


This summer marks a major milestone for the Minneapolis Sculpture Garden: 25 years as one of the country's premier public sculpture parks. The New Media Initiatives department's contribution to the celebration comes in the form of a brand new website for the garden, a fully responsive "web app" that has been an exciting challenge to build.

Opening screen of the web app / Map view of the garden

The new site is a radical shift from the static, research-focused 2004 version, and instead becomes an on-demand interpretive tool for exploration in the garden, including an interactive, GPS-capable map, audio tour, video interviews, and short snippets called “fun facts.” One of the most exciting features is the new 25th Anniversary Audio Tour called Community Voices. Last summer we recorded interviews in the garden with community members, first-time visitors, and some local celebrities, and it’s all come together in this tour to present a fantastic audio snapshot of just how special the garden is to people.

Detail view of Spoonbridge and Cherry / Interpretive media for Spoonbridge

The site provides light, casual information “snacking,” with prompts to dive deeper if time and interest allow. It gives visitors a familiar device (their own!) to enhance their visit at their own convenience.

Of course, we didn’t neglect our out-of-state or desktop visitors, but the site’s focus remains on getting people to the garden. For those unable to experience it physically (or for those frigid winter months), the new website provides a browsable interface to familiar favorites and up-to-date acquisitions and commissions.

Behind the scenes

MSG Web Data

Our proof of concept for the site was lean and mean, built quickly using open source tools (leaflet.js) and open data (OpenStreetMap). We didn’t have latitude/longitude positioning info for our public works of art, but as it turned out some kind soul had already added a significant number of our sculptures to OpenStreetMap! We set about adding the rest and knocked together a new “meta API” for the garden that would unify data streams from OSM, our Collections, Calendar, and existing media assets in Art on Call.
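For the curious, here's a minimal sketch of what that kind of "meta API" merge can look like. The endpoints, field names, and accession-number join key below are invented for illustration, not our actual services; they just stand in for OSM geodata, Collections records, and Art on Call media. (Calendar events would be folded in the same way.)

```typescript
// Hypothetical sketch only: these endpoints, field names, and the accession
// join key are invented stand-ins for OSM nodes, Collections records, and
// Art on Call audio stops.
interface GardenWork {
  osmId: number;
  title: string;
  lat: number;
  lon: number;
  collectionUrl?: string;
  audioTourUrl?: string;
}

async function fetchJson(url: string): Promise<any[]> {
  const res = await fetch(url);
  if (!res.ok) throw new Error(`Request failed: ${url}`);
  return res.json();
}

async function buildGardenFeed(): Promise<GardenWork[]> {
  const [osmNodes, collection, media] = await Promise.all([
    fetchJson("/api/osm-nodes"),   // lat/lon drawn from OpenStreetMap
    fetchJson("/api/collections"), // titles, credit lines, object pages
    fetchJson("/api/art-on-call"), // existing audio tour stops
  ]);

  // Index the secondary sources by a shared accession number (assumed join key).
  const records = new Map<string, any>();
  for (const c of collection) records.set(c.accession, c);
  const clips = new Map<string, any>();
  for (const m of media) clips.set(m.accession, m);

  return osmNodes.map((node) => ({
    osmId: node.id,
    title: records.get(node.accession)?.title ?? node.name,
    lat: node.lat,
    lon: node.lon,
    collectionUrl: records.get(node.accession)?.url,
    audioTourUrl: clips.get(node.accession)?.audioUrl,
  }));
}
```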

Fuzzy GPS


Next we began the process of verifying the data. We'd created custom map tiles for the garden so we could maintain the designed look and feel Eric was going for (look for a future post on the design process for this site), but it involved some compromises to make the paths line up visually. The New Media team spent a few hours walking the garden in the early spring, making notes on sculpture GPS anomalies and misplaced paths, and trying to avoid having anyone appear to be inside the hedges. No two devices gave the exact same GPS coordinates, so we ended up averaging the difference and calling it close enough.
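For what it's worth, the "average it and call it close enough" step amounts to nothing fancier than a mean of the readings. A toy version, with made-up coordinates:

```typescript
// Toy version of the averaging step; the coordinates are illustrative,
// not surveyed positions.
type LatLng = { lat: number; lon: number };

function averagePosition(readings: LatLng[]): LatLng {
  const sum = readings.reduce(
    (acc, r) => ({ lat: acc.lat + r.lat, lon: acc.lon + r.lon }),
    { lat: 0, lon: 0 }
  );
  return { lat: sum.lat / readings.length, lon: sum.lon / readings.length };
}

// Three phones reporting slightly different fixes for the same sculpture:
averagePosition([
  { lat: 44.96905, lon: -93.2893 },
  { lat: 44.96912, lon: -93.28941 },
  { lat: 44.96899, lon: -93.28925 },
]);
```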

Native-ish

It’s not a web app. It’s an app you install from the web.

As we discovered while building the mobile side of the new Collections site, a properly tuned webpage can start to feel a lot like a native application. We use swipe gestures to move between information "slides" and pinch-to-zoom for the map, and we followed most of the tips in the forecast.io blog post (quoted above) to further enhance the experience. We'll never be quite as snappy as a properly native app, but we feel the cross-platform benefits of the web fully outweigh that downside. (Not to mention our in-house expertise is web-based, not app-based.)
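As a rough illustration of the swipe handling (the production site may well lean on a library for this), a bare-bones version built on raw touch events could look like:

```typescript
// Bare-bones swipe detection between "slides" using raw touch events.
function enableSwipe(el: HTMLElement, onSwipe: (dir: "left" | "right") => void): void {
  let startX = 0;

  el.addEventListener("touchstart", (e) => {
    startX = e.touches[0].clientX;
  });

  el.addEventListener("touchend", (e) => {
    const deltaX = e.changedTouches[0].clientX - startX;
    if (Math.abs(deltaX) > 50) {          // ignore small, accidental drags
      onSwipe(deltaX < 0 ? "left" : "right");
    }
  });
}

// Hypothetical usage: swiping left advances the slide deck, right goes back.
// enableSwipe(document.getElementById("slides")!, (dir) =>
//   dir === "left" ? showNextSlide() : showPreviousSlide()
// );
```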

Need for Speed


This was the make-or-break component of the mobile site: if it didn't "feel" fast, no one would use it. We spent untold hours implementing just-in-time loading of assets so the initial payload would be tiny, while still having the images we need arrive just before they're due to appear on screen. We tuned the cache parameters so anyone who's visited the site in the past will have the components they need when they return, but we can also push out timely updates in a lightweight manner. We optimized images and spread the map tiles across our Content Delivery Network to prevent a single-domain bottleneck.
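The just-in-time loading boils down to keeping real image URLs out of the initial page and only assigning them when a slide is about to come on screen. A simplified sketch, with invented attribute names:

```typescript
// Simplified sketch of just-in-time image loading: the real URL lives in a
// data-src attribute (invented name) and is only copied into src when its
// slide is current or adjacent, which is when the download actually starts.
function loadImagesNear(slideIndex: number): void {
  const selector = [slideIndex - 1, slideIndex, slideIndex + 1]
    .map((i) => `img[data-slide="${i}"]`)
    .join(", ");

  document.querySelectorAll<HTMLImageElement>(selector).forEach((img) => {
    if (img.dataset.src && !img.dataset.loaded) {
      img.src = img.dataset.src;   // kicks off the actual request
      img.dataset.loaded = "true"; // don't assign twice
    }
  });
}

// Call this whenever the visitor swipes to a new slide, e.g. loadImagesNear(4);
```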

Finally, and perhaps foolishly, we wrote a safety fallback that tries to estimate a user's bandwidth as they load the welcome image: by timing the download of a known-size file, we can make a quick decision about whether they're on a painfully slow 3G network or something better. If the connection is slow, we dynamically begin serving half-size images in an effort to improve the site's performance. We'll be monitoring usage statistics closely to see if/when this situation occurs and for what devices. Which brings me to…
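First, though, a quick sketch of that bandwidth check; the probe file, its size, and the "slow" cutoff below are assumptions for illustration, not the site's real values.

```typescript
// Assumed values for illustration: probe asset, its size, and the cutoff.
const PROBE_URL = "/img/welcome-probe.jpg"; // hypothetical known-size file
const PROBE_BYTES = 100 * 1024;             // assume a ~100 KB probe
const SLOW_KBPS = 400;                      // below this, treat as "painfully slow"

async function connectionIsSlow(): Promise<boolean> {
  const start = performance.now();
  const res = await fetch(`${PROBE_URL}?nocache=${Date.now()}`); // dodge the cache
  await res.arrayBuffer();          // wait for the whole body, not just the headers
  const seconds = (performance.now() - start) / 1000;
  const kbps = (PROBE_BYTES * 8) / 1000 / seconds;
  return kbps < SLOW_KBPS;
}

// If slow, swap in half-size image variants (hypothetical naming scheme):
// const suffix = (await connectionIsSlow()) ? "_half" : "";
```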

Analytics

Heatmap generated from a test walk through the garden

I hope I’m right when I say that anyone who’s heard me speak about museums and digital knows how adamant I am about measuring results and not just guessing if something is “working.” This site is no exception, with the added bonus of location tracking! We’re anonymizing user sessions and then pinging our server with location data so we can begin to build an aggregate “heatmap” of popular spots in the garden. Above is a screenshot of my first test walk through the garden.
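In outline, the logging can be as simple as a random per-visit session id plus periodic pings to a collection endpoint; the endpoint and payload shape here are hypothetical:

```typescript
// Hypothetical sketch: an anonymous per-visit id (no account or device id)
// plus a fire-and-forget ping as the visitor moves; "/api/heatmap-ping" is
// an invented endpoint, and aggregation into the heatmap happens server-side.
const sessionId = Math.random().toString(36).slice(2);

function startHeatmapLogging(): void {
  navigator.geolocation.watchPosition((pos) => {
    fetch("/api/heatmap-ping", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        session: sessionId,
        lat: pos.coords.latitude,
        lon: pos.coords.longitude,
        t: Date.now(),
      }),
    });
  });
}
```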

We’re logging as many bits of information as we can about the usage of the new site in hopes of refining it, measuring success, and informing our future mobile interpretation efforts.

Enjoy!

Please visit the new Minneapolis Sculpture Garden website and let us know what you think!

 

Out with the Dialog Table, in with the Touch Wall


If you've explored our galleries, you've probably noticed the Dialog Table tucked into the Best Buy Info Lounge just off one of our main arteries. It's a poster child of technical gotchas: custom hardware and software, cameras, projectors, finicky lighting requirements… Despite all the potential trouble embedded in the installation, it's actually been remarkably solid apart from a few high-profile failures. At last tally, only the CPUs, capture cards, and one graphics board are original; the rest has been slowly replaced over the years as pieces have broken. (The graphics and video capture cards have drivers that aren't upgradable at this point, so I've been trolling eBay to acquire various bits of antique hardware.)

It’s been a tank. A gorgeous, ahead-of-its-time, and mostly misunderstood tank. I’m both sad and excited to see it go.

I am, however, unequivocally excited about the replacement: two 65″ touch walls from Ideum. This change alone will alleviate one of the biggest human interface mismatches with the old table: it wasn't a touch surface, and everyone tried to use it that way.


Early meeting with demo software

We’re moving very quickly with our first round of work on the walls, trying to get something up as soon as possible and iterating from there. The immediate intention is to pursue a large-scale “big swipe” viewer of highlights from our collection. Trying to convey the multidisciplinary aspect of the Walker’s collection is always a challenge, but the Presenter wall gives us a great canvas with the option for video and audio.


The huge screen is an attention magnet

With the recently announced alpha release of Gestureworks Core with Python bindings, I’m also excited for the possibilities of what’s next for the walls. The open source Python library at kivy.org looks like a fantastic fit for rapidly developing multi-touch apps, with the possible benefit of pushing out Android / iOS versions as well. At the recent National Digital Forum conference in New Zealand I was inspired by a demo from Tim Wray showing some of his innovative work in presenting collections on a tablet. We don’t have a comprehensive body of tags around our work at this point, but this demo seems to provide a compelling case for gathering that data. Imagine being able to create a set of objects on the fly showing “Violent scenes in nature” just from the paired tags “nature” and “violent”. Or “Blue paintings from Europe” using the tag “blue” and basic object metadata. Somehow the plain text description imposed on simple tag data makes the set of objects more interesting (to me, anyway). I’m starting to think that collection search is moving into the “solved” category, but truly browsing a collection online… We’re not there.
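To make the tag-pairing idea concrete, here's a small sketch of how such sets could be assembled once the tag data exists; the artwork shape and field names are invented for illustration:

```typescript
// Invented data shape for illustration: each work carries a set of tags plus
// a little object metadata.
interface TaggedWork {
  title: string;
  tags: Set<string>;
  medium?: string;
  region?: string;
}

// "Violent scenes in nature": works carrying every tag in the pair.
function worksWithTags(works: TaggedWork[], ...tags: string[]): TaggedWork[] {
  return works.filter((w) => tags.every((t) => w.tags.has(t)));
}

// "Blue paintings from Europe": a tag combined with basic object metadata.
function blueEuropeanPaintings(works: TaggedWork[]): TaggedWork[] {
  return worksWithTags(works, "blue").filter(
    (w) => w.medium === "painting" && w.region === "Europe"
  );
}
```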

Touch screens, and multitouch in particular, seem destined for eventual greatness in the galleries, but as always the trick is to make the technical aspect of the experience disappear. I hope that by starting very simply with obvious interactions we can avoid the temptation to make this about the screens, and instead keep it about the works we'll be showing.

Walkerart.org Design Notes #1


As you’ve likely seen, we recently launched a brand new, long overdue redesign of our web presence. Olga already touched on the major themes nicely, so suffice it to say, we’ve taken a major step towards reconceptualizing the Walker as an online content provider, creating another core institutional offering that can live on its own as an internationally-focused “digital Walker,” instead of something that merely serves the local, physical space.

We largely started from scratch with the user experience and design of the site; the old site, for all its merits, had started to show its age on that front, being originally designed over six years ago – an eternity in web-years. That said, we’re still traditionalists in some ways where new media design is concerned, and took a really minimal, monochromatic, print/newspaper-style approach to the homepage and article content. So in a way, it’s a unique hybrid of the old/time-tested (in layout) and new/innovative (in concept and content), hopefully all tempered by an unadorned, type-centric aesthetic that lets the variety of visuals really speak for themselves.

Our inspiration was a bit scattershot, as we tried to bridge a gap between high and low culture in a way reflective of the Walker itself. Arts and cultural sites were obviously a big part (particularly Metropolis M and its wonderful branded sidebar widgets), but not so much museums, which have traditionally been more conservative and promotionally driven. With our new journalistic focus, two common touchstones became The New York Times' site and The Huffington Post – with the space in between being the sweet spot. The former goes without saying. The latter gets a bad rap, but we were intrigued by its slippery, weirdly click-enticing design tricks and general sense of content-driven chaos enlivened by huge contrasts in scale. The screaming headlines aren't pretty, but they're tersely honest and engaging in an area where a more traditional design would introduce some distance. And the content, however vapid, is true to its medium; it's varied and easily digestible. (See also Jason Fried's defense of the seemingly indefensible.)

Of course, we ended up closer to the classier, NYT side of things, and to that end, we were really fortunate to start this process around the advent of truly usable web font services. While the selection’s still rather meager beyond the workhorse classics and a smattering of more gimmicky display faces (in other words, Lineto, we’re waiting), really I’m just happy to see less Verdana in the world. And luckily for us, the exception-to-the-rule Colophon Foundry has really stepped up their online offerings lately – it’s Aperçu that you’re seeing most around the site, similar in form to my perennial favorite Neuzeit Grotesk but warmer, more geometric, and with a touch of quirk.

Setting type for the web still isn't without its issues, with even one-point size adjustments resulting in sometimes wildly different renderings, but with careful trial-and-error testing and selective application of the life-saving -webkit-font-smoothing CSS property, we managed to get as close as possible to our ideal. It's the latter in particular that allows us elegant heading treatments (though the effect is only visible in Safari and Chrome): set to antialiased, letterforms are less beholden to the pixel grid and more immune to the thickening that sometimes occurs on high-contrast backgrounds.

It’s not something I’d normally note, but we’re breaking away from the norm a bit with our article treatments, using the more traditional indentation format instead of the web’s usual paragraph spacing, finding it to flow better. It’s done using a somewhat complex series of CSS pseudo-elements in combination with adjacent selectors – browser support is finally good enough to accomplish such a thing, thankfully, though it does take a moment to get used to on the screen, strangely enough. And we’re soon going to be launching another section of the site with text rotation, another technique only recently made possible in pure CSS. Coming from a print background, it’s a bit exciting to have these tools available again.

Most of the layout is accomplished with the help of the 960 Grid System. Earlier attempts at something more semantically meaningful proved more hassle than they were worth, considering our plethora of more complex layouts. We've really attempted something tighter and more integrated than normally seen on the web, and I think it's paid off well. That said, doing so really highlighted the difficulties of designing for dynamic systems of content – one such case that reared its head early on was titles in tiles (one of the few "units" of content used throughout the site).

It was a tricky issue at first, considering our avoidance of ugly web aesthetics like fades (and artificial depth/dimensionality, and gradients, and drop shadows…), but one eventually solved with the implementation of our date treatments, whose connecting lines also function nicely as a cropping line – a tight, interlocking, cohesive system using one design element to solve the issues of another. We've tried to use similar solutions across the site, crafting a system of constraints and affordances, as in the case of our generated article excerpts:

Since we’re losing an element of control with freeform text fields on the web and no specific design oversight as to their individual display, we’ve chosen to implement logic that calculates an article title’s line-length, and then generates only enough lines of the excerpt to match the height of any neighboring articles. It’s a small detail for sure, but we’re hoping these details add up to a fine experience overall.
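For the curious, a rough sketch of that excerpt logic; the characters-per-line constant and function names are invented, and the real logic presumably lives in our templates:

```typescript
// Invented constants and names. Estimate lines from character count, then
// trim the excerpt so title + excerpt match the height of the tallest
// neighboring tile.
const CHARS_PER_LINE = 38; // assumed average for this column width

function linesFor(text: string): number {
  return Math.ceil(text.length / CHARS_PER_LINE);
}

function trimExcerpt(title: string, excerpt: string, neighborLines: number): string {
  const available = Math.max(0, neighborLines - linesFor(title));
  const words = excerpt.split(" ");
  const kept: string[] = [];
  while (words.length > 0 && linesFor([...kept, words[0]].join(" ")) <= available) {
    kept.push(words.shift()!);
  }
  return kept.join(" ");
}
```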

Anyway, there’s still more to come – you’ll see a few painfully neglected areas here and there (our collections in particular, but also the Sculpture Garden and to a lesser extent these blogs), but they’re next on our list and we’ll document their evolution here.


Event Documentation and Webcasting for Museums


At the Walker, we webcast many of our events live. It is a history fraught with hiccups and road bumps, but doing so has given our audiences the opportunity to watch lectures, artist talks, and events live from their home or even abroad. More importantly, webcasting has focused our technique for documenting events. In the broadcast world, "straight to tape" is a term used for programs such as late night talk shows that are directed live and sent straight to video tape, free of post-production. For the most part, we also try to minimize our post-production process, allowing us to push out content relatively quickly before moving on to the next show.

At the heart of our process is a Panasonic AV-HS400 video mixer, which accepts both an HD-SDI camera feed and a VGA feed from the presenter’s laptop.  The video mixer allows us to cut live between the speaker and his or her presentation materials, either with fades or straight cuts. In addition, the mixer’s picture-in-picture capability allows us to insert presentation materials into the frame, next to the speaker.  Doing so gives viewers both the expressiveness of the presenter and the visual references live audiences are seeing. One thing to note: if a speaker begins moving around the stage, it becomes difficult to frame a picture-in-picture, so the technique works better when people stand still.

        

The camera we use is a Sony PMW-350K, which is part of the XDCAM family. We shoot from the back of the room in all of our public spaces, putting a lot of distance between the camera and the subject. As a result, we need all the zoom our camera lens can give. Presently our lens is a Fujinon 8mm–128mm (16x), but realistically we could use something longer for better close-ups of the speaker. This is an important factor when considering cameras: where will your camera be positioned in relation to the subject, and how much reach is needed to get a good shot? Having a camera close to the speaker isn't always practical with a live audience present, so many shooters push the limits of their camera lenses. Being so far out also puts a lot of strain on a tripod head. It is very easy to jiggle the frame when making slight camera moves at full zoom, so a good tripod head should go hand in hand with a long video lens.

For audio, our presenter’s microphone first hits the house soundboard and then travels to our camera where levels are monitored and adjusted. At that point, both the audio and the camera’s images travel through a single HD-SDI BNC cable to our video mixer where audio and video signals split up once again. This happens because the mixer draws audio from whatever source is selected. As such, if a non-camera source is selected, such as the PowerPoint, no audio is present. To resolve this, an HD-SDI direct out from the camera source on the mixer is used to feed a device that re-embeds the audio with the final mixed video signal. The embedding device we use is an AJA FS-1 frame synchronizer.

         

With the frame synchronizer now kicking out a finished program, complete with embedded audio, our AJA KiPro records the content to an Apple ProRes file. We use a solid-state hard drive module as media, which pops out after an event is over and plugs directly into a computer for file transferring. An important thing to remember for anyone considering a mixer is that an external recording device is necessary to capture the final product.

To webcast, our FS-1 frame synchronizer simultaneously sends out a second finished signal to our Apple laptop. The laptop is outfitted with a video capture card, in our case a Matrox MXO2 LE breakout box, that attaches via the ExpressCard slot. Once the computer recognizes the video signal, it is ready for webcasting. The particular service we use is called Ustream. A link to our Ustream account is embedded in the Walker’s video page, titled The Channel, and viewers can watch the event live through their browser. Live viewership can run the gamut from just a few people to more than 75 viewers. Design-related programs–like the popular lecture by designer Aaron Draplin in March–tend to attract the biggest audiences. Once an event has concluded, Ustream stores a recording of the event within the account. We have the option to link to this recorded Ustream file through our website, but we don’t. Instead we try to quickly process our own recording to improve the quality before uploading it to YouTube.

       

The most frustrating part of our webcasting experiment has been bandwidth. The Walker has very little of it and thus we share a DSL line with the FTP server for webcasting. The upload speed on this DSL line tops out at 750 kbps. In real life, we get more like 500 kbps, leaving us to broadcast around 400 kbps. These are essentially dial-up numbers, which means the image quality is poor and our stream is periodically lost, even when the bit rate is kept down. Viewers at home are therefore prone to multiple disruptions while watching an event. We do hope to increase bandwidth in the coming months to make our service more reliable.

Earlier I mentioned that the Walker does as little post-production as possible for event documentation, but we still do some. Once the final ProRes file is transferred to an editing station, it is opened up in Final Cut 7. The audio track is then exported as a stand-alone stereo file and opened with Soundtrack Pro, where it is normalized to 0 dB and given a layer of compression. With live events, speakers often turn their head or move away from the microphone periodically, which can make audio levels uneven. Compression helps bring the softer moments in line with the louder ones, thus limiting dynamic range and delivering a more consistent product.

After the audio track is finished, it is dropped back into the timeline and the program's front and back ends are trimmed. We try to cut out all topical announcements and unnecessary introductions. Viewers don't need to hear about this weekend's events two years from now, so we don't waste their time with it. In addition to tightening up the top of the show, an opening title slide is added with the program's name and date. The timeline is then exported as a reference file and converted to an MP4 through the shareware program MPEG Streamclip.

MPEG Streamclip is a favorite of mine because it lists the final file size and lets users easily adjust the bit rate. With a 2GB file size limit on YouTube uploads, we try to maximize bitrate (typically 1800–3000 kbps) for our 1280 x 720p files. Using a constant bit rate for encoding instead of a variable bit rate also saves us a lot of time. With the runtime of our events averaging 90 minutes, the sacrifice in image quality from a constant bit rate seems justified considering how long an HD variable bit rate encode can take.
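The arithmetic behind that choice is simple: at a constant bit rate, file size is just bitrate times runtime, so a 90-minute program at 3000 kbps lands right around the 2 GB cap. A quick sanity check, as a throwaway snippet:

```typescript
// Constant bit rate makes the math trivial: size = bitrate x runtime.
function estimatedSizeGB(kbps: number, minutes: number): number {
  const bits = kbps * 1000 * minutes * 60;
  return bits / 8 / 1e9; // bits -> bytes -> gigabytes (decimal GB)
}

estimatedSizeGB(3000, 90); // ~2.0 GB: right at the upload cap
estimatedSizeGB(1800, 90); // ~1.2 GB: comfortable headroom (audio adds a bit more)
```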

Once we have the final MP4 file it is uploaded to YouTube and embedded in the Walker’s video page.

 

Digital Wayfinding in the Walker, Pt. 1


An ongoing conversation here at the Walker concerns the issue of systemic wayfinding within our spaces — certainly an important issue for an institution actively seeking attendance and public engagement, not to mention an institution whose building is literally a hybrid of the old and new (with our 2005 expansion). While not normally in New Media’s purview, and only occasionally so for Design, a recent initiative to improve the flow and general satisfaction of visitors brought with it the idea of using digital displays, with their malleable content and powerful visual appeal, to guide and direct people throughout the Walker.

Our new static directional signage

Currently installed in one location of an eventual three, and with a simple "phase one" version of the content, the Bazinet Lobby monitor banks cycle through the title graphics for all the exhibitions currently on view, providing a mental checklist of sorts that allows the visitor to tally what he or she has or hasn't yet seen, and directly referencing the vinyl graphics at each gallery entrance. The corner conveniently works as an intersection for two hallways leading to a roughly equivalent number of galleries in either direction: one leads to our collection galleries in the Barnes tower, the other to our special exhibition galleries in the Herzog & de Meuron expansion. To this end, we've repurposed the "street sign" motif used on our new vinyl wall graphics to point either way (which also functions as a nice spatial divider). Each display tower cycles through its given exhibitions with a simple sliding transition, exposing the graphics one by one. An interesting side effect of this motion and the high-contrast LCDs has been the illusion of each tower being a '70s-style mechanical lightbox; I've been tempted to supplement it with a soundtrack of quiet creaking.

The system, powered by Sedna Presenter and running on four headless, remotely-accessible Mac Minis directly behind the wall, affords us a lot of flexibility. While our normal exhibitions cycle is a looped After Effects composition, we’re also working on everything from decorative blasts of light and pattern (the screens are blindingly bright enough to bathe almost the entire lobby in color), to live-updating Twitter streams (during parties and special events), to severe weather and fire alerts (complete with a rather terrifying pulsating field of deep red). In fact, this same system is now even powering our pre-show cinema trailers. I’m particularly interested in connecting these to an Arduino’s environmental sensors that would allow us to dynamically change color, brightness, etc. based on everything from temperature to visitor count to time of day — look for more on that soon.

See it in action:

Behind the scenes / Severe weather alert:

 

Installation:

  

New Media Initiatives’ Somewhat Unintentional Tribute to Mr. Jobs (R.I.P.)



Our little corner of the office, over the past few months, has been transformed into a veritable Apple Store in miniature. You’ll be seeing some of these around the galleries soon.

Photo by Greg Beckel

Building the 50/50 Voting App


50/50 Voting App

For our upcoming exhibition 50/50: Audience and Experts Curate the Paper Collection, we’re trying something a bit different. As you can probably tell from the title, we’re allowing our audience to help us curate a show. The idea is that our chief curator, Darsie Alexander, will curate 50% of the show, and the audience will select from a group of 180 different print works for the other half.

As with most things presented to New Media, the question was posed: "How best do we do this?" The exhibition is being hung in the same room as Benches and Binoculars, so the obvious answer was to use the kiosk already there as the voting platform for the show. With this in mind I started to think of different ways to present the voting app itself.

My initial idea was to do a "4-up" design: display four artworks and ask people to choose their favorite. The idea was that this would make people confirm a choice in comparison to others. If you see some of what you're selecting against, it can be easier to know whether you want specific works in the show or not. But it also has the same effect in reverse: if you have two artworks that you really like, it can be just as hard to choose only one. The other limitation? After coming up with the 4-up idea, we also decided to add iPhones into the mix as a possible voting platform (as well as iPads and general web browsers). The images on the iPhone's screen were much too small to make decent comparisons.

Nate suggested instead using a “hot or not” style voting system. One work that you basically vote yes or no on. This had the small downfall of not being able to compare a work against others, but allowed us to negate the “analysis paralysis” of the 4-up model. It also worked much better on mobile devices.

The second big decision we faced was "What do we show?" I had assumed in the beginning that we'd be showing label copy for every work, like we do just about everywhere else, but it was suggested early on that we do no such thing. We didn't want to influence voters by putting a title or artist on every piece. With works by Chuck Close and Andy Warhol mixed into the print selections, it's much too easy to see a famous name and vote for it simply because of that name. We wanted people to vote on what work they wanted to see, not what artist they wanted to see.

Both of these decisions proved pivotal to the popularity of the voting app, making it streamlined and simple. With 180 works to go through, that simplicity makes it much easier to get through the entire thing; choices are quick and easy. The results screen after voting on each artwork shows the current percentage of no to yes votes. This is a bit of a psychological pull: you know what you think of this artwork, but what do others think about it? The only way to find out is to vote.

50/50 Voting App Results Screen

Because of this, the voting app has been a success far beyond what we thought it would be. I thought if we got 5,000-10,000 votes we would be doing pretty well. Halfway through the voting process now, we have well over 100,000 votes. We've had over 1,500 users voting on the artworks. We've collected over 500 email addresses from people wanting to know who the winners are when all the voting is tallied. We never expected anything this good, and we have several weeks of voting yet to come.

One interesting outcome of all of these votes has been the ratio of yes votes to no votes across all of the works. Since the works are presented randomly (well, pseudo-randomly for each user), one might expect that half the works would have more yes than no votes, and vice versa. But that's not turned out to be the case. About 80% of the works have more no votes than yes votes. Why is this?

There are various theories. Perhaps people are more selective if they know something will be on view in public. Perhaps people in general are just overly negative. Or perhaps people really don’t like any of our artwork!

But one of the more interesting theories of why this is goes back to the language we decided to use. Originally we were going to use the actual words “Yes” and “No” to answer the question “Would you like to see this artwork on view?”. This later got changed to “Definitely” and “Maybe Not”. Notice how the affirmative answer has much more weight behind it: “Yes, most definitely!”, whereas the negative option leaves you a bit of wiggle room “Eh, maybe not”. It’s this differentiation between being sure of a decision and perhaps not so sure that may have contributed to people saying no more often than yes.

Which raises the question: what if it were changed? What if the options instead were "Definitely Not" and "Sure"? Now the definitive answer is on the negative and the positive answer has more room to slush around ("Hell no!" vs. "Ahh sure, why not?!"). It would be interesting to see what the results would have been with this simple change in language. Maybe next time. This round, we're going to keep our metrics the same throughout to keep it consistent.

The voting for 50/50 runs until Sept 15. If you’d like to participate, you still have time!

Tips and tricks: How to convert ancient Real Media video into a modern H.264 MP4


First of all, I'd like to apologize to all the people on Twitter who follow me and had to endure my ranting about the trials and tribulations of converting Real Media files: I'm sorry.

So let's say you have a pile of Real Media video that was recorded sometime earlier in the decade, when RealVideo was still relevant, but you realize no sane person these days has RealPlayer installed, so no one can view it. What you really want is that video in an MP4 so you can stream it to a Flash player, or eventually use <video> in HTML5 (once they work that codec stuff out). If you do a little googling on how to convert RealVideo into an H.264 MP4, you'll find lots of programs and forum posts claiming they know how to do it. But it's mostly programs that don't actually work and forum posts that are no longer relevant or strewn with blocking issues.

Thankfully, there is a better way, and I will lay it out for you.

Step one: Download the actual media
In our scenario, you have a list of 80 or so Real Media files that you need to convert. The URLs for those things probably look something like
http://media.walkerart.org/av/Channel/Gowda.ram. If you were to download that .ram file, you'd notice that it's about 59 bytes; clearly not enough to be the actual video. It's just a pointer to the streaming location for the file. Open that .ram file in a text editor and you'll see it points to rtsp://ice.walkerart.org:8080/translocations/media/Gowda.rm, the location on our Real Media streaming server here at the Walker. The thing we really want is the .rm file, but it can be a little hard to get via RTSP. Since we're not stream-ripping someone else's content (that would be wrong, dontcha know), we can just log in to the server and, based on that file path, grab the .rm via SCP or a file transfer mechanism of our choice. I happened to know that all our .rm files are actually accessible via HTTP, so I just did a little find/replacing in the URLs and built a list for wget to download them.
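If you're scripting the same thing, the find/replace step might look something like the sketch below (Node-flavored TypeScript; the rtsp-to-HTTP mapping shown is an assumption specific to our setup):

```typescript
// Assumes you've already collected the rtsp:// targets from the tiny .ram
// pointer files into ram-targets.txt, one per line. The rtsp-to-HTTP rewrite
// below is specific to this scenario, not a general rule.
import { readFileSync, writeFileSync } from "fs";

const targets = readFileSync("ram-targets.txt", "utf8")
  .split("\n")
  .filter((line) => line.startsWith("rtsp://"))
  // e.g. rtsp://ice.walkerart.org:8080/translocations/media/Gowda.rm
  //   -> http://media.walkerart.org/translocations/media/Gowda.rm  (assumed mapping)
  .map((line) =>
    line.trim().replace("rtsp://ice.walkerart.org:8080", "http://media.walkerart.org")
  );

writeFileSync("rm-urls.txt", targets.join("\n"));
// Then download the lot: wget -i rm-urls.txt
```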

Step two: Convert the real media files to mp4
If you were trying to do this back in the day this would be a major pain. You’d have to use mencoder and the longest, most convoluted command-line arguments you’ve ever seen. Thankfully, Real recently came out with an updated version of RealPlayer that has a handy little thing in it called RealPlayer Converter. Sounds too good to be true, right? It is.

For larger files, it only works well on Windows, and it doesn't give you a lot of options for encoding. The Mac version will hang at 95% encoding for most files, and that's pretty annoying. Save yourself the trouble and use a Windows box. Once you have RealPlayer installed, open up the converter, drag your .rm files in, and set the conversion settings. Depending on what your original sources are, you might need to fiddle with the options. I used the H.264 for iPod and iPhone preset, because that fit the size (320×240) of my source files. I cranked down the bitrate to 512kbps and 128kbps, because my source .rm files were about 384kbps and 64kbps to start with. This will give you a .m4v file, which is basically a .mp4 file with a different extension, but should work OK for most stuff.

Queue everything up and let it rip. On a two-year-old PC, it took about a day to process 48 hours worth of video.

Step three: Check your files
This is the part where you curse a little bit, realizing that in half the videos you just encoded, the audio is out of sync with the video. This is a common problem when converting RealVideo, and Real's own tool doesn't do a good job of handling it, never mind the fact that if you just play the video in RealPlayer, it plays in sync just fine. If you were to open the .m4v up in QuickTime Pro and look at the movie properties, you'd see something like this:

Notice the problem there? The video and audio tracks have different lengths, the video track being shorter than the audio. There is a way to fix this.

Step four: Synchronize the audio and video
There is a handy Mac program that helps you fix just this synchronization issue. It's called QT Sync. Operation is pretty simple: you open up a video file and fiddle with the video/audio offset until it is synced up. Here's a screenshot:

Ideally, proper sync will occur when the number of frames is equal for both the audio and video. In my experience, most of my videos were synced when the video frame count was about 10 short of the audio frames, but your mileage may vary. Some of the videos I worked with would also slowly drift out of sync over time, and unfortunately, there isn't a way to fix those. Just sync them up at the beginning and rest easy knowing you've done what you can.

Step four-and-a-half: Save the video
This is where things get tricky again. How you save the video depends on what you're going to do with it. If your output target is just iPods and iPhones, and you're not going to be streaming it from a streaming server, you have it good. If you are planning on streaming, skip to step five. You can save the video from QT Sync without re-encoding; you'll just be writing the audio and video streams into a new .mp4 wrapper, this time with a proper delay set for one of the streams. To save the .mp4, use File > Export and choose "Movie to MPEG-4" as the format. Go into the options and select "Pass through" as the format for both audio and video, and do not check anything in the streaming tab. Here's what it looks like:

This will take a moment to write the file, but it won’t re-encode. If you open the resulting mp4 up in QuickTime Pro and look at the properties, you should see something like this:

Note how the video track has a start time 6 seconds later than the audio. This is good and should play in sync. Rinse and repeat for each of your videos that is out of sync and you're done.

Step five: Save the video
If you're reading this, it's because you want to take your converted video and stream it to a Flash player, using something like Adobe Streaming Media Server. If you were to take that synced, fixed-up mp4 from step 4.5, put it on your streaming media server, and start streaming, you'd notice that the audio and video were out of sync again. See, Adobe Streaming Media Server doesn't respect the delay or start time in an .mp4 file. I didn't test other streaming servers like Wowza, but I'm guessing they suffer from the same issue. It sucks, but I can kind of see it making sense for a streaming server to expect the streams to already be in sync.

Instead, we are stuck fixing the video the hard way. You have the video sync’d up in QT Sync, but instead of saving it as a .mp4 as in step 4.5, save it as a reference movie with a .mov extension. We’re doing this because we’ve got to re-encode the video, again, essentially hard-coding the audio or video delay into the streams, rather than just the .mp4 wrapper.

Step six: Encode the video (again)
So, now you have a bunch of .mov reference files that are ready to be batch processed. You can use whatever software you like to do this, but I like MPEG Streamclip, which I wrote about a little in this post about iTunes U. It is way faster than Compressor, and it does batch processing really nicely.

You want to use settings that are similar to what your file is already using. I outlined that above, but here’s what the settings screen looks like:

Yes, you're losing a bit of quality here encoding the video for the second time, but there isn't a way around it. Looking closely, I couldn't see a difference between the original .rm file, the first-pass m4v, and the fixed and synced .mp4. There is no doubt some loss, but it is an acceptable trade-off to get a usable video format.

Access the Walker’s website from Minneapolis Public WiFi


If you’re visiting town and are out and about, getting info on the Walker and other cultural institutions in the city via the web just got easier. Minneapolis’ city-wide wireless network now lets users access walkerart.org without being a subscriber. Here’s how it works:

On your computer, select the “City of Minneapolis Public WiFi” network.


Open your browser and point yourself to walkerart.org. That should do it. You may be directed to a user agreement log in screen and then the “walled garden” of Minneapolis city information and lists of other accessible community sites. The Walker is listed under Area Arts & Culture > Arts & Museums > Art Museums.

Wireless Log In Screen

Minneapolis Downtown Area Walled Garden Portal


A brief history of Minneapolis Municipal WiFi

Several years ago, the City of Minneapolis joined with USI Wireless to build out a city-wide network. The goal was to provide access for city government and citizens. The city would be a core tenant, paying USI, and USI would sell access to citizens. The city required USI to build a community portal and to provide grants out of its profits to non-profits working to bridge the digital divide.

Over the last several years, the network has slowly been built out. Right now there are some problem areas, which include Loring Park and the Minneapolis Sculpture Garden. My understanding is that these areas should see service sometime soon, though I'm not sure of any exact plans for the Sculpture Garden.

There are a couple things I have really liked about the network:

  • We're doing it. A lot of cities have talked about building municipal wifi, only to discover large problems and things that don't work well. There have been some issues in Minneapolis, and it is taking longer to build the network than originally thought, but my impression is that it has worked fairly well.
  • It’s network neutral. The agreement between the city and USI specifically requires USI to not hinder any type of traffic over another.
  • Parts of it are free. This is how you can get to our site for free.
  • It’s low cost. The cost for being a subscriber is pretty low, compared to other wire-based providers.
  • It’s local. USI is a local company.

For more information on the network and the history, Peter Fleck has been blogging about Minneapolis WiFi for some time.

IE6 Must Die (along with 7 and 8)


One of the trending topics on Twitter currently is "IE6 Must Die", which is mainly retweets of a blog post entitled "IE6 Must Die for the Web to Move On". This is certainly true: IE6 has many rendering bugs and lacks support for so many things that it is simply a nightmare to work with. The amount of time and money wasted in supporting this browser across the web is staggering.

In fact, a few months ago the New Media department decided to drop support for IE6 on all future websites we create. The last website we built with full IE6 support was the new ArtsConnectEd, mainly because teachers tend to have little say in what browsers they can use on school computers. However, moving forward we're phasing out support for IE6. It simply costs us too much time and resources for the dwindling number of users it has on our sites (currently under 10%, which is down 45% from last year and falling fast). We're not alone; many other sites are doing this as well.

However, calling for the killing of IE6 ignores a bit of history as well as new problems to come. There was a time not so long ago when all web developers wanted everyone to be using IE6. The goal back then was to kill off IE5. You see, IE5 had an incorrect box model: padding and margins were included in a box's width and height instead of adding to it, as in standards-compliant browsers.

This caused all sorts of layout errors and meant hacks (like the Simplified Box Model Hack) had to be used to get content to align correctly. These hacks were so widely used that Apple was going to allow them in the first version of Safari, until I convinced Dave Hyatt (lead Safari dev) to take out support for them. IE6 fixed this bug and everyone was happy (for a while anyway).

Going back further, IE5, even with its broken box model, was at one time the browser of choice, back when IE4 was killing Javascript programmers because it didn't support document.getElementById(). IE4 only supported the proprietary document.all, leading to a horrible fracturing of Javascript, whereas IE5 added the JS standard we still use today. Before people embraced IE5, cross-platform JS on the web was almost non-existent, a fact I attempted to rectify by building my Assembler site in 1999.

The reason I bring this up is because we have a history of this behavior with regards to IE. We yearn for the more modern versions, only to end up hating those same versions later on. This will not change with the death of IE6. Soon, it will be IE7 that we are trashing, and then IE8 will be the bane of our existence.

This only becomes more clear as we move to HTML5. IE8 doesn’t support it, nor does it support any CSS3. While IE8 does support many of the older standards it had been ignoring for so long, having just recently been released it is already out of date. All of the other browsers do support these advanced web technologies, but IE is the lone browser to ignore them. Once again IE is two steps behind where the web is going, and severely limits our ability to push web technology forward to everyone for many years to come.

So while we celebrate the death of IE6, let us not forget that there will be a new thorn in our side to take its place in short order. IE7, you’re next.
