
#tbt lonelygirl: confessing online

Margot Lovejoy, Parthenia (1995)

The act of confessing has a history as long as the notion of secrets. Their intertwined histories span both societal and interpersonal dynamics, their forms changing as new revelations appear and values shift. In this age of self-publishing on online social networks, confessing is as easy as typing and pressing post or send. Parthenia (1995), an äda’web project by Margot Lovejoy, was an online confessional for victims of domestic abuse. Although the concept of the Internet confessional was nothing new, this “monument” formalized the network as an instrument of change: a cyberfeminist public space for expression without judgment that could heal and empower voices that had not been heard before.


The promise of any new medium is that it will offer new possibilities to those previously disenfranchised or marginalized. However, ample research shows that even the Internet, as distributed and accessible as it is, often just reinforces and codifies old patterns. Parthenia emphasizes this as it re-enacts domestic abuse support groups but changes their relationship to the audience: rather than confessing to a specific group, confession occurs at a public level, amplifying its awareness and cathartic value. As befits a safe space, many of these responses are anonymous; even when a name is given, there is no profile or identity associated with it. The artwork anticipated many confessional websites of that Internet era, such as PostSecret and Group Hug. These were ways we found not to be alone in front of our computer terminals.


A post on PostSecret

Although confessing often benefits the confessor by relieving feelings of guilt or anxiety, the confessee often benefits as well: such statements create social bonds, enable similar relief, and elicit reciprocal disclosure. These projects illustrate the Internet’s power of amplification, and how social movements can indeed be advanced when the network itself is addressed. The network-as-confessee could reach an anonymous audience far larger than those reading scribbles on bathroom walls.

However, in this post-Snowden era, when we confess on these platforms we confess not only to our communities but also, we know, to governmental and corporate surveillance. Real names or IP addresses are tied to our confessions. It is not surprising that anonymous apps such as Secret did not last long: our confessions must be linked to our identities in order to be monetized as digital labor. Whisper, another anonymous confessional app, has trivialized its content, transforming confessions into memes and listicles and losing the intimacy of telling a secret. Not only the network but the individual has changed as well: we curate our social media identities, some even to the point of imposture, using the cloak of code to send off false confessions in pursuit of gain. How can we trust anything or anyone online?


Jill Magid, Failed States (2012)

A solution to this is going back to basics: tribalism. Postmodernity has shifted identity from a fixed entity to one continually in process, evolving our ideas of shared interests and activities from simple tags (Tame Impala, art, Catholicism) to a more holistic approach. In a recent lecture, K-Hole put it this way: “Once upon a time people were born into communities and had to find their individuality. Today people are born individuals and have to find their communities.” We are now individual individuals seeking like-minded people. Instead of being born into a tribe, we now seek out and eventually find our tribe within a globalized public. From this tribalism we can draw mutual support and understanding, a safe space where we can enact confession, both given and received.


Rules for Cool Freaks’ Wikipedia Club

“Dunbar’s number” is the theoretical limit on the number of people with whom one can sustain social relationships. Robin Dunbar arrived at 150, but that figure is really one layer in a scale that follows a “rule of three”: five is the limit for your intimate friends, 15 for your close friends, 50 for your good friends, 150 for your casual friends, and 500 for your acquaintances. In Dorothy Howard’s recent Rhizome essay, “Feed my Feed: Radical publishing in Facebook Groups,” she argues that Facebook groups can produce “some exciting new ground as the smaller, granular levels of conversation become fodder for the public sphere.” These conversations can range from selfies to similar aesthetics to cybertwee to memes. Although she recognizes the irony in using Facebook as a way of generating intimacy, Facebook has become our contemporary public. Private groups, with the recognition of Dunbar’s number, can foster a new tribalism in response to an increasingly unsafe and sousveilled environment, both online and off.

 

 

#tbt EVERYTHING HAS ALREADY BEEN WRITTEN ABOUT JENNY HOLZER


Jenny Holzer, Please Change Beliefs (1995)

Jenny Holzer’s first public works, Truisms (1977–1979), seem to presage contemporary Internet chatter, where tweets are restricted to 140 characters and the most complimented updates are often those that are short but sweet. Her statements are attentive to both form and content, appearing poetic and digestible yet urgent and homiletic. They have been inscribed, projected, etched, electrically powered, carved, cast, screen-printed, and painted. Her fluid manipulation of physical media inverts the evolving landscape of public messaging, appropriating corporate, governmental, and advertising techniques to reveal the contentious issues of the time. As information displays change, she subsumes them and imbues them with her voice, agitating environments dominated by impersonal text.

Jenny Holzer, Protect Me From What I Want, 1982

Jenny Holzer, Blue Cross, 2008

Jenny Holzer, Someone Wants To Cut A Hole In You And Fuck You Through It, Buddy, 2012

Holzer’s works range from the tangible (paper, marble) to the dynamic (billboard signs, LED columns) to the interactive. Although most of her work is static in its content, thus preserving its authoritative and authorless voice, she also delves into web-based work. Her Twitter account currently has 53,000 followers, enabling others to retweet and thereby amplify her Truisms. A lesser-known work lives on äda’web, the online art gallery whose archive the Walker acquired in 1998. In Please Change Beliefs (1995), visitors arrive at the website to find a single Truism, and can click through endlessly for another. It anticipated the proliferation of single-serving, aggregated, generative websites such as What The Fuck Should I Make For Dinner, This Is Less Of A, or even what would i say?.

However, what makes Please Change Beliefs unique is the ability for visitors to “improve or replace the truism.” These user-generated truisms then become part of the artwork, which inevitably looks like an anonymous Facebook account full of caps-locked status updates. Yet the posts are often not created arbitrarily. Most truisms are “improved” rather than “replaced,” and since they are indexed alphabetically, a pattern starts to emerge: the verb and object of a statement are usually changed, whereas the subject stays the same. In that very way, the work emphasizes that the public and its displays are undeniably tied to the nous of our subjectivity.


 

 

#tbt Virtual Reality Sculpture Garden


Robin Dowden, our beloved director of New Media, leaves the Walker today, and as she’s been cleaning house, she has also been rediscovering little gems of the past. With the initiation of our campus renovation, it seemed like an appropriate time to look back at this project that recreated the Minneapolis Sculpture Garden in a virtual reality context back in the late ’90s.

Created in 1995, the Virtual Reality Modeling Language (VRML) was a predecessor of the modern web’s WebGL, which allows for 3D modeling and manipulation online. The VRML version of the Sculpture Garden offered a self-guided tour of the area, giving you the ability to navigate wherever you wanted at your own pace, something relatively new in 1998. There were obvious limitations: the actual sculptures could not be recreated in 3D and appear as 2D image placeholders. But at the time, it was rare to surf the web as anything other than the semblance of flipped pages.

 

You can also read more about this project in an interview between Steve Dietz, former director of New Media Initiatives, and the artists/engineers Marek Walczak and Remo Campopiano here.

Superscript in Context


Grumpy Cat Critic

Superscript is touted as being the first conference of its kind, but that doesn’t mean the role of arts journalism in the digital age hasn’t already been explored.

Recent lectures and panels, as well as calls for proposals from the College Art Association and the European Society for Aesthetics, demonstrate an anxiety—or excitement—about the contemporary art critic. While there doesn’t appear to have been a conference before this one devoted to arts writing in the digital age, there have been several discussions about the future of arts journalism and criticism. The following instances might provide some context for situating Superscript.

A year ago, British film critic Mark Kermode chaired a panel at the Institute of Contemporary Arts, London, entitled “Who Needs the Professionals Now That Everyone’s a Critic?” It took on the future of criticism, with a myriad of arts writers addressing music, film, visual art, theater, and literature. They focused primarily on the reviewer and the audience, though, and according to at least one critic, barely problematized the word “professional.” And that writer’s recap is pretty much the only record of the panel accessible online.

Just a week ago, the Sydney Writers’ Festival featured a panel entitled “Everyone’s a Critic, But Should They Be?,” again bringing together a range of arts writers to consider the question. “In a noisy digital age, making your opinions heard is a rare skill,” says the event page. “How do our best critics keep the cultural conversation classy, while competing against the world’s most clickable cat videos?” (I might add that the Walker developed the first-ever Cat Video Festival.) But like the previous example, this panel lacked much in the way of a digital life, which raises some key questions: Who are these panels really for? And how can we harness digital media to extend the reach of these gatherings, so that we don’t silo conversations but open up space for collaborative inquiry?

Superscript is searching for an answer. Although the focus of Superscript isn’t entirely novel, its efforts to engage audiences beyond the physical and temporal space of the conference are not only forward-thinking, but self-referential. Unlike previous panels, where digital hasn’t been prioritized, Superscript is attempting to inhabit the digital, playing with some of the issues it seeks to explore, namely: sustainability, connectivity, and community. Hopefully Superscript’s digital life will better enable others to pick up where it leaves off.

The Superscript Blog Mentorship program, a partnership with Hyperallergic, is presented as part of

Switching Screens: Taking a Break in the Mediatheque


People sitting in the Mediatheque watching the screen.

As the Walker’s social media manager (and a longtime internet obsessive) I live my life online, and it’s not much of an exaggeration to say that the small screen is my everything. In my off hours I usually juggle a phone and tablet while streaming Netflix or Hulu on my TV. This habit of layering my media consumption is exhilarating and exhausting, and yes, I frequently miss key plot points because I was distracted by a conversation someone sparked on Twitter. For instance, last night I cued up the latest episode of a TV show and realized I had no idea how a main character ended up in the hospital.

When news started spreading about a new way to access the Walker’s Ruben/Bentson Moving Image Collection, it sounded like something a device-addicted content consumer like me could get really excited about. With a touch screen remote, a large (but manageable) selection of films and videos, and comfy seats, the Mediatheque might be enough to get me to put down my phone and eschew the small screen for the big screen.

As a Walker staffer I was able to get a sneak preview, but starting today anyone can access the Mediatheque. You don’t even need to pay admission. Just walk right in, choose a film, find a seat, and imagine you’re in a private screening room as the opening credits start to roll.

My preview session started with a quick introduction, but the menus are simple enough that anyone familiar with Netflix or YouTube will quickly learn to navigate from playlists to search screens to the queue. Curated playlists with topics like “Icons and Iconography” and “Dreamscapes” are one option—touch a few buttons and a selection of films will be added to the queue and seamlessly play.

I chose “Cinemas of Resistance,” and the theater screen transitioned from its preview trailer as the first film began playing.

Mediatheque touch screen

I was torn between taking my seat and standing near the wall-mounted iPad to read the descriptions of each film. They take Netflix and IMDB summaries to the next level: like wall labels for cinema, you get a taste of history, context, and plot, plus a preview clip.

Mediatheque film preview screen

Some films are as short as a few minutes; others are feature length. Since I hadn’t allowed enough time for a marathon, I quickly cleared my queue and selected a Buster Keaton short. Alone in the dark theater, I put my phone away and settled in for a six-minute break from small screens.

 

Getting Mobile in the Garden


This summer marks a major milestone for the Minneapolis Sculpture Garden: 25 years as one of the country’s premier public sculpture parks. The New Media Initiatives department’s contribution to the celebration comes in the form of a brand new website for the garden, a fully responsive “web app” that has been an exciting challenge to build.

Opening screen of the web app
Map view of the garden

The new site is a radical shift from the static, research-focused 2004 version, and instead becomes an on-demand interpretive tool for exploration in the garden, including an interactive, GPS-capable map, audio tour, video interviews, and short snippets called “fun facts.” One of the most exciting features is the new 25th Anniversary Audio Tour called Community Voices. Last summer we recorded interviews in the garden with community members, first-time visitors, and some local celebrities, and it’s all come together in this tour to present a fantastic audio snapshot of just how special the garden is to people.

Detail view of Spoonbridge and Cherry
Interpretive media for Spoonbridge

The site provides light, casual information “snacking,” with prompts to dive deeper if time and interest allow. It gives visitors a familiar device (their own!) to enhance their visit at their own convenience.

Of course, we didn’t neglect our out-of-state or desktop visitors, but the site’s focus remains on getting people to the garden. For those unable to experience it physically (or for those frigid winter months), the new website provides a browsable interface to familiar favorites and up-to-date acquisitions and commissions.

Behind the scenes

MSG Web Data

Our proof of concept for the site was lean and mean, built quickly using open source tools (leaflet.js) and open data (OpenStreetMap). We didn’t have latitude/longitude positioning info for our public works of art, but as it turned out some kind soul had already added a significant number of our sculptures to OpenStreetMap! We set about adding the rest and knocked together a new “meta API” for the garden that would unify data streams from OSM, our Collections, Calendar, and existing media assets in Art on Call.
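As a rough illustration, the proof of concept amounted to something like this sketch using leaflet.js; the tile URL and the /garden-api endpoint are hypothetical placeholders, not our actual services:

```js
// Leaflet map with custom-designed tiles and sculpture markers
// pulled from a unifying "meta API" (placeholder endpoint below).
var map = L.map('garden-map', {
  center: [44.969, -93.289], // approximate center of the garden
  zoom: 17
});

L.tileLayer('https://tiles.example.org/garden/{z}/{x}/{y}.png', {
  maxZoom: 19,
  attribution: '&copy; OpenStreetMap contributors'
}).addTo(map);

fetch('/garden-api/sculptures')
  .then(function (res) { return res.json(); })
  .then(function (sculptures) {
    sculptures.forEach(function (s) {
      L.marker([s.lat, s.lng]).addTo(map).bindPopup(s.title);
    });
  });
```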

Fuzzy GPS


Next we began the process of verifying the data. We’d created custom map tiles for the garden so we could maintain the designed look and feel Eric was going for (look for a future post on the design process for this site), but it involved some compromises to make the paths line up visually. The New Media team spent a few hours walking the garden in the early spring, noting sculpture GPS anomalies and misplaced paths, and trying to avoid having anyone appear to be inside the hedges. No two devices gave the exact same GPS coordinates, so we ended up averaging the difference and calling it close enough.
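The “averaging” was nothing fancier than this sketch (the coordinates are illustrative, not our survey data):

```js
// Average several devices' readings for a single sculpture.
function averageCoords(readings) {
  var sum = readings.reduce(function (acc, r) {
    return { lat: acc.lat + r.lat, lng: acc.lng + r.lng };
  }, { lat: 0, lng: 0 });
  return { lat: sum.lat / readings.length, lng: sum.lng / readings.length };
}

// e.g. three phones' readings for one sculpture
averageCoords([
  { lat: 44.96907, lng: -93.28911 },
  { lat: 44.96912, lng: -93.28907 },
  { lat: 44.96905, lng: -93.28915 }
]);
```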

Native-ish

It’s not a web app. It’s an app you install from the web.

As we discovered while building the mobile side of the new Collections site, a properly tuned webpage can start to feel a lot like a native application. We’re using swipe gestures to move between information “slides” and pinch-to-zoom for the map, and we followed most of the tips in the forecast.io blog post to further enhance the experience. We’ll never be quite as snappy as a properly native app, but we feel the cross-platform benefits of the web fully outweigh that downside. (Not to mention our in-house expertise is web-based, not app-based.)
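The swipe handling boils down to something like this sketch; the 50-pixel threshold and the slide functions are illustrative, not our production code:

```js
// Detect horizontal swipes and page between information "slides".
var startX = null;

document.addEventListener('touchstart', function (e) {
  startX = e.touches[0].clientX;
});

document.addEventListener('touchend', function (e) {
  if (startX === null) return;
  var deltaX = e.changedTouches[0].clientX - startX;
  if (Math.abs(deltaX) > 50) { // ignore small accidental drags
    if (deltaX < 0) { showNextSlide(); } else { showPreviousSlide(); }
  }
  startX = null;
});
```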

Need for Speed


This was the make-or-break component of the mobile site: if it didn’t “feel” fast, no one would use it. We spent untold hours implementing just-in-time loading of assets so the initial payload would be tiny, while images still arrive just before they’re supposed to be on screen. We tuned the cache parameters so anyone who’s visited the site in the past will have the components they need when they return, but we can still push out timely updates in a lightweight manner. We optimized images and spread the map tiles across our Content Delivery Network to prevent a single-domain bottleneck.
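In spirit, the just-in-time loading looks like this sketch, assuming hypothetical data-src markup:

```js
// Swap in the real image source only when a slide is about to appear.
function loadImagesFor(slide) {
  Array.prototype.forEach.call(
    slide.querySelectorAll('img[data-src]'),
    function (img) {
      img.src = img.getAttribute('data-src');
      img.removeAttribute('data-src');
    }
  );
}

// On each swipe, load the current slide and preload its neighbor.
function onSlideChange(current) {
  loadImagesFor(current);
  if (current.nextElementSibling) {
    loadImagesFor(current.nextElementSibling);
  }
}
```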

Finally, and perhaps foolishly, we wrote a safety fallback that tries to estimate a user’s bandwidth as they load the welcome image: by timing the download of a known-size file, we can make a quick decision about whether they’re on a painfully slow 3G network or something better. On a slow connection we dynamically begin serving half-size images in an effort to improve the site’s performance. We’ll be monitoring usage statistics closely to see if and when this situation occurs and for which devices.
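A minimal sketch of that check; the file name, size, and cutoff are illustrative, not our production values:

```js
// Time the download of a known-size image, then decide whether to
// serve half-size assets. Numbers here are placeholders.
var TEST_IMAGE = '/img/welcome.jpg?nocache=' + Date.now();
var TEST_BYTES = 120000; // known size of the welcome image, in bytes

function estimateBandwidth(callback) {
  var img = new Image();
  var start = Date.now();
  img.onload = function () {
    var seconds = (Date.now() - start) / 1000;
    var kbps = (TEST_BYTES * 8 / 1000) / seconds;
    callback(kbps);
  };
  img.src = TEST_IMAGE;
}

estimateBandwidth(function (kbps) {
  if (kbps < 500) { // painfully-slow-3G territory
    document.body.classList.add('half-size-images');
  }
});
```

Which brings me to…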

Analytics


I hope I’m right when I say that anyone who’s heard me speak about museums and digital knows how adamant I am about measuring results and not just guessing if something is “working.” This site is no exception, with the added bonus of location tracking! We’re anonymizing user sessions and then pinging our server with location data so we can begin to build an aggregate “heatmap” of popular spots in the garden. Above is a screenshot of my first test walk through the garden.
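The pings amount to something like this sketch; the endpoint and payload shape are illustrative:

```js
// Anonymous session token plus periodic location pings.
// '/garden-api/heatmap' is a placeholder endpoint, not our real one.
var session = Math.random().toString(36).slice(2); // no account or device id

navigator.geolocation.watchPosition(function (pos) {
  var xhr = new XMLHttpRequest();
  xhr.open('POST', '/garden-api/heatmap');
  xhr.setRequestHeader('Content-Type', 'application/json');
  xhr.send(JSON.stringify({
    session: session,
    lat: pos.coords.latitude,
    lng: pos.coords.longitude,
    t: Date.now()
  }));
}, null, { enableHighAccuracy: true });
```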

We’re logging as many bits of information as we can about the usage of the new site in hopes of refining it, measuring success, and informing our future mobile interpretation efforts.

Enjoy!

Please visit the new Minneapolis Sculpture Garden website and let us know what you think!

 

Out with the Dialog Table, in with the Touch Wall


If you’ve explored our galleries, you’ve probably noticed the Dialog Table tucked into the Best Buy Info Lounge just off one of our main arteries. It’s a poster child of technical gotchas: custom hardware and software, cameras, projectors, finicky lighting requirements… Despite all the potential trouble embedded in the installation, it’s actually been remarkably solid apart from a few high-profile failures. At last tally, only the CPUs, capture cards, and one graphics board are original; the rest has been slowly replaced over the years as pieces have broken. (The graphics and video capture cards have drivers that aren’t upgradable at this point, so I’ve been trolling eBay to acquire various bits of antique hardware.)

It’s been a tank. A gorgeous, ahead-of-its-time, and mostly misunderstood tank. I’m both sad and excited to see it go.

I am, however, unequivocally excited about the replacement: two 65″ touch walls from Ideum. This change alone will alleviate one of the biggest human-interface mismatches with the old table: it wasn’t a touch surface, and everyone tried to use it that way.


Early meeting with demo software

We’re moving very quickly with our first round of work on the walls, trying to get something up as soon as possible and iterating from there. The immediate intention is to pursue a large-scale “big swipe” viewer of highlights from our collection. Trying to convey the multidisciplinary aspect of the Walker’s collection is always a challenge, but the Presenter wall gives us a great canvas with the option for video and audio.


The huge screen is an attention magnet

With the recently announced alpha release of Gestureworks Core with Python bindings, I’m also excited about what’s next for the walls. The open source Python library at kivy.org looks like a fantastic fit for rapidly developing multi-touch apps, with the possible benefit of pushing out Android/iOS versions as well.

At the recent National Digital Forum conference in New Zealand, I was inspired by a demo from Tim Wray showing some of his innovative work in presenting collections on a tablet. We don’t have a comprehensive body of tags around our work at this point, but his demo makes a compelling case for gathering that data. Imagine being able to create a set of objects on the fly showing “Violent scenes in nature” just from the paired tags “nature” and “violent.” Or “Blue paintings from Europe” using the tag “blue” and basic object metadata. Somehow the plain-text description imposed on simple tag data makes the set of objects more interesting (to me, anyway).

I’m starting to think that collection search is moving into the “solved” category, but truly browsing a collection online… We’re not there.
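To make the tag-pair idea concrete, a quick sketch with hypothetical data, not our collection records:

```js
// Generate a described set of works from paired tags plus basic metadata.
var works = [
  { title: 'Untitled (Storm)', tags: ['nature', 'violent'], medium: 'painting', region: 'Europe' },
  { title: 'Blue Field', tags: ['blue'], medium: 'painting', region: 'Europe' }
];

function browseSet(description, predicate) {
  return { description: description, works: works.filter(predicate) };
}

browseSet('Violent scenes in nature', function (w) {
  return w.tags.indexOf('violent') !== -1 && w.tags.indexOf('nature') !== -1;
});

browseSet('Blue paintings from Europe', function (w) {
  return w.tags.indexOf('blue') !== -1 &&
         w.medium === 'painting' && w.region === 'Europe';
});
```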

Touch screens, and multitouch in particular, seem destined for eventual greatness in the galleries, but as always the trick is to make the technical aspects of the experience disappear. I hope that by starting very simply, with obvious interactions, we can avoid the temptation to make this about the screens, and instead make it about the works we’ll be showing.

Walkerart.org Design Notes #1


As you’ve likely seen, we recently launched a brand new, long overdue redesign of our web presence. Olga already touched on the major themes nicely, so suffice it to say, we’ve taken a major step towards reconceptualizing the Walker as an online content provider, creating another core institutional offering that can live on its own as an internationally-focused “digital Walker,” instead of something that merely serves the local, physical space.

We largely started from scratch with the user experience and design of the site; the old site, for all its merits, had started to show its age on that front, being originally designed over six years ago – an eternity in web-years. That said, we’re still traditionalists in some ways where new media design is concerned, and took a really minimal, monochromatic, print/newspaper-style approach to the homepage and article content. So in a way, it’s a unique hybrid of the old/time-tested (in layout) and new/innovative (in concept and content), hopefully all tempered by an unadorned, type-centric aesthetic that lets the variety of visuals really speak for themselves.

Our inspiration was a bit scattershot, as we tried to bridge a gap between high and low culture in a way reflective of the Walker itself. Arts and cultural sites were obviously a big part (particularly Metropolis M and its wonderful branded sidebar widgets), but not so much museums, which have traditionally been more conservative and promotionally-driven. With our new journalistic focus, two common touchstones became The New York Times’ site and The Huffington Post – with the space in between being the sweet spot. The former goes without saying. The latter gets a bad rap, but we were intrigued by its slippery, weirdly click-enticing design tricks and general sense of content-driven chaos enlivened by huge contrasts in scale. The screaming headlines aren’t pretty, but they’re tersely honest and engaging in an area where a more traditional design would introduce some distance. And the content, however vapid, is true to its medium; it’s varied and easily digestible. (See also Jason Fried’s defense of the seemingly indefensible.)

Of course, we ended up closer to the classier, NYT side of things, and to that end, we were really fortunate to start this process around the advent of truly usable web font services. While the selection’s still rather meager beyond the workhorse classics and a smattering of more gimmicky display faces (in other words, Lineto, we’re waiting), really I’m just happy to see less Verdana in the world. And luckily for us, the exception-to-the-rule Colophon Foundry has really stepped up their online offerings lately – it’s Aperçu that you’re seeing most around the site, similar in form to my perennial favorite Neuzeit Grotesk but warmer, more geometric, and with a touch of quirk.

Setting type for the web still isn’t without its issues, with even one-point size adjustments sometimes resulting in wildly different renderings, but with careful trial-and-error testing and selective application of the life-saving -webkit-font-smoothing CSS property, we managed to get as close as possible to our ideal. It’s the latter in particular that allows us elegant heading treatments (though the effect is only visible in Safari and Chrome): set to antialiased, letterforms are less beholden to the pixel grid and more immune to the thickening that sometimes occurs on high-contrast backgrounds.

It’s not something I’d normally note, but we’re breaking away from the norm a bit with our article treatments, using the more traditional indentation format instead of the web’s usual paragraph spacing, finding it to flow better. It’s done using a somewhat complex series of CSS pseudo-elements in combination with adjacent selectors – browser support is finally good enough to accomplish such a thing, thankfully, though it does take a moment to get used to on the screen, strangely enough. And we’re soon going to be launching another section of the site with text rotation, another technique only recently made possible in pure CSS. Coming from a print background, it’s a bit exciting to have these tools available again.

Most of the layout is accomplished with the help of the 960 Grid System. Earlier attempts at something more semantically meaningful proved more hassle than they were worth, considering our plethora of more complex layouts. We’ve really attempted something tighter and more integrated than normally seen on the web, and I think it’s paid off well. That said, doing so really highlighted the difficulties of designing for dynamic systems of content – one such case that reared its head early on was titles in tiles (one of the few “units” of content used throughout the site).

A tricky issue at first considering our avoidance of ugly web aesthetics like fades (and artificial depth/dimensionality, and gradients, and drop shadows…), but one eventually solved with the implementation of our date treatments, whose connecting lines also function nicely as a cropping line – a tight, interlocking, cohesive system using one design element to solve the issues of another. We’ve tried to use similar solutions across the site, crafting a system of constraints and affordances, as in the case of our generated article excerpts:

Since we’re losing an element of control with freeform text fields on the web and no specific design oversight as to their individual display, we’ve chosen to implement logic that calculates an article title’s line-length, and then generates only enough lines of the excerpt to match the height of any neighboring articles. It’s a small detail for sure, but we’re hoping these details add up to a fine experience overall.
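Conceptually, that logic looks something like this sketch; the class names and the six-line budget are illustrative:

```js
// Give each tile a fixed line budget: however many lines the title
// wraps to, the excerpt gets the remainder.
var TOTAL_LINES = 6;

function trimExcerpt(tile) {
  var title = tile.querySelector('.title');
  var excerpt = tile.querySelector('.excerpt');
  var lineHeight = parseFloat(getComputedStyle(title).lineHeight); // assumes a px line-height
  var titleLines = Math.round(title.offsetHeight / lineHeight);

  // Clamp the excerpt to the remaining lines (WebKit-style line clamp).
  excerpt.style.display = '-webkit-box';
  excerpt.style.webkitBoxOrient = 'vertical';
  excerpt.style.webkitLineClamp = String(TOTAL_LINES - titleLines);
  excerpt.style.overflow = 'hidden';
}
```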

Anyway, there’s still more to come – you’ll see a few painfully neglected areas here and there (our collections in particular, but also the Sculpture Garden and to a lesser extent these blogs), but they’re next on our list and we’ll document their evolution here.


Event Documentation and Webcasting for Museums


At the Walker, we webcast many of our events live. It is a history fraught with hiccups and road bumps, but doing so has given our audiences the opportunity to watch lectures, artist talks, and events live from home or even abroad. More importantly, webcasting has focused our technique for documenting events. In the broadcast world, “straight to tape” refers to programs, such as late-night talk shows, that are directed live and sent straight to videotape, free of post-production. For the most part, we also try to minimize our post-production process, allowing us to push out content relatively quickly before moving on to the next show.

At the heart of our process is a Panasonic AV-HS400 video mixer, which accepts both an HD-SDI camera feed and a VGA feed from the presenter’s laptop.  The video mixer allows us to cut live between the speaker and his or her presentation materials, either with fades or straight cuts. In addition, the mixer’s picture-in-picture capability allows us to insert presentation materials into the frame, next to the speaker.  Doing so gives viewers both the expressiveness of the presenter and the visual references live audiences are seeing. One thing to note: if a speaker begins moving around the stage, it becomes difficult to frame a picture-in-picture, so the technique works better when people stand still.


The camera we use is a Sony PMW-350K, part of the XDCAM family. We shoot from the back of the room in all of our public spaces, putting a lot of distance between the camera and the subject. As a result, we need all the zoom our camera lens can give. Presently our lens is a Fujinon 8mm–128mm (16x), but realistically we could use something longer for better close-ups of the speaker. This is an important factor when considering cameras: where will your camera be positioned in relation to the subject, and how much reach is needed to get a good shot? Having a camera close to the speaker isn’t always practical with a live audience present, so many shooters push the limits of their camera lens. Being so far out also puts a lot of strain on the tripod head: it is very easy to jiggle the frame when making slight camera moves at full zoom, so a good tripod head should go hand in hand with a long video lens.

For audio, our presenter’s microphone first hits the house soundboard and then travels to our camera where levels are monitored and adjusted. At that point, both the audio and the camera’s images travel through a single HD-SDI BNC cable to our video mixer where audio and video signals split up once again. This happens because the mixer draws audio from whatever source is selected. As such, if a non-camera source is selected, such as the PowerPoint, no audio is present. To resolve this, an HD-SDI direct out from the camera source on the mixer is used to feed a device that re-embeds the audio with the final mixed video signal. The embedding device we use is an AJA FS-1 frame synchronizer.


With the frame synchronizer now kicking out a finished program, complete with embedded audio, our AJA KiPro records the content to an Apple ProRes file. We use a solid-state hard drive module as media, which pops out after an event is over and plugs directly into a computer for file transferring. An important thing to remember for anyone considering a mixer is that an external recording device is necessary to capture the final product.

To webcast, our FS-1 frame synchronizer simultaneously sends out a second finished signal to our Apple laptop. The laptop is outfitted with a video capture card, in our case a Matrox MXO2 LE breakout box, that attaches via the ExpressCard slot. Once the computer recognizes the video signal, it is ready for webcasting. The particular service we use is called Ustream. A link to our Ustream account is embedded in the Walker’s video page, titled The Channel, and viewers can watch the event live through their browser. Live viewership can run the gamut from just a few people to more than 75 viewers. Design-related programs–like the popular lecture by designer Aaron Draplin in March–tend to attract the biggest audiences. Once an event has concluded, Ustream stores a recording of the event within the account. We have the option to link to this recorded Ustream file through our website, but we don’t. Instead we try to quickly process our own recording to improve the quality before uploading it to YouTube.


The most frustrating part of our webcasting experiment has been bandwidth. The Walker has very little of it and thus we share a DSL line with the FTP server for webcasting. The upload speed on this DSL line tops out at 750 kbps. In real life, we get more like 500 kbps, leaving us to broadcast around 400 kbps. These are essentially dial-up numbers, which means the image quality is poor and our stream is periodically lost, even when the bit rate is kept down. Viewers at home are therefore prone to multiple disruptions while watching an event. We do hope to increase bandwidth in the coming months to make our service more reliable.

Earlier I mentioned that the Walker does as little post-production as possible for event documentation, but we still do some. Once the final ProRes file is transferred to an editing station, it is opened in Final Cut 7. The audio track is then exported as a stand-alone stereo file and opened in Soundtrack Pro, where it is normalized to 0 dB and given a layer of compression. With live events, speakers often turn their heads or move away from the microphone, which can make audio levels uneven. Compression helps bring the softer moments in line with the louder ones, limiting dynamic range and delivering a more consistent product.

After the audio track is finished, it is dropped back into the timeline and the program’s front and back ends are trimmed. We try to cut out all topical announcements and unnecessary introductions. Viewers don’t need to hear about this weekend’s events two years from now, so we don’t waste their time with it. In addition to tightening up the top of the show, an opening title slide is added with the program’s name and date. The timeline is then exported as a reference file and converted to an MP4 with the free program MPEG Streamclip.

MPEG Streamclip is a favorite of mine because it lists the final file size and lets users easily adjust the bit rate. With a 2GB file-size limit on YouTube uploads, we try to maximize bitrate (typically 1800–3000 kbps) for our 1280×720p files. Using a constant bit rate for encoding instead of a variable bit rate also saves us a lot of time. With the runtime of our events averaging 90 minutes, the sacrifice in image quality for a constant bit rate seems justified considering how long an HD variable-bit-rate encode can take.
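The back-of-envelope math behind that bitrate range, as a sketch:

```js
// Max total bitrate (video + audio) that fits a runtime under a size cap.
function maxKbps(runtimeMinutes, capGigabytes) {
  var bits = capGigabytes * 8e9;            // cap in bits (decimal GB)
  var seconds = runtimeMinutes * 60;
  return Math.floor(bits / seconds / 1000); // kilobits per second
}

maxKbps(90, 2); // ≈ 2962 kbps for a 90-minute program under 2 GB,
                // which is why we target 1800–3000 kbps
```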

Once we have the final MP4 file it is uploaded to YouTube and embedded in the Walker’s video page.

 

Digital Wayfinding in the Walker, Pt. 1


An ongoing conversation here at the Walker concerns the issue of systemic wayfinding within our spaces — certainly an important issue for an institution actively seeking attendance and public engagement, not to mention an institution whose building is literally a hybrid of the old and new (with our 2005 expansion). While not normally in New Media’s purview, and only occasionally so for Design, a recent initiative to improve the flow and general satisfaction of visitors brought with it the idea of using digital displays, with their malleable content and powerful visual appeal, to guide and direct people throughout the Walker.

Our new static directional signage

Currently installed in one location of an eventual three, and with a simple “phase one” version of the content, the Bazinet Lobby monitor banks cycle through the title graphics for all the exhibitions currently on view, providing a mental checklist of sorts that lets the visitor tally what he or she has or hasn’t yet seen, one that directly references the vinyl graphics at each gallery entrance. The corner conveniently works as an intersection for two hallways leading to a roughly equivalent number of galleries in either direction, one direction leading to our collection galleries in the Barnes tower, and the other to our special exhibition galleries in the Herzog & de Meuron expansion. To this end, we’ve repurposed the “street sign” motif used on our new vinyl wall graphics to point either way (which also functions as a nice spatial divider). Each display tower cycles through its given exhibitions with a simple sliding transition, exposing the graphics one by one. An interesting side effect of this motion and the high-contrast LCDs has been the illusion of each tower being a ’70s-style mechanical lightbox; I’ve been tempted to supplement it with a soundtrack of quiet creaking.

The system, powered by Sedna Presenter and running on four headless, remotely-accessible Mac Minis directly behind the wall, affords us a lot of flexibility. While our normal exhibitions cycle is a looped After Effects composition, we’re also working on everything from decorative blasts of light and pattern (the screens are blindingly bright enough to bathe almost the entire lobby in color), to live-updating Twitter streams (during parties and special events), to severe weather and fire alerts (complete with a rather terrifying pulsating field of deep red). In fact, this same system is now even powering our pre-show cinema trailers. I’m particularly interested in connecting these to an Arduino’s environmental sensors that would allow us to dynamically change color, brightness, etc. based on everything from temperature to visitor count to time of day — look for more on that soon.
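Nothing is built yet, but the sensor idea might look something like this sketch; the websocket bridge and message shape are purely hypothetical:

```js
// Map environmental readings from an Arduino (relayed over a
// serial-to-websocket bridge) to the display's color and brightness.
var socket = new WebSocket('ws://localhost:8081/sensors');

socket.onmessage = function (event) {
  var r = JSON.parse(event.data); // e.g. { temperature, visitors, hour }

  // Dim the wall in the evening; warm the palette when it's cold out.
  var brightness = r.hour > 18 ? 0.6 : 1.0;
  var hue = r.temperature < 10 ? 20 : 200; // warm vs. cool

  document.body.style.filter = 'brightness(' + brightness + ')';
  document.body.style.backgroundColor = 'hsl(' + hue + ', 80%, 50%)';
};
```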

See it in action:

Behind the scenes / Severe weather alert:

 

Installation:

  
