Blogs Media Lab

Out with the Dialog Table, in with the Touch Wall

If you’ve explored our galleries, you’ve probably noticed the Dialog Table tucked into the Best Buy Info Lounge just off one of our main arteries. It’s a poster child of technical gotchas: custom hardware and software, cameras, projectors, finicky lighting requirements… Despite all the potential trouble embedded in the installation, it’s actually been remarkably solid apart from a few high-profile failures. At last tally, only the CPUs, capture cards, and one graphics board are original; the rest has been slowly replaced over the years as pieces have broken. (The graphics and video capture cards have drivers that aren’t upgradable at this point, so I’ve been trolling eBay to acquire various bits of antique hardware.)

It’s been a tank. A gorgeous, ahead-of-its-time, and mostly misunderstood tank. I’m both sad and excited to see it go.

I am, however, unequivocally excited about the replacement: two 65″ touch walls from Ideum. This change alone will alleviate one of the biggest human-interface mismatches with the old table: it wasn’t a touch surface, and everyone tried to use it that way.

[Image: Early meeting with demo software]

We’re moving very quickly with our first round of work on the walls, trying to get something up as soon as possible and iterating from there. The immediate intention is to pursue a large-scale “big swipe” viewer of highlights from our collection. Trying to convey the multidisciplinary aspect of the Walker’s collection is always a challenge, but the Presenter wall gives us a great canvas with the option for video and audio.

[Image: The huge screen is an attention magnet]

With the recently announced alpha release of Gestureworks Core with Python bindings, I’m also excited for the possibilities of what’s next for the walls. The open source Python library at kivy.org looks like a fantastic fit for rapidly developing multi-touch apps, with the possible benefit of pushing out Android / iOS versions as well. At the recent National Digital Forum conference in New Zealand I was inspired by a demo from Tim Wray showing some of his innovative work in presenting collections on a tablet. We don’t have a comprehensive body of tags around our work at this point, but this demo seems to provide a compelling case for gathering that data. Imagine being able to create a set of objects on the fly showing “Violent scenes in nature” just from the paired tags “nature” and “violent”. Or “Blue paintings from Europe” using the tag “blue” and basic object metadata. Somehow the plain text description imposed on simple tag data makes the set of objects more interesting (to me, anyway). I’m starting to think that collection search is moving into the “solved” category, but truly browsing a collection online… We’re not there.
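To make that concrete, here’s a minimal Python sketch of the kind of tag-pair query I’m imagining – the object records, tags, and field names are all invented for illustration, not our actual collection data:

[cc lang="python"]
# Hypothetical object records – in reality these would come from the
# collections database, and the tags would be crowdsourced or curated.
artworks = [
    {"title": "Untitled (storm)", "origin": "USA", "medium": "photograph",
     "tags": {"nature", "violent"}},
    {"title": "Composition in Ultramarine", "origin": "Europe",
     "medium": "painting", "tags": {"blue", "abstract"}},
]

def browse(label, required_tags, **metadata):
    """Build a labeled, on-the-fly set of objects from tags plus basic metadata."""
    required = set(required_tags)
    hits = [a for a in artworks
            if required <= a["tags"]
            and all(a.get(k) == v for k, v in metadata.items())]
    return label, hits

print(browse("Violent scenes in nature", {"nature", "violent"}))
print(browse("Blue paintings from Europe", {"blue"},
             origin="Europe", medium="painting"))
[/cc]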

Touch screens, and multitouch in particular, seem destined for eventual greatness in the galleries, but as always the trick is to make the technical aspect of the experience disappear. I hope that by starting very simply with obvious interactions we can avoid the temptation to make this about the screens, and instead make it about the works we’ll be showing.

Beyond Interface: #Opencurating and the Walker’s Digital Initiatives

The new Walker Art Center website “heralds a paradigmatic shift for innovative museum websites in creating an online platform with an emphasis on publishing,” write Max Andrews and Mariana Cánepa Luna of the Barcelona-based curatorial office Latitudes, who add that the site places the Walker “at the centre of generating conversations around content from both inside and outside the Walker’s activities.” The pair discusses ideas behind the site with Robin Dowden, the Walker’s director of new media initiatives, web editor Paul Schmelzer, and Nate Solas, senior new media designer, as part of #OpenCurating, Latitudes’ new research effort investigating the ways contemporary art projects “can function beyond the traditional format of exhibition-and-catalogue in ways which might be more fully knitted into the web of information which exists in the world today.” Consisting of a moderated Twitter discussion, an event in Barcelona, and a series of 10 online interviews, #OpenCurating launches with the conversation below. As #OpenCurating content partner, the Walker will host conversations from this developing series on its homepage.

Introducing Media Lab and the New Walker Blogs

It’s been seven years since we launched the Walker Blogs, and with the release of our new website back in December we thought it was finally time for a refresh. You’ll notice that the design has changed to align with our new website, and we’ve used the redesign process as an opportunity to rebrand each of our core blogs. It was an interesting exercise that let us assess the state of our collective blogging efforts – how each of our (now) nine blogs serves a different audience, how differently those audiences use them, and how they could all be focused into tighter streams of content. The blogs definitely represent the long-tail side of our publishing efforts – lots of small bits of specialized content for micro-niche audiences – so maintaining a strong emphasis on the personalities behind the Walker and their specific interests was key. And the rebranding process showed us that when you give people tangible criteria to work with – a new name, a tighter description, a graphic: an understandable format to inhabit – it helps them better imagine what their blog can be.

We decided on a system of flag graphics to represent the various blogs, since each blog is really a representation of a different group of people at the Walker (in most cases the individual programming departments). It’s a tricky balance to strike between striving for traditional, recognizable flag forms and having a graphic that cleverly plays off the title, but we’re glad to have a consistent vocabulary to build on in the future, especially since the blogs now match our comparatively monochromatic main site. We’re particularly fond of the Green Room’s flag.

Beyond the simple graphic forms, this is the first truly responsively designed Walker site – resize your browser window to see things reflow to fit a variety of screen sizes. Content and interface elements of lesser importance become hidden behind links at certain screen sizes. The main content area, on the other hand, stretches to fill a large width when called for. It leads to some pretty long line lengths, but it gives our older, image-heavy content the space it needs to fit. We’ll soon be applying this technique to the redesigned Walker Collections, which features a strong publishing component. With its easy adaptation to tablets and mobile devices, it’s a good fit for our eventual goal of efficient multi-channel communications.

Other, smaller items of note include the addition of a grid/list view toggle in the top left to make skimming easier, smarter ordering of categories and authors (by popularity and date of last post, respectively), and a fun little flag animation when you roll over the left-side blog names (in full-width view).

And just for kicks, here are some rejected flag sketches:

Walkerart.org Design Notes #1

As you’ve likely seen, we recently launched a brand new, long overdue redesign of our web presence. Olga already touched on the major themes nicely, so suffice it to say, we’ve taken a major step towards reconceptualizing the Walker as an online content provider, creating another core institutional offering that can live on its own as an internationally-focused “digital Walker,” instead of something that merely serves the local, physical space.

We largely started from scratch with the user experience and design of the site; the old site, for all its merits, had started to show its age on that front, being originally designed over six years ago – an eternity in web-years. That said, we’re still traditionalists in some ways where new media design is concerned, and took a really minimal, monochromatic, print/newspaper-style approach to the homepage and article content. So in a way, it’s a unique hybrid of the old/time-tested (in layout) and new/innovative (in concept and content), hopefully all tempered by an unadorned, type-centric aesthetic that lets the variety of visuals really speak for themselves.

Our inspiration was a bit scattershot, as we tried to bridge a gap between high and low culture in a way reflective of the Walker itself. Arts and cultural sites were obviously a big part (particularly Metropolis M and its wonderful branded sidebar widgets), but not so much museums, which have traditionally been more conservative and promotionally driven. With our new journalistic focus, two common touchstones became The New York Times’ site and The Huffington Post – with the space in between being the sweet spot. The former goes without saying. The latter gets a bad rap, but we were intrigued by its slippery, weirdly click-enticing design tricks and general sense of content-driven chaos enlivened by huge contrasts in scale. The screaming headlines aren’t pretty, but they’re tersely honest and engaging in an area where a more traditional design would introduce some distance. And the content, however vapid, is true to its medium; it’s varied and easily digestible. (See also Jason Fried’s defense of the seemingly indefensible.)

Of course, we ended up closer to the classier, NYT side of things, and to that end, we were really fortunate to start this process around the advent of truly usable web font services. While the selection’s still rather meager beyond the workhorse classics and a smattering of more gimmicky display faces (in other words, Lineto, we’re waiting), really I’m just happy to see less Verdana in the world. And luckily for us, the exception-to-the-rule Colophon Foundry has really stepped up their online offerings lately – it’s Aperçu that you’re seeing most around the site, similar in form to my perennial favorite Neuzeit Grotesk but warmer, more geometric, and with a touch of quirk.

Setting type for the web still isn’t without its issues, with even one-point size adjustments sometimes resulting in wildly different renderings, but with careful trial-and-error testing and selective application of the life-saving -webkit-font-smoothing CSS property, we managed to get as close as possible to our ideal. It’s the latter in particular that allows us elegant heading treatments (though the effect is only visible in Safari and Chrome): set to antialiased, letterforms are less beholden to the pixel grid and more immune to the thickening that sometimes occurs on high-contrast backgrounds.

It’s not something I’d normally note, but we’re breaking away from the norm a bit with our article treatments, using the more traditional indentation format instead of the web’s usual paragraph spacing, finding it to flow better. It’s done using a somewhat complex series of CSS pseudo-elements in combination with adjacent selectors – browser support is finally good enough to accomplish such a thing, thankfully, though it does take a moment to get used to on the screen, strangely enough. And we’re soon going to be launching another section of the site with text rotation, another technique only recently made possible in pure CSS. Coming from a print background, it’s a bit exciting to have these tools available again.

Most of the layout is accomplished with the help of the 960 Grid System. Earlier attempts at something more semantically meaningful proved more hassle than they were worth, considering our plethora of more complex layouts. We’ve really attempted something tighter and more integrated than is normally seen on the web, and I think it’s paid off well. That said, doing so really highlighted the difficulties of designing for dynamic systems of content – one such case that reared its head early on was titles in tiles (one of the few “units” of content used throughout the site).

It was a tricky issue at first, considering our avoidance of ugly web aesthetics like fades (and artificial depth/dimensionality, and gradients, and drop shadows…), but one eventually solved with the implementation of our date treatments, whose connecting lines also function nicely as cropping lines – a tight, interlocking, cohesive system using one design element to solve the issues of another. We’ve tried to use similar solutions across the site, crafting a system of constraints and affordances, as in the case of our generated article excerpts:

Since we’re losing an element of control with freeform text fields on the web and no specific design oversight as to their individual display, we’ve chosen to implement logic that calculates an article title’s line-length, and then generates only enough lines of the excerpt to match the height of any neighboring articles. It’s a small detail for sure, but we’re hoping these details add up to a fine experience overall.
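In spirit, the calculation looks something like this Python sketch (the real logic lives in our templates and front-end code; the character-per-line constant and function names here are assumptions for illustration, and tile_lines would in practice come from the measured height of neighboring articles):

[cc lang="python"]
import math
import textwrap

# Assumed layout constant for illustration – a real implementation would
# derive the line length from the rendered column width.
CHARS_PER_LINE = 38

def excerpt_for(title, body, tile_lines=6):
    """Generate only enough excerpt lines to fill the space the title leaves."""
    title_lines = math.ceil(len(title) / CHARS_PER_LINE)
    remaining = max(tile_lines - title_lines, 0)
    lines = textwrap.wrap(body, width=CHARS_PER_LINE)[:remaining]
    return " ".join(lines)

print(excerpt_for("Out with the Dialog Table, in with the Touch Wall",
                  "If you’ve explored our galleries, you’ve probably noticed…"))
[/cc]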

Anyway, there’s still more to come – you’ll see a few painfully neglected areas here and there (our collections in particular, but also the Sculpture Garden and to a lesser extent these blogs), but they’re next on our list and we’ll document their evolution here.

Process/miscellany

Event Documentation and Webcasting for Museums

At the Walker, we webcast many of our events live. It’s a history fraught with hiccups and road bumps, but doing so has given our audiences the opportunity to watch lectures, artist talks, and events live from home or even abroad. More importantly, webcasting has focused our technique for documenting events. In the broadcast world, “straight to tape” is a term used for programs, such as late-night talk shows, that are directed live and sent straight to videotape, free of post-production. For the most part, we also try to minimize our post-production process, allowing us to push out content relatively quickly before moving on to the next show.

At the heart of our process is a Panasonic AV-HS400 video mixer, which accepts both an HD-SDI camera feed and a VGA feed from the presenter’s laptop.  The video mixer allows us to cut live between the speaker and his or her presentation materials, either with fades or straight cuts. In addition, the mixer’s picture-in-picture capability allows us to insert presentation materials into the frame, next to the speaker.  Doing so gives viewers both the expressiveness of the presenter and the visual references live audiences are seeing. One thing to note: if a speaker begins moving around the stage, it becomes difficult to frame a picture-in-picture, so the technique works better when people stand still.

The camera we use is a Sony PMW-350K, which is part of the XDCAM family. We shoot from the back of the room in all of our public spaces, putting a lot of distance between the camera and the subject. As a result, we need all the zoom our camera lens can give. Presently our lens is a Fujinon 8mm–128mm (16x), but realistically we could use something longer for better close-ups of the speaker. This is an important factor when considering cameras: where will your camera be positioned in relation to the subject, and how much reach is needed to get a good shot? Having a camera close to the speaker isn’t always practical with a live audience present, so many shooters push the limits of their camera lens. Being so far out also puts a lot of strain on a tripod head. It is very easy to jiggle the frame when making slight camera moves at full zoom, so a good tripod head should go hand in hand with a long video lens.

For audio, our presenter’s microphone first hits the house soundboard and then travels to our camera where levels are monitored and adjusted. At that point, both the audio and the camera’s images travel through a single HD-SDI BNC cable to our video mixer where audio and video signals split up once again. This happens because the mixer draws audio from whatever source is selected. As such, if a non-camera source is selected, such as the PowerPoint, no audio is present. To resolve this, an HD-SDI direct out from the camera source on the mixer is used to feed a device that re-embeds the audio with the final mixed video signal. The embedding device we use is an AJA FS-1 frame synchronizer.

With the frame synchronizer now kicking out a finished program, complete with embedded audio, our AJA KiPro records the content to an Apple ProRes file. We use a solid-state hard drive module as media, which pops out after an event is over and plugs directly into a computer for file transferring. An important thing to remember for anyone considering a mixer is that an external recording device is necessary to capture the final product.

To webcast, our FS-1 frame synchronizer simultaneously sends out a second finished signal to our Apple laptop. The laptop is outfitted with a video capture card, in our case a Matrox MXO2 LE breakout box, that attaches via the ExpressCard slot. Once the computer recognizes the video signal, it is ready for webcasting. The particular service we use is called Ustream. A link to our Ustream account is embedded in the Walker’s video page, titled The Channel, and viewers can watch the event live through their browser. Live viewership can run the gamut from just a few people to more than 75 viewers. Design-related programs–like the popular lecture by designer Aaron Draplin in March–tend to attract the biggest audiences. Once an event has concluded, Ustream stores a recording of the event within the account. We have the option to link to this recorded Ustream file through our website, but we don’t. Instead we try to quickly process our own recording to improve the quality before uploading it to YouTube.

The most frustrating part of our webcasting experiment has been bandwidth. The Walker has very little of it and thus we share a DSL line with the FTP server for webcasting. The upload speed on this DSL line tops out at 750 kbps. In real life, we get more like 500 kbps, leaving us to broadcast around 400 kbps. These are essentially dial-up numbers, which means the image quality is poor and our stream is periodically lost, even when the bit rate is kept down. Viewers at home are therefore prone to multiple disruptions while watching an event. We do hope to increase bandwidth in the coming months to make our service more reliable.

Earlier I mentioned that the Walker does as little post-production as possible for event documentation, but we still do some. Once the final ProRes file is transferred to an editing station, it is opened up in Final Cut 7. The audio track is then exported as a stand-alone stereo file and opened with Soundtrack Pro, where it is normalized to 0 dB and given a layer of compression. With live events, speakers often turn their heads or move away from the microphone periodically, which can make audio levels uneven. Compression helps bring the softer moments in line with the louder ones, thus limiting dynamic range and delivering a more consistent product.
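We do this by hand in Soundtrack Pro, but the same normalize-then-compress pass can be scripted; here’s a minimal sketch using the open source pydub library (an assumption for illustration – it’s not part of our actual workflow, and the file names are placeholders):

[cc lang="python"]
from pydub import AudioSegment
from pydub.effects import normalize, compress_dynamic_range

# Load the exported stand-alone stereo track from the event recording.
audio = AudioSegment.from_file("event_audio.wav")

# Normalize peaks up to 0 dB, then compress so quiet passages
# (a speaker turning away from the mic) sit closer to the loud ones.
audio = normalize(audio, headroom=0.0)
audio = compress_dynamic_range(audio, threshold=-20.0, ratio=4.0)

audio.export("event_audio_processed.wav", format="wav")
[/cc]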

After the audio track is finished, it is dropped back into the timeline and the program’s front and back ends are trimmed. We try to cut out all topical announcements and unnecessary introductions; viewers don’t need to hear about this weekend’s events two years from now, so we don’t waste their time with it. In addition to tightening up the top of the show, an opening title slide is added with the program’s name and date. The timeline is then exported as a reference file and converted to an MP4 through the shareware program MPEG Streamclip.

MPEG Streamclip is a favorite of mine because it lists the final file size and lets users easily adjust the bit rate. With a 2GB file-size limit on YouTube uploads, we try to maximize bitrate (typically 1800–3000 kbps) for our 1280 x 720p files. Using a constant bit rate for encoding instead of a variable bit rate also saves us a lot of time. With the runtime of our events averaging 90 minutes, the sacrifice in image quality from a constant bit rate seems justified considering how long an HD variable-bit-rate encode can take.
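The arithmetic behind that bitrate ceiling is simple; here’s a quick sanity check in Python (the 90-minute runtime and 2GB cap come from above; the audio bitrate is an assumed example):

[cc lang="python"]
# How high can the bitrate go before a 90-minute encode hits
# YouTube's 2 GB upload cap?
runtime_s = 90 * 60                # 90-minute event, in seconds
size_limit_bits = 2 * 1024**3 * 8  # 2 GB expressed in bits
audio_kbps = 192                   # assumed audio bitrate

total_kbps = size_limit_bits / runtime_s / 1000
video_kbps = total_kbps - audio_kbps
print(f"max total ≈ {total_kbps:.0f} kbps, video ≈ {video_kbps:.0f} kbps")
# ≈ 3181 kbps total – which is why our constant bit rates top out around 3000.
[/cc]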

Once we have the final MP4 file it is uploaded to YouTube and embedded in the Walker’s video page.

Museums & the Web 2012 Conference Notes

It’s been a couple of years since I attended the annual Museums & the Web conference. A must-stop for professionals working in the field of museums + all things online, this conference celebrated its 16th anniversary under new management with the same great content we’ve come to expect.

A few of my conference takeaways:

Cultural data sculpting
Sarah Kenderdine kicked off the conference, wowing us with her work in immersive environments using panoramic and stereoscopic display systems. I was entranced by recent installations using 3D imagery, high-resolution augmented panoramas, and circular screens to recreate cultural heritage sites, performances, and narratives (imagine dancers animating images in a cave painting, and physical interactions with enormous datasets). From Hampi, India, and the Mogao caves, Dunhuang, China, to adaptations of Beckett narratives, the work of Kenderdine’s lab at the City University of Hong Kong demonstrates the amazing possibilities for enhanced exploration, interactive interpretation, and new modalities of human interaction for cultural heritage preservation. Project documentation is available here.

Be where the puck is going
In a session on Digital Strategies, Bruce Wyman evoked Wayne Gretzky’s advice to “skate to where the puck is going to be, not where it has been.” Bruce spoke to the permeability of place as the future of interactive media and suggested that restrictive digital strategies may run counter to our needs. In a period of fundamental change, we need to evolve the things that we are good at, be nimble, and design not for the device but for the visitor and their engagement. Wyman encouraged us to trust our audiences and to serialize the experience by developing content that transcends and crosses platforms.

Like Wyman, Rob Stein is an eloquent technology advocate. In the same session, he advised to make sure your digital strategy reflects the larger museum strategy. And all you technologists who think you have difficulty getting upper management’s ear, work on your communication skills. Learn to write! Despite his claim that writing doesn’t come easy, Stein’s paper is excellent: Blow Up Your Digital Strategy: Changing the Conversation about Museums and Technology.

After Gutenberg
There was much talk in conference sessions and informal meetups about changing publishing models. In the session After Gutenberg, the Whitney’s Sarah Hromack described the evolution of Whitney Stories, a blog wherein the museum is wrestling with questions of authority—what stories do we want to tell, which staff are qualified to speak on behalf of the museum, editorial approval—and issues of sustainability. I haven’t had a chance to read the paper but the presentation was a refreshingly honest assessment of the inherent problems in this work and the reality of making it a part of our daily practice (not in addition to what we do but rethinking how we do our work).

A museum without labels
The Museum of Old and New Art (MONA) is Australia’s largest private museum, a “secular temple” of 6,000 square meters to worship materialism, with nary a label on the walls. Visitors use the ‘O’ mobile device to read about art on display and listen to interviews with the artists. The museum’s unique take on audience engagement – including its stated intent to remove the most popular work, as evidenced in ‘O’ stats, and its restriction of online collection access to visitors who have actually experienced the artwork – suggests this is indeed a museum visitors are unlikely to forget. I enjoyed this article on MONA’s founder, David Walsh, describing his vision for this “subversive Disneyland.”

Spreading an analytics culture
There were a number of good sessions addressing the importance of continuous evaluation and building a culture of analytics. The panel on the Culture24 research project focused on the key findings in their recently published report. Among them, be clear what you are trying to do online and who it is for. Revise the whole suite of metrics you care about and the tools used to measure them. Google Analytics is only part of a multi-tool solution that begins with a good problem definition.

One of the participants in the Culture24 project, the Tate, went into more detail on its efforts in a subsequent session and paper, Making Sense of Numbers: A Journey of Spreading the Analytics Culture at Tate. Using the Tate Liverpool Alice in Wonderland exhibition as a test case, they described the analytics tools used (including Hootsuite, Adwords, Google Analytics, Facebook Insights, their ticketing system, and YouTube analytics), matrices, and reports built in response to the exhibition’s communication plan and areas of activity, both on- and offline. While the exhibition reporting was awe-inspiring in its quality and thoroughness, Tijana Tasich, Tate’s senior digital producer, admitted that more work, training, and resources are required to implement similar evaluations across the organization and its programs.

Epic fail
There’s much to learn from failed projects in our field, and #MW2012 used this as the topic for its closing session. Hats off to the project case studies that took the stage to reveal what didn’t work and why. Each project report included a round of bingo, with categories of failure occupying spaces on the card. Among them: poor organizational fit, must-be-invented-here syndrome, feature creep, tech in search of a problem, no user research, pleasing donors and funders, no local context, no backup plan, and not knowing when to say goodbye. Wifi was off during the session, forcing all of us to listen, learn, and not tweet specifics. Everyone should feel good after their time in the chair with therapist Wyman and his Labrador. We appreciate your honesty and hope we’re brave enough to take the stage at future conferences.

Best of Web Awards
The Walker was lucky enough to walk away with two awards for the redesign of our website (best in the category of Innovation/Experimental and best Overall). We are honored to receive the recognition of our peers and humbled to be in the company of so many excellent projects. The full slate of winners is available here.

Honeybees and Confetti Drops: Having Fun with Web Design

We’re a serious bunch at the Walker Art Center, except when we aren’t. Cat breaks have made their way into Art News from Elsewhere, and we’ve tucked in a few Easter eggs for fans of these hidden amusements. Our new site includes a confetti drop that appears when you click on Parties & Special Events in the calendar. And for those who find their way to a place that they shouldn’t, there’s a custom 404 page. God forbid there’s a server crash, we’ll send you to a page featuring Charles Ray’s Unpainted Sculpture.

Last week Eric added accumulating bees to the Lifelike exhibition page. The longer you stay on the page, the larger the swarm.


For those of you hoping to attract a few bees of your own, here’s Eric’s script.

Continuous Deployment with Fabric

[Image: textured brown fabric]
We have been using Fabric to deploy changes to walkerart.org. Fabric is a library that enables a string of commands to be run on multiple servers. Though similar things could be done with shell scripts, we enjoy staying in one language as much as possible. In Fabric, strings are composed and sent to remote servers as commands over an SSH connection. Our Fabric scripts have evolved alongside the project, following one mentality: “If you know you are going to be doing something more than twice, script it!”

With Fabric we can tailor our deployments precisely. We deploy often with one of two commands:
[cci]fab production simple_deploy[/cci] or [cci]fab production deploy[/cci].
[cci]simple_deploy[/cci] simply pulls new code from the repo and restarts the web server.
[cci]deploy[/cci] does many things, each of which can be executed independently, and is explained below.
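For context, here is roughly what those two tasks can look like in Fabric’s 1.x style (a minimal sketch – the host name, project path, and restart mechanism are placeholders, not our real configuration):

[cc lang="python"]
from fabric.api import cd, env, run

def production():
    """Point subsequent tasks at the production server."""
    env.hosts = ['www.example.org']   # placeholder host
    env.project = '/srv/walkerart'    # placeholder project path

def simple_deploy():
    """Pull new code from the repo and restart the web server."""
    with cd(env.project):
        run('git pull')
        run('supervisorctl restart gunicorn')  # assumed restart mechanism
[/cc]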

The scripts we run go both ways: code goes up to the server, and data comes back to the workstation. We have [cci]fab sync_with_production[/cci], which pulls the database and images. The images arrive locally in a directory specified by an environment variable, or in a default directory. Conventional naming schemes – for the database name, for example – keep variables consistent across systems. Except for some development settings, our workstation environments are identical to the production environment, which means we can replicate a bug or feature locally and immediately.
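A sketch of what that task might look like (the dump commands, database name, and paths are assumptions – our real task is more involved):

[cc lang="python"]
import os
from fabric.api import env, get, local, run

def sync_with_production():
    """Pull the production database and images down to the workstation."""
    # Dump the database remotely, fetch it, and load it locally.
    run('pg_dump walkerart > /tmp/walkerart.sql')
    get('/tmp/walkerart.sql', '/tmp/walkerart.sql')
    local('psql walkerart < /tmp/walkerart.sql')

    # Images land in a directory set by an environment variable, or a default.
    media_dir = os.environ.get('WALKER_MEDIA_DIR', './media')
    local('rsync -avz %s:/srv/walkerart/media/ %s' % (env.hosts[0], media_dir))
[/cc]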

We have been collecting all of the commands we normally run on the servers into our fabfile, and we can group them by calling tasks from other tasks. Our deployment consists of 12 tasks. With this Fabric setup, one can deploy to the production or staging server with one command:
[cci]fab production deploy[/cci].

This makes it incredibly simple to put code written on developer workstations into production in as safe and secure a way as possible. Here is our deployment in Fabric:

[cc lang="python"]
def deploy():
    # Move into the project directory on the remote server.
    with cd(env.project):
        run('git pull')
    get_settings()
    install_requirements()
    production_templates()
    # Stop the task queue while code and schema change underneath it.
    celerybeat('stop')
    celeryd('stop')
    synccompress()
    migrate()
    gunicorn('restart')
    celeryd('start')
    celerybeat('start')
[/cc]
First, the “with” blocks put us onto the remote server, into the right directory, and within Python’s virtual environment. From there, “git pull” gets the new code, which contains the settings files, and “get_settings” copies any new settings into place. The task called “install_requirements” calls on pip to validate our virtual environment’s packages against the requirements file. All third-party packages are locked to a version so we aren’t surprised by new “features” that have adverse effects. We use celery to harvest data from other sites, so we make sure its workers are restarted with fresh config files. The task “synccompress” does our compressing of CSS and JS, “migrate” alters the database per our migration files, and gunicorn is the program that runs Django.
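As an example of those nested “with” blocks, here is roughly how a task like install_requirements can be written (a sketch – the virtualenv path and file name are placeholders):

[cc lang="python"]
from fabric.api import cd, env, prefix, run

def install_requirements():
    """Validate the virtualenv's packages against the pinned requirements file."""
    with cd(env.project):
        # Activate the project's virtualenv for the duration of the command.
        with prefix('source /srv/walkerart/env/bin/activate'):  # placeholder path
            run('pip install -r requirements.txt')
[/cc]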

It takes about 60 seconds for a new version of the website to get into production. From there it takes 0–10 minutes for the memcached values to expire before the public changes are visible. We are deploying continuously, so watch closely for updates!