My team and I were tasked with creating an app for the Connected by Design exhibit at Cooper Hewitt, Smithsonian Design Museum. The exhibit is a showcase of R/GA founder and CEO Bob Greenberg’s personal collection of technological innovations, spanning from the early 1900s to the present.
1. User needs: Provide value to exhibit visitors beyond the information available on the exhibit’s walls and in the printed brochure.
2. Client needs: Create an app with a distinguishing feature—one that can set it apart from what is available in the museum landscape.
I led the app’s design alongside a creative director, establishing the experience and look and feel of the app, and presented it to internal leadership and developers.
Museum visitors naturally take pictures of artwork that moves or interests them.
What if a picture of an object in the exhibit could reveal additional information, media, and opinions from design experts about that object?
Our initial approach was to create an experience centered on an AI camera that could recognize objects in the exhibit’s collection from the photos visitors take.
The primary value of this approach is in keeping users engaged with the objects and the exhibition space. In a conventional approach, whereby artwork is assigned audio-tour numbers, the user temporarily disengages from what is physically around them. With our AI camera approach, the app simply functions as a lens through which the user accesses media and expert opinions.
We wanted the experience of capturing an object to be very simple and focused but also reflective of the camera’s intelligence. The user is directed to place the object of interest within the camera’s sight and press the shutter button once the object is registered. Cues around and above the shutter button act as subtle indicators of the AI camera’s identification progress.
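The flow above (place the object in the camera’s sight, watch the cues, press the shutter once the object is registered) can be sketched in a few lines. This is an illustrative sketch only, not the app’s actual recognition pipeline: the recognizer, labels, and `REGISTER_THRESHOLD` are all hypothetical, and the idea is simply that the shutter cues are driven by the recognizer’s confidence.

```python
# Hypothetical sketch: we assume a recognizer that returns (label, confidence)
# pairs per camera frame, and gate the shutter on a confidence threshold,
# mirroring the identification-progress cues around the shutter button.

REGISTER_THRESHOLD = 0.85  # assumed confidence needed before the shutter enables


def shutter_state(predictions):
    """Map recognizer output to a UI state for the shutter cues.

    predictions: list of (label, confidence) tuples for the current frame.
    Returns (state, best_label) where state is 'searching', 'identifying',
    or 'registered'.
    """
    if not predictions:
        return ("searching", None)        # no candidate yet; cues stay idle
    label, confidence = max(predictions, key=lambda p: p[1])
    if confidence >= REGISTER_THRESHOLD:
        return ("registered", label)      # shutter enabled; cues confirm the match
    return ("identifying", label)         # cues animate to show progress


# Example frames as a visitor steadies the camera on an object:
print(shutter_state([]))                                  # ('searching', None)
print(shutter_state([("radio", 0.42), ("clock", 0.31)]))  # ('identifying', 'radio')
print(shutter_state([("radio", 0.91)]))                   # ('registered', 'radio')
```

In practice the confidence signal would arrive continuously from the live camera feed, which is what lets the cues animate smoothly rather than flip between discrete states.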
Once the user takes a picture of an object, we wanted to reinforce the sense that they’ve captured it. We achieved this by treating object images as floating squares. Pushing and pulling these squares revealed information or interactivity; as a result, motion was vital to the success of these designs.
Initially, we wanted the app experience to be very simple. The landing experience was the camera. Swiping right would reveal the exhibit information and swiping left would reveal a catalog of the collection.
After presenting the work to stakeholders and testing our prototype around the office, a consistent question arose: what if I just want to hit play and be guided? Our existing paradigm was biased towards a specific behavior: exploration. It was very likely that a visitor would prefer a conventionally linear experience of the exhibit, starting an audio tour and being taken from object to object. So we explored.
After iterating and testing our ideas, we concluded that these two ways of experiencing the exhibit should be treated as equal options:
1. A guided experience
2. A free-form experience
On landing, we decided to present the user with the option of experiencing the exhibit either through a guided audio tour or through the AI camera. Once they select their preference, the option they did not choose collapses and is minimized yet remains available (it is not buried within a menu, for example).
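The landing behavior reduces to a small piece of state: one mode is active, the other is minimized but never hidden. A minimal sketch, with hypothetical names for the two modes:

```python
# Illustrative sketch only: models the landing choice between the guided
# audio tour and the AI camera. "tour" and "camera" are assumed names;
# the point is that the unchosen mode is minimized, not buried in a menu.

from dataclasses import dataclass


@dataclass
class LandingState:
    active: str     # the mode the visitor selected
    minimized: str  # the other mode, collapsed but still one tap away


def choose_mode(selection: str) -> LandingState:
    if selection not in ("tour", "camera"):
        raise ValueError("unknown mode")
    other = "camera" if selection == "tour" else "tour"
    return LandingState(active=selection, minimized=other)


state = choose_mode("camera")
print(state.active, state.minimized)  # camera tour
```

Because the minimized mode stays on screen, switching between guided and free-form is a single tap rather than a trip back through navigation.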
As with the object detail experience, swiping up or down would reveal information or alternative ways to progress through the audio.