TL;DR: Originally written by Product Designer and prototyper Mat Sanders upon its internal release, the following is a deep dive into the thinking, process, and research that went into experiment codename: Ally-oop. And, since every story deserves a happy ending, we reveal what our little music toy looks like today, as it hits the App Store for the first time.
Experiment Codename: Ally-Oop
Projects often start with the team spending two weeks orienting themselves within their problem space. How can a team instead structure its work with a bias towards action, focusing on learning through building rather than learning and then building?
Back in November of 2015, Ally-oop was an internal project where we experimented with our approach towards product design.
In this project we made the intentional decision to move away from upfront planning and artifacts-as-deliverables, towards just-in-time decision making and a perspective where the product itself is the deliverable.
With a team of three generalist designers, in three weeks we went from a literal blank slate to internally releasing an iOS app that makes music making collaborative, fun, and accessible.
The first step we took was to identify the outcomes we wanted from the end of the project. As a group we’d already come together around the high level idea of doing something involving sound or music. Here’s what we agreed on:
- We wanted something tangible that we could actually use, even if it was hacked together, rather than a deck of concepts.
- We wanted something that made music making fun.
- We wanted something that made music collaborative.
- We didn’t want something that was a professional tool.
These outcomes became the mantras that kept our decision making on track: fun, collaborative, non-professional.
Finally we discussed personal or professional goals that we hoped to achieve from this project.
Next we pitched ideas to each other and picked the single idea that we thought offered the best opportunities for our project and personal goals.
We also had a meta-discussion about the structure of the first two weeks (the next nine days): whether to time-box explorations of three ideas (three days each) or focus on a single concept, and whether to build a native iOS prototype (which I had more experience with) or an HTML5 prototype (which Pia had more experience with).
Planning The First Iteration
Our concept was quite complex: multiple instruments, instrument design, streaming & merging of instruments, and visualization of the merged tracks were all components of the initial concept that was pitched.
A traditional approach would treat this initial concept as the final outcome: an early step would be a top-down sketch of a high-level view of the entire experience to see how everything fit together, followed by breaking that design into component parts for detailed exploration and refinement.
We wanted to avoid this approach because we could easily spend the entire project on this activity alone, and have nothing tangible to show for it. Instead, we decided on a bottom up approach where we’d build something simple, and one step at a time decide what to do next.
Sketching out ideas for what a first iteration might look like, we aimed to capture not only the essence of our concept, but also something that would be simple to build.
This exercise stripped away many components; we ended up with a simple key-based instrument (like a piano) that, instead of playing a note, would play a short music file.
The team divided up the work: Rimar would create or collect four music files, Pia would start looking at how to lay out the four keys, and I would figure out how to play a music file on an iOS device.
The project was starting to feel quite different from other projects. It was interesting that we were already prioritizing the sourcing of tangible content (music files), even if it was only to be used as a placeholder, and (for the time being) not worrying about the big picture of what we were building.
The First Iteration
The simplicity of our initial concept meant that we had a working prototype before lunch. Rimar had sourced some sound samples from an earlier project, Pia had reused a color palette from our internal brand guidelines, and I had cobbled together a simple prototype that played the samples.
With something tangible in our hands to react to, three things were obvious:
- There was a short, but obvious delay between tapping a key and hearing the sample;
- There was no visual feedback when tapping a key; and
- The sound files that we’d sourced from an earlier project just didn’t sound good.
Rimar went off to create some more interesting samples to use. We decided that we wanted to quickly explore both some more traditional instrument sounds (like piano keys) and also more quirky ideas (like snippets from Dolly Parton songs).
Pia and I quickly added a highlight to show when a key was tapped, and investigated the delay more.
We quickly discovered that the default trigger for a button event fires when the touch ends; we changed the prototype to play the sample as soon as the touch was first detected.
This was an eye-opening moment for me because on previous projects, where design and development sprints are staggered, I’d been frustrated when small but important details (like when a sample should be triggered) were overlooked in documentation and, through no fault of the engineering teams, implemented inconsistently.
The alternative was obvious: don’t try to document to precision how something should work; instead, shorten the time it takes to complete a design/development cycle and add what you’d overlooked in the next iteration.
This act felt like an important step in the project because despite the simplicity of the prototype, it was magical to have something running on all our phones, and felt like we’d built both momentum, and ownership of what we were building.
It also set the expectation for how we’d work on this project: build something, use it, react to what we’d made, and figure out the next step to take.
The First Collaboration
Another benefit of our prototype-first approach, one that wasn’t obvious until we’d started, was that the prototype could be used to bootstrap the exploration of future features and to get continuous, ongoing feedback from a wider group of people.
Once our prototype was updated with custom-built samples, and we’d fixed the latency between hitting a key and hearing a sample, we set up our first collaboration.
Although we ultimately wanted to merge the output from each device and play it from a common device (e.g. an Apple TV), we could still simulate this experience simply by turning the volume up to max and playing together, which we did for our first collaboration with some people recruited from outside the project team.
The feedback from this first collaboration was so useful that we made it a regular routine of our day, with an early-morning team jam using the latest prototype and ongoing evaluations from people within the studio.
Another benefit of this approach was the use of our prototype across a range of device sizes, from an iPhone 4 to the latest iPads. This constant use of the prototype across different devices helped us shift our mindset away from designing for a fixed viewport to thinking about adaptive layouts.
Team Tools & Habits
We started each day with the habit of a morning jam session where we collaborated with the latest prototype. Then, with a fresh experience in mind, we discussed ideas about the direction we wanted to go in the future, and the features that would be needed to support that direction.
Every idea was written on a post-it note and added to our backlog, which was simply a big piece of paper with ideas loosely ordered: easier ideas at the top, harder ideas near the bottom.
Then we made two decisions. First: what was the next major feature we would work on today? As the team prototyper I had some idea of how long a feature might take, but we chose each next feature with the understanding that it might take the entire rest of the project to implement.
Secondly, we talked about what sort of mini changes we could make that would only take a short amount of time to complete (for example updating or adding sound files, or simple UI layout updates).
The rest of the day was spent working towards those major changes, and we ended the day with another jam session trying out any new features that we’d added.
In addition to our ‘backlog and next step’, I also kept a team diary, which again was just a big sheet of paper where we noted the decisions or progress we’d made that day.
What's The Most Interesting Thing We Can Do Next?
We started each morning with a collaboration using the latest iteration of the prototype and asking ourselves “what’s the most interesting thing we can do next?”
In addition, I asked the team to frame this question with the assumption that whatever we decided to do could take up all the remaining time we had left on the project, so the next step we took could also be the last.
This was partially because as the team prototyper I didn’t have the ability to accurately estimate how long it would take to build something, but also because I wanted the team to make tough decisions about prioritizing.
Areas that we explored in our early prototypes included:
- Further decreasing the latency between tapping a key and hearing the sample play (which took us not only into a technical exploration of iOS’s Core Audio framework and a number of third-party libraries, but also into the physiology of hearing);
- The ability to shift the pitch of a sample that was played depending on the area of the key that you tapped; and
- Polyphonic sound, allowing us to play multiple samples at different pitches at the same time.
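The pitch-shift exploration can be sketched roughly: playing a sample back at a different rate shifts its pitch, and the tap position on a key can be mapped to a semitone offset. Here is a minimal, language-agnostic sketch in Python; the function names and the 12-semitone span are illustrative assumptions, not the app’s actual implementation:

```python
# Hypothetical sketch of pitch shifting by tap position. Playing a sample
# at rate 2 ** (semitones / 12) raises its pitch by that many semitones
# (and shortens playback proportionally).

def playback_rate(semitones: float) -> float:
    """Rate multiplier that shifts a sample's pitch by `semitones`."""
    return 2.0 ** (semitones / 12.0)

def semitones_for_tap(tap_y: float, key_height: float, span: int = 12) -> int:
    """Map the vertical tap position on a key to a pitch offset.

    `tap_y` is measured from the top of the key: tapping the top gives the
    highest offset, the bottom gives zero. `span` is an assumed range.
    """
    fraction = 1.0 - min(max(tap_y / key_height, 0.0), 1.0)
    return round(fraction * span)

# An octave (12 semitones) doubles the playback rate.
assert abs(playback_rate(12) - 2.0) < 1e-9
```

On iOS the actual resampling would be done by the audio engine; the point of the sketch is only the mapping from touch position to pitch.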
Reacting To Technology
An early step that we took to explore the possibility of musical collaboration was an investigation of iOS’s MultipeerConnectivity framework, which allows apps to create a shared peer-to-peer network rather than using a client-server model.
We explored this as an option first because we didn’t have access to a server environment, but we soon realized that this model could be a benefit and unlock the ability for collaborations to occur in places where internet wasn’t available (like on our daily commute in the NYC subway).
Although we initially wanted instruments merged in real time, we discovered that latency from wireless protocols would make this impossible (with some investigation, we discovered that our brains are hardwired to perceive even very small delays in aural input).
Instead of treating this limitation as a blocker, we used it as part of the solution and shifted the experience from real-time collaboration to near-real-time by creating short loops of samples.
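The loop-based workaround can be sketched as scheduling against a shared timeline: rather than streaming audio live (where wireless latency is audible), each device starts its loop at the next loop boundary of a clock all peers agreed on when they connected. This is a hypothetical reconstruction, not the app’s actual code; the names and the shared-epoch assumption are illustrative:

```python
# Hypothetical sketch of near-real-time loop syncing. All peers share a
# reference time (`shared_epoch`) agreed on when they connected; each device
# schedules its loop to begin at the next boundary of that shared timeline,
# so playback stays phase-aligned even though messages arrive late.

def next_loop_start(now: float, shared_epoch: float, loop_length: float) -> float:
    """Return the time of the next loop boundary strictly after `now`."""
    elapsed = now - shared_epoch
    loops_completed = int(elapsed // loop_length) + 1
    return shared_epoch + loops_completed * loop_length
```

Because every peer computes the same boundary independently, a loop received a few hundred milliseconds late still joins the song exactly on the next cycle.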
A Field Trip to Chinatown
Because of our bias towards action, we didn’t invest a lot of upfront time at the start of the project in either general research of existing experiences or benchmarking potential competitors.
That didn’t mean the team discounted the benefits of research; we instead deferred it until the time we felt it could add the most value. That time came when, as a team, we started to question our goal of making the collaborative music experience as fun as possible.
We discussed possible game mechanics built around enjoying music and, to understand them better, took a mid-day trip to a Chinatown arcade to investigate and play music-based video games.
While many of these games were fun, they were also not entirely satisfying. We left without a clear idea of what we wanted to do, but with a clearer idea of experiences that we wanted to avoid.
Until this point, the app had been an experience that occurred on a single screen.
The detection and connection of other nearby devices running the Ally-Oop app had been occurring automagically, but we knew that at some point we wanted to add specific UI so that people had more control over this step of the experience.
A recent addition, a toggle between a mode for adding a new loop and one for deleting a single loop, got us thinking about additional screens or modes that users could explore.
Originally, sound packs had been structured so that each had the same number of sounds. As we curated the packs, we realized it would be a lot more interesting (and easier) if they had different numbers of sounds, and if the keys were organized in whatever grid made the most sense for each pack.
So, one of our final steps was the hand curation of our sounds into sensible packs: figuring out how the sounds should be arranged and what the packs should be called.
Finally, we spent an afternoon creating a landing page for the prototype where people in the studio could side-load the prototype onto their device.
The Final Iteration
We’d been adding various animations as part of our daily ‘quick changes’ but for the final version we added a lot of refinements. Compared to the first iteration, we’d come a long way in a short time.
The final iteration had the following features:
- 12 packs, each containing between 2 and 20 samples created and curated for the app.
- A collaborative game-mechanic where a song is built up of 5 individual loops.
- Ability to connect with nearby people and sync loop playback with each other.
- An option to record loops ½, 1, 2, or 4 seconds long.
- Beat-rounding to lower the barrier of creating loops that play in sync.
- Visual feedback of the loop you’re creating and the ability to preview it before committing to the shared loop with other people.
- A custom-built loop engine with low-latency coordination of audio and visual events.
- Ability to delete individual loops, or the entire song.
- Visual feedback with subtle animations to lead people through the onboarding experience of creating their first loop.
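The beat-rounding feature listed above amounts to snapping the timing of a recorded loop to the nearest point on a tempo grid, so a slightly early or late recording still lands in sync. A small illustrative sketch; the tempo, subdivision, and names are assumptions, not the app’s actual engine code:

```python
# Hypothetical sketch of beat-rounding: quantize a time (in seconds) to the
# nearest subdivision of the beat, so loops recorded slightly off-grid still
# play in sync with the shared song.

def quantize(t: float, tempo_bpm: float = 120.0, subdivision: int = 2) -> float:
    """Round `t` to the nearest grid step (a `subdivision`-th of a beat)."""
    grid = 60.0 / tempo_bpm / subdivision  # seconds per grid step
    return round(t / grid) * grid
```

At 120 bpm with eighth-note resolution the grid step is 0.25 s, so a loop started 0.27 s into the bar snaps back to 0.25 s.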
This was our final iteration. We were done, but not finished; we had a large backlog with a mixture of UI, content and interaction refinements, some known bugs, and several ambitious major features.
1. The process of making something is an amazing way to learn.
Designers are typically perfectionists, and we’ve trained ourselves to gather as much upfront information as possible about a topic before we actually start building. While this is a sensible approach, it closes off a lot of opportunities for learning.
2. Framing our next step as ‘the next thing could be the last thing’ helped focus prioritization.
However, this perspective probably also biased our decisions towards solutions that were quicker or easier to build, and that ultimately shaped the direction we took.
For example, we de-prioritized some early concepts for non-key-based instruments and non-sample-based interactions for playing them. Either could have taken our app in a completely different direction.
3. Framing decisions as questions “What’s the most interesting thing we can do next? How can we make music more collaborative?” is a powerful technique.
Framing projects as questions was a powerful design tool in that it kept us focused on outcomes rather than outputs, and encouraged a mindset where we focused on “what’s the best we can do in this time?”, rather than “how long will it take to do this?”.
4. Using ‘technology as a medium’ takes you in directions you’d probably never imagine on your own.
The tools we use to make digital products have advanced rapidly in the last decade, and the ease with which proofs of concept can be made has greatly improved. We need to learn to take better advantage of this.
5. Not everyone is motivated by the same things. Some people get excited by seeing the progress we’ve made, others get excited about the possibility of where we might be going.
It’s important to understand what motivates a team, and to make sure that the energy from this motivation is directed in a useful direction.
6. Self-organizing still requires structure.
We took a naive, or perhaps too literal, approach to self-organizing teams and didn’t set up specific expectations for regular team habits and routines (like a regular morning standup).
7. We worked in a public space, but could have given more specific updates to the studio about our process and progress.
When working on an internal project, your stakeholders aren’t just your internal team, but also the wider company that you’re working in. Internal projects are great opportunities to explore ideas that we’re not ready to use on client projects, but if the wider company doesn’t have an opportunity to learn from the project, then a huge learning opportunity is lost.
Epilogue: One Year Later
Continuing the tradition (and slight addiction) of pushing boundaries and experimenting in our spare time, we really just couldn’t let Ally-oop rest on its internal laurels.
Over the past year (and some odd weeks), Ally-oop evolved, grew, lost team members, added new ones, and slowly, its design, persona, and backend began to take new shapes. All while never losing sight of its humble beginnings as an experiment in musical collaboration that any generation could enjoy.
Today, we’re excited to debut what we’ve re-dubbed Yoop to the world, now available for download on the App Store. Our intent is to expand this humble little experiment’s horizons once again by getting thoughts and feedback from the best group of user testers we know: You, our ustwo community.
A Fresh, Completely Adorable Face
Our team went through dozens of challenges over the past year, one of which was re-designing the collaboration element of Yoop and sprucing it up with a new coat of paint.
We went through various iterations, exploring differing mechanisms of co-play, and settled on an experience designed to leverage the fact that users would be in the same room together, communicating and assuming a natural order, like that of a drum circle. With that as our base, we needed a way for participants to tell each other and their contributions apart; Yoop uses our newly designed characters to convey who is who in the experience.
Focusing on approachability and on simply facilitating fun, Yoop took on a very playful, lighthearted design and product language, which inspired the characters throughout the app. We wanted to develop an aesthetic that could be appreciated by both younger and older audiences, bridging generational gaps through music.
Sound As An Everyday Medium
As a music app, Yoop’s intent is less focused on production and more on playing and experimentation. Its charm was in the prototype’s proposition that you don’t need to be a musician to make music – anything you do in Yoop, regardless of musical proficiency, can sound good.
We were inspired by the potential of making sound an accessible medium of expression, in the way that doodling or writing is a common, everyday activity. The goal for a user in Yoop isn’t to thoughtfully compose a structured song, but rather to freely and intuitively play with sounds. In creating a music-making app that didn’t require musical skill, the main challenge was reducing the chances for cacophony, so we designed features to ensure that anything a user does in Yoop sounds musical, whether they intend it to or not. The sounds themselves are all harmoniously compatible and designed in the same key, and the audio engine includes quantization, which keeps anything the user plays to a universal tempo – nothing can ever sound off-beat in Yoop.
The experience invites users to openly explore the possibilities of the sounds in the app in a stream-of-consciousness play format. Indirectly inspired by composer Steve Reich, we wanted to celebrate the transiency of musical moments. As users yoop, they add layers of sounds, the oldest layers leave, and the result is a quirky and lovely ephemeral tune.
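The ephemeral layering described above behaves like a fixed-size rolling buffer of loops: adding a new layer beyond the cap silently drops the oldest one. A tiny sketch of the idea (the class and method names are illustrative; the five-layer limit matches the app’s current release):

```python
from collections import deque

class Song:
    """Rolling set of loops: adding beyond the cap drops the oldest layer."""

    def __init__(self, max_layers: int = 5):
        # deque with maxlen evicts from the opposite end on append.
        self.layers = deque(maxlen=max_layers)

    def add(self, loop):
        self.layers.append(loop)  # the oldest layer silently falls away

song = Song()
for name in ["kick", "snare", "chord", "bass", "melody", "vocal"]:
    song.add(name)
# "kick" has aged out; only the five newest layers remain.
```

This is what makes each tune transient: nothing is ever saved, and every new contribution gradually replaces what came before.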
For help on creating the Yoop sound, we enlisted the talents of New York-based musician London O’Connor. London recently signed with the record label True Panther Sounds to officially release his debut album “O∆”, which has garnered acclaim from the likes of The FADER, Pitchfork, The Guardian, and BBC Radio 1. A friend of the New York studio, London instantly connected with the idea for Yoop, and set out to soundtrack Yoop’s brand and aesthetic. He created five sound packs for the app that users can play with – GLEEP, BUZ, BEAM, CLIF, WUP – all with their own style, each offering something a bit different to the experience.
Yoop: Here It Is
Enjoyable as it is irreverent, Yoop’s purpose is in its pointlessness. Allowing people to toy around with music in a social context, it makes the music process both approachable and ephemeral. In its current iteration, which releases today, no saving or exporting is included, and only five layers can be heard at a time. We don’t know what the future holds for our little experiment, but it has certainly come a long way, and we’re excited to get your feedback for continued iteration.
In the vein of experimentation and testing, our team would love to learn:
- What do you love (and on the flip side, dislike) about the experience of yoop'ing?
- What features would you like to see added in the future?
- What should we name our characters? (Yes, they definitely need names!)
- What kind of sounds should we add in our next pack?