Press

  • Creativity Is Not A 9-5 Job

    Originally published on the Kicker Studio blog. 

    Design is an applied art, where creativity is married to business. Creativity and business are not always particularly compatible. One stays up all night, bingeing on brownies, exhaling the stars, while the other wakes with the birds every morning and captures the flag. True creativity is reckless and manic. Good business is steady and secure. When Creativity meets Business, she shudders with loathing. When Business meets Creativity, she stifles an eye roll. How on earth are these guys ever going to get along?

    At Kicker we realize how important it is to combine our creative ideals with a functional business model. Let’s talk about the nature of both creativity and business, as we see it, and then discuss the Kicker methodology for combining the two for optimal success.

    Creativity is enchanting. We humans are drawn to it the way we’re attracted to fire or to kittens. For those of us with a penchant for making things, the process of creating often feels like folding fragile origami creatures whilst flailing around inside a brooding summer hurricane. It’s a dynamic process that takes hold of us, and if we’re lucky, trounces us over a wave of transcendence, eventually. The process entails much beautiful struggle and ultimately, surrender, as we crawl out from the melee, drenched and grateful, like a newborn dragon. As you can imagine, this whole inspired and torturous affair is not something one can perform on cue. That’s just not how the muse works. The Muse of Creativity is effervescent, temperamental and sly. You have to take her out on dates. Spend hours connecting with her, listening to her stories. You have to present her with gifts. Take her on drives. Dream with her.

    So then there’s the business of Business, the contracts, meetings, piles of documents, emails, spreadsheets and power calls that are inevitably necessary if you want to run a company. It’s all pretty standard stuff, with well-established protocols for success, that is, if you’re running a bank. The trouble comes when you try to ruin a creative company as if it were a bank. Sorry, I meant to say “run” a creative company, not “ruin,” but well, anyway… Banks are linear places, where 1+1 = 2 or you’ve got a problem. In a creative company where 1+1 = 2, you end up churning out cookie-cutter “creativity” that relies on the same solutions every time (read: bad design). Many design companies fall into this trap: we work with business, so we need to adapt creativity to the business process. These companies have very set document templates for expressing design to clients. The problem is that this reduces ideas down to a formula of filling in the blanks. Following the same formula every time gets you similar results every time. While that may be familiar and safe, it’s certainly not a recipe for innovation.

    Or maybe you fall prey to the work harder, work faster illusion, where you think that if you force yourself to just do it, the end result will be better. Any creative person knows that this mindset of forcing it fails. If you push too hard toward getting things done, the whole thing seizes up. That combination of creativity, deadlines and formulaic pressure unique to design attracts adrenaline junkies who, ironically, waste time spinning their wheels in the rut of an uninspired process whose outcome, like Project Runway, is quickly reduced to mind-numbing noise. In that set-up, there is no time for exploration and designers instead rely on instinct and a toolbox of tricks to push something out the door.

    Unfortunately, you can’t just wield creativity. It’s a relationship, and like most significant, worthwhile relationships, it ends up being a marathon, not a sprint. The amazing thing about creativity is that 1+1 > 2, meaning creativity is bigger than we are, and doesn’t care about linear confines. This is why artists live the way we do: wandering, exploring, making, and then sharing our art with anyone who cares to listen. The Creative Muse can’t always be scheduled to show up for business meetings when you need her, or relied upon to take lunch with the rest of the team. You have to yield to her, the way a flower bends toward the sun. If you can run your company so there’s time and space to bask in the sunshine of The Creative Muse, she’ll shine her magic on your work. At a creative company, this magic is at the center of everything. Without this creative magic happening, the company has no reason to exist, so all the contracts and emails and production meetings may as well leap into a black (and white) hole.

    So like we were saying, it seems that creativity and the standard business model just don’t mix very well, but here’s the thing… they must find a way. Flying to Neverland is amazing, but no company can run on fairy dust alone. Design is a business, after all. Clients are ultimately paying for ideas they can use, not just pretty daydreams, but concepts and products of utility. Employees need to be paid, deadlines need to be met, the outside world demands attention. All successful designers must find a way to collaborate with the gods of 9-5. Without some of the imposed structure these business gods provide, your company will eventually end up crashing down around you, like that house in the Buster Keaton movie.

    Creative teams want very much to do creative work. The question is: how do we build a good, strong house that will support our creative collaboration without it falling down around us? At Kicker we’ve tried various strategies, and over time we’ve learned how to keep our creative team bobbing and weaving, smiling and producing with integrity, while giving clients exactly what they ask for: great, innovative design that will grab attention in the marketplace.

    Support with Sea Legs

    At Kicker, our main goal is innovation. We don’t walk backwards, which is what happens when you design based on precedent. This forward-thinking approach requires a business model that puts creativity at the heart of it all. As designers, artists and thinkers, we don’t need a support team that’s constantly shutting us down. We need support that fosters and protects creativity. This is where the sea legs come in. Instead of a business model where the spreadsheet spreader’s fundamental function is to toe the line and tell the crazy artists “no”, we have a team that takes the journey together. It’s crucial to have business support that gives the team the ability to meet challenges head-on, with new eyes and a fearless, optimistic mind. Our support team, like our design team, thrives on real-time problem solving. They are the translators between business and creativity.

    Time is Fluid, So Pay Attention

    We have an artisan approach to design and that’s evident in our process. We make time to experiment and stare off into space because that is what creativity requires. Sometimes you have to go away from the work and foster new stories to share. Then, suddenly a brilliant idea seizes you, and you have to make it right now! The time-is-fluid concept runs completely contrary to the work-faster-work-harder idea. When you allow your team the time (and space) to ruminate, fertilize and germinate, you end up with many more great ideas to work with. Once you’ve got a bunch of inspired ideas, you can iterate on them pretty quickly. There are phases in the design process: the dream phase, the build phase, the refinement phase, and time behaves differently in each of these phases. If you trust and pay attention to how time works distinctly in each phase, you can nurture each specific part of the process properly.

    Reasonable and Consistent Backdrop to the Chaos

    We complement our artisan approach with a reasonable and consistent support system. We build production schedules so that Tuesday, Thursday and the weekends are focused on getting stuff done – the actual making. This time is not spent in justification or meetings; it’s spent lost in the dream of the creative process. It’s messy and chaotic, sometimes ugly. This is the absolutely necessary aspect of the work that’s often perceived as scary by business types. We know we’re not the first to honor the creative process by taking this approach. Google Prototyping X, for example, has inspired us by developing great tech in this exact way. But we’re not trying to scare anybody, so we have focus-days also, which creates a balance. Mondays and Wednesdays, and sometimes Fridays, we focus on meetings, logistics and all that’s necessary to run a company. We distill our work’s progress down into understandable and actionable documents, presentations, and meetings, because that’s what business requires.

    Allow for Solutions to Evolve

    Finally, we’re brave in our communication with each other. No need to cling to a potentially outmoded piece of the puzzle. If something feels off, we speak up and work through it. We allow for our practices to evolve, both over time and sometimes in relationship to a particular project. It’s important not to get too bogged down in notions of how things typically work, and instead to stay open to each new project as it unfolds. Some of the best business and design is the result of having the brains and the chutzpah to wildly improvise, so we’re mindful about creating a culture that fosters new ideas and laughs at fear.

    Nanu Nanu…

    Written by: Jody Medich & Wendy Rolon

  • Viva La Fidget

    Originally published on the Kicker Studio blog. 

    Humans run on energy. We have lots of it. Sometimes it comes out in quirky ways. We squirm around, tap our feet, twirl our pens… even when we’re tired and lethargic, we gulp down some coffee and KAPOW! we’re back to being our twiddling, jiggling, air guitar playing, fabulous, fidgeting selves. Fidgeting is entirely natural. We humans fidget to relieve stress and manage run-off energy. Worry beads, rosary beads and malas are all examples of this. They all provide a physical action that keeps us corporeally grounded, which is particularly comforting and ultimately a real boon for us human-types.

    What we find even more interesting is that for many of us, it’s actually easier to think deeply and stay focused when we have something to do with our hands, meaning that fidgeting helps us grapple with and process information. Sure, tapping your foot helps blow off some steam, but even better, it helps engage your thinking process by increasing your levels of dopamine and norepinephrine, two neurotransmitters that sharpen concentration. In other words, you actually think with your body. “Ago ergo cogito” – I act, therefore I think, meaning we’ve known about this body thinking stuff for so long that I can quote to you about it in a dead language, so let’s use this fact to our advantage, yes?

    There are a number of studies suggesting that, via embodied cognition, engaging the body helps the brain to process information. In one such study, researchers focused on kids learning math. In the past, students were given a multiplication table and told to memorize it. Today, more and more, kids are taught to use their fingers while learning math. In doing so, teachers find their students are far more likely to retain what they learn, not only because an embodied instructional method gives them a way to visualize the abstract, but also because the act of manipulating their fingers while conceptualizing new information encourages a deeper, more “full-bodied” understanding of the material.

    As you know, at Kicker we’re all about designing devices that celebrate and harness our body’s already fantastic functionality, so we want to capitalize on our natural inclination toward fidgeting, both to help you focus like a ninja, and also to continue our righteous campaign, providing alternatives to screen/keyboard interaction via the principles of embodied cognition. There are some innovations happening currently that are moving technology in what we think is an exciting direction. All kinds of small, useful, and wearable devices are popping up, like the Nike Fuelband and the Misfit Shine, which are physical activity tracking devices. Using a simple, glanceable UI that lives on your wrist, they provide sensor data about your movement through the world whilst capitalizing on your inherent need to fidget, and anyone will tell you that a good watch-inspired fidget is impossible to resist. The Fuelband and Shine are off to a good start.

    Another bunch of wearables that are popping up are the phone watches like the Metawatch Frame. In terms of UI, these wearable devices are basically no different than cell phones, only they’re mounted to your wrist and smushed down into a tiny screen. It’s a similar evolution to the one from pocket watch to wrist watch, the big idea being that it’s simpler to glance at your watch than it is to pull out your phone to use its apps. We applaud the idea of seamless glancing at a device that’s always at the ready, but unfortunately, it’s not so pleasant to run apps at 1/8 size. Unlike a pocket watch, a smartphone is a computer, and has a multiplicity of capabilities beyond telling me the time. I expect my smartwatch to be smart (read: not annoying to use), or I’ll stick with the original “dumb” one.

    Various flaws notwithstanding, these new devices are encouraging. However, they all still rely on a visual UI. Sometimes, in order to think at maximum throttle, your gaze needs to be drifting elsewhere: driving, walking, listening to a lecture, sitting in a meeting, whizzing by trees on a train. In these instances our wandering eye is actually helping us to focus by shutting out extraneous input, much in the same way white noise would, which is tantamount to a sort of audio fidgeting.

    I know how it is when you’re dreaming up big ideas. I notice your fidgeting. Sometimes you’re staring into a dust mote while flipping your pencil around and around. Sometimes you’re rubbing your forehead. Sometimes you bite your nails. I can’t tell you how many times I’ve sat through a lecture, distracted by the click click clicking of your pen, or the bobbing up and down of your foot, which moves in asynchronous time with the speaker’s voice. These variously employed fidgets are satisfying precisely because they provide a rhythm and a pathway to concentration. They block out mental noise and free up the senses for maximal sensory uptake, providing you with the cognitive expansion and momentum to focus your thinking optimally, allowing you to process information with all your senses.

    At Kicker, we’re into designing products that enhance the thinking process, make us smarter, quicker, more dimensionally intelligent. We want to design products that are not only novel in the way they’re worn, but also groundbreaking in terms of what they’re capable of doing, and how easily they’re able to get it done. It’s not enough to take an existing technology and call it new just because you’ve strapped it to my head. No sir, not hardly. What we need is the type of technology that seamlessly and elegantly gives the wearer dynamic control of her environment, by taking advantage of the body’s natural rhythms and propensities, without such reliance on visual UI. That’s what we’re working on here at Kicker.

    What if you could control your music playlist with a leg flex? Or how about if shaking a pen during a discussion could tag important content for later? What if pacing at the front of the room would automatically start your presentation? There are countless examples of how technology, working with our physical tendencies and taking advantage of how human bodies function naturally in the real world, can actually make us humans more super-powered.
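    As a thought experiment, the what-ifs above boil down to mapping recognized fidgets to device actions. Here is a minimal sketch of that idea; the gesture names, action strings, and dispatcher are all hypothetical placeholders, not a real API.

```python
# Hypothetical mapping from recognized fidgets/gestures to device actions.
# None of these names correspond to a real product or framework.
GESTURE_ACTIONS = {
    "leg_flex": "music.next_track",           # flex a leg -> skip song
    "pen_shake": "notes.tag_current_moment",  # shake a pen -> tag content
    "pacing_detected": "presentation.start",  # pace the room -> start slides
}

def handle_gesture(gesture: str) -> str:
    """Dispatch a recognized gesture to its mapped action; ignore the rest."""
    return GESTURE_ACTIONS.get(gesture, "no_op")

print(handle_gesture("pen_shake"))  # notes.tag_current_moment
print(handle_gesture("head_nod"))   # no_op
```

    The interesting design work, of course, is not the dispatch table but deciding which natural behaviors are reliable enough signals to hang an action on.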

    We like superheroes. We want to be more like those guys and yes, we already purchased the spandex tights, but beyond that, let’s say we re-think technology to create interfaces that maximize our powers, by working seamlessly with the behaviors, like fidgeting, that we’re inclined to do naturally.

    Written by Jody Medich & Wendy Rolon

  • Screens Make Me Sick - No Really

    Originally published on the Kicker Studio blog.

    Screens make me sick. Like, literally sick. I get very bad motion sickness. I can’t use technology in any moving vehicle, not even on a train. Every time I get on BART, I get really jealous that everyone else is able to retreat into their own little bubble. The only people who don’t seem to take advantage of this ability are the creepy stare-you-back-in-your-eyes types that I don’t want to have to deal with… at all. I’ve learned to stand up and offer the inside seat when my lack of looking at a device is mistaken for an invitation — especially when he’s homeless and smelly… The fact that I have no technology bubble is kind of depressing and sometimes, downright infuriating. I’m totally wasting my BART time staring at the bald spots of people who’re actually being happily productive while I flounder around like a loser. Something must be done about this.

    The reason I can’t have a bubble on BART is because the experience relies on my eyes focusing on a screen, which is not possible for me without the potential of introducing puke into the situation. What it comes down to is this: Some activities, like riding a bus or driving a car or crossing the street or a whole host of James Bond type of eventualities are just not compatible with looking at screens. At Kicker, we understand this, so we’re working on alternatives that can help people work easily and more efficiently in lots of different scenarios, with the goal of making them (meaning you!) more like Iron Man than one of those lesser superheroes whose main superpower involves staring at screens… wait a minute, there IS no superhero like that, is there? Yeah… exactly.

    We’re working on a couple of different wearable devices, and more and more, we’re understanding the importance of eyes-free interactions. The more mobile we are, the more important it is to be able to interact with technology through our other senses, not just me, with my merciless motion-sickness, but all of us who want to unlock our superhero potential. For example, we can prevent people walking into oncoming traffic or falling off cliffs while texting, or crashing their cars while reading emails, if we provide them with technology that relies on verbal and tactile interfaces instead of a screen.

    Studies show that people are very well versed in multi-tasking while listening to things, but not so much while looking at things. People who are heads-down, reading their screens, are immersed in a way that prevents multi-tasking, and since there’s no way we can avoid multi-tasking in this crazy beautiful world, we need to make it easier to do well.

    So we’re spending some time contemplating the significance and potential impact of eyes-free technology, and audio is a big part of that. However, we feel that ultimately, a strictly audio interface is less than ideal. For example, it’s difficult to edit text using an audio interface, and also, there are many situations where speaking out loud is just not practical. Voice interface is great in certain situations, but not ideal on a bus or any other time you want privacy. I don’t need everyone around me to know that I want to listen to Agatha Christie’s book on tape (not that I do, I was speaking hypothetically just now…).

    What some devices do is confirm a user’s voice commands with text that appears on the screen as the device hears it, so the user can quickly look to see what the device heard, and what it’s planning to do with the command. This is ok, but sometimes lands us in a situation where we’re cursing Siri to hell and back. I know, there’s a certain amount of satisfaction in that… huh? I mean, poor thing! …she’s only trying her best.

    But wait… We do have other senses besides vision and hearing, correct? Well at Kicker, we’ve been exploring ways to innovate beyond current audio/visual offerings, and we’ve discovered that there’s another, niftier approach we like even better than cursing Siri…

    Tactile interface combined with adequate spatial mapping of tasks can provide an effective non-visual method to navigate. As a woman, I often carry a purse, aka the black hole. To find something, I don’t open it and look inside, instead I stick my hand in, feel around and magically pull out what I want through a combination of touch, sound, and spatially relevant pockets and pouches. The thought process is something like this: that’s jangly — must be keys; that’s smooth and long — must be pens; then there’s the sparkle pouch vs. the vinyl pouch for telling the difference between lipstick and aspirin; my phone is in the front pocket vs. the inside ID pocket.

    Now, imagine if you could feel the difference between Wolfmother and Britney Spears on the device in your pocket. You would never have to exclaim to your music player and an entire busload of innocent people, that you (actually) want to listen to Britney Spears, but because you suffer from motion sickness, can’t secretly communicate this (sick) desire to your device, via screen, without barfing. See? You’d get the music you want, without barf on the seats, and your dignity would remain intact. I feel so much better already.
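    The purse trick above is really spatial mapping plus distinct tactile signatures. A minimal sketch of that pattern, with entirely made-up region names, content, and haptic labels:

```python
# Hypothetical spatial map for an eyes-free tactile UI: each region of a
# touch surface holds content and answers with a distinct haptic texture,
# so you can "feel" a selection without looking. All names are invented.
REGIONS = {
    "top_left":  {"content": "rock_playlist", "haptic": "coarse_buzz"},
    "top_right": {"content": "pop_playlist",  "haptic": "smooth_pulse"},
    "bottom":    {"content": "voicemail",     "haptic": "double_tap"},
}

def touch(region: str):
    """Return (content, haptic feedback) for a touched region."""
    slot = REGIONS.get(region)
    if slot is None:
        return ("none", "none")
    return (slot["content"], slot["haptic"])

print(touch("top_left"))  # ('rock_playlist', 'coarse_buzz')
```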

    Anyway, what all this means is that we’re working on designing a device that can be controlled by touch AND audio, in combination, with the ability to switch back and forth, depending on the specific situation, and also, another device that utilizes tactile interface and spatial mapping, because sometimes you’re gonna need that. Sound good? Of course it does.
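    One rough way to picture the touch-and-audio switching described above is a small rule that picks a modality from context. The context flags and the rules themselves are assumptions for illustration only, not how any shipping device decides.

```python
# Hypothetical modality chooser: touch when privacy matters and a hand is
# free, audio when eyes or hands are busy (e.g. driving). Illustrative only.
def pick_modality(in_public: bool, hands_busy: bool, eyes_busy: bool) -> str:
    if in_public and not hands_busy:
        return "touch"   # silent and private, e.g. on the bus
    if eyes_busy or hands_busy:
        return "audio"   # eyes-free and hands-free, e.g. driving
    return "touch"

print(pick_modality(in_public=False, hands_busy=True, eyes_busy=True))  # audio
```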

    We’ll keep you updated as we finalize the results of these concepts. We are currently in the prototyping stages and look forward to sharing the details with you soon.

  • What I Learned About Technology From My Dog

    Originally published on the Kicker Studio blog. 

    Dogs understand, communicate and serve like any good robot should. By observing my dog, I came to appreciate how to design interfaces that could properly communicate with people. Dog is man’s best friend, right? That’s because they listen to us and respond appropriately, especially when we’re feeling stressed, down, or otherwise screwed up… hell, they corroborate our very existence with their penetrating eye contact and tail wagging. Here’s a bit about my dog.

    • Expresses empathy: He has eyebrows. This is important. He is able to mimic my mood and let me feel as if he understands my emotional state. When I’m happy and excited, he jumps around, happy and excited. When I’m sad, his face mimics mine and he tries his best to cuddle up next to me and give me a bunch of dog kisses. When I’m mad at him, he shows he’s sorry and listens very closely to my instructions. When I’m scared, he jumps to attention and barks at anything that moves. He understands my emotions and responds with empathy to me.

    • Pays attention to context: When I start getting dressed, my dog gets upset. He knows this means I’m heading to work, and right away, he starts pleading his case to come along with me. He reads my steps to getting ready to leave the house as an indication of my intentions, and works to let me know he understands what’s happening. However, if I do these steps individually, at other times of the day, or not at all, my dog does not run around the house dramatically, letting me know he’s extremely interested in accompanying me somewhere. He understands the context of my actions, and emphatically (hysterically?) communicates possible functions in that context. If I’m outside and something scares me, he stays by my side and barks like crazy, but he doesn’t do that if we just go outside. He infers what his behavior should be based on what is happening around him… around us.

    • Reads my gestures even better than my words: And he does best when I use voice with gesture. I can communicate with my dog without making a sound, using simple gestures. The more I repeat them, the more he learns what I mean when I do them. He lets me know he understands by watching my gestures with his eyes, and then carrying out my request.

    • Productive feedback: Also really important is that if he doesn’t understand, he communicates this by tilting his head in a questioning manner. It’s just as important that he communicates when he doesn’t understand as when he does. Otherwise, it’s just frustrating. He’s honest. He doesn’t try to bullshit me by pretending he understands while he’s really just running on some incredibly irrelevant script, no way! He’s the real deal. He feels me.

    So see? My dog and I have quite the functional relationship happening. Clearly, my dog’s got my back. He aims to please. We’ve got it goin’ on. We understand one another. Well, what if I could train my devices to learn my commands, gestures and feeling states? What if my devices and I could get along as well as my dog and I do?

    It turns out that we can actually do this now with technology. We can design technology that responds just the way a dog does.

    Check this out:

    • Voice Technology: Emotion detection (or affective computing) technology exists. Acknowledging a user’s emotional state will vastly improve his or her functioning, according to various studies about automotive safety and voice emotion recognition. Smile/frown recognition is already an available feature on many devices made by companies like Samsung and Microsoft. If I’m angry, put a little empathy in the voice (not too much — because that’s annoying), just a little cream and sugar. It might actually stop me from spiking my phone on the pavement.
    • Context: The Google interface does a good job of recognizing my most relevant data, based on my calendar and emails. It infers what information I will most likely need in any given moment. Furthering this context-based inference making ability will mean the creation of interfaces that recognize our patterns/needs and offer productive feedback, in the form of applicable options/solutions that feel authentically helpful instead of disconnected and, well… infuriating.
    • Gesture: There are any number of gesture recognition products on the market. The majority are used for novelty, like in immersive video games, but they’re increasingly being used for function. Here are just a few cool examples: The lift-gate on the Ford C-Max Minivan uses gesture technology to provide a hands-free experience. It “sees” that I have my arms full and opens the lift-gate automatically. Gesture tech is also currently being used to assist stroke victims via robot-patient rehabilitation, and there’s even gesture recognition software that can transcribe sign language symbols into text.
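    To give a flavor of how the very simplest of these systems can work, here is a toy threshold-based “shake” detector. Real gesture products use trained models and sensor fusion; the threshold and peak count below are arbitrary illustrative values.

```python
# Toy gesture detection: flag a "shake" when the accelerometer magnitude
# crosses a threshold enough times within a window of samples.
def detect_shake(samples, threshold=2.5, min_peaks=3):
    """samples: accelerometer magnitudes (in g). True if a shake is seen."""
    peaks = sum(1 for s in samples if s > threshold)
    return peaks >= min_peaks

calm  = [1.0, 1.1, 0.9, 1.0, 1.2]        # resting in a pocket
shake = [1.0, 3.1, 0.8, 3.4, 2.9, 1.1]   # vigorous wiggling
print(detect_shake(calm))   # False
print(detect_shake(shake))  # True
```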

    In Conclusion…

    My dog is an amazing companion. We spend lots of time together, and our interactions have revolutionized my idea of what’s possible in terms of personal devices, especially regarding how they can better serve me, with enhanced context awareness, gesture recognition and productive feedback, which is great because I probably spend about as much time with my devices as I do with my dog. Jeez… did I just say that? Well, yeah I did, and if that means we live in a world that relies too much on devices… well, I’m not even going there, except to say that I want my devices to be devoted to my happiness, just like my dog. That’s the mission… at Kicker we’re all about providing that extra kick that’ll make the world a happier place.

    Written by Jody Medich and Wendy Rolon

  • I Hate Technology

    Originally published on the Kicker Studio blog. 

    I hate technology. Seriously, nothing can make me as angry as technology can, because technology doesn’t care about me. I mean this literally. It has no emotion, so it can’t care. What’s worse is it can’t even fake it properly. It’s totally oblivious! It doesn’t empathize with me and doesn’t respond appropriately for the situation, not even if I’m finding out I have a deadly disease. “Sorry, I don’t understand hemorrhaging. Please try again.”

    In these moments, when I’m losing my shit, technology needs to talk calmly and clearly. Instead, it blithely repeats every irrelevant option it always offers, which makes as much sense as these ridiculous Americans who speak English in a really loud voice to foreigners, thinking that if they just yell, they’ll be understood… but at least you can smack those people. Technology you can’t even smack without wasting a lot of money because now you’ve gone and broken the damn thing.

    When someone is stressed, they lose their ability to focus and understand what is happening around them. In these moments, the sympathetic nervous system kicks in and the brain goes myopic on us, shutting out choices and scrambling all sensory input, except for the info you need for survival. Blood boils, eyes pop out of heads, breathing gets shallow and in the mind’s eye, a team of guys in track suits start jumping up and down, screaming “Run! Run for your lives!”

    Technology couldn’t care less about us humans and our stressed out lives. This is made all the more infuriating when it’s an NUI technology – which is designed so that we can relate to it the way we do living things. Here, the betrayal is twice as bad, because NUI technology should know better, for god’s sake. Voice is a great example because it sort of sounds cognizant, suggests embodiment, and reminds us of ourselves; therefore, it’s that much more egregious a crime when voice technology fails us utterly, kicks us in the nuts when we’re already down, and doesn’t even realize it’s doing anything wrong.

    Humans respond to a person in stress empathetically. If someone is frantically asking us for help, we try to calm them and we ask them simple questions. But voice interface is not like that. At all. Turns out, voice interface is like the Honey Badger.

    Personally, I’ve been excited about voice interface for a long time. I get very bad motion sickness and can’t use technology in any moving vehicle, not even if I’m a passenger. Looking down causes me to be immediately ill. So I was really rooting for my new voice interface on my fancy new Android phone.

    The other day, I was trying to pick up a friend of mine at the airport. Traffic was crazy on the freeway and I needed to let her know I would be there a few minutes later than we’d planned. I can’t text and drive, so I decided to use my voice interface. I double pressed the button to wake up the voice and I said “Text Nora I’m on my way”. I waited a long time. Nothing happened. I looked down and saw the words “Network Error”.

    I did a bit of deep breathing, crossed my fingers and repeated the process again. This time the voice heard me, but couldn’t find the contact. Nora is a frequently used contact, so this made no sense. I asked again. Again, it insisted that there was no Nora in the contacts. It even spelled Nora correctly when it informed me of this. My grip on the steering wheel tightened. Ok, so now you’re just f**king with me, right? I pulled over, found the contact by hand, and used the voice feature again, to send a text, and it worked. Ok. Fine. Voice interface had some flaws, but, once I found the contact info manually, thereby giving it visual parameters, it worked, basically… sort of.

    Much later, I was driving back from a client meeting and had some ideas I really wanted to get down. For years, I’ve desperately wanted a voice-to-text service that I could use in the car. I typically get ideas while driving and have always wished I could just voice-text them to myself. So here we go! I activated the voice interface and said “record voice note”. This is what happened: For about 45 minutes, every time I woke up the voice interface, it would ask me “Who would you like to message?”. I could not get it to cancel. I could not get it to change the topic. It ignored requests for help, that useless scum. In fact, that incorrigible shit ignored every attempt I made to get out of that mode, and who even knows what “that mode” was?! I even restarted the phone. I was crawling in traffic fantasizing about murder.

    At the end of this exchange, I was in tears. Frustrated tears, staving off road rage, and stuck next to a police officer, so no looking down. Jesus, I really needed to take down those notes, and at this point I can’t for the life of me remember what revelatory information I wanted so badly to record. Instead, I came home and wrote this post.

    So yeah, my voice interface and I are still in a fight. I mean, imagine the scenario I just described as a conversation with someone sitting with you in the car. That idiot would’ve gotten a swift kick in the ass and been jettisoned to the curb. Game over. You think you’re funny, stupid voice? Well hahaha. Looks like you’ll be walking home, buddy.

    So like I said, I hate technology, and here’s the challenge: how about we design voice technology that actually works the way a conversation does? How about voice technology that would be responsive not only to my words, but also to my tone and the context of our conversation, the same way a person would?

    Studies have shown that when people interact with a non-empathetic voice interface while driving, especially if they’re in a heightened emotional state of either happiness or distress, they’re twice as likely to crash their car. This is just awful. We might as well be texting while driving!

    However, the good news is that emotion-detection technology actually exists, and it’s quite good at detecting emotion at either end of the happy-to-upset continuum. Google is proving that it’s possible to design emotion- and context-sensitive voice technology, based on research showing that humans really do behave in fairly reliable patterns. This makes it possible for voice technology to provide relevant data to users when they’re most likely to want it. Adding this type of predictive/contextual analysis to voice interfaces will make using them far more worthwhile.

    What’s more, we can train this same voice technology to respond to our specific tone and cues, making it customizable by user. We can now say to our devices, “Learn my language.” Chances are, I make the same request the same way, every time.

    Learn it. Big dog did it. You can too.
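    To make the idea concrete, here’s a minimal sketch of the two behaviors described above: softening responses when an external emotion classifier reports the user is upset, and learning a user’s recurring phrasings so the same request maps to the same intent every time. The class and method names are hypothetical, not any real assistant’s API, and the emotion label is assumed to come from an outside detector.

```python
class VoiceAssistant:
    """Toy sketch of an emotion-aware, user-trainable voice interface."""

    def __init__(self):
        # Learned mapping from a user's exact phrasing to an intent.
        self.learned_phrases = {}

    def teach(self, phrase, intent):
        """'Learn my language': remember how this user says things."""
        self.learned_phrases[phrase.lower()] = intent

    def respond(self, phrase, emotion="neutral"):
        intent = self.learned_phrases.get(phrase.lower(), "unknown")
        if intent == "unknown":
            # Never trap the user in a mode: always offer a way out.
            return "I didn't catch that. Say 'cancel' to start over."
        if emotion == "upset":
            # Empathetic, low-friction response for a stressed driver.
            return f"On it: {intent}. Nothing else needed from you."
        return f"Okay, {intent}. Anything else?"


assistant = VoiceAssistant()
assistant.teach("text Nora I'm on my way", "send text to Nora")
print(assistant.respond("text Nora I'm on my way", emotion="upset"))
```

    Crucially, the unknown-intent path always offers an explicit escape phrase, so the user is never stuck in a mode they can’t cancel.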

    Written by Jody Medich & Wendy Rolon

  • Tactile Interface of Driving a Car

    Originally published on the Kicker Studio blog. 

    Tactile interface is not something new. We have instinctively built tools on that premise for thousands of years: tools made to fit and attract the human hand that can, and should, be operated without looking. We have traditionally produced tactile interface through mechanical means such as knobs, handles and shiny physical buttons. I like to think about that when trying to create tactile interface.

    The tactile interface of a car.

    For example, I drive my car without looking at the controls. That is because it is focused on tactile interface. Cars are built with buttons to push, pedals to depress, levers to flip, and a wheel to turn. The steering wheel not only has indications of how it should be operated and where my hands should go, but through it, I can also feel the road surface. Secondary but important, frequently needed controls, like turn signals and wiper blades, are given significant tactile presence. These are usually levers that I can easily locate without ever taking my eyes off the road. In these instances, their feedback is also tactile: up (with a click) is always right, down (with a click) is always left. I know by the clicks that I’ve twisted the wiper knob once for slow and three times for fast.

    Even our brakes are designed for tactile interface. The brake pedal is on the left and is wide and short, versus the gas pedal on the right, which is skinny and tall. If I take my feet off the pedals, I can quickly tell the difference between them, without looking, by their placement and how they feel under my feet. A driver knows when a car’s anti-lock brakes are engaged because the pedal does a little pump action that the driver can feel in his foot. This is because traditional brakes required the driver to pump the brake with his foot. That action prevented the brake from locking up and causing the car to skid. Anti-lock brakes, which became widespread in the 1980s, do this pumping automatically for the driver. However, if the driver pumps an anti-lock brake system the way he would traditional brakes, he interferes with the system and lengthens the stopping distance. So, when cars started to include anti-lock brakes, the manufacturers made the brake pedal physically communicate to the driver that the anti-lock brakes were already doing the pumping. Now, when the brakes engage, the pedal does a slight stutter, telegraphing up the driver’s leg that the brakes are working.

    Early car stereos are also great examples of tactile interface. For the most part, they were designed to be operated eyes-free. One could clearly feel the buttons, and there weren’t too many of them. Turning the seek dial would scroll through the stations, playing an audio snippet of each one. Most cars made today have moved the most essential stereo operations onto the steering wheel as mechanical buttons to facilitate simpler, more natural tactile interactions. Drivers don’t even have to move their hands from the wheel; song selection and volume adjustment are just a thumb press away.

    The tactile interface of a car has always allowed us to operate it in an easy and intuitive way which allows us to focus our eyes on the road while driving. How can we take these lessons and incorporate them into digital devices to make it easier, and more natural, to interact with technology?

    Learnings from the Tactile Interface of Driving a Car

    1. In heads-up operation, place and group objects consistently so that the user can instinctively reach to the right area for a given task.
    2. Provide a tactile hierarchy. For example: the hazard lights are bigger than the radio buttons, which are smaller than the turn signals, which are all smaller than the steering wheel.
    3. Provide tactile affordances. The shape and size of the tactile element should indicate to the user what to do to activate that element.
    4. The feedback on that element must also be tactile, even if it is accompanied by voice or action.
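    Lesson 2 can even be checked mechanically. Here is a toy sketch (the control names, criticality scores, and sizes are illustrative, not measured) that encodes a tactile hierarchy as data and verifies that more critical controls are never physically smaller than less critical ones.

```python
# Each control: (name, criticality on a 1-10 scale, touchable size in cm^2).
controls = [
    ("steering wheel", 10, 1100.0),
    ("turn signal",     8,   18.0),
    ("hazard lights",   7,    6.0),
    ("radio button",    2,    1.5),
]

def check_tactile_hierarchy(controls):
    """More critical controls should never be smaller than less critical ones."""
    ranked = sorted(controls, key=lambda c: c[1], reverse=True)
    sizes = [size for _, _, size in ranked]
    return all(a >= b for a, b in zip(sizes, sizes[1:]))

print(check_tactile_hierarchy(controls))  # prints True
```

    A layout tool built on this idea could flag, say, a hazard switch that ended up smaller than a radio preset button before the panel ever reached a prototype.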

    Next time you are in a car, notice how much you can do without looking. What else do you operate solely through touch?

  • Design for Actual Women

    We’ve been doing a lot of work recently in product design for women. There are many commonly cited issues with designing for women, from forgetting that we are “women”, not “girls”, to thinking that making a product pink is the only way we will buy something. But there are some basic insights into women’s lives that could go a long way in thinking about how to design for them.

    1. I carry a bag.
    Many times, when designing a portable device, the inclination is to make it small so it will fit in a pocket. Small is not helpful to me as a woman. I do not carry things in my pocket if I can help it. I carry a bag. Something tiny and slick is the worst thing possible to throw into a purse. In my bag, I have tons of smaller bags that feel different from one another. I use those bags to contain smaller, slick objects so that I can easily locate and use those items. Make it easy for me to find it in my black hole of a bag. Make it feel different. Or better yet, let me operate it by tactile interface.

    2. I wear jewelry.
    However, something small could be infinitely useful if it were attached to me. There are plenty of wearable opportunities in my life. I wear jewelry. Men do not often wear jewelry, but I do. And when I wear jewelry, I don’t want it to look like a piece of machinery or a giant scuba watch. I want things that are simple and elegant. Functional is great, as long as it has some style to it. By the way, pink is not a style.

    3. The older I get, the less I can see.
    I have a friend, aged 58, who was recently complaining to me about her potions in the shower. “I’m supposed to wear my reading glasses in the shower? I can’t tell if I have shampoo or conditioner.” The type was way too small. So instead, she buys different color bottles for the shampoo and conditioner so that she can easily tell them apart. When designing for real women, realize they’re not all 18. Older women want to use technology, too. Use other types of cues and output besides just tiny text on a tiny screen. Look to voice, gesture, and tactile methods to communicate.

    4. I use a lot of new technology.
    In fact, it was recently revealed that older women are actually the dominant users of new digital devices. So go ahead, design for your mom. She’s probably more important to the adoption of new devices than the college-age boys most people assume will be the first to use them.

    5. I not only manage my life, but all of my family’s life as well.
    Women are the pivot point between home and work. We are most often the people who make sure our kids get to all their after-school activities while simultaneously ensuring that our big presentation is ready to go the next day at 8 a.m., after we drop our dad off at the doctor. Women’s lives are about managing multiple complicated responsibilities. Create tools that help us manage that chaos. And remember, it’s very likely that our children and parents will use our device, too.

    6. I think spatially.
    Many studies into how people process digital space have discovered that women perform best when given the opportunity to understand spatial relations between objects. Providing me with real world spatial cues (signposts, orientation, landmarks, lighting cues) will help me to process digital information. Additionally, allow me the opportunity to create spatially relevant areas on the device as buckets of information. In other words, let me assign meaning to areas of the device and organize my information around those signposts.

  • Designing Multi-modal Interfaces

    At Device Design Day 2012, Macadamian's Susan Hosking moderated a panel discussion on the new paradigms for physical computing and how emerging technologies, such as audio, voice, touch, haptics, and gesture, can augment user experience so that it feels more natural. Panelists included Karen Kaushansky of Jawbone, Nathan Moody of Stimulant, Geoffrey Parker of Macadamian, and Kicker Studio’s own Jody Medich.

  • Kicker Tea Tumbler

    The Kicker Tea Tumbler, a concept project from Kicker Studio, tackles an everyday task that’s intentionally low tech: making the perfect cup of tea. Our tumbler consists of a teapot, infuser, and heater combined into one device. It uses a physical interface that subtly incorporates technology to help the user make a perfect cup of tea. The Kicker Tea Tumbler gives making tea a technology kick, without taking the tactile experience away.

  • Form IS Function

    Originally published on the Kicker Studio blog. 

    I’ve always been really bothered by the term “form and function.” It somehow implies that form is outside of function. As if they are two completely separate things. I think form and function have a relationship that is a lot more blurred. That in fact, form is part of function.

    Aesthetics affect the way we perceive something functions. They both set up our expectation of functionality and inform the way we approach it. If it looks like it’s going to work, we do everything we can to make it work that way. Aesthetics trigger some innate sense in our minds about the quality and type of functionality to expect.

    Look at these two lemons. Both of them are perfectly edible; they taste the same and smell the same. But which one would you pick off the tree if given the choice? As a layer of protection, the human brain is wired to choose the one that is most aesthetically pleasing, sorting the good from the bad. The one on the left seems as if it must be inferior based on its aesthetic, but I used it in my water and it tasted just fine.

    When we see a well-made tool, where all the pieces fit together just right and it looks perfectly suited to the job at hand, we expect it to work well. If we attempt to use it and it fails, we are disappointed. Have you ever used a Michael Graves product from Target? He has an entire line of household gadgets. If you took away the industrial design of the product, it would work just the same as any other product in its category. But for some reason, I’m convinced my Michael Graves toaster oven just works better. I somehow enjoy it more, regardless of how many times the knob has fallen off.

    When Ford created the new Mustang, they knew they needed to appeal to the middle-aged men who had dreamed of a Mustang in their teens. That was the era of the Muscle Car. It was a good era. But now these men were a little less nimble. They were a little wider. They also didn’t want to pay for all the gas that the giant V-8 engines required. So, instead, Ford made the seats a little wider, raised the entry height a little higher, and tuned the vibrations in the seats to make the more efficient modern engine under the hood feel like the good ole V-8. It feels and sounds like Muscle, but a 1968 Mustang would beat it up and steal its lunch money. The aesthetics match the illusion in our minds, and therefore we believe it has the same functionality.

    Recently, we’ve been playing a lot with haptics. We’ve learned that you can manipulate the way people think a given haptic feels by changing the sound effect you pair with it. One sound effect makes the exact same haptic feel completely different than it does when paired with a different sound. Aesthetics have a similar effect on functionality.

    In UI terms, aesthetics set the stage for the expected interactions. They communicate a message about the quality of the application, as well as about its provider. Anti-aesthetic is as much of an aesthetic as high polish; it just communicates a different message. In some cases, the anti-aesthetic indicates high quality (on a developer’s site, for instance, a command-line interface means that the makers are developers and should be trusted), but in others it indicates shoddy workmanship (a command-line interface on a shopping site would not instill a lot of confidence that my credit info will be treated securely). It’s important to acknowledge how much the aesthetic of a product shapes the user’s assessment of its functionality. It’s not a separate thought, but one and the same.