
Hey Educational App Designers, Stop Creating Glorified Worksheets!

Educational app designs need a rethink.

I am a product manager and UX designer in the software industry, and I’m also the parent of an elementary-school-aged child. Learning applications are a big part of our educational experience, and I find I am constantly frustrated by them. While they often make great promises, and they use modern graphics and game engines, they rarely use the technology to help facilitate learning. In fact, they often wrap fantastic game engines around worksheets. They have spectacular characters, a wonderful environment and storyline, and then for the actual math or literacy, they just display a virtual worksheet to complete. Even worse, if the user gets a question wrong, instead of using the technology to show them how to fix the problem and learn from it, the app just deducts points, forces them to find the correct answer somehow, and at worst, blocks them from progressing. The fun parts of the app are often the parts around the actual learning, rather than the learning itself being the fun part and main focus. No matter how cool and amazing the application is, if you are just wrapping it around the same old printed worksheets students have used for decades, you really aren’t making good use of the technology.

Educational apps that don’t use game engines or a game format still tend to use game mechanics in their design to help students understand progress, facts they have mastered, concepts and activities they have tried, etc. When you know what to look for, you see the mechanics in virtually every educational app, but no matter how cool, new, flashy or exciting, they tend to devolve into learning as worksheets.

Here are three examples of math apps we have tried in the past.

We downloaded a math app that was a sandbox-style game with customizable avatars, a rich environment to explore, and lots of clever use of music and animation. When it was time to do math work to earn points to buy things for your game environment, you had to answer a series of questions within a certain period of time. While the graphics were nice, and there was animation to make it more engaging, it was still just: look at a math equation, enter the answer, and move on. If you didn’t reach a certain number of points in a certain amount of time, your progress was stuck. You couldn’t do anything more in the game other than wander around until you passed that level. This isn’t much fun, particularly if it is a skill you need to work on. Not only do you need more practice, but the game isn’t fun anymore.

Another math app we tried had an immersive RPG play style. You choose and customize an avatar, and your character does quests, engages with other players, has boss battles, and does other fun activities. This looks fun! However, when it was time to do math questions, you were literally taken out of your immersive environment and shown a virtual math worksheet, like the kind you print out and fill in by hand. At least with this app you aren’t punished for getting answers wrong; you just need to earn a certain number of points to continue. However, there isn’t a lot of actual math learning going on; you have to practice those skills off screen, then come back to it. My son was so impressed with this game and loved it so much that he dedicated daily homework to developing the skills to advance further. We spent three weeks doing daily math practice and worksheets so he could master the level. While we were impressed with how motivated he was, we were baffled as to why the game didn’t provide that practice. There was a little bit of support to show what the correct answers were, but beyond that, it was just doing worksheets.

The worst example of a math app with poor mechanics was one that used no graphics at all. It just had math equations, a timer, and a score. There was no visual indication of how many questions you had to answer, and many of the questions required off-screen work, since there was nothing to do other than fill in the answer and hope for the best. Solving a problem could require several minutes of work on a whiteboard or paper off screen before you entered your answer. If you made a typo, or an off-by-one error, you not only got a message that your answer was wrong, your score also dropped. If your score didn’t reach a certain level, you just continued, over and over, until it did. You had no sense of progress, and while the app would show the correct answer, it didn’t do anything to teach the student how to do better. You get points for correct answers, so you had better come to the app with a lot of facts memorized. If you make mistakes, not only do you get little feedback, you get punished. Furthermore, if you are too slow, that also affects your score, dragging the effort out even longer.

There are lots of apps that at least provide visuals and let students change their minds, but they are still mostly virtual worksheets trying to get students to enter the correct answer. While there is a place for that, such as dragging letters around to make words, dragging words around to figure out parts of speech, or moving objects into groups to divide or multiply, these apps still aren’t using the technology to help students learn very much.

In math, it is so simple to design a visual calculator and let people play around with numbers to see how that affects outcomes and how patterns start to emerge. Once math stops being abstract, and people can play with manipulatives and see what happens, things can really click in a learner’s brain. Math manipulatives such as number blocks, Montessori boards, Cuisenaire rods and more are extremely helpful learning tools, but virtualizing them, adding animation and allowing safe exploration would be incredibly powerful. Instead of catering to learners who do well with worksheets and flash cards, learners who are struggling with a concept should be able to visualize it in various ways, play around with inputs and outputs, and see how the concept manifests itself. Not everyone can translate abstract math concepts into visualizations or numbers in their minds. Providing ways to see not just how objects and patterns interact with math, but how those concepts can be applied with virtual tools, holds a lot of power. While all the technology is available to us, educational apps tend to fall back on some sort of worksheet, which only appeals to a certain kind of learner. On the other hand, virtual objects you can interact with and learn from are more engaging to every learner, and they can help people actually learn something new.
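
To make the virtual-manipulative idea concrete, here is a minimal sketch of what a text-based “number block” could look like. Every name and symbol in it is my own illustration, not taken from any real app:

```python
# A minimal sketch of a "number block" manipulative: render a
# multiplication as a grid of blocks, so a learner can change the
# factors and watch the shape of the product change.

def block_grid(rows: int, cols: int, block: str = "■") -> str:
    """Render a rows x cols product as a grid of blocks, one row per line."""
    return "\n".join(block * cols for _ in range(rows))

def show_product(a: int, b: int) -> None:
    print(f"{a} x {b} = {a * b}")
    print(block_grid(a, b))

# A learner can "play" by sweeping one factor and watching the grid grow:
for b in range(1, 4):
    show_product(3, b)
```

Even something this crude lets a learner sweep one factor and literally watch the product change shape, which is the whole point: the representation responds to play.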

Use Technology as a Safe Place For Learning from Mistakes

What drives me up the wall with educational apps is that they tend to focus only on getting correct answers. Instead, they should provide a space for experimentation. What happens if I play with addends or minuends? What happens if I multiply negative numbers together? What happens if I play with the variables and use huge and tiny numbers in a multiplication problem? What happens if the divisor is larger than the dividend? What if the divisor is an emoji or a letter? What does it look like if I make a word problem come alive? What happens to a graph if we loop through a huge number of values for x and y in a linear equation? What happens if I watch an animation of a huge range of possible inputs? What if the inputs are at extremes or nonsensical? Imagine how quickly all of that could be visualized, how different types of inputs change the outputs, and what patterns arise from different kinds of mathematical concepts.
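
As a sketch of how cheap that kind of exploration is to build, here is a toy loop that sweeps a linear equation over ordinary and extreme inputs. The function is purely illustrative:

```python
# A rough sketch of the "what happens if..." loop: evaluate a linear
# equation y = m*x + b over a range of inputs, including extremes, and
# tabulate the results so the pattern is visible at a glance.

def explore_linear(m: float, b: float, xs) -> list[tuple[float, float]]:
    """Return (x, y) pairs for y = m*x + b over the given inputs."""
    return [(x, m * x + b) for x in xs]

# Sweep ordinary, huge, and negative inputs and watch the outputs:
for x, y in explore_linear(2, 1, [-1_000_000, -1, 0, 1, 1_000_000]):
    print(f"x = {x:>10} -> y = {y:>10}")
```

In an app, the `print` would of course be an animation or a graph, but the underlying mechanic is this simple: let the learner pick the inputs and show them every outcome.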

The beauty of virtual tools is that they are SAFE places to make mistakes. You get to put in some inputs and then watch what happens. In real life, when you make a mistake on a worksheet, you have to erase it and fix it. Virtually, there are no eraser smudges; you just change it. Furthermore, a printed worksheet can’t come alive and show you what happens when a train leaves Philadelphia at 6:00pm and another leaves New York at 7:00pm, or how many ball bearings can fit in the back of a pickup truck. Game engines with virtual tools can. Making mistakes should be a fun learning experience rather than punitive. Sure, there is a point in learning where precision matters and being able to do things by hand is vital. However, playing around with technology and seeing what might happen helps students form a picture in their mind of how a concept works, instead of just memorizing how to get the right answer.

Actually using game engines and game design, and having an understanding of different styles of game play, will help people with different needs learn the concepts in the app itself. Different players have different needs; while some people like timed tests and fact-based answer seeking, others are vastly different. Andrzej Marczewski makes it easy for us to learn about and incorporate Gamer User Types in our designs. For example, a socializer might want to help others learn something they struggled with, and provide a tutorial on something cool they discovered. A griefer might giggle away putting in extreme values. An explorer might try lots of different combinations of things to see how they are visualized or what the virtual outcomes might be. An achiever might be motivated more by virtual rewards and by deciding how many and what kinds of activities to complete. There are a lot of differing user goals and scenarios, and there is a tremendous amount of knowledge and experience in the games industry we can learn from.

A number of years ago, I was asked to do a UX audit of an anatomy app. It had beautiful graphics and a ton of fantastic information. However, it was really just a digitized version of something like Gray’s Anatomy, the famous anatomy textbook. Sure, you could search, look at amazing graphics, and click around to help you memorize, but it was not using the technology to help teach. I saw two problems immediately: it was a digitized book or worksheet, and it was static. Anatomy in living organisms is not static; a living organism’s body is always in some changing state. For example, there was no use of technology to show oxygenated vs. deoxygenated blood in the circulatory system, or to simulate illnesses, pathologies, or the other things a med student needs in order to apply their anatomy knowledge. Furthermore, testing was fact-based. You memorized facts using the app, then stated those facts in an exam. The learning was about reading, looking and memorizing, not experiencing. When you are a medical professional, one of the most important sources of learning is mistakes, or failures. A patient doesn’t respond well, either due to a lack of knowledge or the wrong treatment, and you learn what not to do.

My design concept was to digitize that patient experience, as a sort of medical-study Tamagotchi. Instead of memorizing a virtual anatomy textbook, why not have a virtual patient to keep alive for a semester? Sure, you have the required anatomy to understand and commit to memory, but you also have a simulated patient with certain illnesses, pathologies or states to manage. It sounds crass, but if your virtual patient dies a lot, you are going to learn a tremendous amount from that experience that you can use in the real world. It is much safer to fail and see what happens to your virtual patient than to memorize and get a poor score when you get things wrong. If you can fail virtually and learn from it, that has a lot of value. I would prefer to play and experiment and be rewarded for learning from mistakes, rather than memorizing textbook facts and being afraid to fail an exam. Exam scores have real-world consequences, but playing around in an app and having fun, piquing curiosity to explore “what if” scenarios, or having instructors throw you challenges to keep your virtual patient alive is something we can absolutely do with computers that we just can’t do with dead-tree textbooks.
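
As a toy illustration of the virtual-patient loop (every state, number and treatment here is invented for the sketch, not a claim about real medicine):

```python
# A toy sketch of the "medical Tamagotchi" idea: a virtual patient whose
# state drifts over time, and whose survival depends on the student
# noticing a trend and choosing an intervention.

class VirtualPatient:
    def __init__(self):
        self.oxygen = 95      # blood oxygen saturation, percent (invented)
        self.alive = True

    def tick(self):
        """One step of simulated time: an untreated pathology worsens."""
        self.oxygen -= 5
        if self.oxygen < 60:
            self.alive = False  # a safe, virtual failure -- learn and retry

    def treat(self, intervention: str):
        if intervention == "oxygen_mask":
            self.oxygen = min(99, self.oxygen + 10)

patient = VirtualPatient()
for hour in range(5):
    patient.tick()
    if patient.oxygen < 85:        # the student notices and intervenes
        patient.treat("oxygen_mask")
print("alive" if patient.alive else "lost the patient -- respawn and retry")
```

The point isn’t the simulation’s fidelity; it’s that failure becomes a cheap, repeatable event the student can learn from instead of a mark against them.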

Another area where we can learn from video games is how they treat failure. To make a game engaging, failure is part of the game, not a punishment. Popular games are designed so you never feel lost or dumb. You feel like a superhero, and when things go wrong, you can recover and try again. In fact, many games make failure incredibly fun and rewarding. Who doesn’t want their avatar to scream in a ridiculous way and burst into flames after falling off a balance rope in an obstacle course? Some failure modes are so fun and hilarious that people spend more time crashing their characters than completing tasks. Even games that are extremely challenging and designed to be frustrating are engaging, and they use that frustration and failure to encourage people to try again. You aren’t left feeling stuck and dumb; you feel like you need to try again, just one more time. And if your character crashes and burns, you just respawn and try again. You aren’t stuck, unable to play without doing a lot of work outside of the game. The game helps you succeed, and if you are really stuck, game communities are fabulous places for sharing knowledge and helping each other.

The more I experience educational apps with my kid, the more I see that educational app designers completely miss the power of virtual technology for learning. They should design their apps around experimentation and reward failure as part of learning, but they end up digitizing worksheets. They expect people to know facts; they don’t pique people’s curiosity in a safe way. They have an extremely narrow view of learning and teaching. Why don’t they support inquiry and experience? Why do they just duplicate books and worksheets, even when they have a fancy MMO or RPG engine around the learning? Virtual learning environments are fantastic places to do whatever you want in order to learn. Where else can you safely find out what happens if you feed something poison, fly the rocket into the ground, play around with the variables in your math question, or rearrange your words in a nonsensical way? These are pretty bad ideas in the real world, but great learning experiences in a virtual one. Plus, mistakes can help you learn, and they can be fun and silly. Laughing at a ridiculous mistake on a math concept and visualizing the carnage is a much more effective learning technique than getting a long division problem wrong after you spent ten minutes solving it off screen.

Using technology to just digitize printed worksheets completely misses out on this important approach to learning. Sure, at the end of the experimentation, you want the student to have knowledge and skills and to have learned. But we have game engines, graphics and powerful machines that can be used to teach what we need students to learn, and instead we just give them worksheets. In many cases, the worksheets are even worse than printed ones.

Bottom Line: Let students play with the concepts you are trying to teach, and let them succeed and fail in a safe way, using everything technology affords us. Stop punishing learners for making mistakes; let them make mistakes and explore the outcomes virtually. Stop taking dead-tree technology, digitizing it, rewarding people for getting the correct answers, and calling that an educational experience. Use the technology to show, tell, demonstrate, and play with the concepts so learners get a solid grounding without real-world consequences. That is the differentiator of learning with technology: you have limitless access to information and tons of rich tools to virtualize problem solving and learning in stunning ways. Provide structure and opportunities to learn; don’t just expect people to write an answer on a worksheet. Give them more.

UPDATE:

April 24, 2024

I was reading this article about educational apps: The 5 Percent Problem: Online mathematics programs may benefit most the kids who need it least, and it makes some thought-provoking points. This quote in particular stood out: “…the programs may have been unintentionally designed to fit high achievers better, says Stacy Marple, a researcher at WestEd who has studied several online programs.”

Put another way, if you design apps that expect learners to already have mastery, those learners will tolerate your virtual worksheets because they can easily enter the answers. They have the knowledge, skills, and confidence to grind away and get back to the fun part after they complete the math or literacy “worksheet.” Learners who don’t already have mastery will be frustrated and stuck, because there aren’t mechanisms in place to help them safely learn, to build their understanding and confidence, and to actually help them.

Designing for Smart Fabrics: Wear It’s At – Part 3

In Part 1 of this series, we looked at defining smart fabrics. In Part 2, we looked at some design ideas. In Part 3 we will explore things that could go wrong with this technology, and offer up two possible futures for products using smartfabric tech.

(image via pixabay)

What are the Potential Drawbacks?

If we can access, visualize and interact with our digital lives with fabric, that brings a whole set of implications to design for (privacy, security, etc.) and the challenges of displaying and interacting with systems.

Imagine the possibilities. We will soon have clothing with:

  • built-in displays that can show you information, images and video you would normally see on a PC or smartphone screen.
  • interaction with a system via gestures made by touching your clothing.
  • powerful, tiny sensors that monitor biological functions.

Pretty cool huh? These technologies are currently being developed, and they could open the door to some fantastic possibilities. They also have a darker side, with potential pitfalls.

The most obvious pitfall is needlessly annoying the wearer with notifications or prompts to interact with their clothing. Over-notifying can overstimulate our senses. Vibration or lighting/color changes might be useful ways to get our attention if used sparingly, but if we get a lot of buzzes against our skin, or our clothing is rapidly changing color, or displays and lights are blinking at us too much, smartfabric products could make people sick.
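
One simple way to operationalize “sparingly” is a cooldown, so the garment alerts at most once per interval no matter how many events fire. The class below is a hypothetical sketch, not any real smartfabric SDK:

```python
# A cooldown guard against over-notification: the garment buzzes at most
# once per interval, no matter how many events arrive.

class HapticNotifier:
    def __init__(self, cooldown_seconds: float):
        self.cooldown = cooldown_seconds
        self.last_buzz = float("-inf")

    def notify(self, now: float) -> bool:
        """Buzz only if the cooldown has elapsed; otherwise stay quiet."""
        if now - self.last_buzz >= self.cooldown:
            self.last_buzz = now
            return True   # fire the vibration motor
        return False      # suppress -- the wearer was just alerted

notifier = HapticNotifier(cooldown_seconds=60)
# Ten events in ten seconds produce exactly one buzz:
buzzes = sum(notifier.notify(now=t) for t in range(10))
```

A real product would layer priorities on top of this (a safety alert should break through, a social ping should not), but even this crude gate is the difference between a useful nudge and clothing people want to take off.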

Accidental interactions can cause unintended results. What if I can control my TV shows with my jeans by gesturing on the fabric, but every time I change positions in my chair, I change the channel? Controlling inputs, determining touch targets and gestures, as well as what to display, and how to notify people of events are difficult to implement well on something that is constantly touching your skin. Over-notify and people will want to destroy their clothing, tickle people and they will go crazy.
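
To sketch how accidental contact might be filtered out, a designer could require a touch to travel far enough, fast enough, before treating it as a command. The thresholds below are invented for illustration; real values would have to come from testing with real wearers:

```python
# A sketch of filtering accidental fabric contact: shifting in a chair
# tends to produce slow, short drags, while a deliberate command swipe
# is quick and long. Thresholds here are purely illustrative.

def is_deliberate_swipe(distance_mm: float, duration_ms: float) -> bool:
    """Treat a touch as a command only if it is long and fast enough."""
    if duration_ms <= 0:
        return False
    speed = distance_mm / duration_ms          # mm per ms
    return distance_mm >= 30 and speed >= 0.1  # illustrative thresholds

print(is_deliberate_swipe(50, 200))   # a quick 5 cm swipe: a command
print(is_deliberate_swipe(10, 400))   # a slow wiggle in the chair: ignored
```

Touchscreen designers solved a version of this with tap-vs-scroll heuristics; fabric that is in constant contact with a moving body will need much more aggressive filtering of this kind.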

Target areas for inputs and outputs are important to consider in clothing design. Inputs and outputs need to be placed very carefully on the body, since they could draw unwanted attention to a particular area of the wearer’s body. Even if they aren’t highlighting private parts, they may trigger body image issues if they highlight a perceived flaw in the wearer. It’s also easy to imagine accidental or even purposeful unwanted touching – someone could use the input as an excuse to touch you. If the design encourages interaction, someone might feel compelled to touch you when they normally wouldn’t. Inadvertently encouraging unwanted touching in our designs could have very serious implications. Unwanted touching is assault.

On the other hand, in private, intimate environments, that sort of touching can be welcome and fun. Context, as always, is key to understanding interactions with things you can wear.

Beyond unwanted touching, pranksters or “griefers” might think it is funny to reach over and mess with your digital interaction by touching the smartfabric device while you wear it. This could cause your data to become corrupted or trigger unwanted events in a system. (What if you were trying to book a calendar event, your friend reaches over and taps their fingers on your shirtsleeve, and now you have a thousand calendar events?)

We live in public, and smartfabrics might accidentally display personal information that we wouldn’t want others to see. What if a private text or email is displayed to my coworkers in a meeting because it shows up on my shirt? (You might not want the nude selfie your significant other texts you displayed on your suit jacket during a job interview.) What if I work in HR and my boss sends sensitive salary information that is displayed to an entire department I am presenting to? It could also be inappropriate for your dress shirt to display your photostream in church, or for your sombre outfit to suddenly play the Benny Hill theme during a funeral. Understanding context and the appropriateness of what to display and when, and how much control users have over who might see it, is vital to take into account.

Security and privacy are huge issues with smartfabric devices. Data gathered from a user wearing smartfabrics is deeply personal, and any information passed to other devices or systems absolutely must be secure. If data transmitted from clothing or other devices is intercepted by a third party, it could be embarrassing and potentially damaging if used against the wearer. And if servers or cloud-based storage are compromised and smartfabric-generated user data is stolen, that could be catastrophic for the people whose data was collected. The inevitable public backlash could be severe enough to destroy a company, and data privacy laws could be broken, leaving an organization facing fines, lawsuits and the resulting bad publicity. Everyone loses if the data generated and used by these devices isn’t kept private and secure.

Private information should not be displayed publicly on my smartfabric device unless I want it displayed that way. I need to control what is shown, and when, with speed and ease. Other people should not see the current status of what is going on with me and my body unless I want them to.

Smartfabric devices need to have their capabilities completely under the control of the wearer. When a feature isn’t appropriate in a particular context, or if something odd happens, the wearer needs to be able to turn it off or mute it. Even active smartfabrics that change their form depending on outside triggers require user control so they can be turned off.

Under controlled conditions (such as the development lab) you may never see irritating behavior, but in the real world strange things happen. If your dress changes color depending on light and you are at a dance club, the lighting could cause the dress to change color rapidly. That requires a lot of energy if it happens over and over, so the power source could overheat and cause physical trauma (i.e. burning) to the wearer. At best, it would wear your battery down prematurely.

Compatibility with other devices and services will also be difficult to address. What if my smartfabric jeans only work with Android devices, and I only use Apple devices? What if the clothing that looks good on me and fits my budget is incompatible with the file hosting services I use and depend on? What happens if the smartfabric device manufacturer changes alliances, and my favorite suit company now only supports a platform I hate using?

Reliability is another issue that is difficult with clothing because we have to clean it constantly. How do we create hardware that can survive different weather, food and drink spills, sweat and other bodily fluids, and constant washing and drying? Clothing takes an absolute beating, and we subject it to extremes when we wear it, and especially when we clean it. This is an incredibly difficult environment for electronics to survive in, let alone work reliably in.

Two Possible Futures

The Scary Future

If we design smartfabric experiences poorly, we will distract people from their real-world experiences, diminishing their lives whenever they wear these products. Instead of using technology to enhance their lives, we could burden them even more than our always-connected experiences already do. Also, having more devices reading more and more sensitive and private information creates the potential to track people and make decisions about them based on their movements and biological data. This has incredibly serious, far-reaching implications. If people or organizations use this technology to reward or punish people based on the data that is collected, we could literally have a dystopian future on our hands.

If we aren’t careful, we might just create smartfabric products that are expensive, unreliable, irritating and, in some cases, downright dangerous. That would be the kiss of death for the technology. If the solutions aren’t designed with empathy for real people using them in the real world, not to mention our users’ needs, bodies and states of mind, they will quickly be relegated to the dustbin of history. The good we could do with this technology would be lost because we did a terrible job on the user experience when we introduced our products to market.

The Awesome Future

Back when I wrote programs to help people on software teams be more productive, we used to joke that we were giving them superpowers. They could now get visibility and control within systems that were formerly hidden from them, and use that information to make better decisions or to diagnose and solve difficult problems. Similarly, even though we have many senses and powerful observation skills, there is a lot about ourselves and our environments that we can’t see. Sometimes this information can be incredibly important to gain insight into.

I was delighted when Shannon Hoover (MakeFashion) suggested a similar superpower design theme for smartfabrics. Shannon goes beyond the concept of visibility and control; he believes smartfabrics will eventually provide us with different superpowers by extending our senses, or replacing those that are injured or defective. Shannon says smartfabrics can help provide “X-ray vision” by reading and presenting certain kinds of important data that are invisible to us. Just as a radio interprets radio waves and brings sound into a living room, smartfabrics have the potential to show us our current location and alert us where to move if we are travelling and get off track.

There are also applications to provide us with more strength and stability; while we don’t all have access to a sci-fi robotic exoskeleton (at least not yet), they are being developed for commercial and health-related applications. Shannon goes further, pointing out “spidey sense” applications that detect hidden danger or important events. If the smartfabric is alerted to something that is important for you to know, it can get your attention and warn you immediately. Smartfabrics can use haptic feedback, interacting with your sense of touch in various ways, to get your attention quickly.

Smartfabrics that sense danger and warn the wearer can be incredibly powerful. In particular, if our own senses are impaired due to illness, clothing that can warn us when our bodies are unable to can be life changing. Orpyx has created a vest that notifies people who can’t feel their extremities properly: diabetic neuropathy sufferers are warned of too much pressure on their feet by sensors that trigger vibrations against the wearer’s back. If your feet are getting injured but you don’t feel pain, you can cause irreparable damage. Augmenting your body with another system to help prevent damage is an amazing way to improve the lives of people with illnesses and physical conditions. This also has enormous implications for safety gear and clothing for workers.
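
A toy version of that alerting logic might look like the sketch below: warn only on sustained over-pressure so that a single footstep spike doesn’t buzz the wearer. The threshold, readings and function are all invented here; Orpyx’s actual algorithms aren’t described in this post:

```python
# A toy sketch of pressure alerting: fire a haptic warning only when the
# last few sensor readings ALL exceed a safe threshold, so one footstep
# spike doesn't trigger a false alarm. All values are invented.

SAFE_PRESSURE_KPA = 200  # illustrative threshold, not a medical figure

def needs_warning(readings_kpa: list[float], sustained: int = 3) -> bool:
    """Warn when the last `sustained` readings all exceed the threshold."""
    recent = readings_kpa[-sustained:]
    return len(recent) == sustained and all(r > SAFE_PRESSURE_KPA for r in recent)

print(needs_warning([150, 180, 250]))   # one spike: no alert
print(needs_warning([250, 260, 255]))   # sustained overload: warn the wearer
```

The interesting design work is in everything around this function: where on the body the warning is delivered, how urgent it feels, and how the wearer can acknowledge it.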

Once the wearer gets used to the alternative haptic feedback, it feels as natural as the pain signals your body generates but that you can no longer feel from damaged areas; the technical term for this adaptation is “neuroplasticity.” Smartfabrics have the potential to use this as an “extra sense” that interacts with us seamlessly in our environments. Shannon sees a future of products that provide extra senses even for healthy users, alerting or aiding them as they perform tasks in the world so they are safer, happier and more productive.

Smartfabrics also have more whimsical applications that are important in their own way. Imagine you are at a holiday party in your new dress, with shoes and accessories to match. You’re feeling good about yourself until you spot your nemesis from accounting. “That #$%#! She is wearing the same outfit! Disaster!”

What do you do?

Smartfabrics to the rescue! You quickly dart out to the powder room, and with a quick, discreet gesture, you change the color of your dress. Crisis averted! This might seem silly compared to medical and other important applications, but this kind of technology would be incredibly useful for quality of life.

Conclusion

Augmenting our bodies and our daily activities with smartfabrics has the potential to enhance our senses, extend our physical capabilities and greatly deepen our knowledge and insight. The choice of which future we bring to our users is in our hands. Do we want to enhance the lives of the people who use our products, or do we want to needlessly distract them from what they should be experiencing?

Once smartfabric technology is reliable and inexpensive enough, we designers will have some important work to do, and not much time to do it in. We need to choose which future we would like our customers to have before we even start designing.

Designing for Smart Fabrics: Wear It’s At – Part 2

In Part 1 of this series, we looked at defining smart fabrics. In Part 2, we will look at designing for them.

(image via pixabay)

Deciding What to Design

Work with a Clothing Designer (someone who works with fabrics)

I can’t stress enough how important it is to work with someone who understands making clothing and other handicrafts out of fabrics and textiles. I spoke with Jen Kot, a professional engineer who makes her own clothing, knits, and creates all kinds of interesting objects with textiles. As a technical person, she understands how the underlying technology works. As a crafty person, she deeply understands the applications and costs of different textiles, the pros and cons of different materials, what is easier or harder to work with, and what looks good.

Clothing is one of the most powerful tools we have to create and reinforce our image, or how we want the world to perceive us. One of Jen’s criticisms of many current wearables, such as bracelets, watches and glasses, is that they look “nerdy.” To a techie, this might not seem important. We may overlook the form for the features, and even find something awkward-looking appealing. When you are competing in a world where fashion dictates what is available for us to wear in stores, we need to understand how other people want to be perceived.

When Jen and I brainstormed uses for smartfabrics, my solution ideas were much more functional. Her ideas were much more whimsical and fun. I kept thinking about how the technology could be applied and impact our lives, while she thought about what would look appealing as well as be engaging and fun.

Shannon Hoover, co-founder of MakeFashion, an organization that brings technologists and designers together to collaborate on wearable tech, also reinforced this view. He understands both the visual and technical worlds that are possible with smartfabrics.

Shannon says that many designers and developers are looking at wearables as a suite of tools (detect with a sensor, compute the data, send it to something else), but he believes that isn’t very much fun. It’s an engineering-dominated view, where we tend to focus on technology first, then apply it to a problem. Instead, Shannon feels we should also look at wearables from an aesthetic perspective (it looks interesting) and as a vehicle for human expression. This is an artist and fashion-designer focused perspective.

Shannon goes even further, and says that clothing helps people tell the story about who they are – it is a narrative generator. It also gets people talking to you – it gets their attention and evokes emotions. Clothing is a great conversation starter. So what conversations do we want people to have about what we are wearing?

Using fabrics is complex: you need to understand how something looks on a person, how it feels on the skin, what colors are in fashion, and how different cuts and shapes look on people. Fashion changes constantly. To make smartfabrics work, we will need both the technical view (here is how we make the technology work) and the fashion designer perspective (here is how we make the product look great and appealing to wearers). As a technology designer, talking to people who design clothing and furniture is exciting and helps generate new ideas. I understand the basics of what they are talking about, but their experience and perspective is completely different. When design materials combine different technologies, our solution ideas are much better when we work together with other disciplines and share expertise and different perspectives.

Beyond Clothing

Making clothing is one way to use smartfabrics, but fabrics are used in a lot more things than clothing. For example, Nomex was used in heat shields for the space shuttle, Gore-Tex is used as a tissue replacement in medical procedures, and Kevlar is used to create high-performance vehicle components. In fact, IoT developer and bbotx co-founder Geoff Kratz feels that smartfabrics have even more potential in products other than clothing. For example, he suggests that smartfabrics in vehicle upholstery could provide alternative inputs, displays and other feedback mechanisms for safety purposes. Geoff also sees furniture as another good candidate. Smartfabrics could integrate with entertainment systems to provide an even more immersive experience. Chairs could sync up with other systems and provide you with reminders or safety information. Carpets could have safety lighting triggered by darkness or emergency situations. There is enormous potential for these sorts of ideas, as well as for interactive, connected art applications in homes and public areas.

Quilter and chemical technologist (and reviewer of this article) Cindy Johnstone shares Geoff’s views. Since she quilts, she immediately thought of applications for blankets. Cindy says that portrait quilts or family quilts are very popular, and that the resolution of the images would be so much better if digital technology could be incorporated. An active smartfabric quilt that could tighten the batting to make it warmer in the winter and relax the fibres for summer would also be useful: people could use one blanket year-round rather than owning several. Cindy also sees health care applications, where adding technology into blankets used by patients could provide more insight into, and control over, patient care.

Corporate Innovation

As computing and wireless technology “disappears” into real world devices there is enormous potential to solve more interesting problems. We often look to organizations with well-funded R&D (research & development) programs to set the tone for the rest of us. There will likely be useful, popular smartfabric products developed by some familiar leaders in the tech sector. The space is also ripe for disruption by up-and-coming organizations we haven’t heard of yet. Because these products combine the physical and virtual worlds, investment in them will be more expensive than software alone.

One area where organizations are developing smartfabric and similar technology is the health and wellness sector. Calgary-based Orpyx Technologies provides a wearable sensor platform for healthcare. In one of their products, they use embedded sensors in footwear to help people with diabetic neuropathy. Those with diabetic neuropathy have nerve damage and often don’t feel their limbs as well as other people do. Because they can’t feel when their feet hurt, they can injure them permanently, and in severe cases this leads to amputation. Orpyx has developed a system to warn patients before this happens. A sensor-embedded insert worn inside a patient’s shoe gathers data from the sensors (pressure, etc.) and transmits it to a smartwatch, which then alerts them to potential problems.
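To make the idea concrete, here is a minimal sketch of how threshold-based alerting from a pressure-sensing insole could work. The sensor regions, thresholds and alert wording are my own illustrative assumptions, not Orpyx’s actual implementation.

```python
# Hypothetical sketch: alerting on risky, sustained pressure from an insole.
# All names and thresholds here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class PressureReading:
    region: str          # e.g. "heel", "ball", "toe"
    kilopascals: float   # pressure measured by that sensor
    duration_s: float    # how long that pressure has been sustained

def check_readings(readings, max_kpa=200.0, max_sustained_s=30.0):
    """Return alert messages for regions under risky, sustained pressure."""
    alerts = []
    for r in readings:
        if r.kilopascals > max_kpa and r.duration_s > max_sustained_s:
            alerts.append(
                f"Shift weight off your {r.region}: "
                f"{r.kilopascals:.0f} kPa for {r.duration_s:.0f}s"
            )
    return alerts

readings = [
    PressureReading("heel", 240.0, 45.0),
    PressureReading("ball", 120.0, 10.0),
]
for alert in check_readings(readings):
    print(alert)  # in a real system, pushed to the paired smartwatch
```

The interesting design work is not the threshold check itself but deciding what the wearer should do with the alert, which is where the caregivers and specialists discussed below come in.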

Stephanie Zakala, Marketing and Inside Sales Manager at Orpyx, says that they have been watching smartfabrics closely as a next step for some of their solutions. For example, rather than using embedded sensors within shoe inserts, a smartfabric sock would be a fantastic solution. So far, though, there are technical limitations with the smartfabrics they have looked at. It is difficult to make socks with sensing capabilities that are comfortable, washable, and reliable over time. Stephanie says that clothing is particularly challenging because it creates hostile environments for electronics: shoes and undergarments are worn under pressure, high heat and humidity. Also, the high cost of many smartfabrics is currently prohibitive for many mass market consumer applications.

Other organizations are using sensors embedded in clothing to measure heart rate, temperature and other vital signs for healthcare, and athletics.

As smartfabric reliability improves and prices go down, many organizations will find them to be a great alternative technology for some of their current solutions. Beyond that, they will create new products and services by using smartfabrics to solve problems we were unable to address without this technology.

Innovation from Maker Communities

Great ideas don’t just come from companies, they can come from crowdsourcing as well. Craft communities are important crowdsourcing resources where people share interesting ideas for clothing and crafts. In these communities good ideas rise up to prominence because they work and are easy to replicate. Currently, there are knitting and quilting clubs, fashion collectives and maker fairs sprouting up all over, where people support each other socially, teach each other new skills, and most importantly, share patterns and design ideas so that others can make the same item.

As people try different patterns that others have created, they put their own unique spin on things, and improve on the original ideas. With such a large community, many ideas are shared and tested at a scale far outstripping what most companies’ R&D budgets and timelines allow. Failure isn’t tied to profit and loss, so people can experiment without fear, and the best ideas tend to win out by becoming popular and being emulated more frequently.

Homebrewing was a hugely vital part of early Personal Computer (PC) development, and this do-it-yourself spirit of creativity is evident in craft communities. Adding technology to traditional materials is a natural step. In fact, while researching this piece, I found more people from craft communities who were interested in smartfabrics than technologists.

One community that reminds me of early PC homebrewing and software clubs is Ravelry. Ravelry is a community for people who knit and crochet, and it has a unique blend of features that allow people to share ideas and patterns. Local real-life knitting clubs have been started as a result of people from the same geographic location meeting virtually on Ravelry, then getting together and helping each other out. Sharing patterns and pictures of finished items is a huge part of the Ravelry experience, and popular patterns that help people create things that look good and work start to emerge.

Crowdsourced ideas from maker communities are often more fashion-conscious and whimsical than their corporate counterparts. Sometimes, as technologists, we forget that a product that looks good and is fun can matter just as much to people as a life-saving device; it depends on context. In fact, products that look good and are fun have much broader market appeal. Maker communities are an area to watch because they not only filter the ideas for us, they remind us techies that the world can be a fun, colorful place and we need to incorporate those aspects into our designs.

These communities have access to a wealth of knowledge, and as digital designers, we can learn a tremendous amount from them. Once they have experimented with smartfabrics for a period of time, we can benefit from communities of people figuring out what works best for certain applications. These sorts of organizations are filled with people who like to experiment and create things for themselves and their friends. If you’re wondering where to start, Jen Kot says, make what is interesting and useful to you and then share it. The crowd can create at scale, so the good stuff will get copied and become popular.

Frame Your Design Thinking

Solve a Real Problem

When we look at technology first and then try to find useful applications for it, we can mistakenly create products that people don’t like. It’s important that technology solutions are actually useful and will be used by real people. Many wearables are abandoned after a few months of use. In 2014, Endeavour Partners surveyed people who use wearables and found that one third of activity tracker users had stopped using their devices within six months. Once people get a sense of the data that is measured and how it reflects their activities, they don’t seem to find much value in the wearable’s information anymore.

It is vital to use technology that actually solves problems for people. Author and thinker Simon Sinek talks about “starting with why” we are doing something. In technology, we work a lot with the “what”. We have access to cutting-edge technology, and we need to spend a lot of time learning how to master and apply it. Many wearable and smartfabric demos seem to reflect this. People talk a lot about the technology and how it works, but they fail to make a compelling case for why it’s useful to me in my life and world right now. Very few people care about technical details; they want something that serves a purpose, looks good and fits their budget.

Think of buying a new pair of blue jeans as an example. When I shop, I look for something that solves my problem (I need clothing, specifically a new pair of jeans), that fits my personal style, that I look good wearing, and that I can afford. Invariably, I am drawn to a stylish visual design and a certain level of quality in materials and workmanship. Unfortunately, that means the jeans I would like to consider buying are expensive. Most importantly, the jeans have to fit my body and look and feel good while I wear them. So I start compromising to see if I can find something that fits my budget. If you are a smartfabric designer, how are you going to convince me to pay more for jeans with electronic capabilities? They may just feel like an unnecessary extra. You have to convince me, the buyer, why I absolutely need this new technology in my jeans. Will they fit better? Will they look better? Will they keep me warmer or drier or more comfortable? Will they provide data or notifications that are incredibly handy, or that I just can’t live without once I have them?

Whenever I design something new with complex technology, I strip the problem down to its essence, and at first I remove digital technology from the solution ideas. If I could only use paper, pen, materials readily at hand and perhaps some physical services like mail or parcel delivery, what would I do? Once you fully understand the underlying problem you are solving and can provide alternate ways of solving it with different technologies, it’s time to add digital technology back into the solution. It’s amazing how your perspective changes: that pet feature you just loved in the technology doesn’t necessarily translate into an ideal product for the people you are designing for.

An ideal smartfabric product should be so superior to older technology that customers will want it and wonder how they ever got by without it.

Paul Hanson (bbotx CEO) warns wearable designers against focusing on technology first and then looking for places to apply it in people’s lives. Instead, when we design wearables for people, we have to think first about the person wearing the device, what they are doing in their lives, and how the technology the device provides is useful for that person. However, it can’t stop there. Beyond the device itself, the generated data and the activities recorded or performed by the wearable are much more useful when you take an entire system (such as your interactions with other people) into account. Gathering a lot of data with a smartfabric wearable and merely displaying it on a smartphone has limited usefulness.

What does it mean? What should the wearer do?

With passive wearables, which are usually used to gather data, there is limited value in providing activity data only to the end user. There is infinitely more value if that data can be shared with people who can interpret it, and who have the expertise to help you apply it to improve your life. In healthcare, for example, clothing that monitors heart rate is incredibly useful if caregivers or specialists have access to the data to help put context around it and point you toward behaviors that will benefit you. This requires a complex, secure computing system to support not only the wearable and the user, but everyone else who needs to be involved.

Active wearables need to interact with a system too: the immediate real world around us. Imagine a winter jacket that changes form depending on temperature so you are always at optimal comfort, or a shirt that changes color depending on lighting. The data the wearable gathers from its surroundings can trigger changes in form or attributes that enhance the experience of the wearer. Ideally, the user should be able to control these changes themselves if they want to override them.

Interfaces and Inputs

In my last article on wearables, I suggested that the real world is your primary user interface. With smartfabrics that can be used as clothing, user interfaces can also be on people’s bodies. Not only do you need to understand where the user is, what they are doing and what the environmental conditions are, you also have to understand where on the user’s body the user interfaces and inputs will take place.

Finally, you have to understand the inputs and outputs and displays themselves. This is incredibly challenging.

On a PC, we are used to input methods (keyboards, pointers, stylus, microphones, fingers, etc.) and output on a screen, from speakers etc. The end user tends to use the PC in more ideal conditions, and we don’t think too much about what else is going on around them. Mobile devices complicate I/O matters, because now we have to think about limited actions, and what else is going on in their world as they use our programs on the move.

Wearables such as smartwatches and activity trackers complicate this further, since they can overly distract from our real-world activities, and since we wear them on our wrists or clipped to clothing, we can’t get away from them if they needlessly overstimulate us with notifications and alerts. With smartfabrics used in clothing or furniture, the devices are right up against our skin and have much more opportunity to distract and annoy us. Can you imagine how terrible an experience could be if your clothing was vibrating or lighting up and you couldn’t stop it? It could be embarrassing, irritating, and could even cause injury to the wearer.

Deciding where to provide inputs and outputs on smartfabric is incredibly important for a design. With furniture or objects we interact with, it can be more straightforward, but it would still require a lot of user testing with different users to get right. For clothing, it gets much more complicated. What are the sensitive areas of the body we would want to avoid for inputs? What are appropriate areas for screens, lights or other outputs? If inputs or outputs draw attention to private parts of the body, that could be disastrous in public, or just the right thing in private. We do not want to expose our customers to unwanted attention or touching, and they need to be in control of their own bodies and what technology can do to enhance them.

Smartfabric designers need to think about three layers of inputs and outputs:

  1. What is around users in the real world, and what activities will be enhanced by the technology
  2. The human body – where inputs and outputs are useful, what is feasible to put on or around our bodies, and what is appropriate given the context of use
  3. The inputs and interface designs themselves

Design for Simple Interactions

On mobile devices, we have learned that we can distract people needlessly and take away from their real-life experiences rather than enhance them. If an app is annoying, people just delete it and move on. As designers, we must understand the context of use, such as the environment around users and what they want to do with the technology at a particular time. We also know that people have less time and space to interact with mobile devices than they do with larger screens, so we have to design for quick, economical interactions rather than long workflows. For example, if I am walking outside with my smartphone, it is much harder to interact with than when I am sitting comfortably at a desk typing on my PC. If I am in a hurry, or the weather is bad, it is even more difficult.

Now think of clothing designed with smartfabrics. It is even more difficult to interact with than a small smartphone. What is going on around us, and our limited ability to see and interact as we move, are amplified with smartfabrics. Imagine how irritating it would be if you could not control or turn off notifications on your clothing, or how dangerous it might be if the smartfabric distracts you while you are walking, driving or riding a bicycle. The simple-interactions mantra that mobile designers repeat over and over is that much more important with smartfabrics.
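A tiny sketch of what context-aware notification gating could look like in code. The activity names and priority levels below are assumptions for illustration, not part of any real wearable API.

```python
# Illustrative sketch: gate smartfabric notifications by the wearer's context.
# Activity names and priority levels are hypothetical.

LOW, HIGH = 1, 2

# Activities during which only high-priority (safety-critical) alerts
# should reach the wearer.
FOCUS_ACTIVITIES = {"driving", "cycling", "running"}

def should_deliver(priority, activity, user_muted=False):
    """Deliver an alert only when it won't dangerously distract the wearer."""
    if user_muted:
        return False              # the wearer is always in control
    if activity in FOCUS_ACTIVITIES:
        return priority >= HIGH   # suppress low-priority chatter
    return True

print(should_deliver(LOW, "sitting"))   # True
print(should_deliver(LOW, "cycling"))   # False
print(should_deliver(HIGH, "cycling"))  # True
```

The point of the sketch is the priority order: the wearer’s mute switch beats everything, and context beats the app’s desire to notify.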

MakeFashion co-founder and UX designer Chelsea Klukas says: “…the most successful mobile products have created experiences that quickly allow customers to continue a task after interruption. With wearables these interactions will become even briefer, and successful experiences will need to be quick with minimal interruption to the user. Interfaces will need to be designed to rely on simple one-tap inputs and voice commands that can be achieved instantly.”

With wearables, the real world should be your primary user interface because that is what holds the most attention for the wearer, not the technology. Wearable technology should be designed to work within that context to complement real-life experiences. Chelsea says: “As wearables gain widespread adoption, we are going to have to be increasingly sensitive to the amount of interruptions and distractions they cause. When used correctly, wearables can be incredibly useful in providing information, wayfinding, or accessibility. When used incorrectly, they can become distracting and provide interruptions to user’s tasks and routines.”

Wayfinding came up a lot when I talked to people about possible applications of smartfabrics. Shannon Hoover suggested that smartfabric clothing that could sync with a map service would be hugely beneficial for tourists and travellers. If clothing provided tactile indications (such as through a vibration motor) and subtle visuals, finding your way in an unfamiliar place could be much more enjoyable than having your face in a smartphone or an old-fashioned map. You could focus on your surroundings and have a richer experience, without worrying about going off your planned route. The clothing would remind and guide you. This could also be safer: you wouldn’t stand out to pickpockets or scammers as an obvious tourist.

Others described ideas using smartfabrics for people operating a vehicle. This technology would not be as visually distracting as looking at a smartphone or GPS map. Motorcyclists and bicycle commuters in particular found this a welcome change. Instead of relying on a hand held or mounted device that would distract their visual attention away from where the vehicle was heading, they could get tactile indications of where to go.
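At its core, tactile wayfinding is a mapping from navigation maneuvers onto haptic motors and pulse patterns. Everything in this sketch (motor placements, pattern encoding) is a hypothetical illustration, not a real garment’s design.

```python
# Hypothetical sketch: map navigation cues to haptic patterns in a jacket.
# (motor_location, pulse pattern in milliseconds: on, off, on, ...)
HAPTIC_CUES = {
    "turn_left":  ("left_shoulder",  [200, 100, 200]),
    "turn_right": ("right_shoulder", [200, 100, 200]),
    "continue":   ("upper_back",     [100]),
    "off_route":  ("upper_back",     [400, 200, 400, 200, 400]),
}

def cue_for(maneuver):
    """Return (motor, pattern) for a navigation maneuver, or None if unknown."""
    return HAPTIC_CUES.get(maneuver)

motor, pattern = cue_for("turn_left")
print(motor)  # left_shoulder
```

Notice that left/right turns use location (which shoulder buzzes) rather than pattern to carry meaning, so the wearer never has to decode anything while moving.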

Smartfabrics could provide the ultimate handy interface for quick reference, reminders and interactions on the go, or they could interrupt needlessly and distract away from our real world experiences. Furthermore, since the human body is a secondary user interface, there is really very little difference between touching fabric on your body and touching bare skin. Imagine rubbing your thigh in one spot 100 times a day. How do you think your leg would feel in an hour, in a day, or after a week?

Simplicity is not only necessary for a good user experience, it could be necessary for our health, well being, and our relationships.

In Part 3 we will look at things that could go wrong with this technology, and offer up two possible futures for products using smartfabric tech.

Designing for Smart Fabrics: Wear It’s At – Part 1

Lately, the term “wearables” appears more and more in our conversations. Usually we are describing a smartwatch that extends our mobile experience, or a bracelet that tracks physical activity. Sure, those are things we wear, but what about something with computing power that we actually put on as clothing? Now that is something that is really a wearable. This next wave of wearables moves technology beyond accessories to the clothing we wear, the furniture we use, the vehicles that transport us and the art we enjoy.

This is made possible through smartfabrics. Smartfabrics are textiles with embedded electronics that bring computing power even further from our devices into everyday items. This technology poses new challenges for designers. Not only is the form factor very different from screen devices, it is right up against the user’s skin.

(image via pixabay)

This is an interesting topic for me. While I have done design work with smartwatches, wearable integration and Internet of Things (IoT) devices, I haven’t worked with smartfabrics yet. So I reached out to people in the community who have more insight than I do, and asked for their thoughts on the future of this technology. This is more of a forward-looking piece than my usual experience reports, so I will likely get some of it wrong. However, a lot of you have been asking me to weigh in on this topic. Furthermore, the low-impact, high-value principles behind designing for this technology are important to take into account as we move forward. We might have a limited window of opportunity to get it right.

In my last article on wearables Designing For Smartwatches And Wearables To Enhance Real-Life Experience, I wrote about designing experiences that integrate smartwatches and activity trackers. I mentioned that we have two futures for technology: in one, we are distracted away from our real-world experiences, increasingly focused on technology and missing out on what is going on around us; in the other, technology enhances our life experiences by providing a needed boost at just the right time. Understanding good distractions as well as unwelcome distractions is vital to consider when you are designing for something that you will have right up against your skin for hours at a time on a daily basis. With smartfabrics, we have even more potential to cause harm by distracting people from their lives, or to bring even more good by using powerful technology to enhance our real-life experiences.

As I spoke with people who design with smartfabrics and similar technology, a common theme emerged: smartfabrics aren’t quite there yet for mass market applications. There are some niche players leading the way, but nothing has captured significant mind share. That’s because it’s difficult to bring computing and electronics, power sources, sensors and wireless connectivity to fabrics without making them bulky, impractical, and expensive. However, a lot of great organizations with brilliant people are working hard to create reliable, cost-effective smartfabric technology, so the day will arrive soon. When it does, those of us who are technology designers will be designing digital experiences that are completely different from the screen experiences we are accustomed to. It’s important to understand that we are designing a digital experience that supplements a user’s real-world experience.

Know Your Design Material

Electronics

One way to think about smartfabrics is as Internet of Things (IoT) devices. In his talk Magical UX and the Internet of Things, mobile UX expert Josh Clark defined IoT devices as “Sensors + Smarts + Connectivity”. A “thing” is anything at all that we can put technology into. “Sensors” are what make mobile devices and wearables truly special: they can sense movement, direction, and in some cases even biometric data.

“Smarts” refers to computing power and electronics that makes sense of data, and supports inputs and outputs into the system.

“Connectivity” allows us to get information from the thing onto smartphones and other devices. It is one thing to have a wearable that measures or allows for inputs and outputs, but if it can only work by itself and not communicate with other computers, it has limited value.

Sensors, smarts and connectivity depend on power sources to operate, which means they need batteries to make them work.
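Josh Clark’s “sensors + smarts + connectivity” framing can be sketched as a toy model. The class and its sense/compute/transmit steps are placeholders of my own, not a real device API.

```python
# Toy model of an IoT "thing": sensors + smarts + connectivity + power.
# All names and behaviors here are illustrative assumptions.

class IoTThing:
    def __init__(self, sensor, battery_mah=100.0):
        self.sensor = sensor          # callable returning a raw reading
        self.battery_mah = battery_mah

    def sense(self):
        return self.sensor()          # "sensors": gather raw data

    def compute(self, raw):
        # "smarts": turn a raw reading into something meaningful
        return {"value": raw, "ok": raw < 100}

    def transmit(self, payload):
        # "connectivity": a real device would send this over BLE/Wi-Fi;
        # here we just return it and spend a little battery.
        self.battery_mah -= 0.01
        return payload

thing = IoTThing(sensor=lambda: 42)
result = thing.transmit(thing.compute(thing.sense()))
print(result)  # {'value': 42, 'ok': True}
```

Even this toy makes the power constraint visible: every sense-compute-transmit cycle drains the battery, which is exactly the limitation that makes smartfabrics hard to build.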

Smartfabrics are textiles with IoT-style technology woven into them. Technologies added to textiles include sensors, wireless radios, batteries, location services (such as geo-positioning tech), transducers, bio monitors, lighting and displays. New kinds of fabrics blur the lines with electronics even further: conductive thread for inputs, and stretchy displays, circuit boards and wiring embedded right into the fibers or printed onto the fabric. In some cases you can barely tell the difference between smartfabrics and traditional textiles.

IoT devices provide visibility and control into just about anything we can stuff technology in. Paul Hanson, CEO of IoT technology company bbotx, points out that they “help create enhanced situational awareness.” In other words, IoT devices can help us extend our own capabilities by providing visibility and control into our environments, our interactions with other people and systems, and within ourselves. For example, health monitoring with wearable IoT devices can provide constant flows of data so the wearer can make better health decisions. With IoT technology, this insight is also available to health experts. What was once a sporadic activity – one that required an appointment, specialized equipment and expertise – can now be an ongoing, continuous activity. These systems provide a degree of insight into a patient’s day-to-day condition that was impossible before. Patients get visibility into their current condition at any time, and experts can closely monitor changes and recommend treatment options.

Doug Hagedorn, CEO of Tactalis, describes two different kinds of smartfabrics: passive and active. Passive smartfabrics are designed to monitor and gather information for use within a system. Active smartfabrics react immediately to stimulus in the environment and may change physical aspects such as color, shape or their digital behavior. Passive smartfabrics have enormous potential in areas such as health and fitness, while active smartfabrics could reduce the need for multiple articles of clothing that each serve one particular purpose and design.
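One way to picture Doug’s distinction in code: a passive fabric only records readings for the wider system, while an active fabric changes its own state in response to stimulus. The class names and behavior below are illustrative assumptions, not anything Tactalis ships.

```python
# Illustrative sketch of passive vs. active smartfabrics.

class PassiveFabric:
    """Gathers data for use elsewhere in a system; never reacts itself."""
    def __init__(self):
        self.log = []

    def on_reading(self, reading):
        self.log.append(reading)   # monitor and record only

class ActiveFabric:
    """Reacts immediately to environmental stimulus."""
    def __init__(self):
        self.color = "neutral"

    def on_reading(self, lux):
        # e.g. darken in bright light, lighten in dim light
        self.color = "dark" if lux > 10000 else "light"

shirt = ActiveFabric()
shirt.on_reading(20000)
print(shirt.color)  # dark
```

The split matters for design: a passive fabric’s value lives in the system that interprets its log, while an active fabric’s value lives in how (and whether the wearer wants) it reacts.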

Fabrics & Materials

If you are a digital designer like me who works in virtual worlds all the time, integrating software with real-world physical objects can be a challenge.

Textiles are commonplace enough to appear simple, but they are subtly complex. From ancient times, we have used animal and plant-based materials to clothe ourselves for protection and warmth. Eventually, we brought in metals and other minerals as materials. (The most obvious example of this is medieval armor such as chainmail.)

As technology advanced further, we moved beyond natural resources and started making synthetic fabrics. Today, textiles are sophisticated combinations of natural, synthetic, mineral and other materials. We take them for granted because they are all around us, and the complexity is hidden from us.

The choice of material when designing clothing or other textile goods brings with it different strengths and weaknesses for warmth or cooling, comfort or sturdiness, and ease of care.

To get more out of a particular fabric, we combine different materials to make it even more useful for a particular purpose. Some interesting examples of technology fused with textiles are:

  • Kevlar reinforces fabrics for protective wear
  • Gore-Tex repels water to keep us warm and dry
  • Nomex provides flame resistance
  • Spandex provides support for athletic wear and underwear
  • Conductive thread in our mittens and gloves lets us use touchscreens in the cold

When you weave these powerful fibers into fabric for use in clothing, it allows the wearer to do things that were previously difficult or impossible. In most cases, you wouldn’t even know that complex technology is in your clothing unless you looked at the label. Similarly, as electronics become smaller, they become part of the clothing itself, serving its purpose. They fade into the fabric so that we don’t even notice them. Smartfabrics have the potential to let us do much more with clothing and other objects than we can with traditional fabrics. The key as a designer is to know exactly what kind of smartfabric you require to solve a problem for your users.

Stay tuned for Part 2, as we look at smartfabric design potential.

New Article – Designing For Smartwatches And Wearables To Enhance Real-Life Experience

I expanded on my blog post on this topic and wrote an article for Smashing Magazine: Designing For Smartwatches And Wearables To Enhance Real-Life Experience.

Now that smartwatches and wearables are in a huge growth phase, I shared my ideas on treating the real world as your primary interface, and developing app experiences that enhance our lives, rather than needlessly distract.

Lessons Learned When Designing Products for Smartwatches & Wearables

Lately, I have been doing a bit of work designing products for smartwatches and wearables. It’s a challenge, but it is also a lot of fun. I’m just getting started, but I’ll try to describe what I have learned so far.

Designing for these devices has required a shift in my thinking. Here’s why: we have a long and rich history in User Experience (UX) and User Interface (UI) design for programs written for computers with screens. When we made the leap from a command line interface to graphical interface, this movement exploded. For years we have benefitted from the UX community. Whenever I am faced with a design problem when I’m working on a program or web site, I have tons of material to reach for to help design a great user interface.

That isn’t the case with wearables because they are fundamentally different when it comes to user interaction. For example, a smartwatch may have a small, low-fidelity screen, while an exercise bracelet may have no screen at all. It might just have a vibration motor inside and a blinking light on the outside to provide live feedback to the end user.

So where do you start when you are designing software experiences that integrate with wearables? The first thing I did was look at the APIs for popular wearables, and at their guidance on how to interact with end users. I did what I always do: I tried to find similarities with computers or mobile devices and designed the experiences the way I would for those platforms.

Trouble was, when we tested these software experiences on real devices, in the real world, they were sometimes really annoying. There were unintended consequences with the devices vibrating, blinking, and interrupting real world activities.
“AHHHH! Turn it off! TURN IT OFF!!”

Ok, back to the drawing board. What did we miss?

One insight I learned from this experience sounds simple, but it required a big adjustment in my design approach. I had been working on software systems that tried to make a virtual experience on a computer relatable to a real-world experience. With wearable devices that we literally embed into our physical lives, that model reverses. It can mess with your mind a bit, but it is actually very obvious once it clicks in your brain.

Simply put, when I don’t have a UI on a device, the world becomes my UI.

Let me expand on my emerging wearable design approach to help explain why.

Understand the Core Value Proposition of Your Product

If you’ve been developing software for computers and mobile devices already, this may sound simple, but it can actually be a difficult concept to nail down.

One approach I take is to reduce the current feature set. If we cut this feature, does the app still work? Does cutting it prevent the end user from solving problems or being entertained? If we can cut it, it might be a supporting feature, not a core feature. Remember, wearables have less power and screen real estate, so we'll have to reduce. Once a group of core features remains, it is time to summarize: can we describe what these features do together to create value and a great experience for users?

Another approach I use is to abstract our application away from computing technology altogether. I map out common user goals and workflows and try to repeat them away from the PC with paper and pen. With an enterprise productivity application that involved a lot of sharing and collaboration, I was able to do this with different coloured paper (to represent different classes of information), folders (to represent private or shared files), post-its, and different coloured pens for labelling and personalization.

In a video game context, I did this by reducing the game and mechanics down to a paper, pen, rule book and dice. I then started adding technology back until I had enough for the wearable design.

Now, how do you describe how you are different? Have you researched other players in this market? Who are your competitors, or who has an offering that is quite similar? How are you different? What sets you apart in a sea of apps and devices? This is vital to understand and express clearly.

How do I know if I am done, or close enough? As a team, we should be able to express what our product is and what it does in a sentence or two. Then, that should be relatable to people outside of our team, preferably people we know who aren't technologists. If they understand the core offering, and express interest with a level of excitement, then we are on our way.

If you are starting out new, this can be a little simpler, since it is often easier to create something new than to change what is established. However, even with a fresh, new product, it is easy to bloat it up with unneeded features, so have the courage to be ruthless about keeping things simple, at least at first.

Research and Understand the Device

With wearables and mobile devices in general, the technology is very different than what we are used to with PCs. I call them “sensor-based devices” since the sensors are a core differentiator from PCs and enable them to be so powerful and engaging to users. The technical capabilities of these devices are incredibly important to understand because it helps frame our world of possibilities when we decide what features to implement on wearables and smart watches. Some people prefer to do blue-sky feature generation without these restrictions in place, but I prefer to work with what is actually appropriate and possible with the technology. Also, if you understand the technology and what it was designed for, you can exploit its strengths rather than try to get it to do something it can’t do, or does very poorly.

This is what I do when I am researching a new device:

  • Read any media reviews I can find. PR firms will send out prototypes or early designs, so even if the device hasn't been released yet, there are often impressions and details out there already.
  • Read or at least skim the API documentation. Development teams work very hard to create app development or integration ecosystems for their devices. If you aren’t technical, get a friendly neighbourhood developer on your team to study it and summarize the device capabilities and how it is composed. You need to understand what sensors it has, how they are used, and any wireless integration that it uses to communicate to other devices and systems.
  • If they have provided it, thoroughly read the device's design/UX/HCI guidelines. If they don't, read guidelines from vendors offering similar devices. For example, Pebble smart watches have a simple but useful Pebble UX Guide for UI development. It also refers to the Android and Apple design guidelines and talks about their design philosophy. Pebble currently emphasizes a minimalist design, and recommends creating apps for monitoring, notifications and remote control. That is incredibly helpful for narrowing your focus.
  • Search the web – look for dev forums, etc. for information about what people are doing. You can pick up on chatter about popular features or affordances, common problems, and other ideas that are useful to digest. Dev forums are also full of announcements and advice from the technical teams delivering the devices, which is useful to review.

Determine Key Features by Creating an Impact Story

Now we can put together our core value proposition and the device’s capabilities. However, it’s important to understand our target market of users, and where they will use these devices, and why. I’ve been calling these types of stories different things over the years: technical fables, usage narratives, expanded scenarios and others, but nothing felt quite right. Then I took the course User Experience Done Right by Jasvir Shukla and Meghan Armstrong and I was delighted to find out that they use this approach as well. They had a better name: impact stories, so that is what I have adopted as well.

What I do is create an impact story that describes situations where this sort of technology might help. However, I frame it according to people going about their regular everyday lives. Remember that stories have a beginning, middle and end; they have a scene, protagonists and antagonists, and things don't always go well. I add in pressures and bad weather conditions that make the user uncomfortable, making sure they are things that actually occur in life, to create situations that are as realistic as I can. Ideally, I have already created some personas on the project and I can use them as the main characters.

Most people aren’t technology-driven – they have goals and tasks and ideas that they want to explore in their everyday lives and technology needs to enable them. I try to leave the technology we are developing out of the picture for the first story. Instead, I describe something related to what our technology might solve, and I explore the positives, negatives, pressures, harmonies and conflicts that inevitably arise. From this story, we can then look at gaps that our technology might fill. Remember that core value proposition we figured out above? Now we use this to figure out how we can use our technology platforms to address any needs or gaps in the story.

Next, we filter those ideas through the technical capabilities of the device(s) we are targeting for development. This is how we can start to generate useful features.

Once we get an idea on some core features, I then write three more short stories: a happy ending story (what we aspire to), a bad ending story (the technology fails them, and we want to make sure we avoid that) and a story that ends unresolved (to help us brainstorm about good and bad user experience outcomes.)

Impact stories and personas are great tools for creating and maintaining alignment with both business and technical stakeholders on teams. Stories have great hooks, they are memorable, and they are relatable. With experienced people, they remind them of good and bad project outcomes in the past, which help spur on the motivation for a great user experience. No one wants their solution to be as crappy as the mobile app that let you down last night at the restaurant and cost you a parking ticket.

Use the Real World as Your User Interface

UX experts will tell you that concrete imagery and wording work better than abstract concepts. That means if you have a virtual folder, create an icon that looks like a folder to represent what it is, using a cue from the physical world. What do we do if we have no user interface on a device to put any imagery on at all? Or what if it is just very small and limited? It turns out the physical world around us is full of concrete imagery, so with a bit of awareness of a user's context, we can use the real world as our UI, and enhance those experiences with a wearable device.

Alternate Reality Games (ARGs) are a great source of inspiration and ideas for this sort of approach. For a game development project I was working on, I also looked at Geocaching mechanics. Looking to older cellular or location-based technology and how they solved problems with less powerful devices is an enormous source of information when you are looking at new devices that share some similarities.

I talked to a couple of friends who used to build location-based games for cell phones in the pre-smartphone era, and they told me that one trick with this approach is to pick things that are universal (roads, trees, bodies of water, etc.) and add a virtual significance to them in your app experience. If I am using an exercise wearable, my exercise path and the items I pass along it might trigger events or add significance to the data I am creating. Notifications that cheer you on as you run past significant points of interest on a path can be incredibly rewarding and engaging.
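To make this concrete, here is a minimal sketch of how universal landmarks along a route might trigger cheer-on notifications. The points of interest, coordinates and messages are all invented for illustration; a real app would get its position fix from the device's location services:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    r = 6371000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical points of interest along a running route, each given a
# virtual significance by the app.
POINTS_OF_INTEREST = [
    {"name": "old oak tree", "lat": 43.6532, "lon": -79.3832,
     "message": "Checkpoint 1 -- nice pace!"},
    {"name": "footbridge", "lat": 43.6570, "lon": -79.3900,
     "message": "Halfway there!"},
]

def nearby_notifications(lat, lon, radius_m=50):
    """Return cheer-on messages for any point of interest within radius_m."""
    return [p["message"] for p in POINTS_OF_INTEREST
            if haversine_m(lat, lon, p["lat"], p["lon"]) <= radius_m]
```

The same proximity check works whether the feedback channel is a screen notification, a vibration pattern, or a blinking light.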

Enhance situational activities

One thing that bugs me about my smartphone is that it rarely has situational awareness. I have to stop what I am doing, inform it of the context I am in, and go through all these steps to get what I want at that moment. I want it to just know. Yesterday I was on my way to a meeting in a part of town I am a bit unfamiliar with. I had the destination on my smartphone map, without turn-by-turn directions turned on. I had to take a detour because of construction, so I needed to start a trip and get turn-by-turn directions from the detoured road I was on. I pulled over to the side of the road, pulled out my smartphone, and spent far too long trying to get it to plan out a trip. I had to re-enter the destination address, get my current location, and mess around with it before I could activate it. A better experience would be a maps app that helps and suggests once it senses you have stopped, and allows you to quickly get an adjusted trip going. While you have an active trip, these devices are quite good at adjusting on the fly, but it would be even better if they knew what I was doing and suggested things that make sense for me right now, in that particular situation.

It is easy to be irritating, to over-suggest and bug people to death about inconsequential things, but imagine you are walking past your favorite local restaurant, and a social app tells you your friends are there. Or on the day you usually stop in after work, your smartwatch or wearable alerts you to today's special. If I leave my doctor's office and walk to the front counter, a summary of my calendar might be a useful thing to have displayed for me. There are many ways that devices can use sensors and location services to help enhance an existing situation, and I see a massive amount of opportunity here. Most of the experience takes place in real life, away from a machine, but the machine pops up briefly to help enhance the real-life experience.
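One way to sketch this kind of situational awareness is as a small set of context rules that map sensed conditions to suggestions. The context keys and rules below are purely illustrative assumptions, not any particular platform's API:

```python
def suggest_action(context):
    """Map a sensed context to at most one suggestion, rather than
    waiting for the user to dig through menus. Context keys are
    hypothetical; a real app would populate them from sensors and
    location services."""
    # Stopped off-route with a destination set: offer to reroute.
    if (context.get("has_destination")
            and context.get("speed_kmh", 0) == 0
            and context.get("off_planned_route")):
        return "Looks like you stopped off-route. Start turn-by-turn directions?"
    # Near a favourite spot with friends present: surface the social cue.
    if context.get("near_favourite_restaurant") and context.get("friends_present"):
        return "Your friends are at your favourite restaurant nearby."
    # No rule matched: stay quiet rather than over-suggest.
    return None
```

Returning nothing by default is the important part: the app stays silent unless a rule clearly applies, which keeps it from nagging.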

Rely on the Brain and the Imagination of Your User

If we create or extend a narrative that makes real-world activities also have virtual meaning, that can be a powerful engagement tool. One mobile app I like is a jogging app that creates a zombie game overlay on your exercise routine. Zombies, Run! is a fantastic example of framing one activity in the context of another. This app can make exercise more interesting, and it gets your brain involved to help focus on what might otherwise become a mundane activity.

With a wearable, we can do this too! You just extend the narrative to what you created on your jog, and delay revealing what happened until the activity is complete and you have logged in to your account on a PC, smartphone or tablet. You have to reinforce the imagery and narrative a bit more in the supporting apps on devices with a screen.

ARGs really got me thinking about persisting a narrative. It is one thing to apply virtual significance to real-world objects, but what happens if we have no user interface at all? What are we left with? The most powerful tool we have access to is our human brains, so why not use those too? Sometimes as software designers I think we forget about how powerful this can be, and we almost talk down to our users. We dumb everything down and over praise them rather than respecting that they might have different interpretations or alternative ways of creating value for themselves with our software. Just because we didn’t think of it doesn’t mean it has no merit. It does require a shift towards encouraging overall experiences rather than a set of steps that have to be followed, which can be challenging at first.

Wearable Integration – Data Conversion

If you are working with a wearable that doesn't have a screen or UI, and is essentially a measuring device, one option for tying in your app experience is to convert the data from one context into another. This can be done by tying into the APIs for popular wearables. You don't have an app on the device; instead, your app ties into the data gathered by the device and uses it for something else. For example, convert effort from an exercise wearable into something else in your app. One example of this is Virgin Pulse, an employee engagement application that has a wearable that tracks exercise. Exercise with the wearable can be converted into various rewards within their system. The opportunities for converting data measured for one purpose into another experience altogether are endless.
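A minimal sketch of this kind of conversion might look like the following. The metrics and conversion rates are invented for illustration; the point is that the rules live in one published, inspectable place:

```python
# Published, transparent conversion rules (illustrative values only).
CONVERSION_RULES = {
    "steps":          0.001,  # 1000 steps -> 1 reward point
    "active_minutes": 0.5,    # 2 active minutes -> 1 reward point
}

def to_reward_points(measurements):
    """Convert raw wearable measurements into in-app reward points
    using the published rules; unknown metrics are ignored."""
    return sum(value * CONVERSION_RULES.get(metric, 0)
               for metric, value in measurements.items())

# e.g. a day with 8000 steps and 30 active minutes
points = to_reward_points({"steps": 8000, "active_minutes": 30})  # 8 + 15 = 23
```

Keeping the rules in a single table like this also makes it easy to display them to users, which matters for the fairness and transparency concerns below.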

One app I designed extended data-generation activities into a narrative in an app. We extended our app concepts to the physical activity and tapped into the creative minds and vivid imaginations of the humans using the devices with a few well-placed cues. This was initially the most difficult app for me to design, but it turned out to be the overwhelming favourite from a "fun to use" perspective. The delay between generating the data out in the real world, then coming home and using your PC or tablet to discover what the data measured by the wearable had created in our app, was powerful. Anticipation is a powerful thing.

However, be careful when you do this. Here are a couple of things to be aware of:

  • Make sure the conversion rules are completely transparent and communicated to users. Users need to feel like the system is fair; if they feel taken advantage of, they will stop using your app. Furthermore, you could run afoul of consumer protection laws in different jurisdictions if you don't publish the rules, or if you change them without user consent.
  • Study currency conversion for ideas on how to do this well. Many games use the US dollar as a baseline for virtual currencies in-game, mirroring the real world markets. These are sophisticated systems with a long history, so you don’t have to re-invent the wheel, you can build on knowledge and systems that are already there.
Add Variability Design Mechanics to Combat Boredom

It can be really boring to use a device that just does the same things over and over. Eventually, it can fade into the background so that users don't notice it anymore, which causes you to lose them. If they are filtering out your app, they won't engage with it. Now, this is a tricky area to address, because the last thing you want to do is harass or irritate people. I get angry if an app nags me too much to use it, like some needy ex or try-hard salesman. However, a bit of design work here can add some interest without being bothersome, and in many cases, add to the positive experience.

Here are some simple ideas on adding variation:

  • Easter Eggs: add in navigation time savers that will be discovered by more savvy users and shared with friends to surprise and delight
  • Variable Results: don’t do the same thing every time. Add in different screen designs for slightly different events. One trick is to use time and seasons as cues to update a screen with themes that fit daytime, night time, and seasons. Another is to use the current context of use to change the application behaviour or look. There are lots of things you can do here.
  • Game Mechanics: levelling and progression can help people feel a sense of progress and accomplishment, and if there are rewards or features that get unlocked at different levels, it can be a powerful motivator. It also adds dimensions to something repetitive that changes the user’s perspective and keeps it from getting stale.
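As a small illustration of variable results, here is a sketch that picks a screen theme from the time of day and season. The theme names, northern-hemisphere month mapping, and daylight boundaries are all assumptions for the example:

```python
from datetime import datetime

def pick_theme(now=None):
    """Choose a UI theme from the current time of day and season.
    Month-to-season mapping assumes the northern hemisphere; the
    daylight window of 07:00-19:00 is an illustrative simplification."""
    now = now or datetime.now()
    season = {12: "winter", 1: "winter", 2: "winter",
              3: "spring", 4: "spring", 5: "spring",
              6: "summer", 7: "summer", 8: "summer",
              9: "autumn", 10: "autumn", 11: "autumn"}[now.month]
    time_of_day = "day" if 7 <= now.hour < 19 else "night"
    return f"{season}-{time_of_day}"
```

Because the variation is driven by context the user already understands (it is dark outside, it is winter), it feels natural rather than random.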

Provide for User Control and Error Correction

As we learned when designing notifications for a smartwatch, it can be incredibly irritating if it is going off all the time and buzzing on your wrist. Since wearables are integrated with our clothing, or worn directly next to our bodies, it is incredibly important to provide options and control for users. If your app is irritating, people will stop using it. However, one person’s irritating is another person’s delight, so be sure to allow for notifications and vibrations and similar affordances in your product to be turned on and off.
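A sketch of what such user controls might look like in code, with hypothetical field names, including quiet hours so the device doesn't buzz on your wrist at night:

```python
from dataclasses import dataclass

@dataclass
class NotificationPrefs:
    """Per-user controls for wearable affordances; the fields and
    defaults here are illustrative, not from any real device API."""
    vibration: bool = True
    led_blink: bool = True
    quiet_start: int = 22  # no alerts from 22:00...
    quiet_end: int = 7     # ...until 07:00

    def should_alert(self, hour, channel):
        """Allow an alert on the given channel only if it is enabled
        and the current hour is outside the quiet window."""
        if self.quiet_start > self.quiet_end:
            # Quiet window wraps past midnight.
            in_quiet = hour >= self.quiet_start or hour < self.quiet_end
        else:
            in_quiet = self.quiet_start <= hour < self.quiet_end
        if in_quiet:
            return False
        return getattr(self, channel, False)
```

Checking preferences at the moment of delivery, rather than at the moment of scheduling, means a user who turns vibration off is respected immediately.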

Conclusion

This is one of the most fun areas for me right now in my work, and I hope you find my initial brain dump of ideas on the topic helpful. Sensor-based devices are gaining in popularity, and all indications show that some combination of them will become much more popular in the future.