The beauty – a mobile interface offering users what they need, when they want it, in a way they expect it.

User Testing • Wireframing • Research • Interface • Visual Design


Smart devices – introducing fantastic new possibilities to make life easier and more convenient for everyone – I think they fail.

Those are the words I open my master’s thesis with. Intu is my ‘exploration of touch screen intuitiveness’, a research and design project aimed at finding out why not every one of us can interact with their touch screen device as fluently as we’d want to.

I blame the way current touch interfaces are designed and how they communicate, and I built a prototype messaging app as a means to test my theories about making a universally understood interface.

The problem

So why do I say smart devices fail? I think the touch screen user interfaces they come with do not allow everyone – young, old, first-time user or ‘tech savvy’ – to use the device on a regular and natural basis. There is a lack of communication towards the user, which creates a certain anxiety whenever they pick up their smartphone. This leaves them unwilling to explore the device’s capabilities any further, and makes each moment they have to interact with it slightly uncomfortable. Of course, all of this is based on my personal observations and experiences, but it was enough of an incentive for me to start looking into how I might alleviate this perceived anxiety.

Resulting prototype

After several small iterations of low and somewhat higher fidelity prototypes, I concluded my research with a fully interactive mobile prototype of a messaging app. Note that the prototype is not about the app or its functionality, but about its interface considerations, all based on my months of research and testing.

So here’s a question for you –
‘How can interaction anxiety with touch-based devices be minimized through conscious visual interface design?’

Spoiler

I did not succeed in creating a touch screen application that is instantly understood and recognized. However, the prototype did seem to allow for more trial and error without scaring off the user. Rich feedback, clear communication and context all make sure users constantly know what is happening. Now that I have experienced how to create such an interface, I certainly think I have found some sort of ‘philosophy’ I can build on in future touch screen applications.

Research

First of all, I needed to verify my observation – was there actually a problem? Later on, I needed to test and observe how people interacted with the different iterations of my prototypes.

Focused testing was done during three sessions with groups of 6 to 10 people of different ages and backgrounds. The people I tested with were between 35 and 60 years old, among them high school teachers and a retired sailor. This diversity confirmed that the three main issues detailed in this chapter are real problems encountered by different types of users, both novice and experienced.

Aside from these organized testing moments, I regularly asked people around me (at work, on public transport, etc.) about their experience with their smart device, and whether they wanted to try my prototype. The fact that my prototype runs natively on my own iPhone helped a lot in making these spontaneous tests possible, as I almost always have it with me.

Three issues

Eventually I narrowed the problem down to three issues on which I would focus the project:

A lack of feed forward – ‘interaction anxiety’

One of the behaviors I regularly observed in people using their touch screen device can best be described as ‘being insecure’. They seem to lack a certain confidence in interacting with the different options within the device’s system: toggling a setting, opening a set of options nested within another set, and so on. They would rather not open or use the options at all than do something they think might ‘break’ the system (which they may have only just started getting used to).

The main cause of this issue is the interface communicating vaguely, or not at all, what is behind a button, label or figure. The context in which an option is offered also plays a big role in making that option a logical one.
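
The thesis prototype’s implementation isn’t described here, but the feed forward idea is easy to sketch in code. In this hypothetical SwiftUI example (the names and copy are mine, not the prototype’s), a settings row tells the user what sits behind it before they commit to a tap:

```swift
import SwiftUI

// Hypothetical feed-forward sketch: the row describes what tapping it
// will open, and the disclosure chevron signals that options sit behind it.
struct SettingsList: View {
    var body: some View {
        NavigationStack {
            List {
                NavigationLink {
                    Text("Notification options") // placeholder destination
                } label: {
                    VStack(alignment: .leading, spacing: 2) {
                        Text("Notifications")
                        // Stating the consequence up front lowers the
                        // barrier to exploring the option.
                        Text("Choose which messages may alert you")
                            .font(.footnote)
                            .foregroundColor(.secondary)
                    }
                }
            }
            .navigationTitle("Settings")
        }
    }
}
```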

Recognizing interactive elements – ‘where are my options?’

I have taken to calling this ‘the floating finger’. A user wants to perform a certain action on screen. They scour the icons, texts, divider lines, labels and other graphical items on screen, looking for a means to achieve their goal. Their finger floats above the screen, ready to strike once they find what they assume to be the correct button. Tap. A new screen opens. Again: orientation. What just happened? Where did the previous screen go? What are the current options? How do you get back? Step two: confirming this is actually the screen the user wanted to go to. Are the expected functionalities offered? Where are they? How can they be utilized?

From observing these users interacting with touch devices, I can conclude that cluttered screens, inconsistent interface elements and low-contrast visuals make for a poor understanding of an interface’s flow.
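
Consistency is something an interface can enforce in one place. As a minimal sketch (again hypothetical SwiftUI, not the prototype’s actual code), a single shared button style applied to a whole screen guarantees that every tappable element carries the same high-contrast affordance:

```swift
import SwiftUI

// One shared, deliberately high-contrast style for everything tappable.
struct TappableStyle: ButtonStyle {
    func makeBody(configuration: Configuration) -> some View {
        configuration.label
            .font(.body.weight(.semibold))
            .padding(.vertical, 12)
            .frame(maxWidth: .infinity)
            .background(Color.accentColor)
            .foregroundColor(.white)
            .clipShape(RoundedRectangle(cornerRadius: 12))
            .opacity(configuration.isPressed ? 0.7 : 1)
    }
}

struct OptionsScreen: View {
    var body: some View {
        VStack(spacing: 16) {
            Button("New message") {}
            Button("Contacts") {}
            Button("Settings") {}
        }
        .padding()
        // Applied to the container, the style propagates to every button,
        // so interactive elements always look the same and stand out.
        .buttonStyle(TappableStyle())
    }
}
```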

Feedback on input registration – ‘did you get that?’

The last but far from least (especially in terms of user annoyance) of the three issues is feedback: the interface communicating to the user that they interacted correctly and their input is being processed, or that their input was not received or not valid.

Users activated functions or menus they did not mean to by accidentally tapping twice: when the device did not seem to respond immediately to their input, they would tap again, only to have the second tap registered on the screen that had just opened.

What happened here was not a false input registration – the first tap was actually registered, starting a background process in the device that would load whatever result the tap produced. While this was happening, however, the interface (that which communicates with the user) did nothing to make the user aware of the process. It remained static, as if nothing had happened.

Another way of not providing sufficient feedback is buttons, labels or options that do not respond to interaction. Especially combined with the scenario above, when a button does not visually react to input, there will always be some doubt as to whether the input was actually received. This issue presented itself not so much as unresponsive buttons, but as button responses that were too subtle, too short or too small.
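
To make the principle concrete, here is a minimal sketch of the fix, again in hypothetical SwiftUI rather than the prototype’s actual code: the button reacts visibly the moment the finger lands, shows that work is in flight, and ignores a second tap while the first is still being processed:

```swift
import SwiftUI

struct SendButton: View {
    @State private var isProcessing = false

    var body: some View {
        Button {
            isProcessing = true
            // Stand-in for the background work the tap kicks off.
            DispatchQueue.main.asyncAfter(deadline: .now() + 1) {
                isProcessing = false
            }
        } label: {
            if isProcessing {
                ProgressView() // the interface no longer stays static
            } else {
                Text("Send")
            }
        }
        .buttonStyle(ObviousPressStyle())
        // A second, accidental tap can't trigger anything while the
        // first one is still being processed.
        .disabled(isProcessing)
    }
}

// A pressed state that is impossible to miss: the button dims and
// shrinks slightly for as long as the finger is down.
struct ObviousPressStyle: ButtonStyle {
    func makeBody(configuration: Configuration) -> some View {
        configuration.label
            .padding(.horizontal, 24)
            .padding(.vertical, 12)
            .background(Color.accentColor.opacity(configuration.isPressed ? 0.5 : 1))
            .foregroundColor(.white)
            .clipShape(Capsule())
            .scaleEffect(configuration.isPressed ? 0.95 : 1)
            .animation(.easeOut(duration: 0.1), value: configuration.isPressed)
    }
}
```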