W1: Gestural Interaction - a time travel back to 2005 and how gestures enhance my Smart Home today
Tags: UX 👋

My experience with gestural interfaces

Touch gestures, level 2005

For the broad masses, the journey with touch gestures most likely started around 2009, when the iPhone gained traction and other companies flooded the market with similar products. There was, however, a time before that.
“Nobody wants a stylus”
Yes, I am talking about that time: the time of the stylus. A device that Steve Jobs, Apple's CEO at the time, once wanted to send to the graveyard.
Apple, however, was the same company that just a few years later introduced the Apple Pencil.
There was a time when styluses were the key to controlling the screens of our devices. And there were companies who mastered this "limitation" in a remarkable way, even back then.
 
Back then, Palm was a household name for pocket PCs. In 2005 I had to make a serious decision: should I get a Nintendo DS or a Palm Tungsten E2? Two entirely different devices for different target audiences: an extendable, multi-purpose organizer, or a limited game console with a small touch screen?
I decided on the Palm and never regretted it. Why? It basically gave me the abilities of a smartphone - calendar, contacts, music and video player - before smartphones were around.
 

How to type on such a small screen?

Nowadays we are blessed with huge smartphone screens, so virtually everyone can handle touch input and is unlikely to miss physical keys.
Back then, however, Palm achieved reliable text input without the need for a huge screen.
The key? Gestures!
People were primarily coming from physical notepads back then, and on-screen keyboards were hard to pull off for two reasons:
  • Small screen size
  • Jumping between characters requires advanced hand-eye coordination
It was also still too early to implement advanced technology such as handwriting recognition. But Palm had its own secret sauce:
 
“Let’s just create our own alphabet.”
 
With the gesture based alphabet Palm created, it was easy to write text fast. Why?
  • They kept the strokes simple, with little room for variation from user to user. Overall, the detection was very reliable
  • You never had to leave the input pad - it was literally possible to “blind type” with gestures
  • The input pad followed a clear structure: letters on the left, numbers on the right.
 
Graffiti: The PalmOS alphabet https://en.wikipedia.org/wiki/Graffiti_(Palm_OS)
 
Palm drew a clear distinction between the input pad (Graffiti area) and the screen. While there was an on-screen keyboard for those who really wanted to use it, the "Graffiti" language worked on a superior level.
 

Graffiti never dies

As it turns out, the idea behind Palm OS Graffiti still exists today, in 2020, and is available on Android.
Nowadays, however, I actively use the swipe functions of major keyboards (available on iOS and Android), as they allow blind typing regardless of the phone's screen size.
 

Going beyond touch gestures

There was a time when my flatmates and I became obsessed with smart home devices. Overall, I must admit that I am still fascinated by them - though not so much by the input/output part. Why would you replace a physical light switch with a fixed, battery-powered wireless button that does the exact same thing - trigger the light?
 
For me, one reason to do so lies in the enhanced interaction possibilities such a device can have over a regular light switch. One product that really caught my interest was the Mi Cube from Xiaomi.
The cube is basically a motion and orientation sensor that can detect its:
  • Position
  • Movement
  • Rotation (clockwise / counter-clockwise)
 
With the number of conditions this device offers, it gave me far more interaction opportunities than the average light switch.
Bottom line: I defined three main lighting controls:
  • Move the cube: Toggle the light
  • Flip the cube 90°/180°: Ambient mode / Productive mode
  • Rotate the cube counter-clockwise / clockwise: Increase/decrease brightness
 
The reason I keep using it in different situations and prefer it over voice assistants, with which I could also control my light: I can operate it not only with my hands but even with my feet - for instance, when lying on the couch and I casually want to turn off the light or switch to another light setting.
Here is a small demo I created to demonstrate the modes the cube offers.
 

Do you consider the interaction “natural”? Explain why.

In the last section, I mentioned two examples, both of which I would describe as "natural, once you have learned how to use them".
I would argue that whether a product feels natural or not largely depends on the prerequisites.
Did the user ever use a computer before? A touch screen? Does the product rely on known patterns?
I think that a truly natural product needs to follow laws of nature - rules that people are aware of and expect, regardless of their level of knowledge and technological skill.
In both of my examples, however, although the interaction follows basic gestural rules, users still need to learn the interaction before they can leverage it for their needs.
 

Technological side

Palm OS "Graffiti"

The hardware side is quite simple: a resistive touch panel that reacts to pressure and measures where the pressure occurred. These were quite common back then, and in contrast to the now more established capacitive touchscreens, there were no perks such as multi-touch support. However, this type of screen is still the preferred choice today in some industrial applications, for instance for special machines that are usually operated with gloves.
In short, the hardware (the resistive touch panel with its positioning matrix) collects data about the strokes being made, while the software continuously matches the input against known patterns.
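To illustrate the idea, here is a minimal sketch of how such pattern matching could work - not Palm's actual algorithm, but a simplified template matcher in the spirit of unistroke recognizers: the stroke is resampled to a fixed number of points, normalized to a unit box, and compared against stored templates by average point distance. The two templates ("i", "l") are made-up examples.

```python
import math

def resample(points, n=16):
    """Resample a stroke to n roughly evenly spaced points along its path."""
    dists = [math.dist(a, b) for a, b in zip(points, points[1:])]
    total = sum(dists)
    if total == 0:
        return [points[0]] * n
    step = total / (n - 1)
    pts = list(points)
    result = [pts[0]]
    acc = 0.0
    i = 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if acc + d >= step:
            # Interpolate a new point exactly one step along the path
            t = (step - acc) / d
            x = pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0])
            y = pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1])
            result.append((x, y))
            pts.insert(i, (x, y))
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(result) < n:          # guard against floating-point shortfall
        result.append(pts[-1])
    return result[:n]

def normalize(points):
    """Translate to the origin and scale to a unit bounding box."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    w = (max(xs) - min(xs)) or 1.0
    h = (max(ys) - min(ys)) or 1.0
    return [((x - min(xs)) / w, (y - min(ys)) / h) for x, y in points]

def recognize(stroke, templates):
    """Return the template name with the smallest average point distance."""
    candidate = normalize(resample(stroke))
    best, best_d = None, float("inf")
    for name, tmpl in templates.items():
        ref = normalize(resample(tmpl))
        d = sum(math.dist(a, b) for a, b in zip(candidate, ref)) / len(ref)
        if d < best_d:
            best, best_d = name, d
    return best

# Hypothetical templates: "i" is a vertical stroke, "l" goes down, then right
templates = {
    "i": [(0, 0), (0, 10)],
    "l": [(0, 0), (0, 10), (6, 10)],
}
print(recognize([(2, 1), (2, 5), (2, 9)], templates))  # prints: i
```

Real recognizers (Graffiti, Unistrokes, the later "$1 recognizer") add rotation invariance and far richer template sets; the point here is only the resample-normalize-compare loop.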
After conducting some research, I realized that the technology originated from Unistrokes, initially introduced by Xerox. According to published performance tests, Unistrokes enabled users to reach a higher "wpm" (words per minute), which might be an indicator of how memorable and "natural" to learn the different technologies were for users.
Palm learned from the fact that its words-per-minute rates were subpar compared to Unistrokes, so they decided to launch a new revision of "Graffiti", designed to make letters and numbers resemble actual handwriting more closely - making it more natural and flattening the learning curve for new users.

Xiaomi Aqara cube

The device itself is plain and simple; the Mi Cube consists of:
  • Motion sensor (detect motion)
  • Battery
  • Zigbee chip (sends detected motion to a Zigbee hub)
It can connect to any kind of Zigbee hub - Zigbee being a common network protocol for smart devices.
By default, this would be a Xiaomi Gateway. This gateway, however, sends data through Chinese servers, which I consider unnecessary - just as the Philips Hue hub sends data through Philips servers in the Netherlands, and, if you connect Alexa to it, through Ireland as well.
But after all, these hubs just contain Zigbee chips as well, so thanks to an ambitious community that developed around the Raspberry Pi scene, nowadays you can easily get a $5 Zigbee chip, attach it to your Raspberry Pi and connect all your smart devices to one device - without connecting to any third-party servers.
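To give a sense of the software side of such a setup: with community bridges like zigbee2mqtt, each cube event typically arrives as a small JSON message whose `action` field names the gesture. The sketch below is a hypothetical translation layer from such a payload to a light command - the action names and command fields are assumptions and should be checked against your own bridge's documentation; the MQTT wiring itself (a client subscribing to the cube's topic) is omitted.

```python
import json

# Hypothetical mapping from cube actions (as a zigbee2mqtt-style payload might
# report them) to light commands. Both the action names and the command fields
# are assumptions - check your bridge's documentation for the exact values.
ACTIONS = {
    "shake":        {"state": "TOGGLE"},        # move the cube: toggle light
    "flip90":       {"scene": "ambient"},       # flip 90°: ambient mode
    "flip180":      {"scene": "productive"},    # flip 180°: productive mode
    "rotate_left":  {"brightness_step": +25},   # counter-clockwise: brighter
    "rotate_right": {"brightness_step": -25},   # clockwise: dimmer
}

def cube_to_light_command(payload: str):
    """Translate a cube event payload (JSON string) into a light command."""
    action = json.loads(payload).get("action")
    return ACTIONS.get(action)  # None for unmapped events (e.g. "fall")

# Example: a message as it might arrive on a topic like "zigbee2mqtt/cube"
print(cube_to_light_command('{"action": "rotate_left", "angle": 42}'))
# prints: {'brightness_step': 25}
```

In a running setup, a dispatcher like this would sit in the MQTT client's message callback and forward the resulting command to the light's own topic.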
If you are interested in how to integrate this little magic cube into your home, see here:
 
 
My Question 1: What seems more natural to you - entering the room at 10 pm and the light turns on automatically, or entering the room and toggling the light switch as usual?
My Question 2: What is the most natural input source to you - handwriting, keyboard, or swipe keyboard?
My Question 3: Will we ever be able to create truly natural interfaces, where people require no onboarding at all? How are they going to look, sound, and smell?
 
Why this question? I was thinking about the edgiest edge case, e.g. how would aliens interact with the golden record that was sent into space in 1977?
Some articles suggest that voice interaction might be the most natural way of interaction - but what about crowded places, where speaking to a voice assistant is considered awkward? Is there a fallback for deaf people?
I think that in any case, the time between action and reaction should be as short as possible to encourage fast learning and thus support the phenomenon of instant expertise.
 
📡UX & Technologies blog