Controlling the machine
Inputting the Inputs of Input
It is time to rethink, re-evaluate, and reimagine input. Given that the future of mobile is glasses containing screens, we must solve the problem of how to handle the touch in touch screens when the screen itself is no longer there to be touched.
One of the things which made noble Nokia slow to the smartphone was a belief that the Communicator keyboard was the way forward, that two hands were better than one.
The touch screen has served us well for a little over two decades, a trustworthy tool of tactile triumphs. But when the screen becomes virtual, floating in space, spectral and spooky, it is now untouchable. The glass is gone; the glare remains.
Voice is, of course, an option. It is astonishing how good voice recognition has become. If you want to be really impressed, try having a few friends who speak different languages talk to your Amazon Alexa. The plastic polyglot will handle requests seamlessly.
Shouting into thin air
But it is quite anti-social. Talking at a phone to ask it to do things is peculiar. It was once said that talking to oneself was the first sign of madness. Now talking to yourself is the first sign of Bluetooth. And Suggs walking down your driveway is the first sign of Madness.
Some of the methods for enabling input to augmented reality are quite clever. At the most basic, you have Bluetooth wands. That is what Zapbox does—a fifty-quid AR equivalent of Google Cardboard. The Meta Quest does something similar. But waving a plastic stick around makes you look like a tit. You do not want to look like a tit with your Telephone In The Street.
Then we have the Apple Vision Pro. This uses eye tracking to know what you are looking at in the virtual world, combined with the device’s cameras watching your hands for the pinch that selects it. The Eye watches, and the Hand obeys. This works wonderfully for things like moving windows. But touch typing, where if you are good you don’t look at the keyboard, will not work so well.
Meta is better. Meta is smarter. The new Ray-Ban Display is paired with a neural wristband that measures muscle movement. It measures the twitch, the turn, the tension. This allows you to draw letters on a flat surface. YouTube videos demonstrating this are very impressive. Handwriting, however, is designed for humans to read once it is finished; that is not the same as being a good input technology. Here, I suspect a reboot of Graffiti would work better. This system of glyphs, built by Palm and first sold for the Newton, relies not just on what a shape looks like, but on how it is produced.
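The idea behind that kind of recogniser can be sketched in a few lines: classify a glyph not by its finished shape but by the ordered sequence of directions the pen moves in. This is a toy illustration only, and the glyph table is hypothetical, not the real Graffiti alphabet.

```python
# Toy stroke-based glyph recognition, in the spirit of Graffiti:
# a glyph is identified by *how* it is drawn (the ordered sequence
# of pen directions), not by what the finished shape looks like.
# Coordinates assume y grows downwards, as on a screen.

def directions(points):
    """Quantise a stroke (a list of (x, y) samples) into compass moves."""
    moves = []
    for (x1, y1), (x2, y2) in zip(points, points[1:]):
        dx, dy = x2 - x1, y2 - y1
        if dx == 0 and dy == 0:
            continue                    # ignore repeated samples
        if abs(dx) > abs(dy):
            move = "E" if dx > 0 else "W"
        else:
            move = "S" if dy > 0 else "N"
        if not moves or moves[-1] != move:
            moves.append(move)          # collapse runs of the same move
    return "".join(moves)

# Hypothetical table: direction signature -> letter.
GLYPHS = {"S": "I", "SE": "L", "SEN": "U"}

def recognise(points):
    return GLYPHS.get(directions(points), "?")
```

So a single downward stroke reads as "I", down-then-right as "L", and the same finished shape drawn in a different order would read as something else entirely, which is the whole point.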
An even older input technology may suit the future of the spectacle: The Microwriter.
This was a chord keyboard, with a button for each finger and two for the thumb. By learning a special set of combined button presses you could type, and type rapidly, rather as a court stenographer does. Microwriter users quickly got up to useful typing speeds. Speed meant efficiency; efficiency meant power.
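The decoding side of a chord keyboard is almost trivially simple, which is part of its charm. Here is a minimal sketch, assuming a six-key layout like the Microwriter's; the chord table is illustrative only, not the real Microwriter alphabet.

```python
# Toy chord-keyboard decoder in the Microwriter mould: one key per
# finger plus two for the thumb, and each *combination* of keys
# pressed together produces one character. The table below is a
# made-up mapping, not the genuine Microwriter chord set.

CHORDS = {
    frozenset(["index"]): "e",
    frozenset(["middle"]): "t",
    frozenset(["index", "middle"]): "a",
    frozenset(["index", "thumb1"]): " ",
    frozenset(["index", "middle", "ring", "little"]): "o",
}

def decode(chord_sequence):
    """Turn a sequence of chords (each an iterable of key names) into text."""
    return "".join(
        CHORDS.get(frozenset(chord), "?") for chord in chord_sequence
    )
```

With six keys there are 63 possible chords, comfortably enough for the alphabet, digits, and punctuation; the hard part is never the software, it is training the fingers.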
With the neural wristband technology, the dedicated hardware doesn’t need to exist. It should be possible to type quickly and accurately in thin air, creating words from the wind.
We look back to move forward. Sometimes the past has the solutions for the problems of the future, and the future is simply the past.



I’m not sure I want to learn yet another input method. I can’t make my two thumbs work together no matter what size phone keyboard I have. So I am typing this with my index finger on one hand.
I can touch type on a regular keyboard with two hands at 55 wpm without looking. So I’m not totally elastic.
And no I hate voice input. No idea why. I just do.
Brilliant take on bridging old and new input methods. The neural wristband paired with chord typing makes so much sense because it leverages muscle memory without needing physical hardware. I remember trying the Microwriter at a tech museum and thinking how futuristic it felt back then. The future really is cyclical when you think about it.