So even if there was no tactile engagement, there was visual engagement communicating tactile interaction.
Now even that’s gone. Our Pictures Under Glass are bland and flat and look like translucent acrylic and, well, glass.
I'm still hoping for a hover-like mechanism that allows touchscreens to have non-committal interactions. As you say, touch is surprisingly devoid of depth for its maturity and time on the market.
I'm hoping some level of above-the-screen finger detection comes eventually, because if it reaches mainstream distribution, it opens the door for the bleeding edge to aim for larger detection distances and, eventually, hand-gesture recognition above the screen.
Whether it's in this form or something completely different, our screen interactions need to be broadened, especially in a direction that allows users to signal intent but not commitment (like hovering with a mouse, or looking with eye tracking).
My laptop used to have several cameras (flopping up from behind screen) for 3D hand tracking, (crufty) keycap touch, and head/gaze pose. With shutter glasses, overlaying a slightly-raised transparent top-down view from a keyboard cam was helpful and tolerable. Browser-based stack. Input fusion with diverse latencies required app state rollback. Fun(ish).
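The "input fusion with diverse latencies required app state rollback" point is essentially the rollback technique familiar from netcode: when a high-latency source (say, camera hand tracking) delivers an event stamped *earlier* than events already applied from a low-latency source (the keyboard), you rewind and replay in timestamp order. A minimal sketch of the idea, with all names illustrative and a toy order-sensitive state so the replay visibly matters:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    t: int      # timestamp the input actually occurred (ms)
    op: str     # "add" or "mul" -- toy, order-sensitive state updates
    arg: int

def step(state: int, ev: Event) -> int:
    # Deliberately non-commutative, so arrival order visibly matters.
    return state + ev.arg if ev.op == "add" else state * ev.arg

class RollbackFusion:
    """Fuses events from sources with different latencies by keeping an
    event log and replaying it in timestamp order on late arrivals."""

    def __init__(self) -> None:
        self.log: list[Event] = []
        self.state = 0

    def deliver(self, ev: Event) -> int:
        self.log.append(ev)
        self.log.sort(key=lambda e: e.t)  # stable sort keeps ties in arrival order
        # Naive full replay for clarity; a real app would snapshot state
        # periodically and replay only from the last snapshot before ev.t.
        self.state = 0
        for e in self.log:
            self.state = step(self.state, e)
        return self.state
```

For example, a keyboard event (t=10, mul 2) is applied immediately; a camera event (t=5, add 3) then arrives late. After rollback and replay the state is (0+3)*2 = 6, as if the events had arrived in order, rather than the 0*2+3 = 3 a naive arrival-order fold would give.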
> _finally_ [...] even that got dropped a few years back [...] eventually [...] broadened
This was profoundly frustrating. Waiting for a couple of giant companies, interested only in extremely difficult mass consumer markets, to make tech available so it can afterward be explored and niches probed, with patents shackling everyone else, was just... sigh. Pharma turning the nation's explore-exploit dial hard over regrettably leaves a lot of paths forward nonviable.
And it's because it's hard to imagine a completely new way of interacting with computing technology. People who are able to do that don't get paid scriptwriters' salaries; they get paid Steve Jobs salaries. And they are worth billions to organizations capable of making their ideas a reality.
A brief rant on the future of interaction design (2011) - https://news.ycombinator.com/item?id=31511361 - May 2022 (132 comments)
A brief rant on the future of interaction design (2011) - https://news.ycombinator.com/item?id=21116948 - Sept 2019 (153 comments)
A Brief Rant on the Future of Interaction Design (2011) - https://news.ycombinator.com/item?id=6325996 - Sept 2013 (35 comments)
A Brief Rant on the Future of Interaction Design - https://news.ycombinator.com/item?id=3212949 - Nov 2011 (150 comments)
I enjoyed this rant. I know it's old, but this is the first time I've seen it.
Hugh Dubberly was a pretty decent chap. I liked working with him.
They had this massive inkjet printer, and would deliver room-length charts.
The keyboard is tactile, and I’ve long dreamed of augmenting Alexa by covering my house in purpose-specific buttons, but that seems like a short road. Even the most advanced HCI device I’ve ever seen proposed by a consumer company—Meta’s Orion wristbands, which read hand movements by measuring the electrical signals in your wrist—isn’t tangible in the slightest.
What am I missing? Can any fellow futurists point me in the right direction? I don’t need “doable today” or even “the technology is worked out”, but ideally I’m looking for something more doable than tangible holograms. See https://xkcd.com/678/
EDIT: the VR suits from the Three Body Problem books come to mind, using tiny actuators and temperature controls (thermal actuators?) to simulate touch. I could see that in a glove for sure, and that’s probably the most futuristic tactility gets at this point, but I doubt it’ll ever see full-body usage, for both technical-feasibility and user-convenience reasons. There’s a reason the TV show replaced them with brain-modulating headsets… I guess that’s really the end dream.
What is a tangible hologram (or any interface, really) other than an illusion, at the end of the day?
Over time we got better at packing more into a button. Multimodal buttons became a thing, and once alphanumeric displays became practical, a single knob or arrow keypad to scroll through tiny menus became common. Twenty years after that, everything is a flat, touch-sensitive button, and more and more we hide them from even being visible until they are interacted with.
I believe the reasons for this are twofold. First, they are futuristic. People are drawn to new, shiny things, even if they are functionally worse. The second is cost. Physical switches are quite expensive compared to digital electronics. As touchscreens have become cheaper, they have replaced more physical inputs. Similarly, they have replaced discrete indicator lights, meters, and other single-purpose indicators.
The cost savings are modest in terms of an entire product, but businesses will spend days saving one cent in production costs because it adds up over millions of units. It usually amounts to a small percentage of profits, but they'll chase anything.
Fun fact that we don’t talk about enough: I’m not sure if it’s still the case, but at some point the controls in SpaceX’s Dragon capsules were touchscreens running JavaScript. It brings me joy (and obviously fear!) to imagine an npm node_modules directory making it to space… I think the consensus is spot on that there can absolutely be too many touchscreens.
More fundamentally, your answer doesn’t really speak to what I was thinking of, though: general computing. Buttons and switches are great for purpose-built machines, but their applicability to general computing is minimal. How could one possibly fundamentally improve upon the keyboard or mouse?
I wonder if this is a "pick 2" situation.
Touchscreen: see and sorta manipulate
Orion: manipulate (in only the weak sense that it uses more than one finger)
Braille display, haptic glove/suit: feel and see (redundantly?)
In my touchscreen-biased mind, I can't imagine a technology that could do all three of these, which I would want to use regularly or carry around, or wear for hours at a time.
The touchscreen, combining the "see" with at least a tiny bit of the "manipulate", is actually an amazing step, but it may be a dead end. I've seen research into things like tactile screens, with adjustable surface height. But the benefit it gives is evidently not worth the cost in development, manufacturing, or complexity.
I was going to correct you that Orion is just a baby step towards Minority Report holograms (aka plain ol’ non-tangible holograms), and that reminded me: the famous opening scene had lots more than holograms!
Specifically, they had physical objects (hard drives, knobs, spheres, etc.) that didn’t seem to have much electronics actually in them, but that were tracked and responded to by the computing system in a dynamic way. For example, easily passing data between the hologram and a little ball that you can easily move to another display, give to someone else, etc., in a completely seamless (read: AI-managed) way.
That might be my biggest hope for the future, while we’re waiting on tactile gloves. This kind of thing: https://www.behance.net/gallery/105147921/Apple-x-Procreate-...
In the Zelda games on the Switch, you aim the bow and arrow with motion controls. This may be the smoothest, most natural interface I can think of; changing back to joystick controls feels really bad.
The input is 2DOF manipulation, and the feedback is primarily seen on the screen. There is a small amount of "feel" feedback inherent to the physical motion of the controller.
Another example on the Switch is a "labyrinth" balance minigame in Kirby.
I recall early iPhone games exploring this UI, but it doesn't seem to have developed into anything beyond games and spirit-level utility apps.
>>> what I was thinking of [...] How could one possibly fundamentally expound upon the keyboard or mouse?
You're wearing a VR headset, so the visual UX of keyboard and mouse can be 3D anything. The keyboard and surround is all multitouch. The keyboard is n-key rollover - feel free to chord. The mouse is tracked in 3D. It's also on a motorized arm, so it has force feedback, and you can leave it wherever, or have it move itself. You have two of them. Your hands are tracked in 3D, so feel free to gesture, and to interact with the visual environment. If you value the fingertip feel of your keyboard, you can stick with fingernail vibrators. The mouse surface is variously pressure sensitive. The mouse surface is vibrator chips. The keys are covered with vibrators. The keys are multi-axis pressure sensitive. Your palm and back of hand are covered with vibrators. The headset provides a high-resolution soundscape. The headset tracks head and gaze. Any screen is notional, as the headset provides high resolution content with any apparent focus depth. The keyboard is on a robot arm, so you can stand or pace - just ask, and it will be at hand. For some tasks, like sorting piles of objects, you might briefly prefer a different interface device.
Much of that is current-ish tech. Though painful to gather and make comfortable, given a dysfunctional market for such.
Contrast the available design richness of phone vs desktop, interface and apps. Imagine desktop absorbing lessons where phone is better. Now try to imagine the design richness available to desktop vs something, where something isn't pretending to be a half-century old terminal emulating a manual office typewriter.