I picked up John Ohno's Big and Small Computing: Trajectories for the Future of Software because I like the guy and his takes on alternative software history. One of the first articles is called "Un-Jobsifying Software Development", which goes after that unsung villain's legacy by dissecting a key, unspoken assumption in his famous aphorism "users don't know what they want." We've all heard this, along with a similar quip dubiously attributed to noted anti-Semite Henry Ford: "If I had asked people what they wanted, they would have said faster horses."
Like all the best lies, these quips contain a grain of uncontroversial truth, but hidden behind them is the expectation that we believe these men did know what people really wanted. We're supposed to believe that they made the correct decisions during the early development of the technologies they shepherded into the world, and we're not supposed to bristle when those technologies don't suit our needs. After all, they work as everybody else expects them to now, so there's supposedly no need to reinvent the wheel.
I've had these thoughts before; it's the very mindset Human-Computer Interaction coursework tries to teach against. With interviews and rapid prototyping and personas, you're supposed to figure out, at an extraordinarily granular level, what your customer really needs, trusting the process to filter out the nice-to-haves and the needless impossibilities. However, even that approach ultimately descends from Jobs, because the result is a monolithic piece of software that does everything you, the designer, have designated the user as needing to do. In the end, you know best, apparently. Who are you to make that final determination? Who are you to, ultimately and inevitably, stand in the way of their feature requests because you won't or can't or don't want to build them? When did computer programs become write-once?
Perhaps it's been that way ever since we stopped assuming that the people who used our software could learn to adapt it for their own purposes. We don't teach people how to use computers anymore; we teach people how to accept guidance from computers designed for others' purposes. At work, the computer is not a tool for the worker's use; it is an automation tool suited to the boss's goals. At our leisure, that boss becomes whoever is using the software to collect our data, and that software is designed with their goals in mind.
This brings me to my phone, the many apps I've installed, and all their features that make no sense for me because they were not designed for me. Some make no sense by any standard of computing: they would never, ever be tolerated on a desktop computer, yet we are forced to accept them on mobile.
For example, why on Earth does YouTube not allow me to play videos in the background, sound-only? Instead, they deign to let the video float over the other apps I'm using, but individual channels can disable even that, and the screen must stay on the whole time. "But Mike," you say, "that is a YouTube Red feature!" Is it a goddamn YouTube Red feature on my laptop? No. They artificially restrict the capabilities of my device in order to shake me down for money.
Why on Earth can I not take a screenshot of anything I'm watching on Netflix or Amazon Prime Video with my phone? "But Mike," you say, "that would violate their copyrights." As much as I am completely ready to turn any given article into a screed on copyright, suffice it to say that it is NOT their prerogative to do this. People have fair-use rights to reuse published content, which implies the right to capture that content for reuse. Are screenshots of these shows restricted on my laptop? No. And they end up on social media just the same, without any bellyaching over the harmless copying of still images, or of clips, while we're at it.
These features do not exist for my benefit. They do not empower me; they deliberately hinder me and make the device I own work against me. Do you realize the number of times a YouTube video playing from my pocket has suddenly stopped because my thigh happens to be capacitive? Or because I've accidentally hit the button on the side that shuts off the screen? After which I've had to ritualistically reach into my pocket, unlock my phone, open YouTube again, and press play, hoping the stream will start right up again? Do you realize how dangerous this can be when it happens while I'm riding my bike, with no barrier between me and Henry Ford's monstrosities? Increment by increment, this exacts a huge cost in my time and safety, and I'm supposed to just accept YouTube holding my phone hostage.
To circle back, this mindset justifies the siloed application model pioneered by Steve Jobs's narcissism. The model he preferred and chose for us because "users don't know what they want" is perfect for letting apps "innovate" and maintain such anti-user quirks for extortionate purposes. Because we've accepted that these faceless companies know best, we have allowed them to design out of our phones the liberties we thoughtlessly enjoyed on our desktop computers. This is wrong, and our only respite in the long run will be the emergence of a freedom-oriented mobile OS that lets us do what we want: again, but for the first time on mobile. We need a system that actively designs against the techniques already devised for limiting us. But even saying that, I cannot hold my breath. This siloed control model has so thoroughly captured people's imaginations that I suspect they accept these limitations as a matter of course, even when presented with computers that have been doing the opposite for decades.
We must question the wisdom of would-be masterminds who enclose our technological futures in their silos and pretend to speak for what's best for us. They speak for themselves. After all, it was in Henry Ford's best interest that people put his car before the horse that had always served them just fine. If we have internalized that Steve Jobs knew best for us, then we continue to believe it at our own peril.