Is Software Getting Too Abstract?
9 September, 2022
Millennials and Baby Boomers saw the beginnings of the internet and the digital era. Those on the very edges of the Millennial and Gen-Z age groups (zillennials?) grew up along with the technology. Gen-Z is the first generation that is digital-native; thrown right into a flourishing digital age.
In almost every way, that is amazing. They get to participate in a digital world after all suspicions have more or less settled, and everyone has accepted the Internet as a part of life. Social media now has a significant impact on the world, and unlike older generations burdened with responsibilities, Gen-Z can participate in things and be active online, while having much less to lose. They are uniquely situated to drive change towards a better future on a (relatively) egalitarian global platform.
Abstraction
In software (and even in hardware, to some extent), how something works is often much less important than the fact that it works. This is helpful while building software: rather than racking your brain over one big chunk of code that does everything, you can build several smaller pieces of 'helper' code that have small, specific responsibilities. Such a chunk of code takes some input, crunches some numbers, and gives you the correct output. Once you build this 'helper' and test that it works, you no longer need to worry about it. How the helper does what it does becomes irrelevant; all that matters is that it does it reliably, so you can re-use it in other programs, or even share it with other developers around the world.
Then you, or some other developer, can build other programs that make use of this helper. Those programs are often 'helpers' themselves, taking some input and giving relevant output, ready to be used by other programs, adding yet another layer of abstraction to the unending hierarchy.
This abstraction is a fundamental concept in computer science, and is what allows people to build incredible things without having to re-invent the wheel every single time.
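To make that concrete, here is a minimal sketch in Python of what such layered helpers might look like. All of the function names and numbers here are hypothetical, chosen purely for illustration:

# Helper: count the words in a piece of text.
def word_count(text):
    return len(text.split())

# Another helper, built on top of word_count. It doesn't care *how*
# word_count works, only that it reliably returns the number of words.
def reading_time_minutes(text, words_per_minute=200):
    return word_count(text) / words_per_minute

# Yet another layer: a report assembled from the helpers beneath it.
def summarise(article):
    return (f"{word_count(article)} words, "
            f"~{reading_time_minutes(article):.1f} min read")

print(summarise("Abstraction lets us build on code we never have to reopen."))

Anyone calling summarise only needs to know what it returns, not how word_count splits the text; that is the whole point of the abstraction.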
From a development pattern to a UI pattern...
UI design philosophies tend to go in two opposite directions. The first tries to be user-friendly: making the interface as idiot-proof as possible without revealing anything technical to the user. A friendly facade, hiding away all the ugly nuts and bolts. The other kind tries, instead, to be user-centric. These are more practical interfaces that do allow users to "tinker with the machinery" underneath, offering the occasional glimpse of how things might be working under the hood, and making the presence of a UI a mere convenience - nothing more.
The former tends to have mass-appeal. The latter generally appeals only to hobbyist crowds.
Growing up with the Internet
People born in the 80s and 90s came of age along with the Internet. They saw the ugly nuts and bolts, and are still getting used to seeing the pretty, colorful and modernised packaging over it. The generation that preceded them grew up into the finished products of the Industrial age, and the generation that followed is growing up into the finished products of the Internet age.
Children talk to Google Assistant as if it is the source of answers. I have had to explain to more than one kid that Google doesn't have any answers; Google is just pretty good at finding other websites that do have the answers. There is no bitterness here; only curiosity. After all, we all learn from somewhere. It is remarkable, however, what conclusions a young mind draws from communicating with a virtual assistant, and what effect those conclusions have on the world-model in their head. I am genuinely curious to see how far the mental models of technology in general, and software in particular, will diverge from reality in the minds of people who grow up in a world where abstracted technology is ubiquitous.
Software products have matured to the point where the required human interaction is highly idiot-proof. The psychology of design in general, and digital interface design in particular, has now had enough time to evolve from mimicking "real world" surfaces like wood and metal (remember the skeuomorphic designs of early iOS?) to having its own well thought-out paradigms (Apple's Human Interface Guidelines and Google's Material Design, for example) that can be intuitively understood through a kind of "digital intertextuality" alone.
All of this raises the question of whether it is important to consider how much the products we use allow us to understand how they work, or whether it is sufficient to be content with the fact that they do work, delegating knowledge of the "how" to the people who built them.
My personal opinion leans towards the former. With how easily available knowledge is online, I would imagine that successive generations will be open to learning new things and will retain the malleability of their minds far more than any generation before them. It is imperative, therefore, to understand how things work, at least to a certain extent. Of course, it is impractical (and unprofitable) for companies to design user-centric products, for they lack the mass appeal that user-friendly products have, and there is nothing particularly wrong with that. It follows, then, that we must take some time to develop at least a basic understanding of how the technology we use every day works. This is increasingly important for newer generations that did not have the privilege of watching the technology evolve from its early stages, of glimpsing the nuts and bolts under the hood, before so much of it got commoditised.