When we think about the next generation of programming, it can be helpful to look back at how the personal computer has evolved.
To see what I mean, let's take a quick trip back to the 70s.
The origin of personal computers
1975 was a revolutionary year for personal computing. The Altair 8800 was released - the first commercially successful personal computer. It was shortly followed by Altair BASIC, the first programming language for the machine, developed by Bill Gates and Paul Allen.
These, combined with a teletype, produced an early computer terminal. It functioned much like the terminals developers use today, but it was a hardcopy terminal. The video below demonstrates using Altair BASIC on an Altair 8800 with a teletype: you type into a typewriter, hit enter, and the computer processes the input and types back a response.
Typing Altair BASIC on an Altair 8800 with a teletype - what I like to call the "ghost typewriter" computer. Credit
Adding a screen
The Apple II was released in 1977, another landmark in personal computing. It was a key part of a new wave of machines that introduced whole new visual concepts to computing through a graphical display.
The same year, the original digital spreadsheet, VisiCalc, was released for the Apple II. This program alone was considered by Steve Jobs to have "propelled the Apple II to the success that it achieved" (source).
VisiCalc on the Apple II. Credit
Enter the GUI
The original GUI (Graphical User Interface) was first developed by Xerox in 1973. The Xerox Alto was the first computer to use a GUI, the desktop metaphor, and the mouse. The Alto was a major influence for both the Apple Macintosh and the original Windows OS, released in 1984 and 1985, respectively.
Not only did this pave the way for making computers drastically more intuitive and approachable to everyone, it brought us this incredible ad:
Programming with text
Fast forward to today: we interact with computers constantly, having essentially forgotten there was ever a day when the GUI did not exist. Can you imagine using an iPhone if, instead of navigating and interacting with apps by tapping and using gestures, you had to type in commands?
Oddly enough, when it comes to programming computers, that is where we still are. We aren't that far removed from the original Altair 8800 and teletype. We type commands into a console, and we type structured, text-based instructions to a compiler or interpreter.
Some might argue this is surprisingly elegant - and in certain ways it is. But at the same time, it's 2021 and we are still using text editors and terminals to code. Shouldn't we have figured out a better way by now?
The benefits of visualization
The benefits of visual computing are obvious. It is approachable, efficient, and elegant - yet still very powerful.
The beauty of using a GUI is that every use case can have its own purpose-built experience. Unlike a programming language, which has one simple construct (its syntax and grammar) to accomplish all tasks, a UI can provide a unique experience optimized for each type of task.
For everything from querying data to analyzing it, there are better tools than text:
Same goes for creating UIs:
The challenges of visualization
So, why are we still writing programs as text, like we did 50 years ago? Some have even called this the "great stagnation".
The challenge of visual programming lies in its benefits - there is no one way to do everything. As a result, we still lean on text-based coding, whose simple yet flexible constructs leave no gaps unfilled. In a way, this makes text-based coding a jack of all trades and a master of none.
To take things back to our examples from the 70s and 80s, a good metaphor for the majority of current no-code tooling is the arcade game. Arcade games were single-purpose. They had all the things that seemed magical about the revolutionary Macintosh - a visual display, an interface intuitive enough for even children to use - much like the current generation of no-code tools.
But they lacked one key ingredient: they were not general-purpose. Don't get me wrong, single-purpose computing has its benefits, but a revolution in software development doesn't come from such technology - it comes from generalizability. That is, building something that is intuitive and powerful, and unbounded in what you can create with it.
How do we solve this?
New programming generations are created as layers on top of the prior generations, not as wholly separate and new concepts. New technology is created by standing on the shoulders of giants, not by reinventing the world.
In order to create a visual programming experience unbounded by the constraints of a single problem, we must connect visualization to existing software systems. In other words, we don't need to reinvent the wheel for a single purpose, but connect to it as it is.
Bret Victor, in his incredible talk "Inventing on Principle", shows us some examples.
Who is doing this now?
There are two main categories: visually enhanced developer tools (developer tools like IDEs with visual features), and low-code tools (visual tools that connect to existing APIs and codebases).
Visually enhanced developer tools
One industry that is really pushing visual coding is game development. Games are created by huge teams with enormous production values, and they can't depend on the legacy methods app and web developers use - like handing a design off to a developer and asking them to code it up in CSS by hand. A world as intricate as those found in modern games would be a nightmare to build line by line manually.
Credit: Ghost of Tsushima
Would you want to code this landscape by hand like web devs code CSS? Yeah, didn't think so.
The need to take games beyond what could be coded by hand led the industry to invest heavily in visual tools that connect directly to code. Unreal Engine is a great example that you can try yourself today:
Another great example of this is the latest SwiftUI in Xcode:
Low-code tools
In web and application software, low-code tools are starting to emerge and grow rapidly. Tools like Airtable, Zapier, Builder, and Retool are showing how we can elegantly allow visual editing connected to existing code, data, and APIs.
These work great because they build on top of existing infrastructure - your existing React components, databases, and APIs - and you can granularly set permissions as to who can edit what, and where.
So - what's next? Where is this going?
As we see it, the connection between code and no-code will only grow tighter and stronger. We are only at the beginning - you could call these the Apple II days of visual software development. We still have our version of the Macintosh (truly easy and powerful visual development) to get to, and ultimately the iPhone (easy for everyone).
Here are a few projects, out of many, that we are particularly excited about right now: Storybook, JSX Lite, Blockly, and Build.
Are there any other exciting visual programming developments that you are excited about or want to see? Drop me a comment below!