I've been thinking a lot recently about how modern applications are developed and deployed. A lot of my experience with this stems from having worked at large corporations, at early-stage startups, and from simply trying to make goofy projects on my own time.
When I was at larger organizations many years ago, there was a strict release workflow: you checked in your code, documentation, and assets by an agreed-upon deadline, something like Tuesdays at 11pm PST. Then, all of that was bundled up and released to the world by an operations team whose sole responsibility was to make sure everything went smoothly. If there was an infrastructure issue with the deploy, a release manager worked with the operations team to solve it. For example, if there was replication lag in production, the ops team had to investigate which patch from all the weekly changes caused the problem and fix it. Obviously, this was unfair and far from ideal. As a developer working on an application, I was ignorant of what the operations team handled. I had no clue about the amount of stress they worked through, or how their role was to suss out, from hundreds of changelists, where a problem might've been introduced. Looking back, this arrangement was completely misguided: had the developers known what the operations team was dealing with, we probably would have accepted more ownership and responsibility, in order to avoid introducing issues in the first place. These infrastructure problems were sometimes caused by code that didn't fully grasp how the underlying systems worked, a problem that was, in a sense, abstracted away by organizational hierarchies.
During my time at a couple of startups, responsibility for the code and infrastructure operating correctly was distributed among everyone working on the product. If, on a deploy, there was an issue with the database, or storage space ran out, or timeouts were climbing because of unexpected behavior, no one was going to save you. It was your change, and yours to fix. This was an immense learning opportunity, and one I'm definitely grateful for, but it swung too far in the opposite direction. In addition to building features, I was now also in charge of the underlying server behavior. While developers should understand how their code behaves in production in order to mitigate performance and resource issues, I don't think it's the best use of an application developer's time to constantly wade into these operational details.
In my personal projects, I've struggled to set up the modern services that my applications have required. To set up a Rails site on a vanilla infrastructure-as-a-service model, one needs to be familiar with the packages available on the Ubuntu base image, understand how to map an nginx server to a domain, set up services that start when the server does, and secure all of them. This ends up being too much mental overhead. It can be done, but it takes a few hours and dozens of outdated help articles, written over various years by various authors, to figure it all out. And once it's set up correctly, establishing continuous deployments is a Sisyphean task of its own. For example, which deployment tool should I even use? Is Capistrano still a good choice? I've heard good things about Mina, which is newer, and something else to learn. If I decide to write my next project in Node, I'll need to learn and configure a whole new tool, like PM2. And what if, against all odds, the project takes off? Suddenly I'll need to think about the appropriate resources for scaling, load balancing requests, and maybe introducing new dependencies, like Redis for caching.
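To make the overhead concrete, here's a rough sketch of what that manual setup involves. This is not a complete or authoritative recipe; it assumes an Ubuntu server, an app listening on port 3000 behind Puma, and hypothetical names like example.com, deploy, and myapp throughout:

```shell
# Install system packages on the Ubuntu base image (package names drift
# between Ubuntu releases, which is part of the problem).
sudo apt-get update
sudo apt-get install -y nginx postgresql libpq-dev build-essential

# Map nginx to the domain: a minimal reverse-proxy server block,
# saved as /etc/nginx/sites-available/example.com:
#
#   server {
#     listen 80;
#     server_name example.com;
#     location / {
#       proxy_pass http://127.0.0.1:3000;   # app server (Puma) assumed here
#       proxy_set_header Host $host;
#     }
#   }
sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx

# Make the app start when the server does: a systemd unit,
# saved as /etc/systemd/system/myapp.service:
#
#   [Unit]
#   Description=My Rails app
#   After=network.target
#
#   [Service]
#   User=deploy
#   WorkingDirectory=/var/www/myapp
#   ExecStart=/usr/local/bin/bundle exec puma -e production
#   Restart=on-failure
#
#   [Install]
#   WantedBy=multi-user.target
sudo systemctl enable --now myapp
```

And even this sketch leaves TLS certificates, firewall rules, database backups, and log rotation unaddressed, each with its own pile of tutorials of varying age.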
Know what you need to (and forget the rest)
In recent years, there's been a shift towards embracing DevOps as a methodology. In a nutshell, DevOps means applying long-established principles, such as writing tests or using version control, to your infrastructure setup. You can automate the resources, services, and networking essentials that your application needs, roll them back if problems are introduced, or discuss them with others in a pull request, just like any other piece of code. For example, a DevOps workflow would advocate that your changes include tests for how your code behaves for the end user and on your infrastructure.
However, there's an immense learning curve to achieving proficiency in operational management. You may find that you're spending more time tweaking configuration files than actually writing code for your application. DevOps tends to conflate the role of developing an application with the role of running its infrastructure: to be a developer, you must know how to plan, build, and satisfy your users' needs, as well as how to architect, set up, and scale a server. No one would expect a backend Ruby developer to also be savvy enough in frontend CSS and JavaScript, yet that's more or less what an overemphasis on server management suggests.
I believe that DevOps is a great idea for companies that are at an immense scale. I'm talking Google/Netflix/Facebook scale: applications that receive millions of requests a second, run by companies that host their own data centers (or, at the least, rely crucially on dedicated servers). A mistake or inefficiency in their operational setup could affect hundreds of thousands of people in an instant, so it makes sense for them to rely on technologies that reduce the possibility of error.
As an application developer, I don't want to think about provisioning servers and CPUs or tweaking processes and deployments. It doesn't much matter whether a project is just me or me and a group of friends or me and an entire organization. Our main concern—and frankly, our main interest—is to build something that people use.
Simple is easier
For the rest of us, there's really no need to make the responsibility of building something that people use more complicated than it already is. Deploying to a platform-as-a-service lifts the burden of operations off your team and places it on experts whose sole responsibility is to keep your site up. While PaaS models tend to be pricier than DIY infrastructure-as-a-service models, the trade-off is that you won't be paying engineers to fine-tune your networks, nor will your application developers split their focus between features that grow your business and managing the application's underlying services.
Not only is the cognitive overhead reduced, but platforms like Heroku (for complete applications), Zeit (for simpler microservices), or Netlify (for static sites) are simply easier to work with. Rather than learning a new configuration language or stringing together Unix commands, you can work in a faster continuous deployment cycle through a `git push`, without worrying about the details of getting the site online during its many iterations. If a new feature depends upon a new service, you can attach it to the app through a command-line sequence as easy as `addons kafka:standard-1`, and rest assured knowing that best practices are provided for you. As your site grows, you can just drag a slider on the administration page to handle any scaling issues. Even if an infrastructure problem is introduced as a result of a code change, you can temporarily adjust the infrastructure via your PaaS's UI while working on a fix in the application.
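For contrast with the server setup earlier, the whole PaaS workflow can fit in a handful of commands. This is a hypothetical Heroku-flavored transcript; the add-on name, plan, and dyno counts are illustrative, and the exact CLI syntax varies by platform:

```shell
# Deploy: push the code, and the platform builds and releases it.
git push heroku main

# Attach a managed service instead of installing, configuring, and
# securing it yourself (add-on and plan names are illustrative).
heroku addons:create heroku-redis:premium-0

# Scale out when traffic grows; no load balancer configuration required.
heroku ps:scale web=3
```

Each of these replaces an entire category of the manual work described above: the build and release pipeline, the provisioning of dependent services, and capacity planning.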
The web has become more complicated as it has grown, and over the years, communities have continued to develop tooling that satisfies their intricate needs. I think application developers would do well to take a step back and ask what problems they're trying to solve, and then pick a development strategy that solves them in the simplest way possible. It may well be that taking control of your infrastructure and operational management is essential for your application and organization, but if it's not, it's better to focus on what's important—delivering features that your users need from you.