"Why we" articles are meant share about our engineering considerations and decisions, but it does not mean our decision would be the best for your use case, as your unique context matters a lot!
Our CLI was originally written in Node.js, two years ago.
It was something we quickly hacked together in the early days of UI-licious, when our focus was on moving fast and iterating the product quickly. We wanted to roll out the CLI ASAP, so that users with a CI/CD pipeline could hook their tests into their front-end deployment process. The ever-useful commander package was helpful in quickly setting up the CLI.
What's the matter with the original CLI?
This version served most users pretty well, especially in the startup beta-release days. And while we do dogfood our own CLI in our own CI/CD pipeline and felt that it could be better, it wasn't until feedback from mature software teams that were using the CLI heavily in their CI/CD pipelines that it became obvious we needed a better solution.
The issues mostly had to do with installation of the CLI. You see, the original CLI works pretty well for developers and testers. But it wasn't so friendly for DevOps, because npm can be a pretty big pain - I'll come to that in a bit.
So we decided to rewrite the CLI from scratch, and set out the goals for the new CLI.
Goals for the new CLI
1. Zero deployment dependencies
While node.js/npm has conquered the front-end development landscape, it's easy to forget that a very large segment of current web development still uses good old tools. And most CI environments for non-node.js based projects will not have node.js/npm preinstalled.
As a result, in order to use our CLI toolchain within a CI for such projects, users would at best need to wait an additional 15 minutes to install the whole node.js/npm stack - or at worst find it outright impossible due to networking policies, or dependency incompatibility with their existing projects.
So the less we can depend on - the better.
Realistically, absolute zero dependencies is impossible - for example, you always depend on the OS. But it is a goal to strive towards.
2. Single file distribution
Having worked with many CLI tools, the ability to download a single file and execute commands - without an installer, or even a setup process - does wonders for a user.
This has the additional benefit of making it easily backwards compatible with our NPM distribution channel: a quick single-file glue script is all it takes to link the NPM commands to the new binary.
Evaluating our options
Node.js + NPM
good
Works well for >75% of our use case
Easy for the company to maintain. JS is required knowledge for all our developers
Easy to code
Cross platform
bad
Not a single file
Node.js or NPM dependency and compatibility issues for a small % of users who must use outdated builds (for other engineering reasons)
Many enterprise network policies are not very NPM friendly
overall
This would be an obvious choice for a JS-exclusive project, where node.js and NPM are safe assumptions, or when we want to get things done ASAP.
Unfortunately that is not us. And compatibility hell is a huge pain when it includes "other people's code".
Java
good
Extremely Cross platform
Single JAR file
Easy for company to maintain. Java is our main backend language
neutral
[Subjective] CLI library syntax : feels like a chore
bad
Probably way freaking overkill in resource usage
JVM dependency : We probably have more users without Java installed than without NPM
overall
Java is notorious for its obsession with backwards compatibility. If we built our CLI on Java 6, we could be extremely confident that we would not face any compatibility issues with other projects, running the same code base on anything from IoT devices to supercomputers.
However, it is still a giant dependency. While relatively easier to install than node.js/npm, the fact that 25%+ of our users would need to install a JVM just to support our tool doesn't sit well with us.
And seriously, other than Java-based tools themselves, online SaaS products that use Java are a rarity. So ¯\_(ツ)_/¯
Shell scripting + Windows shell?
good
Smallest single file deployment (by byte count)
Very Easy to get something to work
neutral
Heavily dependent on several OS modules. While most would be safe assumptions for 90% of the use cases, it is something to be aware of and careful about. Mitigation can be done using auto-installation steps for the remaining 9% of use cases.
bad
What CLI libraries?
Writing good, easy-to-read bash scripts isn't easy, nor easy to teach.
Hard for the company to maintain : only 2 developers in the company would be qualified enough to pull this off - and they have other priorities.
Windows? We would need to do double work for a dedicated batch-file equivalent.
Remember that 1%? That tends to happen in what would probably be a VIP corporate Linux environment configured for XYZ. This forces the script writer to build complex detection and switching logic according to the installed modules, which can easily convolute the code base by a factor of 10 or more (an extreme case: no curl/wget/netcat? Write raw HTTP requests over sockets).
overall
Despite all its downsides, the final package would be a crazy-small file size of <100KB - uncompressed and un-minified (meaning it can go even lower).
For comparison our go binary file is 10MB
Especially in situations with specific constraints, such as a guarantee of certain dependencies, or projects where that last 1% does not matter, this would be my preferred choice.
An example would be my recent dev.to PR for a docker run script.
A single bash script that helps quickly set up either a DEV or DEMO environment
bash-3.2$ ./docker-run.sh
#---
#
# This script will perform the following steps ...
#
# 1) Stop and remove any docker container with the name 'dev-to-postgres' and 'dev-to'
# 2) Reset any storage directories if RUN_MODE starts with 'RESET-'
# 3) Build the dev.to docker image, with the name of 'dev-to:dev' or 'dev-to:demo'
# 4) Deploy the postgres container, mounting '_docker-storage/postgres' with the name 'dev-to-postgres'
# 5) Deploy the dev-to container, with the name of 'dev-to-app', and sets up its port to 3000
#
# To run this script properly, execute with the following (inside the dev.to repository folder)...
# './docker-run.sh [RUN_MODE] [Additional docker environment arguments]'
#
# Alternatively to run this script in 'interactive mode' simply run
# './docker-run.sh INTERACTIVE-DEMO'
#
#---
#---
#
# RUN_MODE can either be the following
#
# - 'DEV'              : Start up the container into bash, with a quick start guide
# - 'DEMO'             : Start up the container, and run dev.to (requires ALGOLIA environment variables)
# - 'RESET-DEV'        : Resets postgresql and upload data directory for a clean deployment, before running as DEV mode
# - 'RESET-DEMO'       : Resets postgresql and upload data directory for a clean deployment, before running as DEMO mode
# - 'INTERACTIVE-DEMO' : Runs this script in 'interactive' mode to setup the 'DEMO'
#
# So for example to run a development container in bash it's simply
# './docker-run.sh DEV'
#
# To run a simple demo, with some dummy data (replace <?> with the actual keys)
# './docker-run.sh DEMO -e ALGOLIASEARCH_APPLICATION_ID=<?> -e ALGOLIASEARCH_SEARCH_ONLY_KEY=<?> -e ALGOLIASEARCH_API_KEY=<?>'
#
# Finally to run a working demo, you will need to provide either...
# './docker-run.sh .... -e GITHUB_KEY=<?> -e GITHUB_SECRET=<?> -e GITHUB_TOKEN=<?>'
#
# And / Or ...
# './docker-run.sh .... -e TWITTER_ACCESS_TOKEN=<?> -e TWITTER_ACCESS_TOKEN_SECRET=<?> -e TWITTER_KEY=<?> -e TWITTER_SECRET=<?>'
#
# Note that all of this can also be configured via ENVIRONMENT variables prior to running the script
#
#---
It then does the deployment using docker, with the option to do a reset prior to deployment.
Go
good
Language basics are relatively easy to learn (jumping from Java)
Has a cute mascot
neutral
Steep usage learning curve, due to its opinionated coding practices.
bad
No one on the team can claim to have "deep experience" with go
Due to extreme type safety : Processing JSON data is really a pain in the ***
overall
One of the biggest draws is the ability to compile the same code base to any platform, even ancient IBM systems.
While the language itself is easy to learn, its strict adherence to a rather opinionated standard is a pain. For example, the compiler will refuse to compile if you have unused dependencies in your code - among many, many other things. This works both to frustrate the developer and to force better-quality code.
Personally, I both hate and respect this part of the compiler: I keep looser standards when experimenting in "dev mode", while following a much stricter standard in "production mode".
So why GO?
Node.js, Java, C, C++, etc. are clearly out of the picture based on our goals.
The final showdown boiled down to either shell script or Go.
Internally, as we use docker and linux extensively in our infrastructure, most of our engineering team has shell scripting experience.
This allowed us to be confident that we could make shell work on Ubuntu and macOS.
What we were not confident of, however, was making it work well on Windows, Alpine, Debian, Arch, etc.
The general plan at that point in time was to keep Go (which we were sceptical of) as a backup plan, and take the plunge into shell scripting - fixing any issues as they came up with specific customers (the 9%).
However, things changed when we were "forced" to jump into a small hackathon project (to fix a major customer issue) : inboxkitten.com
# PS: you should modify this for your use case
docker run \
-e MAILGUN_EMAIL_DOMAIN="<email-domain>" \
-e MAILGUN_API_KEY="<api-key>" \
-e WEBSITE_DOMAIN="localhost:8000" \
-p 8000:8000 \
uilicious/inboxkitten
In that 14-hour project, we decided to use the opportunity to give a Go CLI a try in a small, isolated project.
Turns out, it can be done relatively easily (after the learning curve). And with that, a decision was made... Go it will be...
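As an illustration of how little ceremony a basic Go CLI needs, here is a minimal sketch using only the standard library flag package. The flag name and messages are made up for this example - they are not our actual CLI's interface:

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

// runCommand parses the given arguments and returns the message to print.
// It is split out from main() so the logic is testable.
// (The "-project" flag is a hypothetical example, not our real CLI flag.)
func runCommand(args []string) (string, error) {
	fs := flag.NewFlagSet("example-cli", flag.ContinueOnError)
	project := fs.String("project", "", "project to run tests against")
	if err := fs.Parse(args); err != nil {
		return "", err
	}
	if *project == "" {
		return "", fmt.Errorf("a project name is required")
	}
	return fmt.Sprintf("running tests for %s", *project), nil
}

func main() {
	msg, err := runCommand(os.Args[1:])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(msg)
}
```

A plain go build then yields a single self-contained binary, and cross-compiling for another platform is just a matter of setting GOOS/GOARCH at build time - which is exactly what our two goals (zero deployment dependencies, single-file distribution) called for.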
And from the looks of it, it turned out well for us after much testing! (fingers crossed as it hits production usage among our users)
Digression: personally, I would have gone this route - hid myself in a programming cave for a week and bashed out utility scripts around all the limitations across every platform.
However, until the team grows much bigger in size and experience, this is on hold. So maybe next year? (I dunno)
Sounds good. What does UI-licious do with a CLI anyway?
We run test scripts like these ...
// Lets go to dev.to
I.goTo("https://dev.to")
// Fill up search
I.fill("Search", "uilicious")
I.pressEnter()
// I should see myself or my co-founder
I.see("Shi Ling")
I.see("Eugene Cheah")