Introduction
As developers, we often find ourselves juggling various dependencies, configurations, and deployment environments. This complexity can sometimes make it challenging to ensure that our applications run consistently across different machines and platforms.
Enter Docker, a revolutionary tool that has transformed the way we build, ship, and run applications. Docker provides a lightweight, portable, and self-sufficient environment for running applications, making it an ideal choice for Node.js and Express developers looking to streamline their development workflow and deployment process.
In this blog post, we will embark on a journey to demystify Docker and explore how it can simplify the development and deployment of Node.js and Express applications. Whether you're a seasoned developer or just getting started, this guide will equip you with the fundamental Docker concepts and hands-on skills needed to containerize your Node.js and Express projects with confidence.
What is Docker anyway?
Docker is a tool designed to make it easier to create, deploy, and run applications by using containers. Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and deploy it as one package.
Why Use Docker with Node.js and Express?
Managing dependencies and ensuring consistent behavior across different development and production environments can be a daunting task. This is where Docker comes to the rescue.
By containerizing your Node.js and Express applications with Docker, you can:
Isolate Dependencies: Docker containers encapsulate your application's dependencies, preventing conflicts and version mismatches between different projects.
Simplify Deployment: Docker's portability ensures that your application runs the same way in development, testing, and production environments.
Scale Effortlessly: Docker containers can be easily replicated and scaled horizontally to handle increased traffic and demand.
Collaborate Seamlessly: Share your Docker containers with team members, ensuring that everyone works in the same environment regardless of their local setup.
Get our hands dirty with practical stuff
Please have Docker installed on your system before you move forward.
Docker involves a few key steps to build and run a containerized application. Here are the main steps in simple terms:
Write Your Code: First, you write the code for your application just like you normally would.
Create a Dockerfile: You create a special text file called a "Dockerfile" that tells Docker how to set up your application's environment. It's like writing down a recipe for your application.
Build Your Docker Image: You use the Dockerfile to build an image. An image is like a snapshot of your application and its environment. It contains all the ingredients and instructions needed to run your app.
Run a Container: Once you have an image, you can create a container from it. A container is like a running instance of your application. It's where your app actually runs.
Use Your App: Now that your container is running, you can use your application just like you would with a regular program. It's isolated from your computer, so it won't mess up anything else.
Stop or Remove Containers: When you're done using your app, you can stop the container (which is like turning off the application) or remove it (like deleting it).
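Concretely, the steps above map onto just a few commands. Here is a rough sketch (the image and container names are made-up placeholders, and you need Docker running locally):

```shell
# Build an image from the Dockerfile in the current directory
docker build -t my-app .

# Run a container from that image, mapping host port 3000 to container port 3000
docker run -p 3000:3000 -d --name my-app-container my-app

# Stop the running container ("turn the application off")
docker stop my-app-container

# Remove the stopped container ("delete it")
docker rm my-app-container
```

Each of these is explained in detail as we go.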
To put it in even simpler terms, think of it like baking a cake:
Writing Code: You write down the cake recipe.
Creating a Dockerfile: You write down the steps to bake the cake in a special recipe card.
Building an Image: You use the recipe card to gather all the ingredients and bake the cake. Now you have a cake ready to be served.
Running a Container: You put the cake on a plate and start eating it. This is like running your application in a container.
Stopping or Removing Containers: When you're full and don't want to eat anymore, you can either stop eating (stop the container) or throw away the leftover cake (remove the container).
Docker helps you package your application and its environment into a neat, portable box (the image) and then run it wherever you want (the container). It's like having a magic kitchen that can make and serve your cake anywhere you go!
Let's see it in practical terms with our code:
Write Your Code
If you are already familiar with Node/Express, this section should be a breeze, so I'll move at a faster pace.
Create a new folder, run `npm init`, and keep pressing Enter to accept the defaults. You should have a `package.json` file now.
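At this point, `package.json` is little more than metadata. A minimal sketch of what it might contain (the exact values depend on what you typed during `npm init`):

```json
{
  "name": "my-node-app",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC"
}
```

Once you install Express, a dependencies section will be added to this file automatically.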
Create a new file called `index.js` and paste this code:
const express = require('express');
const app = express();

app.get('/', (_, res) => {
  res.send('Hello Shameel!');
});

app.listen(3000, () => {
  console.log('Example app listening on port 3000!');
});
Now install Express from the terminal:
npm install express
After installation completes, enter this in terminal:
node index.js
You should see this in terminal:
Example app listening on port 3000!
It's a simple app that sends Hello Shameel! when you hit http://localhost:3000/ in the browser.
Create a Dockerfile 🐳
All you have to do is create a file named Dockerfile and paste this:
FROM node:12.18.3-alpine3.12
WORKDIR /app
COPY package.json /app
RUN npm install
COPY . /app
EXPOSE 3000
CMD ["node", "index.js"]
Let's break down the Docker instructions:
FROM node:12.18.3-alpine3.12
Explanation: This line tells Docker to start with a base image of Node.js version 12.18.3, and specifically, it's using a lightweight version of Linux called Alpine with version 3.12.
Easier Explanation: Think of this as choosing a pre-made computer that already has Node.js installed. We're picking a computer that's small and fast (Alpine) and comes with Node.js version 12.18.3.
WORKDIR /app
Explanation: This sets the working directory inside the Docker container to a folder called "app."
Easier Explanation: Imagine you're working inside a virtual room, and this command says, "Let's work inside the 'app' room from now on."
COPY package.json /app
Explanation: This copies the "package.json" file from your computer (the one you're using to build the Docker container) into the "app" folder inside the Docker container.
Easier Explanation: It's like taking a piece of paper (package.json) from your desk and placing it inside the "app" room.
RUN npm install
Explanation: This tells Docker to run the "npm install" command inside the "app" folder of the container. It's installing all the software your Node.js application needs based on what's listed in the "package.json" file.
Easier Explanation: Imagine you have a recipe (package.json) that lists all the ingredients you need for a cake. This command is like going to the kitchen (the "app" room) and actually getting all those ingredients (installing them) so you can bake the cake later.
COPY . /app
Explanation: This copies all the files and folders (the entire current directory, represented by ".") from your computer into the "app" folder inside the Docker container.
Easier Explanation: It's like moving all your project files and folders, including your code, into the "app" room.
EXPOSE 3000
Explanation: This documents that the application inside the container listens on port 3000. Note that EXPOSE takes just a container port; the hostPort:containerPort format belongs to the -p flag of docker run, which is what actually publishes the port to your machine when the container starts.
Easier Explanation: When this container runs, it's like having a door labeled '3000' on the container. If you want to talk to whatever is inside the container, you should knock on this door (port 3000).
CMD ["node", "index.js"]
Explanation: This sets the default command that will run when the Docker container starts. In this case, it's running the "node index.js" command, which typically starts your Node.js application.
Easier Explanation: Think of it as setting up an automatic action for when you enter the "app" room. As soon as you step inside, someone (Docker) automatically starts your project using "node index.js."
So, all these instructions together are like giving Docker a set of steps to create a special workspace ("app" room), bring in the tools and ingredients (Node.js and dependencies), and then start your project when you enter that room. This way, anyone who has Docker can use this set of instructions to create the exact same environment for your Node.js application.
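Putting the pieces together, here is the same Dockerfile with each step annotated as a comment (using the plain EXPOSE 3000 form, since EXPOSE only needs the container port):

```dockerfile
# Start from a small Alpine-based Node.js base image
FROM node:12.18.3-alpine3.12

# Work inside /app in the container from now on
WORKDIR /app

# Copy package.json first so the npm install layer can be cached
COPY package.json /app

# Install the dependencies listed in package.json
RUN npm install

# Copy the rest of the project files into /app
COPY . /app

# Document that the app listens on container port 3000
EXPOSE 3000

# Start the app when the container starts
CMD ["node", "index.js"]
```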
Build Your Docker Image
Enter this command in the terminal:
docker build .
The docker build . command builds a Docker image based on the instructions provided in a Dockerfile located in the current directory (represented by the dot .).
Here's what this command does in more detail:
Dockerfile Required: Before using this command, you must have a Dockerfile in the same directory where you're running the command. The Dockerfile contains instructions on how to create the image, including what base image to use, what software to install, and how to configure the environment.
Build Process: When you run docker build . , Docker reads the instructions from the Dockerfile and follows them step by step. It starts with the base image specified in the Dockerfile and then executes each command in the Dockerfile to create layers on top of that base image.
Layered Approach: Docker uses a layered approach to build images. Each instruction in the Dockerfile adds a new layer to the image. This layering allows for efficient caching and reuse of layers, making subsequent builds faster if the Dockerfile hasn't changed.
Output: As Docker executes the instructions in the Dockerfile, you'll see output in your terminal showing the progress of the build. Docker will download necessary components, install software, and perform other tasks according to the instructions.
Final Image: Once all the instructions in the Dockerfile have been executed, Docker produces a final image. This image is a snapshot of your application and its environment, including all the dependencies and configurations specified in the Dockerfile.
Tagging: By default, Docker assigns a unique identifier (a long hexadecimal string) to the image it builds. However, you can also use the -t flag to give your image a more human-readable name and tag. For example: docker build -t myapp:1.0 .
Result: After the build process completes successfully, you'll have a Docker image that encapsulates your application and its environment. You can then use this image to create and run containers, which are instances of your application that can be executed in isolation.
After you run that command you will see an output like this:
=> [internal] load build definition from Dockerfile 0.1s
=> => transferring dockerfile: 157B 0.0s
=> [internal] load .dockerignore 0.1s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/node:12.18.3-alpine3.12 11.1s
=> [auth] library/node:pull token for registry-1.docker.io 0.0s
=> [1/5] FROM docker.io/library/node:12.18.3-alpine3.12@sha256:24d74460bbffb823e26129ea186ebab1678757e210a521e3358 30.8s
=> => resolve docker.io/library/node:12.18.3-alpine3.12@sha256:24d74460bbffb823e26129ea186ebab1678757e210a521e3
As you can see, you are on the first step out of five, and the build uses the Layered Approach discussed in point 3 above. Docker downloads the base image first and then executes the subsequent commands, which you will see like this:
[4/5] RUN npm install
[5/5] COPY . /app
At the end you will see something like this, which means your image is now built and ready:
writing image sha256:<some-string>
Basic Docker Image Operations
Now that you have a Docker image built, let's take a look at it.
List Image
Run this command:
docker image ls
You should see your image listed without any tag. It has no tag because we did not provide one.
Delete Image
Run this command:
docker image rm <image-id>
You can find the image ID from the docker image ls command as demonstrated earlier.
For my case, I'd give it like this:
docker image rm 3eeaca53075f
And I saw a response like this:
Deleted: sha256:3eeaca53075f9fda421fb006d8627e605f90e8a71d331c5e12d7a517c58a2daf
Build Docker Image With Tag
Use the -t flag with the name of the tag. Run this command:
docker build -t shameel-node-image .
Cool Fact: If you rebuild the image, you will see the magic of layering; these steps are already cached:
=> CACHED [2/5] WORKDIR /app
=> CACHED [3/5] COPY package.json /app
=> CACHED [4/5] RUN npm install
This means that npm install won't re-run every time you change your Node.js application code; only the changed files are copied. The install layer is rebuilt only when you add or remove a dependency, that is, when package.json changes.
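The caching works because package.json is copied before the rest of the code. A sketch of the ordering trade-off, written as comments (the "avoid" half is a hypothetical counter-example, not part of our Dockerfile):

```dockerfile
# Cache-friendly (the ordering our Dockerfile uses):
# COPY package.json /app   <- changes rarely
# RUN npm install          <- cached unless package.json changes
# COPY . /app              <- changes often, but comes last

# Cache-busting (avoid this ordering):
# COPY . /app              <- any code change invalidates this layer...
# RUN npm install          <- ...so npm install re-runs on every build
```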
Running a Container
Containers run from images.
Let's use the image we just built with the cool tag. You can run it with:
docker run -p 3000:3000 -d --name shameel-node-app shameel-node-image
Let's digest what that command does:
docker run: This tells Docker to run a container, which is like a virtual space to run your application.
--name shameel-node-app: This part lets you give your container a unique nickname; in this case, it is "shameel-node-app."
-d: This flag stands for "detached mode." It means that your container will run in the background, and you can continue to use your terminal for other tasks.
-p 3000:3000: This part specifies port mapping. It tells Docker to map port 3000 on your host machine (the computer where Docker is running) to port 3000 inside the container. This is done so that you can access services running inside the container via your host's port 3000.
shameel-node-image: This is the name of the Docker image you want to use to create your container. It's like the recipe you want to follow to make your application.
So, when you run this command, Docker will:
- Take the "shameel-node-image" (your application recipe).
- Create a new container from it, calling it "shameel-node-app" (like making a serving plate for your dish).
- Start running your application inside this container in the background, and you can continue using your terminal for other things.
It's as if you ordered a meal (the image), asked for it to be served on a plate (the container with the name "shameel-node-app"), and then enjoyed your meal without having to watch the chef (Docker) cook it in the kitchen (your computer).
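The two sides of -p don't have to match. For example, if port 3000 were already busy on your machine, you could publish the app on a different host port (8080 here is just an example):

```shell
# Map host port 8080 to the container's port 3000;
# the app is then reachable at http://localhost:8080/
docker run -p 8080:3000 -d --name shameel-node-app shameel-node-image
```

The container port (the right-hand side) must match what the app listens on; the host port (the left-hand side) is up to you.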
After that's done and dusted, you should see a weird string:
This weird string is the ID of the container.
For my case it is:
2d06a5f7e628c12b76cea2e99b3bf7e2485d3b0dd7a3e895b69a0221cce654a6
You can list the running containers with this command:
docker ps
So that's done. Now open this link in your browser:
http://localhost:3000/
And magically, you will see the Hello Shameel! response.
And if you have Docker Desktop, you can also see the app's logs there.
Importance of .dockerignore
Currently, we are copying everything into the image, but we don't want everything; in this case, node_modules and the Dockerfile should be left out. In production apps, we also avoid copying secrets and other sensitive files.
So, just as git has .gitignore, Docker has .dockerignore, which helps us avoid copying files we never want in our image.
Just create a .dockerignore file and write the names of the files/directories you want to ignore.
Like this:
node_modules
Dockerfile
.gitignore
.git
Conclusion
Here are the key takeaways and commands for Docker:
Key Takeaways:
Docker allows you to create, deploy, and run applications in containers, which package the application and its dependencies into a single, portable unit.
Using Docker with Node.js and Express can help you isolate dependencies, simplify deployment, scale effortlessly, and collaborate seamlessly with your team.
The Dockerfile is a crucial component that defines how your Docker image is built. It includes instructions to set up the environment, install dependencies, and run your application.
Docker images are snapshots of your application and its environment, and Docker containers are instances of those images that can be run in isolation.
Docker commands, such as docker build, docker run, docker image ls, and docker image rm, are used to build, run, list, and remove Docker images and containers.
Key Docker Commands:
docker build -t <image-name> . : Build a Docker image using a Dockerfile in the current directory and tag it with a name.
docker run -p <host-port>:<container-port> -d --name <container-name> <image-name> : Run a Docker container from an image, specifying port mapping, a container name, and the image name.
docker image ls : List Docker images on your system.
docker image rm <image-id> : Remove a Docker image by its ID.
docker ps : List running Docker containers on your system.
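To round off the lifecycle described at the start of this post, stopping and removing the container looks like this (using this tutorial's container name as the example):

```shell
# Stop the running container (like turning the application off)
docker stop shameel-node-app

# Remove the stopped container (like deleting it)
docker rm shameel-node-app
```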
Happy coding and containerizing! 🐳✨
Follow me for more such content:
LinkedIn: https://www.linkedin.com/in/shameeluddin/
Github: https://github.com/Shameel123