Syndicated from Learning to use Docker on stegriff.co.uk
Ever get that panicky feeling that the future is coming on and you're being left behind?
Well I was getting that feeling any time I read about Docker and Kubernetes, so I thought I'm way past due to start this journey!
Docker is a way of getting some software and all its dependencies, and running that software in a somewhat isolated environment.
That's my short definition, anyway.
Images are available from a registry, usually Docker Hub (https://hub.docker.com/), or Microsoft Container Registry.
An image is analogous to a software installer or package. You can use it to create a container on your machine. A container can be running or stopped. You could have many containers of the same image, each with different settings, and each can be started and stopped independently.
In writing, this sounds a bit clunky and heavy, but in practice it's pretty cool. With a few commands, you can get up and running with a complex package like MongoDB or Ghost. For one thing, this lets you test out a new tech to see whether it's a good fit without having to go through laborious setup. In production, containers form the basis for a reproducible, hardware-agnostic setup of services and microservices.
Setup on Windows
To install Docker on Windows, visit https://hub.docker.com/ and follow the download link for 'Docker Desktop'. This was formerly known as 'Docker for Windows' I think. At time of writing, the download action is a big banner, front and centre on the landing page.
When you first set up Docker Desktop, there's a tickbox for whether you want Linux containers or Windows containers. Leave it unticked to use Linux. You can easily switch between Linux and Windows mode at any time.
Some commands
Open a terminal, whether cmd, PowerShell, git bash... you can run just docker on its own to see help text, including a list of commands. What follows is my playbook of pertinent examples.
Images
Here are some commands for finding and managing Images on your machine. Each command line is prefaced with a comment to explain the intent:
# Search docker hub for an image
docker search nginx
# Download an image (doesn't create a container)
docker pull nginx
# List the images that you have on the machine
docker images
# Remove the downloaded nginx image
# (all running containers of that image have to be stopped first)
docker rmi nginx
Containers
After a container is created, you can identify it in future commands either by its whole name or by an unambiguous fragment of its ID. The IDs look like git commit hashes. If you don't assign a container a name, it will get a random one which seems to be based on [adjective]_[scientist]!
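To see why a fragment of the ID is enough, here's a toy sketch of the prefix matching involved. Plain shell, no Docker required, and the IDs are made up:

```shell
# Toy prefix matching over made-up container IDs (no Docker needed).
# Docker accepts any fragment that matches exactly one container ID.
ids="3f2b9c1d8e4a
3f7710a2b5c6
9d04e1f6a8b2"

match() {
  # Print the IDs that start with the given fragment
  echo "$ids" | grep "^$1"
}

match 3f    # ambiguous: two IDs start with 3f
match 3f2   # unambiguous: exactly one match
```

With a live daemon, the same idea means docker stop 3f2 works as long as only one container's ID starts with 3f2.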
# Create a container based on the nginx image
# Called 'my-server' (optional)
# Mapping container port 80 to host port 8080
docker create --name my-server -p 8080:80 nginx
# Start the container
# Detached by default; use -a to attach to its output, -i for interactive stdin
docker start my-server
docker start -ai my-server
# Stop a container
docker stop my-server
# List all running containers
docker ps
# List all containers, running and stopped
docker ps -a
# Delete a container
# Doesn't affect the image or any other containers of the same image
docker rm my-server
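One thing from that list worth dwelling on: the -p 8080:80 flag reads host-port:container-port, which is easy to get backwards. Here's a tiny sketch of how the mapping splits (the function is my own, purely illustrative; no Docker needed):

```shell
# -p takes HOST:CONTAINER, so 8080:80 means "requests to host
# port 8080 are forwarded to port 80 inside the container".
explain_mapping() {
  host="${1%%:*}"       # the part before the colon
  container="${1##*:}"  # the part after the colon
  echo "host $host -> container $container"
}

explain_mapping 8080:80   # prints: host 8080 -> container 80
```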
Notice the ps command, named after the Unix ps command which lists running processes. Confusingly, ps -a is the command that gets you a list of all existent containers, running or not.
For docker start, the useful flags are -a (attach to the container's output) and -i (interactive stdin); the -it combination you often see belongs to docker run and docker exec, where -t allocates a pseudo-terminal.
All-in-ones
docker run nginx is a shorthand command that will:
- Download the image (i.e. nginx) if it doesn't exist
- Create a new container based on the image
- Start the container
As such, you can supply many of the parameters from other commands to docker run, such as --name and -p for ports. Run docker run --help for a full list.
Be warned that docker run will always create a new container. So if you use it repeatedly for the same image, you'll end up with lots of different running containers of the same software! I find that ps -a is crucial for staying on top of this.
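To make that concrete, here's a sketch that tallies containers per image from some canned docker ps -a style lines. The sample data is made up and trimmed to two useful columns; with a live daemon you'd feed in real output instead:

```shell
# Tally containers per image from simplified ps-style lines
# (ID IMAGE STATUS). The second column is the image name.
sample='c1a2b3 nginx Up
d4e5f6 nginx Exited
a7b8c9 mongo Exited'

count_per_image() {
  echo "$sample" | awk '{ n[$2]++ } END { for (i in n) print i, n[i] }' | sort
}

count_per_image   # one "image count" pair per line
```

With Docker running, docker ps -a --format '{{.Image}}' | sort | uniq -c does the same job for real.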
On the bright side, once you have a given version of the base image on your machine, Docker will use the local copy rather than re-downloading the software. That holds even after the latest tag moves on upstream: docker run only pulls when the image is missing locally, so to pick up a newer version you have to docker pull it explicitly.
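One wrinkle behind that: an image reference without a tag implicitly means :latest, so docker pull nginx is really docker pull nginx:latest. A toy sketch of the defaulting rule (the function name is mine, and it ignores registries with ports):

```shell
# Split an image reference into name and tag, defaulting the tag
# to "latest" as Docker does when none is given.
resolve() {
  case "$1" in
    *:*) echo "${1%%:*} ${1##*:}" ;;
    *)   echo "$1 latest" ;;
  esac
}

resolve nginx        # prints: nginx latest
resolve nginx:1.25   # prints: nginx 1.25
```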
What can I do with a running container?
My primary way of interacting with these containers so far is to get them to expose their key functionality (website, database, whatever) to a port on the host machine - my PC.
Then I can use this connection to use the running software either in a browser or by connecting an app to the running database, for example.
Although the running software has a Linux distro at its base (generally Alpine), I haven't yet tried remoting into the Linux-y bit to mess around. Filed under 'L' for 'Later'.
More about containers
It turns out that a container actually has a virtual presence on your network!
You can find (way too much) information about a container with docker inspect container-name. Near the bottom of the long JSON result of that command is an IP address. I haven't learned much about what you can do with this.
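If you want just the IP, here's a sketch that digs it out of a canned, heavily trimmed fragment of inspect-style JSON (the address is made up; no Docker needed to run it):

```shell
# Extract "IPAddress" from a trimmed, made-up inspect fragment.
json='{ "NetworkSettings": { "IPAddress": "172.17.0.2" } }'

get_ip() {
  # Crude sed extraction; fine for a demo, use jq for real JSON
  echo "$json" | sed -n 's/.*"IPAddress": "\([^"]*\)".*/\1/p'
}

get_ip   # prints: 172.17.0.2
```

Against a live daemon, docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container-name goes straight to it.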
I'm aware that in multi-container setups using Docker Swarm and Kubernetes, these virtual network IPs are used for inter-container communication. Filed under 'L' for 'Laziness'.
Contain your enthusiasm
Whether or not containers are "the future" (for many, they are "the present") I think we need to get to grips with this paradigm. So I'm working on it! Thanks for reading this far, I hope you got something out of my newbie notes.