Containers 101 for DBAs: 2-Building your First Database Container
As explained in the first article of this series, containers offer a lightweight, portable way to bundle and run applications, including databases, which can simplify development, testing, and deployment processes.
In this article we will go step by step through the basics of containers and deploy your first database containers for PostgreSQL, SQL Server, and MongoDB on your local machine.
A Basic Introduction
Before we build our database containers, we first need to understand where these containers will run and what that environment looks like, which brings us to container engines.
What is a Container Engine?
A container engine is the underlying software that provides the runtime environment for containers. It enables users to create, launch, and manage containers on a host system. Essentially, it’s what makes it possible to encapsulate applications into containers that can run consistently across different computing environments.
Core Functions of a Container Engine
– Image Management: Handling container images, which are immutable files with source code, libraries, dependencies, and other necessary parts to run an application.
– Container Orchestration: Managing the lifecycle of containers including creation, deletion, start, and stop.
– Resource Isolation: Ensuring that containers have access to resources (CPU, memory, I/O, network, etc.) without interfering with each other.
– Security: Providing mechanisms to secure containers and their communications.
– Networking: Managing network settings like network bridges, IPs, and ports to allow containers to communicate with each other and the outside world.
– Volume Management: Handling external storage attached to containers for data persistence.
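Several of these functions surface directly as flags on a single docker run invocation. A minimal sketch (the image, names, and limits below are illustrative, not from the article):

```shell
# Resource isolation: cap the container at half a CPU core and 256 MB of RAM.
# Networking: publish container port 80 on host port 8080.
# Volume management: mount a named volume so data outlives the container.
docker run -d \
  --name demo-web \
  --cpus 0.5 --memory 256m \
  -p 8080:80 \
  -v demo-data:/usr/share/nginx/html \
  nginx
```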
Popular Container Engines
1. Docker Engine: The most popular container engine, which simplifies the process of building and sharing containerized applications and microservices.
2. containerd: An industry-standard container runtime with an emphasis on simplicity, robustness, and portability. containerd is a core component of Docker Engine but can be used standalone in other contexts.
3. CRI-O: A lightweight container engine specifically designed for Kubernetes. It implements the Kubernetes Container Runtime Interface (CRI), allowing Kubernetes to use any OCI (Open Container Initiative)-compliant runtime.
4. Podman: An alternative to Docker Engine that is daemonless and designed to run on Linux systems. It can run containers and pods directly from the command line without requiring a container daemon. Podman is compatible with Docker but offers additional features like rootless containers.
5. rkt (pronounced “rocket”): Developed by CoreOS, rkt was designed for security and efficiency, though its development has been discontinued in favor of other projects like containerd and CRI-O.
Each container engine has its unique features and benefits, with Docker Engine being the most widely adopted for its ease of use and comprehensive toolset. However, the choice of a container engine can depend on specific needs such as integration with Kubernetes, security requirements, or the need for a lightweight solution.
We will use Docker as our container engine throughout this series; to be more precise, we will use Docker Desktop.
What is Docker Desktop?
Docker Desktop is an application that simplifies the process of building, sharing, and running containerized applications and microservices. It’s designed for developers who want to use Docker and Kubernetes on a Mac or Windows workstation. Docker Desktop includes the Docker Engine, Docker CLI client, Docker Compose, Docker Content Trust, Kubernetes, and Credential Helper.
Key Features of Docker Desktop
– Easy Setup: Docker Desktop provides an easy installation process for Docker on Mac and Windows, including everything you need to run containers and Kubernetes clusters on your local machine.
– Developer-Friendly: It offers an intuitive user interface (UI) that allows developers to manage their Docker images, containers, networks, and volumes, as well as view container logs and configure container settings.
– Docker and Kubernetes Integration: Docker Desktop includes a standalone Kubernetes server and client, integrated with the Docker CLI, allowing you to use Docker commands to deploy your applications on a Kubernetes cluster.
– Automatic Updates: It keeps your Docker and Kubernetes versions up-to-date, ensuring you have the latest features and security updates.
– Advanced Networking: Docker Desktop provides advanced networking features like custom networks and DNS-based service discovery, making it easier to simulate production environments.
– Volume Management: It offers robust volume management that allows for persistent data storage and sharing between containers and your host system, facilitating data persistence across container restarts.
– Cross-Platform: While Docker Desktop is available for Mac and Windows, it abstracts and standardizes container runtime environments across different operating systems, making it easier to develop and test applications that can run anywhere.
Use Cases for Docker Desktop
– Development and Testing: Docker Desktop is ideal for developers looking to develop and test applications in a containerized environment that mirrors production systems.
– Learning Docker and Kubernetes: It provides a comprehensive environment for learning Docker and Kubernetes concepts, CLI operations, and orchestration.
– Microservices and Application Packaging: Docker Desktop helps in packaging applications and microservices into containers, ensuring consistency across development, testing, and production environments.
Docker Desktop simplifies the complexity of containerization technology and is a key tool for developers looking to embrace containerized applications and microservices architectures. Its integrated Docker and Kubernetes support makes it an indispensable tool for modern software development practices.
Building Database Container
Step 1: Install Docker
First, you’ll need to install Docker, the most popular container platform. Docker allows you to build, run, and manage containers on your local machine and cloud environments.
– Go to the Docker website and download Docker Desktop for your operating system (Windows/MacOS/Linux).
– Follow the installation instructions for your OS.
Step 2: Understand Docker Basics
Before diving into databases, it’s crucial to understand a few Docker basics:
– Images: Blueprints for containers, including the application and its dependencies.
– Containers: Running instances of Docker images.
– Dockerfile: A text document that contains all the commands to build a Docker image.
– Docker Hub: A registry to find and share Docker images.
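The four concepts fit together like this: a Dockerfile builds an image, the image runs as a container, and Docker Hub is where base images come from. A minimal sketch (the file contents and tag name are illustrative; the postgres image really does auto-run .sql files placed in /docker-entrypoint-initdb.d):

```shell
# Dockerfile: build instructions for a custom image,
# based on the official postgres image from Docker Hub.
cat > Dockerfile <<'EOF'
FROM postgres:16
# Any .sql file in this directory runs automatically on first startup.
COPY seed.sql /docker-entrypoint-initdb.d/
EOF

echo "CREATE TABLE demo (id int);" > seed.sql

# Image: built from the Dockerfile in the current directory, then tagged.
docker build -t my-seeded-postgres .

# Container: a running instance of that image.
docker run -d --name seeded-pg -e POSTGRES_PASSWORD=secret my-seeded-postgres
```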
Step 3: Understand Docker Basic Commands
Docker commands are essential for managing containers, images, volumes, and networks. Here’s a list of the most important Docker commands that cover the basics of Docker usage, from images and containers to networking and management.
Working with Docker Images
– docker pull [image]: Downloads an image from a registry (like Docker Hub) to your local machine.
– docker build -t [image-name] .: Builds a Docker image from a Dockerfile in the current directory (the trailing dot is the build context), tagging it with a name.
– docker images: Lists all images currently stored on your local machine.
– docker rmi [image]: Removes one or more images from your local machine.
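A typical image workflow strings these together; a short sketch using the official alpine image as an example:

```shell
docker pull alpine   # download the image from Docker Hub
docker images        # the new image now appears in the local list
docker rmi alpine    # remove it again (only once no container uses it)
```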
Managing Containers
– docker run [options] [image] [command]: Creates and starts a container from an image.
– Options like -d for detached mode, --name to name the container, -p for port mapping, -e for environment variables, and --rm to automatically remove the container when it exits.
– docker ps (and docker ps -a): Lists running containers (and all containers, including stopped ones).
– docker stop [container]: Stops a running container.
– docker start [container]: Starts a stopped container.
– docker restart [container]: Restarts a container.
– docker rm [container]: Removes one or more containers.
– docker logs [container]: Fetches the logs of a container.
– docker exec -it [container] [command]: Executes a command inside a running container (often used to access a bash shell with bash or sh as the command).
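A typical container lifecycle, end to end, might look like this (the name and image are illustrative):

```shell
# Create and start a long-running container in the background.
docker run -d --name lifecycle-demo nginx

docker ps                          # shows lifecycle-demo running
docker logs lifecycle-demo         # fetch its startup output
docker exec -it lifecycle-demo sh  # open an interactive shell inside it

docker stop lifecycle-demo         # stop it...
docker start lifecycle-demo        # ...and start it again
docker rm -f lifecycle-demo        # -f force-removes a running container
```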
Managing Networks
– docker network ls: Lists all networks on your Docker host.
– docker network create [options] [name]: Creates a new Docker network.
– docker network rm [network]: Removes one or more Docker networks.
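Containers attached to the same user-defined network can reach each other by container name through Docker's built-in DNS. A sketch with illustrative names:

```shell
docker network create app-net

# Two containers joined to the same network.
docker run -d --name db  --network app-net -e POSTGRES_PASSWORD=secret postgres
docker run -d --name app --network app-net alpine sleep 3600

# From inside "app", the hostname "db" resolves to the database container.
docker exec app ping -c 1 db

# Note: removing the network fails while containers are still attached.
docker rm -f app db
docker network rm app-net
```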
Managing Volumes
– docker volume ls: Lists all volumes.
– docker volume create [name]: Creates a new volume.
– docker volume rm [volume]: Removes one or more volumes.
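Volumes are what make database containers practical: the container is disposable, the data is not. A sketch using the postgres image, whose data directory is /var/lib/postgresql/data:

```shell
docker volume create pgdata

# Mount the volume at PostgreSQL's data directory.
docker run -d --name pg-persist \
  -e POSTGRES_PASSWORD=secret \
  -v pgdata:/var/lib/postgresql/data \
  postgres

# Even after the container is removed, the data survives in the volume...
docker rm -f pg-persist

# ...and a new container can pick up exactly where the old one left off.
docker run -d --name pg-persist2 \
  -e POSTGRES_PASSWORD=secret \
  -v pgdata:/var/lib/postgresql/data \
  postgres
```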
Docker Compose Commands
If you’re using Docker Compose to manage multi-container applications:
– docker-compose up: Starts and runs the entire app defined in a docker-compose.yml file, building images if necessary.
– docker-compose down: Stops and removes containers, networks, volumes, and images created by up.
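A minimal docker-compose.yml for a single PostgreSQL service might look like the following (service and volume names are illustrative; written via a heredoc here so the whole example is one shell session):

```shell
cat > docker-compose.yml <<'EOF'
services:
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: mysecretpassword
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
EOF

docker-compose up -d   # start the stack in the background
docker-compose down    # tear it down again
```

On recent Docker versions the same commands are also available as the docker compose (space, not hyphen) plugin.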
System and Configuration
– docker info: Displays system-wide information about Docker.
– docker version: Shows the Docker version information.
This is a foundational set of commands that should help you get started with Docker, manage your containers and images, and maintain your Docker environment effectively. Each command has a range of options and variations, so consulting the Docker documentation or using the --help flag with any Docker command can provide more detailed information.
Step 4: Running Your First Container
Let’s start with something simple. Open a terminal (or PowerShell on Windows) and run:
docker run hello-world
This command downloads the `hello-world` image (if it’s not already locally available) and runs it in a container. You’ll see a welcome message indicating that Docker is working correctly.
Step 5: Running a PostgreSQL Container
1. Pull the PostgreSQL image from Docker Hub:
Open a new terminal/session and run:
docker pull postgres
2. Run the PostgreSQL container:
docker run --name my-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres
This command starts a PostgreSQL container named my-postgres with a specified password. The -d flag runs the container in detached mode (in the background).
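To verify the container is up, you can open a psql session inside it; postgres is the image's default superuser:

```shell
# Open an interactive psql session inside the running container.
docker exec -it my-postgres psql -U postgres

# Or run a one-off query non-interactively:
docker exec my-postgres psql -U postgres -c "SELECT version();"
```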
Step 6: Running a SQL Server Container
1. Pull and run the SQL Server 2022 image from Docker Hub (docker run pulls the image automatically if it is not already available locally):
docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=YourStrong(!)Password' --name 'my-sqlserver' -d mcr.microsoft.com/mssql/server:2022-latest
This command starts a SQL Server container named my-sqlserver in detached mode. Note that the SA password must meet SQL Server's complexity requirements, or the container will stop shortly after starting.
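You can verify the instance from inside the container with sqlcmd. On the 2022 image the tools typically live under /opt/mssql-tools18 (older images use /opt/mssql-tools, without the -C flag), so adjust the path if needed:

```shell
# -C trusts the container's self-signed certificate (mssql-tools18 only).
docker exec -it my-sqlserver /opt/mssql-tools18/bin/sqlcmd \
  -S localhost -U sa -P 'YourStrong(!)Password' -C \
  -Q "SELECT @@VERSION"
```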
Step 7: Running a MongoDB Container
1. Pull the MongoDB image from Docker Hub:
Open a new terminal/session and run:
docker pull mongo
2. Run the MongoDB container:
docker run --name my-mongo -d mongo
This command starts a MongoDB container named my-mongo in detached mode.
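To verify it, open a shell against the database; current mongo images ship the mongosh shell (older ones ship the legacy mongo shell instead):

```shell
# Open an interactive MongoDB shell inside the container.
docker exec -it my-mongo mongosh

# Or run a one-off command:
docker exec my-mongo mongosh --eval "db.runCommand({ ping: 1 })"
```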
Step 8: Interacting with Your Database Containers
You can interact with your database containers using various tools:
– PostgreSQL: Use the psql command-line tool, or a GUI like pgAdmin.
– SQL Server: Connect via SQL Server Management Studio (SSMS) or Azure Data Studio.
– MongoDB: Use the mongosh shell (mongo on older versions) or a GUI like MongoDB Compass.
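Note that the run commands above did not publish any ports, so GUI tools on your host cannot reach the containers yet. To connect from pgAdmin, SSMS, or Compass, start the containers with -p to map each database's default port onto the host (remove the earlier containers first, since the names will already be taken):

```shell
docker run --name my-postgres  -e POSTGRES_PASSWORD=mysecretpassword -p 5432:5432 -d postgres
docker run --name my-sqlserver -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=YourStrong(!)Password' -p 1433:1433 -d mcr.microsoft.com/mssql/server:2022-latest
docker run --name my-mongo     -p 27017:27017 -d mongo
```

Your tools can then connect to localhost:5432, localhost:1433, and localhost:27017 respectively.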
Step 9: Learning More
To deepen your knowledge:
– Explore the official Docker documentation to understand Docker concepts and commands.
– Dive into the documentation for each database system (PostgreSQL, SQL Server, MongoDB) to learn about configuration, optimization, and administration within a containerized environment.
– Experiment with creating Dockerfiles for custom images, for instance, to include database seed data or specific configurations.
Containerizing databases can be highly beneficial for development and testing, offering consistency across environments and simplifying deployment processes. Happy containerizing!