Title: Kubernetes Cookbook
Author: Kirill Kazakov
Publisher: Издательские решения
ISBN: 9785006465633
Under the hood, Buildx uses QEMU to emulate the target architectures. The build can take longer than usual because every instruction for a non-native platform has to run under emulation. After the build is complete, you can find out which architectures an image provides with the following command:
docker inspect auth-app | jq '.[].Architecture'
You need to install the jq tool to run this and the following commands. It is a command-line JSON processor that helps you parse and manipulate JSON data. On macOS, you can install it with Homebrew:
brew install jq
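On Debian- or Ubuntu-based systems, the system package manager works as well:
sudo apt-get install jq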
Running the inspect command above, you will get the following output:
"amd64"
You might notice that only one architecture is available. This is because Buildx uses the `--output=docker` exporter by default, which cannot export multi-platform images. Instead, a multi-platform image must be exported with the `--output=oci` type or, more simply, pushed to a registry with the `--push` flag. When you use this flag, Docker creates a manifest that lists all available architectures of the image and pushes it to the registry alongside the per-architecture images. When you later pull the image, the variant matching your architecture is chosen automatically. Let's check the manifest for the official Rust image (https://hub.docker.com/_/rust) on the Docker Hub registry:
docker manifest inspect rust:1.73-bookworm | jq '.manifests[].platform'
Why don't we specify a URL for the remote Docker Hub registry? Because the Docker CLI assumes a default registry, docker.io, so the command above is actually shorthand for:
docker manifest inspect docker.io/rust:1.73-bookworm | jq '.manifests[].platform'
You will see output like so:
{
  "architecture": "amd64",
  "os": "linux"
}
{
  "architecture": "arm",
  "os": "linux",
  "variant": "v7"
}
{
  "architecture": "arm64",
  "os": "linux",
  "variant": "v8"
}
{
  "architecture": "386",
  "os": "linux"
}
You can see that the Rust image supports four architectures. Roughly speaking, "arm" (v7) covers 32-bit ARM boards such as the Raspberry Pi, "386" covers 32-bit x86 systems, "amd64" covers 64-bit x86 systems, and "arm64" covers 64-bit ARM machines such as Apple's M-series chips.
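For reference, here is a minimal sketch of how such a multi-platform image could be built and pushed in a single step. The builder name, the repository (replace <your-dockerhub-user> with your own account), and the platform list are illustrative, not prescribed:
# Create a builder that supports multi-platform builds (needed only once)
docker buildx create --name multiarch --use
# Build for two architectures and push the result, together with its manifest, to the registry
docker buildx build --platform linux/amd64,linux/arm64 -t <your-dockerhub-user>/auth-app:latest --push .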
The Role of Docker in Modern Development
Docker has transformed modern software development by providing a standardized approach through containerization. This approach has made software development, testing, and operations more efficient. Docker builds container images for various hardware configurations, including traditional x86-64 and ARM architectures, and it integrates with many programming languages, making development and deployment more accessible and versatile for developers.
Docker is helpful both for individual development environments and for container orchestration and management. Organizations use Docker to streamline their software delivery pipelines, making them more efficient and reliable. Docker provides a comprehensive suite of containerization tools that affect software development at every stage.
Our journey doesn’t end with Docker alone as we navigate the complex world of modern development. The following section will explain the critical role of Kubernetes in orchestration and how it fits into the contemporary development landscape. Let’s explore how Kubernetes can orchestrate containerized applications.
Understanding Kubernetes’ Role in Orchestration
Building on what we have covered so far, we know that deploying a single container is straightforward. What Kubernetes brings to the table, as detailed earlier, is large-scale container orchestration, which is particularly beneficial in complex microservice and multi-cloud environments.
Kubernetes, often regarded as the cloud's operating system, has grown beyond its origins as Google's internal project into a cornerstone of containerized application orchestration. It automates the deployment, scaling, and management of containerized applications, and it is a portable, extensible, open-source, production-ready platform that powers some of the largest applications worldwide. Google, Spotify, The New York Times, and many other companies use Kubernetes at scale.
As microservices grow more complex, Kubernetes' vibrant community, including contributors from leading companies such as Google and Red Hat, continually enhances its capabilities and simplifies its management. Its active development mirrors the rapid evolution characteristic of open-source projects. Expect to hear Kubernetes discussed not only by IT professionals but also by people from diverse backgrounds, even those less familiar with technology.
Comparing Docker Compose and Kubernetes
Docker is a container platform, whereas Kubernetes is a platform for orchestrating containers. It is crucial to recognize that the two cater to distinct purposes. An alternative to Kubernetes, even if an incomplete one, is Docker Compose. It offers a simpler way to run multi-container Docker applications and has found its niche in local development environments; some fearless individuals even deploy it in production. To compare the two: Docker Compose is like a small forklift that moves containers, while Kubernetes is a cutting-edge logistics center comparable to the top-tier facilities in Amazon's warehouses, providing advanced automation and unparalleled container management at scale.
Docker Compose for Multi-Container Applications
With Docker Compose, you can define and run multiple containers. It uses a simple YAML file structure to configure the services. A service definition contains the configuration that is applied to each container. You can create and start all the services from your configuration with a single command.
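That single command is docker compose up; the -d flag starts all the services in the background:
docker compose up -d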
Let's enhance our auth-app application. Assume it requires in-memory storage for user data; we will use Redis for that. We also need a message broker to publish messages to a queue; RabbitMQ is the traditional choice. Let's create a "compose.yml" file with the following content:
version: "3"
services:
  auth-app:
    image: auth-app
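A minimal sketch of how the complete file might look; the Redis and RabbitMQ image tags and the depends_on wiring are assumptions for illustration:
version: "3"
services:
  auth-app:
    image: auth-app
    depends_on:  # assumption: start the backing services before the app
      - redis
      - rabbitmq
  redis:
    image: redis:7  # assumed tag
  rabbitmq:
    image: rabbitmq:3-management  # assumed tag, includes the management UI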