      RUST_LOG: info
      REDIS_HOST: redis
      REDIS_PORT: 6379
      RABBITMQ_HOST: rabbitmq
      RABBITMQ_PORT: 5672
  redis:
    image: redis:latest
    volumes:
      - redis:/data
    ports:
      - 6379
  rabbitmq:
    image: rabbitmq:latest
    volumes:
      - rabbitmq:/var/lib/rabbitmq
    environment:
      RABBITMQ_DEFAULT_USER: guest
      RABBITMQ_DEFAULT_PASS: guest
    ports:
      - 5672
volumes:
  redis:
  rabbitmq:
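Before starting anything, it can be useful to let Compose validate the file and print the fully resolved configuration. The standard subcommand for this is:
docker-compose config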
To run the two containers, use the following command:
docker-compose up
Often it’s practical to run the containers in the background:
docker-compose up -d
And follow the logs in the same terminal session:
docker-compose logs -f
To stop all of the Compose containers, use the following command:
docker-compose down
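While the stack is running, you can check which containers Compose manages and their current state with the standard status subcommand:
docker-compose ps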
Transitioning from Docker Compose to Kubernetes Orchestration
Migrating from Docker Compose to Kubernetes can offer several benefits and enhance the capabilities of your containerized applications. There are several reasons why Kubernetes can be a suitable option for this transition (a sketch of how one of the Compose services maps onto Kubernetes objects follows the list):
– Docker Compose is constrained by a single-host limitation: it can only deploy containers to one machine. Kubernetes, by contrast, is a platform that manages containers across multiple hosts.
– In Docker Compose, the failure of the host running containers results in the failure of all containers on that host. In contrast, Kubernetes employs a primary node to oversee the cluster and multiple worker nodes. If a worker node fails, the cluster can operate with minimal disruption.
– Kubernetes offers a rich feature set that can be extended with new components and functionality. Docker Compose can be extended to a limited degree, but it falls well short of Kubernetes in scope and ecosystem.
– With robust cloud-native support, Kubernetes facilitates deployment on any cloud provider. This flexibility has contributed to its growing popularity among software developers in recent years.
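As a concrete illustration, here is a minimal sketch of how the redis service from the Compose file above might be expressed as Kubernetes manifests. The object names, labels, and single replica are assumptions made for this example, not something defined in the original file:
# Deployment: keeps the desired number of redis Pods running
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:latest
          ports:
            - containerPort: 6379
---
# Service: gives the Pods a stable in-cluster DNS name (redis) on port 6379
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  selector:
    app: redis
  ports:
    - port: 6379
      targetPort: 6379
The named volume from the Compose file would typically map to a PersistentVolumeClaim, which is omitted here for brevity.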
Conclusion
This section discusses how software packaging has evolved from traditional methods to modern containerization techniques using Docker and Kubernetes. It explains the benefits and considerations associated with Docker Engine, Docker Desktop, Podman, and Colima. The book will further explore the practical aspects of encapsulating applications into containers, the importance of Docker in current development methods, and the crucial role Kubernetes plays in orchestrating containerized applications at scale.
Docker and Kubernetes: Understanding Containerization
Creating a Local Cluster with Minikube
Minikube is a tool that makes it easy to run Kubernetes locally. It simplifies the process by running a single-node cluster inside a virtual machine (VM) on your device, and it can also emulate a multi-node Kubernetes cluster. Minikube is the most widely used local Kubernetes cluster, a great way to get started with Kubernetes, and an excellent environment for testing Kubernetes applications before deploying them to a production cluster.
There are comparable alternatives to Minikube, such as the Kubernetes support built into Docker Desktop and Kind (Kubernetes in Docker), which also let you run Kubernetes clusters locally. However, Minikube is the most popular and the most straightforward of them: it is a single binary that you can quickly download and run, and it is available for Windows, macOS, and Linux.
Installing Minikube
To install Minikube, download the binary from the official website (https://minikube.sigs.k8s.io/docs/start/). For example, if you use macOS with an Intel chip, apply these commands:
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64
sudo install minikube-darwin-amd64 /usr/local/bin/minikube
If you prefer not to use the curl and sudo combination, you can use Homebrew:
brew install minikube
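Either way, you can confirm that the binary is installed and on your PATH by checking its version:
minikube version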
Configuring and Launching Your Minikube Cluster
You can start Minikube in the simplest way possible, with the default configuration:
minikube start
While the provided command is generally functional, it’s recommended to specify the Minikube driver explicitly so that you understand how the cluster will be provisioned. For instance, the Container Network Interface (CNI) is set to auto by default, which can lead to unforeseen consequences depending on the driver Minikube selects.
It’s worth noting that Minikube often selects the driver based on the underlying operating system configuration. For example, if the Docker service is running, Minikube might default to the Docker driver. Explicitly specifying the driver ensures a more predictable and tailored configuration for your specific needs.
minikube start --cpus=4 --memory=8192 --disk-size=50g --driver=docker --addons=ingress --addons=metrics-server
Most options are self-explanatory. The `--driver` option specifies the virtualization driver. By default, Minikube prefers the Docker driver, or a VM-based driver on macOS if Docker is not installed. On Linux, the Docker, KVM2, and Podman drivers are favored; however, you can use any of the seven currently available options. The `--addons` option specifies the list of add-ons to enable. You can list the available add-ons with the following command:
minikube addons list
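Add-ons can also be enabled after the cluster is already running; for example, to turn on the metrics-server add-on:
minikube addons enable metrics-server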
If you use Docker Desktop, make sure the virtual machine’s CPU and memory limits are at least as high as the values you pass to Minikube. Otherwise, you will get an error like:
Exiting due to MK_USAGE: Docker Desktop has only 7959MB memory, but you specified 8192MB.
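Rather than passing the resource flags on every start, you can persist them in Minikube’s configuration; the following uses the same example values as above:
minikube config set cpus 4
minikube config set memory 8192
minikube config set disk-size 50g
These values are then applied automatically by subsequent minikube start runs.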
Once