Introduction to K8s using Minikube
What is Kubernetes?
Kubernetes helps us manage containerized applications, which may contain hundreds or thousands of containers, across different deployment environments, for example physical machines, virtual machines, or cloud environments.
Kubernetes is a popular container orchestration platform that allows developers to deploy, manage, and scale containerized applications.
What is Minikube?
Minikube is an open-source tool that helps developers (us) to run a single-node Kubernetes cluster on our local machine.
```
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
```
The `minikube status` command is used to check the status of the Minikube cluster.
The output above shows the current status of the minikube cluster. Here’s what each line means:
- `type: Control Plane`: the minikube node is running the control plane, which means it has the ability to schedule and manage containers.
- `host: Running`: the virtual machine hosting the minikube cluster is currently running.
- `kubelet: Running`: the Kubernetes agent, kubelet, is running on the node and is responsible for managing containers there.
- `apiserver: Running`: the Kubernetes API server, which is responsible for managing the state of the cluster, is running.
- `kubeconfig: Configured`: the kubeconfig file, which contains the credentials needed to access the Kubernetes API server, has been properly configured.
Example of a K8s Setup for a Basic Web App using Flask
Create Docker Image from our Simple Flask Web App
Here's the basic file structure of the app below:
```
.
├── Dockerfile
├── app
│   ├── app.py
│   └── templates
│       └── index.html
├── docker-compose.yaml
└── requirements.txt
```
We'd use this Compose file to spin up n containers, each on its own host port, all from the same image.
```yaml
version: '3'
services:
  container1:
    image: burger
    ports:
      - "8080:6000"
  container2:
    image: burger
    ports:
      - "8081:6000"
  container3:
    image: burger
    ports:
      - "8082:6000"
  container4:
    image: burger
    ports:
      - "8083:6000"
  container5:
    image: burger
    ports:
      - "8084:6000"
```
To start up our containers, we'd use the `docker-compose up` command.
```python
from flask import Flask, render_template
import json, os, signal

app = Flask(__name__)

@app.route("/")
def index():
    return render_template('index.html')

@app.route('/exit', methods=['GET'])
def stopServer():
    os.kill(os.getpid(), signal.SIGINT)

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=6000)
```
Our basic Flask app has two endpoints: `/`, which renders the `index.html` file (below), and `/exit`, which will STOP THE SERVER (which will come in handy later).
```html
<!DOCTYPE html>
<html>
  <body>
    <iframe width="560" height="315"
            src="https://www.youtube.com/embed/9cPxh2DikIA"
            title="YouTube video player" frameborder="0"
            allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
            allowfullscreen></iframe>
  </body>
</html>
```
This HTML file is very simple and displays a YouTube video, which can be changed by updating the `src` attribute with another YouTube embed link. (I asked ChatGPT to create this simple HTML file for me.)
```dockerfile
FROM python:3.6
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
EXPOSE 8080
ENTRYPOINT ["python"]
CMD ["app/app.py"]
```
What does EXPOSE stand for?
In a Dockerfile, the `EXPOSE` instruction informs Docker which port the container is expected to listen on for incoming network connections.
In the given Dockerfile, EXPOSE 8080 is used to inform Docker that the container will listen for incoming connections on port 8080. This doesn’t actually publish the port, but it is a way of documenting which ports the container is expected to use.
To actually publish the port so that external clients can access the container, you will need to use the -p option when running the docker run command to bind the container’s port to a port on the host machine. For example, you could run the container with the command docker run -p 8080:8080 image_name to publish the container’s port 8080 to the host’s port 8080.
Note that the EXPOSE instruction does not actually start the container listening on the specified port, it simply documents which ports are expected to be used. The actual process of starting the container and listening on the specified port is handled by the application running inside the container.
Let's build the Image

In the terminal, from our current app directory, run the command below, where `name-of-image` is whatever name we choose for our image:

```
docker build -t "name-of-image" .
```
Start the Minikube Kubernetes Cluster
- Since we're running it in Docker

We're creating a virtual (containerized) environment for this cluster, the cluster that we're simulating on our local machine to test K8s. In our example, this cluster is created using Docker.
We use the `kubectl` command (on our local machine) to send information to the K8s cluster (the master node and control plane)! kubectl = Kube Control, our communication tool with the master node.
```
minikube start --driver=docker
```
With our cluster up and running, we can send it instructions, such as creating a deployment. How do we accomplish this using the `kubectl` command?
We want to use kubectl to create a **new deployment object** (remember, we work with these objects, which are then picked up by the K8s cluster). We can create objects with the `kubectl create` command.
We're going to create a deployment object, which is the most common object you will create. This object is automatically sent to the K8s cluster. We give it a name, then add the `--image=` option, which specifies which image should be used for the container.
Note: We can't use the local image that we created above. Instead, we have to upload our image to an online Docker image registry.
Let's push the image we created above to Docker Hub.
need to update here
Let's retag the image for the app we Dockerized above with our Docker Hub account name:
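A sketch of the retag-and-push steps (assuming the local image is named `burger`, as in the Compose file above, and the Docker Hub repo is `devinpowers/burger-repo`):

```
docker tag burger devinpowers/burger-repo
docker push devinpowers/burger-repo
```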
```
kubectl create deployment first-app --image=devinpowers/burger-repo
```
How to delete a Deployment
Use the command:

```
kubectl delete deployment name_of_deployment
```
```
kubectl delete deployment first-app
deployment.apps "first-app" deleted
```
Now if we check deployments in the terminal:
```
kubectl get deployment
```
```
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
first-app   1/1     1            1           2m22s
```
We have 1 ready Deployment.
Then if we run `kubectl get pods`:
```
(base) devinpowers@Devins-MacBook-Pro basic app % kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
first-app-7bbc4c4c49-2hvn5   1/1     Running   0          3m40s
```
We also have 1/1 Ready; our application is up and running. We can't reach it yet (more on that coming up). We can view the web browser dashboard UI using the command:

```
minikube dashboard
```
Video 190: Exposing a Pod
Creating a Service so we can visit the Flask app we made above!

- Port 6000, because that's the port our Flask app listens on inside the container!
```
kubectl expose deployment first-app --type=LoadBalancer --port=6000
```
```
kubectl get services
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
first-app    LoadBalancer   10.110.114.205   <pending>     8080:30386/TCP   82s
kubernetes   ClusterIP      10.96.0.1        <none>        443/TCP          69d
```
```
minikube service first-app
```
Output from the terminal:
```
|-----------|-----------|-------------|---------------------------|
| NAMESPACE | NAME      | TARGET PORT | URL                       |
|-----------|-----------|-------------|---------------------------|
| default   | first-app | 8080        | http://192.168.49.2:30386 |
|-----------|-----------|-------------|---------------------------|
Starting tunnel for service first-app.
|-----------|-----------|-------------|------------------------|
| NAMESPACE | NAME      | TARGET PORT | URL                    |
|-----------|-----------|-------------|------------------------|
| default   | first-app |             | http://127.0.0.1:52110 |
|-----------|-----------|-------------|------------------------|
```
This will open a link to our Web App!
Video 191: Restarting Containers
One of the big concepts and uses of Kubernetes is *managing* our containers. If, for example, we visited the `/exit` endpoint in our web app, like we did earlier, our Pod stops! We can see this when we type `kubectl get pods` into the terminal:
```
kubectl get pods
NAME                         READY   STATUS             RESTARTS      AGE
first-app-64d94d9ffc-9jt5x   0/1     CrashLoopBackOff   4 (38s ago)   6h57m
```
If we wait a little, we will see the Ready status update back to 1/1: Kubernetes restarted our Pod for us. We didn't have to do it manually.
Video 192: Scaling in Action
```
kubectl scale deployment/first-app --replicas=3
```
```
kubectl get pods
NAME                         READY   STATUS    RESTARTS        AGE
first-app-64d94d9ffc-2n6jm   1/1     Running   2 (2m55s ago)   4m52s
first-app-64d94d9ffc-p8nqd   1/1     Running   0               15s
first-app-64d94d9ffc-wkq5q   1/1     Running   0               15s
```
We can see that we now have 3 Pods running.
Video 193: Updating Deployments
We can update our deployments with a newer image. Let's see how we do this. First, let's update `index.html` to display a different YouTube video.
Then we need to rebuild our image.
```
docker build -t devinpowers/burger-repo .
```
Now we need to push our updated image to Dockerhub:
```
docker push devinpowers/burger-repo
```
```
Using default tag: latest
The push refers to repository [docker.io/devinpowers/burger-repo]
5b41cfe4b886: Pushing  11.96MB
5b41cfe4b886: Pushed
a316ba36abf6: Pushed
16697f967866: Layer already exists
a8b3ae1d334a: Layer already exists
7cdfdc39018d: Layer already exists
28c914fab499: Layer already exists
fef6f293382e: Layer already exists
ffd50287b468: Layer already exists
cba7a92f211b: Layer already exists
fe09b9981fd2: Layer already exists
dd5b48ca5196: Layer already exists
latest: digest: sha256:89725ca61cb1dc3cbf8babd0901563bf6f60ac1933c521906f4880a02d388a8c size: 2843
```
Now we can update our deployment using `kubectl set image`, followed by the deployment name, then the container name and the new image:
```
kubectl set image deployment/burger-app burger-repo=devinpowers/burger-repo
```
This will update the deployment.
Once we execute the `set image` command, we can go to the terminal and type `kubectl get deployments`:
```
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
burger-app   10/10   10           10          113m
```
Note that our web app deployment WILL ONLY UPDATE if we push the image with a different tag; let's do this below. Note: make sure we're in the directory containing the files we want to Dockerize.
```
docker build -t devinpowers/burger-repo:2 .
```
Now we can push this image with the tag at the end to our Dockerhub.
```
docker push devinpowers/burger-repo:2
```
With that finished, we can apply the `set image` command again, but this time include the tag:
```
kubectl set image deployment/burger-app burger-repo=devinpowers/burger-repo:2
```
We can see from the output after setting the image again that the deployment updated:
```
deployment.apps/burger-app image updated
```
This lets Kubernetes know that this is a new tag (version), so it re-downloads the image and restarts the containers based on it.
We can view the current updated status using the kubectl command below:
```
kubectl rollout status deployment/burger-app
```
```
deployment "burger-app" successfully rolled out
```
If we go back to our browser, we should see the updated web app with a different video! (It may take a minute or two.)
Video 194: Deployment Rollbacks & History
What is a Rollback in K8s?
A roll-back in kubectl for Kubernetes (K8s) deployments refers to the process of reverting a deployment to a previous version or state. It involves undoing the changes made in the current deployment and restoring the previous deployment configuration.
In kubectl, a roll-back can be performed using the `kubectl rollout undo` command, which is used to revert to the previous deployment revision. When this command is executed, Kubernetes will create a new revision of the deployment that has the same configuration as the previous revision. This new revision will then be rolled out to the cluster, replacing the current revision.
The roll-back command can be used with various options to control the behavior of the roll-back process, such as the number of revisions to undo, the namespace, and the deployment name.
Roll-backs are important in Kubernetes as they allow you to quickly recover from issues or errors introduced by a deployment. By rolling back to a previous version of the deployment, you can revert to a known good state and avoid downtime or other issues caused by a problematic deployment.
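As a sketch (using the `burger-app` deployment from above), the rollback commands look like this; the revision number at the end is hypothetical:

```
# Show the revision history of the deployment
kubectl rollout history deployment/burger-app

# Undo the latest rollout, reverting to the previous revision
kubectl rollout undo deployment/burger-app

# Or roll back to a specific revision from the history
kubectl rollout undo deployment/burger-app --to-revision=1
```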
Video 195: The Imperative vs The Declarative Approach
With the declarative approach, we use `.yaml` files for configuring everything, instead of the imperative approach we've used up to this point, where we type all the commands into the terminal.
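As a sketch of the difference, using commands from this guide, the same deployment can be created either way:

```
# Imperative: spell out each step as a command
kubectl create deployment first-app --image=devinpowers/burger-repo

# Declarative: describe the desired state in a file and apply it
kubectl apply -f deployment.yaml
```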
Video 196: Creating a Deployment Configuration File
Let's create a `deployment.yaml` file in our directory for our web application, as shown below:
```
.
├── Dockerfile
├── app
│   ├── app.py
│   └── templates
│       └── index.html
├── deployment.yaml
└── requirements.txt
```
Let's configure our Deployment YAML file:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: second-app-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: second-app
    spec:
      containers:
        - name: second-app
          image: devinpowers/burger-repo:latest
        # - name:
        #   image:
```
Here's a link to the reference docs for building YAML files for K8s.
Here is a summary of the above yaml configuration for our App.
- `apiVersion`: the Kubernetes API version of the Deployment object. In this case, it is the `apps/v1` API version.
- `kind`: the type of Kubernetes object being defined. In our case, it is a `Deployment`.
- `metadata`: the metadata associated with the Deployment object. It includes the name of the Deployment, which is `second-app-deployment`. We can name it whatever we like!
- `spec`: the desired state of the Deployment. It specifies the number of replicas (Pods), which is set to 1. The `template` field defines the Pod template that is used to create new Pods when necessary. It includes the labels the Pods will have (`app: second-app`) and a `containers` field that defines the container specification for the Pod. In this case, it defines a single container with the name `second-app` and the image `devinpowers/burger-repo:latest`.
There are also two commented out sections under the containers field. These can be uncommented and modified to add additional containers to the Pod!
Note: In Kubernetes (often abbreviated as “K8s”), a “replica” refers to a set of identical instances of a pod. A pod is the smallest deployable unit in Kubernetes, and it can contain one or more containers.
Video 197: Adding Pods and Container Specs
```
kubectl apply -f deployment.yaml
```
The `kubectl apply` command simply applies a configuration file to the connected cluster. You identify the file with the `-f` option (you can pass multiple `-f` options to apply several files at once), followed by the file name or the path to the file, e.g. `-f deployment.yaml`.
If we do this, we get an error in the output:
```
error: error validating "deployment.yaml": error validating data: ValidationError(Deployment.spec): missing required field "selector" in io.k8s.api.apps.v1.DeploymentSpec; if you choose to ignore these errors, turn validation off with --validate=false
```
We did this on purpose: selectors are an important concept in the Kubernetes world.
Video 198: Working with Labels & Selectors
How do we fix this?
In the spec of the Deployment, we must also include a `selector` key with `matchLabels`, and nested below it the two labels (`app` and `tier`) that we want this Deployment to match, as shown below.

- We want the Deployment to match (manage) the Pods carrying these labels
Note that deployments are dynamic objects
A Deployment continuously watches all the Pods that are out there and checks whether there are any Pods it should control. It selects the to-be-controlled Pods with a so-called selector! We will see selectors in many of the resources Kubernetes works with. There are different types of selectors; for the Deployment kind, we can use two different styles of selecting (`matchLabels` and `matchExpressions`).
This selector (below in our yaml file) is specifying that the Deployment should manage pods with the labels “app=second-app” and “tier=backend”. This means that the Deployment will only manage pods that have those exact labels.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: second-app-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: second-app
      tier: backend
  template:
    metadata:
      labels:
        app: second-app
        tier: backend
    spec:
      containers:
        - name: second-app
          image: devinpowers/burger-repo:latest
        # - name: ...
        #   image: ...
```
The first `spec` is for the overall Deployment. We add another `spec`, indented below `template` (on the same level as the template's `metadata`), and here we define how the Pod should be configured: the specification of the individual Pods created for this Deployment. We can define multiple containers using the `-` sign (dash); in YAML formatting, `-` denotes a list item.
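As a quick illustration of the YAML list syntax used in the `containers` field (the container names here are placeholders):

```yaml
containers:            # "containers" holds a list
  - name: first-one    # each "-" starts a new list item
    image: some-image
  - name: second-one
    image: other-image
```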
Here we're using the image `devinpowers/burger-repo:latest`, which is from Docker Hub; it needs to be an image on a registry. The `:latest` suffix is the tag (version) we used.
How can we now apply this Deployment? How can we make the cluster aware of it, and have it create the Deployment and Pod, and launch that container?
Now if we try:

```
kubectl apply -f deployment.yaml
```
We get output confirming that the deployment was created. And if we run `kubectl get deployments`, we can see our deployment is up and running:
```
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
second-app-deployment   1/1     1            1           101s
```
And if we run `kubectl get pods`, we can see the Pod up and running as well:
```
NAME                                     READY   STATUS    RESTARTS   AGE
second-app-deployment-86f4467b98-svvx9   1/1     Running   0          2m22s
```
All our configuration was done in the `deployment.yaml` file: the declarative approach.
At the moment we can't visit the app, because the Service is missing (more on this below).
Video 199: Creating a Service Declaratively
Let's create a Service resource for our app.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: second-app
  ports:
    - protocol: 'TCP'
      port: 80
      targetPort: 8080
    # - protocol: 'TCP'
    #   port: 443
    #   targetPort: 443
  type: LoadBalancer
```
- `spec`: this section specifies the desired state of the Service. It includes:
  - `selector`: how Kubernetes should select the Pods that the Service should route traffic to. In this case, it uses the label `app: second-app`.
  - `ports`: the ports that the Service should listen on and route traffic to. In this case, there is only one port specified:
    - `protocol`: the network protocol the port should use. In this case, it is set to `TCP`.
    - `port`: the port number that the Service should listen on. In this case, it is set to port 80.
    - `targetPort`: the port number that the Service should route traffic to on the Pods. In this case, it is set to port 8080.
  - `type`: the type of the Service. In this case, it is set to `LoadBalancer`. This means that the Service will be exposed externally using a cloud provider's load balancer, if available. This provides a stable IP address for the Service and distributes traffic across the Pods that the Service routes traffic to.
Services group and expose Pods. Remember that the Service object exposes Pods within the cluster or externally; we can't reach a Pod without a Service!
- Pods have an internal IP address by default, and it changes when a Pod is replaced
- Finding Pods is hard if the IP changes all the time
- Services group Pods and give them a shared, stable IP
- Services can allow external access to Pods
- The default (internal-only access) can be overridden
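As a sketch of that last point: if we omit `type` (or set it to `ClusterIP`, the default), the Service is reachable only from inside the cluster; types like `NodePort` or `LoadBalancer` override that to allow external access. A hypothetical internal-only variant of our Service could look like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend-internal   # hypothetical name for this internal-only variant
spec:
  selector:
    app: second-app
  ports:
    - protocol: 'TCP'
      port: 80
      targetPort: 8080
  type: ClusterIP          # the default: reachable only inside the cluster
```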
More on Services:
In Kubernetes, a Service is an abstraction layer that provides a stable network endpoint for accessing a set of pods. When a Service receives traffic from a client, it routes that traffic to one of the pods that it manages, based on a set of rules specified in the Service configuration.
In the YAML file above, the `ports` field is used to specify the port number that the Service should listen on, as well as the target port number on the Pods to which the Service should route the incoming traffic.
Here’s what the “port” and “targetPort” fields mean in the context of a Kubernetes Service:
Port: This is the port number that the Service should listen on for incoming traffic. When a client sends traffic to this port, the Service will route that traffic to one of the pods it manages, based on the Service’s routing rules.
TargetPort: This is the port number on the pods that the Service should route the incoming traffic to. When the Service receives traffic on its own port, it forwards that traffic to the target port on the selected pod.
For example, in the YAML file we have above:
```yaml
ports:
  - protocol: 'TCP'
    port: 80
    targetPort: 8080
```
This specifies that the Service should listen on port 80, and route traffic to the pods on port 8080. So, when a client sends traffic to the Service’s port 80, the Service will forward that traffic to one of the pods it manages on port 8080.
By using separate port numbers for the Service and the target port on the pods, Kubernetes makes it easy to expose services to the external world, while still allowing the internal components of the application to communicate with each other using their own ports.
```
kubectl apply -f service.yaml
```
Now if we run `kubectl get services`, we can see the Service we created from the `service.yaml` file above:
```
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
backend      LoadBalancer   10.108.251.190   <pending>     80:32097/TCP   52s
kubernetes   ClusterIP      10.96.0.1        <none>        443/TCP        6s
```
Note that the Service named `kubernetes` is always running!
Now we can open the Service with `minikube service` followed by the name we gave it:

```
minikube service backend
```
```
minikube service backend
|-----------|---------|-------------|---------------------------|
| NAMESPACE | NAME    | TARGET PORT | URL                       |
|-----------|---------|-------------|---------------------------|
| default   | backend | 80          | http://192.168.49.2:32097 |
|-----------|---------|-------------|---------------------------|
🏃  Starting tunnel for service backend.
|-----------|---------|-------------|------------------------|
| NAMESPACE | NAME    | TARGET PORT | URL                    |
|-----------|---------|-------------|------------------------|
| default   | backend |             | http://127.0.0.1:52355 |
|-----------|---------|-------------|------------------------|
🎉  Opening service default/backend in default browser...
❗  Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
```
What is TCP?
TCP (Transmission Control Protocol) is a protocol that governs the way data is transmitted over the Internet. It is one of the main protocols in the Internet protocol suite, which is a set of communication protocols that are used for transmitting data over networks.
TCP is a connection-oriented protocol, which means that it establishes a virtual connection between two devices before transmitting data. It ensures that data is delivered reliably, with error checking and correction mechanisms built in. TCP provides a mechanism for controlling the flow of data between two devices, so that one device does not overwhelm the other with too much data too quickly. It also provides a mechanism for retransmitting lost packets, so that data is not lost during transmission.
TCP is used by many applications on the Internet, including web browsers, email clients, and file transfer protocols. When a client sends a request to a server, TCP is used to establish a connection between the two devices. Once the connection is established, data can be transmitted back and forth between the client and server.
Overall, TCP is a critical protocol for ensuring reliable data transmission over the Internet, and it is an essential component of the modern digital infrastructure that powers the Internet.
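To make the connection-oriented behavior concrete, here is a minimal, self-contained sketch in Python using the standard `socket` module: a tiny TCP echo server on localhost (the port is chosen by the OS), and a client that must first establish a connection before any data flows. This is an illustration of TCP itself, not of anything Kubernetes-specific.

```python
import socket
import threading

def run_server(sock):
    conn, _ = sock.accept()        # block until a client connects
    with conn:
        data = conn.recv(1024)     # receive bytes from the client
        conn.sendall(data)         # echo them back unchanged

# Bind to a free port on loopback (port 0 lets the OS pick one)
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=run_server, args=(server,))
t.start()

# The client establishes a connection first, then sends data
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello over TCP")
reply = client.recv(1024)
client.close()
t.join()
server.close()

print(reply)  # b'hello over TCP'
```

The `sendall`/`recv` pair rides on TCP's reliability guarantees: the bytes arrive intact and in order, or the connection errors out.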
Extra: a high-level overview of the architecture of K8s using Minikube vs. using a cloud service like Azure

Extra notes on K8s architecture