Kubernetes Tutorial for Beginners: Introduction to K8s
This blog discusses the basic concepts of Kubernetes for beginners and software engineers who want to learn container orchestration with the most popular orchestration tool, also known as K8s. By the end of this tutorial, you will have gained a foundational understanding of Kubernetes, enabling you to interact confidently with Kubernetes clusters. You will also get hands-on experience with essential commands and practical scenarios.
Before delving into the fundamental concepts of Kubernetes, let's take a moment to preview some of the key ideas that will be covered in this discussion.
What is Containerization?
What is Kubernetes?
Kubernetes Architecture
Kubernetes Key Features
Setting up Kubernetes Cluster
Common Kubernetes Objects
Hands-on Experience with Kubernetes
What is Containerization?
As a beginner, think of Containerization as a way of packaging the source code, libraries, and dependencies of your application in a container, providing a self-contained and consistent environment for your application to run across various computing environments. Docker is the most popular containerization platform, allowing developers to create, deploy, and run applications in isolated containers.
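To make this concrete, here is a minimal sketch of a Dockerfile for a hypothetical Node.js application. The base image, file names, and port are illustrative assumptions, not part of this tutorial:

```dockerfile
# Illustrative base image choice
FROM node:20-alpine

# Copy the dependency manifest and install dependencies first (better layer caching)
WORKDIR /app
COPY package.json ./
RUN npm install

# Copy the rest of the application source
COPY . .

# Document the port the app listens on and define the startup command
EXPOSE 3000
CMD ["node", "server.js"]
```

Building this image (e.g., `docker build -t my-app .`) packages the code, libraries, and dependencies together, so the resulting container runs the same way on any machine with a container runtime.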
What is Kubernetes?
Kubernetes, often abbreviated as K8s, is a powerful open-source container orchestration platform designed to automate the deployment, scaling, and management of containerized applications.
Kubernetes provides a robust framework for efficiently managing containerized workloads and services across a cluster of machines. It was originally developed by Google and now maintained by the Cloud Native Computing Foundation (CNCF).
While Docker excels as a platform for building and packaging containerized applications, it focuses on the development and runtime aspects of containers, making it an ideal choice for individual developers and smaller-scale deployments. Kubernetes, on the other hand, specializes in container orchestration, providing robust features for automating the deployment, scaling, and management of containerized applications in large-scale environments.
The Kubernetes Architecture
Kubernetes follows a distributed and modular architecture designed for scalability, extensibility, and resilience. The key components of Kubernetes architecture include:
Node: A "Node" refers to a single machine, whether it's a physical server or a virtual machine, that is part of the Kubernetes cluster.
Master Node (Control Plane): The Master Node, also known as the Control Plane, is responsible for managing the overall state and configuration of the Kubernetes cluster. Its components include:
API Server: Acts as the entry point for all administrative tasks and serves as the frontend for the Kubernetes control plane.
Controller Manager: Ensures the desired state of the cluster, handling node and workload-related tasks.
Scheduler: Assigns nodes to newly created pods based on resource requirements and constraints.
etcd: A distributed key-value store used for storing the cluster's configuration data, ensuring high availability and consistency.
Worker Node (Data Plane): Worker Nodes are the machines where containers are deployed and run. They host the applications and workloads. Its components include:
Kubelet: Ensures that containers are running in a Pod on the node. It communicates with the master node.
Kube Proxy: Maintains network rules on nodes, facilitating communication between Pods and external traffic.
Container Runtime: Executes containers (e.g., Docker, containerd) and manages their lifecycle.
Pod: The smallest deployable unit in Kubernetes, representing one or more containers sharing the same network namespace and storage.
Kubernetes Key Features
Kubernetes is the leading container orchestration platform with key features that make it indispensable for modern application deployment:
Container Orchestration: Kubernetes automates deployment, scaling, and management of containerized applications.
Automated Scaling: It adjusts the number of running instances based on resource usage or custom metrics.
Self-Healing: Kubernetes ensures high availability by automatically restarting failed containers.
Service Discovery and Load Balancing: Kubernetes facilitates seamless communication between containers and distributes traffic for optimal performance.
Rolling Updates and Rollbacks: Kubernetes enables smooth deployment of new versions and quick rollbacks in case of issues.
Multi-Cloud and Hybrid Cloud Support: Kubernetes is cloud-agnostic, supporting deployment across various cloud providers or on-premises.
Declarative Configuration: It uses a declarative approach for defining the desired state of the system.
Extensibility and Ecosystem: Kubernetes is highly extensible with a rich ecosystem, supporting custom resources and operators.
Role-Based Access Control (RBAC): Kubernetes offers fine-grained access control through roles and permissions.
Together, these features make Kubernetes a powerful and versatile container orchestration platform, streamlining the deployment, scaling, and management of containerized applications.
Setting up Kubernetes Cluster
Creating a Kubernetes cluster from scratch is a non-trivial task, and there are various options and tools available, each with its considerations. In this blog, we will install Kubernetes locally using K3s, a lightweight distribution designed for ease of use. K3s is particularly suitable for local development and testing scenarios. You can explore other alternative tools like Kubeadm, Minikube, and KinD.
To install K3s, you can use a convenient script provided by Rancher, the organization behind K3s. Open a terminal and run the following command:
$ curl -sfL https://get.k3s.io | sh -
...
[INFO] Starting k3s
This output indicates that K3s has been successfully installed. You can now configure kubectl to use the K3s cluster by copying the auto-generated kubeconfig file. Copy the commands below and run them in your terminal:
$ mkdir -p ~/.kube
$ sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
$ sudo chown $USER:$USER ~/.kube/config
Set the KUBECONFIG environment variable to point to the copied config file:
$ export KUBECONFIG=~/.kube/config
You can add the export command to your shell profile (e.g., ~/.bashrc or ~/.zshrc) to make this configuration persistent across sessions.
Now you can use kubectl to interact with your local K3s cluster. Let's confirm with the command below:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k3s-server Ready control-plane,master 5m v1.28.7+k3s1
If the installation was successful, you should see a single node listed with the hostname of your machine. The node status should be "Ready", indicating that your Kubernetes cluster is now set up and ready for use!
Common Kubernetes Objects
Good job 👍. You've done well to get to this stage, but before we get our hands dirty with some practical work, let's understand a bit more about some important Kubernetes objects.
Pod:
A Pod is the smallest deployable unit in Kubernetes, representing a single instance of a running process.
Pods are the basic units that run applications in Kubernetes. They can contain one or more containers that share the same network namespace, storage, and have the capability to communicate with each other using localhost. Pods are often used to deploy tightly coupled application components.
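As an illustration, here is a minimal sketch of a Pod manifest with two containers sharing one network namespace. The Pod and container names are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
spec:
  containers:
  - name: web               # main application container
    image: nginx
    ports:
    - containerPort: 80
  - name: sidecar           # helper container in the same Pod
    image: busybox
    # Reaches the web container over localhost because both
    # containers share the Pod's network namespace
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 > /dev/null; sleep 30; done"]
```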
Service:
A Service is an abstraction that defines a logical set of Pods and a policy to access them, providing a stable endpoint for communication. Kubernetes supports different Service types, such as ClusterIP, NodePort, LoadBalancer and Headless Services.
Kubernetes Services enable networking within a Kubernetes cluster. By abstracting away the underlying Pods, Services facilitate load balancing, scaling, and service discovery.
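For example, a ClusterIP Service that routes traffic to Pods labelled app: nginx could be sketched like this (the Service name and label are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: ClusterIP        # internal-only virtual IP (the default type)
  selector:
    app: nginx           # forwards traffic to any Pod carrying this label
  ports:
  - port: 80             # port exposed by the Service
    targetPort: 80       # port the container listens on
```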
Namespace:
Namespace provides a way to divide cluster resources into virtual clusters, helping manage and isolate multiple projects or teams within the same Kubernetes cluster. Resources like Pods, Services, and ConfigMaps can exist in different namespaces, ensuring better isolation and resource management.
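A Namespace itself is a small manifest; here is a sketch using a made-up team name:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
```

Once created, resources can be placed in it by adding the -n flag to kubectl commands, e.g. `kubectl get pods -n team-a`.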
Deployment:
Deployments are a higher-level abstraction that allows you to describe the desired state of your application. They manage the creation and scaling of replica sets, ensuring that the specified number of Pods are running and handling updates or rollbacks with minimal downtime. A Deployment provides declarative updates to applications, managing the deployment and scaling of replica sets.
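A minimal Deployment manifest might look like the sketch below; the names and labels are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 3            # desired number of Pods
  selector:
    matchLabels:
      app: web           # must match the Pod template's labels
  template:              # Pod template stamped out for each replica
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
```

Changing `replicas` or the container image here and re-applying the file is how updates and scaling are expressed declaratively.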
Volume:
Volume is a directory containing data that can be shared among containers in a Pod, providing persistent storage beyond the Pod’s lifecycle.
Kubernetes Volumes allow data to persist across the lifecycle of a Pod. They can be used to share files between containers, store configuration data, or provide durable storage for applications. Kubernetes supports various types of volumes, including emptyDir, hostPath, ConfigMap, and PersistentVolume storage solutions.
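As a sketch, the Pod below uses an emptyDir volume so two containers can share files; all names are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: volume-demo
spec:
  volumes:
  - name: shared-data
    emptyDir: {}           # scratch space that exists as long as the Pod does
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo hello > /data/hello.txt && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data     # both containers mount the same volume
  - name: reader
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
```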
StatefulSet:
StatefulSets are used for applications that require stable network identities and persistent storage. They ensure that each Pod gets a unique and predictable hostname, allowing stateful applications like databases to maintain their state and identity even during scaling or rescheduling.
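A stripped-down StatefulSet sketch is shown below; the database image and names are illustrative assumptions:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db        # headless Service that gives each Pod a stable DNS name
  replicas: 2
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: postgres
        image: postgres:16
        env:
        - name: POSTGRES_PASSWORD
          value: example   # illustrative only; use a Secret in practice
```

The resulting Pods get predictable names (db-0, db-1) that survive rescheduling, which is what stateful applications rely on.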
NetworkPolicy:
NetworkPolicy is a Kubernetes resource that defines how Pods are allowed to communicate with each other and with other network endpoints. It allows fine-grained control over communication between Pods, specifying rules to permit or deny traffic based on factors like podSelector, namespaceSelector, and specific ports.
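For instance, the sketch below allows only frontend Pods to reach backend Pods on one port; the labels and port are hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
spec:
  podSelector:
    matchLabels:
      app: backend       # the policy applies to backend Pods
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend  # only frontend Pods may connect
    ports:
    - protocol: TCP
      port: 8080
```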
Ingress:
Ingress is an API object that manages external access to services within a cluster, handling HTTP and HTTPS traffic routing. It provides a way to expose services to the external world. It acts as a traffic manager, directing incoming requests based on defined rules. This allows for features like domain-based routing, SSL termination, and load balancing at the application layer.
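An Ingress rule can be sketched as follows; the hostname is a placeholder, and the backend assumes a Service like the nginx-service created later in this tutorial:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
  - host: example.local     # illustrative hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 80
```

Note that an Ingress only takes effect when an ingress controller is running in the cluster; K3s ships with Traefik by default.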
Hands-on Experience with Kubernetes
Now, let's dive into the practical aspect of Kubernetes using kubectl, the official command-line tool for interacting with Kubernetes clusters.
Deploying our First Pod:
Let's start by listing all pods in the default namespace:
$ kubectl get pods
No resources found in default namespace.
As expected, there are currently no pods in the default namespace. Let's create one:
$ kubectl run nginx-pod --image=nginx --port=80
pod/nginx-pod created
Check again to see if the pod is running:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-pod 1/1 Running 0 7s
Now, let's access the pod:
$ kubectl port-forward nginx-pod 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
Visit http://localhost:8080 in a web browser to see the result.
Managing Replicas with Deployments:
Moving forward, let's create a deployment imperatively:
$ kubectl create deployment nginx-deployment --image=nginx
deployment.apps/nginx-deployment created
Run the command below to scale the deployment, creating additional replicas of the pod:
$ kubectl scale deployment nginx-deployment --replicas=3
deployment.apps/nginx-deployment scaled
Exploring Services
After that, we will expose the deployment with a service.
$ kubectl expose deployment nginx-deployment --type=ClusterIP --name=nginx-service --port=80
service/nginx-service exposed
Now, retrieve information about all resources in the current (default) namespace:
$ kubectl get all --namespace default
NAME READY STATUS RESTARTS AGE
pod/nginx-deployment-6d6565499c-94brn 1/1 Running 0 43s
pod/nginx-deployment-6d6565499c-g7f6r 1/1 Running 0 35s
pod/nginx-deployment-6d6565499c-mg4hf 1/1 Running 0 35s
pod/nginx-pod 1/1 Running 0 54s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d23h
service/nginx-service ClusterIP 10.96.207.216 <none> 80/TCP 5s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx-deployment 3/3 3 3 43s
NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx-deployment-6d6565499c 3 3 3 43s
As seen, we have three running pods as part of the nginx-deployment, an additional standalone pod (nginx-pod), the newly created service (nginx-service), the deployment, and replicaset.
Next, map local port 8081 to the service's port 80. Any traffic directed to localhost:8081 on your machine will be forwarded to nginx-service on port 80 in the Kubernetes cluster:
$ kubectl port-forward service/nginx-service 8081:80
Forwarding from 127.0.0.1:8081 -> 80
Forwarding from [::1]:8081 -> 80
Visit http://localhost:8081 in a web browser to confirm.
Congratulations! You have deployed and accessed an application on a Kubernetes cluster. Before we call it a day, let's take a look at the simple declarative approach for creating a resource in Kubernetes.
Create a YAML file (e.g., nginx-pod.yaml) with the following content:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-declarative
spec:
  containers:
  - name: nginx-container
    image: nginx
Apply the YAML file to create the pod:
$ kubectl apply -f nginx-pod.yaml
pod/nginx-declarative created
Verify the creation of the declarative Pod:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-declarative 1/1 Running 0 93s
nginx-deployment-6d6565499c-94brn 1/1 Running 0 27m
nginx-deployment-6d6565499c-g7f6r 1/1 Running 0 27m
nginx-deployment-6d6565499c-mg4hf 1/1 Running 0 27m
nginx-pod 1/1 Running 0 67m
Anytime you modify the configuration file, re-run the apply command to update your Kubernetes cluster.
Deleting Resources
To clean up the resources created during the tutorial, use kubectl get all to identify the provisioned resources. Deleting a resource in Kubernetes involves using the kubectl delete command with the resource type and name. The basic syntax is as follows:
$ kubectl delete <resource_type> <resource_name>
For example, to delete the Pod with the name nginx-pod, you would run:
$ kubectl delete pod nginx-pod
pod "nginx-pod" deleted
To delete multiple resources based on a label selector, use the --selector flag. For instance, to delete all Pods with the label app=nginx, run:
$ kubectl delete pod --selector=app=nginx
Exercise caution with kubectl delete, as it irreversibly removes resources. Double-check the resource type and name before executing the command.
More kubectl Commands
Other useful kubectl commands include:
Describe: Display detailed information about a specific resource.
$ kubectl describe <resource_type> <resource_name>
Contexts: Switch between different Kubernetes clusters and namespaces.
$ kubectl config get-contexts
$ kubectl config use-context <context_name>
Logs: View logs from a running pod.
$ kubectl logs <pod_name>
Exec: Run commands in a running pod.
$ kubectl exec -it <pod_name> -- /bin/bash
Events: View cluster events for troubleshooting.
$ kubectl get events
kubectl provides a comprehensive set of commands for managing the entire lifecycle of Kubernetes resources.
With this hands-on experience using kubectl, you should now have a foundational understanding of interacting with Kubernetes.
Codegiant's CI/CD platform is entirely Kubernetes-native, simplifying the process of building, testing, and deploying across various cloud providers or on-premises systems. With built-in support for GitOps, Blue/Green deploys, Canary releases, and Rolling deploys, Codegiant offers a comprehensive set of features out of the box. Codegiant also offers a Visual pipeline builder, allowing you to spend less time navigating through lines of YAML and more time building. Positioned as the No. 1 true DevSecOps platform, Codegiant equips you with all the tools you need for faster development that leads to increased revenue. From Issue Tracker and Git Repositories to CI/CD, Codepods, Error & APM Tracing, Observability, Chaos Engineering, Uptime Monitoring, Status Pages, to Document Hub – Codegiant has you covered. Sign up for a free account at Codegiant to experience the true essence of a complete DevSecOps platform.
Key Takeaways
The tutorial provides foundational insights into containerization, Kubernetes architecture, key features, and hands-on experience using kubectl. Explore additional insightful tutorials and guides on navigating Codegiant effectively through our blog.