From b55e24056d857a27c9df95d55fa31e520d544e0e Mon Sep 17 00:00:00 2001
From: Michael Ingeman-Nielsen
Date: Mon, 20 Jan 2025 14:45:12 +0100
Subject: [PATCH] Fix markdown linting errors (excluding line length)
This fixes the existing markdownlint errors that are _not_ related to line length. Line-length errors will be handled separately, since word wrapping may be a controversial subject.
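Most of the fence changes below address markdownlint's MD040 rule (fenced code blocks should have a language). As a rough illustration of what the linter flags, here is a minimal sketch (not the actual markdownlint implementation) that locates unlabeled opening fences:

```python
import re

# An opening or closing fence: ``` or ~~~, optionally followed by an info string.
FENCE_RE = re.compile(r"^(```|~~~)\s*(\S*)\s*$")

def unlabeled_fences(markdown: str) -> list[int]:
    """Return 1-based line numbers of opening fences without a language tag."""
    hits = []
    open_fence = False
    for lineno, line in enumerate(markdown.splitlines(), start=1):
        m = FENCE_RE.match(line.strip())
        if not m:
            continue
        if open_fence:
            open_fence = False       # this fence closes the current block
        else:
            open_fence = True        # this fence opens a new block
            if not m.group(2):
                hits.append(lineno)  # opener with no language tag (MD040)
    return hits

sample = "intro\n```\nkubectl get pods\n```\n\n```shell\nkubectl get pods\n```\n"
print(unlabeled_fences(sample))  # -> [2]
```

The real rule also handles indented fences and tilde info strings; this sketch only demonstrates the common case fixed throughout this patch.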
---
CONTRIBUTION.md | 4 +-
README.md | 10 ++--
accessing-your-application.md | 7 ++-
configmaps-secrets.md | 61 ++++++++++----------
deployments-ingress.md | 101 +++++++++++++++++-----------------
desired-state.md | 33 ++++++-----
exercise-template.md | 2 +-
intro.md | 31 ++++-------
manifests.md | 6 +-
persistent-storage.md | 32 +++++------
rolling-updates.md | 5 +-
services.md | 22 ++++----
trainer-notes.md | 2 +-
13 files changed, 152 insertions(+), 164 deletions(-)
diff --git a/CONTRIBUTION.md b/CONTRIBUTION.md
index 96b8c84..9644232 100644
--- a/CONTRIBUTION.md
+++ b/CONTRIBUTION.md
@@ -10,10 +10,12 @@ The exercise should have the following sections:
* Extras and wrap-up (optional)
When creating a new exercise, you should use the [exercise template](exercise-template.md) as a starting point.
+
## Best practices
### Hints
-Use :bulb: `:bulb:` to indicate a hint to the exercise.
+
+Use :bulb: `:bulb:` to indicate a hint to the exercise.
### Dealing with text rich content
diff --git a/README.md b/README.md
index d12f186..213a069 100644
--- a/README.md
+++ b/README.md
@@ -1,7 +1,7 @@
-[![Open in Gitpod](https://gitpod.io/button/open-in-gitpod.svg)][gitpod]
-
# kubernetes-katas
+[![Open in Gitpod](https://gitpod.io/button/open-in-gitpod.svg)][gitpod]
+
A selection of [katas][kata-def] for Kubernetes (k8s).
The exercises are ordered in the way we think it makes sense to introduce Kubernetes concepts.
@@ -13,7 +13,7 @@ You can find a summary of many of the commands used in the exercises in the
> Please have a look at the [Setup](#setup) section if that is not the case.
> There are plenty of free and easy options.
-## Katas in suggested order:
+## Katas in suggested order
- [intro](intro.md)
- [desired-state](desired-state.md)
@@ -54,11 +54,11 @@ The commands above will enable kubectl autocompletion when you start a new bash
See: [Kubernetes.io - Enabling shell autocompletion][autocompletion] for more info.
-# Cheatsheet
+## Cheatsheet
A collection of useful commands to use throughout the exercises:
-```
+```shell
kubectl api-resources # List resource types
diff --git a/accessing-your-application.md b/accessing-your-application.md
index da00e76..b11392f 100644
--- a/accessing-your-application.md
+++ b/accessing-your-application.md
@@ -74,7 +74,7 @@ The pod is defined in the `frontend-pod.yaml` file.
You should see something like this:
-```
+```text
NAME READY STATUS RESTARTS AGE
frontend 1/1 Running 0 2m
```
@@ -105,15 +105,16 @@ Now we will deploy both the frontend and backend pods.
You should see something like this:
-```
+```shell
k get pods backend -o wide
+
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
backend 1/1 Running 0 11s 10.0.40.196 ip-10-0-35-102.eu-west-1.compute.internal
```
In this case the IP is `10.0.40.196`, but it will be different in your case.
-**Add environment variables to the frontend pod**
+#### Add environment variables to the frontend pod
- Open the `frontend-pod.yaml` file and add the following environment variables to the pod:
diff --git a/configmaps-secrets.md b/configmaps-secrets.md
index ea714ba..6c717ae 100644
--- a/configmaps-secrets.md
+++ b/configmaps-secrets.md
@@ -152,7 +152,7 @@ Step by step:
> :bulb: All files for the exercise are found in the `configmap-secrets/start` folder.
-**Add the database part of the application**
+#### Add the database part of the application
We have already created the database part of the application, with a deployment and a service.
@@ -180,7 +180,7 @@ frontend-5f9b5f46c8-jkw9n 1/1 Running 0 4s
postgres-6fbd757dd7-ttpqj 1/1 Running 0 4s
```
-**Refactor the database user into a configmap and implement that in the backend**
+#### Refactor the database user into a configmap and implement that in the backend
We want to change the database user into a configmap, so that we can change it in one place, and use it on all deployments that needs it.
@@ -195,7 +195,6 @@ data:
DB_NAME: quotes
```
-
:bulb: If you are unsure how to do this, look at the [configmap section](#configmaps) above.
@@ -205,7 +204,7 @@ If you are stuck, here is the solution:
This will generate the file:
-```
+```shell
kubectl create configmap postgres-config --from-literal=DB_HOST=postgres --from-literal=DB_PORT=5432 --from-literal=DB_USER=superuser --from-literal=DB_PASSWORD=complicated --from-literal=DB_NAME=quotes --dry-run=client -o yaml > postgres-config.yaml
```
@@ -226,39 +225,37 @@ data:
-
- apply the configmap with `kubectl apply -f postgres-config.yaml`
-
- In the `backend-deployment.yaml`, change the environment variables to use the configmap instead of the hardcoded values.
-Change this:
-
-```yaml
-env:
- - name: DB_HOST
- value: postgres
- - name: DB_PORT
- value: "5432"
- - name: DB_USER
- value: superuser
- - name: DB_PASSWORD
- value: complicated
- - name: DB_NAME
- value: quotes
-```
-
-To this:
-
-```yaml
-envFrom:
- - configMapRef:
- name: postgres-config
-```
+ Change this:
+
+ ```yaml
+ env:
+ - name: DB_HOST
+ value: postgres
+ - name: DB_PORT
+ value: "5432"
+ - name: DB_USER
+ value: superuser
+ - name: DB_PASSWORD
+ value: complicated
+ - name: DB_NAME
+ value: quotes
+ ```
+
+ To this:
+
+ ```yaml
+ envFrom:
+ - configMapRef:
+ name: postgres-config
+ ```
- re-apply the backend deployment with `kubectl apply -f backend-deployment.yaml`
- check that the website is still running.
-**Change the database password into a secret, and implement that in the backend.**
+#### Change the database password into a secret, and implement that in the backend
We want to change the database password into a secret, so that we can change it in one place, and use it on all deployments that needs it.
In order for this, we need to change the backend deployment to use the secret instead of the configmap for the password itself.
@@ -277,7 +274,7 @@ Help me out!
If you are stuck, here is the solution:
-```
+```shell
kubectl create secret generic postgres-secret --from-literal=DB_PASSWORD=complicated --dry-run=client -o yaml > postgres-secret.yaml
```
@@ -322,7 +319,7 @@ envFrom:
- Check that the website is still running.
-**Change database deployment to use the configmap and secret.**
+#### Change database deployment to use the configmap and secret
We are going to implement the configmap and secret in the database deployment as well.
diff --git a/deployments-ingress.md b/deployments-ingress.md
index 1211d15..da2d88a 100644
--- a/deployments-ingress.md
+++ b/deployments-ingress.md
@@ -142,35 +142,35 @@ Step by step:
In the directory we have the pod manifests for the backend and frontend that have created in the previous exercises.
We also have two services, one for the backend (type ClusterIP) and one for the frontend (type NodePort) as well as an ingress manifest for the frontend.
-**Add Ingress to frontend service**
+#### Add Ingress to frontend service
As it might take a while for the ingress to work, we will start by adding the ingress to the frontend service, even though we have not applied the service yet.
- Open the `frontend-ingress.yaml` file in your editor.
- Change the hostname to `quotes-..eficode.academy`. Just as long as it is unique.
- - the prefix normally is what is after your workstation-X..eficode.academy. If you are unsure, ask the trainer.
+ - the prefix normally is what is after your workstation-X.``.eficode.academy. If you are unsure, ask the trainer.
- Change the service name to match the name of the frontend service.
- Apply the ingress manifest.
-```
+```shell
kubectl apply -f frontend-ingress.yaml
```
Expected output:
-```
+```text
ingress.networking.k8s.io/frontend-ingress created
```
- Check that the ingress has been created.
-```
+```shell
kubectl get ingress
```
Expected output:
-```
+```text
NAME HOSTS ADDRESS PORTS AGE
frontend-ingress quotes-..eficode.academy 80 1m
```
@@ -178,7 +178,7 @@ frontend-ingress quotes-..eficode.academy 80
Congratulations, you have now added an ingress to the frontend service.
It will take a while for the ingress to work, so we will continue with the backend deployment.
-**Turn the backend pod manifests into a deployment manifest**
+#### Turn the backend pod manifests into a deployment manifest
- Deploy the frontend pod as well as the two services `backend-svc.yaml` and `frontend-svc.yaml`.
Use the `kubectl apply -f` command.
@@ -194,13 +194,13 @@ How do I connect to a pod through a NodePort service?
> :bulb: In previous exercises you learned how connect to a pod exposed through a NodePort service, you need to find the nodePort using `kubectl get service` and the IP address of one of the nodes using `kubectl get nodes -o wide`
> Then combine the node IP address and nodePort with a colon between them, in a browser or using curl:
-```
+```text
http://:
```
-**Turn the backend pod manifests into a deployment manifest**
+#### Apply the backend deployment manifest
- Open both the backend-deployment.yaml and the backend-pod.yaml files in your editor.
@@ -274,42 +274,42 @@ The same as the labels key in the metadata key of the pod template.
-**Apply the deployment manifest**
+#### Apply the deployment manifest
- Apply the deployment manifest, the same way we have applied the pod manifests, just pointing to a different file.
-```
+```shell
kubectl apply -f backend-deployment.yaml
```
Expected output:
-```
+```text
deployment.apps/backend-deployment created
```
- Check that the deployment has been created.
-```
+```shell
kubectl get deployments
```
Expected output:
-```
+```text
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
backend 1 1 1 1 1m
```
- Check that the pod has been created.
-```
+```shell
kubectl get pods
```
Expected output:
-```
+```text
NAME READY STATUS RESTARTS AGE
backend-5f4b8b7b4-5x7xg 1/1 Running 0 1m
```
@@ -318,7 +318,7 @@ backend-5f4b8b7b4-5x7xg 1/1 Running 0 1m
The url should look something like this:
-```
+```text
http://quotes-..eficode.academy
```
@@ -328,45 +328,45 @@ http://quotes-..eficode.academy
- If this works, please delete the `backend-pod.yaml` file, as we now have upgraded to a deployment and no longer need it!
-**Scale the deployment by adding a replicas key**
+#### Scale the deployment by adding a replicas key
- Scale the deployment by changing the replicas key in the deployment manifest.
Set the replicas key to 3.
- Apply the deployment manifest again.
-```
+```shell
kubectl apply -f backend-deployment.yaml
```
Expected output:
-```
+```text
deployment.apps/backend-deployment configured
```
- Check that the deployment has been scaled.
-```
+```shell
kubectl get deployments
```
Expected output:
-```
+```text
NAME READY UP-TO-DATE AVAILABLE AGE
backend 3/3 3 3 3m29s
```
- Check that the pods have been scaled.
-```
+```shell
kubectl get pods
```
Expected output:
-```
+```text
NAME READY STATUS RESTARTS AGE
backend-5f4b8b7b4-5x7xg 1/1 Running 0 2m
backend-5f4b8b7b4-6j6xg 1/1 Running 0 1m
@@ -380,7 +380,7 @@ backend-5f4b8b7b4-7x7xg 1/1 Running 0 1m
-**Turn frontend pod manifests into a deployment manifest**
+#### Turn frontend pod manifests into a deployment manifest
You will now do the exact same thing for the frontend, we will walk you through it again, but at a higher level, if get stuck you can go back and double check how you did it for the backend.
@@ -394,41 +394,41 @@ You will now do the exact same thing for the frontend, we will walk you through
The selector key should have a matchLabels key.
The matchLabels key should have a `run: frontend` key-value pair.
-**Apply the frontend deployment manifest**
+#### Apply the frontend deployment manifest
- First, delete the frontend pod.
-```
+```shell
kubectl delete pod frontend
```
Expected output:
-```
+```text
pod "frontend" deleted
```
- Apply the frontend deployment manifest.
-```
+```shell
kubectl apply -f frontend-deployment.yaml
```
Expected output:
-```
+```text
deployment.apps/frontend-deployment created
```
- Check that the deployment has been created.
-```
+```shell
kubectl get deployments
```
Expected output:
-```
+```text
NAME READY UP-TO-DATE AVAILABLE AGE
backend 3/3 3 3 2m41s
frontend 3/3 3 3 2m41s
@@ -436,13 +436,13 @@ frontend 3/3 3 3 2m41s
- Check that the pod has been created.
-```
+```shell
kubectl get pods
```
Expected output:
-```
+```text
NAME READY STATUS RESTARTS AGE
backend-5f4b8b7b4-5x7xg 1/1 Running 0 3m
backend-5f4b8b7b4-6j6xg 1/1 Running 0 2m
@@ -457,34 +457,33 @@ frontend-47b45fb8b-4x7xg 1/1 Running 0 1m
- If this works, please delete the `frontend-pod.yaml` file, as we now have upgraded to a deployment and no longer need it!
-
### Clean up
- Delete the deployments.
-```
+```shell
kubectl delete -f frontend-deployment.yaml
kubectl delete -f backend-deployment.yaml
```
- Delete the services
-```
+```shell
kubectl delete -f frontend-svc.yaml
kubectl delete -f backend-svc.yaml
```
- Delete the ingress
-```
+```shell
kubectl delete -f frontend-ingress.yaml
```
> :bulb: If you ever want to delete all resources from a particular directory, you can use: `kubectl delete -f .` which will point at **all** files in that directory!
-# Extra Exercise
+## Extra Exercise
Test Kubernetes promise of resiliency and high availability
@@ -499,7 +498,7 @@ This enables us to see which of a group of network-multitool pods that served th
Create the network-multitool deployment:
-```
+```shell
kubectl create deployment customnginx --image ghcr.io/eficode-academy/network-multitool --port 80 --replicas 4
```
@@ -507,7 +506,7 @@ We create the network-multitool deployment with the name "customnginx" and with
We also create a service of type `LoadBalancer`:
-```
+```shell
kubectl expose deployment customnginx --port 80 --type LoadBalancer
```
@@ -515,13 +514,13 @@ kubectl expose deployment customnginx --port 80 --type LoadBalancer
When the LoadBalancer is ready we setup a loop to keep sending requests to the pods:
-```
+```text
while true; do curl --connect-timeout 1 -m 1 -s ; sleep 0.5; done
```
Expected output:
-```
+```text
Eficode Academy Network MultiTool (with NGINX) - customnginx-7fcfd947cf-zbvtd - 100.96.2.36
Eficode Academy Network MultiTool (with NGINX) - customnginx-7fcfd947cf-zbvtd - 100.96.1.150
Eficode Academy Network MultiTool (with NGINX) - customnginx-7fcfd947cf-zbvtd - 100.96.2.37
@@ -534,13 +533,13 @@ None of the curl commands time out.
Now, if we kill three out of four pods, the service should still respond, without timing out.
We let the loop run in a separate terminal, and kill three pods of this deployment from another terminal.
-```
+```shell
kubectl delete pod customnginx-3557040084-1z489 customnginx-3557040084-3hhlt customnginx-3557040084-c6skw
```
Expected output:
-```
+```text
pod "customnginx-3557040084-1z489" deleted
pod "customnginx-3557040084-3hhlt" deleted
pod "customnginx-3557040084-c6skw" deleted
@@ -548,13 +547,13 @@ pod "customnginx-3557040084-c6skw" deleted
Immediately check the other terminal for any failed curl commands or timeouts.
-```
+```text
Eficode Academy Network MultiTool (with NGINX) - customnginx-59db6cff7b-4w4gf - 10.244.0.19
```
Expected output:
-```
+```text
Eficode Academy Network MultiTool (with NGINX) - customnginx-59db6cff7b-h2dbg - 10.244.0.21
Eficode Academy Network MultiTool (with NGINX) - customnginx-59db6cff7b-5xbjc - 10.244.0.22
Eficode Academy Network MultiTool (with NGINX) - customnginx-59db6cff7b-h2dbg - 10.244.0.21
@@ -569,13 +568,13 @@ Why is that?
It is because, as soon as the pods are deleted, the deployment sees that it's desired state is four pods, and there is only one running, so it immediately starts three more to reach the desired state of four pods.
And, while the pods are in process of starting, one surviving pod serves all of the traffic, preventing our application from missing any requests.
-```
+```shell
kubectl get pods
```
Expected output:
-```
+```text
NAME READY STATUS RESTARTS AGE
customnginx-3557040084-0s7l8 1/1 Running 0 15s
customnginx-3557040084-1z489 1/1 Terminating 0 16m
@@ -590,13 +589,13 @@ This proves, Kubernetes enables high availability, by using multiple replicas of
Remember to clean up the deployment afterwards with:
-```
+```shell
kubectl delete deployment customnginx
```
And delete the LoadBalancer service:
-```
+```shell
kubectl delete service customnginx
```
diff --git a/desired-state.md b/desired-state.md
index c748084..e56f68b 100644
--- a/desired-state.md
+++ b/desired-state.md
@@ -30,7 +30,7 @@ Kubernetes controllers are the components that fulfills your desired state. As t
Step by step:
-## Inspect existing Kubernetes manifest for a `deployment` object.
+## Inspect existing Kubernetes manifest for a `deployment` object
We have prepared a Kubernetes manifest for you.
@@ -64,42 +64,42 @@ spec:
- containerPort: 80 # port the container is listening on
```
-## Apply the manifest using the `kubectl apply`.
+## Apply the manifest using the `kubectl apply`
Use the `kubectl apply -f ` command to send the manifest with your desired state to Kubernetes:
-```
+```shell
kubectl apply -f desired-state/nginx-deployment.yaml
```
Expected output:
-```
+```text
deployment.apps/nginx applied
```
Verify that the deployment is created:
-```
+```shell
kubectl get deployments
```
Expected output:
-```
+```text
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 1/1 1 1 36s
```
Check if the pods are running:
-```
+```shell
kubectl get pods
```
Expected output:
-```
+```text
NAME READY STATUS RESTARTS AGE
nginx-431080787-9r0lx 1/1 Running 0 40s
```
@@ -122,13 +122,13 @@ First, find the name of your pod using `kubectl get pods`, like you did above.
The name will be something like `nginx-431080787-9r0lx`. **Yours will have a different, but similar name**.
-```
+```shell
kubectl delete pod nginx-431080787-9r0lx
```
Expected output:
-```
+```text
pod "nginx-431080787-9r0lx" deleted
```
@@ -140,26 +140,26 @@ Therefore Kubernetes must make a change to the state of the cluster to once agai
We use `kubectl get` to verify that a **new** nginx pod is created (with a different name):
-```
+```shell
kubectl get pods
```
Expected output:
-```
+```text
NAME READY STATUS RESTARTS AGE
nginx-431080787-tx5m7 0/1 ContainerCreating 0 5s
```
And after few more seconds:
-```
+```shell
kubectl get pods
```
Expected output:
-```
+```text
NAME READY STATUS RESTARTS AGE
nginx-431080787-tx5m7 1/1 Running 0 12s
```
@@ -170,11 +170,10 @@ You have also seen that Kubernetes keeps it's promise of fulfilling your desired
-
### Clean up
Delete your desired state by using the `kubectl delete -f ` command.
-```
+```shell
kubectl delete -f desired-state/nginx-deployment.yaml
-```
\ No newline at end of file
+```
diff --git a/exercise-template.md b/exercise-template.md
index efe288f..03db7a3 100644
--- a/exercise-template.md
+++ b/exercise-template.md
@@ -27,7 +27,7 @@ You can have several subsections if needed.
More Details
-**take the same bullet names as above and put them in to illustrate how far the student have gone**
+> **NOTE**: Take **the same bullet names as above** and put them in to illustrate how far the student has gone. Then **delete this line**.
- all actions that you believe the student should do, should be in a bullet
diff --git a/intro.md b/intro.md
index 61c74cf..f8ef150 100644
--- a/intro.md
+++ b/intro.md
@@ -1,11 +1,9 @@
# Kubernetes-introduction
-## Introduction
-
In this course, we will learn how to deploy applications in Kubernetes.
-We will start by deploying the entire quotes application the way it will be done when we are done with the course.
+We will start by deploying the entire quotes application the way it will be done when we are done with the course.
-# Deploying an application
+## Deploying an application
Our small example flask application that displays quotes.
@@ -20,7 +18,6 @@ For persistent storage, a postgresql database is used.
- Familiarize yourself with the quotes application.
- Access the application from outside the cluster through a service.
-
## Introduction
In Kubernetes, we run containers, but we don't manage containers directly. Containers are instead placed inside `pods`.
@@ -47,7 +44,6 @@ Declaratively means that we _declare what we want,_ and _not how_ Kubernetes sho
This declaration is what we call our `desired state`.
-
### Interacting with Kubernetes using kubectl
We will be interacting with Kubernetes using the command line.
@@ -69,10 +65,7 @@ To use it, type `kubectl ` in a terminal.
Step by step
-**take the same bullet names as above and put them in to illustrate how far the student have gone**
-
-## Inspect existing Kubernetes manifest for a `deployment` object.
-
+## Inspect existing Kubernetes manifest for a `deployment` object
We have prepared all the Kubernetes manifests that you need for the application to run.
@@ -89,7 +82,7 @@ Try to see if you can find information about:
Do not worry if you don't understand everything yet, we will go through it in detail later in the course.
-## Apply the manifest using the `kubectl apply`.
+## Apply the manifest using the `kubectl apply`
Use the `kubectl apply -f ` command to send the manifest with your desired state to Kubernetes:
@@ -99,7 +92,7 @@ kubectl apply -f quotes-flask/
Expected output:
-```
+```text
configmap/backend-config created
deployment.apps/backend created
service/backend created
@@ -120,7 +113,7 @@ kubectl get deployments
Expected output:
-```
+```text
NAME READY UP-TO-DATE AVAILABLE AGE
backend 1/1 1 1 27s
frontend 1/1 1 1 27s
@@ -129,7 +122,7 @@ postgres 1/1 1 1 27s
> :bulb: You might need to issue the command a couple of times, as it might take a few seconds for the deployment to be created and available.
-## Access the application from the Internet
+## Access the application from the Internet
We are getting a little ahead of our exercises here, but to illustrate that we actually have
a functioning application running in our cluster, let's try accessing it from a browser!
@@ -145,7 +138,7 @@ kubectl get service frontend
Expected output:
-```
+```text
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
frontend NodePort 10.96.223.218 80:32458/TCP 12s
```
@@ -162,7 +155,7 @@ kubectl get nodes -o wide
Expected output:
-```
+```text
NAME STATUS . . . INTERNAL-IP EXTERNAL-IP . . .
node1 Ready . . . 10.123.0.8 35.240.20.246 . . .
node2 Ready . . . 10.123.0.7 35.205.245.42 . . .
@@ -180,15 +173,14 @@ You should see the application in the browser now!
-Congratulations! You have deployed your first application in Kubernetes!
+Congratulations! You have deployed your first application in Kubernetes!
Easy, right :-)
-
### Clean up
To clean up, run the following command:
-```
+```shell
kubectl delete -f quotes-flask/
```
@@ -206,4 +198,3 @@ Then take a look at the service manifest, and see if you can find the following
- The name of the service
- The port the service listens on
-
diff --git a/manifests.md b/manifests.md
index a336b1d..4db0378 100644
--- a/manifests.md
+++ b/manifests.md
@@ -52,7 +52,7 @@ spec:# The desired state of the object
Step by step:
-### Write your own `pod` manifest.
+### Write your own `pod` manifest
- Go into the `manifests/start` directory.
- Open the `frontend-pod.yaml` file in a text editor.
@@ -107,11 +107,11 @@ spec:
-### Apply the `pod` manifest.
+### Apply the `pod` manifest
Try to apply the manifest with `kubectl apply -f frontend-pod.yaml` command.
-### Verify the `pod` is created correctly.
+### Verify the `pod` is created correctly
Check the status of the pod with `kubectl get pods` command.
diff --git a/persistent-storage.md b/persistent-storage.md
index 2f3bf77..23e00bf 100644
--- a/persistent-storage.md
+++ b/persistent-storage.md
@@ -32,7 +32,7 @@ In summary:
- Consume the PersistentVolume using a PersistentVolumeClaim (pvc) and mounting the volume to a pod
- Delete pod with volume attached and observe that state is persisted when a new pod is created
-### Step by step instructions:
+### Step by step instructions
@@ -59,14 +59,14 @@ In order to fix this, we need to persist the filesystem state of our database co
Use `kubectl` to get the available `StorageClasses` in the cluster, the shortname for `StorageClass` is `sc`:
-```
+```shell
kubectl get StorageClasses
```
Expected output:
-```
-AME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
+```text
+NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
gp2 (default) kubernetes.io/aws-ebs Delete WaitForFirstConsumer false 54m
```
@@ -141,38 +141,38 @@ spec:
Apply your new `PersistenVolumeClaim` with `kubectl apply`:
-```
+```shell
kubectl apply -f persistent-storage/start/postgres-pvc.yaml
```
Expected output:
-```
+```text
persistentvolumeclaim/postgres-pvc created
```
Check that the `PersistenVolumeClaim` was created using `kubectl get`:
-```
+```shell
kubectl get persistentvolumeclaim
```
Expected output:
-```
+```text
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
postgres-pvc Pending gp2 3m19s
```
Check if a `PersistentVolume` was created using `kubectl get`:
-```
+```shell
kubectl get persistentvolume
```
Expected output
-```
+```text
No resources found
```
@@ -255,7 +255,7 @@ Fill in the values:
In this case this should be `postgres-pvc`
- `mountPath` is the path in container to mount the volume to. For postgres, the database state is stored to the path `/var/lib/postgresql/data`
- `subPath` should be `postgres`, and specifies a directory to be created within the volume, we need this because of a quirk with combining `AWS EBS` with Postgres.
- (If you are curios why: https://stackoverflow.com/a/51174380)
+ (If you are curious why: <https://stackoverflow.com/a/51174380>)
The finished manifest should look like this
@@ -291,25 +291,25 @@ spec:
Apply the changes to the postgres deployment using `kubectl apply`:
-```
+```shell
kubectl apply -f persistent-storage/start/postgres-deployment
```
Expected output:
-```
+```text
deployment.apps/postgres configured
```
Observe that the `PersistentVolume` is now created:
-```
+```shell
kubectl get persistentvolumeclaims,persistentvolumes
```
Expected output:
-```
+```text
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
persistentvolumeclaim/postgres-pvc Bound pvc-60e5235b-e2bb-4d71-9136-3901ca4dece9 5Gi RWO gp2 3m55s
@@ -423,6 +423,6 @@ Now that the state of our postgres database is persisted to the volume, let's ve
Delete all resources created using `kubectl delete -f `
-```
+```shell
kubectl delete -f persistent-storage/start
```
diff --git a/rolling-updates.md b/rolling-updates.md
index fe28020..7735d1d 100644
--- a/rolling-updates.md
+++ b/rolling-updates.md
@@ -47,8 +47,7 @@ Step by step:
Now go ahead and `apply` the deployments and the services:
- `kubectl apply -f .`. This will apply all the files in the current directory
-
-* Access the frontend by the NodePort service
+- Access the frontend by the NodePort service
:bulb: How is it that you do that?
@@ -84,7 +83,7 @@ Now we will try to roll out an update to the backend image.
Expected output:
-```
+```text
Waiting for deployment "backend" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "backend" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "backend" rollout to finish: 1 out of 3 new replicas have been updated...
diff --git a/services.md b/services.md
index a1dcc03..6e973be 100644
--- a/services.md
+++ b/services.md
@@ -198,7 +198,7 @@ Hint: the apply command can take more than one `-f` parameter to apply more than
You should see something like this:
-```
+```text
NAME READY STATUS RESTARTS AGE
pod/backend 1/1 Running 0 28s
pod/frontend 1/1 Running 0 20s
@@ -221,7 +221,7 @@ Now that we have the pods running, we can create a service that will expose the
You should see something like this:
-```
+```text
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/backend ClusterIP 172.20.114.230 5000/TCP 23s
```
@@ -231,7 +231,7 @@ service/backend ClusterIP 172.20.114.230 5000/TCP 23s
You should see something like this:
-```
+```shell
root@frontend:/app#
```
@@ -239,19 +239,19 @@ Make sure that you are inside a pod and not in your terminal window.
- Try to reach backend pod through backend service `Cluster-IP` from within your frontend pod
-```sh
+```shell
curl 172.20.114.230:5000
```
You should see something like this:
-```
+```text
Hello from the backend!
```
- Try accessing the service using dns name now
-```sh
+```shell
curl backend:5000
```
@@ -275,7 +275,7 @@ You can type `exit` or press `Ctrl-d` to exit from your container.
You should see something like this:
-```
+```text
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
frontend NodePort 10.106.136.250 5000:31941/TCP 23s
service/backend ClusterIP 172.20.114.230 5000/TCP 23s
@@ -287,7 +287,7 @@ service/backend ClusterIP 172.20.114.230 5000/TCP 23
You should see something like this:
-```
+```text
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ip-10-0-33-234.eu-west-1.compute.internal Ready 152m v1.23.9-eks-ba74326 10.0.33.234 54.194.220.73 Amazon Linux 2
5.4.219-126.411.amzn2.x86_64 docker://20.10.17
@@ -305,13 +305,13 @@ Copy the port from your frontend service that looks something like `31941` and p
Alternatively, you could also test it using curl from your terminal window.
-```sh
+```shell
curl 34.244.123.152:31941 | grep h1
```
You should see something like this:
-```
+```text
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 3051 100 3051 0 0 576k 0 --:--:-- --:--:-- --:--:-- 595k
@@ -361,7 +361,7 @@ Try to apply the manifests again and write four commands that does the following
- List only the pods where label app is not frontend
- List only the pods where label app is not backend
-The documentation on this can be found here: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
+The documentation on this can be found here: <https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/>
Remember to clean up after you are done.
diff --git a/trainer-notes.md b/trainer-notes.md
index 0533666..776d116 100644
--- a/trainer-notes.md
+++ b/trainer-notes.md
@@ -1,5 +1,5 @@
# Kubernetes cluster
-# Writing and CI
+## Writing and CI
Automated tests can extract Kubernetes YAML from markdown files and automatically lint them. For this to work, annotate your multi-line code blocks with both `yaml` and `k8s`. Your multi-line code block will have as its first line ````yaml,k8s`.