0-ALL KUBE NOTES
kubernetes commands cheatsheet - https://kubernetes.io/docs/reference/kubectl/cheatsheet/
master node -
etcd cluster - stores the information about containers in the cluster:
where and what containers are running
it stores data in a key-value store.
ETCDCTL is the CLI tool used to interact with ETCD.
ETCDCTL can interact with the ETCD Server using 2 API versions - Version 2 and Version 3. By default it's set to use Version 2. Each version has a different set of commands.
For example ETCDCTL version 2 supports the following commands:
etcdctl backup
etcdctl cluster-health
etcdctl mk
etcdctl mkdir
etcdctl set
Whereas the commands are different in version 3
etcdctl snapshot save
etcdctl endpoint health
etcdctl get
etcdctl put
To set the right version of the API, set the environment variable ETCDCTL_API
export ETCDCTL_API=3
When API version is not set, it is assumed to be set to version 2. And version 3 commands listed above don't work. When API version is set to version 3, version 2 commands listed above don't work.
Apart from that, you must also specify path to certificate files so that ETCDCTL can authenticate to the ETCD API Server. The certificate files are available in the etcd-master at the following path -
--cacert /etc/kubernetes/pki/etcd/ca.crt
--cert /etc/kubernetes/pki/etcd/server.crt
--key /etc/kubernetes/pki/etcd/server.key
So for the commands to work you must specify the ETCDCTL API version and path to certificate files. Below is the final form:
kubectl exec etcd-master -n kube-system -- sh -c "ETCDCTL_API=3 etcdctl get / --prefix --keys-only --limit=10 --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key"
--------------------------------------------------------------------------
kube scheduler - it decides which worker node a container is deployed on, based on two factors
1) how many resources (CPUs) the container needs (if a node has fewer CPUs than the pod requires, the scheduler won't deploy it on that node)
2) how many resources (CPUs) will be left after deploying the pod (if two nodes have 14 and 16 CPUs respectively and the pod needs 10 CPUs, the pod will be deployed on the 16-CPU node as it leaves more CPUs free)
--------------------------------------------------------------------------
kube controller manager - it manages replication and node control
node controller - continuously monitors the status of nodes and takes the necessary actions to keep the cluster healthy, and it does this through the kube api server (the node controller sends a request to the kube api server, which asks the kubelet for the status of the node)
the node controller gets the status of nodes every 5 seconds
if it stops getting heartbeats from a node, the node is marked as unreachable (node monitor grace period: 40 seconds)
the kube controller gives an unreachable node 5 mins to come back online; if it doesn't, all the pods that were running on that node are moved to healthy nodes.
replication controller - it is responsible for maintaining the desired number of replicas
--------------------------------------------------------------------------
kube api server - it is the front end of the kubernetes cluster; all requests go through it
in order to deploy a pod, kubernetes follows the steps below
1) user is authenticated by the kube api server
2) request is validated by the kube api server
3) kube api server creates a pod object without assigning a node, and this is recorded in the etcd cluster
4) then the user is informed that the pod is created
5) the scheduler continuously monitors the api server and when it sees a pod without a node, it assigns a node to the pod
6) kube api server updates the etcd cluster with the node
7) kube api server passes the information to the kubelet on the worker node assigned by the scheduler
8) kubelet then creates the pod and instructs the container runtime engine to deploy the application image
9) once deployment is done, kubelet reports the status back to the kube api server
10) the api server then updates the data in the etcd cluster
This whole process is repeated for every pod deployment request
the kube api server is the only service that talks directly to the etcd cluster; all other services go through the kube api server
--------------------------------------------------------------------------
worker node -
kubelet - this listens for instructions from the kube api server and manages containers on the node.
kube proxy - this maintains network rules on the nodes, enabling communication between pods across different worker nodes
--------------------------------------------------------------------------
command to create a pod from a image - kubectl run nginx --image=nginx
command to get the node on which a pod is placed - kubectl get pods -o wide
command to do a dry run and get a yaml from the run command - kubectl run redis --image=redis123 --dry-run=client -o yaml > pod.yaml (this will not create the pod but gives us the yaml for deploying it)
run command with a command for the container - kubectl run static-busybox --image=busybox --dry-run=client -o yaml --command -- sleep 1000 > pod.yaml
command to get all the options available in a pod yaml file - kubectl explain pod --recursive|less
commands to generate yml file - https://kubernetes.io/docs/reference/kubectl/conventions/
to deploy the pod from yaml (do changes to yaml file and run the command kubectl apply -f pod.yaml)
Example yaml to deploy two containers in a pod
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod   # this is the name of the pod
  labels:
    app: myapp
    anykey: anyvalue
    costcenter: US
spec:
  containers:
  - name: nginx-container   # this is the name of a container in the pod
    image: nginx            # this is the image that kubernetes pulls from docker hub
  - name: redis-container
    image: redis
Add the above lines to pod-definition.yaml (cat > pod-definition.yaml and paste the above)
and run the command - kubectl create -f pod-definition.yaml - this creates a pod
to see the pods - kubectl get pods
to see the status of a pod - kubectl describe pod <pod name>
command to see the pods with labels : kubectl get pods -l name=payroll
kubectl get pods --show-labels
to edit a pod- kubectl edit pod redis
before -
apiVersion: v1
kind: Pod
metadata:
  name: redis
  labels:
    app: my-redis-app
    cost-centre: US
spec:
  containers:
  - name: redis
    image: redis123
kubectl create -f pod-definition.yaml - now, as the image name is wrong, the pod will show an error saying the image could not be pulled
to fix this, edit the image name from redis123 to redis - kubectl edit pod redis (this opens the running config of the pod; go to the bottom, change the image name from redis123 to redis and save)
to get the logs of a pod command -kubectl logs <podname>
to run a shell command on a pod use command - kubectl exec --namespace=<namespace> <pod name> -- sh -c '<command>'
yaml with args to a container -
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod   # this is the name of the pod
  labels:
    app: myapp
    anykey: anyvalue
    costcenter: US
spec:
  containers:
  - name: ubuntu-sleeper
    image: ubuntu
    command: ["sleep"]
    args: ["10"]
    ports:
    - containerPort: 8080
    env:
    - name: APP_COLOR   # name of the environment variable
      value: blue       # value to be assigned to the env variable
    - name: APP_CLUSTER
      value: prod
OR
yaml with args to a container -
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod   # this is the name of the pod
  labels:
    app: myapp
    anykey: anyvalue
    costcenter: US
spec:
  containers:
  - name: ubuntu-sleeper
    image: ubuntu
    command:
    - "sleep"
    - "1200"
    ports:
    - containerPort: 8080
    env:
    - name: APP_COLOR   # name of the environment variable
      value: blue       # value to be assigned to the env variable
CONFIGMAP
to read environment variables from a file we create a ConfigMap
imperative way to create a configmap - kubectl create configmap <config name> --from-literal=<key>=<value>
then run command - kubectl create configmap app-config --from-literal=APP-COLOR=blue --from-literal=APP-CLUSTER=prod
declarative way -
cat > config-map.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  APP-COLOR: blue
  APP-CLUSTER: prod
kubectl create -f config-map.yaml
command to view config maps - kubectl get configmaps
command to describe config maps - kubectl describe configmaps
now we need to configure a pod with configmap we created
example - pod-definition.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod   # this is the name of the pod
  labels:
    app: myapp
    anykey: anyvalue
    costcenter: US
spec:
  containers:
  - name: webserver
    image: nginx
    command: ["nginx"]
    ports:
    - containerPort: 8080
    envFrom:
    - configMapRef:
        name: app-config
kubectl create -f pod-definition.yaml
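envFrom injects every key in the ConfigMap as an environment variable. To inject a single key, the container spec can use configMapKeyRef instead - a minimal sketch, assuming the app-config map created above:

```yaml
env:
- name: APP-COLOR
  valueFrom:
    configMapKeyRef:
      name: app-config   # the ConfigMap created earlier
      key: APP-COLOR     # the key inside that ConfigMap
```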
SECRETS
data stored in a secret is base64-encoded (encoded, not encrypted)
two steps to use a secret -
1 - create the secret, 2 - inject it into the pod
imperative way to create a secret
use command - kubectl create secret generic <secret name> --from-literal=<key>=<value>
create secret from file - kubectl create secret generic <secret name> --from-file=<path to file>
data in file -
<key1>=<value1>
<key2>=<value2>
<key3>=<value3>
declarative way -
example - cat > secret-definition.yaml
apiVersion: v1
kind: Secret
metadata:
  name: myapp-pod   # this is the name of the secret
  labels:
    app: myapp
    anykey: anyvalue
    costcenter: US
data:
  <key1>: <base64-encoded value1>
  <key2>: <base64-encoded value2>
  <key3>: <base64-encoded value3>
kubectl create -f secret-definition.yaml
command to view secrets - kubectl get secrets
command to describe secrets - kubectl describe secrets - this shows keys but not values
command to see the values in a secrets - kubectl get secret <secret name> -o yaml
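the values placed under data: in a secret yaml must be base64-encoded first; a quick sketch on the shell:

```shell
# encode a value before putting it in the data: section of the secret
echo -n 'blue' | base64                 # prints Ymx1ZQ==
# decode a value read back with kubectl get secret <secret name> -o yaml
echo -n 'Ymx1ZQ==' | base64 --decode    # prints blue
```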
pod-definition file to use secret -
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod   # this is the name of the pod
  labels:
    app: myapp
    anykey: anyvalue
    costcenter: US
spec:
  containers:
  - name: webserver
    image: nginx
    command: ["nginx"]
    ports:
    - containerPort: 8080
    envFrom:
    - secretRef:
        name: <secret name>
another way -
env:
- name: <env variable name>
  valueFrom:
    secretKeyRef:
      name: <secret name>
      key: <key in the secret>
INITCONTAINER -
When a POD is first created the initContainer is run, and the process in the initContainer must run to a completion before the real container hosting the application starts.
example yaml for init container -
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox:1.28
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
  initContainers:
  - name: init-myservice
    image: busybox:1.28
    command: ['sh', '-c', 'until nslookup myservice; do echo waiting for myservice; sleep 2; done;']
  - name: init-mydb
    image: busybox:1.28
    command: ['sh', '-c', 'until nslookup mydb; do echo waiting for mydb; sleep 2; done;']
  - name: init-myservice-2
    image: busybox
    command: ['sh', '-c', 'git clone <some-repository-that-will-be-used-by-application>']
ref - https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
--------------------------------------------------------------------------
Replication controller -
example yaml to create a replication controller (cat > rc-definition.yaml)
apiVersion: v1
kind: ReplicationController
metadata:
  name: rc-controller   # this is the name of the replication controller
  labels:
    app: my-app
    costCenter: US
spec:
  replicas: 3           # this is the number of replicas
  template:
    metadata:
      name: nginx-app   # this is the name of the pod
      labels:
        app: my-app
        costcenter: us
        user: hari
    spec:
      containers:
      - name: nginx-container
        image: nginx
command - kubectl create -f rc-definition.yaml
command to list the replication controllers - kubectl get replicationcontroller
--------------------------------------------------------------------------
Replicaset -
command to create a replicaset yaml - kubectl create deployment busybox --image=busybox --dry-run=client -o yaml > busybox.yaml (then change the kind to ReplicaSet)
example yaml to create a replica set (cat > rs-definition.yaml)
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: rs-controller   # this is the name of the replica set
  labels:
    app: my-app
    costCenter: US
spec:
  replicas: 3           # this is the number of replicas
  selector:             # must match the labels of the pod template
    matchLabels:
      costcenter: us
  template:
    metadata:
      name: nginx-app   # this is the name of the pod
      labels:
        app: my-app
        costcenter: us
        user: hari
    spec:
      containers:
      - name: nginx-container
        image: nginx
command - kubectl create -f rs-definition.yaml
a replicaset can adopt pods that were created before it; we just need to put their labels in selector -> matchLabels, then that pod also comes under the replicaset (THIS IS THE MAJOR DIFFERENCE BETWEEN REPLICATION CONTROLLER AND REPLICA SET)
command to list the replicasets - kubectl get replicaset
how to update the number of replicas after the replicaset is created -
method 1 -
use the command - kubectl scale --replicas=6 -f rs-definition.yaml or
use the command - kubectl scale --replicas=6 replicaset <replicaset name>
method 2 -
update the number of replicas from 3 to, let's say, 6 in rs-definition.yaml and run the command - kubectl replace -f rs-definition.yaml
--------------------------------------------------------------------------
Deployment - this creates a replica set, offers rollback, rolling update of pods
command to create a deployment yaml - kubectl create deployment httpd-frontend --image=httpd:2.4-alpine --dry-run=client -o yaml > dep.yaml
command to create a deployment - kubectl create deployment httpd-frontend --image=httpd:2.4-alpine
then scale using command - kubectl scale deployment --replicas=3 httpd-frontend
example yaml to create a deployment (cat > deploy-definition.yaml)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rc-deployment   # this is the name of the deployment
  labels:
    app: my-app
    costCenter: US
spec:
  replicas: 3           # this is the number of replicas
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      name: nginx-app   # this is the name of the pod
      labels:
        app: my-app
        costcenter: us
        user: hari
    spec:
      containers:
      - name: nginx-container
        image: nginx
command - kubectl create -f deploy-definition.yaml
kubectl get deployments
kubectl get replicaset - this will show replica set created by deployment
to see all object created in kubernetes use command - kubectl get all
commands to generate yml file - https://kubernetes.io/docs/reference/kubectl/conventions/
command to update the image of a deployment - kubectl set image deployment/<deployment name> nginx=nginx:1.9.1
ROLLING UPDATES AND ROLLBACKS
to check the rollout status of a deployment use command - kubectl rollout status deployment/<deployment name>
to check the rollout history of a deployment use command - kubectl rollout history deployment/<deployment name>
command to update the image of a deployment - kubectl set image deployment/<deployment name> nginx=nginx:1.9.1
command to rollback to an older version of a deployment - kubectl rollout undo deployment/<deployment name>
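how a rolling update replaces pods can be tuned in the deployment spec; a minimal sketch (the numbers here are illustrative, not from these notes):

```yaml
spec:
  strategy:
    type: RollingUpdate    # the default; the alternative is Recreate
    rollingUpdate:
      maxSurge: 1          # at most 1 pod above the desired replica count during the update
      maxUnavailable: 1    # at most 1 pod below the desired replica count during the update
```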
--------------------------------------------------------------------------
Namespace
command to create a namespace - kubectl create namespace <namespace name>
command to see the current namespaces in the cluster - kubectl get namespaces
or kubectl get ns or kubectl get ns --no-headers
to see the pods running in another namespace - kubectl get pods --namespace=<namespace name>
or kubectl -n <namespacename> get pods
to create a pod in another namespace - kubectl create -f pod-definition.yaml --namespace=<namespace name>
we can also mention the namespace in the yaml file under metadata -
example -
metadata:
  name: nginx-app   # this is the name of the pod
  namespace: namespace-name
  labels:
example yaml file to create a namespace (cat > namespace-definition.yaml) -
apiVersion: v1
kind: Namespace
metadata:
  name: dev
kubectl create -f namespace-definition.yaml
command to change the namespace from default to dev (your own namespace name) - kubectl config set-context $(kubectl config current-context) --namespace=dev
to view pods in all namespaces - kubectl get pods --all-namespaces
--------------------------------------------------------------------------
SERVICE in kubernetes
a service is an object in kubernetes that helps in communication between pods, and also helps users or a database outside kubernetes talk to pods
types of services -
NodePort - this makes an internal pod on a node accessible to the outside
nodeport range - 30000 - 32767
example yml - cat > service-definition.yaml -
apiVersion: v1
kind: Service
metadata:
  name: myapp-service   # this is the name of the service
  labels:
    app: my-app
    costCenter: US
spec:
  type: NodePort
  ports:
  - targetPort: 80
    port: 80
    nodePort: 30001
  selector:
    app: my-app
command - kubectl create -f service-definition.yaml
kubectl get services
command to create a nodeport yaml - kubectl expose deployment <name of deployment> --name=<service name> --target-port=<target port> --type=NodePort --port=<port> --dry-run=client -o yaml > service-definition.yaml
ex - kubectl expose deployment simple-webapp-deployment --name=webapp-service --target-port=8080 --type=NodePort --port=8080 --dry-run=client -o yaml > service-definition.yaml
ClusterIP - this creates a virtual ip inside the cluster that enables communication between different services in the cluster
front end service has 3 pods
backend 3 pods
key-value store 3 pods
in order to enable communication between them we use ClusterIP
example yml - cat > service-definition.yaml -
apiVersion: v1
kind: Service
metadata:
  name: back-end   # this is the name of the service
  labels:
    app: my-app
    costCenter: US
spec:
  type: ClusterIP
  ports:
  - targetPort: 80
    port: 80
  selector:
    app: my-app
command - kubectl create -f service-definition.yaml
kubectl get services
loadbalancer - nodeport is useful to expose a port to the outside, but the user needs to hit the ip of a node in order to reach the pod
ex -
if the nodes are on ips 192.168.1.2, 192.168.1.3, 192.168.1.4
then the user can reach the pods using curl http://192.168.1.2 or http://192.168.1.3 or http://192.168.1.4, but the end user needs a single ip to hit to reach the service
a load balancer does this.
This works only on native clouds like AWS, Azure or GCP
example yml - cat > service-definition.yaml -
apiVersion: v1
kind: Service
metadata:
  name: myapp-service   # this is the name of the service
  labels:
    app: my-app
    costCenter: US
spec:
  type: LoadBalancer
  ports:
  - targetPort: 80
    port: 80
    nodePort: 30001
  selector:
    app: my-app
--------------------------------------------------------------------------
IMPERATIVE - we give all the instructions that the machine needs to carry out
imperative way of managing objects in kubernetes commands-
create objects -
kubectl run nginx --image=nginx
kubectl run redis --image=redis --labels=tier=db
kubectl create deployment --image=nginx nginx
kubectl expose deployment nginx --port=80
kubectl expose pod redis --name redis-service --port=6379 --target-port=6379
kubectl create configmap <config name> --from-literal=<key>=<value>
pod
kubectl run nginx --image=nginx --dry-run=client -o yaml
kubectl run httpd --image=httpd --port=80 --expose --dry-run=client -o yaml
deployment
kubectl create deployment --image=nginx nginx --dry-run=client -o yaml
kubectl create deployment nginx --image=nginx --dry-run=client -o yaml > nginx-deployment.yaml
service
kubectl expose pod redis --port=6379 --name redis-service --dry-run=client -o yaml
kubectl create service clusterip redis --tcp=6379:6379 --dry-run=client -o yaml
kubectl expose pod nginx --port=80 --name nginx-service --type=NodePort --dry-run=client -o yaml
kubectl create service nodeport nginx --tcp=80:80 --node-port=30080 --dry-run=client -o yaml
update objects-
kubectl edit deployment nginx
kubectl scale deployment nginx --replicas=5
kubectl set image deployment nginx nginx=nginx:1.8
kubectl create -f nginx.yaml
kubectl replace -f nginx.yaml
kubectl replace --force -f nginx.yaml - this completely removes the object and recreates it
kubectl delete -f nginx.yaml
command to update the image of a deployment - kubectl set image deployment/<deployment name> nginx=nginx:1.9.1
DECLARATIVE - we state what we need and the machine handles how.
declarative way - it reads from a file and does what it needs to do, command -
kubectl apply -f nginx.yaml - this will create the object from the yaml file if the object doesn't exist and update the object if it already exists
command to get a yaml from running pod - kubectl -n <namespace> get <resource type> <resource Name> -o yaml
command to run a shell command on a pod - kubectl exec --namespace=<namespace> <pod name> -- sh -c '<command>'
command to open an interactive shell to a pod - kubectl exec -it <pod name> -- sh
command to explain a kind - kubectl explain pod --recursive|less
--------------------------------------------------------------------------
SCHEDULING
command to check if a scheduler is present - kubectl -n kube-system get pods (look for the kube-scheduler pod)
command to see all the components of kubernetes - kubectl -n kube-system get pods
if the scheduler is not present then a node won't be allocated to the pod and it will just stay in Pending state
Manual scheduling -
if we don't specify nodeName in the spec of a pod yaml, kubernetes automatically assigns a node if it has a scheduler, but if we want to assign a node to the pod ourselves we need to specify the nodeName in the spec of the yaml.
If the pod is already created and we then want to schedule it to a node, use a Binding object (cat > pod-bind.yaml) and POST it to the pod's binding API
apiVersion: v1
kind: Binding
metadata:
  name: <name of pod>
target:
  apiVersion: v1
  kind: Node
  name: <name of node>
--------------------------------------------------------------------------
LABELS and selectors
kubectl get pods --show-labels
command to select pods based on labels - kubectl get pods --selector <key>=<value>,<key2>=<value2> ex - kubectl get pods --selector region=US or
kubectl get pods -l <key>=<value>,<key2>=<value2>
--------------------------------------------------------------------------
TAINT AND TOLERATION -
If a taint is applied to a node, pods which don't have a toleration for that taint will not be launched on that node
by default pods don't have any toleration to any taint
taint- taint is added to node
command to taint a node - kubectl taint nodes <node name> key=value:<taint-effect>
there are 3 taint effects -NoSchedule | PreferNoSchedule | NoExecute
command to remove a taint - kubectl taint node <node name> <key>-
NoSchedule - pods will not be scheduled on the node
PreferNoSchedule - the scheduler tries not to schedule pods on the node, but there is no guarantee
NoExecute - no new pods will be scheduled, and already existing pods that can't tolerate the taint will be evicted.
toleration - toleration is added to pod
example yaml of pod with toleration - cat > pod-definition.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod   # this is the name of the pod
  labels:
    app: myapp
    anykey: anyvalue
    costcenter: US
spec:
  containers:
  - name: nginx-container   # this is the name of the container in the pod
    image: nginx            # this is the image that kubernetes pulls from docker hub
  tolerations:
  - key: "<key>"
    operator: "Equal"
    value: "<value>"
    effect: "NoSchedule"
--------------------------------------------------------------------------
NODE LABEL AND SELECTORS
command to add a label to node - kubectl label nodes <node-name> <key>=<value> example = kubectl label nodes node01 size=large
command to show the labels on a node - kubectl get nodes <node-name> --show-labels or kubectl describe node <node-name> |grep -i labels
if a nodeSelector is set on a pod, the pod will be launched on a node with the matching label
yaml to create a pod with nodeSelector (cat > pod-definition.yaml)
apiVersion: v1
kind: Pod
metadata:
  name: webapp-pod
  labels:
    user: hari
spec:
  containers:
  - name: hari-container
    image: nginx
  nodeSelector:
    size: large
limitation of nodeSelector - we cannot tell a pod to launch on nodes that don't have size=large; we can only say on which node the pod can be launched
--------------------------------------------------------------------------
NODEAFFINITY
the limitations of nodeSelector are solved using nodeAffinity
yaml file to launch a pod on a node with the label size (what we did above) using nodeAffinity - (cat > pod-definition.yaml)
apiVersion: v1
kind: Pod
metadata:
  name: webapp-pod
  labels:
    user: hari
spec:
  containers:
  - name: hari-container
    image: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: size
            operator: In
            values:
            - large
            - medium   # if we want to match two values this is how we do it; if we want only one, remove medium
other operators available are
NotIn (opposite of In)
Exists (if we use Exists we don't need to give the values field; it just checks if the key exists and if it does, the pod can be scheduled there)
types of node affinity -
requiredDuringSchedulingIgnoredDuringExecution - if we use this and no node has the label, the pod is not scheduled
preferredDuringSchedulingIgnoredDuringExecution - if we use this and no node has the label, the pod is placed on some other node
after deploying the pod, if the node label is removed it doesn't affect the running pod (IgnoredDuringExecution)
--------------------------------------------------------------------------
RESOURCE REQUIREMENTS AND LIMITS
by default kubernetes assumes a pod or a container in a pod requires 0.5 CPU and 256Mi of memory
by default kubernetes sets limits on a pod or a container in a pod - default limits 1 vCPU, 512Mi
For the POD to pick up those defaults you must have first set those as default values for request and limit by creating a LimitRange in that namespace.
cat > def-limits.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
spec:
  limits:
  - default:
      memory: 512Mi
    defaultRequest:
      memory: 256Mi
    type: Container
kubectl apply -f def-limits.yaml
https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-limit-range
spec:
  limits:
  - default:
      cpu: 1
    defaultRequest:
      cpu: 0.5
    type: Container
to increase the limits we need to specify them in the yaml file
if a container needs more than the defaults, we need to specify that in the yaml file
example yaml file with resources - (cat > pod-definition.yaml)
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod   # this is the name of the pod
  labels:
    app: myapp
    anykey: anyvalue
    costcenter: US
spec:
  containers:
  - name: nginx-container   # this is the name of the container in the pod
    image: nginx            # this is the image that kubernetes pulls from docker hub
    ports:
    - containerPort: 8080
    resources:
      requests:
        memory: "1Gi"
        cpu: 1
      limits:
        memory: "2Gi"
        cpu: 2
limits and requests are set to each container in pod
https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource
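a side note on the units used above (not spelled out in these notes): Mi/Gi are powers of 1024, while M/G are powers of 1000, so 256Mi is slightly more than 256M:

```shell
echo $((256 * 1024 * 1024))   # 256Mi in bytes -> 268435456
echo $((256 * 1000 * 1000))   # 256M in bytes  -> 256000000
```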
--------------------------------------------------------------------------
DAEMON SETS - a replicaset makes sure a specified number of copies of a pod exist in the cluster, but which node each replica runs on doesn't matter; a DAEMON SET makes sure one copy of the pod runs on every node, and if a new node is added, a replica of the pod is deployed on the new node too.
use case - to monitor the nodes in the cluster
- to collect the logs of nodes in the cluster
example yaml for a daemon set (cat > daemonset-definition.yaml) -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: monitoring-daemon   # this is the name of the daemonset
  labels:
    app: my-app
    costCenter: US
spec:
  selector:
    matchLabels:
      app: monitoring-agent
  template:
    metadata:
      labels:
        app: monitoring-agent
    spec:
      containers:
      - name: monitoring-agent
        image: monitoring-agent
This yaml is exactly like a replicaset; the only change is the kind, which is DaemonSet instead of ReplicaSet
pods deployed by the daemonset will have names prefixed with the daemonset name
command- kubectl create -f daemonset-definition.yaml
to view daemonsets, command - kubectl get daemonsets
to view more details - kubectl describe daemonsets <daemonset name>
command to get daemonsets in all namespaces - kubectl get ds --all-namespaces
command to see the nodes on which the daemonset is running - kubectl -n <namespace name> get pods -o wide |grep <daemonset name>
command to see the image that a daemonset uses - kubectl -n <namespace name> describe ds <daemonset name> |grep -i image
another way to create daemonset - kubectl create deployment <daemonset name> --image=<imagename> --dry-run=client -o yaml >daemon-set.yaml
do the required changes to the yaml to convert it into daemonset yaml and run kubectl apply -f daemon-set.yaml
--------------------------------------------------------------------------
STATIC PODS -
kubelet stores the pod definition files (yamls ) in /etc/kubernetes/manifests , you can get this path bye checking for --pod-manifest-path option in /etc/systemd/system/kubelet.service, if u dont find the --pod-manifest-path option the chcek for the option --config and get the path of kubeconfig.yaml and in th ekubeconfig .yaml check for staticPodPath, one more way to find out path - ps -ef|grep kubelet |grep "\--config" , get the config file and grep for static in the config.yaml
kubelet checks this folder for yaml files and creates pods out of them
if we remove a yaml from this directory the pod is removed automatically
the pods that are created by kubelet without the intervention of the cluster components are called static pods
we can only create pods this way, not any other object like replicasets or deployments
use case - useful to deploy the control plane components themselves as pods when creating a master node
how to check for static pods in a cluster -
kubectl get pods --all-namespaces -o wide
now look for the pods whose names end with the node name
--------------------------------------------------------------------------
CUSTOM SCHEDULER
command to get a yaml from running pod - kubectl -n <namespace> get <resource type> <resource Name> -o yaml
example yaml to create a custom scheduler - (cat > scheduler-definition.yaml)
apiVersion: v1
kind: Pod
metadata:
  name: my-scheduler (scheduler name)
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-scheduler
    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
    - --bind-address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=false
    - --scheduler-name=my-scheduler (scheduler name)
    - --lock-object-namespace=my-scheduler (scheduler name)
    image: k8s.gcr.io/kube-scheduler:v1.20.0
    imagePullPolicy: IfNotPresent
    name: kube-scheduler-mine
yaml file to create a pod that uses the custom scheduler -
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
  schedulerName: my-scheduler
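A quick offline sanity check before applying such a manifest is to confirm the spec actually requests the custom scheduler (the file name here is illustrative); on a live cluster, kubectl get events -o wide then shows which scheduler placed the pod:

```shell
cat > nginx-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
  schedulerName: my-scheduler
EOF
# If this prints nothing, the pod would fall back to the default scheduler.
grep 'schedulerName:' nginx-pod.yaml
# prints:   schedulerName: my-scheduler
```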
reference links - please check the following:
https://github.com/kubernetes/community/blob/master/contributors/devel/sig-scheduling/scheduler.md
https://kubernetes.io/blog/2017/03/advanced-scheduling-in-kubernetes/
https://jvns.ca/blog/2017/07/27/how-does-the-kubernetes-scheduler-work/
https://stackoverflow.com/questions/28857993/how-does-kubernetes-scheduler-work
--------------------------------------------------------------------------
MONITORING NODES in a kubernetes cluster -
deploy the metrics server in the cluster to collect node and pod metrics (it aggregates metrics from the kubelet on each node and keeps them in memory)
to deploy the metrics server run -
1 - git clone https://github.com/kodekloudhub/kubernetes-metrics-server.git
2 - cd kubernetes-metrics-server/
3 - kubectl create -f .
now to check the metrics run the commands -
kubectl top node - to get node metrics
kubectl top pod - to get pod metrics
--------------------------------------------------------------------------
APPLICATION LOGS
docker logs -f <container name>
kubectl logs -f <pod name>
if there are multiple containers in the pod then we need to specify the container name - kubectl logs -f <pod name> <container name>
to get the container names in a pod use command - kubectl get pods <pod name> -n <namespace> -o jsonpath='{.spec.containers[*].name}' , ex - kubectl get pods redis-bootstrap -n redis-cluster -o jsonpath='{.spec.containers[*].name}'
or use kubectl describe pods redis-bootstrap -n redis-cluster
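Without a cluster, the same container-name extraction can be sketched against a pod manifest; the two-container pod below is an invented example of the case where the container argument to kubectl logs becomes necessary:

```shell
cat > multi-container-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: redis-bootstrap
spec:
  containers:
  - name: redis
    image: redis
  - name: log-sidecar
    image: busybox
EOF
# List the container names (the jsonpath query above returns the same
# names from the live API object).
awk '/^  - name:/ {print $3}' multi-container-pod.yaml
# prints: redis
#         log-sidecar
```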
--------------------------------------------------------------------------
ROLLING UPDATES AND ROLLBACKS
to check the rollout status of a deployment use command - kubectl rollout status deployment/<deployment name>
to check the history of rollouts of a deployment use command - kubectl rollout history deployment/<deployment name>
command to update the image of a deployment - kubectl set image deployment/<deployment name> nginx=nginx:1.9.1 (here nginx is the container name in the pod template)
command to rollback to the older version of a deployment - kubectl rollout undo deployment/<deployment name>
--------------------------------------------------------------------------
OS UPGRADE on a node
if a node is down for more than 5 minutes (the default pod-eviction-timeout), kubernetes considers that node dead; the pods on it are terminated and users won't be able to access those pods.