
Extra-detailed tutorial on installing Kubernetes (k8s) in Linux

System initialization

A production environment should use higher specifications; the virtual machines here use a conservative minimum configuration.

machine   IP               Specification
master    192.168.203.11   1 core 2 threads, 2 GB RAM, 40 GB disk
node1     192.168.203.12   1 core 2 threads, 2 GB RAM, 40 GB disk
node2     192.168.203.13   1 core 2 threads, 2 GB RAM, 40 GB disk

Configure a static IP

vi /etc/resolv.conf

Add the following nameservers, then save and exit

nameserver 223.5.5.5
nameserver 223.6.6.6
sudo vi /etc/sysconfig/network-scripts/ifcfg-ens33

Change BOOTPROTO="dhcp" to BOOTPROTO="static". If this machine is a clone, the UUID and IPADDR must not be the same as the original machine's.

TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="0ef41c81-2fa8-405d-9ab5-3ff34ac815cf"
DEVICE="ens33"
ONBOOT="yes"
IPADDR="192.168.203.11"
PREFIX="24"
GATEWAY="192.168.203.2"
IPV6_PRIVACY="no"

Restart the network to make the configuration take effect

sudo systemctl restart network
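
To confirm the static address took effect (a quick optional check; it assumes the interface is still named ens33 and the gateway is 192.168.203.2 as configured above):

ip addr show ens33
ping -c 3 192.168.203.2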

Permanently shut down the firewall (all machines)

sudo systemctl stop firewalld && sudo systemctl disable firewalld

Permanently disable SELinux (all machines)

sudo sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

setenforce 0 temporarily sets SELinux to permissive for the current session [no need to execute, recorded here for reference]

Permanently disable the swap partition (all machines)

sudo sed -ri 's/.*swap.*/#&/' /etc/fstab
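
The sed above only comments out the fstab entry, which takes effect after a reboot. To switch swap off for the current session as well and verify it (a common follow-up, not shown in the original steps):

sudo swapoff -a
free -m

The Swap line in the free output should show 0 totals.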

Permanently set the hostname (use master, node1 or node2 according to the machine)

The three machines are master, node1 and node2

sudo hostnamectl set-hostname master

Use hostnamectl or hostname command to verify that the modification is successful

Add entries to the hosts file (master only)

sudo tee -a /etc/hosts << EOF
192.168.203.11 master
192.168.203.12 node1
192.168.203.13 node2
EOF
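
An optional quick check that the names resolve from the master:

ping -c 2 node1
ping -c 2 node2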

Pass bridged IPv4 traffic to iptables chains (all machines)

sudo tee /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF

Apply the settings immediately

sudo sysctl --system
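
To verify the kernel parameters were applied (this assumes the file above was written as a sysctl drop-in; the br_netfilter module must be loaded for the bridge keys to exist):

sudo modprobe br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward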

Time synchronization (all machines)

sudo yum install -y ntpdate

After installation, run the time synchronization command against a reachable NTP server (ntp.aliyun.com is used here only as an example)

sudo ntpdate ntp.aliyun.com
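
Optionally, the clocks can be kept in sync with a periodic cron job; a sketch, again using ntp.aliyun.com only as an example server:

(sudo crontab -l 2>/dev/null; echo "*/30 * * * * /usr/sbin/ntpdate ntp.aliyun.com") | sudo crontab -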

Install Docker, kubeadm, kubelet and kubectl on all machines

Install Docker

Install some necessary system tools

sudo yum install -y net-tools
sudo yum install -y wget
sudo yum install -y yum-utils device-mapper-persistent-data lvm2

Add the Docker CE repository and point it at the Aliyun mirror

sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo

List the available Docker CE versions

sudo yum list docker-ce.x86_64 --showduplicates | sort -r

Install a specific version of docker-ce

sudo yum -y install docker-ce-[VERSION]

For example:

sudo yum -y install docker-ce-18.06.3.ce-3.el7

Start the docker service

sudo systemctl enable docker && sudo systemctl start docker

Check whether Docker started successfully [note that Docker's client and server versions must match, otherwise errors may occur in some cases]

sudo docker --version
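
Because the client and server versions must match, docker version (which prints both) is a handy check here:

sudo docker version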

Create the /etc/docker/daemon.json file and set the Docker registry mirror to your Aliyun accelerator address (the URL below is a placeholder; use the address from your own Aliyun container registry console)

sudo tee /etc/docker/daemon.json << EOF
{
	"registry-mirrors": ["https://<your-id>.mirror.aliyuncs.com"]
}
EOF

Restart Docker and check whether the configuration took effect

sudo docker info
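
The mirror only appears in docker info after Docker has been restarted; a typical sequence, assuming the file above is /etc/docker/daemon.json, is:

sudo systemctl daemon-reload
sudo systemctl restart docker
sudo docker info | grep -A1 "Registry Mirrors"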

Reboot the machine

sudo reboot

Add the Kubernetes yum repository from the Alibaba Cloud mirror

sudo tee /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install kubelet, kubeadm, kubectl

sudo yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
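
An optional check that all three tools landed at the expected 1.18.0 version:

kubeadm version
kubelet --version
kubectl version --client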

Enable kubelet to start on boot and start it now

sudo systemctl enable kubelet && sudo systemctl start kubelet

Deploy Kubernetes

apiserver-advertise-address: the master host IP
image-repository: the image repository to pull the control-plane images from
kubernetes-version: the k8s version, which must match the kubelet, kubeadm and kubectl versions installed above
service-cidr: the cluster's internal virtual service network, the unified entry point for accessing Pods
pod-network-cidr: the Pod network, which must match the CNI network component YAML deployed below

Kubernetes initialization [run only on the master; the process may take a while, wait patiently for the command-line output]

--v=5 is optional but recommended; it outputs a verbose log that makes troubleshooting easier

kubeadm init \
--v=5 \
--apiserver-advertise-address=192.168.203.11 \
--image-repository=registry.aliyuncs.com/google_containers \
--kubernetes-version=v1.18.0 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16
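
If initialization fails because the control-plane images cannot be downloaded, they can be pre-pulled first with the same repository and version as above (an optional step, assuming the Aliyun mirror repository):

kubeadm config images pull --image-repository=registry.aliyuncs.com/google_containers --kubernetes-version=v1.18.0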

The following output indicates that initialization succeeded

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.203.11:6443 --token 51c0rb.ehwwxemgec75r1g6 \
    --discovery-token-ca-cert-hash sha256:fad429370f462b36d2651e3e37be4d4b34e63d0378966a1532442dc3f67e41b4

Following the prompt above, run the commands listed under "To start using your cluster, you need to run the following as a regular user:"

Run these on the master node only (the worker nodes do not run them), then use kubectl get nodes to view node information

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes

Following the prompt above, the worker nodes run the command listed under "Then you can join any number of worker nodes by running the following on each as root:"

Run this on the worker nodes only; the master node does not run it

kubeadm join 192.168.203.11:6443 --token 51c0rb.ehwwxemgec75r1g6 \
    --discovery-token-ca-cert-hash sha256:fad429370f462b36d2651e3e37be4d4b34e63d0378966a1532442dc3f67e41b4

Run the join command on node1 and node2
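
The token printed by kubeadm init expires after 24 hours. If it has expired by the time a node joins, a fresh join command can be generated on the master:

kubeadm token create --print-join-command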

Install the CNI plug-in (flannel)

Save the following manifest to a file, for example kube-flannel.yml (this file name is referenced by the apply command after the manifest)

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.13.0-rc2
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.13.0-rc2
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - arm64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-arm64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-arm64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - arm
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-arm
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-arm
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - ppc64le
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-ppc64le
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-ppc64le
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-s390x
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - s390x
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-s390x
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-s390x
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg

docker pull quay.io/coreos/flannel:v0.13.0-rc2
kubectl apply -f kube-flannel.yml

Run kubectl get pod -n kube-system and check whether the kube-flannel-ds-XXX pods are in Running status

systemctl restart kubelet
kubectl get pod -n kube-system
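
If a flannel or coredns pod stays in a non-Running state, the usual way to investigate is to describe the pod and read its logs (replace <pod-name> with a name from the list above):

kubectl describe pod <pod-name> -n kube-system
kubectl logs <pod-name> -n kube-system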

Run on the master

kubectl get node

node1 and node2 are now in the Ready state

[root@master ~]# kubectl get node
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   50m   v1.18.0
node1    Ready    <none>   49m   v1.18.0
node2    Ready    <none>   49m   v1.18.0

The master deploys the CNI network plug-in [if --network-plugin=cni was not removed and kubelet restarted in the previous step, this step may report an error]

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl get pods -n kube-system
kubectl get node

Test the Kubernetes (k8s) cluster (run on the master)

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pod,svc

The output is as follows

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        21m
service/nginx        NodePort    10.108.8.133   <none>        80:30008/TCP   111s
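
The nginx service is exposed on every node at the NodePort shown in the PORT(S) column (30008 in this output), so it can be checked from any machine that can reach the nodes, for example:

curl http://192.168.203.12:30008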

If nginx fails to start, delete the service and try again

kubectl delete service nginx

Summary

This concludes this article about installing Kubernetes (k8s) on Linux. For more content on installing k8s on Linux, please search my previous articles or continue browsing the related articles below. I hope you will continue to support me!