Kubernetes on Ubuntu 22.04 - Single Master - kubeadm

springjunny 2023. 9. 11. 15:05

I wasn't sure how k8s is actually built in production environments. This post collects, for my own records, what I pieced together from searching around.

 

#20230911 Summary

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#before-you-begin


Before you begin

  • A compatible Linux host. The Kubernetes project provides generic instructions for Linux distributions based on Debian and Red Hat, and those distributions without a package manager.
  • 2 GB or more of RAM per machine (any less will leave little room for your apps).
  • 2 CPUs or more.
  • Full network connectivity between all machines in the cluster (public or private network is fine).
  • Unique hostname, MAC address, and product_uuid for every node. See here for more details.
  • Certain ports are open on your machines. See here for more details.
  • Swap disabled. You MUST disable swap in order for the kubelet to work properly.
    • For example, sudo swapoff -a will disable swapping temporarily. To make this change persistent across reboots, make sure swap is disabled in config files like /etc/fstab, systemd.swap, depending how it was configured on your system.
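
To verify the uniqueness and port requirements above before proceeding, the following checks (taken from the same kubeadm install guide) can be run on each node and the results compared:

# MAC address and product_uuid must be unique on every node
ip link show
sudo cat /sys/class/dmi/id/product_uuid

# netcat can check whether a required port (e.g. 6443 for the API server) is in use
nc -v 127.0.0.1 6443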

Environment

  • Ubuntu Server 22.04.3
  • Kubernetes 1.28
  • containerd, Flannel network plugin

Ubuntu 22.04.3, via kubeadm / test server details

Server Role   Server Hostname            Specs                          IP Address
Master Node   k8s-test-1.internal.labs   2 GB RAM, 2 vCPUs, 64 GB disk  192.168.0.131
Worker Node   k8s-test-2.internal.labs   2 GB RAM, 2 vCPUs, 64 GB disk  192.168.0.132
Worker Node   k8s-test-3.internal.labs   2 GB RAM, 2 vCPUs, 64 GB disk  192.168.0.133

Preliminary setup - apply identically to all nodes

# Static IP setup // a DNS server is used instead of the hosts file; this preliminary setup was done as root
cat <<EOF > /etc/netplan/00-installer-config.yaml
# This is the network config written by 'subiquity'
network:
  ethernets:
    eth0:
      addresses:
      - 192.168.0.131/24
      # gateway4 is deprecated on 22.04; define a default route instead
      routes:
      - to: default
        via: 192.168.0.1
      nameservers:
        addresses:
        - 192.168.0.60
        search: [internal.labs]
  version: 2
EOF
netplan apply

# Set the hostname
hostnamectl set-hostname k8s-test-1.internal.labs

# Update /etc/hosts and /etc/resolv.conf
echo "192.168.0.131 k8s-test-1.internal.labs" >> /etc/hosts
# Note: on Ubuntu 22.04, /etc/resolv.conf is managed by systemd-resolved and this
# edit may be overwritten; the netplan nameserver setting above is the persistent route.
sed -i 's/nameserver 127.0.0.53/nameserver 192.168.0.60/g' /etc/resolv.conf

#Upgrade
apt update
apt -y full-upgrade
reboot

#disable all swaps from /proc/swaps.
swapoff -a

#disable Linux swap space permanently in /etc/fstab
vi /etc/fstab
# comment out the swap line so it reads:
#/swap.img       none    swap    sw      0       0
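
If you prefer a non-interactive edit over vi, a sed one-liner can comment out the entry instead (a sketch that assumes the swap line contains the word "swap", as the default /swap.img entry does):

# comment out any active swap entry in /etc/fstab (idempotent if already commented)
sed -ri '/\sswap\s/ s/^#?/#/' /etc/fstab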

#Set up the IPv4 bridge - load the required kernel modules
cat <<EOF | tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

modprobe overlay
modprobe br_netfilter

# sysctl params required by setup, params persist across reboots
cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply sysctl params without reboot
sysctl --system
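
To confirm the modules are loaded and the parameters took effect, the verification steps from the official container runtime guide can be used:

# both modules should be listed
lsmod | grep -e overlay -e br_netfilter

# all three parameters should report 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward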

 


Install the container runtime - containerd, apply on all nodes

# From here on, a regular (non-root) account is used

# Install prerequisite packages
sudo apt install -y curl wget vim git gnupg2 software-properties-common apt-transport-https ca-certificates

# Add Docker repo
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/docker-archive-keyring.gpg
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

# Install containerd
sudo apt update
sudo apt install -y containerd.io

# Configure containerd and start service
#configure the systemd cgroup driver
sudo mv /etc/containerd/config.toml /etc/containerd/config.toml.orig
sudo containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml

# restart containerd
sudo systemctl restart containerd
sudo systemctl enable containerd
systemctl status containerd
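
Before moving on, it is worth double-checking that the cgroup driver edit actually landed in the regenerated config:

# should print "SystemdCgroup = true"
grep 'SystemdCgroup' /etc/containerd/config.toml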

Install kubelet, kubeadm and kubectl, apply on all nodes


Kubernetes package repositories

These instructions are for Kubernetes 1.28.

  1. Update the apt package index and install packages needed to use the Kubernetes apt repository:
# Can be skipped - duplicates the packages installed above
sudo apt-get update
# apt-transport-https may be a dummy package; if so, you can skip that package
sudo apt-get install -y apt-transport-https ca-certificates curl

  2. Download the public signing key for the Kubernetes package repositories. The same signing key is used for all repositories so you can disregard the version in the URL:

curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

  3. Add the appropriate Kubernetes apt repository:

# This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

  4. Update the apt package index, install kubelet, kubeadm and kubectl, and pin their version:

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

Note: In releases older than Debian 12 and Ubuntu 22.04, /etc/apt/keyrings does not exist by default; you can create it by running sudo mkdir -m 755 /etc/apt/keyrings before the curl command above.
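
A quick sanity check that the tools installed and are pinned as expected:

# all three should report v1.28.x
kubeadm version
kubectl version --client
kubelet --version

# kubelet, kubeadm and kubectl should be listed as held
apt-mark showhold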


Initialize the control plane, apply on the master node

# Check the required kernel modules
lsmod | grep br_netfilter
#br_netfilter           22256  0 
#bridge                151336  2 br_netfilter,ebtable_broute

#Enable kubelet service.
sudo systemctl enable kubelet

#Download control plane component images
# Containerd (the unix:// scheme avoids the deprecated-endpoint warning seen in the output below)
sudo kubeadm config images pull --cri-socket unix:///run/containerd/containerd.sock
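
If you want to see exactly which images that command fetches, kubeadm can list them first:

# list the control plane images for the current kubeadm version
sudo kubeadm config images list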

Run kubeadm init (containerd)

# Containerd
# --pod-network-cidr must match the network plugin (Flannel defaults to 10.244.0.0/16)
sudo kubeadm init \
  --pod-network-cidr=10.244.0.0/16 \
  --cri-socket unix:///run/containerd/containerd.sock \
  --control-plane-endpoint=k8s-test-1.internal.labs
# see "kubeadm init --help" for the detailed options
# Example:
 root@k8s-test-1:~# sudo kubeadm init \
 --pod-network-cidr=10.244.0.0/16 \
 --cri-socket /run/containerd/containerd.sock \
 --control-plane-endpoint=k8s-test-1.internal.labs
W0911 10:31:41.843987    3854 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[init] Using Kubernetes version: v1.28.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0911 10:31:42.389800    3854 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-test-1.internal.labs kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.131]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-test-1.internal.labs localhost] and IPs [192.168.0.131 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-test-1.internal.labs localhost] and IPs [192.168.0.131 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 6.001077 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-test-1.internal.labs as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-test-1.internal.labs as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: pmckxb.mhan0nra8d4q0dw9
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join k8s-test-1.internal.labs:6443 --token pmckxb.mhan0nra8d4q0dw9 \
        --discovery-token-ca-cert-hash sha256:19b62c76c1e9548950effd6316d1200f7845aca3e75173c87ebf78afa6ea01c2 \
        --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8s-test-1.internal.labs:6443 --token pmckxb.mhan0nra8d4q0dw9 \
        --discovery-token-ca-cert-hash sha256:19b62c76c1e9548950effd6316d1200f7845aca3e75173c87ebf78afa6ea01c2

 

Enable kubectl bash autocompletion

# Install the package
sudo apt-get install -y bash-completion
source /usr/share/bash-completion/bash_completion
# Apply for the current user only
echo 'source <(kubectl completion bash)' >>~/.bashrc
# Apply system-wide
kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl > /dev/null

# Alias, plus completion for the alias
echo 'alias k=kubectl' >>~/.bashrc
echo 'complete -o default -F __start_kubectl k' >>~/.bashrc

# Reload the shell to apply
exec bash

 

Control plane node isolation 
By default, your cluster will not schedule Pods on the control plane nodes for security reasons. If you want to be able to schedule Pods on the control plane nodes, for example for a single machine Kubernetes cluster, run:

kubectl taint nodes --all node-role.kubernetes.io/control-plane-
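
Whether you remove the taint or keep it, the current taint state can be verified per node:

# empty Taints output on the control plane means Pods can be scheduled there
kubectl describe node k8s-test-1.internal.labs | grep -i taint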

Install the Kubernetes network plugin (Flannel), apply on the master node

#Download installation manifest
wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

#Check the podCIDR --> must match the kubeadm init "--pod-network-cidr" option
grep -i network kube-flannel.yml
  - networking.k8s.io
      "Network": "10.244.0.0/16",
      hostNetwork: true

#Install Flannel
kubectl apply -f kube-flannel.yml
sudo systemctl restart kubelet

#Check pods
kubectl get pods -n kube-flannel
NAME                    READY   STATUS    RESTARTS   AGE
kube-flannel-ds-pppw4   1/1     Running   0          2m16s

#Confirm master node is ready:
kubectl get nodes -o wide
kubectl get pods -o wide -A
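
A useful signal here: the CoreDNS pods stay in Pending until a network plugin is running, so once Flannel is up they should flip to Running:

# CoreDNS pods carry the k8s-app=kube-dns label
kubectl get pods -n kube-system -l k8s-app=kube-dns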

Add worker nodes to the cluster, apply on the worker nodes

sudo kubeadm join k8s-test-1.internal.labs:6443 --token pmckxb.mhan0nra8d4q0dw9 \
        --discovery-token-ca-cert-hash sha256:19b62c76c1e9548950effd6316d1200f7845aca3e75173c87ebf78afa6ea01c2
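
The bootstrap token printed by kubeadm init expires after 24 hours; if it has lapsed by the time you join a worker, a fresh join command can be generated on the master:

# run on the master node; prints a complete worker join command with a new token
sudo kubeadm token create --print-join-command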

#Example (run on the master node)
admin@k8s-test-1:~$ kubectl get nodes
NAME                       STATUS   ROLES           AGE    VERSION
k8s-test-1.internal.labs   Ready    control-plane   3h5m   v1.28.1
k8s-test-2.internal.labs   Ready    <none>          22s    v1.28.1
k8s-test-3.internal.labs   Ready    <none>          21s    v1.28.1

Check cluster status

kubectl get nodes -o wide
kubectl get pods -n kube-system
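
As a final smoke test (my own addition, not part of the original steps - the deployment name web-test is arbitrary), deploy a pod and confirm it lands on a worker with an IP from the Flannel range:

# deploy a test workload and verify scheduling and pod networking
kubectl create deployment web-test --image=nginx
kubectl get pods -o wide   # expect a 10.244.x.x pod IP on a worker node

# clean up
kubectl delete deployment web-test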

Miscellaneous

  • Assign the worker role to the worker nodes
kubectl label node k8s-test-2.internal.labs node-role.kubernetes.io/worker=
kubectl label node k8s-test-3.internal.labs node-role.kubernetes.io/worker=

#Example
admin@k8s-test-1:~$ kubectl get nodes
NAME                       STATUS   ROLES           AGE    VERSION
k8s-test-1.internal.labs   Ready    control-plane   3h6m   v1.28.1
k8s-test-2.internal.labs   Ready    worker          108s   v1.28.1
k8s-test-3.internal.labs   Ready    worker          107s   v1.28.1