Kubernetes Hands-on

How to install a multi master Kubernetes Cluster

springjunny 2023. 9. 13. 17:08

I want to build a multi-master k8s cluster in my Virtual Lab with a load balancer in front, but there is not much to be found when searching for it.

There is an example site that uses HAProxy, so let's follow along.

https://medium.com/@ivan.claudio/how-install-a-multi-master-on-premises-kubernetes-cluster-746742d38e5c

 



Summary

1. LB && HA with two HAProxy nodes

#haproxy - L4 (TCP) load balancing #keepalived (VRRP) - VIP failover between the two HAProxy nodes

2. 3 master nodes, #kubeadm init with the [--upload-certs] option


Options for Highly Available Topology

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/ha-topology/

 


 


Ubuntu 22.04.3, test server information

Server Role     Server Hostname             Specs                           IP Address
HA Proxy Node   ha-proxy-1.internal.labs    2 GB RAM, 2 vCPUs, 64 GB disk   192.168.0.71
HA Proxy Node   ha-proxy-2.internal.labs    2 GB RAM, 2 vCPUs, 64 GB disk   192.168.0.72
HA Proxy VIP    k8s-lb-01.internal.labs     -                               192.168.0.70
Master Node     k8s-master-1.internal.labs  2 GB RAM, 2 vCPUs, 64 GB disk   192.168.0.73
Master Node     k8s-master-2.internal.labs  2 GB RAM, 2 vCPUs, 64 GB disk   192.168.0.74
Master Node     k8s-master-3.internal.labs  2 GB RAM, 2 vCPUs, 64 GB disk   192.168.0.75
Worker Node     k8s-worker-1.internal.labs  2 GB RAM, 2 vCPUs, 64 GB disk   192.168.0.76
Worker Node     k8s-worker-2.internal.labs  2 GB RAM, 2 vCPUs, 64 GB disk   192.168.0.77
Worker Node     k8s-worker-3.internal.labs  2 GB RAM, 2 vCPUs, 64 GB disk   192.168.0.78
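
All nodes need to resolve these hostnames, in particular k8s-lb-01.internal.labs, which is used as the control-plane endpoint below. The post does not show how name resolution is set up; if there is no internal DNS for *.internal.labs, one assumption-level way is an /etc/hosts entry like this on every node:

# /etc/hosts (same on every node) - hypothetical; adjust if internal DNS already resolves these names
192.168.0.70 k8s-lb-01.internal.labs
192.168.0.71 ha-proxy-1.internal.labs
192.168.0.72 ha-proxy-2.internal.labs
192.168.0.73 k8s-master-1.internal.labs
192.168.0.74 k8s-master-2.internal.labs
192.168.0.75 k8s-master-3.internal.labs
192.168.0.76 k8s-worker-1.internal.labs
192.168.0.77 k8s-worker-2.internal.labs
192.168.0.78 k8s-worker-3.internal.labs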

HAProxy server configuration; apply the same settings on both HAProxy nodes

sudo apt update && sudo apt install -y haproxy

Append the following at the end of /etc/haproxy/haproxy.cfg

frontend kubernetes
 mode tcp
 bind 192.168.0.70:6443
 option tcplog
 default_backend k8s-control-plane
 
backend k8s-control-plane
 mode tcp
 balance roundrobin
 option tcp-check
 server k8s-master-1.internal.labs 192.168.0.73:6443 check fall 3 rise 2
 server k8s-master-2.internal.labs 192.168.0.74:6443 check fall 3 rise 2
 server k8s-master-3.internal.labs 192.168.0.75:6443 check fall 3 rise 2

# Restarting the service right after editing the file throws an error; this happens because the backend servers declared in the cfg are still powered off...
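
Before restarting, the configuration can be syntax-checked; these are standard HAProxy/systemd commands, not part of the original post:

# validate the new configuration without starting the service
sudo haproxy -c -f /etc/haproxy/haproxy.cfg
# apply the change and check the service state
sudo systemctl restart haproxy
sudo systemctl status haproxy --no-pager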


Keepalived configuration

sudo apt update && sudo apt install -y keepalived
#non-local IP binding setting (note: "sudo echo ... >> file" fails because the redirection does not run as root, so use tee)
echo "net.ipv4.ip_nonlocal_bind=1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
#extra layer of protection
sudo groupadd -r keepalived_script
sudo useradd -r -s /sbin/nologin -g keepalived_script -M keepalived_script
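
A quick check that the sysctl took effect (standard sysctl usage, not in the original post):

# should print: net.ipv4.ip_nonlocal_bind = 1
sysctl net.ipv4.ip_nonlocal_bind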

 

Create the /etc/keepalived/keepalived.conf file

global_defs {
 # Don't run scripts configured to be run as root if any part of the path
 # is writable by a non-root user.
 enable_script_security
}
vrrp_script chk_haproxy {
 script "/usr/bin/pgrep haproxy"
 interval 2 # check every 2 seconds
 weight 2 # add 2 points of priority if OK
}
vrrp_instance VI_1 {
 interface eth0 # change here to match your network interface name.
 state MASTER # change here to BACKUP on the Backup server.
 virtual_router_id 51
 priority 101 # 101 master, 100 backup change here according to server
 virtual_ipaddress {
   192.168.0.70
 }
 track_script {
   chk_haproxy
 }
}

Restart the service

sudo systemctl restart keepalived
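
To see which HAProxy node currently holds the VIP, the interface can be inspected; a standard iproute2 check, not shown in the original post:

# run on each HAProxy node; 192.168.0.70 should appear only on the current MASTER
ip addr show eth0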

Proceed with the k8s installation

[Kubernetes Hands-on] - Kubernetes on Ubuntu 22.04 - Single Master - kubeadm

 


 


Create the k8s control plane; run on master node 1

The steps from the post above were completed, up to the network plugin installation, and then init was run.

sudo kubeadm init \
  --pod-network-cidr=10.244.0.0/16 \
  --cri-socket /run/containerd/containerd.sock \
  --control-plane-endpoint=k8s-lb-01.internal.labs \
  --upload-certs
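
After init succeeds, kubectl needs a kubeconfig on master 1; the first three commands are the standard ones printed by kubeadm init. The Flannel manifest URL is Flannel's documented one and is my assumption about how the network plugin was applied here:

# set up kubectl for the current user (printed by kubeadm init)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# apply the Flannel CNI; --pod-network-cidr=10.244.0.0/16 matches Flannel's default
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml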

Verify the installation

kubectl get nodes -A
kubectl get pods -o wide -A

Add the remaining master nodes; run on master nodes 2 and 3

#executed as the root account.
kubeadm join k8s-lb-01.internal.labs:6443 --token shm8rj.4uauqyqvba1v206t \
        --discovery-token-ca-cert-hash sha256:167f13399d6a556578cc14c93411655a8352b5792c49eb3fb8d2e8a8aae0817f \
        --control-plane --certificate-key 37e54684a5087c781cfb47759c70d787159462191b1cc1d70b133d6306f1e4d8
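
The bootstrap token and certificate key above expire (by default the token after 24 hours and the uploaded certificates after 2 hours). If they have expired, fresh values can be generated on master 1 with standard kubeadm commands, not shown in the original post:

# print a new join command with a fresh token
kubeadm token create --print-join-command
# re-upload the control-plane certificates and print a new certificate key
sudo kubeadm init phase upload-certs --upload-certs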

 

Verify the master node join

admin@k8s-master-1:~$ kubectl get nodes -A
NAME                         STATUS   ROLES           AGE   VERSION
k8s-master-1.internal.labs   Ready    control-plane   74m   v1.28.1
k8s-master-2.internal.labs   Ready    control-plane   70m   v1.28.1
k8s-master-3.internal.labs   Ready    control-plane   67m   v1.28.1
admin@k8s-master-1:~$ kubectl get pods -A
NAMESPACE      NAME                                                 READY   STATUS    RESTARTS      AGE
kube-flannel   kube-flannel-ds-jpwk6                                1/1     Running   0             75m
kube-flannel   kube-flannel-ds-lvw4p                                1/1     Running   0             73m
kube-flannel   kube-flannel-ds-xgxz4                                1/1     Running   0             70m
kube-system    coredns-5dd5756b68-cqvqn                             1/1     Running   0             76m
kube-system    coredns-5dd5756b68-vwnp2                             1/1     Running   0             76m
kube-system    etcd-k8s-master-1.internal.labs                      1/1     Running   0             77m
kube-system    etcd-k8s-master-2.internal.labs                      1/1     Running   0             73m
kube-system    etcd-k8s-master-3.internal.labs                      1/1     Running   0             70m
kube-system    kube-apiserver-k8s-master-1.internal.labs            1/1     Running   0             77m
kube-system    kube-apiserver-k8s-master-2.internal.labs            1/1     Running   0             73m
kube-system    kube-apiserver-k8s-master-3.internal.labs            1/1     Running   0             70m
kube-system    kube-controller-manager-k8s-master-1.internal.labs   1/1     Running   1 (72m ago)   77m
kube-system    kube-controller-manager-k8s-master-2.internal.labs   1/1     Running   0             73m
kube-system    kube-controller-manager-k8s-master-3.internal.labs   1/1     Running   0             70m
kube-system    kube-proxy-8rlpl                                     1/1     Running   0             76m
kube-system    kube-proxy-rq4l4                                     1/1     Running   0             70m
kube-system    kube-proxy-rw9jm                                     1/1     Running   0             73m
kube-system    kube-scheduler-k8s-master-1.internal.labs            1/1     Running   1 (72m ago)   77m
kube-system    kube-scheduler-k8s-master-2.internal.labs            1/1     Running   0             73m
kube-system    kube-scheduler-k8s-master-3.internal.labs            1/1     Running   0             70m

## When the second master was added, the controller-manager and scheduler on the first master were restarted.
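
The worker nodes join through the same load-balancer endpoint, just without the control-plane options; a sketch reusing the token and hash from above (in practice, use the worker join command that kubeadm init printed):

kubeadm join k8s-lb-01.internal.labs:6443 --token shm8rj.4uauqyqvba1v206t \
        --discovery-token-ca-cert-hash sha256:167f13399d6a556578cc14c93411655a8352b5792c49eb3fb8d2e8a8aae0817f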

 

Worker node addition results

admin@k8s-master-1:~$ kubectl get nodes -A
NAME                         STATUS   ROLES           AGE     VERSION
k8s-master-1.internal.labs   Ready    control-plane   3h36m   v1.28.1
k8s-master-2.internal.labs   Ready    control-plane   3h32m   v1.28.1
k8s-master-3.internal.labs   Ready    control-plane   3h30m   v1.28.1
k8s-worker-1.internal.labs   Ready    <none>          39s     v1.28.1
k8s-worker-2.internal.labs   Ready    <none>          31s     v1.28.1
k8s-worker-3.internal.labs   Ready    <none>          18s     v1.28.1

admin@k8s-master-1:~$ kubectl get pods -A
NAMESPACE      NAME                                                 READY   STATUS    RESTARTS        AGE
kube-flannel   kube-flannel-ds-26x2g                                1/1     Running   0               40s
kube-flannel   kube-flannel-ds-bkdl4                                1/1     Running   0               48s
kube-flannel   kube-flannel-ds-jpwk6                                1/1     Running   0               3h35m
kube-flannel   kube-flannel-ds-lvw4p                                1/1     Running   0               3h32m
kube-flannel   kube-flannel-ds-txdzj                                1/1     Running   0               27s
kube-flannel   kube-flannel-ds-xgxz4                                1/1     Running   0               3h30m
kube-system    coredns-5dd5756b68-cqvqn                             1/1     Running   0               3h36m
kube-system    coredns-5dd5756b68-vwnp2                             1/1     Running   0               3h36m
kube-system    etcd-k8s-master-1.internal.labs                      1/1     Running   0               3h37m
kube-system    etcd-k8s-master-2.internal.labs                      1/1     Running   0               3h32m
kube-system    etcd-k8s-master-3.internal.labs                      1/1     Running   0               3h30m
kube-system    kube-apiserver-k8s-master-1.internal.labs            1/1     Running   0               3h37m
kube-system    kube-apiserver-k8s-master-2.internal.labs            1/1     Running   0               3h32m
kube-system    kube-apiserver-k8s-master-3.internal.labs            1/1     Running   0               3h30m
kube-system    kube-controller-manager-k8s-master-1.internal.labs   1/1     Running   1 (3h32m ago)   3h37m
kube-system    kube-controller-manager-k8s-master-2.internal.labs   1/1     Running   0               3h32m
kube-system    kube-controller-manager-k8s-master-3.internal.labs   1/1     Running   0               3h30m
kube-system    kube-proxy-8rlpl                                     1/1     Running   0               3h36m
kube-system    kube-proxy-mg7sp                                     1/1     Running   0               40s
kube-system    kube-proxy-npdq8                                     1/1     Running   0               48s
kube-system    kube-proxy-rq4l4                                     1/1     Running   0               3h30m
kube-system    kube-proxy-rw9jm                                     1/1     Running   0               3h32m
kube-system    kube-proxy-vcsdb                                     1/1     Running   0               27s
kube-system    kube-scheduler-k8s-master-1.internal.labs            1/1     Running   1 (3h32m ago)   3h37m
kube-system    kube-scheduler-k8s-master-2.internal.labs            1/1     Running   0               3h32m
kube-system    kube-scheduler-k8s-master-3.internal.labs            1/1     Running   0               3h30m
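
Finally, a simple failover check I would add (my own suggestion, not part of the original post): stop HAProxy on the node that currently holds the VIP and confirm the API server is still reachable through k8s-lb-01.internal.labs.

# on the HAProxy node currently holding 192.168.0.70
sudo systemctl stop haproxy
# on a master node: this should still work once keepalived moves the VIP to the other HAProxy node
kubectl get nodes
# restore the stopped HAProxy afterwards
sudo systemctl start haproxy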