Note before starting: in development/test environments, memory pressure causes frequent errors, so proceed with care.
On Ubuntu 22.04, installing by following the guide below fails, because the legacy apt.kubernetes.io repository has been shut down and packages have moved to pkgs.k8s.io.
Kubernetes
https://kubernetes.io/ko/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
Error:
Ign:3 https://packages.cloud.google.com/apt kubernetes-xenial InRelease
Err:6 https://packages.cloud.google.com/apt kubernetes-xenial Release
  404  Not Found [IP: 142.250.207.46 443]
Reading package lists... Done
E: The repository 'https://apt.kubernetes.io kubernetes-xenial Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
root@k8s-master:~# sudo apt-get install -y kubelet kubeadm kubectl
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
E: Unable to locate package kubelet
E: Unable to locate package kubeadm
E: Unable to locate package kubectl
1. NTP time synchronization
sudo apt-get install -y ntp
sudo systemctl enable ntp
sudo systemctl start ntp
2. Disable swap
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab
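To confirm swap is really off, you can inspect /proc/swaps, which lists active swap devices after a one-line header. A minimal sketch (the helper name count_swap_entries is illustrative, not a standard tool):

```shell
# count_swap_entries: print how many active swap devices a
# /proc/swaps-style table lists (the first line is a header, so skip it)
count_swap_entries() {
  tail -n +2 "$1" | wc -l
}

# After 'sudo swapoff -a' this should print 0
count_swap_entries /proc/swaps
```

This step matters because the kubelet refuses to start by default while swap is enabled.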
3. Install Docker
# 1. Update the Ubuntu package index
sudo apt-get update
# 2. Install the required packages
sudo apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
# 3. Add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
# 4. Add Docker's official apt repository
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
# 5. Update the package index again
sudo apt-get update
# 6. Install Docker
sudo apt-get install docker-ce docker-ce-cli containerd.io
Verify:
sudo systemctl status docker
4. Install Kubernetes
vi /etc/apt/keyrings/kube_install.sh
(If /etc/apt/keyrings does not exist yet, create it first with sudo mkdir -p -m 755 /etc/apt/keyrings.)
# 1. Update the apt package index and install the packages needed to use
#    the Kubernetes apt repository
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
# 2. Download the public signing key for the Kubernetes package repository
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
# 3. Add the Kubernetes apt repository
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list
# 4. Update the apt package index, install kubelet, kubeadm and kubectl,
#    and pin their versions
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
sh /etc/apt/keyrings/kube_install.sh
Next, comment out the following in containerd's config:
$ vi /etc/containerd/config.toml
Comment out the disabled_plugins line:
#disabled_plugins = ["cri"]
Restart containerd:
$ sudo systemctl restart containerd
Check the installed version:
kubectl version --client
For more detail:
kubectl version --client --output=yaml
root@k8s-master:/home/ubuntu# kubectl version --client
Client Version: v1.28.8
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
root@k8s-master:/home/ubuntu# kubectl version --client --output=yaml
clientVersion:
  buildDate: "2024-03-15T00:07:05Z"
  compiler: gc
  gitCommit: fc11ff34c34bc1e6ae6981dc1c7b3faa20b1ac2d
  gitTreeState: clean
  gitVersion: v1.28.8
  goVersion: go1.21.8
  major: "1"
  minor: "28"
  platform: linux/amd64
kustomizeVersion: v5.0.4-0.20230601165947-6ce0bf390ce3
5. Disable the firewall during installation
ufw disable
After the cluster is fully connected, reopen the firewall:
# Install and enable UFW
sudo apt install ufw -y
sudo ufw enable
# Open the ports Kubernetes needs (master node)
sudo ufw allow 6443/tcp
sudo ufw allow 2379:2380/tcp
sudo ufw allow 10250/tcp
sudo ufw allow 10259/tcp
sudo ufw allow 10257/tcp
# Open the ports Kubernetes needs (worker nodes)
sudo ufw allow 10250/tcp
sudo ufw allow 30000:32767/tcp
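The per-port rules above can also be generated in a loop. A sketch that only prints the commands so they can be reviewed first (pipe the output to sh to actually apply them):

```shell
# Ports required on a control-plane node: API server, etcd,
# kubelet, kube-scheduler, kube-controller-manager
MASTER_PORTS="6443 2379:2380 10250 10259 10257"

for p in $MASTER_PORTS; do
  echo "sudo ufw allow ${p}/tcp"
done
```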
6. Static IP configuration (when working in a VMware VM on a PC)
https://it-svr.com/ubuntu-22-04-lts-static-ip/
If you want to pin the local IP to 192.168.120.137:
# vi /etc/netplan/00-installer-config.yaml
1) Example
network:
  version: 2
  renderer: networkd
  ethernets:
    ens33:
      dhcp4: no
      addresses:
        - 192.168.120.137/24
      routes:
        - to: default
          via: 192.168.120.2
      nameservers:
        addresses: [8.8.8.8, 8.8.4.4]
If option 1 does not work, try option 2.
2)
# This is the network config written by 'subiquity'
network:
  version: 2
  renderer: networkd
  ethernets:
    ens33:
      dhcp4: yes
      addresses:
        - 192.168.32.10/24
      # routes:
      #   - to: default
      #     via: 192.168.32.2
      nameservers:
        addresses: [8.8.8.8, 8.8.4.4]
Apply:
sudo netplan apply
If you see this warning:
WARNING:root:Cannot call Open vSwitch: ovsdb-server.service is not running.
To install Open vSwitch, you can run:
sudo apt-get install openvswitch-switch
If it is already installed, start ovsdb-server.service with:
sudo service openvswitch-switch start
7. Hostname configuration (when working in a VMware VM on a PC)
ssh [user ID]@[IP address or domain of the target machine]
sudo hostnamectl set-hostname k8s-master
8. Kubernetes cluster setup
1) Master node setup (on the master node)
Initialize the master node:
sudo kubeadm init
If an error like the following occurs:
vagrant@k8s-master:~$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16
I0401 07:41:11.019987    4990 version.go:256] remote version is much newer: v1.29.3; falling back to: stable-1.24
[init] Using Kubernetes version: v1.24.17
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR CRI]: container runtime is not running: output: time="2024-04-01T07:41:12Z" level=fatal msg="validate service connection: CRI v1 runtime API is not implemented for endpoint \"unix:///var/run/containerd/containerd.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
$ vi /etc/containerd/config.toml
Comment out the disabled_plugins line:
#disabled_plugins = ["cri"]
Restart containerd:
$ sudo systemctl restart containerd
When init completes, a join token like the following is printed:
I0401 03:30:52.171233    7235 version.go:256] remote version is much newer: v1.29.3; ~
~
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
~
~
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
~
~
kubeadm join 192.168.32.10:6443 --token 3aqsqx.kynnanjs7dcwdhnm \
	--discovery-token-ca-cert-hash sha256:247737a683cf5dd6d7375f61bfeea9c1e1e24332df9fa0c342230a699e21cbd8
Save the join command somewhere safe (e.g., a notepad):
kubeadm join 192.168.32.10:6443 --token 3aqsqx.kynnanjs7dcwdhnm \
	--discovery-token-ca-cert-hash sha256:247737a683cf5dd6d7375f61bfeea9c1e1e24332df9fa0c342230a699e21cbd8
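For reference, the join command is built from just three pieces: the API server endpoint, the bootstrap token, and the CA certificate hash. A sketch assembling it from the placeholder values shown above:

```shell
# Values copied from the kubeadm init output above (replace with your own)
ENDPOINT="192.168.32.10:6443"
TOKEN="3aqsqx.kynnanjs7dcwdhnm"
CA_HASH="sha256:247737a683cf5dd6d7375f61bfeea9c1e1e24332df9fa0c342230a699e21cbd8"

# The exact command each worker node will run
echo "kubeadm join ${ENDPOINT} --token ${TOKEN} --discovery-token-ca-cert-hash ${CA_HASH}"
```

If the saved command is ever lost, running kubeadm token create --print-join-command on the master regenerates a complete one.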
Set up kubectl so it can be used:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# restart
sudo systemctl restart kubelet
2) Worker node setup: run the join command generated on the master node
sudo kubeadm join 10.0.2.15:6443 --token ptgttc.v8jpd43nlkoom44n \
	--discovery-token-ca-cert-hash sha256:37d995197197bf17cc3b2bfa8e088c769c8d616a4a0b8e35f57f8e7f9de633e6
When the connection succeeds, output like the following is printed:
root@k8s-node01:~# kubeadm join 192.168.32.10:6443 --token 3aqsqx.kynnanjs7dcwdhnm \
	--discovery-token-ca-cert-hash sha256:247737a683cf5dd6d7375f61bfeea9c1e1e24332df9fa0c342230a699e21cbd8
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
This node is now part of the Kubernetes cluster.
The following command lists every node in the cluster along with its status and role:
kubectl get nodes
3) Joining a second worker node
a) Issue a new token on the master node
kubeadm token create
Then connect the second worker node by running kubeadm join on it.
Development servers hit memory-related errors frequently, so restart the services and disable swap first:
root@k8s-master:~# sudo systemctl restart kubelet
root@k8s-master:~# sudo systemctl restart containerd
root@k8s-master:~# sudo swapoff -a
To regenerate a token in Kubernetes:
Check the token list: kubeadm token list shows the currently issued tokens.
Create a token: kubeadm token create issues a new token. (kubeadm token create --print-join-command goes one step further and prints a complete join command, hash included.)
root@k8s-master:~# kubeadm token create
s181pb.zhsqmen0e8sxizz6
Generate the CA certificate hash: joining also requires the SHA-256 hash of the cluster CA public key, which can be generated with the following command:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
root@k8s-master:/home/ubuntu# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
247737a683cf5dd6d7375f61bfeea9c1e1e24332df9fa0c342230a699e21cbd8
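The same pipeline works against any certificate, so it can be tried without touching the cluster. A sketch using a throwaway self-signed certificate (the /tmp paths are illustrative); the result is always a 64-character hex SHA-256 digest of the DER-encoded public key:

```shell
# Create a throwaway self-signed certificate just to exercise the pipeline
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
  -keyout /tmp/demo.key -out /tmp/demo.crt 2>/dev/null

# Same pipeline as above, pointed at the demo cert instead of
# /etc/kubernetes/pki/ca.crt
hash=$(openssl x509 -pubkey -in /tmp/demo.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //')
echo "$hash"
```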
b) Run kubeadm join on the worker node
root@k8s-node02:~# kubeadm join 192.168.32.10:6443 --token s181pb.zhsqmen0e8sxizz6 --discovery-token-ca-cert-hash sha256:247737a683cf5dd6d7375f61bfeea9c1e1e24332df9fa0c342230a699e21cbd8
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[kubelet-check] Initial timeout of 40s passed.

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Now running kubectl get nodes on the master node shows that the node has been added to the cluster.
root@k8s-master:/home/ubuntu# kubectl get nodes
NAME         STATUS     ROLES           AGE     VERSION
k8s-master   NotReady   control-plane   7h57m   v1.28.8
k8s-node01   NotReady   <none>          6h6m    v1.28.8
k8s-node02   NotReady   <none>          2m38s   v1.28.8
root@k8s-master:/home/ubuntu# kubectl get nodes -o wide
NAME         STATUS     ROLES           AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION       CONTAINER-RUNTIME
k8s-master   NotReady   control-plane   7h57m   v1.28.8   192.168.32.10   <none>        Ubuntu 22.04.4 LTS   5.15.0-101-generic   containerd://1.6.28
k8s-node01   NotReady   <none>          6h6m    v1.28.8   192.168.32.11   <none>        Ubuntu 22.04.4 LTS   5.15.0-101-generic   containerd://1.6.28
k8s-node02   NotReady   <none>          3m8s    v1.28.8   192.168.32.12   <none>        Ubuntu 22.04.4 LTS   5.15.0-101-generic   containerd://1.6.28
※ 1. Deleting the Kubernetes cluster
sudo kubeadm reset
Then, to remove all Docker containers and volumes:
docker rm -f $(docker ps -aq)
docker volume rm $(docker volume ls -q)
rm -rf /var/lib/cni/
rm -rf /var/lib/kubelet/
rm -rf /run/calico
sudo systemctl restart docker
Running these commands removes all Kubernetes cluster resources from the system.
※ Troubleshooting
1. If an error occurs when running the following on the master node:
root@k8s-master:~# kubectl get nodes
E0404 07:12:39.429323 45392 memcache.go:265] couldn't get current server API group list: Get ~
# Restart kubelet, or
sudo systemctl restart kubelet
reconfigure kubectl from scratch:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# restart
sudo systemctl restart kubelet
2. NotReady status when running the following on the master node
(Note: nodes also stay NotReady until a pod network add-on, i.e. a CNI plugin such as Flannel or Calico, has been deployed; the describe output further below shows "cni plugin not initialized" for exactly this reason.)
# kubectl get nodes
root@k8s-master:~# kubectl get nodes
NAME         STATUS     ROLES           AGE     VERSION
k8s-master   NotReady   control-plane   6h12m   v1.28.8
k8s-node01   NotReady   <none>          4h21m   v1.28.8
sudo systemctl restart kubelet
Restart the container runtime:
sudo systemctl restart containerd
Check worker node communication from the master node:
root@k8s-master:~# kubectl describe node k8s-node01
If the following error occurs:
root@k8s-master:~# kubectl describe node k8s-node01
error: Get "https://192.168.32.10:6443/api/v1/nodes/k8s-node01": dial tcp 192.168.32.10:6443: connect: connection refused - error from a previous attempt: read tcp 192.168.32.10:39092->192.168.32.10:6443: read: connection reset by peer
Run the following repeatedly on both the master node and the worker nodes:
sudo systemctl restart kubelet
sudo systemctl restart containerd
Swap memory can also cause problems, so disable it:
sudo swapoff -a
When the connection is healthy, output like the following is printed:
root@k8s-master:~# kubectl describe node k8s-node01
Name:               k8s-node01
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=k8s-node01
                    kubernetes.io/os=linux
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/containerd/containerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Thu, 04 Apr 2024 02:55:11 +0000
Taints:             node.kubernetes.io/not-ready:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  k8s-node01
  AcquireTime:     <unset>
  RenewTime:       Thu, 04 Apr 2024 07:35:40 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Thu, 04 Apr 2024 07:25:10 +0000   Thu, 04 Apr 2024 02:55:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 04 Apr 2024 07:25:10 +0000   Thu, 04 Apr 2024 02:55:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 04 Apr 2024 07:25:10 +0000   Thu, 04 Apr 2024 02:55:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Thu, 04 Apr 2024 07:25:10 +0000   Thu, 04 Apr 2024 02:55:11 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Addresses:
  InternalIP:  192.168.32.11
  Hostname:    k8s-node01
Capacity:
  cpu:                2
  ephemeral-storage:  19430032Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3969452Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  17906717462
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3867052Ki
  pods:               110
System Info:
  Machine ID:                 9a8e1e49fade4d3695889c0cc03b88e4
  System UUID:                758b4d56-77b0-531a-a1b1-104f5f941f73
  Boot ID:                    31128a8b-03b8-4352-84c2-8d7e0713856d
  Kernel Version:             5.15.0-101-generic
  OS Image:                   Ubuntu 22.04.4 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://1.6.28
  Kubelet Version:            v1.28.8
  Kube-Proxy Version:         v1.28.8
Non-terminated Pods:          (1 in total)
  Namespace    Name              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------    ----              ------------  ----------  ---------------  -------------  ---
  kube-system  kube-proxy-gz2c4  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4h40m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests  Limits
  --------           --------  ------
  cpu                0 (0%)    0 (0%)
  memory             0 (0%)    0 (0%)
  ephemeral-storage  0 (0%)    0 (0%)
  hugepages-1Gi      0 (0%)    0 (0%)
  hugepages-2Mi      0 (0%)    0 (0%)
Events:
  Type     Reason                   Age                    From             Message
  ----     ------                   ----                   ----             -------
  Normal   Starting                 6m22s                  kube-proxy
  Normal   Starting                 9m47s                  kube-proxy
  Normal   Starting                 11m                    kube-proxy
  Normal   Starting                 18m                    kube-proxy
  Normal   Starting                 4h39m                  kube-proxy
  Normal   Starting                 8m26s                  kube-proxy
  Normal   NodeHasSufficientMemory  4h40m (x4 over 4h40m)  kubelet          Node k8s-node01 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    4h40m (x4 over 4h40m)  kubelet          Node k8s-node01 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     4h40m (x4 over 4h40m)  kubelet          Node k8s-node01 status is now: NodeHasSufficientPID
  Normal   NodeAllocatableEnforced  4h40m                  kubelet          Updated Node Allocatable limit across pods
  Normal   RegisteredNode           4h40m                  node-controller  Node k8s-node01 event: Registered Node k8s-node01 in Controller
  Normal   RegisteredNode           4h39m                  node-controller  Node k8s-node01 event: Registered Node k8s-node01 in Controller
  Normal   RegisteredNode           82m                    node-controller  Node k8s-node01 event: Registered Node k8s-node01 in Controller
  Normal   RegisteredNode           17m                    node-controller  Node k8s-node01 event: Registered Node k8s-node01 in Controller
  Normal   RegisteredNode           15m                    node-controller  Node k8s-node01 event: Registered Node k8s-node01 in Controller
  Normal   RegisteredNode           11m                    node-controller  Node k8s-node01 event: Registered Node k8s-node01 in Controller
  Normal   NodeHasNoDiskPressure    10m                    kubelet          Node k8s-node01 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientMemory  10m                    kubelet          Node k8s-node01 status is now: NodeHasSufficientMemory
  Warning  InvalidDiskCapacity      10m                    kubelet          invalid capacity 0 on image filesystem
  Normal   NodeHasSufficientPID     10m                    kubelet          Node k8s-node01 status is now: NodeHasSufficientPID
  Normal   NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
  Normal   Starting                 10m                    kubelet          Starting kubelet.
  Normal   RegisteredNode           9m10s                  node-controller  Node k8s-node01 event: Registered Node k8s-node01 in Controller
  Normal   RegisteredNode           6m48s                  node-controller  Node k8s-node01 event: Registered Node k8s-node01 in Controller
  Normal   Starting                 35s                    kubelet          Starting kubelet.
  Warning  InvalidDiskCapacity      35s                    kubelet          invalid capacity 0 on image filesystem
  Normal   NodeHasSufficientMemory  35s                    kubelet          Node k8s-node01 status is now: NodeHasSufficientMemory