Kubernetes Off-Line Install Guide
● OS: CentOS8 x86_64 minimal
● docker: 18.09.1
● kubernetes: 1.17
1. Configure the Yum Repository Server
1) Obtaining Repository Packages (Online)
a) Install required packages
sudo yum -y install yum-utils createrepo git
b) Make a directory to store the software in the server’s storage
mkdir -p /var/www/html/repos
● Disable SELinux
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config

# permissive mode
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
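To confirm the change took effect:
# getenforce should report Permissive (or Disabled after a reboot)
getenforce
grep ^SELINUX= /etc/selinux/config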
c) Add repositories
● Add the Docker CE repository
sudo dnf config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo
# centos7) yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://download.docker.com/linux/centos/7/$basearch/stable
#baseurl=https://download.docker.com/linux/centos/7/x86_64/stable
enabled=1
gpgcheck=1
yum install docker-ce -y
● Add the Kubernetes repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
● Add CentOS 8 BaseOS
cat <<'EOF' > /etc/yum.repos.d/CentOS-Base.repo
[centos8-base]
name=CentOS-$releasever - Base
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=BaseOS&infra=$infra
#baseurl=http://mirror.centos.org/$contentdir/$releasever/BaseOS/$basearch/os/
#baseurl=http://mirror.centos.org/centos/8.1.1911/BaseOS/x86_64/os/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
EOF
● Add CentOS 8 Extras
cat <<'EOF' > /etc/yum.repos.d/CentOS-Extras.repo
[centos8-extras]
name=CentOS-$releasever - Extras
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras&infra=$infra
#baseurl=http://mirror.centos.org/$contentdir/$releasever/extras/$basearch/os/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
EOF
● Add CentOS 8 AppStream
cat <<'EOF' > /etc/yum.repos.d/CentOS-AppStream.repo
[centos8-appstream]
name=CentOS-$releasever - AppStream
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=AppStream&infra=$infra
#baseurl=http://mirror.centos.org/$contentdir/$releasever/AppStream/$basearch/os/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
EOF
d) Sync the packages and create the repository
dnf update; dnf repolist;
for repo in \
  centos8-base \
  centos8-extras \
  centos8-appstream \
  docker-ce-stable \
  kubernetes
do
  reposync --gpgcheck -lm --repoid=${repo} --download_path=/var/www/html/repos
  createrepo -v /var/www/html/repos/${repo} -o /var/www/html/repos/${repo}
done
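A quick sanity check after the sync, assuming the download path used above: every repository directory should now contain metadata generated by createrepo.
# Each synced repo should have a repodata/repomd.xml
for repo in centos8-base centos8-extras centos8-appstream docker-ce-stable kubernetes; do
  ls -l /var/www/html/repos/${repo}/repodata/repomd.xml
done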
e) Archive the Repository Packages (tar)
tar -cvf kubernetes-repo.tar /var/www/html/repos
2) Installing the Repository Server (Offline)
a) Prepare the webserver
dnf install httpd
b) Extract the Repository Packages
tar -xvf kubernetes-repo.tar -C /
ls /var/www/html/repos
c) Set permissions on the repository files
chmod -R +r /var/www/html/repos
restorecon -vR /var/www/html
d) Add the firewall rules
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --reload
e) Enable and start webserver
systemctl enable httpd
systemctl start httpd
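As a final check from any machine on the offline network, the repositories should now be browsable over HTTP (replace <server_IP> with the repository server's address; this check is an addition, not part of the original steps):
curl http://<server_IP>/repos/
curl -s -o /dev/null -w '%{http_code}\n' http://<server_IP>/repos/kubernetes/repodata/repomd.xml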
2. Kubernetes Offline Installation
1) Prepare Packages/Images for Kubernetes
a) Obtaining Packages
※ When installing from rpm packages (skip this if the offline yum repository server above is used)
yum makecache --timer
yumdownloader --resolve kubelet kubeadm kubectl
# downloaded packages:
# 35625b6ab1da6c58ce4946742181c0dcf9ac9b6c2b5bea2c13eed4876024c342-kubectl-1.17.3-0.x86_64.rpm
# 3f1db71d0bb6d72bc956d788ffee737714e5717c629b26355a2dcf1dba4ad231-kubelet-1.17.3-0.x86_64.rpm
# d0c889d5fae925a9266aa521f6aed361ea53792c86a51c38f5d92cd3f03e2d30-kubeadm-1.17.3-0.x86_64.rpm

yumdownloader --assumeyes --destdir=<your_rpm_dir> --resolve yum-utils kubeadm-1.17.* kubelet-1.17.* kubectl-1.17.* ebtables
yum install -y --cacheonly --disablerepo=* <your_rpm_dir>/*.rpm
kubeadm config images list
b) Obtaining images
● pull kubernetes images
docker pull k8s.gcr.io/kube-apiserver:v1.17.3
docker pull k8s.gcr.io/kube-controller-manager:v1.17.3
docker pull k8s.gcr.io/kube-scheduler:v1.17.3
docker pull k8s.gcr.io/kube-proxy:v1.17.3
docker pull k8s.gcr.io/pause:3.1
docker pull k8s.gcr.io/etcd:3.4.3-0
docker pull k8s.gcr.io/coredns:1.6.5

docker pull kubernetesui/dashboard:v2.0.3
docker pull kubernetesui/metrics-scraper:v1.0.4
docker pull weaveworks/weave-npc:2.6.0
docker pull weaveworks/weave-kube:2.6.0
docker pull quay.io/coreos/flannel:v0.11.0-amd64
※ dashboard
kubernetesui/dashboard:v2.0.3
kubernetesui/metrics-scraper:v1.0.4
https://github.com/kubernetes/dashboard/releases
=> Install the dashboard version that matches your Kubernetes version
Kubernetes version | 1.15 | 1.16 | 1.17 | 1.18
Compatibility      |  ?   |  ?   |  ?   |  ✓
● Exporting images
# Create a directory to store the images
mkdir -p ~/k8s-images
cd ~/k8s-images
# Export kubernetes images
docker save -o kube-master-images.tar \
  k8s.gcr.io/kube-proxy:v1.17.3 \
  k8s.gcr.io/kube-apiserver:v1.17.3 \
  k8s.gcr.io/kube-controller-manager:v1.17.3 \
  k8s.gcr.io/kube-scheduler:v1.17.3 \
  k8s.gcr.io/coredns:1.6.5 \
  k8s.gcr.io/etcd:3.4.3-0 \
  k8s.gcr.io/pause:3.1

docker save -o kube-node-images.tar \
  k8s.gcr.io/kube-proxy:v1.17.3 \
  k8s.gcr.io/coredns:1.6.5 \
  k8s.gcr.io/pause:3.1 \
  jettech/kube-webhook-certgen:v1.0.0

docker save -o dashboard-images.tar \
  kubernetesui/dashboard:v2.0.0-rc5 \
  kubernetesui/metrics-scraper:v1.0.3

docker save -o network-images.tar \
  weaveworks/weave-npc:2.6.0 \
  weaveworks/weave-kube:2.6.0 \
  quay.io/coreos/flannel:v0.11.0-amd64
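One way to get the exported archives onto the offline VMs is plain scp; this is only a sketch, assuming SSH access from the build host to each VM and the hostnames from the architecture table in section 2. The downloaded manifests (kube-flannel.yml, weave.yml, recommended.yaml) can be copied the same way.
# Copy the saved image archives to every cluster VM
for host in master.k8s.paas node01.k8s.paas node02.k8s.paas; do
  ssh ${host} 'mkdir -p ~/k8s-images'
  scp ~/k8s-images/*.tar ${host}:~/k8s-images/
done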
c) pod-network download
● Weave network download
export kubever=$(kubectl version | base64 | tr -d '\n')
wget -O weave.yml "https://cloud.weave.works/k8s/net?k8s-version=$kubever"
● kube-flannel download
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
d) dashboard download
https://github.com/kubernetes/dashboard/releases
=> Install the dashboard version that matches your Kubernetes version
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc5/aio/deploy/recommended.yaml
2) VM Environment Setup
● Architecture
hostname          address     cpu   memory   disk
--------------------------------------------------
master.k8s.paas   x.x.20.11    4      8G     300G
node01.k8s.paas   x.x.20.21    2      4G     300G
node02.k8s.paas   x.x.20.22    2      4G     300G
● Change the hostname
hostnamectl --static set-hostname master01.k8s.paas
hostnamectl
hostname

vi /etc/hosts
x.x.20.11 master.k8s.paas
x.x.20.21 node01.k8s.paas
x.x.20.22 node02.k8s.paas
● Configure the network connection
# create a connection with the name enp0s3
# nmcli con add type ethernet con-name enp0s3 ifname enp0s3
nmcli con mod enp0s3 ipv4.address x.x.20.11/24
nmcli con mod enp0s3 ipv4.gateway x.x.20.1
nmcli con mod enp0s3 ipv4.method manual
nmcli con mod enp0s3 +ipv4.dns 8.8.8.8
nmcli con mod enp0s3 +ipv4.dns-search k8s.paas
nmcli con mod enp0s3 connection.autoconnect yes
nmcli con up enp0s3
nmcli con show enp0s3

more /etc/resolv.conf
more /etc/sysconfig/network-scripts/ifcfg-enp0s3
nmcli dev show enp0s3
nmcli dev status
#nmcli con del enp0s3
#nmcli con del f653ccb6-4a12-496b-bceb-78f0ea33aa72
ip a
ip r
● Configure the firewall
# systemctl disable firewalld && systemctl stop firewalld

▶ Kubernetes service ports in the Linux firewall
# Master node ----------------------------------------------------------------
firewall-cmd --add-port={6443,2379-2380,10250,10251,10252,5473,179}/tcp --permanent
firewall-cmd --add-port={4789,8285,8472}/udp --permanent
firewall-cmd --reload

# Worker nodes ----------------------------------------------------------------
firewall-cmd --add-port={10250,30000-32767,5473,179}/tcp --permanent
firewall-cmd --add-port={4789,8285,8472}/udp --permanent
firewall-cmd --reload
● Install required packages
yum install yum-utils device-mapper-persistent-data lvm2 -y
yum install wget git net-tools bind-utils iptables-services bash-completion kexec-tools sos psacct vim -y
b) Add the Yum repository (repeat on every VM)
cat <<EOF > /etc/yum.repos.d/kubernetes-offline.repo
[docker-ce-stable]
name=docker-ce-stable
baseurl=http://<server_IP>/repos/docker-ce-stable
enabled=1
gpgcheck=0

[kubernetes]
name=kubernetes
baseurl=http://<server_IP>/repos/kubernetes
enabled=1
gpgcheck=0

[centos8-base]
name=centos8-base
baseurl=http://<server_IP>/repos/centos8-base
enabled=1
gpgcheck=0

[centos8-extras]
name=centos8-extras
baseurl=http://<server_IP>/repos/centos8-extras
enabled=1
gpgcheck=0

[centos8-appstream]
name=centos8-appstream
baseurl=http://<server_IP>/repos/centos8-appstream
enabled=1
gpgcheck=0
EOF
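Before installing anything, it is worth confirming that the offline repositories resolve on the VM (again replacing <server_IP>); a minimal check:
dnf clean all
dnf repolist
dnf list docker-ce kubeadm --showduplicates | head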
c) Install Docker
● List available Docker versions
dnf list docker-ce --showduplicates | sort -r
docker-ce.x86_64    3:18.09.1-3.el7    @docker-ce-stable   <- the version installed from docker-ce-stable is marked with @
● Install docker
# latest version
dnf install --nobest docker-ce
dnf install docker-ce

# stable version
dnf update && dnf install docker-ce-3:18.09.1-3.el7 --allowerasing

systemctl enable --now docker
newgrp docker
docker version

usermod -aG docker $USER
usermod -aG docker paasuser
# remove a user from the docker group
#gpasswd -d paasuser docker
● Edit daemon.json
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "insecure-registries": ["registry.k8s.paas:5000","172.30.0.0/16"]
}
EOF
● Restart Docker
systemctl daemon-reload
systemctl restart docker
systemctl status docker
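After the restart, Docker should report the drivers configured in daemon.json; a quick check:
# Expect "Cgroup Driver: systemd" and "Storage Driver: overlay2"
docker info 2>/dev/null | grep -iE 'cgroup driver|storage driver'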
d) Install Kubernetes packages
● Install the Kubernetes packages
yum update && systemctl reboot
yum install -y epel-release kubelet kubeadm kubectl kubernetes-cni --disableexcludes=kubernetes
systemctl enable --now kubelet

# kubectl binary releases (online reference):
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.19.0-rc.4/bin/linux/amd64/kubectl
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.17.9/bin/linux/amd64/kubectl
# https://github.com/kubernetes/kubernetes/releases

yumdownloader --assumeyes --destdir=<your_rpm_dir> --resolve yum-utils kubeadm-1.17.* kubelet-1.17.* kubectl-1.17.* ebtables
yum install -y --cacheonly --disablerepo=* <your_rpm_dir>/*.rpm

kubeadm config images list

※ When installing from rpm packages:
rpm -ivh --replacefiles --replacepkgs ~/k8s-images/*.rpm
● Enable bridge networking (sysctl)
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
modprobe br_netfilter
modprobe overlay
echo '1' > /proc/sys/net/ipv4/ip_forward
sysctl --system
● Disable swap
swapoff -a && sed -i '/ swap / s/^/#/' /etc/fstab
cat /etc/fstab
#/dev/mapper/cl-swap     swap    swap    defaults        0 0
● Configure cgroup driver used by kubelet on control-plane node
vi /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: <value>
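If you prefer to set the cgroup driver at cluster-init time instead of editing /var/lib/kubelet/config.yaml afterwards, kubeadm also accepts a KubeletConfiguration in its config file. The snippet below is only a sketch for the v1.17 API versions, reusing the flannel pod CIDR used later in this guide; the file name kubeadm-config.yaml is arbitrary.
cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.17.3
networking:
  podSubnet: 10.244.0.0/16
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF
# then initialize the control plane with:
# kubeadm init --config kubeadm-config.yaml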
e) Clone the VM
※ When the master server is configured first and then cloned to create the node servers
● Change the cloned VM's hostname
hostnamectl --static set-hostname node01.k8s.paas
hostnamectl
hostname

vi /etc/hosts
x.x.20.11 master.k8s.paas
x.x.20.21 node01.k8s.paas
x.x.20.22 node02.k8s.paas
● Configure the network connection
nmcli con mod enp0s3 ipv4.address x.x.25.61/24
nmcli con up enp0s3
nmcli con show enp0s3
more /etc/resolv.conf
more /etc/sysconfig/network-scripts/ifcfg-enp0s3
● ssh keygen
ssh-keygen
cat ~/.ssh/id_rsa.pub

ssh-copy-id master01
ssh-copy-id worker01
ssh-copy-id worker02

cat >> ~/.ssh/authorized_keys <<EOF
ssh-rsa ...root@master01.k8s.paas
EOF

ssh master01
ssh worker01
ssh worker02

cat ~/.ssh/authorized_keys
● Load the Kubernetes images on each VM
# master
docker load -i ~/k8s-images/kube-master-images.tar
# node
docker load -i ~/k8s-images/kube-node-images.tar
# network (master, node)
docker load -i ~/k8s-images/network-images.tar
# dashboard (master, node)
docker load -i ~/k8s-images/dashboard-images.tar
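Before running kubeadm, it is worth confirming the expected images were loaded; a minimal check:
# The list should include the v1.17.3 control-plane images, pause, etcd, coredns,
# plus the network and dashboard images loaded above
docker images | grep -E 'k8s.gcr.io|kubernetesui|weaveworks|flannel'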
3) Installing Kubernetes (Offline)
a) Configure a master node
# centos8)
dnf provides tc
dnf install iproute-tc

# Initialize your control-plane node
lsmod | grep br_netfilter
systemctl enable kubelet
kubeadm config images pull

kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.17.3

kubeadm init
#kubeadm init --ignore-preflight-errors=all --pod-network-cidr 10.244.0.0/16
kubeadm alpha certs certificate-key
# example output: cd35c5f3a27bf418bbc9f1f778e9f7fe865f008dc404a2b2c44dbe2a4f314f12
kubeadm init phase upload-certs --upload-certs --certificate-key=SOME_VALUE --ignore-preflight-errors=all --pod-network-cidr 10.244.0.0/16
kubeadm init phase bootstrap-token
https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.x.x:6443 --token qza4mi.qtcm7soz32jwatpi \
    --discovery-token-ca-cert-hash sha256:553f30ff39cbf3929cd30c8e2d57109641a43a21e6ec4a909e512512f9789756
b) Configure KUBECONFIG
unset KUBECONFIG
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bashrc

● Enable bash completion for kubectl
yum install bash-completion -y
kubectl completion bash > /etc/bash_completion.d/kubectl
echo 'alias k=kubectl' >> ~/.bashrc
echo 'complete -F __start_kubectl k' >> ~/.bashrc
source ~/.bashrc
c) Configure cluster access (master node)
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
● When using the cluster from a client machine
scp root@master01.k8s.paas:/etc/kubernetes/admin.conf ~/.kube/config
chown $(id -u):$(id -g) ~/.kube/config
chmod 644 /root/.kube/config
echo 'export KUBECONFIG=~/.kube/config' >> ~/.bashrc
source ~/.bashrc
● Start and enable kubelet.service
systemctl enable kubelet.service
systemctl start kubelet.service
systemctl status kubelet.service
d) Configure the pod network (master)
# Flannel network
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f ~/k8s-images/kube-flannel.yml

kubectl get pods -o wide -w -n kube-system
kubectl -n kube-system get pods -l app=flannel

# Weave network
export kubever=$(kubectl version | base64 | tr -d '\n')
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"
kubectl get pods --all-namespaces

kubectl apply -f ~/k8s-images/weave.yml
sudo curl -L git.io/weave -o /usr/local/bin/weave
sudo chmod a+x /usr/local/bin/weave

# When using Calico (192.168.0.0/16)
kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address={ip address}
kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml

# Allow scheduling pods on the master node (single-node or test clusters)
kubectl taint nodes --all node-role.kubernetes.io/master-
#systemctl restart docker
#systemctl status docker
e) join worker nodes
※ Run on each worker node
kubeadm join <cluster-ip>:6443 --token 9rfr3o.n1i8di8exqeut723 \
    --discovery-token-ca-cert-hash sha256:4184bd55ac8a8daf29049fd68fe24a4e258fef0b1e2289a5e77b66adc57b2084
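If the token printed by kubeadm init has expired (tokens are valid for 24 hours by default), a fresh join command can be generated on the master node:
# Run on the master; prints a complete kubeadm join command with a new token
kubeadm token create --print-join-command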
kubectl get pods -n kube-system
kubectl get nodes
kubectl get po --all-namespaces
f) kubeadm re-install
# Master node
kubectl drain master.k8s.paas --delete-local-data --force --ignore-daemonsets
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
#ipvsadm -C
kubectl delete node master.k8s.paas
kubeadm reset
docker rm -f $(docker ps -a -q)
rm -f $HOME/.kube/config
rm -rf /etc/kubernetes
rm -rf /var/lib/kubelet
rm -rf /var/lib/etcd
rm -rf /etc/cni/net.d

# Worker node
kubeadm reset
docker rm -f $(docker ps -a -q)
#kubeadm init phase bootstrap-token
3. Dashboard Installation (Off-line)
1) Install Dashboard
# kubectl delete ns kubernetes-dashboard
a) Apply a cluster-admin ClusterRoleBinding
vim ~/k8s-images/recommended.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard   # => change to cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
b) install dashboard
kubectl apply -f ~/k8s-images/recommended.yaml
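As an alternative to rebinding the bundled kubernetes-dashboard ClusterRoleBinding to cluster-admin, a dedicated admin ServiceAccount can be created (the common upstream pattern); the name admin-user below is only a convention and is not part of recommended.yaml.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF
Its login token can then be retrieved with the same describe secret command shown in the next section, grepping for admin-user instead of kubernetes-dashboard.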
2) Dashboard login
a) Logging in through the API server
grep 'client-certificate-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.crt
cat kubecfg.crt

grep 'client-key-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.key
cat kubecfg.key

openssl pkcs12 -export -clcerts -inkey kubecfg.key -in kubecfg.crt -out kubecfg.p12 -name "kubernetes-ha"
# Import the trusted root CA certificate (Windows)
certutil.exe -addstore "Root" D:\app\cert\ca.crt
# Import the client (personal) certificate
certutil.exe -p root -user -importPFX D:\app\cert\kubecfg.p12
# View or manage the imported certificates
certmgr.msc
b) dashboard login
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep kubernetes-dashboard | awk '{print $1}')
APISERVER=$(kubectl config view --minify | grep server | cut -f 2- -d ":" | tr -d " ")
SECRET_NAME=$(kubectl -n kubernetes-dashboard get secrets | grep ^kubernetes-dashboard-token | cut -f1 -d ' ')
TOKEN=$(kubectl -n kubernetes-dashboard describe secret $SECRET_NAME | grep -E '^token' | cut -f2 -d':' | tr -d " ")
echo $TOKEN
curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure
https://$IP:6443/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
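The URL above goes through the API server directly on port 6443. If a local kubectl is configured, kubectl proxy is an alternative way to reach the same dashboard endpoint; 8001 is kubectl's default proxy port.
kubectl proxy
# then open in a browser:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/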
https://phoenixnap.com/kb/how-to-install-kubernetes-on-centos
https://computingforgeeks.com/install-kubernetes-cluster-on-centos-with-kubeadm/
https://docs.openshift.com/container-platform/3.11/install/disconnected_install.html
https://ahmermansoor.blogspot.com/2019/04/install-kubernetes-k8s-offline-on-centos-7.html
https://kubernetes.io/ko/docs/tasks/access-application-cluster/web-ui-dashboard/