■ Purpose
Build a Kubernetes cluster with 1 master + 3 nodes.
 

 


■ OS Installation
Target: 1 master + 3 nodes
Install a Linux-family OS (the steps below assume a RHEL/CentOS-family distribution, since yum is used).
 
 

■ Basic OS Environment Configuration
Target: 1 master + 3 nodes

 

[Update yum packages and install additional utilities]
yum -y update
yum install -y yum-utils device-mapper-persistent-data lvm2 net-tools

 

[Disable swap, the firewall, and SELinux]
swapoff -a && sed -i '/swap/s/^/#/' /etc/fstab
systemctl stop firewalld && systemctl disable firewalld
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
 

■ Edit the hosts File
Target: 1 master + 3 nodes
Add the following entries to /etc/hosts (adjust the IPs to match your own servers); a quick name-resolution check follows the entries.
 
192.168.114.128 master1
192.168.114.131 node1
192.168.114.129 node2
192.168.114.130 node3
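
A quick check, on each server, that the names resolve from /etc/hosts as intended (master1 and node1 are the names from the entries above):
getent hosts master1 node1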
 

■ Install Docker
Target: 1 master + 3 nodes
 
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce docker-ce-cli containerd.io -y
 

■ Start Docker
Target: 1 master + 3 nodes
systemctl start docker && systemctl enable docker
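
An optional quick check that Docker is running and which cgroup driver it uses (the kubelet is generally expected to use the same cgroup driver as Docker):
systemctl is-active docker
docker info | grep -i 'cgroup driver'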
 

■ Basic Kubernetes Environment Configuration
Target: 1 master + 3 nodes
 
[Disable swap and the firewall (skip if already applied above)]
swapoff -a && sed -i '/swap/s/^/#/' /etc/fstab
systemctl stop firewalld && systemctl disable firewalld
 
 
[Network-related OS kernel tuning]
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
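
On a minimal install these bridge sysctl keys may not exist until the br_netfilter kernel module is loaded; a commonly added step (sketch, persisting the module across reboots):
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf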
 
 
[Add the Kubernetes yum repository]
The repository definition below follows the Kubernetes documentation of this era; note that the packages.cloud.google.com repositories have since been deprecated in favor of pkgs.k8s.io.
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF
 
[Install the Kubernetes packages]
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet
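
A quick sanity check that the tools are installed (the versions reported depend on what the repository serves at install time):
kubeadm version -o short
kubectl version --client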
 

■ Create the Master Server
Target: 1 master
kubeadm init --pod-network-cidr 10.244.0.0/16
--pod-network-cidr=10.244.0.0/16 is the pod CIDR used by Flannel and can be changed if needed.
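If a different CIDR is chosen, the Network value in the Flannel manifest's net-conf.json must be changed to match; the relevant fragment of kube-flannel.yml looks roughly like this (values as shipped in the Flannel manifest of this era):
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }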
 
Be sure to record the kubeadm join line (the token and --discovery-token-ca-cert-hash) printed at the end of the output below; the nodes will need it to join the cluster.
 
# kubeadm init --pod-network-cidr 10.244.0.0/16
[init] using Kubernetes version: v1.11.2
[preflight] running pre-flight checks
I0903 19:18:37.032588   86064 kernel_validator.go:81] Validating kernel version
I0903 19:18:37.032757   86064 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [test-k8s-master-ncl kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.106.234.130]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [test-k8s-master-ncl localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [test-k8s-master-ncl localhost] and IPs [10.106.234.130 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 40.501916 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node test-k8s-master-ncl as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node test-k8s-master-ncl as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "test-k8s-master-ncl" as an annotation
[bootstraptoken] using token: pvpoff.3gi89fsxl6q6vq21
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
 
Your Kubernetes master has initialized successfully!
 
To start using your cluster, you need to run the following as a regular user:
 
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
 
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
 
You can now join any number of machines by running the following on each node
as root:
 
  kubeadm join 192.168.114.128:6443 --token pvpoff.3gi19fsxl8q6vq47 --discovery-token-ca-cert-hash sha256:e57e547d3697386005324524878f42b670db3f83227ff247464f470f2fddf2d6
 
 
 

■ Install the Flannel Network
Target: 1 master
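
Apply the Flannel manifest on the master. The URL below is the coreos/flannel manifest path that was commonly used at the time (verify the current location before running it); a successful apply produces the output shown below.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml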
 
 
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds created
 

 


■ Configure the kubectl Command Environment
Target: 1 master, plus any other servers that need kubectl access
 
Set up the environment for the account that will be used as the kubectl client.
Copy the configuration recorded earlier on the master server (the commands from the kubeadm init output):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
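
On a server other than the master, admin.conf must first be copied over from the master; a minimal sketch assuming SSH access from the master (the user and host names are placeholders):
ssh <user>@<client-host> 'mkdir -p ~/.kube'
scp /etc/kubernetes/admin.conf <user>@<client-host>:~/.kube/config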
 

 


■ Configure the Node Servers
Target: 3 node servers
 
Run the kubeadm join command recorded earlier from the master server to join each node to the cluster.
$ sudo kubeadm join 192.168.114.128:6443 --token pvpoff.3gi19fsxl8q6vq47 --discovery-token-ca-cert-hash sha256:e57e547d3697386005324524878f42b670db3f83227ff247464f4702fddf2d6
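
On success, kubeadm of this version prints a confirmation along the lines of "This node has joined the cluster", and the kubelet on the node should be running:
systemctl status kubelet --no-pager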
 
 

■ Verification
 
Use the kubectl get nodes command to confirm that every node is in the Ready state.
# kubectl get nodes
NAME                    STATUS    ROLES     AGE       VERSION
k8s-master-ncl     Ready     master    19m       v1.11.2
kube-node001-ncl   Ready     <none>    1m        v1.11.2
kube-node002-ncl   Ready     <none>    1m        v1.11.2
kube-node003-ncl   Ready     <none>    1m        v1.11.2
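
It is also worth confirming that the kube-proxy, CoreDNS, and Flannel pods are all Running (the Flannel manifest of this era was typically deployed into kube-system):
kubectl get pods -n kube-system -o wide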
 
 
 

 

■ Applying Sticky Sessions to Ingress
Overview:
When the WAS (web application server) has login logic, running multiple pods behind an Ingress causes logins to drop.


Cause:
By default the Ingress routes user requests round-robin, so requests keep being sent to pods that do not hold the user's session data.

When a request is handled by a pod that has no session data, the session is lost.


Solution:
There are two approaches:

  1. Set up a session manager such as Redis and replicate sessions across pods
  2. Apply sticky sessions on the Ingress so that routing is pinned to a single pod

This post covers how to apply sticky sessions to the Ingress.


Procedure for applying sticky sessions to the Ingress

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-hash: "sha1"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
spec:
  rules:
  - host: test.skcc.com
    http:
      paths:
      - path: /
        backend:
          serviceName: svc-weblogic12c
          servicePort: 17001
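
A quick way to confirm the affinity cookie is being issued (the hostname is the one from the example above, and the cookie name matches the session-cookie-name annotation):
curl -I http://test.skcc.com/
# the response should include a header such as: Set-Cookie: route=<hash value>; Path=/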


 

 

 
