Creating a Kubernetes HA Configuration (2): Kubernetes Multi-Master Setup

Build Kubernetes on top of the multi-node etcd cluster.
Created with reference to this guide.

This assumes that the etcd cluster has already been built.

Environment

Software      Version
Kubernetes    1.11.3
Docker        17.03.2
Etcd          3.2.18
Master/Nodes  Ubuntu 16.04
CNI           canal

Node name     IP               Role
k8s-lb        192.168.110.242  Load balancer
k8s-master1   192.168.110.246  master/etcd
k8s-master2   192.168.110.248  master/etcd
k8s-master3   192.168.110.244  master/etcd
k8s-node1     192.168.110.243  worker
k8s-node2     192.168.110.241  worker
k8s-node3     192.168.110.247  worker

Setting up the load balancer

Install HAProxy to act as a load balancer for the Kubernetes API.
The load balancer itself is not made highly available in this setup.

$ sudo apt-get install -y haproxy
$ sudo mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.org
$ sudo su
# cat <<EOF > /etc/haproxy/haproxy.cfg
global
    log         127.0.0.1 local2 info
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     256
    user        haproxy
    group       haproxy
    daemon

defaults
    mode               tcp
    log                global
    option             tcplog
    timeout connect    10s
    timeout client     30s
    timeout server     30s

frontend  http-in
    bind *:80
    mode  http
    stats enable
    stats auth admin:adminpassword
    stats hide-version
    stats show-node
    stats refresh 60s
    stats uri /haproxy?stats

frontend k8s
    bind *:6443
    mode               tcp
    default_backend    k8s_backend

backend k8s_backend
    balance            roundrobin
    server             k8s-master1 192.168.110.246:6443 check
    server             k8s-master2 192.168.110.248:6443 check
    server             k8s-master3 192.168.110.244:6443 check
EOF
# systemctl start haproxy
# systemctl enable haproxy
# systemctl status haproxy
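
A quick sanity check that HAProxy is listening on both the stats port and the API port. The credentials and stats URI below come straight from the haproxy.cfg above; the k8s backends will of course stay DOWN until the masters have been built.

$ sudo ss -ltnp | grep -E ':(80|6443)'
$ curl -s -o /dev/null -w '%{http_code}\n' -u admin:adminpassword 'http://192.168.110.242/haproxy?stats'   # should print 200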

Installing Docker and kubeadm

Run the following commands on every master and node to install Docker and kubeadm.
Installing Docker:

$ sudo apt-get update
$ sudo apt-get install -y docker.io
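
To confirm that Docker is installed and the daemon is running (the version table above assumes 17.03.2, which is what the docker.io package on Ubuntu 16.04 provided at the time):

$ sudo systemctl status docker --no-pager
$ sudo docker version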

Installing kubeadm, kubelet, and kubectl

$ sudo su
# apt-get update && apt-get install -y apt-transport-https curl
# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
# cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
# apt-get update
# apt-get install -y kubelet kubeadm kubectl
# apt-mark hold kubelet kubeadm kubectl
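
Note that the unversioned install pulls whatever the latest packages are. To reproduce exactly the 1.11.3 setup used in this article, you can pin the versions instead; the -00 Debian revision is the one used by the Kubernetes apt repository (the exact string can be confirmed with apt-cache madison kubeadm):

# apt-get install -y kubelet=1.11.3-00 kubeadm=1.11.3-00 kubectl=1.11.3-00
# apt-mark hold kubelet kubeadm kubectl
# kubeadm version   # should report v1.11.3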

Disable swap at this point as well.

$ sudo swapoff -a
$ sudo vi /etc/fstab # comment out the swap line
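
To confirm swap really is off (kubeadm's preflight checks will complain otherwise):

$ free -m            # the Swap line should show 0 total
$ swapon --show      # should print nothing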

Building the first master

Place the pem files copied earlier into /etc/etcd/ssl/.
This is the same location as the etcd certificates.

Create the configuration file that kubeadm will use:
kubeadm-config.yaml

apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.3
apiServerCertSANs:
- "k8s-lb"
api:
    controlPlaneEndpoint: "k8s-lb:6443"
etcd:
    external:
        endpoints:
        - https://192.168.110.246:2379
        - https://192.168.110.248:2379
        - https://192.168.110.244:2379
        caFile: /etc/etcd/ssl/ca.pem
        certFile: /etc/etcd/ssl/client.pem
        keyFile: /etc/etcd/ssl/client-key.pem
networking:
    # This CIDR is the canal (flannel) default. Substitute or remove it for your CNI provider.
    podSubnet: "10.244.0.0/16"
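
Before running kubeadm init, it is worth checking that the external etcd cluster is actually reachable with the client certificates referenced above. A minimal check with etcdctl (v3 API), assuming etcdctl is installed on the master:

# ETCDCTL_API=3 etcdctl \
    --endpoints=https://192.168.110.246:2379,https://192.168.110.248:2379,https://192.168.110.244:2379 \
    --cacert=/etc/etcd/ssl/ca.pem \
    --cert=/etc/etcd/ssl/client.pem \
    --key=/etc/etcd/ssl/client-key.pem \
    endpoint health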

Next, run kubeadm init on the first master, k8s-master1.

$ sudo kubeadm init --config kubeadm-config.yaml
[endpoint] WARNING: port specified in api.controlPlaneEndpoint overrides api.bindPort in the controlplane address
[init] using Kubernetes version: v1.11.3
[preflight] running pre-flight checks
I0912 13:50:17.886407    6248 kernel_validator.go:81] Validating kernel version
I0912 13:50:17.886763    6248 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [k8s-master1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local k8s-lb k8s-lb] and IPs [10.96.0.1 192.168.110.246]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[endpoint] WARNING: port specified in api.controlPlaneEndpoint overrides api.bindPort in the controlplane address
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 42.005332 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node k8s-master1 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node k8s-master1 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master1" as an annotation
[bootstraptoken] using token: 8i8tt3.yamkfbp2eyo4xgom
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in api.controlPlaneEndpoint overrides api.bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join k8s-lb:6443 --token 8i8tt3.yamkfbp2eyo4xgom --discovery-token-ca-cert-hash sha256:9b7beaed43987fc109fa53697f929de5c49cf25cfc68d223450e7f079fcbfddb

Copy the config file so that kubectl can be used to check the cluster.

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

Checking the node status shows the following.
Since the CNI has not been installed yet, it is fine for the STATUS to remain NotReady.

$ kubectl get node
NAME          STATUS     ROLES     AGE       VERSION
k8s-master1   NotReady   master    31m       v1.11.3
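
The control plane components run as static pods on the master, so they can be checked as well. Everything except CoreDNS, which stays Pending until a CNI is deployed, should be Running:

$ kubectl get pods -n kube-system -o wide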

Building the other masters

Copy the required files from the first master to the other masters. Run this as root on k8s-master1, since the keys under /etc/kubernetes/pki/ are readable only by root.

$ USER=localadmin
$ CONTROL_PLANE_IPS="192.168.110.248 192.168.110.244"
$ for host in ${CONTROL_PLANE_IPS}; do
    scp ./kubeadm-config.yaml "${USER}"@$host:
    scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
    scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
    scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
    scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
    scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
    scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
done

Then, on each of the other masters, move the files into place.

$ sudo mkdir -p /etc/kubernetes/pki/
$ sudo mv ca.crt ca.key sa.key sa.pub front-proxy-ca.crt front-proxy-ca.key /etc/kubernetes/pki/
$ sudo mv client-key.pem client.pem /etc/etcd/ssl/
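
Before running kubeadm on these masters, it helps to confirm that the files ended up in the paths that kubeadm-config.yaml expects:

$ ls /etc/kubernetes/pki/   # ca.crt ca.key sa.key sa.pub front-proxy-ca.crt front-proxy-ca.key
$ ls /etc/etcd/ssl/         # must contain ca.pem, client.pem and client-key.pem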

Run kubeadm on each of these masters using the same configuration file.

$ sudo kubeadm init --config kubeadm-config.yaml

Once the masters are set up, all three should be visible as shown below.

$ kubectl get node
NAME          STATUS     ROLES     AGE       VERSION
k8s-master1   NotReady   master    2h        v1.11.3
k8s-master2   NotReady   master    1h        v1.11.3
k8s-master3   NotReady   master    1h        v1.11.3

Installing the CNI (canal)

Install the CNI (canal) and check that every node becomes Ready.

$ kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/canal/rbac.yaml
clusterrole.rbac.authorization.k8s.io/calico created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/canal-flannel created
clusterrolebinding.rbac.authorization.k8s.io/canal-calico created

$ kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/canal/canal.yaml

configmap/canal-config created
daemonset.extensions/canal created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created

Once canal is installed, the flannel tunnel interface is created and the nodes become Ready.

$ ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether fa:16:3e:da:9a:66 brd ff:ff:ff:ff:ff:ff
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
    link/ether 02:42:6e:40:13:77 brd ff:ff:ff:ff:ff:ff
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default
    link/ether f6:26:4c:99:c2:27 brd ff:ff:ff:ff:ff:ff

$ kubectl get node
NAME          STATUS    ROLES     AGE       VERSION
k8s-master1   Ready     master    2h        v1.11.3
k8s-master2   Ready     master    1h        v1.11.3
k8s-master3   Ready     master    1h        v1.11.3
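
The canal DaemonSet (named canal, as shown in the apply output above) should now be running one pod on each node currently in the cluster, i.e. the three masters, and CoreDNS should have moved from Pending to Running:

$ kubectl -n kube-system get daemonset canal
$ kubectl -n kube-system get pods -o wide | grep -E 'canal|coredns'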

Setting up the worker nodes

Configure the worker nodes using the join command printed when the masters were initialized.
Since three masters were set up, there should be three join commands; any one of them will do.

$ sudo kubeadm join k8s-lb:6443 --token 6nxr9d.0sloux5tm5isdmp2 --discovery-token-ca-cert-hash sha256:9b7beaed43987fc109fa53697f929de5c49cf25cfc68d223450e7f079fcbfddb
[preflight] running pre-flight checks
        [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_wrr ip_vs_sh ip_vs ip_vs_rr] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

I0912 16:30:21.209400   19859 kernel_validator.go:81] Validating kernel version
I0912 16:30:21.213674   19859 kernel_validator.go:96] Validating kernel config
[discovery] Trying to connect to API Server "k8s-lb:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://k8s-lb:6443"
[discovery] Requesting info from "https://k8s-lb:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "k8s-lb:6443"
[discovery] Successfully established connection with API Server "k8s-lb:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-node1" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Once a node joins, canal pods are created on it and the remaining nodes also become Ready.

$ kubectl get node
NAME          STATUS    ROLES     AGE       VERSION
k8s-master1   Ready     master    2h        v1.11.3
k8s-master2   Ready     master    1h        v1.11.3
k8s-master3   Ready     master    1h        v1.11.3
k8s-node1     Ready     <none>    3m        v1.11.3
k8s-node2     Ready     <none>    3m        v1.11.3
k8s-node3     Ready     <none>    2m        v1.11.3

This completes the creation of the cluster.
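
As an optional final check of the HA behaviour, you can take the kube-apiserver on one master out of service and confirm that kubectl, which talks to the cluster through k8s-lb:6443, keeps working while HAProxy routes around the failed backend. Since the API server runs as a static pod, removing its manifest from the manifests directory stops it, and putting it back restores it:

$ sudo mv /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/   # on k8s-master1
$ kubectl get nodes                                             # from any machine with admin.conf; still answers via the other masters
$ sudo mv /tmp/kube-apiserver.yaml /etc/kubernetes/manifests/   # restore the API server on k8s-master1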
