{"id":7798,"date":"2019-04-23T16:58:49","date_gmt":"2019-04-23T08:58:49","guid":{"rendered":"http:\/\/rmohan.com\/?p=7798"},"modified":"2019-04-23T17:03:09","modified_gmt":"2019-04-23T09:03:09","slug":"kubernetes-install-centos7","status":"publish","type":"post","link":"https:\/\/mohan.sg\/?p=7798","title":{"rendered":"Kubernetes install centos7"},"content":{"rendered":"\n<p>Build a k8s cluster quickly with kubeadm<\/p>\n\n\n\n<p>Environment<\/p>\n\n\n\n<p>Master01: 192.168.1.110 (minimum 2 core CPU)<\/p>\n\n\n\n<p>node01: 192.168.1.100<\/p>\n\n\n\n<p>Network plan<\/p>\n\n\n\n<p>Service network: 10.96.0.0\/12<\/p>\n\n\n\n<p>Pod network: 10.244.0.0\/16<\/p>\n\n\n\n<ol><li>Configure \/etc\/hosts so each host resolves the others<\/li><\/ol>\n\n\n\n<p>vim \/etc\/hosts<\/p>\n\n\n\n<p>127.0.0.1  localhost localhost.localdomain localhost4 localhost4.localdomain4<br>\n::1        localhost localhost.localdomain localhost6 localhost6.localdomain6<br>\n192.168.1.110 master01<br>\n192.168.1.100 node01<\/p>\n\n\n\n<ol><li>Synchronize time on each host<\/li><\/ol>\n\n\n\n<p>yum install -y ntpdate<br>\nntpdate time.windows.com<\/p>\n\n\n\n<p>14 Mar 16:51:32 ntpdate[46363]: adjust time server 13.65.88.161 offset -0.001108 sec<\/p>\n\n\n\n<ol><li>Disable swap and SELinux<\/li><\/ol>\n\n\n\n<p>swapoff -a<\/p>\n\n\n\n<p>vim \/etc\/selinux\/config<\/p>\n\n\n\n<p># This file controls the state of SELinux on the system.<br>\n# SELINUX= can take one of these three values:<br>\n#   enforcing - SELinux security policy is enforced.<br>\n#   permissive - SELinux prints warnings instead of enforcing.<br>\n#   disabled - No SELinux policy is loaded.<\/p>\n\n\n\n<p>SELINUX=disabled<\/p>\n\n\n\n<ol><li>Install docker-ce<\/li><\/ol>\n\n\n\n<p>yum install -y yum-utils device-mapper-persistent-data lvm2<br>\nyum-config-manager 
--add-repo http:\/\/mirrors.aliyun.com\/docker-ce\/linux\/centos\/docker-ce.repo<br>\nyum makecache fast<br>\nyum -y install docker-ce<\/p>\n\n\n\n<p>After installing Docker, the following warning may appear: WARNING: bridge-nf-call-iptables is disabled<\/p>\n\n\n\n<p>vim \/etc\/sysctl.conf<\/p>\n\n\n\n<p># sysctl settings are defined through files in<br>\n# \/usr\/lib\/sysctl.d\/, \/run\/sysctl.d\/, and \/etc\/sysctl.d\/.<br>\n#<br>\n# Vendors settings live in \/usr\/lib\/sysctl.d\/.<br>\n# To override a whole file, create a new file with the same name in<br>\n# \/etc\/sysctl.d\/ and put new settings there. To override<br>\n# only specific settings, add a file with a lexically later<br>\n# name in \/etc\/sysctl.d\/ and put new settings there.<br>\n#<br>\n# For more information, see sysctl.conf(5) and sysctl.d(5).<\/p>\n\n\n\n<p>net.bridge.bridge-nf-call-ip6tables=1<br>\nnet.bridge.bridge-nf-call-iptables=1<br>\nnet.bridge.bridge-nf-call-arptables=1<\/p>\n\n\n\n<p>Apply the settings with sysctl -p, then enable and start Docker:<\/p>\n\n\n\n<p>systemctl enable docker &amp;&amp; systemctl start docker<\/p>\n\n\n\n<ol><li>Install kubernetes<\/li><\/ol>\n\n\n\n<p>cat &lt;&lt;EOF &gt; \/etc\/yum.repos.d\/kubernetes.repo<br>\n[kubernetes]<br>\nname=Kubernetes<br>\nbaseurl=https:\/\/mirrors.aliyun.com\/kubernetes\/yum\/repos\/kubernetes-el7-x86_64\/<br>\nenabled=1<br>\ngpgcheck=1<br>\nrepo_gpgcheck=1<br>\ngpgkey=https:\/\/mirrors.aliyun.com\/kubernetes\/yum\/doc\/yum-key.gpg https:\/\/mirrors.aliyun.com\/kubernetes\/yum\/doc\/rpm-package-key.gpg<br>\nEOF<br>\nsetenforce 0<br>\nyum install -y kubelet kubeadm kubectl<br>\nsystemctl enable kubelet &amp;&amp; systemctl start kubelet<\/p>\n\n\n\n<ol><li>Initialize the cluster<\/li><\/ol>\n\n\n\n<p>kubeadm init 
--image-repository registry.aliyuncs.com\/google_containers --kubernetes-version v1.13.1 --pod-network-cidr=10.244.0.0\/16<\/p>\n\n\n\n<p>Your Kubernetes master has initialized successfully!<\/p>\n\n\n\n<p>To start using your cluster, you need to run the following as a regular user:<\/p>\n\n\n\n<p>mkdir -p $HOME\/.kube<br>\n  sudo cp -i \/etc\/kubernetes\/admin.conf $HOME\/.kube\/config<br>\n  sudo chown $(id -u):$(id -g) $HOME\/.kube\/config<\/p>\n\n\n\n<p>You should now deploy a pod network to the cluster.<br>\nRun &#8220;kubectl apply -f [podnetwork].yaml&#8221; with one of the options listed at:<br>\n  https:\/\/kubernetes.io\/docs\/concepts\/cluster-administration\/addons\/<\/p>\n\n\n\n<p>You can now join any number of machines by running the following on each node<br>\nas root:<\/p>\n\n\n\n<p>kubeadm join 192.168.1.110:6443 --token wgrs62.vy0trlpuwtm5jd75 --discovery-token-ca-cert-hash sha256:6e947e63b176acf976899483d41148609a6e109067ed6970b9fbca8d9261c8d0<\/p>\n\n\n\n<ol><li>Manually deploy flannel<\/li><\/ol>\n\n\n\n<p>Flannel URL: https:\/\/github.com\/coreos\/flannel<\/p>\n\n\n\n<p>For Kubernetes v1.7+:<\/p>\n\n\n\n<p>kubectl apply -f https:\/\/raw.githubusercontent.com\/coreos\/flannel\/master\/Documentation\/kube-flannel.yml<\/p>\n\n\n\n<p>podsecuritypolicy.extensions\/psp.flannel.unprivileged created<br>\nclusterrole.rbac.authorization.k8s.io\/flannel created<br>\nclusterrolebinding.rbac.authorization.k8s.io\/flannel created<br>\nserviceaccount\/flannel created<br>\nconfigmap\/kube-flannel-cfg created<br>\ndaemonset.extensions\/kube-flannel-ds-amd64 created<br>\ndaemonset.extensions\/kube-flannel-ds-arm64 created<br>\ndaemonset.extensions\/kube-flannel-ds-arm created<br>\ndaemonset.extensions\/kube-flannel-ds-ppc64le created<br>\ndaemonset.extensions\/kube-flannel-ds-s390x created<\/p>\n\n\n\n<ol><li>Node deployment<\/li><\/ol>\n\n\n\n<p>Install docker, kubelet, and kubeadm<\/p>\n\n\n\n<p>Docker installation is the same as step 4; kubelet and 
kubeadm installation is the same as step 5<\/p>\n\n\n\n<ol><li>Node joins the master<\/li><\/ol>\n\n\n\n<p>kubeadm join 192.168.1.110:6443 --token wgrs62.vy0trlpuwtm5jd75 --discovery-token-ca-cert-hash sha256:6e947e63b176acf976899483d41148609a6e109067ed6970b9fbca8d9261c8d0<\/p>\n\n\n\n<p>kubectl get nodes  #View node status<\/p>\n\n\n\n<p>NAME                    STATUS    ROLES    AGE    VERSION<br>\nlocalhost.localdomain  NotReady  &lt;none&gt;  130m    v1.13.4<br>\nmaster01                Ready      master  4h47m  v1.13.4<br>\nnode01                  Ready      &lt;none&gt;  94m    v1.13.4<\/p>\n\n\n\n<p>kubectl get cs  #View component status<\/p>\n\n\n\n<p>NAME                STATUS    MESSAGE              ERROR<br>\nscheduler            Healthy  ok                  <br>\ncontroller-manager  Healthy  ok                  <br>\netcd-0              Healthy  {&quot;health&quot;: &quot;true&quot;} <\/p>\n\n\n\n<p>kubectl get ns  #View namespaces<\/p>\n\n\n\n<p>NAME          STATUS  AGE<br>\ndefault      Active  4h41m<br>\nkube-public  Active  4h41m<br>\nkube-system  Active  4h41m<\/p>\n\n\n\n<p>kubectl get pods -n kube-system  #View pod status<\/p>\n\n\n\n<p>NAME                              READY  STATUS    RESTARTS  AGE<br> coredns-78d4cf999f-bszbk          1\/1    Running  0          4h44m<br> coredns-78d4cf999f-j68hb          1\/1    Running  0          4h44m<br> etcd-master01                      1\/1    Running  0          4h43m<br> kube-apiserver-master01            1\/1    Running  1          4h43m<br> kube-controller-manager-master01  1\/1    Running  2          4h43m<br> kube-flannel-ds-amd64-27x59        1\/1    Running  1          126m<br> kube-flannel-ds-amd64-5sxgk        1\/1    Running  0          140m<br> kube-flannel-ds-amd64-xvrbw        1\/1    Running  0          91m<br> kube-proxy-4pbdf                  1\/1    Running  0          91m<br> kube-proxy-9fmrl                  1\/1    Running  0          4h44m<br> kube-proxy-nwkl9                  1\/1    Running  0     
     126m<br> kube-scheduler-master01            1\/1    Running  2          4h43m<\/p>\n\n\n\n<p>Environment preparation: master01, node01, and node02 are connected to the network; modify the hosts file and confirm that the three hosts resolve each other.<\/p>\n\n\n\n<p>vim \/etc\/hosts<\/p>\n\n\n\n<p>127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 <br>\n::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 <br>\n192.168.1.201 master01 <br>\n192.168.1.202 node01 <br>\n192.168.1.203 node02<\/p>\n\n\n\n<p>Configure the Aliyun YUM repo on each host<\/p>\n\n\n\n<p>mv \/etc\/yum.repos.d\/CentOS-Base.repo \/etc\/yum.repos.d\/CentOS-Base.repo.backup &amp;&amp; curl -o \/etc\/yum.repos.d\/CentOS-Base.repo http:\/\/mirrors.aliyun.com\/repo\/Centos-7.repo<\/p>\n\n\n\n<p>Start deploying kubernetes<\/p>\n\n\n\n<ol><li>Install etcd on master01<\/li><\/ol>\n\n\n\n<p>yum install etcd -y<\/p>\n\n\n\n<p>After the installation is complete, modify the etcd configuration file \/etc\/etcd\/etcd.conf<\/p>\n\n\n\n<p>vim \/etc\/etcd\/etcd.conf<\/p>\n\n\n\n<p>ETCD_LISTEN_CLIENT_URLS=&quot;http:\/\/0.0.0.0:2379&quot;  #Modify the listen address<br>\nETCD_ADVERTISE_CLIENT_URLS=&quot;http:\/\/192.168.1.201:2379&quot;  #Set the advertised etcd address to this host<\/p>\n\n\n\n<p>Enable the service at startup<\/p>\n\n\n\n<p>systemctl start etcd &amp;&amp; systemctl enable etcd<\/p>\n\n\n\n<ol><li>Install kubernetes on all hosts<\/li><\/ol>\n\n\n\n<p>yum install kubernetes -y<\/p>\n\n\n\n<ol><li>Configure the master<\/li><\/ol>\n\n\n\n<p>vim \/etc\/kubernetes\/config<\/p>\n\n\n\n<p>KUBE_MASTER=&quot;--master=http:\/\/192.168.1.201:8080&quot;  #Modify the kube_master address<\/p>\n\n\n\n<p>vim \/etc\/kubernetes\/apiserver<\/p>\n\n\n\n<p>KUBE_API_ADDRESS=&quot;--insecure-bind-address=0.0.0.0&quot;  #Modify the listen address<br>\nKUBE_ETCD_SERVERS=&quot;--etcd-servers=http:\/\/192.168.1.201:2379&quot;  #Modify the etcd address<br>\nKUBE_ADMISSION_CONTROL=&quot;--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota&quot;  #Remove the ServiceAccount admission parameter<\/p>\n\n\n\n<p>Enable the services at startup; start order: apiserver first, then scheduler and controller-manager<\/p>\n\n\n\n<p>systemctl start docker &amp;&amp; systemctl enable docker <br>\nsystemctl start kube-apiserver &amp;&amp; systemctl enable kube-apiserver <br>\nsystemctl start kube-scheduler &amp;&amp; systemctl enable kube-scheduler <br>\nsystemctl start kube-controller-manager &amp;&amp; systemctl enable kube-controller-manager<\/p>\n\n\n\n<ol><li>Configure the nodes<\/li><\/ol>\n\n\n\n<p>vim \/etc\/kubernetes\/config<\/p>\n\n\n\n<p>KUBE_MASTER=&quot;--master=http:\/\/192.168.1.201:8080&quot;  #Modify the master address<\/p>\n\n\n\n<p>vim \/etc\/kubernetes\/kubelet<\/p>\n\n\n\n<p>KUBELET_ADDRESS=&quot;--address=192.168.1.202&quot;  #Modify the kubelet address <br>\nKUBELET_HOSTNAME=&quot;--hostname-override=192.168.1.202&quot;  #Modify the kubelet hostname <br>\nKUBELET_API_SERVER=&quot;--api-servers=http:\/\/192.168.1.201:8080&quot;  #Modify the apiserver address<\/p>\n\n\n\n<p>Enable the services at startup<\/p>\n\n\n\n<p>systemctl start docker &amp;&amp; systemctl enable docker <br>\nsystemctl start kubelet &amp;&amp; systemctl enable kubelet <br>\nsystemctl start kube-proxy &amp;&amp; systemctl enable kube-proxy<\/p>\n\n\n\n<ol><li>Deployment is complete; check the cluster status<\/li><\/ol>\n\n\n\n<p>kubectl get nodes<\/p>\n\n\n<p>[root@node02 kubernetes]# kubectl -s http:\/\/192.168.1.201:8080 get nodes -o wide <br>\nNAME STATUS AGE EXTERNAL-IP <br>\n192.168.1.202 Ready 29s  <br>\n192.168.1.203 Ready 16m \n<\/p>\n\n\n\n<ol><li>Install flannel on all hosts<\/li><\/ol>\n\n\n\n<p>yum install flannel -y<\/p>\n\n\n\n<p>vim 
\/etc\/sysconfig\/flanneld<\/p>\n\n\n\n<p>FLANNEL_ETCD_ENDPOINTS=&quot;http:\/\/192.168.1.201:2379&quot;  #Modify the etcd address<\/p>\n\n\n\n<p>etcdctl mk \/atomic.io\/network\/config '{ &quot;Network&quot;: &quot;172.16.0.0\/16&quot; }'  #Set the container network in etcd (run on the etcd host)<\/p>\n\n\n\n<p>Restart the services on the master host<\/p>\n\n\n\n<p>systemctl start flanneld &amp;&amp; systemctl enable flanneld <br>\nsystemctl restart docker <br>\nsystemctl restart kube-apiserver <br>\nsystemctl restart kube-scheduler <br>\nsystemctl restart kube-controller-manager<\/p>\n\n\n\n<p>Restart the services on the node hosts<\/p>\n\n\n\n<p>systemctl start flanneld &amp;&amp; systemctl enable flanneld <br>\nsystemctl restart docker <br>\nsystemctl restart kubelet <br>\nsystemctl restart kube-proxy<\/p>\n","protected":false},"excerpt":{"rendered":"\n<p>Build a k8s cluster quickly with kubeadm<\/p>\n<p>Environment<\/p>\n<p>Master01: 192.168.1.110 (minimum 2 core CPU)<\/p>\n<p>node01: 192.168.1.100<\/p>\n<p>Network plan<\/p>\n<p>Service network: 10.96.0.0\/12<\/p>\n<p>Pod network: 10.244.0.0\/16<\/p>\n<p> Configure \/etc\/hosts so each host resolves the others <\/p>\n<p>vim \/etc\/hosts<\/p>\n<p>127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 192.168.1.110 master01 192.168.1.100 node01<\/p>\n<p> Synchronize time on each host <\/p>\n<p>yum install -y ntpdate ntpdate time.windows.com<\/p>\n<p> 
[&#8230;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[92],"tags":[],"_links":{"self":[{"href":"https:\/\/mohan.sg\/index.php?rest_route=\/wp\/v2\/posts\/7798"}],"collection":[{"href":"https:\/\/mohan.sg\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/mohan.sg\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/mohan.sg\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/mohan.sg\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=7798"}],"version-history":[{"count":3,"href":"https:\/\/mohan.sg\/index.php?rest_route=\/wp\/v2\/posts\/7798\/revisions"}],"predecessor-version":[{"id":7802,"href":"https:\/\/mohan.sg\/index.php?rest_route=\/wp\/v2\/posts\/7798\/revisions\/7802"}],"wp:attachment":[{"href":"https:\/\/mohan.sg\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=7798"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/mohan.sg\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=7798"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/mohan.sg\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=7798"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}