{"id":7887,"date":"2019-06-27T09:17:12","date_gmt":"2019-06-27T01:17:12","guid":{"rendered":"http:\/\/rmohan.com\/?p=7887"},"modified":"2019-06-27T09:17:14","modified_gmt":"2019-06-27T01:17:14","slug":"installing-kubernetes-1-8-1-on-centos-7-with-flannel","status":"publish","type":"post","link":"https:\/\/mohan.sg\/?p=7887","title":{"rendered":"Installing Kubernetes 1.8.1 on CentOS 7 with flannel"},"content":{"rendered":"\n<p>Prerequisites:-<\/p>\n\n\n\n<p>You should have at least two VMs (1 master and 1 slave) before creating the cluster, in order to test the full functionality of k8s.<\/p>\n\n\n\n<p>1] Master :-<\/p>\n\n\n\n<p>Minimum of 1 GB RAM, 1 CPU core and 50 GB HDD ( suggested )<\/p>\n\n\n\n<p>2] Slave :-<\/p>\n\n\n\n<p>Minimum of 1 GB RAM, 1 CPU core and 50 GB HDD ( suggested )<\/p>\n\n\n\n<p>3] Also, make sure of the following:<\/p>\n\n\n\n<p>Network interconnectivity between VMs.<br>\nSet hostnames.<br>\nPrefer to give static IPs.<br>\nAdd DNS entries.<br>\nDisable SELinux:<br>\n$ vi \/etc\/selinux\/config<\/p>\n\n\n\n<p>Disable and stop the firewall 
(if you are not familiar with firewalld):<br>\n$ systemctl stop firewalld<\/p>\n\n\n\n<p>$ systemctl disable firewalld<\/p>\n\n\n\n<p>The following steps create a k8s cluster on the above VMs using kubeadm on CentOS 7.<\/p>\n\n\n\n<p>Step 1] Installing kubelet and kubeadm on all your hosts<\/p>\n\n\n\n<p>$ ARCH=x86_64<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>$ cat &lt;&lt;EOF &gt; \/etc\/yum.repos.d\/kubernetes.repo\n[kubernetes]\nname=Kubernetes\nbaseurl=https:\/\/packages.cloud.google.com\/yum\/repos\/kubernetes-el7-${ARCH}\nenabled=1\ngpgcheck=1\nrepo_gpgcheck=1\ngpgkey=https:\/\/packages.cloud.google.com\/yum\/doc\/yum-key.gpg https:\/\/packages.cloud.google.com\/yum\/doc\/rpm-package-key.gpg\nEOF<\/code><\/pre>\n\n\n\n<p>$ setenforce 0<\/p>\n\n\n\n<p>$ yum install -y docker kubelet kubeadm kubectl kubernetes-cni<\/p>\n\n\n\n<p>$ systemctl enable docker &amp;&amp; systemctl start docker<\/p>\n\n\n\n<p>$ systemctl enable kubelet &amp;&amp; systemctl start kubelet<\/p>\n\n\n\n<p>You might have an issue where the kubelet service does not start. You can see the error in \/var\/log\/messages. If you see an error as follows:<br>\nOct 16 09:55:33 k8s-master kubelet: error: unable to load client CA file \/etc\/kubernetes\/pki\/ca.crt: open \/etc\/kubernetes\/pki\/ca.crt: no such file or directory<br>\nOct 16 09:55:33 k8s-master systemd: kubelet.service: main process exited, code=exited, status=1\/FAILURE<\/p>\n\n\n\n<p>then you will have to initialize kubeadm first, as in the next step, and then start the kubelet service.<\/p>\n\n\n\n<p>Step 2.1] Initializing your master<\/p>\n\n\n\n<p>$ kubeadm init<\/p>\n\n\n\n<p>Note:-<\/p>\n\n\n\n<p>Execute the above command on the master node. This command will select one of the interfaces to be used as the API server address. 
If you want to use another interface, provide \u201c--apiserver-advertise-address=&lt;ip-address&gt;\u201d as an argument. So the whole command will be:<br>\n$ kubeadm init --apiserver-advertise-address=&lt;ip-address&gt;<\/p>\n\n\n\n<p>K8s gives you the flexibility to use a network of your choice, such as flannel or calico. I am using the flannel network. For flannel we need to pass the pod network CIDR explicitly, so the whole command becomes:<br>\n$ kubeadm init --apiserver-advertise-address=&lt;ip-address&gt; --pod-network-cidr=10.244.0.0\/16<\/p>\n\n\n\n<p>Example:- $ kubeadm init --apiserver-advertise-address=172.31.14.55 --pod-network-cidr=10.244.0.0\/16<\/p>\n\n\n\n<p>Step 2.2] Start using the cluster<\/p>\n\n\n\n<p>$ sudo cp \/etc\/kubernetes\/admin.conf $HOME\/<br>\n$ sudo chown $(id -u):$(id -g) $HOME\/admin.conf<br>\n$ export KUBECONFIG=$HOME\/admin.conf<\/p>\n\n\n\n<p>-&gt; Use the same network CIDR as above, since it is also configured in the yaml file of flannel that we are going to apply in step 3.<\/p>\n\n\n\n<p>-&gt; At the end you will get a token along with a join command; make a note of it, as it will be used to join the slaves.<\/p>\n\n\n\n<p>Step 3] Installing a pod network<\/p>\n\n\n\n<p>Different networks are supported by k8s, depending on user choice. For this demo I am using the flannel network. As of k8s 1.6, the cluster is more secure by default. 
It uses RBAC ( Role Based Access Control ), so make sure that the network you are going to use supports RBAC and k8s 1.6.<\/p>\n\n\n\n<p>Create the RBAC pods:<br>\n$ kubectl apply -f  https:\/\/raw.githubusercontent.com\/coreos\/flannel\/master\/Documentation\/k8s-manifests\/kube-flannel-rbac.yml<\/p>\n\n\n\n<p>Check whether the pods are being created:<\/p>\n\n\n\n<p>$ kubectl get pods --all-namespaces<\/p>\n\n\n\n<p>Create the flannel pods:<br>\n$ kubectl apply -f   https:\/\/raw.githubusercontent.com\/coreos\/flannel\/master\/Documentation\/kube-flannel.yml<\/p>\n\n\n\n<p>Check whether the pods are being created:<\/p>\n\n\n\n<p>$ kubectl get pods --all-namespaces -o wide<\/p>\n\n\n\n<p>-&gt; At this stage all your pods should be in the Running state.<\/p>\n\n\n\n<p>-&gt; The option \u201c-o wide\u201d gives more details, like the pod IP and the slave where it is deployed.<\/p>\n\n\n\n<p>Step 4] Joining your nodes<\/p>\n\n\n\n<p>SSH to the slave and execute the following command to join the existing cluster:<\/p>\n\n\n\n<p>$ kubeadm join --token &lt;token&gt; &lt;master-ip&gt;:&lt;master-port&gt;<\/p>\n\n\n\n<p>You might also have a ca-cert-hash; make sure you copy the entire join command from the init output to join the nodes.<\/p>\n\n\n\n<p>Go to the master node and see whether the new slave has joined:<\/p>\n\n\n\n<p>$ kubectl get nodes<\/p>\n\n\n\n<p>-&gt; If the slave is not ready, wait for a few seconds; the new slave will join soon.<\/p>\n\n\n\n<p>Step 5] Verify your cluster by running a sample nginx application<\/p>\n\n\n\n<p>$ vi sample_nginx.yaml<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>apiVersion: apps\/v1beta1\nkind: Deployment\nmetadata:\n  name: nginx-deployment\nspec:\n  replicas: 2 # tells deployment to run 2 pods matching the template\n  template: # create pods using pod definition in this template\n    metadata:\n      # unlike pod-nginx.yaml, the name is not included in the metadata as a unique name is\n      # generated from the deployment name\n      labels:\n        app: nginx\n    spec:\n      containers:\n      - name: nginx\n        image: nginx:1.7.9\n        ports:\n        - containerPort: 80<\/code><\/pre>\n\n\n\n<p>$ kubectl create -f sample_nginx.yaml<\/p>\n\n\n\n<p>Verify whether the pods are getting created:<\/p>\n\n\n\n<p>$ kubectl get pods<\/p>\n\n\n\n<p>$ kubectl get deployments<\/p>\n\n\n\n<p>Now, let's expose the deployment so that the service will be accessible to other pods in the cluster.<\/p>\n\n\n\n<p>$ kubectl expose deployment nginx-deployment --name=nginx-service --port=80 --target-port=80 --type=NodePort<\/p>\n\n\n\n<p>The above command will create a service with the name \u201cnginx-service\u201d. The service will be accessible on the port given by the \u201c--port\u201d option, forwarding to the pod port given by \u201c--target-port\u201d. The service will be accessible within the cluster only; in order to access it using your host IP, the \u201cNodePort\u201d option is used.<\/p>\n\n\n\n<p>--type=NodePort :- when this option is given, k8s tries to find a free port in the range 30000-32767 on all the VMs of the cluster and binds the underlying service to it. If no such port is found, it returns an error.<\/p>\n\n\n\n<p>Check whether the service is created:<\/p>\n\n\n\n<p>$ kubectl get svc<\/p>\n\n\n\n<p>Try to curl the service on port 80 from all the VMs, including the master; the nginx welcome page should be accessible:<\/p>\n\n\n\n<p>$ curl &lt;cluster-ip&gt;:80<\/p>\n\n\n\n<p>Then try the NodePort:<\/p>\n\n\n\n<p>$ curl &lt;master-ip&gt;:&lt;nodePort&gt;<\/p>\n\n\n\n<p>$ curl &lt;slave-ip&gt;:&lt;nodePort&gt;<\/p>\n\n\n\n<p>Execute this from all the VMs. 
Nginx welcome page should be accessible.<\/p>\n\n\n\n<p>Also, access the nginx home page using a browser.<\/p>\n","protected":false},"excerpt":{"rendered":"\n<p>Prerequisites:-<\/p>\n<p>You should have at least two VMs (1 master and 1 slave) before creating the cluster, in order to test the full functionality of k8s.<\/p>\n<p>1] Master :-<\/p>\n<p>Minimum of 1 GB RAM, 1 CPU core and 50 GB HDD ( suggested )<\/p>\n<p>2] Slave :-<\/p>\n<p>Minimum of 1 GB RAM, 1 CPU [&#8230;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[92],"tags":[],"_links":{"self":[{"href":"https:\/\/mohan.sg\/index.php?rest_route=\/wp\/v2\/posts\/7887"}],"collection":[{"href":"https:\/\/mohan.sg\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/mohan.sg\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/mohan.sg\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/mohan.sg\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=7887"}],"version-history":[{"count":1,"href":"https:\/\/mohan.sg\/index.php?rest_route=\/wp\/v2\/posts\/7887\/revisions"}],"predecessor-version":[{"id":7888,"href":"https:\/\/mohan.sg\/index.php?rest_route=\/wp\/v2\/posts\/7887\/revisions\/7888"}],"wp:attachment":[{"href":"https:\/\/mohan.sg\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=7887"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/mohan.sg\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=7887"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/mohan.sg\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=7887"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}