Installing a Kubernetes Cluster on CentOS 7 (Minimal) with kubeadm
Install CentOS
Install net-tools
[root@localhost ~]# yum install -y net-tools
Disable firewalld
[root@localhost ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@localhost ~]# setenforce 0
[root@localhost ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
Install Docker
Docker now ships in two editions: Docker CE (Community Edition, free) and Docker EE (Enterprise Edition, commercial). We will use the CE edition.
Install the yum utility packages
[root@localhost ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
Download the official docker-ce yum repository configuration
[root@localhost ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Disable the docker-ce-edge repository: edge is the development channel and is not stable, so we install from the stable channel instead.
yum-config-manager --disable docker-ce-edge
Refresh the local yum metadata cache
yum makecache fast
Install the docker-ce package
yum -y install docker-ce
Run hello-world
[root@localhost ~]# systemctl start docker
[root@localhost ~]# docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
9a0669468bf7: Pull complete
Digest: sha256:0e06ef5e1945a718b02a8c319e15bae44f47039005530bc617a5d071190ed3fc
Status: Downloaded newer image for hello-world:latest
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
- The Docker client contacted the Docker daemon.
- The Docker daemon pulled the "hello-world" image from the Docker Hub.
- The Docker daemon created a new container from that image which runs the executable that produces the output you are currently reading.
- The Docker daemon streamed that output to the Docker client, which sent it to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://cloud.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/engine/userguide/
Install the kubelet and kubeadm packages
When kubeadm init initializes the cluster, it pulls the images kubeadm depends on and installs etcd, kube-dns and kube-proxy. Because the GFW prevents us from reaching gcr.io directly, we first download the images in the list below by another route and import them on every host, and only then run kubeadm init.
Use the DaoCloud registry mirror (this step can be skipped)
[root@localhost ~]# curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://0d236e3f.m.daocloud.io
docker version >= 1.12
{"registry-mirrors": ["http://0d236e3f.m.daocloud.io"]}
Success.
You need to restart docker to take effect: sudo systemctl restart docker
[root@localhost ~]# systemctl restart docker
Download the images. You can build them yourself from Dockerfiles and push them to Docker Hub, or pull my prebuilt copies:
images=(kube-controller-manager-amd64 etcd-amd64 k8s-dns-sidecar-amd64 kube-proxy-amd64 kube-apiserver-amd64 kube-scheduler-amd64 pause-amd64 k8s-dns-dnsmasq-nanny-amd64 k8s-dns-kube-dns-amd64)
for imageName in "${images[@]}"; do
    docker pull champly/$imageName
    docker tag champly/$imageName gcr.io/google_containers/$imageName
    docker rmi champly/$imageName
done
Retag the images with the versions kubeadm expects
docker tag gcr.io/google_containers/etcd-amd64 gcr.io/google_containers/etcd-amd64:3.0.17 && \
docker rmi gcr.io/google_containers/etcd-amd64 && \
docker tag gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64 gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5 && \
docker rmi gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64 && \
docker tag gcr.io/google_containers/k8s-dns-kube-dns-amd64 gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.5 && \
docker rmi gcr.io/google_containers/k8s-dns-kube-dns-amd64 && \
docker tag gcr.io/google_containers/k8s-dns-sidecar-amd64 gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.2 && \
docker rmi gcr.io/google_containers/k8s-dns-sidecar-amd64 && \
docker tag gcr.io/google_containers/kube-apiserver-amd64 gcr.io/google_containers/kube-apiserver-amd64:v1.7.5 && \
docker rmi gcr.io/google_containers/kube-apiserver-amd64 && \
docker tag gcr.io/google_containers/kube-controller-manager-amd64 gcr.io/google_containers/kube-controller-manager-amd64:v1.7.5 && \
docker rmi gcr.io/google_containers/kube-controller-manager-amd64 && \
docker tag gcr.io/google_containers/kube-proxy-amd64 gcr.io/google_containers/kube-proxy-amd64:v1.6.0 && \
docker rmi gcr.io/google_containers/kube-proxy-amd64 && \
docker tag gcr.io/google_containers/kube-scheduler-amd64 gcr.io/google_containers/kube-scheduler-amd64:v1.7.5 && \
docker rmi gcr.io/google_containers/kube-scheduler-amd64 && \
docker tag gcr.io/google_containers/pause-amd64 gcr.io/google_containers/pause-amd64:3.0 && \
docker rmi gcr.io/google_containers/pause-amd64
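The long tag-and-remove chain above can also be generated from an image-to-version map, which is easier to maintain. A minimal sketch (versions copied from the commands above) that only prints the docker commands; pipe the output to sh to actually run them:

```shell
# Sketch: emit the retag/cleanup commands from an image->version map.
REPO=gcr.io/google_containers
CMDS=$(while read -r image version; do
  echo "docker tag $REPO/$image $REPO/$image:$version && docker rmi $REPO/$image"
done << 'EOF'
etcd-amd64 3.0.17
k8s-dns-dnsmasq-nanny-amd64 1.14.5
k8s-dns-kube-dns-amd64 1.14.5
k8s-dns-sidecar-amd64 1.14.2
kube-apiserver-amd64 v1.7.5
kube-controller-manager-amd64 v1.7.5
kube-proxy-amd64 v1.6.0
kube-scheduler-amd64 v1.7.5
pause-amd64 3.0
EOF
)
echo "$CMDS"
```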
Add the Aliyun yum repository
[root@localhost ~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
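Yum requires every repo definition to begin with a section header such as [kubernetes]; a file without one is rejected, and dropping the header is a common copy-paste mistake. A quick sanity check, sketched here against a scratch copy (on a real host set REPO=/etc/yum.repos.d/kubernetes.repo instead):

```shell
# Sketch: sanity-check a yum repo file for the required pieces.
# Uses a scratch copy; point REPO at the real file on a live host.
REPO=$(mktemp)
cat > "$REPO" << 'EOF'
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
# A repo definition must start with a [section] header and define a baseurl.
grep -q '^\[kubernetes\]' "$REPO" && grep -q '^baseurl=' "$REPO" \
  && echo "repo file looks OK"
```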
List the available kubectl / kubelet / kubeadm / kubernetes-cni packages
[root@localhost ~]# yum list kubectl kubelet kubeadm kubernetes-cni
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
- base: mirrors.tuna.tsinghua.edu.cn
- extras: mirrors.sohu.com
- updates: mirrors.sohu.com
Available Packages
kubeadm.x86_64 1.7.5-0 kubernetes
kubectl.x86_64 1.7.5-0 kubernetes
kubelet.x86_64 1.7.5-0 kubernetes
kubernetes-cni.x86_64 0.5.1-0 kubernetes
[root@localhost ~]#
Install kubectl kubelet kubeadm kubernetes-cni
[root@localhost ~]# yum install -y kubectl kubelet kubeadm kubernetes-cni
Change the kubelet cgroup driver
vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Change KUBELET_CGROUP_ARGS=--cgroup-driver=systemd to KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs
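The same edit can be scripted with sed instead of vi. A minimal sketch, demonstrated on a scratch copy of the drop-in (on a real host set CONF to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and run systemctl daemon-reload afterwards):

```shell
# Sketch: flip the kubelet cgroup driver from systemd to cgroupfs.
# Demonstrated on a scratch file; point CONF at the real drop-in on a live host.
CONF=$(mktemp)
echo 'Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"' > "$CONF"
sed -i 's/--cgroup-driver=systemd/--cgroup-driver=cgroupfs/' "$CONF"
cat "$CONF"
```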
Also change the port of the kubelet's built-in cAdvisor from the default 0 to 4194, so that cAdvisor's monitoring web page can be viewed in a browser.
[root@kub-master ~]# vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=4194"
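As with the cgroup driver, this port change can be made non-interactively. A sketch on a scratch copy (same drop-in path assumed as above):

```shell
# Sketch: enable cAdvisor on port 4194 in the kubelet drop-in (scratch copy).
CONF=$(mktemp)
echo 'Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0"' > "$CONF"
sed -i 's/--cadvisor-port=0/--cadvisor-port=4194/' "$CONF"
grep cadvisor-port "$CONF"
```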
Start the kubelet service on every host
[root@master ~]# systemctl enable kubelet && systemctl start kubelet
Initialize the master (run on the master node)
[root@master ~]# kubeadm reset && kubeadm init --apiserver-advertise-address=192.168.0.100 --kubernetes-version=v1.7.5 --pod-network-cidr=10.200.0.0/16
[preflight] Running pre-flight checks
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Removing kubernetes-managed containers
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/lib/etcd]
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.5
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.09.0-ce. Max validated version: 1.12
[preflight] Starting the kubelet service
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.100]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 34.002949 seconds
[token] Using token: 0696ed.7cd261f787453bd9
[apiconfig] Created RBAC rules
[addons] Applied essential addon: kube-proxy
[addons] Applied essential addon: kube-dns

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token 0696ed.7cd261f787453bd9 192.168.0.100:6443

[root@master ~]#
Make a note of kubeadm join --token 0696ed.7cd261f787453bd9 192.168.0.100:6443: it is not printed again, and you will need it to add nodes later. (Newer kubeadm releases can list existing tokens with kubeadm token list.)
Add the nodes
[root@node1 ~]# kubeadm join --token 0696ed.7cd261f787453bd9 192.168.0.100:6443
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.09.0-ce. Max validated version: 1.12
[preflight] WARNING: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "192.168.0.100:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.0.100:6443"
[discovery] Cluster info signature and contents are valid, will use API Server "https://192.168.0.100:6443"
[discovery] Successfully established connection with API Server "192.168.0.100:6443"
[bootstrap] Detected server version: v1.7.10
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server, generating KubeConfig...
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

Node join complete:
- Certificate signing request sent to master and response received.
- Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.
Configure the kubeconfig file for kubectl on the master
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# chown $(id -u):$(id -g) $HOME/.kube/config
Install flannel on the master
docker pull quay.io/coreos/flannel:v0.8.0-amd64
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.8.0/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.8.0/Documentation/kube-flannel-rbac.yml
Check the cluster
[root@master ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
[root@master ~]# kubectl get nodes
NAME      STATUS     AGE       VERSION
master    Ready      24m       v1.7.5
node1     NotReady   45s       v1.7.5
node2     NotReady   7s        v1.7.5
[root@master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY     STATUS              RESTARTS   AGE
kube-system   etcd-master                      1/1       Running             0          24m
kube-system   kube-apiserver-master            1/1       Running             0          24m
kube-system   kube-controller-manager-master   1/1       Running             0          24m
kube-system   kube-dns-2425271678-h48rw        0/3       ImagePullBackOff    0          25m
kube-system   kube-flannel-ds-28n3w            1/2       CrashLoopBackOff    13         24m
kube-system   kube-flannel-ds-ndspr            0/2       ContainerCreating   0          41s
kube-system   kube-flannel-ds-zvx9j            0/2       ContainerCreating   0          1m
kube-system   kube-proxy-qxxzr                 0/1       ImagePullBackOff    0          41s
kube-system   kube-proxy-shkmx                 0/1       ImagePullBackOff    0          25m
kube-system   kube-proxy-vtk52                 0/1       ContainerCreating   0          1m
kube-system   kube-scheduler-master            1/1       Running             0          24m
[root@master ~]#
If you see: The connection to the server localhost:8080 was refused - did you specify the right host or port?
Fix: so that kubectl can reach the apiserver, append the following environment variable to ~/.bash_profile:
export KUBECONFIG=/etc/kubernetes/admin.conf
source ~/.bash_profile
then run kubectl again.
Author: ChamPly
Source: CSDN
Original: https://blog.csdn.net/ChamPly/article/details/78578588
Copyright notice: this is the author's original post; please include a link to the original when republishing.