Basic environment

  • Install the Docker environment
    https://blog.csdn.net/czxljy999/article/details/91830910

  • Disable swap

    swapoff -a && sed -i 's|/dev/mapper/centos-swap|#/dev/mapper/centos-swap|g' /etc/fstab

  • Disable the firewall

    systemctl stop firewalld && systemctl disable firewalld

  • Disable SELinux:

    setenforce 0 && sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

  • Set the hostname

    hostnamectl set-hostname k8s.master

Test environment

Hostname      IP
k8s.master    10.10.44.124
k8s.node1     10.10.44.125
k8s.node2     10.10.44.123
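
These hostnames are not in DNS (the kubeadm init log below warns that "k8s.master" could not be reached), so it helps to map them in /etc/hosts on every machine. A sketch using the addresses from the table:

cat >> /etc/hosts <<EOF
10.10.44.124 k8s.master
10.10.44.125 k8s.node1
10.10.44.123 k8s.node2
EOF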

Network plugin: Calico

Install kubeadm
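
The stock CentOS repositories do not carry these packages, so a Kubernetes yum repository must be configured first. A minimal sketch, assuming the Aliyun mirror (swap in whichever repository you prefer):

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF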

# Install the packages
yum update
yum install -y kubelet kubeadm kubectl

# Enable kubelet at boot and start it now
systemctl enable kubelet && systemctl start kubelet

Configure kubeadm

# Export the default configuration
kubeadm config print init-defaults > kubeadm.yml
# Edit the file so it reads as follows
apiVersion: kubeadm.k8s.io/v1beta1
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  # Change to the master node's IP
  advertiseAddress: 10.10.44.124
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s.master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta1
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: ""
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
# Google's registry is unreachable from mainland China; use the Aliyun mirror
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
# Update the version number
kubernetesVersion: v1.14.3
networking:
  dnsDomain: cluster.local
  # Use Calico's default pod subnet
  podSubnet: "192.168.0.0/16"
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
# Enable IPVS mode
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
# List the required images
kubeadm config images list --config kubeadm.yml
# Pull the images
kubeadm config images pull --config kubeadm.yml
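
The configuration above switches kube-proxy to IPVS mode. As the init log below shows, kubeadm warns when the required kernel modules are not loaded, so it is worth loading them up front; a sketch (module names taken from that warning):

modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack_ipv4
# Confirm the modules are loaded
lsmod | grep -e ip_vs -e nf_conntrack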

Set up the Kubernetes master node with kubeadm

Run the following command to initialize the master node. It points kubeadm at the configuration file created above; the --experimental-upload-certs flag uploads the certificates so they can be distributed automatically when additional control-plane nodes join later, and the appended tee kubeadm-init.log saves the output to a log file.

  kubeadm init --config=kubeadm.yml --experimental-upload-certs | tee kubeadm-init.log

[init] Using Kubernetes version: v1.14.3
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 17.05.0-ce. Latest validated version: 18.09
	[WARNING Hostname]: hostname "k8s.master" could not be reached
	[WARNING Hostname]: hostname "k8s.master": lookup k8s.master on 202.198.176.1:53: no such host
	[WARNING RequiredIPVSKernelModulesAvailable]: 

The IPVS proxier may not be used because the following required kernel modules are not loaded: [ip_vs_wrr ip_vs_sh ip_vs_rr]
or no builtin kernel IPVS support was found: map[ip_vs:{} ip_vs_rr:{} ip_vs_sh:{} ip_vs_wrr:{} nf_conntrack_ipv4:{}].
However, these modules may be loaded automatically by kube-proxy if they are available on your system.
To verify IPVS support:

   Run "lsmod | grep 'ip_vs|nf_conntrack'" and verify each of the above modules are listed.

If they are not listed, you can use the following methods to load them:

1. For each missing module run 'modprobe $modulename' (e.g., 'modprobe ip_vs', 'modprobe ip_vs_rr', ...)
2. If 'modprobe $modulename' returns an error, you will need to install the missing module support for your kernel.

[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s.master localhost] and IPs [10.10.44.124 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s.master localhost] and IPs [10.10.44.124 127.0.0.1 ::1]
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s.master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.14]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 14.502579 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in ConfigMap "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
2de0c8c797c81fc74d6c2733240c9cea6391c0e35a9ea907f7b6e214aba7d175
[mark-control-plane] Marking the node k8s.master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s.master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.10.44.124:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:211ac3e73389f0740dcfbbf0ab0001af87bb3f4f834b22daf13821ec9cfac678 

Note: if the installed Kubernetes version does not match the version of the downloaded images, initialization fails with a "timed out waiting for the condition" error. If initialization fails partway through, or you want to change the configuration, run kubeadm reset to reset the node and then initialize again.
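
For example, to reset and retry with the same configuration:

kubeadm reset
kubeadm init --config=kubeadm.yml --experimental-upload-certs | tee kubeadm-init.log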

Configure kubectl

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

# For non-root users
chown $(id -u):$(id -g) $HOME/.kube/config

Verify success

kubectl get node

# If node information is printed, the master is up

Set up worker nodes with kubeadm

Prepare each node as follows (a sketch follows this list):

  • Set the hostname
  • Configure the package repository
  • Install the three tools (kubelet, kubeadm, kubectl)
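
A minimal sketch of this preparation, reusing the commands from the sections above (hostname per the table in "Test environment"; use each node's own name):

hostnamectl set-hostname k8s.node1
# Configure the same Kubernetes yum repository as on the master, then:
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet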

Then, as shown in the previous section, run

kubeadm join 10.10.44.124:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:211ac3e73389f0740dcfbbf0ab0001af87bb3f4f834b22daf13821ec9cfac678 

and the node joins the cluster.

Notes:

token
  • The token can be found in the log from the master installation
  • kubeadm token list prints the currently valid tokens
  • If the token has expired, create a new one with kubeadm token create

discovery-token-ca-cert-hash
  • The sha256 hash can be found in the log from the master installation
  • It can also be recomputed with:
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
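
If both a fresh token and its hash are needed, recent kubeadm releases can print a complete, ready-to-use join command; a sketch, assuming the flag is available in your version:

kubeadm token create --print-join-command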

Verify success

Back on the master node:

kubectl get nodes

# If the new node appears in the list, it has joined successfully

If a slave node's join configuration is wrong, run kubeadm reset on that node and then rerun kubeadm join. To remove a node from the cluster, run kubectl delete node {NAME} on the master.
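
A sketch of removing a worker cleanly (node name from the table above); draining first evicts its pods:

kubectl drain k8s.node1 --ignore-daemonsets --delete-local-data
kubectl delete node k8s.node1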

Configure the network

# Run this on the master node only
kubectl apply -f https://docs.projectcalico.org/v3.7/manifests/calico.yaml

watch kubectl get pods --all-namespaces
# Wait until all pods are Running; this can take a while, roughly 3 to 5 minutes
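
Once the Calico pods are up, the nodes should report Ready. A quick follow-up check (the pod label is taken from the calico.yaml manifest, so verify it for your Calico version):

kubectl get pods -n kube-system -l k8s-app=calico-node
kubectl get nodes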