环境准备
kube-apiserver:
- 使用节点本地 nginx 实现高可用;
- 关闭非安全端口 和匿名访问;
- 在安全端口 接收 https 请求;
- 严格的认证和授权策略 (x509、token、RBAC);
- 开启 bootstrap token 认证,支持 kubelet TLS bootstrapping;
- 使用 https 访问 kubelet、etcd,加密通信;
kube-controller-manager:
- 3 节点高可用;
- 关闭非安全端口,在安全端口 接收 https 请求;
- 使用 kubeconfig 访问 apiserver 的安全端口;
- 自动 approve kubelet 证书签名请求 (CSR),证书过期后自动轮转;
- 各 controller 使用自己的 ServiceAccount 访问 apiserver;
kube-scheduler:
- 3 节点高可用;
- 使用 kubeconfig 访问 apiserver 的安全端口;
kubelet:
- 使用 kubeadm 动态创建 bootstrap token,而不是在 apiserver 中静态配置;
- 使用 TLS bootstrap 机制自动生成 client 和 server 证书,过期后自动轮转;
- 在 KubeletConfiguration 类型的 JSON 文件配置主要参数;
- 关闭只读端口,在安全端口 接收 https 请求,对请求进行认证和授权,拒绝匿名访问和非授权访问;
- 使用 kubeconfig 访问 apiserver 的安全端口;
kube-proxy:
- 使用 kubeconfig 访问 apiserver 的安全端口;
- 在 KubeProxyConfiguration 类型的 JSON 文件配置主要参数;
- 使用 ipvs 代理模式;
组件版本
组件 | 版本 | 连接地址
rocky linux | | 官网
kubernetes | v1. | GitHub
etcd | v3. | GitHub
containerd | | GitHub
calico | v3. | GitHub
nginx | | 官网
三台机器混合部署 etcd、master 集群和 worker 集群。
rocky11 master node
rocky12 master node
rocky13 master node
vip
pod /
svc /
升级内核
#参考:http://elrepo.org/tiki/HomePage
grubby --info=ALL #查看所有内核
#1.安装GPG-KEY
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
#2.安装elrepo仓库
dnf install https://www.elrepo.org/elrepo-release-9.el9.elrepo.noarch.rpm #适用于el9
#3.载入elrepo-kernel元数据
dnf --disablerepo=\* --enablerepo=elrepo-kernel repolist
#4.查看可用内核包
dnf --disablerepo=\* --enablerepo=elrepo-kernel list kernel*
#5.安装内核
dnf --enablerepo=elrepo-kernel install kernel-lt kernel-lt-devel -y
#6.查看安装的包
rpm -qa|grep kernel-lt
grubby --default-kernel #查看默认的内核
grubby --info=ALL #查看所有内核
#7.修改启动顺序
grubby --set-default /boot/vmlinuz-.el9.x86_64
grubby --remove-kernel=kernel的路径 #删除不需要的内核
#8.重启机器
reboot
#9.查看内核版本
uname -r
系统设置
#1.安装依赖包
dnf install -y epel-release
dnf install -y gcc gcc-c++ net-tools lrzsz vim telnet make psmisc \
patch socat conntrack ipset ipvsadm sysstat libseccomp chrony perl curl wget git
#2.关闭防火墙
systemctl disable --now firewalld
#3.关闭selinux
getenforce #查看selinux状态
setenforce 0 #临时关闭selinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config #修改配置文件
#4.关闭swap分区
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
#5.文件句柄数配置
cat <<EOF >> /etc/security/limits.conf
* soft nofile
* hard nofile
* soft nproc
* hard nproc
* soft memlock unlimited
* hard memlock unlimited
EOF
#6.时间同步
dnf install -y chrony
vim /etc/chrony.conf
server 时间同步服务器 iburst #配置自己的时间服务器
systemctl enable --now chronyd #启动并设置自启动
timedatectl status #查看同步状态
timedatectl set-timezone Asia/Shanghai #时区不对的可调整系统TimeZone
timedatectl set-local-rtc 0 #硬件时钟使用UTC时间
#重启依赖于系统时间的服务
systemctl restart rsyslog
systemctl restart crond
#7.内核参数(原文此处内容缺失,以下为k8s节点常用参考值)
cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
vm.swappiness=0
EOF
sysctl -p /etc/sysctl.d/kubernetes.conf
#8.配置开机自动加载ipvs相关模块
cat <<EOF > /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
br_netfilter
EOF
#重启后查看
lsmod | grep -e ip_vs -e nf_conntrack
准备证书
使用cfssl为各个组件签发证书:在某台机器上统一生成,之后将证书拷贝到部署的主机上。
由于各个组件都需要配置证书,并且依赖CA证书来签发证书,所以我们首先要生成好CA证书以及后续的签发配置文件。GitHub
#1.下载
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.4/cfssl_1.6.4_linux_amd64 -O /usr/local/sbin/cfssl
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.4/cfssljson_1.6.4_linux_amd64 -O /usr/local/sbin/cfssljson
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.4/cfssl-certinfo_1.6.4_linux_amd64 -O /usr/local/sbin/cfssl-certinfo
chmod +x /usr/local/sbin/*
#2.签发CA证书
cd /opt/k8s
cat <<EOF > ca-csr.json
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size":
},
"names": [
{
"C": "CN",
"ST": "ShangHai",
"L": "ShangHai",
"O": "Kubernetes",
"OU": "System"
}
],
"ca": {
"expiry": "876000h"
}
}
EOF
CN:apiserver从证书中提取该字段作为请求的用户名,浏览器使用该字段验证网站是否合法;
O:apiserver从证书中提取该字段作为请求用户所属的组
kube-apiserver将提取的User、Group作为RBAC授权的用户标识
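CA签发完成后,可以用openssl查看证书的Subject,核对将被apiserver提取的CN和O字段(参考命令):
openssl x509 -in ca.pem -noout -subject
#输出形如:subject=C = CN, ST = ShangHai, L = ShangHai, O = Kubernetes, OU = System, CN = kubernetes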
#3.证书签发配置
cat <<EOF > ca-config.json
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"kubernetes": {
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
],
"expiry": "876000h"
}
}
}
}
EOF
signing:表示该证书可用于签名其它证书(生成的 ca.pem 证书中 CA=TRUE);
server auth:表示 client 可以用该证书对 server 提供的证书进行验证;
client auth:表示 server 可以用该证书对 client 提供的证书进行验证;
expiry: 876000h:证书有效期设置为 100 年;
#4.生成证书,后续组件都依赖CA证书签发证书
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
#生成后的文件
ca-config.json
ca-csr.json
ca-key.pem
ca.csr
ca.pem
#查看证书有效时间,100年
openssl x509 -in ca.pem -noout -text | grep 'Not'
Not Before: Jul :: GMT
Not After : Jun :: GMT
部署etcd
etcd是一个分布式、可靠的key-value存储系统,它不仅能用于存储,还提供共享配置及服务发现。
etcd应用场景
etcd比较多的应用场景是用于服务发现,服务发现解决的是分布式系统中最常见的问题之一,即在同一个分布式集群中的进程或服务如何才能找到对方并建立连接。etcd主要使用场景有:分布式系统配置管理,服务注册发现,选主,应用调度,分布式队列,分布式锁。
etcd如何保证一致性
etcd使用raft协议来维护集群内各个节点状态的一致性。每个etcd节点都维护了一个状态机,并且任意时刻至多存在一个有效的主节点。主节点处理所有来自客户端的写操作,通过 Raft 协议保证写操作对状态机的改动会可靠地同步到其他节点。
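下面用一个简单的读写示例直观感受这种一致性(参考示例,假设etcd已按后文部署完成;2379为etcd默认客户端端口,<etcd节点IP>为占位符,请按实际环境替换):
#向任意节点写入一个key
etcdctl --endpoints=https://<etcd节点IP>:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem put /demo/key hello
#从另一个节点读取,能读到同样的值,说明写操作已通过raft同步
etcdctl --endpoints=https://<另一节点IP>:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem get /demo/key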
准备证书
#1.签发证书
cat <<EOF > etcd-csr.json
{
"CN": "etcd",
"hosts": [
"",
"",
"",
"",
""
],
"key": {
"algo": "rsa",
"size":
},
"names": [
{
"C": "CN",
"ST": "ShangHai",
"L": "ShangHai",
"O": "Kubernetes",
"OU": "System"
}
]
}
EOF
#hosts字段指定授权使用该证书的etcd节点IP列表
#2.生成证书和私钥
cfssl gencert -ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes etcd-csr.json | cfssljson -bare etcd
#####
ca-config.json
ca-csr.json
ca-key.pem
ca.csr
ca.pem
etcd-csr.json
etcd-key.pem #etcd证书私钥
etcd.csr
etcd.pem #etcd证书
#3.把证书拷贝到各个节点
mkdir -p /etc/kubernetes/ssl && cp *.pem /etc/kubernetes/ssl/
ssh -n "mkdir -p /etc/kubernetes/ssl && exit"
ssh -n "mkdir -p /etc/kubernetes/ssl && exit"
scp -r /etc/kubernetes/ssl/*.pem :/etc/kubernetes/ssl/
scp -r /etc/kubernetes/ssl/*.pem :/etc/kubernetes/ssl/
部署服务
#1.解压二进制文件
#下载地址:https://github.com/etcd-io/etcd/releases
tar -zxvf etcd-v3.-linux-amd64.tar.gz
mv etcd-v3.-linux-amd64/etcd* /usr/local/sbin/
scp -r /usr/local/sbin/etcd* :/usr/local/sbin
scp -r /usr/local/sbin/etcd* :/usr/local/sbin
mkdir -p /app/etcd
ssh -n "mkdir -p /app/etcd && exit"
ssh -n "mkdir -p /app/etcd && exit"
#2.配置启动文件
cat <<EOF > /etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos
[Service]
Type=notify
WorkingDirectory=/app/etcd/
ExecStart=/usr/local/sbin/etcd \\
--name=rocky11 \\
--data-dir=/app/etcd \\
--cert-file=/etc/kubernetes/ssl/etcd.pem \\
--key-file=/etc/kubernetes/ssl/etcd-key.pem \\
--trusted-ca-file=/etc/kubernetes/ssl/ca.pem \\
--peer-cert-file=/etc/kubernetes/ssl/etcd.pem \\
--peer-key-file=/etc/kubernetes/ssl/etcd-key.pem \\
--peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \\
--peer-client-cert-auth \\
--client-cert-auth \\
--initial-cluster-token=etcd-cluster-0 \\
--listen-peer-urls=https://: \\
--advertise-client-urls=https://: \\
--initial-advertise-peer-urls=https://: \\
--listen-client-urls=https://:,https://: \\
--initial-cluster=rocky11=https://:,rocky12=https://:,rocky13=https://: \\
--initial-cluster-state=new \\
--auto-compaction-mode=periodic \\
--auto-compaction-retention=1 \\
--max-request-bytes= \\
--quota-backend-bytes= \\
--heartbeat-interval= \\
--election-timeout=
Restart=on-failure
RestartSec=5
LimitNOFILE=
[Install]
WantedBy=multi-user.target
EOF
#启动服务
systemctl enable --now etcd
验证etcd
#1.查看启动状态
etcdctl member list \
--cacert=/etc/kubernetes/ssl/ca.pem \
--cert=/etc/kubernetes/ssl/etcd.pem \
--key=/etc/kubernetes/ssl/etcd-key.pem
################显示如下################
1af68d968c7e3f22, started, rocky12, https://:, https://:, false
7508c5fadccb39e2, started, rocky11, https://:, https://:, false
e8d9a97b17f26476, started, rocky13, https://:, https://:, false
#2.验证服务状态
etcdctl endpoint health --endpoints="https://:,https://:,https://:" --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem
###########显示如下#############
https://: is healthy: successfully committed proposal: took = .307663ms
https://: is healthy: successfully committed proposal: took = .213301ms
https://: is healthy: successfully committed proposal: took = .741529ms
#3.查看领导者
etcdctl -w table --cacert=/etc/kubernetes/ssl/ca.pem \
--cert=/etc/kubernetes/ssl/etcd.pem \
--key=/etc/kubernetes/ssl/etcd-key.pem \
--endpoints="https://:,https://:,https://:" endpoint status
#########显示如下
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://: | 7508c5fadccb39e2 | | kB | true | false | 2 | | | |
| https://: | 1af68d968c7e3f22 | | kB | false | false | 2 | | | |
| https://: | e8d9a97b17f26476 | | kB | false | false | 2 | | | |
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
部署负载
kube-apiserver是无状态的,通过nginx进行代理访问,从而保证服务可用性
nginx做反向代理,后端连接所有的kube-apiserver实例,并提供健康检查和负载均衡功能;keepalived提供kube-apiserver对外服务的VIP;
nginx监听的端口需要与kube-apiserver的端口不同,避免冲突。
keepalived在运行过程中周期检查本机的nginx进程状态,如果检测到nginx进程异常,则触发重新选主的过程,VIP将漂移到新选出来的主节点,从而实现VIP的高可用。所有组件(如 kubectl、apiserver、controller-manager、scheduler 等)都通过VIP和nginx监听的端口访问kube-apiserver服务。
部署nginx
#1.安装nginx
yum install pcre zlib openssl nginx nginx-mod-stream -y
#2.nginx.conf
stream {
upstream apiserver {
hash $remote_addr consistent;
server : max_fails=3 fail_timeout=30s;
server : max_fails=3 fail_timeout=30s;
server : max_fails=3 fail_timeout=30s;
}
server {
listen ;
proxy_connect_timeout 10s;
proxy_timeout 120s;
proxy_pass apiserver;
}
}
#3.启动
systemctl enable --now nginx
netstat -lantup|grep nginx
tcp 0 .0: :* LISTEN /nginx: master
部署keepalived
#1.安装
yum install keepalived -y
#2.配置
cat <<EOF > /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id apiserver
}
#自定义监控脚本
vrrp_script chk_nginx {
script "/etc/keepalived/nginx_check.sh"
interval
weight 0
}
vrrp_instance VI_1 {
state MASTER
interface ens160
virtual_router_id
priority
advert_int 1
mcast_src_ip
authentication {
auth_type PASS
auth_pass kube
}
virtual_ipaddress {
}
track_script {
chk_nginx
}
}
EOF
#配置检测脚本
cat <<EOF > /etc/keepalived/nginx_check.sh
#!/bin/bash
pid=\`ps -ef|grep nginx|grep -v -E "grep|check"|wc -l\`
if [ \$pid -eq 0 ];then
systemctl start nginx
sleep 2
if [ \`ps -ef|grep nginx|grep -v -E "grep|check"|wc -l\` -eq 0 ];then
killall keepalived
fi
fi
EOF
chmod +x /etc/keepalived/nginx_check.sh
#3.启动
systemctl --now enable keepalived && systemctl status keepalived
ip add
1: ens160: mtu qdisc mq state UP group default qlen
link/ether :0c:::2f:cf brd ff:ff:ff:ff:ff:ff
altname enp3s0
inet / brd scope global noprefixroute ens160
valid_lft forever preferred_lft forever
inet / scope global ens160 ####
valid_lft forever preferred_lft forever
部署master
Master 节点的证书操作只需要做一次,将生成的证书拷到每个 Master 节点上以复用。
kubeconfig 主要是各组件以及用户访问 apiserver 的必要配置,包含 apiserver 地址、client 证书与 CA 证书等信息。
k8s组件的启动脚本参数参考官方文档
下载地址:https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-.md
#1.解压包
tar -zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
#2.拷贝文件
cp kube-apiserver kube-aggregator \
kube-controller-manager kube-scheduler \
kube-proxy kubeadm kubectl kubelet /usr/local/sbin/
#把二进制文件拷贝到所有k8s节点,包含master和node:/usr/local/sbin
scp /usr/local/sbin/kube* :/usr/local/sbin
scp /usr/local/sbin/kube* :/usr/local/sbin
kubectl
kubectl 是 kubernetes 集群的命令行管理工具,默认从 ~/.kube/config 文件读取 kube-apiserver 地址、证书、用户名等信息,如果没有配置,执行 kubectl 命令时可能会出错。
kube-apiserver 会提取证书中字段 CN 作为用户名,这里用户名叫 admin,但这只是个名称标识,它有什么权限呢?admin 是预置最高权限的用户名吗?不是的!不过 kube-apiserver 确实预置了一个最高权限的 ClusterRole,叫做 cluster-admin,还有个预置的 ClusterRoleBinding 将 cluster-admin 这个 ClusterRole 与 system:masters 这个用户组关联起来了,所以说我们给用户签发证书只要在 system:masters 这个用户组就拥有了最高权限。
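集群就绪后可以直接查看这条预置绑定来印证上面的说法:
kubectl get clusterrolebinding cluster-admin -o yaml
#subjects里是Group system:masters,roleRef指向ClusterRole cluster-admin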
~/.kube/config只需要部署一次,然后拷贝到所有节点。
#1.签发最高权限的证书
cat <<EOF > admin-csr.json
{
"CN": "admin",
"key": {
"algo": "rsa",
"size":
},
"names": [
{
"C": "CN",
"ST": "ShangHai",
"L": "ShangHai",
"O": "system:masters",
"OU": "System"
}
]
}
EOF
#O: 为system:masters,kube-apiserver收到该证书后将请求的Group设置为system:masters;
#预定义的ClusterRoleBinding cluster-admin将Group system:masters与ClusterRole cluster-admin绑定,该Role授予所有API的权限
#该证书只会被kubectl当做client证书使用,所以hosts没写
#2.生成公钥和私钥
cfssl gencert -ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
admin-csr.json | cfssljson -bare admin
#忽略告警
This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v., from the CA/Browser Forum (https://cabforum.org);
specifically, section ("Information Requirements")
#生成证书文件
admin-key.pem #管理员证书私钥
admin.pem #管理员证书公钥
cp admin*.pem /etc/kubernetes/ssl
scp admin*.pem :/etc/kubernetes/ssl
scp admin*.pem :/etc/kubernetes/ssl
#3.创建kubectl的kubeconfig
#apiserver的VIP是:
#设置集群参数
kubectl config set-cluster kubernetes \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://: \
--kubeconfig=kubectl.kubeconfig
#设置客户端认证参数
kubectl config set-credentials admin \
--client-certificate=admin.pem \
--client-key=admin-key.pem \
--embed-certs=true \
--kubeconfig=kubectl.kubeconfig
#设置上下文参数
kubectl config set-context kubernetes \
--cluster=kubernetes \
--user=admin \
--kubeconfig=kubectl.kubeconfig
#设置默认上下文
kubectl config use-context kubernetes --kubeconfig=kubectl.kubeconfig
#生成文件:kubectl.kubeconfig
#为所有kubectl机器分发文件 -- 拷贝成 ~/.kube/config
mkdir -p /root/.kube/
cp -rp kubectl.kubeconfig /root/.kube/config
scp kubectl.kubeconfig :/root/.kube/config
scp kubectl.kubeconfig :/root/.kube/config
- --certificate-authority:验证 kube-apiserver 证书的根证书;
- --client-certificate、--client-key:刚生成的 admin 证书和私钥,与 kube-apiserver https 通信时使用;
- --embed-certs=true:将 ca.pem 和 admin.pem 证书内容嵌入到生成的 kubectl.kubeconfig 文件中(否则,写入的是证书文件路径,后续拷贝 kubeconfig 到其它机器时,还需要单独拷贝证书文件,不方便。);
- --server:指定 kube-apiserver 的地址,这里指向 VIP 和 nginx 代理监听的端口;
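生成后可以检查kubeconfig内容,已嵌入的证书会显示为DATA+OMITTED/REDACTED:
kubectl config view --kubeconfig=kubectl.kubeconfig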
apiserver
kube-apiserver是k8s的访问核心,所有k8s组件和用户的kubectl操作都会请求kube-apiserver,通常启用tls证书认证。证书里需要包含kube-apiserver所有可能被访问的地址,这样client校验kube-apiserver证书时才会通过:集群内的Pod一般通过kube-apiserver的Service名称访问;集群外一般通过Master节点IP、CLUSTER IP和Master负载均衡器(VIP)地址访问。
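后文签发完成后,可用下面的参考命令确认apiserver证书的SAN列表覆盖了所有可能的访问地址:
openssl x509 -in apiserver.pem -noout -text | grep -A1 'Subject Alternative Name'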
准备证书
#1.准备CSR文件,hosts字段指定授权使用该证书的IP或域名列表
cat <<EOF > apiserver-csr.json
{
"CN": "kubernetes",
"hosts": [
"",
"",
"",
"",
"",
"",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size":
},
"names": [
{
"C": "CN",
"ST": "ShangHai",
"L": "ShangHai",
"O": "Kubernetes",
"OU": "System"
}
]
}
EOF
#2.生成证书和密钥
cfssl gencert -ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
apiserver-csr.json | cfssljson -bare apiserver
#3.生成2个重要文件
apiserver-key.pem #证书密钥
apiserver.pem #证书
#4.把apiserver公钥和私钥拷贝到每个master节点
cp apiserver*.pem /etc/kubernetes/ssl
scp apiserver*.pem :/etc/kubernetes/ssl/
scp apiserver*.pem :/etc/kubernetes/ssl/
#5.加密配置文件
cat <<EOF > /etc/kubernetes/encryption-config.yaml
kind: EncryptionConfig
apiVersion: v1
resources:
- resources:
- secrets
providers:
- aescbc:
keys:
- name: key1
secret: $(head -c 32 /dev/urandom | base64)
- identity: {}
EOF
scp /etc/kubernetes/encryption-config.yaml :/etc/kubernetes/
scp /etc/kubernetes/encryption-config.yaml :/etc/kubernetes/
#6.metrics-server使用的证书
cat <<EOF > metrics-server-csr.json
{
"CN": "aggregator",
"key": {
"algo": "rsa",
"size":
},
"names": [
{
"C": "CN",
"ST": "ShangHai",
"L": "ShangHai",
"O": "Kubernetes",
"OU": "System"
}
]
}
EOF
#7.生成证书和密钥
cfssl gencert -ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
metrics-server-csr.json | cfssljson -bare metrics-server
cp metrics-server*.pem /etc/kubernetes/ssl
scp metrics-server*.pem :/etc/kubernetes/ssl/
scp metrics-server*.pem :/etc/kubernetes/ssl/
部署服务
#1.启动服务
cat <<EOF > /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes APIServer
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/sbin/kube-apiserver \\
--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
--anonymous-auth=false \\
--secure-port= \\
--bind-address= \\
--advertise-address= \\
--authorization-mode=Node,RBAC \\
--runtime-config=api/all=true \\
--enable-bootstrap-token-auth \\
--max-mutating-requests-inflight= \\
--max-requests-inflight= \\
--delete-collection-workers=2 \\
--service-node-port-range= \\
--service-cluster-ip-range=/ \\
--service-account-issuer=api \\
--service-account-key-file=/etc/kubernetes/ssl/ca.pem \\
--service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \\
--client-ca-file=/etc/kubernetes/ssl/ca.pem \\
--tls-cert-file=/etc/kubernetes/ssl/apiserver.pem \\
--tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem \\
--etcd-cafile=/etc/kubernetes/ssl/ca.pem \\
--etcd-certfile=/etc/kubernetes/ssl/apiserver.pem \\
--etcd-keyfile=/etc/kubernetes/ssl/apiserver-key.pem \\
--etcd-servers=https://:,https://:,https://: \\
--kubelet-timeout=10s \\
--kubelet-certificate-authority=/etc/kubernetes/ssl/ca.pem \\
--kubelet-client-key=/etc/kubernetes/ssl/apiserver-key.pem \\
--kubelet-client-certificate=/etc/kubernetes/ssl/apiserver.pem \\
--encryption-provider-config=/etc/kubernetes/encryption-config.yaml \\
--proxy-client-cert-file=/etc/kubernetes/ssl/metrics-server.pem \\
--proxy-client-key-file=/etc/kubernetes/ssl/metrics-server-key.pem \\
--requestheader-allowed-names="" \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-username-headers=X-Remote-User \\
--requestheader-extra-headers-prefix="X-Remote-Extra-" \\
--requestheader-client-ca-file=/etc/kubernetes/ssl/ca.pem \\
--allow-privileged=true \\
--apiserver-count=3 \\
--audit-log-maxage= \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize= \\
--audit-log-truncate-enabled \\
--audit-log-path=/var/log/kubernetes/kube-apiserver/apiserver.log \\
--event-ttl=168h \\
--v=2
Restart=on-failure
RestartSec=
Type=notify
LimitNOFILE=
[Install]
WantedBy=multi-user.target
EOF
#如果apiserver机器上没运行kube-proxy,则还需要添加--enable-aggregator-routing=true参数
mkdir -p /var/log/kubernetes/kube-apiserver
#启动kube-apiserver
systemctl daemon-reload \
&& systemctl start kube-apiserver \
&& systemctl enable kube-apiserver \
&& systemctl status kube-apiserver
#2.检查
netstat -lntup | grep kube-apiserve
####显示如下
tcp6 0 0 ::: :::* LISTEN /kube-apiserver
kubectl cluster-info
####显示如下
Kubernetes control plane is running at https://:
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
###访问有返回说明正常
curl -k https://:/
授权访问kubelet
kube-apiserver有些情况下也会访问kubelet,比如获取metrics、查看容器日志或登录容器。这时kubelet作为server,kube-apiserver作为client:kubelet监听https,kube-apiserver经过证书认证访问kubelet,但还需要经过授权才能成功调用接口。我们通过创建RBAC规则授权kube-apiserver访问kubelet。
#参考写法:将apiserver证书里的用户(CN=kubernetes)绑定到内置的ClusterRole system:kubelet-api-admin
kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes
controller-manager
该集群包含 3 个节点,启动后将通过竞争选举机制产生一个 leader 节点,其它节点为阻塞状态。当 leader 节点不可用时,阻塞的节点将再次进行选举产生新的 leader 节点,从而保证服务的可用性。
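较新版本的k8s默认使用Lease资源作为选举锁,部署完成后可以用下面的参考命令查看当前leader(holderIdentity字段即leader所在节点):
kubectl -n kube-system get lease kube-controller-manager -o yaml | grep holderIdentity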
准备证书
#1.准备CSR文件
cat <<EOF > kube-controller-manager-csr.json
{
"CN": "system:kube-controller-manager",
"key": {
"algo": "rsa",
"size":
},
"hosts": [
"",
"",
"",
"",
""
],
"names": [
{
"C": "CN",
"ST": "ShangHai",
"L": "ShangHai",
"O": "system:kube-controller-manager",
"OU": "System"
}
]
}
EOF
#hosts列表包含所有kube-controller-manager节点IP,
#CN和O 为system:kube-controller-manager
#kubernetes内置的ClusterRoleBindings system:kube-controller-manager赋予kube-controller-manager工作所需的权限。
#2.生成证书
cfssl gencert -ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
#3.两个重要的文件
kube-controller-manager-key.pem #kube-controller-manager证书密钥
kube-controller-manager.pem #kube-controller-manager证书
#拷贝到所有master节点
cp kube-controller-manager*.pem /etc/kubernetes/ssl/
scp kube-controller-manager*.pem :/etc/kubernetes/ssl/
scp kube-controller-manager*.pem :/etc/kubernetes/ssl/
#apiserver有多个实例,统一通过前面的VIP地址和nginx代理的端口https://:访问
#4.创建kubeconfig文件
kubectl config set-cluster kubernetes \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://: \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-credentials system:kube-controller-manager \
--client-certificate=kube-controller-manager.pem \
--client-key=kube-controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-context system:kube-controller-manager \
--cluster=kubernetes \
--user=system:kube-controller-manager \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
#生成文件:kube-controller-manager.kubeconfig拷贝到所有master节点
cp kube-controller-manager.kubeconfig /etc/kubernetes/
scp kube-controller-manager.kubeconfig :/etc/kubernetes/
scp kube-controller-manager.kubeconfig :/etc/kubernetes/
部署服务
#1.启动服务
cat <<EOF > /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/sbin/kube-controller-manager \\
--bind-address= \\
--service-cluster-ip-range=/ \\
--master=https://: \\
--concurrent-service-syncs=2 \\
--concurrent-deployment-syncs= \\
--concurrent-gc-syncs= \\
--controllers=*,bootstrapsigner,tokencleaner \\
--cluster-cidr=/ \\
--cluster-name=kubernetes \\
--client-ca-file=/etc/kubernetes/ssl/ca.pem \\
--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \\
--cluster-signing-duration=876000h \\
--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/etc/kubernetes/ssl/ca.pem \\
--tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \\
--tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \\
--kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
--authentication-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
--authorization-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
--requestheader-allowed-names="" \\
--requestheader-client-ca-file=/etc/kubernetes/ssl/ca.pem \\
--requestheader-extra-headers-prefix="X-Remote-Extra-" \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-username-headers=X-Remote-User \\
--use-service-account-credentials=true \\
--feature-gates=RotateKubeletServerCertificate=true \\
--horizontal-pod-autoscaler-sync-period=10s \\
--kube-api-qps= \\
--kube-api-burst= \\
--leader-elect=true \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
#2.启动kube-controller-manager
systemctl daemon-reload \
&& systemctl start kube-controller-manager \
&& systemctl enable kube-controller-manager \
&& systemctl status kube-controller-manager
#3.检查
netstat -lantup|grep kube-control
tcp6 ::: :::* LISTEN /kube-controlle
tcp6 ::: :::* LISTEN /kube-controlle
scheduler
该集群包含 3 个节点,启动后将通过竞争选举机制产生一个 leader 节点,其它节点为阻塞状态。当 leader 节点不可用后,剩余节点将再次进行选举产生新的 leader 节点,从而保证服务的可用性。
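与controller-manager一样,可通过Lease资源查看当前的leader(参考命令):
kubectl -n kube-system get lease kube-scheduler -o yaml | grep holderIdentity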
准备证书
#1.准备CSR文件
cat <<EOF > kube-scheduler-csr.json
{
"CN": "system:kube-scheduler",
"hosts": [
"",
"",
"",
"",
""
],
"key": {
"algo": "rsa",
"size":
},
"names": [
{
"C": "CN",
"ST": "ShangHai",
"L": "ShangHai",
"O": "system:kube-scheduler",
"OU": "System"
}
]
}
EOF
#hosts:列表包含所有kube-scheduler节点IP;
#CN和O:为system:kube-scheduler,
#kubernetes内置的ClusterRoleBindings system:kube-scheduler将赋予kube-scheduler工作所需的权限。
#2.生成证书
cfssl gencert -ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-scheduler-csr.json | cfssljson -bare kube-scheduler
#3.生成两个重要文件
kube-scheduler-key.pem #kube-scheduler证书密钥
kube-scheduler.pem #kube-scheduler证书公钥
##拷贝到所有master节点
cp kube-scheduler*.pem /etc/kubernetes/ssl/
scp kube-scheduler*.pem :/etc/kubernetes/ssl/
scp kube-scheduler*.pem :/etc/kubernetes/ssl/
#apiserver https://:
#4.创建kubeconfig文件
kubectl config set-cluster kubernetes \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://: \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler \
--client-certificate=kube-scheduler.pem \
--client-key=kube-scheduler-key.pem \
--embed-certs=true \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config set-context system:kube-scheduler \
--cluster=kubernetes \
--user=system:kube-scheduler \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
#5.生成文件kube-scheduler.kubeconfig
cp kube-scheduler.kubeconfig /etc/kubernetes/
scp kube-scheduler.kubeconfig :/etc/kubernetes/
scp kube-scheduler.kubeconfig :/etc/kubernetes/
部署服务
#1.systemd文件
cat <<EOF > /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/sbin/kube-scheduler \\
--bind-address= \\
--kube-api-burst= \\
--kube-api-qps= \\
--kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \\
--authentication-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \\
--authorization-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \\
--client-ca-file=/etc/kubernetes/ssl/ca.pem \\
--tls-cert-file=/etc/kubernetes/ssl/kube-scheduler.pem \\
--tls-private-key-file=/etc/kubernetes/ssl/kube-scheduler-key.pem \\
--requestheader-allowed-names="" \\
--requestheader-client-ca-file=/etc/kubernetes/ssl/ca.pem \\
--requestheader-extra-headers-prefix="X-Remote-Extra-" \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-username-headers=X-Remote-User \\
--leader-elect=true \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
#2.启动服务
systemctl daemon-reload \
&& systemctl enable kube-scheduler \
&& systemctl start kube-scheduler \
&& systemctl status kube-scheduler
#3.检查
netstat -lantup|grep kube-schedule
tcp .:.:* LISTEN /kube-scheduler
tcp6 ::: :::* LISTEN /kube-scheduler
kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.+
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
部署worker
Worker节点主要安装kubelet来管理、运行工作负载,安装kube-proxy来实现Service的通信与负载均衡(Master节点也可以作为特殊的Worker节点来部署关键服务)。
containerd
kubernetes在新版本中弃用并移除了docker-shim组件,容器运行时也从docker转换到了containerd
containerd 实现了kubernetes的Container Runtime Interface (CRI)接口,提供容器运行时核心功能,如镜像管理、容器管理等,相比dockerd更加简单、健壮和可移植。
#1.下载二进制包
wget https://github.com/containerd/nerdctl/releases/download/v1./nerdctl-full--linux-amd64.tar.gz
#2.配置启动文件
cat <<EOF > /etc/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target
[Service]
ExecStartPre=/sbin/modprobe overlay
ExecStart=/usr/local/sbin/containerd
Restart=always
RestartSec=5
Delegate=yes
KillMode=process
OOMScoreAdjust=-
LimitNOFILE=
LimitNPROC=infinity
LimitCORE=infinity
Type=notify
TasksMax=infinity
[Install]
WantedBy=multi-user.target
EOF
#3.生成config.toml
mkdir -p /etc/containerd
##自动生成配置,参考我修改好的配置文件
containerd config default > /etc/containerd/config.toml
#4.修改config.toml
sandbox_image = "registry.k8s.io/pause:"
####镜像如果拉取不到可改成阿里源的####
sandbox_image = "registry.aliyuncs.com/google_containers/pause:"
sed -i 's/registry.k8s.io\/pause:/registry.aliyuncs.com\/google_containers\/pause:/' /etc/containerd/config.toml
#5.启动
systemctl daemon-reload \
&& systemctl start containerd \
&& systemctl enable containerd \
&& systemctl status containerd
nerdctl
nerdctl用来兼容docker cli,可以像docker命令一样来管理本地的镜像和容器
二进制包拷贝到 /usr/local/sbin 即可使用,nerdctl 默认连接 containerd 监听的
unix:///run/containerd/containerd.sock
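几个常用的参考用法(k8s管理的镜像和容器位于k8s.io这个namespace下):
nerdctl --namespace k8s.io images #查看k8s拉取的镜像
nerdctl --namespace k8s.io ps -a #查看k8s创建的容器
nerdctl run --rm alpine echo ok #像docker一样运行一个测试容器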
kubelet
kubelet运行在每个worker节点上,接收kube-apiserver发送的请求,管理Pod容器,执行交互式命令,如 exec、run、logs等。kubelet启动时自动向kube-apiserver注册节点信息,内置的cadvisor统计和监控节点的资源使用情况。
kubeconfig
bootstrap token用于kubelet自动请求签发证书,以Secret形式存储,不需要事先给apiserver配置静态token,这样也易于管理。
创建bootstrap token后,我们用它来创建kubelet-bootstrap.kubeconfig,以供后面部署Worker节点使用(kubelet使用kubelet-bootstrap.kubeconfig自动申请创建证书)。
export BOOTSTRAP_TOKEN=$(kubeadm token create \
--description kubelet-bootstrap-token \
--groups system:bootstrappers:kubelet \
--kubeconfig ~/.kube/config)
#查看创建的token
kubeadm token list --kubeconfig ~/.kube/config
####
x8cwv4.hqo4ju9kalaecqcj 23h -19T13::+: authentication,signing kubelet-bootstrap-token system:bootstrappers:kubelet
#查看token关联的Secret
kubectl get secrets -n kube-system|grep bootstrap
###
bootstrap-token-dv49cd bootstrap.kubernetes.io/token 7 52s
kubectl config set-cluster bootstrap \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://: \
--kubeconfig=kubelet-bootstrap.kubeconfig
kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=kubelet-bootstrap.kubeconfig
kubectl config set-context bootstrap \
--cluster=bootstrap \
--user=kubelet-bootstrap \
--kubeconfig=kubelet-bootstrap.kubeconfig
kubectl config use-context bootstrap \
--kubeconfig=kubelet-bootstrap.kubeconfig
#生成文件kubelet-bootstrap.kubeconfig,拷贝到所有节点
cp kubelet-bootstrap.kubeconfig /etc/kubernetes/
scp kubelet-bootstrap.kubeconfig :/etc/kubernetes/
scp kubelet-bootstrap.kubeconfig :/etc/kubernetes/
部署服务
#1.启动配置 config.yaml
cat <<EOF > /etc/kubernetes/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: ""
staticPodPath: ""
syncFrequency: 1m
fileCheckFrequency: 20s
httpCheckFrequency: 20s
staticPodURL: ""
port:
readOnlyPort: 0
rotateCertificates: true
serverTLSBootstrap: true
authentication:
anonymous:
enabled: false
webhook:
enabled: true
cacheTTL: 2m0s
x509:
clientCAFile: "/etc/kubernetes/ssl/ca.pem"
authorization:
mode: Webhook
webhook:
cacheAuthorizedTTL: 5m0s
cacheUnauthorizedTTL: 30s
registryPullQPS: 0
registryBurst:
eventRecordQPS: 0
eventBurst:
enableDebuggingHandlers: true
enableContentionProfiling: true
healthzPort:
healthzBindAddress: ""
clusterDomain: "cluster.local"
clusterDNS:
- ""
nodeStatusUpdateFrequency: 10s
nodeStatusReportFrequency: 1m
imageMinimumGCAge: 2m
imageGCHighThresholdPercent:
imageGCLowThresholdPercent:
volumeStatsAggPeriod: 1m
kubeletCgroups: ""
systemCgroups: ""
cgroupRoot: ""
cgroupsPerQOS: true
cgroupDriver: cgroupfs
runtimeRequestTimeout: 10m
hairpinMode: promiscuous-bridge
maxPods:
podCIDR: "/"
podPidsLimit: -1
resolvConf: "/etc/resolv.conf"
maxOpenFiles:
kubeAPIQPS:
kubeAPIBurst:
serializeImagePulls: false
evictionHard:
memory.available: "100Mi"
nodefs.available: "%"
nodefs.inodesFree: "5%"
imagefs.available: "%"
evictionSoft: {}
enableControllerAttachDetach: true
failSwapOn: true
containerLogMaxSize: 20Mi
containerLogMaxFiles:
systemReserved: {}
kubeReserved: {}
systemReservedCgroup: ""
kubeReservedCgroup: ""
enforceNodeAllocatable: ["pods"]
EOF
#2.配置kubelet.service
cat <<EOF > /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service
[Service]
WorkingDirectory=/app/kubelet
ExecStart=/usr/local/sbin/kubelet \\
--runtime-request-timeout=15m \\
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \\
--bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \\
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
--config=/etc/kubernetes/kubelet-config.yaml \\
--cert-dir=/etc/kubernetes/ssl \\
--hostname-override= \\
--register-node=true \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
#3.启动服务
mkdir -p /app/kubelet
systemctl daemon-reload
systemctl enable --now kubelet.service && systemctl status kubelet.service
approveCSR
CSR是什么?
CSR是Certificate Signing Request的英文缩写,即证书签名请求文件。证书申请者在申请数字证书时,由CSP(加密服务提供者)在生成私钥的同时生成证书请求文件;申请者把CSR文件提交给证书颁发机构后,证书颁发机构使用其根证书私钥签名,生成证书文件,也就是颁发给用户的证书。
节点kubelet通过Bootstrap Token调用apiserver CSR API请求签发证书,kubelet通过bootstrap token认证后会在system:bootstrappers用户组里,我们还需要给它授权调用CSR API,为这个用户组绑定预定义的system:node-bootstrapper这个ClusterRole就可以。
kubelet启动时查找配置的--kubeconfig文件是否存在,如果不存在则使用--bootstrap-kubeconfig向kube-apiserver发送证书签名请求 (CSR)。kube-apiserver收到CSR请求后,对其中的Token进行认证(事先使用 kubeadm 创建的 token),认证通过后将请求的user设置为system:bootstrap:<token-id>,group设置为system:bootstrappers,这一过程称为Bootstrap Token Auth。
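可以用openssl(或前面下载的cfssl-certinfo)查看一个CSR文件的内容,例如之前生成的etcd.csr:
openssl req -in etcd.csr -noout -subject
cfssl-certinfo -csr etcd.csr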
#创建一个clusterrolebinding,将 group system:bootstrappers和clusterrole system:node-bootstrapper绑定:
#参考写法(按上面的描述补全):
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers
#查看CSR列表
kubectl get csr
###显示如下###
NAME AGE SIGNERNAME REQUESTOR CONDITION
csr-54b9r 42s kubernetes.io/kube-apiserver-client-kubelet system:bootstrap:y6mj17 Pending
csr-bmvfm 43s kubernetes.io/kube-apiserver-client-kubelet system:bootstrap:y6mj17 Pending
csr-szxrd 43s kubernetes.io/kube-apiserver-client-kubelet system:bootstrap:y6mj17 Pending
#自动approve csr请求
#参考写法:绑定内置ClusterRole,自动approve bootstrap用户组的client证书CSR
kubectl create clusterrolebinding auto-approve-csrs-for-group \
  --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient \
  --group=system:bootstrappers
#节点client证书到期续签的自动approve
kubectl create clusterrolebinding node-client-cert-renewal \
  --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeclient \
  --group=system:nodes
kubectl get csr|grep Pending
csr-6vs4g 2m16s kubernetes.io/kubelet-serving system:node: Pending
csr-pzbph 2m23s kubernetes.io/kubelet-serving system:node: Pending
csr-zpmwz 2m23s kubernetes.io/kubelet-serving system:node: Pending
kubectl certificate approve <csr名称> #批准证书签名请求
kubectl certificate deny <csr名称> #拒绝证书签名请求
#批量手动批准
kubectl get csr|grep 'Pending' | awk 'NR>0{print $1}'| xargs kubectl certificate approve
#查看node,因为没有装网络插件,节点状态会是NotReady
kubectl get node
NAME STATUS ROLES AGE VERSION
 NotReady 19s v1.
 NotReady 35s v1.
 NotReady 8s v1.
kube-proxy
kube-proxy运行在所有worker节点上,它监听apiserver中service和endpoint的变化情况,创建路由规则以提供服务IP和负载均衡功能。
准备证书
cat <<EOF > kube-proxy-csr.json
{
"CN": "system:kube-proxy",
"key": {
"algo": "rsa",
"size":
},
"names": [
{
"C": "CN",
"ST": "ShangHai",
"L": "ShangHai",
"O": "Kubernetes",
"OU": "System"
}
]
}
EOF
#CN:指定该证书的User为system:kube-proxy;
#预定义的RoleBinding system:node-proxier将User system:kube-proxy与Role system:node-proxier绑定,该Role授予了调用kube-apiserver Proxy相关API的权限。
#生成证书和私钥
cfssl gencert -ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-proxy-csr.json | cfssljson -bare kube-proxy
#生成2个文件
kube-proxy-key.pem
kube-proxy.pem
cp kube-proxy*.pem /etc/kubernetes/ssl/
scp kube-proxy*.pem :/etc/kubernetes/ssl/
scp kube-proxy*.pem :/etc/kubernetes/ssl/
#创建kubeconfig文件
kubectl config set-cluster kubernetes \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://: \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
--client-certificate=kube-proxy.pem \
--client-key=kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
cp kube-proxy.kubeconfig /etc/kubernetes/
scp kube-proxy.kubeconfig :/etc/kubernetes/
scp kube-proxy.kubeconfig :/etc/kubernetes/
部署服务
#1.配置文件
cat <<EOF > /etc/kubernetes/kube-proxy-config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
burst:
kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
qps:
clusterCIDR: /
bindAddress:
hostnameOverride:
healthzBindAddress: :
metricsBindAddress: :
enableProfiling: true
mode: "ipvs"
portRange: ""
iptables:
masqueradeAll: false
ipvs:
scheduler: rr
excludeCIDRs: []
EOF
#2.kube-proxy.service
cat <<EOF > /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kube-Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
WorkingDirectory=/app/kube-proxy
ExecStart=/usr/local/sbin/kube-proxy \\
--config=/etc/kubernetes/kube-proxy-config.yaml \\
--v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=
[Install]
WantedBy=multi-user.target
EOF
#3.启动服务
mkdir -p /app/kube-proxy
systemctl daemon-reload \
&& systemctl enable kube-proxy \
&& systemctl restart kube-proxy \
&& systemctl status kube-proxy
#4.检查
netstat -lantup|grep kube-proxy
tcp 0 .: :* LISTEN /kube-proxy
tcp 0 .: :* LISTEN /kube-proxy
ipvsadm -ln
IP Virtual Server version (size=)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP : rr
-> : Masq 1 0 0
-> : Masq 1 0 0
-> : Masq 1 0 0
calico
所有的节点都需要安装calico,主要目的是让跨主机的容器能够互相通信,这也是kubernetes集群网络的基础和保障。
calico使用IPIP或BGP技术(默认为IPIP)为各节点创建一个可以互通的Pod网络。参考官网
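IPIP与BGP模式由calico.yaml里calico-node容器的环境变量控制,下面是清单中相关片段的参考(Always表示始终走IPIP隧道,Never表示走BGP直连):
- name: CALICO_IPV4POOL_IPIP
  value: "Always"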
#1.下载清单
wget https://raw.githubusercontent.com/projectcalico/calico/v3./manifests/calico.yaml
#2.修改pod网络
- name: CALICO_IPV4POOL_CIDR
value: "/"
#3.创建
kubectl apply -f calico.yaml
#4.查看
kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-85578c44bf-9vxn6 1/1 Running 0 2m
kube-system calico-node-4qghn 1/1 Running 0 2m
kube-system calico-node-6bc44 1/1 Running 0 2m
kube-system calico-node-77bf8 1/1 Running 0 2m
coredns
CoreDNS是一个DNS服务,而DNS是一种常见的服务发现手段,很多开源项目和工程师都使用CoreDNS为集群提供服务发现的功能,Kubernetes就在集群中使用CoreDNS解决服务发现的问题。
k8s版本包里提供了dns的yaml文件,在kubernetes-src/cluster/addons/dns目录里。
#1.修改配置
sed -i -e "s/__DNS__DOMAIN__/cluster.local/g" \
-e "s/__DNS__MEMORY__LIMIT__/500Mi/g" \
-e "s/__DNS__SERVER__//g" coredns.yaml.base
#镜像改成阿里云
image: registry.aliyuncs.com/google_containers/coredns:v1.
#2.创建服务
mv coredns.yaml.base coredns.yaml
kubectl create -f coredns.yaml -n kube-system
#3.查看pod
kubectl get pod -n kube-system
kube-system coredns-5bfcdcfd96-pgttd 1/1 running 0 11s
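可以起一个临时Pod验证DNS解析是否正常(参考命令,busybox镜像标签仅为示例):
kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default
#能解析出kubernetes这个service的ClusterIP即说明CoreDNS工作正常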
验证集群
#1.检查节点状态
kubectl get node
NAME STATUS ROLES AGE VERSION
 Ready 6m v1.
 Ready 5m v1.
 Ready 6m v1.
#2.部署服务
#参考验证方式(名称、镜像均为示例)
kubectl create deployment nginx-test --image=nginx:alpine --replicas=3
kubectl expose deployment nginx-test --port=80
kubectl get pod -o wide #pod分布在各节点且Running
kubectl get svc nginx-test #取得CLUSTER-IP
curl <CLUSTER-IP> #能返回nginx欢迎页说明网络与转发正常