Video version
https://www.bilibili.com/video/BV1qj411T7d5/
Background
I've been meaning to set up k8s for a while, but my impression from years ago is that it's complicated. Today I'll start with k3s, the lightweight edition.
Installation prep
According to the docs, both online and air-gapped installs are supported. The air-gapped install needs a private image registry, but the easiest way to stand up a registry is to run it as a container... a chicken-and-egg problem, so I'll try the online install first.
Resources at home are limited, so this will be a bare-bones setup of 2-3 nodes: start with a single server and add workers later.
Environment requirements
Hardware: the docs suggest a minimum of 1 CPU and 1 GB of RAM per node; I'll give each VM 2 CPUs and 2 GB, with two machines in total.
OS: the docs call for Ubuntu 22.04 (my test VMs actually run Ubuntu 20.04.5 LTS, as the node listing later shows, and that worked too).
Node names must be unique within the cluster. You can use --with-node-id to append a random suffix to each node's name, or give every node a unique name via the --node-name command-line flag or the $K3S_NODE_NAME environment variable. A hedged sketch of what that looks like with the install script is below.
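A sketch only, with placeholder node names: the install script accepts extra k3s arguments through INSTALL_K3S_EXEC, and K3S_-prefixed variables end up in the generated service environment file, so any of the following should do it.

# append a random id to the hostname-derived node name
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--with-node-id" sh -
# or set an explicit, unique name (placeholder value)
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--node-name k3s-node-1" sh -
# or the environment-variable form
curl -sfL https://get.k3s.io | K3S_NODE_NAME=k3s-node-1 sh -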
Networking: port 6443 on the server must be reachable from every node. With the default Flannel VXLAN backend, nodes talk to each other over UDP 8472; with the Flannel WireGuard backend they use UDP 51820 (IPv4) and 51821 (IPv6).
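My lab hosts have no host firewall enabled, but if yours do, rules along these lines would be needed. This is a sketch assuming Ubuntu's ufw and the default VXLAN backend:

# on the server: supervisor / Kubernetes API port, reachable from all nodes
sudo ufw allow 6443/tcp
# on every node: Flannel VXLAN overlay traffic
sudo ufw allow 8472/udp
# on every node: kubelet port, required if metrics-server is used
sudo ufw allow 10250/tcp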
Installation
k3s1@k3s1:~$ curl -sfL https://get.k3s.io | sh -
[sudo] password for k3s1:
[INFO] Finding release for channel stable
[INFO] Using v1.25.6+k3s1 as release
[INFO] Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.25.6+k3s1/sha256sum-amd64.txt
[INFO] Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.25.6+k3s1/k3s
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Skipping installation of SELinux RPM
[INFO] Creating /usr/local/bin/kubectl symlink to k3s
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Creating /usr/local/bin/ctr symlink to k3s
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s.service
[INFO] systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
The server (master) node install output is shown above; it looks like it succeeded...
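Before poking at the processes, a quick sanity check (a minimal sketch, assuming systemd) is to ask systemd how the unit is doing:

# confirm the k3s unit is active and skim its recent logs
sudo systemctl status k3s
sudo journalctl -u k3s --no-pager | tail -n 20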
The process list is below; the /run/k3s/containerd/containerd.sock socket shows that containerd is already running.
k3s1@k3s1:~$ ps aux|grep k3s
k3s1      1757  0.0  0.4  19152   9796 ?      Ss   11:25  0:00 /lib/systemd/systemd --user
k3s1      1759  0.0  0.2 105292   4420 ?      S    11:25  0:00 (sd-pam)
k3s1      1769  0.0  0.2   8264   5192 tty1   S+   11:25  0:00 -bash
root      2406  0.0  0.4  13788   9016 ?      Ss   11:29  0:00 sshd: k3s1 [priv]
k3s1      2478  0.0  0.3  13924   6000 ?      R    11:29  0:00 sshd: k3s1@pts/0
k3s1      2479  0.0  0.2   8276   5288 pts/0  Ss   11:29  0:00 -bash
root      2727 33.2 22.3 1194116 447444 ?     Ssl  11:31  1:01 /usr/local/bin/k3s server
root      2772 11.0  3.7 771044  75764 ?      Sl   11:31  0:19 containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd
root      3467  0.1  0.5 720748  10652 ?      Sl   11:32  0:00 /var/lib/rancher/k3s/data/630c40ff866a3db218a952ebd4fd2a5cfe1543a1a467e738cb46a2ad4012d6f1/bin/containerd-shim-runc-v2 -namespace k8s.io -id 9b07e1b9470f0e657d9a00667c44bb955e3d5c7f049036adfd38d086d73026e1 -address /run/k3s/containerd/containerd.sock
root      3511  0.1  0.5 721004  10740 ?      Sl   11:32  0:00 /var/lib/rancher/k3s/data/630c40ff866a3db218a952ebd4fd2a5cfe1543a1a467e738cb46a2ad4012d6f1/bin/containerd-shim-runc-v2 -namespace k8s.io -id 65fba4a19627e12286bbdb2056f27f1c063c060255510b9b997e291a44e1c736 -address /run/k3s/containerd/containerd.sock
root      3521  0.0  0.4 720748   9852 ?      Sl   11:32  0:00 /var/lib/rancher/k3s/data/630c40ff866a3db218a952ebd4fd2a5cfe1543a1a467e738cb46a2ad4012d6f1/bin/containerd-shim-runc-v2 -namespace k8s.io -id 3193ac9aa5db3faec494a7fa431c2bbed4c9a124c67042ef956ba1f1b08783b0 -address /run/k3s/containerd/containerd.sock
root      3538  0.0  0.5 720748  10972 ?      Sl   11:32  0:00 /var/lib/rancher/k3s/data/630c40ff866a3db218a952ebd4fd2a5cfe1543a1a467e738cb46a2ad4012d6f1/bin/containerd-shim-runc-v2 -namespace k8s.io -id bed5712d5e71c22a2b4863c61df5402ed3cd2e468b89b3699f3b2db48ea2f522 -address /run/k3s/containerd/containerd.sock
root      3563  0.0  0.5 720492  10208 ?      Sl   11:32  0:00 /var/lib/rancher/k3s/data/630c40ff866a3db218a952ebd4fd2a5cfe1543a1a467e738cb46a2ad4012d6f1/bin/containerd-shim-runc-v2 -namespace k8s.io -id b76d14ccfaa1a5743f4fa77aef74c210b89badd130c75244e8561dffa643cdba -address /run/k3s/containerd/containerd.sock
k3s1      3944  6.1  2.3 751748  47708 ?      Ssl  11:32  0:04 /metrics-server --cert-dir=/tmp --secure-port=10250 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s
k3s1      4181  0.0  0.1   9128   3480 pts/0  R+   11:34  0:00 ps aux
k3s1      4182  0.0  0.0   6432    720 pts/0  S+   11:34  0:00 grep --color=auto k3s
I'm not sure yet what metrics-server is for; I'll dig through the docs later.
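For what it's worth, metrics-server is the component that backs kubectl top with per-node and per-pod CPU/memory figures, so a quick way to see it in action (once it has scraped the kubelets) would be:

# should print current CPU/memory usage if metrics-server is healthy
sudo kubectl top nodes
sudo kubectl top pods -A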
The listening sockets are shown below. Most of them are bound to localhost; only 10250 and 6443 could be exposed externally (a quick reachability check from the second machine is sketched after the listing).
Per the docs, 10250 is a TCP listener that appears to belong to the metrics-server / kubelet side of things and is used for node-to-node communication. Official wording: "If you wish to utilize the metrics server, all nodes must be accessible to each other on port 10250."
6443 is a TCP listener with traffic flowing from agents to the server, i.e. how worker nodes register with the master; the docs describe it as the "supervisor and Kubernetes API Server" port.
k3s1@k3s1:~$ netstat -anp|grep LISTEN
(Not all processes could be identified, non-owned process info will not be shown, you would have to be root to see it all.)
tcp    0   0 127.0.0.1:10010   0.0.0.0:*   LISTEN   -
tcp    0   0 127.0.0.1:10248   0.0.0.0:*   LISTEN   -
tcp    0   0 127.0.0.1:10249   0.0.0.0:*   LISTEN   -
tcp    0   0 127.0.0.1:6444    0.0.0.0:*   LISTEN   -
tcp    0   0 127.0.0.1:10256   0.0.0.0:*   LISTEN   -
tcp    0   0 127.0.0.1:10257   0.0.0.0:*   LISTEN   -
tcp    0   0 127.0.0.1:10258   0.0.0.0:*   LISTEN   -
tcp    0   0 127.0.0.1:10259   0.0.0.0:*   LISTEN   -
tcp    0   0 127.0.0.53:53     0.0.0.0:*   LISTEN   -
tcp    0   0 0.0.0.0:22        0.0.0.0:*   LISTEN   -
tcp6   0   0 :::10250          :::*        LISTEN   -
tcp6   0   0 :::6443           :::*        LISTEN   -
tcp6   0   0 :::22             :::*        LISTEN   -
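A hedged sketch of checking that those two ports are reachable from the machine that will become the worker (hostname as used later in this post):

# run on k3s2; an HTTP 401/403 response means the TLS listener is reachable, a timeout means it isn't
curl -kv https://k3s1.k3s.local:6443/
# kubelet port used by metrics-server traffic
nc -vz k3s1.k3s.local 10250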
As described in the docs, the generated kubeconfig looks like this:
k3s1@k3s1:~$ sudo cat /etc/rancher/k3s/k3s.yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJkakNDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdGMyVnkKZG1WeUxXTmhRREUyTnpVMU1UQXlOemt3SGhjTk1qTXdNakEwTVRFek1URTVXaGNOTXpNd01qQXhNVEV6TVRFNQpXakFqTVNFd0h3WURWUVFEREJock0zTXRjMlZ5ZG1WeUxXTmhRREUyTnpVMU1UQXlOemt3V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJCd05DQUFTS0lXTmNvbWZleEhPcndCVjUwdWx5ZjkwQlo2a0sxOStGUW9GN0pNUloKYi9UdndyZW9vNXhwWHNNekpEYlVUWkFzUXdlOXBDK3BoTjRaQ2ZFNVloRlVvMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVURPelE3YkFldXh2a3JZN1hnSDI1CkNDOGpRcFF3Q2dZSUtvWkl6ajBFQXdJRFJ3QXdSQUlnYW5rMlprZncrSHZCdHBvNnBuYmJiRDRuSExHSzlRZ3YKZFRLZTY3dm8zWVVDSUNtK0pJdisxSzBBcUpmUXA5L1JkdDRoMkF6Slc1WW5CenVIYzdLckhiRHgKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://127.0.0.1:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJrakNDQVRlZ0F3SUJBZ0lJVmVMRW9qM1NqUUV3Q2dZSUtvWkl6ajBFQXdJd0l6RWhNQjhHQTFVRUF3d1kKYXpOekxXTnNhV1Z1ZEMxallVQXhOamMxTlRFd01qYzVNQjRYRFRJek1ESXdOREV4TXpFeE9Wb1hEVEkwTURJdwpOREV4TXpFeE9Wb3dNREVYTUJVR0ExVUVDaE1PYzNsemRHVnRPbTFoYzNSbGNuTXhGVEFUQmdOVkJBTVRESE41CmMzUmxiVHBoWkcxcGJqQlpNQk1HQnlxR1NNNDlBZ0VHQ0NxR1NNNDlBd0VIQTBJQUJBSld6ZVpTK0h6bHZCMkgKMkVJUUh5bEZQL3pDMmpZTktoZGtLTFF0NjhtTENWNnpKRWhxVDhYby9EUXdoc20zL2tmQWp1QlJ2R0tRNmsvRgpONDRIU0oyalNEQkdNQTRHQTFVZER3RUIvd1FFQXdJRm9EQVRCZ05WSFNVRUREQUtCZ2dyQmdFRkJRY0RBakFmCkJnTlZIU01FR0RBV2dCUUdCNlk4LzB5MjhSbmJnemdsejRxcXNURE5QekFLQmdncWhrak9QUVFEQWdOSkFEQkcKQWlFQTcybUw5b1dKeDM0SzZOZlVsWGp2dEQ5WXRJQXZuMWFuUUJkeHlXSUhOYThDSVFEUFllK3RZemtFR0RNbAppcnlkVTJpTHM0dWdVdXpwQThqM1Q2bEd4MU1nYUE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCi0tLS0tQkVHSU4gQ0VSVElGSUNBVEUtLS0tLQpNSUlCZURDQ0FSMmdBd0lCQWdJQkFEQUtCZ2dxaGtqT1BRUURBakFqTVNFd0h3WURWUVFEREJock0zTXRZMnhwClpXNTBMV05oUURFMk56VTFNVEF5Tnprd0hoY05Nak13TWpBME1URXpNVEU1V2hjTk16TXdNakF4TVRFek1URTUKV2pBak1TRXdId1lEVlFRRERCaHJNM010WTJ4cFpXNTBMV05oUURFMk56VTFNVEF5Tnprd1dUQVRCZ2NxaGtqTwpQUUlCQmdncWhrak9QUU1CQndOQ0FBUjB1cWV6cUh6S2owRUt2dFdNZ1YrQ09mblk0aGU3UFVLdnZlMWQ4d0VJCjFhb2laYjdzM05lK3ZvMzBQRkY5ZzdVZksrdGVJVjYxem80UytqVm9YU0k0bzBJd1FEQU9CZ05WSFE4QkFmOEUKQkFNQ0FxUXdEd1lEVlIwVEFRSC9CQVV3QXdFQi96QWRCZ05WSFE0RUZnUVVCZ2VtUFA5TXR2RVoyNE00SmMrSwpxckV3elQ4d0NnWUlLb1pJemowRUF3SURTUUF3UmdJaEFNVUlXUEExeXZYdEJsUUpFRER2aUtyM1hYbVJWb2hyClZGK3YzK0h4VXVGSUFpRUF5cmFhZlRiRG4vaTBwUmNHWnVEOHIzaTZQWHlJbUo0M1FpQnpLbHVjcnBFPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    client-key-data: LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSUJzaUxpNU1NaG5wbUpXclJtSkxETUZobDVBSVcxTEQrS0tPRzBFRTlzc2RvQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFQWxiTjVsTDRmT1c4SFlmWVFoQWZLVVUvL01MYU5nMHFGMlFvdEMzcnlZc0pYck1rU0dwUAp4ZWo4TkRDR3liZitSOENPNEZHOFlwRHFUOFUzamdkSW5RPT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo=
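Per the k3s docs, this file can also be used from another machine: copy it out and change server: from 127.0.0.1 to the server's address. A rough sketch, where 192.168.2.227 is k3s1's address from the node listing further down and kubectl is assumed to exist on the workstation:

# on the server: make a readable copy (k3s.yaml is root-owned)
sudo cp /etc/rancher/k3s/k3s.yaml /tmp/k3s.yaml && sudo chown k3s1 /tmp/k3s.yaml
# on the workstation: fetch it and point it at the server's address
scp k3s1@k3s1.k3s.local:/tmp/k3s.yaml ~/.kube/k3s.yaml
sed -i 's/127.0.0.1/192.168.2.227/' ~/.kube/k3s.yaml
export KUBECONFIG=~/.kube/k3s.yaml
kubectl get nodes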
As the docs describe, the installer creates these related commands: kubectl, crictl, k3s-killall.sh, and k3s-uninstall.sh. Their locations are shown below; all of them live under /usr/local/bin, so they can be run from any directory.
Honestly, k3s-killall.sh and k3s-uninstall.sh don't feel like they belong there. Maybe upstream puts them in bin so you can't miss them, but in a real production setup there is no need for those two to be runnable from anywhere; run one by accident and it's game over.
k3s1@k3s1:~$ which kubectl
/usr/local/bin/kubectl
k3s1@k3s1:~$ which crictl
/usr/local/bin/crictl
k3s1@k3s1:~$ which k3s-killall.sh
/usr/local/bin/k3s-killall.sh
k3s1@k3s1:~$ which k3s-uninstall.sh
/usr/local/bin/k3s-uninstall.sh
Try listing the cluster's nodes. I know there is only one, but I want to confirm the command actually works, unlike some services I've seen that look fine yet aren't really running...
k3s1@k3s1:~$ sudo kubectl get nodes
NAME   STATUS   ROLES                  AGE   VERSION
k3s1   Ready    control-plane,master   14m   v1.25.6+k3s1
One thing worth looking into is what values ROLES can take; that needs a trip to the docs. For now, control-plane is, as the name suggests, the control plane, and master is the master node. I wonder whether k8s roles can be reassigned flexibly the way CDH roles can...
Check the token so the first worker node can be installed; per the docs it lives at /var/lib/rancher/k3s/server/node-token.
Note: I'm testing in an isolated environment, so it doesn't matter that the contents of /var/lib/rancher/k3s/server/node-token and /etc/rancher/k3s/k3s.yaml are exposed here; the VMs will be deleted shortly. Never expose these casually in production.
sudo cat /var/lib/rancher/k3s/server/node-token
K100772d3a421be961da31afce5338280032a42ad8a152c683d22a0d2d72040af71::server:5840b68a95f662baf09269b2eceb7ce9
Assemble the worker install command as required and run it on the second machine:
curl -sfL https://get.k3s.io | K3S_URL=https://k3s1.k3s.local:6443 K3S_TOKEN=K100772d3a421be961da31afce5338280032a42ad8a152c683d22a0d2d72040af71::server:5840b68a95f662baf09269b2eceb7ce9 sh -
It seemed to hang at the systemd step (later confirmed to be nothing more than slowness).
k3s2@k3s2:~$ curl -sfL https://get.k3s.io | K3S_URL=https://k3s1.k3s.local:6443 K3S_TOKEN=K100772d3a421be961da31afce5338280032a42ad8a152c683d22a0d2d72040af71::server:5840b68a95f662baf09269b2eceb7ce9 sh -
[sudo] password for k3s2:
[INFO] Finding release for channel stable
[INFO] Using v1.25.6+k3s1 as release
[INFO] Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.25.6+k3s1/sha256sum-amd64.txt
[INFO] Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.25.6+k3s1/k3s
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Skipping installation of SELinux RPM
[INFO] Creating /usr/local/bin/kubectl symlink to k3s
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Creating /usr/local/bin/ctr symlink to k3s
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-agent-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s-agent.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s-agent.service
[INFO] systemd: Enabling k3s-agent unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s-agent.service → /etc/systemd/system/k3s-agent.service.
[INFO] systemd: Starting k3s-agent
Open another shell and check the processes:
k3s2@k3s2:~$ ps aux|grep k3s
k3s2      1794  0.0  0.4  19040   9712 ?      Ss   11:25  0:00 /lib/systemd/systemd --user
k3s2      1796  0.0  0.2 105288   4416 ?      S    11:25  0:00 (sd-pam)
k3s2      1801  0.0  0.2   8264   5280 tty1   S+   11:25  0:00 -bash
root      2368  0.0  0.4  13796   8968 ?      Ss   11:29  0:00 sshd: k3s2 [priv]
k3s2      2442  0.0  0.3  13932   6124 ?      S    11:29  0:00 sshd: k3s2@pts/0
k3s2      2443  0.0  0.2   8276   5172 pts/0  Ss   11:29  0:00 -bash
k3s2      3063  0.0  0.0   2608   1904 pts/0  S+   11:53  0:00 sh -
root      3241  0.0  0.2   9264   4656 pts/0  S+   11:54  0:00 sudo systemctl restart k3s-agent
root      3242  0.0  0.1   9276   3760 pts/0  S+   11:54  0:00 systemctl restart k3s-agent
root      3263  6.8  4.3 842952  87268 ?      Ssl  11:54  0:08 /usr/local/bin/k3s agent
root      3328  0.5  0.4  13796   9000 ?      Ss   11:56  0:00 sshd: k3s2 [priv]
k3s2      3416  0.0  0.3  13932   6140 ?      S    11:56  0:00 sshd: k3s2@pts/1
k3s2      3417  1.6  0.2   8276   5100 pts/1  Ss   11:56  0:00 -bash
k3s2      3428  0.0  0.1   8888   3380 pts/1  R+   11:56  0:00 ps aux
k3s2      3429  0.0  0.0   6432    724 pts/1  S+   11:56  0:00 grep --color=auto k3s
Judging by the command registered for the systemd service, /usr/local/bin/k3s agent, it's most likely the systemctl restart k3s-agent step that is hanging.
k3s2@k3s2:~$ cat /etc/systemd/system/k3s-agent.service
[Unit]
Description=Lightweight Kubernetes
Documentation=https://k3s.io
Wants=network-online.target
After=network-online.target

[Install]
WantedBy=multi-user.target

[Service]
Type=notify
EnvironmentFile=-/etc/default/%N
EnvironmentFile=-/etc/sysconfig/%N
EnvironmentFile=-/etc/systemd/system/k3s-agent.service.env
KillMode=process
Delegate=yes
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
TimeoutStartSec=0
Restart=always
RestartSec=5s
ExecStartPre=/bin/sh -xc '! /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service'
ExecStartPre=-/sbin/modprobe br_netfilter
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/k3s \
    agent \
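Rather than guessing, the agent's own status and logs show what it is actually doing while the start command blocks. A minimal sketch:

# on k3s2: is the unit still "activating", and what is it logging?
sudo systemctl status k3s-agent
sudo journalctl -u k3s-agent -f
# the agent's containerd log (checked further down) is also worth tailing
sudo tail -f /var/lib/rancher/k3s/agent/containerd/containerd.log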
The token itself is unlikely to be wrong, so the problem is probably K3S_URL: I'm using local DNS, so I put a hostname there. Let me try an /etc/hosts entry.
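A sketch of that hosts-file test on k3s2, using k3s1's address from the node listing below:

# map the server hostname to its IP and make sure it resolves
echo "192.168.2.227 k3s1.k3s.local" | sudo tee -a /etc/hosts
ping -c1 k3s1.k3s.local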
To be safe I looked at /var/lib/rancher/k3s/agent/containerd/containerd.log, and it was still busy doing things... so maybe it really is just slow? By this point the server could already see the agent, but its ROLES showed <none>, which left me unsure whether that is normal.
k3s1@k3s1:~$ sudo kubectl get nodes
NAME   STATUS   ROLES                  AGE    VERSION
k3s1   Ready    control-plane,master   37m    v1.25.6+k3s1
k3s2   Ready    <none>                 5m8s   v1.25.6+k3s1
Printing more detailed node information with extra flags, the worker looks to be working fine:
k3s1@k3s1:~$ sudo kubectl get nodes -o wide --show-labels=true
NAME   STATUS   ROLES                  AGE   VERSION        INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME          LABELS
k3s1   Ready    control-plane,master   44m   v1.25.6+k3s1   192.168.2.227   <none>        Ubuntu 20.04.5 LTS   5.4.0-125-generic   containerd://1.6.15-k3s1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=k3s,beta.kubernetes.io/os=linux,egress.k3s.io/cluster=true,kubernetes.io/arch=amd64,kubernetes.io/hostname=k3s1,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=true,node-role.kubernetes.io/master=true,node.kubernetes.io/instance-type=k3s
k3s2   Ready    <none>                 12m   v1.25.6+k3s1   192.168.2.176   <none>        Ubuntu 20.04.5 LTS   5.4.0-125-generic   containerd://1.6.15-k3s1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=k3s,beta.kubernetes.io/os=linux,egress.k3s.io/cluster=true,kubernetes.io/arch=amd64,kubernetes.io/hostname=k3s2,kubernetes.io/os=linux,node.kubernetes.io/instance-type=k3s
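Side note on the earlier ROLES question: the column is derived from node-role.kubernetes.io/* labels, which appear in the LABELS column for k3s1 but not for k3s2, so <none> on an agent joined this way seems to be normal. If you want the worker to display a role, a label does it (purely cosmetic):

# give the agent node a "worker" role label so kubectl shows it in ROLES
sudo kubectl label node k3s2 node-role.kubernetes.io/worker=true
sudo kubectl get nodes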
Testing
Let's see whether a workload actually runs; I'll grab a ready-made example from the k8s official tutorials to test with.
k3s1@k3s1:~$ sudo kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.6+k3s1", GitCommit:"9176e03c5788e467420376d10a1da2b6de6ff31f", GitTreeState:"clean", BuildDate:"2023-01-26T00:47:47Z", GoVersion:"go1.19.5", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.6+k3s1", GitCommit:"9176e03c5788e467420376d10a1da2b6de6ff31f", GitTreeState:"clean", BuildDate:"2023-01-26T00:47:47Z", GoVersion:"go1.19.5", Compiler:"gc", Platform:"linux/amd64"}
Try creating the demo deployment
k3s1@k3s1:~$ sudo kubectl get deployments
No resources found in default namespace.
k3s1@k3s1:~$ sudo kubectl create deployment kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1
deployment.apps/kubernetes-bootcamp created
k3s1@k3s1:~$ sudo kubectl get deployments
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
kubernetes-bootcamp   0/1     1            0           4s
k3s1@k3s1:~$ sudo kubectl get deployments
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
kubernetes-bootcamp   1/1     1            1           44s
k3s1@k3s1:~$ sudo kubectl get pods
NAME                                   READY   STATUS    RESTARTS   AGE
kubernetes-bootcamp-75c5d958ff-dzdkw   1/1     Running   0          3m23s
k3s1@k3s1:~$ sudo kubectl describe pods
Name:             kubernetes-bootcamp-75c5d958ff-dzdkw
Namespace:        default
Priority:         0
Service Account:  default
Node:             k3s2/192.168.2.176
Start Time:       Sat, 04 Feb 2023 12:23:03 +0000
Labels:           app=kubernetes-bootcamp
                  pod-template-hash=75c5d958ff
Annotations:      <none>
Status:           Running
IP:               10.42.1.3
IPs:
  IP:           10.42.1.3
Controlled By:  ReplicaSet/kubernetes-bootcamp-75c5d958ff
Containers:
  kubernetes-bootcamp:
    Container ID:   containerd://871938aef1c0923d784e32020801861b78b2dfb7d7a317423965e002826b4af9
    Image:          gcr.io/google-samples/kubernetes-bootcamp:v1
    Image ID:       gcr.io/google-samples/kubernetes-bootcamp@sha256:0d6b8ee63bb57c5f5b6156f446b3bc3b3c143d233037f3a2f00e279c8fcc64af
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Sat, 04 Feb 2023 12:23:46 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r2mwg (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-r2mwg:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  4m23s  default-scheduler  Successfully assigned default/kubernetes-bootcamp-75c5d958ff-dzdkw to k3s2
  Normal  Pulling    4m22s  kubelet            Pulling image "gcr.io/google-samples/kubernetes-bootcamp:v1"
  Normal  Pulled     3m40s  kubelet            Successfully pulled image "gcr.io/google-samples/kubernetes-bootcamp:v1" in 41.916860932s (41.916911292s including waiting)
  Normal  Created    3m40s  kubelet            Created container kubernetes-bootcamp
  Normal  Started    3m40s  kubelet            Started container kubernetes-bootcamp
k3s1@k3s1:~$ sudo kubectl describe pods kubernetes-bootcamp-75c5d958ff-dzdkw
Name:             kubernetes-bootcamp-75c5d958ff-dzdkw
Namespace:        default
Priority:         0
Service Account:  default
Node:             k3s2/192.168.2.176
Start Time:       Sat, 04 Feb 2023 12:23:03 +0000
Labels:           app=kubernetes-bootcamp
                  pod-template-hash=75c5d958ff
Annotations:      <none>
Status:           Running
IP:               10.42.1.3
IPs:
  IP:           10.42.1.3
Controlled By:  ReplicaSet/kubernetes-bootcamp-75c5d958ff
Containers:
  kubernetes-bootcamp:
    Container ID:   containerd://871938aef1c0923d784e32020801861b78b2dfb7d7a317423965e002826b4af9
    Image:          gcr.io/google-samples/kubernetes-bootcamp:v1
    Image ID:       gcr.io/google-samples/kubernetes-bootcamp@sha256:0d6b8ee63bb57c5f5b6156f446b3bc3b3c143d233037f3a2f00e279c8fcc64af
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Sat, 04 Feb 2023 12:23:46 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r2mwg (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-r2mwg:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  5m28s  default-scheduler  Successfully assigned default/kubernetes-bootcamp-75c5d958ff-dzdkw to k3s2
  Normal  Pulling    5m28s  kubelet            Pulling image "gcr.io/google-samples/kubernetes-bootcamp:v1"
  Normal  Pulled     4m46s  kubelet            Successfully pulled image "gcr.io/google-samples/kubernetes-bootcamp:v1" in 41.916860932s (41.916911292s including waiting)
  Normal  Created    4m46s  kubelet            Created container kubernetes-bootcamp
  Normal  Started    4m46s  kubelet            Started container kubernetes-bootcamp
Try entering the container with kubectl exec
k3s1@k3s1:~$ sudo kubectl exec -it kubernetes-bootcamp-75c5d958ff-dzdkw bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@kubernetes-bootcamp-75c5d958ff-dzdkw:/# cat server.js
var http = require('http');
var requests=0;
var podname= process.env.HOSTNAME;
var startTime;
var host;
var handleRequest = function(request, response) {
  response.setHeader('Content-Type', 'text/plain');
  response.writeHead(200);
  response.write("Hello Kubernetes bootcamp! | Running on: ");
  response.write(host);
  response.end(" | v=1\n");
  console.log("Running On:" ,host, "| Total Requests:", ++requests,"| App Uptime:", (new Date() - startTime)/1000 , "seconds", "| Log Time:",new Date());
}
var www = http.createServer(handleRequest);
www.listen(8080,function () {
    startTime = new Date();;
    host = process.env.HOSTNAME;
    console.log ("Kubernetes Bootcamp App Started At:",startTime, "| Running On: " ,host, "\n" );
});
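Since server.js listens on 8080, a natural follow-up (the same bootcamp tutorial does something similar) would be to expose the deployment and hit it from outside the pod. A sketch, with the NodePort left as a placeholder to read off the service listing:

sudo kubectl expose deployment kubernetes-bootcamp --type=NodePort --port=8080
sudo kubectl get svc kubernetes-bootcamp
# then, using the NodePort shown in the PORT(S) column:
# curl http://192.168.2.176:<nodeport>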
Summary
Compared with my impression from years ago, k3s now makes it quick to bring up a cluster as a lightweight edition of k8s, so it's time to retire that outdated belief: standing up a k8s cluster for testing is now simple and cheap. Next I'll try vanilla k8s, and once that works, kubekey, an Ansible-like automated installer.
References
https://docs.k3s.io/architecture
https://www.suse.com/suse-rancher/support-matrix/all-supported-versions/rancher-v2-7-1/
Resource allocation and actual usage statistics: https://docs.k3s.io/reference/resource-profiling
Automated installation (kubekey): https://kubesphere.io/zh/docs/v3.3/installing-on-linux/introduction/kubekey/
ROLES: https://blog.csdn.net/Lingoesforstudy/article/details/116484624
https://kubernetes.io/zh-cn/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/
https://blog.csdn.net/qq_44246980/article/details/120143353