Background
These two issues had been left unfixed for a long time.
This is actually a by-product; it was also used earlier when building the 飞连 image.
This time the OS version was upgraded and the build succeeded: the workspace was repackaged on Ubuntu 22.04, and with JCEF enabled, the browser and JCEF apps run normally in a VNC environment.
Code and notes: https://github.com/dev-assistant/skykoma-workspace/tree/main/workspace-ubuntu
Note that the default image bundles IDEA; the clean, bare-Ubuntu one is the base image.
TODO: fill in the details later when time permits; the hard part is skipping IDEA's various default pop-up dialogs.
Blog post: https://blog.hylstudio.cn/archives/1350
Feishu doc: https://paraparty.feishu.cn/docx/GEwVdulCgoLX7dxSHMecy4mBnch
A quick trial of Argo CD. Since Argo CD positions itself as a CD tool, it is doubtful whether it can serve as a general Kubernetes dashboard.
This copy is a backup; for the best reading experience, see the Feishu doc.
```shell
mkdir argocd
cd argocd
wget https://raw.githubusercontent.com/argoproj/argo-cd/v2.9.3/manifests/ha/install.yaml
```
```shell
sudo docker pull ghcr.io/dexidp/dex:v2.37.0
sudo docker tag ghcr.io/dexidp/dex:v2.37.0 harbor.hylstudio.local/dexidp/dex:v2.37.0
sudo docker push harbor.hylstudio.local/dexidp/dex:v2.37.0
sudo docker pull redis:7.0.11-alpine
sudo docker tag redis:7.0.11-alpine harbor.hylstudio.local/library/redis:7.0.11-alpine
sudo docker push harbor.hylstudio.local/library/redis:7.0.11-alpine
sudo docker pull quay.io/argoproj/argocd:v2.9.3
sudo docker tag quay.io/argoproj/argocd:v2.9.3 harbor.hylstudio.local/argoproj/argocd:v2.9.3
sudo docker push harbor.hylstudio.local/argoproj/argocd:v2.9.3
```
```shell
kubectl create namespace argocd
kubectl apply -n argocd -f install.yaml
```
```shell
kubectl -n argocd port-forward service/argocd-server :80
```
https://argo-cd.readthedocs.io/en/stable/getting_started/
Blog post: https://blog.hylstudio.cn/archives/1343
Feishu doc: https://paraparty.feishu.cn/docx/FTM7d1TIcoxL83xfRWzc34omnFe
The last step of the Kubernetes (KubeSphere) installation was extremely slow. I had traced it halfway before; today I finish the trace and figure out what was actually happening.
Formatting here is rough; for the best reading experience see the Feishu doc. This copy is a backup.
verbosity (int) – Control how verbose the output of ansible-playbook is
_input (io.FileIO) – An optional file or file-like object for use as input in a streaming pipeline
_output (io.FileIO) – An optional file or file-like object for use as output in a streaming pipeline
```shell
docker pull harbor.hylstudio.local/kubesphereio/ks-installer:v3.3.2
docker run -it --entrypoint /bin/bash harbor.hylstudio.local/kubesphereio/ks-installer:v3.3.2
cp /hooks/kubesphere/installRunner.py /hooks/kubesphere/installRunner.py.bak
vi /hooks/kubesphere/installRunner.py
```
```python
import io
ansible_log = io.FileIO('/home/kubesphere/ansible.log', 'w')
# added into the ansible_runner.run_async(...) call:
#   _output=ansible_log, verbosity=5
```
```shell
docker commit 5274b25c35d5 harbor.hylstudio.local/kubesphereio/ks-installer:v3.3.3
docker push harbor.hylstudio.local/kubesphereio/ks-installer:v3.3.3
```
```shell
./vm.sh -c 4 -m 8 -d 80 -p on k8s1
./vm.sh -c 2 -m 4 -d 40 -p on k8s2
./vm.sh -c 2 -m 4 -d 40 -p on k8s3
```
```
#dns config
192.168.2.206 k8s-control.hylstudio.local
192.168.2.206 k8s1.k8s.local
192.168.2.177 k8s2.k8s.local
192.168.2.203 k8s3.k8s.local
```
```yaml
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.3.2
```
```shell
# Create a fake 3.3.2 tag to fool the version check
docker tag harbor.hylstudio.local/kubesphereio/ks-installer:v3.3.2 harbor.hylstudio.local/kubesphereio/ks-installer:v3.3.2-bak
docker image rm harbor.hylstudio.local/kubesphereio/ks-installer:v3.3.2
docker tag harbor.hylstudio.local/kubesphereio/ks-installer:v3.3.3 harbor.hylstudio.local/kubesphereio/ks-installer:v3.3.2
docker push harbor.hylstudio.local/kubesphereio/ks-installer:v3.3.2
```
```
namespace/kubesphere-system unchanged
serviceaccount/ks-installer unchanged
customresourcedefinition.apiextensions.k8s.io/clusterconfigurations.installer.kubesphere.io unchanged
clusterrole.rbac.authorization.k8s.io/ks-installer unchanged
clusterrolebinding.rbac.authorization.k8s.io/ks-installer unchanged
deployment.apps/ks-installer unchanged
clusterconfiguration.installer.kubesphere.io/ks-installer created
13:55:05 UTC success: [k8s1]
Please wait for the installation to complete: >>--->
```
```
ks-installer-566ffb8f44-ml9gm:/kubesphere$ ps aux
PID   USER     TIME  COMMAND
    1 kubesphe  0:00 /shell-operator start
   56 kubesphe  0:06 python3 /hooks/kubesphere/installRunner.py
 2501 kubesphe  1:21 {ansible-playboo} /usr/local/bin/python /usr/local/bin/ansible-playbook -e @/kubespher
 4348 kubesphe  0:01 {ansible-playboo} /usr/local/bin/python /usr/local/bin/ansible-playbook -e @/kubespher
 5261 kubesphe  0:00 /bin/sh -c /usr/local/bin/python /home/kubesphere/.ansible/tmp/ansible-tmp-1700921243.
 5262 kubesphe  0:00 /usr/local/bin/python /home/kubesphere/.ansible/tmp/ansible-tmp-1700921243.7008557-434
 5263 kubesphe  0:00 /usr/local/bin/kubectl apply -f /kubesphere/kubesphere/ks-core/crds/iam.kubesphere.io_
 5287 kubesphe  0:00 bash
 5299 kubesphe  0:00 ps aux
```
```
Start installing monitoring
Start installing multicluster
Start installing openpitrix
Start installing network
**************************************************
Waiting for all tasks to be completed ...
task network status is successful  (1/4)
task openpitrix status is successful  (2/4)
task multicluster status is successful  (3/4)
```
That leaves monitoring as the only suspect.
```shell
docker cp xxx:/hooks/kubesphere/installRunner.py .
```
```dockerfile
FROM harbor.hylstudio.local/kubesphereio/ks-installer:v3.3.2-bak
RUN rm -rf /hooks/kubesphere/installRunner.py
COPY installRunner.py /hooks/kubesphere/
```
```shell
mkdir imgbuild
mv installRunner.py DockerFile imgbuild
cd imgbuild
docker build -f DockerFile -t harbor.hylstudio.local/kubesphereio/ks-installer:v3.3.4 .
docker image rm harbor.hylstudio.local/kubesphereio/ks-installer:v3.3.2
docker tag harbor.hylstudio.local/kubesphereio/ks-installer:v3.3.4 harbor.hylstudio.local/kubesphereio/ks-installer:v3.3.2
docker push harbor.hylstudio.local/kubesphereio/ks-installer:v3.3.4
docker push harbor.hylstudio.local/kubesphereio/ks-installer:v3.3.2
```
Use --skip-push-images to keep the images already in Harbor from being overwritten.
```
Start installing monitoring
Start installing multicluster
Start installing openpitrix
Start installing network
**************************************************
Waiting for all tasks to be completed ...
```
```python
readyToEnabledList = [
    'monitoring',
    'multicluster',
    'openpitrix',
    'network']
```
```
ks-installer-566ffb8f44-zft9h:/hooks/kubesphere$ ps aux|more
PID   USER     TIME  COMMAND
    1 kubesphe  0:00 /shell-operator start
   18 kubesphe  5:49 python3 /hooks/kubesphere/installRunner.py
 2171 kubesphe  0:00 bash
 4053 kubesphe  1:54 {ansible-playboo} /usr/local/bin/python /usr/local/bin/ansible-playbook -e @/kubesphere/config/ks-config.json -e @/kubesphere/config/ks-status.json -e @/kubesphere/results/env/extravars /kubesphere/playbooks/monitoring.yaml
 8876 kubesphe  0:00 {ansible-playboo} /usr/local/bin/python /usr/local/bin/ansible-playbook -e @/kubesphere/config/ks-config.json -e @/kubesphere/config/ks-status.json -e @/kubesphere/results/env/extravars /kubesphere/playbooks/monitoring.yaml
 8899 kubesphe  0:00 /bin/sh -c /usr/local/bin/python /home/kubesphere/.ansible/tmp/ansible-tmp-1700926548.0542264-8876-155728454178142/AnsiballZ_command.py && sleep 0
 8900 kubesphe  0:01 /usr/local/bin/python /home/kubesphere/.ansible/tmp/ansible-tmp-1700926548.0542264-8876-155728454178142/AnsiballZ_command.py
 8926 kubesphe  0:00 /usr/local/bin/kubectl apply -f /kubesphere/kubesphere/prometheus/alertmanager
 8957 kubesphe  0:00 ps aux
 8958 kubesphe  0:00 more
```
Running watch -n 1 'ps aux|more' shows the execution progressing: it is not actually stuck, the installation is just slow.
```python
def generateTaskLists():
    readyToEnabledList, readyToDisableList = getComponentLists()
    tasksDict = {}
    for taskName in readyToEnabledList:
        playbookPath = os.path.join(playbookBasePath, str(taskName) + '.yaml')
        artifactDir = os.path.join(privateDataDir, str(taskName))
        if os.path.exists(artifactDir):
            shutil.rmtree(artifactDir)
        tasksDict[str(taskName)] = component(
            playbook=playbookPath,
            private_data_dir=privateDataDir,
            artifact_dir=artifactDir,
            ident=str(taskName),
            quiet=False,
            rotate_artifacts=1
        )
    return tasksDict


def installRunner(self):
    installer = ansible_runner.run_async(
        playbook=self.playbook,
        private_data_dir=self.private_data_dir,
        artifact_dir=self.artifact_dir,
        ident=self.ident,
        quiet=self.quiet,
        rotate_artifacts=self.rotate_artifacts,
        verbosity=5
    )
    task_name = self.ident
    thread = installer[0]
    log_file = open('/tmp/' + task_name + '.debug.log', 'w')
    thread.stdout = log_file
    return installer[1]
```
```diff
--- a/installRunner.py.bak
+++ b/installRunner.py
@@ -90,8 +90,13 @@ class component():
             artifact_dir=self.artifact_dir,
             ident=self.ident,
             quiet=self.quiet,
-            rotate_artifacts=self.rotate_artifacts
+            rotate_artifacts=self.rotate_artifacts,
+            verbosity=5
         )
+        task_name = self.ident
+        thread = installer[0]
+        log_file = open('/tmp/'+task_name+'.debug.log', 'w')
+        thread.stdout = log_file
         return installer[1]
@@ -263,7 +268,7 @@ def generateTaskLists():
             private_data_dir=privateDataDir,
             artifact_dir=artifactDir,
             ident=str(taskName),
-            quiet=True,
+            quiet=False,
             rotate_artifacts=1
         )
```
```shell
mkdir imgbuild
mv installRunner.py DockerFile imgbuild
cd imgbuild
docker build -f DockerFile -t harbor.hylstudio.local/kubesphereio/ks-installer:v3.3.7 .
docker image rm harbor.hylstudio.local/kubesphereio/ks-installer:v3.3.2
docker tag harbor.hylstudio.local/kubesphereio/ks-installer:v3.3.7 harbor.hylstudio.local/kubesphereio/ks-installer:v3.3.2
docker push harbor.hylstudio.local/kubesphereio/ks-installer:v3.3.7
docker push harbor.hylstudio.local/kubesphereio/ks-installer:v3.3.2
docker image ls --digests|grep installer
```
Image IDs are a client-side Docker concept, so the registry does not care about them. Setting thread.stdout = log_file produces no output at all. According to https://github.com/ansible/ansible-runner/blob/e0371d634426dfbdb9d3bfacb20e2dd4b039b499/src/ansible_runner/runner.py#L155C28-L155C48, output is written to files only when self.config.suppress_output_file is falsy, in which case it goes to stdout and stderr files under self.config.artifact_dir. Here artifact_dir comes from /kubesphere/results/{task_name}, and both suppress_output_file and suppress_ansible_output default to False. So: redeploy with --skip-push-images, change quiet=False, then check whether stdout and stderr appear under /kubesphere/results.
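As a sanity check of that file-writing behavior, here is a tiny stand-in in Python. `write_artifacts` only paraphrases the decision described above; it is not ansible-runner's real code, and the directory names are illustrative:

```python
import os
import tempfile

def write_artifacts(artifact_dir, stdout_text, stderr_text,
                    suppress_output_file=False):
    # Paraphrase of the linked runner.py logic: only when
    # suppress_output_file is falsy are stdout/stderr files written
    # under artifact_dir.
    os.makedirs(artifact_dir, exist_ok=True)
    if not suppress_output_file:
        with open(os.path.join(artifact_dir, 'stdout'), 'w') as f:
            f.write(stdout_text)
        with open(os.path.join(artifact_dir, 'stderr'), 'w') as f:
            f.write(stderr_text)

# With the defaults (both suppress flags False), a directory like
# /kubesphere/results/<task_name> should therefore contain stdout/stderr.
artifact_dir = os.path.join(tempfile.mkdtemp(), 'monitoring')
write_artifacts(artifact_dir, 'PLAY RECAP ...', '')
print(sorted(os.listdir(artifact_dir)))
```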
Batch-replace in vim:
```
%s/quiet=True/quiet=False/g
```
```diff
--- a/installRunner.py.bak
+++ b/installRunner.py
@@ -85,6 +85,7 @@ class component():
     def installRunner(self):
         installer = ansible_runner.run_async(
+            verbosity=5,
             playbook=self.playbook,
             private_data_dir=self.private_data_dir,
             artifact_dir=self.artifact_dir,
@@ -263,7 +264,7 @@ def generateTaskLists():
             private_data_dir=privateDataDir,
             artifact_dir=artifactDir,
             ident=str(taskName),
-            quiet=True,
+            quiet=False,
             rotate_artifacts=1
         )
@@ -341,6 +342,7 @@ def preInstallTasks():
     for task, paths in preInstallTasks.items():
         pretask = ansible_runner.run(
+            verbosity=5,
             playbook=paths[0],
             private_data_dir=privateDataDir,
             artifact_dir=paths[1],
@@ -353,11 +355,12 @@ def preInstallTasks():
 def resultInfo(resultState=False, api=None):
     ks_config = ansible_runner.run(
+        verbosity=5,
         playbook=os.path.join(playbookBasePath, 'ks-config.yaml'),
         private_data_dir=privateDataDir,
         artifact_dir=os.path.join(privateDataDir, 'ks-config'),
         ident='ks-config',
-        quiet=True
+        quiet=False
     )

     if ks_config.rc != 0:
@@ -365,11 +368,12 @@ def resultInfo(resultState=False, api=None):
         exit()

     result = ansible_runner.run(
+        verbosity=5,
         playbook=os.path.join(playbookBasePath, 'result-info.yaml'),
         private_data_dir=privateDataDir,
         artifact_dir=os.path.join(privateDataDir, 'result-info'),
         ident='result',
-        quiet=True
+        quiet=False
     )

     if result.rc != 0:
@@ -380,6 +384,7 @@ def resultInfo(resultState=False, api=None):
     if "migration" in resource['status']['core'] and resource['status']['core']['migration'] and resultState == False:
         migration = ansible_runner.run(
+            verbosity=5,
             playbook=os.path.join(playbookBasePath, 'ks-migration.yaml'),
             private_data_dir=privateDataDir,
             artifact_dir=os.path.join(privateDataDir, 'ks-migration'),
@@ -395,11 +400,12 @@ def resultInfo(resultState=False, api=None):
         logging.info(info)

     telemeter = ansible_runner.run(
+        verbosity=5,
         playbook=os.path.join(playbookBasePath, 'telemetry.yaml'),
         private_data_dir=privateDataDir,
         artifact_dir=os.path.join(privateDataDir, 'telemetry'),
         ident='telemetry',
-        quiet=True
+        quiet=False
     )

     if telemeter.rc != 0:
```
```
drwxr-xr-x    3 kubesphe kubesphe    4.0K Dec  2 13:37 common
drwxr-xr-x    1 kubesphe kubesphe    4.0K Feb  3  2023 env
drwxr-xr-x    3 kubesphe kubesphe    4.0K Dec  2 13:41 ks-core
drwxr-xr-x    3 kubesphe kubesphe    4.0K Dec  2 13:37 metrics_server
drwxr-xr-x    3 kubesphe kubesphe    4.0K Dec  2 13:37 preinstall
```
```
namespace/kubesphere-system unchanged
serviceaccount/ks-installer unchanged
customresourcedefinition.apiextensions.k8s.io/clusterconfigurations.installer.kubesphere.io unchanged
clusterrole.rbac.authorization.k8s.io/ks-installer unchanged
clusterrolebinding.rbac.authorization.k8s.io/ks-installer unchanged
deployment.apps/ks-installer unchanged
clusterconfiguration.installer.kubesphere.io/ks-installer created
13:35:26 UTC success: [k8s1]
Please wait for the installation to complete: >>--->
```
```
Start installing monitoring
Start installing multicluster
Start installing openpitrix
Start installing network
**************************************************
Waiting for all tasks to be completed ...
```
```
PLAY RECAP *********************************************************************
localhost                  : ok=24   changed=22   unreachable=0    failed=0    skipped=24   rescued=0    ignored=0

task monitoring status is successful  (4/4)
```
```
changed: [localhost] => (item=kubesphere-config.yaml) => {
    "ansible_loop_var": "item",
    "changed": true,
    "cmd": "/usr/local/bin/kubectl apply -f /kubesphere/kubesphere/kubesphere-config.yaml",
    "delta": "0:00:07.874273",
    "end": "2023-12-02 14:09:16.591793",
    "failed_when_result": false,
    "invocation": {
        "module_args": {
            "_raw_params": "/usr/local/bin/kubectl apply -f /kubesphere/kubesphere/kubesphere-config.yaml",
            "_uses_shell": true,
            "argv": null,
            "chdir": null,
            "creates": null,
            "executable": null,
            "removes": null,
            "stdin": null,
            "stdin_add_newline": true,
            "strip_empty_ends": true,
            "warn": true
        }
    },
    "item": "kubesphere-config.yaml",
    "rc": 0,
    "start": "2023-12-02 14:09:08.717520",
    "stderr": "",
    "stderr_lines": [],
    "stdout": "configmap/kubesphere-config created",
    "stdout_lines": [
        "configmap/kubesphere-config created"
    ]
}
```
To wrap up, some general lessons that involve nothing confidential:
Watching for code changes
Containerized build environments
Graceful shutdown
Health checks
Traffic routing
Multi-environment builds
Configuration center
Each independent step is usually called an action, and one complete end-to-end flow a pipeline; a pipeline is an orchestration of multiple actions.
A pipeline's trigger can be chosen freely: manual, scheduled, git events, and so on.
Actions can be combined serially or in parallel in any arrangement, and each action node can be set to run automatically, on a schedule, manually, or with a manual double-confirmation. The available node types differ slightly between platforms.
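The orchestration above can be sketched minimally as serial stages whose actions conceptually run in parallel; the stage and action names here are made up:

```python
# A pipeline as a list of stages; stages run serially, and the actions
# inside one stage are the ones that could run in parallel.
def run_pipeline(stages):
    results = []
    for stage in stages:      # serial
        for action in stage:  # parallel group (run sequentially in this toy)
            results.append(action())
    return results

build = lambda: 'build ok'
lint = lambda: 'lint ok'
unit_test = lambda: 'unit tests ok'
deploy = lambda: 'deploy ok'

# build -> (lint | unit_test) -> deploy
print(run_pipeline([[build], [lint, unit_test], [deploy]]))
```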
Code-change watching is usually done with git hooks configured on the git server; they can listen for events including but not limited to push, merge request, and tag, and call back to a designated API.
This is usually combined with branch and tag conventions to manage the whole flow from development to release, e.g. a three-level feature branch -> dev branch -> master branch structure or a finer-grained multi-level one. Different branch types trigger different pipelines: a feature branch might additionally trigger a code-check pipeline, the dev branch a unit-test pipeline, and master or a tag the release pipeline.
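The branch-based triggering above can be sketched as a plain mapping; the branch convention follows the paragraph, and the pipeline names are hypothetical:

```python
# Map a git ref to the pipelines it would trigger under the
# feature -> dev -> master convention. Pipeline names are illustrative.
def pipelines_for_ref(ref):
    if ref.startswith('refs/tags/'):
        return ['release']
    branch = ref.removeprefix('refs/heads/')
    if branch.startswith('feature/'):
        return ['code-check']
    if branch == 'dev':
        return ['code-check', 'unit-test']
    if branch == 'master':
        return ['code-check', 'unit-test', 'release']
    return []  # unknown branches trigger nothing

print(pipelines_for_ref('refs/heads/feature/login'))
print(pipelines_for_ref('refs/tags/v1.0.0'))
```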
Builds usually run on dedicated build machines rather than developer machines. Unless there is a special requirement, the build environment is managed uniformly through Docker images, typically starting from a Docker-in-Docker base image, then layering on each build's environment dependencies and tuning the sharing strategy for dependency cache directories, so that the build environment stays stable and controllable. A private image registry (e.g. Docker Registry) and a private package repository (e.g. JFrog) are usually used alongside to speed up builds and remove network variability.
For important services, rolling restarts alone are not enough; graceful shutdown must be implemented to keep the service stable during a rolling restart. The usual approach is for the program to implement the graceful-shutdown flow itself, triggered by listening for signals such as SIGTERM (SIGKILL cannot be caught) or via a dedicated HTTP or RPC endpoint. The release pipeline, controller, or upstream scheduler first shifts network traffic away, then tells the node to run its graceful shutdown and exit on its own; after a timeout it is force-killed. Graceful shutdown is therefore normally used together with health checks and traffic routing.
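The flag-and-drain idea above can be sketched like this; a toy worker loop, not a real service, and the load-balancer step happens outside the sketch:

```python
import os
import signal

shutting_down = False

def handle_sigterm(signum, frame):
    # Stop taking new work; in-flight work is allowed to drain.
    global shutting_down
    shutting_down = True

signal.signal(signal.SIGTERM, handle_sigterm)

def worker_loop(max_iterations):
    done = 0
    for _ in range(max_iterations):
        if shutting_down:
            break  # drain: no new work is started after the signal
        done += 1
    return done

print(worker_loop(3))                 # runs normally before any signal
os.kill(os.getpid(), signal.SIGTERM)  # simulate the scheduler's stop signal
print(worker_loop(3))                 # loop now exits without taking work
```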
Health checks are usually also implemented by the application itself, and fall into reachability checks and availability checks. A reachability check only probes whether the network path is open, without caring whether the program itself is usable; an availability check must, on top of reachability, satisfy some definition of "available".
What counts as "available" varies by scenario. For a web service, a simple definition is that it can accept an HTTP request and respond successfully; a stricter one also requires the database, cache, and queue services it depends on to be up at the same time. But the strict definition means that when an underlying service fails, a program with fallback logic never gets the chance to degrade, making the service completely unavailable. So the strict definition is rarely adopted outright; instead the program itself decides whether to run its degradation logic or to report itself unavailable.
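A minimal sketch of the two check types and the degradation choice described above; the dependency flags stand in for real probes of a DB, cache, and queue:

```python
def reachability_check(process_alive=True):
    # Only answers: can traffic reach a live process at all?
    return process_alive

def availability_check(db_ok, cache_ok, queue_ok, can_degrade=True):
    if not reachability_check():
        return False
    if db_ok and cache_ok and queue_ok:
        return True  # strictly available: every dependency is up
    # Under the strict definition this would be False outright; instead
    # an app with fallback logic may still declare itself available,
    # e.g. when only the cache or queue is down but the DB is fine.
    return can_degrade and db_ok

print(availability_check(True, False, True))                     # degraded, still available
print(availability_check(True, False, True, can_degrade=False))  # strict: unavailable
```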
Traffic routing governs where traffic flows. Traffic on the path summarized in the previous post is usually called north-south (vertical) traffic; communication between applications is usually called east-west (horizontal) traffic.
For a given service cluster, precise control over traffic flow is achieved through network-layer routing, client-side or server-side service discovery, and load balancing. In particular, a cloud-native service mesh deals mainly with east-west traffic.
The same codebase often has to run in multiple data centers, companies, and environments, so the running environment must be distinguished somehow. There are broadly two strategies: ship a single artifact and differentiate via a configuration center, config files, or environment variables that change which environment the program perceives; or produce a separate artifact per environment combination at build time.
With the configuration-center/config-file/environment-variable approach, pay extra attention to startup parameters and the configuration-center settings; it is convenient for centralized management, but accidents happen easily.
With the one-artifact-per-environment approach, there is no risk of wrong environment parameters, but the confidentiality of the environment parameters, i.e. of the artifacts themselves, must be considered.
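The single-artifact variant can be sketched with an environment variable chosen at startup; `APP_ENV` and the config values are made up for illustration:

```python
import os

# One build, many environments: the artifact carries all configs and
# picks one at startup from APP_ENV.
CONFIGS = {
    'dev':  {'db_host': 'db.dev.local',  'debug': True},
    'prod': {'db_host': 'db.prod.local', 'debug': False},
}

def load_config():
    env = os.environ.get('APP_ENV', 'dev')
    if env not in CONFIGS:
        # Fail fast on a typo rather than silently running with the
        # wrong environment -- the "accidents happen easily" case.
        raise ValueError('unknown environment: ' + env)
    return CONFIGS[env]

os.environ['APP_ENV'] = 'prod'
print(load_config()['db_host'])
```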
To wrap up, some general lessons that involve nothing confidential.
A typical web system consists of an access-layer network plus a business-layer network; the traffic path from the user's client to the entry point of the business-layer services is what people usually call the access-layer network.
In between, the network layer usually includes DNS and layer-X load balancing.
The physical links usually involve the three major carriers' access networks, smaller ISPs' access networks, inter-datacenter leased lines, and core-switch interconnects.
Related techniques include virtual IPs and layer-4/layer-7 packet parsing.
The access layer of a typical web service usually consists of a DNS system, layer-4 load balancers, and layer-7 load balancers.
Whatever the layer, the core idea is the same: make many identical things look like one from the outside. Grasp this and most of the access-layer logic follows.
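That "many look like one" idea in miniature, as a toy round-robin picker; the addresses are made up:

```python
import itertools

# One virtual endpoint spreads calls across several real servers (RSes).
class VirtualEndpoint:
    def __init__(self, real_servers):
        self._rr = itertools.cycle(real_servers)

    def pick(self):
        # A caller only ever sees one endpoint object, but each call
        # lands on a different real server, round-robin.
        return next(self._rr)

vip = VirtualEndpoint(['10.0.0.1:80', '10.0.0.2:80', '10.0.0.3:80'])
print([vip.pick() for _ in range(4)])  # wraps back to the first RS
```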
At the DNS layer, DNS can do per-region and per-carrier multi-IP resolution, primary/backup policies, BGP datacenter policies, health checks, and disaster failover.
At layer 4 of the network stack, multiple IP+port pairs can be presented to the outside as a single IP+port; the many real addresses are usually called L4 RSes, and the single IP the L4 VIP.
This layer also provides load-balancing policies, health checks, and automatic disaster failover.
At layer 7, multiple host+port pairs can be presented as a single host+port; where a host lives is usually called a virtual host, and the many host+port backends are called L7 RSes, service IPs, or real IPs.
This layer likewise provides load-balancing policies, health checks, and automatic disaster failover.
One special path is an L4 VIP going straight to the service IPs; IM-style applications, for instance, do not always go through layer-7 load balancing.
A typical topology diagram follows, with sensitive information blurred out.