kubebuilder in action, part 6: build, deploy, run (2)

Switch back to the controller's window and you will find a burst of log output. Reading through it shows that the Reconcile method ran twice. The first pass created the deployment and service and then wrote the real QPS into the status; that status write is itself an update to the elasticweb resource, so it triggers the second pass, in which everything already matches and the method returns immediately:

2021-02-21T10:03:57.108+0800 INFO controllers.ElasticWeb 1. start reconcile logic {"elasticweb": "dev/elasticweb-sample"}
2021-02-21T10:03:57.108+0800 INFO controllers.ElasticWeb 3. instance : Image [tomcat:8.0.18-jre8], Port [30003], SinglePodQPS [500], TotalQPS [600], RealQPS [nil] {"elasticweb": "dev/elasticweb-sample"}
2021-02-21T10:03:57.210+0800 INFO controllers.ElasticWeb 4. deployment not exists {"elasticweb": "dev/elasticweb-sample"}
2021-02-21T10:03:57.313+0800 INFO controllers.ElasticWeb set reference {"func": "createService"}
2021-02-21T10:03:57.313+0800 INFO controllers.ElasticWeb start create service {"func": "createService"}
2021-02-21T10:03:57.364+0800 INFO controllers.ElasticWeb create service success {"func": "createService"}
2021-02-21T10:03:57.365+0800 INFO controllers.ElasticWeb expectReplicas [2] {"func": "createDeployment"}
2021-02-21T10:03:57.365+0800 INFO controllers.ElasticWeb set reference {"func": "createDeployment"}
2021-02-21T10:03:57.365+0800 INFO controllers.ElasticWeb start create deployment {"func": "createDeployment"}
2021-02-21T10:03:57.382+0800 INFO controllers.ElasticWeb create deployment success {"func": "createDeployment"}
2021-02-21T10:03:57.382+0800 INFO controllers.ElasticWeb singlePodQPS [500], replicas [2], realQPS[1000] {"func": "updateStatus"}
2021-02-21T10:03:57.407+0800 DEBUG controller-runtime.controller Successfully Reconciled {"controller": "elasticweb", "request": "dev/elasticweb-sample"}
2021-02-21T10:03:57.407+0800 INFO controllers.ElasticWeb 1. start reconcile logic {"elasticweb": "dev/elasticweb-sample"}
2021-02-21T10:03:57.407+0800 INFO controllers.ElasticWeb 3. instance : Image [tomcat:8.0.18-jre8], Port [30003], SinglePodQPS [500], TotalQPS [600], RealQPS [1000] {"elasticweb": "dev/elasticweb-sample"}
2021-02-21T10:03:57.407+0800 INFO controllers.ElasticWeb 9. expectReplicas [2], realReplicas [2] {"elasticweb": "dev/elasticweb-sample"}
2021-02-21T10:03:57.407+0800 INFO controllers.ElasticWeb 10. return now {"elasticweb": "dev/elasticweb-sample"}
2021-02-21T10:03:57.407+0800 DEBUG controller-runtime.controller Successfully Reconciled {"controller": "elasticweb", "request": "dev/elasticweb-sample"}
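The numbered prefixes in those log lines correspond to branches inside the controller's Reconcile method. Here is a minimal Go sketch of that flow as I read it from the logs, not the tutorial's verbatim code: it assumes kubebuilder's scaffolded ElasticWebReconciler (embedding client.Client, with a logr Log field and a Scheme), an elasticwebv1 package holding the CRD types with *int32 QPS fields, and a controller-runtime version whose Reconcile takes a context.Context.

package controllers

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/api/errors"
	ctrl "sigs.k8s.io/controller-runtime"

	elasticwebv1 "elasticweb/api/v1" // assumed import path for the tutorial's CRD types
)

// Reconcile sketch: the numbered log messages above come from branches like these.
func (r *ElasticWebReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	log := r.Log.WithValues("elasticweb", req.NamespacedName)
	log.Info("1. start reconcile logic")

	// Fetch the ElasticWeb instance that triggered this event.
	instance := &elasticwebv1.ElasticWeb{}
	if err := r.Get(ctx, req.NamespacedName, instance); err != nil {
		if errors.IsNotFound(err) {
			// The CR itself is gone; nothing left to reconcile.
			return ctrl.Result{}, nil
		}
		return ctrl.Result{}, err
	}
	log.Info("3. instance loaded")

	// Look up the deployment that should back this instance.
	deployment := &appsv1.Deployment{}
	if err := r.Get(ctx, req.NamespacedName, deployment); err != nil {
		if errors.IsNotFound(err) {
			log.Info("4. deployment not exists")
			// First pass: create the service and the deployment, then write
			// RealQPS into the status. That status write is itself an update
			// to the CR, which is why a second Reconcile fires immediately.
			// (createService / createDeployment / updateStatus elided here.)
			return ctrl.Result{}, nil
		}
		return ctrl.Result{}, err
	}

	// Second pass: the deployment exists, so compare desired and actual replicas.
	expectReplicas := getExpectReplicas(instance)
	realReplicas := *deployment.Spec.Replicas
	log.Info("9. compare expectReplicas with realReplicas")
	if expectReplicas == realReplicas {
		log.Info("10. return now")
		return ctrl.Result{}, nil
	}

	// Counts differ: scale the deployment and refresh the status (elided).
	return ctrl.Result{}, nil
}

// getExpectReplicas derives the pod count as ceil(TotalQPS / SinglePodQPS).
func getExpectReplicas(ew *elasticwebv1.ElasticWeb) int32 {
	single := *ew.Spec.SinglePodQPS
	total := *ew.Spec.TotalQPS
	replicas := total / single
	if total%single > 0 {
		replicas++ // round up so capacity is never under-provisioned
	}
	return replicas
}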

Now inspect the resource objects in detail with kubectl get. Everything is as expected: the elasticweb, service, deployment, and pods are all healthy:

zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl apply -f config/samples/elasticweb_v1_elasticweb.yaml
namespace/dev created
elasticweb.elasticweb.com.bolingcavalry/elasticweb-sample created

zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get elasticweb -n dev
NAME                AGE
elasticweb-sample   35s

zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get service -n dev
NAME                TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
elasticweb-sample   NodePort   10.107.177.158   <none>        8080:30003/TCP   41s

zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get deployment -n dev
NAME                READY   UP-TO-DATE   AVAILABLE   AGE
elasticweb-sample   2/2     2            2           46s

zhaoqin@zhaoqindeMBP-2 elasticweb % kubectl get pod -n dev
NAME                                 READY   STATUS    RESTARTS   AGE
elasticweb-sample-56fc5848b7-l5thk   1/1     Running   0          50s
elasticweb-sample-56fc5848b7-lqjk5   1/1     Running   0          50s

Verifying the service in a browser

The docker image used in this deployment is tomcat, which makes verification easy: if the default page comes up (cat and all), Tomcat started successfully. My kubernetes host's IP address is 192.168.50.75, so I opened http://192.168.50.75:30003 in a browser. As the figure below shows, the service works:
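The same check also works from a terminal if no browser is handy; here is a one-line HTTP probe against the NodePort (the IP address is specific to my environment, substitute your own node's address):

curl -s -o /dev/null -w '%{http_code}\n' http://192.168.50.75:30003

An output of 200 means Tomcat served the page.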

(Figure: the Tomcat default page, served through the NodePort)

Changing the QPS of a single Pod

Optimizations of our own, or changes in external dependencies (say a cache or database being scaled up), can raise the QPS a single Pod can sustain. Suppose it rises from 500 to 800: since the total QPS stays at 600, the pod count should drop from 2 to 1. Let's see whether our Operator makes that adjustment automatically.
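For reference, the arithmetic behind that expectation (assuming the operator rounds the quotient up, which is what the earlier expectReplicas [2] log line implies for 600/500):

replicas = ceil(totalQPS / singlePodQPS)
before the patch: ceil(600 / 500) = 2
after the patch:  ceil(600 / 800) = 1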

Create a new file named update_single_pod_qps.yaml in the config/samples/ directory, with the following content:

spec:
  singlePodQPS: 800

Run the following command to update the single-Pod QPS from 500 to 800 (note that the type parameter matters, don't leave it out):

kubectl patch elasticweb elasticweb-sample \
  -n dev \
  --type merge \
  --patch "$(cat config/samples/update_single_pod_qps.yaml)"

Now look at the controller logs. In the screenshot below, red box 1 shows that the spec has been updated, and red box 2 shows the pod count recomputed from the new parameter, matching expectations:

(Figure: controller log output; red box 1 marks the updated spec, red box 2 the recomputed pod count)
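To double-check without reading logs, you can query the cluster again; if the Operator reacted correctly, the deployment should now report a single replica, and the real QPS in the status should read 800 (1 pod x 800). A hypothetical follow-up session, reusing the names from earlier:

kubectl get deployment -n dev
kubectl get pod -n dev
kubectl get elasticweb elasticweb-sample -n dev -o yaml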
