10. Upgrade a Cluster Node
You must connect to the correct host. Failing to do so may result in a score of zero.
[candidate@base] $ ssh cks000034
Context
A kubeadm-provisioned cluster was recently upgraded, but one node was kept on a slightly older version because of workload compatibility concerns.
Task
Upgrade the cluster node node02 to match the version of the control plane node.
Connect to this worker node with the following command:
[candidate@cks000034] ssh node02
PS: Do not modify any running workloads in the cluster.
When the task is complete, do not forget to exit this worker node.
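Before installing, it is worth double-checking that the apt package release you pin matches the control plane's reported version exactly. A minimal offline sketch of that check (both version strings here are assumptions; substitute what `kubectl get node` and `apt search kubelet` actually report):

```shell
# Compare the control-plane version (as shown by `kubectl get node`)
# against a candidate apt package version before pinning the install.
cp_version="v1.32.2"        # assumed control-plane version
pkg_version="1.32.2-1.1"    # candidate kubelet package release
# Strip the leading "v" and the Debian revision suffix, then compare.
if [ "${cp_version#v}" = "${pkg_version%-*}" ]; then
  echo "match: apt install kubelet=${pkg_version}"
else
  echo "version mismatch, pick another package release"
fi
# → match: apt install kubelet=1.32.2-1.1
```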
Solution
1. Check the node versions:
kubectl get node
2. Search for the available kubelet versions:
apt search kubelet
Sorting... Done
Full Text Search... Done
kubelet/unknown 1.32.3-1.1 ppc64el
  Node agent for Kubernetes clusters
3. Upgrade kubelet to match the version on master01:
apt install kubelet=1.32.2-1.1
systemctl daemon-reload
systemctl restart kubelet
11. Generate an SPDX Document with the bom Tool
You must connect to the correct host. Failing to do so may result in a score of zero.
[candidate@base] $ ssh cks000035
Task
The alpine Deployment in the alpine namespace has three containers running different versions of the alpine image.
First, find out which version of the alpine image contains the libcrypto3 package at version 3.1.4-r5.
Next, use the pre-installed bom tool to create an SPDX document at ~/alpine.spdx for the image version you found.
Finally, update the alpine Deployment and remove the container that uses that image version.
The Deployment's manifest can be found at ~/alipine-deployment.yaml.
PS: Do not modify any other containers of the Deployment.
Approach
1. Identify the alpine image that contains the specified package; note that alpine uses apk for package management.
2. Use the bom tool to generate an SPDX document for the image version found.
3. Remove the container running that image version, then re-apply the updated Deployment.
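The version hunt in step 1 boils down to an exact-string grep over `apk list` output. A self-contained sketch using sample output lines of the kind seen in the notes (the two lines are stand-ins for what containers alpine-a and alpine-b report):

```shell
# Sample `apk list` lines; an exact-version grep singles out the
# container whose image ships libcrypto3 3.1.4-r5.
apk_a='libcrypto3-3.3.0-r2 x86_64 {openssl} (Apache-2.0) [installed]'
apk_b='libcrypto3-3.1.4-r5 x86_64 {openssl} (Apache-2.0) [installed]'
printf '%s\n%s\n' "$apk_a" "$apk_b" | grep -F 'libcrypto3-3.1.4-r5'
# → libcrypto3-3.1.4-r5 x86_64 {openssl} (Apache-2.0) [installed]
```

Against the live cluster, the same filter runs behind `kubectl exec ... -- apk list | grep libcrypto3` for each container in turn.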
Solution
1. Inspect each container's packages; the match is in alpine-b:
candidate@master01:~$ kubectl exec -it -n alpine alpine-6cbf67f985-hp8jb -c alpine-a -- apk list | grep "libcrypto3"
libcrypto3-3.3.0-r2 x86_64 {openssl} (Apache-2.0) [installed]
candidate@master01:~$ kubectl exec -it -n alpine alpine-6cbf67f985-hp8jb -c alpine-b -- apk list | grep "libcrypto3"
libcrypto3-3.1.4-r5 x86_64 {openssl} (Apache-2.0) [installed]
candidate@master01:~$ kubectl exec -it -n alpine alpine-6cbf67f985-hp8jb -c alpine-c -- apk list | grep "libcrypto3"
2. Generate the SPDX document with bom. There is no need to memorize the command; just follow the prompts from --help:
bom generate --image registry.cn-qingdao.aliyuncs.com/containerhub/alpine:3.19.1 --output ~/alpine.spdx
3. Delete the container with that image version and re-apply:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: alpine
  name: alpine
  namespace: alpine
spec:
  replicas: 1
  selector:
    matchLabels:
      run: alpine
  template:
    metadata:
      labels:
        run: alpine
    spec:
      containers:
      - name: alpine-a
        image: registry.cn-qingdao.aliyuncs.com/containerhub/alpine:3.20.0
        imagePullPolicy: IfNotPresent
        args:
        - /bin/sh
        - -c
        - while true; do sleep 360000; done
      - name: alpine-c
        image: registry.cn-qingdao.aliyuncs.com/containerhub/alpine:3.16.9
        imagePullPolicy: IfNotPresent
        args:
        - /bin/sh
        - -c
        - while true; do sleep 360000; done
12. Restricted Pod Security Standards
You must connect to the correct host. Failing to do so may result in a score of zero.
[candidate@base] $ ssh cks000036
Context
To meet compliance requirements, the restricted Pod Security Standard is enforced in all user namespaces.
Task
The confidential namespace contains a Deployment that does not comply with the restricted Pod Security Standard, so its Pods cannot be scheduled.
Modify this Deployment so that it complies with the standard, and verify that its Pods run correctly.
PS: The Deployment's manifest can be found at ~/nginx-unprivileged.yaml.
Approach
Because the namespace enforces the restricted standard, delete the existing Deployment, re-apply the manifest, and add the fields the resulting warnings ask for.
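For context, the "restricted" enforcement described above is normally applied through Pod Security Admission labels on the namespace itself. A hypothetical sketch of what the confidential namespace likely carries (label values are an assumption; check the real ones with `kubectl get ns confidential --show-labels`):

```yaml
# Hypothetical namespace manifest; the pod-security.kubernetes.io keys are
# the standard Pod Security Admission labels, values assumed for this task.
apiVersion: v1
kind: Namespace
metadata:
  name: confidential
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
```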
Solution
candidate@master01:~$ kubectl delete -f nginx-unprivileged.yaml
deployment.apps "nginx-unprivileged-deployment" deleted
candidate@master01:~$ kubectl apply -f nginx-unprivileged.yaml
Warning: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "nginx" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "nginx" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "nginx" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "nginx" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
deployment.apps/nginx-unprivileged-deployment created
The warning after deleting and re-creating points out four problems:
1. container "nginx" must set securityContext.allowPrivilegeEscalation=false
2. container "nginx" must set securityContext.capabilities.drop=["ALL"]
3. pod or container "nginx" must set securityContext.runAsNonRoot=true
4. pod or container "nginx" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost"
Modify the Deployment accordingly:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-unprivileged-deployment
  namespace: confidential
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginxinc/nginx-unprivileged
        imagePullPolicy: IfNotPresent
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop: ["ALL"]
          runAsNonRoot: true
          seccompProfile:
            type: RuntimeDefault
        ports:
        - containerPort: 8080
candidate@master01:~$ kubectl apply -f nginx-unprivileged.yaml
deployment.apps/nginx-unprivileged-deployment configured
candidate@master01:~$ kubectl get deployments.apps -n confidential
NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
nginx-unprivileged-deployment   1/1     1            1           16m
13. The Docker Daemon
You must connect to the correct host. Failing to do so may result in a score of zero.
[candidate@base] $ ssh cks000037
Task
Perform the following tasks to secure the cluster node cks000037:
Remove the user developer from the docker group.
PS: Do not remove the user from any other groups.
Reconfigure and restart the Docker daemon to ensure that the socket file at /var/run/docker.sock is owned by the root group.
Reconfigure and restart the Docker daemon to ensure that it does not listen on any TCP ports.
PS: When the work is done, make sure the Kubernetes cluster remains healthy.
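The two daemon changes come down to two one-line edits in the systemd unit files. The sed commands can be rehearsed on throwaway copies first; this is a sketch (the sample lines mirror the unit files shown in the notes; verify your actual files before editing them in place):

```shell
# Rehearse the two unit-file edits on temp files before touching
# /lib/systemd/system/docker.socket and docker.service.
sock=$(mktemp); svc=$(mktemp)
printf 'SocketGroup=docker\n' > "$sock"
printf 'ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:2375 --containerd=/run/containerd/containerd.sock\n' > "$svc"
# Socket file owned by the root group:
sed -i 's/^SocketGroup=.*/SocketGroup=root/' "$sock"
# Drop the TCP listener, keep fd:// socket activation:
sed -i 's| -H tcp://0.0.0.0:2375||' "$svc"
cat "$sock" "$svc"
# → SocketGroup=root
# → ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
```

Applied to the real files, follow with `systemctl daemon-reload` and `systemctl restart docker`, then verify that `ss -tlnp | grep 2375` returns nothing and `ls -l /var/run/docker.sock` shows group root.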
Solution:
1. Remove user developer from the docker group:
root@node02:/home/candidate# id developer
uid=1001(developer) gid=0(root) groups=0(root),40(src),100(users),998(docker)
root@node02:/home/candidate# gpasswd -d developer docker
Removing user developer from group docker
root@node02:/home/candidate# id developer
uid=1001(developer) gid=0(root) groups=0(root),40(src),100(users)
2. Check the Docker service status:
systemctl status docker
● docker.service - Docker Application Container Engine
     Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2025-10-27 17:12:36 CST; 3min 23s ago
TriggeredBy: ● docker.socket
       Docs: https://docs.docker.com
   Main PID: 1491 (dockerd)
      Tasks: 9
     Memory: 50.9M
     CGroup: /system.slice/docker.service
             └─1491 /usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:2375 --containerd=/run/containerd/containerd.sock
The journal shows the problem directly:
Oct 27 17:12:36 node02 dockerd[1491]: time="2025-10-27T17:12:36.302064925+08:00" level=warning msg="[DEPRECATION NOTICE]: API is accessible on http://0.0.0.0:2375 without encrypti>
Oct 27 17:12:36 node02 dockerd[1491]: time="2025-10-27T17:12:36.330200860+08:00" level=info msg="API listen on [::]:2375"
3. Go to the relevant unit files:
cd /lib/systemd/system/
ll docker*
-rw-r--r-- 1 root root 1749 Mar 15  2025 docker.service
-rw-r--r-- 1 root root  295 Mar 15  2025 docker.socket
In docker.socket, set the socket's group to root:
cat docker.socket
[Unit]
Description=Docker Socket for the API
[Socket]
# If /var/run is not implemented as a symlink to /run, you may need to
# specify ListenStream=/var/run/docker.sock instead.
ListenStream=/run/docker.sock
SocketMode=0660
SocketUser=root
SocketGroup=root    # changed to root
[Install]
WantedBy=sockets.target
In docker.service, change the ExecStart line so the daemon no longer listens on TCP (the rest of the unit file is unchanged):
cat docker.service
[Unit]
Description=Docker Application Container Engine
...
[Service]
Type=notify
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
#ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:2375 --containerd=/run/containerd/containerd.sock
...
Then reload and restart:
systemctl daemon-reload
systemctl restart docker
14. Cilium Network Policy
You must connect to the correct host. Failing to do so may result in a score of zero.
[candidate@base] $ ssh cks000039
Context
For this question you may refer to:
CiliumNetworkPolicy https://docs.cilium.io/en/stable/network/kubernetes/policy/#ciliumnetworkpolicy
Task
Use Cilium to perform the following tasks to secure the existing application's internal and external network traffic.
PS: You may use a browser to access the Cilium documentation.
First, create an L4 CiliumNetworkPolicy named nodebb in the nodebb namespace and configure it as follows:
⚫ Allow all Pods running in the ingress-nginx namespace to access the Pods of the nodebb Deployment
⚫ Require mutual authentication
Then, extend the network policy created in the previous step as follows:
⚫ Allow the host to access the Pods of the nodebb Deployment
⚫ Do not use mutual authentication
Approach
I did not get this question on my exam, so treat it as background only. The URL given in the question contains reference examples. Points worth remembering:
1. The namespace selector key: k8s:io.kubernetes.pod.namespace: ingress-nginx
Unlike an ordinary pod-label selector, this key must carry the k8s: prefix.
2. The task itself is not complicated; the field names you need can be looked up directly with kubectl explain — just remember the relevant field names and what they mean.
Solution
cat cilium.yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "nodebb"
  namespace: "nodebb"
spec:
  endpointSelector:
    matchLabels:
      app: nodebb
  ingress:
  - fromEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: ingress-nginx
    authentication:
      mode: "required"
  - fromEntities:
    - host