Notes from KubeEdge in Practice

This post collects notes from my hands-on practice with KubeEdge, including open questions and their solutions. It is updated from time to time.

Miscellaneous

Compiling KubeEdge with 2 GB of RAM fails; 4 GB works.
Replicas of the same pod expose the same node port, so scaling up fails: the port on the node is already taken.
Run edgecore once to generate the configuration file, then edit it. Mind the file's location and the CPU architecture: on an arm platform the pause image must be kubeedge/pause-arm:3.1, otherwise edgecore fails.
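The pause image is set in that generated config. A minimal excerpt, assuming a recent edgecore.yaml layout (the path and exact field names can differ between KubeEdge versions, so verify against your own generated file):

```yaml
# /etc/kubeedge/config/edgecore.yaml, generated with `edgecore --minconfig`
modules:
  edged:
    hostnameOverride: edge-node2              # node name registered with the cloud
    podSandboxImage: kubeedge/pause-arm:3.1   # kubeedge/pause:3.1 on x86
```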
Check the hostname: it must be valid (lowercase letters, digits, '-', '.'), otherwise the node cannot register. Sometimes the only message returned is err:<nil>, which gives nothing to go on.
The edge system must have a default gateway, otherwise edgecore hits a segmentation fault. According to the issue tracker this is fixed, but I still see it.
KubeEdge is not a full equivalent of k8s: some k8s commands are not implemented yet, for example the commands for viewing and running containers are missing.
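The hostname rule above can be checked before registering. A rough sketch; the pattern is my simplification of the RFC 1123 subdomain rule that Kubernetes applies to node names:

```shell
# Accept lowercase letters, digits, '-' and '.'; the name must start
# and end with an alphanumeric character.
check_node_name() {
  echo "$1" | grep -Eq '^[a-z0-9]([a-z0-9.-]*[a-z0-9])?$' \
    && echo valid || echo invalid
}

check_node_name "edge-node2"   # valid
check_node_name "Edge_Node2"   # invalid: uppercase letter and underscore
```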

Related Bugs I Have Collected

Recorded 2020-04-27:
led example: creating the CRDs also creates a ConfigMap, but sometimes it comes up without Data, i.e. without the fields from the yaml file. Deleting the cm and recreating the CRDs may bring it back. When the cm is missing, docker on the edge reports that the json file cannot be found.

Recorded 2020-04-19:
Built a test image locally (compiled the demo on the edge machine and built the image right there, to keep testing simple). Creating the deployment from the cloud worked; after deleting it, the pod stayed in Terminating on the cloud side. A little later the test image itself was deleted, and the edge logs showed nothing. The same thing happened last month.
Root cause: the machine was low on disk space, with usage past roughly the 80% mark. (Note: the root filesystem was only at 7% usage, but a separately mounted windows directory was at 90%; why that mount triggered the low-space condition is unclear.)
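Worth checking in this situation: which filesystem actually backs the docker image store, not just '/'. A small helper for that, noting that the cleanup thresholds live in edgecore.yaml under the edged module (imageGCHighThreshold / imageGCLowThreshold; the 80/40 defaults are my recollection, verify against your version):

```shell
# Print the usage percentage of the filesystem holding a given path.
# Run it against the docker root: a nearly full mount there can trigger
# image garbage collection even while '/' has plenty of free space.
usage_pct() {
  df -P "$1" | awk 'NR==2 { sub(/%/, "", $5); print $5 }'
}

usage_pct /    # usage of the root filesystem, e.g. 7
```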

Recorded 2020-03-30:
The arm edge node ran for about a day and a half, then crashed:

panic: runtime error: index out of range

goroutine 100 [running]:
github.com/kubeedge/kubeedge/edgemesh/pkg/proxy.updateServer(0x4cbb180, 0x12, 0x4c47dc8, 0x0, 0x2, 0x4c47dd0, 0x0)
/home/ubuntu/kubeedge/src/github.com/kubeedge/kubeedge/edgemesh/pkg/proxy/proxy.go:457 +0x528

The cloud showed NotReady while the pod on the edge was still there. After restarting edgecore and reconnecting, a fresh pod was created.
Note: I added an x86 machine and scaled the deployment to 2 for comparison.
Continuing: the next morning edgecore was still running normally, but the cloud showed NotReady; the edge logs showed nothing unusual and indicated that status was still being reported. Stopping and restarting the edge gave timeouts; after a few minutes it connected, but by then the containers were running in docker on the edge while the cloud showed Pending or Terminating. Force-deleting on the cloud worked; stopping a container on the edge with docker stop made the pod restart automatically, which the cloud never noticed. By this point the state felt completely out of sync.
Stopping edgecore, removing all the docker containers and starting the edge again: as soon as it connected to the cloud, the containers started up by themselves. The edge apparently remembers this state; the cloud knows nothing about it.

Recorded 2020-03-19:
kubectl exec and kubectl logs are not supported; upstream says support is coming. To be watched.
Scheduling information is thin: kubectl describe only shows that the pod was scheduled to some node, not whether it then succeeded or failed there. The only option is to go to the node and check with docker logs.
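Concretely, the workaround looks like this on the edge host (the name filter is illustrative; edged names containers the same way kubelet does, so filtering on the pod's container name works):

```shell
# find the pod's container on the node, then read its log
cid=$(docker ps -a --filter name=led-light-mapper --format '{{.ID}}' | head -n 1)
docker logs --tail 50 "$cid"
```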

Some Ideas

As things stand, a mapper configured on the cloud side targets exactly one node, i.e. one device, because k8s scheduling selects that node. That makes it a poor fit for batch deployment. Whether this can be changed, and whether changing it would conflict with KubeEdge's design philosophy, I do not know.
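One direction I may try (untested; the label name is made up): label every node that carries the device, and let a DaemonSet with a nodeSelector place one mapper per labelled node, instead of pinning a Deployment to a single node:

```yaml
# first: kubectl label node edge-node1 edge-node2 device=led
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: led-light-mapper
spec:
  selector:
    matchLabels:
      app: led-light-mapper
  template:
    metadata:
      labels:
        app: led-light-mapper
    spec:
      nodeSelector:
        device: "led"        # one mapper on every node with this label
      hostNetwork: true
      containers:
      - name: led-light-mapper-container
        image: latelee/led-light-mapper:v1.1
```

The per-node ConfigMap (device-profile-config-<node name>) that the mapper mounts would still differ per node, which is exactly the obstacle to batch deployment noted above.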

Problems

Pods fail to schedule

Environment: three hosts that had k8s deployed; k8s was then cleaned off.
Deploying a deployment the usual k8s way: the pod showed Pending, and deleting it left it in Terminating. On a retry, one pod did run on one node; after scaling up, that node kept its pod running while the other stayed Pending. A night later, nothing had changed.
Force-stopping cloudcore and edgecore left the nodes NotReady in k8s, while the containers on the nodes kept running.

Questions:
Why do the pods fail to schedule? And how do I shut pods down gracefully before stopping cloudcore? I have not found a way yet.
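A teardown order I would try (a sketch, not something I have verified; the node and deployment names are placeholders):

```shell
kubectl cordon edge-node2                    # keep new pods off the node
kubectl delete deployment nginx-deployment   # ordinary delete, 30 s grace period
kubectl get pod -o wide -w                   # wait until the pods are really gone
# only then stop the two cores:
#   on the edge host:  stop edgecore
#   on the cloud host: stop cloudcore
```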

The cloud side prints:

messagehandler.go:448] write error, connection for node edge-node2 will be closed, affected event id: dba8d7ec-ffa4-4c6f-ac6e-accfa527a366, parent_id: , group: resource, source: edgecontroller, resource: default/pod/nginx-deployment-77698bff7d-jdm8k, operation: update, reason tls: use of closed connection

The edge side prints:

process.go:130] failed to send message: tls: use of closed connection
process.go:196] websocket write error: failed to send message, error: tls: use of closed connection

Guess: the connection was dropped, yet the node still shows Ready; I do not know why.
Follow-up: after deleting and redeploying some time later, it succeeded.

With a normal connection, after running overnight, the node went NotReady and pods were endlessly destroyed and recreated:

# kubectl get pod
NAME READY STATUS RESTARTS AGE
led-light-mapper-deployment-94bbdf88-26h2d 0/1 Terminating 0 14h
led-light-mapper-deployment-94bbdf88-2hwxq 0/1 Terminating 0 90m
led-light-mapper-deployment-94bbdf88-4f8pd 0/1 Terminating 0 80m
led-light-mapper-deployment-94bbdf88-52p9w 0/1 Terminating 0 15m
led-light-mapper-deployment-94bbdf88-8t9cl 0/1 Terminating 0 30m
led-light-mapper-deployment-94bbdf88-9bpt7 0/1 Terminating 0 95m
led-light-mapper-deployment-94bbdf88-9nfk6 0/1 Terminating 0 65m
led-light-mapper-deployment-94bbdf88-c8wtb 0/1 Terminating 0 85m
led-light-mapper-deployment-94bbdf88-kpcx4 0/1 Terminating 0 75m
led-light-mapper-deployment-94bbdf88-kwgqs 0/1 Terminating 0 35m
led-light-mapper-deployment-94bbdf88-l6hn2 0/1 Terminating 0 55m
led-light-mapper-deployment-94bbdf88-pk6fx 0/1 Terminating 0 5m1s
led-light-mapper-deployment-94bbdf88-qk9gj 0/1 Terminating 0 60m
led-light-mapper-deployment-94bbdf88-sgns2 0/1 Terminating 0 100m
led-light-mapper-deployment-94bbdf88-sk8gf 0/1 Terminating 0 20m
led-light-mapper-deployment-94bbdf88-svkgr 0/1 Terminating 0 50m
led-light-mapper-deployment-94bbdf88-tjz7z 0/1 Terminating 0 45m
led-light-mapper-deployment-94bbdf88-vwx7w 0/1 Pending 0 1s
led-light-mapper-deployment-94bbdf88-xfsc8 0/1 Terminating 0 10m
led-light-mapper-deployment-94bbdf88-xpq8k 0/1 Terminating 0 40m
led-light-mapper-deployment-94bbdf88-zhj24 0/1 Terminating 0 25m
led-light-mapper-deployment-94bbdf88-zncjg 0/1 Terminating 0 70m

Checking the edge side:

I0319 09:17:05.425874    2147 communicate.go:151] has msg
I0319 09:17:05.426062 2147 communicate.go:155] redo task due to no recv
I0319 09:17:05.427233 2147 communicate.go:151] has msg
I0319 09:17:05.427416 2147 communicate.go:155] redo task due to no recv
I0319 09:17:05.428657 2147 dtcontext.go:69] CommModule is healthy 1584580625

context_channel.go:175] the message channel is full, message: {Header:{ID:5f072fe2-b8cf-411e-8aee-16e927f27433 ParentID: Timestamp:1584580605260 ResourceVersion:391570 Sync:false} Router:{Source:edgecontroller Group:resource Operation:update Resource:default/pod/led-light-mapper-deployment-94bbdf88-26h2d} Content:map[metadata:map[creationTimestamp:2020-03-18T10:23:50Z deletionGracePeriodSeconds:30 deletionTimestamp:2020-03-18T23:40:09Z generateName:led-light-mapper-deployment-94bbdf88- labels:map[app:led-light-mapper pod-template-hash:94bbdf88] name:led-light-mapper-deployment-94bbdf88-26h2d namespace:default ownerReferences:[map[apiVersion:apps/v1 blockOwnerDeletion:true controller:true kind:ReplicaSet name:led-light-mapper-deployment-94bbdf88 uid:52c44b48-1214-4b10-9007-23093a953a40]] resourceVersion:391570 selfLink:/api/v1/namespaces/default/pods/led-light-mapper-deployment-94bbdf88-26h2d uid:12002c7e-69fe-4a31-bf66-759d78380abe] spec:map[containers:[map[image:latelee/led-light-mapper:v1.1 imagePullPolicy:IfNotPresent name:led-light-mapper-container resources:map[] securityContext:map[privileged:true] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File volumeMounts:[map[mountPath:/opt/kubeedge/ name:config-volume] map[mountPath:/var/run/secrets/kubernetes.io/serviceaccount name:default-token-gb4kq readOnly:true]]]] dnsPolicy:ClusterFirst enableServiceLinks:true hostNetwork:true nodeName:latelee.org.ttucon-2142ec priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] serviceAccount:default serviceAccountName:default terminationGracePeriodSeconds:30 tolerations:[map[effect:NoExecute key:node.kubernetes.io/not-ready operator:Exists tolerationSeconds:300] map[effect:NoExecute key:node.kubernetes.io/unreachable operator:Exists tolerationSeconds:300]] volumes:[map[configMap:map[defaultMode:420 name:device-profile-config-edge-node2] name:config-volume] map[name:default-token-gb4kq secret:map[defaultMode:420 
secretName:default-token-gb4kq]]]] status:map[phase:Pending qosClass:BestEffort]]}

DNS warnings:

I0319 16:25:18.563472   17947 record.go:24] Warning MissingClusterDNS kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to "Default" policy.
I0319 16:25:18.563724 17947 record.go:24] Warning MissingClusterDNS pod: "webgin-deployment-747c6887f5-dwmtb_default(1ceb1dd6-6dae-4aff-a2c6-d0de64373031)". kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to "Default" policy.
I0319 16:25:18.563902 17947 record.go:19] Warning DNSConfigForming Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 8.8.8.8 8.8.4.4 2001:4860:4860::8888
E0319 16:25:18.564035 17947 dns.go:135] Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 8.8.8.8 8.8.4.4 2001:4860:4860::8888

I0319 16:30:09.037479   17947 edged.go:808] consume added pod [webgin-deployment-7ccff86d8b-s227c] successfully
I0319 16:30:10.506631 17947 record.go:19] Normal Started Started container webgin
E0319 16:30:10.507199 17947 kuberuntime_container.go:172] Failed to create legacy symbolic link "/var/log/containers/webgin-deployment-747c6887f5-f6547_default_webgin-1772b70cd7725f77c30b9cf47e3ce57159d9fdccf47c0c19aed8edf779c52c16.log" to container "1772b70cd7725f77c30b9cf47e3ce57159d9fdccf47c0c19aed8edf779c52c16" log "/var/log/pods/default_webgin-deployment-747c6887f5-f6547_abc27c3c-50f1-49e9-9f2e-b00fa802dc7f/webgin/0.log": symlink /var/log/pods/default_webgin-deployment-747c6887f5-f6547_abc27c3c-50f1-49e9-9f2e-b00fa802dc7f/webgin/0.log /var/log/containers/webgin-deployment-747c6887f5-f6547_default_webgin-1772b70cd7725f77c30b9cf47e3ce57159d9fdccf47c0c19aed8edf779c52c16.log: no such file or directory
I0319 16:30:10.507557 17947 edged.go:808] consume added pod [webgin-deployment-747c6887f5-f6547] successfully
I0319 16:30:10.667156 17947 edged.go:648] sync loop ignore event: [ContainerDied], with pod [1ceb1dd6-6dae-4aff-a2c6-d0de64373031] not found
W0319 16:30:10.685178 17947 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for default/webgin-deployment-747c6887f5-f6547 through plugin: invalid network status for
W0319 16:30:10.871129 17947 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for default/webgin-deployment-747c6887f5-f6547 through plugin: invalid network status for
I0319 16:30:10.914857 17947 container_manager_linux.go:880] Found 44 PIDs in root, 44 of them are not to be moved
I0319 16:30:11.088286 17947 edged.go:645] sync loop get event [ContainerStarted], ignore it now.
I0319 16:30:11.327738 17947 edged.go:645] sync loop get event [ContainerStarted], ignore it now.
W0319 16:30:12.413498 17947 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for default/webgin-deployment-747c6887f5-f6547 through plugin: invalid network status for
W0319 16:30:12.543879 17947 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for default/webgin-deployment-747c6887f5-f6547 through plugin: invalid network status for

And from a pod that deployed successfully:

I0319 16:25:18.564503   17947 edged.go:808] consume added pod [webgin-deployment-747c6887f5-dwmtb] successfully
I0319 16:25:18.564974 17947 proxy.go:318] [L4 Proxy] process other resource: kube-system/endpoints/kube-scheduler
I0319 16:25:18.688263 17947 edged_volumes.go:54] Using volume plugin "kubernetes.io/empty-dir" to mount wrapped_default-token-gb4kq