A General Approach to Troubleshooting Kubernetes Application Issues
This article is reposted from the WeChat official account 「明哥的IT隨筆」, written by IT明哥. To repost it, please contact that account.
Hello everyone, I'm 明哥!
This article walks through a general approach to troubleshooting Kubernetes application issues, shares a production troubleshooting case of this kind, and summarizes the knowledge behind it. I hope you find it useful.
1 Background: Technology Trends
We know that one trend in the further development of big data is its deeper convergence with cloud computing: at the infrastructure level a growing preference for storage-compute separation and for object storage, at the deployment level support for hybrid-cloud and multi-cloud scenarios, and overall an embrace of cloud computing and a move toward cloud native.
In the underlying technology stack, this shows up as mainstream big data platforms and the big data components beneath them all starting to support the container technology stack represented by Kubernetes and Docker.
Big data practitioners therefore need to keep expanding their skill set and master the basics and common commands of Kubernetes and Docker; otherwise, when a big-data problem has to be traced into the container layer, a gap in skills leaves them with no way to even get started.
Big data industry trends from a technical perspective
Here I share the troubleshooting of a Docker-container-related failure on a big data platform, and use it to introduce the background knowledge and the general troubleshooting approach for this class of problems.
2 The Symptom
In Transwarp's big data platform TDH, the zookeeper service would not start. In TDH, the individual services actually run in Docker containers under the control of k8s. Running kubectl get pods -owide | grep -i zoo shows that the corresponding pod is in the CrashLoopBackOff state, as shown below:
pod-CrashLoopBackOff
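For reference, the output looked roughly like the following; the RESTARTS and AGE values here are illustrative, while the pod name, node, and IP match the kubectl describe output later in this article:

```
kubectl get pods -o wide | grep -i zoo
# NAME                                        READY   STATUS             RESTARTS   AGE   IP              NODE
# zookeeper-server-license-7fbfc544fc-h8nn9   0/1     CrashLoopBackOff   8          15m   10.20.159.115   uf30-tdh3-regression
```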
3 Background: What Is CrashLoopBackOff?
A pod in CrashLoopBackOff means that a container in the pod was started, then crashed, was then automatically restarted, crashed again, and so on, stuck in a (starting, crashing, starting, crashing) loop.
Note: the reason containers in a pod are restarted automatically is the restartPolicy field in the PodSpec. It defaults to Always, i.e. the container is restarted after a failure (a small command sketch for checking this follows the notes below):
- A PodSpec has a restartPolicy field with possible values Always, OnFailure, and Never which applies to all containers in a pod, the default value is Always;
- The restartPolicy only refers to restarts of the containers by the kubelet on the same node (so the restart count will reset if the pod is rescheduled in a different node).
- Failed containers that are restarted by the kubelet are restarted with an exponential back-off delay (10s, 20s, 40s …) capped at five minutes, and is reset after ten minutes of successful execution.
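As a quick check, the policy and the current restart count can be read straight from the API; a minimal sketch using the pod name from the case below (the jsonpath expressions are standard kubectl):

```
# Print the pod's restartPolicy and the restart count of its first container
kubectl get pod zookeeper-server-license-7fbfc544fc-h8nn9 \
  -o jsonpath='{.spec.restartPolicy}{"\n"}{.status.containerStatuses[0].restartCount}{"\n"}'
# In this case: Always, followed by the restart count (8 at the time of the describe output below)
```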
4 Background: Why Does CrashLoopBackOff Happen?
CrashLoopBackOff is a fairly common pod error. It can be triggered for many reasons; the main high-level causes are:
- the Kubernetes cluster itself was deployed incorrectly;
- some parameters of the pod, or of the container underneath it, are misconfigured;
- the application running inside the pod's container keeps failing every time it is restarted.
5 Background: How to Troubleshoot the Application Inside a Pod's Container?
When the application running inside a pod's container fails, the general troubleshooting approach is (a command-level sketch follows the list):
- Step 1: get the pod details with kubectl describe pod xxx
- Step 2: check the application's logs with kubectl logs xxx
- Step 3: go further, locate and inspect the application's other log files, and dig into the root cause
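A sketch of the three steps at the command level; names in angle brackets are placeholders, and the log path in the last command is purely illustrative:

```
kubectl describe pod <pod-name> -n <namespace>   # step 1: events, container state, restart count, mounts
kubectl logs <pod-name> -n <namespace>           # step 2: stdout/stderr of the application container
# step 3: if the application logs to a file rather than stdout/stderr, locate that file
# (the pod's volume mounts are a good hint) and read it, e.g. inside the container:
kubectl exec -it <pod-name> -n <namespace> -- tail -n 50 /path/to/app.log
# note: a container that crashes immediately usually cannot be exec'd into; reading the file
# on the node (e.g. via a hostPath mount) is the fallback used later in this article.
```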
Some readers may wonder: steps 2 and 3 both look at the logs of the application inside the pod's container, so what is the difference?
The two steps actually read different log files. The details depend on Kubernetes' logging mechanism and on where the application inside the pod writes its logs (a short command sketch follows this list):
- kubectl logs shows the logs from the container's standard output (stdout) and standard error (stderr);
- logs that the application writes to other files cannot be shown by kubectl logs; you have to find the log file path and read them yourself;
- k8s recommends that applications write their logs to the container's stdout and stderr;
- an application inside a container can write its logs directly to the container's stdout and stderr;
- if the application inside the container cannot, or it is inconvenient to, write its logs directly to stdout and stderr, you can use the sidecar pattern: deploy another sidecar container in the same pod as the application's container, and have that sidecar read the application's log files and write them to its own stdout and stderr;
- under the hood, k8s uses the kubelet running on each node to collect the stdout and stderr of all containers on that node and write them into local files managed by the kubelet;
- when a user runs kubectl logs xx, the command calls the kubelet on the node where that container runs, which retrieves the local log files it manages and returns the logs;
- kubectl logs xxx therefore spares users the tedious work of logging in to the corresponding node of the k8s cluster to read the logs there, which is a great convenience.
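The sketch mentioned above: how kubectl logs maps to containers and to node-local files. The on-node paths are typical defaults for a Docker-based setup and may differ with the container runtime and logging driver, so treat them as assumptions:

```
kubectl logs zookeeper-server-license-7fbfc544fc-h8nn9   # stdout/stderr of the pod's (single) container
kubectl logs <pod-name> -c <container-name>              # stdout/stderr of a specific container, e.g. a logging sidecar
# On the node itself, the kubelet/runtime keeps these streams as local files, typically under:
ls /var/log/containers/   # per-container log symlinks
ls /var/log/pods/         # per-pod log directories
```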
P.S. We are only discussing the logs of applications running in k8s containers here. Besides application logs, a k8s cluster also has the logs of many system components, such as docker, kubelet, kube-proxy, kube-apiserver, kube-scheduler, and etcd.
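Those system-component logs are read differently; a short sketch, assuming a systemd-based node and a kubeadm-style control plane (unit names and static-pod names may differ in your environment):

```
journalctl -u kubelet --since "1 hour ago"    # kubelet usually runs as a systemd service on each node
journalctl -u docker  --since "1 hour ago"    # docker daemon logs
kubectl logs -n kube-system kube-apiserver-<node-name>   # control-plane components that run as static pods
```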
6 Replaying the Troubleshooting
Following the general approach above, let's replay how this CrashLoopBackOff problem was investigated.
6.1 Troubleshooting replay: get the pod details with kubectl describe pod xxx
A partial screenshot of the command's output is shown below. From the Events section we can learn the following: the pod was successfully scheduled to a node, the image was pulled successfully, and the container was created and started successfully, but the program inside the container then kept failing, and the pod finally went into the BackOff state:
kubectl-describe-pod
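If you only want the event stream rather than the full describe output, the events can also be queried directly; a small sketch using this case's pod name:

```
kubectl get events \
  --field-selector involvedObject.name=zookeeper-server-license-7fbfc544fc-h8nn9 \
  --sort-by=.metadata.creationTimestamp
```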
The full output of the command is as follows:
```
kubectl describe pod zookeeper-server-license-7fbfc544fc-h8nn9
Name:               zookeeper-server-license-7fbfc544fc-h8nn9
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               uf30-tdh3-regression/10.20.159.115
Start Time:         Mon, 11 Oct 2021 16:56:30 +0800
Labels:             name=zookeeper-server-license
                    pod-template-hash=3969710097
                    podConflictName=zookeeper-server-license
Annotations:        <none>
Status:             Running
IP:                 10.20.159.115
Controlled By:      ReplicaSet/zookeeper-server-license-7fbfc544fc
Containers:
  zookeeper-server-license:
    Container ID:  docker://0887c97ab185f1b004759e8c85b48631f511cb43088424190c3f27c715bb8414
    Image:         transwarp/zookeeper:transwarp-6.0.2-final
    Image ID:      docker-pullable://transwarp/zookeeper@sha256:19bf952dedc70a1d82ba9dd9217a2b7e34fc018561c2741d8f6065c0d87f8a10
    Port:          <none>
    Args:
      boot.sh
      LICENSE_NODE
    State:          Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 11 Oct 2021 17:12:09 +0800
      Finished:     Mon, 11 Oct 2021 17:12:10 +0800
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 11 Oct 2021 17:07:07 +0800
      Finished:     Mon, 11 Oct 2021 17:07:08 +0800
    Ready:          False
    Restart Count:  8
    Environment:
      ZOOKEEPER_CONF_DIR:  /etc/license/conf
    Mounts:
      /etc/license/conf from conf (rw)
      /etc/localtime from timezone (rw)
      /etc/tos/conf from tos (rw)
      /etc/transwarp/conf from transwarphosts (rw)
      /usr/lib/transwarp/plugins from plugin (rw)
      /var/license from data (rw)
      /var/log/license/ from log (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-g42jt (ro)
      /vdir from mountbind (rw)
Conditions:
  Type           Status
  Initialized    True
  Ready          False
  PodScheduled   True
Volumes:
  data:
    Type:          HostPath (bare host directory volume)
    Path:          /var/license
    HostPathType:
  conf:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/license/conf
    HostPathType:
  log:
    Type:          HostPath (bare host directory volume)
    Path:          /var/log/license/
    HostPathType:
  mountbind:
    Type:          HostPath (bare host directory volume)
    Path:          /transwarp/mounts/license
    HostPathType:
  plugin:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/lib/transwarp/plugins
    HostPathType:
  timezone:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/localtime
    HostPathType:
  transwarphosts:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/transwarp/conf
    HostPathType:
  tos:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/tos/conf
    HostPathType:
  default-token-g42jt:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-g42jt
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  zookeeper-server-license=true
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                 Age                 From                           Message
  ----     ------                 ----                ----                           -------
  Normal   SuccessfulMountVolume  15m                 kubelet, uf30-tdh3-regression  MountVolume.SetUp succeeded for volume "default-token-g42jt"
  Normal   SuccessfulMountVolume  15m                 kubelet, uf30-tdh3-regression  MountVolume.SetUp succeeded for volume "conf"
  Normal   SuccessfulMountVolume  15m                 kubelet, uf30-tdh3-regression  MountVolume.SetUp succeeded for volume "tos"
  Normal   SuccessfulMountVolume  15m                 kubelet, uf30-tdh3-regression  MountVolume.SetUp succeeded for volume "mountbind"
  Normal   SuccessfulMountVolume  15m                 kubelet, uf30-tdh3-regression  MountVolume.SetUp succeeded for volume "transwarphosts"
  Normal   SuccessfulMountVolume  15m                 kubelet, uf30-tdh3-regression  MountVolume.SetUp succeeded for volume "log"
  Normal   SuccessfulMountVolume  15m                 kubelet, uf30-tdh3-regression  MountVolume.SetUp succeeded for volume "plugin"
  Normal   SuccessfulMountVolume  15m                 kubelet, uf30-tdh3-regression  MountVolume.SetUp succeeded for volume "data"
  Normal   SuccessfulMountVolume  15m                 kubelet, uf30-tdh3-regression  MountVolume.SetUp succeeded for volume "timezone"
  Normal   Scheduled              15m                 default-scheduler              Successfully assigned zookeeper-server-license-7fbfc544fc-h8nn9 to uf30-tdh3-regression
  Normal   Pulled                 15m (x3 over 15m)   kubelet, uf30-tdh3-regression  Successfully pulled image "transwarp/zookeeper:transwarp-6.0.2-final"
  Normal   Created                15m (x3 over 15m)   kubelet, uf30-tdh3-regression  Created container
  Normal   Started                15m (x3 over 15m)   kubelet, uf30-tdh3-regression  Started container
  Normal   Pulling                15m (x4 over 15m)   kubelet, uf30-tdh3-regression  pulling image "transwarp/zookeeper:transwarp-6.0.2-final"
  Warning  BackOff                44s (x70 over 15m)  kubelet, uf30-tdh3-regression  Back-off restarting failed container
```
6.2 Troubleshooting replay: check the application's logs with kubectl logs xxx
Next we try kubectl logs xxx to look at the logs of the application inside the pod's container, hoping to find the cause of the problem. A partial screenshot of the command's output is shown below:
As the screenshot shows, unfortunately, the command's output does not reveal the root cause of the problem.
In terms of the logging mechanism, the zk application in Transwarp TDH apparently does not print its logs to stdout and stderr, which is why kubectl logs xxx has nothing relevant to show.
We need to dig further.
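When the logs of the current attempt are empty or unhelpful, it is also worth looking at the previous attempt; a small sketch using this case's pod name (the flags are standard kubectl, though here they still showed nothing useful, because the application writes to a log file instead of stdout/stderr):

```
kubectl logs zookeeper-server-license-7fbfc544fc-h8nn9               # the current (crash-looping) attempt
kubectl logs zookeeper-server-license-7fbfc544fc-h8nn9 --previous    # stdout/stderr of the last crashed attempt
kubectl logs zookeeper-server-license-7fbfc544fc-h8nn9 --tail=100    # only the last 100 lines
```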
6.3 Troubleshooting replay: locate and inspect the application's other log files, and dig into the root cause
To go further, we first need to find the paths of the application's other log files.
Since TDH is closed source, we cannot read the application's source code. Without contacting the vendor's support, we can still use kubectl describe pod xxx to see which volumes the pod mounts, then guess and verify our way to the actual log file path. (Troubleshooting is exactly this: bold hypotheses, careful verification!)
From the Mounts section of that command's output we can see that the path /var/log/license is mounted:
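The mount information can be pulled straight from the describe output on the command line; the grep ranges below are illustrative and chosen to cover the nine mounts of this pod:

```
kubectl describe pod zookeeper-server-license-7fbfc544fc-h8nn9 | grep -A 10 'Mounts:'
kubectl describe pod zookeeper-server-license-7fbfc544fc-h8nn9 | grep -B 1 -A 2 'Path:'   # hostPath volumes and their host paths
```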
Next we inspect the log files under /var/log/license and dig into the cause. Note that these are files on the node's local filesystem, so we have to log in to the corresponding node to read them. The key part of the log file is shown below:
The logs reveal the cause: the file /var/license/version-2/snapshot.70000007a, which zk keeps on the local filesystem, is corrupted, so zk cannot start:
```
2021-10-11 17:07:08,330 ERROR org.apache.zookeeper.server.persistence.Util: [myid:16] - [main:Util@239] - Last transaction was partial.
2021-10-11 17:07:08,331 ERROR org.apache.zookeeper.server.quorum.QuorumPeer: [myid:16] - [main:QuorumPeer@453] - Unable to load database on disk
java.io.EOFException
        at java.io.DataInputStream.readInt(DataInputStream.java:392)
```
The full content of the log file is as follows:
```
tail -50 /var/log/license/zookeeper.log
2021-10-11 17:07:08,203 INFO org.apache.zookeeper.server.DatadirCleanupManager: [myid:16] - [main:DatadirCleanupManager@101] - Purge task is not scheduled.
2021-10-11 17:07:08,212 INFO org.apache.zookeeper.server.quorum.QuorumPeerMain: [myid:16] - [main:QuorumPeerMain@127] - Starting quorum peer
2021-10-11 17:07:08,221 INFO org.apache.zookeeper.server.NIOServerCnxnFactory: [myid:16] - [main:NIOServerCnxnFactory@94] - binding to port 0.0.0.0/0.0.0.0:2291
2021-10-11 17:07:08,235 INFO org.apache.zookeeper.server.quorum.QuorumPeer: [myid:16] - [main:QuorumPeer@913] - tickTime set to 9000
2021-10-11 17:07:08,235 INFO org.apache.zookeeper.server.quorum.QuorumPeer: [myid:16] - [main:QuorumPeer@933] - minSessionTimeout set to -1
2021-10-11 17:07:08,235 INFO org.apache.zookeeper.server.quorum.QuorumPeer: [myid:16] - [main:QuorumPeer@944] - maxSessionTimeout set to -1
2021-10-11 17:07:08,236 INFO org.apache.zookeeper.server.quorum.QuorumPeer: [myid:16] - [main:QuorumPeer@959] - initLimit set to 10
2021-10-11 17:07:08,285 INFO org.apache.zookeeper.server.persistence.FileSnap: [myid:16] - [main:FileSnap@83] - Reading snapshot /var/license/version-2/snapshot.70000007a
2021-10-11 17:07:08,330 ERROR org.apache.zookeeper.server.persistence.Util: [myid:16] - [main:Util@239] - Last transaction was partial.
2021-10-11 17:07:08,331 ERROR org.apache.zookeeper.server.quorum.QuorumPeer: [myid:16] - [main:QuorumPeer@453] - Unable to load database on disk
java.io.EOFException
        at java.io.DataInputStream.readInt(DataInputStream.java:392)
        at org.apache.jute.BinaryInputArchive.readInt(BinaryInputArchive.java:63)
        at org.apache.zookeeper.server.persistence.FileHeader.deserialize(FileHeader.java:64)
        at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.inStreamCreated(FileTxnLog.java:558)
        at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.createInputArchive(FileTxnLog.java:577)
        at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.goToNextLog(FileTxnLog.java:543)
        at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.next(FileTxnLog.java:625)
        at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.init(FileTxnLog.java:529)
        at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.<init>(FileTxnLog.java:504)
        at org.apache.zookeeper.server.persistence.FileTxnLog.read(FileTxnLog.java:341)
        at org.apache.zookeeper.server.persistence.FileTxnSnapLog.restore(FileTxnSnapLog.java:132)
        at org.apache.zookeeper.server.ZKDatabase.loadDataBase(ZKDatabase.java:223)
        at org.apache.zookeeper.server.quorum.QuorumPeer.loadDataBase(QuorumPeer.java:417)
        at org.apache.zookeeper.server.quorum.QuorumPeer.start(QuorumPeer.java:409)
        at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:151)
        at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:111)
        at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:78)
2021-10-11 17:07:08,332 ERROR org.apache.zookeeper.server.quorum.QuorumPeerMain: [myid:16] - [main:QuorumPeerMain@89] - Unexpected exception, exiting abnormally
java.lang.RuntimeException: Unable to run quorum server
        at org.apache.zookeeper.server.quorum.QuorumPeer.loadDataBase(QuorumPeer.java:454)
        at org.apache.zookeeper.server.quorum.QuorumPeer.start(QuorumPeer.java:409)
        at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:151)
        at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:111)
        at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:78)
Caused by: java.io.EOFException
        at java.io.DataInputStream.readInt(DataInputStream.java:392)
        at org.apache.jute.BinaryInputArchive.readInt(BinaryInputArchive.java:63)
        at org.apache.zookeeper.server.persistence.FileHeader.deserialize(FileHeader.java:64)
        at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.inStreamCreated(FileTxnLog.java:558)
        at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.createInputArchive(FileTxnLog.java:577)
        at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.goToNextLog(FileTxnLog.java:543)
        at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.next(FileTxnLog.java:625)
        at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.init(FileTxnLog.java:529)
        at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.<init>(FileTxnLog.java:504)
        at org.apache.zookeeper.server.persistence.FileTxnLog.read(FileTxnLog.java:341)
        at org.apache.zookeeper.server.persistence.FileTxnSnapLog.restore(FileTxnSnapLog.java:132)
        at org.apache.zookeeper.server.ZKDatabase.loadDataBase(ZKDatabase.java:223)
        at org.apache.zookeeper.server.quorum.QuorumPeer.loadDataBase(QuorumPeer.java:417)
        ... 4 more
```
7 The Fix
Following the general troubleshooting approach above, we read the logs and found the cause: the file /var/license/version-2/snapshot.70000007a, which zk keeps on the local filesystem, is corrupted, so zk cannot start. Since the cluster runs zk on multiple nodes and zk starts fine on the other nodes, we can delete the data files under that directory on the problem node and then restart zk there; after the restart, this node's zk copies the data back from the other nodes and can serve requests normally again.
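For concreteness, one possible sequence on the problem node. This is only a sketch under the assumptions of this case: the data directory comes from the error log, you should back it up rather than delete it outright, and the restart step assumes that deleting the pod (which its ReplicaSet recreates) is an acceptable way to restart zk; in practice TDH's own manager may be the preferred way:

```
# Back up the corrupted snapshot/txnlog directory, then recreate it empty
mv /var/license/version-2 /var/license/version-2.bak.$(date +%F)
mkdir -p /var/license/version-2
# Restart zk on this node so it re-syncs its data from the other ensemble members,
# e.g. by deleting the pod and letting the ReplicaSet recreate it:
kubectl delete pod zookeeper-server-license-7fbfc544fc-h8nn9
```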
A side-by-side look at zk's data files on the local filesystem of a healthy node and of the problem node is shown below:
zk data on good node
zk data on bad node
After clearing the directory and restarting zk as described above, kubectl get pods shows the service is healthy again, as shown below:
kubectl-get-pods-after-fix
Note: zk actually ships a utility, zkCleanup.sh, for cleaning up the local data files. I did not use it here; instead I backed up and emptied the problem node's local files by hand. Feel free to try the tool yourselves.
zkCleanup.sh
8 Takeaways
- Big data practitioners need to keep expanding their skill set and master the basics and common commands of Kubernetes and Docker, so that a gap in skills does not leave them with no way to get started when a big-data problem has to be traced into the container layer;
- a pod in CrashLoopBackOff means that a container in the pod was started, then crashed, was then automatically restarted, crashed again, and so on, stuck in a (starting, crashing, starting, crashing) loop;
- when the application inside a pod's container fails, the general troubleshooting approach is:
Step 1: get the pod details with kubectl describe pod xxx;
Step 2: check the application's logs with kubectl logs xxx;
Step 3: go further, locate and inspect the application's other log files, and dig into the root cause;
- kubectl logs shows the logs from the container's stdout and stderr; logs that the application writes to other files cannot be shown by kubectl logs, and you have to find the log file path and read them yourself;
- k8s recommends that applications write their logs to the container's stdout and stderr;
- an application inside a container can write its logs directly to the container's stdout and stderr; if it cannot, or it is inconvenient to do so, use the sidecar pattern: deploy another sidecar container in the same pod as the application's container, and have that sidecar read the application's log files and write them to its own stdout and stderr;
- under the hood, k8s uses the kubelet running on each node to collect the stdout and stderr of all containers on that node and write them into local files managed by the kubelet;
- when a user runs kubectl logs xx, the command calls the kubelet on the node where that container runs, which retrieves the local log files it manages and returns the logs;
- kubectl logs xxx therefore spares users the tedious work of logging in to the corresponding node of the k8s cluster to read the logs there, which is a great convenience;
- when troubleshooting: bold hypotheses, careful verification!