The Relationship Between Spark Filling Up Disks and Java Weak References
1. Basic Concepts of References
Suppose we define two variables, num and str; the storage model looks roughly as follows:

```java
int num = 6;
String str = "浪尖聊大數據";
```

After reassigning both, the value of num is changed in place from 6 to 8, while str merely changes the address it stores, say from 0x88 to 0x86. The object "浪尖聊大數據" itself is still in memory, unmodified; a new object "浪尖是帥哥" has simply been added alongside it.
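A minimal sketch of the reassignment just described (the addresses above are illustrative):

```java
num = 8;            // the primitive slot is overwritten in place
str = "浪尖是帥哥";  // str now stores the address of a new String object;
                    // the original "浪尖聊大數據" object is untouched
```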
2. Pass-by-Value & Pass-by-Reference
Four examples illustrate pass-by-value versus pass-by-reference (in Java, arguments are always passed by value; for objects, the value that gets copied is the reference):

Example 1: a primitive type.

```java
void foo(int value) {
    value = 88;
}
foo(num); // num is not changed
```

Example 2: a reference type that provides no method to mutate itself.

```java
void foo(String text) {
    text = "mac";
}
foo(str); // str is not changed either
```

Example 3: a reference type that provides methods to mutate itself.

```java
StringBuilder sb = new StringBuilder("vivo");
void foo(StringBuilder builder) {
    builder.append("5");
}
foo(sb); // sb is changed: it becomes "vivo5"
```

Example 4: a reference type that provides methods to mutate itself, but the method uses the assignment operator instead.

```java
StringBuilder sb = new StringBuilder("oppo");
void foo(StringBuilder builder) {
    builder = new StringBuilder("vivo");
}
foo(sb); // sb is not changed: it is still "oppo"
```
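The four fragments above are easy to verify when consolidated into one compilable class (the method names here are mine):

```java
public class PassByValueDemo {
    static void changePrimitive(int value) { value = 88; }
    static void reassignString(String text) { text = "mac"; }
    static void mutateBuilder(StringBuilder builder) { builder.append("5"); }
    static void reassignBuilder(StringBuilder builder) { builder = new StringBuilder("vivo"); }

    public static void main(String[] args) {
        int num = 6;
        changePrimitive(num);
        System.out.println(num);   // 6: only the copied value was changed

        String str = "浪尖聊大數據";
        reassignString(str);
        System.out.println(str);   // unchanged: only the copied reference was redirected

        StringBuilder sb1 = new StringBuilder("vivo");
        mutateBuilder(sb1);
        System.out.println(sb1);   // vivo5: the shared object itself was mutated

        StringBuilder sb2 = new StringBuilder("oppo");
        reassignBuilder(sb2);
        System.out.println(sb2);   // oppo: reassignment only affected the copy
    }
}
```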
3. Types of References
Soft references. Declare a plain soft reference pointing to a Person object, or associate one with a ReferenceQueue so that the reference is enqueued once its referent has been collected:

```java
// Simply declare a soft reference pointing to a Person object
SoftReference<Person> pSoftReference = new SoftReference<>(new Person("張三", 12));

// Declare a reference queue
ReferenceQueue<Person> queue = new ReferenceQueue<>();

// Declare a Person object, 李四; obj is a strong reference to it
Person obj = new Person("李四", 13);

// Point the soft reference softRef at 李四's object, and associate it with the queue
SoftReference<Person> softRef = new SoftReference<>(obj, queue);

// Declare a Person object named 王酒 that is only softly reachable,
// with its soft reference associated with the queue
SoftReference<Person> softRef2 = new SoftReference<>(new Person("王酒", 15), queue);
```

Usage is simple: softRef.get() returns the referent (or null once it has been collected).
Weak references work the same way, except that a weakly reachable object is reclaimed on the very next garbage collection:

```java
// Declare a weak reference pointing to a Person object
WeakReference<Person> weakReference = new WeakReference<>(new Person("浪尖", 18));

// Declare a reference queue
ReferenceQueue<Person> queue = new ReferenceQueue<>();

// Declare a Person object, 李四; obj is a strong reference to it
Person obj = new Person("李四", 13);

// Declare a weak reference to the object obj points to, bound to the queue
WeakReference<Person> weakRef = new WeakReference<>(obj, queue);
```

Using a weak reference is just as simple: weakRef.get().
Phantom references:

```java
// Declare a reference queue
ReferenceQueue<Person> queue = new ReferenceQueue<>();

// Declare a phantom reference bound to the queue
PhantomReference<Person> reference =
        new PhantomReference<>(new Person("浪尖", 18), queue);

// Getting the value of a phantom reference yields null directly,
// because the referent can never be retrieved through it
System.out.println(reference.get());
```
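Putting the three reference types side by side, here is a runnable sketch (the Person class is assumed, since the original snippets do not show it; note that GC is a request, not a guarantee):

```java
import java.lang.ref.*;

public class ReferenceDemo {
    // Hypothetical Person class used by the snippets above
    static class Person {
        final String name; final int age;
        Person(String name, int age) { this.name = name; this.age = age; }
        @Override public String toString() { return name + "(" + age + ")"; }
    }

    public static void main(String[] args) throws InterruptedException {
        ReferenceQueue<Person> queue = new ReferenceQueue<>();
        SoftReference<Person> soft = new SoftReference<>(new Person("王酒", 15), queue);
        WeakReference<Person> weak = new WeakReference<>(new Person("浪尖", 18), queue);
        PhantomReference<Person> phantom =
                new PhantomReference<>(new Person("李四", 13), queue);

        System.out.println(soft.get());    // 王酒(15): still softly reachable
        System.out.println(weak.get());    // 浪尖(18): still weakly reachable
        System.out.println(phantom.get()); // null: phantom referents are never visible

        System.gc();                       // request a collection (not guaranteed)
        Thread.sleep(100);                 // give the collector time to enqueue references

        System.out.println(weak.get());    // likely null: weak referents go on the first GC
        System.out.println(soft.get());    // likely non-null: soft referents survive until memory is tight

        Reference<? extends Person> ref;
        while ((ref = queue.poll()) != null) {
            System.out.println("enqueued: " + ref); // typically the weak and phantom references
        }
    }
}
```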
4. How ThreadLocal Uses Weak References
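Each Thread carries a ThreadLocalMap that maps ThreadLocal instances to their per-thread values. The entry holds its key, the ThreadLocal object itself, through a weak reference, so once the last strong reference to the ThreadLocal is dropped, the key can be collected and the stale entry expunged during later get/set/remove calls. The value, by contrast, is held strongly until that expunging happens, which is why code running on long-lived threads (thread pools in particular) should call remove() explicitly. A simplified sketch of the JDK's ThreadLocalMap.Entry:

```java
import java.lang.ref.WeakReference;

// Simplified from java.lang.ThreadLocal.ThreadLocalMap in the JDK sources
class ThreadLocalMapSketch {
    static class Entry extends WeakReference<ThreadLocal<?>> {
        Object value; // the per-thread value: referenced strongly

        Entry(ThreadLocal<?> k, Object v) {
            super(k); // the key, the ThreadLocal itself, is referenced weakly
            value = v;
        }
    }
}
```

Spark's ContextCleaner applies the same idea on a larger scale, as the next section shows.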
5. How Spark Uses Weak References for Data Cleanup
The shuffle-related registration actually lives inside ShuffleDependency itself; the shuffle state is registered with the ContextCleaner like this:

```scala
_rdd.sparkContext.cleaner.foreach(_.registerShuffleForCleanup(this))
```
Opening the source of registerShuffleForCleanup, its comment says that a ShuffleDependency is registered so that its data can be cleaned up once the dependency is garbage collected:

```scala
/** Register a ShuffleDependency for cleanup when it is garbage collected. */
def registerShuffleForCleanup(shuffleDependency: ShuffleDependency[_, _, _]): Unit = {
  registerForCleanup(shuffleDependency, CleanShuffle(shuffleDependency.shuffleId))
}
```
The registerForCleanup function it calls is:

```scala
/** Register an object for cleanup. */
private def registerForCleanup(objectForCleanup: AnyRef, task: CleanupTask): Unit = {
  referenceBuffer.add(new CleanupTaskWeakReference(task, objectForCleanup, referenceQueue))
}
```
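CleanupTaskWeakReference pairs a weak reference to the tracked object (an RDD, ShuffleDependency, Broadcast, and so on) with a task describing what to clean once that object is collected. A minimal Java sketch of the pattern (Spark's actual class is written in Scala, and the task is a CleanupTask case class such as CleanShuffle):

```java
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;

// Sketch of the pattern: subclass WeakReference so that, when the referent is
// collected and this reference surfaces on the queue, the cleaner still knows
// which cleanup task to run.
class CleanupTaskWeakReference extends WeakReference<Object> {
    final Object task; // in Spark, a CleanupTask such as CleanShuffle(shuffleId)

    CleanupTaskWeakReference(Object task, Object referent, ReferenceQueue<Object> queue) {
        super(referent, queue);
        this.task = task;
    }
}
```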
referenceBuffer keeps a strong reference to each CleanupTaskWeakReference, ensuring that the weak references themselves are not garbage collected before the reference queue has handled them:

```scala
/**
 * A buffer to ensure that `CleanupTaskWeakReference`s are not garbage collected as long as they
 * have not been handled by the reference queue.
 */
private val referenceBuffer =
  Collections.newSetFromMap[CleanupTaskWeakReference](new ConcurrentHashMap)
```
ContextCleaner runs an internal thread that loops, pulling the weak references of garbage-collected RDDs (and shuffles, broadcasts, and so on) off the reference queue and performing the corresponding cleanup:

```scala
private val cleaningThread = new Thread() { override def run(): Unit = keepCleaning() }
```
The keepCleaning function is as follows:

```scala
/** Keep cleaning RDD, shuffle, and broadcast state. */
private def keepCleaning(): Unit = Utils.tryOrStopSparkContext(sc) {
  while (!stopped) {
    try {
      val reference = Option(referenceQueue.remove(ContextCleaner.REF_QUEUE_POLL_TIMEOUT))
        .map(_.asInstanceOf[CleanupTaskWeakReference])
      // Synchronize here to avoid being interrupted on stop()
      synchronized {
        reference.foreach { ref =>
          logDebug("Got cleaning task " + ref.task)
          referenceBuffer.remove(ref)
          ref.task match {
            case CleanRDD(rddId) =>
              doCleanupRDD(rddId, blocking = blockOnCleanupTasks)
            case CleanShuffle(shuffleId) =>
              doCleanupShuffle(shuffleId, blocking = blockOnShuffleCleanupTasks)
            case CleanBroadcast(broadcastId) =>
              doCleanupBroadcast(broadcastId, blocking = blockOnCleanupTasks)
            case CleanAccum(accId) =>
              doCleanupAccum(accId, blocking = blockOnCleanupTasks)
            case CleanCheckpoint(rddId) =>
              doCleanCheckpoint(rddId)
          }
        }
      }
    } catch {
      case ie: InterruptedException if stopped => // ignore
      case e: Exception => logError("Error in cleaning thread", e)
    }
  }
}
```
The function that actually removes shuffle data is doCleanupShuffle:

```scala
/** Perform shuffle cleanup. */
def doCleanupShuffle(shuffleId: Int, blocking: Boolean): Unit = {
  try {
    logDebug("Cleaning shuffle " + shuffleId)
    mapOutputTrackerMaster.unregisterShuffle(shuffleId)
    shuffleDriverComponents.removeShuffle(shuffleId, blocking)
    listeners.asScala.foreach(_.shuffleCleaned(shuffleId))
    logDebug("Cleaned shuffle " + shuffleId)
  } catch {
    case e: Exception => logError("Error cleaning shuffle " + shuffleId, e)
  }
}
```
We will not go further into the details here.
Once ContextCleaner's start function is called, besides starting the daemon cleaning thread it also schedules a task that proactively calls System.gc() every 30 minutes by default, to make sure garbage collection, and hence reference enqueueing, actually happens:

```scala
/** Start the cleaner. */
def start(): Unit = {
  cleaningThread.setDaemon(true)
  cleaningThread.setName("Spark Context Cleaner")
  cleaningThread.start()
  periodicGCService.scheduleAtFixedRate(() => System.gc(),
    periodicGCInterval, periodicGCInterval, TimeUnit.SECONDS)
}
```

The interval is controlled by the parameter spark.cleaner.periodicGC.interval.
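This closes the loop on the title: if the driver JVM rarely garbage collects, the weak references are never enqueued, cleanup never fires, and stale shuffle files can pile up until the disks are full. On heavily shuffling, long-running jobs one could therefore shorten the interval, for example with `--conf spark.cleaner.periodicGC.interval=10min` at submit time (10min is an illustrative value, not a recommendation).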
This article is reposted from the WeChat public account 浪尖聊大數據. Please contact the 浪尖聊大數據 account for permission to republish.