Analyzing the ART Object Allocation Process — the Allocation Phase (Android 8.1)

In the preparation phase of the allocation process, we traced execution down to Heap's AllocObjectWithAllocator() method.

Next, we analyze the object allocation process itself in detail.

ART Object Allocation Process — the Allocation Phase

The AllocObjectWithAllocator Method

First, let's look at Heap's AllocObjectWithAllocator() method (located in /art/runtime/gc/heap-inl.h):

template <bool kInstrumented, bool kCheckLargeObject, typename PreFenceVisitor>
inline mirror::Object* Heap::AllocObjectWithAllocator(Thread* self,
                                                      ObjPtr<mirror::Class> klass,
                                                      size_t byte_count,
                                                      AllocatorType allocator,
                                                      const PreFenceVisitor& pre_fence_visitor) {
  ...
  // Check for large objects first. AllocLargeObject re-enters this method, so
  // this check is needed to avoid infinite recursion.
  ObjPtr<mirror::Object> obj;
  if (kCheckLargeObject && UNLIKELY(ShouldAllocLargeObject(klass, byte_count))) {
    obj = AllocLargeObject<kInstrumented, PreFenceVisitor>(self, &klass, byte_count,
                                                           pre_fence_visitor);
    if (obj != nullptr) {
      return obj.Ptr();
    } else {
      // There should be an OOM exception, since we are retrying, clear it.
      self->ClearException();
    }
    // If the large object allocation failed, try to use the normal spaces (main space,
    // non moving space). This can happen if there is significant virtual address space
    // fragmentation.
  }
  // bytes allocated for the (individual) object.
  size_t bytes_allocated;
  size_t usable_size;
  size_t new_num_bytes_allocated = 0;
  if (IsTLABAllocator(allocator)) {  // allocating via a TLAB allocator
    byte_count = RoundUp(byte_count, space::BumpPointerSpace::kAlignment);  // align the object size
  }
  // If we have a thread local allocation we don't need to update bytes allocated.
  if (IsTLABAllocator(allocator) && byte_count <= self->TlabSize()) {  // TLAB allocation
    obj = self->AllocTlab(byte_count);
    DCHECK(obj != nullptr) << "AllocTlab can't fail";
    obj->SetClass(klass);
    if (kUseBakerReadBarrier) {
      obj->AssertReadBarrierState();
    }
    bytes_allocated = byte_count;
    usable_size = bytes_allocated;
    pre_fence_visitor(obj, usable_size);
    QuasiAtomic::ThreadFenceForConstructor();
  } else if (
      !kInstrumented && allocator == kAllocatorTypeRosAlloc &&
      (obj = rosalloc_space_->AllocThreadLocal(self, byte_count, &bytes_allocated)) != nullptr &&
      LIKELY(obj != nullptr)) {  // try a thread-local allocation on the RosAllocSpace
    DCHECK(!is_running_on_memory_tool_);
    obj->SetClass(klass);
    if (kUseBakerReadBarrier) {
      obj->AssertReadBarrierState();
    }
    usable_size = bytes_allocated;
    pre_fence_visitor(obj, usable_size);
    QuasiAtomic::ThreadFenceForConstructor();
  } else {
    // bytes allocated that takes bulk thread-local buffer allocations into account.
    size_t bytes_tl_bulk_allocated = 0;
    obj = TryToAllocate<kInstrumented, false>(self, allocator, byte_count, &bytes_allocated,
                                              &usable_size, &bytes_tl_bulk_allocated);
    if (UNLIKELY(obj == nullptr)) {
      // AllocateInternalWithGc can cause thread suspension, if someone instruments the entrypoints
      // or changes the allocator in a suspend point here, we need to retry the allocation.
      obj = AllocateInternalWithGc(self,
                                   allocator,
                                   kInstrumented,
                                   byte_count,
                                   &bytes_allocated,
                                   &usable_size,
                                   &bytes_tl_bulk_allocated, &klass);  // allocate after running a GC
      if (obj == nullptr) {
        // The only way that we can get a null return if there is no pending exception is if the
        // allocator or instrumentation changed.
        if (!self->IsExceptionPending()) {  // the allocator type changed; retry the allocation
          // AllocObject will pick up the new allocator type, and instrumented as true is the safe
          // default.
          return AllocObject</*kInstrumented*/true>(self,
                                                    klass,
                                                    byte_count,
                                                    pre_fence_visitor);
        }
        return nullptr;
      }
    }
    DCHECK_GT(bytes_allocated, 0u);
    DCHECK_GT(usable_size, 0u);
    obj->SetClass(klass);  // set the class of the newly created object
    if (kUseBakerReadBarrier) {
      obj->AssertReadBarrierState();
    }
    if (collector::SemiSpace::kUseRememberedSet && UNLIKELY(allocator == kAllocatorTypeNonMoving)) {
      // (Note this if statement will be constant folded away for the
      // fast-path quick entry points.) Because SetClass() has no write
      // barrier, if a non-moving space allocation, we need a write
      // barrier as the class pointer may point to the bump pointer
      // space (where the class pointer is an "old-to-young" reference,
      // though rare) under the GSS collector with the remembered set
      // enabled. We don't need this for kAllocatorTypeRosAlloc/DlMalloc
      // cases because we don't directly allocate into the main alloc
      // space (besides promotions) under the SS/GSS collector.
      WriteBarrierField(obj, mirror::Object::ClassOffset(), klass);
    }
    pre_fence_visitor(obj, usable_size);
    QuasiAtomic::ThreadFenceForConstructor();
    new_num_bytes_allocated = num_bytes_allocated_.FetchAndAddRelaxed(bytes_tl_bulk_allocated) +
        bytes_tl_bulk_allocated;
    if (bytes_tl_bulk_allocated > 0) {
      // Only trace when we get an increase in the number of bytes allocated. This happens when
      // obtaining a new TLAB and isn't often enough to hurt performance according to golem.
      TraceHeapSize(new_num_bytes_allocated + bytes_tl_bulk_allocated);
    }
  }
  if (kIsDebugBuild && Runtime::Current()->IsStarted()) {
    CHECK_LE(obj->SizeOf(), usable_size);
  }
  // TODO: Deprecate.
  if (kInstrumented) {
    if (Runtime::Current()->HasStatsEnabled()) {
      RuntimeStats* thread_stats = self->GetStats();
      ++thread_stats->allocated_objects;
      thread_stats->allocated_bytes += bytes_allocated;
      RuntimeStats* global_stats = Runtime::Current()->GetStats();
      ++global_stats->allocated_objects;
      global_stats->allocated_bytes += bytes_allocated;
    }
  } else {
    DCHECK(!Runtime::Current()->HasStatsEnabled());
  }
  if (kInstrumented) {
    if (IsAllocTrackingEnabled()) {
      // allocation_records_ is not null since it never becomes null after allocation tracking is
      // enabled.
      DCHECK(allocation_records_ != nullptr);
      allocation_records_->RecordAllocation(self, &obj, bytes_allocated);
    }
    AllocationListener* l = alloc_listener_.LoadSequentiallyConsistent();
    if (l != nullptr) {
      // Same as above. We assume that a listener that was once stored will never be deleted.
      // Otherwise we'd have to perform this under a lock.
      l->ObjectAllocated(self, &obj, bytes_allocated);
    }
  } else {
    DCHECK(!IsAllocTrackingEnabled());
  }
  if (AllocatorHasAllocationStack(allocator)) {
    PushOnAllocationStack(self, &obj);
  }
  if (kInstrumented) {
    if (gc_stress_mode_) {
      CheckGcStressMode(self, &obj);
    }
  } else {
    DCHECK(!gc_stress_mode_);
  }
  // IsGcConcurrent() isn't known at compile time so we can optimize by not checking it for
  // the BumpPointer or TLAB allocators. This is nice since it allows the entire if statement to be
  // optimized out. And for the other allocators, AllocatorMayHaveConcurrentGC is a constant since
  // the allocator_type should be constant propagated.
  if (AllocatorMayHaveConcurrentGC(allocator) && IsGcConcurrent()) {
    CheckConcurrentGC(self, new_num_bytes_allocated, &obj);
  }
  VerifyObject(obj);
  self->VerifyStack();
  return obj.Ptr();
}
Key parameters:
  • allocator specifies the allocator type, i.e., which space the object should be allocated in. AllocatorType is an enum defined as follows:
// Different types of allocators.
enum AllocatorType {
  kAllocatorTypeBumpPointer,  // Use BumpPointer allocator, has entrypoints.
  kAllocatorTypeTLAB,  // Use TLAB allocator, has entrypoints.
  kAllocatorTypeRosAlloc,  // Use RosAlloc allocator, has entrypoints.
  kAllocatorTypeDlMalloc,  // Use dlmalloc allocator, has entrypoints.
  kAllocatorTypeNonMoving,  // Special allocator for non moving objects, doesn't have entrypoints.
  kAllocatorTypeLOS,  // Large object space, also doesn't have entrypoints.
  kAllocatorTypeRegion,
  kAllocatorTypeRegionTLAB,
};
  • pre_fence_visitor is a callback invoked on the current execution path right after the object has been allocated, to perform initialization. For example, immediately after an array object is allocated, the callback sets the array's length; this keeps the array object complete and consistent without taking a lock in a multithreaded environment.
The main work of AllocObjectWithAllocator:
[Figure: 對象內(nèi)存分配流程.png — object allocation flow]
  1. Check whether this is a large object. Large objects are allocated on a separate heap, the Large Object Space. For a large object, AllocLargeObject is called first; it sets the allocator parameter to kAllocatorTypeLOS and then re-invokes AllocObjectWithAllocator.

A large object must satisfy all of the following conditions:

1) The requested size is at least large_object_threshold_ (equal to 3 * kPageSize, i.e., three pages).

2) The object being allocated is a primitive-type array (a byte, int, or boolean array, etc.) or a string.

3) kCheckLargeObject is true.

  2. If the allocator type is kAllocatorTypeTLAB or kAllocatorTypeRegionTLAB, and the requested object size is no larger than the remaining space in the thread's TLAB, the object is allocated in the current ART runtime thread's TLAB (thread-local allocation buffer).

Here Thread's AllocTlab method performs the actual allocation, after which obj->SetClass(klass) sets the class of the resulting object.

  3. If allocator is kAllocatorTypeRosAlloc, try a thread-local allocation on the RosAllocSpace.

  4. Otherwise, call TryToAllocate to allocate the memory.

  5. If step 4 fails, call AllocateInternalWithGc to allocate after running a GC.

  6. If allocation still fails after GC, this allocation has finally failed. There is one exception: if no exception is pending and the allocator type changed during the attempt, the template parameter kInstrumented is switched to true and AllocObject is called to retry the allocation.

  7. Once the allocation has succeeded through the steps above, the new object's SetClass(klass) method is called to set its class.

  8. If kUseRememberedSet is true and the allocation was made in the non-moving space, a write barrier must be issued.

  9. After that, some instrumentation, allocation-tracking, and debugging bookkeeping is performed.

  10. Finally, the newly created object is returned.

Next, let's look at the two important methods in this process: TryToAllocate and AllocateInternalWithGc.

The TryToAllocate Method

TryToAllocate (located in /art/runtime/gc/heap-inl.h):

template <const bool kInstrumented, const bool kGrow>
inline mirror::Object* Heap::TryToAllocate(Thread* self,
                                           AllocatorType allocator_type,
                                           size_t alloc_size,
                                           size_t* bytes_allocated,
                                           size_t* usable_size,
                                           size_t* bytes_tl_bulk_allocated) {
  if (allocator_type != kAllocatorTypeTLAB &&
      allocator_type != kAllocatorTypeRegionTLAB &&
      allocator_type != kAllocatorTypeRosAlloc &&
      UNLIKELY(IsOutOfMemoryOnAllocation(allocator_type, alloc_size, kGrow))) {
    return nullptr;
  }
  mirror::Object* ret;
  switch (allocator_type) {
    case kAllocatorTypeBumpPointer: {
      DCHECK(bump_pointer_space_ != nullptr);
      alloc_size = RoundUp(alloc_size, space::BumpPointerSpace::kAlignment);
      ret = bump_pointer_space_->AllocNonvirtual(alloc_size);
      if (LIKELY(ret != nullptr)) {
        *bytes_allocated = alloc_size;
        *usable_size = alloc_size;
        *bytes_tl_bulk_allocated = alloc_size;
      }
      break;
    }
    case kAllocatorTypeRosAlloc: {
      if (kInstrumented && UNLIKELY(is_running_on_memory_tool_)) {
        // If running on valgrind or asan, we should be using the instrumented path.
        size_t max_bytes_tl_bulk_allocated = rosalloc_space_->MaxBytesBulkAllocatedFor(alloc_size);
        if (UNLIKELY(IsOutOfMemoryOnAllocation(allocator_type,
                                               max_bytes_tl_bulk_allocated,
                                               kGrow))) {
          return nullptr;
        }
        ret = rosalloc_space_->Alloc(self, alloc_size, bytes_allocated, usable_size,
                                     bytes_tl_bulk_allocated);
      } else {
        DCHECK(!is_running_on_memory_tool_);
        size_t max_bytes_tl_bulk_allocated =
            rosalloc_space_->MaxBytesBulkAllocatedForNonvirtual(alloc_size);
        if (UNLIKELY(IsOutOfMemoryOnAllocation(allocator_type,
                                               max_bytes_tl_bulk_allocated,
                                               kGrow))) {
          return nullptr;
        }
        if (!kInstrumented) {
          DCHECK(!rosalloc_space_->CanAllocThreadLocal(self, alloc_size));
        }
        ret = rosalloc_space_->AllocNonvirtual(self,
                                               alloc_size,
                                               bytes_allocated,
                                               usable_size,
                                               bytes_tl_bulk_allocated);
      }
      break;
    }
    case kAllocatorTypeDlMalloc: {
      if (kInstrumented && UNLIKELY(is_running_on_memory_tool_)) {
        // If running on valgrind, we should be using the instrumented path.
        ret = dlmalloc_space_->Alloc(self,
                                     alloc_size,
                                     bytes_allocated,
                                     usable_size,
                                     bytes_tl_bulk_allocated);
      } else {
        DCHECK(!is_running_on_memory_tool_);
        ret = dlmalloc_space_->AllocNonvirtual(self,
                                               alloc_size,
                                               bytes_allocated,
                                               usable_size,
                                               bytes_tl_bulk_allocated);
      }
      break;
    }
    case kAllocatorTypeNonMoving: {
      ret = non_moving_space_->Alloc(self,
                                     alloc_size,
                                     bytes_allocated,
                                     usable_size,
                                     bytes_tl_bulk_allocated);
      break;
    }
    case kAllocatorTypeLOS: {
      ret = large_object_space_->Alloc(self,
                                       alloc_size,
                                       bytes_allocated,
                                       usable_size,
                                       bytes_tl_bulk_allocated);
      // Note that the bump pointer spaces aren't necessarily next to
      // the other continuous spaces like the non-moving alloc space or
      // the zygote space.
      DCHECK(ret == nullptr || large_object_space_->Contains(ret));
      break;
    }
    case kAllocatorTypeRegion: {
      DCHECK(region_space_ != nullptr);
      alloc_size = RoundUp(alloc_size, space::RegionSpace::kAlignment);
      ret = region_space_->AllocNonvirtual<false>(alloc_size,
                                                  bytes_allocated,
                                                  usable_size,
                                                  bytes_tl_bulk_allocated);
      break;
    }
    case kAllocatorTypeTLAB:
      FALLTHROUGH_INTENDED;
    case kAllocatorTypeRegionTLAB: {
      DCHECK_ALIGNED(alloc_size, kObjectAlignment);
      static_assert(space::RegionSpace::kAlignment == space::BumpPointerSpace::kAlignment,
                    "mismatched alignments");
      static_assert(kObjectAlignment == space::BumpPointerSpace::kAlignment,
                    "mismatched alignments");
      if (UNLIKELY(self->TlabSize() < alloc_size)) {
        // kAllocatorTypeTLAB may be the allocator for region space TLAB if the GC is not marking,
        // that is why the allocator is not passed down.
        return AllocWithNewTLAB(self,
                                alloc_size,
                                kGrow,
                                bytes_allocated,
                                usable_size,
                                bytes_tl_bulk_allocated);
      }
      // The allocation can't fail.
      ret = self->AllocTlab(alloc_size);
      DCHECK(ret != nullptr);
      *bytes_allocated = alloc_size;
      *bytes_tl_bulk_allocated = 0;  // Allocated in an existing buffer.
      *usable_size = alloc_size;
      break;
    }
    default: {
      LOG(FATAL) << "Invalid allocator type";
      ret = nullptr;
    }
  }
  return ret;
}
  1. First, if the allocation is not targeting the current ART runtime thread's TLAB (kAllocatorTypeTLAB or kAllocatorTypeRegionTLAB), the allocator is not kAllocatorTypeRosAlloc, and the requested size would exceed the current heap limit (IsOutOfMemoryOnAllocation), the allocation fails and nullptr is returned.

  2. Next, dispatch on the allocator type:

    • kAllocatorTypeBumpPointer: the object is allocated in the Bump Pointer Space by calling AllocNonvirtual on the BumpPointerSpace object referenced by Heap's bump_pointer_space_ member.

    • kAllocatorTypeRosAlloc: the object is allocated in the RosAlloc Space. Depending on kInstrumented and is_running_on_memory_tool_, either Alloc or AllocNonvirtual is called on the RosAllocSpace object referenced by Heap's rosalloc_space_ member.

    • kAllocatorTypeDlMalloc: the object is allocated in the DlMalloc Space by calling Alloc or AllocNonvirtual on the DlMallocSpace object referenced by Heap's dlmalloc_space_ member (same criteria as for kAllocatorTypeRosAlloc).

    • kAllocatorTypeNonMoving: the object is allocated in the Non Moving Space by calling Alloc on the space referenced by Heap's non_moving_space_ member, which points to either a RosAllocSpace or a DlMallocSpace.

    • kAllocatorTypeLOS: the object is allocated in the Large Object Space by calling Alloc on the LargeObjectSpace object referenced by Heap's large_object_space_ member.

    • kAllocatorTypeRegion: the object is allocated in the Region Space by calling AllocNonvirtual on the RegionSpace object referenced by Heap's region_space_ member.

    • kAllocatorTypeTLAB or kAllocatorTypeRegionTLAB: the object is allocated in the current ART runtime thread's TLAB. If the remaining TLAB space is smaller than the requested size, Heap's AllocWithNewTLAB member function requests a fresh buffer and then performs the allocation; if the TLAB still has enough room, the current Thread's AllocTlab member function allocates directly from it.

The AllocateInternalWithGc Method

AllocateInternalWithGc (located in /art/runtime/gc/heap.cc):

mirror::Object* Heap::AllocateInternalWithGc(Thread* self,
                                             AllocatorType allocator,
                                             bool instrumented,
                                             size_t alloc_size,
                                             size_t* bytes_allocated,
                                             size_t* usable_size,
                                             size_t* bytes_tl_bulk_allocated,
                                             ObjPtr<mirror::Class>* klass) {
  bool was_default_allocator = allocator == GetCurrentAllocator();
  // Make sure there is no pending exception since we may need to throw an OOME.
  self->AssertNoPendingException();
  DCHECK(klass != nullptr);
  StackHandleScope<1> hs(self);
  HandleWrapperObjPtr<mirror::Class> h(hs.NewHandleWrapper(klass));
  // The allocation failed. If the GC is running, block until it completes, and then retry the
  // allocation.
  collector::GcType last_gc = WaitForGcToComplete(kGcCauseForAlloc, self);  // if a GC is in progress, wait for it to finish
  // If we were the default allocator but the allocator changed while we were suspended,
  // abort the allocation.
  if ((was_default_allocator && allocator != GetCurrentAllocator()) ||  // if the allocator type changed, abort the allocation
      (!instrumented && EntrypointsInstrumented())) {
    return nullptr;
  }
  if (last_gc != collector::kGcTypeNone) {  // a GC just completed, so retry the allocation directly
    // A GC was in progress and we blocked, retry allocation now that memory has been freed.
    mirror::Object* ptr = TryToAllocate<true, false>(self, allocator, alloc_size, bytes_allocated,
                                                     usable_size, bytes_tl_bulk_allocated);
    if (ptr != nullptr) {
      return ptr;
    }
  }

  collector::GcType tried_type = next_gc_type_;  // the GC type to try next
  const bool gc_ran =
      CollectGarbageInternal(tried_type, kGcCauseForAlloc, false) != collector::kGcTypeNone;  // run a GC without clearing soft references
  if ((was_default_allocator && allocator != GetCurrentAllocator()) ||
      (!instrumented && EntrypointsInstrumented())) {
    return nullptr;
  }
  if (gc_ran) {
    mirror::Object* ptr = TryToAllocate<true, false>(self, allocator, alloc_size, bytes_allocated,
                                                     usable_size, bytes_tl_bulk_allocated);  // retry the allocation via TryToAllocate
    if (ptr != nullptr) {
      return ptr;
    }
  }

  // Loop through our different Gc types and try to Gc until we get enough free memory.
  // Escalate from the weakest GC type to the strongest, retrying the allocation
  // after each collection until enough memory is available.
  for (collector::GcType gc_type : gc_plan_) {
    if (gc_type == tried_type) {
      continue;
    }
    // Attempt to run the collector, if we succeed, re-try the allocation.
    const bool plan_gc_ran =
        CollectGarbageInternal(gc_type, kGcCauseForAlloc, false) != collector::kGcTypeNone;
    if ((was_default_allocator && allocator != GetCurrentAllocator()) ||
        (!instrumented && EntrypointsInstrumented())) {
      return nullptr;
    }
    if (plan_gc_ran) {
      // Did we free sufficient memory for the allocation to succeed?
      mirror::Object* ptr = TryToAllocate<true, false>(self, allocator, alloc_size, bytes_allocated,
                                                       usable_size, bytes_tl_bulk_allocated);
      if (ptr != nullptr) {
        return ptr;
      }
    }
  }
  // Allocations have failed after GCs;  this is an exceptional state.
  // Try harder, growing the heap if necessary.
  mirror::Object* ptr = TryToAllocate<true, true>(self, allocator, alloc_size, bytes_allocated,
                                                  usable_size, bytes_tl_bulk_allocated);
  if (ptr != nullptr) {
    return ptr;
  }
  // Most allocations should have succeeded by now, so the heap is really full, really fragmented,
  // or the requested size is really big. Do another GC, collecting SoftReferences this time. The
  // VM spec requires that all SoftReferences have been collected and cleared before throwing
  // OOME.
  VLOG(gc) << "Forcing collection of SoftReferences for " << PrettySize(alloc_size)
           << " allocation";
  // TODO: Run finalization, but this may cause more allocations to occur.
  // We don't need a WaitForGcToComplete here either.
  DCHECK(!gc_plan_.empty());
  CollectGarbageInternal(gc_plan_.back(), kGcCauseForAlloc, true);
  if ((was_default_allocator && allocator != GetCurrentAllocator()) ||
      (!instrumented && EntrypointsInstrumented())) {
    return nullptr;
  }
  ptr = TryToAllocate<true, true>(self, allocator, alloc_size, bytes_allocated, usable_size,
                                  bytes_tl_bulk_allocated);
  if (ptr == nullptr) {
    const uint64_t current_time = NanoTime();
    switch (allocator) {
      case kAllocatorTypeRosAlloc:
        // Fall-through.
      case kAllocatorTypeDlMalloc: {
        if (use_homogeneous_space_compaction_for_oom_ &&
            current_time - last_time_homogeneous_space_compaction_by_oom_ >
            min_interval_homogeneous_space_compaction_by_oom_) {
          last_time_homogeneous_space_compaction_by_oom_ = current_time;
          HomogeneousSpaceCompactResult result = PerformHomogeneousSpaceCompact();
          // Thread suspension could have occurred.
          if ((was_default_allocator && allocator != GetCurrentAllocator()) ||
              (!instrumented && EntrypointsInstrumented())) {
            return nullptr;
          }
          switch (result) {
            case HomogeneousSpaceCompactResult::kSuccess:
              // If the allocation succeeded, we delayed an oom.
              ptr = TryToAllocate<true, true>(self, allocator, alloc_size, bytes_allocated,
                                              usable_size, bytes_tl_bulk_allocated);
              if (ptr != nullptr) {
                count_delayed_oom_++;
              }
              break;
            case HomogeneousSpaceCompactResult::kErrorReject:
              // Reject due to disabled moving GC.
              break;
            case HomogeneousSpaceCompactResult::kErrorVMShuttingDown:
              // Throw OOM by default.
              break;
            default: {
              UNIMPLEMENTED(FATAL) << "homogeneous space compaction result: "
                  << static_cast<size_t>(result);
              UNREACHABLE();
            }
          }
          // Always print that we ran homogeneous space compaction since this can cause jank.
          VLOG(heap) << "Ran heap homogeneous space compaction, "
                    << " requested defragmentation "
                    << count_requested_homogeneous_space_compaction_.LoadSequentiallyConsistent()
                    << " performed defragmentation "
                    << count_performed_homogeneous_space_compaction_.LoadSequentiallyConsistent()
                    << " ignored homogeneous space compaction "
                    << count_ignored_homogeneous_space_compaction_.LoadSequentiallyConsistent()
                    << " delayed count = "
                    << count_delayed_oom_.LoadSequentiallyConsistent();
        }
        break;
      }
      case kAllocatorTypeNonMoving: {
        if (kUseReadBarrier) {
          // DisableMovingGc() isn't compatible with CC.
          break;
        }
        // Try to transition the heap if the allocation failure was due to the space being full.
        if (!IsOutOfMemoryOnAllocation(allocator, alloc_size, /*grow*/ false)) {
          // If we aren't out of memory then the OOM was probably from the non moving space being
          // full. Attempt to disable compaction and turn the main space into a non moving space.
          DisableMovingGc();
          // Thread suspension could have occurred.
          if ((was_default_allocator && allocator != GetCurrentAllocator()) ||
              (!instrumented && EntrypointsInstrumented())) {
            return nullptr;
          }
          // If we are still a moving GC then something must have caused the transition to fail.
          if (IsMovingGc(collector_type_)) {
            MutexLock mu(self, *gc_complete_lock_);
            // If we couldn't disable moving GC, just throw OOME and return null.
            LOG(WARNING) << "Couldn't disable moving GC with disable GC count "
                         << disable_moving_gc_count_;
          } else {
            LOG(WARNING) << "Disabled moving GC due to the non moving space being full";
            ptr = TryToAllocate<true, true>(self, allocator, alloc_size, bytes_allocated,
                                            usable_size, bytes_tl_bulk_allocated);
          }
        }
        break;
      }
      default: {
        // Do nothing for other allocators.
      }
    }
  }
  // If the allocation hasn't succeeded by this point, throw an OOM error.
  if (ptr == nullptr) {
    ThrowOutOfMemoryError(self, alloc_size, allocator);
  }
  return ptr;
}
[Figure: GC回收過程.png — allocation flow after GC]
  1. First check the current GC state; if a GC is in progress, wait until it finishes.

  2. Check whether the allocator type has changed in the meantime; if it has, the allocation fails.

  3. If last_gc != collector::kGcTypeNone, a GC has just completed, so TryToAllocate can be called immediately to retry the allocation.

  4. Call CollectGarbageInternal to run a garbage collection without clearing soft references.

  5. If the GC ran, call TryToAllocate again to retry the allocation.

  6. Walk the GC plan from the weakest GC type to the strongest, retrying the allocation after each collection until enough memory is available. This may call TryToAllocate several times.

Note: none of the allocations above allow the heap to grow.

  7. Allocate with heap growth allowed, by calling TryToAllocate with the template parameter kGrow set to true.

  8. If that still fails, run one more GC; this time soft references are cleared as well.

  9. Again allocate with heap growth allowed, calling TryToAllocate with kGrow set to true.

  10. If that fails, handle the failure according to the allocator type:

    • For kAllocatorTypeRosAlloc and kAllocatorTypeDlMalloc: if homogeneous space compaction on OOM is enabled and the time since the last such compaction exceeds the minimum allowed interval, PerformHomogeneousSpaceCompact is called to compact the space. If the compaction succeeds, TryToAllocate makes one final attempt.

    • For kAllocatorTypeNonMoving: if the heap as a whole is not actually out of memory (i.e., the failure came from the non-moving space being full), try to disable moving GC and turn the main space into a non-moving space. If that succeeds, TryToAllocate makes one final attempt.

  11. If all of the above steps fail, an OutOfMemoryError is thrown.

Summary

The object allocation process

  1. AllocObjectWithAllocator performs the object allocation work.

  2. First it checks for large objects, calling AllocLargeObject to allocate those.

  3. If the TLAB conditions are met, the object is allocated in the current ART runtime thread's TLAB.

  4. If allocator is kAllocatorTypeRosAlloc, a thread-local allocation on the RosAllocSpace is attempted; otherwise, TryToAllocate performs the allocation.

  5. On failure, AllocateInternalWithGc allocates after running a GC.

  6. If allocation still fails after GC, this allocation has finally failed. There is one exception: if no exception is pending and the allocator type changed, the template parameter kInstrumented is switched to true and AllocObject retries the allocation.

  7. After a successful allocation, the new object's SetClass(klass) method is called to set its class.

  8. Then some instrumentation, allocation-tracking, and debugging bookkeeping is performed.

  9. Finally, the newly created object is returned.

嘗試GC后的內(nèi)存分配過程

  1. First check the current GC state; if a GC is in progress, wait until it finishes.

  2. If last_gc != collector::kGcTypeNone, a GC has just completed, so TryToAllocate can be called immediately to retry the allocation.

  3. Call CollectGarbageInternal to run a GC without clearing soft references. If the GC ran, call TryToAllocate again to retry the allocation.

  4. Walk the GC plan from the weakest GC type to the strongest, retrying the allocation after each collection until enough memory is available. This may call TryToAllocate several times.

  5. Allocate with heap growth allowed.

  6. If that still fails, run one more GC; this time soft references are cleared as well.

  7. Again allocate with heap growth allowed.

  8. If that fails, handle the failure according to the allocator type — for example, perform homogeneous space compaction or change allocation strategy — and call TryToAllocate for one final attempt.

  9. If all of the above steps fail, an OutOfMemoryError is thrown.

Copyright belongs to the author. Please contact the author for reprints or content collaboration.