ART: ClassLinker


A few words up front:

Why study the ClassLinker? Because it is the thread that ties together how ART runs Java code.

Let's first get to know the ClassLinker's key data structures; its workflow revolves around these three:

// class.h

class Class : public Object {
public:
  // Class Status
  //
  // kStatusNotReady: If a Class cannot be found in the class table by
  // FindClass, it allocates a new one with AllocClass in the
  // kStatusNotReady state and calls LoadClass. Note if it does find a
  // class, it may not be kStatusResolved and it will try to push it
  // forward toward kStatusResolved.
  //
  // kStatusIdx: LoadClass populates the Class with information from
  // the DexFile, moving the status to kStatusIdx, indicating that the
  // Class value in super_class_ has not been populated. The new Class
  // can then be inserted into the classes table.
  //
  // kStatusLoaded: After taking a lock on Class, the ClassLinker will
  // attempt to move a kStatusIdx class forward to kStatusLoaded by
  // using ResolveClass to initialize the super_class_ and ensuring the
  // interfaces are resolved.
  //
  // kStatusResolved: Still holding the lock on Class, the ClassLinker
  // shows linking is complete and fields of the Class populated by making
  // it kStatusResolved. Java allows circularities of the form where a super
  // class has a field that is of the type of the sub class. We need to be able
  // to fully resolve super classes while resolving types for fields.
  //
  // kStatusRetryVerificationAtRuntime: The verifier sets a class to
  // this state if it encounters a soft failure at compile time. This
  // often happens when there are unresolved classes in other dex
  // files, and this status marks a class as needing to be verified
  // again at runtime.
  //
  // TODO: Explain the other states
  enum Status {
    kStatusError = -1,
    kStatusNotReady = 0,
    kStatusIdx = 1,  // Loaded, DEX idx in super_class_type_idx_ and interfaces_type_idx_.
    kStatusLoaded = 2,  // DEX idx values resolved.
    kStatusResolved = 3,  // Part of linking.
    kStatusVerifying = 4,  // In the process of being verified.
    kStatusRetryVerificationAtRuntime = 5,  // Compile time verification failed, retry at runtime.
    kStatusVerifyingAtRuntime = 6,  // Retrying verification at runtime.
    kStatusVerified = 7,  // Logically part of linking; done pre-init.
    kStatusInitializing = 8,  // Class init in progress.
    kStatusInitialized = 9,  // Ready to go.
  };
//...

private:
// 'Class' Object Fields
  // Order governed by java field ordering. See art::ClassLinker::LinkFields.

  // Defining class loader, or null for the "bootstrap" system loader.
  HeapReference<ClassLoader> class_loader_;

  // For array classes, the component class object for instanceof/checkcast
  // (for String[][][], this will be String[][]). null for non-array classes.
  HeapReference<Class> component_type_;

  // DexCache of resolved constant pool entries (will be null for classes generated by the
  // runtime such as arrays and primitive classes).
  HeapReference<DexCache> dex_cache_;

  // Extraneous class data that is not always needed. This field is allocated lazily and may
  // only be set with 'this' locked. This is synchronized on 'this'.
  // TODO(allight) We should probably synchronize it on something external or handle allocation in
  // some other (safe) way to prevent possible deadlocks.
  HeapReference<ClassExt> ext_data_;

  // The interface table (iftable_) contains pairs of an interface class and an array of the
  // interface methods. There is one pair per interface supported by this class.  That means one
  // pair for each interface we support directly, indirectly via superclass, or indirectly via a
  // superinterface.  This will be null if neither we nor our superclass implement any interfaces.
  //
  // Why we need this: given "class Foo implements Face", declare "Face faceObj = new Foo()".
  // Invoke faceObj.blah(), where "blah" is part of the Face interface.  We can't easily use a
  // single vtable.
  //
  // For every interface a concrete class implements, we create an array of the concrete vtable_
  // methods for the methods in the interface.
  HeapReference<IfTable> iftable_;

  // Descriptor for the class such as "java.lang.Class" or "[C". Lazily initialized by ComputeName
  HeapReference<String> name_;

  // The superclass, or null if this is java.lang.Object or a primitive type.
  //
  // Note that interfaces have java.lang.Object as their
  // superclass. This doesn't match the expectations in JNI
  // GetSuperClass or java.lang.Class.getSuperClass() which need to
  // check for interfaces and return null.
  HeapReference<Class> super_class_;

  // Virtual method table (vtable), for use by "invoke-virtual".  The vtable from the superclass is
  // copied in, and virtual methods from our class either replace those from the super or are
  // appended. For abstract classes, methods may be created in the vtable that aren't in
  // virtual_methods_ for miranda methods.
  HeapReference<PointerArray> vtable_;

  // instance fields
  //
  // These describe the layout of the contents of an Object.
  // Note that only the fields directly declared by this class are
  // listed in ifields; fields declared by a superclass are listed in
  // the superclass's Class.ifields.
  //
  // ArtFields are allocated as a length prefixed ArtField array, and not an array of pointers to
  // ArtFields.
  uint64_t ifields_;

  // Pointer to an ArtMethod length-prefixed array. All the methods where this class is the place
  // where they are logically defined. This includes all private, static, final and virtual methods
  // as well as inherited default methods and miranda methods.
  //
  // The slice methods_ [0, virtual_methods_offset_) are the direct (static, private, init) methods
  // declared by this class.
  //
  // The slice methods_ [virtual_methods_offset_, copied_methods_offset_) are the virtual methods
  // declared by this class.
  //
  // The slice methods_ [copied_methods_offset_, |methods_|) are the methods that are copied from
  // interfaces such as miranda or default methods. These are copied for resolution purposes as this
  // class is where they are (logically) declared as far as the virtual dispatch is concerned.
  //
  // Note that this field is used by the native debugger as the unique identifier for the type.
  uint64_t methods_;

  // Static fields length-prefixed array.
  uint64_t sfields_;

  // Access flags; low 16 bits are defined by VM spec.
  uint32_t access_flags_;

  // Class flags to help speed up visiting object references.
  uint32_t class_flags_;

  // Total size of the Class instance; used when allocating storage on gc heap.
  // See also object_size_.
  uint32_t class_size_;

  // Tid used to check for recursive <clinit> invocation.
  pid_t clinit_thread_id_;
  static_assert(sizeof(pid_t) == sizeof(int32_t), "java.lang.Class.clinitThreadId size check");

  // ClassDef index in dex file, -1 if no class definition such as an array.
  // TODO: really 16bits
  int32_t dex_class_def_idx_;

  // Type index in dex file.
  // TODO: really 16bits
  int32_t dex_type_idx_;

  // Number of instance fields that are object refs.
  uint32_t num_reference_instance_fields_;

  // Number of static fields that are object refs,
  uint32_t num_reference_static_fields_;

  // Total object size; used when allocating storage on gc heap.
  // (For interfaces and abstract classes this will be zero.)
  // See also class_size_.
  uint32_t object_size_;

  // Aligned object size for allocation fast path. The value is max uint32_t if the object is
  // uninitialized or finalizable. Not currently used for variable sized objects.
  uint32_t object_size_alloc_fast_path_;

  // The lower 16 bits contains a Primitive::Type value. The upper 16
  // bits contains the size shift of the primitive type.
  uint32_t primitive_type_;

  // Bitmap of offsets of ifields.
  uint32_t reference_instance_offsets_;

  // See the real definition in subtype_check_bits_and_status.h
  // typeof(status_) is actually SubtypeCheckBitsAndStatus.
  uint32_t status_;

  // The offset of the first virtual method that is copied from an interface. This includes miranda,
  // default, and default-conflict methods. Having a hard limit of ((2 << 16) - 1) for methods
  // defined on a single class is well established in Java so we will use only uint16_t's here.
  uint16_t copied_methods_offset_;

  // The offset of the first declared virtual methods in the methods_ array.
  uint16_t virtual_methods_offset_;

  // TODO: ?
  // initiating class loader list
  // NOTE: for classes with low serialNumber, these are unused, and the
  // values are kept in a table in gDvm.
  // InitiatingLoaderList initiating_loader_list_;

  // The following data exist in real class objects.
  // Embedded Imtable, for class object that's not an interface, fixed size.
  // ImTableEntry embedded_imtable_[0];
  // Embedded Vtable, for class object that's not an interface, variable size.
  // VTableEntry embedded_vtable_[0];
  // Static fields, variable size.
  // uint32_t fields_[0];

//... 
};

------------------------------------------------------------------------------------
------------------------------------------------------------------------------------

// C++ mirror of java.lang.reflect.ArtField
class MANAGED ArtField : public Object {
//...
  private:
  // Field order required by test "ValidateFieldOrderOfJavaCppUnionClasses".
  // The class we are a part of
  Class* declaring_class_;

  uint32_t access_flags_;

  // Dex cache index of field id
  uint32_t field_dex_idx_;

  // Offset of field within an instance or in the Class' static fields
  uint32_t offset_;

  static Class* java_lang_reflect_ArtField_;

//...
};

------------------------------------------------------------------------------------
------------------------------------------------------------------------------------

// C++ mirror of java.lang.reflect.Method and java.lang.reflect.Constructor
class MANAGED ArtMethod : public Object {
//....
//
protected:
  // Field order required by test "ValidateFieldOrderOfJavaCppUnionClasses".
  // The class we are a part of
  Class* declaring_class_;

  // short cuts to declaring_class_->dex_cache_ member for fast compiled code access
  ObjectArray<StaticStorageBase>* dex_cache_initialized_static_storage_;

  // short cuts to declaring_class_->dex_cache_ member for fast compiled code access
  ObjectArray<ArtMethod>* dex_cache_resolved_methods_;

  // short cuts to declaring_class_->dex_cache_ member for fast compiled code access
  ObjectArray<Class>* dex_cache_resolved_types_;

  // short cuts to declaring_class_->dex_cache_ member for fast compiled code access
  ObjectArray<String>* dex_cache_strings_;

  // Access flags; low 16 bits are defined by spec.
  uint32_t access_flags_;

  // Offset to the CodeItem.
  uint32_t code_item_offset_;

  // Architecture-dependent register spill mask
  uint32_t core_spill_mask_;

  // Compiled code associated with this method for callers from managed code.
  // May be compiled managed code or a bridge for invoking a native method.
  // TODO: Break apart this into portable and quick.
  const void* entry_point_from_compiled_code_;

  // Called by the interpreter to execute this method.
  EntryPointFromInterpreter* entry_point_from_interpreter_;

  // Architecture-dependent register spill mask
  uint32_t fp_spill_mask_;

  // Total size in bytes of the frame
  size_t frame_size_in_bytes_;

  // Garbage collection map of native PC offsets (quick) or dex PCs (portable) to reference bitmaps.
  const uint8_t* gc_map_;

  // Mapping from native pc to dex pc
  const uint32_t* mapping_table_;

  // Index into method_ids of the dex file associated with this method
  uint32_t method_dex_index_;

  // For concrete virtual methods, this is the offset of the method in Class::vtable_.
  //
  // For abstract methods in an interface class, this is the offset of the method in
  // "iftable_->Get(n)->GetMethodArray()".
  //
  // For static and direct methods this is the index in the direct methods table.
  uint32_t method_index_;

  // The target native method registered with this method
  const void* native_method_;

  // When a register is promoted into a register, the spill mask holds which registers hold dex
  // registers. The first promoted register's corresponding dex register is vmap_table_[1], the Nth
  // is vmap_table_[N]. vmap_table_[0] holds the length of the table.
  const uint16_t* vmap_table_;

  static Class* java_lang_reflect_ArtMethod_;

//....
};

// Classes shared with the managed side of the world need to be packed so that they don't have
// extra platform specific padding.
#define MANAGED PACKED(4)
#define PACKED(x) __attribute__ ((__aligned__(x), __packed__))


https://www.jianshu.com/p/d0b0c1bfbcf6
1. SetupClass
void ClassLinker::SetupClass(const DexFile& dex_file,
                             const DexFile::ClassDef& dex_class_def,
                             Handle<mirror::Class> klass,
                             mirror::ClassLoader* class_loader) {
  const char* descriptor = dex_file.GetClassDescriptor(dex_class_def);
  // SetClass is defined on Class's base class mirror::Object. A Class is itself an Object,
  // so here its class type is set to the Class object corresponding to "java/lang/Class".
  klass->SetClass(GetClassRoot(kJavaLangClass));
  uint32_t access_flags = dex_class_def.GetJavaAccessFlags();
  // Set the access flags and the defining class loader.
  klass->SetAccessFlags(access_flags);
  klass->SetClassLoader(class_loader);
  // Move klass's status to kStatusIdx.
  mirror::Class::SetStatus(klass, mirror::Class::kStatusIdx, nullptr);
  // Set klass's dex_class_def_idx_ and dex_type_idx_ members.
  klass->SetDexClassDefIndex(dex_file.GetIndexForClassDef(dex_class_def));
  klass->SetDexTypeIndex(dex_class_def.class_idx_);
}

2. LoadClass

LoadClass() first extracts the class_data from the dex file, then calls LoadClassMembers() to load the static and instance fields as well as the direct and virtual methods into klass.
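
For orientation, here is a rough sketch of the call flow around LoadClass (my summary of the sources, not a verbatim quote):

// ClassLinker::LoadClass(...)                      // extract class_data from the dex file
//   -> ClassLinker::LoadClassMembers(...)          // walk the class_data_item
//        - allocate the length-prefixed ArtField arrays (sfields_, ifields_)
//        - allocate the ArtMethod array (methods_)
//        - for each method: LoadMethod(...) fills in the ArtMethod,
//          then LinkCode(...) selects its entrypoints (analyzed below)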

The part most worth examining here is LinkCode():

static void LinkCode(ClassLinker* class_linker,
                     ArtMethod* method,
                     const OatFile::OatClass* oat_class,
                     uint32_t class_def_method_index) REQUIRES_SHARED(Locks::mutator_lock_) {
  ScopedAssertNoThreadSuspension sants(__FUNCTION__);
  Runtime* const runtime = Runtime::Current();
  if (runtime->IsAotCompiler()) {  // When running as the AOT compiler (e.g. install-time dex2oat, which since Android 5.0 compiles the whole app), just return
    // The following code only applies to a non-compiler runtime.
    return;
  }

  // Method shouldn't have already been linked.
  DCHECK(method->GetEntryPointFromQuickCompiledCode() == nullptr);

  if (!method->IsInvokable()) {
    EnsureThrowsInvocationError(class_linker, method);
    return;
  }

  const void* quick_code = nullptr;
  if (oat_class != nullptr) {
    // Every kind of method should at least get an invoke stub from the oat_method.
    // non-abstract methods also get their code pointers.
    const OatFile::OatMethod oat_method = oat_class->GetOatMethod(class_def_method_index);
    quick_code = oat_method.GetQuickCode();
  }

  bool enter_interpreter = class_linker->ShouldUseInterpreterEntrypoint(method, quick_code);

  // Note: this mimics the logic in image_writer.cc that installs the resolution
  // stub only if we have compiled code and the method needs a class initialization
  // check.
  if (quick_code == nullptr || Runtime::SimulatorMode()) {
    //------------- 1. SetEntryPointFromQuickCompiledCode (analyzed below): writes the ArtMethod's entry_point_from_quick_compiled_code_ field
    //------------- 4. GetQuickGenericJniStub() (analyzed below): the generic JNI trampoline, which ultimately fetches the native code from the ArtMethod's data_ field
    method->SetEntryPointFromQuickCompiledCode(
        method->IsNative() ? GetQuickGenericJniStub() : GetQuickToInterpreterBridge());
  } else if (enter_interpreter) {
    method->SetEntryPointFromQuickCompiledCode(GetQuickToInterpreterBridge());
  } else if (NeedsClinitCheckBeforeCall(method)) {
    DCHECK(!method->GetDeclaringClass()->IsVisiblyInitialized());  // Actually ClassStatus::Idx.
    // If we do have code but the method needs a class initialization check before calling
    // that code, install the resolution stub that will perform the check.
    // It will be replaced by the proper entry point by ClassLinker::FixupStaticTrampolines
    // after initializing class (see ClassLinker::InitializeClass method).
    method->SetEntryPointFromQuickCompiledCode(GetQuickResolutionStub());
  } else {
    method->SetEntryPointFromQuickCompiledCode(quick_code);
  }

  if (method->IsNative()) {
    // Set up the dlsym lookup stub. Do not go through `UnregisterNative()`
    // as the extra processing for @CriticalNative is not needed yet.
    
    //-------------- 2. SetEntryPointFromJni (analyzed below): writes the ArtMethod's data_ field
    method->SetEntryPointFromJni(
        // @CriticalNative JNI methods are covered later; here we only look at ordinary native methods.
        //-------------- 3. GetJniDlsymLookupStub(): looks up the native implementation of a Java native method (using the JNI name-mangling rules)

        method->IsCriticalNative() ? GetJniDlsymLookupCriticalStub() : GetJniDlsymLookupStub());

    if (enter_interpreter || quick_code == nullptr) {
      // We have a native method here without code. Then it should have the generic JNI
      // trampoline as entrypoint.
      // TODO: this doesn't handle all the cases where trampolines may be installed.
      DCHECK(class_linker->IsQuickGenericJniStub(method->GetEntryPointFromQuickCompiledCode()));
    }
  }
}


// Analysis 1. To sum up: SetEntryPointFromQuickCompiledCode writes the ArtMethod's entry_point_from_quick_compiled_code_ field
// ArtMethod.h:
  void SetEntryPointFromQuickCompiledCode(const void* entry_point_from_quick_compiled_code)
      REQUIRES_SHARED(Locks::mutator_lock_) {
    SetEntryPointFromQuickCompiledCodePtrSize(entry_point_from_quick_compiled_code,
                                              kRuntimePointerSize);
  }
  ALWAYS_INLINE void SetEntryPointFromQuickCompiledCodePtrSize(
      const void* entry_point_from_quick_compiled_code, PointerSize pointer_size)
      REQUIRES_SHARED(Locks::mutator_lock_) {
    SetNativePointer(EntryPointFromQuickCompiledCodeOffset(pointer_size),
                     entry_point_from_quick_compiled_code,
                     pointer_size);
    // We might want to invoke compiled code, so don't use the fast path.
    ClearFastInterpreterToInterpreterInvokeFlag();
  }

  template<typename T>
  ALWAYS_INLINE void SetNativePointer(MemberOffset offset, T new_value, PointerSize pointer_size)
      REQUIRES_SHARED(Locks::mutator_lock_) {
    static_assert(std::is_pointer<T>::value, "T must be a pointer type");
    const auto addr = reinterpret_cast<uintptr_t>(this) + offset.Uint32Value();
    if (pointer_size == PointerSize::k32) {
      uintptr_t ptr = reinterpret_cast<uintptr_t>(new_value);
      *reinterpret_cast<uint32_t*>(addr) = dchecked_integral_cast<uint32_t>(ptr);
    } else {
      *reinterpret_cast<uint64_t*>(addr) = reinterpret_cast<uintptr_t>(new_value);
    }
  }

  static constexpr MemberOffset EntryPointFromQuickCompiledCodeOffset(PointerSize pointer_size) {
    return MemberOffset(PtrSizedFieldsOffset(pointer_size) + OFFSETOF_MEMBER(
        PtrSizedFields, entry_point_from_quick_compiled_code_) / sizeof(void*)
            * static_cast<size_t>(pointer_size));
  }

// Analysis 2. SetEntryPointFromJni writes the ArtMethod's data_ field

  void SetEntryPointFromJni(const void* entrypoint)
      REQUIRES_SHARED(Locks::mutator_lock_) {
    // The resolution method also has a JNI entrypoint for direct calls from
    // compiled code to the JNI dlsym lookup stub for @CriticalNative.
    DCHECK(IsNative() || IsRuntimeMethod());
    SetEntryPointFromJniPtrSize(entrypoint, kRuntimePointerSize);
  }

  ALWAYS_INLINE void SetEntryPointFromJniPtrSize(const void* entrypoint, PointerSize pointer_size)
      REQUIRES_SHARED(Locks::mutator_lock_) {
    SetDataPtrSize(entrypoint, pointer_size);
  }

  ALWAYS_INLINE void SetDataPtrSize(const void* data, PointerSize pointer_size)
      REQUIRES_SHARED(Locks::mutator_lock_) {
    DCHECK(IsImagePointerSize(pointer_size));
    SetNativePointer(DataOffset(pointer_size), data, pointer_size);
  }

   static constexpr MemberOffset DataOffset(PointerSize pointer_size) {
    return MemberOffset(PtrSizedFieldsOffset(pointer_size) + OFFSETOF_MEMBER(
        PtrSizedFields, data_) / sizeof(void*) * static_cast<size_t>(pointer_size));
  }

// Analysis 3. GetJniDlsymLookupStub()

static inline const void* GetJniDlsymLookupStub() {
  return reinterpret_cast<const void*>(art_jni_dlsym_lookup_stub);
}

//art/runtime/arch/arm64/jni_entrypoints_arm64.S
    .extern artFindNativeMethod
    .extern artFindNativeMethodRunnable

ENTRY art_jni_dlsym_lookup_stub
    // spill regs.
    stp   x29, x30, [sp, #-16]!
    .cfi_adjust_cfa_offset 16
    .cfi_rel_offset x29, 0
    .cfi_rel_offset x30, 8
    mov   x29, sp
    stp   d6, d7,   [sp, #-16]!
    .cfi_adjust_cfa_offset 16
    stp   d4, d5,   [sp, #-16]!
    .cfi_adjust_cfa_offset 16
    stp   d2, d3,   [sp, #-16]!
    .cfi_adjust_cfa_offset 16
    stp   d0, d1,   [sp, #-16]!
    .cfi_adjust_cfa_offset 16
    stp   x6, x7,   [sp, #-16]!
    .cfi_adjust_cfa_offset 16
    stp   x4, x5,   [sp, #-16]!
    .cfi_adjust_cfa_offset 16
    stp   x2, x3,   [sp, #-16]!
    .cfi_adjust_cfa_offset 16
    stp   x0, x1,   [sp, #-16]!
    .cfi_adjust_cfa_offset 16

    mov x0, xSELF   // pass Thread::Current()
    // Call artFindNativeMethod() for normal native and artFindNativeMethodRunnable()
    // for @FastNative or @CriticalNative.
    ldr   xIP0, [x0, #THREAD_TOP_QUICK_FRAME_OFFSET]      // uintptr_t tagged_quick_frame
    bic   xIP0, xIP0, #1                                  // ArtMethod** sp
    ldr   xIP0, [xIP0]                                    // ArtMethod* method
    ldr   xIP0, [xIP0, #ART_METHOD_ACCESS_FLAGS_OFFSET]   // uint32_t access_flags
    mov   xIP1, #(ACCESS_FLAGS_METHOD_IS_FAST_NATIVE | ACCESS_FLAGS_METHOD_IS_CRITICAL_NATIVE)
    tst   xIP0, xIP1
    b.ne  .Llookup_stub_fast_or_critical_native
    bl    artFindNativeMethod  //-------------------------- key point: call artFindNativeMethod to look up the native implementation
    b     .Llookup_stub_continue
.Llookup_stub_fast_or_critical_native:
    bl    artFindNativeMethodRunnable
.Llookup_stub_continue:
    mov   x17, x0    // store result in scratch reg.

    // load spill regs.
    ldp   x0, x1,   [sp], #16
    .cfi_adjust_cfa_offset -16
    ldp   x2, x3,   [sp], #16
    .cfi_adjust_cfa_offset -16
    ldp   x4, x5,   [sp], #16
    .cfi_adjust_cfa_offset -16
    ldp   x6, x7,   [sp], #16
    .cfi_adjust_cfa_offset -16
    ldp   d0, d1,   [sp], #16
    .cfi_adjust_cfa_offset -16
    ldp   d2, d3,   [sp], #16
    .cfi_adjust_cfa_offset -16
    ldp   d4, d5,   [sp], #16
    .cfi_adjust_cfa_offset -16
    ldp   d6, d7,   [sp], #16
    .cfi_adjust_cfa_offset -16
    ldp   x29, x30, [sp], #16
    .cfi_adjust_cfa_offset -16
    .cfi_restore x29
    .cfi_restore x30

    cbz   x17, 1f   // is method code null ?
    br    x17       // if non-null, tail call to method's code.

1:
    ret             // restore regs and return to caller to handle exception.
END art_jni_dlsym_lookup_stub


//art/runtime/entrypoints/jni/jni_entrypoints.cc:


// Used by the JNI dlsym stub to find the native method to invoke if none is registered.
extern "C" const void* artFindNativeMethod(Thread* self) {
  DCHECK_EQ(self, Thread::Current());
  Locks::mutator_lock_->AssertNotHeld(self);  // We come here as Native.
  ScopedObjectAccess soa(self);
  return artFindNativeMethodRunnable(self);  //----------------------- delegates to artFindNativeMethodRunnable
}

// Used by the JNI dlsym stub to find the native method to invoke if none is registered.
extern "C" const void* artFindNativeMethodRunnable(Thread* self)
    REQUIRES_SHARED(Locks::mutator_lock_) {
  Locks::mutator_lock_->AssertSharedHeld(self);  // We come here as Runnable.
  uint32_t dex_pc;
  ArtMethod* method = self->GetCurrentMethod(&dex_pc);
  DCHECK(method != nullptr);
  ClassLinker* class_linker = Runtime::Current()->GetClassLinker();

  if (!method->IsNative()) {
    // We're coming from compiled managed code and the `method` we see here is the caller.
    // Resolve target @CriticalNative method for a direct call from compiled managed code.
    uint32_t method_idx = GetInvokeStaticMethodIndex(method, dex_pc);
    ArtMethod* target_method = class_linker->ResolveMethod<ClassLinker::ResolveMode::kNoChecks>(
        self, method_idx, method, kStatic);
    if (target_method == nullptr) {
      self->AssertPendingException();
      return nullptr;
    }
    DCHECK(target_method->IsCriticalNative());
    MaybeUpdateBssMethodEntry(target_method, MethodReference(method->GetDexFile(), method_idx));

    // These calls do not have an explicit class initialization check, so do the check now.
    // (When going through the stub or GenericJNI, the check was already done.)
    DCHECK(NeedsClinitCheckBeforeCall(target_method));
    ObjPtr<mirror::Class> declaring_class = target_method->GetDeclaringClass();
    if (UNLIKELY(!declaring_class->IsVisiblyInitialized())) {
      StackHandleScope<1> hs(self);
      Handle<mirror::Class> h_class(hs.NewHandle(declaring_class));
      if (!class_linker->EnsureInitialized(self, h_class, true, true)) {
        DCHECK(self->IsExceptionPending()) << method->PrettyMethod();
        return nullptr;
      }
    }

    // Replace the runtime method on the stack with the target method.
    DCHECK(!self->GetManagedStack()->GetTopQuickFrameTag());
    ArtMethod** sp = self->GetManagedStack()->GetTopQuickFrameKnownNotTagged();
    DCHECK(*sp == Runtime::Current()->GetCalleeSaveMethod(CalleeSaveType::kSaveRefsAndArgs));
    *sp = target_method;
    self->SetTopOfStackTagged(sp);  // Fake GenericJNI frame.

    // Continue with the target method.
    method = target_method;
  }
  DCHECK(method == self->GetCurrentMethod(/*dex_pc=*/ nullptr));

  // Check whether we already have a registered native code.
  // For @CriticalNative it may not be stored in the ArtMethod as a JNI entrypoint if the class
  // was not visibly initialized yet. Do this check also for @FastNative and normal native for
  // consistency; though success would mean that another thread raced to do this lookup.
  const void* native_code = class_linker->GetRegisteredNative(self, method);
  if (native_code != nullptr) {
    return native_code;
  }

  // Lookup symbol address for method, on failure we'll return null with an exception set,
  // otherwise we return the address of the method we found.
  JavaVMExt* vm = down_cast<JNIEnvExt*>(self->GetJniEnv())->GetVm();
  native_code = vm->FindCodeForNativeMethod(method);  //--------------------------- key point
  if (native_code == nullptr) {
    self->AssertPendingException();
    return nullptr;
  }

  // Register the code. This usually prevents future calls from coming to this function again.
  // We can still come here if the ClassLinker cannot set the entrypoint in the ArtMethod,
  // i.e. for @CriticalNative methods with the declaring class not visibly initialized.
  return class_linker->RegisterNative(self, method, native_code);
}


void* JavaVMExt::FindCodeForNativeMethod(ArtMethod* m) {
  CHECK(m->IsNative());
  ObjPtr<mirror::Class> c = m->GetDeclaringClass();
  // If this is a static method, it could be called before the class has been initialized.
  CHECK(c->IsInitializing()) << c->GetStatus() << " " << m->PrettyMethod();
  std::string detail;
  Thread* const self = Thread::Current();
  void* native_method = libraries_->FindNativeMethod(self, m, detail);  //---------------------- key point
  if (native_method == nullptr) {
    // Lookup JNI native methods from native TI Agent libraries. See runtime/ti/agent.h for more
    // information. Agent libraries are searched for native methods after all jni libraries.
    native_method = FindCodeForNativeMethodInAgents(m);
  }
  // Throwing can cause libraries_lock to be reacquired.
  if (native_method == nullptr) {
    LOG(ERROR) << detail;
    self->ThrowNewException("Ljava/lang/UnsatisfiedLinkError;", detail.c_str());
  }
  return native_method;
}


 // See section 11.3 "Linking Native Methods" of the JNI spec.
  void* FindNativeMethod(Thread* self, ArtMethod* m, std::string& detail)
      REQUIRES(!Locks::jni_libraries_lock_)
      REQUIRES_SHARED(Locks::mutator_lock_) {
    std::string jni_short_name(m->JniShortName());  //--------------- the lookup is keyed on this name...
    std::string jni_long_name(m->JniLongName());    //--------------- ...and on this one
    const ObjPtr<mirror::ClassLoader> declaring_class_loader =
        m->GetDeclaringClass()->GetClassLoader();
    ScopedObjectAccessUnchecked soa(Thread::Current());
    void* const declaring_class_loader_allocator =
        Runtime::Current()->GetClassLinker()->GetAllocatorForClassLoader(declaring_class_loader);
    CHECK(declaring_class_loader_allocator != nullptr);
    // TODO: Avoid calling GetShorty here to prevent dirtying dex pages?
    const char* shorty = m->GetShorty();
    {
      // Go to suspended since dlsym may block for a long time if other threads are using dlopen.
      ScopedThreadSuspension sts(self, kNative);
      void* native_code = FindNativeMethodInternal(self,
                                                   declaring_class_loader_allocator,
                                                   shorty,
                                                   jni_short_name,
                                                   jni_long_name);  // key point
      if (native_code != nullptr) {
        return native_code;
      }
    }
    detail += "No implementation found for ";
    detail += m->PrettyMethod();
    detail += " (tried " + jni_short_name + " and " + jni_long_name + ")";
    return nullptr;
  }


std::string GetJniShortName(const std::string& class_descriptor, const std::string& method) {
  // Remove the leading 'L' and trailing ';'...
  std::string class_name(class_descriptor);
  CHECK_EQ(class_name[0], 'L') << class_name;
  CHECK_EQ(class_name[class_name.size() - 1], ';') << class_name;
  class_name.erase(0, 1);
  class_name.erase(class_name.size() - 1, 1);

  std::string short_name;
  short_name += "Java_";
  short_name += MangleForJni(class_name);
  short_name += "_";
  short_name += MangleForJni(method);
  return short_name;
}
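
As a quick illustration (hypothetical class and method, not taken from the code above): for a Java method declared as native int add(int a, int b) in class com.example.Foo, the lookup tries the following exported symbols in each eligible library, first the short name, then the long name, which appends the mangled argument signature:

#include <jni.h>

// jni_short_name: "Java_" + mangled class name + "_" + mangled method name
extern "C" JNIEXPORT jint JNICALL Java_com_example_Foo_add(JNIEnv* env, jobject thiz,
                                                           jint a, jint b);

// jni_long_name: short name + "__" + mangled argument signature ("II" here)
extern "C" JNIEXPORT jint JNICALL Java_com_example_Foo_add__II(JNIEnv* env, jobject thiz,
                                                               jint a, jint b);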


 void* FindNativeMethodInternal(Thread* self,
                                 void* declaring_class_loader_allocator,
                                 const char* shorty,
                                 const std::string& jni_short_name,
                                 const std::string& jni_long_name)
      REQUIRES(!Locks::jni_libraries_lock_)
      REQUIRES(!Locks::mutator_lock_) {
    MutexLock mu(self, *Locks::jni_libraries_lock_);
    for (const auto& lib : libraries_) {  // search each loaded .so for the jni_short_name / jni_long_name symbols
      SharedLibrary* const library = lib.second;
      // Use the allocator address for class loader equality to avoid unnecessary weak root decode.
      if (library->GetClassLoaderAllocator() != declaring_class_loader_allocator) {
        // We only search libraries loaded by the appropriate ClassLoader.
        continue;
      }
      // Try the short name then the long name...
      const char* arg_shorty = library->NeedsNativeBridge() ? shorty : nullptr;
      void* fn = library->FindSymbol(jni_short_name, arg_shorty);
      if (fn == nullptr) {
        fn = library->FindSymbol(jni_long_name, arg_shorty);
      }
      if (fn != nullptr) {
        VLOG(jni) << "[Found native code for " << jni_long_name
                  << " in "" << library->GetPath() << ""]";
        return fn;
      }
    }
    return nullptr;
  }

// Now, where do the entries in libraries_ come from?

art/runtime/jni/java_vm_ext.cc

libraries_ is a private field of JavaVMExt. JavaVMExt::LoadNativeLibrary adds an entry to it for every native library that gets loaded (e.g. via System.loadLibrary). It is worth dumping at runtime to see what it actually contains.
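
A rough sketch of how entries end up in libraries_ (my summary of the flow, names abbreviated, not a verbatim quote of the sources):

// Java:   System.loadLibrary("foo")
//   -> Runtime.loadLibrary0() / Runtime.nativeLoad()   (libcore)
//   -> JavaVMExt::LoadNativeLibrary(env, path, class_loader, ...)
//        - dlopen()s the .so (through the platform's native loader / native bridge)
//        - wraps the handle in a SharedLibrary, keyed by path, and inserts it into libraries_
//        - calls the library's JNI_OnLoad() if it exports one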


//--------------------------- Analysis 4. GetQuickGenericJniStub(): the generic JNI trampoline, which ultimately fetches the native code pointer from the ArtMethod's data_ field

// Return the address of quick stub code for handling JNI calls.
extern "C" void art_quick_generic_jni_trampoline(ArtMethod*);
static inline const void* GetQuickGenericJniStub() {
  return reinterpret_cast<const void*>(art_quick_generic_jni_trampoline);
}


/*
 * Generic JNI frame layout:
 *
 * #-------------------#
 * |                   |
 * | caller method...  |
 * #-------------------#    <--- SP on entry
 * | Return X30/LR     |
 * | X29/FP            |    callee save
 * | X28               |    callee save
 * | X27               |    callee save
 * | X26               |    callee save
 * | X25               |    callee save
 * | X24               |    callee save
 * | X23               |    callee save
 * | X22               |    callee save
 * | X21               |    callee save
 * | X20               |    callee save
 * | X7                |    arg7
 * | X6                |    arg6
 * | X5                |    arg5
 * | X4                |    arg4
 * | X3                |    arg3
 * | X2                |    arg2
 * | X1                |    arg1
 * | D7                |    float arg 8
 * | D6                |    float arg 7
 * | D5                |    float arg 6
 * | D4                |    float arg 5
 * | D3                |    float arg 4
 * | D2                |    float arg 3
 * | D1                |    float arg 2
 * | D0                |    float arg 1
 * | padding           | // 8B
 * | Method*           | <- X0 (Managed frame similar to SaveRefsAndArgs.)
 * #-------------------#
 * | local ref cookie  | // 4B
 * | padding           | // 0B or 4B to align handle scope on 8B address
 * | handle scope      | // Size depends on number of references; multiple of 4B.
 * #-------------------#
 * | JNI Stack Args    | // Empty if all args fit into registers x0-x7, d0-d7.
 * #-------------------#    <--- SP on native call (1)
 * | Free scratch      |
 * #-------------------#
 * | SP for JNI call   | // Pointer to (1).
 * #-------------------#
 * | Hidden arg        | // For @CriticalNative
 * #-------------------#
 * |                   |
 * | Stack for Regs    |    The trampoline assembly will pop these values
 * |                   |    into registers for native call
 * #-------------------#
 */
    /*
     * Called to do a generic JNI down-call
     */
    .extern artQuickGenericJniTrampoline
ENTRY art_quick_generic_jni_trampoline
    SETUP_SAVE_REFS_AND_ARGS_FRAME_WITH_METHOD_IN_X0

    // Save SP , so we can have static CFI info.
    mov x28, sp
    .cfi_def_cfa_register x28

    // This looks the same, but is different: this will be updated to point to the bottom
    // of the frame when the handle scope is inserted.
    mov xFP, sp

    mov xIP0, #5120
    sub sp, sp, xIP0

    // prepare for artQuickGenericJniTrampoline call
    // (Thread*, managed_sp, reserved_area)
    //    x0         x1            x2   <= C calling convention
    //  xSELF       xFP            sp   <= where they are

    mov x0, xSELF   // Thread*
    mov x1, xFP     // SP for the managed frame.
    mov x2, sp      // reserved area for arguments and other saved data (up to managed frame)
    bl artQuickGenericJniTrampoline  // (Thread*, sp)  // key point ---------------------

    // The C call will have registered the complete save-frame on success.
    // The result of the call is:
    //     x0: pointer to native code, 0 on error.
    //     The bottom of the reserved area contains values for arg registers,
    //     hidden arg register and SP for out args for the call.

    // Check for error (class init check or locking for synchronized native method can throw).
    cbz x0, .Lexception_in_native

    // Save the code pointer
    mov xIP0, x0

    // Load parameters from frame into registers.
    ldp x0, x1, [sp]
    ldp x2, x3, [sp, #16]
    ldp x4, x5, [sp, #32]
    ldp x6, x7, [sp, #48]

    ldp d0, d1, [sp, #64]
    ldp d2, d3, [sp, #80]
    ldp d4, d5, [sp, #96]
    ldp d6, d7, [sp, #112]

    // Load hidden arg (x15) for @CriticalNative and SP for out args.
    ldp x15, xIP1, [sp, #128]

    // Apply the new SP for out args, releasing unneeded reserved area.
    mov sp, xIP1

    blr xIP0        // native call.

    // result sign extension is handled in C code
    // prepare for artQuickGenericJniEndTrampoline call
    // (Thread*, result, result_f)
    //    x0       x1       x2        <= C calling convention
    mov x1, x0      // Result (from saved).
    mov x0, xSELF   // Thread register.
    fmov x2, d0     // d0 will contain floating point result, but needs to go into x2

    bl artQuickGenericJniEndTrampoline

    // Pending exceptions possible.
    ldr x2, [xSELF, THREAD_EXCEPTION_OFFSET]
    cbnz x2, .Lexception_in_native

    // Tear down the alloca.
    mov sp, x28
    .cfi_def_cfa_register sp

    // Tear down the callee-save frame.
    RESTORE_SAVE_REFS_AND_ARGS_FRAME
    REFRESH_MARKING_REGISTER

    // store into fpr, for when it's a fpr return...
    fmov d0, x0
    ret

.Lexception_in_native:
    // Move to x1 then sp to please assembler.
    ldr x1, [xSELF, # THREAD_TOP_QUICK_FRAME_OFFSET]
    add sp, x1, #-1  // Remove the GenericJNI tag.
    .cfi_def_cfa_register sp
    # This will create a new save-all frame, required by the runtime.
    DELIVER_PENDING_EXCEPTION
END art_quick_generic_jni_trampoline




/*
 * Initializes the reserved area assumed to be directly below `managed_sp` for a native call:
 *
 * On entry, the stack has a standard callee-save frame above `managed_sp`,
 * and the reserved area below it. Starting below `managed_sp`, we reserve space
 * for local reference cookie (not present for @CriticalNative), HandleScope
 * (not present for @CriticalNative) and stack args (if args do not fit into
 * registers). At the bottom of the reserved area, there is space for register
 * arguments, hidden arg (for @CriticalNative) and the SP for the native call
 * (i.e. pointer to the stack args area), which the calling stub shall load
 * to perform the native call. We fill all these fields, perform class init
 * check (for static methods) and/or locking (for synchronized methods) if
 * needed and return to the stub.
 *
 * The return value is the pointer to the native code, null on failure.
 */
extern "C" const void* artQuickGenericJniTrampoline(Thread* self,
                                                    ArtMethod** managed_sp,
                                                    uintptr_t* reserved_area)
    REQUIRES_SHARED(Locks::mutator_lock_) {
  // Note: We cannot walk the stack properly until fixed up below.
  ArtMethod* called = *managed_sp;
  DCHECK(called->IsNative()) << called->PrettyMethod(true);
  Runtime* runtime = Runtime::Current();
  uint32_t shorty_len = 0;
  const char* shorty = called->GetShorty(&shorty_len);
  bool critical_native = called->IsCriticalNative();
  bool fast_native = called->IsFastNative();
  bool normal_native = !critical_native && !fast_native;

  // Run the visitor and update sp.
  BuildGenericJniFrameVisitor visitor(self,
                                      called->IsStatic(),
                                      critical_native,
                                      shorty,
                                      shorty_len,
                                      managed_sp,
                                      reserved_area);
  {
    ScopedAssertNoThreadSuspension sants(__FUNCTION__);
    visitor.VisitArguments();
    // FinalizeHandleScope pushes the handle scope on the thread.
    visitor.FinalizeHandleScope(self);
  }

  // Fix up managed-stack things in Thread. After this we can walk the stack.
  self->SetTopOfStackTagged(managed_sp);

  self->VerifyStack();

  // We can now walk the stack if needed by JIT GC from MethodEntered() for JIT-on-first-use.
  jit::Jit* jit = runtime->GetJit();
  if (jit != nullptr) {
    jit->MethodEntered(self, called);
  }

  // We can set the entrypoint of a native method to generic JNI even when the
  // class hasn't been initialized, so we need to do the initialization check
  // before invoking the native code.
  if (NeedsClinitCheckBeforeCall(called)) {
    ObjPtr<mirror::Class> declaring_class = called->GetDeclaringClass();
    if (UNLIKELY(!declaring_class->IsVisiblyInitialized())) {
      // Ensure static method's class is initialized.
      StackHandleScope<1> hs(self);
      Handle<mirror::Class> h_class(hs.NewHandle(declaring_class));
      if (!runtime->GetClassLinker()->EnsureInitialized(self, h_class, true, true)) {
        DCHECK(Thread::Current()->IsExceptionPending()) << called->PrettyMethod();
        self->PopHandleScope();
        return nullptr;  // Report error.
      }
    }
  }

  uint32_t cookie;
  uint32_t* sp32;
  // Skip calling JniMethodStart for @CriticalNative.
  if (LIKELY(!critical_native)) {
    // Start JNI, save the cookie.
    if (called->IsSynchronized()) {
      DCHECK(normal_native) << " @FastNative and synchronize is not supported";
      cookie = JniMethodStartSynchronized(visitor.GetFirstHandleScopeJObject(), self);
      if (self->IsExceptionPending()) {
        self->PopHandleScope();
        return nullptr;  // Report error.
      }
    } else {
      if (fast_native) {
        cookie = JniMethodFastStart(self);
      } else {
        DCHECK(normal_native);
        cookie = JniMethodStart(self);
      }
    }
    sp32 = reinterpret_cast<uint32_t*>(managed_sp);
    *(sp32 - 1) = cookie;
  }

  // Retrieve the stored native code.
  // Note that it may point to the lookup stub or trampoline.
  // FIXME: This is broken for @CriticalNative as the art_jni_dlsym_lookup_stub
  // does not handle that case. Calls from compiled stubs are also broken.
  void const* nativeCode = called->GetEntryPointFromJni();  // --------- key point: read the native code pointer from data_

  VLOG(third_party_jni) << "GenericJNI: "
                        << called->PrettyMethod()
                        << " -> "
                        << std::hex << reinterpret_cast<uintptr_t>(nativeCode);

  // Return native code.
  return nativeCode;
}

  void* GetEntryPointFromJni() const {
    DCHECK(IsNative());
    return GetEntryPointFromJniPtrSize(kRuntimePointerSize);
  }


  ALWAYS_INLINE void* GetEntryPointFromJniPtrSize(PointerSize pointer_size) const {
    return GetDataPtrSize(pointer_size);
  }

  ALWAYS_INLINE void* GetDataPtrSize(PointerSize pointer_size) const {
    DCHECK(IsImagePointerSize(pointer_size));
    return GetNativePointer<void*>(DataOffset(pointer_size), pointer_size);
  }

  ALWAYS_INLINE T GetNativePointer(MemberOffset offset, PointerSize pointer_size) const {
    static_assert(std::is_pointer<T>::value, "T must be a pointer type");
    const auto addr = reinterpret_cast<uintptr_t>(this) + offset.Uint32Value();
    if (pointer_size == PointerSize::k32) {
      return reinterpret_cast<T>(*reinterpret_cast<const uint32_t*>(addr));
    } else {
      auto v = *reinterpret_cast<const uint64_t*>(addr);
      return reinterpret_cast<T>(dchecked_integral_cast<uintptr_t>(v));
    }
  }

  static constexpr MemberOffset DataOffset(PointerSize pointer_size) {
    return MemberOffset(PtrSizedFieldsOffset(pointer_size) + OFFSETOF_MEMBER(
        PtrSizedFields, data_) / sizeof(void*) * static_cast<size_t>(pointer_size));
  }
Reference: "Android ART执行类方法的过程" (How Android ART executes class methods), www.jianshu.com

3. LinkClass

Before stepping into LinkClass, let's recap what SetupClass and LoadClass have produced so far.

At this point:

The target class's information has been extracted from the corresponding class_def (and related structures) in the dex file and turned into a mirror::Class object. The fields and methods declared by the class have likewise been created as ArtField and ArtMethod objects and wired up. In terms of Class members, methods_, sfields_ and ifields_ are all in place.

At first glance the class looks fully ready. But recalling the Class members introduced above, quite a few things are still unresolved, for example:
· iftable_: the methods of the interfaces this class implements or inherits from its superclass. What exactly does it contain?
· vtable_: the virtual methods coming from the superclass and from this class. What exactly does it contain?
· methods_: although it was set during LoadClass, it does not yet contain Miranda methods (recall the note at the end of the Miranda Methods section: in the Android runtime, Miranda methods are not stored in the dex file but are copied in dynamically during LinkClass). So what should the complete methods_ hold?
· What are some of the other members for, e.g. object_size_ (the memory size of an instance of this class when one is allocated from Java) and reference_instance_offsets_?
· Class ends with three implicit members, embedded_imtable_, embedded_vtable_ and fields_. What are they?
All of these questions are answered by the LinkClass family of functions:

  1. Assign each ArtField's offset_
  2. Build the vtable and imtable (a conceptual sketch follows below)
  3. Resolve imtable conflicts
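
To make step 2 concrete, here is a minimal conceptual sketch of vtable construction (illustration only, not the ART implementation): the subclass vtable starts as a copy of the superclass vtable; each virtual method declared by this class either overrides a slot with the same name and signature or is appended as a new slot, and its final index is what invoke-virtual uses for dispatch.

#include <cstddef>
#include <string>
#include <vector>

struct MethodSketch {
  std::string name_and_signature;  // e.g. "toString()Ljava/lang/String;"
};

std::vector<MethodSketch*> BuildVTableSketch(
    const std::vector<MethodSketch*>& super_vtable,
    const std::vector<MethodSketch*>& declared_virtuals) {
  std::vector<MethodSketch*> vtable = super_vtable;  // inherit the superclass slots
  for (MethodSketch* m : declared_virtuals) {
    bool overrides = false;
    for (std::size_t i = 0; i < vtable.size(); ++i) {
      if (vtable[i]->name_and_signature == m->name_and_signature) {
        vtable[i] = m;      // same name + signature: override the inherited slot
        overrides = true;
        break;
      }
    }
    if (!overrides) {
      vtable.push_back(m);  // otherwise: append a new slot at the end
    }
  }
  return vtable;
}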

To be continued...

Reference: "ART:类的加载、链接和初始化" (ART: class loading, linking and initialization), www.jianshu.com