Performance Monitoring - RAM

Introduction

CPU and FPS have already been covered, so naturally memory is next.

Memory falls into two broad categories: RAM and ROM. (A quick sketch of how to query each follows the list below.)

  • RAM: the phone's working memory. When a phone-assistant floating window warns that "running memory usage is above 80%", it is talking about RAM.
  • ROM: the memory that stores your data. The "Total space 128 GB, 32.2 GB free" figure shown on the system Settings screen refers to ROM.
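
To make the distinction concrete, here is a small sketch that reads total device RAM via ActivityManager.MemoryInfo (not to be confused with the Debug.MemoryInfo used later in this article) and internal storage via StatFs. It is only an illustration: `context` is assumed to be an available Context, and imports from android.app / android.content / android.os are omitted.

// Total device RAM, via ActivityManager.MemoryInfo (values in bytes)
val am = context.getSystemService(Context.ACTIVITY_SERVICE) as ActivityManager
val ramInfo = ActivityManager.MemoryInfo()
am.getMemoryInfo(ramInfo)
val totalRamMb = ramInfo.totalMem / (1024 * 1024)

// Internal storage ("ROM" in the everyday sense), via StatFs (values in bytes)
val stat = StatFs(Environment.getDataDirectory().path)
val totalRomGb = stat.totalBytes / (1024.0 * 1024 * 1024)
val freeRomGb = stat.availableBytes / (1024.0 * 1024 * 1024)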

When this article talks about performance monitoring of memory, it means RAM.

Implementation

Principle

When you have no idea where to start with something, it is worth looking at how experienced developers have done it.

Debug.MemoryInfo

You may never have heard of this class, but if you dig into the source code of existing monitoring tools, it is easy to see that their memory measurements rely mainly on it.

Since everything hinges on this class, let's take a look at what it actually provides.

/**
 * This class is used to retrieve various statistics about the memory mappings for this
 * process. The returned info is broken down by dalvik, native, and other. All results are in kB.
 */
public static class MemoryInfo implements Parcelable {

    // ... fields and methods omitted ...

}

From the official documentation it is clear that this class is the place to start when measuring RAM.
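
For orientation, these are the members we care about, assuming `info` is a Debug.MemoryInfo that has already been filled in (the next section shows how to obtain one); as the documentation says, all values are in kB:

val dalvikPssKb = info.dalvikPss   // PSS attributed to the Dalvik/ART heap
val nativePssKb = info.nativePss   // PSS attributed to the native heap
val otherPssKb  = info.otherPss    // everything else (stacks, graphics buffers, ...)
val totalPssKb  = info.totalPss    // the aggregate value we will report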

Debug.getMemoryInfo()

To use MemoryInfo to measure RAM, we first need a way to obtain a filled-in MemoryInfo.

It is easy to spot that the Debug class has a method for getting this MemoryInfo:

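Debug.getMemoryInfo(Debug.MemoryInfo) fills the object you pass in with the calling process's statistics. A minimal sketch:

val memInfo = Debug.MemoryInfo()
Debug.getMemoryInfo(memInfo)        // populates memInfo for the current process
val totalPssKb = memInfo.totalPss   // in kB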

ActivityManager

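ActivityManager offers a second route: getProcessMemoryInfo() returns a Debug.MemoryInfo for each pid you pass in. A minimal sketch (obtaining the ActivityManager from a Context is assumed):

val am = context.getSystemService(Context.ACTIVITY_SERVICE) as ActivityManager
val infos = am.getProcessMemoryInfo(intArrayOf(Process.myPid()))
val totalPssKb = infos.firstOrNull()?.totalPss ?: 0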

As the documentation for getProcessMemoryInfo() points out, this method is not friendly to Android Q: for regular apps it only reports processes running under the caller's uid, and its sampling rate is heavily limited. So we only use it when SDK_INT <= 28 (Android P and below).

Code implementation

val memoryData: Float
    get() {
        var mem = 0.0f
        try {
            var memInfo: Debug.MemoryInfo? = null
            // 28 == Android P; on Android Q (API 29) and above we query Debug directly
            if (Build.VERSION.SDK_INT > 28) {
                // Collect the current process's memory statistics; we only need totalPss
                memInfo = Debug.MemoryInfo()
                Debug.getMemoryInfo(memInfo)
            } else {
                // getProcessMemoryInfo is only used on P and below because, as of Android Q,
                // for regular apps it only returns memory info for processes running as the
                // caller's uid (everything else is zero), and its sample rate is significantly
                // limited; calling it faster than the limit returns the same data as before.
                // mActivityManager is assumed to be obtained elsewhere, e.g. via
                // context.getSystemService(Context.ACTIVITY_SERVICE) as ActivityManager.
                val memInfos = mActivityManager.getProcessMemoryInfo(intArrayOf(Process.myPid()))
                if (memInfos != null && memInfos.isNotEmpty()) {
                    memInfo = memInfos[0]
                }
            }
            val totalPss = memInfo?.totalPss ?: 0
            if (totalPss >= 0) {
                // totalPss is reported in kB; convert to MB
                mem = totalPss / 1024.0f
            }
        } catch (e: Exception) {
            e.printStackTrace()
        }
        return mem
    }
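
For an on-screen monitor the value is usually sampled on a timer. Below is a hypothetical polling sketch; the 2-second interval and the `updateUi` callback are illustrative and not part of the original code:

private val handler = Handler(Looper.getMainLooper())
private val sampleIntervalMs = 2_000L

private val memorySampler = object : Runnable {
    override fun run() {
        updateUi(memoryData)                        // e.g. show "123.4 MB" in a floating view
        handler.postDelayed(this, sampleIntervalMs)
    }
}

fun startMemoryMonitor() {
    handler.post(memorySampler)
}

fun stopMemoryMonitor() {
    handler.removeCallbacks(memorySampler)
}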

If anything above is wrong, feel free to point it out!
