64-bit App Compatibility on Android Phones

With the arrival of Armv9 chips in 2022, the phone industry began moving away from 32-bit entirely and toward 64-bit apps. Google Play has required 64-bit support since 2019, and app stores from Huawei, Xiaomi, OPPO, vivo, and others have set their own deadlines for gradually ending 32-bit support. Developers need to adjust their build settings, for example by specifying ABIs in the module build.gradle, to keep their apps compatible and performing well on 64-bit devices.


Why

  • New phones shipping with Armv9 chips in 2022 are not compatible with 32-bit, causing problems such as stuttering and crashes
  • 64-bit offers better performance
  • Phone hardware has improved and storage is larger, so app package size is no longer a major concern

App store policies

Google Play announcement

Starting August 1, 2019, apps you publish on Google Play must support 64-bit.

Starting August 1, 2021, apps without a 64-bit version are no longer served to 64-bit capable devices: they cannot be found in search, and updates whose new version is still 32-bit only are not accepted.


Xiaomi / OPPO / vivo / Tencent MyApp / Baidu Mobile Assistant

To improve app performance and reduce app power consumption, the Xiaomi, OPPO, and vivo app stores are jointly driving the domestic Android ecosystem's upgrade to the 64-bit architecture.

The industry adaptation schedule is as follows:

  • End of December 2021: existing and newly published apps/games must upload APKs that include a 64-bit build (two forms are supported: separate 32-bit and 64-bit packages listed in parallel, or a single package containing both 64-bit and 32-bit; 32-bit-only APKs are no longer accepted)
  • End of August 2022: systems running on 64-bit capable hardware will only accept APKs that include a 64-bit version
  • End of 2023: hardware will only support 64-bit APKs, and 32-bit apps will no longer run on devices

Huawei

According to the Huawei Developer Alliance, Huawei recently issued a notice requiring developers to accelerate the phase-out of 32-bit apps, in two stages:

1. Starting February 1, 2022, apps and games newly listed on or updated in Huawei AppGallery must include a 64-bit version; AppGallery no longer accepts 32-bit-only submissions.

2. Starting September 1, 2022, Huawei AppGallery no longer accepts submissions that include a 32-bit version (i.e., only 64-bit versions may be submitted).

How to adapt

Specify ABIs in the module's build.gradle

Single (universal) APK — the ndk block goes inside defaultConfig in the module build.gradle:

    ndk {
        abiFilters 'armeabi-v7a', 'arm64-v8a'
    }
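
For context, here is a minimal sketch of where that block sits in a module build.gradle; the applicationId, SDK levels, and version values below are illustrative placeholders, not from the original post:

    apply plugin: 'com.android.application'

    android {
        compileSdkVersion 31

        defaultConfig {
            applicationId "com.example.app"   // hypothetical package name
            minSdkVersion 21
            targetSdkVersion 31
            versionCode 1
            versionName "1.0"

            ndk {
                // Ship 32-bit and 64-bit arm native libraries in one APK.
                abiFilters 'armeabi-v7a', 'arm64-v8a'
            }
        }
    }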
Multiple APKs — one per ABI, plus an optional universal APK:

    // abiFilters limits which ABIs the universal APK contains.
    ndk {
        abiFilters 'armeabi-v7a', 'x86', 'arm64-v8a'
    }

    // Inside the android { } block: configures multiple APKs based on ABI.
    splits {
        abi {
            // Enables building multiple APKs per ABI, but only for release builds.
            def isReleaseBuild = false
            gradle.startParameter.taskNames.find {
                // Enable splits for release builds of any flavor
                // (assemblePaidRelease, assembleFreeRelease, etc.).
                if (it ==~ /:app:assemble.*Release/) {
                    isReleaseBuild = true
                    return true // break
                }
                return false // continue
            }
            enable isReleaseBuild

            // By default all ABIs are included, so use reset() and include()
            // to specify that we only want APKs for armeabi-v7a and arm64-v8a.
            reset()

            // Specifies the list of ABIs that Gradle should create APKs for.
            include "armeabi-v7a", "arm64-v8a"

            // Also generate a universal APK that includes all ABIs.
            universalApk true
        }
    }
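
When several per-ABI APKs are uploaded, each one needs its own versionCode. Below is a minimal sketch of the versionCodeOverride pattern for the Android Gradle plugin's Groovy DSL; the abiCodes map and the multiplier 1000 are illustrative choices, not part of the original post:

    // Imports conventionally go near the top of build.gradle.
    import com.android.build.OutputFile

    // Map each ABI to a small number used to derive a unique versionCode.
    ext.abiCodes = ['armeabi-v7a': 1, 'arm64-v8a': 2]

    android.applicationVariants.all { variant ->
        variant.outputs.each { output ->
            // getFilter() returns null for the universal APK,
            // which therefore keeps the base versionCode.
            def baseAbiCode = project.ext.abiCodes.get(output.getFilter(OutputFile.ABI))
            if (baseAbiCode != null) {
                // Override the version code only for this ABI-specific APK.
                output.versionCodeOverride = baseAbiCode * 1000 + variant.versionCode
            }
        }
    }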
    

References

Google Play announcement on the 64-bit requirement

Technical Support | Android apps are upgrading to the 64-bit architecture

Phasing out 32-bit: Huawei requires developers to move fully to 64-bit apps next year

Armv9

Xiaomi executive on third-party app stutter and crashes: the cause revealed
