Crash: reportSizeConfigurations ActivityRecord not found for Token xxx

This article analyzes a bug in the Android framework that crashes apps under specific conditions. The problem occurs when an Activity starts a Service: if the Service performs a time-consuming task and the user closes the app during that window, the Activity can no longer be handled correctly, and the system eventually throws an exception. The bug is fixed in Android 10 and later.
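To make the failure mode concrete, here is a minimal reproduction sketch. Everything in it is illustrative: MainActivity, SlowService, and the 10-second sleep are my own stand-ins, not code from the affected app. The key point is that Service#onStartCommand runs on the main thread, so blocking there stalls the main looper and delays any pending activity-launch transaction; if the user closes the app during that window, the launch is later executed against a token whose server-side ActivityRecord has already been removed.

// MainActivity.java -- starts a service whose onStartCommand blocks the main thread.
import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;

public class MainActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        startService(new Intent(this, SlowService.class));
    }
}

// SlowService.java -- onStartCommand runs on the main thread, so the sleep
// below stalls the looper and delays any queued LaunchActivityItem.
import android.app.Service;
import android.content.Intent;
import android.os.IBinder;

public class SlowService extends Service {
    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        try {
            Thread.sleep(10_000); // stand-in for a time-consuming task
        } catch (InterruptedException ignored) {
        }
        return START_NOT_STICKY;
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null; // started service, no binding needed
    }
}

If the app is swiped away while the sleep is in progress, the main thread resumes with a stale launch transaction, handleLaunchActivity runs, and the binder call to reportSizeConfigurations fails with the stack trace shown below.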

The Problem

The following error showed up in our online crash reports:

java.lang.reflect.UndeclaredThrowableException
at $Proxy5.reportSizeConfigurations(Unknown Source)
at android.app.ActivityThread.reportSizeConfigurations(ActivityThread.java:3670)
at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:3625)
at android.app.servertransaction.LaunchActivityItem.execute(LaunchActivityItem.java:86)
at android.app.servertransaction.TransactionExecutor.executeCallbacks(TransactionExecutor.java:108)
at android.app.servertransaction.TransactionExecutor.execute(TransactionExecutor.java:68)
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:2199)
at android.os.Handler.dispatchMessage(Handler.java:112)
at android.os.Looper.loop(Looper.java:216)
at android.app.ActivityThread.main(ActivityThread.java:7625)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:524)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java)
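Since the framework-side fix only ships with Android 10 and later, devices on older versions remain exposed. Assuming the blocking work in the Service is under your own control, the simplest way to avoid triggering the bug is to keep onStartCommand non-blocking and push the heavy task onto a worker thread. A minimal sketch (SafeService and doTimeConsumingWork are again hypothetical names):

// SafeService.java -- the heavy task moves off the main thread, so pending
// activity-launch transactions are processed without delay.
import android.app.Service;
import android.content.Intent;
import android.os.IBinder;

public class SafeService extends Service {
    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        new Thread(() -> {
            doTimeConsumingWork(); // hypothetical long-running task
            stopSelf(startId);     // stop the service once the work is done
        }).start();
        return START_NOT_STICKY;   // return immediately; main thread stays free
    }

    private void doTimeConsumingWork() {
        // placeholder for the real work
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null;
    }
}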