Android: Use String.format to Optimize Your Code


In Android development, suppose we need a verification-code countdown.

Many developers write it like this:

int count = 60;

textView.setText("" + count);
count--;
handler.sendxxxx;   // (elided in the original) re-post a delayed message to tick again

Every time a number is concatenated this way, a brand-new String is created, allocating a fresh object on the heap.

The fix: String.format()

tvReSendMessage.setText(String.format(context.getResources().getString(R.string.re_send_message_format), count));

where re_send_message_format is the string resource:
重新发送 %1$d  ("Resend %1$d")
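For reference, a resource with that key would be declared in res/values/strings.xml roughly as below (the surrounding file layout is the standard Android convention, not something from the original post):

```xml
<!-- res/values/strings.xml -->
<resources>
    <!-- %1$d is replaced with the remaining seconds at runtime -->
    <string name="re_send_message_format">重新发送 %1$d</string>
</resources>
```

Note that Context.getString(int resId, Object... formatArgs) applies the formatting itself, so context.getString(R.string.re_send_message_format, count) is an equivalent one-line shorthand.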

In %1$d, the %1$ part selects which argument to format (here, the first one), and the d conversion says that argument should be formatted as a decimal integer.

%d  decimal integer

%s  string

%f  floating point
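A quick plain-Java check of these conversions (Locale.US is passed explicitly in the %f case because floating-point output, e.g. the decimal separator, is locale-sensitive):

```java
import java.util.Locale;

public class FormatDemo {
    public static void main(String[] args) {
        // %d: decimal integer
        System.out.println(String.format("count = %d", 60));           // count = 60
        // %s: string (any object, via String.valueOf)
        System.out.println(String.format("name = %s", "Tom"));         // name = Tom
        // %f: floating point, six decimal places by default
        System.out.println(String.format(Locale.US, "pi = %f", 3.14)); // pi = 3.140000
    }
}
```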

With multiple parameters, you can write:

Your name %1$s %2$d %3$f

Anyone with a C/C++ background will recognize this printf-style format syntax.
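The positional syntax can be sketched in plain Java; the argument values here are made up purely for illustration:

```java
import java.util.Locale;

public class PositionalDemo {
    public static void main(String[] args) {
        // %1$s, %2$d, %3$f pull arguments by position, so the template
        // can order them independently of the argument list
        String s = String.format(Locale.US, "Your name %1$s %2$d %3$f", "Tom", 7, 1.5);
        System.out.println(s); // Your name Tom 7 1.500000
        // The same index may be reused any number of times
        System.out.println(String.format("%1$s, %1$s!", "echo")); // echo, echo!
    }
}
```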

