【Image Super-Resolution】Paper Walkthrough: Using Deep Learning Super-Resolution for Improved Segmentation of SEM Biofilm Images

If this is your first time here, please start with this article: 【Super-Resolution】Notes on the Super-Resolution Reconstruction column (column overview, highlights, intended audience, notes, recommended reading order, how to think about super-resolution, implementation workflow, research directions, and a roundup of papers, code, and datasets).


Preface

Paper title: Using Deep Learning Super-Resolution for Improved Segmentation of SEM Biofilm Images

Paper link: Using Deep Learning Super-Resolution for Improved Segmentation of SEM Biofilm Images

BIBM 2022! Using super-resolution to improve the segmentation of SEM biofilm images.

Abstract

Scanning electron microscopy (SEM) images play a crucial role in the quantitative analysis of biofilms on materials by providing detailed information about biofilm formation, ultrastructure, cells, and their interactions with the material. Beyond the intrinsic limitations of SEM imaging, the images within an SEM volume often differ in magnification, resolution, depth of field, and SEM protocol. Quantitative characterization of biofilm morphology in these SEM volumes, such as cell size, geometry, and density, is therefore a challenging problem. This paper introduces deep-learning-based super-resolution (DLSR) as a step toward addressing it. Three DLSR methods based on generative adversarial network (GAN) techniques are applied to an SEM biofilm dataset, and …
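The workflow the abstract describes, upscaling low-resolution SEM tiles with a learned super-resolution model before segmenting them, can be sketched as follows. This is only a schematic under assumed names: the checkpoint `sem_sr_generator.pth`, the ×4 scale, and the single-channel input are hypothetical placeholders, and the three GAN-based DLSR methods the paper actually compares are not reproduced here.

```python
# Sketch of the abstract's workflow: upscale a low-resolution SEM tile with a
# pretrained super-resolution generator, then hand the result to segmentation.
# The checkpoint name, the x4 scale, and the generator architecture are
# placeholders, not details taken from the paper.
import torch
import torchvision.transforms.functional as TF
from PIL import Image

def super_resolve(generator: torch.nn.Module, lr_path: str) -> Image.Image:
    """Run one grayscale SEM tile through a pretrained SR generator."""
    lr = Image.open(lr_path).convert("L")
    x = TF.to_tensor(lr).unsqueeze(0)          # shape (1, 1, H, W), values in [0, 1]
    with torch.no_grad():
        sr = generator(x).clamp(0, 1)          # e.g. (1, 1, 4H, 4W) for a x4 model
    return TF.to_pil_image(sr.squeeze(0))

# Hypothetical usage: the checkpoint is assumed to contain a full nn.Module.
# generator = torch.load("sem_sr_generator.pth", map_location="cpu").eval()
# sr_tile = super_resolve(generator, "biofilm_tile_lr.png")
# sr_tile.save("biofilm_tile_sr.png")          # then segment the upscaled tile
```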

Introduction

Gas metal arc welding (GMAW), also known as metal inert gas (MIG) welding, is a widely used industrial process that involves the transfer of metal droplets from a consumable electrode wire to a workpiece through a welding arc. In this process, the welding operator controls various welding parameters, such as welding current, voltage, wire feed speed, and electrode polarity, to achieve the desired weld bead geometry and properties. The metal transfer mechanism plays a critical role in determining weld quality and productivity in GMAW. There has therefore been significant interest in developing automated methods for analyzing metal transfer images and extracting useful information about the process. In recent years, deep learning has emerged as a powerful technique for analyzing and processing images. Convolutional neural networks (CNNs) are a type of deep learning model that can learn features from images in an end-to-end manner, without requiring explicit feature engineering. In this paper, we present a deep-learning-based approach for analyzing metal transfer images in GMAW. We first discuss the dataset used in this study, followed by a detailed description of the proposed method. We then present the experimental results and discuss the implications of our findings.

Dataset

The metal transfer images were captured using a high-speed camera at a frame rate of 20,000 frames per second. The camera was positioned perpendicular to the welding direction and had a resolution of 1280 × 1024 pixels. The images were captured during the welding of mild steel plates using a GMAW process with a 1.2 mm diameter wire. The welding current, voltage, and wire feed speed were varied to obtain a range of metal transfer modes, including short-circuiting, globular, and spray transfer modes. The dataset consists of 10,000 metal transfer images, each labeled with the corresponding metal transfer mode.

Proposed method

The proposed method for analyzing metal transfer images in GMAW consists of the following steps (a code sketch of the full pipeline follows the list):

1. Image preprocessing: The metal transfer images are preprocessed to remove noise and artifacts. A Gaussian filter is applied to smooth the images, followed by a contrast-enhancement step using histogram equalization.
2. Feature extraction: A CNN is used to extract features from the preprocessed images. The CNN architecture used in this study is based on the VGG-16 model, which has shown excellent performance in image classification tasks. The VGG-16 model consists of 13 convolutional layers and 3 fully connected layers. The output of the last convolutional layer is used as the feature vector for each image.
3. Classification: The feature vectors extracted from the metal transfer images are used to train a multiclass classification model. In this study, we used a support vector machine (SVM) classifier with a radial basis function (RBF) kernel. The SVM classifier was trained on 80% of the dataset and tested on the remaining 20%.
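As a concrete illustration of these three steps, the sketch below wires them together in Python with OpenCV, torchvision, and scikit-learn. The 5×5 Gaussian kernel, the 224×224 input size, the ImageNet normalization, and the flattened last-conv-block output used as the feature vector are illustrative assumptions, not settings reported in the text; `image_paths` and `labels` are placeholders for the 10,000 labeled frames.

```python
# Minimal sketch of the three-step pipeline: Gaussian smoothing + histogram
# equalization, VGG-16 convolutional features, and an RBF-kernel SVM evaluated
# on an 80/20 split.
import cv2
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def preprocess(path: str) -> np.ndarray:
    """Step 1: denoise with a Gaussian filter, then equalize the histogram."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.GaussianBlur(img, (5, 5), 0)      # 5x5 kernel is an assumption
    return cv2.equalizeHist(img)

# Step 2: the 13 convolutional layers of VGG-16 as a fixed feature extractor.
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
backbone = vgg.features.eval()

@torch.no_grad()
def extract_features(gray: np.ndarray) -> np.ndarray:
    rgb = cv2.cvtColor(gray, cv2.COLOR_GRAY2RGB)           # VGG expects 3 channels
    rgb = cv2.resize(rgb, (224, 224))                      # ImageNet input size
    x = TF.normalize(TF.to_tensor(rgb),
                     mean=[0.485, 0.456, 0.406],
                     std=[0.229, 0.224, 0.225]).unsqueeze(0)
    fmap = backbone(x)                                     # last conv-block output
    return fmap.flatten().numpy()

# Step 3: RBF-kernel SVM, trained on 80% of the data and tested on 20%.
def train_and_evaluate(image_paths, labels):
    X = np.stack([extract_features(preprocess(p)) for p in image_paths])
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2,
                                              stratify=labels, random_state=0)
    clf = SVC(kernel="rbf").fit(X_tr, y_tr)
    return accuracy_score(y_te, clf.predict(X_te))
```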
Experimental results

The proposed method was evaluated on the dataset of 10,000 metal transfer images. The classification accuracy achieved by the SVM classifier was 96.7%, indicating that the method can accurately classify the metal transfer modes in GMAW. To further validate the performance of the method, we compared it with two other classification models: a decision tree classifier and a random forest classifier. The decision tree classifier achieved an accuracy of 85.2%, while the random forest classifier achieved an accuracy of 94.5%. These results demonstrate that the proposed method outperforms these traditional machine learning models. To further analyze the performance of the method, we conducted a sensitivity analysis by varying the number of convolutional layers in the CNN. We found that performance improved as the number of convolutional layers increased, up to a certain point, after which there was no significant further improvement.