A Sensitivity Analysis of Contribution-Based Cooperative Co-evolutionary Algorithms

0. Background

Building on CBCC, this paper analyzes the performance of existing CBCC techniques under more realistic conditions, namely when decomposition errors are unavoidable and the imbalance level is only low or moderate. The in-depth analysis shows that, even under these conditions, CBCC algorithms remain a superior alternative to traditional CC techniques.

Kazimipour B, Omidvar M N, Li X, et al. A sensitivity analysis of contribution-based cooperative co-evolutionary algorithms[C]//2015 IEEE Congress on Evolutionary Computation (CEC). IEEE, 2015: 417-424.

1. Experimental Setup

For background on CBCC itself, see the earlier blog post: CBCC.

Experimental setting 1: decomposition accuracy.

First, the correctly grouped variables are kept in their respective components, while all of the misgrouped variables are collected into one extra component, which we call the unlabeled group. To construct it, a given percentage of variables is selected uniformly at random from all components and aggregated into the unlabeled group. The resulting component therefore contains variables from every other group, and consequently interacts strongly with all of the other components.
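To make the procedure concrete, here is a minimal sketch of how such an unlabeled group can be built. The function name make_unlabeled_group, the use of NumPy, and the exact way the indices are shuffled are my own illustrative assumptions; the paper only states that a percentage of variables is drawn uniformly at random from all components and merged into one extra group.

```python
import numpy as np

def make_unlabeled_group(groups, fraction, rng=None):
    """Simulate decomposition error: move a random `fraction` of all
    variable indices out of their correct groups into one extra
    'unlabeled' group.

    groups   : list of lists of variable indices (the correct grouping)
    fraction : share of all variables to mislabel, e.g. 0.1 for 10%
    """
    rng = np.random.default_rng() if rng is None else rng
    all_vars = np.concatenate([np.asarray(g) for g in groups])
    n_move = int(round(fraction * all_vars.size))

    # Pick the variables to mislabel uniformly at random over all components.
    moved = set(rng.choice(all_vars, size=n_move, replace=False).tolist())

    # Remove them from their original groups ...
    new_groups = [[v for v in g if v not in moved] for g in groups]
    # ... and collect them in the extra 'unlabeled' group, which now
    # contains variables from (and thus interacts with) every other component.
    new_groups.append(sorted(moved))
    return new_groups

# Example: three groups of ten variables each, 20% decomposition error.
groups = [list(range(0, 10)), list(range(10, 20)), list(range(20, 30))]
print(make_unlabeled_group(groups, fraction=0.2, rng=np.random.default_rng(0)))
```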

Experimental setting 2: imbalance level.

Imbalance between the subspaces (subcomponents) is induced using the following equation.
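As a sketch of the general form used throughout the CBCC/imbalance literature (the exact weight schedule used in the paper is not reproduced here and should be checked against the original), the overall objective is typically a weighted sum of subcomponent functions:

$$F(\vec{x}) = \sum_{i=1}^{k} w_i \, f_i(\vec{x}_i), \qquad w_i > 0,$$

where $\vec{x}_i$ is the block of decision variables assigned to the $i$-th subcomponent and $f_i$ is its subfunction. When all $w_i$ are equal, the contributions are balanced; the larger the spread among the $w_i$, the higher the imbalance level, and the more a contribution-aware method such as CBCC can gain by concentrating evaluations on the high-weight subcomponents.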
