VM Image Type Conversion Guide (C++)

Preface

VM currently offers four development modes to suit different groups of developers: the VM algorithm platform, VM SDK development, operator SDK development, and operator (algorithm) module development. Getting images correctly into the general-purpose algorithm library is the first problem developers usually run into, and there are many image formats in use, such as Bitmap, Mat and the Halcon image types, while VM's different development modes each use their own image format. This article explains how to convert between these image types. The sample code in this article is C++; for C# samples, please refer to the previous article.
Note: this article applies to VM 4.2 only.

The image types involved in VM are as follows:

  1. Camera: image data stream (the stream type here is MyCamera.MV_FRAME_OUT, which comes from MvCameraControl.Net.dll, i.e. the HIKROBOT industrial camera SDK; this DLL ships with both the MVS SDK and the operator SDK. The operator SDK's MVDCamera.Net.dll can also grab camera frames; it is a secondary wrapper around MvCameraControl.Net.dll, and when MVDCamera.Net.dll is used the stream type is IMvdImage);

  2. VM: script input image (type ImageData);

  3. VM SDK: procedure input image (IoImage), Group input image (IoImage), image-source SDK input image (ImageBaseData), module input image (ImageBaseData);

  4. Operator SDK: input image (IMvdImage);

  5. Algorithm module: input image (HKA_IMAGE).

Note that for 3-channel images, Bitmap and Mat store pixels as BGR, while VM and its secondary-development interfaces use RGB.
For the common conversion scenarios, the sections below provide sample conversions for reference (each conversion is written as a function that takes one image type as input and returns the converted image type).
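
For example, when handing an OpenCV Mat (BGR) to a VM interface that expects RGB, the channel order usually has to be swapped first. A minimal sketch, assuming opencv2/opencv.hpp is included and the file path is illustrative:

cv::Mat bgrImg = cv::imread("sample.bmp");        // OpenCV loads 3-channel images as BGR
cv::Mat rgbImg;
cv::cvtColor(bgrImg, rgbImg, cv::COLOR_BGR2RGB);  // reorder channels before filling a VM image
// rgbImg.data can now be used as the RGB24 buffer expected by VM / the secondary-development SDKs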

Image Conversion

1. Converting camera frames to VM image formats

A grabbed camera frame has the type MV_FRAME_OUT (MyCamera.MV_FRAME_OUT in the .NET SDK). The following examples convert it to the procedure input image, Group input image, image-source SDK input image, module input image and operator input image respectively.

1.1 Camera frame to procedure input (IoImage) / Group input (IoImage)

//Image conversion function
IoImage MV_FRAME_OUTToProcedureIoImage(MV_FRAME_OUT stImageInfo)
{
	IoImage ioImage{};
	unsigned char* m_pSaveImageBuf = NULL;//recommended: declare this in the header file so you can control when it is freed and avoid a memory leak; it is defined here only to keep the example self-contained
	m_pSaveImageBuf = (unsigned char*)malloc(sizeof(unsigned char) * stImageInfo.stFrameInfo.nFrameLen);
	memcpy(m_pSaveImageBuf, stImageInfo.pBufAddr, stImageInfo.stFrameInfo.nFrameLen);

	ioImage.stImage.Width = stImageInfo.stFrameInfo.nWidth;
	ioImage.stImage.Height = stImageInfo.stFrameInfo.nHeight;
	ioImage.stImage.DataLen = stImageInfo.stFrameInfo.nFrameLen;
	ioImage.stImage.ImageData = m_pSaveImageBuf;
	if (stImageInfo.stFrameInfo.enPixelType == PixelType_Gvsp_Mono8)
	{
		ioImage.stImage.Pixelformat = MvdPixelFormat::MVD_PIXEL_MONO_08;
	}
	else if (stImageInfo.stFrameInfo.enPixelType == PixelType_Gvsp_RGB8_Packed)
	{
		ioImage.stImage.Pixelformat = MvdPixelFormat::MVD_PIXEL_RGB_RGB24_C3;
	}
	return ioImage;
}
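
A possible call site, sketched under the assumption that `handle` is an opened camera that is already grabbing via the MvCameraControl SDK (error handling trimmed for brevity):

MV_FRAME_OUT stImageInfo = { 0 };
int nRet = MV_CC_GetImageBuffer(handle, &stImageInfo, 1000); // 1000 ms timeout
if (MV_OK == nRet)
{
	IoImage ioImage = MV_FRAME_OUTToProcedureIoImage(stImageInfo);
	// ... pass ioImage to the procedure / Group input here ...
	MV_CC_FreeImageBuffer(handle, &stImageInfo); // give the driver buffer back
	// free(ioImage.stImage.ImageData);          // release the malloc'ed copy once VM no longer needs it
}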

1.2 Camera frame to image-source SDK input (ImageBaseData) / module input (ImageBaseData)

//Image conversion function
ImageBaseData MV_FRAME_OUTToImageBaseData(MV_FRAME_OUT stImageInfo)
{
	ImageBaseData imageBaseData{};
	unsigned char* m_pSaveImageBuf = NULL;//recommended: declare this in the header file so you can control when it is freed and avoid a memory leak; it is defined here only to keep the example self-contained
	m_pSaveImageBuf = (unsigned char*)malloc(sizeof(unsigned char) * stImageInfo.stFrameInfo.nFrameLen);
	memcpy(m_pSaveImageBuf, stImageInfo.pBufAddr, stImageInfo.stFrameInfo.nFrameLen);

	imageBaseData.Width = stImageInfo.stFrameInfo.nWidth;
	imageBaseData.Height = stImageInfo.stFrameInfo.nHeight;
	imageBaseData.DataLen = stImageInfo.stFrameInfo.nFrameLen;
	imageBaseData.ImageData = m_pSaveImageBuf;
	if (stImageInfo.stFrameInfo.enPixelType == PixelType_Gvsp_Mono8)
	{
		imageBaseData.Pixelformat = MvdPixelFormat::MVD_PIXEL_MONO_08;
	}
	else if (stImageInfo.stFrameInfo.enPixelType == PixelType_Gvsp_RGB8_Packed)
	{
		imageBaseData.Pixelformat = MvdPixelFormat::MVD_PIXEL_RGB_RGB24_C3;
	}
	return imageBaseData;
}

1.3 Camera frame to operator input (IMvdImage)

//Image conversion function
IMvdImage* MV_FRAME_OUTToIMvdImage(MV_FRAME_OUT stImageInfo)
{
	IMvdImage* iMvdImage = NULL;
	CreateImageInstance(&iMvdImage);//the instance must be created before InitImage is called below
	unsigned char* m_pSaveImageBuf = NULL;//recommended: declare this in the header file so you can control when it is freed and avoid a memory leak; it is defined here only to keep the example self-contained
	m_pSaveImageBuf = (unsigned char*)malloc(sizeof(unsigned char) * stImageInfo.stFrameInfo.nFrameLen);
	memcpy(m_pSaveImageBuf, stImageInfo.pBufAddr, stImageInfo.stFrameInfo.nFrameLen);
	MVD_IMAGE_DATA_INFO stImageData{ };

	if (stImageInfo.stFrameInfo.enPixelType == PixelType_Gvsp_Mono8)
	{
		stImageData.stDataChannel[0].nRowStep = stImageInfo.stFrameInfo.nWidth;
		stImageData.stDataChannel[0].nLen = stImageInfo.stFrameInfo.nFrameLen;
		stImageData.stDataChannel[0].nSize = stImageInfo.stFrameInfo.nFrameLen;
		stImageData.stDataChannel[0].pData = m_pSaveImageBuf;
		iMvdImage->InitImage(stImageInfo.stFrameInfo.nWidth, stImageInfo.stFrameInfo.nHeight, VisionDesigner::_MVD_PIXEL_FORMAT_::MVD_PIXEL_MONO_08, stImageData);
	}
	else if (stImageInfo.stFrameInfo.enPixelType == PixelType_Gvsp_RGB8_Packed)
	{
		stImageData.stDataChannel[0].nRowStep = stImageInfo.stFrameInfo.nWidth*3;
		stImageData.stDataChannel[0].nLen = stImageInfo.stFrameInfo.nFrameLen;
		stImageData.stDataChannel[0].nSize = stImageInfo.stFrameInfo.nFrameLen;
		stImageData.stDataChannel[0].pData = m_pSaveImageBuf;
		iMvdImage->InitImage(stImageInfo.stFrameInfo.nWidth, stImageInfo.stFrameInfo.nHeight, VisionDesigner::_MVD_PIXEL_FORMAT_::MVD_PIXEL_RGB_RGB24_C3, stImageData);
	}
	return iMvdImage;
}
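
The operator image returned above should be released once it is no longer needed; a usage sketch, assuming the conversion succeeded:

IMvdImage* pOperatorImage = MV_FRAME_OUTToIMvdImage(stImageInfo);
if (NULL != pOperatorImage)
{
	// ... pass pOperatorImage to an operator here ...
	DestroyImageInstance(pOperatorImage); // release the instance created by CreateImageInstance
	pOperatorImage = NULL;
}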

2. Converting between QImage and VM image formats

Sample code for converting a QImage to the procedure input, Group input, image-source SDK input, module input and operator input images, and for converting the procedure output image back to a QImage, is shown below.

2.1 QImage to procedure input (IoImage) / Group input (IoImage)

IoImage QImageToIoImage(QImage qImage)
{
    QString strReMsg = "";
    IoImage ioImage;
    switch (qImage.format())
    {
    case QImage::Format_Indexed8:
        ioImage.stImage.Width=qImage.width();
        ioImage.stImage.Height=qImage.height();
        ioImage.stImage.DataLen=qImage.sizeInBytes();
        ioImage.stImage.Pixelformat=_MvdPixelFormat_::MVD_PIXEL_MONO_08;
        ioImage.stImage.ImageData=(void *)qImage.constBits();
        //ioImage.stImage.ImageData=qImage.data_ptr();
        //QImage(ioImage.stImage.ImageData,ioImage.stImage.Width,ioImage.stImage.Width,QImage::Format_RGB888);
        break;       
    case QImage::Format_RGB888:
        ioImage.stImage.Width=qImage.width();
        ioImage.stImage.Height=qImage.height();
        ioImage.stImage.DataLen=qImage.sizeInBytes();
        ioImage.stImage.Pixelformat=_MvdPixelFormat_::MVD_PIXEL_RGB_RGB24_C3;
        ioImage.stImage.ImageData=(void *)qImage.constBits();
        break;
    }
    strReMsg = "QImageToIoImage s uccess.";
    ui->textEdit->append(strReMsg);
    return ioImage;
}

2.2 QImage to image-source SDK input (ImageBaseData) / module input (ImageBaseData)

ImageBaseData QImageToImageBaseData(QImage qImage)
{
    ImageBaseData  imageBaseData;
    switch (qImage.format())
    {
    case QImage::Format_Indexed8:
        imageBaseData.Width=qImage.width();
        imageBaseData.Height=qImage.height();
        imageBaseData.DataLen=qImage.sizeInBytes();
        imageBaseData.Pixelformat=_MvdPixelFormat_::MVD_PIXEL_MONO_08;
        imageBaseData.ImageData=(void *)qImage.constBits();
        //ioImage.stImage.ImageData=qImage.data_ptr();
        break;
    case QImage::Format_RGB888:
        imageBaseData.Width=qImage.width();
        imageBaseData.Height=qImage.height();
        imageBaseData.DataLen=qImage.sizeInBytes();
        imageBaseData.Pixelformat=_MvdPixelFormat_::MVD_PIXEL_RGB_RGB24_C3;
        imageBaseData.ImageData=(void *)qImage.constBits();
        break;
    }
    return imageBaseData;
}


2.3 QImage and operator image (IMvdImage) interconversion

IMvdImage* QImageToIMvdImage(QImage qImage)
{
    IMvdImage* iMvdImage;
    CreateImageInstance(&iMvdImage);
    MVD_IMAGE_DATA_INFO stImageData;
    switch (qImage.format())
    {
    case QImage::Format_Indexed8:
        stImageData.stDataChannel[0].nRowStep = qImage.width();
        stImageData.stDataChannel[0].nLen = qImage.sizeInBytes();
        stImageData.stDataChannel[0].nSize = qImage.sizeInBytes();
        stImageData.stDataChannel[0].pData = (unsigned char*)qImage.constBits();
        iMvdImage->InitImage(qImage.width(), qImage.height(),MVD_PIXEL_FORMAT::MVD_PIXEL_MONO_08, stImageData);
        break;
    case QImage::Format_RGB888:
        stImageData.stDataChannel[0].nRowStep = qImage.width()*3;
        stImageData.stDataChannel[0].nLen = qImage.sizeInBytes();
        stImageData.stDataChannel[0].nSize = qImage.sizeInBytes();
        stImageData.stDataChannel[0].pData = (unsigned char*)qImage.constBits();
        iMvdImage->InitImage(qImage.width(), qImage.height(),MVD_PIXEL_FORMAT::MVD_PIXEL_RGB_RGB24_C3, stImageData);
        break;
    }
    return iMvdImage;
}

//Operator image to QImage
QImage IMvdImageToQImage(IMvdImage * iMvdImage)
{
    if(iMvdImage->GetPixelFormat() == MVD_PIXEL_FORMAT::MVD_PIXEL_MONO_08)
    {
        //QImage qImage((const uchar*)iMvdImage->GetImageData(0)->pData,iMvdImage->GetWidth(),iMvdImage->GetHeight(),iMvdImage->GetImageData(0)->nLen/iMvdImage->GetHeight(),QImage::Format_Indexed8);
        //QImage qImage(iMvdImage->GetImageData(0)->pData,iMvdImage->GetWidth(),iMvdImage->GetHeight(),iMvdImage->GetImageData(0)->nLen/iMvdImage->GetHeight(),QImage::Format_Indexed8);
        QImage qImage((const uchar*)iMvdImage->GetImageData(0)->pData,iMvdImage->GetWidth(),iMvdImage->GetHeight(),QImage::Format_Grayscale8);//Format_Indexed8
        return qImage;
    }
    if(iMvdImage->GetPixelFormat() == MVD_PIXEL_FORMAT::MVD_PIXEL_RGB_RGB24_C3)
    {
        QImage qImage((const uchar*)iMvdImage->GetImageData(0)->pData,iMvdImage->GetWidth(),iMvdImage->GetHeight(),QImage::Format_RGB888);
        return qImage;
    }
    return QImage();//unsupported pixel format
}
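
Note that a QImage constructed from an external pointer does not own the pixel data. If the IMvdImage (and its buffer) may be destroyed before the QImage is consumed, take a deep copy (pMvdImage below is an illustrative pointer to a valid operator image):

QImage qSafeCopy = IMvdImageToQImage(pMvdImage).copy(); // deep copy, detaches from the operator image buffer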

2.4 Procedure output (IoImage) to QImage

QImage IoImageToQImage(IoImage inIoImage)
{
    if(inIoImage.stImage.Pixelformat == _MvdPixelFormat_::MVD_PIXEL_MONO_08)
    {
        QImage qImage((const uchar*)inIoImage.stImage.ImageData,inIoImage.stImage.Width,inIoImage.stImage.Height,QImage::Format_Grayscale8);
        return qImage;
    }
    if(inIoImage.stImage.Pixelformat == _MvdPixelFormat_::MVD_PIXEL_RGB_RGB24_C3)
    {
        QImage qImage((const uchar*)inIoImage.stImage.ImageData,inIoImage.stImage.Width,inIoImage.stImage.Height,QImage::Format_RGB888);
        return qImage;
    }
    return QImage();//unsupported pixel format
}

3. Converting between Mat and VM image formats

Sample code for converting a Mat to/from the procedure input, Group input, image-source SDK input, module input, operator input, operator output, procedure output and script images is shown below:

3.1 Mat to procedure input (IoImage) / Group input (IoImage)

IoImage MatToProcedureInputImage(Mat matInputImg)
{
	if (matInputImg.empty())
	{
		throw IMVDException(MVD_MODUL_APP, MVD_E_PARAMETER_ILLEGAL);
	}
	if ((CV_8UC1 != matInputImg.type()) && (CV_8UC3 != matInputImg.type()))
	{
		throw IMVDException(MVD_MODUL_APP, MVD_E_SUPPORT);
	}

	IoImage  m_pIoImage{};
	uint dataLen = (uint)(matInputImg.cols * matInputImg.rows * matInputImg.channels());
	CString strReMsg = _T("");
	try
	{
		if (CV_8UC1 == matInputImg.type())
		{
			m_pIoImage.stImage.Width = matInputImg.cols;
			m_pIoImage.stImage.Height = matInputImg.rows;
			m_pIoImage.stImage.DataLen = dataLen;
			m_pIoImage.stImage.Pixelformat = _MvdPixelFormat_::MVD_PIXEL_MONO_08;
			m_pIoImage.stImage.ImageData = matInputImg.ptr(0);
		}
		else if (CV_8UC3 == matInputImg.type())
		{
			cv::cvtColor(matInputImg, matInputImg, CV_BGR2RGB);
			m_pIoImage.stImage.Width = matInputImg.cols;
			m_pIoImage.stImage.Height = matInputImg.rows;
			m_pIoImage.stImage.DataLen = dataLen;
			m_pIoImage.stImage.Pixelformat = _MvdPixelFormat_::MVD_PIXEL_RGB_RGB24_C3;
			m_pIoImage.stImage.ImageData = matInputImg.ptr(0);
		}
	}
	catch (CVmException e)
	{
		strReMsg.Format(_T("%x"), e.GetErrorCode());
		strReMsg = _T("0x") + strReMsg + _T(" == SaveProcedureToFile()");
	}

	return m_pIoImage;
}
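
A usage sketch for the function above (the file path is illustrative). Note that the returned IoImage points into the Mat's buffer, so the Mat must stay alive until the procedure has consumed the input:

cv::Mat src = cv::imread("test.bmp", cv::IMREAD_COLOR);
if (!src.empty())
{
	IoImage procInput = MatToProcedureInputImage(src);
	// ... run the procedure with procInput while `src` is still in scope ...
}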

3.2 Mat to image-source SDK input (ImageBaseData) / module input (ImageBaseData)

ImageBaseData MatToImageBaseData(Mat matInputImg)
{
	if (matInputImg.empty())
	{
		throw IMVDException(MVD_MODUL_APP, MVD_E_PARAMETER_ILLEGAL);
	}
	if ((CV_8UC1 != matInputImg.type()) && (CV_8UC3 != matInputImg.type()))
	{
		throw IMVDException(MVD_MODUL_APP, MVD_E_SUPPORT);
	}

	ImageBaseData  m_pImageBaseData{};
	uint dataLen = (uint)(matInputImg.cols * matInputImg.rows * matInputImg.channels());
	CString strReMsg = _T("");
	try
	{
		if (CV_8UC1 == matInputImg.type())
		{
			m_pImageBaseData.Width = matInputImg.cols;
			m_pImageBaseData.Height = matInputImg.rows;
			m_pImageBaseData.DataLen = dataLen;
			m_pImageBaseData.Pixelformat = _MvdPixelFormat_::MVD_PIXEL_MONO_08;
			m_pImageBaseData.ImageData = matInputImg.ptr(0);
		}
		else if (CV_8UC3 == matInputImg.type())
		{
			cv::cvtColor(matInputImg, matInputImg, CV_BGR2RGB);
			m_pImageBaseData.Width = matInputImg.cols;
			m_pImageBaseData.Height = matInputImg.rows;
			m_pImageBaseData.DataLen = dataLen;
			m_pImageBaseData.Pixelformat = _MvdPixelFormat_::MVD_PIXEL_RGB_RGB24_C3;
			m_pImageBaseData.ImageData = matInputImg.ptr(0);
		}
	}
	catch (CVmException e)
	{
		strReMsg.Format(_T("%x"), e.GetErrorCode());
		strReMsg = _T("0x") + strReMsg + _T(" == SaveProcedureToFile()");
	}

	return m_pImageBaseData;
}

3.3 Mat and operator image (IMvdImage) interconversion

IMvdImage* MatToIMvdImage(Mat& matInputImg)
{
	if (matInputImg.empty())
	{
		throw IMVDException(MVD_MODUL_APP, MVD_E_PARAMETER_ILLEGAL);
	}
	if ((CV_8UC1 != matInputImg.type()) && (CV_8UC3 != matInputImg.type()))
	{
		throw IMVDException(MVD_MODUL_APP, MVD_E_SUPPORT);
	}

	IMvdImage* pMvdImg = NULL;
	uint dataLen = (uint)(matInputImg.cols * matInputImg.rows * matInputImg.channels());
	try
	{
		int nRet = CreateImageInstance(&pMvdImg);
		if (MVD_OK != nRet)
		{
			throw IMVDException(MVD_MODUL_IMAGE, nRet, "Failed to create image instance.");
		}
		if (CV_8UC1 == matInputImg.type())
		{
			MVD_IMAGE_DATA_INFO stImageData{ };
			stImageData.stDataChannel[0].nRowStep = (uint)matInputImg.cols;
			stImageData.stDataChannel[0].nLen = dataLen;
			stImageData.stDataChannel[0].nSize = dataLen;
			stImageData.stDataChannel[0].pData = matInputImg.ptr(0);
			pMvdImg->InitImage((uint)matInputImg.cols, (uint)matInputImg.rows, VisionDesigner::_MVD_PIXEL_FORMAT_::MVD_PIXEL_MONO_08, stImageData);

		}
		else if (CV_8UC3 == matInputImg.type())
		{
			cv::cvtColor(matInputImg, matInputImg, CV_BGR2RGB);
			MVD_IMAGE_DATA_INFO stImageData{ };
			stImageData.stDataChannel[0].nRowStep = (uint)matInputImg.cols*matInputImg.channels();
			stImageData.stDataChannel[0].nLen = dataLen;
			stImageData.stDataChannel[0].nSize = dataLen;
			stImageData.stDataChannel[0].pData = matInputImg.ptr(0);
			pMvdImg->InitImage((uint)matInputImg.cols, (uint)matInputImg.rows, VisionDesigner::_MVD_PIXEL_FORMAT_::MVD_PIXEL_RGB_RGB24_C3, stImageData);
		}
	}
	catch (IMVDException &ex)
	{
		if (NULL != pMvdImg)
		{
			DestroyImageInstance(pMvdImg);
			pMvdImg = NULL;
		}
		throw ex;
	}

	return pMvdImg;
}

Mat IMvdImageToMat(IMvdImage* pMvdImg)
{
	Mat stMatImg;

	if (NULL == pMvdImg)
	{
		throw IMVDException(MVD_MODUL_APP, MVD_E_PARAMETER_ILLEGAL);
	}
	MVD_PIXEL_FORMAT enSrcPixelFormat = pMvdImg->GetPixelFormat();
	if ((VisionDesigner::MVD_PIXEL_FORMAT::MVD_PIXEL_MONO_08 != enSrcPixelFormat) && (VisionDesigner::MVD_PIXEL_FORMAT::MVD_PIXEL_RGB_RGB24_C3 != enSrcPixelFormat))
	{
		throw IMVDException(MVD_MODUL_APP, MVD_E_SUPPORT);
	}

	// Initialize the Mat according to the incoming IMvdImage
	if (VisionDesigner::MVD_PIXEL_FORMAT::MVD_PIXEL_MONO_08 == enSrcPixelFormat)
	{
		stMatImg.create((int)pMvdImg->GetHeight(), (int)pMvdImg->GetWidth(), CV_8UC1);
	}
	else if (VisionDesigner::MVD_PIXEL_FORMAT::MVD_PIXEL_RGB_RGB24_C3 == enSrcPixelFormat)
	{
		stMatImg.create((int)pMvdImg->GetHeight(), (int)pMvdImg->GetWidth(), CV_8UC3);
	}
	if (stMatImg.empty())
	{
		throw IMVDException(MVD_MODUL_APP, MVD_E_RESOURCE);
	}

	// Memory allocated by Mat::create above is guaranteed to be continuous
	uchar* pdata = stMatImg.ptr<uchar>(0);
	memcpy(pdata, pMvdImg->GetImageData(0)->pData, pMvdImg->GetImageData(0)->nLen);
	if (CV_8UC3 == stMatImg.type())
	{
		cvtColor(stMatImg, stMatImg, CV_RGB2BGR);
	}
	return stMatImg;
}
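
A round-trip sketch for the two functions above, assuming `src` is a valid 8UC1/8UC3 Mat. IMvdImageToMat deep-copies the pixel data, so the operator image can be destroyed afterwards; note that MatToIMvdImage converts a 3-channel `src` to RGB in place:

IMvdImage* pMvdImg = MatToIMvdImage(src);
cv::Mat back = IMvdImageToMat(pMvdImg); // `back` owns its own copy of the pixels
DestroyImageInstance(pMvdImg);
pMvdImg = NULL;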

3.4 Procedure output (IoImage) to Mat

Mat IoImageToMat(IoImage m_pIoImage)
{
	Mat stMatImg;
	CString strReMsg = _T("");
	try
	{
		MvdPixelFormat srcPixelFormat = m_pIoImage.stImage.Pixelformat;
		if ((VisionMasterSDK::MvdPixelFormat::MVD_PIXEL_MONO_08 != srcPixelFormat) && (VisionMasterSDK::MvdPixelFormat::MVD_PIXEL_RGB_RGB24_C3 != srcPixelFormat))
		{
			throw CVmException(0xE0000503);
		}
		// Initialize the Mat according to the incoming IoImage
		if (VisionMasterSDK::MvdPixelFormat::MVD_PIXEL_MONO_08 == srcPixelFormat)
		{
			stMatImg.create((int)m_pIoImage.stImage.Height, (int)m_pIoImage.stImage.Width, CV_8UC1);
		}
		else if (VisionMasterSDK::MvdPixelFormat::MVD_PIXEL_RGB_RGB24_C3 == srcPixelFormat)
		{
			stMatImg.create((int)m_pIoImage.stImage.Height, (int)m_pIoImage.stImage.Width, CV_8UC3);
		}
		if (stMatImg.empty())
		{
			throw IMVDException(MVD_MODUL_APP, MVD_E_RESOURCE);
		}
		// Memory allocated by Mat::create above is guaranteed to be continuous
		uchar* pdata = stMatImg.ptr<uchar>(0);
		memcpy(pdata, m_pIoImage.stImage.ImageData, m_pIoImage.stImage.DataLen);
		if (CV_8UC3 == stMatImg.type())
		{
			cvtColor(stMatImg, stMatImg, CV_RGB2BGR);
		}
	}
	catch (CVmException e)
	{
		strReMsg.Format(_T("%x"), e.GetErrorCode());
		strReMsg = _T("0x") + strReMsg + _T(" == SaveProcedureToFile()");
	}

	return stMatImg;
}

4. Converting between Halcon and VM image formats

Sample code for converting a Halcon image to the procedure input, Group input, image-source SDK input, module input and operator input image types, and for converting operator output images and procedure output images back to Halcon images, is shown below:

4.1 Halcon image to procedure input (IoImage) / Group input (IoImage)

IoImage HImageToIoImage(HImage image)
{
	IoImage ioImage{};
	ImageBaseData imageBaseData = HImageToImageBaseData(image);
	ioImage.stImage = imageBaseData;
	return ioImage;
}

4.2 Halcon image to image-source SDK input (ImageBaseData) / module input (ImageBaseData)

ImageBaseData HImageToImageBaseData(HImage image)
{
	ImageBaseData imageBaseData{};
	//Get the bit depth of the image channels
	HString bitdepth = image.GetChannelInfo("type", 1);
	assert(!strcmp(bitdepth.Text(), "byte"));
	int channels = image.CountChannels();
	assert(channels == 1 || channels == 3);
	HString type;
	Hlong width, height;
	//Single channel
	if (channels == 1)
	{
		void* imagePtr = image.GetImagePointer1(&type, &width, &height);
		imageBaseData.DataLen = width * height;
		imageBaseData.Width = width;
		imageBaseData.Height = height;
		imageBaseData.Pixelformat = MvdPixelFormat::MVD_PIXEL_MONO_08;
		imageBaseData.ImageData = imagePtr;
	}
	//3 channels
	if (channels == 3)
	{
		void* imageRedPtr;
		void* imageGreenPtr;
		void* imageBluePtr;
		image.GetImagePointer3(&imageRedPtr, &imageGreenPtr, &imageBluePtr, &type, &width, &height);
		byte* imageRedBuf = new byte[width * height];
		byte* imageGreenBuf = new byte[width * height];
		byte* imageBlueBuf = new byte[width * height];
		memcpy(imageRedBuf, imageRedPtr, width * height);
		memcpy(imageGreenBuf, imageGreenPtr, width * height);
		memcpy(imageBlueBuf, imageBluePtr, width * height);
		byte* imageBuf = new byte[width * height * 3];
		int index = 0;
		for (int row = 0; row < height; row++)
		{
			for (int col = 0; col < width; col++, index += 3)
			{
				imageBuf[index] = imageRedBuf[row * width + col];
				imageBuf[index + 1] = imageGreenBuf[row * width + col];
				imageBuf[index + 2] = imageBlueBuf[row * width + col];
			}
		}
		delete[] imageRedBuf;
		delete[] imageGreenBuf;
		delete[] imageBlueBuf;
		imageBaseData.DataLen = width * height * 3;
		imageBaseData.Width = width;
		imageBaseData.Height = height;
		imageBaseData.Pixelformat = MvdPixelFormat::MVD_PIXEL_RGB_RGB24_C3;
		imageBaseData.ImageData = imageBuf;
	}
	return imageBaseData;
}
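
For 3-channel images the function above allocates the interleaved buffer with new[] and hands it out through ImageData, so the caller is responsible for releasing it. A usage sketch (the variable `himage` is illustrative):

ImageBaseData data = HImageToImageBaseData(himage);
// ... pass data to the image-source SDK / module input here ...
if (MvdPixelFormat::MVD_PIXEL_RGB_RGB24_C3 == data.Pixelformat)
{
	delete[] (byte*)data.ImageData; // buffer allocated inside HImageToImageBaseData
	data.ImageData = NULL;
}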


4.3 Halcon image and operator image (IMvdImage) interconversion

IMvdImage* HImageToMvdImage(HImage image)
{
	//Get the bit depth of the image channels
	HString bitdepth = image.GetChannelInfo("type", 1);
	assert(!strcmp(bitdepth.Text(), "byte"));
	int channels = image.CountChannels();
	assert(channels == 1 || channels == 3);
	HString type;
	Hlong width, height;
	//Single channel
	if (channels == 1)
	{
		IMvdImage* pMvdImage;
		CreateImageInstance(&pMvdImage);
		void* imagePtr = image.GetImagePointer1(&type, &width, &height);
		MVD_IMAGE_DATA_INFO imageDataInfo;
		imageDataInfo.stDataChannel[0].nLen = width * height;
		imageDataInfo.stDataChannel[0].nRowStep = width;
		imageDataInfo.stDataChannel[0].nSize = width * height;
		imageDataInfo.stDataChannel[0].pData = (byte*)imagePtr;
		pMvdImage->InitImage(width, height, MVD_PIXEL_FORMAT::MVD_PIXEL_MONO_08, imageDataInfo);
		return pMvdImage;
	}
	//3 channels
	else if (channels == 3)
	{
		IMvdImage* pMvdImage;
		CreateImageInstance(&pMvdImage);
		void* imageRedPtr;
		void* imageGreenPtr;
		void* imageBluePtr;
		image.GetImagePointer3(&imageRedPtr, &imageGreenPtr, &imageBluePtr, &type, &width, &height);
		long size = width * height * 3;
		byte* imageRedBuf = new byte[width * height];
		byte* imageGreenBuf = new byte[width * height];
		byte* imageBlueBuf = new byte[width * height];
		memcpy(imageRedBuf, imageRedPtr, width * height);
		memcpy(imageGreenBuf, imageGreenPtr, width * height);
		memcpy(imageBlueBuf, imageBluePtr, width * height);
		byte* imageBuffer = new byte[size];
		int index = 0;
		for (int row = 0; row < height; row++)
		{
			for (int col = 0; col < width; col++, index += 3)
			{
				imageBuffer[index] = imageRedBuf[row * width + col];
				imageBuffer[index + 1] = imageGreenBuf[row * width + col];
				imageBuffer[index + 2] = imageBlueBuf[row * width + col];
			}
		}
		delete[] imageRedBuf;
		delete[] imageGreenBuf;
		delete[] imageBlueBuf;
		MVD_IMAGE_DATA_INFO imageDataInfo{};
		imageDataInfo.stDataChannel[0].nLen = width * height * 3;
		imageDataInfo.stDataChannel[0].nRowStep = width * 3;
		imageDataInfo.stDataChannel[0].nSize = width * height * 3;
		imageDataInfo.stDataChannel[0].pData = imageBuffer;
		try {
			pMvdImage->InitImage(width, height, MVD_PIXEL_FORMAT::MVD_PIXEL_RGB_RGB24_C3, imageDataInfo);
		}
		catch (IMVDException& ex)
		{
			cout << ex.GetDescription() << ex.GetErrorCode() << endl;
		}
		return pMvdImage;
	}
	else
	{
		return nullptr;
	}
}

HImage MvdImageToHImage(IMvdImage* pMvdImage)
{
	HImage image;
	MVD_IMAGE_DATA_INFO* imageDataInfo = pMvdImage->GetImageData();
	if (pMvdImage->GetPixelFormat() == MVD_PIXEL_FORMAT::MVD_PIXEL_MONO_08)
	{
		image.GenImage1(
			"byte",
			pMvdImage->GetWidth(),
			pMvdImage->GetHeight(),
			imageDataInfo->stDataChannel[0].pData);
	}
	else if (pMvdImage->GetPixelFormat() == MVD_PIXEL_FORMAT::MVD_PIXEL_RGB_RGB24_C3)
	{
		int width = pMvdImage->GetWidth();
		int height = pMvdImage->GetHeight();
		long size = width * height * 3;
		byte* imageRedBuf = new byte[(long)(width * height)];
		byte* imageGreenBuf = new byte[(long)(width * height)];
		byte* imageBlueBuf = new byte[(long)(width * height)];
		byte* imageBuffer = new byte[size];
		memcpy(imageBuffer, pMvdImage->GetImageData()->stDataChannel[0].pData, size);
		int index = 0;
		for (int row = 0; row < height; row++)
		{
			for (int col = 0; col < width; col++, index += 3)
			{
				imageRedBuf[row * width + col] = imageBuffer[index];
				imageGreenBuf[row * width + col] = imageBuffer[index + 1];
				imageBlueBuf[row * width + col] = imageBuffer[index + 2];
			}
		}
		delete[] imageBuffer;
		image.GenImage3(
			"byte",
			pMvdImage->GetWidth(),
			pMvdImage->GetHeight(),
			imageRedBuf,
			imageGreenBuf,
			imageBlueBuf);
		//GenImage3 copies the channel data, so the temporary buffers can be released here
		delete[] imageRedBuf;
		delete[] imageGreenBuf;
		delete[] imageBlueBuf;
	}
	return image;
}

4.4 Procedure output (IoImage) to Halcon image

HImage IoImageToHImage(IoImage ioImage)
{
	HImage image;
	if (ioImage.stImage.Pixelformat == MvdPixelFormat::MVD_PIXEL_MONO_08)
	{
		image.GenImage1("byte", ioImage.stImage.Width, ioImage.stImage.Height, ioImage.stImage.ImageData);
	}
	if (ioImage.stImage.Pixelformat == MvdPixelFormat::MVD_PIXEL_RGB_RGB24_C3)
	{
		int width = ioImage.stImage.Width;
		int height = ioImage.stImage.Height;
		long size = width * height * 3;
		byte* imageRedBuf = new byte[(long)width * height];
		byte* imageGreenBuf = new byte[(long)width * height];
		byte* imageBlueBuf = new byte[(long)width * height];
		byte* imageBuffer = new byte[size];
		memcpy(imageBuffer, ioImage.stImage.ImageData, size);
		int index = 0;
		for (int row = 0; row < height; row++)
		{
			for (int col = 0; col < width; col++, index += 3)
			{
				imageRedBuf[row * width + col] = imageBuffer[index];
				imageGreenBuf[row * width + col] = imageBuffer[index + 1];
				imageBlueBuf[row * width + col] = imageBuffer[index + 2];
			}
		}
		delete[] imageBuffer;
		image.GenImage3("byte", width, height, imageRedBuf, imageGreenBuf, imageBlueBuf);
		//GenImage3 copies the channel data, so the temporary buffers can be released here
		delete[] imageRedBuf;
		delete[] imageGreenBuf;
		delete[] imageBlueBuf;
	}
	return image;
}

5. Procedure images and operator images

In VM SDK development the procedure input and output images use ImageBaseData_V2, and in operator SDK development the operator input and output images use CMvdImage; the two can be converted into each other. The examples in this section are C# code.

5.1 Procedure image to operator image

public CMvdImage ImageBaseData_V2ToCMvdImage(ImageBaseData_V2 ImageBaseDataV2)
{
    VisionDesigner.CMvdImage cmvdImage = new VisionDesigner.CMvdImage();
    VisionDesigner.MVD_IMAGE_DATA_INFO stImageData = new VisionDesigner.MVD_IMAGE_DATA_INFO();
    if (VMPixelFormat.VM_PIXEL_MONO_08 == ImageBaseDataV2.Pixelformat)
    {
        stImageData.stDataChannel[0].nRowStep = (uint)ImageBaseDataV2.Width;
        stImageData.stDataChannel[0].nLen = (uint)(ImageBaseDataV2.Width * ImageBaseDataV2.Height);
        stImageData.stDataChannel[0].nSize = (uint)(ImageBaseDataV2.Width * ImageBaseDataV2.Height);
        byte[] m_BufForDriver1 = new byte[ImageBaseDataV2.Width * ImageBaseDataV2.Height];
        //Copy the data
        Marshal.Copy(ImageBaseDataV2.ImageData, m_BufForDriver1, 0, ((int)ImageBaseDataV2.Width * ImageBaseDataV2.Height));
        stImageData.stDataChannel[0].arrDataBytes = m_BufForDriver1;
        //Initialize the CMvdImage
        cmvdImage.InitImage((uint)ImageBaseDataV2.Width, (uint)ImageBaseDataV2.Height, MVD_PIXEL_FORMAT.MVD_PIXEL_MONO_08, stImageData);
    }
    else if (VMPixelFormat.VM_PIXEL_RGB24_C3 == ImageBaseDataV2.Pixelformat)
    {
        stImageData.stDataChannel[0].nRowStep = (uint)ImageBaseDataV2.Width * 3;
        stImageData.stDataChannel[0].nLen = (uint)(ImageBaseDataV2.Width * ImageBaseDataV2.Height * 3);
        stImageData.stDataChannel[0].nSize = (uint)(ImageBaseDataV2.Width * ImageBaseDataV2.Height * 3);
        byte[] m_BufForDriver1 = new byte[3 * (ImageBaseDataV2.Width * ImageBaseDataV2.Height)];
        //Copy the data
        Marshal.Copy(ImageBaseDataV2.ImageData, m_BufForDriver1, 0, ((int)(ImageBaseDataV2.Width * ImageBaseDataV2.Height) * 3));
        stImageData.stDataChannel[0].arrDataBytes = m_BufForDriver1;
        //Initialize the CMvdImage
        cmvdImage.InitImage((uint)ImageBaseDataV2.Width, (uint)ImageBaseDataV2.Height, MVD_PIXEL_FORMAT.MVD_PIXEL_RGB_RGB24_C3, stImageData);
    }
    return cmvdImage;
}

5.2 Operator image to procedure image

public ImageBaseData_V2 CMvdImageToImageBaseData_V2(CMvdImage cmvdImage)
{
    VM.PlatformSDKCS.ImageBaseData_V2 ImageBaseDataV2 = null;
    if (MVD_PIXEL_FORMAT.MVD_PIXEL_MONO_08 == cmvdImage.PixelFormat)
    {
        var cmvdImageData = cmvdImage.GetImageData();
        IntPtr imagedata = Marshal.AllocHGlobal(cmvdImageData.stDataChannel[0].arrDataBytes.Length);
        Marshal.Copy(cmvdImageData.stDataChannel[0].arrDataBytes, 0, imagedata, cmvdImageData.stDataChannel[0].arrDataBytes.Length);
        ImageBaseDataV2 = new ImageBaseData_V2(imagedata, (uint)cmvdImageData.stDataChannel[0].arrDataBytes.Length, (int)cmvdImage.Width, (int)cmvdImage.Height, VMPixelFormat.VM_PIXEL_MONO_08);
        //must be released manually after use
        //Marshal.FreeHGlobal(imagedata);
        //imagedata = IntPtr.Zero;
    }
    else if (MVD_PIXEL_FORMAT.MVD_PIXEL_RGB_RGB24_C3 == cmvdImage.PixelFormat)
    {

        var cmvdImageData = cmvdImage.GetImageData();
        IntPtr imagedata = Marshal.AllocHGlobal(cmvdImageData.stDataChannel[0].arrDataBytes.Length);
        Marshal.Copy(cmvdImageData.stDataChannel[0].arrDataBytes, 0, imagedata, cmvdImageData.stDataChannel[0].arrDataBytes.Length);
        ImageBaseDataV2 = new ImageBaseData_V2(imagedata, (uint)cmvdImageData.stDataChannel[0].arrDataBytes.Length, (int)cmvdImage.Width, (int)cmvdImage.Height, VMPixelFormat.VM_PIXEL_RGB24_C3);
        //must be released manually after use
        //Marshal.FreeHGlobal(imagedata);
        //imagedata = IntPtr.Zero;
    }
    return ImageBaseDataV2;
}

6. Converting algorithm-module images to/from Mat, Halcon images and operator images (C++)

The algorithm-module image can be converted to and from Mat, Halcon images and operator images. When developing a custom algorithm module in a C++ project, the algorithm image type is HKA_IMAGE.

6.1 HKA_IMAGE and Mat interconversion

Mat CAlgorithmModule::HKAImageToMat(HKA_IMAGE inputimage)
{
    Mat mat, mat1;
    if (inputimage.format == HKA_IMG_MONO_08)
    {
        mat = Mat(inputimage.height, inputimage.width, CV_8UC1, inputimage.data[0]);
    }
    else if (inputimage.format == HKA_IMG_RGB_RGB24_C3)
    {
        mat1 = Mat(inputimage.height, inputimage.width, CV_8UC3, inputimage.data[0]);
        cvtColor(mat1, mat, COLOR_RGB2BGR);
    }
    return mat;
}
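
For MONO_08 the Mat returned above wraps inputimage.data[0] without copying. If the HKA_IMAGE buffer may be released first, clone the result (the variable `inputImage` is illustrative):

cv::Mat safeMat = HKAImageToMat(inputImage).clone(); // own copy, independent of the HKA_IMAGE buffer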



HKA_IMAGE CAlgorithmModule::MatToHKAImage(Mat mat)
{	
	HKA_IMAGE inputimage;
	if (mat.channels() == 1)
	{
		inputimage = { HKA_IMG_MONO_08, 0 };
		inputimage.width = mat.cols;
		inputimage.height = mat.rows;
		inputimage.format = HKA_IMG_MONO_08;
		inputimage.step[0] = mat.cols;
		inputimage.data[0] = (char*)malloc(inputimage.width * inputimage.height);
		if (inputimage.data[0] != NULL)
		{
			memset(inputimage.data[0], 0, inputimage.width * inputimage.height);
			memcpy_s(inputimage.data[0], inputimage.width * inputimage.height, mat.data, inputimage.width * inputimage.height);
		}
	}
	else if (mat.channels() == 3)
	{
		cvtColor(mat, mat, COLOR_BGR2RGB);
		inputimage = { HKA_IMG_RGB_RGB24_C3, 0 };
		inputimage.width = mat.cols;
		inputimage.height = mat.rows;
		inputimage.format = HKA_IMG_RGB_RGB24_C3;
		inputimage.step[0] = mat.cols * 3;
		inputimage.data[0] = (char*)malloc(inputimage.width * inputimage.height * 3);
		if (inputimage.data[0] != NULL)
		{
			memset(inputimage.data[0], 0, inputimage.width * inputimage.height * 3);
			memcpy_s(inputimage.data[0], inputimage.width * inputimage.height * 3, mat.data, inputimage.width * inputimage.height * 3);
		}
	}
	return inputimage;
}
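
The HKA_IMAGE returned above owns a malloc'ed buffer in data[0]; a usage sketch showing its release (assuming `mat` is a valid 8UC1/8UC3 Mat and the call is made from within the algorithm module):

HKA_IMAGE hkaImage = MatToHKAImage(mat);
// ... use hkaImage in the algorithm module ...
free(hkaImage.data[0]);
hkaImage.data[0] = NULL;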

6.2 HKA_IMAGE and Halcon image interconversion

HImage CAlgorithmModule::HKAImageToHImage(HKA_IMAGE inputimage)
{
	HImage himage;
	if (HKA_IMG_MONO_08 == inputimage.format)
	{
		himage.GenImage1("byte", inputimage.width, inputimage.height, inputimage.data[0]);
	}
	if (HKA_IMG_RGB_RGB24_C3 == inputimage.format)
	{
		int width = inputimage.width;
		int height = inputimage.height;
		long size = width * height * 3;
		byte* imageRedBuf = new byte[(long)width * height];
		byte* imageGreenBuf = new byte[(long)width * height];
		byte* imageBlueBuf = new byte[(long)width * height];
		byte* imageBuffer = new byte[size];
		memcpy(imageBuffer, inputimage.data[0], size);
		int index = 0;
		for (int row = 0; row < height; row++)
		{
			for (int col = 0; col < width; col++, index += 3)
			{
				imageRedBuf[row * width + col] = imageBuffer[index];
				imageGreenBuf[row * width + col] = imageBuffer[index + 1];
				imageBlueBuf[row * width + col] = imageBuffer[index + 2];
			}
		}
		delete[] imageBuffer;
		himage.GenImage3("byte", width, height, imageRedBuf, imageGreenBuf, imageBlueBuf);
		//GenImage3 copies the channel data, so the temporary buffers can be released here
		delete[] imageRedBuf;
		delete[] imageGreenBuf;
		delete[] imageBlueBuf;
	}
	return himage;
}


HKA_IMAGE CAlgorithmModule::HImageToHKAIMAGE(HImage himage)
{
	HKA_IMAGE inputimage;
	HString type;
	Hlong width, height;
	if (himage.CountChannels() == 1)
	{
		inputimage = { HKA_IMG_MONO_08, 0 };
		void* imagePtr = himage.GetImagePointer1(&type, &width, &height);
		inputimage.width = width;
		inputimage.height = height;
		inputimage.format = HKA_IMG_MONO_08;
		inputimage.step[0] = width;
		inputimage.data[0] = (char*)malloc(inputimage.width * inputimage.height);
		if (inputimage.data[0] != NULL)
		{
			memset(inputimage.data[0], 0, inputimage.width * inputimage.height);
			memcpy_s(inputimage.data[0], inputimage.width * inputimage.height, imagePtr, inputimage.width * inputimage.height);
		}
	}
	else if (himage.CountChannels() == 3)
	{
		inputimage = { HKA_IMG_RGB_RGB24_C3, 0 };
		void* imageRedPtr;
		void* imageGreenPtr;
		void* imageBluePtr;
		himage.GetImagePointer3(&imageRedPtr, &imageGreenPtr, &imageBluePtr, &type, &width, &height);		
		long size = width * height * 3;
		byte* imageRedBuf = new byte[width * height];
		byte* imageGreenBuf = new byte[width * height];
		byte* imageBlueBuf = new byte[width * height];
		memcpy_s(imageRedBuf, width * height, imageRedPtr, width * height);
		memcpy_s(imageGreenBuf, width * height, imageGreenPtr, width * height);
		memcpy_s(imageBlueBuf, width * height, imageBluePtr, width * height);
		byte* imageBuffer = new byte[size];
		int index = 0;
		for (int row = 0; row < height; row++)
		{
			for (int col = 0; col < width; col++, index += 3)
			{
				imageBuffer[index] = imageRedBuf[row * width + col];
				imageBuffer[index + 1] = imageGreenBuf[row * width + col];
				imageBuffer[index + 2] = imageBlueBuf[row * width + col];
			}
		}
		delete[] imageRedBuf;
		delete[] imageGreenBuf;
		delete[] imageBlueBuf;
		inputimage.width = width;
		inputimage.height = height;
		inputimage.format = HKA_IMG_RGB_RGB24_C3;
		inputimage.step[0] = width * 3;
		inputimage.data[0] = (char*)malloc(inputimage.width * inputimage.height * 3);
		if (inputimage.data[0] != NULL)
		{
			memset(inputimage.data[0], 0, inputimage.width * inputimage.height * 3);
			memcpy_s(inputimage.data[0], inputimage.width * inputimage.height * 3, imageBuffer, inputimage.width * inputimage.height * 3);
		}
		delete[] imageBuffer;      
	}
	return inputimage;
}

6.3 HKA_IMAGE and operator image (IMvdImage) interconversion

IMvdImage* CAlgorithmModule::HKAImageToIMvdImage(HKA_IMAGE inputimage)
{
    IMvdImage* iMvdImage = NULL;
    CreateImageInstance(&iMvdImage);
    MVD_IMAGE_DATA_INFO stImageData;

    if (inputimage.format == HKA_IMG_MONO_08)
    {
        uint dataLen = (uint)(inputimage.width * inputimage.height);
        stImageData.stDataChannel[0].nRowStep = inputimage.width;
        stImageData.stDataChannel[0].nLen = dataLen;
        stImageData.stDataChannel[0].nSize = dataLen;
        stImageData.stDataChannel[0].pData = (unsigned char*)inputimage.data[0];
        iMvdImage->InitImage(inputimage.width, inputimage.height, MVD_PIXEL_MONO_08, stImageData);
    }
    else if (inputimage.format == HKA_IMG_RGB_RGB24_C3)
    {
        uint dataLen = (uint)(inputimage.width * inputimage.height * 3);
        stImageData.stDataChannel[0].nRowStep = inputimage.width * 3;
        stImageData.stDataChannel[0].nLen = dataLen;
        stImageData.stDataChannel[0].nSize = dataLen;
        stImageData.stDataChannel[0].pData = (unsigned char*)inputimage.data[0];
        iMvdImage->InitImage(inputimage.width, inputimage.height, MVD_PIXEL_RGB_RGB24_C3, stImageData);

    }
    return iMvdImage;
}


HKA_IMAGE CAlgorithmModule::IMvdImageToHKA_IMAGE(IMvdImage* iMvdImage)
{
    HKA_IMAGE inputimage;
    if (iMvdImage->GetPixelFormat() == MVD_PIXEL_MONO_08)
    {
        inputimage = { HKA_IMG_MONO_08, 0 };
        inputimage.width = iMvdImage->GetWidth();
        inputimage.height = iMvdImage->GetHeight();
        inputimage.format = HKA_IMG_MONO_08;
        inputimage.step[0] = iMvdImage->GetWidth();
        inputimage.data[0] = (char*)malloc(inputimage.width * inputimage.height);
        if (inputimage.data[0] != NULL)
        {
            memset(inputimage.data[0], 0, inputimage.width * inputimage.height);
            memcpy_s(inputimage.data[0], inputimage.width * inputimage.height, iMvdImage->GetImageData()->stDataChannel[0].pData, inputimage.width * inputimage.height);
        }
    }
    else if (iMvdImage->GetPixelFormat() == MVD_PIXEL_RGB_RGB24_C3)
    {
        inputimage = { HKA_IMG_RGB_RGB24_C3, 0 };
        inputimage.width = iMvdImage->GetWidth();
        inputimage.height = iMvdImage->GetHeight();
        inputimage.format = HKA_IMG_RGB_RGB24_C3;
        inputimage.step[0] = iMvdImage->GetWidth() * 3;
        inputimage.data[0] = (char*)malloc(inputimage.width * inputimage.height * 3);
        if (inputimage.data[0] != NULL)
        {
            memset(inputimage.data[0], 0, inputimage.width * inputimage.height * 3);
            memcpy_s(inputimage.data[0], inputimage.width * inputimage.height * 3, iMvdImage->GetImageData()->stDataChannel[0].pData, inputimage.width * inputimage.height * 3);
        }
    }
    return inputimage;
}
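
Likewise, IMvdImageToHKA_IMAGE returns a malloc'ed copy of the pixel data that the caller must free (sketch, assuming the call is made from within the algorithm module):

HKA_IMAGE hkaImg = IMvdImageToHKA_IMAGE(pMvdImage);
// ... use hkaImg ...
free(hkaImg.data[0]);
hkaImg.data[0] = NULL;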

Summary

The above covers how to convert between most common image types and the VM image types. In essence they are all the same operation: assigning or copying the image data (a pointer or byte array), the image width and height, and the pixel format. Conversions for other image types can be implemented along the same lines.
