QuPath Learning ② H&E Scripts

The paper describes in detail the workflow for the H&E scripts: computing the tumor-stroma percentage in H&E-stained whole slide images.

Steps:

1. Add the relevant slides to a QuPath project.

2. For each slide in the project, draw an annotation around a representative tumor region.

3. Run "Estimate_background_values.groovy" to automatically determine appropriate "white" values, improving the optical density calculations. This is optional, but may help for darker scans; it presupposes that a background region is available somewhere in the image.

4. Run "H&E_superpixels_script.groovy" to compute superpixels and features within the annotated regions.

5. Use "Classify -> Create detection classifier" to interactively train a classifier to distinguish the classes within the image. Once done, apply this classifier to all images.

6. Run "Export_H&E_tumor_areas.groovy" to export the results.


To compute the tumor-stroma percentage (TSP) of an H&E-stained pathology WSI, we can use QuPath, an open-source software package designed for digital pathology analysis. The detailed steps are as follows:

1. Open QuPath and import the H&E-stained WSI image file.
2. After the image is imported, choose "Create new annotation" from the "Annotation" toolbar to create a new annotation region.
3. Use the "Polygon" tool in the "Annotation" toolbar to select a region of interest in the image that contains both tumor and stromal tissue.
4. Click the "Classify" button in the "Annotation" toolbar and choose "Tissue detection" to detect tissue regions.
5. In the "Classify" window, select "Hematoxylin & Eosin" as the color channel and adjust other parameters as needed.
6. Click "Classify" to start segmenting the tissue in the image.
7. Once segmentation is complete, QuPath automatically separates the tissue regions into tumor and stroma based on color and shape; these regions are labeled as different classes.
8. In the "Image Hierarchy" window on the left, expand the annotation hierarchy of the image and locate the annotation regions containing tumor and stroma.
9. Right-click the annotation region and choose "Measurements" > "Calculate features" from the pop-up menu.
10. In the "Calculate features" window, select "Area" as the feature to compute.
11. After confirming the selection, QuPath computes the areas of the tumor and stroma regions.
12. The ratio of the computed tumor and stroma areas gives the tumor-stroma percentage (TSP).

Following these steps, we can use QuPath to compute the TSP of an H&E-stained pathology WSI. This percentage indicates the relative amounts of tumor and stromal tissue in the WSI image, which is important for pathology research and diagnosis.
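The TSP calculation in the final step is simple arithmetic: the stromal area divided by the combined tumor and stromal area. A minimal Groovy sketch, using made-up example areas (the variable names are illustrative, not QuPath API):

```groovy
// Hypothetical areas in mm^2, as might be measured in the steps above
double areaTumor = 12.5
double areaStroma = 7.5

// Tumor-stroma percentage: stromal fraction of the combined tumor + stroma area
double tsp = areaStroma / (areaTumor + areaStroma) * 100.0
println "TSP = ${tsp}%" // 37.5%
```

This matches the stromal percentage computed in "Export_H&E_tumor_areas.groovy" below.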


1. Estimate_background_values.groovy
/**
* Script to estimate the background value in a brightfield whole slide image.
*
* This effectively looks for the circle of a specified radius with the highest
* mean intensity (summed across all 3 RGB channels), and takes the mean red,
* green and blue within this circle as the background values.
*
* The background values are then set in the ColorDeconvolutionStains object
* for the ImageData, so that they are used for any optical density calculations.
*
* This is implemented with the help of ImageJ (www.imagej.net).
*
* @author Pete Bankhead
*/
import ij.plugin.filter.RankFilters
import ij.process.ColorProcessor
import qupath.imagej.images.servers.ImagePlusServerBuilder
import qupath.lib.common.ColorTools
import qupath.lib.regions.RegionRequest
import qupath.lib.scripting.QP
// Radius used for background search, in microns (will be used approximately)
double radiusMicrons = 1000
// Calculate a suitable 'preferred' pixel size
// Keep in mind that downsampling involves a form of smoothing, so we just choose
// a reasonable filter size and then adapt the image resolution accordingly
double radiusPixels = 10
double requestedPixelSizeMicrons = radiusMicrons / radiusPixels
// Get the ImageData & ImageServer
def imageData = QP.getCurrentImageData()
def server = imageData.getServer()
// Check we have the right kind of data
if (!imageData.isBrightfield() || !server.isRGB()) {
 print("ERROR: Only brightfield RGB images can be processed with this script, sorry")
 return
}
// Extract pixel size
double pixelSize = server.getAveragedPixelSizeMicrons()
// Choose a default if we need one (i.e. if the pixel size is missing from the image metadata)
if (Double.isNaN(pixelSize))
 pixelSize = 0.5
// Figure out suitable downsampling value
double downsample = Math.round(requestedPixelSizeMicrons / pixelSize)
// Get a downsampled version of the image as an ImagePlus (for ImageJ)
def request = RegionRequest.createInstance(server.getPath(), downsample, 0, 0, server.getWidth(), server.getHeight())
def serverIJ = ImagePlusServerBuilder.ensureImagePlusWholeSlideServer(server)
def pathImage = serverIJ.readImagePlusRegion(request)
// Check we have an RGB image (we should at this point)
def imp = pathImage.getImage(false)
def ip = imp.getProcessor()
if (!(ip instanceof ColorProcessor)) {
 print("Sorry, the background can only be set for a ColorProcessor, but the current ImageProcessor is " + ip)
 return
}
// Apply filter
if (ip.getWidth() <= radiusPixels*2 || ip.getHeight() <= radiusPixels*2) {
 print("The image is too small for the requested radius!")
 return
}
new RankFilters().rank(ip, radiusPixels, RankFilters.MEAN)
// Find the location of the maximum across all 3 channels
double maxValue = Double.NEGATIVE_INFINITY
double maxRed = 0
double maxGreen = 0
double maxBlue = 0
for (int y = radiusPixels; y < ip.getHeight()-radiusPixels; y++) {
    for (int x = radiusPixels; x < ip.getWidth()-radiusPixels; x++) {
        int rgb = ip.getPixel(x, y)
        int r = ColorTools.red(rgb)
        int g = ColorTools.green(rgb)
        int b = ColorTools.blue(rgb)
        double sum = r + g + b
        if (sum > maxValue) {
            maxValue = sum
            maxRed = r
            maxGreen = g
            maxBlue = b
        }
    }
}
// Print the result
print("Background RGB values: " + maxRed + ", " + maxGreen + ", " + maxBlue)
// Set the ImageData stains
def stains = imageData.getColorDeconvolutionStains()
def stains2 = stains.changeMaxValues(maxRed, maxGreen, maxBlue)
imageData.setColorDeconvolutionStains(stains2)
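The core of the script above is the search loop: after the mean filter, each pixel holds the mean of a disc around it, so the pixel with the highest R+G+B sum marks the brightest, most background-like circle. That selection logic can be illustrated in plain Groovy on a toy pixel list (the packed-RGB helpers below are illustrative stand-ins for the ImageJ/QuPath classes):

```groovy
// Toy stand-in for the search in the script above: pick the pixel whose
// R+G+B sum is largest, and record its channel values.
int red(int rgb)   { (rgb >> 16) & 0xFF }
int green(int rgb) { (rgb >> 8) & 0xFF }
int blue(int rgb)  { rgb & 0xFF }

def pixels = [0x804020, 0xF0E8E0, 0x102030] // packed RGB values
double maxValue = Double.NEGATIVE_INFINITY
int maxRed = 0, maxGreen = 0, maxBlue = 0
for (int rgb : pixels) {
    double sum = red(rgb) + green(rgb) + blue(rgb)
    if (sum > maxValue) {
        maxValue = sum
        maxRed = red(rgb); maxGreen = green(rgb); maxBlue = blue(rgb)
    }
}
println "Background RGB values: $maxRed, $maxGreen, $maxBlue"
```

Here the near-white pixel 0xF0E8E0 wins, giving 240, 232, 224 as the "background" values.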
2. H&E_superpixels_script.groovy
/*
* Generate superpixels & compute features for H&E tumor stromal analysis using QuPath.
* 
* @author Pete Bankhead
*/
// Compute superpixels within annotated regions
selectAnnotations()
runPlugin('qupath.imagej.superpixels.SLICSuperpixelsPlugin', '{"sigmaMicrons": 5.0, "spacingMicrons": 40.0, "maxIterations": 10, "regularization": 0.25, "adaptRegularization": false, "useDeconvolved": false}');
// Add Haralick texture features based on optical densities, and mean Hue for each superpixel
selectDetections();
runPlugin('qupath.lib.algorithms.IntensityFeaturesPlugin', '{"pixelSizeMicrons": 2.0, "region": "ROI", "tileSizeMicrons": 25.0, "colorOD": true, "colorStain1": false, "colorStain2": false, "colorStain3": false, "colorRed": false, "colorGreen": false, "colorBlue": false, "colorHue": true, "colorSaturation": false, "colorBrightness": false, "doMean": true, "doStdDev": false, "doMinMax": false, "doMedian": false, "doHaralick": true, "haralickDistance": 1, "haralickBins": 32}');
// Add smoothed measurements to each superpixel
selectAnnotations();
runPlugin('qupath.lib.plugins.objects.SmoothFeaturesPlugin', '{"fwhmMicrons": 50.0, "smoothWithinClasses": false, "useLegacyNames": false}');
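The second argument of each runPlugin() call is a JSON string of plugin parameters. If you want to tweak a value programmatically rather than hand-editing the string, you can round-trip it through Groovy's built-in JSON support (a sketch; the parameter names are taken from the superpixel call above):

```groovy
import groovy.json.JsonSlurper
import groovy.json.JsonOutput

// Parse the parameter string used for the SLIC superpixel plugin
def params = new JsonSlurper().parseText(
    '{"sigmaMicrons": 5.0, "spacingMicrons": 40.0, "maxIterations": 10, "regularization": 0.25, "adaptRegularization": false, "useDeconvolved": false}')

// Adjust one parameter, e.g. for coarser superpixels
params.spacingMicrons = 50.0

// Serialize back to a JSON string suitable for runPlugin(...)
def json = JsonOutput.toJson(params)
println json
```

This keeps the parameter set in one place if you run the script with several spacing values.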

3. Export_H&E_tumor_areas.groovy
/**
* Script to compute areas for detection objects with different classifications.
*
* Primarily useful for converting classified superpixels into areas.
*
* @author Pete Bankhead
*/
import qupath.lib.common.GeneralTools
import qupath.lib.gui.QuPathGUI
import qupath.lib.images.servers.ImageServer
import qupath.lib.objects.PathDetectionObject
import qupath.lib.objects.PathObject
import qupath.lib.objects.classes.PathClass
import qupath.lib.objects.classes.PathClassFactory
import qupath.lib.objects.hierarchy.PathObjectHierarchy
import qupath.lib.roi.interfaces.PathArea
import qupath.lib.roi.interfaces.ROI
import qupath.lib.scripting.QPEx
import qupath.lib.gui.ImageWriterTools
import qupath.lib.regions.RegionRequest
import java.awt.image.BufferedImage
//----------------------------------------------------------------------------
// If 'exportImages' is true, overview images will be written to an 'export' subdirectory
// of the current project, otherwise no images will be exported
boolean exportImages = true
// Define which classes to analyze, in terms of determining areas etc.
def classesToAnalyze = ["Tumor", "Stroma", "Other"]
// Format output 3 decimal places
int nDecimalPlaces = 3
// If the patient ID is encoded in the filename, a closure defined here can parse it
// (By default, this simply strips off anything after the final dot - assumed to be the file extension)
def parseUniqueIDFromFilename = { n ->
    int dotInd = n.lastIndexOf('.')
    if (dotInd < 0)
        return n
    return n.substring(0, dotInd)
}
// The following closure handles the specific naming scheme used for the images in the paper
// Uncomment to apply this, rather than the default method above
//parseUniqueIDFromFilename = {n ->
// def uniqueID = n.trim()
// if (uniqueID.charAt(3) == '-')
// uniqueID = uniqueID.substring(0, 3) + uniqueID.substring(4)
// return uniqueID.split("[-. ]")[0]
// }
//----------------------------------------------------------------------------
// Convert class name to PathClass objects
Set<PathClass> pathClasses = new TreeSet<>()
for (String className : classesToAnalyze) {
 pathClasses.add(PathClassFactory.getPathClass(className))
}
// Get a formatter
// Note: to run this script in earlier versions of QuPath, 
// change createFormatter to getFormatter
def formatter = GeneralTools.createFormatter(nDecimalPlaces)
// Get access to the current ImageServer - required for pixel size information
ImageServer<?> server = QPEx.getCurrentImageData().getServer()
double pxWidthMM = server.getPixelWidthMicrons() / 1000
double pxHeightMM = server.getPixelHeightMicrons() / 1000
// Get access to the current hierarchy
PathObjectHierarchy hierarchy = QPEx.getCurrentHierarchy()
// Loop through detection objects (here, superpixels) and increment total area calculations for each class
Map<PathClass, Double> areaMap = new TreeMap<>()
double areaTotalMM = 0
for (PathObject tile : hierarchy.getObjects(null, PathDetectionObject.class)) {
    // Check the classification
    PathClass pathClass = tile.getPathClass()
    if (pathClass == null)
        continue
    // Add current area
    ROI roi = tile.getROI()
    if (roi instanceof PathArea) {
        double area = ((PathArea)roi).getScaledArea(pxWidthMM, pxHeightMM)
        areaTotalMM += area
        if (areaMap.containsKey(pathClass))
            areaMap.put(pathClass, area + areaMap.get(pathClass))
        else
            areaMap.put(pathClass, area)
    }
}
// Loop through each classification & prepare output
double areaSumMM = 0
double areaTumor = 0
double areaStroma = 0
// Include image name & ID
String delimiter = "\t"
StringBuilder sbHeadings = new StringBuilder("Image").append(delimiter).append("Unique ID")
String uniqueID = parseUniqueIDFromFilename(server.getShortServerName())
StringBuilder sb = new StringBuilder(server.getShortServerName()).append(delimiter).append(uniqueID)
// Compute areas per class
for (PathClass pathClass : pathClasses) {
    // Extract area from map - or zero, if it does not occur in the map
    double area = areaMap.containsKey(pathClass) ? areaMap.get(pathClass) : 0
    // Update total & record tumor area, if required
    areaSumMM += area
    if (pathClass.getName().toLowerCase().contains("tumor") || pathClass.getName().toLowerCase().contains("tumour"))
        areaTumor += area
    if (pathClass.getName().toLowerCase().contains("stroma"))
        areaStroma += area
    // Display area for classification
    sbHeadings.append(delimiter).append(pathClass.getName()).append(" Area mm^2")
    sb.append(delimiter).append(formatter.format(area))
}
// Append the total area
sbHeadings.append(delimiter).append("Total area mm^2")
sb.append(delimiter).append(formatter.format(areaTotalMM))
// Append the calculated stromal percentage
sbHeadings.append(delimiter).append("Stromal percentage")
sb.append(delimiter).append(formatter.format(areaStroma / (areaTumor + areaStroma) * 100.0))
// Export images in a project sub-directory, if required
if (exportImages) {
 // Create the directory, if required
 def path = QPEx.buildFilePath(QPEx.PROJECT_BASE_DIR, "export")
 QPEx.mkdirs(path)
 // We need to get the display settings (colors, line thicknesses, opacity etc.) from the current viewer
 def overlayOptions = QuPathGUI.getInstance().getViewer().getOverlayOptions()
 def imageData = QPEx.getCurrentImageData()
 def name = uniqueID
 // Aim for an output resolution of approx 20 µm/pixel
 double requestedPixelSize = 20
 double downsample = requestedPixelSize / server.getAveragedPixelSizeMicrons()
 RegionRequest request = RegionRequest.createInstance(imageData.getServerPath(), downsample, 0, 0, server.getWidth(), server.getHeight())
 // Write output image, with and without overlay
 def dir = new File(path)
 def fileImage = new File(dir, name + ".jpg")
 BufferedImage img = ImageWriterTools.writeImageRegion(imageData.getServer(), request, fileImage.getAbsolutePath())
 def fileImageWithOverlay = new File(dir, name + "-overlay.jpg")
 ImageWriterTools.writeImageRegionWithOverlay(img, imageData, overlayOptions, request, fileImageWithOverlay.getAbsolutePath())
}
// Write header to output file if it's empty, and print on screen
sbHeadings.append("\n")
sb.append("\n")
def fileOutput = new File(QPEx.buildFilePath(QPEx.PROJECT_BASE_DIR, "Results.txt"))
if (fileOutput.length() == 0) {
 print(sbHeadings.toString())
 fileOutput << sbHeadings.toString()
}
// Write data to output file & print on screen
fileOutput << sb.toString()
print(sb.toString())
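Results.txt is tab-delimited, with one header row and one row per image, so it can be consumed by any downstream tool. A minimal Groovy sketch of parsing such a row back into a map (the column names follow the headings built above; the sample values are made up):

```groovy
// Parse one tab-delimited row of Results.txt into a header -> value map
def header = "Image\tUnique ID\tTumor Area mm^2\tStroma Area mm^2\tOther Area mm^2\tTotal area mm^2\tStromal percentage"
def row    = "slide1.svs\tslide1\t12.500\t7.500\t1.000\t21.000\t37.500"

def cols = header.split('\t')
def vals = row.split('\t')

// Pair each heading with its value
def record = [cols, vals].transpose().collectEntries { k, v -> [(k): v] }

println record['Stromal percentage'] // 37.500
```

From here the values can be converted to numbers and aggregated across the project.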


References:

1. QuPath: Open source software for digital pathology image analysis - PMC (nih.gov)

2. 【WSI/QuPath】如何使用QuPath导出切片(patch/tile) - CSDN博客
