How I used deep learning to classify medical images with Fast.ai

by James Dietle


Convolutional Neural Networks (CNNs) have rapidly advanced over the last two years, helping with medical image classification. How can we, even as hobbyists, take these recent advances and apply them to new datasets? We are going to walk through the process, and it's surprisingly more accessible than you might think.

As our family moved to Omaha, my wife (who is in a fellowship for pediatric gastroenterology) came home and said she wanted to use image classification for her research.


Oh, I was soooo ready.


For over two years, I have been playing around with deep learning as a hobby. I even wrote several articles (here and here). Now I had some direction on a problem. Unfortunately, I had no idea about anything in the gastrointestinal tract, and my wife hadn’t programmed since high school.


Start from the beginning

My entire journey into deep learning has been through the Fast.ai process. It started 2 years ago when I was trying to validate that all the "AI" and "Machine Learning" we were using in the security space wasn't over-hyped or biased. It was, and we steered clear of those technologies. The most sobering fact was learning that being an expert in the field takes a little experimentation.


Setup

I have used Fast.ai for all the steps, and the newest version is making this more straightforward than ever. The ways to create your learning environment are proliferating rapidly. There are now Docker images, Amazon AMIs, and services (like Crestle) that make it easier than ever to set up.


Whether you are the greenest of coding beginners or an experienced ninja, start here on the Fast.ai website.


I opted to build my machine learning rig during a previous iteration of the course. However, it is not necessary, and I would recommend using another service instead. Choose the easiest route for you and start experimenting.


Changes to Fast.ai with version 3

I have taken the other iterations of Fast.ai, and after reviewing the newest course, I noticed how much more straightforward everything was in the notebook. Documentation and examples are everywhere.


Let's dive into "lesson1-pets", and if you have set up Fast.ai, feel free to follow along in your personal Jupyter instance.


I prepared for the first lesson (typically distinguishing between 2 classes, cats and dogs) as I had many times before. However, I saw that this time we were doing something much more complex: classifying 33 breeds of cats and dogs using fewer lines of code.

The CNN was up and learning in 7 lines of code!!


That wasn’t the only significant change. Another huge stride was in showing errors. For example, we could quickly see a set of the top losses (items we confidently predicted wrong) and corresponding pet pictures from our dataset below.

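The top-losses view boils down to ranking validation examples by their cross-entropy loss, so the predictions the model was most confidently wrong about surface first. Here is a minimal sketch of that ranking with made-up probabilities; the helper name and data are my own, not fastai's API:

```python
import math

def top_losses(probs, targets, k=2):
    """Return (loss, index) pairs for the k most confidently-wrong examples.

    probs: per-example lists of predicted class probabilities
    targets: true class index for each example
    """
    # Cross-entropy loss per example: -log(probability given to the true class)
    losses = [-math.log(p[t]) for p, t in zip(probs, targets)]
    # Sort descending so the worst predictions come first
    return sorted(zip(losses, range(len(losses))), reverse=True)[:k]

# Example 1 puts almost no probability on its true class, so it ranks first.
probs = [[0.9, 0.1], [0.8, 0.2], [0.4, 0.6]]
targets = [0, 1, 1]
worst = top_losses(probs, targets, k=2)
```

In the notebook, the equivalent comes from fastai's interpretation object, which additionally displays the offending images.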

This function was pretty much a spot check for bad data, ensuring a lion, tiger, or bear didn't sneak into the set. We could also see if there were glaring errors that were obvious to us.


The confusion matrix was even more beneficial to me. It allowed me to look across the whole set for patterns in misclassification between the 33 breeds.

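A confusion matrix is just a tally of (actual, predicted) pairs, with correct predictions on the diagonal. A minimal sketch, using a few hypothetical class names in place of the real breed labels:

```python
from collections import Counter

def confusion_matrix(actual, predicted, classes):
    """Rows are actual classes, columns are predicted classes."""
    counts = Counter(zip(actual, predicted))
    return [[counts[(a, p)] for p in classes] for a in classes]

classes = ["beagle", "boxer", "sphynx"]  # illustrative, not the full 33
actual = ["beagle", "beagle", "boxer", "sphynx"]
predicted = ["beagle", "boxer", "boxer", "sphynx"]
matrix = confusion_matrix(actual, predicted, classes)
# matrix[0][1] == 1: one beagle was mistaken for a boxer
```

Reading across a row shows where a given class's images ended up; large off-diagonal cells are the misclassification patterns worth investigating.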

Of the 33 breeds presented, we could see where our data diverged and ask ourselves if it made sense. A few breeds popped out in particular, and here are examples of the commonly confused images:


Not being a pet owner or enthusiast, I wouldn't have been able to figure out these subtle details about a breed's features. The model is doing a much better job than I would've been able to do! While I am certainly getting answers, I am also curious to find the missing feature or piece of data that would improve the model.

There is an important caveat. We are now at the point where the model is teaching us about the data. Sometimes we can get stuck in a mindset where the output is the end of the process. If we fall into that trap, we might miss a fantastic opportunity to create a positive feedback loop.


Therefore, we are sitting a little wiser and a little more confident in the 4th phase. Given this data, what should I do to improve accuracy?

  • More training
  • More images
  • More powerful architecture

Trick question! I am going to look at a different dataset. Let's get up close and personal with endoscope images of people's insides.

Get the dataset, see a whole lot of sh… stuff

For anyone else interested in gastroenterology, I recommend looking into The Kvasir Dataset. A good description from their site is:


the dataset containing images from inside the gastrointestinal (GI) tract. The collection of images are classified into three important anatomical landmarks and three clinically significant findings. In addition, it contains two categories of images related to endoscopic polyp removal. Sorting and annotation of the dataset is performed by medical doctors (experienced endoscopists)

There is also a research paper by experts (Pogorelov et al.) describing how they tackled the problem, which includes their findings.


Perfect, this is an excellent dataset to move from pets to people. Although it's a less cuddly dataset (one that also includes stool samples), it is exciting and complete.

As we download the data, the first thing we notice is that there are 8 classes in this dataset for us to classify instead of the 33 from before. However, it shouldn’t change any of our other operations.


Side Note: Originally, I spent a few hours scripting out how to move folders into validation folders, and spent a good amount of time setting everything up. The scripting effort turned out to be a waste of time because there is already a simple function to create a validation set.

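That built-in helper amounts to holding out a random fraction of the files. A rough stand-in for what a valid_pct-style split does (my own sketch, not fastai's implementation):

```python
import random

def split_by_pct(items, valid_pct=0.2, seed=42):
    """Shuffle deterministically and hold out valid_pct of items for validation."""
    rng = random.Random(seed)
    shuffled = list(items)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * valid_pct)
    return shuffled[cut:], shuffled[:cut]  # (train, valid)

files = [f"img_{i:04d}.jpg" for i in range(100)]
train, valid = split_by_pct(files, valid_pct=0.2)
```

Fixing the seed keeps the same images in the validation set across runs, so accuracy numbers from different experiments stay comparable.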

The lesson is “if something is a pain, chances are someone from the Fast.ai community has already coded it for you.”


Diving into the notebook

You can pick up my Jupyter notebook from GitHub here.


Building for speed and experimentation

As we start experimenting, it is crucial to get the framework correct. Try setting up the minimum needed to get things working, in a way that can scale up later. Make sure data is being taken in and processed, and that the outputs make sense.

This means:


  • Use smaller batches
  • Use lower numbers of epochs
  • Limit transforms

If a run is taking longer than 2 minutes, figure out a way to go faster. Once everything is in place, we can get crazy.

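One way to hold yourself to that budget is to time each experiment step; the 2-minute threshold below is just this article's rule of thumb, and the decorated function is a stand-in for a real training run:

```python
import time

def timed(fn):
    """Run an experiment step and report how long it took."""
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        elapsed = time.perf_counter() - start
        if elapsed > 120:  # longer than 2 minutes: shrink batches/epochs first
            print(f"{fn.__name__} took {elapsed:.1f}s -- find a way to go faster")
        return result, elapsed
    return wrapper

@timed
def tiny_run():
    # Stand-in for a quick training run
    return sum(i * i for i in range(1000))

result, elapsed = tiny_run()
```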

Data Handling

Data prioritization, organization, grooming, and handling are the most important aspects of deep learning. Here is a crude picture showing how data handling occurs, or you can read the documentation.

Therefore we need to do the same thing for the endoscope data, and it is one line of code.


Explaining the variables:


  • Path points to our data (#1)
  • The validation set at 20% to properly create dataloaders
  • Default transforms
  • The image size set at 224

That’s it! The data block is all set up and ready for the next phase.

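For context, that one-liner leans on the folder-per-class convention: an image's label is simply the name of the directory it sits in. A toy sketch of that labeling step, with illustrative class folders built in a temp directory:

```python
import os
import tempfile

def label_from_folders(root):
    """Map each file path to the name of its parent folder (its class)."""
    labeled = {}
    for cls in sorted(os.listdir(root)):
        cls_dir = os.path.join(root, cls)
        if os.path.isdir(cls_dir):
            for fname in os.listdir(cls_dir):
                labeled[os.path.join(cls_dir, fname)] = cls
    return labeled

# Toy directory tree standing in for the extracted dataset
root = tempfile.mkdtemp()
for cls in ["polyps", "esophagitis"]:
    os.makedirs(os.path.join(root, cls))
    open(os.path.join(root, cls, "0.jpg"), "w").close()

labels = label_from_folders(root)
```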

Resnet

We have data, and we need to decide on an architecture. Nowadays Resnet is popularly used for image classification. It has a number after it which equates to the number of layers. Many better articles exist about Resnet, so to simplify for this article:

More layers = more accurate (Hooray!)
More layers = more compute and time needed (Boo..)

Therefore Resnet34 has 34 layers of image finding goodness.

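The "Res" part is the residual (skip) connection: each block learns a correction added on top of its input rather than a full replacement, which is what lets the extra layers help instead of hurt. A toy numeric illustration of the idea, not a real network:

```python
def residual_block(x, f):
    """Output is the input plus a learned correction, not f(x) alone."""
    return x + f(x)

def stack(x, blocks):
    for f in blocks:
        x = residual_block(x, f)
    return x

# If every correction is zero, 34 stacked blocks still pass the signal
# through untouched -- depth alone doesn't degrade the input.
blocks = [lambda v: 0.0] * 34
out = stack(5.0, blocks)
```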

Ready? I'm ready!

With the structured data, architecture, and a default error metric we have everything we need for the learner to start fitting.


Let’s look at some code:


We see that after the cycles and 7 minutes we get to 87% accuracy. Not bad. Not bad at all.


Not being a doctor, I am looking at these with a very untrained eye. I have no clue what to look for, whether there are categorization errors, or if the data is any good. So I went straight to the confusion matrix to see where mistakes were being made.

Of the 8 classes, 2 sets of 2 are often confused with each other. As a baseline, I could only tell whether an image showed dye, polyps, or something else. So compared to my personal baseline of 30% accuracy, the machine is getting an amazing 87%.

After looking at the images from these 2 sets side by side, you can see why. (Since they are medical images, they might be NSFW and are present in the Jupyter notebook.)


  1. The dyed sections are being confused with each other. This type of error can be expected. They are both blue and look very similar to each other.
  2. Esophagitis is hard to tell from a normal Z-line. Perhaps esophagitis presents redder than the Z-line? I'm not certain.

Regardless, everything seems great, and we need to step up our game.


More layers, more images, more power!

Now that we see our super fast model working, let’s switch over to the powerhouse.


  • I increased the size of the dataset from v1 to v2. The larger set doubles the number of images available from 4000 to 8000. (Note: All examples in this article show v2.)
  • Transform everything that makes sense. There are lots of things you can tweak. We are going to go into more of this shortly.
  • Since the images from the dataset are relatively large, I decided to try making the size bigger. Although this would be slower, I was curious if it would be better able to pick out little details. This hypothesis still requires some experimentation.
  • More and more epochs.
  • If you remember from before, Resnet50 would have more layers (be more accurate) but would require more compute time and therefore be slower. So we will change the model from Resnet34 to Resnet50.

Transforms: Getting the most of an image

Image transforms are a great way to improve accuracy. If we make random changes to an image (rotate, change color, flip, etc.) we can make it seem like we have more images to train from and we are less likely to overfit. Is it as good as getting more images? No, but it’s fast and cheap.


When choosing which transforms to use, we want something that makes sense. Here are some examples of normal transforms of the same image if we were looking at dog breeds. If any of these individually came into the dataset, we would think it makes sense. Putting in transforms, we now have 8 images for every 1.
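Those 8 variants correspond to the dihedral transforms: 4 rotations, each with an optional flip. On a tiny grid standing in for an image:

```python
def rotate90(img):
    """Rotate a grid (list of rows) 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def flip_h(img):
    """Mirror a grid left-to-right."""
    return [row[::-1] for row in img]

def dihedral(img):
    """All 8 rotation/flip variants: 4 rotations of the original, 4 of its mirror."""
    variants = []
    for base in (img, flip_h(img)):
        current = base
        for _ in range(4):
            variants.append(current)
            current = rotate90(current)
    return variants

img = [[1, 2], [3, 4]]
variants = dihedral(img)
```

Each variant is a distinct grid, so a single labeled image effectively contributes 8 training examples.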

What if in the transformation madness we go too far? We could get the images below that are a little too extreme. We wouldn't want to use many of these because they are not clear and do not orient in a direction we would expect data to come in. While a dog could be tilted, it would never be upside down.

For the endoscope images, we are not as concerned about them being upside down or over-tilted. An endoscope goes all over the place and can rotate a full 360 degrees, so I went wild with rotational transforms, and even a bit with color, as the lighting inside the body would be different. All of these seem to be in the realm of possibility.

(Note: the green box denotes how far the scope traveled. Therefore, this technique might be cutting off the value that it could have provided.)

Reconstructing data and launching

Now we can see how to add transforms and how we would shift other variables for data:


Then we change the learner:


Then we are ready to fire!


Many epochs later…


93% accurate! Not that bad. Let's look at the confusion matrix again.

It looks like the problem with dyed classification has gone away, but the esophagitis errors remain. In fact, the number of errors got worse in some of my iterations.

Can this run in production?

Yes, there are instructions to quickly host this information as a web service. As long as the license isn’t up and you don’t mind waiting… you can try it on Render right here!

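The hosting route wraps the exported model in a small web endpoint. Here is a stripped-down, standard-library-only sketch with a stub in place of the real model; the handler shape and JSON fields are my own illustration, not the guide's:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(image_bytes):
    """Stub for model inference; real code would load the exported learner
    and classify the uploaded image."""
    return {"class": "polyps", "confidence": 0.93}  # hypothetical output

class InferenceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the uploaded image and return the prediction as JSON
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        payload = json.dumps(predict(body)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

# To serve for real: HTTPServer(("", 8080), InferenceHandler).serve_forever()
demo = predict(b"fake-image-bytes")
```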

Conclusion and Follow-up

As you can see, it is straightforward to transfer the new course from Fast.ai to a different dataset. Much more accessible than ever before.


When going through testing, make sure you start with a fast concept to check that everything is on the right path, then turn up the power later. Create a positive feedback loop, both to make sure you are oriented correctly and as a mechanism to force you to learn more about the dataset. You will have a much richer experience in doing so.

Some observations on this dataset:

  • I was solving this problem the wrong way. I was using a single classifier when these slides have multiple classifications. I discovered this later while reading the research paper. Don't wait until the end to read papers!
  • As a multi-classification problem, I should be including bounding boxes for essential features.
  • Classifications could benefit from a feature describing how far the endoscope is in the body. Significant landmarks in the body would help to classify the images. The small green box on the bottom left of the images is a map describing where the endoscope is and might be a useful feature to explore.
  • If you haven't seen the new fast.ai course, take a look. It took me more time to write this post than it did to code the program; it was that simple.

Resources


Originally published at: https://www.freecodecamp.org/news/how-i-used-deep-learning-to-classify-medical-images-with-fast-ai-cc4cfd64173c/
