NPerf, A Performance Benchmark Framework for .Net


Introduction

This article presents NPerf, a flexible performance benchmark framework. The framework provides custom attributes that the user uses to tag benchmark classes and methods. If you are familiar with NUnit [1], this is similar to the custom attributes it provides.

The framework uses reflection to gather the benchmark testers and the tested types, runs the tests, and outputs the results. The user just has to write the benchmark methods.

At the end of the article, I illustrate NPerf with some metaphysical .Net questions: interfaces vs. delegates, a string concatenation race, and the fastest dictionary.

QuickStart: Benchmarking IDictionary

Let's start with a small introductory example: benchmarking the [] assignment for the different implementations of IDictionary. To do so, we would like to test the assignment with a growing number of assignment calls.

All the custom attributes are located in the NPerf.Framework namespace, in the NPerf.Framework.dll assembly.

PerfTester attribute: defining testers

First, you need to create a tester class that will contain the methods that do the benchmark. This tester class has to be decorated with the PerfTester attribute.

using System.Collections;
using NPerf.Framework;

[PerfTester(typeof(IDictionary), 10)]
public class DictionaryTester
{
    ...
}

The PerfTesterAttribute constructor takes two arguments:

  • the Type of the tested class, interface, or struct,
  • the number of test runs. The framework will use this value to call the test methods multiple times (explained below).

PerfTest attribute: adding benchmark tests

The PerfTest attribute marks a method, inside a class already decorated with the PerfTester attribute, as a performance test method. The method should take the tested type as its parameter, IDictionary here, and its return type should be void:

[PerfTester(typeof(IDictionary), 10)]
public class DictionaryTester
{
    // explained below
    private int count;
    private Random rnd = new Random();

    [PerfTest]
    public void ItemAssign(IDictionary dic)
    {
        for (int i = 0; i < this.count; ++i)
            dic[rnd.Next()] = null;
    }
}

PerfSetUp and PerfTearDown Attributes

Often, you will need to set up your tester and tested class before actually starting the benchmark test. In our example, we want to update the number of insertions depending on the test repetition number. The PerfSetUp attribute can be used to tag a method that will be called before each test repetition. In our test case, we use this method to update the DictionaryTester.count member:

[PerfTester(typeof(IDictionary), 10)]
public class DictionaryTester
{
    private int count;
    private Random rnd = new Random();

    [PerfSetUp]
    public void SetUp(int index, IDictionary dic)
    {
        this.count = index * 1000;
    }
}


The set-up method must return void and take two arguments:

  • index, the current test repetition index. This value can be used to modify the number of elements tested, collection size, etc.
  • dic, the tested class instance

If you need to clean up resources after the tests have run, you can use the PerfTearDown attribute to tag a cleanup method:

[PerfTester(typeof(IDictionary), 10)]
public class DictionaryTester
{
    ...

    [PerfTearDown]
    public void TearDown(IDictionary dic)
    {
        ...
    }
}


PerfRunDescriptor attribute: giving some information to the framework

In our example, we test the IDictionary object with an increasing number of elements. It would be nice to store this number in the results rather than just the test index: we would like to store 1000, 2000, ... and not 1, 2, ...

The PerfRunDescriptor attribute can be used to tag a method that computes a double from the test index. This double is typically used as the x coordinate when charting the results.

[PerfTester(typeof(IDictionary), 10)]
public class DictionaryTester
{
    [PerfRunDescriptor]
    public double Count(int index)
    {
        return index * 1000;
    }
}


Full example source

The full source of the example is as follows:

using System;
using System.Collections;
using NPerf.Framework;

[PerfTester(typeof(IDictionary), 10)]
public class DictionaryTester
{
    private int count = 0;
    private Random rnd = new Random();

    [PerfRunDescriptor]
    public double Count(int index)
    {
        return index * 1000;
    }

    [PerfSetUp]
    public void SetUp(int index, IDictionary dic)
    {
        this.count = (int)Math.Floor(Count(index));
    }

    [PerfTest]
    public void ItemAssign(IDictionary dic)
    {
        for (int i = 0; i < this.count; ++i)
            dic[rnd.Next()] = null;
    }
}
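
To make the control flow concrete, here is a minimal, hypothetical driver that calls the tester the way the framework presumably does. The NaiveRunner class, the use of Stopwatch, and the console output are my own illustrative assumptions; the real runner in NPerf.Core also handles result collection and charting.

using System;
using System.Collections;
using System.Diagnostics;

public class NaiveRunner
{
    // Hypothetical sketch of a test run loop, not the actual NPerf.Core runner.
    public static void Run(DictionaryTester tester, IDictionary dic, int runCount)
    {
        for (int index = 0; index < runCount; ++index)
        {
            // [PerfSetUp]: prepare the tester for this repetition
            tester.SetUp(index, dic);

            // [PerfTest]: time the benchmarked method
            Stopwatch watch = Stopwatch.StartNew();
            tester.ItemAssign(dic);
            watch.Stop();

            // [PerfRunDescriptor]: x coordinate for charting
            double x = tester.Count(index);
            Console.WriteLine("{0} insertions: {1} ms", x, watch.ElapsedMilliseconds);
        }
    }
}

For example, NaiveRunner.Run(new DictionaryTester(), new Hashtable(), 10) would exercise Hashtable with 0 to 9000 insertions.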

Compiling and Running

Compile this class to an assembly and copy the NPerf binaries (NPerf.Cons.exe, NPerf.Core.dll, NPerf.Framework.dll, NPerf.Report.dll, ScPl.dll) into the output folder.

NPerf.Cons.exe is a console application that dynamically loads the tester assemblies and the assemblies that contain the tested types (both of which you need to specify), runs the tests, and outputs charts using ScPl [2] (a chart library released under the GPL).

The call to NPerf.Cons.exe looks like this:

NPerf.Cons -ta=MyPerf.dll -tdfap=System -tdfap=mscorlib

where

  • ta defines an assembly that contains tester classes (DictionaryTester),
  • tdfap defines an assembly that contains tested types. The assembly names are given as partial names and are loaded with Assembly.LoadWithPartialName (a small loading sketch follows this list).
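
As a rough illustration of what -tdfap implies, the sketch below loads an assembly by partial name and looks for concrete types that implement the tested interface. The TestedTypeScan class is made up for the illustration; only Assembly.LoadWithPartialName and the reflection calls are real framework APIs, and the exact filtering NPerf applies may differ.

using System;
using System.Collections;
using System.Reflection;

public class TestedTypeScan
{
    public static void Main()
    {
        // Load the tested assembly by partial name, as -tdfap=System does.
        // (Assembly.LoadWithPartialName is marked obsolete in later .Net versions.)
        Assembly testedAssembly = Assembly.LoadWithPartialName("System");

        foreach (Type type in testedAssembly.GetExportedTypes())
        {
            // Keep concrete types that implement the tested interface.
            if (typeof(IDictionary).IsAssignableFrom(type)
                && !type.IsInterface && !type.IsAbstract)
            {
                Console.WriteLine(type.FullName);
            }
        }
    }
}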

There are a number of other options that you can get by typing NPerf.Cons -h. Running the command line above will produce the following chart:

Sample screenshot

In the graph, you can see that some types failed the tests (PropertyDescriptorCollection). It is possible to tell NPerf to skip those types by passing them on the command line:

NPerf.Cons -ta=MyPerf.dll -tdfap=System -tdfap=mscorlib -it=PropertyDescriptorCollection


Saving to XML

You can also output the results to XML by adding the -x parameter. Internally, .Net XML serialization is used to render the results.
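
The mechanism is the standard XmlSerializer. The sketch below uses a made-up BenchmarkResult class to show the idea; the actual NPerf result types and their layout are not reproduced here.

using System.IO;
using System.Xml.Serialization;

// Made-up stand-in for the real NPerf result classes.
public class BenchmarkResult
{
    public string TestName;
    public double Descriptor;   // x coordinate from [PerfRunDescriptor]
    public double Duration;     // measured test duration
}

public class XmlExportSketch
{
    public static void Main()
    {
        BenchmarkResult result = new BenchmarkResult();
        result.TestName = "ItemAssign";
        result.Descriptor = 1000;
        result.Duration = 12.5;

        // .Net XML serialization renders the public fields to XML.
        XmlSerializer serializer = new XmlSerializer(typeof(BenchmarkResult));
        using (StreamWriter writer = new StreamWriter("result.xml"))
        {
            serializer.Serialize(writer, result);
        }
    }
}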

A few remarks

  • You can add as many test methods (PerfTest) as you want in the PerfTester classes,
  • You can define as many tester classes as you want,
  • You can load tester/tested types from multiple assemblies.

Overview of the Core

The NPerf.Core namespace contains the methods that do the job in the background. I do not plan to explain them in detail, but I'll discuss some problems I ran into while writing the framework.
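
To give a rough idea of the discovery step, here is a simplified sketch of how tester classes can be found with reflection. It relies only on the PerfTesterAttribute introduced earlier; the TesterScanSketch class and the exact filtering are my own simplification, not the actual NPerf.Core code.

using System;
using System.Reflection;
using NPerf.Framework;

public class TesterScanSketch
{
    // Enumerate the [PerfTester]-decorated classes of a tester assembly.
    public static void ListTesters(Assembly testerAssembly)
    {
        foreach (Type type in testerAssembly.GetExportedTypes())
        {
            object[] attributes =
                type.GetCustomAttributes(typeof(PerfTesterAttribute), true);
            if (attributes.Length > 0)
                Console.WriteLine("tester: {0}", type.FullName);
        }
    }
}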

Getting the machine properties

Getting the physical properties of the machine was a surprisingly difficult task. It took me a bunch of Google searches to land on the right pages. Anyway, here is the self-explanatory code that gets the machine properties:

using System.Management;   // requires a reference to System.Management.dll

// total physical memory
ManagementObjectSearcher query =
    new ManagementObjectSearcher("SELECT * FROM Win32_ComputerSystem");
foreach (ManagementObject obj in query.Get())
{
    long ram = long.Parse(obj["TotalPhysicalMemory"].ToString());
    break;
}

// processor name and current clock speed
query = new ManagementObjectSearcher("SELECT * FROM Win32_Processor");
foreach (ManagementObject obj in query.Get())
{
    string cpu = (string)obj["Name"];
    long cpuFrequency = long.Parse(obj["CurrentClockSpeed"].ToString());
    break;
}

TypeHelper, easier CustomAttribute support

A TypeHelper static class was added to automate tedious tasks like checking for a custom attribute, getting a custom attribute, etc. The TypeHelper class declaration is as follows:

public sealed class TypeHelper
{
    public static bool HasCustomAttribute(
        Type t, Type customAttributeType);
    public static bool HasCustomAttribute(
        MethodInfo mi, Type customAttributeType);
    public static Object GetFirstCustomAttribute(
        Type t, Type customAttributeType);
    public static Object GetFirstCustomAttribute(
        MethodInfo mi, Type customAttributeType);
    public static MethodInfo GetAttributedMethod(
        Type t, Type customAttributeType);
    public static AttributedMethodCollection GetAttributedMethods(
        Type t, Type customAttributeType);
    public static void CheckSignature(
        MethodInfo mi, Type returnType, params Type[] argumentTypes);
    public static void CheckArguments(
        MethodInfo mi, params Type[] argumentTypes);
}
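
A hypothetical usage sketch follows, validating a tester class such as DictionaryTester with the helpers declared above. The PerfSetUpAttribute class name, the NPerf.Core namespace for TypeHelper, and the assumption that GetAttributedMethod returns null when nothing is found are guesses on my part, not documented behaviour.

using System;
using System.Collections;
using System.Reflection;
using NPerf.Framework;
using NPerf.Core;   // assumed namespace of TypeHelper

public class TypeHelperUsageSketch
{
    public static void Validate(Type testerType)
    {
        // The tester class must carry [PerfTester].
        if (!TypeHelper.HasCustomAttribute(testerType, typeof(PerfTesterAttribute)))
            throw new ArgumentException("not a tester class", "testerType");

        // Assumed to return null when no [PerfSetUp] method exists.
        MethodInfo setUp =
            TypeHelper.GetAttributedMethod(testerType, typeof(PerfSetUpAttribute));
        if (setUp != null)
        {
            // [PerfSetUp] must return void and take (int index, tested type).
            TypeHelper.CheckSignature(setUp, typeof(void),
                typeof(int), typeof(IDictionary));
        }
    }
}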


Benchmark Bonus

In order to illustrate the framework, I have written a few benchmark testers for classic .Net performance questions. All these benchmarks are provided in the System.Perf project.

IDictionary benchmark

Adding items


Sample screenshot

Sample screenshot

Sample screenshot

String concatenation benchmark

Sample screenshot

Interface vs Delegate

Sample screenshot
