WebPanelBar: An ASP.NET Control for Building Dynamic XP-Style Menus

WebPanelBar, an ASP.NET control for building dynamic XP-style menus, with C# source code. By 陶清 (Tao Qing). Last updated: 2004-01-06.

This control was written quite a while ago, but it is a practical and classic example of a custom control.

Download: www.pdriver.com/pb04/01/WebPanelBarSourceCode.rar (87 KB). Tested on both .NET 1.0 and .NET 1.1.

WebPanelBar also uses sn.exe to generate a strong name. The strong name supplies a public key so that the control can be shared by all applications on the .NET platform.

The article below is a Microsoft technical document from December 2003 (in English) that explains strong names; it is well worth reading.

Security Briefs: Strong Names and Security in the .NET Framework

The original article is at:
http://msdn.microsoft.com/netframework/?pull=/library/en-us/dnnetsec/html/strongNames.asp

The original text follows:
Summary:
Strong names are required to uniquely identify an assembly, allowing it to be placed in the Global Assembly Cache, and are also required to use the versioning system in the Microsoft .NET Framework common language runtime. Learn more about strong names and how to use them. (10 printed pages)

Contents

From GUIDs to Public Keys
RSA and Digital Signatures
The CLR and Public Keys
Strong Names and Verification
Strong Names and .NET Security Policy
Public Keys and Versioning
Using Delay Signing to Reduce Exposure
Protecting Your Development Team
Conclusion

From GUIDs to Public Keys

To understand the idea behind strong names, it's helpful to look at the previous component naming scheme on the Microsoft® Windows® platform—the Globally Unique Identifier (GUID). A GUID is a 128-bit (16-byte) unique integer that is used to name things in the COM world. Anyone who has ever poked around in the registry knows that there is a plethora of GUIDs in use today. Any COM programmer knows that this is because GUIDs were used to name very fine-grained items. Each COM class, COM interface, application, type library, and enumeration needs its own GUID. We have an overabundance of GUIDs, really.

Prior to Windows 2000, GUIDs were generated based on the Universally Unique Identifier (UUID), defined as part of DCE RPC. In a nutshell, the UUID derives its uniqueness from the current date and time, plus the unique 48-bit IEEE 802 address from your network interface card (NIC).

Beginning with Windows 2000, GUIDs are no longer generated based on this algorithm. Instead, they are random 16-byte integers generated by invoking the random number generator provided by the CryptoAPI. Why the change? Most likely it was because of a privacy issue that emerged early in 1999. Microsoft® Office was using the GUID as a unique identifier in data files. This had the unfortunate side effect of allowing a document to be linked back to its creator via the NIC address used in the GUID. Putting GUIDs in data files, it turned out, was bad for security: it broke anonymity.
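In the .NET Framework, Guid.NewGuid() exposes this newer, random style of GUID directly; a minimal sketch:

using System;

class GuidDemo
{
    static void Main()
    {
        // On Windows 2000 and later this calls into the CryptoAPI-backed
        // generator, so no NIC address is embedded in the result.
        Guid g = Guid.NewGuid();
        Console.WriteLine(g);
    }
}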

Given all the problems with naming in COM, the common language runtime (CLR) team wanted to devise a better way to uniquely identify components. One critical decision was to use a hierarchical naming scheme. Instead of bestowing each individual type with its own unique identifier, as COM did with GUIDs, CLR types would be identified based on their full type name, including the namespace, plus the name of the assembly in which the type was packaged. This would allow us to use simple type names for classes, interfaces, and so on. Because the loader considers the name of the assembly to be part of the name of each type, we really only need to give each assembly a name unique in space and time, although, practically speaking, namespaces are important for resolving compile-time naming conflicts. With a hierarchical scheme for naming, the problem of unfettered GUID proliferation was solved once and for all.
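You can see this hierarchical naming scheme from managed code: every type reports an assembly-qualified name that combines its namespace-qualified type name with the display name of its assembly. A quick sketch:

using System;

class NamingDemo
{
    static void Main()
    {
        // The loader treats the assembly name as part of the type name.
        Console.WriteLine(typeof(string).AssemblyQualifiedName);
        // Prints something like:
        // System.String, mscorlib, Version=1.0.5000.0, Culture=neutral,
        // PublicKeyToken=b77a5c561934e089
    }
}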

Now all the CLR team needed was to figure out an algorithm for generating a unique assembly name. Perhaps part of the name could be a large random number akin to a GUID—maybe even larger than 16 bytes, to gain some extra collision resistance. This would protect you against someone accidentally picking the same identifier that you happened to be using to name your assemblies. But it wouldn't protect you against a bad guy purposely trying to make a Trojan horse assembly that looked just like yours. So the CLR team made an interesting decision. Instead of simply using a large random number, it decided to use a 1024-bit RSA public key, which is a 128-byte number constructed from two very large, random prime numbers multiplied together.

In order to understand strong names, it's important to understand the role that cryptography plays, so let's now turn our attention to the crypto.

RSA and Digital Signatures

For a detailed introduction to this topic, see Practical Cryptography by Ferguson and Schneier (Wiley 2003). Briefly, the idea behind RSA is that keys are generated in pairs: a public key and a private key. The private key is a secret; you don't disclose it to anyone. The public key, on the other hand, can be shared with anyone. It's believed to be infeasible to compute the private key, given the public key. Any data encrypted with the public key can be decrypted only with the private key, and correspondingly, any data encrypted with the private key can be decrypted only with the public key.
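As a rough illustration of this asymmetry using the .NET Framework's RSACryptoServiceProvider (the 1024-bit key size matches the keys discussed above):

using System;
using System.Security.Cryptography;

class KeyPairDemo
{
    static void Main()
    {
        // Generate a fresh 1024-bit RSA key pair.
        RSACryptoServiceProvider rsa = new RSACryptoServiceProvider(1024);

        // Export only the public half; this is the part you can freely
        // hand out. Passing true instead would also export the private
        // parameters -- the part you must keep secret.
        Console.WriteLine(rsa.ToXmlString(false));
    }
}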

RSA keys can be used to ensure the integrity of data via a digital signature. To sign some data, you first hash that data using a cryptographic hash algorithm, and then you encrypt the resulting hash value with your private key. The signature is really just this encrypted hash value. If you publish the data and the signature, anyone who knows your public key can validate the signature by hashing the data themselves and comparing their own hash value with the one in your signature, which they decrypt using your public key. The idea is that the hashes won't match unless two things are true: the data received is the same data that was originally signed, and the signer knew the private key. By applying this technique to an assembly, you make it infeasible for an attacker to replace your assembly with his own Trojan version, because the attacker won't be able to forge your signature. This, of course, assumes he doesn't know your private key. Protecting your private key or keys is crucial, and communicating that is one of the main reasons for this article.
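Here is a minimal sign-and-verify sketch of that flow, using SHA-1 as the hash algorithm (as the strong-name machinery itself does):

using System;
using System.Security.Cryptography;
using System.Text;

class SignatureDemo
{
    static void Main()
    {
        byte[] data = Encoding.UTF8.GetBytes("the bytes being published");
        RSACryptoServiceProvider signer = new RSACryptoServiceProvider(1024);

        // Sign: hash the data, then encrypt the hash with the private key.
        byte[] signature = signer.SignData(data, new SHA1CryptoServiceProvider());

        // Verify: a party holding only the public key re-hashes the data
        // and compares against the decrypted signature.
        RSACryptoServiceProvider verifier = new RSACryptoServiceProvider();
        verifier.FromXmlString(signer.ToXmlString(false)); // public half only
        bool valid = verifier.VerifyData(data,
            new SHA1CryptoServiceProvider(), signature);
        Console.WriteLine(valid); // True
    }
}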

By choosing to use an RSA public key as part of an assembly name, the CLR team elegantly killed two birds with one stone. First, the large, collision-free nature of randomly-generated RSA public keys provides safety from accidental naming conflicts: a 1024-bit public key is eight times the size of a GUID. Secondly, the corresponding private key can be used to sign an assembly, providing security against an attacker attempting to replace your assembly with his own code. But, as with all security countermeasures, this one comes with a cost: we now have secrets that must be managed. If your private key is compromised, security breaches and catastrophic compatibility breaks can occur.

The CLR and Public Keys

A strongly-named assembly is one that has been assigned a public key. This is the job of a compiler. For example, if you generate a key pair for yourself, you can assign your public key to assemblies that you create simply by telling the compiler where to look for your key file:

using System.Reflection;

// The compiler reads the key pair from this file: the public key is
// embedded in the assembly, and the private key is used to sign it.
[assembly: AssemblyKeyFile(@"c:\temp\mykeyfile")]

class Foo { /* ... */ }

Most of the wizard-generated projects in Microsoft Visual Studio® .NET place this attribute in a file called AssemblyInfo, but you can put it in any source file you like. When the compiler sees this attribute, it copies the entire public key into the assembly's metadata, and uses the private key to form a digital signature. This is done by hashing the files in the assembly, incorporating those hash values into the manifest for the assembly, hashing the manifest, and, finally, encrypting this final hash value using the corresponding private key and tucking it away as yet another block of metadata in the assembly.

(Note that with this simplistic approach, the compiler needs not only the public key but also the private key, both read from the file c:\temp\mykeyfile. You'll see a safer way to approach code signing later in this article.)

If you look at an assembly's manifest using ILDASM, you'll be able to see its public key quite plainly, as shown in Figure 1.

Figure 1. A public key assigned to an assembly

What you won't see is the signature or the intermediate hash values. This sometimes confuses people who are learning about strong names. ILDASM doesn't show these things because ILDASM is a disassembler—it is supposed to produce IL that can be compiled into an assembly, and remember that the signature and hash values are output by the compiler (which would be ILASM in this case). If you want to see if an assembly has been given a public key, use the strong-name tool, SN.EXE:

sn -Tp foo.dll

This will either print out the public key of the assembly, or tell you that the assembly is not strongly named, which means it has not been assigned a public key. But just because an assembly has a public key doesn't necessarily mean it has a corresponding signature, or that the signature is valid. To test for the presence and validity of a signature, use the following command:

sn -vf foo.dll

This causes SN.EXE to compute its own hash of each file in the assembly, to make sure the assembly binaries haven't changed since they were signed. It then computes its own hash of the assembly manifest, decrypts the signature packaged into the assembly (the one that ILDASM doesn't show you), and compares its own calculated hash value with the decrypted signature. If the hash matches, it reports that the assembly is valid. Otherwise, it will report an error. One error might be that the assembly simply hasn't had a signature applied yet, which will be the case with delay-signed assemblies (more on this later).

When you use the tools that come with the .NET Framework to install a new assembly into the Global Assembly Cache (GAC) (GACUTIL.EXE or the Fusion cache viewer), they perform a signature check equivalent to the following command:

sn -v foo.dll

The same thing happens each time the CLR loads a strongly-named assembly that does not reside in the GAC. If a public key is present, it will verify the signature. The subtle difference between the sn -vf and sn -v commands is that the latter will skip signature verification for any public key registered by the administrator as trusted (more on this later). To summarize, the CLR verifies assembly signatures either at load time or when the assembly is installed in the GAC.

Note that the GAC is considered a trusted repository: the only thing protecting an assembly from modification once it is installed in the GAC is the strong file system ACL on everything in the GAC. This is essentially the same ACL that protects the CLR and other operating system binaries. To help speed load times, the CLR doesn't recheck signatures for assemblies loaded from the GAC, so if an attacker gets administrative control of a machine, he can replace assemblies in the GAC with Trojan horse versions and the CLR will be none the wiser. But if the attacker has enough access to the file system to modify or replace assemblies in the GAC, he can also do the same with the CLR binaries (MSCORWKS.DLL and friends), or the operating system itself.

Strong Names and Verification

The CLR assembly resolver refers to assemblies in one of two ways: weak or strong. The weak method considers only the short assembly name, which is just the file name minus the extension. For example, the short assembly name for FOO.DLL is simply FOO. No version checking is performed at load time. A strong name, on the other hand, consists of the short name plus three other parts: a version number, culture, and a public key. If you assign a public key to your assembly, it is considered "strongly named"; other assemblies that reference yours will use this stronger four-part name of your assembly. Practically speaking, this means that you will be able to place your assembly in the GAC and take advantage of version policy.
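All four parts of a strong name are visible through the AssemblyName class; a small sketch:

using System;
using System.Reflection;

class StrongNameParts
{
    static void Main()
    {
        AssemblyName name = typeof(string).Assembly.GetName();
        Console.WriteLine(name.Name);        // short name: mscorlib
        Console.WriteLine(name.Version);     // e.g. 1.0.5000.0
        Console.WriteLine(name.CultureInfo); // neutral culture
        Console.WriteLine(BitConverter.ToString(
            name.GetPublicKeyToken()));      // thumbprint of the public key
    }
}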

With all this talk of public keys, signatures, hashing, and so on, it's easy to lose sight of what protections we're really getting. Here's what the CLR tries to guarantee: if you build an assembly FOO.DLL and sign it (thus giving it a strong name), anyone who references FOO.DLL from their assembly at compile time should get a FOO.DLL at runtime that was also produced by you (the person who knows the private key behind the strong name). It may not be the exact same FOO.DLL, because version policy might allow the CLR to use a different version in place of the original, but there should be some guarantee that the code at runtime was produced by the same person who produced the original FOO.DLL. This protects against a third party replacing FOO.DLL with a Trojan horse version that might be malicious.

Here's how it's implemented. When you compile an assembly—for instance, BAR.EXE—and it refers to a strongly-named assembly—for instance, FOO.DLL—the compiler will record the strong name of FOO.DLL into the manifest of BAR.EXE. This includes a reference to the public key (see Figure 2). At load time, besides the normal signature checks designed to watch for unauthorized modification of the assembly's binaries, the loader will ensure that the public key in FOO.DLL matches the one recorded in BAR.EXE. Thus the links between assemblies are protected.

Figure 2. A reference to a strongly-named assembly

What protects BAR.EXE? If you open a command shell and simply run BAR.EXE by typing BAR <Enter>, what guarantee do you have that it hasn't been replaced with a Trojan horse? In this case, none. Think about it. If you wanted a guarantee, you'd need some way of providing the operating system with the known good public key for BAR.EXE. Otherwise, the attacker could sign his Trojan horse version of BAR.EXE using a key that he generated randomly. The CLR would certainly be able to verify that BAR.EXE is self-consistent—that is, its signature can be verified, given the public key in the manifest. But it's not the public key assigned by the original author of BAR.EXE. You might solve this problem by writing a little loader program that simply calls Assembly.Load(), passing in the public key information along with the assembly name. Maybe you'd call this program LOADER.EXE; but you'd have the same problem trying to verify LOADER.EXE.
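Such a LOADER.EXE might look roughly like the sketch below. The assembly name and token are hypothetical (the token is borrowed from the foo example that follows), and the sketch assumes BAR's entry point takes a string[] parameter:

using System;
using System.Reflection;

class Loader
{
    static void Main()
    {
        // Loading by full display name pins the expected public key
        // token, so a BAR signed with a different key won't match.
        Assembly bar = Assembly.Load(
            "bar, Version=1.0.0.0, Culture=neutral, " +
            "PublicKeyToken=2d7adc3047e7238d");
        bar.EntryPoint.Invoke(null, new object[] { new string[0] });
    }
}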

You can think of a Microsoft® ASP.NET page as being similar to BAR.EXE. While you can reference a strongly-named assembly from your page,

<%@ assembly name='foo, Version=1.0.0.0, Culture=neutral, PublicKeyToken=2d7adc3047e7238d' %>

...there is no way to tell ASP.NET the strong name of your page's assembly. Even if you could give your page a strong name, it wouldn't help much. On the other hand, a precompiled handler or module registered in web.config or machine.config can be referred to via a strong name. Of course, this verification fails if the attacker can modify the configuration file and replace your public key with his.

One implementation detail is worth noting here. In the above assembly reference, notice how we refer to assemblies using a "public key token," as opposed to the full public key. This token is like a thumbprint of the public key. When we refer to an assembly using its strong name, since we always use the public key token, the best verification the loader can give us is limited to ensuring that the thumbprint we specified matches the public key of the assembly we're loading. How hard would it be for an attacker to generate another RSA key pair (for which he knows the private key) whose public key has the same thumbprint as ours? A public key token is constructed by taking the low 8 bytes of the 20-byte SHA1 hash of the public key. There are 2^64 possible values for an 8-byte token, which doesn't immediately rule out an attack, because completing 2^64 steps is not out of the question with today's hardware. But calculating RSA key pairs is expensive, and doing it 2^64 times may not be feasible without significant funding to pay for specialized hardware. In other words, your average hacker probably couldn't afford to do it alone, but your friendly neighborhood intelligence agency may have already done it.
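Expressed as code, the token computation looks roughly like this sketch (note that the low 8 bytes of the hash are conventionally displayed in reverse order):

using System;
using System.Security.Cryptography;

class TokenDemo
{
    static byte[] ComputeToken(byte[] publicKey)
    {
        SHA1 sha1 = new SHA1CryptoServiceProvider();
        byte[] hash = sha1.ComputeHash(publicKey); // 20 bytes
        byte[] token = new byte[8];
        // Last 8 bytes of the hash, reversed -- this matches what
        // AssemblyName.GetPublicKeyToken() returns.
        for (int i = 0; i < 8; i++)
            token[i] = hash[hash.Length - 1 - i];
        return token;
    }

    static void Main()
    {
        byte[] key = typeof(string).Assembly.GetName().GetPublicKey();
        // Prints B7-7A-5C-56-19-34-E0-89 for mscorlib.
        Console.WriteLine(BitConverter.ToString(ComputeToken(key)));
    }
}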

If strong-name verification is a security feature you depend on, then your threat model should include attacks against the thumbprint, as it is clearly the weakest link in the chain. It's not too hard to understand why the CLR team chose this particular security trade-off: typing an 8-byte public key token takes 16 keystrokes. Typing the full 20-byte SHA1 hash, which would provide a much higher level of security, would take 40 keystrokes. But how often do we need to type public key tokens? Not too often, given tools such as the .NET Framework Configuration snap-in. And it only takes one person to calculate and publish an alternative RSA key pair that has the same public key token as, say, Microsoft's public key. We can hope the length of this thumbprint is extended in the future.

None of this theoretical discussion of thumbprints matters if you allow your private key to be compromised, which would allow an attacker to sign any assembly desired for use as a Trojan horse.

Strong Names and .NET Security Policy

Trojan horse assemblies aren't the only thing you need to worry about if your private key is compromised. To replace a strongly-named assembly on your machine with a Trojan horse version, the attacker will need to get around strong-name verification. The attacker will also need to get code onto your machine, which hopefully isn't a trivial thing to do. But there's a more direct and dangerous attack, and it has to do with .NET security policy; that's the repository where trust decisions are made. This policy is what keeps your computer safe from managed malware (malicious software) that might be sent to you over the network. The next time you're logged in as an administrator on your machine, try opening up the .NET Framework Configuration tool, which can be found on the Start menu under Administrative Tools. Use this tool to poke around the runtime security policy a bit. If you fully expand the tree of code groups under the machine policy level, you'll see that security policy sometimes grants an awful lot of trust based on strong names. For example, there's a code group called Microsoft_Strong_Name that grants full trust to any locally-installed code signed with a special key that Microsoft owns. This key is used to sign the core assemblies that make up the .NET Framework itself.

As an organization adopts the .NET Framework and begins to use features like no-touch deployment, you may expect to see more and more security policy being determined based on strong names. It seems doubtful that many shops will want to write programs that run in partial-trust environments, so expect to see policy being configured to grant full trust based on internal company strong names. Fortunately, when specifying a strong name as part of security policy, the full public key is specified, not just the thumbprint. But this doesn't help if an attacker has stolen your private key. Once an attacker has your private key, he can sign any code he likes and give it your strong name.

The following scenario clarifies this threat. Say you have a Windows Forms application that acts as a thick client to a Web service. For convenience, you've published that thick client via no-touch deployment. Users simply click a link in their browser and they always get the latest and greatest version of the client, without any hassle. But the client program calls through P/Invoke, on occasion, to access some legacy code, and this throws an exception because of the way you've deployed the client. When code is downloaded from a network, it's considered mobile code and by default won't be trusted enough to call unmanaged code directly. To solve this problem, say you've deployed an update to .NET security policy throughout your organization that grants full trust to any assembly with your strong name. Figure 3 shows what this might look like.

Figure 3. Granting full trust to a strong name
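The troublesome call in that scenario might be nothing more than an ordinary P/Invoke declaration like the hypothetical one below; under default policy, code downloaded from the intranet zone isn't granted the permission to call unmanaged code that any DllImport demands:

using System;
using System.Runtime.InteropServices;

class LegacyInterop
{
    // Calling through any DllImport requires SecurityPermission with
    // the UnmanagedCode flag, which downloaded code lacks by default.
    [DllImport("kernel32.dll")]
    static extern uint GetTickCount();

    static void Main()
    {
        // Throws a SecurityException when run under the default
        // LocalIntranet permission set.
        Console.WriteLine(GetTickCount());
    }
}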

In this scenario, if an attacker has stolen your private key, he can publish code on his Web site that has your strong name. If he can convince somebody in your organization to click a link that points to his code, his code will run with the full permissions of the person that clicked the link—silently, and without warning. This is pretty scary stuff. Scenarios such as this demonstrate why it's critical to carefully protect your private keys.

Note that this applies to other types of private keys as well, including those used to form Authenticode signatures. Imagine the same scenario, where an Authenticode signature (which translates to Publisher evidence in .NET security policy) was used to grant full trust instead of a strong name. The dangers are essentially the same.

Note that you can provide defense in depth by putting the ACME_Strong_Name code group not at the root, but rather under the LocalIntranet_Zone code group (as shown in Figure 3), similar to the way the Microsoft_Strong_Name code group is placed under the My_Computer_Zone code group. Child code groups aren't evaluated unless their parents match, so by doing this you're creating a policy that requires two things to be true before granting full trust: the assembly must be loaded from the LocalIntranet zone, and it must have your strong name. Of course, now you need to worry about attacks from insiders, which is a more serious threat than most companies like to admit.

It can't be emphasized enough. If you use strong names, you need to have a secure process for signing your assemblies, or your private keys will be vulnerable. We'll look at such a process shortly, but first, there's one more reason to consider securing your private keys.

Public Keys and Versioning

Version policy is an interesting beast to consider with respect to public keys. The assumption here is that an assembly will always have the same public key, but its version may change over time. There are various ways that version policy allows a system administrator or software publisher to affect the version of an assembly that gets loaded into an application. But version policy only works if the name of the assembly and its public key are held constant. In other words, you don't want to publish version 1 of assembly FOO with one public key, and then try to publish version 2 of FOO using a second public key. Assembly public keys are long-term commitments: once you choose one and apply it to an assembly, you really need to keep it for the lifetime of that assembly, through all versions.
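In source code, the version half of this commitment is just an attribute that changes from release to release, while the key stays constant. A sketch, reusing the key file path from the earlier example:

using System.Reflection;

// Version 2 of FOO: the version number moves forward...
[assembly: AssemblyVersion("2.0.0.0")]
// ...but it must be signed with the same key as version 1.0,
// or version policy cannot redirect references between them.
[assembly: AssemblyKeyFile(@"c:\temp\mykeyfile")]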

This long-term commitment goes against the grain of key revocation, which is something we worry about a great deal in Public Key Infrastructure (PKI) systems. The idea is that if you lose your private key, or if you believe that it has been compromised, you can revoke your public key and get a new one (to be fair, it's not nearly as simple as it sounds). But in the CLR, if you're using version policy and the GAC to manage assemblies that are shared among many applications, it's catastrophic from a versioning and compatibility standpoint to have your private key compromised. If you issue yourself a new key pair, you'll need to recompile all applications that were relying on your old public key. There is no clean upgrade path in this case, which is yet another very good reason to protect your private key.

If you are really serious about security (and you should be), you'll invest in some hardware that allows you to store your keys offline. It's called a smart card. This is such a good idea that a future article will be devoted to the topic. But in the meantime, here's how you can get started immediately with a technique called delay signing.

Using Delay Signing to Reduce Exposure

One of the easiest things you can do to reduce exposure of your private keys is to delay sign your assemblies. This technique allows your compiler to build an assembly without knowledge of your private key; the compiler only needs the public key, which isn't a secret. Here's how it works.

The first step is to generate one or more RSA key pairs for your strong names. If you're still waiting for your smart-card hardware to arrive (you have ordered it, haven't you?), you'll need to store your keys in the file system, so be sure to do this on a secure machine that is not connected to any network. Use the following command to generate each RSA key pair:

sn -k pubpriv

The SN tool will create a new key pair in a file called pubpriv. Immediately run the following command to copy just the public key into a second file called pub:

sn -p pubpriv pub

Copy pub onto removable media and move it onto another machine. Now remove pubpriv from the machine and place it in a vault. You will not need this file until you are ready to ship your first signed assembly to someone outside your product group. Distribute pub to anyone who needs to compile your assemblies. It's not a secret, so don't worry about a bad guy stealing it.

Follow this procedure once for each key pair you plan to use for strong names. Then, once all of the private keys are safely stored in a vault, and you're confident that they won't be lost, damaged, or stolen, destroy the hard drive of the machine used to generate them. Seriously. Vaporize it. Of course, if you were using a smart card, no private key would have ever been stored on a hard drive anyway, simplifying storage and maintenance (has your smart card hardware arrived yet?).

To delay sign an assembly, use the AssemblyKeyFile attribute to refer to the pub file, which should be present on the machine of each developer who needs it. Also apply the AssemblyDelaySign attribute as follows:

[assembly: AssemblyKeyFile(@"c:\keys\pub")]
[assembly: AssemblyDelaySign(true)]

This tells the compiler to embed the public key in the assembly it produces, but not to bother generating the signature. The compiler will leave space in the assembly for the signature to be added at a later date.

Finally, on any machine used for testing these unsigned assemblies, instruct the CLR to skip strong-name verification for the corresponding public key. To do this, you'll need the public key token for your public key—that thumbprint we discussed earlier. You can discover it with the following command:

sn -t pub

Let's say the public key token was bc19568c6e03e7e6. Here's how to register the token for verification skipping on a machine:

sn -Vr *,bc19568c6e03e7e6

This will prevent the CLR from attempting to verify the signature of any assembly with the above public key token either at load time, or when you install the assembly into the GAC.

When you're ready to ship an assembly to anyone outside your development team, bring the compiled assembly to a secure machine, retrieve the pubpriv file from your vault, install it on the machine, and run the following command:

sn -R assemblyfile

This will use the private key to sign the assembly, filling in the space left by the compiler. Looks like another hard drive needs vaporizing. Smart cards aren't looking so bad now, are they?

If you don't like all this talk of annihilating hard drives, and for some reason you don't want to spend about $100 for that smart card hardware, then, by all means, use a RAM disk instead of a hard drive whenever you need temporary storage of private keys. Rebooting the machine when you're done will ensure your secrets aren't accessible to any but the most well-funded attackers.

Protecting Your Development Team

Delay signing isn't perfect. Because your team must turn off strong-name verification for one or more public keys in order to test their own assemblies during development, be very careful not to confer trust on an assembly based solely on the presence of one of these unverified strong names. Given your public key, an attacker can delay sign a malicious assembly just as easily as you can delay sign your own. And you must assume that any attacker can easily obtain your public key, since it is embedded as metadata in each strongly-named assembly you build.

Make sure to educate your team about this issue. If you need to identify your assemblies to .NET security policy, don't use an unverified strong name, as it can easily be made into a Trojan horse by a bad guy, as described. An alternative might be to use a temporary, internally-issued, code-signing certificate (good old Authenticode) whose private key is known among your team and used to sign all assemblies. In this case, you'd use publisher evidence instead of strong-name evidence to identify your assemblies in policy during development and testing.

Conclusion

Strong names are very powerful, but with great power comes great responsibility. The ability of the CLR to make security guarantees based on strong names is only as good as the protection we give to our private keys. So protect those keys.

 

Comments:

Why do I get an error when I use it?
By ? on May 21, 2004
 

Server Error in '/' Application.

The resource cannot be found.

Description: HTTP 404. The resource you are looking for (or one of its dependencies) could have been removed, had its name changed, or is temporarily unavailable. Please review the following URL and make sure that it is spelled correctly.

Requested URL: /TestPanelBa/WebForm1.aspx


Version Information: Microsoft .NET Framework Version:1.1.4322.573; ASP.NET Version:1.1.4322.573

Copyright 1999-2004 Pdriver.com, All Rights Reserved