Cloud Application Design Patterns (Part 2)

Federated Identity Pattern

Delegate authentication to an external identity provider. This pattern can simplify development, minimize the requirement for user administration, and improve the user experience of the application.

Context and Problem

Users typically need to work with multiple applications provided by, and hosted by different organizations with which they have a business relationship. However, these users may be forced to use specific (and different) credentials for each one. This can:

  • Cause a disjointed user experience. Users often forget sign-in credentials when they have many different ones.
  • Expose security vulnerabilities. When a user leaves the company the account must immediately be deprovisioned. It is easy to overlook this in large organizations.
  • Complicate user management. Administrators must manage credentials for all of the users, and perform additional tasks such as providing password reminders.

Users will, instead, typically expect to use the same credentials for these applications.

Solution

Implement an authentication mechanism that can use federated identity. Separating user authentication from the application code, and delegating authentication to a trusted identity provider, can considerably simplify development and allow users to authenticate using a wider range of identity providers (IdPs) while minimizing the administrative overhead. It also allows you to clearly decouple authentication from authorization.

The trusted identity providers may include corporate directories, on-premises federation services, other security token services (STSs) provided by business partners, or social identity providers that can authenticate users who have, for example, a Microsoft, Google, Yahoo!, or Facebook account.

Figure 1 illustrates the principles of the federated identity pattern when a client application needs to access a service that requires authentication. The authentication is performed by an identity provider (IdP), which works in concert with a security token service (STS). The IdP issues security tokens that assert information about the authenticated user. This information, referred to as claims, includes the user’s identity, and may also include other information such as role membership and more granular access rights.

[Image: https://docs.microsoft.com/en-us/previous-versions/msp-n-p/images/dn589790.fd3e6504c1a86ed8ad878d237986febc(en-us,pandp.10).png]

Figure 1 - An overview of federated authentication

This model is often referred to as claims-based access control. Applications and services authorize access to features and functionality based on the claims contained in the token. The service that requires authentication must trust the IdP. The client application contacts the IdP that performs the authentication. If the authentication is successful, the IdP returns a token containing the claims that identify the user to the STS (note that the IdP and STS may be the same service). The STS can transform and augment the claims in the token based on predefined rules, before returning it to the client. The client application can then pass this token to the service as proof of its identity.

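As an illustration of how an application might consume such claims after token validation, the following minimal C# sketch reads role claims from a ClaimsPrincipal and uses them to make an authorization decision. The claim type and the "OrderApprover" role name are illustrative assumptions, not part of the original guidance.

C#

using System.Linq;
using System.Security.Claims;

public static class OrderAuthorization
{
  // Decides whether the authenticated user may approve orders, based on the
  // claims issued by the identity provider / STS. The role name used here is
  // an illustrative assumption.
  public static bool CanApproveOrders(ClaimsPrincipal principal)
  {
    if (principal == null || !principal.Identity.IsAuthenticated)
    {
      return false;
    }

    // The STS may already have transformed the original claims into
    // application-specific claims such as role membership.
    var roles = principal.FindAll(ClaimTypes.Role).Select(c => c.Value);

    return roles.Contains("OrderApprover");
  }
}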

Note

In some scenarios there may be additional STSs in the chain of trust. For example, in the Microsoft Azure scenario described later, an on-premises STS trusts another STS that is responsible for accessing an identity provider to authenticate the user. This approach is common in enterprise scenarios where there is an on-premises STS and directory.

Federated authentication provides a standards-based solution to the issue of trusting identities across diverse domains, and can support single sign on. It is becoming more common across all types of applications, especially cloud-hosted applications, because it supports single sign on without requiring a direct network connection to identity providers. The user does not have to enter credentials for every application. This increases security because it prevents the proliferation of credentials required to access many different applications, and it also hides the user’s credentials from all but the original identity provider. Applications see just the authenticated identity information contained within the token.

Federated identity also has the major advantage that management of the identity and credentials is the responsibility of the identity provider. The application or service does not need to provide identity management features. In addition, in corporate scenarios, the corporate directory does not need to know about the user (providing it trusts the identity provider), which removes all the administrative overhead of managing the user identity within the directory.

Issues and Considerations

Consider the following when designing applications that implement federated authentication:

  • Authentication can be a single point of failure. If you deploy your application to multiple datacenters, consider deploying your identity management mechanism to the same datacenters in order to maintain application reliability and availability.
  • Authentication mechanisms may provide facilities to configure access control based on role claims contained in the authentication token. This is often referred to as role-based access control (RBAC), and it may allow a more granular level of control over access to features and resources.
  • Unlike a corporate directory, claims-based authentication using social identity providers does not usually provide information about the authenticated user other than an email address, and perhaps a name. Some social identity providers, such as a Microsoft account, provide only a unique identifier. The application will usually need to maintain some information on registered users, and be able to match this information to the identifier contained in the claims in the token. Typically this is done through a registration process when the user first accesses the application, and information is then injected into the token as additional claims after each authentication.
  • If there is more than one identity provider configured for the STS, it must detect which identity provider the user should be redirected to for authentication. This process is referred to as home realm discovery. The STS may be able to do this automatically based on an email address or user name that the user provides, a subdomain of the application that the user is accessing, the user’s IP address scope, or on the contents of a cookie stored in the user’s browser. For example, if the user entered an email address in the Microsoft domain, such as user@live.com, the STS will redirect the user to the Microsoft account sign-in page. On subsequent visits, the STS could use a cookie to indicate that the last sign in was with a Microsoft account. If automatic discovery cannot determine the home realm, the STS will display a home realm discovery (HRD) page that lists the trusted identity providers, and the user must select the one they want to use. A minimal sketch of email-based home realm discovery follows this list.
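
The following sketch illustrates one simple form of automatic home realm discovery, mapping the domain of the email address that the user supplies to a configured identity provider. The domain-to-provider table and the sign-in URLs are assumptions for illustration only.

C#

using System;
using System.Collections.Generic;

public static class HomeRealmDiscovery
{
  // Illustrative mapping from email domain to the identity provider sign-in
  // URL. In practice this mapping would come from STS configuration.
  private static readonly Dictionary<string, string> ProvidersByDomain =
    new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
    {
      { "live.com",    "https://login.live.com/" },
      { "contoso.com", "https://adfs.contoso.com/adfs/ls/" }
    };

  // Returns the sign-in URL for the user's home realm, or null if the realm
  // cannot be determined automatically and an HRD page should be shown.
  public static string ResolveProvider(string emailAddress)
  {
    var parts = (emailAddress ?? string.Empty).Split('@');
    if (parts.Length != 2)
    {
      return null;
    }

    string provider;
    return ProvidersByDomain.TryGetValue(parts[1], out provider) ? provider : null;
  }
}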

When to Use this Pattern

This pattern is ideally suited for a range of scenarios, such as:

  • Single sign on in the enterprise. In this scenario you need to authenticate employees for corporate applications that are hosted in the cloud outside the corporate security boundary, without requiring them to sign on every time they visit an application. The user experience is the same as when using on-premises applications where they are initially authenticated when signing on to a corporate network, and from then on have access to all relevant applications without needing to sign on again.
  • Federated identity with multiple partners. In this scenario you need to authenticate both corporate employees and business partners who do not have accounts in the corporate directory. This is common in business-to-business (B2B) applications, applications that integrate with third party services, and where companies with disparate IT systems have merged or share resources.
  • Federated identity in SaaS applications. In this scenario independent software vendors (ISVs) provide a ready to use service for multiple clients or tenants. Each tenant will want to authenticate using a suitable identity provider. For example, business users will want to use their corporate credentials, while consumers and clients of the tenant may want to use their social identity credentials.

This pattern might not be suitable in the following situations:

  • All users of the application can be authenticated by one identity provider, and there is no requirement to authenticate using any other identity provider. This is typical in business applications that use only a corporate directory for authentication, and access to this directory is available in the application directly, by using a VPN, or (in a cloud-hosted scenario) through a virtual network connection between the on-premises directory and the application.
  • The application was originally built using a different authentication mechanism, perhaps with custom user stores, or does not have the capability to handle the negotiation standards used by claims-based technologies. Retrofitting claims-based authentication and access control into existing applications can be complex, and may not be cost effective.

Example

An organization hosts a multi-tenant Software as a Service (SaaS) application in Azure. The application includes a website that tenants can use to manage the application for their own users. The application allows tenants to access the tenant’s website by using a federated identity that is generated by Active Directory Federation Services (ADFS) when a user is authenticated by that organization’s own Active Directory. Figure 2 shows an overview of this process.

[Image: https://docs.microsoft.com/en-us/previous-versions/msp-n-p/images/dn589790.08a7dc69f2c68895ed3bb44f821ee0b6(en-us,pandp.10).png]

Figure 2 - How users at a large enterprise subscriber access the application

In the scenario shown in Figure 2, tenants authenticate with their own identity provider (step 1), in this case ADFS. After successfully authenticating a tenant, ADFS issues a token. The client browser forwards this token to the SaaS application’s federation provider, which trusts tokens issued by the tenant’s ADFS, in order to get back a token that is valid for the SaaS federation provider (step 2). If necessary, the SaaS federation provider performs a transformation on the claims in the token into claims that the application recognizes (step 3) before returning the new token to the client browser. The application trusts tokens issued by the SaaS federation provider and uses the claims in the token to apply authorization rules (step 4).

Tenants will not need to remember separate credentials to access the application, and an administrator at the tenant’s company will be able to configure in its own ADFS the list of users that can access the application.

Related Patterns and Guidance

At this time, there are no related patterns and guidance.

Gatekeeper Pattern

Protect applications and services by using a dedicated host instance that acts as a broker between clients and the application or service, validates and sanitizes requests, and passes requests and data between them. This can provide an additional layer of security, and limit the attack surface of the system.

Context and Problem

Applications expose their functionality to clients by accepting and processing requests. In cloud-hosted scenarios, applications expose endpoints to which clients connect, and typically include the code to handle the requests from clients. This code may perform authentication and validation, some or all request processing, and is likely to access storage and other services on behalf of the client.

If a malicious user is able to compromise the system and gain access to the application’s hosting environment, the security mechanisms it uses such as credentials and storage keys, and the services and data it accesses, are exposed. As a result, the malicious user may be able to gain unrestrained access to sensitive information and other services.

Solution

To minimize the risk of clients gaining access to sensitive information and services, decouple hosts or tasks that expose public endpoints from the code that processes requests and accesses storage. This can be achieved by using a façade or a dedicated task that interacts with clients and then hands off the request (perhaps through a decoupled interface) to the hosts or tasks that will handle the request. Figure 1 shows a high-level view of this approach.

[Image: https://docs.microsoft.com/en-us/previous-versions/msp-n-p/images/dn589793.4d5737628f4fc8e778f773277d6b351b(en-us,pandp.10).png]

Figure 1 - High level overview of this pattern

The gatekeeper pattern may be used simply to protect storage, or it may be used as a more comprehensive façade to protect all of the functions of the application. The important factors are:

  • Controlled validation. The Gatekeeper validates all requests, and rejects those that do not meet validation requirements.
  • Limited risk and exposure. The Gatekeeper does not have access to the credentials or keys used by the trusted host to access storage and services. If the Gatekeeper is compromised, the attacker does not obtain access to these credentials or keys.
  • Appropriate security. The Gatekeeper runs in a limited privilege mode, whereas the remainder of the application runs in the full trust mode required to access storage and services. If the Gatekeeper is compromised, it cannot directly access the application services or data.

This pattern effectively acts like a firewall in a typical network topology. It allows the Gatekeeper to examine requests and make a decision about whether to pass the request on to the trusted host (sometimes called the Keymaster) that performs the required tasks. This decision will typically require the Gatekeeper to validate and sanitize the request content before passing it on to the trusted host.

Issues and Considerations

Consider the following points when deciding how to implement this pattern:

  • Ensure that the trusted hosts to which the Gatekeeper passes requests expose only internal or protected endpoints, and connect only to the Gatekeeper. The trusted hosts should not expose any external endpoints or interfaces.
  • The Gatekeeper must run in a limited privilege mode. Typically this means running the Gatekeeper and the trusted host in separate hosted services or virtual machines.
  • The Gatekeeper should not perform any processing related to the application or services, or access any data. Its function is purely to validate and sanitize requests. The trusted hosts may need to perform additional validation of requests, but the core validation should be performed by the Gatekeeper.
  • Use a secure communication channel (HTTPS, SSL, or TLS) between the Gatekeeper and the trusted hosts or tasks where this is possible. However, some hosting environments may not support HTTPS on internal endpoints.
  • Adding the extra layer to the application to implement the Gatekeeper pattern is likely to have some impact on performance of the application due to the additional processing and network communication it requires.
  • The Gatekeeper instance could be a single point of failure. To minimize the impact of a failure, consider deploying additional instances and using an autoscaling mechanism to ensure sufficient capacity to maintain availability.

When to Use this Pattern

This pattern is ideally suited for:

  • Applications that handle sensitive information, expose services that must have a high degree of protection from malicious attacks, or perform mission-critical operations that must not be disrupted.
  • Distributed applications where it is necessary to perform request validation separately from the main tasks, or to centralize this validation to simplify maintenance and administration.

Example

In a cloud-hosted scenario, this pattern can be implemented by decoupling the Gatekeeper role or virtual machine from the trusted roles and services in an application by using an internal endpoint, a queue, or storage as an intermediate communication mechanism. Figure 2 shows the basic principle when using an internal endpoint.

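A minimal sketch of this hand-off is shown below. The Gatekeeper only validates and sanitizes the incoming request and then forwards it over an internal channel (an internal endpoint, a queue, or storage) to the trusted role; it holds no storage keys or service credentials of its own. The interface, size limit, and sanitization rule are assumptions for illustration, not part of the original guidance.

C#

using System.Text.RegularExpressions;

// Abstraction over the internal channel (internal endpoint, queue, or storage)
// that connects the Gatekeeper to the trusted role. Illustrative only.
public interface ITrustedHostChannel
{
  void Forward(string sanitizedPayload);
}

public class GatekeeperRole
{
  private readonly ITrustedHostChannel channel;

  public GatekeeperRole(ITrustedHostChannel channel)
  {
    this.channel = channel;
  }

  // Validates and sanitizes an incoming request body, then hands it to the
  // trusted host. Rejected requests never reach the trusted role.
  public bool TryHandle(string requestBody)
  {
    if (string.IsNullOrWhiteSpace(requestBody) || requestBody.Length > 4096)
    {
      return false; // Reject malformed or oversized requests.
    }

    // Example sanitization rule: strip everything outside a restricted character set.
    var sanitized = Regex.Replace(requestBody, @"[^\w\s\-.,:{}""]", string.Empty);

    channel.Forward(sanitized);
    return true;
  }
}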

[Image: https://docs.microsoft.com/en-us/previous-versions/msp-n-p/images/dn589793.99ee7d398ddad1f78262834903aa066b(en-us,pandp.10).png]

Figure 2 - An example of the pattern using Cloud Services web and worker roles

Related Patterns and Guidance

The following pattern may also be relevant when implementing this pattern:

  • Valet Key Pattern. When communicating between the Gatekeeper and trusted roles it is good practice to enhance security by using keys or tokens that limit permissions for accessing resources. The Valet Key pattern describes how to use a token or key that provides clients with restricted direct access to a specific resource or service.

Health Endpoint Monitoring Pattern

Implement functional checks within an application that external tools can access through exposed endpoints at regular intervals. This pattern can help to verify that applications and services are performing correctly.

Context and Problem

It is good practice—and often a business requirement—to monitor web applications, and middle-tier and shared services, to ensure that they are available and performing correctly. However, it is more difficult to monitor services running in the cloud than it is to monitor on-premises services. For example, you do not have full control of the hosting environment, and the services typically depend on other services provided by platform vendors and others.

There are also many factors that affect cloud-hosted applications such as network latency, the performance and availability of the underlying compute and storage systems, and the network bandwidth between them. The service may fail entirely or partially due to any of these factors. Therefore, you must verify at regular intervals that the service is performing correctly to ensure the required level of availability—which might be part of your Service Level Agreement (SLA).

Solution

Implement health monitoring by sending requests to an endpoint on the application. The application should perform the necessary checks, and return an indication of its status.

A health monitoring check typically combines two factors: the checks (if any) performed by the application or service in response to the request to the health verification endpoint, and analysis of the result by the tool or framework that is performing the health verification check. The response code indicates the status of the application and, optionally, any components or services it uses. The latency or response time check is performed by the monitoring tool or framework. Figure 1 shows an overview of the implementation of this pattern.

[Image: https://docs.microsoft.com/en-us/previous-versions/msp-n-p/images/dn589789.7f56dc9a010ee1fe1fdf4ffbf37fec98(en-us,pandp.10).png]

Figure 1 - Overview of the pattern

Additional checks that might be carried out by the health monitoring code in the application include:

  • Checking cloud storage or a database for availability and response time.
  • Checking other resources or services located within the application, or located elsewhere but used by the application.

Several existing services and tools are available for monitoring web applications by submitting a request to a configurable set of endpoints, and evaluating the results against a set of configurable rules. It is relatively easy to create a service endpoint whose sole purpose is to perform some functional tests on the system.

Typical checks that can be performed by the monitoring tools include the following (a minimal custom probe is sketched after this list):

  • Validating the response code. For example, an HTTP response of 200 (OK) indicates that the application responded without error. The monitoring system might also check for other response codes to give a more comprehensive indication of the result.
  • Checking the content of the response to detect errors, even when a 200 (OK) status code is returned. This can detect errors that affect only a section of the returned web page or service response. For example, checking the title of a page or looking for a specific phrase that indicates the correct page was returned.
  • Measuring the response time, which indicates a combination of the network latency and the time that the application took to execute the request. An increasing value may indicate an emerging problem with the application or network.
  • Checking resources or services located outside the application, such as a content delivery network used by the application to deliver content from global caches.
  • Checking for expiration of SSL certificates.
  • Measuring the response time of a DNS lookup for the URL of the application in order to measure DNS latency and DNS failures.
  • Validating the URL returned by the DNS lookup to ensure correct entries. This can help to avoid malicious request redirection through a successful attack on the DNS server.
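
A minimal custom probe that applies the first three checks above (status code, content, and response time) might look like the following sketch. The endpoint URL, expected phrase, and time threshold are assumptions for illustration.

C#

using System;
using System.Diagnostics;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

public static class HealthProbe
{
  // Checks the health endpoint for a 200 (OK) response, an expected phrase in
  // the body, and an acceptable response time. All values shown are illustrative.
  public static async Task<bool> CheckAsync(
    string url = "https://example.com/HealthCheck",
    string expectedPhrase = "Healthy",
    int maxMilliseconds = 2000)
  {
    using (var client = new HttpClient { Timeout = TimeSpan.FromMilliseconds(maxMilliseconds) })
    {
      var stopwatch = Stopwatch.StartNew();
      try
      {
        var response = await client.GetAsync(url);
        stopwatch.Stop();

        if (response.StatusCode != HttpStatusCode.OK ||
            stopwatch.ElapsedMilliseconds > maxMilliseconds)
        {
          return false;
        }

        var body = await response.Content.ReadAsStringAsync();
        return body.Contains(expectedPhrase);
      }
      catch (HttpRequestException)
      {
        return false; // Endpoint unreachable or the request failed.
      }
      catch (TaskCanceledException)
      {
        return false; // The request timed out.
      }
    }
  }
}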

It is also useful, where possible, to run these checks from different on-premises or hosted locations to measure and compare response times from different places. Ideally you should monitor applications from locations that are close to customers in order to get an accurate view of the performance from each location. In addition to providing a more robust checking mechanism, the results may influence the choice of deployment location for the application—and whether to deploy it in more than one datacenter.

Tests should also be run against all the service instances that customers use to ensure the application is working correctly for all customers. For example, if customer storage is spread across more than one storage account, the monitoring process must check all of these.

Issues and Considerations

Consider the following points when deciding how to implement this pattern:

  • How to validate the response. For example, is a single 200 (OK) status code sufficient to verify the application is working correctly? While this provides the most basic measure of application availability, and is the minimum implementation of this pattern, it provides little information about the operations, trends, and possible upcoming issues in the application.

    Note

    Make sure that the application does correctly return a 200 status code only when the target resource is found and processed. In some scenarios, such as when using a master page to host the target web page, the server may send back a 200 OK status code instead of a 404 Not Found code, even when the target content page was not found.

  • The number of endpoints to expose for an application. One approach is to expose at least one endpoint for the core services the application uses and another for ancillary or lower priority services, allowing different levels of importance to be assigned to each monitoring result. Also consider exposing more endpoints, such as one for each core service, to provide additional monitoring granularity. For example, a health verification check might check the database, storage, and an external geocoding service an application uses; each requiring a different level of uptime and response time. The application may still be healthy if the geocoding service, or some other background task, is unavailable for a few minutes.

  • Whether to use the same endpoint for monitoring as is used for general access, but to a specific path designed for health verification checks; for example, /HealthCheck/{GUID}/ on the general access endpoint. This allows some functional tests within the application to be executed by the monitoring tools, such as adding a new user registration, signing in, and placing a test order, while also verifying that the general access endpoint is available.

  • The type of information to collect in the service in response to monitoring requests, and how to return this information. Most existing tools and frameworks look only at the HTTP status code that the endpoint returns. To return and validate additional information it may be necessary to create a custom monitoring utility or service.

  • How much information to collect. Performing excessive processing during the check may overload the application and impact other users, and the time it takes may exceed the timeout of the monitoring system so that it marks the application as unavailable. Most applications include instrumentation such as error handlers and performance counters that log performance and detailed error information, and this may be sufficient instead of returning additional information from a health verification check.

  • How to configure security for the monitoring endpoints to protect them from public access, which might expose the application to malicious attacks, risk the exposure of sensitive information, or attract denial of service (DoS) attacks. Typically this should be done in the application configuration so that it can be updated easily without restarting the application. Consider using one or more of the following techniques:

    • Secure the endpoint by requiring authentication. This may be achieved by using an authentication security key in the request header or by passing credentials with the request, provided that the monitoring service or tool supports authentication.
    • Use an obscure or hidden endpoint. For example, expose the endpoint on a different IP address to that used by the default application URL, configure the endpoint on a non-standard HTTP port, and/or use a complex path to the test page. It is usually possible to specify additional endpoint addresses and ports in the application configuration, and add entries for these endpoints to the DNS server if required to avoid having to specify the IP address directly.
    • Expose a method on an endpoint that accepts a parameter such as a key value or an operation mode value. Depending on the value supplied for this parameter when a request is received the code can perform a specific test or set of tests, or return a 404 (Not Found) error if the parameter value is not recognized. The recognized parameter values could be set in the application configuration.

    Note

    DoS attacks are likely to have less impact on a separate endpoint that performs basic functional tests without compromising the operation of the application. Ideally, avoid using a test that might expose sensitive information. If you must return any information that might be useful to an attacker, consider how you will protect the endpoint and the data from unauthorized access. In this case just relying on obscurity is not sufficient. You should also consider using an HTTPS connection and encrypting any sensitive data, although this will increase the load on the server.

  • How to access an endpoint that is secured using authentication. Not all tools and frameworks can be configured to include credentials with the health verification request. For example, Microsoft Azure built-in health verification features cannot provide authentication credentials. Some third party alternatives that can do so are Pingdom, Panopta, NewRelic, and Statuscake.

  • How to ensure that the monitoring agent is performing correctly. One approach is to expose an endpoint that simply returns a value from the application configuration or a random value that can be used to test the agent.

Note

Also ensure that the monitoring system performs checks on itself, such as a self-test and built-in test, to avoid it issuing false positive results.

When to Use this Pattern

This pattern is ideally suited for:

  • Monitoring websites and web applications to verify availability.
  • Monitoring websites and web applications to check for correct operation.
  • Monitoring middle-tier or shared services to detect and isolate a failure that could disrupt other applications.
  • To complement existing instrumentation within the application, such as performance counters and error handlers. Health verification checking does not replace the requirement for logging and auditing in the application. Instrumentation can provide valuable information for an existing framework that monitors counters and error logs to detect failures or other issues. However, it cannot provide information if the application is unavailable.

Example

The following code examples, taken from the HealthCheckController class in the HealthEndpointMonitoring.Web project that is included in the samples you can download for this guide, demonstrate exposing an endpoint for performing a range of health checks.

The CoreServices method, shown below, performs a series of checks on services used in the application. If all of the tests execute without error, the method returns a 200 (OK) status code. If any of the tests raises an exception, the method returns a 500 (Internal Error) status code. The method could optionally return additional information when an error occurs, if the monitoring tool or framework is able to make use of it.

C#

public ActionResult CoreServices()
{
  try
  {
    // Run a simple check to ensure the database is available.
    DataStore.Instance.CoreHealthCheck();

    // Run a simple check on our external service.
    MyExternalService.Instance.CoreHealthCheck();
  }
  catch (Exception ex)
  {
    Trace.TraceError("Exception in basic health check: {0}", ex.Message);

    // This can optionally return different status codes based on the exception.
    // Optionally it could return more details about the exception.
    // The additional information could be used by administrators who access the
    // endpoint with a browser, or using a ping utility that can display the
    // additional information.
    return new HttpStatusCodeResult((int)HttpStatusCode.InternalServerError);
  }
  return new HttpStatusCodeResult((int)HttpStatusCode.OK);
}

The ObscurePath method shows how you can read a path from the application configuration and use it as the endpoint for tests. This example also shows how you can accept an ID as a parameter and use it to check for valid requests.

C#

public ActionResult ObscurePath(string id)
{
  // The id could be used as a simple way to obscure or hide the endpoint.
  // The id to match could be retrieved from configuration and, if matched,
  // perform a specific set of tests and return the result. If not matched it
  // could return a 404 Not Found status.

  // The obscure path can be set through configuration in order to hide the endpoint.
  var hiddenPathKey = CloudConfigurationManager.GetSetting("Test.ObscurePath");

  // If the value passed does not match that in configuration, return 404 "Not Found".
  if (!string.Equals(id, hiddenPathKey))
  {
    return new HttpStatusCodeResult((int)HttpStatusCode.NotFound);
  }

  // Else continue and run the tests...
  // Return results from the core services test.
  return this.CoreServices();
}

The TestResponseFromConfig method shows how you can expose an endpoint that performs a check for a specified configuration setting value.

C#

public ActionResult TestResponseFromConfig()
{
  // Health check that returns a response code set in configuration for testing.
  var returnStatusCodeSetting = CloudConfigurationManager.GetSetting(
                                                          "Test.ReturnStatusCode");

  int returnStatusCode;

  if (!int.TryParse(returnStatusCodeSetting, out returnStatusCode))
  {
    returnStatusCode = (int)HttpStatusCode.OK;
  }

  return new HttpStatusCodeResult(returnStatusCode);
}

Monitoring Endpoints in Azure Hosted Applications

Some options for monitoring endpoints in Azure applications are:

  • Use the built-in features of Microsoft Azure, such as the Management Services or Traffic Manager.
  • Use a third party service or a framework such as Microsoft System Center Operations Manager.
  • Create a custom utility or a service that runs on your own or on a hosted server.

Note

Even though Azure provides a reasonably comprehensive set of monitoring options, you may decide to use additional services and tools to provide extra information.

Azure Management Services provides a comprehensive built-in monitoring mechanism built around alert rules. The Alerts section of the Management Services page in the Azure management portal allows you to configure up to ten alert rules per subscription for your services. These rules specify a condition and a threshold value for a service such as CPU load, or the number of requests or errors per second, and the service can automatically send email notifications to addresses you define in each rule.

The conditions you can monitor vary depending on the hosting mechanism you choose for your application (such as Web Sites, Cloud Services, Virtual Machines, or Mobile Services), but all of these include the capability to create an alert rule that uses a web endpoint you specify in the settings for your service. This endpoint should respond in a timely way so that the alert system can detect that the application is operating correctly.

Note

For more information about creating monitoring alerts, see Management Services on MSDN.

If you host your application in Azure Cloud Services web and worker roles or Virtual Machines, you can take advantage of one of the built-in services in Azure called Traffic Manager. Traffic Manager is a routing and load-balancing service that can distribute requests to specific instances of your Cloud Services hosted application based on a range of rules and settings.

In addition to routing requests, Traffic Manager pings a URL, port, and relative path you specify on a regular basis to determine which instances of the application defined in its rules are active and are responding to requests. If it detects a status code 200 (OK) it marks the application as available, any other status code causes Traffic Manager to mark the application as offline. You can view the status in the Traffic Manager console, and configure the rule to reroute requests to other instances of the application that are responding.

However, keep in mind that Traffic Manager will only wait ten seconds to receive a response from the monitoring URL. Therefore, you should ensure that your health verification code executes within this timescale, allowing for network latency for the round trip from Traffic Manager to your application and back again.

Note

For more information about using Windows Traffic Manager to monitor your applications, see Microsoft Azure Traffic Manager on MSDN. Traffic Manager is also discussed in Multiple Datacenter Deployment Guidance.

Related Patterns and Guidance

The following guidance may also be relevant when implementing this pattern:

  • Instrumentation and Telemetry Guidance. Checking the health of services and components is typically done by probing, but it is also useful to have the appropriate information in place to monitor application performance and detect events that occur at runtime. This data can be transmitted back to monitoring tools to provide an additional feature for health monitoring. The Instrumentation and Telemetry guidance explores the process of gathering remote diagnostics information that is collected by instrumentation in applications.

Index Table Pattern

Create indexes over the fields in data stores that are frequently referenced by query criteria. This pattern can improve query performance by allowing applications to more quickly locate the data to retrieve from a data store.

Context and Problem

Many data stores organize the data for a collection of entities by using the primary key. An application can use this key to locate and retrieve data. Figure 1 shows an example of a data store holding customer information. The primary key is the Customer ID.

[Image: https://docs.microsoft.com/en-us/previous-versions/msp-n-p/images/dn589791.d23d1b2ebe305cefe712a2fd80e1fa75(en-us,pandp.10).png]

Figure 1 - Customer information organized by the primary key (Customer ID)

While the primary key is valuable for queries that fetch data based on the value of this key, an application might not be able to use the primary key if it needs to retrieve data based on some other field. In the Customers example, an application cannot use the Customer ID primary key to retrieve customers if it queries data solely by specifying criteria that reference the value of some other attribute, such as the town in which the customer is located. To perform a query such as this may require the application to fetch and examine every customer record, and this could be a slow process.

Many relational database management systems support secondary indexes. A secondary index is a separate data structure that is organized by one or more non-primary (secondary) key fields, and it indicates where the data for each indexed value is stored. The items in a secondary index are typically sorted by the value of the secondary keys to enable fast lookup of data. These indexes are usually maintained automatically by the database management system.

You can create as many secondary indexes as are required to support the different queries that your application performs. For example, in a Customers table in a relational database where the customer ID is the primary key, it may be beneficial to add a secondary index over the town field if the application frequently looks up customers by the town in which they reside.

However, although secondary indexes are a common feature of relational systems, most NoSQL data stores used by cloud applications do not provide an equivalent feature.

Solution

If the data store does not support secondary indexes, you can emulate them manually by creating your own index tables. An index table organizes the data by a specified key. Three strategies are commonly used for structuring an index table, depending on the number of secondary indexes that are required and the nature of the queries that an application performs:

  • Duplicate the data in each index table but organize it by different keys (complete denormalization). Figure 2 shows index tables that organize the same customer information by Town and LastName:

    [Image: https://docs.microsoft.com/en-us/previous-versions/msp-n-p/images/dn589791.714d97ec0fc214bec6915a669a278d98(en-us,pandp.10).png]

    Figure 2 - Index tables implementing secondary indexes for customer data. The data is duplicated in each index table.

    This strategy may be appropriate if the data is relatively static compared to the number of times it is queried by using each key. If the data is more dynamic, the processing overhead of maintaining each index table may become too great for this approach to be useful. Additionally, if the volume of data is very large, the amount of space required to store the duplicate data will be significant.

  • Create normalized index tables organized by different keys and reference the original data by using the primary key rather than duplicating it, as shown in Figure 3. The original data is referred to as a fact table:

    [Image: https://docs.microsoft.com/en-us/previous-versions/msp-n-p/images/dn589791.5acf9c5b8878a6b6d26715b68aebf9b7(en-us,pandp.10).png]

    Figure 3 - Index tables implementing secondary indexes for customer data. The data is referenced by each index table.

    This technique saves space and reduces the overhead of maintaining duplicate data. The disadvantage is that an application has to perform two lookup operations to find data by using a secondary key (find the primary key for the data in the index table, and then look up the data in the fact table by using the primary key).

  • Create partially normalized index tables organized by different keys that duplicate frequently retrieved fields. Reference the original data to access less frequently accessed fields. Figure 4 shows this structure.

    [Image: https://docs.microsoft.com/en-us/previous-versions/msp-n-p/images/dn589791.d7add9ebef33a1eff441d7efd006d952(en-us,pandp.10).png]

    Figure 4 - Index tables implementing secondary indexes for customer data. Commonly accessed data is duplicated in each index table.

    Using this technique, you can strike a balance between the first two approaches. The data for common queries can be retrieved quickly by using a single lookup, while the space and maintenance overhead is not as great as duplicating the entire data set.

If an application frequently queries data by specifying a combination of values (for example, “Find all customers that live in Redmond and that have a last name of Smith”), you could implement the keys to the items in the index table as a concatenation of the Town attribute and the LastName attribute, as shown in Figure 5. The keys are sorted by Town, and then by LastName for records that have the same value for Town.

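A minimal sketch of building and looking up such a composite key is shown below; the separator character, entity shape, and in-memory lookup are assumptions for illustration (a real index table would be held in the data store and queried by key range).

C#

using System.Collections.Generic;
using System.Linq;

public class CustomerIndexEntry
{
  public string CompositeKey { get; set; } // For example, "Redmond:Smith".
  public string CustomerId { get; set; }   // Primary key of the row in the fact table.
}

public static class CustomerIndex
{
  // Builds the composite key used to order and search the index table.
  public static string MakeKey(string town, string lastName)
  {
    return string.Format("{0}:{1}", town, lastName);
  }

  // Finds the primary keys of all customers in a town with a given last name.
  public static IEnumerable<string> Find(
    IEnumerable<CustomerIndexEntry> index, string town, string lastName)
  {
    var key = MakeKey(town, lastName);
    return index.Where(e => e.CompositeKey == key).Select(e => e.CustomerId);
  }
}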

[Image: https://docs.microsoft.com/en-us/previous-versions/msp-n-p/images/dn589791.363b18e3040f87e7a0d2908feb39f108(en-us,pandp.10).png]

Figure 5 - An index table based on composite keys

Index tables can speed up query operations over sharded data, and are especially useful where the shard key is hashed. Figure 6 shows an example where the shard key is a hash of the Customer ID. The index table can organize data by the non-hashed value (Town and LastName), and provide the hashed shard key as the lookup data. This can save the application from repeatedly calculating hash keys (which may be an expensive operation) if it needs to retrieve data that falls within a range, or it needs to fetch data in order of the non-hashed key. For example, a query such as “Find all customers that live in Redmond” can be quickly resolved by locating the matching items in the index table (which are all stored in a contiguous block), and then following the references to the customer data by using the shard keys stored in the index table.

[Image: https://docs.microsoft.com/en-us/previous-versions/msp-n-p/images/dn589791.3df3a51a9694669cd1eb00f54b21fdc1(en-us,pandp.10).png]

Figure 6 - An index table providing quick look up for sharded data

Issues and Considerations

Consider the following points when deciding how to implement this pattern:

  • The overhead of maintaining secondary indexes can be significant. You must analyze and understand the queries that your application uses. Only create index tables where they are likely to be used regularly. Do not create speculative index tables to support queries that an application does not perform, or that an application performs only very occasionally.

  • Duplicating data in an index table can add a significant overhead in terms of storage costs and the effort required to maintain multiple copies of data.

  • Implementing an index table as a normalized structure that references the original data may require an application to perform two lookup operations to find data. The first operation searches the index table to retrieve the primary key, and the second uses the primary key to fetch the data.

  • If a system incorporates a number of index tables over very large data sets, it can be difficult to maintain consistency between index tables and the original data. It might be possible to design the application around the eventual consistency model. For example, to insert, update, or delete data, an application could post a message to a queue and let a separate task perform the operation and maintain the index tables that reference this data asynchronously. For more information about implementing eventual consistency, see the Data Consistency primer.

    Note

    Microsoft Azure storage tables support transactional updates for changes made to data held in the same partition (referred to as entity group transactions). If you can store the data for a fact table and one or more index tables in the same partition, you may be able to use this feature to help ensure consistency. A minimal batch sketch follows this list.

    Microsoft Azure 存储表支持对同一分区(称为实体组事务)中保存的数据所做的更改的事务性更新。如果可以将事实表和一个或多个索引表的数据存储在同一个分区中,则可以使用此特性来帮助确保一致性。

  • Index tables may themselves be partitioned or sharded.

    索引表本身可以分区或分片。
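
The following fragment is a rough sketch of the entity group transaction mentioned in the note above, written against the classic Microsoft.WindowsAzure.Storage table client; the table name, partition key, and entity layout are hypothetical. Because the fact entity and its index entry share a partition key, the batch either succeeds or fails as a whole.

C#

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

public static class IndexConsistencyExample
{
  public static void UpsertCustomerWithIndex()
  {
    var account = CloudStorageAccount.Parse("UseDevelopmentStorage=true");
    var table = account.CreateCloudTableClient().GetTableReference("CustomersAndIndex");
    table.CreateIfNotExists();

    // Fact entity and index entry deliberately share the same partition key.
    var customer = new DynamicTableEntity("shard-42", "customer-1001");
    customer.Properties["Town"] = new EntityProperty("Redmond");

    var indexEntry = new DynamicTableEntity("shard-42", "town-Redmond-customer-1001");
    indexEntry.Properties["CustomerRowKey"] = new EntityProperty("customer-1001");

    // An entity group transaction: all operations are applied atomically.
    var batch = new TableBatchOperation();
    batch.InsertOrReplace(customer);
    batch.InsertOrReplace(indexEntry);
    table.ExecuteBatch(batch);
  }
}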

When to Use this Pattern 何时使用此模式

Use this pattern to improve query performance when an application frequently needs to retrieve data by using a key other than the primary (or shard) key.

当应用程序经常需要通过使用主键(或碎片键)以外的其他键来检索数据时,可以使用此模式来提高查询性能。

This pattern might not be suitable when:

在下列情况下,这种模式可能不适合:

  • Data is volatile. An index table may become out of date very quickly, rendering it ineffective or making the overhead of maintaining the index table greater than any savings made by using it. 数据是不稳定的。索引表可能会很快过时,使其无效,或使维护索引表的开销大于使用索引表所节省的开销
  • A field selected as the secondary key for an index table is very non-discriminating and can only have a small set of values (for example, gender). 选择作为索引表的次要键的字段是非常无区别的,并且只能有一小组值(例如性别)
  • The balance of the data values for a field selected as the secondary key for an index table are highly skewed. For example, if 90% of the records contain the same value in a field, then creating and maintaining an index table to look up data based on this field may exert more overhead than scanning sequentially through the data. However, if queries very frequently target values that lie in the remaining 10%, this index may be useful. You must understand the queries that your application is performing, and how frequently they are performed. 选择作为索引表的次要键的字段的数据值的平衡是高度倾斜的。例如,如果90% 的记录在一个字段中包含相同的值,那么创建和维护一个索引表来基于该字段查找数据可能会比顺序扫描数据带来更多的开销。但是,如果查询频繁地以位于其余10% 中的值为目标,则此索引可能非常有用。您必须了解应用程序正在执行的查询,以及执行查询的频率

Example 例子

Azure storage tables provide a highly scalable key/value data store for applications running in the cloud. Applications store and retrieve data values by specifying a key. The data values can contain multiple fields, but the structure of a data item is opaque to table storage, which simply handles a data item as an array of bytes.

Azure 存储表为在云中运行的应用程序提供了高度可伸缩的键/值数据存储。应用程序通过指定键来存储和检索数据值。数据值可以包含多个字段,但数据项的结构对于表存储是不透明的,表存储只是将数据项作为字节数组处理。

Azure storage tables also support sharding. The sharding key comprises two elements, a partition key and a row key. Items that have the same partition key are stored in the same partition (shard), and the items are stored in row key order within a shard. Table storage is optimized for performing range queries that fetch data falling within a contiguous range of row key values within a partition. If you are building cloud applications that store information in Azure tables, you should structure your data with this feature in mind.

Azure 存储表也支持分片。分片键包括两个元素,一个分区键和一个行键。具有相同分区键的项存储在相同的分区(分片)中,并且这些项按行键顺序存储在分片中。对表存储进行了优化,以执行范围查询,从而获取分区内连续范围内的行键值数据。如果您正在构建在 Azure 表中存储信息的云应用程序,那么您应该在构建数据时考虑到这个特性。
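
As a brief sketch of such a range query (again using the classic Microsoft.WindowsAzure.Storage table client, with a hypothetical table and key values), a single partition can be scanned for a contiguous range of row keys:

C#

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

public static class RangeQueryExample
{
  public static void PrintRange()
  {
    var table = CloudStorageAccount.Parse("UseDevelopmentStorage=true")
      .CreateCloudTableClient()
      .GetTableReference("Customers");

    // All items with partition key "shard-01" and row keys from "Sm" up to (but not including) "Sn".
    string filter = TableQuery.CombineFilters(
      TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, "shard-01"),
      TableOperators.And,
      TableQuery.CombineFilters(
        TableQuery.GenerateFilterCondition("RowKey", QueryComparisons.GreaterThanOrEqual, "Sm"),
        TableOperators.And,
        TableQuery.GenerateFilterCondition("RowKey", QueryComparisons.LessThan, "Sn")));

    foreach (var entity in table.ExecuteQuery(new TableQuery<DynamicTableEntity>().Where(filter)))
    {
      System.Console.WriteLine(entity.RowKey);
    }
  }
}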

For example, consider an application that stores information about movies. The application frequently queries movies by genre (Action, Documentary, Historical, Comedy, Drama, and so on). You could create an Azure table with partitions for each genre by using the genre as the partition key, and specifying the movie name as the row key, as shown in Figure 7.

例如,考虑一个存储有关电影信息的应用程序。该应用程序经常按类型查询电影(动作片、纪录片、历史片、喜剧片、戏剧片等)。通过使用类型作为分区键,并指定电影名作为行键,您可以为每种类型创建一个包含分区的 Azure 表,如图7所示。

![Figure 7](https://docs.microsoft.com/en-us/previous-versions/msp-n-p/images/dn589791.acb553d7f508bceb5d96621a3cfedb82(en-us,pandp.10).png)

Figure 7 - Movie data stored in an Azure Table, partitioned by genre and sorted by movie name

图7-存储在 Azure 表中的电影数据,按照类型进行分区,并按照电影名称进行排序

This approach is less effective if the application also needs to query movies by starring actor. In this case, you can create a separate Azure table that acts as an index table. The partition key is the actor and the row key is the movie name. The data for each actor will be stored in separate partitions. If a movie stars more than one actor, the same movie will occur in multiple partitions.

如果应用程序还需要通过主演演员查询电影,则此方法的效率较低。在这种情况下,您可以创建一个单独的 Azure 表作为索引表。分区键是参与者,行键是电影名称。每个参与者的数据将存储在单独的分区中。如果一部电影由多个演员主演,同一部电影将在多个分区中出现。

You can duplicate the movie data in the values held by each partition by adopting the first approach described in the Solution section above. However, it is likely that each movie will be replicated several times (once for each actor), so it may be more efficient to partially denormalize the data to support the most common queries (such as the names of the other actors) and enable an application to retrieve any remaining details by including the partition key necessary to find the complete information in the genre partitions. This approach is described by the third option in the Solution section. Figure 8 depicts this approach.

通过采用上面解决方案一节中描述的第一种方法,您可以在每个分区所持有的值中复制电影数据。然而,每部电影很可能会被复制几次(每个演员一次) ,因此部分反规范化数据以支持最常见的查询(比如其他演员的名称)可能更有效,并且通过包含在类型分区中查找完整信息所必需的分区键,允许应用程序检索任何剩余的细节。该方法由解决方案部分中的第三个选项描述。图8描述了这种方法。

![Figure 8](https://docs.microsoft.com/en-us/previous-versions/msp-n-p/images/dn589791.b6c3ece11ebfd016e5251ca4708f5b0a(en-us,pandp.10).png)

Figure 8 - Actor partitions acting as index tables for movie data

图8-用作影片数据索引表的 Actor 分区
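
A rough sketch of such an index table entity and a query against it is shown below, using the classic Microsoft.WindowsAzure.Storage table client; the entity layout (OtherActors, GenrePartitionKey) is an assumption made for illustration and is not taken from the guide's sample code.

C#

using System.Collections.Generic;
using Microsoft.WindowsAzure.Storage.Table;

// One row per (actor, movie), partially denormalized to serve the most common queries.
public class MovieByActorEntity : TableEntity
{
  public MovieByActorEntity() { }

  public MovieByActorEntity(string actor, string movieName)
  {
    this.PartitionKey = actor;   // index key: the starring actor
    this.RowKey = movieName;     // movies stored in name order within each actor partition
  }

  public string OtherActors { get; set; }        // denormalized detail
  public string GenrePartitionKey { get; set; }  // reference back to the full record in the genre table
}

public static class MovieIndexQueries
{
  public static IEnumerable<MovieByActorEntity> GetMoviesForActor(CloudTable indexTable, string actor)
  {
    var query = new TableQuery<MovieByActorEntity>().Where(
      TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, actor));
    return indexTable.ExecuteQuery(query);
  }
}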

Related Patterns and Guidance 相关模式及指引

The following patterns and guidance may also be relevant when implementing this pattern:

下列模式和指南在实现此模式时也可能有用:

  • Data Consistency Primer 数据一致性入门. An index table must be maintained as the data that it indexes changes. In the cloud, it may not be possible or appropriate to perform operations that update an index as part of the same transaction that modifies the data—an eventually consistent approach may be more suitable. This primer provides information on the issues surrounding eventual consistency. .索引表必须在其索引的数据更改时进行维护。在云中,作为修改数据的同一事务的一部分来执行更新索引的操作可能是不可能或不合适的ーー最终一致的方法可能更合适。这本入门书提供了有关最终一致性问题的信息
  • Sharding Pattern 分片模式. The Index Table pattern is frequently used in conjunction with data partitioned by using shards. The Sharding pattern provides more information on how to divide a data store into a set of shards. .Index Table 模式经常与使用碎片进行分区的数据一起使用。碎片模式提供了有关如何将数据存储区划分为一组碎片的更多信息
  • Materialized View Pattern 实体化视图模式. Instead of indexing data to support queries that summarize data, it may be more appropriate to create a materialized view of the data. This pattern describes how to support efficient summary queries by generating pre-populated views over data. .与其索引数据以支持汇总数据的查询,不如创建数据的物化视图。此模式描述如何通过在数据上生成预填充视图来支持高效的摘要查询

Leader Election Pattern 领袖选举模式

  • Article文章
  • 08/26/2015 2015年8月26日
  • 11 minutes to read还有11分钟

Coordinate the actions performed by a collection of collaborating task instances in a distributed application by electing one instance as the leader that assumes responsibility for managing the other instances. This pattern can help to ensure that task instances do not conflict with each other, cause contention for shared resources, or inadvertently interfere with the work that other task instances are performing.

通过选择一个实例作为负责管理其他实例的领导,协调分布式应用程序中协作任务实例集合执行的操作。此模式有助于确保任务实例之间不会相互冲突,不会引起共享资源的争用,也不会无意中干扰其他任务实例正在执行的工作。

Context and Problem 背景与问题

A typical cloud application consists of many tasks acting in a coordinated manner. These tasks could all be instances running the same code and requiring access to the same resources, or they might be working together in parallel to perform the individual parts of a complex calculation.

一个典型的云应用程序由许多以协调方式运行的任务组成。这些任务都可以是运行相同代码并需要访问相同资源的实例,或者它们可以并行工作以执行复杂计算的各个部分。

The task instances might run autonomously for much of the time, but it may also be necessary to coordinate the actions of each instance to ensure that they don’t conflict, cause contention for shared resources, or inadvertently interfere with the work that other task instances are performing. For example:

任务实例可能在大部分时间里是自主运行的,但是也有必要协调每个实例的操作,以确保它们不会发生冲突、引起对共享资源的争用,或者无意中干扰其他任务实例正在执行的工作。例如:

  • In a cloud-based system that implements horizontal scaling, multiple instances of the same task could be running simultaneously with each instance servicing a different user. If these instances write to a shared resource, it may be necessary to coordinate their actions to prevent each instance from blindly overwriting the changes made by the others. 在实现水平伸缩的基于云的系统中,同一任务的多个实例可以与服务于不同用户的每个实例同时运行。如果这些实例写入共享资源,则可能需要协调它们的操作,以防止每个实例盲目地覆盖其他实例所做的更改
  • If the tasks are performing individual elements of a complex calculation in parallel, the results will need to be aggregated when they all complete. 如果任务并行地执行复杂计算中的单个元素,那么当它们全部完成时,将需要聚合结果

Because the task instances are all peers, there is no natural leader that can act as the coordinator or aggregator.

因为任务实例都是对等的,所以没有可以充当协调器或聚合器的自然领导者。

Solution 解决方案

A single task instance should be elected to act as the leader, and this instance should coordinate the actions of the other subordinate task instances. If all of the task instances are running the same code, they could all be capable of acting as the leader. Therefore, the election process must be managed carefully to prevent two or more instances taking over the leader role at the same time.

应该选择一个任务实例作为领导者,该实例应该协调其他从属任务实例的操作。如果所有的任务实例都运行相同的代码,那么它们都可以充当领导者。因此,必须认真管理选举过程,以防止两个或两个以上的情况同时接管领导角色。

The system must provide a robust mechanism for selecting the leader. This mechanism must be able to cope with events such as network outages or process failures. In many solutions, the subordinate task instances monitor the leader through some type of heartbeat mechanism, or by polling. If the designated leader terminates unexpectedly, or a network failure renders the leader inaccessible by the subordinate task instances, it will be necessary for them to elect a new leader.

系统必须为选择领导者提供一个强有力的机制。此机制必须能够处理诸如网络中断或进程失败之类的事件。在许多解决方案中,从属任务实例通过某种类型的心跳机制或轮询来监视领导者。如果指定的领导意外终止,或者网络故障使得从属任务实例无法访问该领导,则它们需要选择一个新的领导。

There are several strategies available for electing a leader amongst a set of tasks in a distributed environment, including:

在分布式环境中,有几种策略可用于在一组任务中选举领导人,其中包括:

  • Selecting the task instance with the lowest-ranked instance or process ID (a minimal sketch of this strategy appears after this list). 选择具有排名最低的实例或进程 ID 的任务实例
  • Racing to obtain a shared distributed mutex. The first task instance that acquires the mutex is the leader. However, the system must ensure that, if the leader terminates or becomes disconnected from the rest of the system, the mutex is released to allow another task instance to become the leader. 竞争获得共享的分布式互斥锁。获取互斥量的第一个任务实例是领导者。但是,系统必须确保,如果领导终止或断开与系统其他部分的连接,就会释放互斥锁,以允许另一个任务实例成为领导
  • Implementing one of the common leader election algorithms such as the Bully Algorithm or the Ring Algorithm. These algorithms are relatively straightforward, but there are also a number of more sophisticated techniques available. These algorithms assume that each candidate participating in the election has a unique ID, and that they can communicate with the other candidates in a reliable manner. 实现一种常见的领导人选举算法,如恶霸算法(Bully Algorithm)或环形算法(Ring Algorithm)。这些算法相对简单,但也有许多更复杂的技术可用。这些算法假设参与选举的每个候选人都有一个唯一的 ID,并且它们可以以可靠的方式与其他候选人进行通信
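
As a minimal sketch of the first strategy in the list above (and only a sketch: it assumes every instance can obtain an identical, up-to-date list of peer IDs from some shared registry), the instance with the lowest ID simply treats itself as the leader:

C#

using System;
using System.Collections.Generic;
using System.Linq;

public static class LowestIdElection
{
  // Returns true if this instance should act as the leader.
  // Every instance reaches the same conclusion as long as all of them
  // see the same membership list.
  public static bool IsLeader(Guid myInstanceId, IEnumerable<Guid> allInstanceIds)
  {
    return allInstanceIds.OrderBy(id => id).First() == myInstanceId;
  }
}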

Issues and Considerations 问题及考虑

Consider the following points when deciding how to implement this pattern:

在决定如何实现此模式时,请考虑以下几点:

  • The process of electing a leader should be resilient to transient and persistent failures. 选举领导人的过程应该对短暂和持续的失败具有弹性
  • It must be possible to detect when the leader has failed or has become otherwise unavailable (perhaps due to a communications failure). The speed at which such detection is required will be system dependent. Some systems may be able to function for a short while without a leader, during which time a transient fault that caused the leader to become unavailable may have been rectified. In other cases, it may be necessary to detect leader failure immediately and trigger a new election. 它必须有可能检测到当领导人已经失败或已成为其他方面无法(也许是由于通信故障)。需要这种检测的速度将取决于系统。有些系统可能能够在没有引导器的情况下短时间内正常工作,在此期间,导致引导器不可用的暂时故障可能已得到纠正。在其他情况下,可能有必要立即发现领导人的失败,并引发新的选举
  • In a system that implements horizontal autoscaling, the leader could be terminated if the system scales back and shuts down some of the computing resources. 在实现水平自动缩放的系统中,如果系统缩放并关闭一些计算资源,则可能会终止引导程序
  • Using a shared distributed mutex introduces a dependency on the availability of the external service that provides the mutex. This service may constitute a single point of failure. If this service should become unavailable for any reason, the system will not be able to elect a leader. 使用共享的分布式互斥体引入了对提供互斥体的外部服务的可用性的依赖性。此服务可能构成单点故障。如果该服务因任何原因不能使用,系统将无法选举领导人
  • Using a single dedicated process as the leader is a relatively straightforward approach. However, if the process fails there may be a significant delay while it is restarted, and the resultant latency may affect the performance and response times of other processes if they are waiting for the leader to coordinate an operation. 使用一个专门的过程作为领导是一个相对简单的方法。但是,如果流程失败,在重新启动时可能会有显著的延迟,并且如果其他流程正在等待领导者协调操作,那么由此产生的延迟可能会影响其他流程的性能和响应时间
  • Implementing one of the leader election algorithms manually provides the greatest flexibility for tuning and optimizing the code. 手动实现一种领导者选举算法可以为调优和优化代码提供最大的灵活性

When to Use this Pattern 何时使用此模式

Use this pattern when the tasks in a distributed application, such as a cloud-hosted solution, require careful coordination and there is no natural leader.

当分布式应用程序中的任务(比如云托管的解决方案)需要仔细协调且没有自然的领导者时,可以使用此模式。

Note

注意

Avoid making the leader a bottleneck in the system. The purpose of the leader is to coordinate the work performed by the subordinate tasks, and it does not necessarily have to participate in this work itself—although it should be capable of doing so if the task is not elected as the leader.

避免使领导成为系统中的瓶颈。领导者的目的是协调下属任务所完成的工作,它不一定要参与这项工作本身ーー尽管如果任务没有被选为领导者,它应该有能力这样做。

This pattern might not be suitable:

这种模式可能并不合适:

  • If there is a natural leader or dedicated process that can always act as the leader. For example, it may be possible to implement a singleton process that coordinates the task instances. If this process fails or becomes unhealthy, the system can shut it down and restart it. 如果有一个天生的领导者或专注的过程,总是可以作为领导者。例如,可以实现一个协调任务实例的单例进程。如果此进程失败或变得不健康,系统可以关闭它并重新启动它
  • If the coordination between tasks can be easily achieved by using a more lightweight mechanism. For example, if several task instances simply require coordinated access to a shared resource, a preferable solution might be to use optimistic or pessimistic locking to control access to that resource. 如果任务之间的协调可以通过使用更轻量级的机制轻松实现。例如,如果多个任务实例只需要对共享资源进行协调访问,则可取的解决方案可能是使用乐观或悲观锁定来控制对该资源的访问
  • If a third-party solution is more appropriate. For example, the Microsoft Azure HDInsight service (based on Apache Hadoop) uses the services provided by Apache Zookeeper to coordinate the map/reduce tasks that aggregate and summarize data. It's also possible to install and configure Zookeeper on an Azure Virtual Machine and integrate it into your own solutions, or use the Zookeeper prebuilt virtual machine image available from Microsoft Open Technologies. For more information, see Apache Zookeeper on Microsoft Azure on the Microsoft Open Technologies website. 如果第三方解决方案更合适。例如,微软 Azure HDInsight 服务(基于 Apache Hadoop)使用 Apache ZooKeeper 提供的服务来协调 map/reduce 任务,聚合和汇总数据。也可以在 Azure 虚拟机上安装和配置 ZooKeeper,并将其集成到您自己的解决方案中,或者使用可从 Microsoft Open Technologies 获得的 ZooKeeper 预构建的虚拟机映像。有关更多信息,请参见微软开放技术网站上的 Apache Zookeeper on Microsoft Azure。

Example 例子

The DistributedMutex project in the LeaderElection solution included in the sample code available for this guide shows how to use a lease on an Azure storage blob to provide a mechanism for implementing a shared distributed mutex. This mutex can be used to elect a leader amongst a group of role instances in an Azure cloud service. The first role instance to acquire the lease is elected the leader, and remains the leader until it releases the lease or until it is unable to renew the lease. Other role instances can continue to monitor the blob lease in the event that the leader is no longer available.

本指南提供的示例代码中包含的 LeaderElection 解决方案中的 DistributedMutex 项目展示了如何使用 Azure 存储 blob 上的租约来提供实现共享分布式互斥对象的机制。这个互斥体可以用来在 Azure 云服务中的一组角色实例中选择一个领导者。获得租约的第一个角色实例被选为领导者,并且在其释放租约或无法续签租约之前一直保持领导者的身份。在领导者不再可用的情况下,其他角色实例可以继续监视 blob 租约。

Note

注意

A blob lease is an exclusive write lock over a blob. A single blob can be the subject of a maximum of one lease at any one point in time. A role instance can request a lease over a specified blob, and it will be granted the lease if no other lease over the same blob is currently held by this or any other role instance, otherwise the request will throw an exception.
To reduce the possibility that a faulted role instance retains the lease indefinitely, specify a lifetime for the lease. When this expires, the lease becomes available. However, while a role instance holds the lease it can request that the lease is renewed, and it will be granted the lease for a further period of time. The role instance can continually repeat this process if it wishes to retain the lease.
For more information on how to lease a blob, see Lease Blob (REST API) on MSDN.

Blob 租约是对 blob 的独占写锁。在任何一个时间点上,单个 blob 都可以是最多一个租约的主题。角色实例可以请求一个指定的 blob 上的租约,如果这个或任何其他角色实例当前没有持有同一 blob 上的其他租约,则将授予该租约,否则请求将抛出异常。若要减少出错的角色实例无限期保留租约的可能性,请指定租约的生存期。当这个过期时,租约就可以使用了。但是,当角色实例持有租约时,它可以请求续订租约,并且将在更长的时间内授予该租约。如果角色实例希望保留租约,它可以不断重复此过程。有关如何租赁 Blob 的更多信息,请参见 MSDN 上的 LeaseBlob (RESTAPI)。
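
For orientation, the fragment below is a rough sketch of the underlying lease operations using the classic Microsoft.WindowsAzure.Storage blob client; it is not the BlobLeaseManager class from the sample, and the container and blob names are placeholders.

C#

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public static class BlobLeaseSketch
{
  public static void Demo()
  {
    var container = CloudStorageAccount.Parse("UseDevelopmentStorage=true")
      .CreateCloudBlobClient()
      .GetContainerReference("leases");
    container.CreateIfNotExists();

    var blob = container.GetBlockBlobReference("MyLeaderCoordinatorTask");
    if (!blob.Exists())
    {
      blob.UploadText(string.Empty);  // the blob must exist before it can be leased
    }

    // Acquire a lease with a finite lifetime so a crashed leader cannot hold it forever.
    string leaseId = blob.AcquireLease(TimeSpan.FromSeconds(45), null);

    // While this instance is the leader, keep renewing before the lease expires.
    blob.RenewLease(AccessCondition.GenerateLeaseCondition(leaseId));

    // On shutdown (or when stepping down), release the lease so another instance can be elected.
    blob.ReleaseLease(AccessCondition.GenerateLeaseCondition(leaseId));
  }
}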

The BlobDistributedMutex class in the example contains the RunTaskWhenMutexAcquired method that enables a role instance to attempt to obtain a lease over a specified blob. The details of the blob (the name, container, and storage account) are passed to the constructor in a BlobSettings object when the BlobDistributedMutex object is created (this object is a simple struct that is included in the sample code). The constructor also accepts a Task that references the code that the role instance should run if it successfully acquires the lease over the blob and is elected the leader. Note that the code that handles the low-level details of obtaining the lease is implemented in a separate helper class named BlobLeaseManager.

该示例中的 BlobDistributedMutex 类包含 RunTaskWhenMutexAcquired 方法,该方法使角色实例能够尝试获得指定 blob 上的租约。创建 BlobDistributedMutex 对象时,blob 的详细信息(名称、容器和存储帐户)通过 BlobSettings 对象传递给构造函数(这个对象是示例代码中包含的一个简单结构)。构造函数还接受一个 Task,该 Task 引用角色实例在成功获取 blob 租约并被选为领导者时应该运行的代码。请注意,处理获取租约的底层细节的代码是在名为 BlobLeaseManager 的单独助手类中实现的。

C#

public class BlobDistributedMutex
{
  ...
  private readonly BlobSettings blobSettings;
  private readonly Func<CancellationToken, Task> taskToRunWhenLeaseAcquired;
  ...

  public BlobDistributedMutex(BlobSettings blobSettings,
      Func<CancellationToken, Task> taskToRunWhenLeaseAcquired)
  {
    this.blobSettings = blobSettings;
    this.taskToRunWhenLeaseAcquired = taskToRunWhenLeaseAcquired;
  }

  public async Task RunTaskWhenMutexAcquired(CancellationToken token)
  {
    var leaseManager = new BlobLeaseManager(blobSettings);
    await this.RunTaskWhenBlobLeaseAcquired(leaseManager, token);
  }
  ...

The RunTaskWhenMutexAcquired method in the code sample above invokes the RunTaskWhenBlobLeaseAcquired method shown in the following code sample to actually acquire the lease. The RunTaskWhenBlobLeaseAcquired method runs asynchronously. If the lease is successfully acquired, the role instance has been elected the leader. The purpose of the taskToRunWhenLeaseAcquired delegate is to perform the work that coordinates the other role instances. If the lease is not acquired, another role instance has been elected as the leader and the current role instance remains a subordinate. Note that the TryAcquireLeaseOrWait method is a helper method that uses the BlobLeaseManager object to obtain the lease.

上面的代码示例中的 RunTaskWhenMutexAcquired 方法调用下面的代码示例中显示的 RunTaskWhenBlobLeaseAcquired 方法来实际获得租约。RunTaskWhenBlobLeaseAcquired 方法异步运行。如果成功获得租约,则角色实例已被选为领导者。taskToRunWhenLeaseAcquired 委托的用途是执行协调其他角色实例的工作。如果未获得租约,则另一个角色实例已被选为领导者,并且当前角色实例仍然是从属的。请注意,TryAcquireLeaseOrWait 方法是一个帮助器方法,它使用 BlobLeaseManager 对象获取租约。

C#

  ...
  private async Task RunTaskWhenBlobLeaseAcquired(
    BlobLeaseManager leaseManager, CancellationToken token)
  {
    while (!token.IsCancellationRequested)
    {
      // Try to acquire the blob lease.
      // Otherwise wait for a short time before trying again.
      string leaseId = await this.TryAcquireLeaseOrWait(leaseManager, token);

      if (!string.IsNullOrEmpty(leaseId))
      {
        // Create a new linked cancellation token source so that if either the
        // original token is cancelled or the lease cannot be renewed, the
        // leader task can be cancelled.
        using (var leaseCts =
          CancellationTokenSource.CreateLinkedTokenSource(new[] { token }))
        {
          // Run the leader task.
          var leaderTask = this.taskToRunWhenLeaseAcquired.Invoke(leaseCts.Token);
          ...
        }
      }
    }
  }
  ...

The task started by the leader also executes asynchronously. While this task is running, the RunTaskWhenBlobLeaseAcquired method shown in the following code sample periodically attempts to renew the lease. This action helps to ensure that the role instance remains the leader. In the sample solution, the delay between renewal requests is less than the time specified for the duration of the lease in order to prevent another role instance from being elected the leader. If the renewal fails for any reason, the task is cancelled.

领导者启动的任务也异步执行。在此任务运行期间,下面的代码示例中显示的 RunTaskWhenBlobLeaseAcquired 方法定期尝试续订租约。此操作有助于确保角色实例仍然是领导者。在示例解决方案中,更新请求之间的延迟小于为租约期限指定的时间,以防止另一个角色实例被选为领导者。如果由于任何原因续订失败,任务将被取消。

If the lease fails to be renewed or the task is cancelled (possibly as a result of the role instance shutting down), the lease is released. At this point, this or another role instance might be elected as the leader. The code extract below shows this part of the process.

如果租约未能续订或任务被取消(可能是由于角色实例关闭) ,则释放租约。此时,这个或另一个角色实例可能被选为领导者。下面的代码摘要显示了该过程的这一部分。

C#

  ...
  private async Task RunTaskWhenBlobLeaseAcquired(
    BlobLeaseManager leaseManager, CancellationToken token)
  {
    while (...)
    {
      ...
      if (...)
      {
        ...
        using (var leaseCts = ...)
        {
          ...
          // Keep renewing the lease in regular intervals.
          // If the lease cannot be renewed, then the task completes.
          var renewLeaseTask =
            this.KeepRenewingLease(leaseManager, leaseId, leaseCts.Token);

          // When any task completes (either the leader task itself or when it could
          // not renew the lease) then cancel the other task.
          await CancelAllWhenAnyCompletes(leaderTask, renewLeaseTask, leaseCts);
        }
      }
    }
  }
  ...
}

The KeepRenewingLease method is another helper method that uses the BlobLeaseManager object to renew the lease. The CancelAllWhenAnyCompletes method cancels the tasks specified as the first two parameters.

KeepRenewingLease 方法是另一个帮助器方法,它使用 BlobLeaseManager 对象更新租约。CancelAllWhenAnyCompletes 方法取消作为前两个参数指定的任务。

Figure 1 illustrates the functions of the BlobDistributedMutex class.

图1说明了 BlobDistributedMutex 类的功能。

![Figure 1](https://docs.microsoft.com/en-us/previous-versions/msp-n-p/images/dn568104.cde8dc5436dcacf7a517855bc7fc52ff(en-us,pandp.10).png)

Figure 1 - Using the BlobDistributedMutex class to elect a leader and run a task that coordinates operations

图1-使用 BlobDistributedMutex 类选择一个领导者并运行一个协调操作的任务

The following code example shows how to use the BlobDistributedMutex class in a worker role. This code obtains a lease over a blob named MyLeaderCoordinatorTask in the leases container in development storage, and specifies that the code defined in the MyLeaderCoordinatorTask method should run if the role instance is elected the leader.

下面的代码示例演示如何在辅助角色中使用 BlobDistributedMutex 类。这段代码在开发存储中的 leases 容器中获得一个名为 MyLeaderCoordinatorTask 的 blob 的租约,并指定如果角色实例被选为领导者,则应运行在 MyLeaderCoordinatorTask 方法中定义的代码。

C#

var settings = new BlobSettings(CloudStorageAccount.DevelopmentStorageAccount,
  "leases", "MyLeaderCoordinatorTask");
var cts = new CancellationTokenSource();
var mutex = new BlobDistributedMutex(settings, MyLeaderCoordinatorTask);
mutex.RunTaskWhenMutexAcquired(cts.Token);
...

// Method that runs if the role instance is elected the leader
private static async Task MyLeaderCoordinatorTask(CancellationToken token)
{
  ...
}

Note the following points about the sample solution:

关于示例解决方案,请注意以下几点:

  • The blob is a potential single point of failure. If the blob service becomes unavailable, or the blob is inaccessible, the leader will be unable to renew the lease and no other role instance will be able to obtain the lease. In this case, no role instance will be able to act as the leader. However, the blob service is designed to be resilient, so complete failure of the blob service is considered to be extremely unlikely. 这个斑点是一个潜在的单点故障。如果 blob 服务不可用,或者 blob 不可访问,领导者将无法续签租约,其他角色实例将无法获得租约。在这种情况下,任何角色实例都不能充当领导者。但是,blob 服务被设计成具有弹性,因此被认为极不可能发生 blob 服务的完全失败
  • If the task being performed by the leader stalls, the leader might continue to renew the lease, preventing any other role instance from obtaining the lease and taking over the leader role in order to coordinate tasks. In the real world, the health of the leader should be checked at frequent intervals. 如果领导者执行的任务停止,领导者可能会继续续签租约,阻止任何其他角色实例获得租约,并接管领导者角色以协调任务。在现实世界中,领导者的健康状况应该经常检查
  • The election process is non-deterministic. You cannot make any assumptions about which role instance will obtain the blob lease and become the leader. 选举过程是不确定的。您不能假设哪个角色实例将获得 blob 租约并成为领导者
  • The blob used as the target of the blob lease should not be used for any other purpose. If a role instance attempts to store data in this blob, this data will not be accessible unless the role instance is the leader and holds the blob lease. 作为 blob 租赁的目标的 blob 不应该用于任何其他目的。如果角色实例试图在这个 blob 中存储数据,那么除非角色实例是领导者并持有 blob 租约,否则将无法访问这些数据

Related Patterns and Guidance 相关模式及指引

The following guidance may also be relevant when implementing this pattern:

在实施这一模式时,下列指导意见也可能是相关的:

  • Autoscaling Guidance 自动缩放导航. It may be possible to start and stop instances of the task hosts as the load on the application varies. Autoscaling can help to maintain throughput and performance during times of peak processing. .随着应用程序负载的变化,可以启动和停止任务主机的实例。自动伸缩有助于在处理高峰期间保持吞吐量和性能
  • Compute Partitioning Guidance 计算分区指南. This guidance describes how to allocate tasks to hosts in a cloud service in a way that helps to minimize running costs while maintaining the scalability, performance, availability, and security of the service. .本指南描述了如何以一种有助于最小化运行成本的方式将任务分配给云服务中的主机,同时维护服务的可伸缩性、性能、可用性和安全性

Materialized View Pattern 实体化视图模式

  • Article文章
  • 08/26/2015 2015年8月26日
  • 7 minutes to read还有7分钟

Generate prepopulated views over the data in one or more data stores when the data is formatted in a way that does not favor the required query operations. This pattern can help to support efficient querying and data extraction, and improve application performance.

当数据的格式化方式不利于所需的查询操作时,通过一个或多个数据存储区中的数据生成预填充视图。此模式有助于支持高效的查询和数据提取,并提高应用程序的性能。

Context and Problem 背景与问题

When storing data, the priority for developers and data administrators is often focused on how the data is stored, as opposed to how it is read. The chosen storage format is usually closely related to the format of the data, requirements for managing data size and data integrity, and the kind of store in use. For example, when using a NoSQL document store, the data is often represented as a series of aggregates, each of which contains all of the information for that entity.

在存储数据时,开发人员和数据管理员的优先级通常集中在如何存储数据,而不是如何读取数据。所选择的存储格式通常与数据的格式、管理数据大小和数据完整性的要求以及正在使用的存储类型密切相关。例如,在使用 NoSQL 文档存储时,数据通常表示为一系列聚合,每个聚合包含该实体的所有信息。

However, this may have a negative effect on queries. When a query requires only a subset of the data from some entities, such as a summary of orders for several customers without all of the order details, it must extract all of the data for the relevant entities in order to obtain the required information.

但是,这可能会对查询产生负面影响。当查询只需要来自某些实体的数据的一个子集时,比如没有所有订单详细信息的几个客户的订单摘要,它必须提取相关实体的所有数据,以获得所需的信息。

Solution 解决方案

To support efficient querying, a common solution is to generate, in advance, a view that materializes the data in a format most suited to the required results set. The Materialized View pattern describes generating prepopulated views of data in environments where the source data is not in a format that is suitable for querying, where generating a suitable query is difficult, or where query performance is poor due to the nature of the data or the data store.

为了支持高效的查询,一种常见的解决方案是预先生成一个视图,该视图以最适合所需结果集的格式实现数据。物化视图模式描述了在源数据格式不适合查询、难以生成合适查询或者由于数据或数据存储的性质查询性能较差的环境中生成预填充的数据视图。

These materialized views, which contain only data required by a query, allow applications to quickly obtain the information they need. In addition to joining tables or combining data entities, materialized views may include the current values of calculated columns or data items, the results of combining values or executing transformations on the data items, and values specified as part of the query. A materialized view may even be optimized for just a single query.

这些物化视图(仅包含查询所需的数据)允许应用程序快速获取所需的信息。除了连接表或合并数据实体之外,物化视图还可以包括计算列或数据项的当前值、合并值或对数据项执行转换的结果以及作为查询的一部分指定的值。物化视图甚至可以针对单个查询进行优化。

A key point is that a materialized view and the data it contains is completely disposable because it can be entirely rebuilt from the source data stores. A materialized view is never updated directly by an application, and so it is effectively a specialized cache.

关键的一点是,物化视图及其包含的数据是完全可抛弃的,因为它可以完全从源数据存储重新构建。物化视图永远不会被应用程序直接更新,因此它实际上是一个专用缓存。

When the source data for the view changes, the view must be updated to include the new information. This may occur automatically on an appropriate schedule, or when the system detects a change to the original data. In other cases it may be necessary to regenerate the view manually.

当视图的源数据发生更改时,必须更新视图以包含新信息。这可能在适当的时间表上自动发生,或者当系统检测到对原始数据的更改时发生。在其他情况下,可能需要手动重新生成视图。

Figure 1 shows an example of how the Materialized View pattern might be used.

图1显示了如何使用材质化视图模式的示例。

![Figure 1](https://docs.microsoft.com/en-us/previous-versions/msp-n-p/images/dn589782.4c2a971af08d43ad170690390ae5abe7(en-us,pandp.10).png)

Figure 1 - The Materialized View pattern

图1-物化视图模式

Issues and Considerations 问题及考虑

Consider the following points when deciding how to implement this pattern:

在决定如何实现此模式时,请考虑以下几点:

  • Consider how and when the view will be updated. Ideally it will be regenerated in response to an event indicating a change to the source data, although in some circumstances this may lead to excessive overheads if the source data changes rapidly. Alternatively, consider using a scheduled task, an external trigger, or a manual action to initiate regeneration of the view. 考虑如何以及何时更新视图。理想情况下,它将在响应指示源数据更改的事件时重新生成,尽管在某些情况下,如果源数据快速更改,这可能会导致过多的开销。或者,可以考虑使用计划任务、外部触发器或手动操作来启动视图的再生
  • In some systems, such as when using the Event Sourcing pattern to maintain a store of only the events that modified the data, materialized views may be necessary. Prepopulating views by examining all events to determine the current state may be the only way to obtain information from the event store. In cases other than when using Event Sourcing it is necessary to gauge the advantages that a materialized view may offer. Materialized views tend to be specifically tailored to one, or a small number of, queries. If many queries must be used, maintaining materialized views may result in unacceptable storage capacity requirements and storage cost. 在某些系统中,例如在使用事件源模式(Event Sourcing)只维护修改数据的事件的存储时,可能需要物化视图。通过检查所有事件来预填充视图以确定当前状态可能是从事件存储区获取信息的唯一方法。在不使用事件源的情况下,有必要评估物化视图可能提供的优势。物化视图倾向于专门针对一个或少数查询定制。如果必须使用许多查询,维护物化视图可能会导致不可接受的存储容量需求和存储成本
  • Consider the impact on data consistency when generating the view, and when updating the view if this occurs on a schedule. If the source data is changing at the point when the view is generated, the copy of the data in the view may not be fully consistent with the original data. 在生成视图时考虑对数据一致性的影响,以及在按计划发生时更新视图时考虑对数据一致性的影响。如果在生成视图时源数据正在更改,则视图中数据的副本可能与原始数据不完全一致
  • Consider where you will store the view. The view does not have to be located in the same store or partition as the original data. It could be a subset of the data, combined from several different partitions. 考虑将视图存储在何处。视图不必与原始数据位于同一存储区或分区中。它可以是由来自几个不同分区的数据组合而成的子集
  • If the view is transient and is used only to improve query performance by reflecting the current state of the data, or to improve scalability, it may be stored in cache or in a less reliable location. It can be rebuilt if lost. 如果视图是临时的,并且仅用于通过反映数据的当前状态来提高查询性能,或者用于提高可伸缩性,则视图可以存储在缓存中,也可以存储在不太可靠的位置。丢了可以重建
  • When defining a materialized view, maximize its value by adding data items or columns to the view based on computation or transformation of existing data items, on values passed in the query, or on combinations of these values where this is appropriate. 在定义物化视图时,根据现有数据项的计算或转换、查询中传递的值或这些值的适当组合,将数据项或列添加到视图中,从而最大限度地提高其价值
  • Where the storage mechanism supports it, consider indexing the materialized view to further maximize performance. Most relational databases support indexing for views, as do Big Data solutions based on Apache Hadoop. 在存储机制支持它的地方,考虑索引物化视图以进一步最大化性能。大多数关系数据库支持视图索引,基于 Apache Hadoop 的大数据解决方案也是如此

When to Use this Pattern 何时使用此模式

This pattern is ideally suited for:

这种模式非常适合:

  • Creating materialized views over data that is difficult to query directly, or where queries must be very complex in order to extract data that is stored in a normalized, semi-structured, or unstructured way. 在难以直接查询的数据上创建物化视图,或者查询必须非常复杂才能提取以规范化、半结构化或非结构化方式存储的数据
  • Creating temporary views that can dramatically improve query performance, or can act directly as source views or data transfer objects (DTOs) for the UI, for reporting, or for display. 创建临时视图可以显著提高查询性能,或者可以直接作为 UI、报告或显示的源视图或数据传输对象(DTO)
  • Supporting occasionally connected or disconnected scenarios where connection to the data store is not always available. The view may be cached locally in this case. 支持偶尔连接或断开连接的场景,其中并不总是可以连接到数据存储区。在这种情况下,视图可以在本地缓存
  • Simplifying queries and exposing data for experimentation in a way that does not require knowledge of the source data format. For example, by joining different tables in one or more databases, or one or more domains in NoSQL stores, and then formatting the data to suit its eventual use. 以一种不需要了解源数据格式的方式简化查询并公开用于实验的数据。例如,通过在一个或多个数据库中联接不同的表,或在 NoSQL 存储中联接一个或多个域,然后对数据进行格式化以适应其最终使用
  • Providing access to specific subsets of the source data that, for security or privacy reasons, should not be generally accessible, open to modification, or fully exposed to users. 提供对源数据的特定子集的访问,出于安全或隐私的原因,这些子集不应该是通常可访问的、可以修改的或完全暴露给用户的
  • Bridging the gap between different data stores when using them together based on their individual capabilities. For example, by using a cloud store that is efficient for writing as the reference data store, and a relational database that offers good query and read performance to hold the materialized views. 根据不同数据存储区各自的能力组合使用它们时,弥合它们之间的差异。例如,使用一个写入效率高的云存储作为参考数据存储,以及一个提供良好查询和读取性能的关系数据库来保存物化视图

This pattern might not be suitable in the following situations:

这种模式可能不适用于下列情况:

  • The source data is simple and easy to query. 源数据简单且易于查询
  • The source data changes very quickly, or can be accessed without using a view. The processing overhead of creating views may be avoidable in these cases. 源数据变化非常快,或者可以不使用视图访问。在这些情况下,创建视图的处理开销可以避免
  • Consistency is a high priority. The views may not always be fully consistent with the original data. 一致性优先级很高。视图可能并不总是与原始数据完全一致

Example 例子

Figure 2 shows an example of using the Materialized View pattern. Data in the Order, OrderItem, and Customer tables in separate partitions in a Microsoft Azure storage account are combined to generate a view containing the total sales value for each product in the Electronics category, together with a count of the number of customers who made purchases of each item.

图2显示了一个使用材质化视图模式的示例。在微软 Azure 存储账户的单独分区中的 Order、 OrderItem 和 Customer 表中的数据被组合起来,生成一个视图,其中包含电子类别中每个产品的总销售额,以及购买每个产品的客户数量。

![Figure 2](https://docs.microsoft.com/en-us/previous-versions/msp-n-p/images/dn589782.37100b42c46a96cae6d62f609c87f458(en-us,pandp.10).png)

Figure 2 - Using the Materialized View pattern to generate a summary of sales

图2-使用物化视图模式生成销售总结

Creating this materialized view requires complex queries. However, by exposing the query result as materialized view, users can easily obtain the results and use them directly or incorporate them in another query. The view is likely to be used in a reporting system or dashboard, and so can be updated on a scheduled basis such as weekly.

创建这个物化视图需要复杂的查询。但是,通过将查询结果公开为物化视图,用户可以轻松获得结果并直接使用它们或将它们合并到另一个查询中。该视图可能用于报告系统或仪表板,因此可以按计划(如每周)进行更新。
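
As a rough sketch of the summarization step only (using plain LINQ over hypothetical in-memory Order and OrderItem collections rather than the actual table storage query), the rows of the view could be computed like this:

C#

using System.Collections.Generic;
using System.Linq;

public class Order { public int OrderId { get; set; } public int CustomerId { get; set; } }

public class OrderItem
{
  public int OrderId { get; set; }
  public string Product { get; set; }
  public string Category { get; set; }
  public decimal Price { get; set; }
  public int Quantity { get; set; }
}

public class SalesSummaryRow
{
  public string Product { get; set; }
  public decimal TotalSales { get; set; }
  public int CustomerCount { get; set; }
}

public static class SalesSummaryView
{
  // One row per Electronics product: total sales value and number of distinct purchasing customers.
  public static IEnumerable<SalesSummaryRow> Build(IEnumerable<Order> orders, IEnumerable<OrderItem> items)
  {
    var customerByOrderId = orders.ToDictionary(o => o.OrderId, o => o.CustomerId);

    return items
      .Where(i => i.Category == "Electronics")
      .GroupBy(i => i.Product)
      .Select(g => new SalesSummaryRow
      {
        Product = g.Key,
        TotalSales = g.Sum(i => i.Price * i.Quantity),
        CustomerCount = g.Select(i => customerByOrderId[i.OrderId]).Distinct().Count()
      });
  }
}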

Note

注意

Although this example utilizes Azure table storage, many relational database management systems also provide native support for materialized views.

虽然这个例子使用了 Azure 表存储,但是许多关系数据库管理系统也为物化视图提供了本地支持。

Related Patterns and Guidance 相关模式及指引

The following patterns and guidance may also be relevant when implementing this pattern:

下列模式和指南在实现此模式时也可能有用:

  • Data Consistency Primer 数据一致性入门. It is necessary to maintain the summary information held in a materialized view so that it reflects the underlying data values. As the data values change, it may not be feasible to update the summary data in real time, and instead an eventually consistent approach must be adopted. The Data Consistency primer summarizes the issues surrounding maintaining consistency over distributed data, and describes the benefits and tradeoffs of different consistency models. .有必要维护物化视图中保存的摘要信息,以便它反映底层数据值。随着数据值的变化,实时更新汇总数据可能是不可行的,因此必须采用最终一致的方法。数据一致性入门总结了围绕在分布式数据上维护一致性的问题,并描述了不同一致性模型的优缺点
  • Command and Query Responsibility Segregation (CQRS) Pattern 命令和查询责任分离(CQRS)模式. You may be able to use this pattern to update the information in a materialized view by responding to events that occur when the underlying data values change. .通过响应底层数据值更改时发生的事件,您可以使用此模式更新物化视图中的信息
  • Event Sourcing Pattern 事件源模式. You can use this pattern in conjunction with the CQRS pattern to maintain the information in a materialized view. When the data values on which a materialized view is based are modified, the system can raise events that describe these modifications and save them in an event store. .您可以将此模式与 CQRS 模式结合使用,以维护物化视图中的信息。当修改物化视图所基于的数据值时,系统可以引发描述这些修改的事件并将它们保存在事件存储区中
  • Index Table Pattern 索引表模式. The data in a materialized view is typically organized by a primary key, but queries may need to retrieve information from this view by examining data in other fields. You can use the Index Table pattern to create secondary indexes over data sets for data stores that do not support native secondary indexes. .物化视图中的数据通常由主键组织,但查询可能需要通过检查其他字段中的数据来从该视图检索信息。可以使用 Index Table 模式在不支持本机辅助索引的数据存储区的数据集上创建辅助索引

Pipes and Filters Pattern 管道及过滤器模式

  • Article文章
  • 08/26/2015 2015年8月26日
  • 12 minutes to read还有12分钟
Decompose a task that performs complex processing into a series of discrete elements that can be reused. This pattern can improve performance, scalability, and reusability by allowing task elements that perform the processing to be deployed and scaled independently.

将执行复杂处理的任务分解为一系列可重用的离散元素。通过允许独立地部署和扩展执行处理的任务元素,该模式可以提高性能、可伸缩性和可重用性。

Context and Problem 背景与问题

An application may be required to perform a variety of tasks of varying complexity on the information that it processes. A straightforward but inflexible approach to implementing this application could be to perform this processing as a monolithic module. However, this approach is likely to reduce the opportunities for refactoring the code, optimizing it, or reusing it if parts of the same processing are required elsewhere within the application.

应用程序可能需要对其处理的信息执行各种不同复杂度的任务。实现此应用程序的一种直接但不灵活的方法是将此处理作为单片模块执行。但是,如果应用程序中的其他地方需要相同处理的某些部分,这种方法可能会减少重构代码、优化代码或重用代码的机会。

Figure 1 illustrates the issues with processing data by using the monolithic approach. An application receives and processes data from two sources. The data from each source is processed by a separate module that performs a series of tasks to transform this data, before passing the result to the business logic of the application.

图1说明了使用整体方法处理数据的问题。应用程序接收和处理来自两个源的数据。来自每个源的数据由一个单独的模块处理,该模块执行一系列任务来转换此数据,然后将结果传递给应用程序的业务逻辑。

![Figure 1](https://docs.microsoft.com/en-us/previous-versions/msp-n-p/images/dn568100.dda1687b9f2689cc895a5730875cb945(en-us,pandp.10).png)

Figure 1 - A solution implemented by using monolithic modules

图1-使用单片机模块实现的解决方案

Some of the tasks that the monolithic modules perform are functionally very similar, but the modules have been designed separately. The code that implements the tasks is closely coupled within a module, and this code has been developed with little or no thought given to reuse or scalability.

单片机模块执行的一些任务在功能上非常相似,但是模块是单独设计的。实现这些任务的代码紧密地耦合在一个模块中,开发这些代码时很少或根本没有考虑到重用或可伸缩性。

However, the processing tasks performed by each module, or the deployment requirements for each task, could change as business requirements are amended. Some tasks might be compute-intensive and could benefit from running on powerful hardware, while others might not require such expensive resources. Furthermore, additional processing might be required in the future, or the order in which the tasks performed by the processing could change. A solution is required that addresses these issues, and increases the possibilities for code reuse.

但是,随着业务需求的修改,每个模块执行的处理任务或每个任务的部署需求可能会发生变化。有些任务可能是计算密集型的,并且可以受益于在强大的硬件上运行,而其他任务可能不需要如此昂贵的资源。此外,将来可能需要额外的处理,或者处理执行的任务可能发生变化的顺序。需要一个解决这些问题的解决方案,并增加代码重用的可能性。

Solution 解决方案

Decompose the processing required for each stream into a set of discrete components (or filters), each of which performs a single task. By standardizing the format of the data that each component receives and emits, these filters can be combined together into a pipeline. This helps to avoid duplicating code, and makes it easy to remove, replace, or integrate additional components if the processing requirements change. Figure 2 shows an example of this structure.

将每个流所需的处理分解为一组离散的组件(或过滤器) ,每个组件执行一个任务。通过标准化每个组件接收和发出的数据格式,可以将这些过滤器组合到一个管道中。这有助于避免重复代码,并且在处理需求发生变化时,可以轻松地删除、替换或集成其他组件。图2显示了此结构的一个示例。

![Figure 2](https://docs.microsoft.com/en-us/previous-versions/msp-n-p/images/dn568100.c5850d12ce8ac45846e1bcebf45b9096(en-us,pandp.10).png)

Figure 2 - A solution implemented by using pipes and filters

图2-通过使用管道和过滤器实现的解决方案

The time taken to process a single request depends on the speed of the slowest filter in the pipeline. It is possible that one or more filters could prove to be a bottleneck, especially if a large number of requests appear in a stream from a particular data source. A key advantage of the pipeline structure is that it provides opportunities for running parallel instances of slow filters, enabling the system to spread the load and improve throughput.

处理单个请求所需的时间取决于管道中最慢的过滤器的速度。一个或多个过滤器可能会成为瓶颈,特别是当来自特定数据源的大量请求出现在流中时。流水线结构的一个关键优点是,它提供了运行慢速过滤器的并行实例的机会,使系统能够分散负载并提高吞吐量。

The filters that comprise a pipeline can run on different machines, enabling them to be scaled independently and can take advantage of the elasticity that many cloud environments provide. A filter that is computationally intensive can run on high performance hardware, while other less demanding filters can be hosted on commodity (cheaper) hardware. The filters do not even have to be in the same data center or geographical location, which allows each element in a pipeline to run in an environment that is close to the resources it requires.

组成管道的过滤器可以在不同的机器上运行,使它们能够独立地进行伸缩,并且可以利用许多云环境提供的灵活性。计算密集型过滤器可以在高性能硬件上运行,而其他要求较低的过滤器可以托管在普通(更便宜)硬件上。过滤器甚至不必位于相同的数据中心或地理位置,这允许管道中的每个元素在接近所需资源的环境中运行。

Figure 3 shows an example applied to the pipeline for the data from Source 1.

图3显示了一个应用于来自源1的数据的管道的示例。

![Figure 3](https://docs.microsoft.com/en-us/previous-versions/msp-n-p/images/dn568100.1e7fdbe8433418d9a52ee466a9dbfc02(en-us,pandp.10).png)

Figure 3 - Load-balancing components in a pipeline

图3-管道中的负载平衡组件

If the input and output of a filter are structured as a stream, it may be possible to perform the processing for each filter in parallel. The first filter in the pipeline can commence its work and start to emit its results, which are passed directly on to the next filter in the sequence before the first filter has completed its work.

如果过滤器的输入和输出结构为流,则可以并行地对每个过滤器执行处理。管道中的第一个过滤器可以开始其工作并开始发出其结果,这些结果在第一个过滤器完成其工作之前直接传递到序列中的下一个过滤器。
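
A simple way to picture this streaming behaviour is to compose filters as lazily evaluated transformations over a sequence, so that each stage starts emitting results before the previous one has finished. The sketch below is illustrative only and is unrelated to the Service Bus sample later in this section.

C#

using System;
using System.Collections.Generic;

public static class StreamingPipeline
{
  // Filter 1: normalize each message and emit it as soon as it is produced.
  public static IEnumerable<string> Normalize(IEnumerable<string> source)
  {
    foreach (var msg in source)
    {
      yield return msg.Trim().ToLowerInvariant();
    }
  }

  // Filter 2: drop messages that have already been seen.
  public static IEnumerable<string> Deduplicate(IEnumerable<string> source)
  {
    var seen = new HashSet<string>();
    foreach (var msg in source)
    {
      if (seen.Add(msg))
      {
        yield return msg;
      }
    }
  }

  public static void Main()
  {
    IEnumerable<string> input = new[] { " A ", "a", "B " };

    // The pipeline: the output stream of one filter is the input stream of the next.
    foreach (var msg in Deduplicate(Normalize(input)))
    {
      Console.WriteLine(msg);
    }
  }
}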

Another benefit is the resiliency that this model can provide. If a filter fails or the machine it is running on is no longer available, the pipeline may be able to reschedule the work the filter was performing and direct this work to another instance of the component. Failure of a single filter does not necessarily result in failure of the entire pipeline.

另一个好处是这个模型可以提供的弹性。如果过滤器失败或者它所运行的机器不再可用,管道可能能够重新调度过滤器正在执行的工作,并将这些工作指向组件的另一个实例。单个过滤器的失效不一定导致整个管道的失效。

Using the Pipes and Filters pattern in conjunction with the Compensating Transaction pattern can provide an alternative approach to implementing distributed transactions. A distributed transaction can be broken down into separate compensable tasks, each of which can be implemented by using a filter that also implements the Compensating Transaction pattern. The filters in a pipeline can be implemented as separate hosted tasks running close to the data that they maintain.

将管道和过滤器模式与补偿事务模式结合使用,可以提供实现分布式事务的替代方法。分布式事务可以分解为单独的可补偿任务,每个任务都可以通过使用同时实现补偿事务模式的过滤器来实现。管道中的筛选器可以作为独立的托管任务实现,这些任务运行在它们所维护的数据附近。

Issues and Considerations 问题及考虑

You should consider the following points when deciding how to implement this pattern:

在决定如何实现此模式时,应考虑以下几点:

  • Complexity. The increased flexibility that this pattern provides can also introduce complexity, especially if the filters in a pipeline are distributed across different servers.

    复杂性。此模式提供的增加的灵活性也会引入复杂性,特别是如果管道中的过滤器分布在不同的服务器上。

  • Reliability. Use an infrastructure that ensures data flowing between filters in a pipeline will not be lost.

    可靠性。使用基础设施,确保管道中过滤器之间的数据流不会丢失。

  • Idempotency. If a filter in a pipeline fails after receiving a message and the work is rescheduled to another instance of the filter, part of the work may have already been completed. If this work updates some aspect of the global state (such as information stored in a database), the same update could be repeated. A similar issue might arise if a filter fails after posting its results to the next filter in the pipeline, but before indicating that it has completed its work successfully. In these cases, the same work could be repeated by another instance of the filter, causing the same results to be posted twice. This could result in subsequent filters in the pipeline processing the same data twice. Therefore filters in a pipeline should be designed to be idempotent. For more information see Idempotency Patterns on Jonathan Oliver’s blog.

    幂等性(Idempotency)。如果管道中的过滤器在接收到消息后失败,并且工作被重新调度到该过滤器的另一个实例,则部分工作可能已经完成。如果这项工作更新了全局状态的某些方面(例如存储在数据库中的信息),则可能重复相同的更新。如果过滤器在将其结果发布到管道中的下一个过滤器之后,但在指示其已成功完成工作之前失败,则可能会出现类似的问题。在这些情况下,过滤器的另一个实例可能重复相同的工作,导致相同的结果发布两次。这可能导致管道中的后续过滤器对相同数据进行两次处理。因此,管道中的过滤器应设计为幂等的。欲了解更多信息,请参阅 Jonathan Oliver 博客上的 Idempotency Patterns。

  • Repeated messages. If a filter in a pipeline fails after posting a message to the next stage of the pipeline, another instance of the filter may be run (as described by the idempotency consideration above), and it will post a copy of the same message to the pipeline. This could cause two instances of the same message to be passed to the next filter. To avoid this, the pipeline should detect and eliminate duplicate messages; a minimal detection sketch appears after this list.

    重复的信息。如果管道中的过滤器在将消息发送到管道的下一阶段之后失败,则可以运行该过滤器的另一个实例(如上面的幂等性考虑所描述的) ,并且它将向管道发送同一消息的副本。这可能导致同一消息的两个实例被传递到下一个筛选器。为了避免这种情况,管道应该检测并消除重复消息。

    Note

    注意

    If you are implementing the pipeline by using message queues (such as Microsoft Azure Service Bus queues), the message queuing infrastructure may provide automatic duplicate message detection and removal.

    如果通过使用消息队列(例如 Microsoft Azure Service Bus 队列)实现管道,消息队列基础结构可以提供自动重复消息检测和删除。

  • Context and state. In a pipeline, each filter essentially runs in isolation and should not make any assumptions about how it was invoked. This means that each filter must be provided with sufficient context with which it can perform its work. This context may comprise a considerable amount of state information.

    背景和状态。在管道中,每个筛选器基本上都是独立运行的,不应对如何调用它做任何假设。这意味着必须为每个过滤器提供足够的上下文,使其能够执行其工作。这个上下文可能包含大量的状态信息。
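
The following fragment is a minimal sketch of the duplicate detection mentioned above. It keys on the BrokeredMessage.MessageId property and keeps an in-memory record of recently seen IDs; a production filter would more likely rely on durable storage or on the duplicate detection feature of the queuing infrastructure itself.

C#

using System;
using System.Collections.Concurrent;
using Microsoft.ServiceBus.Messaging;

public class DuplicateMessageDetector
{
  // Message IDs that have already been processed, with the time they were first seen.
  private readonly ConcurrentDictionary<string, DateTime> seenMessageIds =
    new ConcurrentDictionary<string, DateTime>();

  // Returns true if this message has already been seen and should be discarded.
  public bool IsDuplicate(BrokeredMessage message)
  {
    return !this.seenMessageIds.TryAdd(message.MessageId, DateTime.UtcNow);
  }
}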

When to Use this Pattern 何时使用此模式

Use this pattern when:

在以下情况下使用这种模式:

  • The processing required by an application can easily be decomposed into a set of discrete, independent steps.

    应用程序所需的处理可以很容易地分解为一组离散的、独立的步骤。

  • The processing steps performed by an application have different scalability requirements.

    应用程序执行的处理步骤具有不同的可伸缩性要求。

    Note

    注意

    It may be possible to group filters that should scale together in the same process. For more information, see the Compute Resource Consolidation pattern.

    可以将应该在同一过程中缩放在一起的过滤器分组。有关更多信息,请参见计算资源整合模式。

  • Flexibility is required to allow reordering of the processing steps performed by an application, or the capability to add and remove steps.

    需要灵活性以允许对应用程序执行的处理步骤进行重新排序,或允许添加和删除步骤。

  • The system can benefit from distributing the processing for steps across different servers.

    系统可以从跨不同服务器分布步骤的处理中获益。

  • A reliable solution is required that minimizes the effects of failure in a step while data is being processed.

    需要一个可靠的解决方案,以便在处理数据时将步骤中的故障影响降至最低。

This pattern might not be suitable when:

在下列情况下,这种模式可能不适合:

  • The processing steps performed by an application are not independent, or they must be performed together as part of the same transaction. 应用程序执行的处理步骤不是独立的,或者它们必须作为同一事务的一部分一起执行
  • The amount of context or state information required by a step makes this approach inefficient. It may be possible to persist state information to a database instead, but do not use this strategy if the additional load on the database causes excessive contention. 步骤所需的上下文或状态信息的数量使这种方法效率低下。相反,可以将状态信息持久化到数据库中,但是如果数据库上的额外负载导致过度的争用,则不要使用此策略

Example 例子

You can use a sequence of message queues to provide the infrastructure required to implement a pipeline. An initial message queue receives unprocessed messages. A component implemented as a filter task listens for a message on this queue, performs its work, and then posts the transformed message to the next queue in the sequence. Another filter task can listen for messages on this queue, process them, post the results to another queue, and so on until the fully transformed data appears in the final message in the queue.

可以使用消息队列序列提供实现管道所需的基础结构。初始消息队列接收未处理的消息。作为筛选器任务实现的组件侦听此队列上的消息,执行其工作,然后将转换后的消息按顺序发送到下一个队列。另一个筛选器任务可以侦听此队列上的消息,处理它们,将结果发送到另一个队列,以此类推,直到完全转换后的数据出现在队列中的最终消息中。

![Figure 4](https://docs.microsoft.com/en-us/previous-versions/msp-n-p/images/dn568100.b07877dde3a777fc231bfdf20e69db0f(en-us,pandp.10).png)

Figure 4 - Implementing a pipeline by using message queues

图4-通过使用消息队列实现管道

If you are building a solution on Azure you can use Service Bus queues to provide a reliable and scalable queuing mechanism. The ServiceBusPipeFilter class shown below provides an example. It demonstrates how you can implement a filter that receives input messages from a queue, processes these messages, and posts the results to another queue.

如果您正在 Azure 上构建解决方案,那么您可以使用服务总线队列来提供可靠且可伸缩的队列机制。下面显示的 ServiceBusPipeFilter 类提供了一个示例。它演示了如何实现一个筛选器,该筛选器接收来自队列的输入消息,处理这些消息,并将结果发送到另一个队列。

Note

注意

The ServiceBusPipeFilter class is defined in the PipesAndFilters.Shared project in the PipesAndFilters solution. This sample code is available for download with this guidance.

ServiceBusPipeFilter 类在 PipesAndFilters 解决方案的 PipesAndFilters.Shared 项目中定义。此示例代码可随本指南一起下载。

C#

public class ServiceBusPipeFilter
{
  ...
  private readonly string inQueuePath;
  private readonly string outQueuePath;
  ...
  private QueueClient inQueue;
  private QueueClient outQueue;
  ...

  public ServiceBusPipeFilter(..., string inQueuePath, string outQueuePath = null)
  {
    ...
    this.inQueuePath = inQueuePath;
    this.outQueuePath = outQueuePath;
  }

  public void Start()
  {
    ...
    // Create the outbound filter queue if it does not exist.
    ...
    this.outQueue = QueueClient.CreateFromConnectionString(...);
    ...
    // Create the inbound and outbound queue clients.
    this.inQueue = QueueClient.CreateFromConnectionString(...);
  }

  public void OnPipeFilterMessageAsync(
    Func<BrokeredMessage, Task<BrokeredMessage>> asyncFilterTask, ...)
  {
    ...
    this.inQueue.OnMessageAsync(
      async (msg) =>
      {
        ...
        // Process the filter and send the output to the
        // next queue in the pipeline.
        var outMessage = await asyncFilterTask(msg);

        // Send the message from the filter processor
        // to the next queue in the pipeline.
        if (outQueue != null)
        {
          await outQueue.SendAsync(outMessage);
        }

        // Note: There is a chance that the same message could be sent twice
        // or that a message may be processed by an upstream or downstream
        // filter at the same time.
        // This would happen in a situation where processing of a message was
        // completed, it was sent to the next pipe/queue, and then failed
        // to complete when using the PeekLock method.
        // Idempotent message processing and concurrency should be considered
        // in a real-world implementation.
      },
      options);
  }

  public async Task Close(TimeSpan timespan)
  {
    // Pause the processing threads.
    this.pauseProcessingEvent.Reset();

    // There is no clean approach for waiting for the threads to complete
    // the processing. This example simply stops any new processing, waits
    // for the existing thread to complete, then closes the message pump
    // and finally returns.
    Thread.Sleep(timespan);
    this.inQueue.Close();
    ...
  }
  ...
}

The Start method in the ServiceBusPipeFilter class connects to a pair of input and output queues, and the Close method disconnects from the input queue. The OnPipeFilterMessageAsync method performs the actual processing of messages; the asyncFilterTask parameter to this method specifies the processing to be performed. The OnPipeFilterMessageAsync method waits for incoming messages on the input queue, runs the code specified by the asyncFilterTask parameter over each message as it arrives, and posts the results to the output queue. The queues themselves are specified by the constructor.

ServiceBusPipeFilter 类中的 Start 方法连接到一对输入和输出队列,Close 方法与输入队列断开连接。OnPipeFilterMessageAsync 方法执行消息的实际处理;此方法的 asyncFilterTask 参数指定要执行的处理。OnPipeFilterMessageAsync 方法等待输入队列上的传入消息,在每个消息到达时运行由 asyncFilterTask 参数指定的代码,并将结果发送到输出队列。队列本身由构造函数指定。

The sample solution implements filters in a set of worker roles. Each worker role can be scaled independently, depending on the complexity of the business processing that it performs or the resources that it requires to perform this processing. Additionally, multiple instances of each worker role can be run in parallel to improve throughput.

示例解决方案在一组辅助角色中实现筛选器。根据所执行的业务处理的复杂性或执行此处理所需的资源,可以独立地对每个辅助角色进行伸缩。此外,每个辅助角色的多个实例可以并行运行,以提高吞吐量。

The following code shows an Azure worker role named PipeFilterARoleEntry, which is defined in the PipeFilterA project in the sample solution.

下面的代码显示了一个名为 PipeFilterARoleEntry 的 Azure 工作者角色,该角色在示例解决方案中的 PipeFilterA 项目中定义。

C#

public class PipeFilterARoleEntry : RoleEntryPoint
{
  ...
  private ServiceBusPipeFilter pipeFilterA;

  public override bool OnStart()
  {
    ...
    this.pipeFilterA = new ServiceBusPipeFilter(
      ...,
      Constants.QueueAPath,
      Constants.QueueBPath);

    this.pipeFilterA.Start();
    ...
  }

  public override void Run()
  {
    this.pipeFilterA.OnPipeFilterMessageAsync(async (msg) =>
    {
      // Clone the message and update it.
      // Properties set by the broker (Deliver count, enqueue time, ...)
      // are not cloned and must be copied over if required.
      var newMsg = msg.Clone();

      await Task.Delay(500); // DOING WORK

      Trace.TraceInformation("Filter A processed message:{0} at {1}",
        msg.MessageId, DateTime.UtcNow);

      newMsg.Properties.Add(Constants.FilterAMessageKey, "Complete");

      return newMsg;
    });
    ...
  }
  ...
}

This role contains a ServiceBusPipeFilter object. The OnStart method in the role connects to the queues for receiving input messages and posting output messages (the names of the queues are defined in the Constants class). The Run method invokes the OnPipeFilterMessagesAsync method to perform some processing on each message that is received (in this example, the processing is simulated by waiting for a short period of time). When processing is complete, a new message is constructed containing the results (in this case, the input message is simply augmented with a custom property), and this message is posted to the output queue.

The sample code contains another worker role named PipeFilterBRoleEntry in the PipeFilterB project. This role is similar to PipeFilterARoleEntry except that it performs different processing in the Run method. In the example solution, these two roles are combined to construct a pipeline; the output queue for the PipeFilterARoleEntry role is the input queue for the PipeFilterBRoleEntry role.
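
The queue paths and message property keys that wire the pipeline together are defined in the Constants class, which is not reproduced in the excerpts above. A minimal sketch of what it might contain (the path values shown here are hypothetical placeholders):

C#

// Hypothetical sketch of the Constants class used by the sample roles.
// Filter A reads from Queue A and writes to Queue B; Filter B reads from
// Queue B and writes to the final queue read by FinalReceiverRoleEntry.
public static class Constants
{
  public const string QueueAPath = "pipeline-queue-a";
  public const string QueueBPath = "pipeline-queue-b";
  public const string QueueFinalPath = "pipeline-queue-final";

  // Property keys that the filters use to mark a message as processed.
  public const string FilterAMessageKey = "FilterA";
  public const string FilterBMessageKey = "FilterB";
}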

The sample solution also provides two further roles named InitialSenderRoleEntry (in the InitialSender project) and FinalReceiverRoleEntry (in the FinalReceiver project). The InitialSenderRoleEntry role provides the initial message in the pipeline. The OnStart method connects to a single queue and the Run method posts a message to this queue. This queue is the input queue used by the PipeFilterARoleEntry role, so posting a message to this queue causes the message to be received and processed by the PipeFilterARoleEntry role. The processed message then passes through the PipeFilterBRoleEntry role.
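
The InitialSenderRoleEntry code is not reproduced in this excerpt. A minimal sketch of how its OnStart and Run methods might post the initial message, assuming a connection string setting named ServiceBusConnectionString (a hypothetical name) and the same Constants.QueueAPath used by PipeFilterARoleEntry:

C#

// Sketch only: post a single message to the first queue in the pipeline.
public class InitialSenderRoleEntry : RoleEntryPoint
{
  private QueueClient queue;

  public override bool OnStart()
  {
    // Connect to the input queue of the first filter
    // (the setting name is hypothetical).
    var connectionString =
      CloudConfigurationManager.GetSetting("ServiceBusConnectionString");
    this.queue = QueueClient.CreateFromConnectionString(
      connectionString, Constants.QueueAPath);
    return base.OnStart();
  }

  public override void Run()
  {
    // The message travels through Filter A and Filter B before
    // reaching the final receiver.
    var message = new BrokeredMessage { MessageId = Guid.NewGuid().ToString() };
    this.queue.Send(message);
    ...
  }
}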

The input queue for the FinalReceiverRoleEntry role is the output queue for the PipeFilterBRoleEntry role. The Run method in the FinalReceiverRoleEntry role, shown below, receives the message and performs some final processing. Then it writes the values of the custom properties added by the filters in the pipeline to the trace output.

C#

public class FinalReceiverRoleEntry : RoleEntryPoint
{
  ...
  // Final queue/pipe in the pipeline from which to process data.
  private ServiceBusPipeFilter queueFinal;

  public override bool OnStart()
  {
    ...
    // Set up the queue.
    this.queueFinal = new ServiceBusPipeFilter(..., Constants.QueueFinalPath);
    this.queueFinal.Start();
    ...
  }

  public override void Run()
  {
    this.queueFinal.OnPipeFilterMessageAsync(
      async (msg) =>
      {
        await Task.Delay(500); // DOING WORK

        // The pipeline message was received.
        Trace.TraceInformation(
          "Pipeline Message Complete - FilterA:{0} FilterB:{1}",
          msg.Properties[Constants.FilterAMessageKey],
          msg.Properties[Constants.FilterBMessageKey]);

        return null;
      });
    ...
  }
  ...
}

Related Patterns and Guidance

The following patterns and guidance may also be relevant when implementing this pattern:

  • Competing Consumers Pattern. A pipeline can contain multiple instances of one or more filters. This approach is useful for running parallel instances of slow filters, enabling the system to spread the load and improve throughput. Each instance of a filter will compete for input with the other instances; two instances of a filter should not be able to process the same data. The Competing Consumers pattern provides more information on this approach.
  • Compute Resource Consolidation Pattern. It may be possible to group filters that should scale together into the same process. The Compute Resource Consolidation pattern provides more information about the benefits and tradeoffs of this strategy.
  • Compensating Transaction Pattern. A filter can be implemented as an operation that can be reversed, or that has a compensating operation that restores the state to a previous version in the event of a failure. The Compensating Transaction pattern explains how this type of operation may be implemented in order to maintain or achieve eventual consistency.

Priority Queue Pattern

  • Article
  • 08/26/2015
  • 10 minutes to read

Prioritize requests sent to services so that requests with a higher priority are received and processed more quickly than those of a lower priority. This pattern is useful in applications that offer different service level guarantees to individual clients.

Context and Problem

Applications may delegate specific tasks to other services; for example, to perform background processing or to integrate with other applications or services. In the cloud, a message queue is typically used to delegate tasks to background processing. In many cases the order in which requests are received by a service is not important. However, in some cases it may be necessary to prioritize specific requests. These requests should be processed earlier than others of a lower priority that may have been sent previously by the application.

Solution

A queue is usually a first-in, first-out (FIFO) structure, and consumers typically receive messages in the same order that they were posted to the queue. However, some message queues support priority messaging; the application posting a message can assign a priority to a message and the messages in the queue are automatically reordered so that messages with a higher priority will be received before those of a lower priority. Figure 1 illustrates a queue that provides priority messaging.

[Image: https://docs.microsoft.com/en-us/previous-versions/msp-n-p/images/dn589794.b030a05277c200f2e90d2e8cfa964549(en-us,pandp.10).png]

Figure 1 - Using a queuing mechanism that supports message prioritization

Note

Most message queue implementations support multiple consumers (following the Competing Consumers pattern), and the number of consumer processes can be scaled up or down as demand dictates.

In systems that do not support priority-based message queues, an alternative solution is to maintain a separate queue for each priority. The application is responsible for posting messages to the appropriate queue. Each queue can have a separate pool of consumers. Higher priority queues can have a larger pool of consumers running on faster hardware than lower priority queues. Figure 2 shows this approach.

[Image: https://docs.microsoft.com/en-us/previous-versions/msp-n-p/images/dn589794.7125ac4bce90d3e21557b3914217624f(en-us,pandp.10).png]

Figure 2 - Using separate message queues for each priority
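
With this approach the routing decision sits in the producer. A minimal sketch, assuming two ordinary Service Bus queues with hypothetical names and connection string:

C#

// Sketch only: the sender chooses the queue that matches the request's priority.
// Queue names and the connection string are hypothetical placeholders.
public class PriorityRouter
{
  private readonly QueueClient highQueue;
  private readonly QueueClient lowQueue;

  public PriorityRouter(string connectionString)
  {
    this.highQueue = QueueClient.CreateFromConnectionString(connectionString, "requests-high");
    this.lowQueue = QueueClient.CreateFromConnectionString(connectionString, "requests-low");
  }

  public void Post(BrokeredMessage message, bool isHighPriority)
  {
    // Each queue is serviced by its own pool of consumers, so high priority
    // requests are not queued behind low priority ones.
    (isHighPriority ? this.highQueue : this.lowQueue).Send(message);
  }
}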

A variation on this strategy is to have a single pool of consumers that check for messages on high priority queues first, and then only start to fetch messages from lower priority queues if no higher priority messages are waiting. There are some semantic differences between a solution that uses a single pool of consumer processes (either with a single queue that supports messages with different priorities or with multiple queues that each handle messages of a single priority), and a solution that uses multiple queues with a separate pool for each queue.

In the single pool approach, higher priority messages will always be received and processed before lower priority messages. In theory, messages that have a very low priority may be continually superseded and might never be processed. In the multiple pool approach, lower priority messages will always be processed, just not as quickly as those of a higher priority (depending on the relative size of the pools and the resources that they have available).
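
A minimal sketch of the single-pool variation described above, assuming the same two queues as in the previous sketch; the consumer only falls back to the low priority queue when no high priority message is waiting:

C#

// Sketch only: a consumer that always drains the high priority queue first.
while (true)
{
  // Non-blocking check of the high priority queue, then a short wait on the
  // low priority queue if nothing urgent is pending.
  var message = highQueue.Receive(TimeSpan.Zero)
                ?? lowQueue.Receive(TimeSpan.FromSeconds(5));

  if (message == null)
  {
    continue;
  }

  ProcessMessage(message);   // application-specific work (hypothetical method)
  message.Complete();        // remove the message once processing has succeeded
}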

Using a priority queuing mechanism can provide the following advantages:

  • It allows applications to meet business requirements that necessitate prioritization of availability or performance, such as offering different levels of service to specific groups of customers.
  • It can help to minimize operational costs. In the single queue approach, you can scale back the number of consumers if necessary. High priority messages will still be processed first (although possibly more slowly), and lower priority messages may be delayed for longer. If you have implemented the multiple message queue approach with separate pools of consumers for each queue, you can reduce the pool of consumers for lower priority queues, or even suspend processing for some very low priority queues by halting all the consumers that listen for messages on those queues.
  • The multiple message queue approach can help to maximize application performance and scalability by partitioning messages based on processing requirements. For example, vital tasks can be prioritized to be handled by receivers that run immediately while less important background tasks can be handled by receivers that are scheduled to run at less busy periods.

Issues and Considerations

Consider the following points when deciding how to implement this pattern:

  • Define the priorities in the context of the solution. For example, “high priority” could mean that messages should be processed within ten seconds. Identify the requirements for handling high priority items, and what other resources must be allocated to meet these criteria.
  • Decide if all high priority items must be processed before any lower priority items. If the messages are being processed by a single pool of consumers, it may be necessary to provide a mechanism that can preempt and suspend a task that is handling a low priority message if a higher priority message becomes available.
  • In the multiple queue approach, when using a single pool of consumer processes that listen on all queues rather than a dedicated consumer pool for each queue, the consumer must apply an algorithm that ensures it always services messages from higher priority queues before those from lower priority queues.
  • Monitor the speed of processing on high and low priority queues to ensure that messages in these queues are processed at the expected rates.
  • If you need to guarantee that low priority messages will be processed, it may be necessary to implement the multiple message queue approach with multiple pools of consumers. Alternatively, in a queue that supports message prioritization, it may be possible to dynamically increase the priority of a queued message as it ages. However, this approach depends on the message queue providing this feature.
  • Using a separate queue for each message priority works best for systems that have a small number of well-defined priorities.
  • Message priorities may be determined logically by the system. For example, rather than having explicit high and low priority messages, they could be designated as “fee paying customer” or “non-fee paying customer.” Depending on your business model, your system might allocate more resources to processing messages from fee paying customers than non-fee paying ones.
  • There may be a financial and processing cost associated with checking a queue for a message (some commercial messaging systems charge a small fee each time a message is posted or retrieved, and each time a queue is queried for messages). This cost will be increased when checking multiple queues.
  • It may be possible to dynamically adjust the size of a pool of consumers based on the length of the queue that the pool is servicing. For more information, see the Autoscaling Guidance.

When to Use this Pattern

This pattern is ideally suited to scenarios where:

  • The system must handle multiple tasks that might have different priorities.
  • Different users or tenants should be served with different priority.

Example

Microsoft Azure does not provide a queuing mechanism that natively supports automatic prioritization of messages through sorting. However, it does provide Azure Service Bus topics and subscriptions, which support a queuing mechanism that provides message filtering, together with a wide range of flexible capabilities that make it ideal for use in almost all priority queue implementations.

An Azure solution can implement a Service Bus topic to which an application can post messages, in the same way as a queue. Messages can contain metadata in the form of application-defined custom properties. Service Bus subscriptions can be associated with the topic, and these subscriptions can filter messages based on their properties. When an application sends a message to a topic, the message is directed to the appropriate subscription from where it can be read by a consumer. Consumer processes can retrieve messages from a subscription using the same semantics as a message queue (a subscription is a logical queue).

Figure 3 illustrates a solution using Azure Service Bus topics and subscriptions.

[Image: https://docs.microsoft.com/en-us/previous-versions/msp-n-p/images/dn589794.5dcaec147bebcec467c83c9b62319c18(en-us,pandp.10).png]

Figure 3 - Implementing a priority queue with Azure Service Bus topics and subscriptions

In Figure 3 the application creates several messages and assigns a custom property called Priority in each message with a value, either High or Low. The application posts these messages to a topic. The topic has two associated subscriptions, which both filter messages by examining the Priority property. One subscription accepts messages where the Priority property is set to High, and the other accepts messages where the Priority property is set to Low. A pool of consumers reads messages from each subscription. The high priority subscription has a larger pool, and these consumers might be running on more powerful (and expensive) computers with more resources available than the consumers in the low priority pool.
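
The two subscriptions in Figure 3 can be created with SQL filters over the application-defined Priority property. A minimal sketch, assuming the property values are stored as the strings "High" and "Low" and that the subscription names, topic path, and connection string shown here are placeholders:

C#

// Sketch only: create one filtered subscription per priority on the topic.
var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

if (!namespaceManager.SubscriptionExists(topicPath, "PriorityHigh"))
{
  namespaceManager.CreateSubscription(topicPath, "PriorityHigh",
    new SqlFilter("Priority = 'High'"));
}

if (!namespaceManager.SubscriptionExists(topicPath, "PriorityLow"))
{
  namespaceManager.CreateSubscription(topicPath, "PriorityLow",
    new SqlFilter("Priority = 'Low'"));
}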

Note that there is nothing special about the designation of high and low priority messages in this example. These are simply labels specified as properties in each message, and are used to direct messages to a specific subscription. If additional priorities are required, it is relatively easy to create further subscriptions and pools of consumer processes to handle these priorities.

The PriorityQueue solution in the code available with this guidance contains an implementation of this approach. This solution contains two worker role projects named PriorityQueue.High and PriorityQueue.Low. These two worker roles inherit from a class called PriorityWorkerRole, which contains the functionality for connecting to a specified subscription in the OnStart method.

The PriorityQueue.High and PriorityQueue.Low worker roles connect to different subscriptions, defined by their configuration settings. An administrator can configure different numbers of each role to be run; typically there will be more instances of the PriorityQueue.High worker role than the PriorityQueue.Low worker role.

The Run method in the PriorityWorkerRole class arranges for the virtual ProcessMessage method (also defined in the PriorityWorkerRole class) to be executed for each message received on the queue. The following code shows the Run and ProcessMessage methods. The QueueManager class, defined in the PriorityQueue.Shared project, provides helper methods for using Azure Service Bus queues.

C#

public class PriorityWorkerRole : RoleEntryPoint
{
  private QueueManager queueManager;
  ...

  public override void Run()
  {
    // Start listening for messages on the subscription.
    var subscriptionName = CloudConfigurationManager.GetSetting("SubscriptionName");
    this.queueManager.ReceiveMessages(subscriptionName, this.ProcessMessage);
    ...
  }
  ...

  protected virtual async Task ProcessMessage(BrokeredMessage message)
  {
    // Simulating processing.
    await Task.Delay(TimeSpan.FromSeconds(2));
  }
}
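
The QueueManager class itself is not reproduced here. A plausible sketch of its ReceiveMessages method, assuming it wraps SubscriptionClient.OnMessageAsync and reads the connection string and topic path from configuration:

C#

// Sketch only: pump messages from the named subscription into the supplied delegate.
public void ReceiveMessages(string subscriptionName,
  Func<BrokeredMessage, Task> processMessageTask)
{
  var client = SubscriptionClient.CreateFromConnectionString(
    this.connectionString, this.topicPath, subscriptionName);

  var options = new OnMessageOptions
  {
    AutoComplete = true,       // complete the message when the delegate returns
    MaxConcurrentCalls = 10    // hypothetical degree of parallelism
  };

  client.OnMessageAsync(async msg => await processMessageTask(msg), options);
}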

The PriorityQueue.High and PriorityQueue.Low worker roles both override the default functionality of the ProcessMessage method. The code below shows the ProcessMessage method for the PriorityQueue.High worker role.

C#

protected override async Task ProcessMessage(BrokeredMessage message)
{
  // Simulate message processing for High priority messages.
  await base.ProcessMessage(message);

  Trace.TraceInformation("High priority message processed by " +
    RoleEnvironment.CurrentRoleInstance.Id + " MessageId: " + message.MessageId);
}

When an application posts messages to the topic associated with the subscriptions used by the PriorityQueue.High and PriorityQueue.Low worker roles, it specifies the priority by using the Priority custom property, as shown in the following code example. This code (which is implemented in the WorkerRole class in the PriorityQueue.Sender project) uses the SendBatchAsync helper method of the QueueManager class to post messages to a topic in batches.

C#

// Send a low priority batch.
var lowMessages = new List<BrokeredMessage>();

for (int i = 0; i < 10; i++)
{
  var message = new BrokeredMessage() { MessageId = Guid.NewGuid().ToString() };
  message.Properties["Priority"] = Priority.Low;
  lowMessages.Add(message);
}

this.queueManager.SendBatchAsync(lowMessages).Wait();

...

// Send a high priority batch.
var highMessages = new List<BrokeredMessage>();

for (int i = 0; i < 10; i++)
{
  var message = new BrokeredMessage() { MessageId = Guid.NewGuid().ToString() };
  message.Properties["Priority"] = Priority.High;
  highMessages.Add(message);
}

this.queueManager.SendBatchAsync(highMessages).Wait();
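
The SendBatchAsync helper belongs to the same QueueManager class; a minimal sketch, assuming it simply forwards the batch to a TopicClient created for the topic:

C#

// Sketch only: post a batch of messages to the topic in a single call.
// The Priority property on each message determines which subscription receives it.
public Task SendBatchAsync(IEnumerable<BrokeredMessage> messages)
{
  return this.topicClient.SendBatchAsync(messages);
}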

Related Patterns and Guidance

The following patterns and guidance may also be relevant when implementing this pattern:

  • Asynchronous Messaging Primer. A consumer service processing a request may need to send a reply to the instance of the application that posted the request. The Asynchronous Messaging Primer provides more information on the strategies that can be used to implement request/response messaging.
  • Competing Consumers Pattern. To increase the throughput of the queues, it’s possible to have multiple consumers that listen on the same queue, and process the tasks in parallel. These consumers will compete for messages, but only one should be able to process each message. The Competing Consumers pattern provides more information on the benefits and tradeoffs of implementing this approach.
  • Throttling Pattern. You can implement throttling by using queues. Priority messaging can be used to ensure that requests from critical applications, or applications being run by high-value customers, are given precedence over requests from less important applications.
  • Autoscaling Guidance. It may be possible to scale the size of the pool of consumer processes handling a queue depending on the length of the queue. This strategy can help to improve performance, especially for pools handling high priority messages.