Understanding the SDM to SML Evolution

Practical Application of the System Definition Model (SDM) and its Evolution to the Service Modeling Language (SML)


Abstract

The Microsoft System Definition Model (SDM) and the cross-industry Service Modeling Language (SML) proposed by Microsoft and nine other leading vendors can be used to create models that capture the organizational and operational management knowledge relevant to entire distributed systems.  This paper provides an overview to the System Definition Model, its practical application in Windows-based environments in the Windows “Longhorn” Server timeframe and its alignment with SML.

SDM and its successor, SML, are key technical innovations of the Dynamic Systems Initiative (DSI), a technology strategy led by Microsoft to enhance the Microsoft Windows platform and deliver a coordinated set of solutions that dramatically simplify and automate how businesses design, deploy, and operate distributed systems.

 


The information contained in this document represents the current view of Microsoft Corporation on the issues discussed as of the date of publication.  Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information presented after the date of publication.

This White Paper is for informational purposes only.  MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS DOCUMENT.

Complying with all applicable copyright laws is the responsibility of the user.  Without limiting the rights under copyright, no part of this document may be reproduced, stored in or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise), or for any purpose, without the express written permission of Microsoft Corporation.

Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property rights covering subject matter in this document.  Except as expressly provided in any written license agreement from Microsoft, the furnishing of this document does not give you any license to these patents, trademarks, copyrights, or other intellectual property.

Unless otherwise noted, the example companies, organizations, products, domain names, e-mail addresses, logos, people, places, and events depicted herein are fictitious, and no association with any real company, organization, product, domain name, email address, logo, person, place, or event is intended or should be inferred.

© 2007 Microsoft Corporation. All rights reserved.

Microsoft, Active Directory, Windows, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.

All other trademarks are property of their respective owners.



Information technology continues to be a labor-intensive activity, with the experienced staff of different departments often too bogged down in the minutiae of daily operations to document all that they know in ways that would make it simple to transfer their experience into libraries of reusable knowledge.  Yet with so many different people acting on a system over its lifetime – architects, developers, testers, operators, support staff and the users themselves – if the knowledge each of them holds could be captured in machine-readable form over the life of a system, it could be harnessed to automate many of the well-defined management tasks that are handled manually today.  This would not only continuously drive down support costs, it would also reduce the risk of mistakes and omissions that comes with humans carrying out every step of even a single best-practice management process.  This concept of capturing and reusing knowledge over the life of the system and each manageable component is at the heart of the management aspects of the Microsoft Dynamic Systems Initiative. 

For a system to be manageable “by design,” all relevant aspects of the system must be known.  The intended state[1] of the system must be clearly defined, the actual state of the system must be understood, and the difference between the two must be checked against acceptable tolerances. In addition, processes must be available to keep the actual state within the intended margins.  This is true for any system; it becomes even more critical when dealing with a greater number of moving parts.  Twenty years ago, business- and mission-critical applications ran on a single mainframe or mini-computer; ten years ago they were distributed over two- and three-tier systems. Now these applications are distributed over n tiers and may comprise tens of thousands of objects.  With changing business needs requiring IT to be ever more agile and cost conscious, IT personnel must continuously increase the speed and efficiency with which systems are defined, designed, tested, deployed and operated.  At the same time, IT is looking to increase its use of inexpensive, reusable components and reduce its dependence on specialized custom applications.  However change is implemented, whether centrally or disparately, the problem is the same.  The first steps to successful management are to understand what you have, develop a clear picture of what you want, and implement well-defined, traceable mechanisms for turning what you have into what you want.
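The compare-and-correct loop described above can be sketched in a few lines. This is an illustrative sketch only, not part of SDM: the setting names and tolerance values are invented, and a real management system would read the intended state from a model rather than a hard-coded dictionary.

```python
# Illustrative only: comparing a system's actual state against its
# intended state and flagging any drift outside acceptable tolerances.
# Setting names and values are hypothetical.

INTENDED = {"worker_processes": 4, "cache_mb": 512}
TOLERANCE = {"worker_processes": 0, "cache_mb": 128}  # allowed deviation

def find_drift(actual):
    """Return settings whose actual value falls outside tolerance,
    mapped to an (intended, actual) pair."""
    drift = {}
    for key, intended_value in INTENDED.items():
        actual_value = actual.get(key)
        if actual_value is None or abs(actual_value - intended_value) > TOLERANCE[key]:
            drift[key] = (intended_value, actual_value)
    return drift

# A remediation process would then act on the drift report to bring
# the actual state back within the intended margins.
print(find_drift({"worker_processes": 3, "cache_mb": 600}))
```

Here the cache setting stays within its margin while the process count drifts, so only the latter would be handed to a remediation process.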

1.1. Modeling Environments

In the world of dynamic systems, modeling provides the foundation of how knowledge is captured and used.  As a general definition, a model is a representation of something in, or intended for, the real world; its purpose is to describe specific characteristics, behavior and relationships with sufficient accuracy that it is an acceptable representation of what it describes.  In the context of dynamic systems, a model is a machine-readable representation of the components comprising a system and the policies that govern the system: it provides sufficient detail for systems to adapt dynamically to changing conditions, and to changes in business requirements, through intelligent automation of the management function.

In the development of information technology solutions, models can be used to manage complexity and to communicate system requirements between business stakeholders, solution and system architects, developers, and operations personnel.  However, with each of these groups being driven by different needs and using different tools, there is a tendency for multiple models to be created.  Each of these models may permit project participants to contribute their unique expertise to the development and management of the resulting system, but each model tends to live in a vacuum, disconnected from the others.  So while these models help reduce project costs and risk by heading off errors and oversights within a team, little of the knowledge captured and used by one team is in a form suitable for it to flow to another team.

In a perfect world it might be possible to reduce everything to a single model.  In the real world, while this is an interesting goal, it may never be wholly achieved.  The real world needs practical solutions, and for this two factors are critical to success. First, the number of models being used needs to be reduced from many to as few as possible. Second, where more than one model is being used, all knowledge relevant to multiple models must be sharable between them so it can be used over the life of the system.  

Although the concept of models in itself is not new, a key component of the Dynamic Systems Initiative is to use a common description language and modeling platform across the entire software lifecycle for each component of the system—from initial conception, through development and test, to operations and management.  Microsoft calls this language, also referred to as a meta-model, the System Definition Model (SDM) and the corresponding modeling platform, the SDM Platform.  The SDM Platform comprises the meta model, a library of common models, a run-time environment, authoring and support tools and a range of utility applications.

1.2. Interoperability

Microsoft is working closely with key industry vendors, including IBM, Cisco, Intel and Hewlett-Packard, to develop a single, general-purpose language to express management information in models.  When complete, this language, called the Service Modeling Language (SML), will establish a common industry meta-model for all vendors’ modeling platforms.  For its part, Microsoft is committed to aligning its work on SDM with SML.  This will help enable interoperable, automated management across not just Windows but the heterogeneous enterprise.  SML will enable each team across all companies to create models for their specific areas of expertise (hardware, applications, solutions, etc.) using common semantics for their respective models.  This is a fundamental requirement for fully interoperable management across all aspects of a dynamic system.

While the SDM Platform will become increasingly pervasive, the philosophy underpinning the Dynamic Systems Initiative also allows for the reality that systems will need to support additional models: other SML-based modeling platforms, a legacy of existing models, other standards, device- and environment-specific instrumentation models and non-management-related models.  For this reason Microsoft, in partnership with other industry leaders, is also working within the Distributed Management Task Force (DMTF) to specify a standard protocol for transferring management information between systems using a profile of existing Web service protocols.  This proposed standard is called WS-Management.  WS-Management provides the potential for the SDM Platform to obtain and share information with other systems and to access other management instrumentation, making it possible to federate hardware management and the management of Windows and non-Windows environments without first requiring a single pervasive model across a heterogeneous system. 

The remainder of this paper focuses on SDM, providing an overview of what it is and its value over the next one to three years.  At the end of this paper are links to additional information, including more details on WS-Management.

Just as model-based management is the foundation for the Dynamic Systems Initiative, the foundation for model-based management is the System Definition Model (SDM).   SDM is an XML-based language and modeling platform through which a schematic “blueprint” for effective management of distributed systems can be created.  SDM models can be consumed by specific management systems, such as members of the System Center product family and third-party management products; they can also be hosted by each component of the system to enable local self-management by the components themselves. While each component only carries the SDM model describing itself, as components combine to create systems delivering new services and business functions, the components’ models can be combined to provide a complete description of the resulting service. This is made dramatically easier by ensuring that each component’s model is founded on a consistent set of core models (the SDM Common Model).  And so, just as a distributed system is a set of related software and hardware resources running on one or more computers that work together to accomplish a common function, SDM models combine to form a common management definition of that distributed system, created from the sum of its component parts. 

2.1. Modeling Using SDM

Like most modeling systems, the SDM is a series of interconnected objects.  Each object represents a specific characteristic of the system, including the desired state of that characteristic, the range of states it can be in, and the resources it needs. The interconnections between these objects describe the nature of the relationships between these characteristics.  For example, one object may rely on another in order to exist, in which case one object is hosting the other.  Alternatively, one object may simply provide a service to the other and the two objects are peers.  Reduced to its simplest form, SDM can be illustrated as a series of boxes and lines.  The boxes represent systems, subsystems and components. The lines in the model diagram represent different kinds of relationships: a hosting relationship, lines of communication, or dependencies.  Each element in a model, whether a system or a relationship, belongs to a class and has attributes, constraints and policies.  When creating an SDM model, the designer places and interconnects the elements and specifies the desired values for the attributes.
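The boxes-and-lines structure can be sketched with two small data types. This is a hypothetical illustration, not the actual SDM schema; the class names, relationship kinds and settings below are invented for the example.

```python
# Minimal sketch of the "boxes and lines" idea: model elements belong to
# a class, carry desired settings, and are joined by typed relationships.
# All names here are invented for illustration, not real SDM classes.

from dataclasses import dataclass, field

@dataclass
class Element:
    name: str
    cls: str                         # the class this element belongs to
    settings: dict = field(default_factory=dict)  # desired attribute values

@dataclass
class Relationship:
    kind: str                        # e.g. "hosting", "communication", "dependency"
    source: Element
    target: Element

web = Element("OrderPortal", "WebApplication", {"port": 443})
iis = Element("WebServer01", "WebServer")
hosted = Relationship("hosting", iis, web)   # the server hosts the application

print(hosted.kind, hosted.source.name, "->", hosted.target.name)
```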

 

Figure 1:  SDM models interconnect to build an end-end model of a complex system or service.

The model of a system or service is created from a collection of definitions from one or more SDM models defining the individual components.  The basic structure for this model is typically specified during the development process, by architects, software developers, and from feedback from IT operations and support staff.  This base definition serves as the skeleton on which all other information is added.  In addition to the structure, the system model will also be able to contain deployment information, installation processes, configuration information, events and instrumentation, automation tasks, health models, performance and capacity models, operational policies, and so on. However, this is just the starting point: other information can be added by operations staff, vendors, and management systems throughout the life of a distributed system.  Potentially any development or management tool may contain authoring tools for creating and extending SDM models; likewise, any application may express its configuration, desires and constraints in an SDM model.

2.2. Reducing Complexity through Object-Oriented Modeling

Modeling a complex line-of-business application is challenging if it has to be done from scratch, yet without SDM-like models this is how system management modeling has been done. Building models from scratch is not only tedious; for higher-order distributed applications it quickly becomes too daunting to contemplate.  This is the principal reason why so few implementations of traditional enterprise management platforms ever meet the original intent or objectives of those who have invested in them.  Even if the required underlying model is created, the complexity of changing it hampers the ability of IT to adapt to changing business needs.  Use of monolithic models also tends to lead to multiple, disparate models for the same physical resource, making it hard to integrate information from different management solutions. Even straightforward generic concepts such as computer, user or file may have very different descriptions in the different models. This in turn results in operational confusion, with different models creating an alternative range of properties, methods and behaviors for the same thing.

With SDM all this changes.  SDM takes a tiered, object-oriented approach to modeling.  Using a building-block approach, and XML technology, it is possible to make maximum use of common, reusable components, inheriting the parts of the model that already exist and building custom components only for the characteristics that make a system unique.

The XML-based model used in SDM is suitable for any system that needs to be managed, large or small, distributed or in a single machine. It can describe the topology of a system, its components and the relationships between them, and all information about each component that is pertinent to deployment and ongoing operations, from technical details to prescriptive guidance and troubleshooting advice. However, rather than having to create a single model for an entire enterprise – that near impossible task – each component need only model what it does, how it does it and what other resources it needs to do it. 

Discretely modeling each component in this way has the advantage of abstracting the complexity of the underlying components, so that a component only needs to know about itself and the components around it with which it directly interacts. For example, take a custom-built business service such as a customer order-taking system.  The model for this may simply define the level of business service to be expected, how to verify the state of the service, and a configuration manifest that states its requirement for a database and a web portal of particular configurations. At this level the underlying technology of the database and the portal is unnecessary detail.  In turn, though, there will be models for the database and web portal that define their purpose, configuration dependencies and the components on which they in turn depend.  This concept ideally continues at all levels down the hierarchy of components until reaching models that define very general-purpose operating system services and the hosting hardware components.
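The layered dependency idea can be illustrated with a small sketch: each component declares only its immediate requirements, and the complete picture emerges by walking the declarations. The component names and the dependency table below are hypothetical.

```python
# Hedged sketch of layered component models: each component declares only
# what it directly needs; walking the declarations yields the transitive
# dependency closure. Component names are invented for illustration.

REQUIRES = {
    "OrderTakingService": ["Database", "WebPortal"],
    "WebPortal": ["WebServer"],
    "Database": ["Storage"],
    "WebServer": ["OperatingSystem"],
    "Storage": ["OperatingSystem"],
    "OperatingSystem": [],
}

def closure(component, seen=None):
    """All components the given component transitively depends on."""
    seen = set() if seen is None else seen
    for dep in REQUIRES[component]:
        if dep not in seen:
            seen.add(dep)
            closure(dep, seen)
    return seen

print(sorted(closure("OrderTakingService")))
```

Note that the order-taking service never names the operating system directly; it surfaces only through the database and web-server layers, mirroring how each model abstracts the complexity beneath it.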

2.3. SDM and Other Standards

There is an elegant simplicity to modeling in SDM, enhancing its practical value in modeling complex distributed systems over most modeling in this space.  A common error of models is that the basic structure is overly defined in some areas and under-expressive in others.   For example, the Common Information Model (CIM)[2] and Windows Management Instrumentation (WMI) – which derives from CIM – are both of great value for instrumenting and managing individual devices but become unwieldy if used to describe the abstracted virtual constructs of a distributed system.  Also, since CIM and WMI are targeted at instrumentation, it is difficult to define desired state using their basic concepts.  At the other end of the scale, an advanced language such as the Unified Modeling Language (UML) possesses the ability to model distributed systems using classes, relationships, inheritance, and composition. UML, however, lacks two features that are very useful for modeling the desired state of distributed systems: the ability to scope relationships within a context and the ability to specify policies that apply to instances within a context.  SDM possesses these two features and thus makes it easier to model the desired state of a distributed system.

It is also important to note that SDM does not replace instrumentation such as CIM or WMI; it simply provides a layer of model above the instrumentation, describing the desired state, interconnected relationships and management policies of the distributed system.  The semantics of composition in SDM are simple yet very rich, defining not only that a system has certain subcomponents, but also type- and value-constraints within the composite to ensure that the composite matches the design.
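A minimal sketch of what a type- and value-constraint check within a composite might look like, using invented member descriptions and constraint shapes rather than real SDM constructs:

```python
# Illustrative only: a composite definition can constrain both the type
# and the setting values of its members, so that a deployed composite
# matches its design. The shapes and names here are hypothetical.

def validate_member(member, constraint):
    """Check one member against a type constraint and per-setting ranges."""
    if member["type"] != constraint["type"]:
        return False                       # type-constraint violated
    for setting, (lo, hi) in constraint["settings"].items():
        value = member.get(setting)
        if value is None or not (lo <= value <= hi):
            return False                   # value-constraint violated
    return True

# The composite requires a SQL database configured within a given range.
constraint = {"type": "SqlDatabase", "settings": {"max_connections": (100, 500)}}
print(validate_member({"type": "SqlDatabase", "max_connections": 250}, constraint))
```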

2.4. Management Systems and Managed Applications

The SDM model’s purpose is to maintain and deliver knowledge. It is not just traditional management systems that leverage this knowledge; the application itself may make use of SDM.  Common types of management, such as configuration tuning and health monitoring, which in the past have typically been the responsibility of a separate manager, may now be the tasks of an application taking some local responsibility for its own management. 

In the world of SDM, there is a less distinct line between management platforms and managed applications. The creation of the management model starts as an intrinsic part of application development, and applications can interact with SDM in more or less tightly coupled ways. They can both provide knowledge and exploit it, by writing to or reading from the SDM through its object model (assuming that they are properly authorized). However, SDM is the knowledge system and is not itself the manager. Various applications can use this knowledge system, but the knowledge system by itself does not perform any actions on real-world systems; this remains the responsibility of the application or the management system.  Consider, for example, software distribution and installation based on the SDM. It is not SDM that performs the software distribution, and systems do not need to “self-deploy”; distribution may well remain the responsibility of a solution such as Systems Management Server, or Active Directory Group Policy used with Windows Installer. In such cases the distribution system digests the SDM from the components to be deployed, using it to construct installation job definitions in a form it can use, with a bill of materials, installation manifests, orchestration schedules, status reporting, and so on.

2.5. Practical Modeling of Managed Systems

While the model does not have to be a representation of everything, it must be an accurate reflection of the characteristics and behavior of those aspects of the system to be managed.  In this regard, the level of granularity the model must support is driven by the scenario.  Furthermore, as decisions will be made against the model, not the live system, it must be an up-to-date representation of the system.  This not only affects what is modeled, it also affects where the model should be hosted so as to overcome any issues of latency for rapidly changing characteristics.  

It is also necessary to identify anticipated changes to the system that require changes in the model. Where the system has been deployed automatically from the SDM, the “system-as-deployed” can be determined directly by reading the model. However, in real-world systems, even where SDM is pervasive, parts of the system will have been deployed or modified in other ways.  Auto-discovery mechanisms are therefore a necessary contributor to model-based management.  Such services come in many forms.  For example, monitoring systems and error-reporting tools can tell us a lot about the workings of the systems, their health, performance and aggregate service level. For systems based on a service-oriented architecture, instrumentation in the infrastructure can tell us even more, including the emergent topology of the system. As services connect with other services, perhaps using WS-Management to connect with services outside the scope of management, we can discover this connection.  Discovery of traffic volumes, service states, performance and service levels can all be used to flesh out the model, creating instances that correspond to abstract classes in the model.

2.6. Sources of Models and Policy

As has already been noted, many different groups contribute information regarding the operation and management of business systems, and each group requires authoring tools suited to its environment.  SDM is completely extensible: any development or management tool may contain authoring tools and may define models using the common SDM language; any application may register its configuration, desires, constraints or behavior within the SDM Run-time.  Developers author parts of a model in a development environment such as Visual Studio Team System.  IT administrators and architects will then author deployment and policy processes directly from their preferred management tools. 

In the future, solutions in the Microsoft System Center family, and third-party solutions, will all include experiences for creating SDM models. Over time, it is also likely that these authoring functions will become integrated into their respective environments.  As they do, the creation of SDM will become increasingly transparent, with the human experience being captured in SDM without the operator necessarily taking additional steps to author the model.

SDM started as a pure research project within Microsoft Research.  The goal was to develop a general purpose modeling language for the next generation of distributed dynamic computing, from the ground up, that would have broad and practical application by the largest number of people.  As these ideas have matured SDM has moved from Microsoft Research to the Microsoft Server and Tools Business Division (STB) where SDM is now being integrated into Microsoft’s development tools, its management solutions and the Windows platform itself.

While the first practical implementation of SDM is already in use in Visual Studio 2005 Team System, the definition of the underlying SDM language is an evolutionary process. As SDM is extended to cover more of the lifecycle, as more applications are architected to use it by Microsoft, and as other vendors start to explore the use of SDM in their products and professional services offerings, ideas and change requests are being fed back through a formal review cycle. At the time of writing, the second version of the SDM language has been completed and the third version is undergoing review with industry partners, with a view to evolving the SDM language into a standard for the entire industry.

Evolution: SDM to SML

 

SDM defines its model in an XSD-based XML document.  In the first two iterations of SDM these “types” are proprietary.  However, “SDM version 3,” now to be known as the SML Platform, will be based on a profile of the following open standards:

 

  • Service Modeling Language (SML)
  • XML Schema 1.0
  • XPointer
  • XPath 1.0
  • Schematron
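To give a feel for how these standards combine, the sketch below mimics a Schematron-style rule, which is essentially an XPath assertion over an XML document, using Python’s standard library. The document structure and the rule itself are invented for illustration and are not taken from the SML profile.

```python
# A Schematron rule is, in essence, an XPath assertion evaluated over an
# XML document. This sketch mimics one such assertion with the stdlib
# ElementTree module; the document and rule are hypothetical.

import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<webServer>
  <site name="orders" port="443"/>
  <site name="intranet" port="8080"/>
</webServer>
""")

# Schematron-style assert: every site must declare a port attribute.
violations = [site.get("name") for site in doc.findall("./site")
              if site.get("port") is None]
print(violations)   # an empty list means the assertion holds
```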


In July 2006, Microsoft, along with BEA Systems Inc., BMC Software Inc., Cisco Systems Inc., Dell Inc., EMC Corp., HP, IBM Corp., Intel Corporation, and Sun Microsystems, published a draft of a new specification defining a consistent way to express how heterogeneous computer networks, applications, servers and other IT resources are described – or modeled – in Extensible Markup Language (XML).  Based extensively on SDM, this new specification is called the Service Modeling Language (SML). In support of this work, Microsoft will be aligning all of its own work on the SDM Platform with SML and renaming the platform it is creating the SML Platform.    

Rather than discussing SDM and SML in isolation, it is more relevant to show how their ongoing development, both as a language and a platform, is being implemented in practical, commercial solutions. SDM as developed now, and currently in use in Visual Studio 2005 Team System, focuses on design-time validation. Next, products will start supporting the SDM Platform concepts developed for the next version of Windows Server (Project “Longhorn”), while the SDM meta-model matures through SDM v2 to a standards-based SML.  Products associated with DSI include Microsoft Operations Manager, Systems Management Server and the initial release of Windows “Longhorn” Server.   Post-Longhorn, with all the experience gained from its practical implementation, and now aligned with available standards, the SML Platform opens up to support solutions that extend beyond Windows, potentially modeling any environment.

There is a natural flow to the development of SDM through these three phases.  First, SDM is focused on application design requirements. Second, SDM adds aspects of real-time operations, including declarative services and a limited-scope policy model, specifically focused on deployment and role configuration in Windows “Longhorn” Server. Finally, the SDM Platform is opened up for general-purpose use, with a rich common model library available to model arbitrary distributed systems[3].

3.1. SDM Version Compatibility

Stepwise extension of SDM to new purposes and new parts of the lifecycle is important, as the model’s functions must be reliable.  In developing SDM in this way, each part of SDM can be hardened and matured through focused usage, feedback and review.  For these early iterations of SDM, Microsoft has taken the decision that each iteration should be free to improve upon its past even if this causes some issues with backward compatibility. One obvious example of this is in the alignment of SDM with standards.  In its first two iterations, SDM uses XSD descriptions in a unique way to describe SDM schema.  This was a method developed several years ago within Microsoft Research.  However, during the course of multiple reviews, it was generally agreed that the simpler interoperability gained by defining SDM types using standard XML Schema outweighs any advantage of the original approach.  The third version therefore breaks ranks with the previous two versions and is being aligned with a profile of existing W3C and ISO standards, and with the broad industry effort on the Service Modeling Language (SML), to develop a single meta-model for the entire industry.  Of course, this may not be the last time this happens. Standards themselves are continuously undergoing revision; where significant advantage is seen in adopting a new profile, developed around standards published in the future, it may be necessary to realign SDM and its successor, the SML Platform, once again.

With the potential for schema incompatibility between products using different versions of SDM, migrating the SDM-based knowledge libraries will inevitably require some effort. The key here, therefore, is to ensure that knowledge libraries can be carried forward into the new models. Microsoft understands this and is committed to ensuring that knowledge captured in one version of SDM can be hosted in the next.  To help, Microsoft will be providing extensive guidance on the migration steps required and, wherever possible, providing automation tools to reduce the overhead of the migration.

3.2. The Language of SDM

In the world of SDM (and SML) each component of the model is expressed in a piece of self-describing XML-based schema known as the SDM document. This document comprises definitions of the important objects and relationships of the modeled system. You can think of the definition as the base from which other, more specific characteristics are built and derived, including objects, relationships, constraints, and settings. Using these simple concepts SDM documents can be read by an SDM run-time host to build into a more complete view of the system. 
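As a purely hypothetical illustration of this idea, the fragment below shows what an SDM-style document with object and relationship definitions might look like; the element and attribute names are invented and do not reflect the actual SDM schema.

```python
# Hypothetical sketch of an SDM-style document: a self-describing XML
# piece comprising object and relationship definitions. The element and
# attribute names are invented, not the real SDM schema.

import xml.etree.ElementTree as ET

SDM_DOC = """
<sdmDocument name="OrderSystem">
  <objectDefinition name="WebPortal" class="WebApplication"/>
  <objectDefinition name="OrderDb" class="Database"/>
  <relationshipDefinition kind="communication"
                          source="WebPortal" target="OrderDb"/>
</sdmDocument>
"""

# A run-time host would read documents like this and combine them into
# a more complete view of the system.
root = ET.fromstring(SDM_DOC)
objects = [o.get("name") for o in root.findall("objectDefinition")]
print(root.get("name"), objects)
```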

It is worth noting that, unlike traditional enterprise management concepts, there isn’t necessarily just one run-time running a complete model of the system on an enterprise management console. In the world of SDM there are potentially many run-times.  The more dynamic the system, the more each component may need to be self-managing.  Wherever management takes place, enough of the SDM model must exist for local management policy to be carried out effectively; wherever an SDM model is to be hosted, there will need to be an SDM run-time hosting it.

Underpinning all work on SDM is the meta-model which, in version 3, will be the Service Modeling Language (SML).  This provides the rules by which models are constructed.  The meta-model, or language, is not the model; it is the semantics of the model: its base vocabulary, the rules of composition, the grammar and the syntax. The meta-model allows the creation of a framework of increasingly abstracted models, with each higher-order part of the model depending on other models below it and in turn acting as a host for the more abstracted models above it. As the level of abstraction increases, so too does the level of specificity, narrowing the function and the reusability of that particular part of the model.  Likewise, the lower down the stack a model sits, the more generic its function and the more reusable it becomes. Interestingly, there is also a link between each layer and the groups of people principally creating models at that layer, with the lowest layers being the responsibility of platform developers; the domain models the purview of application developers and their support departments; and the reusable configurations and customized application models the responsibility of customer IT support and professional services.

Figure 2: The SDM Ecosystem: In addition to a framework, SDM requires a rich library of reusable models and broad industry effort

The most reusable models of all are a set of core, or common, models. These use the meta-model to describe generic concepts: networking, operating systems, devices, storage, desktops, server systems, web servers, directory services and so on. These are the standard items on which each vendor can now develop domain-specific models. These framework, or abstract, definitions provide a common categorization that is subsequently used by user-defined, or concrete, definitions representing the actual application or data center elements. Concrete definitions extend abstract definitions and provide an implementation.

The first level of concrete definitions is domain-specific technology models, describing specific instances of the generic models. For example, there will be domain-specific models for each unique type of web server (IIS, Apache, etc.), but each will be based on the common underlying model for a web server, augmented with information characterizing the unique flavor that distinguishes one from the other.
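The abstract/concrete relationship can be sketched with simple class inheritance. This is a loose analogy only: the class names, the `default_port` setting and the `config_store` characteristic are invented stand-ins for the kind of information an abstract web server model and its vendor-specific extensions might carry.

```python
# Illustrative analogy: an abstract "WebServer" definition with
# concrete vendor-specific extensions, mirroring how concrete SDM
# definitions extend abstract ones. All names here are invented.
class WebServer:
    """Abstract definition: common categorization for any web server."""
    default_port = 80

    def describe(self):
        return f"{type(self).__name__} (web server, port {self.default_port})"

class IISServer(WebServer):
    """Concrete definition: adds IIS-specific characteristics."""
    config_store = "IIS metabase"

class ApacheServer(WebServer):
    """Concrete definition: adds Apache-specific characteristics."""
    config_store = "httpd.conf"

for server in (IISServer(), ApacheServer()):
    print(server.describe(), "-", server.config_store)
```

Tools that understand only the abstract definition can still reason about any concrete extension of it, which is precisely what makes the common models reusable across vendors.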

Even at this domain-specific layer of modeling, the models are comparatively generic. Ultimate success relies on these lower layers holding the richest possible library of generic models, and on this library growing with each new version of the platform. These models can define the behaviors of applications and services operating in particular environments, but do not in themselves describe how a system should run in order to support a particular task or to operate in a particular customer environment. This knowledge is laid down above the technology models, in the form of best practice configuration models, which in turn are inherited into, and adapted by, specialized models. These are specific to a customer or business application or service, and describe the very particular range of purposes for which that particular IT solution is being implemented. 

As mentioned, for these uppermost layers to be successful, the library of lower-level models must be extensive. This is a key objective of the current SDM review cycle. Once this is achieved, the creation of a customer-specific model will no longer be the daunting task it has been. In the future even the most complex systems can be described by creating a specialized model which simply characterizes how standard technology and best practice components of the model should behave and relate in order to solve a particular business function.

4.1. Visual Studio 2005

All too often a new application has entered its final testing stage before software developers find that it cannot be supported in the actual data center in which it is to be deployed. Software developers are not typically trained in the details of data center architectures and policies; what may seem like a perfectly reasonable set of system requirements to the software developer can easily end up being unsupportable in the real world. Rather than finding this out after the application is developed, Visual Studio 2005 Team System uses SDM v1 as the basis for a design-time validation tool. Through SDM modeling, data center architects can enter a logical description of the data center and the data center policies in force. The software design architect can then use this to validate that a design is deployable as soon as the basic structure of a new distributed application has been mapped out. Even before full coding takes place, as soon as the basic design of the application is complete, the software designer/architect can simulate its deployment on the logical data center model to verify that it is supportable in the live environment. For example, part of an application may need to be hosted in a secure zone of the data center. Servers in such a zone are often locked down tightly, with only certain services, protocols and ports enabled. During validation, Visual Studio will confirm that the new components to be deployed in this zone meet policy, or identify where they do not.
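The secure-zone example above amounts to comparing a component's declared requirements against the zone's policy. The toy sketch below shows that comparison in miniature; the policy contents, component requirements, and all names are invented, and this is not how Visual Studio actually implements validation.

```python
# Toy design-time validation: check an application component's
# required ports and protocols against a secure zone's policy.
# Policy and component data below are invented examples.
zone_policy = {
    "SecureZone": {"allowed_ports": {443}, "allowed_protocols": {"https"}},
}

component = {
    "name": "OrderService",
    "zone": "SecureZone",
    "ports": {443, 8080},
    "protocols": {"https"},
}

def validate(component, policies):
    """Return a list of policy violations for one component."""
    policy = policies[component["zone"]]
    violations = []
    for port in sorted(component["ports"] - policy["allowed_ports"]):
        violations.append(f"port {port} not permitted in {component['zone']}")
    for proto in sorted(component["protocols"] - policy["allowed_protocols"]):
        violations.append(f"protocol {proto} not permitted in {component['zone']}")
    return violations

print(validate(component, zone_policy))
# ['port 8080 not permitted in SecureZone']
```

Catching the disallowed port 8080 at design time, before any deployment attempt, is exactly the kind of early feedback the Team System validation provides.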

Having a model of the data center at design time can save multiple coding iterations, avoiding both lost time and costly code rewrites. Although the logical data center model must be created manually in Visual Studio today, Visual Studio develops the SDM model of the software system in real time, as it is being architected. The software designer/architect is not required to take additional steps to add SDM: as properties, requirements and dependencies of the software are modified, the SDM is automatically created and updated. If the code is changed the model changes; if the model is changed the code is updated. 

Although it only touches the surface of SDM's potential, even this first instantiation begins to show the value of SDM modeling in the creation of manageable systems suited to dynamic business needs. Software developers and IT operations staff are dependent on each other and yet do not speak an entirely common language. IT operations is driven to maintain secure environments and IT service levels, optimize available resources, and minimize support overheads. Codifying this world requires building models for policy, process, team workflows and risk. Mapping this world back to the developer environment of Visual Studio carries the experience and requirements of the operations end of the lifecycle back to the beginning of the lifecycle for new applications under development. In turn, developing new applications against this experience will require less and less effort, making it possible to use Visual Studio and Team Foundation Server to create inherently manageable applications that are consciously designed for operations.

4.2. System Center Products

Since the introduction of Microsoft Operations Manager (MOM), Microsoft has taken the philosophy that each application author must take full responsibility for creating the deep operations management knowledge needed to support a system in a live environment. Through its Common Engineering Criteria, which requires all its own Windows Server System product groups to provide this management knowledge when a product is shipped or updated, Microsoft is setting an example that is now being followed by many vendors. In the Microsoft world these packs of knowledge are simply called "Management Packs". Today Management Packs are primarily associated with Microsoft Operations Manager; in the future their application will extend across other System Center products. Owned by the application developer and shipping on the development cycle of the application rather than a specific management product, Management Packs provide up-to-date in-depth knowledge base content, reporting, prescriptive guidance and rules for monitoring a comprehensive array of server, service and application health indicators. Importing these Management Packs into a System Center product like MOM allows MOM to monitor the state of an application, report on trends, identify problems, often preemptively, and guide corrective intervention before system performance degrades below acceptable service levels.

Management Packs already include the concept of service and health modeling, and additional models will be added in support of other management functions, including configuration management and capacity planning. Over the next several years, all this underlying modeling will migrate to using SDM, starting with the Service Model. In fact the next version of MOM, renamed System Center Operations Manager and already in Beta, uses SDM v2 for this purpose. Reading the SDM-based service model contained in a Management Pack, System Center Operations Manager is able to build up a logical description of each application it manages, the relationships between the application components, and the dependencies each application component has on other software, operating system services and even hardware. It is the information in this model that allows MOM to progress from being an advanced element manager, monitoring physical servers acting together as a system, to being a full system manager, able to monitor a distributed application as a logical entity. 

In the future, each System Center product will make use of Management Packs, each importing the models it needs from a Management Pack to support the deep product-specific knowledge it may need to manage effectively. Still shipping with the application just as they do today, these Management Packs will be as valuable to Systems Management Server and the next generation of System Center capacity management and planning tools as they are to MOM today. For example, using an SML-based configuration model, the next version of Systems Management Server, shipping as part of DSI solutions, introduces the concept of desired configuration management. This will use SDM to provide centralized and generalized support similar in function to the specific support provided currently by a solution like the Exchange Best Practices Analyzer (ExBPA).

To explain this concept in more detail: today SMS is able to discover and inventory all the computers in an enterprise. While this is sufficient for SMS to create a detailed catalog of hardware configurations and deployed software, it is hard to determine from this whether a particular application or service is correctly configured. The problem with monitoring software to establish whether it is correctly configured is that many configuration parameters are interdependent. Also, it may not be appropriate to define an application's configuration in terms of absolute values. Rather, a particular configuration may be considered healthy provided key settings are consistently within a given range of values. Using SDM it will be possible to define configurations this way. SMS will be able to compare the configuration information it collects against the SDM model it creates to describe the desired application or service configuration. This will provide administrators with vital information about the validity of the configuration of any application or service it is monitoring. Tying this back to SMS's rich targeting and deployment capabilities will also allow administrators to remediate configuration issues centrally, from the SMS console.
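The range-based idea can be shown in a few lines. In this sketch the setting names and ranges are invented, and the comparison is deliberately simplistic compared with what a real desired configuration management feature would do; it illustrates only the principle that "healthy" means "within range", not "equal to one value".

```python
# Sketch of range-based desired configuration checking: a
# configuration is healthy if each monitored setting falls within
# its allowed range. Setting names and ranges are invented.
desired = {
    "max_connections": range(100, 1001),   # healthy anywhere in 100-1000
    "cache_size_mb":   range(256, 4097),   # healthy anywhere in 256-4096
}

actual = {"max_connections": 1500, "cache_size_mb": 512}

# Report every setting whose actual value drifts outside its range.
drift = {name: value
         for name, value in actual.items()
         if value not in desired[name]}
print(drift)  # {'max_connections': 1500}
```

Only the out-of-range setting is flagged; the in-range `cache_size_mb` is left alone, even though it does not match any single "correct" value.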

4.3. Real-Time use of SDM in Windows “Longhorn” Server

With each product Microsoft releases using SDM, SDM takes another step forward. Visual Studio 2005 Team System compares and relates configuration characteristics using an SDM-based schema; Microsoft Operations Manager will import SDM-based service models contained in applications' Management Packs to derive its real-time view of an application's composition; and in the next version of Windows Server, project "Longhorn", SDM is being used to model server roles. There is a natural progression happening here. In Visual Studio, SDM models are authored and compared; in Microsoft Operations Manager, SDM models are used to communicate knowledge from which MOM derives an understanding of the logical relationships between system components. In each of these cases the SDM itself is not actually functioning against a live environment. The big advance in "Longhorn" is that SDM finally becomes a live model: administrators will interact directly with SDM via the UI at the same time as SDM interacts with the underlying components it represents.

In Windows "Longhorn" Server, SDM will be used for one specific and important function: as the basis for the new Server Manager console. Server Manager provides administrators with a simple and secure way to install and configure server roles. Because the administrator interacts with the system via SDM, the underlying complexity of interacting with the relevant operating system services, and of understanding the inter-dependencies associated with changing configurations, is handled by the model, not the administrator. 

·         For example, an administrator wishing to deploy a Windows Server as a Terminal Server must also make sure the Network Access Services (NAS) and Web Server (IIS) on which the TS Gateway Service depends are available and correctly configured. Opening the Server Manager console, the administrator selects the option of setting up the server as a Terminal Server from the list of potential roles offered. This launches a single wizard (see Figure 3) presenting the configuration options available for the selected roles along with common security settings.
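Resolving such role inter-dependencies is, at heart, a small graph problem: a requested role pulls in the roles it depends on, in an order that installs dependencies first. The sketch below is loosely based on the Terminal Server example; the dependency graph is invented for illustration and does not reflect Server Manager's actual internals.

```python
# Toy role dependency resolution: installing a role pulls in the
# roles it depends on, dependencies first. Graph is illustrative.
DEPENDS_ON = {
    "Terminal Server": ["Network Access Services", "Web Server (IIS)"],
    "Network Access Services": [],
    "Web Server (IIS)": [],
}

def install_order(role, graph, seen=None):
    """Depth-first walk returning roles in dependency-first order."""
    seen = [] if seen is None else seen
    for dep in graph[role]:
        install_order(dep, graph, seen)
    if role not in seen:
        seen.append(role)
    return seen

print(install_order("Terminal Server", DEPENDS_ON))
# ['Network Access Services', 'Web Server (IIS)', 'Terminal Server']
```

The administrator only asks for "Terminal Server"; the model supplies the rest, which is exactly the complexity Server Manager hides behind its single wizard.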

In this illustration Server Manager is shown streamlining the administrator experience for adding the Terminal Server role to a Windows “Longhorn” Server

Note: These images are of Windows “Longhorn” Server Beta software. The final administrator experience may differ.

Figure 3: Server Manager – An administrator’s view while commissioning a new server role

 

Example Server Roles modeled by Server Manager in Windows “Longhorn” Server

 

-      Active Directory Domain Services

-      Active Directory Federation Services

-      Active Directory Rights Management Services

-      Active Directory Certificate Server

-      DHCP Server

-      DNS Server

-      Fax Server

-      File Server

-      Network Access Services

-      Print Services

-      Terminal Services

-      UDDI Services

-      Web Server (IIS)

-      Windows Media Services

-      Windows SharePoint Services

Having made any changes or accepted the standard defaults, and confirmed the change request, the administrator hands over to Server Manager, which then performs the necessary installation and configuration changes. From the point the administrator confirms the change request, Server Manager takes control, returning a confirmation report to the administrator on completion. A process that in the past has required many manual steps, using multiple configuration and security tools, becomes a streamlined operation. With the guidance of Server Manager the administrator defines the desired end state for the server; Server Manager then automates the change and confirms the results.  

Removing roles from a deployed server is handled in the same way as adding them. In each case Server Manager automates the changes to security and configuration to ensure the change happens successfully and efficiently, without compromising the safety of the system. 

In addition to adding and removing server roles, Server Manager also provides administrators with a way to view the ongoing status of server roles. Each time Server Manager is launched, it loads the SDM models and determines which roles are currently installed and their operational state. Comparing these actual states against the best practice models, Server Manager will also identify any constraint violations to the administrator for corrective action.

Figure 4: Server Manager – An administrator’s status view

Now that we have seen the purpose of Server Manager and the user experience, let's look at its underlying architecture and the role SDM plays.

Figure 5: Server Manager – The underlying technology

The Server Manager console is a snap-in to the Microsoft Management Console that links directly with the SDM run-time. In this first release the SDM run-time runs as an application, not as a service. Neither the run-time nor the live model persists; they are simply launched and created as needed. This means the SDM run-time must initialize itself each time it is started. This initialization comprises two parts: first it must import the suite of available server role models from their associated SDM documents, and then it must discover the actual state of the system. From the SDM model it has created, Server Manager knows all possible server roles, and exactly which services and APIs it must interrogate to discover the current configuration of the system. Through this interrogation, Server Manager is able to establish which roles are deployed on that server and their current state. Server Manager uses this information to create the actual instances of the real system in the run-time model, and this in turn is rendered to the operator's console. This will include configuration data from the SDM instance space, notifications generated from constraints in the model, and any other data (e.g. status, events) collected directly using server APIs. 
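The two-part initialization just described can be sketched as follows. Both data sources here are invented stand-ins: in reality the role models come from SDM documents and the discovery step queries operating system services and APIs, not a hard-coded set.

```python
# Minimal sketch of two-part run-time initialization:
# (1) import role models; (2) discover actual system state.
# Both data sources below are invented stand-ins.
ROLE_MODELS = {               # step 1: as if imported from SDM documents
    "DNS Server": {"service": "dns"},
    "Web Server (IIS)": {"service": "w3svc"},
}

RUNNING_SERVICES = {"w3svc"}  # step 2: pretend discovery result

# Instantiate the run-time model: one instance per known role,
# marked installed if its backing service was discovered running.
instances = {
    role: {"installed": model["service"] in RUNNING_SERVICES}
    for role, model in ROLE_MODELS.items()
}
print(instances["Web Server (IIS)"])  # {'installed': True}
print(instances["DNS Server"])        # {'installed': False}
```

The resulting instance space is what the console renders: the models tell the run-time what could exist, and discovery tells it what actually does.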

When an operator now attempts to apply a change through the console, they do so to the model, not the live system. The run-time will only take action after the change request is validated against the model, and even then only after being authorized to do so by the operator. With the new desired state recorded in the SDM now out of step with the actual state of the system, it is necessary to bring the actual state in line with the new desired state. This is the task of the SDM run-time's synchronization process. Finally, as the synchronization process completes, Server Manager will display the status of synchronization to the operator on a confirmation page.
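That validate-then-synchronize flow can be sketched in miniature. All of the structures and the validation rule below are invented; the point is the ordering: the change lands in the model first, is validated there, and only then is the live system (simulated here as a dictionary) brought into line.

```python
# Sketch of the change flow: operator's change goes to the model,
# is validated, and is then synchronized to a simulated live system.
model  = {"Web Server (IIS)": {"desired": "installed"}}  # operator's change
actual = {"Web Server (IIS)": "absent"}                  # live state

def validate(model):
    """Invented rule: desired states must be recognized values."""
    return all(state["desired"] in ("installed", "absent")
               for state in model.values())

def synchronize(model, actual):
    """Bring actual state in line with desired state; report changes."""
    changes = []
    for role, state in model.items():
        if actual.get(role) != state["desired"]:
            actual[role] = state["desired"]   # simulate applying the change
            changes.append(role)
    return changes

if validate(model):
    print(synchronize(model, actual))  # ['Web Server (IIS)']
print(actual)                          # {'Web Server (IIS)': 'installed'}
```

If validation fails, synchronization never runs, so an invalid request can never reach the live system.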

In its first release Server Manager will ship with a set number of SDM documents describing the server roles. This set will not be extensible. To some degree this makes the use of SDM in the "Longhorn" Server Manager an extra layer of abstraction that may at first sight seem like over-engineering. It would certainly be possible to achieve the same results for Server Manager by hardwiring the solution. However, it is important to see this first Server Manager implementation as an important proof of concept. In this well-defined environment it is possible to refine and harden the modeling and run-time aspects of operating SDM in real time before releasing an extensible version of the SDM run-time as a Windows service.

4.4. SDM as a Windows Service

In the initial release of Windows "Longhorn" Server, SDM in Server Manager only runs when the Server Manager application is running. The major architectural step to take will be to add persistence to the run-time model. This requires the addition of a store and the opening up of the run-time, moving it from being an in-memory application tightly coupled to the Server Manager function to being an extensible background Windows service.

 

Figure 6: Server Manager post-“Longhorn”

As in the original Server Manager implementation of the run-time, there is a synchronization engine and a discovery component, though the latter function moves to a declarative model. However, the major difference in the use of SDM/SML is that the SDM/SML model now persists in a background store and remains distinct from any particular client session. Several SDM/SML client applications can now interact with the store, each creating its own session instance comprising the subset of the model in which it is interested. In this architecture, an SDM/SML session provides an interactive view of the model through which a human operator or automation tool can initiate action, observe change and interact with the model. Just as with Server Manager, the instantiated session model only exists while the client is running. Unlike the original Server Manager implementation, the model continues to be available as a reference in the background, with changes made to it through the client session persisting over time. In effect, the store becomes a Configuration Management Database (CMDB) for the system it describes.


 

4.5. SDM/SML as a General Purpose CMDB

With the upswing in popularity of best practice standards like the Information Technology Infrastructure Library (ITIL) there has been a significant increase in vendors promoting Configuration Management Database (CMDB) technologies.  The concept of a CMDB lives within a broader service management discipline called Configuration Management; its purpose is to provide definitions of the current, past and future states of managed systems.

The value of a Configuration Management Database (CMDB) is in its ability to provide the basis by which this is possible, by modeling the environment and maintaining the information necessary to evaluate the impact and success of change.  The CMDB itself may exist as a single database or an aggregation of information from multiple sources. Which path is taken is an implementation detail; what is more important is that a CMDB models the right items, that it can describe the expected relationships between each item, and that it can track the actual state and relationship of these items over time.

The CMDB is not a new concept; it was developed in the mainframe era, and today's CMDBs are still largely little more than a series of relational database records existing in an archive. Using SDM as the generalized model fronting a variety of live and archive sources, a CMDB can finally exist both live and in an archive. Instead of having to predefine all potential Configuration Items (CIs) and their relationships, the schema can evolve, adding new serviceable items as series of inter-related objects and extending the CMDB as needed. Service levels can be held not just as documents but as machine-readable policies, and an issue in a CI at the business level can be traced not just to other business CIs but also to the physical devices hosting key software components. Now we have the basis of a new generation of CMDB, suited to the needs of both the traditional data center and the dynamic systems of the future. The SDM store will, in effect, model the CMDB function.  

These ideas will actually first be used by System Center Configuration Manager 2007, the next version of Systems Management Server (SMS), as part of its desired configuration management feature. In DSI, with the implementation of SDM as a general-purpose service, in addition to being a service for a local Windows server, the SDM Store also becomes the basis for a general-purpose CMDB. Such an implementation is at the heart of the upcoming service desk solution for System Center, modeling potentially thousands of devices in a distributed and dynamic environment. This will be a major step forward from today's CMDB implementations.

Using SDM to model the CMDB will finally free the CMDB from any particular implementation, creating a virtual CMDB out of data sourced from multiple locations, including directories, management databases, policy and practice libraries, and even ERP systems. Unlike today's CMDBs, the source data will not be moved to a central repository; it will remain in situ, and therefore up to date, at the source. SDM v3 will simply maintain a model of where the data is and of the relationships, allowing any system to query the unified SDM model to retrieve information on CIs and mine CMDB information.
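The virtual-CMDB idea can be sketched as a query that fans out to the sources in situ rather than reading a central copy. The source names, the CI identifier and the returned fields below are all invented; each lambda stands in for a live query against a real source such as a directory or a monitoring database.

```python
# Sketch of a "virtual CMDB": the model records where each
# configuration item (CI) lives; a query fans out to the sources
# in situ instead of copying data centrally. All names invented.
SOURCES = {
    "directory":  lambda ci: {"owner": "ops-team"} if ci == "web-01" else {},
    "monitoring": lambda ci: {"status": "healthy"} if ci == "web-01" else {},
}

# The model only knows *where* each CI's data lives, not the data itself.
CI_LOCATIONS = {"web-01": ["directory", "monitoring"]}

def query_ci(ci):
    """Assemble one CI record by querying each live source in turn."""
    record = {"ci": ci}
    for source in CI_LOCATIONS.get(ci, []):
        record.update(SOURCES[source](ci))   # fetch from the live source
    return record

print(query_ci("web-01"))
# {'ci': 'web-01', 'owner': 'ops-team', 'status': 'healthy'}
```

Because each fragment is fetched at query time, the assembled record is as current as its sources, with no replication lag to a central repository.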


The Dynamic Systems Initiative is a multi-year project that will affect many aspects of how systems are developed and implemented. While DSI will spawn technological innovation, the initiative is also a call to action for technologists to rethink what is required to design, implement and operate dynamic systems. Key to this is management, an area of IT that has sometimes been an afterthought: something to be added after a system has been implemented, as and when budgets allow. Some years ago the industry took a similar view of security; now security considerations are integral to design. With DSI, through a combination of technology, process and leadership, Microsoft is pioneering a similar change in how management is perceived and implemented. From a technology perspective, Microsoft is committed to innovating technologies such as SML, the SML Platform and WS-Management, and embedding these into its platform, management solutions and software development tools to provide an intrinsic infrastructure for management that spans the IT lifecycle. From a process angle, Microsoft is creating and publishing a framework of best practices ranging from development projects to ITIL-based operations guides; these are embodied in its Microsoft Solutions Framework (MSF) and Microsoft Operations Framework (MOF) respectively. From the point of view of leadership, Microsoft has implemented a strict set of requirements on its own product teams in the form of the Common Engineering Criteria (CEC). This includes DSI-related management requirements for all new applications released in the Windows Server System family, setting an example to others by ensuring its own product teams are at the forefront of this new way of thinking.

Adopting the ideas of the Dynamic Systems Initiative does not require waiting for all DSI technology, or for Windows “Longhorn” Server.  You can start implementing this new way of thinking about manageability now. As Microsoft is demonstrating with the Common Engineering Criteria and its own management solutions, there are many practical steps that can be taken today that improve manageability and help IT operations and development staff take advantage of each new advance along the DSI road to self-managing dynamic systems.  

Here’s what to consider immediately:

·         Review the Microsoft Common Engineering Criteria and use it to develop and implement your own manageability best practices for all future applications, whether developed in-house or purchased from external suppliers.

·         Get IT staff trained in and implementing ITIL/MOF-based operational practices. 

·         Move in-house development projects to Visual Studio 2005 Team System; use the SDM and then SML to model your data center requirements and validate new designs against their ability to be implemented during the earliest stages of application design.

·         Implement Microsoft Operations Manager 2005 (MOM) and associated Management Packs to manage your Windows Servers.  This will provide the most complete management of the base platform and all Windows Server System applications.

·         Use the MOM add-ins for Visual Studio to create MOM Management Packs for all in-house developed applications running on Windows. Links to further information on all the above can be found in the next section.

See the following resources for further information:

·        Microsoft System Center Management Solutions Overview

o http://www.microsoft.com/management

·        Dynamic Systems Initiative

o http://www.microsoft.com/dsi

·        System Definition Model:

o http://www.microsoft.com/windowsserversystem/dsi/sdm.mspx

·        Service Modeling Language:

o http://www.microsoft.com/windowsserversystem/dsi/serviceml.mspx

·        WS-Management

o http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnglobspec/html/wsmgmtspecindex.asp

·        Building MOM Management Packs

o http://www.microsoft.com/downloads/details.aspx?familyid=c5b42e5b-68ed-45ea-8864-a9d4087d261d&displaylang=en

·        SDM and Visual Studio 2005 Team System:

o http://msdn2.microsoft.com/en-us/library/ms181772(VS.80).aspx

·        MOF/ITIL and Solutions Accelerators

o http://www.microsoft.com/windowsserversystem/overview/benefits/manageability/default.mspx

·        Windows “Longhorn” Server

o http://www.microsoft.com/windowsserversystem/windowsserver/bulletins/longhorn/beta1.mspx

·        Common Engineering Criteria

o http://www.microsoft.com/windowsserversystem/cer/default.mspx

 



[1] State describes the operating condition of the system (including configuration, service availability, performance levels, etc).

[2] The Common Information Model (CIM) and Windows Management Instrumentation (WMI) were originally developed in the mid-1990s as a unified description and access mechanism for the management instrumentation of hardware and software components within a single piece of hardware or local cluster, such as a single PC, networking component or storage device.

[3] An architectural view of the SDM Platform in DSI can be found in figure 6 below.

 