Lee Ackerman

Biography

Lee Ackerman is the VP Products & CTO at The Emphasys Group. Over the years, Lee has investigated and evaluated new ideas, designed and developed solutions, and helped others to do the same. Today he focuses on helping organizations improve how they deliver software through automation, reuse and agile best practices.

He has had significant impact in the areas of service-oriented architecture, patterns-based engineering, and asset-based development. Most recently, he was the driving force behind the creation of the Service-Oriented Analysis and Design Model Accelerator – an innovative product that automates the analysis and design of SOA solutions.

Prior to joining The Emphasys Group, Ackerman spent 11 years with IBM and, in recognition of his technical leadership, was designated as an IBM Sr. Certified IT Specialist. During this time with IBM, he assisted IBM customers, business partners and IBM employees to improve how they delivered software. His books, articles and courses have been used to enable thousands in best practice based approaches to software delivery.

The model accelerator incorporates many of the concepts that he pioneered while with IBM and wrote about in his book Patterns-Based Engineering: Successfully Delivering Solutions via Patterns.


Top 3 Tips for Using Automation to Bring Governance to
Service-Oriented Analysis and Design

Published: March 23, 2012 • Service Technology Magazine Issue LX PDF
 

Abstract: Governance plays a critical role in the success of our SOA projects. However, the fog and friction of development hinder the effectiveness of enforcing governance policies in our service-oriented analysis and design (SOAD) efforts. Adopting a process and a toolset, consulting documentation and participating in skill building events are the typical first steps organizations take as they combat the fog and friction of development. However, learning, internalizing and effectively executing a process and set of tools can be an exercise in frustration. This article discusses how to use automation to apply patterns, validate designs and summarize details to amplify the efforts and skills of teams in SOAD. By using these approaches, we are better able to align business and IT, ensure proper investments, and track and enforce adherence to best practices, all of which combine to bring process and governance to life.


Introduction

Service-oriented analysis and design (SOAD) encompasses the tasks, deliverables and workflows that lead to the creation of service-oriented (SO) solutions. This includes the efforts across business modeling, service identification, service specification, and service realization.

SOA governance empowers and enables the people within the organization to successfully perform SOAD and deliver SOA solutions. People become empowered by establishing chains of responsibility, authority and communication. We enable people to carry out their roles and responsibilities by establishing measurement, policy and control mechanisms. The team must use the same best practices, improve communication, and support measurement and traceability. The focus of such efforts is to ensure business-IT alignment, reduce risk, ensure proper investments, support reuse, and deliver quality solutions in a timely manner. You might start out successfully with your SOA initiative, but without SOA governance from the beginning, you will find an increasing share of resources consumed by maintaining your existing services, and you will start to lose the value of moving to SOA in the first place.

Realizing the potential and promise of SOA is best achieved through the successful application of SOAD along with SOA governance. However, our efforts need to overcome the fog and friction [REF-1] associated with any significant undertaking. That is, are we able to gather reliable information and reduce the fog surrounding our efforts? And with information in hand, can we execute in an effective and timely manner by reducing the friction? What obstacles need to be overcome, what delays and unnecessary (and unexpected) steps must we deal with in order to complete the project?

The following figures touch upon some of the questions related to the fog and friction that we must overcome.



Figure 1 – Questions that Highlight the Fog Associated with Service Development.




Figure 2 – Questions that Highlight the Friction Associated with Service Development.


The above lists aren’t exhaustive, although it certainly is exhausting to think of the work that is needed to tackle these issues. Sadly, we often fail to overcome these challenges, and in coming up short, the negative outcomes tend to snowball and put projects in jeopardy.

To address these challenges we need to harness proven approaches and tooling. By doing so, we are able to reduce the fog and friction and in turn improve our governance efforts and deliver better solutions.


Automate. Accelerate. Amplify.

There are many options available for tooling, process, education, and skill building. There are modeling and development tools, domain-specific languages (such as SoaML [REF-2]), published processes (e.g., Rational SOMA [REF-3]), books, training courses, articles, and communities of like-minded individuals. However, how can we expect the practitioner to consume, internalize and then follow through? We place a significant cognitive burden on our practitioners, yet are surprised when the results are not what we expect.

Modeling can play a key role in addressing the issues discussed to this point. The essence of modeling is that it is a simplification of reality. Using a model allows us to hide details that unnecessarily complicate and hinder our understanding. Ideally, we can take things a step further – starting from a base of simplification, we use models to comprehend, communicate and generate (parts of) a solution.

However, we still are left with questions: What should be modeled? How should it be modeled? And once we get started, how do we reduce repetitive, mechanical and mind-numbing steps? How can we focus on the valuable and creative aspects of our design efforts? How do we leverage the investment we’ve made in our models? How can the information from the model be used for enhanced comprehension, improved communication and better analysis? And, how do we move from a process that is documented to a process that is followed and realized in our day-to-day efforts; that is, how do we bring our process to life?

Modeling has shown value in supporting communication, thinking through solutions and supporting generation of related artifacts. But, this is just a starting point. We can amplify the impact of modeling and accelerate our efforts through three key approaches to automation:

  1. Infuse best practices via patterns
  2. Validate designs via constraints
  3. Summarize information and identify governance gaps via report generation

1. Infuse Best Practices via Patterns

From a technical point of view, patterns are focused on providing proven solutions to known and recurring problems. They abstract away the core of the solution and allow us to customize the application of the pattern – mapping elements from our context into the solution through points of variability.

Things get especially interesting as patterns are automated in tooling [REF-4]. Rather than expecting a practitioner to model using only fine-grained elements from SoaML or business modeling languages, we use a combination of such elements along with pattern-driven, coarse-grained, customizable collaborations of elements. Automating patterns:

  • Reduces mechanical steps: The pattern generates model elements, properties and relationships. This work is done the same each time, is performed quickly and reduces mechanical steps – letting the practitioner focus on solving the creative and challenging aspects of the task at hand.
  • Accelerates project efforts and improves solution quality: Fewer mechanical steps, automated application of best practices and adherence to best practices (less rework) mean that we get it done more quickly.
  • Enhances documentation and traceability: The pattern instances themselves increase the meaningful documentation in the model. The relationships that are generated between model elements, and from the pattern instance to model elements increases the level of traceability in the solution. We are better able to answer questions such as:
    • “Where did this come from?”
    • “Why is this here?”
    • “What happens when we change this?”
  • Infuses best practices into the solution: An automated pattern will usually encapsulate multiple steps, elements and best practices. As a result, the pattern reduces the burden that we place on the practitioner to remember and follow through on applying and adhering to best practices.
  • Brings best practices to life: Having a process to follow is nice. Having people follow and adhere to the process is crucial. Patterns move process from documentation into actionable realization.

In designing an SOA solution, the model contains numerous elements:

  • Domain Elements: These are the elements that we use to understand the problem domain or the details of the solution domain and include things such as service interfaces, business goals, service participants (providers and consumers), and so on.
  • Relationships: Examples of common relationships include inheritance, implementation, composition or dependencies for traceability.
  • Properties: Profiles and stereotypes extend UML and make it possible to capture additional details about model elements. These details are captured as properties.
  • Pattern Instances: Each use of a pattern leads to an instance of the pattern within the model. Each instance is related to its definition (which details the pattern) and to the elements that are bound to each of its parameters.
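To make these four kinds of model content concrete, here is a minimal sketch in Python. It is purely illustrative: the class shapes, element names and property keys are invented for this example and are not part of SoaML or any particular tool's model format.

```python
from dataclasses import dataclass, field

@dataclass
class Element:
    name: str
    stereotype: str                       # e.g. "BusinessGoal", "Capability"
    properties: dict = field(default_factory=dict)

@dataclass
class Relationship:
    kind: str                             # e.g. "trace", "dependency"
    source: str
    target: str

@dataclass
class PatternInstance:
    pattern: str                          # name of the pattern definition
    bindings: dict                        # parameter name -> bound element name

@dataclass
class Model:
    elements: list = field(default_factory=list)
    relationships: list = field(default_factory=list)
    pattern_instances: list = field(default_factory=list)

# A tiny model: a business goal, a candidate service traced back to it,
# and the pattern instance that generated the candidate.
model = Model(
    elements=[
        Element("Reduce Order Time", "BusinessGoal", {"targetDate": "2012-12-31"}),
        Element("OrderProcessing", "Capability"),
    ],
    relationships=[Relationship("trace", "OrderProcessing", "Reduce Order Time")],
    pattern_instances=[
        PatternInstance("Business Entity Promotion",
                        {"source": "Reduce Order Time",
                         "candidate": "OrderProcessing"}),
    ],
)
```

The point of the sketch is that pattern instances and traceability relationships live in the model alongside the domain elements, which is what later makes validation and report generation possible.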


Figure 3 – A High-Level View of the Elements in a Model.


Discussing relationships and traceability requires us to touch upon a key aspect of modeling. When modeling, there is a clear distinction between the model and the diagrams that present views of the model. Diagrams provide us with different views on the model and the elements contained therein. They enable us to focus on specific areas. We can hide details, vary content and emphasize design aspects, depending on our goals.



Figure 4 – A Model Supports Multiple Views.


We also need to recognize that the models (and the elements within) created through the use of automated patterns provide a rich source of data for governance efforts. Patterns may impact a diagram, but they will definitely impact the model. These impacts include:

  • Adding/updating model elements
  • Adding/updating model element properties
  • Adding pattern instances to the model
  • Adding/updating relationships between elements

Validating models via constraints and summarizing information via generated reports, as described in later sections, takes advantage of the rich set of data that is created to produce key input for our metrics, measurement, and risk mitigation efforts associated with SOA governance.


SOAD Patterns

The SOAD patterns catalog is introduced in the “Designing Solutions Using the SOAD Model Accelerator” eBook [REF-5]. In the following sections, an overview of these patterns, grouped by area of focus, is provided.


Business Modeling Patterns

  • Business Goal Decomposition: Sometimes our business goals are too broad in scope to be acted upon. To simplify understanding, analysis and implementation, we subdivide or decompose the high-level goal into a number of sub-goals.
  • Business Entity Promotion: Starting with Business Entities, the pattern generates candidate services, potential participants, message types and information types. Traceability relationships from source elements to generated elements are automated by the pattern.
  • Structured Business Driven Scenario: Guides the practitioner to consider the key elements that comprise a business scenario (business goal, use case, scenario, actors and workers). With this set of elements defined, the pattern then generates a use case realization, relationships between participating elements, and an interaction with lifelines for each realization participant.
  • Use Case Realization Promotion: Migrate to service identification by using the responsibilities as identified within use case realization interactions as the basis for candidate services.

Service Identification Patterns

  • Capability Consolidation: Take the responsibilities of one or more fine-grained capabilities and move them to a more coarse-grained capability. The fine-grained capabilities are put aside for documentation purposes and are no longer considered candidate services. Traceability relationships are added to the model to understand how the coarse-grained element gained additional responsibilities.
  • Capability Promotion: Takes a model with a set of confirmed capabilities and generates a set of service interfaces. Before using this pattern, the practitioner is expected to have evaluated and categorized the capabilities.
  • Capability Usage: Creates a usage dependency between an originating capability and set of target capabilities.
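As a hypothetical illustration of what a pattern such as Capability Promotion does behind the scenes, the sketch below generates service interfaces and traceability links from confirmed capabilities. The function name, dictionary shapes and naming convention are invented for this example; the actual tooling works against the UML model.

```python
def promote_capabilities(capabilities):
    """Sketch of Capability Promotion: each confirmed capability yields a
    service interface plus a traceability link back to its source."""
    interfaces, traces = [], []
    for cap in capabilities:
        if not cap.get("confirmed"):
            continue                      # only evaluated, confirmed capabilities
        iface = {"name": cap["name"] + "Service",
                 "stereotype": "ServiceInterface"}
        interfaces.append(iface)
        traces.append({"kind": "trace",
                       "source": iface["name"],
                       "target": cap["name"]})
    return interfaces, traces

interfaces, traces = promote_capabilities([
    {"name": "ManageOrders", "confirmed": True},
    {"name": "LookupPostalCode", "confirmed": False},  # not yet confirmed
])
```

Note that the generated traceability links are what later allow a report to answer "where did this service interface come from?"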

Service Specification Patterns

  • Service Litmus Test: Evaluate the constraints such as time, costs, business fit, available resources, and technical concerns to determine candidate services that warrant investment and should be created.
  • Compound Service Contract: Guides the practitioner to think in terms of creating more coarse-grained service contracts via the composition of a number of more fine-grained service contracts.
  • Coordinating Participant: Specify a more sophisticated participant that offers services and makes requests in a way that allows us to minimize mechanical steps while focusing on problem solving.
  • Coordinating Service Interface: Support a bi-directional communication between service providers and consumers. In such situations we specify provided and required interfaces and detail a protocol for the interaction.
  • Domain Prefix: Quickly and easily update the model package names to support solution creation and deployment.
  • Participant Subsystem: Create subsystems comprised of other participants while defining the externally available interfaces for services offered and requests that will be made.
  • Simple Request Participant: Used to model a participant that is a consumer of a simple set of services.
  • Simple Service Participant: Used to model a participant that is a provider of a simple set of services.

To summarize, by using a catalog of automated patterns, we’ve taken three significant steps forward in our SOA governance efforts:

  1. We have selected a set of best practices to be used in our SOAD efforts.
  2. By automating the best practices within tooling we have simplified best practice application and adherence.
  3. The pattern instances and resulting model updates have provided a rich source of information for confirming best practice application and adherence, solution insights, and metrics.

2. Validate Designs via Constraints

Before diving into the topic of using model validation and constraints, recall that patterns have a written specification. This documentation includes:

  • An overview of the pattern
  • A description of the problem and solution
  • A listing of forces and constraints related to applying the pattern
  • A guide to related patterns

We use this information to decide when and where to use a pattern, as well as for guidance about other patterns we should consider using.

This provides us with a path for determining which patterns to use. However, this is not the only path. We can also use model validation via constraints to flag design issues and guide us to pattern use opportunities.



Figure 5 – A List of Warnings Generated by Model Validation.


Automated model validation also provides a quick approach for quality assessment and best practice adherence. There will be times when we want to get a higher-level view of a model’s content. Perhaps we’ve stared at our own model for a bit too long, or, perhaps someone else created the model, and we want to get a feel for what has gone into the design.

To validate a model, we rely upon the information in the model. Automating validation is only possible when we can query and evaluate the model elements, relationships, properties, and pattern instances. Such information doesn’t exist if we are only using drawing tools to capture designs.

A set of constraints that has proven highly effective includes:

Business Goal Constraints:

  • Change Value: Business goals should state a scalar value for change value.
  • Date: Business goals should state a date by which the goal will be attained.
  • Performance Measurement: Business goals should have performance and measurement values specified.
  • Sub-goals: Business goals should be decomposed into a set of sub-goals.

Capability (candidate service) Constraints:

  • Capability Catalog: A capability should be placed within a package with a catalog stereotype applied. That is, thought should go into categorizing and organizing the services being considered.
  • Capability Responsibility: A capability should have more than one responsibility. We look to avoid capabilities that are too fine-grained.
  • Service Litmus Test: In our analysis efforts, we will identify more candidate services than what should be developed. Some will merge, some will disappear, and others will be postponed to a later project. We should perform a service litmus test for each candidate service to help us decide and document where we make investments. The validation flags untested capabilities and directs the practitioner to use the Service Litmus Test pattern, which will guide them through and document the results of the testing effort.
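The constraints above could be evaluated along the following lines. This is a sketch for illustration only: the dictionary shapes and warning wording are assumptions, and real validation tooling queries the UML model elements, properties and relationships directly.

```python
def validate(model):
    """Sketch of constraint-based validation: emit warnings (as in the
    warning list of Figure 5) rather than failing outright."""
    warnings = []
    for goal in model.get("goals", []):
        if "date" not in goal:
            warnings.append(f"Business goal '{goal['name']}' has no attainment date")
        if not goal.get("subgoals"):
            warnings.append(f"Business goal '{goal['name']}' is not decomposed into sub-goals")
    for cap in model.get("capabilities", []):
        if len(cap.get("responsibilities", [])) <= 1:
            warnings.append(f"Capability '{cap['name']}' may be too fine-grained")
        if not cap.get("litmus_tested"):
            warnings.append(f"Capability '{cap['name']}' has no Service Litmus Test")
    return warnings

issues = validate({
    "goals": [{"name": "Reduce Churn", "subgoals": ["Improve Support"]}],
    "capabilities": [{"name": "ManageOrders",
                      "responsibilities": ["create", "cancel"],
                      "litmus_tested": False}],
})
```

In this small example the goal is missing its attainment date and the capability has not been litmus tested, so two warnings are produced.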

3. Summarize Information and Identify Governance Gaps via Report Generation

Consider some of the many scenarios where it is essential to be able to share information about SOA solution design:

  • Personal review and understanding of our model
  • Participating in design reviews
  • Sharing with others to help them understand the solution
  • Providing input in EA efforts for project coordination
  • Supporting SOA governance efforts with solution metrics, best practice application and adherence insights, and details about business-IT alignment

The key word here is “information”. We can’t just throw a model or diagram at someone and expect comprehension to follow. We need to review, interpret and analyze the data we have and put it into meaningful representations. This requires a blending of summary graphics as well as supporting details.

The challenge is to take everyday work products and minimize the effort and time it takes to transform them into meaningful, shareable representations. Using manually created diagrams, pictures, spreadsheets and documents is a path that we already know does not work. It takes too long to create these artifacts, never mind the fact that they are out of date almost from the moment of inception. Creating and recreating such diagrams is seen as busy work. As a result, we don’t bother with the effort. It’s just documentation – and as everyone knows, the job is to deliver software.

This brings us back to the importance of having a model (containing meaningful representations of domain elements, relationships, pattern instances, traceability, and so on). This is a rich source of information waiting to be leveraged for the benefit of the organization. Automating the generation of reports that harvest information from our models is a significant win for the organization. We need reports that can be generated in seconds, which provide meaningful insights, and which can be regenerated at any time, taking into account the current state of our model(s).

Let’s take a look at three reports that have been automated and which provide great value and support for our governance efforts.


Goal-Service Model Report

The Goal-Service Model report is used to ensure direct relationships between the business goals and the services as identified and specified in our SOAD efforts. This report details the traceability between business strategy and IT capabilities. Use of this report enables us to inspect and adapt during development - ensuring that IT stays aligned with the business.

Constraints and patterns discussed previously guide the user to create model elements that are referenced by this report. Some of the patterns that directly lead to meaningful information for the report include Structured Business Driven Scenario, Use Case Realization Promotion, and Capability Consolidation.

By using these patterns in combination we have direct traceability from our services to the business goals. The report uses the traceability links to list all of the business goals and the services that map to each goal. But that’s just the beginning; the report also identifies:

  1. Business goals that do not have any services
  2. Business goals that have too many services
  3. Services that do not map to any business goals
  4. Services that map to too many business goals
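The four gap checks above can be sketched as follows. This is a hypothetical illustration: the data shapes, names and the max_per_goal threshold are invented, and the actual report is generated from the traceability links in the model.

```python
def goal_service_gaps(goals, services, links, max_per_goal=5):
    """Sketch of the Goal-Service Model gap analysis. `links` maps a
    service name to the list of business goals it realizes."""
    goals_to_services = {g: [] for g in goals}
    for service, linked_goals in links.items():
        for g in linked_goals:
            goals_to_services.setdefault(g, []).append(service)
    return {
        "goals_without_services":
            [g for g, s in goals_to_services.items() if not s],
        "overloaded_goals":
            [g for g, s in goals_to_services.items() if len(s) > max_per_goal],
        "orphaned_services":
            [s for s in services if not links.get(s)],
    }

gaps = goal_service_gaps(
    goals=["Reduce Order Time", "Improve Loyalty"],
    services=["OrderService", "ReportingService"],
    links={"OrderService": ["Reduce Order Time"]},
)
```

Here "Improve Loyalty" is flagged as a goal with no services, and ReportingService is flagged as an orphaned element with no owning business driver.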

The following figure provides the first high-level summary in the report – a simple inventory of goals, candidate services and service interfaces that exist in the model.



Figure 6 – Inventory of Goals and Services.


The next figure provides another summary view, this time of business goal realization. In this representation we are shown how many of the goals have some form of realization (that is, they are connected to one or more candidate services or service interfaces). We are also alerted to the fact that the model contains candidate services and service interfaces that do not map to any business goals – orphaned elements, not owned by any business driver.



Figure 7 – Analysis of Goals and How They are (Not) Realized.


Along with these summary views, the report examines each business goal, showing:

  • Performance and metrics
  • Capabilities that realize the business goal (either directly or via use case realization)
  • Service interfaces that realize the business goal (either via a capability or direct relationship)

Service Litmus Test

Further supporting the goal of business-IT alignment and successful SOA solution delivery, we perform litmus tests on our services. Litmus tests are conducted on all of the services, following an iterative and incremental approach.

The following figure shows a service litmus test summary view. In this summary we can see how many services have been tested, how many have successfully completed testing, how many have not yet been tested, and even cases where we have gotten ahead of ourselves and have a test with no service yet specified.



Figure 8 – Summary of Service Litmus Test Results within a Model.




Figure 9 – Legend for the Service Litmus Test Pie Chart (Figure 8).


Service litmus tests are a critical part of our governance effort. We need to make sure that we are investing in the right services. Reflective of this importance, there is support for service litmus tests across all three approaches to automation. The pattern supports the application of the test, capturing results of the test. The constraints provide a quick and simple method to find services that have not been tested. The report summarizes and details testing efforts.

When performing a service litmus test, a minimum set of questions that should be answered include:

  • Is the Service Business Aligned: Does the service provide business functionality? And, does it trace back to a business process or goal? We also need to consider service lifecycle funding, whether and how the service will be used (internally? externally?) - and the implications of such use.
  • Does the Service have an External Description: Does the service have a WSDL file (or some other comparable representation) that is used to describe the solution? Is this description external and separate from the implementation?
  • Is the Service Reusable: Can the service be reused in all of the processes where it is required?
  • Is the Service Technically Feasible: Taking into account all functional and non-functional requirements, is it possible to create the service?
  • Is the Service Composable: Does the service have the right level of granularity? Does the service meet QoS requirements? Is the service stateless? Is the service technically aligned?
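A minimal sketch of how such a test might be scored is shown below. The question keys and the all-or-nothing pass rule are assumptions made for illustration; the actual Service Litmus Test pattern captures richer results directly in the model.

```python
LITMUS_QUESTIONS = [
    "business_aligned",       # provides business functionality, traces to a goal
    "external_description",   # e.g. WSDL, separate from the implementation
    "reusable",               # usable in all processes that require it
    "technically_feasible",   # can be built given the requirements
    "composable",             # right granularity, stateless, meets QoS
]

def litmus_test(answers):
    """Sketch of a Service Litmus Test: a service passes only if every
    question is answered 'yes'; unanswered questions count as failures."""
    failed = [q for q in LITMUS_QUESTIONS if not answers.get(q)]
    return {"passed": not failed, "failed_questions": failed}

result = litmus_test({
    "business_aligned": True,
    "external_description": True,
    "reusable": True,
    "technically_feasible": True,
    "composable": False,      # e.g. too coarse-grained as currently specified
})
```

Recording which specific questions failed, rather than a bare pass/fail, is what lets the report detail the testing results per question for each service.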

Along with the summary view, the report details:

  • Service interfaces that have passed the litmus test
  • Testing results for each question for each service that has not yet passed the litmus test
  • Service interfaces that have not been tested
  • Tests that exist in the model that do not map to any service interface

Pattern Instances

With SOA governance we want to establish the best practices that the team should follow. Automation makes the application and adherence of best practices actionable and significantly increases the likelihood of such occurrences. However, we still will want to know where and how best practices are being used. The pattern instances report helps us to meet this need by detailing each of the patterns that are used within a model.

The following figure shows a summary view from the report, where we can see how many instances of each pattern are applied in the model.



Figure 10 – Summary of the Pattern Instances Found within the Model.


The details section of the report lists each pattern instance along with the parameters that have been bound to the pattern instance. In doing so, we gain insights into which patterns are used, how often they are used, and where in the solution they are used. This data provides insight into solution quality, as well as team adherence to best practices.
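The summary view could be computed along these lines (a sketch; the pattern instance representation is invented for illustration, and the actual report reads instances from the model):

```python
from collections import Counter

def pattern_instance_summary(instances):
    """Sketch of the Pattern Instances report summary: count how often
    each pattern has been applied in the model."""
    return Counter(inst["pattern"] for inst in instances)

summary = pattern_instance_summary([
    {"pattern": "Business Goal Decomposition",
     "bindings": {"goal": "Reduce Churn"}},
    {"pattern": "Capability Promotion",
     "bindings": {"capability": "ManageOrders"}},
    {"pattern": "Capability Promotion",
     "bindings": {"capability": "ManageClaims"}},
])
```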

In summary, with automated report generation, execution occurs in seconds and is based on the current state of the model. We can support information requests on demand – and always meet the request with up to date and accurate insights into design status.


Conclusion

Working in teams, collaborating across organizational boundaries, integrating systems and partners, and the inherent challenge of delivering solutions based on abstractions leaves little room for busy work or rework. Strategic use of automation helps us to attack and reduce the impact of the fog and friction of SOAD and SOA governance.

By reducing fog and friction we are better able to align business and IT, ensure proper investments, and track and enforce adherence to best practices. Three key approaches to automation make these results possible:

  1. Infuse best practices via patterns
  2. Validate designs via constraints
  3. Summarize information and identify governance gaps via report generation

Although we can use these approaches in isolation, much greater value comes from using them in combination. With a strategy of combined application we amplify and accelerate our efforts.


References

[REF-1] Clausewitz, Carl von, “On War – Volume 1”. The book is in the public domain and available via Project Gutenberg: http://www.gutenberg.org/ebooks/1946

[REF-2] Object Management Group Inc., “Service Oriented Architecture Modeling Language (SoaML), Version 1 – Beta 2”, 2009, http://www.omg.org/spec/SoaML/

[REF-3] Dunnavant, Todd, Johnston, Gary, “Design and develop a more effective SOA, Part 1: Introducing IBM’s integrated capabilities for designing and building a better SOA.” IBM developerWorks, 2011, http://www.ibm.com/developerworks/webservices/library/ws-designsoapart1/

[REF-4] Ackerman, Lee, Gonzalez, Celso, “Patterns-Based Engineering: Successfully Delivering Solutions via Patterns”, Addison-Wesley, July 2010, http://patternsbasedengineering.net/

[REF-5] Ackerman, Lee, Lexvold, Randy, “Designing Solutions using the SOAD Model Accelerator”


Acknowledgements

I wish to thank the following individuals who reviewed and critiqued my article: Todd Dunnavant, Ph.D., P.Eng. Principal Solution Architect, IBM and Randy Lexvold, CEO, The Emphasys Group.