Toufic Boubez


Biography

Dr. Toufic Boubez is a well-respected SOA and Web services pioneer and co-author of the SOA Manifesto. He is a Certified SOA Architect and Security Specialist, as well as a consultant and Certified SOA Trainer for SOA Systems Inc. He is the founder of SOA Craftworks and the founder and CTO of Layer 7 Technologies, one of the most successful vendors in SOA Governance and Security. Prior to Layer 7, he was the Chief Architect for Web Services at IBM's Software Group, and the Chief Architect for the IBM Web Services tools. At IBM, he founded the first SOA team and drove IBM's early XML and Web Services strategies. As part of his early SOA activities, he co-authored the original UDDI specification, and co-authored a service description language that was a precursor to WSDL. His current activities span SOA Security, SOA Governance and the impact of Cloud Computing.

Toufic is a sought-after presenter and has chaired many XML and Web services conferences, including XML-One and WebServices-One. He has also been actively involved with various standards organizations such as OASIS, W3C and WS-I. He was the co-editor of the W3C WS-Policy specification, and the co-author of the OASIS WS-Trust, WS-SecureConversation, and WS-Federation specifications. He has also participated on the OASIS WS-Security, SAML and UDDI Technical Committees. He is the author of many publications and several books, including "Building Web Services with Java" and the upcoming titles "SOA Governance" and "SOA Security: Practices, Patterns, and Technologies for Securing Services". InfoWorld named him to its "Ones to Watch" list in 2002, and CRN named him a Technology Innovator for 2004. Dr. Boubez holds a Master of Electrical Engineering degree from McGill University and a Ph.D. in Biomedical Engineering from Rutgers University.




Service Portfolio Management - Part IV

Published: April 15, 2011 • SOA Magazine Issue XLIX
 

This is the fourth part of a four-part article series. The first three parts were published as Service Portfolio Management - Part I, Part II and Part III.


Introduction

SOA governance should ensure that there is an appropriate process in place by which services described by the service model become candidates to enter the Service Portfolio. Not all services in the business service model can be realized in the form of IT solutions, so if our intended use of the Service Portfolio is to drive IT development planning, we must first decide which services are potentially realizable and which services are not.

The current range of technologies available includes:

  • Executable Business Process Models - BPEL allows business processes to be modeled with enough precision and integrity that they can be physically executed by a software component called a business process execution engine. A process modeler can define process logic, workflows and decision points, invoke other business processes (either manual or automated), interact with other systems or humans, and invoke other services (for example, Web services).
  • Wrapped Legacy Systems - Existing legacy IT systems can often be ‘wrapped’: hidden behind a technology layer that exposes some or all of their functions or workflow management activities as explicit services, available for use by other systems or individuals. However, unless the legacy system has a modular design or directly exposes supported APIs, wrapping may involve directly accessing the legacy databases it manages; this is risky, both because it bypasses the application’s control of the data and because the wrapper may fail if the legacy system undergoes maintenance. In such cases, legacy wrapping should be seen as a temporary expedient for a legacy system that is due for imminent replacement.
  • Request-Response Services - Many services, particularly entity and utility services, can be delivered in a single operation where a service request is immediately followed by a matching response. These request-response or transactional services, each representing a single independent logical unit of work, are frequently implemented using Web services technology (a minimal sketch follows Figure 1).
  • Conversational Services - These are the opposite of transactional services. They involve a session consisting of a series of requests and responses between two parties or systems, with each party responsible for maintaining context information as the session proceeds. A shopping cart service within an Internet shopping application is a good example of such a conversational service.
  • Compound or Orchestrated Services - A compound service exposes a simple high-level service interface to the service requestor but “under the covers” invokes a series of smaller service requests to perform the necessary actions. Price comparison websites that collect a series of comparative prices from multiple alternative suppliers are an example of the use of such services.
  • Manual Services - Many common business processes contain tasks or decision points that are so complex or critical that they currently need to be performed manually. However, it is a best practice to embed manual services or tasks within larger processes that are controlled and monitored by an automated workflow engine, to ensure there are no gaps in the overall end-to-end processes.


Figure 1 - Different types of service technology.
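
The following sketch illustrates the request-response style using standard JAX-WS annotations. It assumes a Java runtime that still bundles JAX-WS (Java 8 or earlier) or an equivalent JAX-WS implementation on the classpath; the service name, operation and returned data are illustrative, not taken from the case study.

    import javax.jws.WebMethod;
    import javax.jws.WebService;
    import javax.xml.ws.Endpoint;

    // A minimal request-response (transactional) entity service: one request,
    // one matching response, and no session state held between calls.
    @WebService
    public class ProductInfoService {

        @WebMethod
        public String getProduct(String productId) {
            // A real service would query an inventory data store here; a fixed
            // record keeps the sketch self-contained.
            return "id=" + productId + "; description=Plywood sheet; onHand=120";
        }

        public static void main(String[] args) {
            // Publish the service at a local endpoint so it can be invoked by a
            // SOAP client or inspected via its generated WSDL.
            Endpoint.publish("http://localhost:8080/products", new ProductInfoService());
        }
    }

Because each call is an independent unit of work, a consumer needs nothing more than the generated WSDL contract to use such a service.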

An experienced solution analyst should easily be able to categorize each service in the service model into one of these service technology types. As a general rule of thumb, the relative degree of effort required to create each of these service types, in decreasing order, is:

  1. Fully Automated Process - The degree of effort required varies, of course, with the complexity of the process being automated, and it rises rapidly as that complexity increases.
  2. Partly Automated Process - These often represent a good practical compromise. The process execution engine manages overall workflow and performs some of the simpler activities but delegates the more complex tasks or decision points to a suitably empowered person.
  3. Conversational Service - This is the kind of interface that applications expose to their end users. Purpose-built applications certainly have a part to play in creating an overall SOA solution but are best used to support groups of users who spend a significant amount of their day performing specialist tasks, such as call center or customer support staff.
  4. Wrapped Legacy System - An enormously useful approach for integrating and coordinating disparate IT systems. The degree of effort required depends on the technology and design of the individual application. Extracting data managed by an application (without changing it) is generally fairly straightforward. Executing individual functions embedded within an application is harder. Modifying the workflow management of a large application is often impossible.
  5. Compound or Orchestrated Services - The overall effort of developing a compound service naturally includes the effort needed to create any component services it invokes that do not already exist.
  6. Transactional Services - The development costs depend on the complexity of the service function. Entity services that simply return the attributes of an entity, for example, are especially simple to develop.
  7. Manual Services - These may have no development cost to IT, but the overall cost to the organization (including training costs, salary and overheads) may be high. Business analysts should create investment cases for automating manual processes, weighing the costs of performing that process manually against the development and operational cost of creating automated replacements.

Prioritizing Candidate Services

As the last step in the governance of the Service Portfolio process, a rough prioritization should be made by comparing the relative business value of each service in the service model with its likely development cost.

A good way to do this is to run a combined business and IT workshop that creates a 4-quadrant chart comparing the business value and estimated development effort of each candidate service (a small classification sketch follows the quadrant descriptions below). The potential for reuse should be a factor in determining a service's business value. Even though these two factors are somewhat subjective, the author’s experience has shown this to be an effective method of prioritizing candidate services, especially when performed by a group of business and technical professionals.



Figure 2 - Service business value vs. development effort 4-quadrant chart.

The quadrants have the following priority:

  1. The upper left-hand quadrant contains services that are expensive to create but offer low business value. These are probably not worth automating.
  2. The upper right-hand quadrant contains services of significant business value but that are relatively difficult to produce. These should be analyzed individually to determine their individual business cases or to look for partial automation solutions that would provide partial business benefits at reasonable cost. As technology matures, services in this quadrant should gradually move to quadrant 4.
  3. The lower left-hand quadrant contains services that are relatively easy to automate but offer relatively low business value. Services positioned towards the left-hand edge are probably not worth automating, but services towards the right-hand edge of this quadrant are worth considering.
  4. The lower right-hand quadrant shows those services that offer high business value and at the same time have relatively low development costs. If there were no other considerations, these would take the highest priority to be developed.
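
As a rough sketch of how the quadrant assignment can be recorded, the code below assumes each candidate service receives two subjective workshop scores from 1 to 10; the threshold of 5 that splits the chart and the example candidates are illustrative assumptions rather than figures from this article.

    import java.util.Arrays;
    import java.util.List;

    // Buckets candidate services into the four quadrants of Figure 2, using
    // the subjective 1-to-10 workshop scores for business value and effort.
    public class QuadrantChart {

        static class Candidate {
            final String name;
            final int businessValue;      // 1 (low) to 10 (high)
            final int developmentEffort;  // 1 (low) to 10 (high)

            Candidate(String name, int businessValue, int developmentEffort) {
                this.name = name;
                this.businessValue = businessValue;
                this.developmentEffort = developmentEffort;
            }
        }

        // Quadrant numbering follows the priority list above: 1 = high effort,
        // low value; 2 = high effort, high value; 3 = low effort, low value;
        // 4 = low effort, high value. The threshold of 5 is an assumption.
        static int quadrant(Candidate c) {
            boolean highValue = c.businessValue > 5;
            boolean highEffort = c.developmentEffort > 5;
            if (highEffort) {
                return highValue ? 2 : 1;
            }
            return highValue ? 4 : 3;
        }

        public static void main(String[] args) {
            List<Candidate> candidates = Arrays.asList(
                    new Candidate("Reserve future inventory", 8, 3),
                    new Candidate("Automate product planning", 6, 9),
                    new Candidate("Report waste by-products", 3, 2));
            for (Candidate c : candidates) {
                System.out.println("Quadrant " + quadrant(c) + ": " + c.name);
            }
        }
    }

The scores remain subjective; the value of the exercise lies in the shared ranking the mixed business and IT group arrives at, not in the precision of the numbers.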

Defining Service Deployment Units

The 4-quadrant chart approach described would create a reasonable service development priority if all these services were independent of each other, but in practice that is rarely the case, as real business solutions typically involve interactions between multiple business services.

An associated group of services that are delivered together and that provide a useful and usable solution to a specific business need is termed a service deployment unit. Defining these deployment units should again be carried out by a team that combines both IT and business skills. Generally, the workshop session that creates the 4-quadrant chart should also be responsible for defining these service deployment units. Typically, practical service deployment units involve combinations of services from multiple business domains. Deployment units consisting of all services within a single business domain are unlikely to represent the best use of IT resources.

It is possible that the workshop will define multiple sets of deployment units with varying degrees of overlap of the services they contain. In this case, the team should make a decision between prioritization based on time (a deployment unit that requires minimum development effort), based on business impact (a deployment unit that delivers maximum business value) or some compromise between the two, generally involving multiple phased deployment units.
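
Where candidate deployment units overlap, totalling the same workshop scores per unit makes the trade-off easier to discuss. The sketch below reuses the 1-to-10 scoring assumption from the previous example; the release names and member scores are hypothetical.

    import java.util.Arrays;
    import java.util.List;

    // Totals the workshop scores for two overlapping candidate deployment
    // units so the time-versus-business-impact trade-off can be compared.
    public class DeploymentUnitTradeOff {

        // Each member service contributes {businessValue, developmentEffort}.
        static int[] totals(List<int[]> memberScores) {
            int value = 0, effort = 0;
            for (int[] s : memberScores) {
                value += s[0];
                effort += s[1];
            }
            return new int[] { value, effort };
        }

        public static void main(String[] args) {
            List<int[]> minimalRelease = Arrays.asList(new int[] {8, 3}, new int[] {7, 4});
            List<int[]> fullRelease    = Arrays.asList(new int[] {8, 3}, new int[] {7, 4}, new int[] {9, 8});

            System.out.println("Minimal release value/effort: " + Arrays.toString(totals(minimalRelease)));
            System.out.println("Full release value/effort:    " + Arrays.toString(totals(fullRelease)));
        }
    }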


NOTE

The creation of a formal service deployment unit defines an end-date by which all constituent services have to be deployed to production but does not imply that all of those services are deployed together on that date. There is value in incrementally deploying individual services as they pass certification testing even if they are not yet in active use.



Case Study Example

The Business Analyst and solution architect started by doing some bottom-up analysis to get a better idea of the ERP package functionality originally deployed at Tri-Fold and now being implemented at Alleywood. They were pleasantly surprised to find that:

  • The ERP package already had the capability to reserve future inventory against an outstanding order. It turned out that Alleywood’s IT department hadn’t implemented a service to support this, because a business need for it was not recognized.
  • The new ERP system exposed an application programming interface (API) that made the task of exposing its functions as services relatively straightforward technically. For example, creating services to add waste products such as sawdust, shavings and bark as new ‘products’ in the inventory would require relatively little effort (a hypothetical wrapper sketch follows this list).
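
As a sketch of the wrapping approach this implies, the code below delegates a single operation to a hypothetical ERP client interface. The ErpClient type and its createInventoryItem method are invented stand-ins for whatever API the package actually exposes; the point is only the shape of the wrapper, a thin service layer that never touches the ERP database directly. In practice it would be published as a Web service in the same way as the earlier request-response sketch.

    // Hypothetical client interface for the ERP package's supported API; a real
    // implementation would call that API rather than the ERP database.
    interface ErpClient {
        String createInventoryItem(String sku, String description);
    }

    // Thin wrapper that exposes one ERP function as an explicit service, for
    // example registering a waste by-product as a new inventory item.
    public class ByProductRegistrationService {

        private final ErpClient erp;

        public ByProductRegistrationService(ErpClient erp) {
            this.erp = erp;
        }

        public String registerByProduct(String sku, String description) {
            // Validation, auditing or access control could be added here
            // before delegating to the ERP package.
            return erp.createInventoryItem(sku, description);
        }

        public static void main(String[] args) {
            // Stub ERP client so the sketch can run without the real package.
            ErpClient stub = (sku, description) -> "created " + sku + " (" + description + ")";
            System.out.println(new ByProductRegistrationService(stub)
                    .registerByProduct("SAWDUST-01", "Sawdust, bulk"));
        }
    }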

Armed with this information, the SGPO, Business Analyst and solution architect met with McPherson's product managers in a workshop session, where they developed a 4-quadrant business value vs. development effort chart. Any proposed services that had little or no business benefit were immediately discarded, irrespective of their ease of development.

The workshop attendees next moved on to the issue of defining the optimum set of service deployment units and, after some discussion, agreed on three separate releases:

  1. A high-priority release containing the bare minimum set of services needed to implement a basic federated inventory system for the product managers.
  2. A maintenance release to expand the functionality of the federated inventory service.
  3. A final release to automate monitoring inventory levels that would automatically request shipment of inventory items between subsidiaries to optimize the geographic location of stock.

The team agreed that the product planning processes were probably too complex to automate at this stage.



Figure 3 – McPherson Corp. federated product inventory service deployment units.

The McPherson product managers were delighted with this outcome. The paper products manager called the CIO to ensure these services were given maximum development priority. During that call, he said that the workshop was “the first time that I’ve ever seen IT show any signs of being more interested in our real problems than in finding excuses to spend yet more on technology”.

The only sour note came from Alleywood’s security manager who was horrified at the prospect of “putting all of our commercially sensitive information out on the Web for all to see”. The SGPO called him to reassure him that data security was one of their absolute priorities and that no system would be deployed that posed any risk whatsoever to data security.


The Service Lifecycle

Once agreed and formally approved, the deployment units define the initial contents of the Service Portfolio, and the process of Service Portfolio management is now well under way. The rest of the art of effective Service Portfolio management consists of growing and maintaining the Service Portfolio by:

  • adding new candidate services for additional business domains
  • prioritizing the development of services across multiple business domains
  • managing the services in the portfolio through their entire lifecycles of design, development, testing, deployment, versioning and retirement

This article has already described the process for achieving the first two of these objectives. This final section outlines how Service Portfolio management handles the lifecycle of individual services.

The life of a service, at least as an IT asset, begins once it has been added to the Service Portfolio and assigned to a service deployment unit. However, each service has to pass through several distinct phases before it can be physically deployed in a production environment. These activities group naturally into the following phases (a small phase-ordering sketch follows the list):

  1. Analysis Phase - In this phase, Business Analysts gather detailed functional requirements from all potential consumers of each new service. In parallel, service architects specify a set of technical requirements covering the required levels of performance, security and availability. These technical requirements are also known as non-functional requirements.
  2. Design Phase - Once the basic requirements have been confirmed, the service designer can start to create a design for the service. Depending on the organization’s preferred development approach, the gathering of detailed requirements may overlap the design process to a lesser or greater extent, ranging from a waterfall approach (no design begins until all requirements are finalized) to an extreme programming approach (where analysts and developers work together to develop a solution in daily increments). The technology of model-driven design tools that assist the service design process is evolving rapidly; today, most service designs take the form of models rather than paper documentation.
  3. Development Phase - In this phase the design is realized by transforming it into executable code. Again, model-driven design and development tools are increasingly automating this process, but some physical code development is often still needed.
  4. Testing Phase - Once the development of the service is complete, thorough testing is needed to ensure that the service fully meets all of its functional and non-functional requirements. If the service fails any single test, it is returned to the development phase for correction.
  5. Deployment Phase - In this phase the service is migrated into a production-like environment, where IT operations can verify their ability to monitor and manage the service within the agreed performance and reliability limits.
  6. Operational Phase - This is the most important and longest phase in the service lifecycle, where the service is active and providing business value. During this phase, the service may require maintenance if any technical or operational problems are found. Changing business requirements may lead to services requiring new versions with enhanced functionality; in this case a new version of the service should be created and passed through all five of the preceding phases, just as for any other service. IT operations staff should continuously monitor the performance and usage of all operational services.
  7. Deprecation Phase - Best practice requires old service versions to be deprecated: they remain available to existing consumers (so that consumers who do not need the enhanced functionality are not forced to change the systems that use the old version), while new service consumers are directed to the new version. Governing this area is especially complex.
  8. Retirement Phase - The final resting place for inactive services. Once the IT operations department has verified that a deprecated service has no current consumers, it can be discontinued without risk. Logically, however, it still remains within the Service Portfolio.
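
A minimal sketch of this ordered progression follows, modelling the phases as a Java enum whose declaration order is the only allowed forward path. The promotion check is a simplification of the review gates discussed below, and the class and method names are illustrative.

    // Lifecycle phases in the order a service passes through them.
    enum LifecyclePhase {
        ANALYSIS, DESIGN, DEVELOPMENT, TESTING, DEPLOYMENT,
        OPERATIONAL, DEPRECATED, RETIRED;

        // The next phase in the progression, or null once the service is retired.
        LifecyclePhase next() {
            LifecyclePhase[] phases = values();
            return ordinal() + 1 < phases.length ? phases[ordinal() + 1] : null;
        }
    }

    public class ServiceLifecycle {

        private final String serviceName;
        private LifecyclePhase phase = LifecyclePhase.ANALYSIS;

        public ServiceLifecycle(String serviceName) {
            this.serviceName = serviceName;
        }

        // Promote only when the governance review for the current phase has
        // passed; otherwise the service stays where it is (e.g. back with the
        // developers for correction).
        public void promote(boolean reviewPassed) {
            if (reviewPassed && phase.next() != null) {
                phase = phase.next();
            }
            System.out.println(serviceName + " is now in phase " + phase);
        }

        public static void main(String[] args) {
            ServiceLifecycle service = new ServiceLifecycle("Inventory query");
            service.promote(true);   // analysis reviewed -> design
            service.promote(false);  // design review failed -> stays in design
            service.promote(true);   // design reviewed -> development
        }
    }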

Activities in each of these phases have to be coordinated and performed to an acceptable quality to minimize unnecessary corrective re-work. Unfortunately, it is normal human behavior to misunderstand or re-interpret instructions, take shortcuts or add unnecessary embellishments, so some degree of governance is vital if each service is to progress between these phases as smoothly as a product moves between manufacturing steps on a production line. Good governance should also ensure that managing requirements changes and resolving any issues or omissions interrupt the service lifecycle as little as possible.

The most effective approach to governance is to review the quality and completeness of outputs from each individual lifecycle phase before promoting the service to the next phase, since the longer the time before an error or omission is noticed, the higher its impact and the cost of correction.

These reviews do not need to be overly bureaucratic processes, nor do they need to be a ‘witch hunt’ that publicly criticizes the competence of service analysts, designers and developers. However, governance mechanisms do need to be effective and actionable, so there may be an occasional need for punitive measures to deter repeat offenders.

The review governance activities we would recommend, in addition to the regular Service Portfolio prioritization sessions described above, are as follows (a small gating sketch follows this list):

  1. Requirements Reviews - that validate that the functional and non-functional requirements of the service have been adequately captured and specified to the appropriate level of detail to represent a consensus between all service stakeholders. For a given service, such a review should take place as soon as the Business Analyst and service modeler believe that the detailed functional and non-functional requirements have been defined to a level where the service can be designed and developed.
  2. Design Reviews - that validate that the design documentation or models accurately match these functional and non-functional requirements. For a given service, this review should take place as soon as the service designer declares that the design is complete.
  3. Code Reviews - that determine that the service executable material matches the design specification. For a given service, this review should take place as soon as the service developer declares that the code is complete and has successfully passed unit testing.
  4. Acceptance Reviews - that determine that the functional and non-functional requirements of the service have been rigorously and successfully tested and that IT operations has confirmed that the service is ready to be deployed to a production or pre-production environment. This review should be triggered once the IT operations group completes formal acceptance testing.
  5. Certification Reviews - that confirm that the service is now fully operational and is available for use by approved consumers. The Service Portfolio needs to be updated to reflect which services have now become operational. This review should be triggered once all services in a deployment unit have passed acceptance testing.
  6. Regular Vitality Reviews - that confirm that all operational services continue to meet their performance commitments and determine whether any services need to be replaced by new versions, deprecated or discontinued.
Generally, reviews should not be planned as single events, since feedback from a first review usually requires revisions. In practice, it is better to schedule regular review sessions, each of which reviews the status of multiple services, so that each service effectively has a draft review, one or more iteration reviews, and a final acceptance review at each stage.
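
One way to make these gates explicit is to record which review must pass before a service may leave each phase. The pairing below is an illustrative reading of the list above (a requirements review gates the move out of analysis, a design review gates the move out of design, and so on), not a table defined by the article.

    import java.util.Map;
    import java.util.Set;

    // Checks whether a service may be promoted out of its current phase:
    // promotion requires the review that gates that phase to have passed.
    public class ReviewGate {

        // Review that must pass before a service may leave each phase.
        private static final Map<String, String> GATE_FOR_PHASE = Map.of(
                "Analysis",    "Requirements review",
                "Design",      "Design review",
                "Development", "Code review",
                "Testing",     "Acceptance review",
                "Deployment",  "Certification review");

        static boolean mayPromote(String currentPhase, Set<String> passedReviews) {
            String requiredReview = GATE_FOR_PHASE.get(currentPhase);
            return requiredReview != null && passedReviews.contains(requiredReview);
        }

        public static void main(String[] args) {
            Set<String> passed = Set.of("Requirements review", "Design review");
            System.out.println(mayPromote("Design", passed));      // true
            System.out.println(mayPromote("Development", passed)); // false, no code review yet
        }
    }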


Figure 4 - Service Lifecycle Phases and Governance Reviews.

Conclusion

  1. The implementation of SOA across an organization must be driven by the business needs of that organization, not by the availability of technology.
  2. The creation of a business ‘heat map’ that defines the capabilities of each business operating unit of the organization and highlights those specific capabilities that need enhancement is an excellent means of documenting both an organization’s structure and business priorities.
  3. Using the business capabilities prioritized by the heat map, experienced business modelers should be able to identify business stakeholders and translate their individual needs into new candidate conceptual business services fairly quickly.
  4. These conceptual business services can be mapped against key corporate objectives to determine an initial priority order. Agnostic (reusable) services should generally be given higher priority than non-agnostic services.
  5. Services within the portfolio should be grouped into deployment units that collectively solve a specific business problem.
  6. Understanding the role of the Service Portfolio in the overall SOA journey is important to realizing the strategic benefits of an SOA.
  7. Services in the portfolio pass through an ordered progression of phases on their way to becoming operational and eventually retired. Instituting reviews during the service lifecycle that govern the transition of services between each of these phases can provide effective practical governance of the Service Portfolio.