Philip Wik


Biography

Philip Wik is a Database Administrator for Redflex. Philip has worked for JP Morgan/Chase, Wells Fargo, American Express, Honeywell, Boeing, Intel, and other companies in a variety of applications development, integration, and architectural roles. He has published two books through Prentice-Hall: How to Do Business With the People's Republic of China and How to Buy and Manage Income Property.


Confronting SOA's Four Horsemen of the Apocalypse

Published: June 17, 2011 • Service Technology Magazine Issue LI
 

Abstract: SOA's Four Horsemen of the Apocalypse represent domains that can bring failure to an enterprise's SOA efforts: security, performance, change management, and testing. Governance must bridge the gap between the promise of SOA and its realization. This article provides a structure of analysis that can help address each of these four areas.


Figure 1 – Albrecht Dürer's “Apocalypse”, 1498
Wikimedia Commons

Introduction

Outlined against a blue-gray October sky, the Four Horsemen rode again. In dramatic lore their names are Death, Destruction, Pestilence, and Famine. However, those names are their aliases. Their real names are: Stuhldreher, Crowley, Miller and Layden. They formed the crest of the South Bend cyclone before which another fighting Army team was swept over the precipice at the Polo Grounds this afternoon as 55,000 spectators peered down upon the bewildering panorama spread out upon the green plain below.

Grantland Rice, a sportswriter for the former New York Herald Tribune, linked this Biblical image of terror to football players Harry Stuhldreher, Don Miller, Jim Crowley, and Elmer Layden after Notre Dame's 13-7 upset victory over Army in October 1924. The four horsemen of the Book of Revelation have been a source of wonder for mystics and others. “And I looked, and behold a pale horse: and his name that sat on him was Death, and Hell followed with him. And power was given unto them over the fourth part of the earth, to kill with sword, and with hunger, and with death, and with the beasts of the earth” [REF-1]. Interpretations differ as to what these horsemen represent, but all agree that we would not welcome the visitation of these macabre figures. Terror comes in different ways to those who try to implement an enterprise-wide SOA. The road to waste is sometimes so subtle that we scarcely know we have failed until costly hindsight reveals it. At other times, disaster comes in a horrific gallop. Those failures are often failures of understanding in security, performance, change management, and testing. These are SOA's Four Horsemen of the Apocalypse.


Outline

  1. SOA governance (SOA-G) must bridge the gap between the promise of SOA and its realization.
  2. A disciplined and integrated approach is necessary for SOA-G.
  3. The remainder of this article offers fewer answers than questions. These questions can provide a framework for discussion, analysis, and policy relating to performance/optimization, security, change/configuration management, and quality assurance/testing.

I chose these areas because they are typically breeding grounds of systems instability and customer dissatisfaction. Mastery in these areas builds a context for SOA success. Enterprises that build SOA on these strong pillars create an ecology for excellence. Of the four domains that we will consider, security is the most critical because it directly impacts our customers and corporate financials and speaks to the credibility of our SOA.


From Promise to Realization

What does the implementation of a contemporary service-oriented architecture guarantee for the corporation? The correct answer is nothing.

As I concluded in an earlier article, “the promise of SOA is increased business responsiveness, broader service offerings, reduced risk, reduced time to market and vendor dependence, and increased flexibility, innovation, reliability, scalability, and internationalization” [REF-2]. But that is all it is: a promise. As Anne Thomas Manes notes: “Once thought to be the savior of IT, SOA instead turned into a great failed experiment at least for most organizations. Except in rare situations, SOA has failed to deliver its promised benefits” [REF-3]. Much depends on the quality and relevance of the SOA and its support across the enterprise. At the top of the list of killers for any successful SOA are organizational contradictions that grow into dysfunctional fiefdoms and a culture of inertia. Once we slash the bureaucratic underbrush, generally through leadership and human resources decisions, we can tackle the more technical problems.


A Disciplined and Integrated Approach

Governance is critical for a successful enterprise SOA. “Governance,” IBM says, “means enforcing how people and solutions work together to achieve organizational objectives. This focus on putting controls in place distinguishes governance from day-to-day management activities” [REF-4].

We must view rules that relate to the definition, granularity, composition, reuse, and orchestration of services holistically. For example, security rules inform optimization rules, while testing and change management also integrate with each other and with security and optimization.


Figure 2 – The SOA-G Context

We could also add other SOA-G categories. These include planning new or reusing current services and integrating services into infrastructure, such as enterprise or federated buses, metadata repositories, and service registries. However, security, performance, change management, and testing seem to define the most vexing SOA issues.

The what and the why of SOA-G are much debated. However, the other SOA-G questions are not so clearly understood: Where do these rules come from? Who decides them? How are such policies enforced?

The best practices for the principles of a SOA-G committee include the following, with the caveat that SOA-G will vary in accordance with a company's leadership, culture, size, lines of business, life-cycle maturity, and other factors. The Open Group's SOA Governance Framework Technical Standard or the OASIS Reference Architecture Foundation for Service-Oriented Architecture can help define the structure and execution of SOA-G.

  1. Authority: It is on this principle that a SOA succeeds or fails. The ability to make clean decisions must derive its legitimacy from the corporation's strategic goals and financial resources. It must also be grounded in the will of the leadership as it reflects the desires of the corporation's shareholders and stakeholders, who in turn mirror and amplify marketplace pressures. Authority must remain uncompromised by other internal or external interests or considerations. It must be responsive to market changes and new funding models. A triad of finances, politics, and technology shapes SOA decision-making, but top-down intentionality must integrate these factors. Grass-roots governance is an oxymoron; furthermore, it cedes policy into the hands of the most reactionary elements of the corporation, no matter how well-meaning, talented, and experienced they may be. Authority is meaningless unless there are ways to enforce it. A few iron laws about which there is no ambiguity are preferable to a plethora of rules that are subject to ranges of interpretation and compliance.
  2. Accountability: No accountability, weak accountability, the wrong accountability, or diffused accountability undermines authority. While taking into consideration as many relevant voices as possible, the source of final accountability should rest with one or two people at most from SOA-G. A committee in itself cannot enforce governance. Enforcement ultimately rests in the clarity and rationality of an enterprise's vision for its SOA as it is articulated by the leadership.
  3. Size: Broad enough to represent major business and technology domains, but small enough so that nimble decisions can be made, each new decision logically building on prior decisions.
  4. Membership: Ranking business and technical stakeholders and thought leaders, while excluding those who are unaligned with the enterprise's SOA goals. Membership should be weighted toward business stakeholders to combat the notion that SOA-G is fundamentally an information technology function. But how do we resolve the paradox that SOA will largely succeed or fail for technical reasons while populating a SOA-G with non-technical people? The answer is that SOA-G members need to rise above their job descriptions to mirror the interoperable, dynamically bound, and location-transparent nature of the enterprise's service-oriented architecture. It means that business people will need to understand service operations and support, while technical people need to embrace service transformation planning, funding, business vision, and IT alignment. At the end of the day, silos are not departments but psychology, and SOA is all about creatively destroying such silos.
  5. Velocity: Having a bias toward timeliness, with agendas, milestones, and assignments. Meetings should be frequent enough to ensure deliberative momentum.
  6. Flexibility: Openness to technical serendipity.
  7. Communication: Up but also out and in as well. Evangelizing SOA strategy to the rank and file is essential in that it builds support and provides an early reality check.
  8. Transparency: Identifying and communicating obstacles early and accurately to all SOA-G team members. At times, though, withholding transparency from the wider enterprise, presenting a fait accompli, is necessary to get results.
  9. Execution: SOA-G meetings can be open-ended and discursive, with issues that often remain hanging as the clock runs out. Our goal is to reduce an issue to a one-page case; no more than that is needed as a catalyst for discussion. A case, no matter how carefully we prepare it, has little value until it has gone through the refining fires of disputation. Disputation has a way of releasing value from within a corporation. This dialectical approach exposes gaps in knowledge and flaws in understanding, reveals new relationships and opportunities, and also requires from all participants a certain toughness to absorb and articulate critiques of this kind.
  10. Continual Improvement: Absorbing lessons learned for all SOA related initiatives on a never satisfied, never ending basis. These lessons eventually generalize into rules.

The beginning of wisdom is to question. Here, then, are five questions in each category, with the back story for those questions, to help provide guidance for building a plan for a robust SOA. I have chosen these questions with the view that they should trigger yet more questions, digging ever deeper until we understand.


Security

Security baked into legacy applications may no longer suffice when an application exposes its capabilities as services that can be used by other applications. The ongoing evolution of standards and languages must accommodate encryption layers, digital signing, and services that are consumed outside of the trust domain. Single sign-on and message-level end-to-end security must protect data in transit and at rest. “Message-level security can clearly become a core component of service-oriented solutions,” Thomas Erl notes. “Security measures can be layered over any message transmissions to either protect the message content or the message recipient. The WS-Security framework and its accompanying specifications therefore fulfill the QoS (quality of service) requirements that enable enterprises to utilize service-oriented solutions for the processing of sensitive and private data (and) restrict service access as required” [REF-5].
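Message-level protection of this kind can be illustrated with a small sketch. The example below is not WS-Security itself; it uses a plain HMAC over the message body, with a hypothetical shared key, to show the idea of a detached signature that travels with the message and lets the recipient detect tampering in transit:

```python
import hashlib
import hmac

SHARED_KEY = b"demo-key"  # hypothetical secret shared by service and consumer


def sign_message(body: str, key: bytes = SHARED_KEY) -> dict:
    """Attach a detached signature so intermediaries cannot alter the body unnoticed."""
    sig = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "signature": sig}


def verify_message(msg: dict, key: bytes = SHARED_KEY) -> bool:
    """Recompute the signature and compare in constant time."""
    expected = hmac.new(key, msg["body"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, msg["signature"])


msg = sign_message("<credit account='42' amount='10.00'/>")
assert verify_message(msg)

# An intermediary that alters the amount invalidates the signature.
tampered = dict(msg, body=msg["body"].replace("10.00", "99.00"))
assert not verify_message(tampered)
```

Real message-level security would of course use the WS-Security XML constructs and certificate-based keys rather than a shared secret, but the verification contract is the same.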

We can contextualize security in yellow in this SOA-G reference capabilities diagram. The tiers are categories within a SOA governance architecture. The four domains integrate with one another and with all tiers, but each dominates a part of the SOA governance map, as suggested below.


Figure 3 – SOA Reference Architecture and Security

1 What emerging technologies and standards address our SOA security?

For Web services, these include the WS-* specifications. Virtual Organization in Grid Computing, Application-Oriented Networking (AON), and XML gateways also enforce identity and security for SOAP, XML, and RESTful services. Gateway security features include PKI, digital signatures, encryption, XML Schema validation, and pattern recognition. Also consider Java Business Integration (JBI) and the Data Distribution Service (DDS), which do not depend on remote procedure calls (RPC) or translations through XML. The WS-I Basic Security Profile (BSP) addresses interoperability and is designed to support the addition of security functionality to SOAP messaging. One example of such functionality is the confidentiality of selected SOAP header blocks and SOAP body elements and content through the use of OASIS Web Services Security encryption.
2 How do we authenticate and authorize SOA users outside of our enterprise?

One SOA solution for maintaining trust across teams as services are created is federated authentication, in which multiple parties agree that a set of criteria can authenticate a set of users by creating a Security Assertion Markup Language (SAML) assertion. We should also develop an understanding of wrapping legacy applications and of managing service metadata.
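A minimal sketch of the federated idea, with hypothetical issuer names and shared secrets standing in for a real SAML identity provider and its certificates: the relying service accepts an assertion only if it was signed by a trusted issuer and has not expired.

```python
import hashlib
import hmac
import time

# Hypothetical federation agreement: issuers we trust and their keys.
TRUSTED_ISSUERS = {"idp.partner.example": b"partner-secret"}


def issue_assertion(issuer: str, subject: str, key: bytes, ttl: int = 300) -> str:
    """The identity provider asserts who the subject is, with an expiry."""
    expires = int(time.time()) + ttl
    payload = f"{issuer}|{subject}|{expires}"
    sig = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"


def accept_assertion(token: str):
    """The relying service checks issuer, signature, and expiry; returns the subject or None."""
    issuer, subject, expires, sig = token.split("|")
    key = TRUSTED_ISSUERS.get(issuer)
    if key is None:
        return None  # issuer outside the federation
    payload = f"{issuer}|{subject}|{expires}"
    good = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(good, sig) or time.time() > int(expires):
        return None
    return subject


token = issue_assertion("idp.partner.example", "alice", b"partner-secret")
assert accept_assertion(token) == "alice"
```

A production deployment would exchange signed XML assertions and X.509 certificates, but the trust decision, believing a third party's statement about a user, is the heart of federation.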
3 How can we protect core systems from malicious intent?

A secure application proxy can receive and respond to all Web services requests, and thereby avoid letting anyone reach the service hosting platform.
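The proxy idea can be sketched as follows. The operation names and size limit are illustrative; the point is that only requests matching the published contract ever reach the hosting platform:

```python
# Hypothetical published contract: the only operations the proxy will forward.
ALLOWED_OPERATIONS = {"getQuote", "listProducts"}


def backend(operation: str, payload: str) -> str:
    """Stand-in for the protected service hosting platform."""
    return f"handled {operation}"


def proxy(operation: str, payload: str) -> str:
    """Receive and vet every request; the backend is never directly reachable."""
    if operation not in ALLOWED_OPERATIONS:
        return "fault: unknown operation"
    if len(payload) > 1024:  # crude message-size guard against oversized payloads
        return "fault: payload too large"
    return backend(operation, payload)


assert proxy("getQuote", "<symbol>IBM</symbol>") == "handled getQuote"
assert proxy("dropTables", "") == "fault: unknown operation"
```

A real proxy would also validate schemas and credentials, but even this skeleton shows the design benefit: hardening lives in one choke point rather than in every service.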
4 How can we protect our SOA against denial of services (DoS) attacks?

We can consider a contract management security solution, utilizing a tool for tracking and managing the operation of Web services. One SOA security solution is to stamp each SOAP message with a unique identifying number to prevent flooding with duplicated requests.
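A sketch of the duplicate-stamping idea, assuming each consumer stamps its messages with a unique identifier. A production cache would be bounded and expiring rather than an unbounded set:

```python
# In production this would be a bounded, expiring cache (and shared across nodes).
seen_ids = set()


def handle(message_id: str, body: str) -> str:
    """Reject any message whose unique identifier has already been processed."""
    if message_id in seen_ids:
        return "fault: duplicate message"
    seen_ids.add(message_id)
    return f"processed {body}"


assert handle("m-1", "order").startswith("processed")
assert handle("m-1", "order") == "fault: duplicate message"  # replay rejected
```

This also doubles as a replay-attack defense: a captured message cannot be resubmitted, because its identifier has already been consumed.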
5 What security specifications can we tailor to our SOA?

Security specifications that can be used as part of SOA include the following: WS-Security, WS-SecurityPolicy, WS-Trust, WS-SecureConversation, WS-Federation, Extensible Access Control Markup Language (XACML), Extensible Rights Markup Language (XrML), XML Key Management Specification (XKMS), XML Signature, XML Encryption, Security Assertion Markup Language (SAML), .NET Passport, Secure Sockets Layer (SSL), and the WS-I Basic Security Profile.


Performance

A service-oriented architecture introduces overhead, especially if it depends on remote procedure calls or translations through XML parsing. Our goal is to optimize reliable SOA operations in a high-transaction, heterogeneous SOA environment.


Figure 4 – SOA Reference Architecture and Performance

1 What are the benchmarks for measuring performance?

Optimization criteria include improved efficiency, cost, quality, performance, time to market, allocation of resources, agility, and business processes. We can accomplish SOA deployment optimization by modifying existing deployments or business processes or by reassembling a service for a new business use. Benchmarks should include services, choreography that contains business processes, and integration. Benchmarks should also include non-Web services, such as messaging transports. We must verify transaction processing: either all operations complete or none do, and each execution must occur exactly once, for example, in a service that credits an account. Industry benchmark standards, which require consensus among multiple vendors, may not exist or may still be evolving.
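The all-or-nothing requirement can be illustrated with a toy transfer operation: if any step fails, the state is rolled back to its snapshot, so either all operations complete or none do. The account names and the in-memory "database" are, of course, illustrative.

```python
def transfer(accounts: dict, frm: str, to: str, amount: int) -> None:
    """Debit one account and credit another atomically: commit both legs or neither."""
    snapshot = dict(accounts)  # stand-in for a database transaction boundary
    try:
        accounts[frm] -= amount
        if accounts[frm] < 0:
            raise ValueError("insufficient funds")
        accounts[to] += amount
    except Exception:
        accounts.clear()
        accounts.update(snapshot)  # roll back to the pre-transaction state
        raise


acc = {"A": 100, "B": 0}
transfer(acc, "A", "B", 60)
assert acc == {"A": 40, "B": 60}

try:
    transfer(acc, "A", "B", 500)  # must fail...
except ValueError:
    pass
assert acc == {"A": 40, "B": 60}  # ...and leave no partial debit behind
```

Benchmarks for a credit or transfer service should assert exactly this invariant under load, not just measure speed.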
2 What vendor solutions are available to support SOA operations?

Given the number of XML parsers, we need to ensure that we select the right parsers as different parsers have different performance and scaling characteristics. Consider open-source parsers. Consider the broadest context in which SOA lives, including not just the bus but the network and infrastructure.
3 How do we test scaling up and scaling out?

We must test for throughput (the number of messages that can be processed within a given time period) and latency (how long a given message takes to travel through the same system). We need tests for the maximum number of supported users and requests and the maximum number of concurrent requests. We also need to test for scaling out, when workloads are executed in multiple places at the same time. For concurrency testing, we should separate the bus from clients and proxy services. The key questions are: Does the application work, is it fast, and can it scale?
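A minimal throughput-and-latency harness over a stub service, illustrating the two measurements. A real test would drive the deployed service over the network and add concurrent clients:

```python
import time


def stub_service(msg: str) -> str:
    """Stand-in for a real service invocation over the bus."""
    return msg.upper()


def benchmark(n: int = 1000):
    """Return (throughput in messages/second, average latency in seconds)."""
    latencies = []
    start = time.perf_counter()
    for i in range(n):
        t0 = time.perf_counter()
        stub_service(f"msg-{i}")
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    throughput = n / elapsed          # how many messages per unit time
    avg_latency = sum(latencies) / n  # how long one message takes
    return throughput, avg_latency


throughput, latency = benchmark(200)
assert throughput > 0 and latency >= 0
```

Note that throughput and latency answer different questions: a system can post excellent per-message latency yet collapse under concurrent load, which is why both must be measured.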
4 Can we identify scalability bottlenecks?

Bottlenecks exist between service components that pass very large messages (VLMs). We can make VLMs as lightweight as possible or replace them with the standardized Service Data Object (SDO) API. Creating and parsing XML will be a performance drag. We need to ensure that components in the bus are not the bottleneck, perhaps by using REST. Bottlenecks typically involve data access or handoffs between services. Coarsening the grain of web method calls, reducing the calls between the SOA client application and the SOA service layer, employing asynchronous web method calls, choosing the right communication protocol, using caching, and storing session state in a distributed cache can all mitigate performance problems.
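As one illustration of the caching mitigation, a memoization cache in front of an expensive data-access call keeps repeated lookups from ever reaching the downstream system. The lookup function here is a hypothetical stand-in:

```python
import functools


@functools.lru_cache(maxsize=1024)
def get_customer(customer_id: int) -> dict:
    """Stand-in for an expensive downstream data-access call."""
    return {"id": customer_id, "tier": "standard"}


get_customer.cache_clear()
a = get_customer(7)   # miss: goes "downstream"
b = get_customer(7)   # hit: served from cache, same object returned
assert a is b
assert get_customer.cache_info().hits == 1
```

In a distributed SOA the same pattern moves into a shared cache tier (so every node benefits), with the added concern of invalidating entries when the underlying data changes.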
5 What about parallel processing?

Investigate software pipelines. [REF-6]


Change Management

Services are interrelated and interdependent because they can be reused in other contexts. A new or changed service could impact other systems that are not under the management of any one team. String and integration tests and deployments require coordination between development teams and umbrella protocols for validating and versioning changes.


Figure 5 – SOA Reference Architecture and Configuration/Change Management

1 Do we trust that changes verified in a QA environment will behave exactly the same in production?

Standards are required to define the state and the agencies that will manage new or acquired services, versions, and scope of impact. A highly dynamic enterprise SOA can give rise to production application issues. One answer is automation through modeling: allowing a management tool to virtualize the architecture so that all changes are visible. A decision needs to be made as to whether we want the QA environment to be an exact replica of production, including its operational data.
2 Do we just "unit test" the service components associated with a unit of change, or do we work with other teams to conduct a full integration test in QA?

The entire virtual enterprise needs to be involved in change management, not just elements of the enterprise.
3 How do we ensure that the QA environment is pristine, and how do we coordinate among multiple teams' project schedules, which are often more different than similar?

Integrated testing is needed to ensure that downstream components in multi-part, distributed transactions are not impacted by service changes.
4 How do we manage the worst-case scenario where a change impacts hundreds of service components?

In a highly interconnected and interdependent environment, we must build use cases that game out a variety of worst-case contingencies, including emergency changes and disaster recovery.
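Gaming out such contingencies starts with knowing the blast radius of a change. A sketch of transitive impact analysis over a hypothetical service-dependency map:

```python
from collections import deque

# Hypothetical map: each service -> the services that consume it.
consumers = {
    "CustomerData": ["Billing", "Portal"],
    "Billing": ["Reporting"],
    "Portal": [],
    "Reporting": [],
}


def impacted_by(changed: str) -> set:
    """Breadth-first walk: every service transitively downstream of the change."""
    seen, queue = set(), deque([changed])
    while queue:
        svc = queue.popleft()
        for consumer in consumers.get(svc, []):
            if consumer not in seen:
                seen.add(consumer)
                queue.append(consumer)
    return seen


# Changing CustomerData ripples through both direct and indirect consumers.
assert impacted_by("CustomerData") == {"Billing", "Portal", "Reporting"}
```

In practice the dependency map would come from the service registry or a management tool rather than being hand-maintained, which is precisely why registries matter for change management.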
5 If change verification and testing in production are allowed, how do we facilitate synthetic transactions so that test cases do not interfere with actual production data?

Synthetic transactions are transactions that serve no business value other than to exercise the code and infrastructure. Such transactions can be passive, creating no residual impact to the system. They can also be active in the sense that the transaction itself is processed and stored within the application. Risk can be contained by creating a penultimate production environment that consists of a small subset of the total database: for example, in a database of ten million customers, a production deployment against 50,000 customers for rolling one-week periods.
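A sketch of the active synthetic-transaction idea: every transaction is tagged at write time, so reporting and reconciliation can filter test traffic out of production data. The record store and field names are illustrative.

```python
def record(store: list, txn: dict, synthetic: bool = False) -> None:
    """Persist a transaction, tagging it at write time as real or synthetic."""
    store.append({**txn, "synthetic": synthetic})


def business_transactions(store: list) -> list:
    """Reports and reconciliation see only real business traffic."""
    return [t for t in store if not t["synthetic"]]


ledger = []
record(ledger, {"id": 1, "amount": 10})                    # real customer order
record(ledger, {"id": 2, "amount": 0}, synthetic=True)     # production smoke test
assert [t["id"] for t in business_transactions(ledger)] == [1]
```

The tag must be applied at the source, by the transaction's creator, not inferred later; once synthetic and real records are indistinguishable, the production data is already interfered with.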


Testing

While there are some sophisticated tools that allow testing in a SOA space, the testing of headless services (including message and database services along with Web services) and managing the data state of idempotent services remain challenging. Testing should be complete: every functional service test should be reused for performance testing. Testing should be collaborative: it should cut across teams and staging directories, and it should be done prior to user interface testing. Producers and consumers should test continuously, with new services tested as they are integrated into existing workflows. We will need to provide additional resources in support of SOA testing, including automated tools, new roles, and ten to twenty percent extra time to test services.


Figure 6 – SOA Reference Architecture and Testing

1 What open source SOA testing tools are available?

We need to consider the impacts of increasingly complex WSDLs, schemas, and message patterns. We will require automated regression testing and automated regression baseline creation, with an XML diff capability. We should look for tools that allow for test harnesses and grant testers structural access to services. One such tool suite is the open-source JUnit framework, a powerful, simple-to-use tool for running automated tests. We need to consider the limitations of such tools and whether those limitations require consideration of vendor solutions. Tools should be able to execute schema, WSDL, and SOAP validation, simulate services, test intermediaries, and test for performance and security. Leading tools include Green Hat GH Tester, soapUI, Crosscheck Networks SOAPSonar, Parasoft, Matador, and iTKO LISA.
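The XML diff capability mentioned above can be approximated with a small canonicalization routine: two documents compare equal if they differ only in attribute order or surrounding whitespace. This is a sketch; a production diff would also handle namespaces and configurable element ordering.

```python
import xml.etree.ElementTree as ET


def canonical(elem):
    """Normalize an element recursively: tag, sorted attributes, trimmed text, children."""
    return (
        elem.tag,
        sorted(elem.attrib.items()),
        (elem.text or "").strip(),
        [canonical(child) for child in elem],  # child order is still significant
    )


def xml_equal(a: str, b: str) -> bool:
    """True if two XML documents are equivalent up to attribute order and whitespace."""
    return canonical(ET.fromstring(a)) == canonical(ET.fromstring(b))


# Attribute order does not matter...
assert xml_equal('<r a="1" b="2"/>', '<r b="2" a="1"/>')
# ...but a changed value is a real regression.
assert not xml_equal('<r a="1"/>', '<r a="2"/>')
```

A regression baseline then becomes a stored set of canonical forms, and each test run diffs current responses against them.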
2 What is a good SOA unit test case?

We need to address how to verify the proper behavior of services and build automated regression test suites. This requires developing customer use cases and event flows, with particular attention to composite applications. We will need to set up test instances that span application domains and also address the physical infrastructure for testing distributed SOA. We will need to construct business integration domains to reduce risk and facilitate incremental component development, builds, and deployment. We need to manage idempotent services and get assurances of the real-world effect of a service deployment: the service provider will need to make sure the service achieves its advertised real-world effect. To validate functionality, we can extend a developer's test harness into functional testing.
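A sketch of one such unit test, in the spirit of the JUnit-style tests discussed above: the toy service keys each credit on a request identifier, and the test asserts that replaying a message has no additional real-world effect. The service and test names are illustrative.

```python
class CreditService:
    """Toy service that applies each credit request at most once (idempotent)."""

    def __init__(self):
        self.balance = 0
        self._applied = set()

    def credit(self, request_id: str, amount: int) -> int:
        # Duplicate deliveries of the same message are silently absorbed.
        if request_id not in self._applied:
            self._applied.add(request_id)
            self.balance += amount
        return self.balance


def test_replayed_credit_has_no_extra_effect():
    svc = CreditService()
    svc.credit("req-1", 100)
    svc.credit("req-1", 100)  # duplicate delivery of the same message
    assert svc.balance == 100


test_replayed_credit_has_no_extra_effect()
```

The hard part in a real SOA is managing the data state behind such tests: each run needs a known starting state, which is why test-data setup and teardown deserve as much design as the assertions themselves.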
3 How do we validate performance?

We need to determine throughput and capacity, verify compliance with service level agreements, identify bottlenecks and potential architectural weaknesses, and assess runtime behavior in terms of consuming and handling message patterns.
4 How do we validate interoperability?

We need to measure the design characteristics of a WSDL or schema service. Challenges include the headless nature of services and understanding what the services do from a business standpoint.
5 How do we validate transport (point to point) and message security?

We need to assess risk posture with regard to vulnerability, data privacy, and data integrity. Security testing involves PKI with signatures and identity tokens, and requires understanding emerging W3C and OASIS standards.


Conclusion

Whether on the gridiron or in the marketplace, there will always be challenges and multiple points of cost and failure. And, for many enterprises that are struggling to realize a SOA, the names of that pain are performance, security, change management, and testing. An antidote is muscular governance that can manage a SOA wisely and intentionally. A disciplined approach to SOA-G can help check SOA's Four Horsemen of the Apocalypse.


References

[REF-1] The Bible, King James Version (1611), Revelation 6:8.
[REF-2] Philip Wik, “Effective Top-Down SOA Management in an Efficient Bottom-up Agile World,” April-May, 2010, SOA Magazine, http://www.soamag.com/I38/0410-1.php
[REF-3] Anne Thomas Manes, “SOA is Dead: Long Live Services”, January, 2009 http://apsblog.burtongroup.com/2009/01/soa-is-dead-long-live-services.html
[REF-4] The Open Group, SOA Governance Framework Technical Standard, 9. http://www.opengroup.org/ . See also OASIS's SOA Reference Architecture. http://docs.oasis-open.org/soa-rm/soa-ra/v1.0/soa-ra-cd-02.pdf
[REF-5] Thomas Erl, Service-Oriented Architecture: Concepts, Technology, and Design, (Prentice-Hall, 2005), 265.
[REF-6] Corey Isaacson, “High Performance SOA With Software Pipelines,” March, 2007, SOA Magazine, http://www.soamag.com/I5/0307-1.php
[REF-7] These questions are from “SOA Change Management Strategies”, Microsoft SoCalArchitectBloc, April, 2008. http://blogs.msdn.com/b/socalarchitect/archive/2008/04/22/soa-change-management-strategies.aspx


Acknowledgments

I wish to thank the following individuals who reviewed and critiqued my paper: Steve Wisner, Director, IT, Genworth Financial; Chad Mason, Director, Test Engineering, Choice Hotels International; and Brian Mericle, Principal Systems Engineer, Choice Hotels International.