Sergey Popov

Biography

Sergey Popov is a Certified SOA Architect, Consultant, Analyst, Governance Specialist, Security Specialist, as well as a Certified SOA Trainer. He has broad technical experience with SOA and application integration architecture development, and specializes in integration development based on Oracle applications. During his 15 years of working in the shipping and transportation industry, he helped establish communications and message brokering platforms for the world's biggest RO-RO and container shipping companies. In several projects for business-critical enterprise applications, Sergey designed and implemented ESB solutions with average transaction volumes of over 100,000 messages per day. Sergey's wide business knowledge in the Telecom industry is further complemented by expertise in the SCM, ERP, Banking and Finance sectors. He was responsible for the implementation of a Pan-European service layer for the second biggest cable and telecommunication provider in Europe. Sergey is also an Oracle Certified Professional and Oracle Fusion Middleware architect. His accumulated SOA, Java and Oracle Middleware experience was recently combined in the PACKT Publishing book Applied SOA Patterns on the Oracle Platform, August 2014 (http://bit.ly/1uqK9dq).




Flexible Service Repository Taxonomy and Service Broker Realization

Published: August 20th, 2012 • Service Technology Magazine Issue LXV
 

Abstract: In the telecommunication industry, the ability of the Order Provisioning business domain to invoke services dynamically at runtime is one of the critical requirements. The Service Broker, as a controller, is the main means of composing agnostic services dynamically. Composed services, together with other related enterprise artifacts, must be successfully discovered and exposed in various ways, which are clearly defined by a Service Inventory taxonomy. The main subject of this article is the creation of an effective Inventory taxonomy, which will be discussed in the following pages together with the Service Broker and the Service Messaging architecture.

In the first part we will try to offer a solution to the common arguments over the necessity and benefits of the Service Inventory. For an example model, we will use a hypothetical large telecommunication company that has several affiliates spread out over a few different countries.


Introduction

Across many years and numerous implementations, the most common obstacle to maintaining a desirable level of reusability (and thereby increasing ROI) has been the poor discoverability of artifacts in a Service Inventory, at both design time and runtime. Discoverability is most often undermined by unclear, incomplete, or overly complex ways of classifying an artifact's composition.


1. SAIF, Taxonomy and Governance

The UDDI tModel semantic [REF-1], although very powerful due to its minimalistic approach, is not always suitable for design-time discoverability. On the other hand, the very extensive SAIF taxonomy [REF-2] needs to be adopted through more than one Implementation Framework (IF), because a single framework may not cover the complete scope of SAIF Governance.

In general, a low level of discoverability leads to the implementation of overlapping de-normalized services, while reducing the chance of introducing agnostic composition controllers and sub-controllers. Controllers often become hybrid services with ambiguities and collect unneeded code detritus, such as:

  1. Dead code branches (placeholders for further extensions)
  2. Calls to dummy services
  3. Calls to heavy adapters (DB) with no business reason
  4. Unused variables
  5. Embedded composition logic with heavy if-else and case constructs, containing many hardcoded conditions based on geographical unit, item, client history values, and the like (illustrated in the sketch below).
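
To make the fifth item concrete, here is a minimal, purely hypothetical sketch of such embedded composition logic; the service names, geographical units and threshold are invented for illustration:

```java
// Hypothetical hybrid controller illustrating item 5: the composition is
// hardcoded per geographical unit (GU) and client history, so any business
// change forces recoding and redeployment instead of reconfiguration.
public class HardcodedProvisioningController {

    static class Order {                           // minimal stand-in type
        String geoUnit;
        int clientHistoryScore;
    }

    void provision(Order order) {
        if ("DE".equals(order.geoUnit)) {
            if (order.clientHistoryScore > 100) {      // hardcoded condition
                invoke("ProvisionPremiumDE", order);   // invented service names
            } else {
                invoke("ProvisionStandardDE", order);
            }
        } else if ("NL".equals(order.geoUnit)) {
            invoke("ProvisionNL", order);
        }
        // dead branch left as a placeholder for future GUs (item 1)
    }

    void invoke(String serviceName, Order order) {
        System.out.println("Calling " + serviceName); // stub for a real WS call
    }
}
```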

2. SOA Design Patterns

While the first four discrepancies can be solved by a vendor's team alone, the last one greatly benefits from the attention of SOA architects and SOA governance managers. Any controller suffering from problem number five in the list above is a prime candidate for service decomposition, with subsequent integration of a list of new services with redefined capabilities.

Resolving the problems listed above can be accomplished through the creation of a Service Repository for cataloguing references to new, redesigned services. This will feature dynamic service brokering through the implementation of a new Service Broker, where the following patterns can be applied:

  • Foundational Service Patterns (with Functional Decomposition, Service Encapsulation, Agnostic Context)
  • Capability Composition Patterns (with Capability Composition, Capability Recomposition, Entity Linking)
  • Inventory Centralization Patterns (with Process Centralization, Schema Centralization, Policy Centralization)

Benefiting from lessons learned, in the following articles I would like to present a practical approach to implementing the agnostic composition controller. This will include the service repository taxonomy, which is essential for supporting both the composition controller and the message elements.

To begin, this article will focus on Inventory Centralization and the multiple ways of accessing the Inventory in various situations, from different architectural layers. First, lookup situations will be explained and the types of objects classified. After that, the roles of the different message elements will be identified for every step.


3. General Objectives

The Service Inventory blueprint is a part of the Enterprise SOA architecture that requires strong commitment from all levels, especially from architects. Astonishingly enough, even today not all architects are ready to support this architectural layer, due to its complexity and the fact that the direct benefits do not seem obvious. It is therefore worthwhile to highlight the objectives we pursue with the implementation of the Service Repository and the principles we will apply to achieve them.




Further discussion will be based on the cross-country realization of a TM Forum Telecom Resource Provisioning Component [REF-3], which comprises:

  • A composite of three provisioning BPEL flows with Pan-European content, agnostic to the geographical units, handling the Order header and body.
  • An individual Order line handler, which is not agnostic and contains specific parts for every operating country.
  • Several additional conditions that must be taken into account; every order line could spawn several child processes depending on those conditions.

The entire structure is currently realized in a single BPEL flow.

According to generic SOA practices, this component must be functionally decomposed in order to provide runtime discoverability, and the new solution must be able to combine services into "task composition lists" depending on the content of the Order/Order line. The web services involved in the SCA composites must be combined according to the context, and endpoints must be recognized for each composition. The compensation flow must also be identified for each individual transaction, and every following operation must potentially be able to use the outcome of the previous one.
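
As one possible reading of this requirement, the sketch below models a "task composition list" as an ordered plan of invocation steps, each carrying its resolved endpoint and a compensating operation, where each step can consume the previous step's outcome. All names are illustrative assumptions, not part of the TM Forum component:

```java
import java.util.ArrayList;
import java.util.List;

// A sketch of a "task composition list": an ordered plan of service
// invocations assembled at runtime from the Order/Order line content.
public class TaskCompositionList {

    public static class TaskStep {
        final String serviceName;    // logical service to invoke
        final String endpointUrl;    // endpoint resolved per composition
        final String compensationOp; // compensation flow for this transaction

        TaskStep(String serviceName, String endpointUrl, String compensationOp) {
            this.serviceName = serviceName;
            this.endpointUrl = endpointUrl;
            this.compensationOp = compensationOp;
        }
    }

    private final List<TaskStep> steps = new ArrayList<>();

    public void add(TaskStep step) {
        steps.add(step);
    }

    // Every following operation can use the outcome of the previous one.
    public Object execute(Object initialInput) {
        Object carried = initialInput;
        for (TaskStep step : steps) {
            carried = invoke(step, carried); // stub for a real WS call
        }
        return carried;
    }

    private Object invoke(TaskStep step, Object input) {
        System.out.println("Invoking " + step.serviceName + " at " + step.endpointUrl);
        return input;
    }
}
```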

This way, everything starts with functional decomposition and is followed by recomposition, which will inevitably lead to the implementation of new components and compositions. Needless to say, one of the core SOA design rules is to "always speculate on new service reusability". It is also clear that when a reusable service is designed and made public, it must immediately become discoverable; otherwise all efforts will be lost.

It is not enough, however, to just make a service reusable; the composition controller must also be able to reuse it with minimal effort. The measure of this effort is the composability of the service, which can be achieved through standardization of the service contract and, principally, proper Canonical Data Model (CDM) implementation.

At this point it is obvious that functional decomposition will produce pan-European and country-specific components. What is still unknown is their number, and this level of clarity must also be taken into consideration during practical realization. Generally, we can expect three kinds of compositions or individual services:

  1. Completely generic, potentially reusable in every geographical unit (GU). Roles could be Controllers, Sub-Controllers, Initiators and Composition Members
  2. Services with a generic structure but different endpoints or endpoint particulars. Roles could be Controllers, Sub-Controllers and Mediators
  3. Non-agnostic, GU-specific processes/services

Regarding the first two kinds, if we are to follow SOA design rules, we will also have to make them domain-agnostic, so that they are usable beyond the Order Provisioning domain.

[Assumptions 1-3]: We assume that SCA (BPEL) will be the main orchestration platform for long-running processes [1], that the ESB will be the main Message Broker for short-running stateless operations [2], and that our design must be vendor-independent, so that we can replace some components, stateless and common, with custom-built (Java) or other vendors' components [3].


4. Possible Realizations


4.1 Lookup Types

One practical way to classify services and artifacts into a taxonomy is to identify what kind of data we will be looking up from every service layer. For vertical infrastructure layering, we suggest using the Oracle AIA service layer notation [REF-4]. This way we will have three main layers – Adapters, Enterprise Services (usually hosted on or available through an ESB), and task-orchestrated services in the Enterprise Business Flow layer. Note that the vertical stratification into the three main service models still remains.





4.1.1 The Service Business-Delegate is Looking for the Service-Worker




4.1.2 The Service is Looking for Endpoint(s)




4.1.3 The Service is Performing Data Transformation/Validation




4.1.4 The Service is Looking for Endpoint Particulars




4.1.5 The Service is Looking for an Internal Task's Parameters




4.1.6 The Service is Making Decisions (or the Decision)




4.2 Entity Types




The physical implementation could be:

  • File based
  • DB based

The construction of composite entities (as tasks) could be:

  • Static from a file location
  • Dynamic from a DB query
  • Static from a DB
  • Dynamic from a Rule Engine (RE)

A common rule of the ER implementation, for all approaches, is to have a unified ER endpoint for entity lookup, with Message Header elements as the input parameter.
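
A minimal sketch of such a unified lookup contract, assuming a hypothetical header type that carries the Message Header elements mentioned above (the field and method names are assumptions, not the actual repository interface):

```java
import java.util.List;
import java.util.Map;

// Sketch of the unified ER endpoint: one lookup contract with Message
// Header elements as the input parameter, regardless of whether the
// entities are file based, DB based, or constructed by a Rule Engine.
public interface EntityRepository {

    // Hypothetical carrier for the Message Header elements the caller has.
    class LookupHeader {
        public String geoUnit;         // GU the transaction belongs to
        public String businessDomain;  // e.g. Order Provisioning
        public String entityType;      // which entity group to look up
    }

    // Returns the matching (possibly composite) entities as key/value sets.
    List<Map<String, String>> lookup(LookupHeader header);
}
```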

4.2.1 Entity Groups




4.2.2 Entity Relations




4.3 Decentralized Realization

The idea of this realization is to avoid lookups of any kind and to maintain the orchestration logic as a static process. This way, processes can be reconfigured only through recoding and reimplementation.


4.3.1 Application Project Store

Lookup Types in use: None

Entities maintained: None, all project specific

The first approach is straightforward:

  1. Assumptions [1] and [2] stated above are taken for granted, and the provisioning flow is functionally decomposed to the level where agnostic common services are separated from functionally complete GU-specific business services.
    Consequences:
    Layers of the service repository are established for Task and Entity services.
  2. New, full-scale functional compositions are designed in order to minimize "creeping of business logic", i.e. to reduce the number of compositions for maintenance and reusability purposes. This is the classical "Top-Down" approach.
    Consequences:
    A Top-Down approach means devising a complete analysis upfront. This not only takes a lot of time but, considering the dispersed GUs, also requires deep and precise knowledge of the entire business operation everywhere, not to mention a substantial budget. In general, it is too late to do a Top-Down analysis at this stage, although the analysis itself is a very positive thing.

  3. SCAs are used to implement compositions in a static BPEL way; dynamic service/endpoint invocations based on lookups are avoided, since static BPEL flows are more visible to inexperienced developers.
    Consequences:
    The Utility Services layer in the Service Repository is deliberately neglected for simplification purposes. In fact, this can potentially lead to the implementation of Hybrid services, in which case Reliability (1) will be reduced. Vendors' SOA knowledge is usually not very high, which narrows the choice of vendor and potentially invites inexperienced solution providers; this also hurts Reliability.

    In general, processes will be identical with minor alterations and developed by copy-and-paste, so Maintainability (3) will be severely affected. With no common Utility components acting as a single point of failure, Reliability can be high. However, without design-time discovery, after several implementation rounds it will be virtually impossible to maintain the desired level of Reusability (2).

    Reliability (1) can vary from process to process, depending on the complexity of the orchestration logic: the more complex the "if-else" logic, the more error-prone the process will be. As a workaround, SCA mediators with static dispatching logic can be implemented; the number of mediator filters/branches can be the same as the number of processes.
  4. Service endpoint handling in the ESB will be similar to the SCA solution. The number of services will equal {number of channels} x {number of affiliates}. The goals will be affected in a similar way.

4.4 Centralized Realization

Centralization denotes constant reuse through runtime resource lookup and discovery. The types of lookups and objects are defined above. The governance rule "alteration by configuration" gives the most profound benefits when it is maintained centrally. The following approaches practice the same lookup paradigm with different degrees of centralization and lookup frequencies.

It is obvious that the number of cross-platform lookups should be limited due to performance requirements, and that the scope of the returned objects must be adjusted to the transactional scope. For this purpose, in accordance with Assumption [3], we implement a Message Container with a Process Header, in which transaction-related values persist and are propagated through all layers along the way. A certain trade-off must therefore take place in order to optimize the size of the process transaction-specific data, the number of lookups, and the transaction MEP itself.

The MC implementation with the Process Header (PH) and Message Header (MH) also gives us considerable independence from platform vendors.
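
As a sketch of this idea (the field names below are our assumptions, not the actual schema), the container separates a transport-neutral Message Header that describes the payload, a Process Header for transaction-scoped values, and the payload itself, which stays opaque to every intermediary:

```java
import java.util.HashMap;
import java.util.Map;
import org.w3c.dom.Element;

// Sketch of the Message Container (MC): the Process Header (PH) holds
// transaction-related values propagated end-to-end through all layers;
// the Message Header (MH) describes the payload; the payload itself is
// an opaque <any> block that intermediaries never parse.
public class MessageContainer {

    public static class ProcessHeader {
        public String correlationId;   // global correlation ID for async MEPs
        public final Map<String, String> transactionValues = new HashMap<>();
    }

    public static class MessageHeader {
        public String sender;
        public String receiver;
        public String documentType;    // reference to the kind of payload carried
    }

    public final MessageHeader messageHeader = new MessageHeader();
    public final ProcessHeader processHeader = new ProcessHeader();
    public Element payload;            // the <any> block
}
```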


4.4.1 Domain Repository

Lookup Types in use: 4.1.1-4.1.4
Entities maintained (from 4.2): 1-4

This approach can be a good choice when:

  • The GU has a great deal of independency in order to stay more flexible in terms of business operations
  • The GU is supplied with all the necessary SOA guidelines and has strong SOA sponsorship willing to follow the Enterprise ICC guidelines
  • The GU is well capable of maintaining its own SOA assets and infrastructure
  • The GU SOA assets are mostly GU-specific, so establishing an Enterprise Repository is just impractical

Usually, we expect that lookups 4.1.1-4.1.4 are used. Domain service broker(s) will be implemented to perform service dispatching. The Service Locator, as part of the Service Broker, must discover enough information to support transactions end-to-end and supply the Message Broker (the ESB part) with all the information needed for 4.1.3-4.1.4.

Design rules would be as follows:

  • Synchronous MEPs: one DR lookup per transaction, persist data in PH
  • Asynchronous MEPs with Global Correlation ID: one DR lookup and one PH lookup by CorrID
  • Asynchronous MEPs without Global Correlation ID: more than one DR lookup depending on the number of services/operations to invoke

Again, the above rules are subject to tradeoffs, and depend entirely on the level of service granularity, message size and process simplification.
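
A sketch of the first of the design rules above, reusing the hypothetical MessageContainer from the earlier sketch (the DR facade and its lookup key are assumptions):

```java
import java.util.Map;

// Sketch of the synchronous-MEP rule: one Domain Repository (DR) lookup
// per transaction, with the result persisted in the Process Header (PH)
// so downstream layers read the PH instead of calling the DR again.
public class ServiceLocator {

    // Hypothetical DR facade; the real lookup keys are a design decision.
    public interface DomainRepository {
        Map<String, String> lookup(String documentType);
    }

    private final DomainRepository dr;

    public ServiceLocator(DomainRepository dr) {
        this.dr = dr;
    }

    public void resolve(MessageContainer mc) {
        Map<String, String> ph = mc.processHeader.transactionValues;
        if (ph.isEmpty()) {            // only one DR lookup per transaction
            ph.putAll(dr.lookup(mc.messageHeader.documentType));
        }
    }
}
```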

This approach is also very traditional with the following steps:

  1. Functional decomposition does not have to be fully completed before implementation. Only the utility services must be clearly identified upfront, which is simple, as these services are well patterned: Service Broker, Translator, Transformer, RE Endpoint, DE Endpoint, Message Broker. Business services can initially be presented in big chunks, suitable for further decomposition. This is the typical "Meet-in-the-Middle" approach, where Top-Down and Bottom-Up benefits are combined.
    Consequences:
    Optimal delivery time with measurable and attainable performance, significantly better than with the decentralized approach. Reliability is constantly maintained in a balanced manner throughout the decomposition.
  2. Business logic and complex composition logic are removed from the Composition Controllers and Sub-Controllers in order to make the composition adjustable through simple configuration (Entities, Rules) rather than re-coding. The important point here is that the Composition Controllers do not have to be totally abstract and agnostic, as they are predefined in the business and/or GU domain. This is also a way to trade reusability for time-to-market. The borderline is whether the EBO for the Composition Controller is implemented in the Message Container as an "any" payload, or within the Message Container namespace; the second approach can be acceptable in this type of realization.
    Consequences:
    Positive impact on Reusability, and Maintainability is significantly increased. Performance could potentially be lower than with a direct coding approach, but this is easily justified by resizing the compositions, which is attainable through configuration. Reliability can also be negatively impacted, as we have introduced some single points of failure here, but caching and redundant implementation can solve this problem just as easily.
  3. The transactional part of the extracted configuration is persisted in the Message Container, in the Process Header part (execution plan, set of transactional variables, or routing slip). The PH will be propagated end-to-end, up to the adapter framework before the Ultimate Receiver.
    Consequences:
    Positive impact on all characteristics.
  4. EBSs in the ESB will use PH values for Transformation (Enrichment), Validation, Filtering, and the Invocation Service or its ABCS. A minimal number of lookups is allowed here, as this layer must be a good performer; if necessary, however, Java callouts to the MDBs (like the RE MDB) are allowed, as described above. This approach is in alignment with the Delivery Factory pattern, where groups of adapters to the Ultimate Receiver can be abstracted through the Factory Layer; grouping is usually done by MEP + transport protocol. A sketch of this routing-slip processing follows the list.
    Consequences:
    Positive impact on all characteristics.
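
To illustrate steps 3 and 4 together, the sketch below treats the execution plan persisted in the Process Header as a routing slip that is walked entry by entry; the comma-separated plan format and the "executionPlan" key are assumptions made for the example:

```java
// Sketch of steps 3-4: the execution plan (routing slip) persisted in the
// Process Header is walked entry by entry; each entry names the next
// service or ABCS, so this layer needs no further repository lookups.
public class RoutingSlipProcessor {

    public void process(MessageContainer mc) {
        // Assumed convention: a comma-separated slip under "executionPlan".
        String slip = mc.processHeader.transactionValues
                        .getOrDefault("executionPlan", "");
        for (String step : slip.split(",")) {
            String serviceName = step.trim();
            if (!serviceName.isEmpty()) {
                dispatch(serviceName, mc); // stub for the real invocation
            }
        }
    }

    private void dispatch(String serviceName, MessageContainer mc) {
        System.out.println("Dispatching to " + serviceName);
    }
}
```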

4.4.2 Cross-Domain Utility Layer

Lookup Types in use: 4.1.1-4.1.5
Entities maintained (from 4.2): 1-5

The main disadvantage of the previous approach is that we identified and implemented the Utility layers within the Service Repository, but did not reuse them across all domains. The Utility layer is very generic, so with some effort it could be made totally reusable. The efforts required are clearly identified in the previous approach:

  • Make the broker payload-independent by presenting an <any> block in the Message Container
  • Implement an SBDH-compliant Message Header as a reference to the payload
  • Implement the Process Header as a persistence container for the routing slip/execution plan
  • Implement Audit/Message Tracking data elements for message tracking information

The last point is a very positive outcome of this implementation, as a universal message/service broker supports the implementation of other OFM Common Patterns, such as Error Hospital, Common Audit and Centralized Logging [REF-5]. This is highly important for maintaining a unified contract for all components connected to the Service Broker, which is fairly simple to achieve with the implementation of a Message Container as described above.
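
In this reading, a payload-independent broker routes, audits and escalates purely on header data, never parsing the <any> block. Below is a sketch under the same assumed MessageContainer, with invented key and method names:

```java
import org.w3c.dom.Element;

// Sketch of a payload-independent Service Broker: routing, Common Audit
// and Error Hospital hooks use only the Message/Process Headers; the
// <any> payload is forwarded untouched, which keeps one unified contract
// for every component connected to the broker.
public class PayloadAgnosticBroker {

    public void route(MessageContainer mc) {
        audit(mc); // Common Audit hook: header data only
        String target = mc.processHeader.transactionValues.get("nextEndpoint");
        if (target == null) {
            toErrorHospital(mc); // Error Hospital hook
            return;
        }
        send(target, mc.payload); // the payload is never parsed
    }

    private void audit(MessageContainer mc) {
        System.out.println("Audit: correlationId=" + mc.processHeader.correlationId);
    }

    private void toErrorHospital(MessageContainer mc) {
        System.out.println("Routing to the Error Hospital");
    }

    private void send(String endpoint, Element payload) {
        System.out.println("Forwarding payload to " + endpoint);
    }
}
```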

Cross-layer utility services will be more thoroughly reviewed and tested than in a domain-specific implementation, which will have a positive impact on Reliability and Performance. SB scalability must be treated with great care, as it may be the success factor for a cached/clustered implementation. The SB/SR evidently become single points of failure, so rigorous stress-testing is absolutely required, and a dedicated framework must be devised for this (a proposal has already been submitted). Due to the implementation of a Canonical Endpoint for all composition members, an alternative implementation based on J2EE can be provided with relatively little effort.

Reusability and Maintainability will be higher than with the previous methods, but as governance is still at the Domain (GU) level, it is not yet at the top level and remains quite decentralized.

All four steps are identical to the previous solution, except that the implementation of Utility layers/patterns is governed from a central location.


4.4.3 Enterprise Repository

Lookup Types in use: All
Entities maintained: All

If any GU prerequisite from 4.4.1 is left out, implementation of the Domain Repository, with or without common utility layers, will become problematic. In that case, decentralization will effectively lead to the same disorder as the implementation of the Application Project Store. The key success factor for the Domain Repository is maintaining Canonical Endpoints and a unified configuration for the PE, as mentioned above.

With this approach, all entities must be maintained centrally. This is important, because all development and maintenance are already performed centrally in one HQ GU. Therefore, all decomposed and recomposed GU-related flows will be maintained and configured in a single ER. According to an anticipated evaluation, up to 80% of all business flows across GUs have the same structure and logic (we are all, after all, in the Telecom business), and thus the main lookup types will be 4.1.2-4.1.4.

All four implementation steps will be similar to 4.4.2 with some exceptions:

  1. Functional decomposition will be at the GU + affiliate level.
  2. Possible decomposition parameters for Service Broker and Message Broker will be stored centrally as execution plans /routing slips.
  3. Before implementing a new process, a diligent investigation must take place; it could result in further decomposition of existing processes, with subsequent recomposition according to the new requirements. It may or may not end up with the implementation of a new execution plan.
  4. Rule engines will be used very extensively. The main concern here is to keep the rules inexpensive (avoiding a rules "explosion"); this is already addressed by splitting the rule tables (see the sketch below).
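
A minimal sketch of this table-splitting idea (the two-level layout and keys are assumptions): instead of one large table keyed by every parameter combination, a small per-GU table is selected first and then queried, which keeps each individual table cheap to evaluate:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of "splitting the rule tables" to avoid a rules explosion:
// a cheap first split selects a small per-GU table, which is then
// queried by condition; the rule contents are invented for illustration.
public class SplitRuleTables {

    private final Map<String, Map<String, String>> tablesByGeoUnit = new HashMap<>();

    public void addRule(String geoUnit, String condition, String outcome) {
        tablesByGeoUnit
            .computeIfAbsent(geoUnit, gu -> new HashMap<>())
            .put(condition, outcome);
    }

    public String evaluate(String geoUnit, String condition) {
        Map<String, String> table = tablesByGeoUnit.get(geoUnit); // cheap first split
        return table == null ? null : table.get(condition);
    }
}
```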

To conclude this approach: we can expect Performance to be the same as in the previous one, Maintainability and Reusability to be at the top, and Reliability/Stability to be our main concern and the subject of proper infrastructure implementation.


References

[REF-1] Providing a Taxonomy for Use in UDDI Version 2 https://www.oasis-open.org/committees/uddi-spec/doc/tn/uddi-spec-tc-tn-taxonomy-provider-v100-20010717.pdf

[REF-2] HL7 Service-Aware Interoperability Framework http://www.hl7.org/documentcenter/public_temp_2631CD67-1C23-BA17-0CB72AE0515B2DAC/standards/v3/SAIF_CANON_R1_INFORM_2011SEP_public.pdf

[REF-3] Telecommunication Business Process Framework (eTOM) http://www.tmforum.org/BusinessProcessFramework/6775/home.html

[REF-4] Getting Started with the Oracle™ AIA Foundation http://docs.oracle.com/cd/E20713_01/doc.11112/e17359.pdf

[REF-5] Oracle OFM patterns http://www.oracle.com/technetwork/articles/entarch/index-098853.html