Leszek Jaskierny

Biography

Leszek is a Master IT Architect who has been working for Hewlett-Packard since 2002 and has 17 years of experience in the computing industry, delivering projects for financial services customers. He has gained programming, solution development and project leadership experience building IVR systems, database projects and Web applications. Prior to joining HP, Leszek worked for Metrosoft, Inc., delivering solutions for major US brokerage houses and investment banks.

Representative accomplishments include: SOA platform implementation for the National Bank of Poland, a Multi-Channel Platform and Corporate Electronic Banking application for Lukas Bank (Credit Agricole), the Investor Phone System for TD Waterhouse and Quick & Reilly, and the Bloomberg Investor Phone for GTE Airfone (Verizon). Leszek's current focus is on Internet banking solutions, service-oriented architecture featuring both JEE and .NET technologies, enterprise security and Master Data Management solutions.


SOA in Banking: A Review of Current Trends and Standards based on an Example of a Real-Life Integration Project Delivered to a Financial Services Customer

Published: June 11, 2014 • Service Technology Magazine, Issue LXXXIV

Abstract: The following article is based on the example of an actual banking project delivered by HP Professional Services to a major European bank. A similar approach has been taken in other integration projects to varying extents, depending on the legacy constraints and requirements of the customer.

Introduction

Due to rapid growth in the market, a European bank decided to replace its IT application infrastructure, including its core processing system, integration platform and all front-end delivery channels. One of the challenges of this project was the integration of over 20 back-end systems used by the bank, as well as integration with external service providers.

The three main challenges of the project were: (1) business related, to bridge the gap between business requirements and technical specifications, (2) management related, to shorten development time and meet the aggressive timeline of the project, and (3) technical, to ensure that performance requirements would be met on the given hardware platform. The performance of the transactional system, especially the middleware layer, is extremely important because of strict rules imposed by the systems being interfaced; for example, ATM requests processed in online mode are governed by strict time-out rules.

Two key areas of the solution were: (1) the definition of a unified, enterprise-wide data model for communication, and (2) the selection of tools allowing high-level definition of orchestrated services that would meet the agreed performance requirements in the production environment and allow business analysts to participate in the service design process.

The project team acknowledged that the resulting data model would need to follow business requirements but also comply with the requirements of the existing legacy systems. While basic data transformations can be performed directly in the service interface of the back-end systems, fine-grained business requirements can be fulfilled only by the orchestration of coarse-grained back-end services.

In contrast to business services exposed by the back-end applications, services resulting from the orchestration were called virtual services.

Figure 1

Unified Data Model

Building a unified data model was the first step on the transformation roadmap for a service-oriented architecture. After an IT assessment, 21 different back-end applications were identified that provided various services in the bank's production environment. Applications were operated either internally by the bank itself or externally, usually by another bank, exposing a set of remote services. After identification of the distinct data areas, a "primary owner" or "source application" (the application that owns the particular data) was assigned to each data area, redundancies (multiple systems storing the same data) were identified and, finally, systems that were dependent on the selected data were found. As a result, the representation of each data item was defined as an XML schema type.
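
For illustration, a single data item, such as a postal address owned by the customer-data "source application", might be captured as an XML schema type along the following lines (a minimal sketch; the namespace and field names are hypothetical, not taken from the actual project):

  <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
             targetNamespace="urn:bank:model:customer"
             elementFormDefault="qualified">

    <!-- Postal address, as owned by the customer-data source application -->
    <xs:complexType name="AddressType">
      <xs:sequence>
        <xs:element name="Street" type="xs:string"/>
        <xs:element name="City" type="xs:string"/>
        <xs:element name="PostalCode" type="xs:string"/>
        <xs:element name="CountryCode" type="xs:string" minOccurs="0"/>
      </xs:sequence>
    </xs:complexType>
  </xs:schema>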

How to Start the Process?

Common approaches to data modeling were to:

  • Select and use an industry standard data model
  • Select and customize template(s)
  • Build model from scratch

The first option was ruled out because there was no single standard defining end-to-end banking operations. Also, country-specific regulations are very often inconsistent with particular standards, leading to the need for proprietary extensions.

The last option was ruled out because the effort (cost and time) estimated for the "build from scratch" approach far exceeded the available resources.

The Unified Data Model was created based on a template: an XML Reference Data Model developed by our team, built on several industry standards applicable to the financial industry, such as SWIFT, IFX (Interactive Financial eXchange) and ISO 20022 (the ISO standard for financial services messaging), and drawing on our practical experience from previous projects.

This selection allowed for a pragmatic approach to the standardization process within the bank. Instead of enforcing a specific standard, it provided an effective framework and templates for the implementation teams in all the relevant banking operations' areas. It was positioned as an effective baseline for the gap analysis and subsequent analytical and design activities, with the final result being a full, customized XSD representation of the actual bank's environment. We found such an approach very practical and effective in several other banking projects.
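
As a sketch of how such template customization might look in practice (the type and field names below are hypothetical), a reference-model type can be extended with a country-specific attribute identified during the gap analysis; the fragment assumes both types live in the same schema:

  <!-- Reference template type, as delivered with the XML Reference Data Model -->
  <xs:complexType name="CustomerBaseType">
    <xs:sequence>
      <xs:element name="Name" type="xs:string"/>
      <xs:element name="DateOfBirth" type="xs:date" minOccurs="0"/>
    </xs:sequence>
  </xs:complexType>

  <!-- Bank-specific customization: a national identification number
       added as a proprietary extension to the template -->
  <xs:complexType name="CustomerType">
    <xs:complexContent>
      <xs:extension base="CustomerBaseType">
        <xs:sequence>
          <xs:element name="NationalId" type="xs:string" minOccurs="0"/>
        </xs:sequence>
      </xs:extension>
    </xs:complexContent>
  </xs:complexType>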

Another question to be answered was the scope of the project, which could be defined as either:

  • enterprise-wide project
  • domain-oriented project

Although the project would follow similar steps and activities in both cases, an enterprise-wide project would require much stronger governance, in order to synchronize work performed by the teams focusing on particular domains.

The project referenced in this article was recognized as an enterprise-wide project, and the work was organized around the following activities:

  1. create a catalog of functional and non-functional domains
  2. recognize links between the identified domains
  3. create a common dictionary that will be shared by all domains
  4. create domain-specific dictionaries appropriate for particular domains
  5. identify and document domain-specific data structures, starting from the most characteristic data entities related to each domain

Identification of Business Domains

Particular domains and sub-domains were associated with different business units of the bank. For example:

  • User management, including maintenance of user data, permission models and relations with customers, was associated with the electronic banking department
  • Management of banking products was split across several departments responsible for different product lines, e.g. deposits, lines of credit and credit cards
  • Transaction processing was also split: the back office was responsible for transaction execution, while transaction origination and authorization were managed by the electronic banking department
  • Although sales processes are mainly organized around sales of banking products and product-related services, they are usually managed by a separate department rather than by the departments responsible for the particular products themselves

Another aspect of such a split was the definition of data ownership. Data ownership can be viewed from an IT (technical) or a business perspective. IT is mainly focused on where data is stored, while business looks at the data from an operational perspective. Very often those two perspectives are different. For example, in the case of a bank using multiple core banking systems (CBS), data entities belonging to the same logical domain are stored in different core systems. Looking from this perspective, the application of logical views allows for the identification of important links that are not visible on the technical level.

The following diagram presents different ways to split the data, depending on the viewpoints described in this section.

Figure 2

Definition of the Design Standards

The consequence of the decision to follow a service-oriented design approach was the ability to involve the business at any stage of the project, from analysis through design, implementation and maintenance. In order to ensure that such cooperation was possible, the project team had to prepare a set of simple, yet extremely important design standards enabling this cooperation, such as:

  • naming conventions
  • organization of the physical assets of the project, including organization of the project folders and files
  • structure of the data dependencies on the analytical level
  • decisions on the tools and document templates that will be used by the analyst and developer teams
  • introduction of the service governance elements

Naming Convention

A rule of thumb is that naming conventions should be self-explanatory and human-readable. It is also good to agree on the consistent usage of a structure and to use fixed parts of the name defining its purpose. Below are examples of names used as part of the XSD schema definition:

  • AmountType
  • CrcType (currency ISO code)
  • NameType
  • AddressType
  • EmailAddressType
  • PhoneNumberType
  • ...
  • UserBasicType
  • UserPersonalType
  • UserCorporateType
  • UserPermissionsType
  • UserType (or UserElement)
  • ...
  • SetUserReq
  • SetUserResp
  • GetUserReq
  • GetUserResp
  • ...

The first part of the example shows basic types that might be either enterprise-wide (such as Amount or Currency) or domain-specific, such as the Address type.

The second part shows domain-specific types, where the "basic" type always defines a minimal subset of the data common to all the sub-types. For example, the "personal" and "corporate" sub-types of the user demonstrate different identities of the same entity (e.g. a user can be identified as an employee of a corporation or as an individual customer of the bank). The "permissions" type is an example of a set of properties that are applicable to any user. Finally, UserType is a super-type, grouping all possible details of the user together.

The third part of the example presents message parts that will later be used to build services and service operations (functions).
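
A minimal XSD sketch of this convention could look as follows (the structures are simplified and hypothetical; referenced types such as NameType, AddressType and UserType are assumed to be defined in the same dictionary):

  <!-- Basic type: minimum subset of user data common to all sub-types -->
  <xs:complexType name="UserBasicType">
    <xs:sequence>
      <xs:element name="UserId" type="xs:string"/>
      <xs:element name="Name" type="NameType"/>
    </xs:sequence>
  </xs:complexType>

  <!-- Sub-type: one identity of the same entity (individual customer) -->
  <xs:complexType name="UserPersonalType">
    <xs:complexContent>
      <xs:extension base="UserBasicType">
        <xs:sequence>
          <xs:element name="Address" type="AddressType" minOccurs="0"/>
        </xs:sequence>
      </xs:extension>
    </xs:complexContent>
  </xs:complexType>

  <!-- Message parts, later used to build services and service operations -->
  <xs:element name="GetUserReq">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="UserId" type="xs:string"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>

  <xs:element name="GetUserResp">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="User" type="UserType"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>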

Organization of the Physical Assets of the Project

Although it may sound like a minor decision, the organization of the data model in terms of folders and files can have significant consequences, both for IT and for business. While the technical approach focuses on the organization of files based on their technical properties (e.g. simple types, complex types, interfaces), the business approach would be oriented around grouping the files based on data ownership.

Enterprise-scale projects usually require a number of teams working in parallel. Proper organization of the physical assets was very helpful in achieving a certain level of independence between the teams focusing on different business domains and working with different counterparties from the bank.

Structure of the Data Dependencies on the Analytical Level

Another fundamental decision is to distinguish between enterprise data that will be shared between various business domains and domain-specific data that will most likely be governed within the domain.

In both cases, common data was organized in the form of dictionaries. While the domain-specific dictionaries were governed locally by the team responsible for the particular domain, the enterprise dictionaries were shared between the teams and hence required special governance procedures.
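
In XSD terms, this split might translate into a domain schema importing the shared enterprise dictionary; the file names, namespaces and types below are illustrative only:

  <!-- deposits/DepositsDictionary.xsd: domain-specific dictionary,
       governed locally by the team responsible for the deposits domain -->
  <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
             xmlns:ent="urn:bank:dictionary:enterprise"
             targetNamespace="urn:bank:dictionary:deposits"
             elementFormDefault="qualified">

    <!-- Shared enterprise dictionary, subject to central governance -->
    <xs:import namespace="urn:bank:dictionary:enterprise"
               schemaLocation="../common/EnterpriseDictionary.xsd"/>

    <xs:complexType name="TermDepositType">
      <xs:sequence>
        <!-- AmountType comes from the enterprise dictionary -->
        <xs:element name="Principal" type="ent:AmountType"/>
        <xs:element name="TermInMonths" type="xs:int"/>
      </xs:sequence>
    </xs:complexType>
  </xs:schema>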

Tools and Document Templates

The most important part of collaboration between the project teams is to focus on the business and functional contexts. This is why the proper selection of design tools and document templates is critical, ensuring that the results can be easily shared and merged together at the enterprise level.

Selection of the Tools for Data Modeling

In the project referenced in this article, a choice was made between two tools:

  • Enterprise Architect (Sparx Systems)
  • XML Spy (Altova)

While those tools are significantly different in nature, both allow for the definition and maintenance of the data model. The main differences are the following:

  • Enterprise Architect allows creation of various views of the model, starting from a high-level, logical model, followed by different business and implementation perspectives
  • XML Spy focuses on the technical view of the model, showing only one perspective: the organization of the data that will be used by the services.

The choice made at the project level was to use XML Spy as the primary data modeling tool, because of the outstanding quality of the XSD produced by this tool.

In another project, a similar decision related to the choice between EA and XML Spy led to a mixed approach. Thanks to the high-quality API exposed by EA, a small application was developed to automatically synchronize work between the two tools.

Introduction of Service Governance

Service governance is essentially related to central management of the enterprise resources that span domains and business areas. All the design standards already mentioned contribute to the overall governance, but cover only one aspect. Tools and templates won't work without strong management procedures that need to be followed at every stage of the project.

Unfortunately, even templates and procedures are not sufficient for overall success. It could be the case that, especially during the analysis phase of the project, work done in accordance with agreed standards leads to meaningless results. In the project referenced in this article, a team working on the transaction authorization procedures came to the point where processes were so complicated that there was no simple way to model them in any of the available tools.

Such situations illuminate the most important aspect of governance: strong leadership. It could be the Chief Architect or any other team member with a clear vision of the overall solution and the ability to put several small bits and pieces into the big picture representing the expected results of the project. In the referenced project, the decision was made to establish an Architecture Board consisting of the Chief Architect and a few other team members with good overall knowledge of the project. Throughout the project, the Architecture Board was responsible for reviewing all the analytical documents to achieve overall consistency. In case of conflicts, members of the Architecture Board worked with the business to produce satisfactory solutions.

Definition of the Physical Data Model

The definition of the physical data model followed both a top-down and a bottom-up approach.

Top-Down Approach

The primary focus of the top-down approach was to understand the future needs of the business stakeholders, by:

  • gathering user requirements
  • defining and cataloging the services required to support the user requirements
  • mapping the required services on the data model templates
  • fine tuning the model

Figure 3

Bottom-Up Approach

The bottom-up approach focused on the existing applications by:

  • reviewing enterprise assets
  • identifying and cataloging the service candidates from the existing application assets
  • mapping selected service candidates on the data model templates and fine tuning the model

Figure 4

Design of the XML Schema

The results of the top-down and bottom-up analyses led to the creation of the unified data model defined as an XML schema, where:

  • functional blocks were grouped according to business domains, such as customer data, account/product data, transaction data...
  • technical and business data types were defined in common dictionaries shared by all the business domains
  • the reference data model provided basic structures (or templates) that may be extended during the analysis process

Figure 5

The prefix CFM stands for Common Message Format. In various banking projects, the data models defined for particular banks were named individually for each bank; for example, the model for Bank X could be named BXMF.
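
In practice, such a prefix appears both in the namespace declaration and in type references; a hypothetical fragment might look like this (the namespace URI and type are invented for the example):

  <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
             xmlns:cfm="urn:bank:cfm"
             targetNamespace="urn:bank:cfm"
             elementFormDefault="qualified">

    <!-- Types of the Common Message Format are referenced
         through the cfm: prefix throughout the model -->
    <xs:complexType name="CustomerType">
      <xs:sequence>
        <xs:element name="Name" type="xs:string"/>
      </xs:sequence>
    </xs:complexType>

    <xs:element name="Customer" type="cfm:CustomerType"/>
  </xs:schema>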

Design of the Service Messages

Although XSD was used to define the data structures, the ultimate goal of the SOA project was the definition of services. This is why the data model was usually referred to as a "message model".

Message-specific data types extended the following abstract types:

  • CommonRequestType

Figure 6

  • CommonResponseType

Figure 7

The usage of common types for request and response messages brought another level of standardization. Apart from the technical data transferred in the SOAP header (in the case of Web services), the common types defined the common parts of the message bodies.
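
A sketch of such abstract common types, and of a concrete message type extending them, might read as follows (the element names are illustrative, not the actual project's; the fragment assumes all types share one schema):

  <!-- Common part of every request message body -->
  <xs:complexType name="CommonRequestType" abstract="true">
    <xs:sequence>
      <xs:element name="RequestId" type="xs:string"/>
      <xs:element name="ChannelId" type="xs:string"/>
    </xs:sequence>
  </xs:complexType>

  <!-- Common part of every response message body -->
  <xs:complexType name="CommonResponseType" abstract="true">
    <xs:sequence>
      <xs:element name="RequestId" type="xs:string"/>
      <xs:element name="StatusCode" type="xs:string"/>
    </xs:sequence>
  </xs:complexType>

  <!-- Concrete message type extending the common request -->
  <xs:complexType name="GetUserReqType">
    <xs:complexContent>
      <xs:extension base="CommonRequestType">
        <xs:sequence>
          <xs:element name="UserId" type="xs:string"/>
        </xs:sequence>
      </xs:extension>
    </xs:complexContent>
  </xs:complexType>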

Design of Web Services

Web services were defined separately for the "business services" exposed by the back-end systems and for the "virtual (orchestrated) services" exposed by the service orchestration engine. In both cases:

  • The Web service definitions (WSDLs) referred to data types defined by the unified data model (XSD)
  • The unified data model was shared between all the units of the organization
  • The unified data model was applied to the services provided by the applications (implementation services) and orchestrated services (virtual services)

Figure 8
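
The following hypothetical WSDL 1.1 fragment (binding and service sections omitted) illustrates the rule that message parts refer only to elements defined in the unified data model; all names and locations are invented for the example:

  <wsdl:definitions name="UserService"
                    targetNamespace="urn:bank:services:user"
                    xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"
                    xmlns:tns="urn:bank:services:user"
                    xmlns:cfm="urn:bank:cfm"
                    xmlns:xs="http://www.w3.org/2001/XMLSchema">

    <wsdl:types>
      <xs:schema>
        <!-- No local type definitions: the unified data model is imported -->
        <xs:import namespace="urn:bank:cfm" schemaLocation="CFM.xsd"/>
      </xs:schema>
    </wsdl:types>

    <!-- Message parts refer only to elements from the unified model -->
    <wsdl:message name="GetUserRequest">
      <wsdl:part name="body" element="cfm:GetUserReq"/>
    </wsdl:message>
    <wsdl:message name="GetUserResponse">
      <wsdl:part name="body" element="cfm:GetUserResp"/>
    </wsdl:message>

    <wsdl:portType name="UserPortType">
      <wsdl:operation name="GetUser">
        <wsdl:input message="tns:GetUserRequest"/>
        <wsdl:output message="tns:GetUserResponse"/>
      </wsdl:operation>
    </wsdl:portType>
  </wsdl:definitions>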

Selection of Service Orchestration Tools

The ability to create fine-tuned business services following detailed specifications prepared together with the business was one of the fundamental requirements that justified adopting the service-oriented approach for the project.

The selection of the service orchestration tool was based on the following set of goals, highlighting particular business and technical requirements of the project:

  • Goal 1: Define business processes that interact with external entities through Web service operations defined using WSDL 1.1. The message part of the WSDL should refer only to the complex types (or elements) defined in the schema of the unified data model.
  • Goal 2: Use well-known and widely adopted notation.
  • Goal 3: Avoid tight dependency between the design tool and the workflow notation, in order to make it portable between design tools.
  • Goal 4: Provide data manipulation functions needed to define process data and flow control.
  • Goal 5: Support an identification mechanism for process instances that allows the definition of instance identifiers at the application message level.
  • Goal 6: Support the implicit creation and termination of process instances as the basic lifecycle mechanism, and provide a flow-control mechanism to enforce the sequence of actions.
  • Goal 7: Define a long-running transaction model that is based on proven techniques.
  • Goal 8: Implement a zero-coding pattern when designing and deploying orchestrated services.

The tool that was selected for the project was based on a proprietary solution and is due to be replaced by an off-the-shelf software package. Currently, a number of leading ESB-class applications available on the market fulfill the requirements stated above.
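
Although the project's tool was proprietary, the goals above closely match what the WS-BPEL 2.0 standard offers. As a purely illustrative sketch (the partner link types, operations and messages are invented), a virtual service orchestrating a single back-end service could be expressed in BPEL as:

  <process name="GetUserVirtualService"
           targetNamespace="urn:bank:processes:user"
           xmlns="http://docs.oasis-open.org/wsbpel/2.0/process/executable"
           xmlns:svc="urn:bank:services:user">

    <partnerLinks>
      <!-- The consumer calls the virtual service; "backend" is the real system -->
      <partnerLink name="client" partnerLinkType="svc:UserServiceLT"
                   myRole="provider"/>
      <partnerLink name="backend" partnerLinkType="svc:BackendLT"
                   partnerRole="provider"/>
    </partnerLinks>

    <variables>
      <variable name="request" messageType="svc:GetUserRequest"/>
      <variable name="response" messageType="svc:GetUserResponse"/>
    </variables>

    <sequence>
      <!-- Implicit process instance creation on the incoming message (Goal 6) -->
      <receive partnerLink="client" operation="GetUser"
               variable="request" createInstance="yes"/>
      <!-- Coarse-grained back-end service invoked synchronously -->
      <invoke partnerLink="backend" operation="GetUser"
              inputVariable="request" outputVariable="response"/>
      <reply partnerLink="client" operation="GetUser"
             variable="response"/>
    </sequence>
  </process>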

Summary

The project resulted in the identification of approximately 140 business services provided by internal and external service providers. These services were further orchestrated into 500 highly specialized, fine-grained virtual services supporting various service consumers, mainly front-end applications.

Figure 9

Thanks to following the top-down rather than the bottom-up approach, the services delivered for the front-end applications almost perfectly matched the requirements stated by the business stakeholders, while only minor modifications to the back-end systems were required.

All the data transformations between the internal data structures of the back-end systems and the unified data model were implemented in specific adapters exposing services from the particular systems. In some cases, these services were implemented using native capabilities of the back-end applications (e.g. internal scripting languages), while in other cases, adapters were placed on top of the legacy systems as separate, add-on applications.
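
As an illustration of the transformation work performed inside such an adapter (the legacy record layout and the mapping below are invented for the example), an XSLT stylesheet could map an internal core-banking structure to the unified data model:

  <xsl:stylesheet version="1.0"
                  xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                  xmlns:cfm="urn:bank:cfm">

    <!-- Map a legacy customer record to the unified (CFM) representation -->
    <xsl:template match="/LegacyCustomer">
      <cfm:Customer>
        <cfm:Name>
          <xsl:value-of select="concat(FIRST_NAME, ' ', LAST_NAME)"/>
        </cfm:Name>
        <cfm:Address>
          <cfm:Street><xsl:value-of select="ADDR1"/></cfm:Street>
          <cfm:City><xsl:value-of select="CITY"/></cfm:City>
        </cfm:Address>
      </cfm:Customer>
    </xsl:template>
  </xsl:stylesheet>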

From a technical perspective, service orchestration was implemented on the ESB-class platform that was selected for the project.

Figure 10

Since most of the online communications, including the timeout-sensitive ATM channel, were based on orchestrated services, detailed performance testing was performed to measure the actual overhead of the orchestration:

  • testing was performed with 2,000 concurrent users
  • the system was deployed on 8 virtual servers
  • average Web page response time was ~2 s (65% < 1 s)
  • average orchestrated service roundtrip time was ~500 ms
  • overhead of orchestrated service processing was ~20 ms

Figure 11

These results proved that a properly designed service orchestration, one that focuses on business requirements and avoids extensive data transformations, brings only minimal performance overhead.

Later on, the orchestration capabilities were used several times to develop new capabilities required by the business. In one example, the creation of a brand new line of "savings" products, combining term deposits with mutual funds, was completed in only three weeks, while a similar task would have required three months following the traditional, non-service-oriented approach.