Over the past decade, IT systems have grown in the variety of solutions, platforms, frameworks, and implementation approaches available to organizations. This growth has brought greater heterogeneity across these systems and increased complexity within them. Coupled with the drive to organize, reduce costs, and gain efficiencies, it creates the need for a proven methodology that addresses this complexity with the discipline necessary to build a mature, first-class IT organization. Enterprise architecture (EA) is increasingly acknowledged as such a methodology, helping address a wide variety of industry-specific needs based on best practices. Similarly, service-oriented architecture (SOA) has become increasingly popular as a discipline for improving organizational agility and reducing costs. Both have evolved into mature approaches to many of the goals and challenges businesses face today.

This article explores where service-oriented analysis, the first step toward achieving an SOA, can and should take place within an EA, specifically within the Department of Defense Architecture Framework (DoDAF) version 2.0. The goal of service-oriented analysis, while multifaceted, is primarily to deliver a standard set of services that comprise a service inventory; it addresses the questions of which services need to be built and what logic each service should encapsulate. The process of conducting service-oriented analysis is supported in many ways by the strategic, top-down approach of the EA methodology. Service-oriented analysis, therefore, can and, more importantly, should be seen as a necessary part of any successful EA endeavor.
This article compares centralized and decentralized application security models, focusing on technical costs and organizational considerations. The analysis shows that centralized management of security policies has significant advantages over decentralized application security deployments, including cost reduction, better risk mitigation, and greater freedom for application developers to focus on creating business value.
Now, more than ever, the global business environment expects greater customer service, demands deeper value-chain integration, and drives fiercer competition, all while requiring corporations to perform efficiently with diminishing resources. IT departments are in the midst of this global storm and are pushed to deliver applications rapidly while minimizing costs. Fortunately, with the maturity of agile development, SOA and related standards, and cloud computing, the foundations are available for building a resilient, nimble, and cost-effective IT infrastructure that is responsive to business needs.
Modern applications that meet current business needs consume information from multiple sources, both internal and external. Composite applications, rich Internet applications, service APIs, virtualization, and cloud services provide extensive integration of data for real-time information access. This drive to open up business applications for integration comes at a cost: application security. As companies open up systems for greater information access, they expose those systems to broader security risks, including sensitive data leaks, unauthorized information access, and a growing attack surface. In this article, we will contrast centralized and decentralized security models and explore how corporations are using centralized application security for cost-effective, consistent, and manageable security.
Application security is deployed within corporations in centralized (hub-spoke), decentralized (point-to-point), or hybrid models.
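As an illustrative sketch (not taken from the article), the hypothetical CentralPolicyService below shows the idea behind the hub-spoke model: every application queries one shared policy decision point for authorization decisions, instead of each application embedding its own rules.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of the hub-spoke model: a central policy decision
// point (the "hub") that every application (the "spokes") consults,
// rather than hard-coding authorization rules in each application.
public class CentralPolicyService {

    // role -> actions that role may perform; in a real deployment this
    // would live in a managed policy store, not an in-memory map.
    private final Map<String, Set<String>> policies = new HashMap<>();

    public void grant(String role, String action) {
        policies.computeIfAbsent(role, r -> new HashSet<>()).add(action);
    }

    // A single enforcement decision shared by all applications: changing
    // a policy here changes behavior everywhere at once.
    public boolean isAllowed(String role, String action) {
        return policies.getOrDefault(role, Set.of()).contains(action);
    }

    public static void main(String[] args) {
        CentralPolicyService pdp = new CentralPolicyService();
        pdp.grant("clerk", "invoice:read");
        pdp.grant("manager", "invoice:approve");
        System.out.println(pdp.isAllowed("clerk", "invoice:read"));    // true
        System.out.println(pdp.isAllowed("clerk", "invoice:approve")); // false
    }
}
```

In the decentralized model, the equivalent of the `policies` map would be duplicated inside every application, which is exactly what makes policy changes and audits costly.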
This article, adapted for the Service Technology Magazine from the Spring Web Services 2 Cookbook, covers integration testing using Spring-JUnit support, server-side integration testing using MockWebServiceClient, client-side integration testing using MockWebServiceServer, monitoring TCP messages of a Web Service using TCPMon, and monitoring and load/functional testing a Web Service using soapUI.
New software development strategies require comprehensive testing in order to achieve quality in the software development process. Test-driven development (TDD) is an evolutionary approach that combines the test-first development process and refactoring. In the test-first development process, you write a test before writing the production code that makes it pass. This testing includes unit testing as well as integration testing.
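The test-first cycle can be sketched in plain Java, using a hypothetical PriceCalculator class (not from the article) and bare assertions in place of JUnit: the test is written first, fails until the production code exists, and is re-run after each refactoring step.

```java
// Test-first sketch: the assertions in main() are written before
// PriceCalculator exists; they initially fail to compile, and that
// failure drives the implementation below.
public class PriceCalculatorTest {

    // Production code written just to make the test pass; amounts are
    // kept in cents so the arithmetic stays exact.
    static class PriceCalculator {
        static long totalWithTaxCents(long netCents, long taxPercent) {
            return netCents * (100 + taxPercent) / 100;
        }
    }

    public static void main(String[] args) {
        // The "test" -- written first, re-run after each refactoring.
        if (PriceCalculator.totalWithTaxCents(10_000, 8) != 10_800) {
            throw new AssertionError("8% tax on 100.00 should give 108.00");
        }
        if (PriceCalculator.totalWithTaxCents(0, 8) != 0) {
            throw new AssertionError("tax on zero should be zero");
        }
        System.out.println("all tests passed");
    }
}
```

A mock-based unit test follows the same rhythm; the integration tests discussed below simply extend it to real XML messages.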
Spring provides support for integration testing through the spring-test module. These features include dependency injection and loading the application context within the test environment. Writing a unit test that uses a mock framework (such as EasyMock or JMock) to test a Web Service is quite easy. However, such a test does not verify the content of the XML messages, so it does not simulate the real production environment. Spring Web Services 2.0 provides features for creating server-side as well as client-side integration tests. Using these features, it is very simple to test a SOAP service without deploying it on a server (when testing the server side) and without setting up a server (when testing the client side).
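The value of checking message content can be sketched without Spring at all. The hypothetical PayloadCheck helper below is a stand-in, not the MockWebServiceClient API: it compares two XML payloads as DOM trees, which is the kind of verification a plain mock-object test skips.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

// Hypothetical helper illustrating XML payload verification: two
// payloads are compared as parsed DOM trees, not as raw strings.
public class PayloadCheck {

    // Parse an XML string into a normalized, namespace-aware DOM tree.
    private static Document parse(String xml) throws Exception {
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setNamespaceAware(true);
        Document doc = factory.newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        doc.normalizeDocument();
        return doc;
    }

    // True when the two payloads have identical structure and content.
    static boolean samePayload(String expected, String actual) {
        try {
            return parse(expected).isEqualNode(parse(actual));
        } catch (Exception e) {
            throw new IllegalArgumentException("malformed XML payload", e);
        }
    }

    public static void main(String[] args) {
        String expected = "<orderResponse><status>confirmed</status></orderResponse>";
        String actual   = "<orderResponse><status>confirmed</status></orderResponse>";
        String wrong    = "<orderResponse><status>rejected</status></orderResponse>";
        if (!samePayload(expected, actual)) {
            throw new AssertionError("identical payloads should match");
        }
        if (samePayload(expected, wrong)) {
            throw new AssertionError("different content must be detected");
        }
        System.out.println("payload checks passed");
    }
}
```

Spring-WS's server-side and client-side test support performs this payload comparison for you, against the actual endpoint or client code, without a deployed server.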
In the first recipe, we discuss how to use the Spring framework for integration testing. The next two recipes detail the new integration-testing features of Spring-WS 2.0. The last two recipes present tools, such as soapUI and TCPMon, for monitoring and testing Web Services.
This article is geared to help CIOs, IT managers, and architects understand: when to use BizTalk Server, SQL Server, or Windows Server AppFabric; when to use the Azure Service Bus; when to use a hybrid integration architecture; the federated enterprise service bus (ESB); integration of PaaS and SaaS applications; and how to design future-proof integration patterns.
Integration is key in a best-of-breed application landscape, in B2B (business-to-business) scenarios, or in a service-oriented architecture (SOA). Applications, functions, and services need to exchange data in order to participate in business processes. In the early days, integration was done by creating peer-to-peer connections between applications using direct database interaction, exchanging proprietary import/export files, or executing API (application programming interface) calls. Over time, this resulted in unmanageable and poorly performing applications due to the well-known “spaghetti integration” dilemma.
With the arrival of the enterprise application integration (EAI) server, integration took a more sophisticated approach. Very often, however, it still results in peer-to-peer connection spaghetti, but now with a broker (hub) in the middle.
Especially with the acceptance of SOA, the ESB integration pattern was born: loosely coupled integration through an enterprise service bus, preferably based on Thomas Erl’s principles of service design, thereby creating a manageable and flexible abstraction layer. But the risk of creating unmanageable and poorly performing integration architectures is still there; it is said that the greatest competitor of integration middleware is the developer.
Traditionally, integration middleware (the common name for the group of products facilitating ESB, EAI, and B2B integration) is hosted and operated by the company that also hosts the back-end applications supporting its business processes. For some scenarios, mainly B2B, external providers are sometimes used to integrate third parties outside the company firewalls.