Hans Wierenga



Hans Wierenga is an enterprise architect working for DCE Consultants, an Amsterdam based management and IT consultancy firm. He is one of the two founding authors of the Netherlands Governmental Reference Architecture (NORA), which is the official reference architecture for all layers of government in the Netherlands. He is one of the three key authors of the GEMMA Information Architecture, in which the principles of the NORA have been translated into concrete designs to enable local governments across the Netherlands to act as the entry point of citizens and businesses to all government services. He is currently working on a project to connect all provincial governments in the Netherlands to the national databanks.

Other recent publications include 10 SOA Commandments and Architectural Information Economics.

Hans holds a Bachelor of Science (Honours) degree from the University of Tasmania and a Graduate Diploma in Computing Studies from Canberra University.





Information Security for SOA: Why the Information Security Consultancy
Industry Needs a Major Overhaul - Part II

by Hans Wierenga

Published: September 15, 2010 • SOA Magazine Issue XLIII

The following is the second part of a two-part series on "Information Security for SOA: Why the Information Security Consultancy Industry Needs a Major Overhaul". The first part of the article makes the case for replacing a systems-centric approach based on confidentiality, integrity and availability with a service-centered approach based on engendering trust, showing respect and delivering utility, and discusses what is necessary in order to engender trust. It can be read here.

Showing Respect

Business Value

In general, people will entrust their personal information freely to you only if they are satisfied that you will use it with respect for them. If they suspect that you don't respect them and expect that your competitor will, it is likely that you will lose business to that competitor, particularly if doing business with you requires them to divulge personal information. And if you can't lose business because your customers have nowhere to turn, the expectation of not being respected will lead many customers to hold back information whenever they can, thereby reducing your effectiveness. Providing your customers with the experience of being respected delivers pure business value.

Showing respect has, however, consequences that go wider than your business. Showing respect is a statement about the society you want to live in and about the values you hold dear. Do you want to live in a society where your privacy is respected as a matter of course? Then think further than your bottom line.

The current information security consultancy industry approach does not contain the concept 'respect'. The nearest it gets is the concept 'confidentiality', and that is not the same thing. Within the CIA-paradigm, to protect and preserve the confidentiality of information is defined as ensuring that it is not made available or disclosed to unauthorized individuals and processes. This definition is flawed: who is authorised is part of the means, not the goal. Ultimately, determining which persons and processes are authorized is done by the system. This goes awry when the systems are either configured or fooled into accepting as authorized those who have no business accessing and manipulating the information. In the Netherlands, people have been killed because 'helpful' government officials divulged information as to their whereabouts to imposters claiming to be officials from another town. In other words, the imposters pretended to be a part of the system. Confidentiality is a systems concept. You need respect in order to turn confidentiality into business value. Confidentiality is what you do, whereas respect is not only what you do but also what you believe and what you proclaim.

The information security consultancy industry gives us no guidelines on showing respect. To learn how to show respect, we do not, however, need to turn to some management guru. Most dating sites provide enough tips to get you going. These tips can generally be boiled down to five precepts: think respect, demonstrate respect, proclaim respect, facilitate respect and expect respect. As we shall show below, none of these precepts are adequately covered by the term confidentiality.

Like trust, and unlike confidentiality, customer experiences of being respected (or not) and the value it adds to your business are things that you can measure, simply by conducting opinion polls.

Think Respect

Respect is a cultural value. It should be a reflex, not a conscious behaviour, because it shows itself in almost everything you do. You can't hide disrespect for long. You may think that it is sufficient to refrain from saying anything bad about people while they are present, but the way you talk about others behind their backs is the way your listeners expect you to talk about them when they are gone. That is why you must think respect. If you don't think respect, all the other aspects of respect are ten times harder.

Of course, the information security consultancy industry does concern itself with changing the thinking of stakeholders, albeit indirectly. Information security awareness is increased whenever an organisation applies conscious attention to information security, even when the measures which are applied are not particularly effective. There is a case for evaluating such projects in terms of how they affect our thinking, just as we evaluate a dance not in terms of the progress that the dancers make from one part of the dance floor to another but in terms of the changes in the attitudes of the dancers to each other. Having said that, the more we focus the attention of stakeholders on the things that really matter, the more likely we are to achieve the changes we desire.

Thinking respect means acknowledging that the subject has rights at the same level as you. Subjects are just as entitled to see what use is made of their information as you are. You can make that visible by letting them see who has had access to their information. Even if subjects rarely ask to see this, the mere fact that you can tell them that they can is important. And it has useful side-effects. In order to be able to disclose who accessed the information and why, you must record all such access. This can be used as a basis for detecting wrongful access, long after the event. We shall discuss later in this article why that is such an effective deterrent.

Thinking respect makes it natural to involve cultural norms and values in your business case for information security, and not just your bottom line, as the information security consultancy industry proclaims. For example, in the ISO 27002 derivative "the Standard of Good Practice for Information Security", the business case for confidentiality is based on the damage to the organisation, not to its customers. If it is cheaper to say sorry than to prevent people from being murdered as a consequence of a security breach of the system, the Standard indicates that there is no business case for investing in prevention.

Demonstrate Respect

Each interaction between you and your stakeholders tells the stakeholder something about how you respect him. You should therefore examine these interactions in detail. There are a number of things that you should pay particular attention to, as follows.

Firstly, it should be clear to the stakeholder how he and his information will be respected. That is not primarily a question of having appropriate statements in fine print; it depends more on the attitude that you convey in your interactions. For example, if you sell insurance, you are there to meet his insurance needs, period. It should be evident that his information will be used only for the purposes for which it is collected, because it is evident that these purposes are what you are all about.

Secondly, the interaction must convey the message that you and the stakeholder are of equal value. Don't ask the stakeholder to wait until it suits you to pay attention, or ask him to provide you with information that you either should already know or that is clearly not pertinent to his case. Don't create the impression that your needs are more important than his. If your stakeholder suspects you of having a hidden agenda, you are lost. You must create a relationship of mutual trust. And that begins by extending trust. Not blind trust, just trust. People do not feel less trusted when subjected to routine controls which are designed to weed out the people who are not like them, provided that these controls are performed in a discreet, non-obtrusive and non-discriminating manner. If you can conduct these controls in a manner which reinforces the "us" of you and the stakeholder as the dependable and honest ones versus the "them" of the cheaters, they actually enhance the "equal value" perception.

Thirdly, you must give the stakeholder confidence that his privacy is in safe hands. You must visibly treat personal information with care. Given a choice between a doctor who leaves the records of the previous patient on his desk while treating you and a doctor who doesn't, most of us would choose the latter. And you must insist on genuine proof of identity before you provide the stakeholder with information concerning himself. If you don't, how can he rest easy in the assurance that you will protect his information against prying eyes and inquisitive ears? If you don't, aren't you creating the impression that you are helping him via the back door, and thereby attaching an odium of sleaziness to your relationship? A respected customer is helped via the front door, and there are hundreds of ways to make that helpful enough.

It should be noted that respect has a social context, and therefore, unlike confidentiality, means different things in different situations. In relationships which are not entered into freely - such as that between a citizen and the tax authorities - information must be treated with far more respect than the same information in a voluntary relationship. And in social media such as Facebook that information can be used even more freely, except, of course, by Facebook itself.

Proclaim Respect

When you succeed in demonstrating respect, you can reinforce the business value which that delivers by proclaiming your policy of respect. Back your deeds up with words. That makes your respect tangible, it helps your stakeholders to give a name to their experience of you. For this to work, it is important not to serve your words together with your deeds. Don't say: "I am doing this because I respect you." Your deeds are proclamation enough. The function of words is to help people make sense of their experiences, and you should let them do that in their own good time.

You proclaim respect by publishing your respect policy on your website. But that is not sufficient: you must also refer to this policy in your sales offers and contracts. That turns words into commitments, and commitments persuade where mere words just bore.

The words you use to proclaim respect are important. Use words which imply an active stance. Use the word 'privacy', but only in combination with the word 'respect'. Have a policy of respect rather than a privacy policy. Never use the word 'privacy' without the word 'respect'. 'Respect' is a stronger term. Most people will happily relinquish their privacy if they think that it will gain them respect.

Proclaiming respect adds business value in ways that proclaiming confidentiality cannot, because confidentiality is abstract and passive whereas respect is tangible and active.

Facilitate Respect

In our daily social interactions, we have at our disposal a vast array of expressions and mannerisms which we can employ to show respect without even having to consciously think about it. For most of us, it takes vastly less effort to say "Good morning" or "Have a nice day" than to say "Have a rotten day", even when we are in a sour mood. Our culture facilitates respect and makes disrespect more difficult. Respectful treatment of stakeholder information should be supported in the same way. The safe way should be the easy and the reinforced way. Provide people with procedures, training, tools and facilities so that they don't need to worry about compromising respect. To do that requires a way of thinking in which the systems are seen as serving the users, rather than the users being seen as serving the systems.

Note that facilitating respect is a necessary ingredient of all that you do, even those actions which do not directly expose information to disclosure. We do not use production data for test purposes, because we respect our stakeholders. Respect goes deeper than confidentiality: respect is about purpose, confidentiality is about disclosure only. Respect demands that information that no longer serves the purposes for which it was collected is destroyed, whereas confidentiality implies this at best in a very indirect manner.

Facilitating respect is not just a matter of supporting individuals. A culture of respect requires that respectful behaviour is reinforced and aberrant behaviour is detected and discouraged. The facilitation of respect requires manageable access security. And manageable access security is incompatible with an approach in which each of your many, many information systems is individually secured, as the CIA-paradigm dictates.

Expect Respect

Respect is a two-way street. Nobody is impressed with the respect of somebody who they themselves do not respect. If you allow your stakeholder to walk all over you, how do you expect him to trust you with valuable information?

You can show that you expect to be respected when you refuse to divulge confidential information to stakeholders who have not adequately demonstrated their identity. If somebody rings your call centre and is unable to identify himself, you don't divulge personal information concerning the person he proclaims himself to be, however vociferous he is. Even if you think divulging the information is the customer-friendly thing to do, you should consider the possibility that he will afterwards be so uneasy about it that he will take his business elsewhere. And you should consider other, safer means of making the requested information immediately available to him; for example by sending him an SMS, or by providing him with the means to identify himself securely.

Delivering Utility

Business Value

The utility of information is the degree to which it is made available to stakeholders in a way that is useful to them, at times and places that are useful to them. Its business value is evident.

Naturally, utility is in the first instance the province of systems designers, rather than of information security advisors. It enters into the information security domain when the advisors are charged with establishing the difference between what the information systems of an organisation may reasonably be expected to achieve and what they actually achieve. COBIT applies this way of thinking in order to include effectiveness and efficiency in its list of criteria. It enters deeper into the information security domain when it concerns functions that are needed in order to engender trust in the information. This applies most strongly to the functionality which is required in order to properly confront reality.

To understand what it means to deliver utility, it is helpful to imagine that you are in desperate need of a hamburger, now. You will need to be able to find a place that can serve you a hamburger, that place must be open for business, the hamburger must be served quickly and it must be digestible. Thus understood, utility goes far further than the CIA concept of availability, which is defined to be the need for designated information systems functions to be available when required. Utility is determined by the user, availability by the provider. Utility sets demands on both the ability to find the information and its digestibility, availability does not. Utility is diminished by not providing functionality in the first place, availability is not.

It is imperative that information security be designed so that the concept of utility determines the context within which confidentiality and availability are considered. If this is not the case, information security advisors are inclined to disallow useful functions for security reasons, although they have neither the inclination nor the mandate to work constructively on solutions which permit these functions to be delivered securely. In my experience, this results in information security advisors being responsible for far greater denial of service than hackers.

Like trust and respect - and unlike availability - utility and the business value it adds can be measured, by asking the stakeholders.

Ability to Find Services

One of the key differences between the Internet age and the period in which the CIA-paradigm first saw the light of day is that there is now such an enormous supply of information and information functions that it has become a problem to find the best implementation, or even a useful one. We have a problem of choice. To make such choices we need information as to what is available. In service-oriented architectures there are tools to provide this information, but they are not used anywhere near as much as they could be. In part, this is due to an approach known as security by obscurity, which has long been decried by the information security consultancy industry, and rightly so. However, it is allowed to persist when it comes to service discovery tools.


The information security consultancy industry limits the use of the term 'availability' to functions of information systems, as they have been designed. Availability is reduced if these functions are not available for use when they are supposed to be, at least not with the required qualities. Such a reduction could be caused by a power failure, network overload, disk crashes and a whole host of other factors. Our definition of availability incorporates this, but is broader: an organisation has an availability problem if there is any behaviour of its information systems that ought to be available and isn't. Behaviour that ought to be available but has never been built is structurally unavailable, and the fact that this is due to a failure of the information systems department rather than a power failure does not reduce its impact on the proper functioning of the organisation.

In the Internet era, most organisations have immense structural availability problems. Information systems functions and behaviour which are necessary in order to operate effectively using the Internet are often just not present. Typically, the following services are not fully present, mostly not even to the degree required in order to create transparency, right wrongs and confront reality, as described earlier in this article.

  • The ability to present to the stakeholder, directly via the Internet and indirectly via a call centre and in writing, all the information concerning the stakeholder for which the stakeholder is the authentic source, and all correspondence to and from the stakeholder regardless of the medium used for that correspondence (e-mail, the postal system, handing documents over the counter, …).
  • The ability to accept and contextually validate input from the stakeholder both via webforms and via services. Contextual validation includes checks that can only be carried out using the information resources of the organisation. For example, determining whether a requested delivery date is a valid date in the future is a context-free validation, whereas determining whether the organisation is capable of delivering the goods on that date is a contextual validation requiring access to its ERP system.
  • The ability to send outgoing messages to the stakeholder via the medium and to the address of choice of the stakeholder, regardless of the information system from which the messages originate.
  • The ability to inform a stakeholder actively and in response to a query on the progress in dealing with any request that the stakeholder has placed with the organisation.
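The distinction drawn above between context-free and contextual validation can be illustrated with a minimal sketch. All names here (the `ERP_STOCK` table standing in for an ERP system, the product name) are hypothetical, for illustration only:

```python
from datetime import date, timedelta

# Hypothetical stock table standing in for the organisation's ERP system.
ERP_STOCK = {("widget", date(2030, 12, 1)): 500}

def validate_context_free(requested: date) -> bool:
    """Context-free validation: the requested delivery date must simply
    be a valid date in the future -- no organisational data is needed."""
    return requested > date.today()

def validate_contextual(product: str, quantity: int, requested: date) -> bool:
    """Contextual validation: can the organisation actually deliver the
    goods on that date? This check requires access to the organisation's
    own information resources (here, the ERP stock table)."""
    return ERP_STOCK.get((product, requested), 0) >= quantity
```

The point of the distinction is architectural: context-free checks can run anywhere, even in the browser, whereas contextual checks require a secured service path into back-office systems.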

Even if we were to limit the concept of availability to just the functions already present in the information systems of the organisation, there are a number of subtle differences between the standard information security consultancy industry application of the term and its application in the Internet age. Firstly, with SOA availability is applied to services, rather than to information systems. Secondly, availability requirements are determined in the first instance by the stakeholder, not the organisation, and may therefore focus on times and places which make sense to the stakeholder. Thirdly, continuity must be provided not just in the fact of availability but also in the manner of it. For example, a link to an item on your website that worked yesterday should work today too, even if the item has been moved off the front page.


One of the key innovations of the fast food industry, as practised by hamburger chains such as McDonald's, was to start making the hamburgers before the customer ordered them, so that they would be ready when the order was placed. In the Internet age, the same principle should be applied to any service which other services will need to use in the ordinary course of events. Only then is it possible for new services to be developed quickly, efficiently and manageably. For if, in a project to develop a new service, these other services must be developed as well, that greatly increases the complexity of the project. As a rule of thumb, doubling the number of things that must be built results in a quadrupling of the project complexity.

The services that should be available before anyone asks for them include the following:

  • Authentication: establishing that somebody who wishes to access your information is indeed who he says he is
  • Authorisation control: establishing that somebody should be permitted to carry out an action
  • Translation of messages from one format to another
  • Making shared customer information available
  • Making product information available
  • Making the information about a customer that a service stores available to other services.


The first time my son warmed up a frozen pizza in the oven, he failed to remove the plastic covering. It wasn't a great success. Of course, he should have read the instructions. But why can't they make frozen pizzas that don't need a plastic covering in addition to the cardboard box that they come in? Why create an unnecessary barrier? Unfortunately, it is not only pizzas which are protected by unnecessary barriers: the information systems consultancy industry is responsible for many barriers which make it more difficult to use information systems, as we shall discuss below.

One of the key advantages of Service-Oriented Architecture is that it promotes the exchange of information using formats (XML) that can be easily interpreted in order to both process and present it, without the recipient having to build knowledge of all the metadata into his own systems. That makes it very much easier for a stakeholder to use your information. Far less time needs to be spent on agreeing to data formats, building them into computer programmes and testing that these programmes work. Changes on your side normally result in just minor effects on his. It is cheaper, quicker and more manageable. Changes on his side need not concern you at all. As a result, the barriers to use are greatly diminished. Even though the availability is the same, the utility is greatly increased.

The information security consultancy industry is itself responsible for barriers to use which are concerned with the way in which we must identify ourselves to information suppliers. I have a list of some twenty organisations with which I maintain some sort of online account, and I suspect that many of you will regard me as a minor player in this regard. There is no way most of us can remember all the account names and passwords without writing them down or using the same password over and over again. If you want to discover every detail of somebody's life, all you have to do is entice him to open an account on your site and note the password he uses. A single digital identity which you can use at any site you want would provide far more effective access security and greatly increase the utility of the sites.

CIA within a TRU Context

The Best of Both Worlds?

Admittedly, the above comparison of the CIA paradigm (confidentiality, integrity, availability) with the TRU-approach (trust, respect, utility) seems harsh. Are you really better off if you ignore the information security consultancy industry and instead allow yourself to be guided by a management guru with no IT background, a dating site and your desire for a hamburger? Wouldn't it be better to implement CIA within a TRU context, taking additional measures for all those security aims which the CIA does not adequately address? Wouldn't you then be able to ensure that your information security was business driven, whilst taking advantage of the vast experience of people within the information security consultancy industry?

That would be a brilliant idea if the only problem with the CIA paradigm was that it missed some things. Unfortunately, it is also sometimes wrong - not just a little off the mark, but totally in the wrong direction. Provided that you are aware of these problems and are prepared to put your foot down whenever your security advisors go astray, implementing CIA within a TRU-approach is okay, but no more than that. You have to correct for several severe biases, as outlined below.

The Role-Based Access Security Simplification

For about the last twenty years, CIA-security has promoted role-based access security as the ultimate wisdom. The idea is that instead of saying employee Smith is allowed to change salaries, we say that Smith is a payroll clerk and everybody who is a payroll clerk is allowed to change salaries. That way, when Smith is replaced by Jones, we only need to record that Jones is a payroll clerk, and this is sufficient to permit him to perform all functions which payroll clerks are entitled to perform, without any change to the payroll application. This way of structuring information, which results in less information needing to be stored and in information needing to be updated in fewer places when something changes, has been known for the past thirty years as the standard practice of normalisation when applied to anything except authorisations, but for CIA-advisors it is still a big deal. Just google the words "role based access security" if you think that this is in any way an exaggeration.

Unfortunately, role-based access security is very limiting, particularly in the Internet age. It assumes that what a person should be allowed to do is entirely dependent on what he is. If you are a payroll clerk, you should be able to do everything that payroll clerks are allowed to do. And payroll clerks are allowed to perform any information retrieval or update function that they may need to perform in the course of their work. CIA-advisors call this the "need-to-do"-principle. A more accurate name would be the "might-sometime-under-some-conceivable-circumstance-need-to-do"-principle, because for this to be workable the authorisations must also permit people access to functions that they need to perform only rarely. When you base access security only on what people are, the result tends to be as indiscriminate as a cluster bomb.

It is also possible to introduce other factors into the process of determining what somebody should be allowed to see and do. One factor is the relationship between the person and the subject of the information. We apply this all the time when we allow our customers to see only a slice of all the information available. This slice consists of the information pertaining to them, and not that of other customers. And we should apply this by allowing call centre employees access to customer information only after they have supplied information which correctly authenticates the customer. In general, the authorisations of front-office employees should be slice-based rather than function-based. Another factor consists of the tasks concerning the subject of the information which have been assigned to the person. For example, we have no problem with a medical specialist seeing our medical file if he is treating us, but for any medical specialist in the country to be able to access our medical file whenever he wants is going a bit too far. The stronger and more specific the assignment to our case, the more legitimate the access. This "assignment" information is typically registered in Business Process Management systems, and there are no good technical reasons why it cannot be harnessed to make access security more accurate, to make it as discriminating as the rifle shot of a marksman. However, most CIA-advisors actively discourage the use of such information, and you will find almost nothing in the information security consultancy industry on the topic.
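An access decision that weighs slice, assignment and role together can be sketched in a few lines. This is a minimal illustration with hypothetical names (`User`, `Record`, `assigned_cases`), not a reference to any standard or product; in practice the assignment data would come from a Business Process Management system:

```python
from dataclasses import dataclass, field

@dataclass
class User:
    id: str
    roles: set = field(default_factory=set)
    # Cases assigned to this user, e.g. fed from a BPM system.
    assigned_cases: set = field(default_factory=set)

@dataclass
class Record:
    subject_id: str  # the person the information is about
    kind: str

def may_access(user: User, record: Record) -> bool:
    """Combine the three factors discussed above, from most to least specific."""
    if record.subject_id == user.id:
        return True   # slice-based: one's own information
    if record.subject_id in user.assigned_cases:
        return True   # assignment-based: this case is assigned to the user
    if "payroll_clerk" in user.roles and record.kind == "salary":
        return True   # purely role-based: the broad, indiscriminate grant
    return False
```

Note that the assignment check makes the role check largely redundant for day-to-day work; the role-based grant remains only as the coarse fallback that the CIA-paradigm treats as the whole solution.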

Over-Reliance on Walls

Imagine that you are a burglar, wanting to steal a precious jewel. What would worry you more: a security system consisting of a high wall which you may have difficulty in scaling, or the possibility that the security system could trap you, so that you could be apprehended any time security personnel happened to look your way? With the wall, either you don't scale the wall - in which case you go away and work out a better plan of attack - or you do. Unless you are detected in the very act of scaling the wall, you are fairly safe. With the trap, you are nervous all the time that the trap could get you.

With information security, the difference in the period of risk between a trap-based defence and a wall-based defence is far greater than with burglary. Take, for example, the choice between two strategies for the prevention of fraud concerning minor payments to creditors. In the wall-approach, one person must enter the payment into the system and another person must approve it. In the trap-approach, the same employee can perform both actions, and his actions are recorded and can be analysed in detail up to seven years after the event. The wall-approach is open to compromise because employees will find reasons to "need" the access codes of colleagues, for example in order to be able to function while those colleagues are sick or on vacation. Such misuse can only be detected at the moment it occurs. With the trap-approach, the perpetrator must live with the fear of detection for years and years.
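The trap-approach amounts to an append-only audit trail plus after-the-fact analysis. The sketch below is a toy illustration under stated assumptions (an in-memory list standing in for a durable log, a deliberately simple frequency heuristic as the analysis); real detection would use far richer queries over the retained records:

```python
import time

AUDIT_LOG = []  # stands in for an append-only store retained for up to seven years

def record_payment(clerk: str, creditor: str, amount: float) -> None:
    """Trap-approach: the action itself is allowed, but every detail is
    recorded so that wrongful use can be detected long after the event."""
    AUDIT_LOG.append({"ts": time.time(), "clerk": clerk,
                      "creditor": creditor, "amount": amount})

def suspicious_clerks(max_per_clerk: int = 3) -> list:
    """Later analysis: flag clerks with unusually many minor payments.
    A placeholder heuristic; the point is that analysis can run years later."""
    counts = {}
    for entry in AUDIT_LOG:
        counts[entry["clerk"]] = counts.get(entry["clerk"], 0) + 1
    return [clerk for clerk, n in counts.items() if n > max_per_clerk]
```

The design choice is the one argued above: no wall blocks the clerk at entry time, so there is nothing to route around, but the record is permanent.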

Nevertheless, CIA-advisors have a strong bias towards building walls. That these walls often result in behaviour which undermines security is seen as evidence of the need for even higher walls, which result in still more security-undermining behaviour. This pattern is the almost inevitable result of addressing a behavioural problem with technology, instead of starting with the people whose behaviour should be regulated.

Tailoring Security to the Systems, Instead of the Systems to the Security

The systems-based thinking that is associated with the CIA-paradigm has strong parallels with the transportation industry of the pre-container era. Just as each item of freight in that era needed to be handled according to its own shape, size and weight, the CIA-paradigm results in each information system being addressed as a unique combination of security characteristics. Ensuring that the information security of adjacent systems matches is a complicated and expensive process, delivering messy and unmanageable solutions. This is the inevitable result of making the security fit the systems, instead of making the systems fit the security. In the rest of the IT world, enterprise architecture is used in order to ensure that systems are designed to be interoperable with other systems, but the world of information security lags at least a decade behind such developments.

An architecture for security interoperability between systems should focus on attributes that make sense at the level of information systems. Systems support trust with information reliability, and respect with confidentiality. The basic principles of an interoperable information security architecture were propounded more than thirty years ago by Bell and La Padula [REF-1] for confidentiality, and by Biba [REF-2] for information reliability. Application of these principles in a SOA environment leads to the following architecture:

  1. For each of the attributes 'information reliability' ('reliability' for short) and 'confidentiality' there is a linear scale of security levels, going from no support to maximal support.
  2. Each separately securable part of the IT infrastructure - a security space - is designed to support a particular level of reliability and a particular level of confidentiality. For example, the file system of a computer may be designed to support 'authentic' reliability and 'private' confidentiality, whereas a website may be designed to support 'certified' reliability and 'none' confidentiality. In Service-Oriented Architectures, a security space is normally a part of the infrastructure with its own service gateway.
  3. Each item of information is classified according to the levels of reliability and of confidentiality which must be maintained for that item. This classification forms part of the item. Transformations of the item may change the level. For example, validating an incoming message increases its reliability.
  4. An item of information may of itself maintain a particular level of reliability by means of a digital signature, and a particular level of confidentiality by means of encryption. An item which maintains its own reliability and confidentiality to the desired levels is permitted in any security space.
  5. Otherwise, an item of information is permitted to be in a security space if that space maintains at least the level of reliability and the level of confidentiality which must be maintained for the item. The infrastructure prohibits the item from being passed from one space to another if the target space cannot satisfy the confidentiality level. If the target space cannot satisfy the reliability level, the reliability level of the item is downgraded to that of the target space.
  6. A person or program may access a space only if authorised to at least the space's confidentiality level and authenticated by means of a mechanism strong enough for that level. Within the space, additional access constraints may apply. A person or program may write or modify information in a space only if authorised to at least the space's reliability level and authenticated by means of a mechanism strong enough for that level.
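The admission and downgrade rules in points 4 and 5 can be sketched in code. This is a minimal illustration only: the level names come from the table later in this article, but the dictionary shapes, flags and function names are invented for the example and are not part of the architecture described.

```python
# Ordered scales: index position is the security level (low to high).
RELIABILITY = ["none", "sanitized", "validated", "authentic", "certified"]
CONFIDENTIALITY = ["none", "public", "restricted", "private", "sensitive", "secret"]

def level(scale, name):
    """Numeric rank of a named level on its scale."""
    return scale.index(name)

def admit(item, space):
    """Decide whether an item may enter a security space.

    item and space are dicts carrying 'reliability' and 'confidentiality'
    level names; the item may also carry 'signed' and 'encrypted' flags.
    Returns (admitted, resulting_reliability_level_of_the_item).
    """
    # Point 4: an item that maintains its own reliability (signature) and
    # confidentiality (encryption) is permitted in any security space.
    if item.get("signed") and item.get("encrypted"):
        return True, item["reliability"]
    # Point 5: the target space must uphold the item's confidentiality level.
    if level(CONFIDENTIALITY, space["confidentiality"]) < level(CONFIDENTIALITY, item["confidentiality"]):
        return False, item["reliability"]
    # Point 5: a space with weaker reliability admits the item, but the
    # item's reliability is downgraded to that of the space.
    if level(RELIABILITY, space["reliability"]) < level(RELIABILITY, item["reliability"]):
        return True, space["reliability"]
    return True, item["reliability"]
```

For example, passing an 'authentic' item into a space that only supports 'validated' reliability admits it but downgrades it to 'validated', whereas a 'secret' item is refused entry to a merely 'public' space.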

It is of prime importance that all organisations use the same levels, with the same meaning. That way it is possible to ensure that all components which go into producing a service result meet the security levels which the service requires. Here is a suggestion:

Reliability levels:

  None: No guarantees as to the reliability of the information are given.
  Sanitized: The source is not an authentic source for this information, but does guarantee that the content has been subjected to all appropriate contextless controls (for example, each date is a valid date).
  Validated: The source is not an authentic source for this information, but does guarantee that the content has been validated in the source's business context (for example, if the information is that employee X has been off duty due to sickness from a particular date, that X is indeed an employee and that he was on duty until that date).
  Authentic: The information originates from an authentic source and has been validated by that source.
  Certified: The information is authentic and has been digitally signed by the authentic source.

Confidentiality levels:

  None: Available to all.
  Public: Available to weakly authenticated users.
  Restricted: Available to weakly authenticated users who are specifically authorised for that type of information.
  Private: Available to strongly authenticated users who are specifically authorised for that type of information and have a registered responsibility for the subject of the information.
  Sensitive: Available to strongly authenticated users who are specifically authorised for that type of information and have a registered responsibility for the subject of the information, in the course of their carrying out an assigned task with respect to the subject.
  Secret: Available to authenticated users via bespoke authorisation and very strong authentication.

For utility a one-dimensional categorisation makes less sense, but the approach is the same as for trust and respect. We determine the requirements of the highest-level services and use these to derive the requirements of the lower-level services. The requirements concern the required up-time (for example, 7 x 24) and the intensity of use that must be supported. Every lower-level service must offer at least the same up-time as, and must be able to support at least the combined intensity of use of, all the higher-level services into which it is incorporated.
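The aggregation rule for utility can be sketched as follows. The field names and units are illustrative assumptions; any comparable measures of up-time and intensity would do.

```python
def required_utility(consumers):
    """Derive the utility requirements of a lower-level service from the
    higher-level services (consumers) that incorporate it.

    consumers: list of dicts, each with 'uptime' (a fraction such as 0.999)
    and 'peak_calls_per_hour' (intensity of use).
    """
    return {
        # Up-time must match the strictest consumer...
        "uptime": max(c["uptime"] for c in consumers),
        # ...while intensity must cover all consumers combined.
        "peak_calls_per_hour": sum(c["peak_calls_per_hour"] for c in consumers),
    }
```

So a service shared by a 99% / 100-calls-per-hour consumer and a 99.9% / 50-calls-per-hour consumer must itself offer 99.9% up-time and handle 150 calls per hour.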

The Safety of Islands

The information security consultancy industry has a strong bias towards maintaining its own control over everything to do with information security, even when many of the stakeholders are external to the organisation. For example, take-up of federated identity solutions - in which organisations grant access to their information systems to users who have been authenticated by other parties and of whom they themselves have no prior knowledge - has been very slow. Instead, organisations insist on maintaining an individual account and password for each external user, despite the fact that this creates huge barriers for those users and encourages them to write down their passwords, thereby compromising security rather than enhancing it. In the days when each organisation was effectively an information island, that may have been appropriate; in the Internet age it makes no sense at all.

The slow take-up of federated identity technologies is due in part to the information security consultancy industry's advice to organisations to be "in control" - in other words, to become a fortress, dependent on no one.


Conclusion

Current information security consultancy approaches try to improve the wrong attributes - confidentiality, integrity and availability, instead of trust, respect and utility - and try to improve them at the level of information systems rather than services. These errors are so fundamental that attempts to remedy them by gradual, incremental change are doomed to failure. It is time to start afresh, using goals, methods and approaches appropriate to the new age. We must learn to engender trust, show respect and deliver utility. We must replace systems-centricity with service-centricity. We must align authorisations with tasks. Then we can use information security to reap business benefits.

The key enabler for this new approach to information security is the Service-Oriented Architecture paradigm. It can be applied at the levels of the business, processes and information systems in order to provide a coherent account of business value. Stakeholder experiences of services can be measured, providing a means to measure the contribution of information security to that value. These experiences can be discussed in terms of the trust they engender, the respect shown through them and the utility they provide. Information security and Service-Oriented Architecture go hand in hand.


References

[REF-1] Bell, David Elliott and La Padula, Leonard J.: "Secure Computer Systems: Mathematical Foundations", MITRE Corporation, 1973
[REF-2] Biba, K. J.: "Integrity Considerations for Secure Computer Systems", MTR-3153, The MITRE Corporation, April 1977
[REF-3] COBIT is available at
[REF-4] CRAMM (CCTA Risk Analysis and Management Method): see
[REF-5] ISO/IEC 27000 family of standards: for an overview see
[REF-6] The Standard of Good Practice for Information Security is available at
[REF-7] Wierenga, Hans (2010): 10 SOA Commandments, available at