
Paul S. Prueitt

Biography

Professor Prueitt has taught mathematics, physics and computer science courses in the nation's community colleges, four-year colleges and universities. He has served as Research Professor in Physics at Georgetown University and Research Professor of Computer Science at George Washington University. He has served as Associate Professor or Assistant Professor of Mathematics at HBCUs in Virginia, Tennessee, Alabama and Georgia. Prueitt was co-director of an international research center at Georgetown University (1991-1994). He is an NSF reviewer and Principal Investigator. He served for over a decade as an independent consultant focused on information infrastructure, software platforms and intelligence algorithms. He has consulted on national intelligence software platforms and continues this work under private contracts.

His post-Master's training in pure and applied mathematics focused on real analysis, topology and numerical analysis. His PhD, earned in 1988 from The University of Texas at Arlington, used differential and difference equations as models of neural and immunological function. He has over forty-five publications in journals, books and conference proceedings.

Motivated by a desire to understand the nature of the American educational crisis, he served for seven years at a Historically Black College or University or at an open-door, minority-serving institution. He currently teaches mathematics learning support courses in Atlanta using a deep learning method. The method uses four steps to bring mathematics learning support students to a college-level understanding. The method is motivated by a study of behavioral neuroscience and properties of immune response mechanisms.




Systems Science and Service Computing

Published: December 14, 2011 • Service Technology Magazine Issue LVII

Abstract: An extension to complexity science has implications for our theory of computing, and for service computing. Natural complexity is seen to arise from emergence across organizational scales. Emergence is then seen as the aggregation of substructure. In biology, aggregated form meets function and often appears non-deterministic. Many functions are met by the same subset of substructural forms, and a single function may be met by several different subsets of these forms. A set of selection mechanisms determines which elements are used to fulfill which functions. Real-time selection provides robustness in service fulfillment. An application of these principles separates computing resources into framework-based substructure, which is then used to meet functional requirements. This provides a solid foundation for a new service computing technology as well as new science, based on knowledge about interactions between organizational layers. Computing in support of service fulfillment may be optimized through a re-use paradigm based on observation and the extraction of patterns. The patterns are then used to define process atoms. Organizational scale hides specific use when composed process atoms fulfill service support requirements. New forms of information security are then available.


Section One: Stratified Model for Service Computing

Many are interested in tackling enterprise architecture to support social media, and in particular platforms such as Second Life, Twitter, or Facebook. Any stratified or layered service fulfillment model will use an automated process to reduce the measurement of data occurrences into a small set of structured information categories. The means to reproduce these structures or categories may then be linked to a grid, or framework, such as the Zachman framework [REF-1]. This results in a self-monitoring system that gives the people using the social media a common cognitive structure [REF-2].
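
The reduction step can be pictured concretely. The following is a minimal sketch, in Python, assuming hypothetical keywords, event text and framework cells; the mapping tables below are illustrative stand-ins, not part of the Zachman framework itself. Raw data occurrences are collapsed into counts against a small grid of categories.

from collections import Counter

# Hypothetical grid cells: (perspective, interrogative) pairs with labels.
FRAMEWORK_CELLS = {
    ("planner", "what"): "inventory",
    ("owner", "who"): "responsibility",
    ("designer", "how"): "process",
}

# Hypothetical mapping from observed keywords to framework cells.
KEYWORD_TO_CELL = {
    "asset": ("planner", "what"),
    "team": ("owner", "who"),
    "workflow": ("designer", "how"),
}

def categorize(events):
    """Collapse raw event text into occurrence counts per framework cell."""
    counts = Counter()
    for event in events:
        for word in event.lower().split():
            cell = KEYWORD_TO_CELL.get(word)
            if cell is not None:
                counts[cell] += 1
    return counts

stream = ["New asset registered by the team", "Team revised the workflow"]
for cell, n in categorize(stream).items():
    print(FRAMEWORK_CELLS[cell], cell, n)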

Stratified models using variable underlying frameworks create a means to more easily and reliably reflect thematic threads of the online community. In principle, these threads may be made visible to all participants, and can be associated with various self-described intentions. Also in principle, information generated within a social network may be secured by encrypting the underlying set of process atoms. This type of security may be optimal when associated with a data-encoding paradigm based on the use of categorical abstraction [REF-3].

A “non-local” feature of the proposal provides some pleasant surprises. Any enterprise solution should account for how “framework compressed” data might be optimally encoded and broadcast. The surprise is in how this optimality is achieved. A distributed encrypting regime may follow the concept of super distribution of information [REF-4], where digital artifacts are publicly available and distributed with encryption. Once this is in place, a set of rules for generating data is moved within the grid rather than the data itself.
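
A minimal sketch of this idea follows, under stated assumptions: the “rules for generating data” are represented as a small, hypothetical JSON recipe that is encrypted and broadcast publicly, so that only key holders regenerate the data locally. It uses the third-party cryptography package for symmetric encryption and is illustrative only, not the proposal's actual regime.

import json
from cryptography.fernet import Fernet

# Hypothetical generating rules: the data itself is never shipped, only this recipe.
rules = {"atom_ids": [3, 17, 42], "compose": "sequence", "repeat": 2}

key = Fernet.generate_key()                                  # held only by trusted participants
broadcast = Fernet(key).encrypt(json.dumps(rules).encode())  # publicly distributable artifact

def regenerate(token, key):
    """A key holder decrypts the rules and rebuilds the data locally."""
    recipe = json.loads(Fernet(key).decrypt(token).decode())
    return recipe["atom_ids"] * recipe["repeat"]

print(regenerate(broadcast, key))   # [3, 17, 42, 3, 17, 42]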

The first step towards understanding this non-local feature is in the use of replication. Algorithms receive data from measurement processes. An isomorphism is designed to place a metric on the quality of models for observed phenomena. But there is a requirement that all data be encoded into a small set of process atoms. And yet the composed service must behave as any grid participant expects. A metaphor is found in the word occurrences in a book. Encoding each word into a compression dictionary would not be efficient. However, finding the atoms of words allows a reasonable encoding scheme.
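
The metaphor can be made concrete with a small sketch, under the assumption that the “atoms of words” may be approximated by fixed-length character chunks. On a book-length corpus the atom dictionary saturates while the whole-word vocabulary keeps growing; the toy text below only illustrates the mechanics.

def word_dictionary(text):
    """One dictionary entry per distinct whole word."""
    return {w: i for i, w in enumerate(sorted(set(text.split())))}

def atom_dictionary(text, size=3):
    """One entry per distinct fixed-length chunk ("atom") drawn from any word."""
    atoms = {w[i:i + size] for w in text.split() for i in range(0, len(w), size)}
    return {a: i for i, a in enumerate(sorted(atoms))}

def encode(word, atoms, size=3):
    """Encode a word as a sequence of atom indices."""
    return [atoms[word[i:i + size]] for i in range(0, len(word), size)]

text = "stratified models compose substructural atoms into structured services"
words, atoms = word_dictionary(text), atom_dictionary(text)
print(len(words), "whole-word entries;", len(atoms), "atom entries")
print(encode("substructural", atoms))   # the word as a short list of atom indices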

Raw data might be better collapsed into an informational structure by using a pattern extraction technology and one or more formal multi-dimensional frameworks. Informational sub-structure is then analyzed for evidence of fitting into pre-existing categories. This fitting process places a constraint on the entire architecture.

What appears to be an obstacle is in fact the door to a deep revolution in service computing. The constraint imposed by a single framework, if well developed, will work well most of the time. However, it is now necessary to have a measure of newness and novelty. Any closed formalism must be re-opened, periodically, to ensure that the best methods are being used. If and when methods are reconfigured or new ones are added, social media applications may update their members’ informational substructure while also diverging from other substrates.

The closure and modification of substructure is necessary for recognizing changes in experience. A novelty detection mechanism must exist that is sensitive to the discovery of hidden or poorly measured phenomena. Human brains face this problem all the time. This opening and closing of generative fields is similar to the creation and dissolution of electromagnetic and quantum field coherence in the individual human brain [REF-5].

The purpose of an isomorphic algorithm is to develop an ontological model that mirrors an observed natural process. This development is not done directly. Resources are created which then produce inductions about the viability of a computational model. A substrate, or underlying layer, is developed, and a Mill’s logic [REF-6] is provided so that the substrate is complete, in the sense that the logic properly assigns properties to aggregations of the substrate [REF-7].

A fuller description of Mill’s logic and related inferential logics is useful. In essence, these logics measure the properties of composed sets of atoms and assist in decompressing sets of atoms into information consumable by humans.
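
As a minimal sketch, not the author's formal system, Mill's methods of agreement and difference can be read over composed sets of atoms: atoms present in every aggregation exhibiting a property, and absent from every aggregation lacking it, are proposed as the carriers of that property. The aggregations below are hypothetical.

def agreement_and_difference(positives, negatives):
    """positives, negatives: lists of atom sets with and without the property."""
    common = set.intersection(*positives)              # method of agreement
    excluded = set.union(*negatives) if negatives else set()
    return common - excluded                           # method of difference

# Hypothetical aggregations, labelled by whether they fulfill a service function.
with_function = [{"a", "b", "c"}, {"a", "c", "d"}, {"a", "c", "e"}]
without_function = [{"b", "d"}, {"c", "e"}]

print(agreement_and_difference(with_function, without_function))   # {'a'}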

Mill’s logic provides a global measure of the value of a system’s resources. This measurement system may be designed in correspondence to a model of the frontal lobe’s role in selective attention [REF-8] as demonstrated by a model of immune response function [REF-9]. The full architecture remains only in theory but is richly suggestive of new types of cognitive aids. Information scientists are open to the synthesis of this theory with practical experience in enterprise architectures. We need only to align current service computing [REF-10] practices with the elements of natural science and then add what we already know about neuro-architecture.

Organizational stratification is seen to involve two processes: composition, and the synthesis of sub-structural categories. Opposite to aggregation is the synthesis of category, of invariance measured to exist across multiple instances, through what is often called reification. The methods for synthesis of category, while still technically difficult, have a conjectured optimal computational footprint. This footprint also has an optimal encryption regime.
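
A minimal sketch of these two processes, assuming instances are represented as sets of sub-structural atoms (a simplification, not the conjectured optimal method): composition aggregates atoms upward, while reification extracts the invariance shared across instances as a category.

def compose(instances):
    """Aggregation: the composed form carries every atom of its parts."""
    return set().union(*instances)

def reify(instances):
    """Reification: the category is the invariant core measured across instances."""
    return set.intersection(*instances)

# Hypothetical instances observed at the sub-structural level.
observations = [{"sub", "str", "uct"}, {"sub", "str", "ura"}, {"sub", "str", "ser"}]
print(compose(observations))   # aggregate of all atoms seen
print(reify(observations))     # invariant category: {'sub', 'str'}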

Perhaps as important, the development of a small set of atomic “templates” is essential to matching our computational architecture to what we believe occurs in the human brain system. Each template may be unique and yet fully functional within a single brain system. We suspect that a similar uniqueness may be generated for each use of the encryption regime within social media. These regimes will be robust when used by participants and very opaque when viewed from outside.

There is a leap of faith, since the notion of a fixed, small set of semantic atoms is not resolved. However, the architecture has the feature that an operational set of atoms may be rapidly produced based on situational factors. Some parts of these methods are developed in what is called “knowledge engineering”. An extraction, for instance, of a substrate element from within an aggregation into category is called a reification of the substrate category from instances. In theory, this extraction process may be accomplished quickly, and thus the redevelopment of the processing “field” may be accomplished in real time.

Our abstract formalism provides a proper grounding for computer science and, in particular, data repository management systems. The abstract form, e.g., a category’s data container derived from natural process, is implemented as a highly efficient computational mechanism. This is done as part of a computational architecture designed to create a super distribution [REF-11] of the sub-structure to a set of trusted computational mirrors [REF-12].

The mirrors manifest locally when the supporting stratified knowledge ontology system is used. The complete system, first outlined in 2005, creates an arbitrary number of disassociated “computational processing fields” [REF-13]. Finally, computable mechanisms are defined within a digital service architecture that supports ontology management.

The relationship between localized formation of stratified ontology resources and the “computing backplate” is an important one. Localized observation leads to structural information about various targets of observation. These structured forms are emergent in real time using a substrate and a derivative of Mill’s logic.

With the aid of pre-existing information, the resulting composed situational ontology will select some form of coherence, and thus exhibit a “field”. This coherence is then realized as the situational inference engine. A mirror of the situation is observed locally. A global benefit is refined indirectly as sets of atoms are produced and synthesized into a common computing backplate.

The question arises: how might this architecture, and a specific illustration of isomorphic algorithms, be realized in the short term? The answer involves several steps. Current tools and data standards are of course relevant and should be used when possible. The architecture will produce standard ontology, so we may use Resource Description Framework (RDF) standards to create persistent data structures and relationships. However, the proposed information-encoding paradigm will be quite different from standardized ontology web languages.

The difference is seen in two ways. First, the data must be in the form of n-aries or RDF triples, with no tables, no columns and no SQL statement library. We will not use a relational database. The encoding of an RDF repository should be optimal in a sense to be discussed. Second, an underlying multi-dimensional framework is to be used as a source for normalization of the reification of a “parts from wholes” process.
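
A minimal sketch of a triple-only repository follows, with plain Python tuples standing in for RDF triples: no tables, no columns, no SQL. Pattern matching with a wildcard is the only query mechanism; the identifiers below are hypothetical, and the optimal encoding discussed above is not implemented here.

class TripleStore:
    """A triple-only repository: a set of (subject, predicate, object) tuples."""

    def __init__(self):
        self.triples = set()

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def match(self, s=None, p=None, o=None):
        """Yield triples matching a pattern; None matches anything."""
        for triple in self.triples:
            if all(q is None or q == v for q, v in zip((s, p, o), triple)):
                yield triple

store = TripleStore()
store.add("atom:17", "partOf", "category:process")
store.add("category:process", "fulfills", "service:billing")
print(list(store.match(p="partOf")))
print(list(store.match(s="category:process")))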

From data normalization across the system, we get a decrease of two orders of magnitude in the digital bit footprint, i.e., the size of active memory in use. To move towards optimality, use a fractal or fractal-like compression to obtain a small set of primitives. An increase of two orders of magnitude in functional information throughput, i.e., one hundred times current levels, is also predicted. Inference and encryption may be done at the same time. A reduction in computer technology costs and an increase in measurable efficiencies are then possible.
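
A back-of-envelope sketch of the footprint claim, using illustrative numbers that are assumptions rather than measurements: a raw observation record is replaced by a few small indices into a shared atom dictionary.

raw_record_bytes = 400        # assumed size of one un-normalized observation
atoms_per_record = 2          # assumed number of atoms needed to re-express it
bytes_per_atom_index = 2      # 16-bit index into a small shared dictionary

normalized_bytes = atoms_per_record * bytes_per_atom_index
reduction_factor = raw_record_bytes / normalized_bytes
print(f"{raw_record_bytes} B -> {normalized_bytes} B "
      f"(~{reduction_factor:.0f}x, about two orders of magnitude)")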

These value propositions are separate from a qualitative evolution of the new system from current generation decision support systems. This qualitative difference is captured with the sense that isomorphic algorithms create a mirror between human neural processing field characteristics and the abstract nature of a discrete ontological model of situations under review. The mirror is thus based on matching architectural elements known to be functional within the human brain system and a computational architecture [REF-14].

We can turn to neuro-architecture and find stratification. Thus the possibility of integrating current service-oriented computing with neurally-inspired process architecture is suggested.


Section Two: The Tri-level Architecture

Limitations in mathematics, and computer science, may be partially accommodated using “stratified theory”.

The stratified model has three forms:

  1. Conceptual and notational
  2. Implementation as optimal computer processes
  3. Models of neural and immune function

It is proposed that a computing architecture be created to assist social media in synthesizing complex discussions. This architecture may be used to produce analysis about subject matter indicators from the measurement of human language occurrences. A Mill’s logic is used [REF-15][REF-16][REF-17]. The architecture motivates the use of the co-occurrence of parts of words, whole words and phrases as indicating functional roles.

A formalization relating the parsing of words by algorithms and co-occurrence patterns was expressed in research developed between 1996 and 1998 [REF-18]. This was connected to well-known semantic extraction methods. Parsing finds patterns and functional uses for these patterns. Encoded sub-structural information at one level produces compositional organization at a different level.
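
A minimal sketch of co-occurrence measurement over parsed text follows, assuming a simple sliding window; it does not reproduce the 1996-1998 formalization. The units could equally be word parts or phrases; whole words are used here for brevity.

from collections import Counter
from itertools import combinations

def cooccurrence(tokens, window=3):
    """Count unordered token pairs that appear together within a sliding window."""
    counts = Counter()
    for i in range(len(tokens)):
        span = set(tokens[i:i + window])
        for a, b in combinations(sorted(span), 2):
            counts[(a, b)] += 1
    return counts

text = "service atoms compose service functions and service functions compose value"
pairs = cooccurrence(text.split())
for pair, n in pairs.most_common(3):
    print(pair, n)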

Higher-order patterns form from an aggregation of encoded structural information. The meaning of these higher-order patterns is then subject to human interpretation. The effects of the interpretations are encoded into replication mechanisms. Over time, the encoding of consequences from human interpretations, to the degree possible, results in a distributed knowledge base that may then be used to assist in future observation and interpretation.

Stratification provides context generated in real time. For example, ambiguation/disambiguation methods address issues of the emergence of meaning directly. In linguistic systems the point of emergence is where specific word relationships are found. This point is where we must rely on human capacity to make interpretations. In the architecture, human interpretation is situational and emergent, adding to the value of isomorphic algorithms working at the sub-structural and compositional levels of organization.

Function-structure relationships need ambiguation and disambiguation as part of differential responses based on what is available to respond with and what needs to be accomplished. Information on resource availability and use is often external to the computable data. Fortunately, function-structure relationships may be addressed using qualitative structure function analysis [REF-19]. This analysis is then available in real time.

This form of computational knowledge representation takes into account other constraints. This allows choices to be made at times when an aggregation is emerging. The choice may address a specific function that is external to the computing formalism. This is contextualization in real time. For purposes of what is often called “semantic extraction from text”, a representation is being made to balance the limitation of computer technology with human-in-the-loop influence and control.

We may see this as a real-time dynamic on a continuum-based neural model of human decision-making. In natural settings the emergence of function from the aggregation of substructure passes through choice points. At these points the specifics of environmental conditions, such as the distribution of actual substructural elements, form a type of negotiation over the actual synthesis of function. This synthesis is achieved by humans, within the social media, in real time, and the results are propagated in near real time.

It is suggested that during this function-structure negotiation nature builds symmetry inductions. These symmetry inductions produce secondary consequences related to the creation or modification of natural category.


Conclusion

Stratification into layers delineated by time is observed in natural science within physical processes. So as to align new science with current science, complexity must be defined as a phenomenon associated strongly with the emergence of form composed from regularly occurring atoms in substructure. A match to function is seen as a utility function. These matches then evolve, through relationship analysis, so as to fit the use patterns found within a community.

A computable generalization about human inference is made. The natural science suggests that induction is a consequence of how non-locality and emergence exist in the natural world. Three layers of organizational stratification fully account for properties we associate with induction, non-locality and emergence.

Self-organizing, emergent, multi-scale mechanisms may then regulate complex behaviors. Metabolic processes, for example, may be organized within what might be referred to as a molecular layer, separate from the behavioral intentions of the living system. From this, a digital architecture is derived.


References

[REF-1] “The Zachman Framework”, Zachman.com, http://www.zachman.com/about-the-zachman-framework

[REF-2] Prueitt, Paul Stephen, “Stratification Theory as Applied to Neural Architecture enabling a Brain-like function for Social Networks”, presented to the Winter Chaos Conference of the Blueberry Brain Institute, Southern Connecticut State University, March 18-20, 2011.

[REF-3] Prueitt, Paul Stephen, “An Interpretation of the Logic of J. S. Mill”, in IEEE Joint Conference on the Science and Technology of Intelligent Systems, September 1998.

[REF-4] Mori, Ryoichi et al, "Superdistribution: An Electronic Infrastructure for the Economy of the Future", Transactions of the Information Processing Society of Japan, vol. 38.7, July 1997, pp.1465–1472.

[REF-5] Prueitt, Paul Stephen, “Grounding Applied Semiotics in Neuropsychology and Open Logic”, in IEEE Systems Man and Cybernetics, October 1997.

[REF-6] Mill, John Stuart, A System of Logic, 1843.

[REF-7] Prueitt, Paul Stephen, “An Interpretation of the Logic of J. S. Mill”, in IEEE Joint Conference on the Science and Technology of Intelligent Systems, September 1998.

[REF-8] Levine, D. et al, “Modeling Some Effects of Frontal Lobe Damage: Novelty and Perseveration”, Neural Networks, 1989, pp. 103-116.

[REF-9] Eisenfeld, J. et al, “Systemic Approach to Modeling Immune Response”, in Proc. Santa Fe Institute on Theoretical Immunology (A. Perelson, ed.), 1988.

[REF-10] Erl, Thomas, et al, “SOA Governance: Governing Shared Services On-Premise and in the Cloud”, Prentice Hall, 2011.

[REF-11] Mori, Ryoichi et al, "Superdistribution: The Concept and the Architecture", Transactions of The Institute of Electronics, Information, and Communication Engineers, vol. E73.7, July 1990, pp.1133–1146.

[REF-12] Prueitt, Paul Stephen, “Social Digital Media”, Institute for Distributed Creativity e-forum, 2011.

[REF-13] Prueitt Paul Stephen. “Global Information Framework and Knowledge Management; Part 1”, Datawarehouse.com, November 8, 2005.

[REF-14] Prueitt, Paul Stephen, “Service Oriented Architecture in the presence of Information Structure. Topic Maps 2006”, Earley & Associates, 2006.

[REF-15] Finn, Victor, “Plausible Reasoning of JSM-type for Open Domains”, in the proceedings of the Workshop on Control Mechanisms for Complex Systems: Issues of Measurement and Semiotic Analysis, December 8-12, 1996.

[REF-16] Prueitt, Paul Stephen, “Quasi Axiomatic Theory, represented in the simplest form as a Voting Procedure”, VINTI, All Russian Workshop in Applied Semiotics, 1997.

[REF-17] Prueitt, Paul Stephen, “An Interpretation of the Logic of J. S. Mill”, in IEEE Joint Conference on the Science and Technology of Intelligent Systems, September 1998.

[REF-18] Prueitt, Paul Stephen, “Similarity Analysis and the Mosaic Effect”, in the proceedings of the Symposium on Document Image Understanding Technology, University of Maryland Press, 1999.

[REF-19] Finn, Victor, “Plausible Reasoning of JSM-type for Open Domains”, in the proceedings of the Workshop on Control Mechanisms for Complex Systems: Issues of Measurement and Semiotic Analysis, December 8-12, 1996.