Inputs into Joint Meeting of ISO TC 184/SC5/WG1 and the IFAC/IFIP Task Force
on Enterprise Integration - Paris, May 1998
Section 1: A Potential Future Activity of the IFAC/IFIP Task Force and ISO TC
184/SC5/WG1
R. H. Weston, J. D. Gascoigne and P. J. Gilders
Following preliminary discussion at Boulder, this document seeks to lay groundwork for possible work items of the
Task Force and SC5/WG1 on “capturing enterprise engineering requirements in support of the development of
component-based systems”.
The case made is based on the following assumptions:
1. Vested within the Task Force and SC5/WG1 is a ‘state-of-the-art’ understanding of general enterprise
engineering requirements which can be developed and applied in more focussed ways. Much of this knowledge
is embedded into the GERAM specification and its pre-existing reference methods and architectures, namely
PERA, CIMOSA, GRAI and TOVE;
2. Fuelled by developments in distributed systems design and construction, emerging software paradigms
(including component-based software engineering paradigms) are projected to impact significantly on the
development of next generation agile enterprises. It is widely reported that industry wishes to move towards
distributed problem solving software environments which facilitate innovative and collaborative decision
making, rather than continue to use hand-crafted, stand-alone software applications which militate against
organisational, process and technological change. Industrial enterprises need to perform in a stable, effective
and scalable way in complex and changing environments. Apparently component-based systems engineering
paradigms offer a means of realising targeted incremental change to enterprise systems.
To meet general requirements expressed under 2, the concepts and techniques deployed within emerging
component-based software engineering paradigms must be developed and applied more generally as an integral part
of component-based enterprise systems engineering paradigms.
Implicitly this will require ‘descriptions’ of:

- components (i.e. common and specialised software, machines and computer supported human system elements used in manufacturing enterprises);

- architectures (e.g. frameworks, structures, infrastructures and controls used to structure the behaviour of groupings of components);

- methods (e.g. procedures and associated rules which govern the way in which component-based systems are conceived, engineered and developed).
Despite an evident desire that this should not be so, there remains a significant gap between concepts and techniques
advanced by the Enterprise Modelling and Integration (EM&I) community and those used by Software Paradigm
Developers (SPD). Some may view many EM&I concepts as being top-down, ‘pie-in-the-sky’, others may view
much of contemporary SPD to be directionless, bottom-up, ‘reinventing of the wheel’. Obviously however both
views are important and should be consistent with each other. Arguably any significant and generally useful
development of either view can only be realised in the light of understanding the other view.
Logically it follows that the ‘guardians’ of the EM&I view (namely ISO TC 184/SC5/WG1 and the IFAC/IFIP Task
Force on Enterprise Integration) should, based on their work leading to GERAM, seek to develop and document a
consensus view of enterprise engineering requirements for software paradigms, in particular component-based
systems. The quid pro quo is that the guardians of the SPD view should also (possibly in parallel) develop multi-business and user views of what can be achieved by deploying alternative component-based paradigms. Also the
SPD community should question and develop any EM&I position in the light of the capabilities of new enabling
paradigms. Naturally the development of an EM&I consensus view of component-based enterprise system
engineering requirements will require a focussed reassessment of the coverage of GERAM and its pre-existing
architectural and methodological specifications. Initially it will also require key inputs from developers of the SPD
view.
Bearing in mind the arguments outlined above, the authors produced section 2 of this document as an ‘Aunt Sally’
line of thinking which ‘connects’ enterprise engineering requirements to emergent properties of component-based
systems. It does not represent views of other WG1 and Task Force members. Rather, at this stage it is
recommended that other complementary lines of thinking are developed, leading towards a plan of how the joint
working group might develop an agreed specification of component-based enterprise engineering system
requirements.
Therefore the ‘Aunt Sally’ reported in section 2 should be seen as no more than one starting point for discussion.
Sincere apologies are offered in respect to its voluminous (and at times rather parochial) nature. The authors and
their fellow researchers in the MSI Research Institute include proponents of both EM&I and SPD views.
Individuals in MSI have developed their own perspectives on such matters by applying, evaluating and developing
the use of state-of-the-art concepts from both EM&I and SPD perspectives. By no means is it claimed that
individual views have been developed into a consistent description of the problems and issues involved. However,
section 2 has contents that are sufficiently neutral that apparently it does not overly upset any individual in MSI.
Furthermore, material included in section 2 thus far does not attempt to map current methods, frameworks,
constructs and tools onto the component-based system requirements described in outline. Therefore no attempt is
made to credit others (and particularly other WG and Task Force members) with known solutions to requirements.
Where solutions are mentioned they are merely indicative of the type of problems involved.
It may be appropriate to point out certain ‘undertones’ (i.e. matters not explicitly mentioned in section 2) which arise from ‘prejudices’ held by the first author, namely his belief that
GERAM (and its predecessor frames and methods) are excellent in concept, and provide the most complete public domain documentation on EM&I issues, but with respect to their provision of a context for shaping component-based paradigms:
a) their scope requires extension to adequately cover issues connected with the engineering of business systems (as defined in section 2);
b) their current concepts and ‘interfaces’, which relate to software engineering paradigms, require more explicit definition;
c) their concepts and ‘interfaces’ related to the design, realisation and development of social systems require advancement;
d) their concepts on resource and component models require development;
e) without appropriate toolsets and reference models focussed on well defined requirements their use will remain limited.
Therefore the implied position taken when developing section 2 was that Task Force members are the guardians of the EM&I view, but that to provide a requirements specification for next generation software paradigms (including component-based systems) which would add value to industry at large they must apply GERAM selectively and with empathy for the work of the developers of component-based paradigms.
Recommendation
The joint meeting is invited: (a) to consider whether it sees the development of guides for developers of component-based paradigms as a useful and important role for it to assume; (b) if the answer to (a) is ‘yes’, to consider how it might develop such guides and whether the ‘Aunt Sally’ decomposition developed in section 2 has a role to play; and (c) to assess and justify (or to decide how to assess and justify) what, if any, standards might be needed by industry to complement that development.
Inputs into Joint Meeting of ISO TC 184/SC5/WG1 and the IFAC/IFIP Task Force
on Enterprise Integration - Paris, May 1998
Section 2: Enterprise Engineering Requirements Capture in Support of the
Development of Component-Based Systems
R. H. Weston, J. D. Gascoigne and P. J. Gilders
PREAMBLE
This section of the document seeks to flesh out a coherent view of the future role of component based enterprise
systems in developing reconfigurable business processes capable of operating competitively in complex and
uncertain environments.
If WG1 and the Task Force so decide, fragments of this paper could be developed as the basis of a case for a new
mandate from ISO TC 184/SC5 aimed at capturing generic enterprise engineering requirements that can guide the
ongoing development of component-based systems.
1.0 ENTERPRISE ENGINEERING IN SUPPORT OF THE ‘AGILE’ ENTERPRISE
Generally, industrial and commercial enterprises operate within complex environments that change frequently as a
result of political, economic, social and technical (PEST) influences. Therefore an enterprise with a capability to
manage change rapidly and effectively will have a competitive edge over others that do not have such a capability. It follows that change
management may be a key business process in many companies.
Enterprise Engineering is concerned with change management on a large scale. Its theories and supporting
techniques recognise that the human brain is not capable of fully assimilating the levels of complexity typically
involved. Consequently, contemporary enterprise engineering approaches use abstractions of the enterprise at a
number of levels. Abstraction processes typically involve generalisation of information, and therefore valuable
information may effectively be lost if reference is only made to a single abstraction (or view of the enterprise).
Therefore a good enterprise engineering approach will minimise any effect of information loss, by making good and
consistent sets of abstractions that allow effective and timely decisions to be made by the various personnel
responsible for managing change.
Normally an enterprise will have multi-purposes and multiple goals. Therefore different parts of the business are
likely to operate under different environmental conditions, and are subject to different PEST influences. As these
influences will change irregularly and unpredictably it follows that even if it were possible to determine and describe
an ideal enterprise configuration this would have to be (1) a compromise configuration (with respect to different
needs of its multi-purposes) and (2) modified continuously and possibly at a greater rate than could practically be
followed by any change management business process. Indeed the change management process is further
complicated by the fact that a change at one level of abstraction in an enterprise is likely to have knock-on effects at
the same and other levels. Hence many personnel in an enterprise could be affected by a change and thereby may
need to contribute to a related change initiative (i.e. an instance of the change management business process). Also
it may be necessary to handle change at each level at a different frequency, implying the need for any change
initiative to have a well-defined timeframe at each abstraction level. This re-emphasises the point that there will not
be a single optimum enterprise configuration under conditions where environmental change cannot be predicted.
We conclude that no optimum enterprise configuration can exist and, as a consequence, that responses to changing
requirements must be restricted and imperfect.
None the less, if properly applied, enterprise engineering theories and techniques can help to (a) define compromise
enterprise solutions (which may focus on particular initiatives that generate a high yield) and requirements for
change at different levels by making good abstractions; (b) structure and co-ordinate change initiatives, and (c)
facilitate the reuse of knowledge and information (in support of decisions made about change) so that better and
faster change management can lead to improved environmental responses and thereby competitive behaviour.
Based on the above we can infer necessary attributes of an agile enterprise capable of responding competitively (i.e.
rapidly and effectively) to changing conditions.
2.0 CHANGE AT DIFFERENT SYSTEM LEVELS
Figure 1 is a generalisation of levels of abstraction commonly embedded into public domain enterprise engineering
frameworks and proprietary management consultancy methods. This system decomposition is not intended to be a
definitive classification which assumes that all enterprises have systems which correspond neatly to the system
levels depicted. It is based on the following assumptions.
1) There are real resources such as buildings, people, machines, software, money, energy, roads, etc. which can be
deployed to build enterprises. People, machines and software resources can carry out the activities required to
achieve the multi purposes of an enterprise. They can be hired, fired, acquired, installed, etc. and be assigned
different roles and responsibilities. They are supplied by educational systems, machine builders and software
companies as unitary component building blocks of operational systems. They may be large-grained, complex
building blocks that may be considered to constitute a system (in this case a resource system) in their own right.
[Figure 1 depicts four levels of system abstraction and the question each addresses: the Business System (What should the Enterprise be doing?), Process Systems (What are the value added activities?), Operational Systems (How can the process requirements be achieved?) and Resource Systems (What does each Enterprise resource do?).]

Figure 1: Levels of system abstraction within an enterprise
2) Typically, operational systems meet process requirements by organising, planning, scheduling, co-ordinating,
controlling and monitoring some collective application of a group of resources. Operational systems are
themselves real resources, in as much that they comprise software, people or machines which organise, plan,
schedule, co-ordinate, control or monitor ‘lower level’ people, machines and software resources. It follows that
there may be a fuzzy dividing line between system levels as a particular operational system can be a resource
system and vice versa. However, generally, operational systems will be distinguished by their need to be
application specific, i.e. their operational behaviours should be continuously aligned to specific process
requirements of an enterprise which change predictably (with planned job and product changes) or
unpredictably (such as in response to environmental change). Commonly unitary resources are more generally
applicable and enable ‘vendors’ to supply them into many different enterprise configurations and/or types of
enterprise. It is known that significant change management constraints arise if operational systems are
designed, implemented and changed in an ad hoc and/or proprietary way by external suppliers. High cost
overheads and/or ineffective operational systems may result if responsibilities for the life-cycle engineering of
operational systems are inappropriately assigned to internal business units.
3) A process system is a theoretical abstraction designed to represent a thread of value adding activities. These
abstractions help to communicate business and production requirements in a meaningful and efficient way
between the personnel concerned with that process. It is reported that typically the business purposes of an
enterprise can be represented meaningfully by up to ten business process descriptions. Formal models of
business processes and models of their interactions can facilitate analysis about the impact of possible changes
on the performance of an enterprise. If this modelling is achieved with reference to a suitable enterprise
engineering framework it can facilitate collective decision making and thereby the enactment of changes. If
such an approach to change management is to be enabled on an ongoing basis it will be necessary to maintain
consistency between theoretical descriptions of process systems and the operational systems and resource
systems those descriptions represent. Appropriate organisation structures will also need to be used to ensure
that conflicting requirements of business processes (and thence of the different business purposes) of an
enterprise can be resolved. Hence these structures should enable different process systems to share operational
systems and resource systems as required, and thereby to achieve effective but harmonious realisation of the
business objectives and mission of the enterprise. Importantly, however, this organisation structure should not
place unnecessary constraints on the change management process. Indeed it should promote innovation and
knowledge acquisition.
4) The business system should frequently reassess the purposes of an enterprise in the light of changing
environmental circumstances. It should therefore be concerned with strategic issues, seeking answers to
questions like what should the enterprise be doing to be more competitive? It follows that the business system
should generate the vision, strategy, business objectives and operating principles for the enterprise so as to
maintain its viability and vitality.
Table 1 illustrates typical characteristics of each system level described above.
SYSTEM LEVEL | SCOPE | CHANGE FREQUENCY | INFORMATION CHARACTERISTICS | OUTPUT
Business System | Assessment of the business within its environment | Yearly | High level, aggregated and uncertain | Vision, mission, strategy, business objectives and principles
Process System | Assessment of processes to support the business purposes and their requirements | 6 monthly | High level data – more precise specification of current processes | Description of process requirements (metrics) and their high level relationships
Operational System | Assessment of operational improvements needed | Monthly | Detailed operational performance data and operational rules | Description of operational rules used to control and monitor processes
Resource System | Assessment of the resources required | Weekly | Particular resource information and performance figures | Description of resources (components) which are used to control or execute processes

Table 1: System level characteristics
It is evident that the topmost two system levels in Figure 1 and Table 1 do not really exist. Rather they are models
stored and processed in people’s heads, on paper or by a computer modelling facility. As such they allow those
concerned with the life-cycle engineering of systems to develop and use complex abstract concepts to make
decisions. The development and use of these virtual systems supports the generation and definition of what will be
referred to in the rest of this document as a business solution. The business solution will be viewed as being the
compromise enterprise configuration referred to earlier. The bottom two system levels are concerned with a
physical manifestation of the business solution. Collectively they will be referred to as the physical solution.
Clearly it will be possible to achieve the business solution in many ways using different physical solutions.
However whatever physical operational systems and resource systems are deployed, if change management is to be
achieved on a continuous (and possibly incremental) basis consistency must be maintained between corresponding
virtual (business) and real (physical) solutions.
Many enterprise engineering activities involved in business solution development are naturally centred on the use of
a process oriented approach to problem decomposition whereas commonly physical solution development is centred
on the use of a function-based approach to decomposition. This reflects generic differences between requirements
and aspirations of users of enterprise systems (possibly expressed as a set of abstract business and process systems)
and those of their suppliers (of real operational and resource systems).
Evidently a focus on one or more business processes of an enterprise can improve the way in which groups of
humans make decisions about an enterprise, such as: how can the enterprise respond to a new opportunity? how can
it reposition itself in an established market? how can it focus on and develop its core competences? how can it
improve its resource allocation processes? how can it change its human and IT systems to improve its
responsiveness, improve product quality, reduce overhead costs, etc.? what cultural changes are required? should it
develop new partnerships and customer supplier relationships? etc. Therefore a process-oriented approach to
problem decomposition provides a basis for analytic enquiry and co-ordinated decision making which is better
suited to user needs than is a traditional function-based decomposition approach.
On the other hand the constructors and vendors of enterprise systems may continue to favour use of a function-based
decomposition approach. A function based decomposition of common operational and resource system
requirements can lead naturally to specifications of generic functions which can be realised in the form of
components, or common building blocks of functionality [1]. From a vendor’s viewpoint ideally such a component
will be sold to many customer enterprises operating in different application domains and/or industrial and
commercial sectors. It is also evident that the adoption of current approaches to providing system components [2]
allows vendors to (a) compete with other component vendors in various ways, and (b) have significant influence on
the way in which their customer enterprises operate.
[1] The purist may not like the use of the term ‘component’ in this context. We will return to a definition of the term ‘component’ later in this document and develop the notion of a component oriented approach.

[2] From this point in the document the term ‘system component’ (or simply ‘component’) will be used somewhat colloquially, in a way similar to its more general use by system vendors. A real component may be a building block of a resource system or an operational system, whereas a virtual component will be a building block of a process system or business system. The caveat about the use of the term ‘component’ referred to in footnote 1 applies.

Regarding (a), component and systems vendors may compete by being more capable than their competitors at identifying and solving specific functional requirements of their customers. They may also compete by devising or adopting novel methods or by using specific enabling technology to solve user problems. Alternatively, they may innovate in terms of alternative implementation options to provide systems and components of superior performance, capability or quality than others elsewhere. Concerning (b), it has often been stated by industrial and
commercial users of systems that the properties of IT components (as determined by their suppliers) can ‘drive’
properties of process systems and business systems: thereby the component supplier may inadvertently place
significant constraints on the ability of their customers to behave competitively (Kawalek and Leonard 1996). This
may be a natural consequence of any significant mismatch between process-oriented and function-oriented views of
requirements, e.g. as might be adopted by users and vendors respectively. This means that generic components may
not adequately match specific process needs of users, particularly if the components are large grained (Lehman
1991, Warboys et al 1887). This may be because: (1) the supplier only has a limited or specific view of generic
process requirements; or (2) design and implementation compromises must be made to provide a general rather than
specifically tuned capability. Vendors of components for operational and resource systems may also need to retain
proprietary information about their component designs or their method of implementation to remain competitive.
Indeed, it may even be part of a competitive strategy to withhold such details so that a supplier can compete more
effectively for maintenance and reengineering work or even to maintain a mystique or pre-eminence in a field or
niche. Hence the notion of developing well-defined component models which are consistently defined with respect
to common process models may be resisted strongly in certain quarters.
Natural conclusions which can be drawn from the above observations are that for the foreseeable future:
i. customer enterprises will mainly use process decompositions/representations to characterise the operations
they could and do carry out.
ii. suppliers of operational and resource systems will mainly use function-based decompositions to characterise
the components they produce.
However, there is not, at present, a free interchange of explicit models of generic process need or generic component
specifications. It is appropriate therefore to inquire about and seek to advance the status of process and component
paradigms and their application to provide means of coping with increased levels of uncertainty experienced by
enterprises (Schön 1971, Handy 1989). Not all enterprises, or indeed all parts of a single enterprise, will operate
within an uncertain environment. However, many will do so. It follows that more agile business systems, process
systems, operational systems and resource systems will be required which are capable of being rapidly and
effectively reformed as requirements change, possibly on a continuous, rather than an episodic basis (Goldman et al
1995).
The focus in the next section of this document is on design principles which impact directly on the agility of
systems. Integral to this discussion will be a consideration of design principles which impact on the reuse of system
components, as such a topic is central to the development of an effective component-oriented approach to realising
systems. Subsequently design principles related to agile business processes will be considered.
3.0 DESIGN PRINCIPLES OF AGILE SYSTEMS
An assumption of this section is that certain vendors (i.e. suppliers to manufacturers [3]) will wish to continue to
supply general purpose system components. Subsequently these components will be configured into either (i)
general purpose operational or resource systems applicable for use in a given domain of different enterprises, or (ii)
unique operational or resource systems as specified by a particular enterprise. However the extent to which vendors
will welcome standard component specifications is flagged as an open question, the answer to which is likely to
depend upon the nature of such a standard and its perceived impact on the ability of a vendor to be competitive. An
associated assumption is that under increasingly uncertain environmental conditions manufacturers will need to
deploy theoretical models of business systems and process systems which can be realised effectively by deploying
well-aligned operational systems or resource systems which as far as is practical will deploy well-proven
components which can be redeployed (or reused) by some means as high level system requirements change.
This implies the need for models of real components, i.e. ‘virtual components’ which abstract key features
of interest about the components, their interaction and behaviour within systems as required by different users of the
models, be they humans or software ‘agents’ performing the role of system designers, constructors, developers,
managers, or suppliers. The various parties involved in a change management initiative which results in system change will be situation/application dependent, but it is assumed that the frequency of reconfiguration is likely (on average) to increase.
3.1 COMPONENTS, COMPONENT INTERACTIONS AND SYSTEM STRUCTURE
As illustrated by Figure 2, systems can be formed by aggregating components into higher level systems, thereby
generating larger grained building blocks of an enterprise. However when using contemporary approaches to
system design and construction, complex, costly and time consuming enterprise engineering effort is involved in
aggregating the necessary enterprise components to produce the system behaviours required by a specific enterprise. Indeed
multi-MECU project budgets are commonplace, as are lead-times of 6 to 12 months. Yet invariably the outcome
will be the installation of finite, discrete and inflexible systems which will be inappropriate to the service of the host
enterprise in a developed state (Kawalek and Greenwood 1998, Gascoigne and Weston 1998).
[3] The term ‘manufacturer’ is used interchangeably with the term ‘enterprise’. In this context a ‘manufacturer’ is the ‘end user’ of systems and components. The main focus of discussion is on manufacturing enterprises. Such an enterprise may comprise a number of companies and is likely to realise a variety of business processes. Therefore despite the focus on manufacturing systems, much of the discussion will also apply in business, commercial and even government domains.
[Figure 2 depicts a system formed from components, interface mechanisms and an organisation structure.]

Figure 2: Systems composed of reusable components
The current situation arises largely because present generation components are not readily reconfigured, reused and
integrated as part of new or wider-scope systems. In short they are not really components. Rather, typically they
will have been defined and built with reference to a limited view of what might be required of them when they
behave collectively in conjunction with other components. Also, a user will seldom have sufficiently explicit
information about the various components available from alternative sources to enable well reasoned decisions to be
made about alternative ways of constructing systems.
A second major contribution to high cost long lead-time systems engineering projects arises from the use of
proprietary and often ad hoc approaches to system design and construction. Consequently, important design
knowledge about how system elements should function as part of high level systems is lost during system
implementation which results in application specific functionality being intertwined with system integration
mechanisms. This has major implications when changes to system behaviour (such as in response to a change in
process requirements) are required. Invariably the entanglement of issues makes system reconfiguration even more
difficult and costly to achieve than initial system design and implementation unless the changes required were
originally anticipated at initial build time. Similar problems arise when seeking to integrate the operation of one
system with that of other systems. Without access to an explicit statement of the internal operation of interacting
systems it will be an extremely complex (even impossible) task to get systems from different sources to function in a
unified way. Furthermore even if such knowledge were explicitly known it is highly likely that different ‘structural’
alternatives will have been used by the system and component vendors involved. Indeed resultant system
heterogeneity can place significant constraints on the operation and usefulness of combined systems.
It is evident therefore that most industrial systems in use today have been produced via a largely ad hoc approach
to systems engineering. As a consequence resultant systems are constructed in a piecemeal manner and are
inflexible and hard integrated (refer to Figures 3a and 3b). As contemporary operational systems and
resource systems are inflexible, it follows that associated process systems and business systems will be at least equally
unresponsive to ongoing environmental change.
[Figure 3 depicts six spectra of system characteristics and systems engineering approaches: (a) ad hoc versus structured approaches to systems engineering; (b) piecemeal, inflexible and hard integrated systems versus agile (flexibly integrated) systems; (c) centralised versus hybrid versus distributed decision-making and control; (d) inclusion of redundant functionality versus a hybrid approach to enabling change versus an embedded change ethic; (e) human centred versus hybrid versus fully automated systems evolution; (f) programming in the small versus hybrid programming versus wide-scale systems reconfiguring.]
Figure 3: Relevant system characteristics and systems engineering approaches
Often systems integrators have developed proprietary means of designing and constructing systems of any significant scale, which partly compensate for the fact that explicit knowledge about available system components is not generally available. Typically the way they operate will be as follows.

(I) A given system integrator develops an expertise with respect to a limited and specific choice of components, and with respect to their development into larger grained components.

(II) Components using standard interface protocols and well-defined parameterised variables are selected and integrated into systems in a proprietary but organised way.

(III) A common set of infrastructure services (e.g. network, messaging and data) is utilised so that the specification and use of application processes and systems integration code can be abstracted away from detailed implementation issues. However this approach will usually require part of approaches (II) or (I).

(IV) Structured software engineering methods and tools are used to develop application processes, systems integration code and infrastructure services in an organised way. The benefits of such a structured approach can only be fully realised if it is used in conjunction with some form of middleware (e.g. as part of approach (III)).
The authors’ experiences of the above means of engineering systems on differing scales have shown that (I) and (II)
are only practical options if the scale of systems engineering projects remains small. Use of a combination of (II),
(III) and (IV) can facilitate system engineering on a wider scale. However, as the scale of an enterprise system is
increased so too will the frequency with which some form of change will be desired. Inherently (III) and (IV) can
support system reengineering / reconfiguration but to achieve this in a generalised and effective way some means
must be found and applied to flexibly enforce a separation of factors which independently induce a requirement for
reengineering or reconfiguration. By separating such issues, as opposed to mixing them up together in an
unspecified way when implementing systems, the effect of changes can be localised. This will lead to a reduction in
change management effort and improved opportunities to reuse solution fragments.
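To make the separation principle concrete, the sketch below is offered in Java with invented names throughout (InteractionService, LocalBus, CellController); nothing here is drawn from the systems referred to above. Application-specific functionality is written against an abstract interaction service, so that reconfiguring who it talks to, or replacing the integration mechanism itself, does not require the application logic to change.

```java
// Sketch only: all names are illustrative, not taken from the source document.
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

/** Abstracts the integration mechanism; could be backed by CORBA, DCOM, a message bus, etc. */
interface InteractionService {
    void send(String destination, String message);
    void onReceive(String name, Consumer<String> handler);
}

/** Trivial in-memory implementation, standing in for a real infrastructure service. */
class LocalBus implements InteractionService {
    private final Map<String, Consumer<String>> handlers = new HashMap<>();
    public void send(String destination, String message) {
        Consumer<String> h = handlers.get(destination);
        if (h != null) h.accept(message);
    }
    public void onReceive(String name, Consumer<String> handler) {
        handlers.put(name, handler);
    }
}

/** Application-specific functionality only; knows nothing about how interactions are transported. */
class CellController {
    CellController(InteractionService bus) {
        bus.onReceive("cell1", order -> bus.send("monitor", "cell1 started " + order));
    }
}

public class SeparationSketch {
    public static void main(String[] args) {
        InteractionService bus = new LocalBus();          // integration concern
        new CellController(bus);                          // application concern
        bus.onReceive("monitor", msg -> System.out.println(msg));
        bus.send("cell1", "order-42");                    // prints: cell1 started order-42
    }
}
```

Swapping LocalBus for an implementation backed by a real infrastructure service would leave CellController untouched, which is exactly the localisation of change argued for above.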
How then can we devise and encourage more effective ways of engineering enterprise systems to overcome such
difficulties?
The following sections seek to outline ‘design principles’ which can be embedded in component oriented approaches to the life-cycle engineering of systems. The aim of these approaches should be to induce a definite change in the practice of design, construction and ongoing development of systems, leading to orders of magnitude savings in cost and time when:

i. reconfiguring systems to satisfy changing business requirements;

ii. integrating the operation of systems with that of other systems.
Clearly to be practical and useful to industry such an approach must cater for the levels of technical complexity
commonly involved, whilst providing means of using current practice as a starting point. This is indeed a tall order
but arguably is achievable if the design principles outlined in the next sub-section are adhered to.
3.2 COPING WITH CHANGE IN COMPLEX SYSTEMS
As explained previously, industry prefers the use of well-proven solutions. Yet their solutions must be distinct from
those of competitors and must be capable of changing in response to changes in business requirements. Herein lies a
conundrum which suggests that a compromise is required between specificity and genericity. Such a compromise is
embodied into component oriented approaches to the life-cycle engineering of systems (Sims 1994, Prins 1996,
Gannon 1998, Wileden and Kaplan 1998).
Component-oriented software architecture has been used in the scientific programming community for about seven
years. According to Gannon this has led to a new software engineering paradigm based on the use of distributed
object technologies which can be deployed to build software applications from ‘off the shelf’ components which
may be distributed across a wide area network of computer and data servers. Here components are defined by
‘public interfaces’ through which they communicate with other components; such interfaces typically specify the
‘function’ and ‘protocol’ available to a component. It is claimed that it is not unrealistic (via this
paradigmatic model) to generate application programs comprising over a thousand active components interoperating
as a dynamic network of communicating objects.
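As a purely illustrative sketch of a component defined by a ‘public interface’ that specifies both function and protocol (the PartTracker example and its usage rules are invented, not taken from the cited work), in Java this might look as follows; any component honouring the interface can be substituted behind it, including a remote proxy.

```java
// Illustrative sketch: PartTracker and its protocol are hypothetical.

/**
 * Public interface of a component: the functions it exposes, plus (in documentation)
 * the protocol governing their use: open() must precede record(), and close() ends a session.
 */
interface PartTracker {
    void open(String batchId);
    void record(String partId);
    int close();   // returns the number of parts recorded in the session
}

/** One concrete component honouring the interface; callers never see this class directly. */
class SimplePartTracker implements PartTracker {
    private int count;
    private boolean open;
    public void open(String batchId) { open = true; count = 0; }
    public void record(String partId) { if (open) count++; }
    public int close() { open = false; return count; }
}

public class InterfaceSketch {
    public static void main(String[] args) {
        PartTracker tracker = new SimplePartTracker();   // could equally be a distributed-object proxy
        tracker.open("batch-7");
        tracker.record("p1");
        tracker.record("p2");
        System.out.println("parts recorded: " + tracker.close());  // parts recorded: 2
    }
}
```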
Potentially therefore such a paradigmatic shift in software engineering approaches can allow industrial and
commercial enterprises to develop distributed problem solving environments supporting distributed and wide-scale
decision and action making. However for that to happen in industrial environments significant changes in business
and technical practice will be required.
A wide-scale industrial application of a component-oriented system engineering paradigm must extend software
engineering concepts and their application to facilitate the engineering of systems comprising human and machine as
well as software components. The approach will need to be based on the notion that well-proven and reusable
human, machine and software components can be designed and developed at such a level of granularity that:
(A) their generic reuse does not overly compromise the behaviours and performance of any specific system constructed from them; and

(B) unanticipated specific system change can be accommodated readily by making rapid and effective changes to the composition of a specific system and/or the reconfiguration of individual or collective component behaviours.
It is evident, however, that to realise a component based approach in an industrially acceptable and wide-scale way
it is necessary to agree upon, develop and make available: suitable design principles; various classes of component
building block; sufficiently accurate and complete models of components; and appropriate supporting concepts and
tools.
Let us assume for the moment that suitable classes and instances of components can be specified and developed to
satisfy (A) above and that suitable abstract representations of those components can be defined to support the
various requirements of people concerned with the life-cycle engineering of systems. Then to satisfy condition (B)
above it is evident that the way in which any given set of components is structured should be readily reconfigurable,
as should their use of suitable integration mechanisms to enable their interaction, and thereby achieve local (to a
component) and global (to the composed system) behaviour. Conceptually this may be viewed as a requirement for
‘wide-scale systems reconfiguring’ or ‘programming in the large’ requiring methods and tools capable of supporting
wide-scale reconfiguration of systems.
This leads to a design axiom that agile systems should be realised by flexibly integrating the behaviours of
components. It is evident that integration infrastructures (such as CORBA, DCOM (Active X), Java Beans/Studio,
Newi, OPENDOC, CIM-BIOSYS, AMBAS and the CIMOSA IIS) provide alternative general purpose
computational mechanisms and integration services capable of underpinning component interaction in a
flexible/configurable/programmable way. Indeed integral systems configuration capabilities of available integration
infrastructure can provide (albeit presently in a fragmented way) suitable means of configuring and managing
distributed software processes and information sources in a hardware platform independent manner. Therefore, by
using an integration infrastructure, flexible interaction between distributed software processes can be enabled on a
wide scale, so that decentralised and distributed decision making and control schemes can be enabled and used to
replace less effective and outmoded centralised decision-making and control, i.e. can facilitate choice between the
extreme system characteristics depicted by Figure 3c. By ‘embedding’ human and machine components into
software processes capable of using the services of an infrastructure, organisations can interoperate effectively
despite significant separation in space and time. Furthermore, theoretically the inherent scalability of solutions built
on an infrastructure allows small-grained, reusable components to interact with each other. Thereby in theory
component-based systems can provide a rich variety of system behaviours; albeit that invariably there will be
constraints on component grain size as current network and information systems have limited usable band-width.
The establishment of flexible connections between components is not sufficient on its own to realise ‘wide-scale
systems reconfiguring’. It is also necessary to develop and deploy appropriate ways of flexibly defining the system
structure and making that definition manifest (explicit) to facilitate use of configurable architectures. Use of a
flexible structure can enable alignment to be maintained between alternative compositions and behaviours of a
system as higher level requirements change. Evidently enterprise and system modelling methods and frameworks
have a key role to play in flexibly defining system structures. Later we will see that an appropriate decomposition of
modelling issues is necessary to handle the levels of complexity involved in specifying a suitable system structure.
Indeed typically this will require input from teams of people involved in the life-cycle engineering of systems in a
structured and co-ordinated way leading to definitions pertaining to business solution structures and physical
solution structures in a consistent and tractable way.
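A minimal sketch of making system structure explicit and manipulable follows, with invented names throughout: the wiring between components is held as ordinary data that modelling tools (or people) could edit and re-execute, rather than being hard-coded inside the components themselves.

```java
// Sketch only: the 'structure' here is a simple ordered table of component names, standing in
// for the richer, framework-derived system structure definitions discussed in the text.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

interface Component {
    String handle(String input);
}

public class StructureSketch {
    public static void main(String[] args) {
        // Component library (could be populated from vendor-supplied building blocks).
        Map<String, Component> components = new HashMap<>();
        components.put("receiveOrder", in -> "order(" + in + ")");
        components.put("planProduction", in -> "plan[" + in + "]");
        components.put("releaseOrder", in -> "released:" + in);

        // Explicit system structure, held as data: reconfiguring the system means
        // editing this list, not the components it names.
        List<String> structure = new ArrayList<>(List.of("receiveOrder", "planProduction", "releaseOrder"));

        String value = "42";
        for (String name : structure) {
            value = components.get(name).handle(value);
        }
        System.out.println(value);  // released:plan[order(42)]
    }
}
```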
Figure 4 illustrates the role of integration infrastructures (used to flexibly configure interactions between reusable
system components) and modelling methods and frameworks (used to flexibly define and redefine as required a
suitable system structure). It follows that systems engineering tools will be required to support ‘wide-scale systems
reconfiguring’, where support for human decision and action making will be required in terms of (I) component
selection, (II) the definition and realisation of component interactions and behaviours and (III) the definition and
realisation of system structures which enable the system as a whole to be organised, controlled, managed,
maintained and re-engineered on an ongoing basis.
[Figure 4 depicts a design environment in which system structure modelling tools generate executable system structures (defining the structural relations between components), and a runtime system in which an integration infrastructure realises the interaction relations between components.]

Figure 4: Agile systems need flexible connections and structure
Implicit in the application of a component oriented approach to the life-cycle engineering of systems should be the
notion of embedding a ‘change ethic’ into systems, i.e. an ability to positively enable a yet-to-be-specified system
change to occur. As exemplified by Figure 3d, this is a distinctly different approach from that of supporting largely
anticipated system change by embedding redundant functional capabilities into a system. Arguably from a technical
standpoint an embedded ‘change ethic’ approach [4] will in general be preferable as it will handle a much greater range
of change (including unpredictable change) and will not incur unnecessary financial cost arising from the inclusion
of unused functionality. However this will only be true if the use of well defined system decompositions does not
impose an unacceptable performance overhead and is adequately supported by suitable integration mechanisms and
tools.
[4] It is argued that the concept of embedding a change ethic into IT systems is similar to that of developing a change culture within an enterprise, i.e. the resultant system is designed to be capable of being reformed to meet unspecified change.
[Figure 5 depicts: (a) an evolvable component; (b) an evolvable architecture; (c) a hybrid of the two.]
Figure 5: Evolvable systems and components
A ‘change ethic’ can be embedded into system components as well as into configured systems. This is akin to
notions of evolvable components and evolvable (or readily maintainable) architectures being currently researched by
the software engineering community (Kawalek and Greenwood 1998, Clements 1998). This concept is illustrated
by Figures 5a and 5b respectively where the fuzzy boundaries depict an inherent capability of the component or
architecture to be responsive to changes in its operating environment. A set of basic (or non-readily evolvable)
reusable components could be configured via use of an evolvable architecture (based on the use of a suitable
infrastructure and model driven system structure) to produce a larger grained evolvable component and so on. The
means of evolving (a component or architecture) could range from being achieved completely automatically (at one
extreme), to it being achieved (at the other extreme) manually and in an ad hoc way. Whereas (in between these
extremes) evolution could be achieved in a model-based, structured but human-centred way, i.e. in a hybrid fashion
by deploying an enterprise modelling framework to organise inputs from a team of system architects, system
designers, system implementers, system managers and maintenance engineers. This notion is illustrated by Figure
3e.
Let us return to issues raised by the need to meet condition (A) of a component oriented approach to the life-cycle
engineering of systems. From a business perspective, as mentioned earlier component vendors may not welcome the
idea that they should provide components which conform to some class of generic component type and a common
description of its function and protocol, as this could reduce their ability to compete in and/or dominate a market.
Also a change in grain size of components could significantly affect their trading position and their relationships
with other vendors in a users supply chain. From a physical properties standpoint, the level of granularity with
which a function (and hence component) based decomposition can practically be achieved will depend upon ‘meta
physical’ laws which couple functional and behavioural properties of components. Therefore if in a given context
the behaviour of a component is not essentially independent of other components then it may prove better to treat it
as part of a looser grained component. Also from a technical perspective the choice of grain size may well be
constrained by the need for acceptable ‘performance’ during system operation. It follows that a practical component
oriented approach should be capable of facilitating the composition and reconfiguration of systems from
components at different levels of granularity. In so doing it should allow components requiring couplings between
more primitive elemental building blocks (such as couplings required between separately actuated drive system
elements of a high performance, multi-axis manufacturing machine) to demonstrate specific individual and
collective behaviours at the performance levels needed in a given domain, thereby meeting condition (A) above.
Consequently when producing the components themselves (as opposed to systems composed of components)
alternative implementation criteria may be applied; as overall goals may be to achieve acceptable or optimised
performance from components or to produce low cost resource units. When producing software elements which
form part of each component it may therefore be best to utilise well proven, possibly ad hoc design and build
techniques based on the use of general programming languages (such as C or C++). This may be viewed as
‘programming in the small’ as opposed to ‘wide-scale systems programming’ as depicted by Figure 3f. Indeed in
certain cases such implementation options may be key to achieving adequate real-time performance but this may
well constrain the reuse and/or reconfiguration of the component within wider scale systems. The use of an
evolvable architecture embedded within a ‘wide-scale systems reconfiguring’ approach will be appropriate in
situations where the ability to alter the system behaviour is of major concern, albeit that inevitably the use of such an
architecture will require the imposition of certain implementation constraints. For example, use of an evolvable
architectural approach will necessarily dictate a choice (i.e. class, type or form) of infrastructure service and impose
use of a particular (type or form of) problem decomposition, where this decomposition may need to be enforced by
pre-selected modelling perspectives which collectively are used to develop the physical solution structure according
to some agreed meta structure. This line of reasoning reinforces the need for a hybrid approach to systems
realisation involving component generation (programming in the small) and flexible component integration and
reconfiguration (wide-scale systems reconfiguring), as depicted by Figure 3f.
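The hybrid can be illustrated, under assumed names only, by wrapping tightly coupled, performance-oriented internals (‘programming in the small’) inside a coarser grained component whose public interface is the only part visible to the wide-scale reconfiguring layer.

```java
// Sketch: AxisDriver internals stand in for 'programming in the small' (tuned, tightly coupled code);
// MotionComponent is the grain exposed to wide-scale system reconfiguring. Names are invented.

/** Fine-grained, tightly coupled internals: not intended for independent reuse or reconfiguration. */
class AxisDriver {
    private double position;
    void moveTo(double target) {
        // stands in for tuned, real-time control code (hand-crafted C/C++ in practice)
        position = target;
    }
    double position() { return position; }
}

/** The coarser grained, reusable component; only this interface is wired into wider systems. */
class MotionComponent {
    private final AxisDriver x = new AxisDriver();
    private final AxisDriver y = new AxisDriver();
    public void moveTo(double px, double py) {   // couples the axes internally
        x.moveTo(px);
        y.moveTo(py);
    }
    public String status() {
        return "at (" + x.position() + ", " + y.position() + ")";
    }
}

public class GranularitySketch {
    public static void main(String[] args) {
        MotionComponent motion = new MotionComponent();
        motion.moveTo(10.0, 25.0);
        System.out.println(motion.status());  // at (10.0, 25.0)
    }
}
```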
A further prerequisite of the component oriented approach to the life-cycle engineering of systems previously stated
concerned the need for explicit models of components, i.e. descriptions of ‘virtual components’. In satisfying this
need a further advance in contemporary practice is required. For example, as illustrated by Figure 6, there is a
requirement of the approach to establish explicit mappings between the world of system designers (who require
idealised views of possible components they might use in different systems) and the worlds of component
suppliers, system builders and maintainers who deal with ‘real’, rather than ‘ideal’ components. Presently few
vendors of system components provide explicit models which encode their capabilities and qualities. Indeed it is
evident that vendors cannot know what kind of explicit models are required to facilitate a component oriented
approach until such an approach is adequately defined. Also they will not do this until there are sufficient
inducements to do so.
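Purely as an assumed illustration of what such an explicit description might contain, a ‘virtual component’ could be encoded as a small machine-readable record that designers or tools can query without access to the vendor’s implementation; the fields and values below (reusing the invented PartTracker example from earlier) are hypothetical.

```java
// Sketch of a 'virtual component' descriptor; fields and values are invented for illustration.
import java.util.List;

public class VirtualComponentSketch {

    /** An abstract description of a real component: what it offers, how to talk to it, how it behaves. */
    record VirtualComponent(
            String name,
            List<String> operations,     // functions offered at the public interface
            String protocol,             // usage rules, e.g. required ordering of operations
            String behaviourNote) {      // behavioural/performance characteristics of interest to designers
    }

    public static void main(String[] args) {
        VirtualComponent vc = new VirtualComponent(
                "PartTracker v1",
                List.of("open(batchId)", "record(partId)", "close()"),
                "open precedes record; close ends the session",
                "non-real-time; suitable for modest batch sizes");
        // A design tool could match such descriptions against process requirements.
        System.out.println(vc.name() + " offers " + vc.operations());
    }
}
```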
[Figure 6 depicts a design environment containing alternative component representations (virtual components) and a system or solution structure, a runtime system built on an integration infrastructure, and the mappings between real and virtual components and their structural organisation.]

Figure 6: Need for explicit mappings between real and ideal components
It is important to re-emphasise the point that present day system engineering practice is largely based on a
‘programming in the small’ approach. Invariably as indicated earlier the result of such an approach is that systems
integration knowledge is lost or entangled with application specific logic and other code when the system is
implemented. This has implications for component vendors willing to or needing to encode explicit models of their
components. Unless they use formal means of describing their components during conceptual design, detailed
design and component implementation they will always experience difficulty in encoding such a model. This
situation will become increasingly difficult as they change and re-implement new component versions. Hence,
where practical the component vendors themselves may find advantage from deploying a component-oriented
approach to developing components from more primitive elements where the primitive building blocks change their
function or form less regularly.
[Figure 7 depicts a design environment for wide-scale systems ‘programming’, comprising a library of well proven, reusable virtual components together with process, system and physical solution structure models expressed via graphical and executable modelling constructs; by retaining design knowledge these enable engineering and reverse engineering of the runtime system, which is built on an integration infrastructure.]

Figure 7: Retaining design knowledge in the runtime system
If suitable component models do become widely available significant benefit will accrue if those models can be
represented graphically, are computer executable and sufficiently complete with respect to supporting the
engineering activities of various persons concerned with the life-cycle of systems. By directly deploying executable
models of components, component interactions and system structures as part of a runtime system it has been shown
to be practical to retain and reuse semantic information on an ongoing basis (Coutts 1998). This notion is depicted
by Figure 7 and has been shown to provide the basis of a component oriented approach to wide-scale systems
‘reconfiguring’ (Weston et al 1996). However, this requires means of executing models such as via model
execution services specified within CEN 310 (Shorter 199?). For example this could be achieved by extending and
developing the capabilities and use of general purpose integration services provided by available infrastructures
(Coutts 1998). If in a given application area this can be achieved in a practical and scalable way then the use of
executable models can much facilitate life-cycle engineering allowing rapid movement between modelling, analysis,
simulation and runtime life phases. Also the distinction between interoperating and distributed executable models of
virtual components and systems (used during design) and their counterpart real interoperating software components
and systems will begin to disappear. Indeed, as illustrated by Figure 8, it will become possible to move quickly and
effectively from alternative models of systems to real systems, onto new system models describing alternative
configurations, to enhanced real systems, and so on. This should very significantly reduce lead-times associated
with change management projects. In turn this should lead towards continuous operational realignment of real
systems to higher level requirements defined theoretically by models of specific business systems and process
systems. Also, as illustrated by Figure 9, the retention and reuse of the semantic information can enable and place in
context the capture of plant data, leading to improved business process visualisation (as an integral part of business
systems and process systems), and thereby to improved specification of engineering change.
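As a deliberately simplified, assumed illustration of executing the model itself at runtime, the fragment below keeps a toy process model as data in the running system and interprets it, so the same description can drive execution, be inspected for visualisation, and be edited when the process changes; real model execution services of the kind referred to above are of course far richer, and all names here are invented.

```java
// Sketch only: a toy 'executable model' interpreted at runtime.
import java.util.LinkedHashMap;
import java.util.Map;

public class ExecutableModelSketch {
    public static void main(String[] args) {
        // The model is ordinary data held by the runtime system: step name -> action.
        Map<String, Runnable> processModel = new LinkedHashMap<>();
        processModel.put("receive order", () -> System.out.println("order received"));
        processModel.put("confirm order", () -> System.out.println("order confirmed"));
        processModel.put("plan production", () -> System.out.println("production planned"));
        processModel.put("release order", () -> System.out.println("order released"));

        // Executing the model IS running the process, so the design description is never lost...
        processModel.forEach((step, action) -> action.run());

        // ...and the same description can be queried, e.g. for live process visualisation.
        System.out.println("steps in model: " + processModel.keySet());
    }
}
```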
[Figure 8 depicts a progression from process model 1 / system model 1 / solution 1, through process model 2 / system model 2 / solution 2, to process model n / system model n / solution n and on to system model n+1, illustrating rapid movement back and forth between models and realised solutions.]

Figure 8: Rapid and effective life-cycle engineering through retaining and reusing semantic information
[Figure 9 depicts a design environment providing real-time process visualisation (receive order, confirm order, release order, plan production) by scanning process data in from the runtime system, whose virtual components operate over an integration infrastructure in real time.]

Figure 9: Illustrative use of design semantics to scan process data and place it in the context of a live process visualisation
4.0 AGILE SYSTEMS IN SUPPORT OF RECONFIGURABLE BUSINESS
PROCESSES
Based upon an empirical classification of common enterprise engineering issues this section identifies a high level
decomposition of these issues into ‘views’. The primary purposes of the decomposition are to (i) scope the
problems and issues involved in change management and (ii) suggest generic groupings of systems engineering
issues (i.e. views) which if handled and maintained separately have been found to positively support reconfiguration,
reuse and scalability and thereby change management.
The decomposition is based upon collective practical experiences of MSI researchers when building manufacturing
and business systems. Indirectly, therefore, it abstracts findings made when integrating and applying various proprietary
(albeit state-of-the-art) enterprise engineering methods, concepts and frameworks (and their associated modelling
perspectives, modelling languages and modelling tools). However, a design precept of the proposed decomposition
is that it should (as far as possible) be neutral. This means that the decomposition should not impose or promote the use
of any given method, framework, toolset, etc. The decomposition presupposes that (I) it is sensible to develop and
widely apply a component-oriented approach to the life-cycle engineering of systems, and (II) such an
approach can support systems engineering in enterprises at each of the levels of system abstraction illustrated by
Figure 1. Thereby the decomposition into views embodies the principles of agile systems design and construction
outlined in the previous section.
It is intended that this decomposition should serve as no more than a starting point for developing a requirements
specification capable of guiding ongoing developments on component-based systems. The joint working group of
SC5/WG1 and the Task Force may wish to develop alternative decompositions based on GERAM (PLUS), or to
begin with the decomposition proposed here in order to position contemporary systems engineering practice and contrast it with
potential future practice based on emerging component-oriented software engineering approaches. It may also be
fruitful to use such a decomposition to help develop a ‘road map’ of existing and developing international standards.
4.1 A Top Level Segmentation of Issues
Tables 2 and 3 illustrate a top level segmentation of enterprise engineering issues.
‘Meta Activities’               | ‘Views’                                                                                   | Output ‘Models’
High Level Business Analysis    | Business Purpose; External Influences; Capabilities & Knowledge; Processes               | Business System Definition
Process Design                  | Processes; Resources; Organisation; Information; Plan (for the implementation of change) | Process System(s) Definition

Table 2: Definition of need via a business solution structure
‘Meta Activities’                         | ‘Views’                                                       | Output ‘Models’
Operational System Design & Construction | Structured Arrangement of Components; Component Interactions | Operational System(s) Definition & Realisation
Resource System Design & Construction    | Component Interactions; Component Implementation             | Resource System(s) Definition & Realisation

Table 3: Definition and realisation of need via a physical solution structure
4.1.1 Need to embed a ‘change ethic’ into systems
As explained in earlier sections of this document a precept of a component oriented approach to the life-cycle
engineering of systems is that it should embed a ‘change ethic’ into resultant solutions. Implicitly therefore factors
which change in a largely independent way should as far as is practical be decoupled from each other. Therefore
based upon findings of many empirical research studies Tables 2 and 3 comprise what the authors believe to be
essentially independent groupings (or views) of systems engineering issues. Each view concerns a largely
independent perspective on the ‘purpose’, ‘structure’, ‘components’ and/or ‘infrastructure’ of systems. Collectively
enterprise engineering activities related to each view can lead to the definition of a business solution and its
manifestation as a physical solution. A secondary level of decomposition is also recommended (as explained in later
sections) to facilitate a deeper separation of issues within each view. This secondary decomposition identifies
essentially independent sub-views. Also, sub-sub-views can be determined if required. A general outcome of the
decompositions should be a significant reduction in the reconfiguration effort required to reuse solution
fragments. However it should be re-emphasised that as yet the views and sub-views recommended herein represent
a consensus of only one group of researchers. A final choice of separations will have significant technical and
business implications for both users and suppliers of component-based systems. Therefore more work needs to be
done to independently test and develop the decompositions recommended in this paper.
4.1.2 Need for suitable modelling constructs to capture semantic information
To facilitate the capture, maintenance and reuse of design and implementation knowledge suitable modelling
constructs will be required which are capable of representing each view contained within the business and physical
solution structures. Clearly the nature of these modelling constructs may vary considerably, to reflect the way in
which different aspects of solution structures are, or will be, developed. In some cases it is necessary to encode semi-informal or fuzzy cause-and-effect relationships between issues, whereas other cases may benefit significantly from
the use of modelling constructs based on strict mathematical formalisms. Contemporary custom and practice is such
that invariably system design is based on the use of models, be that in the form of mental models, paper-based
models or computational models. Evidently graphical and executable modelling constructs can be used with great
benefit, such as when visualising and analysing alternative business requirements, systems designs and scenarios of
change. Hence where it is practical to formalise the development of views and to apply them effectively the authors
advocate the use of graphical executable models in support of understanding and developing solution fragments
pertaining to each view of Tables 2 and 3. Clearly GERAM (and its predecessor frames and methods) provides
many of the modelling constructs needed to capture the semantic information described above.
4.1.3 Need for a suitable modelling framework to facilitate ‘wide scale systems reconfiguring’
By separating enterprise systems engineering issues into views, and supporting the development of those views via
suitable models, complex systems can be conceptualised, leading to the realisation of agile (reconfigurable, reusable,
integratable and scalable) systems solutions. However, there may well be a downside to optimising
solution fragments independently with respect to a number of views. If we require those solutions to be effective
as a whole, in general it will be necessary to adopt some means of developing requirements specifications, system
designs and implemented systems as complete and effective entities.
A means of consolidating separate developments along views is provided by an enterprise modelling framework.
Such a framework should embed mappings between different modelling constructs in a multi-dimensional way (e.g.
along a view dimension, along a life-phase dimension and along a genericity dimension). In so doing it should
establish general rules needed to maintain consistency between (a) the modelling views contained in the business
solution structure and physical solution structures and (b) mappings between the business (virtual) and physical
(real) solutions. Thereby, use of a suitable enterprise modelling framework and modelling constructs would
effectively provide a ‘wide-scale systems reconfiguring’ language capable of generating agile systems by
aggregating components and suitable integration infrastructural elements.
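As a very rough illustration of what such consistency rules might amount to, the sketch below (all names, the mapping table and the check itself are invented for this illustration and belong to no published framework) treats the framework simply as explicit mappings between a business solution view and a physical solution view, against which a completeness rule can be checked.

# Hypothetical sketch: a modelling "framework" as explicit mappings between views,
# with a rule that every business process must map onto at least one known component.

business_view = {"process": ["receive order", "plan production", "machine part"]}
physical_view = {"components": ["ERP cell controller", "CNC machine", "operator workstation"]}

# Mapping maintained by the framework (business process -> realising components).
mapping = {
    "receive order": ["ERP cell controller"],
    "plan production": ["ERP cell controller", "operator workstation"],
    "machine part": ["CNC machine"],
}

def check_consistency(business, physical, mapping):
    """Report processes with no realising component, and mappings to unknown components."""
    problems = []
    for process in business["process"]:
        targets = mapping.get(process, [])
        if not targets:
            problems.append(f"process '{process}' is not realised by any component")
        for target in targets:
            if target not in physical["components"]:
                problems.append(f"process '{process}' maps to unknown component '{target}'")
    return problems or ["views are consistent"]

for message in check_consistency(business_view, physical_view, mapping):
    print(message)

In a real framework such rules would of course span many views, life phases and levels of genericity rather than a single mapping table.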
Planned developments with respect to broadening the basis of UML (Unified Modelling Language) into an EML
(Enterprise Modelling Language) could meet the above requirement to consolidate separate system engineering
views. In the meantime, however, it may be necessary to use less comprehensive, but fairly effective, means of
consolidating the development of views. Such an approach may have implications with respect to the reuse and
integration of resultant solutions in systems of wider scope.
4.1.4 Need for a suitable enterprise engineering method or framework
To facilitate improved usability and applicability of a component-oriented approach it will be necessary to
particularise the use of a suitable modelling framework and modelling constructs to allow their use by non-technical
personnel, and thereby enable change management initiatives on an enterprise-wide scale.
Typically, change management on a wide scale is achieved today via some form of consultancy-applied methodology
or framework. Normally this class of framework will be used in a human-centred way and will only be supported by
computer tools in a proprietary manner. Nonetheless, consultancy methods and frameworks typically offer means
of co-ordinating change project activities by encouraging the use of:
(i) different but complementary representations of processes;
(ii) common definitions of system elements (e.g. components, infrastructural elements and structural elements) in terms of their qualities (e.g. performance, flexibility, reconfigurability, reusability, ease of use), capabilities (e.g. functions required or provided, information requirements and service capabilities) and capacities (e.g. number of units required or provided per second).
The use of more formal enterprise engineering frameworks (typified by joint IFAC/IFIP and ISO TC184/SC5/WG1
work under GERAM) has become an alternative to the use of proprietary consultancy methods and frameworks. An
enterprise engineering framework may well have an associated modelling language (i.e. a suitable set of modelling
views and constructs, and a connecting framework) that supports the development and integration of views into
business and physical solution structures. However, the authors do not know of a single methodology and
framework which is sufficiently comprehensive or complete to connect all of the views which comprise Tables 2
and 3. Nevertheless, some combination of available enterprise engineering methods and frameworks, used in
conjunction with software engineering methods and tools might facilitate coverage of and connections between most
of those issues. Indeed significant benefit may accrue by formally defining structural links and/or mappings
between some of the views, particularly with respect to aspects of the development of operational systems and
process systems. However, the authors believe that the unpredictable scope and nature of certain classes of change
(and particularly with respect to business system issues) may militate against the general practical application of a
highly structured framework.
The authors’ experience of using both consultancy and enterprise engineering approaches (which conform to
GERAM) to consolidate different views of systems engineering issues has shown that significant benefit can be
gained from using a hybrid approach. Naturally this makes sense because in some situations where innovative and
distributed human-centred decision making is required it is only practical to provide loose co-ordination
mechanisms and information support. Alternatively other enterprise engineering activities are clearly best done
within a more rigidly defined context and supported by models and services and tools which facilitate design
synthesis, systems analysis, the auto-generation of solution fragments and even the automatic evolution of
component behaviours or architectural structures which organise relationships between components.
5.0 Developing the Business Solution Structure for an Enterprise
Table 4 provides further details about the business solution structure decomposition.
Issue | View of the Solution | Output Model or Specification
Need to generate a clear purpose for the system. | Need a clear mission or vision statement. | Need a clear specification of the business purpose that the processes need to satisfy.
Need to understand the external influences on the system? | Need to analyse and define PEST influences, market forces, customer requirements, supplier capabilities and competitor ability. | Need to clearly define customer requirements and metrics, and links to other business processes within the enterprise.
What are the capabilities and knowledge within the business? | Need to understand the competitive capabilities and knowledge held within the enterprise that can be exploited over the capabilities of the competition. | Need to understand the capabilities of machines and human resources available to the processes under consideration.
Need to define the processes that will satisfy the system requirement. | Need to understand the high level mix of process and business rules that govern the way the business satisfies the business requirement. | Need to define the set of activities that are required for each business process to fulfil the process requirements and business requirements.
Need to define how the business resources will be used to support the processes. | What resources (people, machines, information and finances) are required to support the overall business structure? | What resources (people, machines, information and finances) are required to support each activity for each business process?
Need to define any additional Organisational structure. | What organisational structures are required to support the enterprise (appraisal, promotional, departmental relationships, etc.)? | Which particular organisational entities will be responsible for each business activity and how will this be co-ordinated (organisational infrastructure)?
Need to define information requirements to support the business. | What are the general business information requirements to support the enterprise? | What information is required to support each business process and activity and how is it to be controlled and coordinated?
Need to define a plan. | Need to define timescales and deliverables for the implementation of business change. | Need to define a plan of work for the implementation of change (people, timescales, etc.).

Table 4: Business analysis issues leading to a business solution structure
Clearly the issues that need to be addressed to specify a business solution structure are interdisciplinary and
complex in nature. Generation of the business solution fragments will require co-ordinated input from various
senior and middle managers, and from associated managers, engineers, technologists and shop floor personnel in
addition to inputs from external consultants and suppliers. Individuals involved in a change management initiative
may utilise methods and tools with which they are familiar or which the company prefers. Such tools could support
competitor analysis, market forecasting, strategy development, project management, financial analysis, and so on.
Directly or indirectly the players involved may utilise a business process oriented decomposition, a database of
information, and/or a consultancy framework to help them organise and visualise their collective efforts. However
seldom will present-day enterprises (or indeed their consultants) seek to formally define much of their business
solution structure. Therefore if they are used at all then the graphical and executable models will only be used in a
localised and fragmented way (possibly as an integral part of a specific method or tool). If a business process
oriented approach is used to specify business solution structures it is likely that this will largely be centred on one
single (i.e. key) business process and as a consequence may not adequately appraise the impact of inherent
interactions between business processes.
Because this document has already become voluminous, the description above is merely indicative of common
industrial practice. However, it exemplifies the conditions under which change management initiatives need to
operate. Importantly it is evident that seldom will current practice result in a well structured, formally defined
business solution structure which can be automatically (or even semi-automatically) mapped onto a physical
solution structure in a given enterprise.
6.0 Developing the Physical Solution Structure
Table 3 illustrated a classification and separation of issues involved in designing and constructing operational
systems from reusable resource systems (or components). This section develops that classification of physical
solution structure issues bearing in mind the need to maintain consistency (during change management initiatives)
with the business solution structure.
6.1 Handling Operational System Design and Construction Issues
Table 5 recommends a secondary separation of issues related to the structured arrangement of components, which
forms part of the physical solution structure defined by Table 3. Use of such a separation (into sub-views) has
proven to be generally beneficial within MSI when engineering flexible integrated systems and when evolving their
properties, behaviours and scope.
Sub-View | Interpretation / Example
Selecting a primary architecture. | Hierarchical – Heterarchical, Client/Server, Master/Slave.
Representing the process requirements. | Capturing, analysing, refining process definitions. Need tools, agreed formats etc.
Representing business logic. | Definition of functionality required to fulfil the requirement.
Representing component behaviour. | How the behaviour of candidate VMCs is made accessible to the design activity.
Representing component interactions. | How the potential interactions of candidate VMCs are made accessible to the design activity.
Defining user-system interactions. | What interactions will be possible and when or in what order.

Table 5: Solution structure
Historically many manufacturing companies and systems builders have developed and used their own conventions
and structural guidelines to facilitate the definition of the structure of systems built from available components.
However as yet there is no public domain, comprehensive and formally defined methodology capable of supporting
the life-cycle engineering of such a structure, albeit that there has been an increasing use of formalisms when
producing software to implement solution structures embedded into contemporary manufacturing systems. As a
consequence, when contemporary approaches to operational system design and construction are deployed the
solution structures generated entangle issues related to the sub-views of Table 5. Consequently present day systems
are invariably inflexible (do not facilitate change) and essentially stand alone (i.e. do not facilitate their inclusion
into wider scope solutions).
Enterprise modelling and integration promises more complete and formal ways of specifying a structured
arrangement of components. Theoretically this should help specify and analyse the requirements of manufacturing
systems and map these requirements onto an organised set of interoperating resources (i.e. virtual and real
components). The following subsection indicates ways in which that might be achieved.
6.1.1 Selecting a Primary Architecture
There has been much debate in the academic and research communities about the relative merits of so-called
hierarchical and heterarchical system architectures, the degrees of sophistication in distributing the operational
process decision making and so forth. In theory, the process definition in conjunction with the chosen parameters for
flexibility and resolution of control of the process would be the determining factors in this choice.
In practice however the majority of implemented solutions have followed much simpler patterns. In general terms
there are two common structures, viz.: (i) client/server (C/S) architecture; and (ii) master/slave (M/S) architecture.
Sometimes such systems involve multiple layers and possibly a mix of C/S or M/S operation for different layers or,
occasionally, for different sets of interactions at the same layer.
This is not to say that more sophisticated structures might not be better. Indeed various primary architectures have
been suggested by the research community and applied to a limited extent, such as the NIST model,
the COSIMA model, PCF and the ESPRIT 809 model. However, these cases are the exception, not the rule, in
industry. Therefore either current practice is lagging the theory, or the theory is not sufficiently well developed
for practical application.
So, the necessary decisions on this issue are: (a) if the complexity requires, determining the appropriate levels of the
solution and the division into sub-systems at each level; (b) selecting the appropriate principal structure for each
sub-system identified; and (c) identifying (while minimising) any necessary ‘workarounds’.
6.1.2 Representing Process Requirements and Defining Process Definitions
To date, business process descriptions have seldom been explicitly connected to models of systems and their
interacting components. Rather, the use of process models has focussed on “what an enterprise should
do” rather than “how it can do the what”. This may be largely because the latter is hard to do and to justify in
business terms.
Nonetheless, with respect to IT systems design there is a growing need to connect models of business processes to
models of software components. Indeed this need is driving recent developments by vendors of ERP, MRP and
MES software packages in North America and Europe. Here component based and parameter driven software
applications have been developed for which values can be assigned to parameters to configure the application in
alignment with specific process needs of different manufacturing companies.
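A crude picture of such parameter-driven configuration is sketched below (the application, parameter names and values are invented for this illustration and do not describe any particular vendor's package): a single generic planning component exposes parameters whose values align it with the process needs of a particular company.

# Hypothetical sketch: a parameter-driven planning component configured per enterprise.

DEFAULTS = {"planning_horizon_days": 30, "lot_sizing_rule": "lot-for-lot", "uses_kanban": False}

def configure_planner(**overrides):
    """Return a planner configuration: generic defaults overridden by enterprise-specific values."""
    unknown = set(overrides) - set(DEFAULTS)
    if unknown:
        raise ValueError(f"unknown parameters: {sorted(unknown)}")
    return {**DEFAULTS, **overrides}

# Two enterprises configure the same generic component to match their own processes.
maker_a = configure_planner(planning_horizon_days=7, uses_kanban=True)
maker_b = configure_planner(lot_sizing_rule="fixed-order-quantity")
print(maker_a)
print(maker_b)
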
Fairly comprehensive formal definitions of ways of connecting models of processes, systems and components (such
as software applications) are described as part of the CIMOSA specification. Furthermore other enterprise
modelling methodologies and architectures like GRAI-GIM, PERA, ARIS and TOVE suggest alternative structural
connections. Figure 10 exemplifies the use of one of these architectures. It illustrates the use of the SEWOSA
enterprise engineering workbench which developed and operationalised key facets of the CIMOSA open systems
architecture. Thereby SEWOSA supports and connects the definition of a series of modelling views. This allows a
team of system designers and implementers to define and realise suitably structured arrangements of components,
covering each of the issues included in Table 5. A meta model, which conforms to the CIMOSA architecture, is
implemented and maintained by the SEWOSA workbench. This allows the separate modelling perspectives to be
developed by different designers and implementers, yet maintains consistency between the models so that more
effective and physically complete solution structures are defined. It thereby becomes possible to make manifest
(explicit) the structure of wide-scale systems, so that systems reconfiguration, integration and re-use can be
facilitated in organised and effective ways.
[Figure 10 (diagram): modelling views captured by the SEWOSA workbench, spanning a requirements definition (context diagram, domain diagram, structure diagram, behaviour diagrams and function diagram), domain templates and domain description, and a design specification (object diagram, behaviour diagrams and configuration diagram). Key: DP - domain process; BP - Business Process; EA - Enterprise Activity; FE - Function Entity; ARC - Active Resource Component; C - Capability.]
Figure 10: Some of the modelling views supported by the SEWOSA workbench
6.1.3 Representing Business Logic and System Behaviour
The business logic (i.e. a set of logical functions which need to be included into a given system) will be application
dependent and (at least implicitly) will need to be linked to definitions of process requirements. There are many
alternative ways of encoding business logic. Often, when defining and implementing the function blocks of systems,
general purpose programming languages are used, such as C++, C, PASCAL, Visual Basic, etc. However, with
these general purpose programming languages the meaning (or semantics) of the business logic (which is some
abstract representation of system functionality which can be understood and reused by a human designer) gets lost
within the detailed system code. Therefore this knowledge is not distinctly retained so that it can be used to change
the system design when and as new requirements emerge. Research is ongoing to address such problems but this
situation makes ‘wide-scale systems reconfiguring’ costly and the solutions very inflexible. Potentially enterprise
modelling frameworks and architectures such as CIMOSA and ARIS provide modelling constructs which can be
used to support the definition of system functionality in a way which retains and promotes the reuse of design
knowledge. Figure 10 also illustrates in outline how this can be done using the SEWOSA workbench. Here
behaviour models are separate from, but can be readily linked to, process and function models. Also SEWOSA
behaviour models can be mapped readily onto lower level interaction models, which are formally represented by
Petri Nets and models of components described in Estelle, UML and EXPRESS.
6.1.4 Representing Component Behaviour and Component Interactions
As indicated in the previous section, various formal description techniques have emerged which are capable of
representing the behaviour of virtual components and the way in which they interact. Means of representing
behaviour are included as part of popular object oriented approaches to software and system design, e.g. as part of
UML. Here, state diagramming techniques are used. Also commonly used to model component behaviour are
extended versions of Petri Nets, as they offer good visualisation and simulation capabilities. A unification and
extension of state diagramming and Petri Net techniques is offered by the binary transition language (BTL)
developed by Coutts to execute behaviour models over different integration infrastructures, such as CIM-BIOSYS,
CORBA and the Internet. Estelle and IDL are formal description languages developed to represent interaction and
communication protocols. Notwithstanding their technical capabilities, it is likely that the Interface Definition
Language (IDL) will be used most widely in view of its close connection to CORBA developments as part of
standards initiatives by the Object Management Group. Historically the development of EXPRESS as a formal
language has been linked to initiatives world-wide on information systems modelling. EXPRESS is now very
widely used and has powerful description capabilities which can model information entities and their relationships
as part of the physical solution structure definitions.
Arguably therefore, there are many descriptive techniques available to model virtual components, their behaviour
and interactions albeit that currently they are weakest in terms of describing concurrent behaviour, particularly
where tightly coupled synchronous behaviour is needed. These formal descriptions of VMCs need to be a consistent
and integral part of a complete solution structure. Also shared modelling constructs are needed to link formal
descriptions of component implementation and component interaction issues.
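To give a concrete flavour of such behaviour descriptions, the following sketch (the net, its places and its transitions are invented for this example and follow neither BTL, UML state diagrams nor any specific Petri net dialect) executes a minimal place/transition net describing a component that must be loaded with a part before it can process and release it.

# Hypothetical sketch: a minimal place/transition net describing component behaviour.

# Marking: number of tokens per place.
marking = {"idle": 1, "part_present": 0, "processing": 0, "done": 0}

# Transitions: (name, places consumed from, places produced to).
transitions = [
    ("load_part",   ["idle"],         ["part_present"]),
    ("start_cycle", ["part_present"], ["processing"]),
    ("end_cycle",   ["processing"],   ["done"]),
]

def enabled(consume):
    """A transition is enabled when every input place holds at least one token."""
    return all(marking[place] > 0 for place in consume)

def fire(name):
    """Fire a transition by name if it is enabled, updating the marking."""
    for t_name, consume, produce in transitions:
        if t_name == name and enabled(consume):
            for place in consume:
                marking[place] -= 1
            for place in produce:
                marking[place] += 1
            return True
    return False

for event in ["start_cycle", "load_part", "start_cycle", "end_cycle"]:
    print(event, "fired" if fire(event) else "not enabled", marking)
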
6.1.5 Defining User-System Interactions
If a change ethic is to be embedded into a system then by definition it is necessary to consider the complete lifetime
of a system. It follows that manufacturing system users include not only shopfloor operatives, supervisors and
engineers but also businessmen, managers, system architects, system designers and so forth. It also follows that the
process of defining human(user)-system interactions can be very complex. Invariably using current approaches to
designing and constructing systems, humans are treated essentially as a system component which accordingly will
be viewed as having a set of functional capabilities, qualities and constraints (which unfortunately may be ill
defined). It is not obvious whether this situation will change significantly in the foreseeable future. However, if
humans continue to be modelled rather mechanistically as virtual components it will be important to develop better
understandings of the rules governing human-system interactions (such as allowable multimedia rules, profiles,
permissions, sequences, etc.) and the constraints imposed on allowable interactions by current human computer
interface technology.
6.2 Component Implementation Issues
Table 6 recommends a secondary decomposition of the component implementation view of Table 3, also based on
previous practical and theoretical studies in MSI.
Sub-View | Interpretation/Examples
Incorporation of the real elements. | How physical device behaviour is made externally available via requests and instructions.
Definition and execution of behaviour. | Potential levels of granularity – commands, programs, abstract task performance.
Access to element specific information. | Purely local or externally available information. Data formats and structures.

Table 6: VMC Implementation
6.2.1 Incorporation of the Real Elements
The actual components required to construct manufacturing systems may be broadly classified as follows:
i. Humans: fulfilling a range of roles such as equipment operation, managerial, support, maintenance etc. As humans cannot be directly interfaced to the rest of the system they will be supported by a range of computer based devices, from switches and lights to computer displays.
ii. Software applications: generally falling into two sub-categories of system and support functions. The former contribute directly to the process which the system instantiates and would include planning, control, possibly manufacturing operations, data collection/storage and so on. The latter comprise those functions which are indirectly necessary, such as the post processing of CAD data to generate machine programs.
iii. Machines: which typically will be computer controlled to some degree and will generally be used for physical manufacturing operations such as part handling, metal cutting, inspection etc.
Unfortunately, most manufacturing machines (and indeed software applications) are conceived as fundamentally
stand-alone. Any ‘external’ access (i.e. by other system components) to their internal function and data is normally
an afterthought. Worse still, such access is often implemented using mechanisms and ‘protocols’ completely
different from those of similar equipment.
The virtual component concept is very useful in supporting an ability to design and specify solutions in abstract
terms - that is, not dependent on the particular machines, interactions etc. to be used. As the foregoing discussion
illustrates, this is essential if the desired solution flexibility, cost effectiveness and timeliness levels are to be
achieved.
Unfortunately, a choice of available manufacturing machinery, applications etc. may not exist which corresponds
directly to component models (i.e. virtual components) used during system design. It is controversial when and
indeed whether any such consistent set of real components will become available. However much the argument is
put forward that the makers of such components will benefit from adopting a common model the suspicion remains
that many suppliers perceive their market edge as stemming from the very uniqueness of their product in operational
detail as well as capability. This is in addition to suspicions about whether the ‘right’ common model has been, or is being, defined.
It therefore seems that solutions will be built for some considerable time using ‘non-idealised’ components. If this is
to be compatible with the development and adoption of abstract design techniques then these components will need
to be either modified or accommodated via additional interface functionality. Modification may involve the
component's supplier and may be the route by which the latter eventually adopts and implements a common
‘component’ model. The second method requires that the extra interface capability is generated either by the solution
implementer or possibly by third-party companies.
As discussed below, a ‘gateway’ is often required to provide inter-connection between a manufacturing component
and a communication mechanism. It has been common for additional higher level interface functions to be included
to ‘convert’ between interactions using the common model on the system side and interactions with specific
machinery and application software on the component side. Although somewhat inelegant, this approach does
provide much greater flexibility by dissociating specific manufacturing hardware/software/humans from the rest
of the system. The concept is well known to the ‘Object Oriented’ software community and is known as ‘wrapping’.
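The wrapping idea can be sketched very simply (the device, its command dialect and the common interface below are invented for this illustration): a vendor-specific interface is hidden behind a wrapper that presents the common component model expected by the rest of the system.

# Hypothetical sketch of 'wrapping': a vendor-specific machine is presented to the
# rest of the system through a common virtual component interface.

class VendorSpecificMachine:
    """Stands in for a real device with its own idiosyncratic command set."""
    def send_raw(self, command):
        print(f"[vendor protocol] {command}")
        return "OK"

class VirtualMachineComponent:
    """Common interface assumed on the system side (start/stop/status)."""
    def __init__(self, device):
        self._device = device
    def start(self, program):
        # Translate the common request into the device's own dialect.
        return self._device.send_raw(f"RUN PROG={program}")
    def stop(self):
        return self._device.send_raw("HALT")
    def status(self):
        return self._device.send_raw("QRY STAT")

component = VirtualMachineComponent(VendorSpecificMachine())
component.start("Load_Tool")
component.stop()
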
The down-side of these approaches is, of course, increased cost. It would seem that end users are not currently
sufficiently convinced of the benefits of enhanced system flexibility to go the extra mile to achieve it. This implies
that the higher level analysis and design tools are currently not capable of convincingly predicting the benefit (in the
case where it does exist, of course - it may not always). This may be due to their current state of development but
another factor is likely to be the fact that they do not ‘seamlessly’ - if at all - link to the ‘business oriented’ tools
used higher up in the user organisations.
6.2.2 Definition and Execution of Behaviour
An increasing number of components are becoming more flexible in terms of them being ‘programmable’ via stored
data instruction sequences. What this programmability effectively means is that the way the behaviour of a given
component is defined is split into a generic part and a specific part. The first includes, for example, axis control,
transformations etc., while the latter constitutes a much higher level definition of what the component will do.
How high this level of behaviour definition is determines how flexible an end solution can be.
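This split can be pictured as follows (purely illustrative; the instruction set and program are invented): the generic part is a small interpreter built into the component, while the specific part is a stored, replaceable instruction sequence that defines what the component will actually do.

# Hypothetical sketch: generic interpreter (built into the component) executing a
# specific, replaceable stored instruction sequence.

def run_program(program):
    """Generic part: interpret a stored sequence of (instruction, argument) pairs."""
    handlers = {
        "move_to": lambda arg: print(f"moving axes to {arg}"),
        "clamp":   lambda arg: print(f"clamping with force {arg}"),
        "dwell":   lambda arg: print(f"dwelling for {arg} s"),
    }
    for instruction, argument in program:
        handlers[instruction](argument)

# Specific part: the stored data that defines what this component will actually do.
load_part_program = [("move_to", (120, 45, 10)), ("clamp", 50), ("dwell", 0.5)]
run_program(load_part_program)
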
6.2.3 Access to Element Specific Information
Many components maintain or generate information which would be useful to other system components. This
information may relate to behaviour definition, execution state, production progress or specific work-piece/tool data
and so on. In accord with the primarily stand-alone concept underlying most available components such information
is often maintained ‘behind’ some form of supplied interface. Thus it is only available via certain mechanisms and in
certain formats both of which are generally highly device specific.
Among others, the Manufacturing Message Specification (MMS) standardises message protocols
between system devices. Indeed the ISO 9506 protocol standard was developed to define manufacturing system
objects (effectively a set of virtual components) and message protocols between objects (thereby standardising
aspects of message formats and semantics, i.e. encoding the meaning of specific messages). ISO 9506 was
developed to include companion standards which define common types of message interaction between Virtual
Manufacturing Devices (VMDs), including robots, programmable logic controllers (PLCs) and CNC machines.
Unfortunately, however, the industrial take-up of MMS has been limited. This may be: because the architecture of
the MMS/MAP approach is not ideal; because the use of common message formats and partially common semantics
solves only a fraction of the general system integration problem (hence can only partially realise benefits from
system agility); or because available supporting systems engineering tools do not facilitate the approach adequately.
Correspondingly, even wider scale initiatives linked to PDES/STEP and EXPRESS have attempted to derive, and
increase the acceptance of, data models in support of the life-cycle engineering primarily of products, but also of
other system information such as ‘workflow’ and process definitions.
The situation currently parallels that of external access to a component’s behaviour, as few available devices
incorporate the concepts behind such models to any great degree. Thus in the short to medium term it seems likely
that the translation of unified, abstract design time representations will require extra work at build time. It will
require the incorporation of additional functionality around the chosen components to make them appear to fit the
model as far as the rest of the system is concerned. Again, as the effort to do this is visibly significant and the
potential future benefits difficult to quantify with current tools, the frequent result is a ‘hard wired’ information
exchange structure between pairs or sets of components.
6.3 Handling Component Interaction Issues
Table 7 recommends a secondary decomposition of the component interaction view of Table 3.
Sub-View | Interpretation/Examples
Data exchange. | Common data exchange mechanisms. Digital networks and services.
Exchange of instructions and requests. | Message sets, structures and formats. Structured interaction dialogue.
Information exchange. | Common formats and structures.
Special situations. | Close coupled activities, highly constrained requirements.

Table 7: Component interaction
6.3.1 Data Exchange
The most basic facility underpinning any virtual or real component interaction is that of transferring blocks of data
from one component to another (or several others). At its most basic it does not really look like data exchange, e.g.
where a ‘transmitting’ component turns binary outputs on or off and (an)other component(s) act on the changes of
state. This method is very widely used in shop floor systems and is clearly particularly useful for interaction
involving one or more devices of low processing capability, e.g. use of a switch to operate an actuator. It can, though,
be usefully employed between more sophisticated devices such as a manufacturing machine and its attendant workpiece handling robot for example.
The advantages are very low cost and complexity, and very high performance and reliability. These are always at a
premium in real operating solutions. The disadvantages arise when more complex interactions are required - indicating to another component which of a choice of parts should be loaded, say. Furthermore this type of
implementation is inherently localised, both in terms of geographical distribution and, more importantly, the specific
machines or applications involved. It would be very difficult to achieve an agreed set of signals for use in certain
situations or with a certain type of machine apart from relatively trivial cases such as most manufacturing machines
having a stop/start or program select button for interaction with humans for instance.
Therefore, for more complex interactions many components implement a more sophisticated data interchange
mechanism often taken from the computer industry. The typical solution involves an RS232 (or closely related)
interface to get medium to large blocks of bytes from one component to another. Unfortunately, presumably from a
mix of parochial thinking and plain lack of proper understanding, there are probably almost as many unique
implementations of the RS232 interface as there are machines using it.
The use of LAN technology greatly simplifies the physical distribution and the available hardware has become
standardised, relatively low cost, reliable and widely available as it is used to connect PCs, etc. together in office
environments. In fact the growing use of common PC hardware as the basis of manufacturing machine controllers
may point to a way forward. Apart from this, for various reasons - including cost, lack of demand, difficulty of use
and the more complex attendant issues covered under “Exchange of instructions and requests” - relatively few
components have hitherto possessed a direct LAN connection capability. However, in the wake of the MAP
initiative an increased number of vendors of manufacturing components, like CNC machines, PLCs, SCADA
packages, etc., developed LAN interfaces to a selection of their products.
Generally the lack of LAN interfaces has hitherto been overcome by the use of ‘gateways’. These devices generally
appear as a normal LAN interface on one side and a ‘local machine’-specific interface on the other. Thus, the
peculiarities of the local device can be isolated (to a greater or lesser degree) from the rest of the system. The local
interface may range from binary signalling via custom RS232 through to a different LAN technology, while the
‘standard’ main LAN interface enables interaction with all the other MCs in the system which support use of the
same types of interface. The gateway internals perform the necessary manipulation of the data blocks such as
buffering or re-transmission on error and so forth. Much more functionality is usually incorporated though to fulfil
the needs which follow hard on the heels of data interchange as already covered, see “incorporation of the real
elements” above.
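A gateway of the kind described can be sketched crudely as follows (the message layout, frame format and class names are invented for this illustration; a real gateway would also handle buffering, error recovery and much richer local protocols): it accepts system-side messages in a common form and re-expresses them on a device-specific local interface.

# Hypothetical sketch of a gateway: common LAN-side messages are translated onto a
# device-specific local interface (here a stand-in for, say, a custom serial link).

class LocalSerialLink:
    """Stands in for the device-specific side of the gateway."""
    def write(self, frame):
        print(f"[local link] >> {frame!r}")

class Gateway:
    def __init__(self, local_link):
        self.local_link = local_link

    def on_lan_message(self, message):
        """Translate a common-format message dict into the local frame format."""
        frame = f"${message['target']}:{message['operation']}:{','.join(message.get('args', []))}#"
        self.local_link.write(frame)

gateway = Gateway(LocalSerialLink())
gateway.on_lan_message({"target": "CNC01", "operation": "START", "args": ["Load_Tool"]})
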
Whether a direct LAN interface or a gateway is used the concept of protocols is vital. These ensure not only that the
‘plugs fit the sockets’ but that the data blocks can be transferred to a correct destination with an appropriate degree
of confidence that no corruption has occurred. The protocol therefore covers matters of device addressing - such as
Internet numbers or WWW URLs - splitting of long transfers into multiple short ones, detecting and correcting
errors and so forth. In the past these issues have been the subject of truly enormous debate and needed to be fairly
well understood by a system builder. Fortunately the maturing of the technologies now means that much of the
technology is buried inside available products and, with one notable exception, the user need not be concerned with
the details. The exception concerns the still relatively commonplace lack of inter-operability between devices which
ostensibly use the same set of protocols. Indeed in recent years there has been a significant focus of attention on
so-called middleware issues, where middleware software such as CORBA (Common Object Request Broker
Architecture) and Internet tools abstract the user further away from LAN and other implementation details by
providing virtual models of data and message interchange.
6.3.2 Exchange of Instructions and Requests
Even when simple binary signalling is used it is necessary to associate some form of protocol with the signals. At
the simplest level this covers the linkage between the signal and the implied action. More complex signalling will
require more protocol - a component needing to signal back that action is now under way, for instance.
However as soon as the more comprehensive mechanisms involving the exchange of blocks of data - or just
sequences of numbers as it in fact is - are adopted then much more comprehensive and sophisticated protocols are
required. These are an extension of the protocols introduced above. While the data exchange protocols can be seen
as enabling the transfer of data blocks, the ‘messaging’ protocols enable the interpretation of the data when it has
arrived. Thus a transferred block of data or ‘message’ might be interpreted as an instruction to, for example, execute
program “Load_Tool” with parameters “Tool_1” and “Holder_4”. This transfer of meaning can only occur if the
components involved both use the same protocol (or can negotiate a common protocol) for the message ‘syntax’.
Thus the data block must be recognisable as an acceptable message, the numbers in the message must be of
appropriate value and occur in an appropriate order and so forth. Any errors which occur in this process will ideally
be made known to the ‘sending’ end but this may not be possible if the data is complete garbage.
In fact, even more protocol than this is required as the message must arrive at an appropriate time. That is to say that
the recipient must be in an appropriate state to act on the message content or that the message is correctly positioned
as part of some sequence. Thus a structure to a series of requests and responses is necessary. This structure will be
indirectly related to “Selecting a primary architecture” above in terms of hierarchical relationships and
synchronous/asynchronous interaction.
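A minimal sketch of such ‘messaging’ protocol checks is given below, using the Load_Tool example from above (the message layout and field names are invented for this illustration and are not taken from MMS or any other standard): the receiving component checks that the data block is recognisable as an acceptable message, that the program is known, and that the parameters occur in the expected number before the implied action is accepted.

# Hypothetical sketch: interpreting a transferred data block as an instruction.
# The message layout is invented: "EXEC|<program>|<param1>|<param2>".

KNOWN_PROGRAMS = {"Load_Tool": 2, "Unload_Tool": 1}   # program name -> expected parameter count

def interpret(message):
    """Check the message syntax and return the implied instruction, or an error."""
    fields = message.split("|")
    if len(fields) < 2 or fields[0] != "EXEC":
        return ("error", "not recognisable as an acceptable message")
    program, params = fields[1], fields[2:]
    if program not in KNOWN_PROGRAMS:
        return ("error", f"unknown program '{program}'")
    if len(params) != KNOWN_PROGRAMS[program]:
        return ("error", f"'{program}' expects {KNOWN_PROGRAMS[program]} parameters, got {len(params)}")
    return ("execute", program, params)

print(interpret("EXEC|Load_Tool|Tool_1|Holder_4"))   # accepted instruction
print(interpret("EXEC|Load_Tool|Tool_1"))            # error: wrong parameter count
print(interpret("garbage"))                          # error: not recognisable
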
6.3.3 Information Exchange
The basic level of component interaction involves mainly control-oriented functions such as start/stop/status and so
forth. Even when the upload and download of programs is incorporated, the data involved is usually specific to the
target machine and opaque to the rest of the system. That is, it could not be recognised or interpreted as a set of
instructions except by the particular device it is used by. Thus there is not really the concept of the exchange of
information as opposed to data and this again simplifies the system at the expense of making it highly specific.
However, to achieve high levels of system flexibility it is necessary to move beyond this. For example, if a system
incorporates a machine whose program definitions are specific to that particular brand, and it was found desirable
to replace it with another machine, then the programs would all have to be re-generated. Even worse,
substitutions of manufacturing components with ones from a different class (e.g. human by machine) would require
wholesale re-engineering of the system. Similarly, if the machine programs are inherently dependent on, say, a
particular work-piece dimension the impact of a design change will be large. Ideally, components should be able to
access and use product information during operation.
Thus, it becomes highly desirable that not only are common mechanisms available for handling the information
requirements of a system but also that consideration is given to the format and content of the system related
information.
6.3.4 Special Situations
Despite the advantages of adopting commonised system structures, concepts and mechanisms in enabling much
greater abstraction of the design process, it seems certain that there will always be a need to allow for special cases.
A good example of this is the use of programmable manipulators whose specific operations need to be modified at
run time. This will typically be for two reasons.
The first is when, say, a machine must allow for normal variations in the manufacturing process. These variations
may be in dimensional tolerance, part presentation, part recognition etc. when the robot would typically operate in
close co-operation with some form of sensing - mechanical, visual or whatever. Alternatively, a function such as ‘in-process’ measurement might require modified path trajectories in real time.
The second general case is that where manipulators share a common work-space or need to co-operate to achieve a
task. It may be possible to pre-determine and pre-program for all the constraints involved, but this may drastically
impair system flexibility.
In either case, the interactions between the components are likely to require much greater performance than is
available from either the mechanisms or protocols used for commonised interactions in the rest of the system.
Specifically, the limitations will probably be related to timing and the highly specialised nature of the interaction
content and the requirements to support synchronisation between effectively parallel executing threads of behaviour.
Two problems arise from the specialised nature of the interactions or the high performance mechanisms required to
support them.
i. How are these represented in the abstract design process?
ii. How is the impact of ‘oddball’ elements localised in the system?
At present it is often simplest to treat the devices involved as one larger component as far as the design process is
concerned and to treat their interactions as a separate design activity. This is more problematic if the devices interact
with other system components as well as each other. In this case, the design may be split into two (or more) parts
and a different set of common concepts used within each part. In any circumstance it is important to recognise early
and then separate and maintain the different ‘domains’ as much as possible.
7.0 FIRST STAGE CONCLUSIONS
In suggesting a partly proven decomposition of change management / systems engineering issues, this document has
sought to scope the problems involved in developing a new generation of agile systems (and hence reconfigurable
business processes) based on a component oriented approach. Despite its voluminous nature, however, it has only
been practical for this paper to indicate the depth of those problems.
Possibly taking this decomposition as one of a number of starting points the Enterprise Modelling and Integration
community (and its representatives on the joint IFAC/IFIP and ISO working groups on enterprise architectures) are
well placed to provide a requirements specification which can guide developers of component-based enterprise
system engineering approaches.
The joint working group is invited to consider (a) whether it sees the development of guides for developers of
component-based paradigms as a useful and important role for it to assume, and (b) if the answer to (a) is ‘yes’, how
it might develop such guides and whether the ‘Aunt Sally’ decomposition herein has a role to play.
REFERENCES
Clements, P.E., 1998,
Coutts, I. A., 1998, An Infrastructure to Support the Implementation of Distributed Software Systems, Ph.D. Thesis,
Loughborough University.
Gannon, D.D., 1998, Component Architectures for High Performance, Distributed Meta-Computing,
http://www.objs.com/workshops/ws9801/papers/paper 086.html
Gascoigne, J.D. and Weston R.H., 1998, Robot Integration within Manufacturing Cells, Handbook of Industrial
Robotics, 2nd Ed., Ed. S.Y. Nof, John Wiley & Sons.
Goldman, S.L., Nagel R.N. and Preiss, K., 1995, Agile Competitors and Virtual Organisations, Van Nostrand
Reinhold Pub., New York. ISBN 0-442-01903-3.
Kawalek, P. and Greenwood, M., 1998, Modelling in the Context of Systems,
http://www.cs.man.ac.uk/ipg/sebpc.html
Kawalek, P. and Leonard, J., 1996, Evolutionary Software Development to Support Organizational and Business
Process Change: A Case Study Account, Journal of Information Technology, 11, pp. 185-198.
Lehman, M.M., 1991, Software Engineering, the Software Process and their Support, Software Engineering Journal,
September 1991, pp. 243-258.
Prins, R., 1996, Developing Business Objects - A framework driven approach. McGraw Hill, ISBN 007 709 294 5.
Schön, D.A., 1971, Beyond the Stable State, Random-House, New York.
Sims, O., 1994, Business Objects: Delivering Cooperative Objects for Client-Server, IBM McGraw-Hill Series, ISBN
0-07-707957-4.
Warboys, B.C., et al., 1998, Business Information Systems: A Process Approach, McGraw-Hill, in press.
Weston, R.H., Edwards, J.M. and Hodgson, A., 1994, “Model-Driven CIM: A Framework and Toolset for the
Design, Implementation and Management of Open CIM Systems” Final Grant Report to SERC/ACME, MSI
Research Institute, Loughborough University.
Wileden, J.D. and Kaplan, A., 1998, “Middleware as Underwear: toward a more mature approach to
compositional software development”, http://www.objs.com/workshops/ws9801/papers/paper 061.html