ENEA-ING-TISGI Technical Document, CR Casaccia, 14.12.1998

Computer-Aided Emergency Management Training Based on an Abstract Intelligent Agent Model

Adam M. Gadomski, Claudio Balducelli
Italian Agency for New Technologies, Energy and the Environment ENEA, C.R. Casaccia, Via Anguillarese 301, 00060 Rome, Italy
e-mail: gadomski_a, [email protected]

Abstract

An emergency in the human environment is an extreme situation for decision-makers, in which emergency management has explicitly defined goals and relatively well distinguished physical domains. It requires routine interventions defined in the abstract context of losses, but it also involves unforeseen scenarios. For this reason, emergency is especially suitable for the identification of human-agent cognitive architecture and intelligent behavior. The aim of this paper is to discuss the context and the dynamics of the cognitive properties of emergency managers involved in computer-supported training of cooperation during an emergency state. Within an abstract intelligent agent model, the abstraction mechanisms of learning, discovery and navigation between different levels of student meta-preferences and meta-knowledge are analyzed. Some patterns for structuring the human agent's domain of intervention are also presented. Models of the reasoning architectures of the trainee-manager and of the human tutor are confronted. The assumed intelligent agent model of a trainee enables an explicit conceptualization and analysis of the managerial decision-making framework and an interpretation of human errors. The training supervisor may thus identify the causes of a trainee's improper decisions. In general, the errors refer either to the emergency domain or to the cooperation domain. Using the abstract intelligent agent framework, in both cases the errors are caused by:
- insufficient or false information and, in consequence, a wrong situation assessment;
- insufficient or false knowledge;
- an inappropriate preference hierarchy, which includes personal motivation criteria;
- temporary physical and psychical stress.
The general conceptualization framework employed in this study is the TOGA (Top-down Object-based Goal-oriented Approach) meta-methodology, which has been developed at ENEA since 1986. Our illustrative examples are based on the results of the CEC Environment project MUSTER (Multi-User System for Training and Evaluating environmental emergency Response).

1. Introduction

The paper discusses the applicability of an abstract intelligent agent model to the design of Intelligent Training Systems (ITS) and to the interpretation of trainee errors. The aim of this class of ITS is to teach human decision-makers managerial and cooperation patterns for large-scale emergency situations. An emergency situation in a human organization context is an extreme situation with relatively well distinguished domains of agent activity, one which requires rational explanations of the agent's interventions. The cooperation among human agents is subordinated to commonly accepted goals and should be carried out according to explicitly adopted top preferences. In emergency procedures, the agent's activity domains include its physical environment, its social environment and the organizational (intervention) units. These domains are divided among the human agents according to their organizational roles. The agent's autonomy is limited by some a priori established constraints and driven by more or less specific directives.
The manager-agent's problem is how to recognize the situation in a proper conceptualization context and to choose actions or a reasoning activity adequate to the required general preferences and to the available knowledge. The design of computer-supported training systems requires the elaboration of a flexible, human-oriented framework of training procedures. Our illustrative example refers to the CEC Environment MUSTER project. Its goal is the elaboration of a multi-user tutoring system, currently under development, for training the cooperation between emergency managers during emergency situations in the Genoa Oil Port [Casablanca, 82]. The foundations of emergency management modeling have been developed at ENEA since 1989 in the frame of the ESPRIT project "Information Technology Support for Emergency Management" [Gadomski, 89], [Sepielli, 89], [Gadomski, 90]. The results obtained are used in the present paper. The methodological approach and the general conceptualization of an abstract intelligent agent, which we heuristically employed for modeling student-managers, are based on the TOGA theory developed by Gadomski, see for example [Gadomski, 88, 89, 93]. The subject of this paper follows from another work [Balducelli, 92] related to the MUSTER project.

The design of an ITS requires modeling of the emergency managers and of their abstract instructor. The abstract instructor's functions should be divided and allocated between the human instructor and the ITS. The basic properties of the intelligent agents employed in a training process, and their contexts, are identified and discussed. The specific property of the MUSTER project is the tutoring and training of students at the level of management. For this task it is not sufficient to model the knowledge to be transferred from the instructor to the students and how this transfer can be realized; it is also necessary to implement some meta-rules which enable the students to increase their capability of goal-oriented operations on their own domain and cooperation knowledge. One of these capabilities is the possibility of abstraction. It is an intrinsic property of an intelligent agent and it is analyzed in more depth in the next section.

2. Training of Abstraction and Learning Capabilities

The abstraction capability is one of the most important features of an intelligent agent. From the dawn of human civilization, this capability allowed the social evolution of man over the other animals. We can imagine that one day a hominid used a wedge-shaped stone to kill prey. Then, day after day, he killed many other animals using the same object; finally he understood that an object could be modeled in his mind if it was associated with a predefined goal. In this modeling activity he discovered that it was not important that the modeled object be exactly the same as the one he had found by chance; it was important only that some of its attributes and features/properties were similar to the attributes present in his mental model of the object itself (sufficient hardness, adequate shape, etc.). The human mind thus learned to use the concept of a tool, which is an abstract concept. The capacity to abstract conceptual properties from concrete examples is a typical feature of the human mind and is probably the most important characteristic of an agent's intelligent behavior. Many works in the AI literature analyze this important feature of intelligent agents.
The necessity of constructing abstract models was considered for the qualitative modeling of physical systems [Kuipers, 93] and for abstract interpreters of computer programs [Cousot, 92]. Abstraction is also recommended as a useful strategy for acquiring and formalizing knowledge for expert system building [Balducelli, 90]. All software techniques implicitly include the abstraction mechanism. In the perspective of modeling the behavior of an intelligent agent, it is necessary to consider that a strong relation seems to exist between its abstraction and its learning capacity. An interesting framework in which the concept of abstraction appears associated with the concept of learning was the ITSIE project of the EU ESPRIT program. The goal of this project was the architectural definition of Intelligent Tutoring Systems for Industrial Applications. It proposed an automatic adaptation of the learner agent's cognitive behavior, moving up and down between the three Rasmussen knowledge levels [Rasmussen, 82]: the skill, rule and model levels. The agent acquires data and tries to apply its skill (associative) knowledge; if no skill knowledge is available, it passes to the rule knowledge level, for which the skill knowledge serves as data. Rule knowledge enables rule-based reasoning, which can produce new skill knowledge. If rule-based reasoning is not sufficient to find adequate skill knowledge, the agent's reasoning is shifted to the model knowledge level. Model knowledge includes meta-rules and model relations which enable a learning/discovery process at the rule level.

The example of reasoning paths in Fig. 1 illustrates how a human can learn something more about his/her domain of activity only by increasing his/her own abstraction capability. During the evaluation of the observation, having learned a new concept (or a rule) at a different abstraction level, he can pass to a lower abstraction level in order to act more efficiently in his/her physical intervention domain (= the agent's 'end-domain of activity', end-d-o-a). From this point of view, the main reason for which an intelligent agent performs abstraction processes is the need to learn or discover new features, relations or operational rules necessary for achieving a current intervention goal in the end-d-o-a. Starting from known particular situations, humans are able to elicit abstract features and to build models which are applicable in various situations of the same class of domains. When humans acquire competence and expertise in an end-d-o-a, they also improve their capability to increase and decrease the abstraction level of reasoning.

Fig 1 - Navigation between the three Rasmussen knowledge levels (three abstraction levels): observation and data acquisition move reasoning up from the skill to the rule and model knowledge levels, while learning/discovery and activation push new knowledge down towards action execution.

In the TOGA theory two abstraction hierarchies are distinguished, the generalization (GL) hierarchy and the meta-level hierarchy; for example, the following successive structures are defined: meta-preferences, meta-knowledge, knowledge on meta-preferences, meta-knowledge on meta-preferences, and so on. The hierarchical abstraction process can be performed from different points of view and can lead to the construction of different abstraction spaces. The formal architecture and the navigation between different abstraction levels are discussed in Gadomski's paper [Gadomski, 93].
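The navigation between the three knowledge levels described above can be rendered as a small decision loop. The following Python fragment is only an illustrative sketch: the knowledge stores and their contents are hypothetical placeholders and do not correspond to any MUSTER or ITSIE implementation.

```python
# Minimal sketch of the skill/rule/model navigation described above.
# The knowledge stores (skill_kb, rule_kb, model_kb) are hypothetical
# placeholders, not part of the MUSTER system.

def decide(observation, skill_kb, rule_kb, model_kb):
    """Return an action, moving up the abstraction levels only when needed."""
    # 1. Skill level: direct associative lookup (observation -> action).
    action = skill_kb.get(observation)
    if action is not None:
        return action

    # 2. Rule level: rule-based reasoning may produce new skill knowledge.
    for condition, action in rule_kb:
        if condition(observation):
            skill_kb[observation] = action      # learning: cache as a new skill
            return action

    # 3. Model level: meta-rules generate (discover) new rules,
    #    which are then pushed down to the rule level.
    for meta_rule in model_kb:
        new_rule = meta_rule(observation)
        if new_rule is not None:
            rule_kb.append(new_rule)            # discovery: new rule knowledge
            condition, action = new_rule
            if condition(observation):
                skill_kb[observation] = action  # push the new knowledge down
                return action

    return None                                 # no applicable knowledge found
```

The essential point is that reasoning moves to a higher abstraction level only when the lower one fails, and every successful step at a higher level pushes new knowledge down to the level below.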
Therefore, the training process of a human intelligent agent consists in the improvement of his abstraction capability in his different contexts of interest. Training must support this type of intellectual behavior by adopting training strategies adequate to the end-d-o-a and to the student's initial competences. In this context, the accepted exercises and drills must:
- indicate a proper abstraction direction in the problem knowledge space [A. Newell];
- give the possibility of testing the student's capability to navigate between different abstraction levels;
- give the possibility of evaluating the student's operational knowledge used for such navigation, i.e. a real-time validation of the methodological rules employed in his problem-solving activity.

In order to make the training process really effective, a dynamic model of the student itself must be used during the training sessions. In fact, the students may already have partial skills or knowledge about the matter to be learned. It is normally ineffective, and sometimes dangerous, for the instructor to perform training sessions and to propose drills and exercises without taking into account the student's needs (preferences). These problems were already investigated in AI with the aim of developing intelligent tutoring systems [Sleeman, 81], [Hartley, 87], [Aiello, 88]. The instructor's goal is not only to increase the efficiency of a single agent; his objective is to improve the efficacy of a population of agents having different experiences and skills but cooperating to solve the same problem.

The considered tutoring problem relates to the management of an incoming emergency situation inside a port or a railway station. In this case it is very important to improve the emergency managers' abstraction capacity. In fact, in many cases, from the evaluation of a few accidents that have already occurred in the past, or that are considered as hypothetical, it is necessary to generalize the emergency procedures to take into account all the new possible accidents of the same classes. In addition, every emergency manager must learn the behavioral models of the other agents involved in the emergency situation in order to solve hypothetical conflicts between them, to collaborate and to negotiate (for example, when sharing common resources).

3. Knowledge about the Physical Emergency Domain

3.1 LAYOUT - RESOURCES - SCENARIO, LRS CONCEPTUALIZATION

The physical emergency domain is the end-d-o-a of the student; it is the domain of the goal of emergency management. A mental image of the emergency domain is the domain of the student's hypothetical interventions, i.e. the domain of his attempts to achieve particular intervention goals. The suggested LRS conceptualization framework is composed of three layers:
- Layout Layer: LL,
- Resources Layer: RL,
- Scenario Layer: SL.
All of them can be represented by abstract object-relation networks. The layout layer is the frame for the most static information about the domain itself. Normally the information represented in the LL cannot be modified by the training supervisor before or during a training session. The layout of the end-d-o-a is represented by more or less schematic maps of the considered territory. The resources layer represents all the equipment, components and human organizations that are active on the layout; they have defined goals and functions. The scenario layer represents the set of sequences of events that may be considered in the layout.
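As an illustration only, the three layers can be thought of as simple object networks. The following Python sketch uses hypothetical class and attribute names (location, availability_time); it is not the MUSTER data model. The mapping between the layers, discussed in the next paragraphs, would assign and modify these attributes.

```python
# Minimal sketch of the three LRS layers as simple object networks.
# Class and attribute names (location, availability_time) are illustrative
# assumptions, not the MUSTER data model.
from dataclasses import dataclass, field

@dataclass
class LayoutNode:                    # Layout Layer: static description of the territory
    name: str
    neighbours: list = field(default_factory=list)

@dataclass
class Resource:                      # Resources Layer: equipment, units, organizations
    name: str
    location: LayoutNode = None      # assigned when mapped onto the layout
    availability_time: float = 0.0   # may be re-estimated after the mapping

@dataclass
class ScenarioEvent:                 # Scenario Layer: events referred to layout nodes
    description: str
    node: LayoutNode
    affected: list = field(default_factory=list)   # resources whose attributes change
```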
They are in relation with the resources and with the emergency management actions. In order to build an integrated emergency environment image, it is necessary to map the resources layer onto the layout layer and the scenario layer onto the resources layer. All of them must be conceptually referred to the possibilities of observation and modification/intervention by the emergency managers. The mapping of the resources layer onto the layout layer is a simple geographical mapping. This means that when the mapping is performed, only the location attribute of any object of type resource is defined. As a consequence, the availability time attribute may also be redefined for some resource objects. This is due to the fact that the layout contains constraints able to increase or reduce the availability in time of a resource. The mapping of the scenario layer onto the layout and resources layers has the effect of modifying several resource attributes. The availability time attribute may be influenced by the meteorological factors or by the accessibility constraints; the destructiveness attribute is influenced by the level of storage, by the meteorological factors or by the population density, etc. In other words, one can say that the resources layer contains object attributes for which only average values may be determined without taking into account the other two layers. More specific attribute values can be determined only if the mapping process is performed.

3.2 EMERGENCY PROPAGATION SCRIPT

The training supervisor has the duty of producing the most suitable training session by designing possible scripts of the emergency scenario evolution. A script is a graphical representation of the web of events that may be considered in the layout during specific emergency cases. It includes only the layout nodes which were, are, or can be in an emergency state. In the abstraction space, the scripts can be constructed on different GLs (Generalization Levels). Every node of a script is characterized by LRS attributes. Using the LRS knowledge, the script construction requires:
1) A classification and an ontology of the layout objects in terms of:
- selection of some primary sources of emergency states;
- identification of the possible secondary sources of emergency states, to be modeled as nodes in an emergency propagation net; these nodes can be identified taking into account the vulnerability and destructiveness attributes of the involved objects;
- identification of potential transmitters of emergency states; the boundary between the in-site and the off-site layout of the port is a critical node for transmitting the emergency out of the port onto the public territory;
- identification of potential barriers of emergency states: the port's in-site fixed resources, such as the anti-fire system, or mobile resources, such as the anti-pollution boom ("panne"), are examples of potential barriers;
- identification of the vulnerable points generating a large amount of losses;
- identification of the output nodes changing the emergency range from the local to the regional scale.

Fig 2 - An example of an emergency propagation script. The legend distinguishes the dynamic states of nodes (active, no more active, not yet activated, deactivated), not vulnerable nodes (no losses), vulnerable nodes (with losses), cause-consequence relations, primary sources, transmitters, secondary sources, barriers, and output nodes (emergency escalation).

2) Identification of cause-consequence relations between the objects.
3) Integration of the previously recognized objects and relations into emergency propagation nets (possible scripts of the emergency evolution).
4) Identification of the temporary states of the nodes from the management point of view.

Fig. 2 illustrates an emergency script. The lines represent all the possible paths of the emergency evolution. The arrows indicate an emergency propagation and link the nodes which were or currently are in an emergency state. The script is a part of the abstract student's activity domain. From his perspective, the tutor's scripts must be discovered, conceptualized and modified in order to stop the emergency propagation process and, in parallel, to reduce the total losses.

4. Cooperation and Coordination Knowledge

Emergency managers have different competences and responsibilities related to the physical emergency domain. Cooperating, they construct a common emergency script, and they interact and communicate during its individual modifications. In a defined physical domain such as an oil port, the emergency management activity is in general performed by an emergency cell composed of a coordinator agent and other manager agents in different roles (they are responsible for parts of the emergency domain or for some resources, such as the fire brigades or the police). During an emergency situation the coordinator cannot take decisions alone but must discuss them with the other managers, taking into account their respective points of view, individual proposals and preferences resulting from their roles and duties.

Fig 3 - An example of the Emergency Management Cell structure: off-site, the Regional Emergency Coordinator with the Harbour Master, the Regional Fire Brigades Manager, the City Police Manager and the Public Health Manager; on-site, the Local Emergency Coordinator with the Local Fire Brigades, the Electrical Subsystem Manager, the Anti-fire Subsystem Manager, the Fuel Loading Subsystem Manager and the Local Police Manager.

Fig. 3 presents an emergency management organization structure. Two principal classes of emergency are distinguished: the on-site and the off-site emergency. The first is an emergency that can be managed by the local emergency coordinator inside the oil port with the resources present in the port itself. The other is an emergency that cannot be managed using only the local resources and lies outside the competences of the Local Emergency Coordinator.

Using the TOGA conceptualization framework, the relation between an intervention goal and its execution carriers (agents and their d-o-a) may be decomposed into tasks and actions. These tasks and actions must be planned, monitored, controlled and synchronized (coordinated with one another), as shown in Fig. 4. In the emergency management case, tasks are all the intervention procedures that indicate WHAT must be done and require the support of many agents' resources, while actions are procedures that specify HOW a particular task can be executed. The actions are addressed to single resource units (executive agents). In a real situation the cell coordinator is responsible for cooperative planning, monitoring and controlling at the task level, while the other agents realize the same functions at the action level. The task planning activity can be executed only after a negotiation activity between the coordinator and the local managers. In fact, on the basis of the managers' individual possibilities (possible actions), many different tasks could be suggested by the managers to the coordinator agent. In many cases these tasks can be in conflict with each other, as in the sketch below.
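As an illustration of such conflicts, the following Python sketch shows the task/action split and a simple check of resource conflicts between tasks proposed by different managers. The class and attribute names, and the conflict heuristic, are illustrative assumptions only, not the MUSTER design.

```python
# Minimal sketch of the task/action split and of a resource-conflict check
# between tasks proposed by different managers. Names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Action:                        # HOW: addressed to a single executive unit
    executor: str
    description: str

@dataclass
class Task:                          # WHAT: an intervention procedure
    name: str
    proposed_by: str
    required_resources: set
    actions: list = field(default_factory=list)

def find_conflicts(proposed_tasks, available_resources):
    """Flag pairs of proposed tasks competing for the same scarce resource."""
    conflicts = []
    for i, a in enumerate(proposed_tasks):
        for b in proposed_tasks[i + 1:]:
            shared = a.required_resources & b.required_resources
            # A conflict arises only when a shared resource cannot serve both tasks.
            if any(available_resources.get(r, 0) < 2 for r in shared):
                conflicts.append((a.name, b.name, shared))
    return conflicts
```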
Therefore, conflict resolution is another typical coordination activity, which can be performed iteratively between the involved agents.

Fig 4 - Interrelations between 'intervention goals' and 'domain of activity' in the presence of coordination: the coordinator agent, the manager agents and the executive agents are linked by planning/execution, monitoring, controlling, communication and consequence relations between the intervention goals, tasks, actions and the domain of activity.

The task monitoring activity is especially necessary because many unforeseen emergency states can appear in the emergency script during the execution of the emergency interventions. The assessment of the current situation, which depends on the monitoring of a particular task execution, can vary during the emergency process and can make it necessary to re-plan the previously chosen tasks. The coordinator controls whether the actions on the executive level are performed according to the planned tasks and are synchronized in time and with respect to the commonly available resources. Also in this case, new situation assessments and new plans can be generated by the cooperating agents. The task monitoring and control activity is based on direct messages (information) from the emergency domain and on communications obtained from the other emergency managers.

Relative to the cooperation and executive levels, the coordination level is a meta-level. The agents on this level require knowledge about cooperation and negotiation, which is critical in many high-risk, time-limited and stressful conditions. Therefore, in order to improve the capacity of planning, monitoring and controlling at the cooperation (task) and coordination levels, practical psychological and sociological knowledge is strongly required.

In the above perspective, the emergency organization can be viewed as a distributed multi-intelligent-agent (MIA) system with a local autonomy of its intelligent elements [Gadomski, 92]. At the coordination level, this autonomy is limited by the common intervention goals, the time, and the available resources. At the executive level, the agents' autonomy is also limited by the tasks which they obtained from the previous level. Some aspects of the local autonomy of the human agents in an emergency management organization were preliminarily analyzed in [Gadomski, Gadomska, 89].

5. Computer Supported Training

Agent training with a computer simulation support is case based and is realized through the students' individual discovery. Contrary to the direct knowledge acquisition method (which is more characteristic of tutoring systems), the training system produces examples of dynamic scripts which are the student's intervention domains. Learning by examples uses an exercise library that must be built by domain experts and inserted off-line by the training supervisor into the system knowledge base. Learning by tutoring and examples implies that the tutor must select the exercise from the library (exercise selection) on the basis of the continuously updated student model (his current intervention goal, duties, competence and current learning level). In addition, during the sessions he must furnish explanations or suggestions (student tracing) and evaluate the student's learning capability and his learning results. When a student performs the learning process only by examples, he must be able to perform by himself the abstraction process from specific examples. He builds, inside his operational knowledge, strategic procedures for navigating between abstraction levels.
In the case of learning by tutoring, the tutor supports the student's learning process according to various strategies, for example:
- controlling the sequence in which the exercises are proposed; the proposed sequence must facilitate the student's abstraction process;
- controlling the types of the proposed exercises; the proposed situations must be typical and general, avoiding the insertion of all the facts that are not important and not critical for the situation itself.

Intelligent tutoring systems must use both the tutor and the student models. They need:
- a reference abstract student model, which includes the main required preferences and domain knowledge (according to the LRS conceptualization);
- a history record, which memorizes the correctness of the student's actions;
- the tutor's verification operational knowledge and the tutor's training strategies, i.e. his meta operational tutoring knowledge.

Fig. 5 - Multi-agent intelligent computer training support: the computerized support (preferences, domain knowledge, exercise library, historical records, simulation of examples) links the abstract tutor (scenario design, difficulty-level modification, observation) with the abstract students, who acquire information, interact, and build a common cooperation knowledge (conflict solutions, negotiation strategies, resource allocation).

A more realistic framework is a multi-user, groupware, computer-supported training; the corresponding schema is presented in Fig. 5. In this case the main training goal is not to increase the individual and personal agent's domain knowledge, but to construct cooperative preferences and strategies related to the individual preferences, knowledge and current intervention goals of the cooperating managers. An intelligent tutoring system requires different cooperation patterns. Here, adequate heuristic rule bases have to be prepared by a team of sociologists and psychologists. Taking into consideration that the cooperation training is organized for domain specialists, we should stress that, in practice, a realistic training of many emergency managers requires a tutor which is an expert in the above domains rather than an expert in the specific emergency field.

A general abstract emergency student and an abstract tutor can be confronted by analyzing Figures 6 and 7. The main domains of modification and development are the individual student's meta-preferences and meta-knowledge systems in the context of the fixed preferences of the emergency organization, i.e. his role duties and the directives included in the emergency procedures/instructions.
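As an illustration of the exercise selection driven by the student model and the history record listed earlier in this section, the following Python sketch shows one possible heuristic. The field names and the scoring rule are assumptions introduced for illustration, not part of the MUSTER design.

```python
# Minimal sketch of an exercise-selection step driven by the student model
# and the history record. The scoring heuristic and the field names are
# illustrative assumptions only.

def select_exercise(library, student_model, history):
    """Pick the exercise whose difficulty best matches the student's level."""
    # Estimate the current learning level from the correctness history record.
    correct = sum(1 for outcome in history if outcome)
    level = correct / len(history) if history else 0.0

    def score(exercise):
        # Prefer exercises slightly above the estimated level and relevant
        # to the competences that the student's role (duties) requires.
        relevance = len(set(exercise["competences"]) & set(student_model["duties"]))
        return relevance - abs(exercise["difficulty"] - (level + 0.1))

    return max(library, key=score) if library else None
```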
Fig 6 - Abstract Tutor functional components represented in the framework of a generic intelligent agent architecture. The tutor's domain of activity comprises the emergency case (A), the current image of the emergency case (B) and the current image of the cooperation (C); duties, axiology, preferences and meta-preferences, as well as descriptive and operational knowledge and meta-knowledge (students' cognitive models, emergency domain frames, cooperation frames, the student reference model and the tutoring strategy knowledge), are all related to A, B and C.

Fig 7 - Main elements of the cognitive architecture of the Abstract Emergency Student (AES): the preference and meta-preference side contains the emergency goal, duties, local axiology and the current images of the emergency and of the cooperation, producing the intervention goal; the knowledge and meta-knowledge side contains descriptive and operational knowledge of the layout, resources, scenario, mapping relations and cooperative strategies, referring to the first-level and second-level d-o-a.

6. Conclusions

In this paper, only preliminary results obtained from the confrontation of the AIA with the models of the intelligent agents involved in the training of management and cooperation in emergency conditions were presented. The role of the abstraction mechanism in the generalization and meta-level hierarchies was especially discussed. During the training activity both human and artificial agents are employed. From the TOGA perspective, both of them can be represented according to the same frame architecture. The obvious differences relate, of course, to the particular role knowledge and role preferences, which, however, may be structured and operated within the frame of the same cognitive architecture. The main serious differences between artificial and human intelligent agents refer to the following properties of these agents:
- the ITS is programmable and its preference system is fully available to the programmers, whereas human knowledge and preferences are evolutive and only indirectly demonstrated;
- the top human agent preferences can only be verbally declared, because humans have hidden individual preferences which are absent in the case of the computer agent;
- the interrelations between a rational AIA and its carrier system can be neglected in the analysis of the computer agent's behavior, but they can have an essential meaning in the human case. For example, variable human perception and mental capabilities depend on the state of the body. The top preferences can also be modified by evolutionary biological mechanisms in a way unconscious for the agent (for example, as an effect of human emotions).

Another aspect of the modeling of a strongly connected, hierarchical organization, such as the emergency management structures, is the temptation of its conceptualization as a distributed multi-intelligent agent. In the future, this problem should be a field of interesting, and probably fruitful, results. From the practical point of view, the definition of a software system for interactive collaborative training in emergency management will be the next specification step of the MUSTER project.

References

C. Balducelli, A. M. Gadomski. Active Decision Support Interfaces for Emergency Management: Bottom-up and Top-down Development Approaches. Paper presented at the "Terzo Workshop del Gruppo di Lavoro Interfacce Intelligenti", Roma, Nov. 1997.
A. H. Bond, L. Gasser. Readings in Distributed Artificial Intelligence. Morgan Kaufmann, 1990.
R. A. Brooks, L. A. Stein. Building Brains for Bodies. MIT AI Lab Memo #1439, August 1993.
C. Balducelli, R. Di Sapia, A. G. Federico. Expert systems and knowledge acquisition technology in the ENEA program on nuclear and conventional energy production processes. Proceedings of the VTT Symposium on Artificial Intelligence in Nuclear Power Plants, Vol. II (1990) 275-283.
C. Balducelli. A Multi-User System for Cooperative Training and Evaluation of Plans and Emergency Procedures. Proceedings of the II Italian DAI Meeting, Institute of Psychology (CNR), Rome, 28/5/1992.
G. Di Costanzo, A. M. Gadomski. A prototype of an active decision support system, based on an abstract intelligent agent architecture. Proceedings of the TIEMEC 97 Conference, Copenhagen, 10 June 1997.
L. Carlucci Aiello, M. Carosio, A. Micarelli. An Intelligent Tutoring System for the Study of Mathematical Functions. Proceedings of the International Conference on Intelligent Tutoring Systems ITS-88, Montreal, June 1988.
V. Casablanca, D. Meta. A Tanker Explosion in Genova Oil Port. "Antincendio" Magazine, 82-88, 1982.
P. Coad, E. Yourdon. Object-Oriented Analysis. Yourdon Press Computing Series, 1990.
P. Cousot et al. Abstract interpretation and application to Logic Programs. Journal of Logic Programming, Special Issue on Abstract Interpretation, Vol. 13, July 1992.
C. Castelfranchi. Guarantees for Autonomy in Cognitive Agent Architecture. In M. J. Wooldridge and N. R. Jennings (editors), Intelligent Agents, Springer, 1995.
Cog: <http://www.ai.mit.edu/projects/cog/Text/cog-robot.html>. Cog is a "humanoid robot" being built at <http://www.ai.mit.edu/projects/cog>, in the MIT Artificial Intelligence Lab.
T. Finin, J. Weber et al. Specification of the KQML Agent-Communication Language. [email protected], 1993.
F. Flores, M. Graves, B. Hartfield, T. Winograd. Computer Systems and the Design of Organizational Interaction. ACM Transactions on Office Information Systems, Vol. 6, No. 2, 1988.
S. Franklin, A. Graesser. Is it an Agent, or just a Program?: A Taxonomy for Autonomous Agents. http://www.msci.memphis.e.Prog.html#classificator, 1995/6.
A. M. Gadomski. Application of the System-Process-Goal Approach for Description of the TRIGA RC1 System. Proceedings of the "9th European TRIGA Users Conference", Oct. 1986, Roma, printed by GA Technologies, TOC-19, USA, 1987; also the ENEA Report RT/TIB/88/2, 1988.
A. M. Gadomski, M. Gadomska. Environmental and Emergency Communication and Decision: Confrontation of Shallow Models from the Point of View of Computer Support Designing. Presented at the Second SRA-Europe Conference, Laxenburg, Austria, April 2-4, 1990.
A. M. Gadomski, V. Nanni. Problems of Knowledge about Knowledge: an Approach to the Understanding of Knowledge Conceptualization. In Proceedings of The World Congress of Expert Systems, 1991, pp. 1519-1523.
A. M. Gadomski. A Model of the Action-Oriented Decision-Making Process: Methodological Approach. In Proceedings of the 9th European Annual Conference on Human Decision Making and Manual Control, 1990, pp. 79-99.
A. M. Gadomski. Methodological and Conceptualization Patterns for Modeling an Abstract Intelligent Agent. In Proceedings of the First International Round-Table on Abstract Intelligent Agent, Rome, Jan. 25-27, 1993.
A. M. Gadomski, S. Bologna, G. Di Costanzo. Intelligent Decision Support for Cooperating Emergency Managers: the TOGA-based Conceptualization Framework. In Proceedings of TIEMEC 1995, The International Emergency Management and Engineering Conference, Nice, France, May 9-12, 1995.
A. M. Gadomski, C. Balducelli, S. Bologna, G. Di Costanzo. Integrated Parallel Bottom-up and Top-down Approach to the Development of Agent-based Intelligent DSSs for Emergency Management. Proceedings of the International Emergency Management Society Conference TIEMS'98: Disaster and Emergency Management, Washington, May 1998.
G. Hartley. Artificial Intelligence and Instruction: Applications and Methods. Reading, Mass.: Addison-Wesley, 1987.
G. M. P. O'Hare, N. R. Jennings. Foundations of Distributed Artificial Intelligence. J. Wiley & Sons, 1996.
Rational Rose: http://www.rational.com/demos/rose4demo/wlkthr_rse.html
B. J. Kuipers. Reasoning with qualitative models. Artificial Intelligence 59 (1993) 125-132.
Y. Shoham. Agent-oriented programming. Artificial Intelligence 60 (1993) 51-92.
M. P. Singh. Multiagent Systems. Springer, 1994.
D. H. Sleeman, M. J. Smith. Modeling Student's Problem Solving. Artificial Intelligence 16: 171-187, 1981.
J. Rasmussen. Skills, Rules and Knowledge; Signals, Signs and Symbols and Other Distinctions in Human Performance Models. IEEE Transactions on Systems, Man, and Cybernetics, Vol. SMC-13(3), 257-266, 1982.
D. D. Woods, E. Hollnagel. Mapping Cognitive Demands in Complex Problem-Solving Worlds. International Journal of Man-Machine Studies 26, 257-275, 1987.
M. J. Wooldridge, N. R. Jennings. Intelligent Agents. Springer, 1995.