Ambient Awareness in wireless information and communication services

Herma van Kranenburg (eds.) 1, Stefan Arbanowski 2, Erwin Postmann 3, Johan Hjelm 4, Johan de Heer 1, Fritz Hohl 5, Stefan Gessler 6, Heikki Ailisto 7, Anthony Tarlano 8, Wolfgang Kellerer 8, Francois Carrez 9

1 Telematica Instituut, The Netherlands
2 Fraunhofer Fokus, Germany
3 Siemens AG, Austria
4 Ericsson, Sweden
5 Sony, Germany
6 NEC, Germany
7 VTT, Finland
8 DoCoMo Euro-Labs, Germany
9 Alcatel, France

Company reference TI/RS/2002/004
Copyright © 2002 Wireless World Research Forum

Contents
1 Introduction
2 Definitions
3 Acquiring ambient information
4 Crunching (interpreting) ambient information
5 Tailoring I/O
5.1 Interaction by Autonomous Software Agents
5.2 Multi-modality and intelligent I/O behaviour
5.3 Modifying the (real world) ambience by effectors
5.4 Dealing with sensors and effectors in a non-open space
5.5 Super Distributed Objects
5.6 Modelling the layers of ambient aware applications
6 Interactions of humans and machines in processing ambient information
7 Standardisation efforts & supporting technologies
8 Conclusions

1 Introduction

Ambient awareness is particularly relevant for wireless applications because of their use of positional user-context information, such as location and movement (for example in location-based services), and because of their limited user interfaces. This relevance motivates the current collaborative work on ambient awareness in wireless information and communication services, a joint effort of several participants within working group 2 of the Wireless World Research Forum (WWRF).

Ambient awareness is part of ubiquitous attentiveness and context-aware computing, in which ubiquitous context-aware computing devices exchange information over communication networks and the ubiquitous environment responds pro-actively. A (wearable) system is ubiquitously attentive when it is cognisant and alert enough to meaningfully interpret the possibilities of the contemporary human communication space and intentionally induces some behavioural or system action(s). Ubiquitous attentiveness thus comes close to the term context awareness. Within working group 2 we split the technological issues in ubiquitous attentiveness and context awareness into personalization, ambient awareness and adaptation. The current paper focuses on ambient awareness, which denotes the functionality provided by an I-centric system to sense and exchange the situation in which the individual is at a certain point in time. The ambient or situation is thus a cross-section of context awareness.

The most appealing and typical mobile services of today are based on information derived through location awareness. Services can be delivered to users everywhere. Moreover, the location of the user influences the delivery of a service (what is delivered, e.g. a map of a city unknown to the user that changes dynamically with the user's movement, or whose scale changes dynamically with the user's velocity). Location awareness involves more than just physical coordinates and velocity. Ambient conditions, such as being indoors or outdoors, humidity, and temperature play a role as well.
In the current paper, different aspects that need to be covered in ambient awareness are discussed in relation to the open research issues that have to be solved for their realisation. First, the term 'ambient awareness' is defined further.

2 Definitions

Ambient, in the sense of I-centric systems, refers to the situational context an individual user or actor is in. Ambient awareness deals with sensing and exchanging the ambient of a user/actor in the human communication space. The ambient or situation includes spatial, environmental and physiological information. Examples of spatial information are geographical data like location, orientation, speed and acceleration. Environmental information includes temperature, air quality, and light or noise level. Physiological information describes life conditions, like blood pressure, heart rate, respiration rate, muscle activity, and tone of voice. An important goal of ambient awareness is to acquire and utilize information about the situation of an actor to enable services in an active context, i.e. services personalized and adapted to a certain situation at a certain moment in time. A key aspect of ambient awareness is situation sensing, in which a sensing device detects environmental states of different kinds and passes them on. The following definitions will be used:

Definition 1: When speaking of the person who is actually using a computing system, the term "user" is often applied. This relates to the fact that the person has an interaction with an object: he is using a service or application. But why should a person still be called a user when he is not using the object anymore? To avoid this shortcoming, each person (using an object or not) will be called "actor" in the following. That emphasizes his active role for a system, leaving any relationship to other actors or certain objects open (Arbanowski, 2000).

Definition 2: Ambient in I-centric computing defines the ambient or situation the actor is in, given his communication space, at a fixed moment in time. It includes spatial, environmental and physiological information.

Definition 3: Ambient awareness is the functionality provided by an I-centric system to sense and exchange the situation which the individual is in, at a certain moment in time.

Definition 4: An effector is a device that can modify the physical world in a way that can be detected directly by an actor's senses. Examples of effectors are lights, automatic doors, phones, and loudspeakers.

3 Acquiring ambient information

A device is ambient aware if it can respond to certain situations and stimuli in its environment, representing actors and objects of the actor's current communication space. Sensors or human-machine interfaces can gather information about these actors and objects. Actors can also provide this information themselves. Sensors can be located, for example, in the network, in devices or in the environment of the individual. For actors, automatic gathering of context information is preferable. Advances in sensor technology are needed to reach further adaptation of services to - and co-operation with - the environment of actors.
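The definitions above can be summarised in a simple data model. The following sketch is only an illustration of how an I-centric system might represent an actor's ambient as spatial, environmental and physiological information; the class and field names are our own assumptions, not part of any WWRF specification.

```python
from dataclasses import dataclass, field
from typing import Optional
import time

@dataclass
class SpatialInfo:
    # Geographical data (Definition 2): location, orientation, speed
    latitude: Optional[float] = None
    longitude: Optional[float] = None
    orientation_deg: Optional[float] = None   # heading, 0 = north
    speed_mps: Optional[float] = None

@dataclass
class EnvironmentalInfo:
    temperature_c: Optional[float] = None
    noise_level_db: Optional[float] = None
    light_level_lux: Optional[float] = None
    indoors: Optional[bool] = None

@dataclass
class PhysiologicalInfo:
    heart_rate_bpm: Optional[float] = None
    blood_pressure: Optional[tuple] = None     # (systolic, diastolic)
    skin_resistance_ohm: Optional[float] = None

@dataclass
class Ambient:
    """The situation an actor is in at a fixed moment in time (Definition 2)."""
    actor_id: str
    timestamp: float = field(default_factory=time.time)
    spatial: SpatialInfo = field(default_factory=SpatialInfo)
    environmental: EnvironmentalInfo = field(default_factory=EnvironmentalInfo)
    physiological: PhysiologicalInfo = field(default_factory=PhysiologicalInfo)

# An ambient-aware system would fill such a record from sensors and exchange it
# with services that personalise and adapt their behaviour to the situation.
ambient = Ambient(actor_id="actor-42")
ambient.environmental.temperature_c = 21.5
ambient.spatial.speed_mps = 1.4
```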
In general, we can distinguish the following categories of context describing the environment (Chen, Kotz, 2001):

- Computing context: network connectivity characteristics, nearby devices, displays, and hardware resources.
- User context: the user's location, social conditions, and the user's profile and history.
- Physical context: levels of lighting, noise, humidity, and temperature.
- Time context: time of day, day of the week, time before or after a specific moment.

This categorization provides a guideline for categorizing the sensors or interfaces needed to acquire ambient information.

A special category of ambient/situation information is positional information, as part of the user context. Positional information is seen as accurate information (x, y, z coordinates) representing one's position, while location can be seen as a wider geographic area. An actor, a network or a terminal can provide positional information of a mobile terminal. Wide-area positioning technologies range from Cell-ID to enhanced GPS. Today, network technology makes use of Cell-ID to calculate the coordinates. Within a year the Enhanced Observed Time Difference (E-OTD) solution is expected to be ready, while for 3G networks RTT/IP-DL is foreseen to be capable of providing position information through/from the network. High-end terminals will be GPS-enabled. E-OTD makes use of location information provided by the terminal, while the actual calculation of the position is done within the network (handset-assisted technology; when it is the other way around it is called network-assisted technology). Round Trip Time (RTT) enhances Cell-ID with propagation-time measurements. Idle Period Downlink (IP-DL) is a cellular signal-timing-based method for WCDMA. The Global Positioning System (GPS) makes use of 24 satellites in orbit around the Earth. Differential GPS can be used to determine the 3D position of an entity with an accuracy of about 1 m. However, GPS has the major drawback that it cannot be used indoors. Furthermore, it provides no information on orientation (e.g. whether the entity is facing west). Where coverage is available, short-range wireless technologies, like Bluetooth, IEEE 802.11 or HIPERLAN/2, can complement the wide-area positioning information, improving accuracy and acquisition time.

Position sensors provide yet another source of coordinate information, with differences in accuracy, range and cost. So-called inertial sensors can sense accelerations and rotations. Motion sensors detect changes of motion. Body sensors could be used for user context regarding the actor's condition. They gather their information by measuring directly on the actor; heart rate and skin resistance are examples. These might be used to infer physiological information like emotions and anxiety. User profiles as a source of information are described in the WG2 White Paper on Personalization. Terminal-activity sensors obtain context information by monitoring the interaction and communication with the actor, like key presses on a keyboard. Several types of visual sensing can be provided by a camera (via light input, or through the use of visual context markers like pictograms). These sensors act on the borderline of human-machine interaction, detecting both user context and computing context. For physical context, several types of sensing devices can be used as general-purpose sensors, e.g. cameras, which can sense not only light intensity but also wavelengths invisible to the human eye, e.g. infrared, and appropriately calibrated can be used as temperature sensors. Investigating how multiple readings can be derived from one single device is a challenging research task.
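To make the combination of wide-area and short-range positioning concrete, the following sketch fuses position estimates from several of the technologies mentioned above by weighting them with their (assumed) accuracies. The accuracy figures, coordinates and function names are illustrative assumptions only, not values prescribed by any of the technologies.

```python
from dataclasses import dataclass

@dataclass
class PositionEstimate:
    source: str        # e.g. "Cell-ID", "E-OTD", "GPS", "WLAN"
    lat: float
    lon: float
    accuracy_m: float  # estimated error radius; assumed values, not normative

def fuse(estimates):
    """Weight each estimate by the inverse square of its error radius.

    This is a simple heuristic, not a prescribed fusion algorithm: more
    accurate sources (GPS, short-range WLAN) dominate, while coarse
    Cell-ID positions still contribute when nothing better is available.
    """
    if not estimates:
        return None
    weights = [1.0 / (e.accuracy_m ** 2) for e in estimates]
    total = sum(weights)
    lat = sum(w * e.lat for w, e in zip(weights, estimates)) / total
    lon = sum(w * e.lon for w, e in zip(weights, estimates)) / total
    return lat, lon

estimates = [
    PositionEstimate("Cell-ID", 52.2190, 6.8910, accuracy_m=500.0),
    PositionEstimate("GPS",     52.2185, 6.8925, accuracy_m=10.0),
    PositionEstimate("WLAN",    52.2186, 6.8923, accuracy_m=30.0),
]
print(fuse(estimates))
```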
There is considerable ongoing research on computing context in terms of the request for, and the announcement of, computing and communication services in a networked environment. "Service discovery" basically describes the mechanisms for getting information about the availability of resources in a local area network, e.g. SLP (IETF, 2002) or Jini (SUN, 2002). Note that in this case the computing context is already provided as a service and not as pure information.

4 Crunching (interpreting) ambient information

In the previous section a few examples of sensors were given. The sensing equipment itself can be quite small and distributed; the problem is to acquire the sensor data at an appropriate aggregation level. Knowing that the temperature varied by 1 degree at 1,000,000 positions may not be useful if the values cannot be aggregated to describe the area in relation to a known reference, e.g. the geographic location and time of the sensor; these in turn have to be aggregated appropriately. Knowing that the temperature has risen one degree over the province of Skåne in March will tell Swedish people that spring is approaching, for instance.

Sensors also interact, as the example above shows. While a clock is strictly speaking not a sensor (since it does not retrieve its input from the outside, but from the oscillations of a quartz crystal), it is likely that the more general-purpose sensors become, the more they will rely on other accessible information sources to provide them with information about their context. Today, if a thermometer is to communicate temperature readings, the location has to be determined and communicated out-of-band (i.e. by setting it in software, or on the thermometer itself). A general-purpose thermometer could sense its own location (and, using GPS, the time), and use this when communicating its values.

The above examples show that an appropriate interpretation of sensor readings is needed. Ambient information is useless until an evaluation takes place that can take advantage of the additional situational information. Evaluating the individual's situation in the communication space is therefore as important to an I-centric computing system as sensing it. Different parts of the actor's situation (and of the context of the information, such as publisher-created document information) may have different weights in determining the ambient of the actor. In addition, the ordering of the processing may affect the resulting ambient awareness. In traditional programming languages (including script languages and transformation templates such as XSLT) there may be an inherent ordering of the processing, whether consciously created or not. This ordering can be formally expressed and externalized from the program which actually executes the processing. If the information about the relative weights and the ordering of processing is not communicated in the profiles (storing the user's context) themselves, it has to be communicated as a separate description, essentially a meta-profile for the processing. This can be expressed in a formal rules language, or as an ontology using the names and/or properties of the profiles as nouns. This, in turn, implies that the processing entity can handle the reception of such meta-profiles. The meta-profile would then determine the processing order and the relative weights. Database selection, transcoding, annotation, and other contextual processes could be applied as part of this process.
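As a rough illustration of such a meta-profile, the sketch below evaluates a set of profiles in an externally specified order and with externally specified weights. The profile names, weights, candidate actions and scoring scheme are assumptions made for the example; the paper does not prescribe a concrete format or rules language.

```python
# A profile maps context attributes to values; a meta-profile says in which
# order profiles are processed and how much weight each one carries.
profiles = {
    "calendar": {"activity": "meeting"},
    "location": {"place": "office"},
    "body":     {"stress": "high"},
}

meta_profile = {
    "order":   ["location", "calendar", "body"],   # processing order
    "weights": {"location": 0.5, "calendar": 0.3, "body": 0.2},
}

def evaluate(profiles, meta_profile, candidate_actions):
    """Score candidate actions: a profile votes for an action when it mentions
    at least one of the action's preconditions and contradicts none of them.
    Votes are combined according to the meta-profile weights."""
    scores = {action: 0.0 for action in candidate_actions}
    for name in meta_profile["order"]:
        profile = profiles.get(name, {})
        weight = meta_profile["weights"].get(name, 0.0)
        for action, preconditions in candidate_actions.items():
            mentioned = [k for k in preconditions if k in profile]
            if mentioned and all(profile[k] == preconditions[k] for k in mentioned):
                scores[action] += weight
    return max(scores, key=scores.get)

# Hypothetical actions and the context they are appropriate for.
candidate_actions = {
    "divert_calls_to_voicemail": {"activity": "meeting"},
    "show_route_to_beach":       {"place": "outdoors", "activity": "leisure"},
}
print(evaluate(profiles, meta_profile, candidate_actions))
```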
Profile evaluation takes place in every kind of service logic inside an I-centric computing system for the purpose of decision-making. Normally, different profiles of an actor will be combined and evaluated in order to react to an actor's command or to changes in his environment. The profile-evaluation process has to provide several features for enabling mappings of abstract "wishes" to the concrete behaviour of certain objects. If two or more objects are needed to realize a "wish", the interaction between these objects in an actor's surroundings has to be ensured by some kind of service logic. The outcome of the profile-evaluation process and the resulting actions have to be stored in a history profile to provide information for self-learning and adaptation capabilities. The result of an evaluation process influences the system's behaviour. For I-centric computing this means that the result of the evaluation of the current situation of an actor defines what has to be done for him by the system at a certain moment in time. The actual creation of a multi-profile system is yet to be realized, but promises to bring a number of interesting challenges.

Communicating and aggregating the values also presents a number of special challenges. For some applications (e.g. "if the day is Saturday, the month is August, the time is morning, the temperature is above 20 degrees centigrade, and the sun is shining, show me the way to the beach"), the number of values is quite limited, and the aggregation and rule system quite straightforward. However, if there are a large number of uncoordinated sensors, then statistical methods may have to be used to determine the current values for the current position ("what is the temperature near me" can be determined by averaging a number of fixed sensors on buildings, for instance). The problem then becomes one of communicating these readings, as well as one of privacy. The fact that the sun is shining is well known to anyone in the area, but the fact that a person is stressed may not be immediately obvious, and may be something that person desires to hide from others. This will require privacy protection, which is discussed further in the other white papers of WG2.

An important aspect of information processing for context-aware applications is the social factor. This addresses an automatic or semi-automatic grouping of relevant context information in order to reduce the complexity of context heterogeneity before the context is processed by an actor. For example, all information regarding user location could be accessed by addressing the topic "user location", without having to search for all relevant pieces of information. A small sketch of such topic-based grouping is given below.
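The following sketch illustrates such grouping: context items from heterogeneous sources are filed under topics, so a consumer asks for one topic (e.g. "user location") instead of querying every source. The topic names and the sample entries are illustrative assumptions.

```python
from collections import defaultdict

class ContextBlackboard:
    """Groups heterogeneous context items by topic so that consumers can
    retrieve everything relevant to one issue with a single request."""

    def __init__(self):
        self._by_topic = defaultdict(list)

    def publish(self, topic, source, value):
        self._by_topic[topic].append({"source": source, "value": value})

    def query(self, topic):
        return list(self._by_topic.get(topic, []))

board = ContextBlackboard()
# Different sources all contribute to the same topic.
board.publish("user location", "GPS",      {"lat": 52.2185, "lon": 6.8925})
board.publish("user location", "Cell-ID",  {"cell": "262-01-1234-5678"})
board.publish("user location", "calendar", {"place": "meeting room B"})
board.publish("physical",      "indoor thermometer", {"temperature_c": 21.5})

# A service interested in where the user is asks for one topic only.
for item in board.query("user location"):
    print(item["source"], item["value"])
```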
5 Tailoring I/O

5.1 Interaction by Autonomous Software Agents

The continuous nature of the environment surrounding a mobile ambient-aware application, and the infinite number of states and percepts about it, make it inherently difficult to gain knowledge of that environment. Autonomous multi-agent system research on the socialization and interaction of agents, with knowledge of environment states and settings, coupled with diligent investigation, may prove to be a successful strategy for overcoming the inherent difficulties of this continuous problem domain.

Autonomous Software Agents

Autonomous software agents, or just agents, are capable of autonomous proactive behaviour, reactive behaviour, and goal-driven social behaviour using communication in their environment. Agents can be as simple as subroutines, but are normally larger entities with persistence (Genesereth, Ketchpel, 1994). In a rich multi-agent system, a strong relationship must exist between the communication act of one agent and the necessity of interaction, which is the act of another agent understanding the original communication (Odell, et al, 2002). An intelligent interaction framework for ambient awareness must provide knowledge distribution and the principles and processes governing and supporting the exchange of ideas, knowledge, information, and data. To address the open research requirements for an intelligent interaction framework, the objective of our research will be to provide functions and structures that are commonly employed in multi-agent research to enhance knowledge communication, using mechanisms such as those described in (OMG ASIG, 2001):

- Intelligence: state is formalized as knowledge, and interaction takes place using symbolic knowledge.
- Scalability/community: support for community organization and the creation of societies. Social participants are normally characterized by the interaction ability needed to be included in a multi-participant system.
- Coordination: performance of an activity in a shared environment with other agents. Coordination activities often require negotiated plans, workflows, or some other activity-management mechanism.
- Cooperation: the ability to coordinate with other participants to use a common resource.
- Collaboration: the ability to coordinate with other participants to achieve a common goal.
- Competition: the ability to coordinate with other participants, except that the success of one agent implies the failure of others.

5.2 Multi-modality and intelligent I/O behaviour

Multi-modality

Looking into multi-modality is an interesting and useful exercise for anyone interested in ambient awareness (Salber, 2000). In multi-modal communication, humans utilize a combination of different modalities, taking advantage both of the individual strength of each communication mode and of the fact that the modes can be employed in parallel. The ability to integrate information across sensory modalities (vision, hearing, kinesthesia (muscle sense), somatic sense (touch, pressure, temperature, pain), chemical sense (taste, smell)) is essential for accurate and robust comprehension by machines and for enabling machines to communicate effectively with people. The area where the greatest amount of cross-modality interface research has been carried out is disability access. Multi-modality, like ambient awareness, is also about gathering information (images, sounds, gestures, etc.), interpreting it, and using it (for output) in an appropriate manner. Multi-modal systems can be used in two ways: to present the same or complementary information on different output modes (e.g., a phone that rings (hearing), lights up the display (vision), or vibrates (somatic)), and to enable switching between different input modes depending on the current context and physical environment (W3C, 2002). Therefore, research on ambient awareness may benefit from research that ensures modality independence. Many ambient-aware applications make use of sensory information sensed using diverse sensors. The research challenge is to be able to use and integrate that information, understand sensory modelling, and identify the requirements for a cross-modality sensory model for ambient awareness.
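As a small illustration of using modalities in the two ways described above, the sketch below picks output modes for a notification from the current ambient. The rules (e.g. suppress the ringer in a meeting, prefer vibration in a noisy street) and all thresholds are illustrative assumptions, not prescribed behaviour.

```python
def choose_output_modes(ambient):
    """Return the output modalities to use for a notification,
    given a simple ambient description (all thresholds are assumed)."""
    modes = []
    if ambient.get("in_meeting"):
        modes.extend(["display", "vibrate"])   # silent, visual and tactile only
    elif ambient.get("noise_level_db", 0) > 80:
        modes.extend(["vibrate", "display"])   # a ringer would not be heard
    elif ambient.get("driving"):
        modes.append("audio")                  # eyes and hands are busy
    else:
        modes.extend(["ring", "display"])      # complementary presentation
    return modes

print(choose_output_modes({"in_meeting": True}))
print(choose_output_modes({"driving": True}))
print(choose_output_modes({"noise_level_db": 95}))
```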
Ambient aware intelligent I/O behaviour

The presence of multi-modality and the availability of relevant ambient information create the need for an intelligent man-machine-interface management, which enables intelligent adaptation of I/O behaviour (Gessler, 2001). This means that the input/output behaviour of a user end system - for example the modality - can change dynamically based on the actual context. For example, the UI of an e-mail system is typically text-based, whereas it would become audio/speech-based if the user is located in a car. The role of the actor for the physical output could move from the laptop/PDA to the audio equipment inside the car. This implies that the end device is no longer restricted to a single device, but is dynamically reconfigured based on the needs of the application, the context information, and of course the available modalities (which can be considered a kind of terminal capability). The ontology for an MMI configuration is provided through a set of profiles and policies. The MMI-management unit has to aggregate different pieces of context information from different sources, various profiles and a number of policies, in order to finally apply ambient-aware personalisation to the MMI configuration. Another application area for ambient-aware intelligent I/O behaviour is the physical condition or even handicap of the user, which also requires the existence of suitable policies.

5.3 Modifying the (real world) ambience by effectors

In the same way as sensor processing converts real-world information into the digital domain, where it can be used for supporting an actor, effectors convert digital information into real-world state changes in a way that can be sensed by actors directly. Thus, using effectors is ultimately the result of any I-centric application, because in the absence of effectors no actor support would be possible.

Definition: An effector is a device that can modify the physical world in a way that can be detected directly by an actor's senses. Examples of effectors are lights, automatic doors, phones, loudspeakers, projectors, printers, copy machines, faxes, displays, air conditioning, blinds, etc.

Like sensors, effectors can be modelled as objects and share the problems of services in a network:

- they need to be discovered (which can be additionally complex, as some effectors might be reachable only by local communication means, e.g. a PAN);
- they need to be access controlled, because not every actor shall be able to, for example, control the blinds in a room;
- there need to be means to allow other objects to actually use effectors, i.e. to know the achievable effect and use the correct parameters;
- there need to be means to handle the case that an effector in use becomes unavailable, e.g. due to communication problems;
- effectors can be stationary or mobile, and mobile effectors might pose special requirements in terms of discovery or usage.

However, unlike generic services (but again like sensors), communicating with effectors might require proprietary protocols and might take place at lower layers (i.e. below layer 3). One aspect to examine is therefore to which extent the mechanisms for dealing with sensors can also be applied to effectors. One difference between effectors and sensors is the fact that the usage of an effector will be exclusive in most cases and consumes data instead of generating it. So, for example, the efficient usage of a single value by many readers is not an issue for effectors.
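A minimal object model for effectors, reflecting the requirements listed above (discovery, access control, exclusive use, and loss of connectivity), might look as follows. The interface and its method names are our own assumptions for illustration, not part of any standard.

```python
class EffectorUnavailable(Exception):
    """Raised when an effector in use can no longer be reached."""

class Effector:
    def __init__(self, effector_id, kind, location, allowed_actors):
        self.effector_id = effector_id
        self.kind = kind                       # e.g. "lamp", "blinds", "loudspeaker"
        self.location = location
        self._allowed = set(allowed_actors)    # access control list
        self._reserved_by = None               # usage is exclusive in most cases
        self.reachable = True

    def reserve(self, actor_id):
        if actor_id not in self._allowed:
            raise PermissionError(f"{actor_id} may not control {self.effector_id}")
        if self._reserved_by not in (None, actor_id):
            raise RuntimeError("effector already in exclusive use")
        self._reserved_by = actor_id

    def actuate(self, actor_id, **parameters):
        if not self.reachable:
            raise EffectorUnavailable(self.effector_id)
        if self._reserved_by != actor_id:
            raise RuntimeError("reserve the effector before using it")
        print(f"{self.effector_id}: applying {parameters}")

    def release(self, actor_id):
        if self._reserved_by == actor_id:
            self._reserved_by = None

# Discovery would normally go through a directory service; here it is a plain list.
registry = [Effector("lamp-1", "lamp", "living room", allowed_actors=["alice"])]
lamp = next(e for e in registry if e.kind == "lamp")
lamp.reserve("alice")
lamp.actuate("alice", brightness=0.8)
lamp.release("alice")
```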
5.4 Dealing with sensors and effectors in a non-open space

An aspect of ambient intelligence is interpreting a variety of sensor information in order to act on effectors in an intelligent way (according to a user context). This becomes more complicated when this "ambient intelligence" is to be applied in a non-open space, like a house or apartment, i.e. when a topology has to be taken into account by the reasoning. Let us illustrate this complexity with the following example (inspired by an example given by Philips Research during the 7th WWRF meeting in Eindhoven): a person arrives at his apartment, which is equipped with a quite dense lattice of Bluetooth tags. Let us assume the user can be located with respect to those Bluetooth tags. While he was absent, his daughter called him. Upon arriving in his apartment, the man's context includes the information (incoming event) that his daughter would like him to call her back. The so-called "ambient intelligence" gives the man a notification about that incoming call at an appropriate time, using an effector such as a blinking frame or some kind of beeping device. The problems to be solved in this example include: which kind of effector to use? How many effectors should be used? Which one(s) should be switched on, given the estimated user location? Simply sensing the man's location using a Bluetooth tag and then acting on the closest effector (lighting up a picture frame, for instance) is not sufficient, as a) people might be sensed through a wall, b) the sensor and the effector could also be separated by a wall, and c) the man might move in different directions and then be missed by the effector.

It appears from this example that the reasoning depends on the topology in which it takes place. It is not necessarily the closest effector that should be used, but rather the one that is known to be visible from the estimated user location. Perhaps two effectors should be used to improve the chance of warning the man, as he might take different directions from his current estimated location. Taking the topology into account can be achieved in classical ways if the topology is known somewhere in the system, e.g. if the "ambient intelligence" has access to the apartment plan. In the more general case it is desirable that this topology be learnt automatically by the system, or at least emerge automatically in the ambient reasoning. In the first case, the "ambient intelligence" infers a topology from sequences of locations gathered during the user's various displacements, possibly with reinforcement methods. The latter case can be achieved by learning the user's behaviour (which itself reflects the topology) by correlating the user's actions upon effectors with the values of sensors (incremental clustering is one possibility there). For instance, if a user located at (x,y) never reacts when an effector located at (x',y') lights up, it is likely that there is a wall between (x,y) and (x',y'). In this case the topology is never made explicit, as only relationships between sensors, effectors and user actions are considered. Of course, getting feedback from the user might help to improve the learning and reasoning, even if it might appear quite strange to talk to a picture frame. Key technologies envisioned here are reinforcement learning, incremental clustering, rule-based deduction, some not yet identified graph techniques, and surely others.
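The implicit-topology idea can be sketched very simply: count, for each (user location, effector) pair, how often the user reacted when that effector was activated, and treat pairs with a persistently low reaction rate as "not visible" (probably separated by a wall). The thresholds and the simulated data are illustrative assumptions.

```python
from collections import defaultdict

# (user_cell, effector_id) -> [activations, reactions]
observations = defaultdict(lambda: [0, 0])

def record(user_cell, effector_id, reacted):
    stats = observations[(user_cell, effector_id)]
    stats[0] += 1
    if reacted:
        stats[1] += 1

def visible(user_cell, effector_id, min_trials=5, min_rate=0.3):
    """An effector is assumed visible from a cell if the user reacted to it
    often enough; otherwise a wall (or similar obstacle) is suspected."""
    trials, reactions = observations[(user_cell, effector_id)]
    if trials < min_trials:
        return True            # not enough evidence yet, do not rule it out
    return reactions / trials >= min_rate

# Simulated history: the frame in the hall is never noticed from the kitchen.
for _ in range(8):
    record("kitchen", "frame-hall", reacted=False)
    record("kitchen", "frame-kitchen", reacted=True)

def choose_effectors(user_cell, candidates):
    return [e for e in candidates if visible(user_cell, e)]

print(choose_effectors("kitchen", ["frame-hall", "frame-kitchen"]))
# -> ['frame-kitchen']: only effectors believed visible from the kitchen are used
```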
5.5 Super Distributed Objects

The increasing proliferation of devices that have processing capabilities and can connect to each other in an easy and ad-hoc manner leads to the ubiquitous availability of services and enables new computing paradigms like mobile computing, ubiquitous computing and pervasive computing. In addition, ubiquitous network connectivity allows building global software infrastructures for distributed computing and storage. A goal of these infrastructures is to provide a distributed community of software components that pool their services for solving problems, composing applications and sharing content. Resources such as hardware devices and software components can be abstracted as Super Distributed Objects (SDOs), because both share common assumptions and technical issues such as a large number of distributed resources, ad-hoc application boundaries, temporary unavailability of resources, and decentralised organisation of resources. (The work related to SDOs described here is currently proposed for standardization by the Object Management Group, OMG, www.omg.org (OMG, 2002).) An SDO is a logical representation of a hardware or software entity that provides well-known functionality and services. Super distribution means incorporating a massive number of objects, each of which performs its own task autonomously or cooperatively with other objects, without centralised control. Examples include abstractions of devices such as mobile phones, PDAs, home appliances and various software components. An SDO may also act as a peer in a peer-to-peer networking system or as a storage node in a global storage network system.

Due to their inherent features, SDOs need interoperable middleware technology that enables uniform access to them in order to support easy and rapid service creation. The middleware shall provide SDOs with the functionalities of access control, general management (e.g. configuration, monitoring, and reservation), discovery with support for mobility, social networking (cooperative processing), and spontaneous networking in an ad-hoc manner. Today, there are several hardware and software interconnection technologies like HAVi, BACnet, OSGi, and Jini. However, they are restricted to specific platforms, network protocols or programming languages, or they focus on limited application domains. No common model-based standards exist to handle various resources in a unified manner independently of underlying technologies and application domains.

The characteristics of an SDO are defined by its properties and the services it offers. An SDO is dynamically characterised by its status, which is unambiguous at every point in time. The status of an SDO can be modified solely through the invocation of its services. Services define the capabilities of a Super Distributed Object. A service is represented by a group of operations that can be executed by the SDO. A service execution can modify the configuration or status of an SDO. While some SDOs can act fully autonomously, others need to make use of the services of other SDOs before they can accomplish their own services. As an example, consider a Dimmer SDO with a service providing constant brightness in a room.
To be able to compose and execute its own service, it should find an SDO offering a brightness-measuring service and SDOs that control light-switching devices in the room, such as dimmers, lamps, etc.

An SDO can wrap a device. Devices wrapped by SDOs are called embedded devices. The SDO is responsible for mapping the device capabilities to the operations of the SDO interface, as well as for the representation, monitoring and configuration of device capabilities. The wrapping SDO also enables, by means of its interface, access to the device functionality.

SDOs can gather in groups. The group concept can assist in better organising communication and interaction between SDOs. The group organisation principle can be chosen freely. It is possible to group SDOs with similar properties or capabilities, for example all SDOs placed in a certain location or all SDOs controlling light-switching devices. An SDO can belong to several groups at the same time.

SDO Interface

In addition to its services, an SDO shall offer operations intended for resource management. Therefore all SDO interface operations are divided into two large groups. Service operations compose the SDO's operational interface, while management operations constitute the management interface. The operations of the management interface can be divided further into Monitoring, Configuration, Reservation and Discovery interfaces. The Monitoring interface makes it possible to check the current state of the SDO's resource data; through the operations of the Configuration interface the resources can be configured; the Reservation interface permits reserving SDO utilisation; and the operations of the Discovery interface allow an SDO to advertise its capabilities in the SDO network and to search for services it needs.

Composition of SDOs

In this approach, an SDO is considered the smallest addressable unit. SDOs can be composed; the result of the composition of several SDO entities is again an SDO. From the moment several SDOs have built a composed SDO, they can be accessed only through the composed SDO's interfaces; there is no possibility to address any of them individually. The services formerly provided by those SDOs are henceforth provided by the composed SDO. In addition to the services of the partial SDOs, the composed SDO can offer further services, possibly based on theirs. A composed SDO can maintain an internal table with the list of the partial SDOs' services. Then, when an invocation request arrives for one of these services, the SDO can forward the request to its immediate addressee.

Required Functionality

In this section the functions are listed that an SDO system architecture should offer to facilitate SDO communication and collaboration. The following scenario can be used as an example of SDO interaction in the system. Several SDOs that control electrical appliances in an office build an SDO group. A new SDO joins this group. The newcomer offers a service that turns off the office appliances (for example, lighting, the air conditioning system, etc.) at 7 p.m. The SDOs in the group should supply some mandatory functions to enable the newcomer to become integrated in the group so that it is able to communicate and collaborate with other SDOs. A first collection of these mandatory functions is listed below; a small interface sketch follows the list.
Addressing

To make communication between SDOs possible, some kind of addressing of SDO entities is required. In other words, SDOs should be able to address each other in order to reach each other's services. In the scenario above, the locations of the SDO entities with the required functionality should be known so that the newcomer can contact them directly.

Discovery

An SDO may need other SDOs to accomplish its services. So, when changing its location and joining a completely new SDO group, it must first find SDOs offering the services it needs before its own services can be executed. Knowing exactly, or at least approximately, what services are required, the SDO should be able to initiate a search for services with the needed capabilities. Therefore a discovery facility is required to support SDO collaboration. At the same time, a newcomer SDO should have the possibility to announce its offered services to other SDOs in the system. In the scenario with the "power down" service, the service would search for SDOs offering, among others, lighting and air-conditioning services, probably by asking a system discovery service whether there are entities with the specified functionality in the office, or by sending requests to all SDOs.

Reservation

It should be possible for SDOs to reserve the utilisation of other SDOs' resources; therefore reservation functions should be offered by the SDO interface. The requirement arises from the necessity to get exclusive access to services. For example, if, while the "power down" service tried to turn one of the light switches off by invoking the corresponding operation on a light SDO, some other SDO service were trying to turn it on, the attempts of the former would be rather useless. Hence, it can be advantageous if an SDO is able to get exclusive access to the services provided by another SDO.

Monitoring

SDOs should provide interface functions enabling monitoring of their services' state, as described by resource data. Services may need such a monitoring possibility to check a service's state for various purposes. In the example scenario, the service would have to monitor the state of the light SDO. Indeed, if the light was already turned off before the service starts its execution, there is no need to turn it off again; on the other hand, if the light was on, it should be turned off.

Event communication

The monitoring mechanism permits an SDO entity to watch other SDOs' states. Since it is based on polling, another mechanism for state-information propagation can be considered, enabling SDOs to communicate changes in their state to other system components. When the SDO state changes, other system objects may be interested in immediate notification of this event and in receiving the data describing the new SDO state. So a mechanism for event transmission can be specified, enabling system components to propagate information about their state changes by event notifications. So that event notifications need not be sent to all SDOs currently present in the system, which could lead to network congestion, the definition of a subscription mechanism is required. With the introduction of the subscription concept, SDOs would have the possibility to subscribe to notifications of the events they are interested in. In the scenario with the "power down" service, the service could subscribe to event notifications of the appliance services some time before its scheduled start time. By this means the service would be able to collect information about their states and would thus know at launch time whether they are on and should be turned off.

Configuration

The SDO interface should provide functions enabling configuration of its resource data and service parameters. This can be required to configure SDO location information, services, alive-notification intervals, etc. In the example scenario, the "power down" service should offer an interface enabling the configuration of its resources, for example its start time. So if somebody would like to work in the office after 7 p.m., he would be able to configure the service to start its execution after he has left.
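The sketch below condenses the mandatory functions above into a minimal SDO class with an operational part and a management part (monitoring, configuration, reservation, discovery, event subscription). The class and method names are illustrative assumptions made for this paper; they are not taken from the OMG SDO specification.

```python
class SDO:
    """Minimal Super Distributed Object with operational and management parts."""

    def __init__(self, sdo_id, services):
        self.sdo_id = sdo_id
        self._services = services        # name -> callable (operational interface)
        self._resource_data = {}         # monitored/configurable state
        self._reserved_by = None
        self._subscribers = []           # callbacks for event communication

    # --- operational interface -------------------------------------------
    def invoke(self, service, caller, **kwargs):
        if self._reserved_by not in (None, caller):
            raise RuntimeError("SDO is reserved by another SDO")
        result = self._services[service](**kwargs)
        self._notify({"sdo": self.sdo_id, "service": service})
        return result

    # --- management interface --------------------------------------------
    def monitor(self):                      # Monitoring
        return dict(self._resource_data)

    def configure(self, **settings):        # Configuration
        self._resource_data.update(settings)

    def reserve(self, caller):              # Reservation (exclusive access)
        if self._reserved_by is None:
            self._reserved_by = caller
            return True
        return False

    def advertise(self):                    # Discovery
        return {"sdo": self.sdo_id, "services": list(self._services)}

    def subscribe(self, callback):          # Event communication
        self._subscribers.append(callback)

    def _notify(self, event):
        for callback in self._subscribers:
            callback(event)

# A "power down" SDO could discover a lamp SDO, reserve it and switch it off.
lamp = SDO("lamp-office-3", services={"switch_off": lambda: "off"})
lamp.configure(location="office 3")
lamp.subscribe(lambda event: print("event:", event))
if lamp.reserve("power-down-service"):
    lamp.invoke("switch_off", caller="power-down-service")
```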
5.6 Modelling the layers of ambient aware applications

Modelling ambient aware applications as shown in Figure 1 is suggested (Ailisto, 2001). On the lowest level, the Physical layer, there are sensors and other objects producing output in raw format. Examples of such data are an analogue microphone signal or the strength of an RF signal sent by a WLAN access point. On the second level, the Data layer, there are objects producing processed data, for example spectral information of the phonemes in the audio signal, or location coordinates computed from three RF signal strengths. The third level, the Semantic layer, contains objects that transform the data into a form meaningful for inferring context. These objects could analyse the spectral data and state that the speaker is Peter with a confidence of 0.9, or they could state that the coordinates computed from the RF signals indicate that the mobile terminal is in Lisa's office. The fourth level, the Inference layer, uses information from the semantic level, earlier information and inference rules, possibly autonomously learned, to proactively formulate and reason about what the user (either man or machine) is doing and what kind of services he, she or it might want. For example, if Peter is in Lisa's room and Lisa is his boss, the inference object (agent) might suggest that he is in a meeting. On the uppermost Application level, his personal mobile agent might then decide that he probably does not want calls to his terminal from football team-mates during the meeting, and block those calls. It should be noted that objects at a higher layer may combine input from one or more objects at a lower layer with stored information. This is indicated in Figure 1 by input arrows.

[Figure 1: Five-layer model of ambient aware applications, showing for each level a principle and an example: Level 1 Physical (physical or virtual sensor, e.g. a microphone producing analogue audio); Level 2 Data (data processing object, e.g. spectral computation producing phoneme spectra and template spectra); Level 3 Semantic (semantic processing object, e.g. an authentication object yielding a person-id and a location such as a room); Level 4 Inference (inference object, e.g. a situation inference object producing an inferred situation); Level 5 Application (e.g. a personal mobile agent producing an action, service or other usable result).]
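A toy rendering of the five layers as a processing pipeline is given below. The functions stand in for the objects of Figure 1; the actual analysis (spectral computation, speaker identification, inference rules) is replaced by placeholder logic, and all names and values are assumptions for illustration.

```python
# Level 1, Physical: a (simulated) raw sensor reading.
def physical_layer():
    return {"audio_samples": [0.01, 0.02, 0.40, 0.38], "wlan_rssi_dbm": [-40, -62, -71]}

# Level 2, Data: produce processed data from raw output.
def data_layer(raw):
    return {
        "phoneme_spectrum": [abs(s) for s in raw["audio_samples"]],  # placeholder
        "coordinates": (12.3, 45.6),  # e.g. trilaterated from three RSSI values
    }

# Level 3, Semantic: turn processed data into meaningful statements.
def semantic_layer(data):
    return {"speaker": ("Peter", 0.9), "room": "Lisa's office"}

# Level 4, Inference: combine semantic facts with stored knowledge and rules.
def inference_layer(semantics, knowledge):
    if semantics["room"] == knowledge["boss_office"] and semantics["speaker"][0] == "Peter":
        return {"situation": "meeting with boss"}
    return {"situation": "unknown"}

# Level 5, Application: act on the inferred situation.
def application_layer(situation):
    if situation["situation"] == "meeting with boss":
        return "block calls from football team-mates"
    return "no action"

knowledge = {"boss_office": "Lisa's office"}
print(application_layer(inference_layer(semantic_layer(data_layer(physical_layer())), knowledge)))
```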
6 Interactions of humans and machines in processing ambient information

It has been recognized that there is a pressing need to obtain a better understanding of what the ambient is, in order to facilitate the exploitation of an individual's situation through ambient aware applications. The bottom-line question is: what situational information is relevant and useful, and how should it be used? All indicators of ambient awareness constitute a parametric space in which information is gathered about complex situations (e.g., computing context, user context, physical context, time context).

More information is not always better, however. Situational information is only useful when it can be usefully interpreted. It has been argued that context aware computing (and hence ambient aware computing) redefines the basic notions of interaction with questions such as "What role does context play in our everyday experience, and how can we extend this to the technical domain?" (Moran and Dourish, 2001). A computational solution that mimics the human strategy of deducing information from the environment, using it in interactions and acting appropriately may point towards a solution, but is tremendously complex. When humans interact with humans they implicitly make use of verbal (e.g., tone of voice) and non-verbal (e.g., facial expressions) information. In general, human beings apply knowledge about their current situation in order to act in accordance with the state of the environment. This includes everyday perceiving and acting, listening and making conversation, rational and irrational decision-making, and common-sense reasoning, which are often also based on situation-dependent knowledge. Further, the knowledge of a person's situation can comprise information about objects in his current environment as well as abstract and non-physical relationships to other human beings. This kind of information needs to be transparent and needs to be made explicit in order to make mobile applications adaptable.

On the one hand, by way of contrast, human-machine interaction is substantially different from human-to-human interaction. A typical communication between a human and a computer system consists of input and output data, which is exchanged sequentially. A human being interacts with the computer through devices like a keyboard, mouse, or monitor. A movement of the mouse, for instance, is directly followed by a reaction of the computer system on the monitor. But the computer is (in general) not able to "sense" the current situation as the human does. It is therefore unable to adapt its behaviour to the current state of the environment. On the other hand, interpersonal interaction rules (how humans interact in the real world) have been applied to how humans interact with computers. The research is based on the idea that one can apply theories and methods from the social sciences directly to users' interactions with computers. Remarkably, users' responses to computers are fundamentally social and natural (Reeves and Nass, 1996).

The situational circumstances of an actor can consist of several different properties representing actors and objects of the actor's current communication space. Sensors or human-machine interfaces can gather information about these actors and objects. Logging these human-computer interactions makes a computer system self-knowledgeable, i.e. aware of its own context. Profiles have to be created manually by actors, by human-machine interaction, or automatically by self-learning capabilities. It is recognized that human-machine interaction is restricted to direct interaction, which makes it ineffective and slow in comparison to human-to-human interaction. To improve this kind of communication, it is necessary to adjust it towards typical face-to-face communication between humans, who make use of implied situation-dependent information. It is believed that the interpersonal interaction rules and strategies (human-human and human-computer) may determine the coupling of the indicators needed in order to be really ambient aware in the provision of an active context.
7 Standardisation efforts & supporting technologies

Standards to exchange the situational circumstances of an individual, generic support for this exchange, and standards to secure the protection of sensitive data are all needed. Though ambient information might be easy to acquire (emotions are difficult, absolute position can be easy), there are only a few well-defined mechanisms, frameworks and standards specifying how the ambient (mostly location or position) information can be made available to applications.

Geographic information

In the geospatial community, there are three important international organizations leading the development of industry standards and specifications (Hjelm, 2002): ISO/TC211, ISO SQL3/MM, and OGC. ISO/TC211, a formal international standards workgroup, focuses on an entire family of geospatial standards ranging from exchange formats to metadata to spatial data models. ISO SQL3/MM, also a formal international standards workgroup, works on multimedia and spatial extensions to the SQL3 dialect of the SQL standard. The OGC (www.opengis.org), a consortium of industry, government and academic organizations from around the world, develops software specifications (APIs) that enable location-based technologies to interoperate. OGC's Interoperability Program started an initiative to integrate mobile services into its GIS (Geographical Information Systems) testbed. The Open Location Services (OpenLS) initiative is intended to engage the wireless community in taking location services to the next level.

There are many different ways of expressing location information, designed by numerous domains and organizations. They include (Mari et al, 2001):

- The expression standardized for GSM and UMTS (called here "3GPP"), to be used internally in the GSM and UMTS mobile networks, specified by the Third Generation Partnership Project (3GPP).
- An interface towards mobile networks (e.g. GSM) for providing access to location information of mobile terminals, under consideration by the Location Interoperability Forum (LIF, www.locationforum.org). The LIF has produced a specification for a Mobile Location Protocol (MLP), which is an application-level protocol for the positioning of mobile terminals. It is independent of the underlying network technology (and thus of the positioning method), so it can also use position data from GPS - if there is a way to get it from the GPS receiver to the Location Enabling Server (LES). The MLP serves as the interface between a Location Server and a Location Enabling Server, which in turn interfaces to the application server. It defines the core set of operations that a Location Server (essentially an MPC) should be able to perform. More details are elaborated in Hjelm (2002). (The LIF has signed a Memorandum of Understanding expressing its intent to consolidate with the Open Mobile Alliance (OMA, http://www.openmobilealliance.org/). OMA is designed to be a focal point of mobile services standardization work, to assist the creation of interoperable mobile services across countries, operators and mobile terminals. Through a user-centric approach, OMA aims to ensure fast adoption and proliferation of mobile services. The alliance drives the implementation of end-to-end mobile services including an architectural framework, open standard interfaces and enablers.)
- The Geography Markup Language (GML) for storing and transporting geographic information, specified by the Open GIS Consortium (OGC). The OpenGIS GML is part of an entire framework that includes the means for digitally representing the Earth and Earth phenomena, both mathematically and conceptually; a common model for implementing services for access, management, manipulation, representation, and sharing of geodata between communities; and a framework for using the Open Geodata Model and the OpenGIS Services Model (Hjelm, 2002). The intention is to solve not just the technical problem but also to define a vocabulary and a way to extend it, providing a common ground for information exchange between different information communities (sets of users with different meanings, semantics, and syntax for geodata and spatial processing).
- The NaVigation Markup Language (NVML) for describing navigation information, submitted by Fujitsu Laboratories to the World Wide Web Consortium (W3C).
- The Point Of Interest exchange language (POIX) for exchanging location-related information over the Internet, created by the MObile Information Standard TEchnical Committee (MOSTEC) and submitted to W3C.
- Geotags for geographic registration and resource discovery of HyperText Markup Language (HTML) documents.
- The National Marine Electronics Association's (NMEA) interface and data protocol NMEA-0183, often used by GPS receivers.
- The electronic business card format vCard and iCalendar for exchanging electronic calendaring and scheduling information on the Internet, which include elements to specify position.
- A means for expressing location information in the Domain Name System (DNS LOC).
- The simple text format for the Spatial Location Protocol (SLoP), proposing a simple text-based format to carry a minimal location data set (www-nrc.nokia.com/ip-location).
- GMML, XML-based geographical information for navigation with a mobile device.
- LandXML, an XML-based data format for the exchange of data created during land planning, civil engineering and land survey processes.
- The Geospatial-eXtensible Markup Language (G-XML) for encoding and exchanging geospatial data, specified by the G-XML committee in Japan.

The Magic Services API (Mobile Automotive Geo-Information Services Core) was created by a loose industry group with the goal of creating a Web services API for location information. The protocol used between the mobile station and the server is the Simple Object Access Protocol (SOAP), and the document type is XML. The idea is to create a set of core services, which can be called as Web services in the emerging W3C architecture, and which could be used by Application Service Providers (ASPs) who intend to set up location-dependent services (Hjelm, 2002). The scope of the services includes route planning and geocoding (the conversion of human text or speech defining an address or other location expression to corresponding geographic coordinates).

The Geographic Location/Privacy (geopriv) working group of the IETF is currently addressing the standardized description of location information as well as its transfer in a controlled and secure fashion. If successful, the results of this group might also be extended to other types of context information.

In addition to these, there are several other non-public specifications of location information expressions, including from the WAP Forum Location Drafting Committee, the Bluetooth Special Interest Group, ISO/TC211, etc.

Agent Technology

Agent technology has grown from roots grounded in the sub-domains of Information Science, Artificial Intelligence and Distributed Computing.
Agent research has covered a wide range of topics, including (Agentcities, 2001): communication technologies (semantics, expressions of content), architecture (internal reasoning models for agents), knowledge representation (in communication or internal), distributed problem solving (planning, scheduling), coordination (coherent joint action), self-interestedness (negotiation, coalition formation) and embeddedness (interaction with the environment, including humans).

Software agents can be used to support functions needed for ambient awareness:

- to solve the knowledge problem in an open and dynamically changing environment;
- to handle joint socialization, coordination, negotiation, and planning;
- to learn about context and generate detectors and analyzers.

A Multi-Agent System, or Multi-Agent Society, is a social agent communication environment. A Multi-Agent System must involve (Odell, et al, 2002):

- the Principle of Coordination, which agent societies must use to provide information about an agent's role in the society;
- the Principle of Cooperation, which an agent society will use to perform activities in a shared environment; and
- the Principle of Competition, which an agent society must address due to the fact that agents can have self-interested goals that may supersede the goals of the group or of another single agent.

Of particular importance is that the social agent communications environment provides the processes that allow agents to interact productively. The productive interaction aspects that are particularly noteworthy include Social Differentiation, whereby a member of the agent society can play multiple roles in multiple groups; Interaction Management, the rules that manage interactions between agents to ensure correct interaction; and Social Order, the rules for the production of structured relationships among social agents. A role is an abstract representation of an agent's function, service or identification within a group. There are three main concepts of socialization and community that are addressed by the use of an agent social group (Odell, et al, 2002):

1. Intra-Group Associations - allow for the partitioning of a larger group into smaller communication domains for specific topic- or domain-oriented interaction.
2. Group Synergy - allows the agent members of the social group to use abilities and services that may be offered which are not possible for any single agent.
3. Inter-Group Associations - serve as a mechanism through which groups of agents in an agent social group can interact with other sets of agents in another social group.

One proposed solution to the requirement of enhanced agent communication in the social agent communications environment is the use of an Agent Communication Language (ACL) (OMG ASIG, 2001). ACLs have grown from roots in the research domains of Artificial Intelligence and Linguistics. Agent communication messages must have well-defined semantics that are computational and visible. Using an ACL-based approach, different parties can build their agents to interoperate using the ACL as the basic element of an interaction protocol between agents. Notable ACL implementations include:

- KQML (1992) - The Knowledge Query and Manipulation Language defines an 'envelope' format for messages, by which an agent can explicitly state the intended illocutionary force of a message.
- FIPA ACL (1999) - The Foundation for Intelligent Physical Agents (FIPA) released a specification of an 'outer' language for messages. FIPA ACL defines 20 performatives and is superficially similar to KQML.
- KIF (1992) - The Knowledge Interchange Format is a language explicitly intended to allow the representation of knowledge about some particular 'domain of discourse'. It was intended to be used with KQML.
- XML Schema based implementations.
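To make the idea of an ACL-style exchange concrete, the sketch below passes a message with an explicit performative, sender, receiver and content between two toy agents. The message structure is modelled loosely on KQML/FIPA ACL envelopes; the field names, ontology label and handling logic are illustrative assumptions, not the normative syntax of either language.

```python
from dataclasses import dataclass

@dataclass
class AclMessage:
    performative: str   # communicative act, e.g. "request", "inform"
    sender: str
    receiver: str
    content: dict       # content expressed in some agreed content language
    ontology: str = "ambient-awareness"   # assumed ontology name

class ContextAgent:
    """Toy agent that answers requests about the ambient information it knows."""

    def __init__(self, name, knowledge):
        self.name = name
        self.knowledge = knowledge

    def handle(self, message):
        if message.performative == "request" and message.content.get("ask") in self.knowledge:
            key = message.content["ask"]
            return AclMessage("inform", self.name, message.sender,
                              {key: self.knowledge[key]})
        return AclMessage("not-understood", self.name, message.sender, {})

location_agent = ContextAgent("location-agent", {"user-location": "Lisa's office"})
request = AclMessage("request", "ui-agent", "location-agent", {"ask": "user-location"})
reply = location_agent.handle(request)
print(reply.performative, reply.content)   # inform {'user-location': "Lisa's office"}
```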
A second proposed solution to the requirement of enhanced agent communication in the social agent communications environment is the use of agent interaction protocols, also known as conversation protocols (OMG ASIG, 2001). Cooperative and collaborative agents work in association and toward joint and common goals. To produce coherent plans, a social agent communications environment must be able to support agents in recognizing interactions so that they can plan appropriately. There are three possibilities for multi-agent interaction planning in a social communications environment (Wooldridge, 2002):

1. Centralized planning for distributed plans - a centralized planning system develops a plan for a group of agents, in which the division and ordering of labour is defined. This is a master-slave cooperation strategy. An example of this strategy is the Shared Plan Model.
2. Distributed planning for centralized plans - a group of agents cooperates to negotiate a centralized plan. This strategy allows agents to be specialists in different aspects of the overall plan and to contribute parts of it. Examples of this strategy are Joint Commitment and Partial Global Planning.
3. Distributed planning for individual plans - a group of agents cooperates to form individual plans of action, dynamically coordinating their activities along the way. This strategy may be best suited to self-interested agents, where there may never be a 'global' plan, but only a virtual, emergent global plan covering aspects such as sharing the cost of some common resource. An example of this strategy is self-interested multi-agent interaction.

8 Conclusions

Aspects related to ambient awareness that need to be researched and solved for the next-generation I-centric wireless world are described in this paper. Research challenges that follow from these aspects are:

- Advances in sensor technology, needed to reach further adaptation of services to - and co-operation with - the environment of actors.
- How to optimally combine and express (semantically) the different technologies for acquiring position information:
  o wide-area positioning technologies,
  o short-range wireless technologies,
  o sensory information,
  o dealing with location information in wireline networks.
- How to deal with partial information.
- Investigating how multiple readings can be derived from one single device.
- Proper evaluation of ambient indicators:
  o proper aggregation in relation to a known reference,
  o communicating and relating to additional situational information,
  o proper weighting and ordering of the processing of the relevant ambient indicators.
- Optimal use and integration of sensory information sensed by diverse sensors, understanding sensory modelling, and identifying the requirements for a cross-modality sensory model for ambient awareness.
- Privacy protection of ambient information the user regards as sensitive or private.
- Policies on ambient information access and exchange.
- Interpersonal interaction rules and strategies (human-human and human-computer) to determine the right coupling of the indicators in order to be really ambient aware in the provision of an active context.
- Effector-related issues, like:
  o modelling of effectors,
  o semantic descriptions of effector abilities,
  o handling disconnection problems of effectors.
- Reinforcement learning, incremental clustering, rule-based deduction, graph techniques.
- Extending standardisation specifications for denoting and exchanging ambient information on a global scale, across heterogeneous networks and technologies, conforming to an appropriate open inter-working architecture.

Acknowledgements

The following persons are acknowledged for their contributions through useful discussions and suggestions for improvement: Norman Sadeh (Carnegie Mellon University), Juhani Latvakoski and Petri Maatta (VTT Electronics), Wouter Teeuw and Henk Eertink (Telematica Instituut), Axel Busboom (Ericsson Research, Germany), Patricia Charlton (Motorola, Paris, France), John Yanosy (Motorola, TX, USA), Kimmo Raatikainen (Nokia, Finland), Radu Popescu-Zeletin (Fraunhofer FOKUS).

References

Ailisto, Heikki, Petteri Alahuhta, Ville Haataja, Vesa Kyllönen, Mikko Lindholm. 2001. Structuring Context Aware Applications: Five-Layer Model and Example Case. In: Workshop proceedings Ubicomp 2001.

Arbanowski, Stefan, Holger Waterstrat, Sven van der Meer, Radu Popescu-Zeletin. 2000. Open Profiling for Ubiquitous Computing. In: Proceedings of the 1st Workshop on Ubiquitous Computing (PACT 2000), Philadelphia, PA, 2000, ISBN 3-86009-191-3.

"Agentcities.NET Testbed for a Worldwide Agent Network, Annex I", Information Society Technologies Programme, 2001.

Chen, G., Kotz, D. 2001. "A Survey of Context-Aware Mobile Computing Research", Dartmouth Computer Science Technical Report TR2000-381.

Dey, A.K., Abowd, G.D. & Salber, D. 2001. A conceptual framework and a toolkit for supporting the rapid prototyping of ambient aware applications. Human-Computer Interaction, 16, xxx-xxx.

Dey, A.K. & Abowd, G.D. 1999. Towards a better understanding of context and ambient awareness. GVU Technical Report GIT-GVU-99-22, College of Computing, Georgia Institute of Technology (ftp://ftp.cc.gatech.edu/pub/gvu/tr/1999/99-22.pdf); and in: Proceedings of Computer-Human Interaction 2000 (CHI 2000), Workshop on The What, Who, Where, When, and How of Ambient Awareness, The Hague, Netherlands, April 2000.

Genesereth, M.R., Ketchpel, S.P. 1994. "Software Agents", Communications of the ACM, pp. 48-53.

Gessler, Stefan. 2001. "Location beyond position: Requirements for Location-based Services Portals", Location based Services Summit, pulver.com, Boston (USA), May 2001.

Gessler, Stefan and Jesse, Kai. 2001. Advanced Location Modeling to enable sophisticated LBS Provisioning in 3G networks. In: Workshop proceedings Ubicomp 2001 "Location modelling for ubiquitous computing", Michael Beigl, Phil Gray and Daniel Salber (Eds.), Atlanta, September 30, 2001.

Hjelm, Johan. 2002. Creating Location Services for the Wireless Web: Professional Developer's Guide, John Wiley & Sons, ISBN 0-471-40261-3.

IETF. 2002. http://www.ietf.org/html-charters/svrloc-charter.html

Rakotonirainy, Andry, Seng Wai Loke and Geraldine Fitzpatrick. 2000. Ambient Awareness for the Mobile Environment. In: CHI 2000 Workshop #11 Proposal, The Hague, Netherlands, April 2000.

Salber, Daniel. 2000. Context-awareness and multimodality. Colloque sur la multimodalité, IMAG, Grenoble.

Mari Korkea-aho. 2000. Ambient aware application survey. http://www.hut.fi/~mkorkeaa/doc/ambient aware.html

Mari Korkea-aho and Haitao Tang. 2001. Experiences of Expressing Location Information for Applications in the Internet.
In: Workshop proceedings Ubicomp 2001 "Location modelling for ubiquitous computing", Michael Beigl, Phil Gray and Daniel Salber (Eds.), Atlanta, September 30, 2001.

Moran, Thomas P. and Paul Dourish. 2001. Ambient aware computing. Special issue of Human-Computer Interaction, Volume 16, 2001.

Odell, J.J., Van Dyke Parunak, H., Fleischer, M., and Brueckner, S. 2002. "Modeling Agents and their Environment", AOSE Workshop at AAMAS, 2002.

OMG Agent Platform Special Interest Group. "Agent Technology Green Paper", Object Management Group, OMG Document agent/00-09-01.

Object Management Group. Super Distributed Objects. White paper. OMG Document sdo/01-07-01.

OMG SDO DSIG. "PIM and PSM for Super Distributed Objects". RFP. OMG Document sdo/02-01-04.

Pascoe, J., Ryan, N.S. and Morse, D.R. 1999. Issues in developing ambient aware computing. In: Proceedings of the International Symposium on Handheld and Ubiquitous Computing (Karlsruhe, Germany, September 1999), Springer-Verlag, pp. 208-221.

Pascoe, Jason. 1998. Adding generic contextual capabilities to wearable computers. In: The Second International Symposium on Wearable Computers, Pittsburgh, October 1998, pages 92-99, ISBN 0-8186-9074-7.

Reeves, Byron and Clifford Nass. 1996. The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. Stanford, CA: CSLI Publications.

SUN. 2002. http://www.sun.com/software/jini

W3C. 2002. Multimodal interaction working group charter. http://www.w3.org/2002/01/multi-modal-charter.html

Wooldridge, M.J. 2002. An Introduction to MultiAgent Systems. John Wiley & Sons Ltd, West Sussex, England.