Intelligent Multi Sensor Fusion System for Advanced Situation Awareness in Urban Environments

Georg Hummel1, Martin Russ1, Peter Stütz1, John Soldatos2, Lorenzo Rossi3, Thomas Knape4, Ákos Utasi5, Levente Kovács5, Tamás Szirányi5, Charalampos Doulaverakis6 and Yiannis Kompatsiaris6

1 Institute of Flight Systems, Bundeswehr University Munich, Germany {georg.hummel, martin.russ, peter.stuetz}@unibw.de
2 Athens Information Technology, 0,8Km Markopoulo Ave, Athens, Greece [email protected]
3 Vitrociset Spa, Via Tiburtina 1020, Roma, Italy [email protected]
4 Data Fusion International, Dublin, Ireland [email protected]
5 Computer and Automation Research Institute, Hungarian Academy of Sciences, Hungary {utasi, levente.kovacs, sziranyi}@sztaki.hu
6 Centre for Research and Technology Hellas, Informatics and Telematics Institute, Thessaloniki, Greece {doulaver, ikom}@iti.gr

Abstract. This paper presents a distributed multi-sensor data processing and fusion system providing sophisticated surveillance capabilities in the urban environment. The system enables visual/non-visual event detection, situation assessment, and semantic event-based reasoning for force protection and civil surveillance applications. The novelty lies in the high-level system view: the approach does not concentrate on data fusion methodologies per se, but rather takes a holistic view of sensor data fusion that provides both lower-level (sensor) and higher-level (semantic) fusion. At the same time, the system makes provisions for visualizing and processing space-time alerts, from individual sensor detections up to high-level alerts derived by rule-based semantic reasoning over sensor data and fusion events. The proposed architecture has been validated in a number of different urban scenarios, both synthetic and live.

Keywords: Sensor Fusion, Common Operational Picture, UAV, JDL

1 Introduction

During the last fifteen years the world has witnessed a number of major defense and security incidents in the urban environment. The most prominent examples include the events of 09/11/2001 and the bombings in the London and Madrid subways. These incidents have manifested the vulnerabilities of the urban environment, as well as the magnitude of the social and economic costs associated with such incidents. At the same time, there have also been cases where military operations in urban environments were necessary. The complications of the urban environment compared with open terrain impose very specific challenges at both the operational and tactical levels. The urban environment is characterized by the presence of buildings, city infrastructure and other man-made structures in a 3D space, above and below ground, as well as by a considerable number of civilians. This necessitates consideration of the protection of civilians' lives as well as of the inherent issues governing ethnic, political, economic and cultural relationship management. Facing the complexity of operations in the urban environment, security and defense agencies are increasingly turning to pervasive multi-sensory technologies to enhance their ability to acquire, analyze and visualize events and situations. Sensor data fusion is among the primary technologies employed towards supporting these goals. Indeed, sensor data fusion systems can enable the creation of a common operational picture (COP). However, the development of robust data fusion systems is associated with a host of technical challenges.
Several of these challenges stem from the need to integrate highly distributed and heterogeneous systems, including multiple sensors (e.g., cameras, microphones, microphone arrays, LIDARs), signal processing algorithms (extracting events from raw multimedia signals) and data fusion algorithms. Furthermore, fusion is likely to take place on multiple levels, as specified by the JDL (Joint Directors of Laboratories) fusion model [17]. Nevertheless, state-of-the-art infrastructures provide a sound starting point for the design of multi-sensory data fusion systems. Since the advent of ubiquitous and pervasive computing [1], researchers have been striving to design and develop middleware that could ease the integration of non-trivial multi-sensory context-aware systems. As a result, several middleware architectures have emerged and evolved over the years, which can broadly be classified into three categories. The first includes middleware systems for Smart Spaces [2][3][4], characterized by their emphasis on the integration of complex perceptual processing components such as audio and visual processing algorithms, with the associated challenges of real-time processing, extreme heterogeneity in terms of implementation platforms, timing synchronization, and the handling of uncertainty due to inaccurate algorithms; such systems are usually deployed within indoor environments. The second category includes middleware architectures for Wireless Sensor Networks (WSN) [5] and Radio Frequency Identification (RFID) systems [6], which emphasize efficient ways to integrate, link and fuse information from multiple sensor sources [7]. The focus of these systems is mainly on the integration and fusion of sensor data, without emphasis on complex perceptive processing. Finally, the third category refers to semantic middleware systems [8][9][10], which employ ontologies and semantic reasoning in order to infer context, while also assessing situations. Systems providing advanced capabilities for urban security and surveillance, as well as situation assessment and common operational picture generation, should advance and integrate all of the above functionalities in order to handle multiple heterogeneous sensors suitable for the urban war environment (including sensor and data feeds stemming from perceptual processing). At the same time, such a system needs to support the mandates of the JDL fusion levels by deploying various fusion techniques and algorithms. We therefore introduce the Multi sEnsor Data fusion grid for Urban Situational Awareness (MEDUSA), a pervasive multi-sensor fusion system which takes the above considerations into account towards enabling COP generation in urban environments, based on a middleware architecture that extends the state of the art in terms of the properties outlined above. In particular, the system facilitates the integration of multiple sensors and fusion algorithms, while at the same time supporting fusion at multiple levels (including the JDL levels). Hence, MEDUSA includes both low-level rule-based and high-level event-based fusion capabilities. At the application level, the system provides geo-location and geographic visualization capabilities, which are key prerequisites for COP generation. Overall, the system acts as a breadboard enabling flexible integration of diverse sensors, processing algorithms, fusion rules, as well as ontologies for semantic-based reasoning and fusion of high-level metadata.
In addition to its novel architecture implementation, the system includes several component-level innovations concerning the implementation and integration of novel visual processing components and mobile sensor platforms. In the area of visual signal processing, we have developed and integrated a number of robust real-time context acquisition components. Generic visual detection methods do not possess the necessary capabilities for drop-in application in urban war scenarios, since the environment and circumstances can be varied and changing; algorithms need to be adapted to be applicable in such situations. Methods developed and tested for such use have been integrated into the MEDUSA system and used in real-time scenarios. The main difficulties during development lay in making the detectors real-time and well-performing, while creating modules that can be easily extended and trained. In the area of mobile sensor platforms, the deployment of and interaction with UAVs was specifically addressed. In this respect the conventional paradigm of UAV guidance and control utilizing a dedicated, manned ground control station was replaced by a tighter, immediate coupling to the fusion grid and its mobile sensors. Overall, the paper is structured as follows: Section 2 presents an overview of the grid architecture, emphasizing its novel characteristics and illustrating sensor integration and fusion capabilities. Section 3 is devoted to the description of the low-level processing algorithms and fusion capabilities. Section 4 focuses on the semantic reasoning capabilities, including the integration of ontologies and the implementation of reasoning schemes. Section 5 elaborates on the integration of geo-location services, also illustrating the COP interface. Section 6 illustrates the integration of mobile nodes, notably of UAVs. Section 7 is devoted to the validation of the system in the scope of synthetic/simulated and live scenarios.

2 Overview of the Multi-Sensor Fusion Grid Architecture

The presented Sensor Fusion Grid (SFG) has been designed as a pervasive middleware system which combines the merits of state-of-the-art middleware for multi-sensor integration, perceptive algorithms and semantic reasoning. In particular, the system supports the following functionalities: (a) flexible support for heterogeneous sensors and processing algorithms, (b) implementation of various fusion algorithms and sensor deployment configurations over a single middleware infrastructure, (c) support for multiple application scenarios, (d) capability of real-time context-based acquisition and interaction, (e) information acquisition and integration from mobile nodes and annexed sensors, and (f) provision of high-level fusion and reasoning in order to adequately support event generation and decision support. To fulfill these requirements, the sensor fusion grid architecture (see Fig. 1) has been designed as a next-generation pervasive grid system [15]. In essence, it is a distributed system comprising multiple nodes, each in charge of collecting and fusing information stemming from sensors, perceptive algorithms or other nodes of the SFG. The core of the system was designed on the basis of the following types of nodes, which can seamlessly communicate and exchange information with each other [16]:
• Low-Level Fusion (LLF) Nodes, collecting and combining information from underlying sensors and perceptive processing algorithms. LLF nodes are where sensors and sensor processing algorithms are integrated and executed.
• High-Level Fusion (HLF) Node (or Semantic Node), integrating ontologies for situation awareness and high-level reasoning.
• Mobile Nodes, including Instrumented Person Nodes (IPN) and Unmanned Aerial Vehicles (UAV), which allow the system to interface with mobile sensor platforms.
• Central Control Nodes, which host centralized services such as the COP module and the Environmental Services (ES) module, also providing presentation services.

Fig. 1. General system architecture

The MEDUSA system provides support for all the fusion levels specified in the JDL model, a functionally oriented model intended to be common and useful across multiple application areas. The JDL Fusion Working Group has introduced four different levels for the data-fusion process [17]: (a) Level 1 (Object Refinement), combining different pieces of information (e.g., location, parametric and identity information, feature extractions) to achieve refined representations of individual objects; (b) Level 2 (Situation Refinement), taking the results of Level 1 to perform situation assessment by fusing spatial and temporal data of entities into an abstract representation; (c) Level 3 (Threat Refinement), taking the results of Level 2 to estimate future events by fusing the combined activities and capabilities of entities to infer their intentions and assess the threat that they pose; and (d) Level 4 (Process Refinement), providing resource management and feedback for refinement of the previous levels and monitoring of the fusion performance. Furthermore, a "pre-processing" level (Level 0) [18], operating at the sensor level, and a Human-Machine Interaction level (Level 5) have been introduced later. The MEDUSA system supports Level 0 and Level 1 fusion through its LLF nodes, and Level 2 and Level 3 fusion through semantic reasoning at the HLF node. Later paragraphs introduce these nodes and illustrate the HMI capabilities through which Level 4 and Level 5 capabilities are provided. From an implementation perspective the SFG is based on the following choices: (a) the use of the Global Sensor Networks (GSN) middleware [5][24], which provides support for low-level data access and processing of sensor data sources. GSN supports important issues such as timing and sliding windows of the data collection, as well as the definition and support of virtual sensors; hence, the LLF nodes were built over a customized GSN node. (b) The decoupling of the low-level signal processing algorithms from the GSN node, along with the implementation of multiple wrappers for the different algorithms; this way, the MEDUSA partners managed to integrate their technologies despite their heterogeneity. (c) The use of the Virtuoso platform [19] as a means for ontology integration and associated reasoning mechanisms. The SFG implementation uses the Situation Theory Ontology (STO) [9], a formal ontology based on the situation theory initiated by Barwise and Perry [20] and developed by Devlin [21]. STO allows inferring new facts about situations from detected events. Note, however, that MEDUSA is not confined to the use of a single ontology (like STO); its middleware infrastructure is flexible with respect to the integration of further ontologies. (d) The implementation of standardized distributed interactions (web services), which renders MEDUSA a truly distributed system that can be deployed in multiple configurations depending on the scenario at hand (i.e., the LLF and HLF nodes are defined according to the implementation scenario).
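To make implementation choice (b) more concrete, the following minimal Python sketch is purely illustrative; the actual MEDUSA wrappers are written against GSN's own interfaces, and all class and field names below are assumptions. It shows the idea of decoupling a heterogeneous processing algorithm from the node through a thin wrapper that pushes timestamped detections into a virtual-sensor-like stream with a sliding time window.

import time
from collections import deque
from typing import Any, Callable, Deque, Dict, Optional


class AlgorithmWrapper:
    """Adapts an arbitrary perceptual processing algorithm to a common push interface."""

    def __init__(self, name: str, process: Callable[[bytes], Optional[Dict[str, Any]]]):
        self.name = name        # e.g. "smoke_detector" or "unusual_movement"
        self.process = process  # algorithm-specific entry point; returns a detection or None

    def on_sample(self, raw_sample: bytes, sink: "VirtualSensorStream") -> None:
        detection = self.process(raw_sample)   # run the wrapped algorithm
        if detection is not None:              # forward only actual detections
            detection["source"] = self.name
            detection["timestamp"] = time.time()
            sink.push(detection)


class VirtualSensorStream:
    """Buffers detections over a sliding time window, similar to a GSN virtual sensor."""

    def __init__(self, window_seconds: float = 30.0):
        self.window_seconds = window_seconds
        self.buffer: Deque[Dict[str, Any]] = deque()

    def push(self, detection: Dict[str, Any]) -> None:
        self.buffer.append(detection)
        horizon = detection["timestamp"] - self.window_seconds
        while self.buffer and self.buffer[0]["timestamp"] < horizon:
            self.buffer.popleft()               # expire detections outside the window

In this scheme an LLF node would register one wrapper per integrated algorithm, so that new detectors can be added without touching the node's data collection or forwarding logic.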
3 Low-Level Sensor Processing and Data Fusion

Low-level processing is performed on LLF nodes responsible for fusing sensor data, i.e., JDL Levels 0 and 1. Low-level fusion processes raw sensor data to provide an estimation of physical attributes of specific objects (position, speed, acceleration) along with the definition of other entity attributes. The information generated from these elements is fused with geo-location information (either fusing data coming from known sources or calculating position from camera references by Inverse Perspective Mapping [22]). In addition, single data source processing modules were developed and integrated, along with new algorithms for the detection of unusual/abnormal events within large arrays of normal/regular events. As an example of low-level detection, the visual Smoke Detector uses color and image appearance information and background subtraction in order to detect the appearance of smoke. Background subtraction segments moving pixels, based on smoke motion properties. Then, color analysis is performed to produce a threshold that separates smoke-colored objects from others in the Region of Interest. Finally, for the remaining moving regions we perform texture analysis to distinguish fast decreases in edge energy, which are generally caused by rigid objects, from slower decreases. We calculate image gradients with a Sobel operator and produce a temporal graph that denotes the decline of edge energies. The combination of these techniques provides fast and robust discrimination among moving objects (e.g., Fig. 2). As another example, the Unusual Movement Detector is a real-time detector of motions that are unusual with respect to the typical motions observed during a training period, based on [14]. It can be used in situations where unusual movement detection is important, e.g., automatic traffic surveillance or crowd analysis. The directions of optical flow vectors are first extracted at image pixels; then probability-based approaches are used for motion area classification, combined with Mixture of Gaussians modeling and spatial averaging based on Mean-Shift segmentation. A Markovian prior is introduced to obtain reliable spatio-temporal support. The novelty of the approach is that, although outdoor videos are generally of low quality and high complexity, the pixel-based approach gives more robust results, and spatio-temporal support can be incorporated without object-level understanding (which is generally impossible or very computationally expensive for such data) (Fig. 3).

Fig. 2. Examples of smoke detection

Fig. 3. Examples of unusual movement detection. From left to right: Traffic scene. Learned traffic motion direction map. Traffic scene with bicycle going against traffic. Detection mask, where higher intensity means higher probability of unusual movement.

The results of low-level processing are used by HLF nodes to further analyze the scene and to provide the C2 operator with better situational awareness of detected threats and with support for both risk assessment and decision making. Low-level modules of the test system include: (a) multi-view vehicle tracking, (b) detection and tracking of ground objects from UAVs [11], (c) detection and tracking of people and loitering person detection [23], (d) motion detection in regions of interest [12], (e) detection of unusual vehicle or pedestrian movements [13][14], and (f) visual smoke detection.
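As an illustration of the smoke-detection pipeline described earlier in this section (background subtraction, color analysis, and temporal edge-energy decline), the following Python/OpenCV sketch follows the same three steps; the thresholds, window lengths and helper names are assumptions for illustration, not the parameters of the actual MEDUSA component.

import cv2
import numpy as np

bg_subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)
edge_energy_history = []

def detect_smoke(frame_bgr):
    """Return a binary mask of candidate smoke regions for one video frame."""
    # 1) Background subtraction: keep only moving pixels (smoke is non-static).
    motion_mask = bg_subtractor.apply(frame_bgr)

    # 2) Color analysis: smoke tends to be grayish, i.e. low saturation (assumed threshold).
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    low_saturation = (hsv[:, :, 1] < 60).astype(np.uint8) * 255
    candidate_mask = cv2.bitwise_and(motion_mask, low_saturation)

    # 3) Texture analysis: smoke blurs edges gradually, rigid objects change them abruptly.
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    edge_energy = float(np.sum(cv2.magnitude(gx, gy)[candidate_mask > 0]))
    edge_energy_history.append(edge_energy)

    # Flag the candidate regions only if edge energy declines slowly over recent frames.
    recent = edge_energy_history[-10:]
    slow_decline = len(recent) == 10 and edge_energy < 0.9 * float(np.mean(recent))
    return candidate_mask if slow_decline else np.zeros_like(candidate_mask)

Detections produced in this way are what an LLF node forwards, together with geo-location information, to the higher layers of the grid.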
4 Semantic Reasoning Capabilities

The High-Level Fusion (HLF) layer of the proposed architecture is built around the already introduced STO ontology, which is used as the backbone for performing the reasoning required for situation assessment. We use Virtuoso as the underlying Semantic Web technology integration stack, which provides services such as RDF triple storage, SPARQL compilers, and RDF views on relational data. Virtuoso acts as a federation layer for the seamless integration of relational and semantic data (Fig. 4).

Fig. 4. High level fusion scheme

The chosen STO ontology is the main information gathering point in the proposed architecture. STO supports modeling of artifacts of the environment in which the sensor network is deployed. Core modeling is supported with concepts (and sub-concepts) of situation and event and their relations. Alert and detection data originating from the processing modules in the LLF nodes is stored here, and reasoning is performed to infer new knowledge, which corresponds to events and situation assessments that cannot be performed at the LLF level. STO is written in OWL and models the events/objects and their relationships in a way that can be extended according to the needs of a specific domain in order to support situation assessment [17]. In order to forward data from the relational LLF GSN node databases to the ontology, it has to be translated into semantic notation. This is performed by mapping the relational schema to semantic entities. Real-time performance was a prerequisite for the system, and we adopted a push strategy to process data as soon as it is generated. A pull strategy was also an option, through Virtuoso's "RDF Views" functionality; however, its utilization would have induced delays in the mapping process. The push method forwards relational data from the node databases to the ontology using semantic notation as soon as the data is generated. Push was implemented in the low-level layer, with the semantic layer having a passive role in the process. The advantage is that the transformations are fast and the ontology is always up-to-date. The disadvantage is that each low-level node has to implement its own push method, while there is the risk that the ontology is populated with data even when no query is sent to the semantic layer. Additional information can be integrated by the HLF layer in order to derive situations. Information from sources like environmental services can be queried, as long as they expose I/O interfaces, and used for inferencing. Reasoning uses the ontology structure and the stored instances to draw conclusions about situations. Under this approach, relations between classes like rdfs:subClassOf, properties like owl:sameAs, and relations defined by experts in the ontology are used for inference. The knowledge base can be extended further by using rules to describe situations/events too complex to be defined using OWL notation alone. For defining the rules that form the situation assessment inference mechanism, SPARQL CONSTRUCT formulations have been used, since SPARQL is expressive in rule formulation [25]. Another advantage of SPARQL CONSTRUCTs is their efficiency in terms of speed and memory consumption during evaluation.
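To illustrate what such a CONSTRUCT-based rule can look like, the following sketch evaluates a simplified rule with rdflib in Python (in the deployed system the rules run inside Virtuoso); the namespace and the class/property names (Vehicle, Patrol, flaggedSuspicious, follows, SuspiciousFollowingSituation) are hypothetical stand-ins, not the actual STO vocabulary.

from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/medusa#")   # hypothetical namespace

g = Graph()
g.bind("ex", EX)
# Facts as they might be pushed from the LLF layer into the knowledge base:
g.add((EX.car42, RDF.type, EX.Vehicle))
g.add((EX.car42, EX.flaggedSuspicious, Literal(True)))
g.add((EX.car42, EX.follows, EX.patrol7))
g.add((EX.patrol7, RDF.type, EX.Patrol))

# Rule: a suspicious vehicle following a patrol constitutes a situation of interest.
RULE = """
PREFIX ex: <http://example.org/medusa#>
CONSTRUCT {
  [] a ex:SuspiciousFollowingSituation ;
     ex:involvesVehicle ?v ;
     ex:involvesPatrol  ?p .
}
WHERE {
  ?v a ex:Vehicle ;
     ex:flaggedSuspicious true ;
     ex:follows ?p .
  ?p a ex:Patrol .
}
"""

inferred = g.query(RULE).graph        # the CONSTRUCT result is itself an RDF graph
for triple in inferred:
    g.add(triple)                     # feed inferred situations back into the knowledge base
print(len(inferred), "situation triple(s) inferred")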
Using this strategy, Virtuoso is treated as both a storage and an inference layer, with high performance in data volume handling as well as in query execution times, while also taking advantage of Virtuoso's other features such as query built-ins and geospatial extensions. The reasoning service can also use information from external services in the inference process, e.g., from the Environmental Service, which is described in the next section.

5 Environmental Service and COP

One of the most significant challenges in the development of a multi-sensor data fusion grid is the acquisition and management of context information about the environment in which the sensors are deployed (e.g., restricted areas, critical infrastructures, etc.). Therefore, and due to the needs of the data fusion algorithms used, the system provides a common representation of the environment, the Environment Service (ES) [26]. The Environment Service provides: (a) a common model for representing environment information, (b) an interface allowing uniform access to the environment-related information, (c) specific services allowing users and applications to access geographical and meteorological data for a given location at a specific time, and (d) geometrical algorithms to calculate distances and paths.

Fig. 5. Architecture and 3D view of the COP

The Common Operational Picture (COP) is the presentation layer of the MEDUSA system architecture and uses a graphical interface to provide the user with a common vision of what is happening on the scene from a tactical point of view. It can be subdivided into the following main components (Fig. 5): (a) the COP API, which receives data (e.g., entities, sensors, events) from LLF and HLF nodes and stores such data in the central control node database, and (b) the COP Human-Machine Interface (HMI), which presents information about entities, sensors and events from the database to the user through a graphical interface (2D/3D).

6 Mobile Nodes

MEDUSA employs various sensors, and each sensor is fully integrated by the LLF nodes on top of GSN [24]. The front-end sensor layer includes cameras (visual and infrared), sniper detectors, CBRN detectors, etc. Sensors can be stationary or mobile. Here we illustrate the mobile sensor-equipped platforms integrated in MEDUSA. The Instrumented Person Node (IPN) comprises a soldier/person equipped with a wearable device aimed at collecting data from the environment. It provides several functionalities, such as GPS localization, video streaming (via a wearable camera), radio communication, and other customizable sensors (e.g., temperature). In addition, the use of airborne sensor platforms such as Unmanned Aerial Vehicles, yielding dynamic sensor positioning and particular perspectives (e.g., a bird's-eye view) for multimodal sensor fusion, was considered. One aim was the integration of such a system while bypassing traditional ground control elements. Thus we considered a UAV detached from its operating platoon and attached to the network, allowing the network not only to directly receive sensor data but also to send control commands to the UAV. We implemented a service-oriented architecture for the deployment of UAVs in order to exploit their capabilities while hiding their complex internals, e.g., autopilot and gaze control commands. The UAV platform executes high-level tasks using its EO and IR cameras and provides sensor data to the network.
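As a rough illustration of this service-oriented coupling, the following Python sketch shows the kind of task request and status report that might be exchanged between the sensor network and a UAV node; the field names, identifiers, coordinate values and JSON transport are assumptions for illustration and do not reflect the actual on-board interfaces described next.

import json
from dataclasses import asdict, dataclass
from enum import Enum


class TaskType(Enum):
    AREA_SEARCH = "AreaSearch"
    SENSOR_REPLACEMENT = "SensorReplacement"
    ROAD_FOLLOWING = "RoadFollowing"


@dataclass
class UavTask:
    task_id: str
    task_type: TaskType
    area_of_interest: list            # e.g. polygon of (lat, lon) vertices
    preferred_sensor: str = "EO"      # "EO" or "IR"


@dataclass
class UavStatus:
    task_id: str
    task_type: TaskType
    position: tuple                   # (lat, lon, alt)
    pose: tuple                       # (roll, pitch, yaw) in degrees
    sensor_type: str
    field_of_view_deg: float


def to_wire(message) -> bytes:
    """Serialize a task or status message for exchange with the UAV node."""
    payload = asdict(message)
    payload["task_type"] = message.task_type.value
    return json.dumps(payload).encode("utf-8")


# Example: the network tasks a UAV with an area search; the UAV reports status continuously.
task = UavTask("t-017", TaskType.AREA_SEARCH, [(48.08, 11.64), (48.09, 11.65), (48.08, 11.66)])
status = UavStatus("t-017", TaskType.AREA_SEARCH, (48.085, 11.645, 120.0),
                   (0.0, -5.0, 270.0), "EO", 32.0)
print(to_wire(task))
print(to_wire(status))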
The on-board UAV architecture consists of the HW-level SOMA core and the middleware/application-oriented SoFiSt layer. The SOMA (Sensor Oriented Missions Avionics Platform) core addresses network monitoring, inter-process communication and sensor virtualization. After an ad-hoc connection is established, the sensor network requests the available capabilities of the UAVs. Each UAV reports platform and sensor characteristics: sensor type, resolution, frame rate, stabilization modes, streaming protocols and onboard processing capabilities. The sensor network can then send high-level tasks such as Area Search, Sensor Replacement and Road Following. A received task is decomposed by sensor and perception management functions using SoFiSt (Software Framework for intelligent Sensor Management) into platform, sensor and processing plans, considering resource, context and background knowledge. Sensor data is streamed via RTP, RTSP or UDP. Status information (task ID and type, position, pose, sensor type, field of view) is sent continuously to enable fusion, monitoring and re-tasking of the UAV. During trials, a UAV system, both live and simulated, has been integrated into the proposed architecture, and UAV-derived information has been used in the scope of multi-level, multi-layer data fusion.

7 System Evaluation

MEDUSA components were individually evaluated and their performance was compared with ground truth and state-of-the-art algorithms. Due to space constraints, detailed results are not presented here; the reader is referred to the related papers in the bibliography [11-14], [16], [23]. The overall MEDUSA system was assessed and validated against challenging scenarios. Specifically, three scenarios were generated. Two synthetic scenarios, in which an urban war environment is reproduced, were implemented with simulated sensors deployed by means of battlefield simulation software [27]. A third, live scenario, in which the events were staged by actors, was set up at the UniBw military campus in Munich in order to showcase the operation of MEDUSA in a real-world environment. Here UAVs and IPNs were deployed together with video cameras in order to monitor a specific restricted area. During the test runs in both synthetic and live scenario types, events were reliably detected and processed. Event notifications and threat alarms were displayed on the COP with a maximum delay well below the required response time. In the following example it is assumed that a vehicle checkpoint (VCP) equipped with MEDUSA technology monitors regular street traffic but also conducts and controls local operations using its own patrol vehicles. In the depicted event a car passes the VCP and starts to closely follow a military patrol, which is considered alarming. Table 1 describes the storyboard as well as the MEDUSA reactions and the related man-machine interaction.

Table 1. Storyboard of example event with MEDUSA actions and benefits.

Storyboard: A vehicle approaches a VCP.
MEDUSA actions/interaction: The vehicle is tracked automatically and displayed.

Storyboard: The VCP personnel check the vehicle and let it pass.
MEDUSA actions/interaction: VCP personnel consider the vehicle potentially hazardous and flag it as suspicious.

Storyboard: The vehicle leaves the VCP. This coincides with a patrol setting out for its mission from the VCP.
MEDUSA actions/interaction: The vehicle and the patrol are continuously tracked.

Storyboard: The vehicle catches up to and follows the patrol.
MEDUSA actions/interaction: High-level reasoning within MEDUSA analyzes the vehicle status as well as its temporal and spatial relations to the patrol and raises an alarm. VCP and patrol are informed about a suspicious vehicle following the patrol.
8 Conclusions

The paper introduced a novel system for sophisticated multi-sensor data fusion for the generation of a common operational picture in urban environments. The system integrates a range of novel algorithms for context acquisition and processing, along with ontologies for semantic reasoning. It is based on a middleware architecture that combines key concepts from state-of-the-art pervasive computing systems, with a view to successfully responding to the heterogeneity, integration and intelligence related challenges. Among the novel points of the introduced system is its ability to leverage a wide range of heterogeneous signal processing components running on various platforms. The system integrates semantic nodes and semantic capabilities, while at the same time providing support for all JDL fusion levels, based on the distributed collaboration and interaction of the LLF and HLF nodes but also within the HLF nodes. Furthermore, the system has integrated mobile/roaming nodes such as UAVs. The system has been used in the scope of various urban scenarios handling multiple sensors and heterogeneous events, while serving the purposes of different applications. Future work includes the implementation of predictive capabilities in the fusion system, with a view to enabling security agencies to anticipate events. This requires the study and integration of advanced reasoning schemes that could predict situations (e.g., based on Bayesian Knowledge Bases and game theory strategies). The MEDUSA system provides a sound basis for integrating the semantics of such schemes, as well as for experimenting with relevant scenarios.

Acknowledgements. This work has been carried out in the scope of the MEDUSA project (Multi sEnsor Data fusion grid for Urban Situational Awareness), co-funded by the European Defence Agency's Joint Investment Programme on Force Protection. The authors acknowledge help and contributions from all partners of the project.

References

1. Weiser, M.: The Computer for the 21st Century. Scientific American, vol. 265, no. 3, pp. 66–75 (1991)
2. Yau, S.S., Karim, F., Wang, Y., Wang, B., Gupta, S.K.S.: Reconfigurable Context-Sensitive Middleware for Pervasive Computing. IEEE Pervasive Computing, joint special issue with IEEE Personal Communications on Context-Aware Pervasive Computing, vol. 1, no. 3, pp. 33–40 (2002)
3. Soldatos, J., Pandis, I., Stamatis, K., Polymenakos, L., Crowley, J.L.: Agent based middleware infrastructure for autonomous context-aware ubiquitous computing services. Computer Communications (COMCOM), vol. 30, no. 3, pp. 577–591 (2007)
4. Dimakis, N., Soldatos, J., Polymenakos, L., Fleury, P., Curín, J., Kleindienst, J.: Integrated Development of Context-Aware Applications in Smart Spaces. IEEE Pervasive Computing, vol. 7, no. 4, pp. 71–79 (2008)
5. Aberer, K., Hauswirth, M., Salehi, A.: Infrastructure for data processing in large-scale interconnected sensor networks. In: MDM'07, pp. 198–205 (2007)
6. Floerkemeier, C., Roduner, C., Lampe, M.: RFID Application Development with the Accada Middleware Platform. IEEE Systems Journal, vol. 1, no. 2, pp. 82–94 (2007)
7. Chatzigiannakis, I., Mylonas, G., Nikoletseas, S.: 50 ways to build your application: A survey of middleware and systems for Wireless Sensor Networks. In: Proc. IEEE Conference on Emerging Technologies and Factory Automation, ETFA (2007)
8. Chen, H., et al.: Semantic Web in the Context Broker Architecture. In: Proc.
Second Annual IEEE International Conference on Pervasive Computing and Communications (2004)
9. Kokar, M.M., Matheus, C.J., Baclawski, K.: Ontology-based situation awareness. Information Fusion, Special Issue on High-level Information Fusion and Situation Awareness, vol. 10, no. 1, pp. 83–98 (2009)
10. Pfisterer, D., Römer, K., Bimschas, D., Kleine, O., Mietz, R., Truong, C., Hasemann, H., Kröller, A., Pagel, M., Hauswirth, M., Karnstedt, M., Leggieri, M., Passant, A., Richardson, R.: SPITFIRE: toward a semantic web of things. IEEE Communications Magazine, vol. 49, no. 11, pp. 40–48 (2011)
11. Kovács, L., Benedek, Cs.: Visual real-time detection, recognition and tracking of ground and airborne targets. In: Proc. of Computational Imaging IX, SPIE-IS&T Electronic Imaging, vol. 7873, pp. 787311–1–12. SPIE (2011)
12. Szlávik, Z., Kovács, L., Havasi, L., Benedek, Cs., Petrás, I., Utasi, Á., Licsár, A., Czúni, L., Szirányi, T.: Behavior and event detection for annotation and surveillance. In: Intl. Workshop on Content-Based Multimedia Indexing (CBMI 2008), pp. 117–124 (2008)
13. Utasi, Á.: Novel Probabilistic Methods for Visual Surveillance Applications. PhD Thesis, University of Pannonia, Veszprém, Hungary (2012)
14. Utasi, Á., Czúni, L.: Anomaly detection with low-level processes in videos. In: Proc. 3rd International Conference on Computer Vision Theory and Applications, pp. 678–681 (2008)
15. Tham, C., Buyya, R.: SensorGrid: Integrating sensor networks and grid computing. CSI Communications, vol. 29, pp. 24–29 (2005)
16. Doulaverakis, C., Konstantinou, N., Knape, T., Kompatsiaris, I., Soldatos, J.: An approach to intelligent information fusion in sensor saturated urban environments. In: Proc. IEEE European Intelligence and Security Informatics Conference (EISIC), pp. 108–115 (2011)
17. White, F.E.: Data fusion lexicon. Data Fusion Subpanel of the Joint Directors of Laboratories, Technical Panel for C3 (1991)
18. Steinberg, A.N., Bowman, C.L., White, F.E.: Revisions to the JDL Data Fusion Model. In: Proc. SPIE Sensor Fusion: Architectures, Algorithms, and Applications III (1999)
19. Virtuoso Universal Server, http://virtuoso.openlinksw.com
20. Barwise, J., Perry, J.: Situations and Attitudes. MIT Press (1983)
21. Devlin, K.: Logic and Information. Cambridge University Press (1991)
22. Mallot, H.A., Bülthoff, H.H., Little, J.J., Bohrer, S.: Inverse perspective mapping simplifies optical flow computation and obstacle detection. Biological Cybernetics, vol. 64, pp. 177–185 (1991)
23. Talantzis, F., Pnevmatikakis, A., Constantinides, A.G.: Audio-Visual Person Tracking: A Practical Approach. Imperial College Press / World Scientific Publishing Co., London (2011)
24. Salehi, A., Riahi, M., Michel, S., Aberer, K.: GSN, middleware for stream world. In: Proc. 10th International Conference on Mobile Data Management (2009)
25. Konstantinou, N., Spanos, D.E., Stavrou, P., Mitrou, N.: Technically Approaching the Semantic Web Bottleneck. International Journal of Web Engineering and Technology (IJWET), vol. 6, no. 1, pp. 83–111 (2010)
26. Guangyu, L., Kefa, Z., Li, S., Jinlin, W., Qianfeng, W.: Regional Eco-Environmental Information Service System Based on Open Source Projects. Energy Procedia, vol. 11, pp. 3892–3898 (2011)
27. Hummel, G., Stütz, P.: Conceptual design of a simulation test bed for ad-hoc sensor networks based on a serious gaming environment. In: Proc. Intl. Training and Education Conference (ITEC) (2011)