2005 International Command and Control Research and Technology Symposium
The Future of Command and Control

PEER-TO-PEER DISCOVERY: A KEY TO ENABLING ROBUST, INTEROPERABLE C2 ARCHITECTURES
June 2005

Ray Prouty, NCW Chief Engineer
Kurt Kalbus, Discovery Project Lead
Dave Heddle, Chief Engineer
Laura Lee, Director C2 Systems
SPARTA, Inc.
13400 Sabre Springs Pkwy, Ste 220
San Diego, CA 92128
(858) 668-3570
[email protected]

Troy Crites, President, Mission Systems Sector
SPARTA, Inc.
1911 North Fort Myer Drive, Suite 1100
Arlington, VA 22209
(703) 558-0036
[email protected]

Outline
• NESTOR Project
• Discovery Background
• Non-Homogeneous Peer-to-Peer Discovery
• Summary

NESTOR: SPARTA IRAD Project
• What is NESTOR…
  – Net-Centric Environment for System Testing and Operational Research
  – SPARTA’s Distributed Testbed for the Design, Implementation and Quantitative Evaluation of
    » NCW Concepts
    » NCW Infrastructure
• SPARTA’s scale model of the GIG
[Figure: GIG Functional Capabilities — Communities of Interest (COIs) such as Force Application, Intel/Battlespace Awareness, C2, Focused Logistics, Protection, and the Business Domain, plus expedient (ad hoc) COIs with their applications and services, ride on the Core Enterprise Services (CES): Messaging, Mediation, Security/IA, Enterprise Service Management (ESM), Discovery, Collaboration, Storage, User Assistance, and the NCES computing environment, over the communications backbone. The CES support all of DoD; M&S and IO are embedded in each accordingly.]
• What we want to accomplish…
  – Develop a Core Group of NCW Expertise Within SPARTA
  – Focused on SPARTA’s Niche Areas (System Architecture, System Engineering/Integration, M&S, Information Assurance)
    » Demonstrate Capabilities to NCW Customers and Potential Teammates

NESTOR Tasks
[Figure: NESTOR task rings — COI applications (C4ISR architecture, testing, and training for Army FCS via the OneSAF Testbed; JEDA mission planning; the NCW End-to-End (NETE) model; NEMO effects-based operations; CERA sensors; metrics) arranged around the NESTOR backbone of Core Enterprise Services: web services, JMS messaging, discovery, agents, collaboration, Security/IA, GIS feature data (ERTHA), and metrics infrastructure.]
This briefing describes the Discovery task.

NESTOR Physical Architecture
[Figure: NESTOR physical architecture — COI applications (the JEDA mission planner; effects-based models and NEMO effects-based operations; sensor models and sensor management; computer-generated forces from the OneSAF Testbed; the NCW End-to-End model; GIS data; Army core services; and infrastructure models for water, petroleum/oil/lubricants, electric power, and lines of communication) connected through Core Enterprise Services in the Interoperability Lab and over NESTOR NET: universal messaging, web services, agents, peer-to-peer discovery, metrics, and IA/security.]
IA/Security questions:
• Who are the Users?
• What is the Information?
• Can We “Protect” that Information?
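The NESTOR backbone carries application traffic over JMS messaging. As a minimal sketch (not NESTOR code), the example below shows how a COI application might publish a metrics message to a JMS 1.1 topic; the JNDI names, topic name, and payload are assumptions for illustration only.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;
import javax.naming.InitialContext;

// Minimal JMS 1.1 publish sketch. The JNDI names ("ConnectionFactory", "nestor/metrics")
// and the payload are illustrative assumptions, not details from the briefing.
public class MetricsPublisher {
    public static void main(String[] args) throws Exception {
        InitialContext jndi = new InitialContext();  // provider settings come from jndi.properties
        ConnectionFactory factory = (ConnectionFactory) jndi.lookup("ConnectionFactory");
        Topic topic = (Topic) jndi.lookup("nestor/metrics");

        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(topic);

        TextMessage message = session.createTextMessage("latency-ms=42");  // example payload
        producer.send(message);

        connection.close();  // closes the session and producer as well
    }
}
```

Any JMS 1.1 provider would serve here; the point is only that COI applications publish to the backbone through a standard API rather than point-to-point links.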
NESTOR Firewall Architecture
[Figure: NESTOR firewall architecture — each participating site (hosting NEMO models, JEDA, metrics, sensor management, CERA, discovery, JMS, web servers, Oracle (ERTHA), and ArcIMS) sits behind its own NESTOR firewall and connects through a router, hub, and the corporate SWAN firewall to the Internet. Services remain inside the SPARTA firewall until CY05.]

Outline
• NESTOR Project
• Discovery Background
• Non-Homogeneous Peer-to-Peer Discovery
• Summary

Discovery Background
• The GIG is a Service Oriented Architecture (SOA) with Discovery as a key enabler
  – Central to converting legacy applications from stovepipes to services
  – Enables runtime integration and self-assembling applications (e.g., find a Track Service)
  – Critical for ad hoc COIs
• Common approach to Discovery
  – Static design for four types of information (i.e., People, Services, Structured and Unstructured Data)
  – Homogeneous (monolithic) design
  – Centralized servers
[Figure: four types of Discovery — People (LDAP), Services (UDDI), Structured (metadata), and Unstructured.]
LDAP: Lightweight Directory Access Protocol
UDDI: Universal Description, Discovery and Integration

Using UDDI Registries in the GIG
• Subscription/Notification is critical for the efficiency and reliability of applications
  – Services on the GIG will be very dynamic
  – Without Notification, applications will be unaware if a Web Service they are using has been removed or changed
• Subscription/Notification as defined is meant for business-to-business transactions
  – Notification via email or Web service
  – Delivery is not guaranteed
• The V3 standard has been approved (Feb 05)
  – More like DNS (Domain Name System)
    » Real names (SPARTA.COM vs. AF239-234F-AED1-449C)
    » Distributed queries
  – All of the major UDDI vendors have announced support
Reference: www.uddi.org

Comments on Current GIG Approach
• Benefits of the approach:
  – Well known
  – Centralized security
    » Requires centralized user management (a problem for scalability)
  – Interoperability is not an issue
  – Brute-force scalability
• Disadvantages of the approach:
  – UDDI is designed for semi-static B2B applications
    » The dynamic nature of GIG services is not handled well
  – Complicated mess of APIs, different versions, and associated software
    » Requires four separate “browsers”
    » Multi-vendor queries are problematic
  – Rigid architecture
    » Hard to support ad hoc COIs, cross-Service or agency areas, or coalition participation
  – No standard for how to do a “federated query”
  – Centralized servers are susceptible to outages
  – Health/status has been moved to the “Management” service
    » Makes finding a service awkward

Discovery Approach
• Examine the gap between vision and reality
  – Vision attributes: broad and dynamic customers
  – Reality: capabilities planned today
• Work with DISA to understand “proposed” technologies
  – Gained useful insights into DISA’s thinking
  – Relevant to October and Next Fest activities
  – Used DISA’s approach to discovery as a starting point
• Focus on Universal Description, Discovery and Integration (UDDI) servers
  – DISA standard for discovery of services
  – Least mature today
  – Can leverage COTS, GOTS, and freeware

SPARTA Discovery Research: UDDI Server/API Summary
[Table: each UDDI API (JAXR, UDDI4J, Systinet API, WebLogic API, .NET API) was tested against each UDDI server registry (WebLogic, Systinet, Sun One JWSDP, Microsoft, IBM), with each combination recorded as worked, partial, or failed.]
* The current Systinet V5 server can partially operate with JAXR. It will not access all V5 features, such as Subscription/Notification; to get those features, you must use the vendor-provided API.
Successful discovery requires applications either to handle all UDDI server vendors, or to mandate that all applications use a single vendor, or to provide an approach that allows applications to discover services across multiple UDDI server registry vendors (e.g., peer-to-peer).
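To make the vendor-interoperability problem concrete, the sketch below shows a vendor-neutral JAXR inquiry that looks up services by name in a UDDI registry; in principle the same code can be pointed at any of the registries above by changing the inquiry URL. This is a minimal illustration, not SPARTA's discovery code: the registry URL and the "%Sensor%" name pattern are assumptions, and a JAXR provider implementation must be on the classpath.

```java
import java.util.Collection;
import java.util.Collections;
import java.util.Properties;
import javax.xml.registry.BulkResponse;
import javax.xml.registry.BusinessQueryManager;
import javax.xml.registry.Connection;
import javax.xml.registry.ConnectionFactory;
import javax.xml.registry.JAXRException;
import javax.xml.registry.infomodel.Service;

// Minimal JAXR inquiry sketch; the inquiry URL and name pattern are hypothetical.
public class UddiInquiry {
    public static void main(String[] args) throws JAXRException {
        Properties props = new Properties();
        props.setProperty("javax.xml.registry.queryManagerURL",
                          "http://registry.example.invalid/uddi/inquiry");  // assumed endpoint

        ConnectionFactory factory = ConnectionFactory.newInstance();
        factory.setProperties(props);
        Connection connection = factory.createConnection();

        BusinessQueryManager queryManager =
                connection.getRegistryService().getBusinessQueryManager();

        // Find services whose name matches the pattern, regardless of owning organization.
        Collection<String> namePatterns = Collections.singletonList("%Sensor%");
        BulkResponse response =
                queryManager.findServices(null, null, namePatterns, null, null);

        for (Object result : response.getCollection()) {
            Service service = (Service) result;
            System.out.println("Found service: " + service.getName().getValue());
        }
        connection.close();
    }
}
```

As the testing summary notes, this vendor-neutral path does not reach every registry or every feature (e.g., Systinet V5 Subscription/Notification), which is what motivates the peer-to-peer alternative described next.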
Peer-to-Peer Topologies
• Brokered peer-to-peer is a P2P network in which an index of content is maintained at a centralized server, but the actual content is kept at the peers (e.g., Napster).
• Decentralized P2P is a network of peers that are all “equal”. There is no centralized service at all, except possibly a bootstrap mechanism for initial peer discovery (e.g., Gnutella).
• Semi-centralized P2P consists of “super-nodes” organized in a decentralized P2P network, but each super-node has weaker nodes reporting to it, and the content of each weaker node is periodically uploaded to its super-node (e.g., LimeWire).

Brokered P2P
Napster is an example of a brokered P2P network. The disadvantage of such a system is that if the main server goes down, the entire network fails.

Decentralized P2P
The Gnutella model is fully decentralized. The disadvantage is that all peers are treated equally and participate in every query, consuming bandwidth across all links, even low-capacity ones.

Semi-centralized P2P
[Figure: semi-centralized P2P — super-nodes form a decentralized network, with leaf nodes attached to each super-node.]
The Gnutella protocol uses a semi-centralized structure (e.g., LimeWire) in which the “super-nodes” form a decentralized P2P network. In actual practice, fully decentralized P2P networks will usually self-organize into a semi-centralized P2P network. This is the model that will be used for the NESTOR P2P Discovery network.
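To make the semi-centralized model concrete, the sketch below shows a super-node answering a query from its local index (aggregated from its leaf nodes) and flooding the query to neighboring super-nodes until a hop limit expires. The class, the service name, and the endpoint are hypothetical; this illustrates the topology only and is not NESTOR code.

```java
import java.util.*;

// Minimal sketch of semi-centralized (super-node) query forwarding.
// All names here are illustrative, not taken from the briefing.
public class SuperNodePeer {
    private final String name;
    // Local index: service name -> endpoint, aggregated from this node's leaf peers.
    private final Map<String, String> localIndex = new HashMap<>();
    private final List<SuperNodePeer> neighbors = new ArrayList<>();

    public SuperNodePeer(String name) { this.name = name; }

    public void registerService(String serviceName, String endpoint) {
        localIndex.put(serviceName, endpoint);
    }

    public void addNeighbor(SuperNodePeer peer) { neighbors.add(peer); }

    // Answer from the local index, then flood to neighboring super-nodes until the TTL expires.
    public List<String> discover(String serviceName, int ttl, Set<SuperNodePeer> visited) {
        List<String> hits = new ArrayList<>();
        if (!visited.add(this)) {
            return hits;                      // this peer already handled the query
        }
        String endpoint = localIndex.get(serviceName);
        if (endpoint != null) {
            hits.add(name + " -> " + endpoint);
        }
        if (ttl > 0) {
            for (SuperNodePeer neighbor : neighbors) {
                hits.addAll(neighbor.discover(serviceName, ttl - 1, visited));
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        SuperNodePeer a = new SuperNodePeer("PeerA");
        SuperNodePeer b = new SuperNodePeer("PeerB");
        SuperNodePeer c = new SuperNodePeer("PeerC");
        a.addNeighbor(b);
        b.addNeighbor(c);
        c.registerService("SensorDataService", "http://example.invalid/sensor");
        System.out.println(a.discover("SensorDataService", 3, new HashSet<>()));
    }
}
```

Only super-nodes take part in the flood, which is the compromise that keeps query traffic off the low-capacity leaf links.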
Pros and Cons of P2P Models
• Brokered (centralized), e.g., Napster
  – Pros: easiest to implement; inherent “mandated” interoperability
  – Cons: single point of failure; rigid (hard to adapt to ad hoc COIs)
• Decentralized, e.g., Gnutella
  – Pros: robust/reliable network; vendor neutral for discovery servers
  – Cons: all peers are treated equally, possibly leading to bandwidth problems
• Semi-centralized, e.g., LimeWire
  – Pros: compromise solution (uses super-nodes to solve the bandwidth problems)
  – Cons: increases firewall complexity

Outline
• NESTOR Project
• Discovery Background
• Non-Homogeneous Peer-to-Peer Discovery
• Summary

NESTOR Approach to Discovery
• We mimic the four DISA “Discovery” services
  – Structured, Unstructured, People, Services
• Enhanced with
  – A unified “browser”
  – Peer-to-peer (P2P) technology
    » Allows distributed/federated queries
    » Handles the dynamic nature of services and data
• We are concentrating on the “Services” area now
  – UDDI (least understood, most problematic)
  – The three other Discovery areas are COTS
[Figure: distributed “Discovery” managed by a peer-to-peer network of discovery peers — a UDDI discovery peer for NESTOR Services (the NESTOR focus), an LDAP discovery peer for NESTOR People, a metadata discovery peer for NESTOR Structured, an unstructured discovery peer for NESTOR Unstructured, plus status and “seed” discovery peers tied to other SPARTA efforts in status monitoring, network discovery, and network services. Persistent resources are registered with UDDI; resource metadata includes description and connection information.]

Peer-to-Peer Discovery
[Figure: peer-to-peer discovery — users and applications (e.g., a mission planner) attach as leaf nodes to discovery peers; the super-node peers, each backed by its own UDDI database, forward the discovery request across the network.]
Problem statement: a user (or application) seeks to find a “Sensor Data Service”.

Discovery Example Proof of Concept
[Figure: proof-of-concept configuration — four discovery peers connected over the Internet, each behind a firewall. Chakotay (157.185.24.253, San Diego) has a WebLogic UDDI registry holding Missile, CERA, and Metrics web services; Kirk (157.185.52.20, Hampton) has a Systinet UDDI registry holding a Missile web service; Potato (157.185.24.29, San Diego) hosts a Sun UDDI registry holding a Missile web service; and the discovery peer on Watergate (157.185.86.236, Rosslyn, VA) uses the Sun UDDI registry on Potato. Each peer reaches its registry through the WebLogic UDDI or JAXR UDDI APIs.]

Discovery “Demo”
[Screenshot: output of the query application; results are referenced by line number below.]
1) The query application is running on Potato. Initially Potato only knows about Chakotay, so it sends the “missile” discovery request to Chakotay, which queries its WebLogic UDDI registry and sends a hit back to Potato (line 1 above).
2) Chakotay, however, knows about the Kirk and Watergate peers, so it forwards the request to those machines. They each query their UDDI registries, get hits, and send them directly back to Potato (lines 2 and 3 above).
3) As an additional example of flexibility, the discovery peer on Watergate does not actually query a UDDI registry on Watergate; it queries a UDDI registry running on a different machine (Potato).
4) The query ends up accessing three different vendors of UDDI registry: WebLogic, Systinet, and Sun.
5) The four machines in this scenario reside behind three different firewalls.
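A pattern worth noting in the demo is that the discovery request carries the originator's return address, so every peer that finds a hit replies directly to the requester (Potato) instead of routing results back along the forwarding path. The sketch below mocks that pattern with in-memory objects; the class names, the lookup stand-in, and the endpoint strings are assumptions, and the real peers query UDDI registries over the network rather than a local function.

```java
import java.util.*;
import java.util.function.Function;

// Sketch of "reply directly to the originator" query forwarding, as in the demo.
// DiscoveryPeer, ResultSink, and the endpoints are illustrative, not NESTOR code.
public class DirectReplyDiscovery {

    // Where hits are delivered; in the real system this would be a network endpoint.
    interface ResultSink { void deliver(String hit); }

    static class DiscoveryRequest {
        final String serviceName;
        final ResultSink originator;                 // return address travels with the request
        final Set<String> visitedPeers = new HashSet<>();
        DiscoveryRequest(String serviceName, ResultSink originator) {
            this.serviceName = serviceName;
            this.originator = originator;
        }
    }

    static class DiscoveryPeer {
        final String name;
        final Function<String, Optional<String>> registryLookup;  // stands in for a UDDI inquiry
        final List<DiscoveryPeer> knownPeers = new ArrayList<>();
        DiscoveryPeer(String name, Function<String, Optional<String>> registryLookup) {
            this.name = name;
            this.registryLookup = registryLookup;
        }
        void handle(DiscoveryRequest request) {
            if (!request.visitedPeers.add(name)) return;          // drop duplicate forwards
            registryLookup.apply(request.serviceName)
                          .ifPresent(ep -> request.originator.deliver(name + ": " + ep));
            for (DiscoveryPeer peer : knownPeers) peer.handle(request);  // forward onward
        }
    }

    public static void main(String[] args) {
        DiscoveryPeer chakotay = new DiscoveryPeer("Chakotay",
                s -> s.contains("missile") ? Optional.of("http://example.invalid/weblogic-hit")
                                           : Optional.empty());
        DiscoveryPeer kirk = new DiscoveryPeer("Kirk",
                s -> s.contains("missile") ? Optional.of("http://example.invalid/systinet-hit")
                                           : Optional.empty());
        chakotay.knownPeers.add(kirk);
        // "Potato" only knows about Chakotay; hits from every peer come straight back here.
        DiscoveryRequest request =
                new DiscoveryRequest("missile", hit -> System.out.println("Potato received " + hit));
        chakotay.handle(request);
    }
}
```

Because replies bypass the forwarding chain, the originator starts seeing hits as soon as any peer finds one, regardless of how deep the request has propagated.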
Summary
• We understand Discovery approaches and have researched key ideas
  – GIG operational needs
  – Industry (vendor) capabilities and standards
  – Underlying advantages and disadvantages
    » Flexibility
    » Security
    » Interoperability
• We developed a flexible, robust peer-to-peer concept that can assist GIG Discovery
  – A bridge between COIs (e.g., as embodied in the DISA/NCES and DGCS efforts)
  – A foundation for cross-Service and allied capability