draft-busibel-teas-yang-path-computation-00
IETF 97 – Seoul

Italo Busi (Huawei), Sergio Belotti (Nokia), Daniele Ceccarelli (Ericsson), Victor Lopez, Oscar Gonzales de Dios (Telefonica), Michael Scharf (Nokia), Anurag Sharma (Infinera), Yan Shi (China Unicom), Ricard Vilalta (CTTC), Karthik Sethuraman (NEC)

IETF 97 @ Seoul, November 13-18, 2016

Status
• Initial draft presented at the CCAMP WG in Berlin (IETF 96)
  – draft-busibel-ccamp-path-computation-api-00
• Updated to address comments received during IETF 96
  – Moved to the TEAS WG
  – Draft outline restructured
  – Applicability to the ABNO and ACTN architectures clarified
  – Motivations for a YANG model added

Scope
• Use cases for supporting path computation requests via YANG-based protocols (e.g., NETCONF or RESTCONF)
• Target interface is the NBI of an SDN controller
  – ABNO control interface
  – ACTN CMI and MPI
• Outline
  – Section 2: Use cases
  – Section 3: Interaction with TE Topology
  – Section 4: Motivation for a YANG model
  – Section 5: Path optimization request (ffs)
  – Section 6: YANG model for path computation request

Use Cases
• Three use cases are addressed in this version:
  – IP-Optical integration
    • An optical domain provides connectivity between IP routers
  – Multi-domain TE network
    • TE (e.g., optical) domains interconnected by multiple inter-domain links
  – Data center interconnection
    • A TE (e.g., optical) domain provides connectivity among data centers

IP-Optical Integration
• [Figure: the orchestrator requests a path over a NETCONF/RESTCONF interface; the optical NW controller exposes the optical network to the IP NW controller as a TE topology abstract node]

Multi-domain TE Networks
• [Figure: the orchestrator coordinates TE NW controllers 1 and 2 over NETCONF/RESTCONF interfaces; nodes A-H span the two domains, interconnected by multiple inter-domain links]

Data Center Interconnections
• [Figure: the cloud orchestrator uses NETCONF/RESTCONF interfaces towards a TE NW controller (nodes PE1, PE2, PE3, P) and towards the DC controllers of DC1, DC2 and DC3]

Interactions with TE Topology
• TE Topology extends the TE node «connectivity matrix» of RFC 7446 with specific TE attributes (e.g., delay, SRLGs)
  – From the «virtual node» model to the «virtual link» model
• Trade-offs still to be considered when abstracting topology information
  – Accuracy versus scalability and up-to-date information
• A path computation request allows asking only for the information that is needed, when it is needed
• Abstract topology information can still be used to reduce the number of path computation requests (improving scalability with a large number of domains)
• Path computation requests and the TE topology model are complementary tools

Motivation for a YANG Model (1)
• Common data model
  – The path computation request should be closely aligned with the YANG data models providing abstract TE topology information and TE tunnel configuration and management
    • Same end-point IDs
    • Path computation constraints based on the same data model

Motivation for a YANG Model (2)
• Single interface
  – Simple authentication and authorization
    • E.g., avoids a different security mechanism per interface (different credentials, keys)
  – Consistency
    • Keeping data consistent over different interfaces is not trivial
  – Testing
  – Middle-box friendliness
    • Complex environments also include middle-boxes such as firewalls, load balancers, etc.
    • A single protocol is easier to deploy
  – Tooling reuse
    • Leverages the rapidly growing ecosystem of YANG tooling (tools, libraries)

Motivation for a YANG Model (3)
• Extensibility
  – The YANG language can cover other important functionality, alongside path computation, in a seamless way:
    • Service configuration
    • Notifications for topology changes and alarm integration
    • Performance management (data telemetry and monitoring)
    • OAM
    • QoS configuration

YANG Model
• The te-tunnel model already provides a stateful solution based on «compute-only» te-tunnels
• Discussions have been held on the mailing list and in the te-tunnel weekly calls to weigh the pros and cons of a stateless RPC vs. compute-only
  – The need for a stateless solution has been recognized
• A YANG model proposal for a stateless RPC is available at: https://github.com/rvilalta/ietf-te-path-computation
  – Planned to be added to the 01 version
• The stateless RPC and compute-only te-tunnels are «complementary» solutions

Stateless RPC
• Pros
  – A simple atomic operation is a natural choice, especially with a stateless PCE
  – No need for persistent storage of state
  – No need for garbage collection (no state to be deleted)
• Cons
  – The RPC response must be provided synchronously
    • If collaborative computations are time consuming, it may not be possible to reply immediately to the client
    • Possible solutions to this problem are still under investigation/discussion
  – Stateless operation gives no guarantee that the returned path is still available when path setup is requested

Compute-only te-tunnel
• Pros
  – Supports asynchronous operation
  – Simple to model in the context of te-tunnel
  – Allows notifying the client of changes to the computed path
    • Mitigates, but does not solve, the issue that the computed path may not be available at setup time
• Cons
  – Several messages are required for any path computation
  – Requires persistent storage in the provider controller
  – Needs garbage collection for stranded paths
  – Processing burden to detect changes to the computed paths in order to provide notification updates
    • Notifications may not be reliable nor delivered on time

Next Steps
• Seeking comments and feedback from interested WGs to improve the document
  – Avoid duplicating information in existing RFCs or other Internet-Drafts
  – Ongoing discussions about compute-and-delete te-tunnels and te-tunnel actions
    • Should the YANG solution for the stateless RPC be integrated into the te-tunnel model?
• Complete with path optimization

IP+Optical: Path Computation Example
• [Figure: routers R1 and R2 interconnected via virtual paths VP1, VP2, VP4, VP5 (costs 10, 50, 10, 55) and access links of cost 5]
• The orchestrator gets an «abstracted view» of the physical resources (no optical path cost/feasibility)
• The orchestrator can ask the domain controller for a «set of potentially optimal paths» based on optical constraints
• The orchestrator selects one based on its own constraints, policy and specific topology parameters (e.g., access link cost)

Data Center Interconnections
• A virtual machine in DC1 needs to transfer data to another virtual machine (in DC2 or DC3)
• The optimal decision is based on optical cost (DC1-DC2 or DC1-DC3) and computing power
• The cloud orchestrator issues path computation requests to the optical domain to compute the cost of the feasible optical paths, and to the DC controllers to compute the cost of computing power, and then takes the decision

Multi-domain Optical Networks (many domains)
• Complementary use of TE topology and path computation
  – Abstract topology information provided by the domain controllers limits the number of potentially optimal end-to-end paths
  – The path computation API finds the optimal path within the limited set
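To make the stateless-RPC idea discussed above concrete, a minimal YANG sketch is shown below. This is an illustrative assumption only: the module, RPC and leaf names are invented for this sketch and are not the actual model proposed at the linked repository or in the te-tunnel work.

```yang
// Illustrative sketch only: module, RPC and leaf names are assumptions,
// not the model at github.com/rvilalta/ietf-te-path-computation.
module example-te-path-computation {
  namespace "urn:example:te-path-computation";
  prefix "expc";

  rpc compute-path {
    description
      "Stateless path computation: the server returns a candidate
       path synchronously and keeps no state about the request.";
    input {
      leaf source {
        type string;
        description
          "End-point ID of the path source, aligned with the
           identifiers used by the TE topology model.";
      }
      leaf destination {
        type string;
        description "End-point ID of the path destination.";
      }
      leaf max-cost {
        type uint32;
        description "Optional bound on the total path cost.";
      }
    }
    output {
      list path-element {
        description "Ordered list of TE links forming the computed path.";
        leaf te-link-id {
          type string;
        }
      }
      leaf cost {
        type uint32;
        description
          "Total cost of the returned path. The path is not reserved,
           so it may no longer be available when setup is requested.";
      }
    }
  }
}
```

Because the RPC holds no server-side state, the pros and cons listed earlier follow directly: no persistent storage or garbage collection is needed, but the response must be produced synchronously and nothing guarantees the returned path is still available at setup time.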