Programming Abstractions for Software-Defined Networks
Jennifer Rexford
http://frenetic-lang.org
Joint with Nate Foster, David Walker, Arjun Guha, Rob Harrison, Chris Monsanto, Joshua Reich, Mark Reitblatt, Cole Schlesinger

Programming SDNs
• The Good
  – Network-wide visibility
  – Direct control over the switches
  – Simple data-plane abstraction
• The Bad
  – Low-level programming interface
  – Functionality tied to hardware
  – Explicit resource control
• The Ugly
  – Non-modular, non-compositional
  – Challenging distributed programming
(Images by Billy Perkins)

Network Control Loop
• The controller computes policy, reading state from the OpenFlow switches and writing policy back to them

Language-Based Abstractions
• Module composition
• Simple query language
• Consistent updates
• Layered on top of OpenFlow switches

Policy as a Function [POPL’12, NSDI’13]

Policy in OpenFlow
• Defining “policy” is complicated
  – All rules in all switches
  – Packet-in handlers
  – Polling of counters
• Programming “policy” is error-prone
  – Duplication between rules and handlers
  – Frequent changes in policy (e.g., flowmods)
  – Policy changes affect packets in flight

From Rules to a Policy Function
• Located packet
  – A packet and its location (switch and port)
• Policy function
  – From a located packet to a set of located packets
• Examples
  – Original packet: identity
  – Drop the packet: none
  – Modified header: modify(f=v)
  – New location: fwd(a)

From Bit Patterns to Predicates
• OpenFlow
  – No direct way to specify dstip != 10.0.0.1
  – Requires two prioritized bit matches
    • Higher priority: dstip = 10.0.0.1
    • Lower priority: *
• Using boolean predicates
  – Providing &, |, and ~
  – E.g., ~match(dstip=10.0.0.1)

Virtual Header Fields
• Unified abstraction
  – Real headers: dstip, srcport, …
  – Packet location: switch and port
  – User-defined: e.g., traffic_class
• Simple operations
  – Match: match(f=v)
  – Modify: modify(f=v)
• Example
  – match(switch=A) & match(dstip='1.0.0.3')

Queries as Buckets
• Forwarding to a “bucket”
  – Q = packets(limit=1, group_by=['srcip'])
• Callback functions
  – Q.register_callback(printer)
• Multiple kinds of buckets
  – Packets: with a limit on the number
  – Packet counts: with a time interval
  – Byte counts: with a time interval

Power of Policy as a Function
• Dynamic policy
  – A stream of policy functions
• Composition
  – Parallel: Monitor + Route
  – Sequential: Firewall >> Route
• Consistent updates
  – Apply a single policy function to each packet
  – … or to a group of related packets (e.g., a flow)

Computing Policy: Parallel and Sequential Composition, Topology Abstraction [POPL’12, NSDI’13]

Combining Many Networking Tasks
• Monolithic application: Monitor + Route + FW + LB, all in one program on the controller platform
• Hard to program, test, debug, reuse, port, …

Modular Controller Applications
• A module for each task: Monitor, Route, FW, LB, each running on the controller platform
• Easier to program, test, and debug
• Greater reusability and portability

Beyond Multi-Tenancy
• Each module controls a different portion of the traffic (Slice 1, Slice 2, …, Slice n)
• Relatively easy to partition rule space, link bandwidth, and network events across modules

Modules Affect the Same Traffic
• Each module (Monitor, Route, FW, LB) partially specifies the handling of the traffic
• How to combine modules into a complete application?
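The slides that follow answer this question with two composition operators over policy functions: + (parallel) and >> (sequential). As a preview, here is a minimal sketch in Pyretic-style Python that combines three independent modules into one application; the import paths, the main() convention, and the concrete addresses and ports are illustrative assumptions rather than details taken from these slides.

    from pyretic.lib.corelib import *     # match, modify, fwd, and the + / >> operators
    from pyretic.lib.std import *
    from pyretic.lib.query import packets

    def printer(pkt):
        # Callback fired for the first packet seen from each new source address.
        print('saw traffic from', pkt['srcip'])

    def main():
        # Monitoring module: send one packet per source to a query bucket.
        q = packets(limit=1, group_by=['srcip'])
        q.register_callback(printer)
        monitor = match(srcip='5.6.7.8') >> q

        # Routing module: forward by destination address.
        route = (match(dstip='1.2.3.4') >> fwd(1)) + (match(dstip='3.4.5.6') >> fwd(2))

        # Firewall module: a predicate used as a filter; non-web traffic is dropped.
        firewall = match(dstport=80)

        # One application from independent modules: filter first, then
        # monitor and route the surviving traffic in parallel.
        return firewall >> (monitor + route)

Because each module is just a policy function, it can be written and tested in isolation and then reused in other compositions.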
Parallel Composition
• Monitor on source + Route on destination
• Monitor module: srcip = 5.6.7.8 → count
• Route module: dstip = 1.2.3.4 → fwd(1); dstip = 3.4.5.6 → fwd(2)
• Rules compiled by the controller platform:
  – srcip = 5.6.7.8, dstip = 1.2.3.4 → fwd(1), count
  – srcip = 5.6.7.8, dstip = 3.4.5.6 → fwd(2), count
  – srcip = 5.6.7.8 → count
  – dstip = 1.2.3.4 → fwd(1)
  – dstip = 3.4.5.6 → fwd(2)

Sequential Composition
• Load Balancer >> Routing
• Load balancer module: srcip = 0*, dstip = 1.2.3.4 → dstip = 10.0.0.1; srcip = 1*, dstip = 1.2.3.4 → dstip = 10.0.0.2
• Routing module: dstip = 10.0.0.1 → fwd(1); dstip = 10.0.0.2 → fwd(2)
• Rules compiled by the controller platform:
  – srcip = 0*, dstip = 1.2.3.4 → dstip = 10.0.0.1, fwd(1)
  – srcip = 1*, dstip = 1.2.3.4 → dstip = 10.0.0.2, fwd(2)

Dividing the Traffic Over Modules
• Predicates
  – Specify which traffic traverses which modules
  – Based on input port and packet-header fields
• Example
  – Web traffic (dstport = 80): Load Balancer >> Routing
  – Non-web traffic (dstport != 80): Monitor + Routing

Abstract Topology: Load Balancer
• Present an abstract topology (an abstract view of the real network)
  – Information hiding: limit what a module sees
  – Protection: limit what a module does
  – Abstraction: present a familiar interface

Abstract Topology: Gateway
• Left: learning switch on MAC addresses
• Middle: ARP on the gateway, plus a simple repeater
• Right: shortest-path forwarding on IP prefixes

High-Level Architecture
• Modules M1, M2, M3 combined by a main program, running on the controller platform

Writing State: Consistent Updates [SIGCOMM’12]

Avoiding Transient Disruption
• Invariants to preserve during an update:
  – No forwarding loops
  – No black holes
  – Access control
  – Traffic waypointing

Installing a Path for a New Flow
• Rules along a path installed out of order?
  – Packets reach a switch before the rules do
• Must think about all possible packet and event orderings.
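A small self-contained sketch (plain Python, not the Frenetic runtime or an OpenFlow API) of why the ordering matters: if the rules for a new path are pushed starting at the ingress switch, a packet can arrive at a downstream switch before its rule does and hit a transient black hole, whereas installing from the egress backwards avoids that particular failure. The switch names, destination address, and table layout are made up for illustration.

    # A 'network' is a map from switch name to its forwarding rules; each rule
    # maps a destination address to the next hop (or 'deliver' at the egress).
    DST = '10.0.0.9'
    FULL_TABLES = {'A': {DST: 'B'}, 'B': {DST: 'C'}, 'C': {DST: 'deliver'}}

    def forward(tables, dst, ingress):
        """Walk a packet through the installed tables and report its fate."""
        here = ingress
        while True:
            action = tables.get(here, {}).get(dst)
            if action is None:
                return 'no rule at %s: packet goes to the controller or is dropped' % here
            if action == 'deliver':
                return 'delivered by %s' % here
            here = action

    def install_and_probe(order):
        """Install the path one switch at a time and inject a packet after each step."""
        tables = {}
        for sw in order:
            tables[sw] = FULL_TABLES[sw]
            print(sw, 'installed ->', forward(tables, DST, 'A'))

    print('ingress-first:')
    install_and_probe(['A', 'B', 'C'])   # packets reach B and C before their rules do
    print('egress-first:')
    install_and_probe(['C', 'B', 'A'])   # a packet is never forwarded to a rule-less switch

The consistent-update machinery on the following slides removes the need for this kind of case-by-case reasoning: every packet is guaranteed to see either the old configuration or the new one, never a mixture.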
Update Consistency Semantics
• Per-packet consistency
  – Every packet is processed by policy P1 or policy P2, never a mixture
  – E.g., access control, no loops or black holes
• Per-flow consistency
  – Sets of related packets are processed by policy P1 or policy P2
  – E.g., server load balancer, in-order delivery, …

Two-Phase Update Algorithm
• Version numbers
  – Stamp each packet with a version number (e.g., in a VLAN tag)
• Unobservable updates
  – Add rules for P2 in the interior, matching on version #P2
• One-touch updates
  – Add rules at the edge to stamp packets with version #P2
• Remove old rules
  – Wait for some time, then remove all version #P1 rules

Update Optimizations
• Avoid two-phase update when possible
  – The naïve version touches every switch
  – And doubles the rule-space requirements
• Limit the scope
  – To a portion of the traffic
  – To a portion of the topology
• Exploit simple policy changes
  – Strictly adds paths
  – Strictly removes paths
• Incremental updates

Frenetic Abstractions
• Policy composition, consistent updates, and SQL-like queries, on top of OpenFlow switches

Two Code Bases
• Python: Pyretic
  – Built on top of POX
  – Simple, reactive compiler
  – Proactive compilation being completed now
• OCaml: Frenetic-OCaml
  – More sophisticated, proactive compiler
  – Large parts of the code proved correct
  – No support (yet) for topology abstraction
  – Experimental support for OpenFlow 1.3

Related Work
• Programming languages
  – FRP: Yampa, FrTime, Flask, Nettle
  – Streaming: StreamIt, CQL, Esterel, Brooklet, GigaScope
  – Network protocols: NDLog
• OpenFlow
  – Languages: FML, SNAC, Resonance
  – Controllers: ONIX, POX, Floodlight, Nettle, FlowVisor
  – Testing: MiniNet, NICE, FlowChecker, OF-Rewind, OFLOPS
• OpenFlow standardization
  – http://www.openflow.org/
  – https://www.opennetworking.org/

Frenetic Project
• Programming languages meets networking
  – Cornell: Nate Foster, Gun Sirer, Arjun Guha, Robert Soule, Shrutarshi Basu, Mark Reitblatt, Alec Story
  – Princeton: Dave Walker, Jen Rexford, Josh Reich, Rob Harrison, Chris Monsanto, Cole Schlesinger, Naga Praveen Katta
• http://frenetic-lang.org
• Overview: http://frenetic-lang.org/publications/overview-ieeecoms13.pdf and http://www.cs.princeton.edu/~jrex/papers/pyretic13.pdf
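To make the two-phase update described above concrete, here is a minimal self-contained model in plain Python (not the Frenetic runtime or an OpenFlow API): dictionaries stand in for switch rule tables, and a 'version' field on the packet stands in for the VLAN tag. All names and addresses are illustrative.

    # Switch rule tables, keyed by version number; the values are policy
    # functions on packets (the P1 and P2 of the slides).
    edge = {'E1': {}, 'E2': {}}
    interior = {'I1': {}, 'I2': {}}
    stamp = {'version': 1}            # version written onto packets at the edge

    def handle(packet, path):
        """Stamp a packet at the ingress and process it along a path of switches."""
        packet = dict(packet, version=stamp['version'])
        for sw in path:
            tables = edge[sw] if sw in edge else interior[sw]
            packet = tables[packet['version']](packet)   # one policy, end to end
        return packet

    def two_phase_update(new_version, new_policy):
        # Phase 1 (unobservable): install P2 in the interior under the new version
        # number; no packet carries that number yet, so behavior is unchanged.
        for tables in interior.values():
            tables[new_version] = new_policy
        # Phase 2 (one-touch): install P2 at the edge and start stamping the new
        # version; every packet entering from now on sees only P2.
        for tables in edge.values():
            tables[new_version] = new_policy
        old, stamp['version'] = stamp['version'], new_version
        # Cleanup: remove the old rules once in-flight packets stamped with the
        # old version have drained (a real controller waits before doing this).
        for tables in list(edge.values()) + list(interior.values()):
            tables.pop(old, None)

    # Example: start with P1 = leave packets alone, then move to P2 = rewrite dstip.
    for tables in list(edge.values()) + list(interior.values()):
        tables[1] = lambda p: p
    two_phase_update(2, lambda p: dict(p, dstip='10.0.0.2'))
    print(handle({'dstip': '1.2.3.4'}, ['E1', 'I1', 'E2']))   # handled entirely by P2

The per-flow variant applies the same idea at the granularity of sets of related packets rather than individual packets.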