Modular Layer 2 In OpenStack Neutron

Robert Kukura, Red Hat
Kyle Mestery, Cisco

1. I've heard the Open vSwitch and Linuxbridge Neutron plugins are being deprecated.
2. I've heard ML2 does some cool stuff!
3. I don't know what ML2 is, but want to learn about it and what it provides.

What is Modular Layer 2?
• A new Neutron core plugin in Havana
• Modular:
  o Drivers for layer 2 network types and mechanisms interface with agents, hardware, controllers, ...
  o Service plugins and their drivers for layer 3+
• Works with existing L2 agents:
  o openvswitch
  o linuxbridge
  o hyperv
• Deprecates the existing monolithic plugins:
  o openvswitch
  o linuxbridge

Motivations For a Modular Layer 2 Plugin

Before Modular Layer 2 ...
[Diagram: the Neutron Server loads exactly one monolithic core plugin: the Open vSwitch Plugin OR the Linuxbridge Plugin OR ...]

Before Modular Layer 2 ...
[Diagram: a Vendor X Plugin beneath the Neutron Server, with the lament: "I want to write a Neutron Plugin. But I have to duplicate a lot of DB, segmentation, etc. work. What a pain. :("]

ML2 Use Cases
• Replace existing monolithic plugins
  o Eliminate redundant code
  o Reduce development & maintenance effort
• New features
  o Top-of-Rack switch control
  o Avoid tunnel flooding via L2 population
  o Many more to come...
• Heterogeneous deployments
  o Specialized hypervisor nodes with distinct network mechanisms
  o Integrate *aaS appliances
  o Roll new technologies into existing deployments

Modular Layer 2 Architecture

The Modular Layer 2 (ML2) plugin is a framework allowing OpenStack Neutron to simultaneously utilize the variety of layer 2 networking technologies found in complex real-world data centers.

What's Similar?
ML2 is functionally a superset of the monolithic openvswitch, linuxbridge, and hyperv plugins:
• Based on NeutronDbPluginV2
• Models networks in terms of provider attributes
• RPC interface to L2 agents
• Extension APIs

What's Different?
ML2 introduces several innovations to achieve its goals:
• Cleanly separates management of network types from the mechanisms for accessing those networks:
  o Makes types and mechanisms pluggable via drivers
  o Allows multiple mechanism drivers to access the same network simultaneously
  o Optional features packaged as mechanism drivers
• Supports multi-segment networks
• Flexible port binding
• L3 router extension integrated as a service plugin

ML2 Architecture Diagram
[Diagram: the Neutron Server exposes API Extensions on top of the ML2 Plugin; inside the plugin, a Type Manager loads the VLAN, GRE, and VXLAN TypeDrivers, while a Mechanism Manager loads the Open vSwitch, Linuxbridge, Hyper-V, L2 Population, Cisco Nexus, Arista, and Tail-f NCS MechanismDrivers]

Multi-Segment Networks
[Diagram: a single network composed of three bridged segments: VXLAN 123567, VLAN 37 on physnet1, and VLAN 413 on physnet2, with VM 1, VM 2, and VM 3 attached]
● Created via the multi-provider API extension
● Segments bridged administratively (for now)
● Ports associated with the network, not a specific segment
● Ports bound automatically to a segment with connectivity

Type Driver API

class TypeDriver(object):

    @abstractmethod
    def get_type(self): pass

    @abstractmethod
    def initialize(self): pass

    @abstractmethod
    def validate_provider_segment(self, segment): pass

    @abstractmethod
    def reserve_provider_segment(self, session, segment): pass

    @abstractmethod
    def allocate_tenant_segment(self, session): pass

    @abstractmethod
    def release_segment(self, session, segment): pass
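To make the TypeDriver contract concrete, here is a minimal sketch of a hypothetical type driver built against the API above. The class name, the 'dummyvlan' type string, the in-memory ID pool, and the ValueError are all invented for illustration; an in-tree Havana driver tracks allocations in the database via the session argument and raises Neutron's own exception types.

# A minimal sketch, assuming the TypeDriver base class shown above.
# 'dummyvlan' and the in-memory pool are hypothetical, for illustration.
class DummyVlanTypeDriver(TypeDriver):

    def __init__(self):
        # segmentation IDs this deployment allows; illustrative range
        self.free_ids = set(range(100, 200))
        self.allocated_ids = set()

    def get_type(self):
        # the network_type string this driver handles
        return 'dummyvlan'

    def initialize(self):
        # a real driver would parse config and sync state to the DB here
        pass

    def validate_provider_segment(self, segment):
        # reject admin-requested IDs outside the allowed range
        # (real drivers raise Neutron exceptions, not ValueError)
        seg_id = segment.get('segmentation_id')
        if seg_id not in self.free_ids | self.allocated_ids:
            raise ValueError('segmentation_id %s out of range' % seg_id)

    def reserve_provider_segment(self, session, segment):
        # mark an admin-chosen segmentation ID as in use
        # (a real driver records this in the DB via 'session')
        seg_id = segment['segmentation_id']
        self.free_ids.discard(seg_id)
        self.allocated_ids.add(seg_id)

    def allocate_tenant_segment(self, session):
        # pick any free segmentation ID for a tenant network
        if not self.free_ids:
            return None
        seg_id = self.free_ids.pop()
        self.allocated_ids.add(seg_id)
        return {'network_type': 'dummyvlan',
                'physical_network': None,
                'segmentation_id': seg_id}

    def release_segment(self, session, segment):
        # return the segmentation ID to the free pool on network delete
        seg_id = segment['segmentation_id']
        self.allocated_ids.discard(seg_id)
        self.free_ids.add(seg_id)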
Mechanism Driver API

class MechanismDriver(object):

    @abstractmethod
    def initialize(self): pass

    def create_network_precommit(self, context): pass
    def create_network_postcommit(self, context): pass
    def update_network_precommit(self, context): pass
    def update_network_postcommit(self, context): pass
    def delete_network_precommit(self, context): pass
    def delete_network_postcommit(self, context): pass

    def create_subnet_precommit(self, context): pass
    def create_subnet_postcommit(self, context): pass
    def update_subnet_precommit(self, context): pass
    def update_subnet_postcommit(self, context): pass
    def delete_subnet_precommit(self, context): pass
    def delete_subnet_postcommit(self, context): pass

    def create_port_precommit(self, context): pass
    def create_port_postcommit(self, context): pass
    def update_port_precommit(self, context): pass
    def update_port_postcommit(self, context): pass
    def delete_port_precommit(self, context): pass
    def delete_port_postcommit(self, context): pass

    def bind_port(self, context): pass
    def validate_port_binding(self, context): return False
    def unbind_port(self, context): pass

class NetworkContext(object):

    @abstractproperty
    def current(self): pass

    @abstractproperty
    def original(self): pass

    @abstractproperty
    def network_segments(self): pass

Port Binding
• Determines values for a port's binding:vif_type and binding:capabilities attributes, and selects a segment
• Occurs when binding:host_id is set on a port without an existing valid binding
• The ML2 plugin calls bind_port() on the registered MechanismDrivers, in the order listed in the config, until one succeeds or all have been tried
• A driver determines whether it can bind based on:
  o context.network.network_segments
  o context.current['binding:host_id']
  o context.host_agents()
• For L2 agent drivers, binding requires a live L2 agent on the port's host that:
  o Supports the network_type of a segment of the port's network
  o Has a mapping for that segment's physical_network, if applicable
• If it can bind the port, the driver calls context.set_binding() with the binding details
• If no driver succeeds, the port's binding:vif_type is set to BINDING_FAILED

class PortContext(object):

    @abstractproperty
    def current(self): pass

    @abstractproperty
    def original(self): pass

    @abstractproperty
    def network(self): pass

    @abstractproperty
    def bound_segment(self): pass

    @abstractmethod
    def host_agents(self, agent_type): pass

    @abstractmethod
    def set_binding(self, segment_id, vif_type, cap_port_filter): pass
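To show how these pieces fit together, here is a hedged sketch of a hypothetical agent-based mechanism driver implementing a postcommit hook and bind_port() against the MechanismDriver and PortContext APIs above. The agent type string, the VIF type, the print() stand-in, and the _can_reach() helper are invented for illustration and are simpler than what the in-tree agent drivers actually do.

# A sketch, assuming the MechanismDriver and PortContext classes above.
# Names and values below are hypothetical, not from an in-tree driver.
class SketchMechanismDriver(MechanismDriver):

    AGENT_TYPE = 'Sketch agent'   # hypothetical agent_type value
    VIF_TYPE = 'ovs'              # assumes an OVS-style VIF plug

    def initialize(self):
        # driver-specific config parsing would go here
        pass

    def create_port_postcommit(self, context):
        # after the port is committed to the DB, push it to the
        # backend (REST, RPC, ...); print() stands in for that call
        print('port %s created' % context.current['id'])

    def bind_port(self, context):
        # bind to the first segment a live agent on the port's
        # host can reach
        for agent in context.host_agents(self.AGENT_TYPE):
            if not agent['alive']:
                continue
            mappings = agent['configurations'].get('bridge_mappings', {})
            for segment in context.network.network_segments:
                if self._can_reach(segment, mappings):
                    context.set_binding(segment['id'],
                                        self.VIF_TYPE,
                                        cap_port_filter=True)
                    return
        # returning without set_binding() lets ML2 try the next driver

    def _can_reach(self, segment, mappings):
        # hypothetical helper: VLAN segments need a mapping for the
        # segment's physical_network; tunnels are reachable anywhere
        if segment['network_type'] == 'vlan':
            return segment['physical_network'] in mappings
        return segment['network_type'] in ('gre', 'vxlan')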
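Since binding tries mechanism drivers in exactly the order listed in the configuration, the driver list is worth seeing concretely. The fragment below is an illustrative sketch of a Havana-style setup: core_plugin selects the ML2 plugin in neutron.conf, and the [ml2] options choose the type and mechanism drivers. The option names existed in Havana, but every value shown is an example, not a recommendation.

# neutron.conf -- select ML2 as the core plugin
[DEFAULT]
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin

# ml2_conf.ini -- illustrative values only
[ml2]
type_drivers = local,flat,vlan,gre,vxlan
tenant_network_types = vlan
# bind_port() is tried in exactly this order
mechanism_drivers = openvswitch,cisco_nexus

[ml2_type_vlan]
network_vlan_ranges = physnet1:100:199

[ml2_type_vxlan]
vni_ranges = 5000:5999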
Havana Features

Type Drivers in Havana
The following segmentation types are supported by ML2 in the Havana release:
● local
● flat
● VLAN
● GRE
● VXLAN

Mechanism Drivers in Havana
The following ML2 MechanismDrivers exist in Havana:
● Arista
● Cisco Nexus
● Hyper-V Agent
● L2 Population
● Linuxbridge Agent
● Open vSwitch Agent
● Tail-f NCS

Before ML2 L2 Population MechanismDriver
"VM A" wants to talk to "VM G." "VM A" sends a broadcast packet, which is replicated to the entire tunnel mesh.
[Diagram: Hosts 1 through 4 connected in a full tunnel mesh; the broadcast from VM A on Host 1 is replicated to every other host]

With ML2 L2 Population MechanismDriver
Traffic from "VM A" to "VM G" is encapsulated and sent to "Host 4" according to the bridge forwarding table entry. The ARP request from "VM A" for "VM G" is intercepted and answered using a pre-populated neighbor entry.
[Diagram: proxy ARP on Host 1 answers VM A locally; the traffic is then tunneled directly to Host 4 only]

Modular Layer 2 Futures

ML2 Futures: Deprecation Items
• The future of the Open vSwitch and Linuxbridge plugins:
  o These are planned for deprecation in Icehouse
  o ML2 supports all their functionality
  o ML2 works with the existing OVS and Linuxbridge agents
  o No new features being added to the OVS and Linuxbridge plugins in Icehouse
• Migration tool being developed

Plugin vs. ML2 MechanismDriver?
• Advantages of writing an ML2 driver instead of a new monolithic plugin:
  o Much less code to write (or clone) and maintain
  o New Neutron features supported as they are added
  o Support for heterogeneous deployments
• Vendors integrating new plugins should consider an ML2 driver instead
  o Existing plugins may want to migrate to ML2 as well

ML2 With Current Agents
● The existing ML2 plugin works with existing agents
● Separate agents for Linuxbridge, Open vSwitch, and Hyper-V
[Diagram: the Neutron Server with the ML2 Plugin connects over the API network to Host A running the Linuxbridge Agent, Host B running the Hyper-V Agent, and Hosts C and D running the Open vSwitch Agent]

ML2 With Modular L2 Agent
● Future direction is to combine the open source agents
● Have a single agent which can support Linuxbridge and Open vSwitch
● Pluggable drivers for additional vSwitches, InfiniBand, SR-IOV, ...
[Diagram: the Neutron Server with the ML2 Plugin connects over the API network to Hosts A through D, each running the Modular Agent]

ML2 Demo

What the Demo Will Show
● ML2 running with multiple MechanismDrivers:
  ○ openvswitch
  ○ cisco_nexus
● Booting multiple VMs on multiple compute hosts
● Hosts are running Fedora
● Configuration of VLANs across both virtual and physical infrastructure

ML2 Demo Setup
[Diagram: Host 1 runs nova api, neutron server, neutron dhcp, neutron l3 agent, and nova compute, with vm1 attached through br-int and br-eth2 to eth2; Host 2 runs nova compute, with vm2 attached through br-int and br-eth2 to eth2; eth2 on each host connects to ports eth2/1 and eth2/2 of a Cisco Nexus switch]
• The VLAN is added on the VIF for VM1, and on the br-eth2 ports, by the OVS agent under the ML2 OVS MechanismDriver; the same happens for VM2 on Host 2.
• The ML2 Cisco Nexus MechanismDriver trunks the VLAN on switch ports eth2/1 and eth2/2.
• VM1 can ping VM2 ... we've successfully completed the standard network test.

Questions?