Specification
=============

SDN Features
------------
- ONOS cluster of N all-active instances, affording N-way redundancy and scale, where N = 3 or N = 5
- Unified operations interface (GUI/REST/CLI)
- Centralized configuration: all configuration is done on the controller instead of on each individual switch
- Centralized role-based access control (RBAC)
- Automatic host (endpoint) discovery: attached hosts, access devices, appliances (PNFs), routers, etc.,
  based on ARP, DHCP, NDP, etc.
- Automatic switch, link and topology discovery and maintenance (keepalives, failure recovery);
  a REST example follows this list

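As a minimal sketch of the unified REST interface and of the discovery features above,
the snippet below lists the switches, links and hosts an ONOS instance has learned. It
assumes a controller reachable at ``127.0.0.1:8181`` with the default ``onos``/``rocks``
REST credentials; adjust both for a real deployment.

.. code-block:: python

   import requests

   ONOS = "http://127.0.0.1:8181/onos/v1"   # assumed controller address
   AUTH = ("onos", "rocks")                 # assumed default REST credentials

   # Discovered switches, links and hosts are exposed as top-level
   # collections of the ONOS REST API.
   for resource in ("devices", "links", "hosts"):
       reply = requests.get(f"{ONOS}/{resource}", auth=AUTH)
       reply.raise_for_status()
       print(f"{resource}: {len(reply.json().get(resource, []))} discovered")
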
L2 Features
-----------
Various L2 connectivity and tunneling features are supported (a configuration sketch
follows this list):

- VLAN-based bridging

  - Access, trunk and native VLAN support

- VLAN cross connect

  - Forward traffic based on the outer VLAN ID
  - Forward traffic based on the outer and inner VLAN IDs (QinQ)

- Pseudowire

  - L2 tunneling across the L3 fabric
  - Supports tunneling of single-tagged and double-tagged traffic
  - Supports VLAN translation of the outer tag

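The sketch below shows how port-level VLAN roles might be expressed as network
configuration pushed to the controller. Device IDs, port numbers and VLAN IDs are
placeholders, and the ``vlan-untagged``, ``vlan-tagged`` and ``vlan-native`` keys are
assumed to follow the SegmentRouting (Trellis) interface schema; verify against the
schema of your release before use.

.. code-block:: python

   import requests

   # Hypothetical example: port 1 as an access port on VLAN 100, port 2 as a
   # trunk carrying VLANs 200 and 300 with native VLAN 100.
   vlan_config = {
       "ports": {
           "of:0000000000000001/1": {
               "interfaces": [{"vlan-untagged": 100}]
           },
           "of:0000000000000001/2": {
               "interfaces": [{"vlan-tagged": [200, 300], "vlan-native": 100}]
           },
       }
   }

   requests.post("http://127.0.0.1:8181/onos/v1/network/configuration",
                 json=vlan_config, auth=("onos", "rocks")).raise_for_status()
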
L3 Features
-----------
IP connectivity

- IPv4 and IPv6 unicast routing (implemented internally with MPLS Segment Routing)
- Subnet configuration on all non-spine-facing leaf ports (sketched below); no configuration
  required on any spine port
- IPv6 router advertisement
- ARP, NDP, IGMP handling
- Number of flows in the spines greatly reduced by MPLS Segment Routing
- Further reduction of per-leaf flows with route optimization logic

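Continuing the previous sketch, a leaf port's subnet is expressed by adding an ``ips``
list (the gateway address and prefix served by that port) to the hypothetical interface
entry; spine ports need no such entry. Addresses, VLANs and device IDs are placeholders.

.. code-block:: python

   import requests

   # Hypothetical leaf access port serving 10.0.1.0/24 on VLAN 10; spine ports
   # carry no interface configuration at all.
   subnet_config = {
       "ports": {
           "of:0000000000000101/3": {
               "interfaces": [{"ips": ["10.0.1.254/24"], "vlan-untagged": 10}]
           }
       }
   }

   requests.post("http://127.0.0.1:8181/onos/v1/network/configuration",
                 json=subnet_config, auth=("onos", "rocks")).raise_for_status()
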
DHCP Relay
----------
DHCP L3 relay

- DHCPv4 and DHCPv6
- DHCP server either directly attached to fabric leaves, or indirectly connected via an upstream router
- DHCP client either directly attached to fabric leaves, or indirectly connected via an LDRA
- Multiple DHCP servers for HA (a configuration sketch follows this list)

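A relay deployment points the fabric at the server's attachment point and addresses. The
fragment below is a sketch loosely based on the ONOS DHCP relay application's
configuration; the connect point and server addresses are placeholders and the field
names may differ between releases, so verify against your version before use.

.. code-block:: python

   import requests

   # Hypothetical DHCP relay configuration: the server hangs off port 5 of one
   # leaf and answers on 10.0.3.253 (IPv4) and 2001:db8::3 (IPv6).
   dhcp_relay_config = {
       "apps": {
           "org.onosproject.dhcprelay": {
               "default": [{
                   "dhcpServerConnectPoint": "of:0000000000000102/5",
                   "serverIps": ["10.0.3.253", "2001:db8::3"]
               }]
           }
       }
   }

   requests.post("http://127.0.0.1:8181/onos/v1/network/configuration",
                 json=dhcp_relay_config, auth=("onos", "rocks")).raise_for_status()
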
vRouter
-------
vRouter presents the entire SD-Fabric as a single router (or dual routers for HA),
with a disaggregated control/data plane.

- Uses open-source protocol implementations such as Quagga (or FRR)
- BGPv4 and BGPv6
- Static routes
- Route blackholing
- ACLs based on port, L2, L3 and L4 headers

Multicast
---------
Centralized multicast tree computation, programming and management

- Support both IPv4 and IPv6 multicast
- Dual-homed multicast sinks for HA
- Multiple multicast sources for HA

API
---
- Provide easy access for 3rd-party edge application developers and for the Aether centralized management platform
- Support for traffic redirection, dropping, network slicing and QoS

Programmability
---------------
- Support for Stratum, P4Runtime, gNMI and P4 programs
- Innovative services enabled by the programmable pipeline

  - 4G/5G UPF: GTP encap/decap, idle-mode buffering, QoS and more
  - BNG: PPPoE, anti-spoofing, accounting and more

Troubleshooting & Diagnostics
-----------------------------
- T3: troubleshooting tool to diagnose broken forwarding paths fabric-wide
- ONOS-diags: one-click diagnostics collection tool

.. _Topology:

Topology
--------
SD-Fabric can start at the smallest scale (single leaf) and grow horizontally.

.. image:: images/topology-scale.png
   :width: 900px


Single Leaf (ToR)
^^^^^^^^^^^^^^^^^
This is the minimal SD-Fabric setup, in which all servers are connected to a single switch.

.. image:: images/topology-single.png
   :width: 160px

Single Leaf Pair (Dual-Homing)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Compared to a single switch, it provides additional redundancy against server NIC and link failures.

.. image:: images/topology-pair.png
   :width: 225px

Leaf-Spine (without pairing)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Compared to a single switch, it offers more redundancy against switch failure and provides better scalability.

.. image:: images/topology-2x2.png
   :width: 300px

Leaf-Spine (with pairing)
^^^^^^^^^^^^^^^^^^^^^^^^^
It supports all the redundancy and scalability features mentioned above.

.. image:: images/topology-2x4.png
   :width: 450px

Multi-Stage Leaf-Spine
^^^^^^^^^^^^^^^^^^^^^^
Multi-stage is designed specifically for telco service providers.
The first stage can be installed in the central office, while the second stage
can be installed in a field office closer to the subscribers.
The two stages are typically connected via long-distance optical transport.

.. image:: images/topology-full.png
   :width: 700px

Resiliency
----------
Provides HA in the following scenarios:

- Controller instance failure (requires a 3- or 5-node ONOS cluster)
- Link failures
- Spine failure

Further HA support in the following failure scenarios when dual-homing is enabled:

- Leaf failure
- Upstream router failure
- Host NIC failure

Scalability
-----------
In Production:

- Up to 80k routes (with route optimization)
- 170k flows
- 600 direct-attached hosts
- 8 leaf switches
- 2 spine switches

In Pre-Production:

- Up to 120k routes (with route optimization)
- 250k flows
- 600 direct-attached hosts
- 8 leaf switches
- 2 spine switches
- 5000 active UEs, 10 calls per second

Security
--------
- TLS-secured connections between controllers and switches (premium feature)
- AAA 802.1X authentication

Aether-ready
------------
Fully integrated with Aether (5G/LTE private enterprise edge cloud solution)
including deployment automation, CI/CD, logging, monitoring, and alerting.

Overlay Support
---------------
Can be used with or integrated into 3rd-party overlay networks (e.g., OpenStack Neutron, Kubernetes CNI).

Orchestrator Support
--------------------
Can be integrated with an external orchestrator, optionally running in the public cloud.
Supports logging, telemetry, monitoring and alarm services via REST APIs and the
Elastic/Fluent Bit/Kibana and Prometheus/Grafana stacks.

Controller Server Specs
-----------------------
Recommended specs (per ONOS instance), based on 50k routes:

- CPU: 32 cores
- RAM: 128GB, with 64GB dedicated to the ONOS JVM heap

White Box Switch Hardware
-------------------------
- Multi-vendor: APS Networks™, Dell™, Delta Networks™, Edgecore Networks™, Inventec™, Netberg™, QCT™
- Multi-chipset:

  - Intel Tofino (supports all features, including programmability, UPF & INT)
  - Broadcom Tomahawk®, Tomahawk+®, Trident2 (traditional fabric features only)

- Port speeds: 1/10G, 25G, 40G, 100G
- Refer to the supported devices list in https://github.com/stratum/stratum for the most up-to-date hardware list

White Box Switch Software
-------------------------
- Open source ONL, ONIE, Docker, Kubernetes
- Stratum available from ONF