Architecture and Design
=======================

Architecture
------------

Classic SDN
^^^^^^^^^^^
SD-Fabric operates as a hybrid L2/L3 fabric. As a pure (or classic) SDN solution, SD-Fabric does
not use any of the traditional control protocols typically found in networking, a non-exhaustive
list of which includes: STP, MSTP, RSTP, LACP, MLAG, PIM, IGMP, OSPF, IS-IS, TRILL, RSVP, LDP,
and BGP. Instead, SD-Fabric uses an SDN controller (ONOS) decoupled from the data plane
hardware to directly program ASIC forwarding tables in a pipeline defined by a P4 program. In
this design, a set of applications running on ONOS programs all the fabric functionality and
features, such as Ethernet switching, IP routing, mobile core user plane, multicast, DHCP relay,
and more.


Topologies
^^^^^^^^^^
SD-Fabric supports a number of topological variants. In its simplest instantiation, one
could use a single leaf or a leaf pair to connect servers, external routers, and other equipment
like access nodes or physical appliances (PNFs). Such a deployment can also be scaled
horizontally into a leaf-and-spine fabric (a 2-level folded Clos) by adding 2 or 4 spines and up to
10 leaves in single or paired configurations. Further scale can be achieved by distributing the
fabric itself across geographical regions, with spine switches in a primary central location
connected to other spines in multiple secondary (remote) locations using WDM links. Such 4-level
topologies (leaf-spine-spine-leaf) can be used for backhaul in operator networks, where
the secondary locations are deeper in the network and closer to the end user. In these
configurations, the spines in the secondary locations serve as aggregation devices that backhaul
traffic from the access nodes to the primary location, which typically has the compute and
storage facilities for NFV applications.
See :ref:`Topology` for details.


Redundancy
^^^^^^^^^^
SD-Fabric supports redundancy at every level. A leaf-spine fabric is redundant by design in the
spine layer, through the use of ECMP hashing and multiple spines. In addition, SD-Fabric supports
leaf pairs, where servers and external routers can be dual-homed to two ToRs in an active-active
configuration. In the control plane, some SDN solutions use single-instance controllers, which are
single points of failure. Others use two controllers in active-backup mode, which is redundant
but may lack scale, as all the work is still being done by one instance at any time and scale can
never exceed the capacity of one server. In contrast, SD-Fabric is based on ONOS, an SDN
controller that offers N-way redundancy and scale. In an ONOS cluster of 3 or 5 instances, all
nodes are active and doing work simultaneously, and failure handling is fully automated and
handled entirely by the ONOS platform.

.. image:: images/arch-redundancy.png
   :width: 350px

MPLS Segment Routing (SR)
^^^^^^^^^^^^^^^^^^^^^^^^^
While SR is not an externally supported feature, the SD-Fabric architecture internally uses concepts
like globally significant MPLS labels that are assigned to each leaf and spine switch. The leaf
switches push an MPLS label designating the destination ToR (leaf) onto the IPv4 or IPv6 traffic
before hashing the flows to the spines. In turn, the spines forward the traffic solely on the basis
of the MPLS labels. This design concept, popular in IP/MPLS WAN networks, has significant
advantages. Since the spines maintain only label state, the programming burden is significantly
lower and scale is better. For example, in one use case the leaf switches may each hold 100K+
IPv4/v6 routes, while the spine switches need to be programmed with only tens of labels! As a
result, completely different ASICs can be used for the leaf and spine switches: the leaves can
have bigger routing tables and deeper buffers while sacrificing switching capacity, while the
spines can have smaller tables with high switching capacity.
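
The division of state described above can be sketched as follows. This is purely illustrative
(not SD-Fabric code): the switch names, label values, and table layouts are invented, and the
point is only that spine state scales with the number of leaves, not with the number of routes.

```python
# Hypothetical sketch of destination-ToR label forwarding.
import ipaddress

# Each leaf is assigned a globally significant MPLS label (assumed values).
LEAF_LABELS = {"leaf1": 101, "leaf2": 102}

# Leaves hold full IP routing state: prefix -> destination leaf.
LEAF_ROUTES = {
    ipaddress.ip_network("10.0.1.0/24"): "leaf1",
    ipaddress.ip_network("10.0.2.0/24"): "leaf2",
    # ...in practice, 100K+ IPv4/v6 routes per leaf
}

# Spines hold only label state: label -> output port toward that leaf.
SPINE_LABEL_TABLE = {101: "port1", 102: "port2"}

def ingress_leaf(dst_ip: str) -> int:
    """Longest-prefix match, then push the destination leaf's label."""
    addr = ipaddress.ip_address(dst_ip)
    matches = [net for net in LEAF_ROUTES if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return LEAF_LABELS[LEAF_ROUTES[best]]

def spine_forward(label: int) -> str:
    """Spines forward solely on the MPLS label -- no IP lookup."""
    return SPINE_LABEL_TABLE[label]

label = ingress_leaf("10.0.2.7")   # leaf does the IP lookup, pushes label 102
port = spine_forward(label)        # spine only needs one entry per leaf
```

However many routes the leaves carry, `SPINE_LABEL_TABLE` stays one entry per leaf, which
is the property that lets spines use small-table, high-capacity ASICs.
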

Beyond Traditional Fabrics
--------------------------

.. image:: images/arch-features.png
   :width: 700px

While SD-Fabric offers advancements that go well beyond traditional fabrics, it is first helpful to
understand that SD-Fabric provides all the features found in network fabrics from traditional
networking vendors, making SD-Fabric compatible with all existing infrastructure
(servers, applications, etc.).

At its core, SD-Fabric is an L3 fabric where both IPv4 and IPv6 packets are routed across server
racks using multiple equal-cost paths via spine switches. L2 bridging and VLANs are also
supported within each server rack, and compute nodes can be dual-homed to two Top-of-Rack
(ToR) switches in an active-active configuration (M-LAG). SD-Fabric assumes that the fabric
connects to the public Internet and the public cloud (or other networks) via traditional router(s).
SD-Fabric supports a number of other router features like static routes, multicast, DHCP L3 relay,
and the use of ACLs based on layer 2/3/4 options to drop traffic at ingress or redirect traffic via
policy-based routing. With SDN control, however, the software running on each switch is greatly
simplified, and control moves into SDN applications running in the edge cloud.

While these traditional switching/routing features are not particularly novel, SD-Fabric's
fundamental embrace of programmable silicon offers advantages that go far beyond traditional
fabrics.

Programmable Data Planes & P4
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
SD-Fabric's data plane is fully programmable. In marked contrast to traditional fabrics, features
are not prescribed by switch vendors. This is made possible by P4, a high-level programming
language used to define the switch packet-processing pipeline, which can be compiled to run at
line rate on programmable ASICs like Intel Tofino (see https://opennetworking.org/p4/). P4
allows operators to continuously evolve their network infrastructure by re-programming the
existing switches, rolling out new features and services on a weekly basis. In contrast, traditional
fabrics based on fixed-function ASICs are subject to extremely long hardware development
cycles (4 years on average) and require expensive infrastructure upgrades to support new features.

SD-Fabric takes advantage of P4 programmability by extending the traditional L2/L3 pipeline for
switching and routing with specialized functions such as the 4G/5G Mobile Core User Plane Function
(UPF) and Inband Network Telemetry (INT).

4G/5G Mobile Core User Plane Function (UPF)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Switches in SD-Fabric can be programmed to perform UPF functions at line rate. The L2/L3
packet-processing pipeline running on Intel Tofino switches has been extended to include
capabilities such as GTP-U tunnel termination, usage reporting, idle-mode buffering, QoS, slicing,
and more. Similar to vRouter, a new ONOS app abstracts the whole leaf-spine fabric as one big
UPF, providing integration with the mobile core control plane through a 3GPP-compliant
implementation of the Packet Forwarding Control Protocol (PFCP).

With integrated UPF processing, SD-Fabric can implement a multi-terabit, low-latency 4G/5G
local breakout for edge applications without taking away CPU processing power from
containers or VMs. In contrast to UPF solutions based on full or partial SmartNIC offload,
SD-Fabric's embedded UPF requires no additional hardware beyond the same leaf and spine
switches used to interconnect servers and base stations. At the same time, SD-Fabric can be
integrated with both CPU-based and SmartNIC-based UPFs to improve scale while supporting
differentiated services on a hardware-based fast path at line rate for mission-critical 4G/5G
applications (see https://opennetworking.org/sd-core/ for more details).

Visibility with Inband Network Telemetry (INT)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
SD-Fabric comes with scalable support for INT, providing unprecedented visibility into how
individual packets are processed by the fabric. To this end, the P4-defined switch pipeline has
been extended with the ability to generate INT reports for a number of packet events and
anomalies, for example:

- For each flow (5-tuple), it produces periodic reports that monitor the path, capturing which
  switches, ports, and queues the flow traverses, along with the latency introduced by each
  network hop (switch).
- If a packet gets dropped, it generates a report carrying the switch ID and the drop reason
  (e.g., routing table miss, TTL zero, queue congestion, and more).
- During congestion, it produces reports to reconstruct a snapshot of the queue at a given
  time, making it possible to identify exactly which flow is causing delay or drops for other flows.
- For GTP-U tunnels, it produces reports about the inner flow, thus monitoring the
  forwarding behavior and perceived QoS for individual UE flows.

SD-Fabric's INT implementation is compliant with the open source INT specification, and it has
been validated to work with Intel's DeepInsight performance monitoring solution, which acts as
the collector of INT reports generated by switches. Moreover, to avoid overloading the INT
collector and to minimize the overhead of INT reports in the fabric, SD-Fabric's data plane uses
P4 to implement smart filters and triggers that drastically reduce the number of reports
generated, for example by filtering out duplicates and by triggering report generation only in
case of meaningful anomalies (e.g., spikes in hop latency, path changes, drops, queue congestion,
etc.). In contrast to sampling-based approaches, which often allow some anomalies to go
undetected, SD-Fabric provides precise INT-based visibility that can scale to millions of flows.
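
The smart-filter idea can be sketched in a few lines. This is a hedged illustration, not
SD-Fabric's actual P4 logic: the quantization constant, state layout, and function names are
assumptions, and the real implementation runs in the switch pipeline, not in Python.

```python
# Illustrative INT "smart filter": report a flow only when something
# meaningful changes (path change or hop-latency spike); suppress duplicates.

LATENCY_QUANTUM_NS = 256  # spike-detection granularity (assumed value)

_last_state = {}  # 5-tuple -> (path, quantized hop latency)

def should_report(flow, path, hop_latency_ns):
    """Return True only for new flows or anomalies; duplicates are filtered."""
    state = (tuple(path), hop_latency_ns // LATENCY_QUANTUM_NS)
    if _last_state.get(flow) == state:
        return False           # nothing new: suppress the report
    _last_state[flow] = state  # new flow, path change, or latency spike
    return True

flow = ("10.0.1.5", "10.0.2.7", 6, 1234, 80)  # src, dst, proto, sport, dport
first = should_report(flow, ["leaf1", "spine1", "leaf2"], 500)   # new flow
dup   = should_report(flow, ["leaf1", "spine1", "leaf2"], 510)   # same quantum
spike = should_report(flow, ["leaf1", "spine1", "leaf2"], 5000)  # latency spike
```

Only the first and third calls generate reports; the second is deduplicated because its
latency falls in the same quantum as the previous report for that flow and path.
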

Flexible ASIC Resource Allocation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The P4 program at the base of SD-Fabric's software stack defines match-action tables for
common L2/L3 features such as bridging, IPv4/IPv6 routing, MPLS termination, and ACLs, as well
as specialized features like UPF, with tables that store GTP-U tunnel information and more. In
contrast to the fixed-function ASICs used in traditional fabrics, table sizes are not fixed. The use of
programmable ASICs like Intel Tofino in SD-Fabric enables the P4 program to be adapted to
specific deployment requirements. For example, for routing-heavy deployments, one could
decide to increase the IPv4 routing table to take up to 90% of the total ASIC memory, with an
arbitrary ratio of longest-prefix match (LPM) entries and exact-match /32 entries, while reducing
the size of other tables. Similarly, when using SD-Fabric for UPF, one could decide to recompile
the P4 program with larger GTP-U tunnel tables while reducing the IPv4 routing table to
10-100 entries (since most traffic is tunneled) or entirely removing the IPv6 tables.
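
The trade-off amounts to dividing a fixed memory budget between tables at compile time.
A back-of-the-envelope sketch, with a hypothetical total capacity and simplified accounting
(real Tofino resource allocation involves SRAM/TCAM blocks, key widths, and action data):

```python
# Illustrative table-memory split between deployment profiles.
TOTAL_ENTRIES = 1_000_000  # assumed total capacity, not a real Tofino figure

def profile(ipv4_share, upf_share, other_share):
    """Split the fixed entry budget between table groups."""
    assert abs(ipv4_share + upf_share + other_share - 1.0) < 1e-9
    return {
        "ipv4_routes": round(TOTAL_ENTRIES * ipv4_share),
        "gtpu_tunnels": round(TOTAL_ENTRIES * upf_share),
        "other": round(TOTAL_ENTRIES * other_share),
    }

# Routing-heavy deployment: 90% of memory to IPv4 routing.
routing_heavy = profile(0.90, 0.00, 0.10)

# UPF deployment: tiny route table (most traffic is tunneled), big UPF tables.
upf_heavy = profile(0.0001, 0.85, 0.1499)
```

Recompiling the P4 program with a different profile changes where the same silicon spends
its memory, something a fixed-function ASIC cannot do.
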

Closed Loop Control
^^^^^^^^^^^^^^^^^^^
With complete transparency, visibility, and verifiability, SD-Fabric becomes capable of being
optimized and secured through programmatic, real-time closed loop control. By defining
acceptable tolerances for specific settings, measuring for compliance, and automatically adapting
to deviations, a closed loop network can be created that dynamically and automatically responds
to environmental changes. Closed loop control applies to a variety of use cases, including
resource optimization (traffic engineering), verification (forwarding behavior), security (DDoS
mitigation), and others. In particular, in collaboration with the Pronto™ project, a microburst
mitigation mechanism has been implemented to stop attackers from filling up switch
queues in an attack attempting to disrupt mission-critical traffic.

SDN, White Boxes, and Open Source
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
SD-Fabric is based on a purist implementation of SDN in both the control and data planes. When
coupled with open source, this approach enables faster development of features and greater
flexibility for operators to deploy only what they need and to customize and optimize features the
way they want. Furthermore, SDN facilitates the centralized configuration of all network
functionality and allows network monitoring and troubleshooting to be centralized as well. Both
are significant benefits over traditional box-by-box networking and enable faster deployments,
simplified operations, and streamlined troubleshooting.

The use of white box (bare metal) switching hardware from ODMs significantly reduces CapEx
when compared to products from OEM vendors. By some accounts, the cost savings can
be as high as 60%. This is typically because OEM vendors amortize the cost of developing
embedded switch/router software into the price of their hardware.

Finally, open source software allows network operators to develop their own applications and
choose how they integrate with their backend systems. And open source is considered more
secure, with ‘many eyes’ making it much harder for backdoors to be intentionally or
unintentionally introduced into the network.

Such unfettered ability to control timelines, features, and costs compared to traditional network
fabrics makes SD-Fabric very attractive for operators, enterprises, and government applications.

Extensible APIs
^^^^^^^^^^^^^^^
People usually think of a network fabric as an opaque pipe: applications send packets into
the network and hope they come out the other side. Little visibility is provided to determine
where things have gone wrong when a packet doesn't make it to its destination, and network
applications have no knowledge of how their packets are handled by the fabric.

With the SD-Fabric API, network applications have full visibility and control over how their
packets are processed. For example, a delay-sensitive application can be informed
of the network latency and instruct the fabric to redirect its packets when there is congestion on
the current forwarding path. Similarly, the API offers a way to associate network traffic with a
network slice, providing QoS guarantees and traffic isolation from other slices. The API also plays
a critical role in closed loop control by offering a programmatic way to dynamically change the
packet forwarding behavior.

At a high level, SD-Fabric's APIs fall into four major categories: configuration, information,
control, and OAM.

- Configuration: APIs let users set up SD-Fabric features such as VLAN information for
  bridging and subnet information for routing.
- Information: APIs allow users to obtain the operational status, metrics, and network events
  of SD-Fabric, such as link congestion, counters, and port status.
- Control: APIs enable users to dynamically change the forwarding behavior of the
  fabric, such as dropping or redirecting traffic, setting QoS classification, and applying
  network slicing policies.
- OAM: APIs expose operational and management features, such as software upgrade
  and troubleshooting, allowing SD-Fabric to be integrated with existing orchestration
  systems and workflows.
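
To make the four categories concrete, the sketch below shows the *kind* of call each one
might involve. The endpoint paths, payload fields, and base URL are all invented for
illustration; consult the SD-Fabric API documentation for the real interface.

```python
# Hypothetical examples of the four API categories (invented endpoints).
def build_requests(base="https://sdfabric.example.com/api/v1"):
    return [
        # Configuration: set up a routed subnet and VLAN on a leaf port.
        ("POST", f"{base}/config/interfaces",
         {"device": "leaf1", "port": 1, "ips": ["10.0.1.254/24"], "vlan": 100}),
        # Information: read port counters and status.
        ("GET", f"{base}/info/ports/leaf1/1/counters", None),
        # Control: redirect a flow and pin it to a network slice.
        ("POST", f"{base}/control/policies",
         {"match": {"ip_dst": "10.0.2.7/32"}, "action": "redirect",
          "next_hop": "10.0.3.1", "slice": "mission-critical"}),
        # OAM: trigger a rolling software upgrade of a component.
        ("POST", f"{base}/oam/upgrade", {"component": "stratum"}),
    ]

requests_by_category = build_requests()
```

The shape is what matters: configuration and OAM calls change fabric state, information
calls read it, and control calls alter forwarding behavior at runtime.
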
| 227 | |
| 228 | Edge-Cloud Ready |
| 229 | ---------------- |
| 230 | SD-Fabric adopts cloud native technologies and methodologies that are well developed and |
| 231 | widely used in the computing world. Cloud native technologies make the deployment and |
| 232 | operation of SD-Fabric similar to other software deployed in a cloud environment. |
| 233 | |
| 234 | Kubernetes Integration |
| 235 | ^^^^^^^^^^^^^^^^^^^^^^ |
| 236 | Both control plane software (ONOS™ and apps) and, importantly, data plane software (Stratum™), |
| 237 | are containerized and deployed as Kubernetes services in SD-Fabric. In other words, not only the |
| 238 | servers but also the switching hardware identify as Kubernetes ‘nodes’ and the same processes |
| 239 | can be used to manage the lifecycle of both control and data plane containers. For example, Helm |
| 240 | charts can be used for installing and configuring images for both, while Kubernetes monitors the |
| 241 | health of all containers and restarts failed instances on servers and switches alike. |
| 242 | |
| 243 | .. image:: images/arch-k8s.png |
| 244 | :width: 500px |
| 245 | |
| 246 | Configuration, Logging, and Troubleshooting |
| 247 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
| 248 | SD-Fabric reads all configurations from a single repository and automatically applies appropriate |
| 249 | config to the relevant components. In contrast to traditional embedded networking, there is no |
| 250 | need for network operators to go through the error-prone process of configuring individual leaf |
| 251 | and spine switches. Similarly, logs of each component in SD-Fabric are streamed to an EFK stack |
| 252 | (ElasticSearch, Fluentbit, Kibana) for log preservation, filtering and analysis. SD-Fabric offers a |
| 253 | single-pane-of-glass for logging and troubleshooting network state, which can further be |
| 254 | integrated with operator’s backend systems |
| 255 | |
| 256 | .. image:: images/arch-logging.png |
| 257 | :width: 1000px |
| 258 | |
| 259 | |
| 260 | Monitoring and Alerts |
| 261 | ^^^^^^^^^^^^^^^^^^^^^ |
| 262 | SD-Fabric continuously monitors system metrics such as bandwidth utilization and connectivity |
| 263 | health. These metrics are streamed to Prometheus and Grafana for data aggregation and |
| 264 | visualization. Additionally, alerts are triggered when metrics meet predefined conditions. This |
| 265 | allows the operators to react to certain network events such as bandwidth saturation even before |
| 266 | the issue starts to disrupt user traffic. |
| 267 | |
| 268 | .. image:: images/arch-monitoring.png |
| 269 | :width: 1000px |
| 270 | |
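The alerting logic amounts to evaluating thresholds over streamed metrics. A minimal sketch,
assuming an 80% saturation threshold and invented link names (real deployments would express
this as Prometheus alerting rules rather than application code):

```python
# Illustrative threshold alert: flag links before they saturate.
SATURATION_THRESHOLD = 0.80  # assumed threshold, not an SD-Fabric default

def evaluate_alerts(link_utilization):
    """Return the links whose utilization crosses the alert threshold."""
    return sorted(link for link, util in link_utilization.items()
                  if util >= SATURATION_THRESHOLD)

alerts = evaluate_alerts({
    "leaf1-spine1": 0.95,  # nearly saturated: alert
    "leaf1-spine2": 0.40,  # healthy
    "leaf2-spine1": 0.83,  # above threshold: alert
})
```

Firing on a threshold below 100% is what gives operators time to rebalance traffic before
users notice.
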
Deployment Automation
^^^^^^^^^^^^^^^^^^^^^
SD-Fabric utilizes a CI/CD model to manage the lifecycle of the software, allowing developers to
iterate rapidly when introducing a new feature. New container images are generated
automatically when new versions are released. Once the hardware is in place, a complete
deployment of the entire SD-Fabric stack can be pushed fabric-wide from the public cloud with a
single click in less than two minutes.

.. image:: images/arch-deployment.png
   :width: 900px

Aether™-Ready
^^^^^^^^^^^^^
SD-Fabric fits into a variety of edge use cases. Aether is ONF's private 5G/LTE enterprise edge
cloud platform, running in a dozen sites across multiple geographies as of early 2021.

Aether consists of several edge clouds deployed at enterprise sites, controlled and managed by a
central cloud. Each Aether Edge hosts third-party or in-house edge apps that benefit from low
latency and high bandwidth connectivity to the local devices and systems at the enterprise edge.
Each edge also hosts O-RAN compliant private-RAN control, IoT, and AI/ML platforms, and
terminates mobile user plane traffic by providing local breakout (UPF) at the edge sites. In
contrast, the Aether management platform centrally runs the shared mobile-core control plane
that supports all edges from the public cloud. Additionally, a management portal for the operator
and for each enterprise is provided from the public cloud, and Runtime Operation Control (ROC)
controls and configures the entire Aether solution in a centralized manner.

SD-Fabric has been fully integrated into the Aether Edge as its underlying network infrastructure,
interconnecting all hardware equipment in each edge site, such as servers and disaggregated RAN
components, with bridging, routing, and advanced processing like local breakout. It is worth
noting that SD-Fabric can be configured and orchestrated via its configuration APIs by cloud
solutions, and can therefore be easily integrated with Aether or third-party cloud offerings from
hyperscalers. In Aether, SD-Fabric configurations are centralized, modeled, and generated by
ROC to ensure the fabric configurations are consistent with other Aether components.

In addition to connectivity, SD-Fabric supports a number of advanced services such as
hierarchical QoS, network slicing, and UPF idle-mode buffering. And given its native support for
programmability, we expect many more innovative services to take advantage of SD-Fabric over
time.

.. image:: images/arch-aether-ready.png
   :width: 800px

System Components
-----------------

.. image:: images/arch-software-stack.png
   :width: 400px

Open Network Operating System (ONOS)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
SD-Fabric uses ONF's Open Network Operating System (ONOS) as the SDN controller. ONOS is
designed as a distributed system, composed of multiple functionally identical instances operating
in a cluster, with all instances actively operating on the network. This unique capability of
ONOS simultaneously affords high availability and horizontal scaling of the control
plane. ONOS interacts with the network devices by means of pluggable southbound interfaces.
In particular, SD-Fabric leverages P4Runtime™ for programming and gNMI for configuring
certain features (such as port speed) in the fabric switches. Like other SDN controllers, ONOS
provides several core services like topology discovery and endpoint discovery (hosts, routers,
etc. attached to the fabric). Unlike other open source SDN controllers, ONOS delivers these
core services in a distributed way over the entire cluster, such that applications running in any
instance of the controller have the same view and information.

ONOS Applications
^^^^^^^^^^^^^^^^^
SD-Fabric uses a collection of applications that run on ONOS to provide the fabric features and
services. The main application, responsible for fabric operation, handles connectivity features
according to the SD-Fabric architecture, while other apps like DHCP relay, AAA, UPF control, and
multicast handle more specialized features. Importantly, SD-Fabric uses the ONOS Flow Objective
API, which allows applications to program switching devices in a pipeline-agnostic
way. By using Flow Objectives, applications can be written without worrying about the low-level
pipeline details of various switching chips. The API is implemented by specific device drivers
that are aware of the pipelines they serve and can thus convert the application's API calls to
device-specific rules. In this way, an application can be written once and adapted to pipelines
from different ASIC vendors.
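
The idea behind Flow Objectives can be sketched as follows. The actual ONOS API is Java;
this Python rendering, including the table names and driver structure, is purely illustrative
of how one pipeline-agnostic intent maps to different pipelines.

```python
# Illustrative Flow Objective translation (not the real ONOS Java API).

def forwarding_objective(dst_prefix, next_hop):
    """Pipeline-agnostic intent: route this prefix toward that next hop."""
    return {"type": "forward",
            "selector": {"ip_dst": dst_prefix},
            "treatment": {"next": next_hop}}

# Each driver knows the table names of the pipeline it serves (assumed names).
DRIVERS = {
    "tofino": lambda o: [("FabricIngress.routing_v4", o["selector"], o["treatment"])],
    "xgs":    lambda o: [("L3_UNICAST", o["selector"], o["treatment"])],
}

def program(device_asic, objective):
    """Convert one objective into device-specific rules via the right driver."""
    return DRIVERS[device_asic](objective)

obj = forwarding_objective("10.0.2.0/24", "spine1")
tofino_rules = program("tofino", obj)  # same objective...
xgs_rules = program("xgs", obj)        # ...different device tables
```

The application emits `obj` once; which table the rule lands in is entirely the driver's
concern, which is what makes the apps portable across ASIC vendors.
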

Stratum
^^^^^^^
SD-Fabric integrates switch software from the ONF Stratum project. Stratum is an open source,
silicon-independent switch operating system. Stratum implements the latest SDN-centric
northbound interfaces, including P4, P4Runtime, gNMI/OpenConfig, and gNOI, thereby enabling
interchangeability of forwarding devices and programmability of forwarding behaviors. On the
southbound interface, Stratum implements silicon-dependent adapters supporting network
ASICs such as Intel Tofino, the Broadcom™ XGS® line, and others.

Leaf and Spine Switch Hardware
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The leaf and spine hardware used in SD-Fabric is typically a set of Open
Compute Project (OCP)™-certified switches from a selection of different ODM vendors. The port
configurations and ASICs used in these switches depend on operator needs. For example,
if the need is only for traditional fabric features, a number of options are possible, e.g., Broadcom
StrataXGS ASICs in 48x1G/10G or 32x40G/100G configurations. For advanced needs that take
advantage of P4 and programmable ASICs, Intel Tofino or Broadcom Trident 4 are more
appropriate choices.

ONL and ONIE
^^^^^^^^^^^^
The SD-Fabric switch software stack includes Open Network Linux (ONL) and the Open Network
Install Environment (ONIE) from OCP. The switches ship with ONIE, a boot loader that
enables the installation of the target OS as part of the provisioning process. ONL, a Linux
distribution for bare metal switches, is used as the base operating system. It ships with a number
of additional drivers for bare metal switch hardware elements (e.g., LEDs, SFPs) that are typically
unavailable in normal Linux distributions for bare metal servers (e.g., Ubuntu).

Docker/Kubernetes, Elasticsearch/Fluentbit/Kibana, Prometheus/Grafana
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
While ONOS/Stratum instances can be deployed natively on bare metal servers/switches, there
are advantages to deploying ONOS/Stratum instances as containers and using a container
management system like Kubernetes (K8s). In particular, K8s can monitor and automatically
restart lost controller instances (container pods), which then rejoin the operating cluster
seamlessly. SD-Fabric also utilizes widely adopted cloud native technologies such as
Elasticsearch/Fluentbit/Kibana for log preservation, filtering, and analysis, and
Prometheus/Grafana for metric monitoring and alerting.