..
   SPDX-FileCopyrightText: © 2020 Open Networking Foundation <support@opennetworking.org>
   SPDX-License-Identifier: Apache-2.0


Pronto Deployment
=================

One of the earliest structured deployments of Aether was as part of the
`Pronto <https://prontoproject.org/>`_ project.

The topology used in Pronto is a 2x2 :ref:`Leaf-Spine (without pairing)
<sdfabric:specification:topology>`, which provides Spine redundancy but does
not support dual-homing of devices.

.. image:: images/edge_2x2.svg
   :alt: 2x2 Leaf-Spine (without pairing) topology

The specific deployment includes a production cluster with multiple servers
and a 2x2 leaf-spine fabric, along with a secondary development cluster with
its own servers and fabric switch, as shown in this diagram:

.. image:: images/pronto_logical_diagram.svg
   :alt: Logical Network Diagram

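The fabric wiring above can be sketched in a few lines of Python. This is
only an illustration (the switch names are hypothetical, not the Pronto
device names) of why a leaf-spine fabric without pairing survives a spine
failure, while devices attached to a single leaf remain single-homed:

.. code-block:: python

   from itertools import product

   SPINES = ["spine1", "spine2"]
   LEAVES = ["leaf1", "leaf2"]

   # Without pairing there are no leaf-to-leaf links: the fabric is the
   # full bipartite set of leaf-to-spine uplinks (4 links in a 2x2).
   links = list(product(LEAVES, SPINES))

   # Spine redundancy: remove one spine and every leaf still has an uplink.
   remaining = [(leaf, spine) for leaf, spine in links if spine != "spine1"]
   assert {leaf for leaf, _ in remaining} == set(LEAVES)

   # No dual-homing: each server attaches to exactly one leaf, so a leaf
   # failure isolates the devices behind it.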
5x Fabric Switches (4 in a 2x2 fabric for production, 1 for development):

* `EdgeCore Wedge100BF-32X
  <https://www.edge-core.com/productsInfo.php?cls=1&cls2=180&cls3=181&id=335>`_ - a "Dual Pipe" chipset variant, used for the Spine switches

* `EdgeCore Wedge100BF-32QS
  <https://www.edge-core.com/productsInfo.php?cls=1&cls2=180&cls3=181&id=770>`_ - a "Quad Pipe" chipset variant, used for the Leaf switches

7x Compute Servers (5 for production, 2 for development):

* `Supermicro 6019U-TRTP2
  <https://www.supermicro.com/en/products/system/1U/6019/SYS-6019U-TRTP2.cfm>`_
  1U server

* `Supermicro 6029U-TR4
  <https://www.supermicro.com/en/products/system/2U/6029/SYS-6029U-TR4.cfm>`_
  2U server

These servers are configured with:

* 2x `Intel Xeon 5220R CPUs
  <https://ark.intel.com/content/www/us/en/ark/products/199354/intel-xeon-gold-5220r-processor-35-75m-cache-2-20-ghz.html>`_,
  each with 24 cores, 48 threads
* 384GB of DDR4 memory, made up of 12x 16GB ECC DIMMs
* 2TB of NVMe flash storage
* 2x 6TB SATA disk storage
* 2x 40GbE ports using an XL710QDA2 NIC

The 1U servers additionally have:

* 2x 1GbE copper network ports
* 2x 10GbE SFP+ network ports

The 2U servers have:

* 4x 1GbE copper network ports

1x Management Switch: `HP/Aruba 2540 Series JL356A
<https://www.arubanetworks.com/products/switches/access/2540-series/>`_.

1x Management Server: `Supermicro 5019D-FTN4
<https://www.supermicro.com/en/Aplus/system/Embedded/AS-5019D-FTN4.cfm>`_,
configured with:

* AMD Epyc 3251 CPU with 8 cores, 16 threads
* 32GB of DDR4 memory, in 2x 16GB ECC DIMMs
* 1TB of NVMe flash storage
* 4x 1GbE copper network ports

For Pronto, the primary reseller ONF and Stanford used was `ASA (aka
"RackLive") <https://www.asacomputers.com/>`_ for servers and switches, with
radio equipment purchased directly from `Sercomm <https://www.sercomm.com>`_.

Pronto BoM Table
""""""""""""""""

============ ===================== ===============================================
Quantity     Type                  Description/Use
============ ===================== ===============================================
5            P4 Fabric Switch      2x2 topology, plus 1 development switch
1            Management Switch     Must be Layer 2/3 capable
1            Management Server     2x 40GbE QSFP ports recommended
5            1U Compute Servers
2            2U Compute Servers
6            100GbE QSFP DAC cable Between Fabric switches
14           40GbE QSFP DAC cable  Between Compute, Management, and Fabric Switch
2            QSFP to 4x SFP+ DAC   Split cable between Fabric and eNB
2            eNB
2            10GbE to 1GbE Media   Required unless using switch to convert from
             converter             fabric to eNB
2            PoE+ Injector         Required unless using a PoE+ Switch
Sufficient   Cat6 Network Cabling  Between all equipment, for management network
============ ===================== ===============================================
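The 40GbE cable count in the table can be cross-checked against the server
inventory earlier in this section. The sketch below is only an illustration
(not part of any Pronto tooling), assuming every compute server cables both
ports of its XL710QDA2 NIC into the fabric:

.. code-block:: python

   # Counts taken from the inventory above: 5x 1U and 2x 2U compute
   # servers, each with 2x 40GbE ports on an XL710QDA2 NIC.
   servers_1u = 5
   servers_2u = 2
   ports_per_server = 2

   dac_40gbe = (servers_1u + servers_2u) * ports_per_server
   print(dac_40gbe)  # 14, matching the 40GbE QSFP DAC cable row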