AETHER-3206 Update low level documentation for 2.0
AETHER-3330 Update diagrams and process docs for 2.0
Fix paths and language used on runtime deployment for clarity
Rename management server -> management router
Change-Id: Ib9745f45e2abee2a05ec49296dee9c7ff39b580d
diff --git a/edge_deployment/pronto.rst b/edge_deployment/pronto.rst
index 9a3b532..b9bc919 100644
--- a/edge_deployment/pronto.rst
+++ b/edge_deployment/pronto.rst
@@ -7,7 +7,16 @@
=================
One of the earliest structured deployments of Aether was as part of the `Pronto
-<https://prontoproject.org/>`_ project. This deployment includes a production
+<https://prontoproject.org/>`_ project.
+
+The topology used in Pronto is a 2x2 :ref:`Leaf-Spine (without pairing)
+<sdfabric:specification:topology>`, which provides spine redundancy but does
+not support dual-homing of devices.
+
+.. image:: images/edge_2x2.svg
+   :alt: 2x2 Leaf-Spine (without pairing) topology
+
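+To make the redundancy tradeoff concrete, here is a minimal sketch (purely
+illustrative; the switch and host names are hypothetical, not the actual
+Pronto inventory) of how the 2x2 topology is wired: every leaf uplinks to
+both spines, while each server or eNB attaches to a single leaf.
+
+.. code-block:: python
+
+   # Hypothetical model of a 2x2 Leaf-Spine (without pairing) fabric.
+   SPINES = ["spine1", "spine2"]
+   LEAVES = ["leaf1", "leaf2"]
+
+   # Full mesh of fabric links between the leaf and spine layers.
+   fabric_links = [(leaf, spine) for leaf in LEAVES for spine in SPINES]
+
+   # Without pairing, every host is single-homed to exactly one leaf.
+   host_attachments = {"compute-1": "leaf1", "enb-1": "leaf2"}
+
+   def leaves_survive(failed_spine):
+       """True if every leaf keeps an uplink when one spine fails."""
+       return all(
+           any(spine != failed_spine
+               for leaf_, spine in fabric_links if leaf_ == leaf)
+           for leaf in LEAVES
+       )
+
+   # Spine redundancy holds, but a leaf failure isolates its attached hosts.
+   assert leaves_survive("spine1") and leaves_survive("spine2")
+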
+The Pronto deployment includes a production
cluster with multiple servers and a 2x2 leaf-spine fabric, along with a
secondary development cluster with its own servers and fabric switch, as
shown in this diagram:
@@ -58,14 +67,36 @@
<https://www.arubanetworks.com/products/switches/access/2540-series/>`_.
1x Management Server: `Supermicro 5019D-FTN4
-<https://www.supermicro.com/en/Aplus/system/Embedded/AS-5019D-FTN4.cfm>`_, configured with:
+<https://www.supermicro.com/en/Aplus/system/Embedded/AS-5019D-FTN4.cfm>`_,
+configured with:
* AMD Epyc 3251 CPU with 8 cores, 16 threads
* 32GB of DDR4 memory, in 2x 16GB ECC DIMMs
* 1TB of NVMe flash storage
* 4x 1GbE copper network ports
-
For Pronto, the primary reseller used by ONF and Stanford was `ASA (aka
"RackLive") <https://www.asacomputers.com/>`_ for servers and switches, with
radio equipment purchased directly from `Sercomm <https://www.sercomm.com>`_.
+
+
+Pronto BoM Table
+""""""""""""""""
+
+============ ===================== ===============================================
+Quantity Type Description/Use
+============ ===================== ===============================================
+5 P4 Fabric Switch 2x2 topology, plus 1 development switch
+1 Management Switch Must be Layer 2/3 capable
+1 Management Server 2x 40GbE QSFP ports recommended
+5 1U Compute Servers
+2 2U Compute Servers
+6 100GbE QSFP DAC cable Between Fabric switches
+14           40GbE QSFP DAC cable  Between Compute, Management, and Fabric Switch
+2            QSFP to 4x SFP+ DAC   Split cable between Fabric and eNB
+2            eNB
+2            10GbE to 1GbE Media   Required unless using a switch to convert from
+             converter             fabric to eNB
+2 PoE+ Injector Required unless using a PoE+ Switch
+Sufficient Cat6 Network Cabling Between all equipment, for management network
+============ ===================== ===============================================
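+
+Two of the rows above are conditional ("Required unless ..."). As a purely
+illustrative aid (not part of any Aether tooling; the option names and field
+layout are hypothetical), a site-specific shopping list could be derived from
+the table like this:
+
+.. code-block:: python
+
+   # Hypothetical BoM model; "unless" names the site option that removes the row.
+   # (Cat6 cabling is omitted because its quantity is site-dependent.)
+   BOM = [
+       {"qty": 5,  "item": "P4 Fabric Switch"},
+       {"qty": 1,  "item": "Management Switch"},
+       {"qty": 1,  "item": "Management Server"},
+       {"qty": 5,  "item": "1U Compute Server"},
+       {"qty": 2,  "item": "2U Compute Server"},
+       {"qty": 6,  "item": "100GbE QSFP DAC cable"},
+       {"qty": 14, "item": "40GbE QSFP DAC cable"},
+       {"qty": 2,  "item": "QSFP to 4x SFP+ DAC split cable"},
+       {"qty": 2,  "item": "eNB"},
+       {"qty": 2,  "item": "10GbE to 1GbE Media converter",
+        "unless": "switch_converts_to_1gbe"},
+       {"qty": 2,  "item": "PoE+ Injector", "unless": "poe_plus_switch"},
+   ]
+
+   def shopping_list(site_options):
+       """Drop conditional rows whose requirement the site already satisfies."""
+       return [row for row in BOM if row.get("unless") not in site_options]
+
+   # Example: a management switch with PoE+ removes the need for injectors.
+   for row in shopping_list({"poe_plus_switch"}):
+       print(f'{row["qty"]:>2}  {row["item"]}')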