Update QoS documentation
Change-Id: I2d928e8f9a1fd824a8781e6506ccb0276bb91833
diff --git a/advanced/p4-upf.rst b/advanced/p4-upf.rst
index 77ce957..325323b 100644
--- a/advanced/p4-upf.rst
+++ b/advanced/p4-upf.rst
@@ -40,9 +40,9 @@
servers. Then, when the device radio becomes ready to receive traffic,
packets are drained from the software buffers back to the switch to be
delivered to base stations.
-* QoS: support for enforcement of maximum bitrate (MBR), minimum guaranteed
- bitrate (GBR, via admission control), and prioritization using switch
- queues and scheduling policy.
+* QoS: support for enforcement of maximum bitrate (MBR) at the application,
+ session, and slice level; and prioritization using switch queues and
+ scheduling policy.
* Slicing: multiple logical UPFs can be instantiated on the same switch, each
one with its own QoS model and isolation guarantees enforced at the hardware
level using separate queues.
diff --git a/advanced/qos.rst b/advanced/qos.rst
index 713acf2..fcbb514 100644
--- a/advanced/qos.rst
+++ b/advanced/qos.rst
@@ -13,9 +13,9 @@
Network slicing enables sharing the same physical infrastructure between
independent logical networks, each one targeting different use cases while
-providing isolation and security guarantees. Slicing permits the implementation
-of tailor-made applications with Quality of Service (QoS) specific to the needs
-of each slice, rather than a one-size-fits-all approach.
+providing isolation guarantees. Slicing permits the implementation of
+tailor-made applications with Quality of Service (QoS) specific to the needs of
+each slice, rather than a one-size-fits-all approach.
SD-Fabric supports slicing and QoS using dedicated hardware resources such as
scheduling queues and meters. Once a packet enters the fabric, it is associated
@@ -33,8 +33,8 @@
Traffic Classes
^^^^^^^^^^^^^^^
-We supports the following traffic classes that covers the spectrum of
-applications from latency-sensitive to throughput-intensive.
+We support the following traffic classes to cover the spectrum of potential
+applications, from latency-sensitive to throughput-intensive.
Control
"""""""
@@ -54,9 +54,9 @@
dedicated Real-Time queue serviced in a Round-Robin fashion to guarantee the
lowest latency at all times even with bursty senders. To avoid starvation of
lower priority classes, Real-Time queues are shaped at a maximum rate. Slices
-sending at rates higher than the configured one might observe higher latency
-because of the shaping. Real-Time queues have priority lower than Control, but
-higher than Elastic.
+sending at rates higher than the configured maximum rate might observe higher
+latency because of the queue shaping enforced by the scheduler. Real-Time queues
+have priority lower than Control, but higher than Elastic.
Elastic
"""""""
@@ -80,226 +80,101 @@
Regular traffic
"""""""""""""""
-We provide an ACL-like APIs that supports specifying wildcard match rules on the
+We provide ACL-like APIs that support specifying wildcard match rules on the
IPv4 5-tuple.
P4-UPF traffic
""""""""""""""
-When using the embedded UPF function, for GTP-U mobile traffic terminated by the
-fabric, we support integration with PFCP QoS features such as prioritization via
-QoS Flow Identifier (QFI), Maximum Bitrate (MBR) limits, and Guaranteed Bitrate
-(GBR).
+For GTP-U traffic terminated by the embedded P4-UPF function, selection of a
+slice ID and TC is based on PFCP-Agent's configuration (``upf.json`` or Helm
+values). QoS classification uses the same table as GTP-U tunnel termination;
+for this reason, to achieve fabric-wide QoS enforcement, we recommend enabling
+the UPF function on each leaf switch using the distributed UPF mode, so that
+packets are classified as soon as they enter the fabric.
-You can configure a static one-to-one mapping between 3GPP’s QFIs and
-SD-Fabric’s TCs using the ONOS netcfg JSON file (work-in-progress), while MBR
-and GBR configuration are translated into meter configurations.
+The slice ID is specified using the ``p4rtciface.slice_id`` property in
+PFCP-Agent's ``upf.json``. All packets terminated by the P4-UPF function will be
+associated with the given Slice ID.
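+
+For illustration, a minimal ``upf.json`` fragment setting the slice ID might
+look like the following (the exact file structure is an assumption; refer to
+the PFCP-Agent documentation for the authoritative schema):
+
+.. code-block:: json
+
+   {
+     "p4rtciface": {
+       "slice_id": 1
+     }
+   }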
-QoS classification uses the same table for GTP-U tunnel termination, for this
-reason, to achieve fabric-wide QoS enforcement, we recommend enabling the UPF
-function on each leaf switch using the distributed UPF mode, such that packets
-are classified as soon as they enter the network.
-
-Support for slicing of mobile traffic is work-in-progress and will be added in
-the next SD-Fabric release.
+The TC value is instead derived from the 3GPP QoS Flow Identifier (QFI) and
+requires coordination with the mobile core control plane (e.g., SD-Core). When
+deploying PFCP-Agent, you can configure a static many-to-one mapping between
+3GPP’s QFIs and SD-Fabric’s TCs using the ``p4rtciface.qfi_tc_mapping`` property
+in ``upf.json``. That is, multiple QFIs can be mapped to the same TC. Then, it's
+up to the mobile core control plane to insert PFCP rules classifying traffic
+using the specific QFIs.
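+
+As an illustration, a many-to-one QFI-to-TC mapping in ``upf.json`` could be
+sketched as follows (the key/value format is an assumption; refer to the
+PFCP-Agent documentation for the authoritative schema):
+
+.. code-block:: json
+
+   {
+     "p4rtciface": {
+       "qfi_tc_mapping": {
+         "1": "REAL_TIME",
+         "2": "REAL_TIME",
+         "5": "ELASTIC"
+       }
+     }
+   }
+
+Here QFIs 1 and 2 share the Real-Time TC, while QFI 5 maps to Elastic.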
Configuration
-------------
-.. note:: QoS and slicing configuration is currently statically configured at switch startup.
- Dynamic configuration will be supported in a next SD-Fabric release.
+.. note:: Currently we only support static configuration at switch startup. To
+ add new slices or modify TC parameters, you will need to reboot the switch.
+ Dynamic configuration will be supported in future SD-Fabric releases.
-QoS and slicing uses switch queue configuration provided via the
-``vendor_config`` portion of the Stratum Chassis Config (see
-:ref:`stratum_chassis_config`), where the queues and schedulers can be
-configured. For more information on the format of ``vendor_config``, see the
-`guide for running Stratum on Tofino-based switches
+Stratum allows configuring switch queues and schedulers using the
+``vendor_config`` portion of the Chassis Config file (see
+:ref:`stratum_chassis_config`). For more information on the format of
+``vendor_config``, see the `guide for running Stratum on Tofino-based switches
<https://github.com/stratum/stratum/blob/main/stratum/hal/bin/barefoot/README.run.md>`_
in the Stratum repository.
-We provide a convenient `script <https://github.com/stratum/fabric-tna/blob/main/util/gen-qos-config.py>`_
-to generate the configuration starting from a higher-level description provided via a YAML file.
-This file allows to configure the parameters for the traffic classes listed in the above section.
+The ONOS apps are responsible for inserting switch rules that map packets into
+different queues. For this reason, apps need to be aware of how queues are
+mapped to the different slices and TCs.
-Here's a list of parameters that you can configure via the YAML QoS configuration file:
+We provide a convenient `script
+<https://github.com/stratum/fabric-tna/blob/main/util/gen-qos-config.py>`_ to
+generate both the Stratum and ONOS configuration starting from a high-level
+description provided via a YAML file. This file allows you to define slices
+and configure TC parameters.
-* ``max_cells``: Maximum number of buffer cells, depends on the ASIC SKU/revision.
+An example of such a YAML file can be found `here <https://github.com/stratum/fabric-tna/blob/main/util/sample-qos-config.yaml>`_.
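+
+As a rough sketch, such a file defines buffer pool allocations, per-TC
+parameters, and per-port queue settings (values below are illustrative, drawn
+from the sample config; ``max_cells`` is omitted since it depends on the ASIC
+SKU/revision):
+
+.. code-block:: yaml
+
+   pool_allocations:
+     control: 1
+     realtime: 9
+     elastic: 80
+     besteffort: 9
+     unassigned: 1
+   control_slot_count: 50
+   control_slot_rate_pps: 100
+   control_slot_burst_pkts: 10
+   control_mtu_bytes: 1500
+   realtime_max_rates_bps:
+     - 45000000 # 45 Mbps
+   realtime_max_burst_s: 0.005 # 5 ms
+   elastic_min_rates_bps:
+     - 100000000 # 100 Mbps
+   port_templates:
+     - descr: "Base station"
+       rate_bps: 1000000000 # 1 Gbps
+       is_shaping_enabled: true
+       shaping_burst_bytes: 18000 # 2x jumbo frames
+       queue_count: 16
+       port_ids:
+         - 100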
-* ``pool_allocations``: Percentage of buffer cells allocated to each traffic class.
- The sum should be 100. Usually, we leave a portion of the buffer ``unassigned``
- for queues that do not have a pool (yet).
- Example of such queues are those for the recirculation port, CPU port, etc.
+To generate the Stratum config:
- .. code-block:: yaml
+.. code-block:: console
- pool_allocations:
- control: 1
- realtime: 9
- elastic: 80
- besteffort: 9
- unassigned: 1
+ $ ./gen-qos-config.py -t stratum sample-qos-config.yaml
-* **Control** Traffic Class: The available bandwidth dedicated to Control traffic is divided in *slots*.
- Each slot has a maximum rate and burst (in packets of the given MTU).
- A slice can use one or more slots by appropriately configuring meters in the fabric ingress pipeline.
+The script will output a ``vendor_config`` section which is meant to be appended
+to an existing Chassis Config file.
- * ``control_slot_count``: Number of slots.
- * ``control_slot_rate_pps``: Packet per second rate of each slot.
- * ``control_slot_burst_pkts``: Number of packets per burst of each slot.
- * ``control_mtu_bytes``: MTU of packets for the PPS and burst values.
+To generate the ONOS config:
- .. code-block:: yaml
+.. code-block:: console
- control_slot_count: 50
- control_slot_rate_pps: 100
- control_slot_burst_pkts: 10
- control_mtu_bytes: 1500
+ $ ./gen-qos-config.py -t onos sample-qos-config.yaml
-* **Real-Time** Traffic Class Configuration:
-
- * ``realtime_max_rates_bps``: List of maximum shaping rates for Real-Time queues,
- one per slice requesting such service.
-
- * ``realtime_max_burst_s``: Maximum amount of time that a Real-Time queue can
- burst at the port speed. This parameter is used to limit delay for Elastic
- queues.
-
- .. code-block:: yaml
-
- realtime_max_rates_bps:
- - 45000000 # 45 Mbps
- - 30000000 # 30 Mbps
- - 25000000 # 25 Mbps
- realtime_max_burst_s: 0.005 # 5 ms
-
-* **Elastic** Traffic Class Configuration:
-
- * ``elastic_min_rates_bps``: List of minimum guaranteed rates for Elastic queues,
- one per slice requesting such service.
-
- .. code-block:: yaml
-
- elastic_min_rates_bps:
- - 100000000 # 100 Mbps
- - 200000000 # 200 Mbps
-
-* ``port_templates`` section: List of switch port for which we want to configure
- queues.
-
- Every ``port_templates`` element contains:
-
- * ``descr``: Description of the port purpose.
-
- * ``rate_bps``: Port speed in bit per second.
-
- * ``is_shaping_enabled``: ``true`` if the rate is enforced using shaping,
- ``false`` if the rate is the channel speed.
-
- * ``shaping_burst_bytes``: Burst size in bytes, meaningful only if port speed
- is shaped (when ``is_shaping_enabled: true``).
-
- * ``queue_count``: Number of queues assigned to the port.
-
- * ``port_ids``: List of Stratum port IDs (:ref:`singleton_port` from Stratum Chassis Config),
- using this port template. Used for port that corresponds to switch front-panel ports.
-
- Mutually exclusive with ``sdk_port_ids`` field.
-
- * ``sdk_port_ids``: List of SDK port numbers (i.e., Tofino ``DP_ID``) using this port template.
- Used for internal ports (e.g., recirculation ports).
-
- Mutually exclusive with ``port_ids`` field.
-
- .. code-block:: yaml
-
- port_templates:
- - descr: "Base station"
- rate_bps: 1000000000 # 1 Gbps
- is_shaping_enabled: true
- shaping_burst_bytes: 18000 # 2x jumbo frames
- queue_count: 16
- port_ids:
- - 100
- - descr: "Servers"
- port_ids:
- - 200
- rate_bps: 40000000000 # 40 Gbps
- is_shaping_enabled: false
- queue_count: 16
- - descr: "Recirculation"
- sdk_port_ids:
- - 68
- rate_bps: 100000000000 # 100 Gbps
- is_shaping_enabled: false
- queue_count: 16
-
-An example of a complete QoS and Slicing configuration can be found `here <https://github.com/stratum/fabric-tna/blob/main/util/sample-qos-config.yaml>`_.
+The script will output a JSON snippet representing a complete ONOS netcfg file
+with just the ``slicing`` portion of the ``fabric-tna`` app config. You will
+have to manually integrate this into the existing ONOS netcfg used for
+deployment.
REST API
--------
-We provide REST APIs with support for adding/removing/querying slices and
-traffic classes, as well as flow classification.
+Adding and removing slices in ONOS can be performed only via netcfg. We provide
+REST APIs to:
+
+- Get information on slices and TCs currently in the system
+- Add/remove classifier rules
-Slice
-^^^^^
+For the up-to-date documentation and example API calls, please refer to the
+auto-generated documentation on a live ONOS instance at the URL
+``http://<ONOS-host>:<ONOS-port>/onos/v1/docs``.
-Add a slice
-"""""""""""
-A POST request with Slice ID as path parameter.
-``/slicing/slice/{sliceId}``
+Make sure to select the Fabric-TNA REST API view:
-.. image:: ../images/qos-rest-slice-add.png
+.. image:: ../images/fabric-tna-rest-api-select.png
:width: 700px
-Remove a slice
-"""""""""""""""
-A DELETE request with Slice ID as path parameter.
-``/slicing/slice/{sliceId}``
+Classifier Flows
+^^^^^^^^^^^^^^^^
-.. image:: ../images/qos-rest-slice-remove.png
- :width: 700px
+We provide REST APIs to add/remove classifier flows. A classifier flow is used
+to instruct switches on how to associate packets with slices and TCs. It is
+based on an abstraction similar to an ACL table, describing rules matching on
+the IPv4 5-tuple.
-Get all slices
-""""""""""""""
-A GET request.
-Returns a collection of slice id.
-/slicing/slice
-
-.. image:: ../images/qos-rest-slice-get.png
- :width: 700px
-
-Traffic Class
-^^^^^^^^^^^^^
-.. tip::
- Traffic Class has following attributes: ``BEST_EFFORT``, ``CONTROL``, ``REAL_TIME``, ``ELASTIC``.
-
-Add a traffic class to a slice
-""""""""""""""""""""""""""""""
-A POST request with Slice ID and Traffic Class as path parameters.
-``/slicing/tc/{sliceId}/{tc}``
-
-.. image:: ../images/qos-rest-tc-add.png
- :width: 700px
-
-Remove a traffic class from a slice
-"""""""""""""""""""""""""""""""""""
-A DELETE request with Slice ID and Traffic Class as path parameters.
-``/slicing/tc/{sliceId}/{tc}``
-
-.. image:: ../images/qos-rest-tc-remove.png
- :width: 700px
-
-Get all traffic classes from a slice
-""""""""""""""""""""""""""""""""""""
-A GET request with Slice ID as path parameters.
-Returns a collection of traffic class.
-``/slicing/tc/{sliceId}``
-
-.. image:: ../images/qos-rest-tc-get.png
- :width: 700px
-
-Classify Flow
-^^^^^^^^^^^^^
-
-A flow can be defined as
+Here's an example classifier flow in JSON format to be used in REST API calls.
+For the actual API methods, please refer to the live ONOS documentation.
.. code-block:: json
@@ -335,47 +210,3 @@
}
]
}
-
-- ``IPV4_SRC``: Source IPv4 prefix
-
-- ``IPV4_DST``: Destination IPv4 prefix
-
-- ``IP_PROTO``: IP Protocol, accept 6 (TCP) and 17 (UDP)
-
-- ``TCP_SRC``: Source L4 (TCP) port
-
-- ``TCP_DST``: Destination L4 (TCP) port
-
-- ``UDP_SRC``: Source L4 (UDP) port
-
-- ``UDP_DST``: Destination L4 (UDP) port
-
-.. note::
- SD-Fabric currently supports 5-tuple only.
-
-Classify a flow to a slice and traffic class
-""""""""""""""""""""""""""""""""""""""""""""
-A POST request with Slice ID and Traffic Class as path parameters.
-And a Json of a flow as body parameters.
-``/slicing/flow/{sliceId}/{tc}``
-
-.. image:: ../images/qos-rest-classifier-add.png
- :width: 700px
-
-Remove a flow from a slice and traffic class
-""""""""""""""""""""""""""""""""""""""""""""
-A DELETE request with Slice ID and Traffic Class as path parameters.
-And a Json of a flow as body parameters.
-``/slicing/flow/{sliceId}/{tc}``
-
-.. image:: ../images/qos-rest-classifier-remove.png
- :width: 700px
-
-Get all classified flows from a slice and traffic class
-"""""""""""""""""""""""""""""""""""""""""""""""""""""""
-A GET request with Slice ID and Traffic Class as path parameters.
-Returns a collection of flow.
-``/slicing/flow/{sliceId}``
-
-.. image:: ../images/qos-rest-classifier-get.png
- :width: 700px
diff --git a/configuration/network.rst b/configuration/network.rst
index 6f6d0d0..44f4600 100644
--- a/configuration/network.rst
+++ b/configuration/network.rst
@@ -113,7 +113,7 @@
---------------------------------
Before describing the ONOS netcfg, it is worth noting how we refer to ports for
-Tofino-based devices. Netcfg uses the format ``device:<name>/<port-number>``.
+Tofino-based devices. netcfg uses the format ``device:<name>/<port-number>``.
``<port-number>`` is a unique, arbitrary value that should be consistent
with the ``id`` field defined in Stratum chassis config.
diff --git a/dict.txt b/dict.txt
index 191c12e..f583217 100644
--- a/dict.txt
+++ b/dict.txt
@@ -135,6 +135,7 @@
unconfigured
unicast
untagged
+upf
uplink
vRouter
verifiability
diff --git a/images/fabric-tna-rest-api-select.png b/images/fabric-tna-rest-api-select.png
new file mode 100644
index 0000000..00ddd30
--- /dev/null
+++ b/images/fabric-tna-rest-api-select.png
Binary files differ
diff --git a/images/qos-rest-classifier-add.png b/images/qos-rest-classifier-add.png
deleted file mode 100644
index 41e7b36..0000000
--- a/images/qos-rest-classifier-add.png
+++ /dev/null
Binary files differ
diff --git a/images/qos-rest-classifier-get.png b/images/qos-rest-classifier-get.png
deleted file mode 100644
index 5554dc0..0000000
--- a/images/qos-rest-classifier-get.png
+++ /dev/null
Binary files differ
diff --git a/images/qos-rest-classifier-remove.png b/images/qos-rest-classifier-remove.png
deleted file mode 100644
index ee808c8..0000000
--- a/images/qos-rest-classifier-remove.png
+++ /dev/null
Binary files differ
diff --git a/images/qos-rest-slice-add.png b/images/qos-rest-slice-add.png
deleted file mode 100644
index 288d19e..0000000
--- a/images/qos-rest-slice-add.png
+++ /dev/null
Binary files differ
diff --git a/images/qos-rest-slice-get.png b/images/qos-rest-slice-get.png
deleted file mode 100644
index d9ca228..0000000
--- a/images/qos-rest-slice-get.png
+++ /dev/null
Binary files differ
diff --git a/images/qos-rest-slice-remove.png b/images/qos-rest-slice-remove.png
deleted file mode 100644
index 684c0ec..0000000
--- a/images/qos-rest-slice-remove.png
+++ /dev/null
Binary files differ
diff --git a/images/qos-rest-tc-add.png b/images/qos-rest-tc-add.png
deleted file mode 100644
index bf6511f..0000000
--- a/images/qos-rest-tc-add.png
+++ /dev/null
Binary files differ
diff --git a/images/qos-rest-tc-get.png b/images/qos-rest-tc-get.png
deleted file mode 100644
index 8f9e003..0000000
--- a/images/qos-rest-tc-get.png
+++ /dev/null
Binary files differ
diff --git a/images/qos-rest-tc-remove.png b/images/qos-rest-tc-remove.png
deleted file mode 100644
index 7082478..0000000
--- a/images/qos-rest-tc-remove.png
+++ /dev/null
Binary files differ