Merge "update epc section"
diff --git a/docs/installation_guide.md b/docs/installation_guide.md
index 7523866..a28d4ac 100644
--- a/docs/installation_guide.md
+++ b/docs/installation_guide.md
@@ -4,38 +4,21 @@
## Hardware Requirements
-M-CORD by default uses the NG40 software emulator including the RAN, an MME,
-and a traffic generator. For this reason, it does not require any additional
-hardware, other than the ones listed for a “traditional” [CORD
+M-CORD 5.0 requires the hardware listed for a “traditional” [CORD
POD](/install_physical.md#bill-of-materials-bom--hardware-requirements).
+In addition, it uses UE and standalone eNodeB hardware by default.
-> Warning: The NG40 vTester requires a compute node with Intel XEON CPU with
-> [Westmere microarchitecture or
+> Warning: The NGIC requires a compute node with an Intel Xeon CPU of
+> [Haswell microarchitecture or
> better](https://en.wikipedia.org/wiki/List_of_Intel_CPU_microarchitectures).
-
-## NG40 vTester M-CORD License
-
-As mentioned above, ng4T provides a limited version of its NG40 software, which
-requires a free license in order to work. The specific NG40 installation steps
-described in the next paragraph assume the Operator has obtained the license
-and saved it into a specific location on the development server.
-
-In order to download a free M-CORD trial license, go to the [NG40 M-CORD
-website](https://mcord.ng40.com/) and register. You will be asked for your
-company name and your company email. After successful authentication of your
-email and acknowledgment of the free M-CORD NG40 License, you can download the
-license file called `ng40-license`.
+> Note that 4.1 recommends at least the `Westmere` microarchitecture, but 5.0
+> highly recommends at least the `Haswell` microarchitecture.
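As a rough sanity check on a candidate compute node, you can look for the `avx2` CPU flag, which first shipped with Haswell. This is a heuristic sketch, not an authoritative microarchitecture test; the `flags` value below is hard-coded for illustration and would come from `/proc/cpuinfo` on a real node:

```shell
# Heuristic: AVX2 was introduced with Haswell, so its presence in the CPU
# flags suggests the node meets the 5.0 recommendation.
flags="fpu vme avx avx2 sse4_2"   # on a real node: grep -m1 '^flags' /proc/cpuinfo
case " $flags " in
  *" avx2 "*) verdict="Haswell or newer (AVX2 present)";;
  *)          verdict="pre-Haswell (AVX2 missing)";;
esac
echo "$verdict"
```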
## M-CORD POD Installation
To install the local node you should follow the steps described in the main
[physical POD installation](/install_physical.md).
-As soon as you have the CORD repository on your development machine, transfer
-the downloaded M-CORD NG40 license file, to:
-
-`$CORD_ROOT/orchestration/xos_services/venb/xos/synchronizer/files/ng40-license`
-
When it’s time to write your pod configuration, use the [physical-example.yml
file as a
template](https://github.com/opencord/cord/blob/master/podconfig/physical-example.yml).
@@ -44,7 +27,7 @@
As cord_scenario, use `cord`.
-As cord_profile, use `mcord-ng40`.
+As cord_profile, use `mcord-cavium`.
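In practice the profile selection boils down to two lines in your pod config. The fragment below is a sketch generated for illustration; the field names follow the physical-example.yml template, and every other field stays as it is in that template:

```shell
# Emit the two podconfig lines this guide prescribes for M-CORD 5.0.
podconfig_fragment="$(printf 'cord_scenario: %s\ncord_profile: %s\n' cord mcord-cavium)"
echo "$podconfig_fragment"
```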
> Warning: After you’ve finished the basic installation, configure the fabric
> and your computes, as described [here](/appendix_basic_config.md).
@@ -52,7 +35,7 @@
## Create an EPC instance
The EPC is the only component that needs to be manually configured, after the
-fabric gets properly setup. An EPC instance can be created in two ways.
+fabric is properly set up. An EPC instance can be created through the XOS-UI.
### Create an EPC instance using the XOS-UI
@@ -62,17 +45,10 @@
* From the left panel, select `vEPC`
* Click on `Virtual Evolved Packet Core ServiceInstances`
* On the top right, press `Add`
-* Set `blueprint` to `MCORD 4.1`
-* Set `Owner id` to `vepc`
-* Set `Site id` to `MySite`
+* Set `blueprint` to `MCORD 5.0`
+* ......
* Press `Save`
-### Create an EPC instance using the XOS northbound API
-
-```shell
-curl -u xosadmin@opencord.org:<password> -X POST http://<ip address of pod>/xosapi/v1/vepc/vepcserviceinstances -H "Content-Type: application/json" -d '{"blueprint":"build", "site_id": 1}'
-```
-
## Verify a Successful Installation
To verify if the installation was successful, ssh into the head node and follow
@@ -81,10 +57,13 @@
Verify that the service synchronizers are running. Use the `docker ps` command
-on head node, you should be able to see the following M-CORD synchronizers:
+on the head node; you should see the following M-CORD synchronizers:
-* mcordng40_venb-synchronizer_1
-* mcordng40_vspgwc-synchronizer_1
-* mcordng40_vspgwu-synchronizer_1
-* mcordng40_vepc-synchronizer_1
+* mcordcavium_vspgwc-synchronizer_1
+* mcordcavium_vspgwu-synchronizer_1
+* mcordcavium_hssdb-synchronizer_1
+* mcordcavium_vhss-synchronizer_1
+* mcordcavium_vmme-synchronizer_1
+* mcordcavium_internetemulator-synchronizer_1
+* mcordcavium_vepc-synchronizer_1
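A quick way to verify the list above is to compare the running container names against the expected set. The `running` value below is a simulated `docker ps` result for illustration; on the head node you would use `docker ps --format '{{.Names}}'` instead:

```shell
# Simulated container list; replace with: running="$(docker ps --format '{{.Names}}')"
running="mcordcavium_vspgwc-synchronizer_1
mcordcavium_vspgwu-synchronizer_1
mcordcavium_hssdb-synchronizer_1
mcordcavium_vhss-synchronizer_1
mcordcavium_vmme-synchronizer_1
mcordcavium_internetemulator-synchronizer_1
mcordcavium_vepc-synchronizer_1"
missing=0
for s in vspgwc vspgwu hssdb vhss vmme internetemulator vepc; do
  if echo "$running" | grep -q "mcordcavium_${s}-synchronizer"; then
    echo "OK      $s"
  else
    echo "MISSING $s"; missing=$((missing+1))
  fi
done
echo "missing: $missing"
```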
Check that the ServiceInstances are running on the head node. Run these
commands on the head node:
@@ -97,13 +76,17 @@
-You should see three VMs like this:
+You should see six VMs like this:
```shell
-+--------------------------------------+-----------------+--------+------------+-------------+---------------------------------------------------------------------------------------------+
-| ID | Name | Status | Task State | Power State | Networks |
-+--------------------------------------+-----------------+--------+------------+-------------+---------------------------------------------------------------------------------------------+
-| 9d5d1d45-2ad8-48d5-84c5-b1dd02458f81 | mysite_venb-3 | ACTIVE | - | Running | s1u_network=111.0.0.2; sgi_network=115.0.0.2; management=172.27.0.4; s11_network=112.0.0.3 |
-| 797ec8e8-d456-405f-a656-194f5748902a | mysite_vspgwc-2 | ACTIVE | - | Running | management=172.27.0.2; spgw_network=117.0.0.2; s11_network=112.0.0.2 |
-| 050393a3-ec25-4548-b02a-2d7b02b16943 | mysite_vspgwu-1 | ACTIVE | - | Running | s1u_network=111.0.0.3; sgi_network=115.0.0.3; management=172.27.0.3; spgw_network=117.0.0.3 |
-+--------------------------------------+-----------------+--------+------------+-------------+---------------------------------------------------------------------------------------------+
++--------------------------------------+---------------------------+---------+------------+-------------+-------------------------------------------------------------------------------------------------------------------------------------+
+| ID | Name | Status | Task State | Power State | Networks |
++--------------------------------------+---------------------------+---------+------------+-------------+-------------------------------------------------------------------------------------------------------------------------------------+
+| c9dc354f-e1b4-4900-9ccb-3d77dd4bdf45 | mysite_hssdb-12 | ACTIVE | - | Running | management=172.27.0.9; db_network=121.0.0.2 |
+| 82200b77-37d6-4fea-b98e-89f74646a90d | mysite_internetemulator-7 | ACTIVE | - | Running | sgi_network=115.0.0.8; management=172.27.0.8 |
+| 6b92f216-89b2-4e7e-8b49-277f1e92e448 | mysite_vhss-13 | ACTIVE | - | Running | s6a_network=120.0.0.4; management=172.27.0.16; db_network=121.0.0.5 |
+| 1710892c-d5c5-4d65-9d7e-5b1aae435831 | mysite_vmme-15 | ACTIVE | - | Running | s6a_network=120.0.0.5; flat_network_s1mme=118.0.0.3; management=172.27.0.17; s11_network=112.0.0.5; flat_network_s1mme_p4=122.0.0.2 |
+| 2e2c21d9-1e66-4425-b0ef-da90a204b7d5 | mysite_vspgwc-8 | ACTIVE | - | Running | management=172.27.0.10; spgw_network=117.0.0.2; s11_network=112.0.0.2 |
+| 70ebeb85-13c9-474d-97d0-ec40ad621281 | mysite_vspgwu-10 | ACTIVE | - | Running | management=172.27.0.12; sgi_network=115.0.0.9; spgw_network=117.0.0.3; flat_network_s1u=119.0.0.2 |
++--------------------------------------+---------------------------+---------+------------+-------------+-------------------------------------------------------------------------------------------------------------------------------------+
+
```
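If you prefer a compact view, the instance name and status can be pulled out of each `nova list` row with a small filter. The row below is a sample taken from the table above; on the head node you would pipe the real `nova list` output through the same `awk`:

```shell
# Extract the Name and Status columns from a 'nova list' table row.
row='| 2e2c21d9-1e66-4425-b0ef-da90a204b7d5 | mysite_vspgwc-8 | ACTIVE | - | Running | management=172.27.0.10 |'
name_status="$(echo "$row" | awk -F'|' '{gsub(/ /,"",$3); gsub(/ /,"",$4); print $3, $4}')"
echo "$name_status"
```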
> Note: It may take a few minutes to provision the instances. If you don’t see
@@ -114,160 +97,12 @@
* Log into the XOS GUI on the head node
* In left panel, click on each item listed below and verify that there is a
check mark sign under “backend status”
- * Vspgwc, then Virtual Serving PDN Gateway -- Control Plane Service
- Instances
- * Vspgwu, then Virtual Serving Gateway User Plane Service Instances
- * Venb, then Virtual eNodeB Service Instances
+ * Vspgwc, then Virtual Serving PDN Gateway -- Control Plane Service Instances
+ * Vspgwu, then Virtual Serving Gateway User Plane Service Instances
+ * Hssdb, then HSS Database ServiceInstances
+ * Vhss, then Virtual Home Subscriber Server Tenants
+ * Vmme, then Virtual Mobility Management Entity Service Instances
+ * Internetemulator, then Internet Emulator Service Instances
> NOTE: It may take a few minutes to run the synchronizers. If you don’t see a
> check mark immediately, try again after some time.
-
-## Using the NG40 vTester Software
-
-You’re now ready to generate some traffic and test M-CORD. To proceed, do the
-following.
-
-* SSH into the NG40 VNF VM with username and password ng40/ng40. To understand
- how to access your VNFs look
- [here](troubleshooting.md#how-to-log-into-a-vnf-vm).
-* Run `~/verify_quick.sh`
-
-You should see the following output:
-
-```shell
-**** Load Profile Settings ****
-$pps = 1000
-$bps = 500000000
-……
-Signaling Table
- AttUE ActCTXT BrReq BrAct RelRq RelAc ActS1
-ng40_ran_1 1 1 1 1 0 0 1
-User Plane Downlink Table
- AS_PktTx AS_EthTx S1uEthRx S1uPktRx
-ng40_ran_1 132 132 144 133
-User Plane Uplink Table
- S1uPktTx S1uEthTx AS_EthRx AS_PktRx
-ng40_ran_1 49 50 48 48
-Watchtime: 9 Timeout: 1800
-```
-
-Both the downlink and uplink should show packets counters increasing. The
-downlink shows packets flowing from the AS (Application Server) to the UE. The
-uplink shows packets going from the UE to the AS.
-
-The result for all commands should look like:
-
-```shell
-Verdict(tc_attach_www) = VERDICT_PASS
-**** Packet Loss ****
-DL Loss= AS_PktTx-S1uPktRx= 0(pkts); 0.00(%)
-UL Loss= S1uPktTx-AS_PktRx= 0(pkts); 0.00(%)
-```
-
-The verdict is configured to check the control plane only.
-
-For the user plane verification you see the absolute number and percentage of
-lost packets.
-
-There are multiple test commands available (You can get the parameter
-description with the flags -h or -?):
-
-* **verify_attach.sh**
-
-```shell
-Usage: ./verify_attach.sh [<UEs> [<rate>]]
- UEs: Number of UEs 1..10, default 1
- rate: Attach rate 1..10, default 1
-```
-
-Send only attach and detach.
-
-Used to verify basic control plane functionality.
-
-* **verify_attach_data.sh**
-
-```shell
-Usage: ./verify_attach_data.sh [<UEs> [<rate>]]
- UEs: Number of UEs 1..10, default 1
- rate: Attach rate 1..10, default 1
-```
-
-Send attach, detach and a few user plane packets.
-
-Used to verify basic user plane functionality.
-
-Downlink traffic will be sent without waiting for uplink traffic to arrive at
-the Application Server.
-
-* **verify_quick.sh**
-
-```shell
-Usage: ./verify_quick.sh [<UEs> [<rate> [<pps>]]]
- UEs: Number of UEs 1..10, default 2
- rate: Attach rate 1..10, default 1
- pps: Packets per Second 1000..60000, default 1000
-```
-
-Send attach, detach and 1000 pps user plane. Userplane ramp up and down ~20
-seconds and total userplane transfer time ~70 seconds.
-
-500.000.000 bps (maximum bit rate to calculate packet size from pps setting,
-MTU 1450).
-
-Used for control plane and userplane verification with low load for a short
-time.
-
-Downlink traffic will only be send when uplink traffic arrives at the
-Application Server.
-
-* **verify_short.sh**
-
-```shell
-Usage: ./verify_short.sh [<UEs> [<rate> [<pps>]]]
- UEs: Number of UEs 1..10, default 10
- rate: Attach rate 1..10, default 5
- pps: Packets per Second 1000..60000, default 60000
-```
-
-Send attach, detach and 60000 pps user plane. Userplane ramp up and down ~20
-seconds and total userplane transfer time ~70 seconds.
-
-500.000.000 bps (maximum bit rate to calculate packet size from pps setting,
-MTU 1450). Used for control plane and userplane verification with medium load
-for a short time. Downlink traffic will only be send when uplink traffic
- arrives at the Application Server.
-
-* **verify_long.sh**
-
-```shell
-Usage: ./verify_long.sh [<UEs> [<rate> [<pps>]]]
- UEs: Number of UEs 1..10, default 10
- rate: Attach rate 1..10, default 5
- pps: Packets per Second 1000..60000, default 60000
-```
-
-Send attach, detach and 60000 pps user plane. Userplane ramp up and down ~200
-seconds and total userplane transfer time ~700 seconds.
-
-500.000.000 bps (maximum bit rate to calculate packet size from pps setting,
-MTU 1450).
-
-Used for control plane and userplane verification with medium load for a longer
-time.
-
-Downlink traffic will only be send when uplink traffic arrives at the
-Application Server.
-
-### Request / Update the NG40 vTester Software License
-
-If you forgot to request an NG40 license at the beginning of the installation,
-or if you would like to extend it, you can input your updated license once the
-NG40 VM is up, following the steps below:
-
-SSH into the NG40 VNF VM with username and password ng40/ng40. To understand
-how to access your VNFs, look
-[here](/troubleshooting.md#how-to-log-into-a-vnf-vm).
-
-Add the license file with named `ng40-license` to the folder: `~/install/`
-Run the command `~/install/ng40init`.
-
diff --git a/docs/troubleshooting.md b/docs/troubleshooting.md
index 9a0c583..48ba576 100644
--- a/docs/troubleshooting.md
+++ b/docs/troubleshooting.md
@@ -10,11 +10,6 @@
-CORD troubleshooting guide, at in the main [troubleshooting
+CORD troubleshooting guide, in the main [troubleshooting
section](/troubleshooting.md).
-The troubleshooting guide often mentions interfaces, IPs and networks. For
-reference, you can use the diagram below.
-
-[M-CORD VMs wiring diagram](static/images/vms_diagram.png)
-
## See the status and the IP addresses of your VNFs
You may often need to check the status of your M-CORD VNFs, or access them to
@@ -42,34 +37,6 @@
nova interface-list ID_YOU_FOUND_ABOVE
```
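To pull just the IP addresses out of a `nova interface-list` row, a simple pattern match is usually enough. The row below is a made-up sample for illustration; on the head node you would pipe the real command's output through the same filter:

```shell
# Match the first IPv4 address in an interface-list row (sample row is hypothetical).
row='| ACTIVE | 9d5d1d45 | mgmt-net | 172.27.0.10 | fa:16:3e:aa:bb:cc |'
ip="$(echo "$row" | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}')"
echo "$ip"
```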
-## How to log into a VNF VM
-
-Sometimes, you may need to access a VNF (VM) running on one of the compute
-nodes. To access a VNF, do the following.
-
-* SSH into the head node
-* From the head node, SSH into the compute node running the VNF. Use the
- following commands:
-
-```shell
-ssh-agent bash
-ssh-add
-ssh -A ubuntu@COMPUTE_NODE_NAME
-```
-
-> NOTE: You can get your compute node name typing cord prov list on the head
-> node
-
-* SSH into the VM from compute node
-
-```shell
-ssh ubuntu@IP_OF_YOUR_VNF_VM
-```
-
-> NOTE: To know the IP of your VNF, refer to [this
-> paragraph](troubleshooting.md#see-the-status-and-the-ip-addresses-of-your-vnfs).
-> The IP you need is the one reported under the `management network`.
-
## View an interface name, inside a VNF
In the troubleshooting steps below you’ll be often asked to provide a specific
@@ -85,7 +52,7 @@
## Understand the M-CORD Synchronizers logs
-As the word says, synchronizers are XOS components responsible to synchronize
+Synchronizers are the XOS components responsible for synchronizing
the VNFs status with the configuration input by the Operator. More informations
about what synchronizers are and how they work, can be found
[here](/xos/dev/synchronizers.md).
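When scanning a synchronizer log for failures, filtering for error lines is a reasonable first step. The log lines below are invented for illustration; on the head node you would pipe the output of `docker logs <synchronizer-container>` (using a container name from the `docker ps` list) through the same filter:

```shell
# Simulated synchronizer log; the line format here is hypothetical.
log='INFO  step vepc run started
ERROR step failed: backend unreachable
INFO  retrying in 10s'
errors="$(echo "$log" | grep '^ERROR')"
echo "$errors"
```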
@@ -135,409 +102,3 @@
* **Any other issue?** Please report it to us at <cord-dev@opencord.org>. We
will try to fix it as soon as possible.
-
-## Check the SPGW-C and the SPGW-U Status
-
-Sometimes, you may want to double check if your SPGW-C and SPGW-U components
-status. To do that, follow the steps below.
-
-### SPGW-C
-
-* SSH into `SPGW-C` VNF with credentials `ngic/ngic` (to access the VNF follow
- [this guidelines](troubleshooting.md#how-to-log-into-a-vnf-vm))
-* Become the root user (the system will pause for a few seconds)
-
- ```shell
- sudo bash
- ```
-
-* Go to `/root/ngic/cp`
-
- * If the file `finish_flag_interface_config` exists, the `SPGW-C` has been
- successfully configured
- * It the file `finish_flag_build_and_run` exists, the `SPGW-C` build
- process has successfully completed
-
-* Open the `results` file. It’s the `SPGW-C` build log file. If the file has
- similar contents to the one shown below, it means the SPGW-C has been
- successfully started.
-
- ```shell
- rx tx rx pkts tx pkts create modify b resrc create delete delete rel acc ddn
- time pkts pkts /sec /sec session bearer cmd bearer bearer session echo bearer ddn ack
- 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
- 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
- ```
-
-### SPGW-U
-
-* SSH into `SPGW-U` VNF with credentials `ngic/ngic` (to access the VNF follow
- [these guidelines](troubleshooting.md#how-to-log-into-a-vnf-vm))
-* Go to `/root/ngic/dp`
- * If the file `finish_flag_interface_config` exists, the `SPGW-U` has been
- successfully configured
- * It the file `finish_flag_build_and_run` exists, the `SPGW-U` build
- process completed successfully
-* Open the `results` file. It’s the `SPGW-U` build log file. If the file has
- similar contents to the one shown below, it means the `SPGW-U` has been
- successfully started.
-
- ```shell
- DP: RTE NOTICE enabled on lcore 1
- DP: RTE INFO enabled on lcore 1
- DP: RTE NOTICE enabled on lcore 0
- DP: RTE INFO enabled on lcore 0
- DP: RTE NOTICE enabled on lcore 3
- DP: RTE INFO enabled on lcore 3
- DP: RTE NOTICE enabled on lcore 5
- DP: RTE INFO enabled on lcore 5
- DP: RTE NOTICE enabled on lcore 4
- DP: RTE INFO enabled on lcore 4
- API: RTE NOTICE enabled on lcore 4
- API: RTE INFO enabled on lcore 4
- DP: RTE NOTICE enabled on lcore 6
- DP: RTE INFO enabled on lcore 6
- DP: RTE NOTICE enabled on lcore 2
- DP: RTE INFO enabled on lcore 2
- DP: MTR_PROFILE ADD: index 1 cir:64000, cbd:3072, ebs:3072
- Logging CDR Records to ./cdr/20171122165107.cur
- DP: ACL DEL:SDF rule_id:99999
- DP: MTR_PROFILE ADD: index 2 cir:125000, cbd:3072, ebs:3072
- DP: MTR_PROFILE ADD: index 3 cir:250000, cbd:3072, ebs:3072
- DP: MTR_PROFILE ADD: index 4 cir:500000, cbd:3072, ebs:3072
- ```
-
-> NOTE: if the last three lines don’t show up, you should re-build the `SPGW-C`
-> and the `SPGW-U`. See
-> [this](troubleshooting.md#configure-build-and-run-the-spgw-c-and-the-spgw-u)
-> for more specific instructions.
-
-## Configure, build and run the SPGW-C and the SPGW-U
-
-In most cases, the `SPGW-C` and the `SPGW-U` are automatically configured,
-built, and started, during the installation process. However, unexpected errors
-may occur, or you may simply want to re-configure these components. To do that,
-follow the steps below.
-
-> WARNING: Make sure you follow the steps in the order described. The SPGW-U
-> should be built and configured before the SPGW-C
-
-### Configuring the SPGW-U
-
-* SSH into `SPGW-U` VNF with credentials `ngic/ngic` (to access the VNF follow
- [these guidelines](troubleshooting.md#how-to-log-into-a-vnf-vm))
-
-* Become sudo
-
- ```shell
- sudo su
- ```
-
-* Go to the configuration directory
-
- ```shell
- /root/ngic/config
- ```
-
-* Edit the `interface.cfg` file:
- * **dp_comm_ip**: the IP address of the `SPGW-U` interface, attached to the spgw_network
- * **cp_comm_ip**: the IP address of the SPGW-C interface, attached to the spgw_network
-* Edit the `dp_config.cfg` file:
- * **S1U_IFACE**: the name of the SPGW-U interface, attached to the s1u_network
- * **SGI_IFACE**: the name of the SPGW-U interface, attached to the sgi_network
- * **S1U_IP**: the IP address of the SPGW-U interface, attached to the s1u_network
- * **S1U_MAC**: the MAC address of the SPGW-U interface, attached to the s1u_network
- * **SGI_IP**: the IP address of the SPGW-U interface attached to the sgi_network
- * **SGI_MAC**: the MAC address of the SPGW-U interface attached to the sgi_network
-* Edit the static_arp.cfg file:
- * Below the line `[sgi]`, you should find another line with a similar
- pattern: `IP_addr1 IP_addr2 = MAC_addr`
- * `IP_addr1` and `IP_addr2` are both the IP address of the vENB
- interface attached to the sgi_network
- * `MAC_addr` is the MAC address of the vENB interface, attached to the
- sgi_network
- * Below the line `[s1u]`, you should find another line with a similar
- pattern `IP_addr1 IP_addr2 = MAC_addr`
- * `IP_addr1` and `IP_addr2` are the IP addresses of the vENB interfaces
- attached to the s1u_network
- * `MAC_addr` is the MAC address of the vENB attached to the s1u_network
-
-> Note: To know the IP addresses and MAC addresses mentioned above, follow the
-> instructions
-> [here](troubleshooting.md#view-interface-details-for-a-specific-vnf).
->
-> To know the interface names mentioned above, follow the instructions
-> [here](troubleshooting.md#view-an-interface-name-inside-a-vnf).
-
-### Configuring the SPGW-C
-
-* SSH into `SPGW-C` VNF with credentials `ngic/ngic` (to access the VNF follow
- [these instructions](troubleshooting.md#how-to-log-into-a-vnf-vm))
-* Become sudo
-
- ```shell
- sudo su
- ```
-
-* Go to the configuration directory
-
- ```shell
- cd /root/ngic/config
- ```
-
-* Edit the `interface.cfg` file
- * **dp_comm_ip**: the IP address of the `SPGW-U` interface, attached to the spgw_network
- * **cp_comm_ip**: the IP address of the `SPGW-C` interface, attached to the spgw_network
-* Edit the cp_config.cfg file
- * **S11_SGW_IP**: the IP address of the SPGW-C interface, attached to the s11_network
- * **S11_MME_IP**: the IP address of the vENB interface, attached to the s11_network
- * **S1U_SGW_IP**: the IP addresses of the SPGW-U interface, attached to the s1u_network
-
-> Note: To know the IP addresses and MAC addresses mentioned above, follow the
-> instructions
-> [here](troubleshooting.md#view-interface-details-for-a-specific-vnf).
->
-> To know the interface names mentioned above, follow the instructions
-> [here](troubleshooting.md#view-an-interface-name-inside-a-vnf).
-
-### Building and running the SPGW-U
-
-* SSH into `SPGW-U` VNF with credentials ngic/ngic (to access the VNF follow
- [these instructions](troubleshooting.md#how-to-log-into-a-vnf-vm))
-
-* Become sudo
-
- ```shell
- sudo su
- ```
-
-* Go to the configuration directory
-
- ```shell
- cd /root/ngic/dp
- ```
-
-* Run the following commands:
-
- ```shell
- ../setenv.sh
- make build
- make
- ./udev.sh > results &
- ```
-
-* Open `results` file. If the process succeeds, you should see an output
- similar to the one below
-
- ```shell
- DP: RTE NOTICE enabled on lcore 1
- DP: RTE INFO enabled on lcore 1
- DP: RTE NOTICE enabled on lcore 0
- DP: RTE INFO enabled on lcore 0
- DP: RTE NOTICE enabled on lcore 3
- DP: RTE INFO enabled on lcore 3
- DP: RTE NOTICE enabled on lcore 5
- DP: RTE INFO enabled on lcore 5
- DP: RTE NOTICE enabled on lcore 4
- DP: RTE INFO enabled on lcore 4
- API: RTE NOTICE enabled on lcore 4
- API: RTE INFO enabled on lcore 4
- DP: RTE NOTICE enabled on lcore 6
- DP: RTE INFO enabled on lcore 6
- DP: RTE NOTICE enabled on lcore 2
- DP: RTE INFO enabled on lcore 2
- Building and running the SPGW-C
- SSH into the head node
- SSH into SPGW-C VNF with credentials ngic/ngic account (to access the VNF follow the guide, here)
- Become sudo (sudo su)
- Go to the configuration dir
- ectory: cd /root/ngic/cp
- Run the following commands
- ../setenv.sh
- make build
- make
- ./run.sh > results &
- Open the “results” file. If the process succeeds, you should see an output similar to the one below
-
- rx tx rx pkts tx pkts create modify b resrc create delete delete rel acc ddn
- time pkts pkts /sec /sec session bearer cmd bearer bearer session echo bearer ddn ack
- 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
- 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
- ```
-
-* Go back to `SPGW-U` VNF and open `results` file in `/root/ngic/dp`. If the
- process succeeds, you should see a new output similar to one below
-
- ```shell
- DP: MTR_PROFILE ADD: index 2 cir:125000, cbd:3072, ebs:3072
- DP: MTR_PROFILE ADD: index 3 cir:250000, cbd:3072, ebs:3072
- DP: MTR_PROFILE ADD: index 4 cir:500000, cbd:3072, ebs:3072
- ```
-
-### Building and running the SPGW-C
-
-* SSH into the head node
-* SSH into `SPGW-C` VNF with credentials `ngic/ngic` (to access the VNF follow
- [these instructions](troubleshooting.md#how-to-log-into-a-vnf-vm))
-* Become sudo
-
- ```shell
- sudo su
- ```
-
-* Go to the configuration directory:
-
- ```shell
- cd /root/ngic/cp
- ```
-
-* Run the following commands
-
- ```shell
- ../setenv.sh
- make build
- make
- ./run.sh > results &
- ```
-
-* Open the `results` file. If the process succeeds, you should see an output
- similar to the one below
-
- ```shell
- rx tx rx pkts tx pkts create modify b resrc create delete delete rel acc ddn
- time pkts pkts /sec /sec session bearer cmd bearer bearer session echo bearer ddn ack
- 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
- 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
- ```
-
-* Go back to `SPGW-U` VNF and open `results` file in `/root/ngic/dp`. If the
- process succeeds, you should see a new output similar to one below
-
- ```shell
- DP: MTR_PROFILE ADD: index 2 cir:125000, cbd:3072, ebs:3072
- DP: MTR_PROFILE ADD: index 3 cir:250000, cbd:3072, ebs:3072
- DP: MTR_PROFILE ADD: index 4 cir:500000, cbd:3072, ebs:3072
- ```
-
-## Troubleshooting with the NG40 vTester
-
-The guide describes how the NG40 results should be interpreted for
-troubleshooting.
-
-### Failure on S11 or SPGW-C not running
-
-```shell
-Signaling Table
- AttUE ActCTXT BrReq BrAct RelRq RelAc ActS1
-ng40_ran_1 0 0 0 0 0 0 1
-
-User Plane Downlink Table
- AS_PktTx AS_EthTx S1uEthRx S1uPktRx
-ng40_ran_1 0 0 0 0
-
-User Plane Uplink Table
- S1uPktTx S1uEthTx AS_EthRx AS_PktRx
-ng40_ran_1 0 0 0 0
-
-Watchtime: 57 Timeout: 1800
-All Tests are finished
-Verdict(tc_attach) = VERDICT_FAIL
-**** Packet Loss ****
-No pkt DL tx
-No pkt UL tx
-Wait for RAN shutdown
-done
-Testlist verdict = VERDICT_FAIL
-```
-
-When you see a `VERDICT_FAIL` and there are 0 values for `BrReq` and `BrAct` in
-the Signaling table, check if the `SPGW-C` is running. If it is, check the
-`S11` connection between the `NG40` VM and the `SPGW-C` VM. The
-`verify_attach.sh` test can help you to verify the status of the control plane
-connectivity on the `S11` network.
-
-If the `ActS1` counter is 0, there’s an internal error on the `S1mme`
-interface, or the NG40 components did not start correctly.
-
-Run the command `ng40forcecleanup all` to restart all the `NG40` components.
-The `NG40` system components may take some time to start.
-
-Wait 10 seconds before you start a new test.
-
-### Failure on S1u and Sgi or SPGW-U
-
-```shell
-Signaling Table
- AttUE ActCTXT BrReq BrAct RelRq RelAc ActS1
-ng40_ran_1 0 0 1 1 0 0 1
-
-User Plane Downlink Table
- AS_PktTx AS_EthTx S1uEthRx S1uPktRx
-ng40_ran_1 185 0 0 0
-
-User Plane Uplink Table
- S1uPktTx S1uEthTx AS_EthRx AS_PktRx
-ng40_ran_1 64 0 0 0
-
-Watchtime: 18 Timeout: 1800
-All Tests are finished
-Verdict(tc_attach_www) = VERDICT_PASS
-**** Packet Loss ****
-DL Loss= AS_PktTx-S1uPktRx= 185(pkts); 100.00(%)
-UL Loss= S1uPktTx-AS_PktRx= 64(pkts); 100.00(%)
-```
-
-If you running `attach_verify_data.sh` and you see `VERDICT_PASS` but 100% `DL`
-and `UL` Loss values, check if the `SGPW-U` is running. If it is, check the
-`S1u` and the `SGi` connections.
-
-When packets are generated (see values `AS_PktTx` and `S1uPktTx`), but are not
-sent to Ethernet (see values `AS_EthTx` and `S1uEthTx`) it may be that no
-`NG40` ARP requests get answered.
-
-### Routing problem on S1u
-
-```shell
-Signaling Table
- AttUE ActCTXT BrReq BrAct RelRq RelAc ActS1
-ng40_ran_1 0 0 1 1 0 0 1
-
-User Plane Downlink Table
- AS_PktTx AS_EthTx S1uEthRx S1uPktRx
-ng40_ran_1 185 185 185 185
-
-User Plane Uplink Table
- S1uPktTx S1uEthTx AS_EthRx AS_PktRx
-ng40_ran_1 64 66 0 0
-
-Watchtime: 18 Timeout: 1800
-All Tests are finished
-Verdict(tc_attach_www) = VERDICT_PASS
-**** Packet Loss ****
-DL Loss= AS_PktTx-S1uPktRx= 185(pkts); 0.00(%)
-UL Loss= S1uPktTx-AS_PktRx= 64(pkts); 100.00(%)
-```
-
-When you run `attach_verify_data.sh` and you see `Data sent` on `S1u` and `SGi`
-(see values `AS_EthTx` and `S1uEthTx`), but no data received at the Application
-Server (see values `AS_EthRx` and `AS_PktRx`), check the `SPGW-U CDRs`. If the
-packets going `uplink` are processed in the `CDRs`, either your routing or the
-allowed IP settings on the `SGi` interface are not correct.
-
-If packets are processed, but 0 bytes are sent through the `uplink`, there’s a
-mismatch in the `GTPU header size` configuration. For example the `SPGW-U` is
-compiled together with a `SessionID`, but the `NG40` VM is configured without
-`SessionID`.
-
-### Other problems
-
-If you see timeouts, exceptions, strange behaviors or the `ActS1` counter is 0,
-try one of the solutions below:
-
-* Cleanup and restart the NG40 process, running `ng40forcecleanup all`. Wait
- about 10 seconds to allow the system to restart. Then, run a new test.
-* Check the NG40 license is still active, and in case install a new one, as
- described
- [here](installation_guide.md#request--update-the-ng40-vtester-software-license).
-
-Re-initialize the NG40 processes, running `~/install/ng40init`.