# Copyright 2022-2023 Open Networking Foundation (ONF) and the ONF Contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# FIXME Can we use the same test against BBSim and Hardware?

*** Settings ***
Documentation     Test various end-to-end scenarios
Suite Setup       Setup Suite
Test Setup        Setup
Test Teardown     Teardown
Suite Teardown    Teardown Suite
Library           Collections
Library           String
Library           OperatingSystem
Library           XML
Library           RequestsLibrary
Library           ../../libraries/DependencyLibrary.py
Resource          ../../libraries/vgc.robot
Resource          ../../libraries/voltctl.robot
Resource          ../../libraries/voltha.robot
Resource          ../../libraries/utils_vgc.robot
Resource          ../../libraries/k8s.robot
Resource          ../../variables/variables.robot
Resource          ../../libraries/power_switch.robot

*** Variables ***
${POD_NAME}                 flex-ocp-cord
${VOLTHA_POD_NUM}           8
${NAMESPACE}                voltha
${INFRA_NAMESPACE}          default
${STACK_NAME}               voltha
# The value below is the deployment name; the radius pod name is resolved by grepping
# for it, so the full radius pod name can also be used here.
${RESTART_POD_NAME}         radius
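# Example (hypothetical pod name): 'kubectl get pods -n ${INFRA_NAMESPACE} | grep radius' would
# match a pod such as 'radius-5d86f5d4b9-abcde', so the deployment name alone is sufficient.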
${timeout}                  120s
${of_id}                    0
${logical_id}               0
${has_dataplane}            True
${kafka}                    voltha-voltha-api
${KAFKA_PORT}               55555
${teardown_device}          False
${scripts}                  ../../scripts

# Per-test logging on failure is turned off by default; set this variable to enable it
${container_log_dir}        ${None}
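# example: -v container_log_dir:/tmp/voltha-test-logs (the path is illustrative)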

# Logging flag to enable Collect Logs; it can also be passed via the command line.
# example: -v logging:False
${logging}                  True

# Flag specific to Soak Jobs
${SOAK_TEST}                False

*** Test Cases ***
Verify restart openonu-adapter container after subscriber provisioning for DT
    [Documentation]    Restart openonu-adapter container after VOLTHA is operational.
    ...    Prerequisite: ONUs are authenticated and pingable.
    [Tags]    Restart-OpenOnu-Dt    soak    raj
    [Setup]    Start Logging    Restart-OpenOnu-Dt
    [Teardown]    Run Keywords    Run Keyword If    ${logging}    Collect Logs
    ...    AND    Stop Logging    Restart-OpenOnu-Dt

    # Add OLT device

    Run Keyword If    '${SOAK_TEST}'=='False'    Setup
    # Perform the sanity test to make sure all subscribers have DHCP and are pingable
    Run Keyword If    ${has_dataplane}    Clean Up Linux
    Perform Sanity Test DT
    ${podStatusOutput}=    Run    kubectl get pods -n ${NAMESPACE}
    Log    ${podStatusOutput}
    ${countBeforeRestart}=    Run    kubectl get pods -n ${NAMESPACE} | grep Running | wc -l
    ${podName}    Set Variable    adapter-open-onu
    Wait Until Keyword Succeeds    ${timeout}    15s    Delete K8s Pods By Label    ${NAMESPACE}    app    ${podName}
    Wait Until Keyword Succeeds    ${timeout}    2s    Validate Pods Status By Label    ${NAMESPACE}
    ...    app    ${podName}    Running
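    # The two steps above are roughly equivalent to (label value shown for illustration):
    #   kubectl -n ${NAMESPACE} delete pods -l app=adapter-open-onu
    # followed by waiting for the replacement pod to report Running.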
    # Wait for 1 min after openonu adapter is restarted
    Run Keyword If    ${has_dataplane}    Clean Up Linux
    Perform Sanity Test DT
    ${podStatusOutput}=    Run    kubectl get pods -n ${NAMESPACE}
    Log    ${podStatusOutput}
    ${countAfterRestart}=    Run    kubectl get pods -n ${NAMESPACE} | grep Running | wc -l
    Should Be Equal As Strings    ${countAfterRestart}    ${countBeforeRestart}
    Log to console    Pod ${podName} restarted and sanity checks passed successfully
    # Once the ONU adapter is restarted, it takes a while for the OLTs/ONUs to reconcile; if the OLT is deleted
    # before the ONUs are reconciled successfully, there would be stale entries. This scenario is not handled in
    # VOLTHA as of now, and there is no other way to check whether the reconcile has happened for all the ONUs.
    # Due to this limitation, a sleep of 60s is introduced to give the ONU adapter enough time to reconcile the ONUs.
    Sleep    60s
    Run Keyword If    '${SOAK_TEST}'=='False'    Delete All Devices and Verify

Verify restart openolt-adapter container after subscriber provisioning for DT
    [Documentation]    Restart openolt-adapter container after VOLTHA is operational.
    ...    Prerequisite: ONUs are authenticated and pingable.
    [Tags]    Restart-OpenOlt-Dt    soak    raj
    [Setup]    Start Logging    Restart-OpenOlt-Dt
    [Teardown]    Run Keywords    Run Keyword If    ${logging}    Collect Logs
    ...    AND    Stop Logging    Restart-OpenOlt-Dt
    # Add OLT device
    Run Keyword If    '${SOAK_TEST}'=='False'    Setup
    # Perform the sanity test to make sure all subscribers have DHCP and are pingable
    Run Keyword If    ${has_dataplane}    Clean Up Linux
    Perform Sanity Test DT
    ${podStatusOutput}=    Run    kubectl get pods -n ${NAMESPACE}
    Log    ${podStatusOutput}
    ${countBeforeRestart}=    Run    kubectl get pods -n ${NAMESPACE} | grep Running | wc -l
    ${podName}    Set Variable    ${OLT_ADAPTER_APP_LABEL}
    Wait Until Keyword Succeeds    ${timeout}    15s    Delete K8s Pods By Label    ${NAMESPACE}    app    ${podName}
    Wait Until Keyword Succeeds    ${timeout}    2s    Validate Pods Status By Label    ${NAMESPACE}
    ...    app    ${podName}    Running
    # Wait for 1 min after openolt adapter is restarted
    Run Keyword If    ${has_dataplane}    Clean Up Linux
    Perform Sanity Test DT
    ${podStatusOutput}=    Run    kubectl get pods -n ${NAMESPACE}
    Log    ${podStatusOutput}
    ${countAfterRestart}=    Run    kubectl get pods -n ${NAMESPACE} | grep Running | wc -l
    Should Be Equal As Strings    ${countAfterRestart}    ${countBeforeRestart}
    # Once the OLT adapter is restarted, it takes a while for the OLTs/ONUs to reconcile; if the OLT is deleted
    # before the OLTs are reconciled successfully, there would be a reconcile error. This scenario is not handled in
    # VOLTHA as of now, and there is no other way to check whether the reconcile has happened for all the OLTs.
    # Due to this limitation, a sleep of 60s is introduced to give the OLT adapter enough time to reconcile the OLTs.
    Sleep    60s
    Log to console    Pod ${podName} restarted and sanity checks passed successfully

Verify openolt adapter restart before subscriber provisioning for DT
    [Documentation]    Restart openolt-adapter container before adding the subscriber.
    [Tags]    functionalDt    olt-adapter-restart-Dt    raj
    [Setup]    Start Logging    OltAdapterRestart-Dt
    [Teardown]    Run Keywords    Run Keyword If    ${logging}    Collect Logs
    ...    AND    Stop Logging    OltAdapterRestart-Dt
    # Add OLT device
    Sleep    120s
    Deactivate Subscribers In VGC
    Clear All Devices Then Create New Device
    Run Keyword If    ${has_dataplane}    Clean Up Linux
    Set Global Variable    ${of_id}

    FOR    ${I}    IN RANGE    0    ${num_all_onus}
        ${src}=    Set Variable    ${hosts.src[${I}]}
        ${dst}=    Set Variable    ${hosts.dst[${I}]}
        ${of_id}=    Get ofID From OLT List    ${src['olt']}
        ${onu_port}=    Wait Until Keyword Succeeds    ${timeout}    2s    Get ONU Port in VGC    ${src['onu']}
        ...    ${of_id}    ${src['uni_id']}
        ${onu_device_id}=    Get Device ID From SN    ${src['onu']}
        Wait Until Keyword Succeeds    ${timeout}    5s
        ...    Validate Device    ENABLED    ACTIVE    REACHABLE
        ...    ${onu_device_id}    onu=True    onu_reason=initial-mib-downloaded    by_dev_id=True
    END
    # Scale down the open OLT adapter deployment to 0 PODs and, once confirmed, scale it back to 1
    Scale K8s Deployment by Pod Label    ${NAMESPACE}    app    ${OLT_ADAPTER_APP_LABEL}    0
    Wait Until Keyword Succeeds    ${timeout}    2s    Pods Do Not Exist By Label    ${NAMESPACE}    app
    ...    ${OLT_ADAPTER_APP_LABEL}
    # Scale up the open OLT adapter deployment and make sure its pods are back and ready
    Scale K8s Deployment by Pod Label    ${NAMESPACE}    app    ${OLT_ADAPTER_APP_LABEL}    1
    Wait Until Keyword Succeeds    ${timeout}    2s
    ...    Check Expected Available Deployment Replicas By Pod Label    ${NAMESPACE}    app    ${OLT_ADAPTER_APP_LABEL}    1
    Wait Until Keyword Succeeds    ${timeout}    3s    Pods Are Ready By Label    ${NAMESPACE}    app    ${OLT_ADAPTER_APP_LABEL}
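    # The scaling keywords above are roughly equivalent to the following commands, assuming the
    # label selects the adapter deployment:
    #   kubectl -n ${NAMESPACE} scale deployment -l app=${OLT_ADAPTER_APP_LABEL} --replicas=0
    #   kubectl -n ${NAMESPACE} scale deployment -l app=${OLT_ADAPTER_APP_LABEL} --replicas=1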

    # Ensure the device is available in VGC; this represents system connectivity being restored
    FOR    ${I}    IN RANGE    0    ${olt_count}
        ${olt_serial_number}=    Get From Dictionary    ${olt_ids}[${I}]    sn
        ${olt_device_id}=    Get OLTDeviceID From OLT List    ${olt_serial_number}
        ${of_id}=    Wait Until Keyword Succeeds    ${timeout}    15s    Validate OLT Device in VGC
        ...    ${olt_serial_number}
        Wait Until Keyword Succeeds    120s    2s    Device Is Available In VGC
        ...    ${of_id}
    END

    FOR    ${I}    IN RANGE    0    ${num_all_onus}
        ${src}=    Set Variable    ${hosts.src[${I}]}
        ${dst}=    Set Variable    ${hosts.dst[${I}]}
        ${of_id}=    Get ofID From OLT List    ${src['olt']}
        ${nni_port}=    Wait Until Keyword Succeeds    ${timeout}    2s    Get NNI Port in VGC    ${of_id}
        ${onu_port}=    Wait Until Keyword Succeeds    ${timeout}    2s    Get ONU Port in VGC    ${src['onu']}
        ...    ${of_id}    ${src['uni_id']}
        # Add subscriber access and verify that DHCP completes to ensure the system is still functioning properly
        Add Subscriber Details    ${of_id}    ${onu_port}
        # Verify Meters in VGC
        Run Keyword And Continue On Failure    Wait Until Keyword Succeeds    ${timeout}    5s
        ...    Verify Meters in VGC Ietf    ${VGC_SSH_IP}    ${VGC_SSH_PORT}    ${of_id}    ${onu_port}
        # Verify subscriber access flows are added for the ONU port
        Run Keyword And Continue On Failure    Wait Until Keyword Succeeds    ${timeout}    5s
        ...    Verify Subscriber Access Flows Added for ONU DT in VGC    ${VGC_SSH_IP}    ${VGC_SSH_PORT}    ${of_id}
        ...    ${onu_port}    ${nni_port}    ${src['s_tag']}
        Wait Until Keyword Succeeds    ${timeout}    5s    Validate Device
        ...    ENABLED    ACTIVE    REACHABLE
        ...    ${src['onu']}    onu=True    onu_reason=omci-flows-pushed
        Run Keyword If    ${has_dataplane}    Run Keyword And Continue On Failure    Validate DHCP and Ping    True
        ...    True    ${src['dp_iface_name']}    ${src['s_tag']}    ${src['c_tag']}    ${dst['dp_iface_ip_qinq']}
        ...    ${src['ip']}    ${src['user']}    ${src['pass']}    ${src['container_type']}    ${src['container_name']}
        ...    ${dst['dp_iface_name']}    ${dst['ip']}    ${dst['user']}    ${dst['pass']}    ${dst['container_type']}
        ...    ${dst['container_name']}
        # Once the OLT adapter is restarted, it takes a while for the OLTs/ONUs to reconcile; if the OLT is deleted
        # before the OLTs are reconciled successfully, there would be a reconcile error. This scenario is not handled
        # in VOLTHA as of now, and there is no other way to check whether the reconcile has happened for all the OLTs.
        # Due to this limitation, a sleep of 60s is introduced to give the OLT adapter enough time to reconcile the OLTs.
        Sleep    60s
    END
    Deactivate Subscribers In VGC

Sanity E2E Test for OLT/ONU on POD With Core Fail and Restart for DT
    [Documentation]    Deploys a device instance and waits for it to authenticate. After
    ...    authentication is successful the rw-core deployment is scaled to 0 instances to
    ...    simulate a POD crash. The test then scales the rw-core back to a single instance
    ...    and configures VGC for access. The test succeeds if the device is able to
    ...    complete the DHCP sequence.
    [Tags]    functionalDt    rwcore-restart-Dt    raj
    [Setup]    Run Keywords    Start Logging    RwCoreFailAndRestart-Dt
    ...    AND    Clear All Devices Then Create New Device
    [Teardown]    Run Keywords    Run Keyword If    ${logging}    Collect Logs
    ...    AND    Stop Logging    RwCoreFailAndRestart-Dt
    #...    AND    Delete Device and Verify
    Run Keyword If    ${has_dataplane}    Clean Up Linux
    FOR    ${I}    IN RANGE    0    ${olt_count}
        ${olt_serial_number}=    Get From Dictionary    ${olt_ids}[${I}]    sn
        ${olt_device_id}=    Get OLTDeviceID From OLT List    ${olt_serial_number}
        ${of_id}=    Wait Until Keyword Succeeds    ${timeout}    15s    Validate OLT Device in VGC
        ...    ${olt_serial_number}
        ${nni_port}=    Wait Until Keyword Succeeds    ${timeout}    2s    Get NNI Port in VGC    ${of_id}
    END
    FOR    ${I}    IN RANGE    0    ${num_all_onus}
        ${src}=    Set Variable    ${hosts.src[${I}]}
        ${dst}=    Set Variable    ${hosts.dst[${I}]}
        ${of_id}=    Get ofID From OLT List    ${src['olt']}
        ${onu_port}=    Wait Until Keyword Succeeds    ${timeout}    2s
        ...    Get ONU Port in VGC    ${src['onu']}    ${of_id}    ${src['uni_id']}
        ${onu_device_id}=    Get Device ID From SN    ${src['onu']}
        # Bring up the device and verify it authenticates
        Wait Until Keyword Succeeds    360s    5s    Validate Device    ENABLED    ACTIVE    REACHABLE
        ...    ${onu_device_id}    onu=True    onu_reason=initial-mib-downloaded    by_dev_id=True
    END

    # Scale down the rw-core deployment to 0 PODs and, once confirmed, scale it back to 1
    Scale K8s Deployment    voltha    voltha-voltha-rw-core    0
    Wait Until Keyword Succeeds    ${timeout}    2s    Pod Does Not Exist    voltha    voltha-voltha-rw-core
    # Ensure the go-controller deployment remains available while the rw-core is down
    Wait Until Keyword Succeeds    ${timeout}    2s
    ...    Check Expected Available Deployment Replicas    voltha    voltha-voltha-go-controller    1
    # Scale up the core deployment and make sure both it and the go-controller deployment are back
    Scale K8s Deployment    voltha    voltha-voltha-rw-core    1
    Wait Until Keyword Succeeds    ${timeout}    2s
    ...    Check Expected Available Deployment Replicas    voltha    voltha-voltha-rw-core    1
    Wait Until Keyword Succeeds    ${timeout}    2s
    ...    Check Expected Available Deployment Replicas    voltha    voltha-voltha-go-controller    1
    # For some reason scaling the POD behind a service down and up causes the port forward to stop working,
    # so restart the port forwarding for the API service
    Restart VOLTHA Port Forward    voltha-api    55555:55555
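    # The keyword above is expected to re-establish something like the following forward; the exact
    # command depends on the Restart VOLTHA Port Forward implementation in the utils library:
    #   kubectl -n voltha port-forward svc/voltha-voltha-api 55555:55555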
    # Ensure that the VGC controller is up and the device is available in VGC; this
    # represents system connectivity being restored
    FOR    ${I}    IN RANGE    0    ${olt_count}
        ${olt_serial_number}=    Get From Dictionary    ${olt_ids}[${I}]    sn
        ${olt_device_id}=    Get OLTDeviceID From OLT List    ${olt_serial_number}
        ${of_id}=    Wait Until Keyword Succeeds    ${timeout}    15s    Validate OLT Device in VGC
        ...    ${olt_serial_number}
        Wait Until Keyword Succeeds    120s    2s    Device Is Available In VGC
        ...    ${of_id}
    END

    FOR    ${I}    IN RANGE    0    ${num_all_onus}
        ${src}=    Set Variable    ${hosts.src[${I}]}
        ${dst}=    Set Variable    ${hosts.dst[${I}]}
        ${of_id}=    Get ofID From OLT List    ${src['olt']}
        ${nni_port}=    Wait Until Keyword Succeeds    ${timeout}    2s    Get NNI Port in VGC    ${of_id}
        ${onu_port}=    Wait Until Keyword Succeeds    ${timeout}    2s
        ...    Get ONU Port in VGC    ${src['onu']}    ${of_id}    ${src['uni_id']}
        # Add subscriber access and verify that DHCP completes to ensure the system is still functioning properly
        Post Request VGC    services/${of_id}/${onu_port}
        # Verify subscriber access flows are added for the ONU port
        Wait Until Keyword Succeeds    ${timeout}    5s
        ...    Verify Subscriber Access Flows Added for ONU DT in VGC    ${VGC_SSH_IP}    ${VGC_SSH_PORT}    ${of_id}
        ...    ${onu_port}    ${nni_port}    ${src['s_tag']}
        Run Keyword If    ${has_dataplane}    Run Keyword And Continue On Failure    Validate DHCP and Ping    True
        ...    True    ${src['dp_iface_name']}    ${src['s_tag']}    ${src['c_tag']}    ${dst['dp_iface_ip_qinq']}
        ...    ${src['ip']}    ${src['user']}    ${src['pass']}    ${src['container_type']}    ${src['container_name']}
        ...    ${dst['dp_iface_name']}    ${dst['ip']}    ${dst['user']}    ${dst['pass']}    ${dst['container_type']}
        ...    ${dst['container_name']}
    END
    Restart VOLTHA Port Forward    voltha-api
    ${port_fwd}    Start Process    kubectl -n voltha port-forward svc/${kafka} ${KAFKA_PORT}:${KAFKA_PORT} --address 0.0.0.0 &    shell=true

Verify OLT Soft Reboot for DT
    [Documentation]    Test soft reboot of the OLT using the voltctl command
    [Tags]    VOL-2818    OLTSoftRebootDt    functionalDt    raj
    [Setup]    Start Logging    OLTSoftRebootDt
    [Teardown]    Run Keywords    Run Keyword If    ${logging}    Collect Logs
    ...    AND    Stop Logging    OLTSoftRebootDt
    FOR    ${I}    IN RANGE    0    ${olt_count}
        ${olt_serial_number}=    Get From Dictionary    ${olt_ids}[${I}]    sn
        ${olt_device_id}=    Get OLTDeviceID From OLT List    ${olt_serial_number}
        Run Keyword And Continue On Failure    Wait Until Keyword Succeeds    360s    5s
        ...    Validate OLT Device    ENABLED    ACTIVE
        ...    REACHABLE    ${olt_serial_number}
        # Reboot the OLT using the "voltctl device reboot" command
        Wait Until Keyword Succeeds    360s    5s    Reboot Device    ${olt_device_id}
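        # For reference, the equivalent CLI call would be: voltctl device reboot <olt-device-id>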
        # Wait for the OLT to actually go down
        Wait Until Keyword Succeeds    360s    5s    Validate OLT Device    ENABLED    UNKNOWN    UNREACHABLE
        ...    ${olt_serial_number}
    END
    # Verify that ping fails
    FOR    ${I}    IN RANGE    0    ${num_all_onus}
        ${src}=    Set Variable    ${hosts.src[${I}]}
        ${dst}=    Set Variable    ${hosts.dst[${I}]}
        Run Keyword If    ${has_dataplane}    Run Keyword And Continue On Failure
        ...    Wait Until Keyword Succeeds    ${timeout}    2s
        ...    Check Ping    False    ${dst['dp_iface_ip_qinq']}    ${src['dp_iface_name']}
        ...    ${src['ip']}    ${src['user']}    ${src['pass']}    ${src['container_type']}    ${src['container_name']}
    END
    # Check OLT states
    FOR    ${I}    IN RANGE    0    ${olt_count}
        ${olt_serial_number}=    Get From Dictionary    ${list_olts}[${I}]    sn
        ${olt_ssh_ip}=    Get From Dictionary    ${list_olts}[${I}]    sship
        ${olt_device_id}=    Get OLTDeviceID From OLT List    ${olt_serial_number}
        # Wait for the OLT to come back up
        Run Keyword If    ${has_dataplane}    Wait Until Keyword Succeeds    120s    10s
        ...    Check Remote System Reachability    True    ${olt_ssh_ip}
        # Check OLT states
        Wait Until Keyword Succeeds    360s    5s
        ...    Validate OLT Device    ENABLED    ACTIVE
        ...    REACHABLE    ${olt_serial_number}
    END
    # Waiting extra time for the ONUs to come up
    Sleep    60s
    # Check after reboot that ONUs are active, DHCP/pingable
    Run Keyword If    ${has_dataplane}    Clean Up Linux
    Perform Sanity Test DT

Verify restart openonu-adapter container for DT
    [Documentation]    Restart openonu-adapter container after VOLTHA is operational.
    ...    Run the ping continuously in the background during the container restart,
    ...    and verify that the dataplane is not affected.
    ...    Also, verify that the VOLTHA control plane functionality is not affected.
    [Tags]    functionalDt    RestartOpenOnuPingDt    raj
    [Setup]    Start Logging    RestartOpenOnuPingDt
    [Teardown]    Run Keywords    Run Keyword If    ${logging}    Collect Logs
    ...    AND    Stop Logging    RestartOpenOnuPingDt
    Clear All Devices Then Create New Device
    # Perform the sanity test to make sure all subscribers have DHCP and are pingable
    Run Keyword If    ${has_dataplane}    Clean Up Linux
    Perform Sanity Test DT
    FOR    ${I}    IN RANGE    0    ${num_all_onus}
        ${src}=    Set Variable    ${hosts.src[${I}]}
        ${dst}=    Set Variable    ${hosts.dst[${I}]}
        ${ping_output_file}=    Set Variable    /tmp/${src['onu']}_ping
        Run Keyword If    ${has_dataplane}    Run Keyword And Continue On Failure
        ...    Wait Until Keyword Succeeds    ${timeout}    2s
        ...    Run Ping In Background    ${ping_output_file}    ${dst['dp_iface_ip_qinq']}    ${src['dp_iface_name']}
        ...    ${src['ip']}    ${src['user']}    ${src['pass']}    ${src['container_type']}    ${src['container_name']}
    END
    ${podStatusOutput}=    Run    kubectl get pods -n ${NAMESPACE}
    Log    ${podStatusOutput}
    ${countBeforeRestart}=    Run    kubectl get pods -n ${NAMESPACE} | grep Running | wc -l
    ${podName}    Set Variable    adapter-open-onu
    Wait Until Keyword Succeeds    ${timeout}    15s    Delete K8s Pods By Label    ${NAMESPACE}    app    ${podName}
    Wait Until Keyword Succeeds    ${timeout}    2s    Validate Pods Status By Label    ${NAMESPACE}
    ...    app    ${podName}    Running
    # Wait for 1 min after openonu adapter is restarted
    ${podStatusOutput}=    Run    kubectl get pods -n ${NAMESPACE}
    Log    ${podStatusOutput}
    ${countAfterRestart}=    Run    kubectl get pods -n ${NAMESPACE} | grep Running | wc -l
    Should Be Equal As Strings    ${countAfterRestart}    ${countBeforeRestart}
    FOR    ${I}    IN RANGE    0    ${num_all_onus}
        ${src}=    Set Variable    ${hosts.src[${I}]}
        ${dst}=    Set Variable    ${hosts.dst[${I}]}
        Run Keyword If    ${has_dataplane}    Run Keyword And Continue On Failure
        ...    Wait Until Keyword Succeeds    ${timeout}    2s
        ...    Stop Ping Running In Background    ${src['ip']}    ${src['user']}    ${src['pass']}
        ...    ${src['container_type']}    ${src['container_name']}
    END
    FOR    ${I}    IN RANGE    0    ${num_all_onus}
        ${src}=    Set Variable    ${hosts.src[${I}]}
        ${dst}=    Set Variable    ${hosts.dst[${I}]}
        ${ping_output_file}=    Set Variable    /tmp/${src['onu']}_ping
        ${ping_output}=    Run Keyword If    ${has_dataplane}    Run Keyword And Continue On Failure
        ...    Wait Until Keyword Succeeds    ${timeout}    2s
        ...    Retrieve Remote File Contents    ${ping_output_file}    ${src['ip']}    ${src['user']}    ${src['pass']}
        ...    ${src['container_type']}    ${src['container_name']}
        Run Keyword If    ${has_dataplane}    Check Ping Result    True    ${ping_output}
    END
    # Verify control plane functionality by deleting and re-adding the subscriber
    # Once the ONU adapter is restarted, it takes a while for the OLTs/ONUs to reconcile; if the OLT is deleted
    # before the ONUs are reconciled successfully, there would be stale entries. This scenario is not handled in
    # VOLTHA as of now, and there is no other way to check whether the reconcile has happened for all the ONUs.
    # Due to this limitation, a sleep of 60s is introduced to give the ONU adapter enough time to reconcile the ONUs.
    Sleep    60s
    Verify Control Plane After Pod Restart DT

Verify restart openolt-adapter container for DT
    [Documentation]    Restart openolt-adapter container after VOLTHA is operational.
    ...    Run the ping continuously in the background during the container restart,
    ...    and verify that the dataplane is not affected.
    ...    Also, verify that the VOLTHA control plane functionality is not affected.
    [Tags]    functionalDt    RestartOpenOltPingDt    raj
    [Setup]    Start Logging    RestartOpenOltPingDt
    [Teardown]    Run Keywords    Run Keyword If    ${logging}    Collect Logs
    ...    AND    Stop Logging    RestartOpenOltPingDt
    Clear All Devices Then Create New Device
    # Perform the sanity test to make sure all subscribers have DHCP and are pingable
    Run Keyword If    ${has_dataplane}    Clean Up Linux
    Perform Sanity Test DT
    FOR    ${I}    IN RANGE    0    ${num_all_onus}
        ${src}=    Set Variable    ${hosts.src[${I}]}
        ${dst}=    Set Variable    ${hosts.dst[${I}]}
        ${ping_output_file}=    Set Variable    /tmp/${src['onu']}_ping
        Run Keyword If    ${has_dataplane}    Run Keyword And Continue On Failure
        ...    Wait Until Keyword Succeeds    ${timeout}    2s
        ...    Run Ping In Background    ${ping_output_file}    ${dst['dp_iface_ip_qinq']}    ${src['dp_iface_name']}
        ...    ${src['ip']}    ${src['user']}    ${src['pass']}    ${src['container_type']}    ${src['container_name']}
    END
    ${podStatusOutput}=    Run    kubectl get pods -n ${NAMESPACE}
    Log    ${podStatusOutput}
    ${countBeforeRestart}=    Run    kubectl get pods -n ${NAMESPACE} | grep Running | wc -l
    ${podName}    Set Variable    ${OLT_ADAPTER_APP_LABEL}
    Wait Until Keyword Succeeds    ${timeout}    15s    Delete K8s Pods By Label    ${NAMESPACE}    app    ${podName}
    Wait Until Keyword Succeeds    ${timeout}    2s    Validate Pods Status By Label    ${NAMESPACE}
    ...    app    ${podName}    Running
    # Wait for 1 min after openolt adapter is restarted
    Sleep    60s
    ${podStatusOutput}=    Run    kubectl get pods -n ${NAMESPACE}
    Log    ${podStatusOutput}
    ${countAfterRestart}=    Run    kubectl get pods -n ${NAMESPACE} | grep Running | wc -l
    Should Be Equal As Strings    ${countAfterRestart}    ${countBeforeRestart}
    FOR    ${I}    IN RANGE    0    ${num_all_onus}
        ${src}=    Set Variable    ${hosts.src[${I}]}
        ${dst}=    Set Variable    ${hosts.dst[${I}]}
        Run Keyword If    ${has_dataplane}    Run Keyword And Continue On Failure
        ...    Wait Until Keyword Succeeds    ${timeout}    2s
        ...    Stop Ping Running In Background    ${src['ip']}    ${src['user']}    ${src['pass']}
        ...    ${src['container_type']}    ${src['container_name']}
    END
    FOR    ${I}    IN RANGE    0    ${num_all_onus}
        ${src}=    Set Variable    ${hosts.src[${I}]}
        ${dst}=    Set Variable    ${hosts.dst[${I}]}
        ${ping_output_file}=    Set Variable    /tmp/${src['onu']}_ping
        ${ping_output}=    Run Keyword If    ${has_dataplane}    Run Keyword And Continue On Failure
        ...    Wait Until Keyword Succeeds    ${timeout}    2s
        ...    Retrieve Remote File Contents    ${ping_output_file}    ${src['ip']}    ${src['user']}    ${src['pass']}
        ...    ${src['container_type']}    ${src['container_name']}
        Run Keyword If    ${has_dataplane}    Check Ping Result    True    ${ping_output}
    END
    # Verify control plane functionality by deleting and re-adding the subscriber
    # Once the OLT adapter is restarted, it takes a while for the OLTs/ONUs to reconcile; if the OLT is deleted
    # before the OLTs are reconciled successfully, there would be a reconcile error. This scenario is not handled in
    # VOLTHA as of now, and there is no other way to check whether the reconcile has happened for all the OLTs.
    # Due to this limitation, a sleep of 60s is introduced to give the OLT adapter enough time to reconcile the OLTs.
    Sleep    60s
    Verify Control Plane After Pod Restart DT

Verify restart rw-core container for DT
    [Documentation]    Restart rw-core container after VOLTHA is operational.
    ...    Run the ping continuously in the background during the container restart,
    ...    and verify that the dataplane is not affected.
    ...    Also, verify that the VOLTHA control plane functionality is not affected.
    [Tags]    functionalDt    RestartRwCorePingDt    raj
    [Setup]    Start Logging    RestartRwCorePingDt
    [Teardown]    Run Keywords    Run Keyword If    ${logging}    Collect Logs
    ...    AND    Stop Logging    RestartRwCorePingDt
    Clear All Devices Then Create New Device
    # Perform the sanity test to make sure all subscribers have DHCP and are pingable
    Run Keyword If    ${has_dataplane}    Clean Up Linux
    Perform Sanity Test DT
    FOR    ${I}    IN RANGE    0    ${num_all_onus}
        ${src}=    Set Variable    ${hosts.src[${I}]}
        ${dst}=    Set Variable    ${hosts.dst[${I}]}
        ${ping_output_file}=    Set Variable    /tmp/${src['onu']}_ping
        Run Keyword If    ${has_dataplane}    Run Keyword And Continue On Failure
        ...    Wait Until Keyword Succeeds    ${timeout}    2s
        ...    Run Ping In Background    ${ping_output_file}    ${dst['dp_iface_ip_qinq']}    ${src['dp_iface_name']}
        ...    ${src['ip']}    ${src['user']}    ${src['pass']}    ${src['container_type']}    ${src['container_name']}
    END
    ${podStatusOutput}=    Run    kubectl get pods -n ${NAMESPACE}
    Log    ${podStatusOutput}
    ${countBeforeRestart}=    Run    kubectl get pods -n ${NAMESPACE} | grep Running | wc -l
    ${podName}    Set Variable    rw-core
    Wait Until Keyword Succeeds    ${timeout}    15s    Delete K8s Pods By Label    ${NAMESPACE}    app    ${podName}
    Wait Until Keyword Succeeds    ${timeout}    2s    Validate Pods Status By Label    ${NAMESPACE}
    ...    app    ${podName}    Running
    # Wait for 1 min after the rw-core is restarted
    Sleep    60s
    # For some reason scaling the POD behind a service down and up causes the port forward to stop working,
    # so restart the port forwarding for the API service
    Restart VOLTHA Port Forward    voltha-api
    ${podStatusOutput}=    Run    kubectl get pods -n ${NAMESPACE}
    Log    ${podStatusOutput}
    ${countAfterRestart}=    Run    kubectl get pods -n ${NAMESPACE} | grep Running | wc -l
    Should Be Equal As Strings    ${countAfterRestart}    ${countBeforeRestart}
    FOR    ${I}    IN RANGE    0    ${num_all_onus}
        ${src}=    Set Variable    ${hosts.src[${I}]}
        ${dst}=    Set Variable    ${hosts.dst[${I}]}
        Run Keyword If    ${has_dataplane}    Run Keyword And Continue On Failure
        ...    Wait Until Keyword Succeeds    ${timeout}    2s
        ...    Stop Ping Running In Background    ${src['ip']}    ${src['user']}    ${src['pass']}
        ...    ${src['container_type']}    ${src['container_name']}
    END
    FOR    ${I}    IN RANGE    0    ${num_all_onus}
        ${src}=    Set Variable    ${hosts.src[${I}]}
        ${dst}=    Set Variable    ${hosts.dst[${I}]}
        ${ping_output_file}=    Set Variable    /tmp/${src['onu']}_ping
        ${ping_output}=    Run Keyword If    ${has_dataplane}    Run Keyword And Continue On Failure
        ...    Wait Until Keyword Succeeds    ${timeout}    2s
        ...    Retrieve Remote File Contents    ${ping_output_file}    ${src['ip']}    ${src['user']}    ${src['pass']}
        ...    ${src['container_type']}    ${src['container_name']}
        Run Keyword If    ${has_dataplane}    Check Ping Result    True    ${ping_output}
    END
    # Verify control plane functionality by deleting and re-adding the subscriber
    # Once the rw-core is restarted, it takes a while for the OLTs/ONUs to reconcile; if the OLT is deleted
    # before the OLTs are reconciled successfully, there would be a reconcile error. This scenario is not handled in
    # VOLTHA as of now, and there is no other way to check whether the reconcile has happened for all the OLTs.
    # Due to this limitation, a sleep of 60s is introduced to give the rw-core enough time to reconcile the OLTs.
    Sleep    60s
    ${port_fwd}    Start Process    kubectl -n voltha port-forward svc/${kafka} ${KAFKA_PORT}:${KAFKA_PORT} --address 0.0.0.0 &    shell=true
    Verify Control Plane After Pod Restart DT

*** Keywords ***
Setup Suite
    [Documentation]    Set up the test suite
    Common Test Suite Setup
    # power_switch.robot needs this to support different vendors' power switches
    ${switch_type}=    Get Variable Value    ${web_power_switch.type}
    Run Keyword If    "${switch_type}"!=""    Set Global Variable    ${powerswitch_type}    ${switch_type}
    # Run Pre-test Setup for Soak Job
    # Note: As a soak requirement, it is expected that the devices under test are already created and enabled
    Run Keyword If    '${SOAK_TEST}'=='True'    Setup Soak


Clear All Devices Then Create New Device
    [Documentation]    Remove any devices from VOLTHA and VGC, then create a new device
    # Remove all devices from VOLTHA and VGC
    Delete All Devices and Verify
    # Execute the normal test Setup keyword
    Setup

Verify Control Plane After Pod Restart DT
    [Documentation]    Verifies the control plane functionality after a VOLTHA pod restart
    ...    by deleting and re-adding the subscriber
    FOR    ${I}    IN RANGE    0    ${num_all_onus}
        ${src}=    Set Variable    ${hosts.src[${I}]}
        ${dst}=    Set Variable    ${hosts.dst[${I}]}
        ${of_id}=    Get ofID From OLT List    ${src['olt']}
        ${nni_port}=    Wait Until Keyword Succeeds    ${timeout}    2s    Get NNI Port in VGC    ${of_id}
        ${onu_port}=    Wait Until Keyword Succeeds    ${timeout}    2s
        ...    Get ONU Port in VGC    ${src['onu']}    ${of_id}    ${src['uni_id']}
        ${onu_device_id}=    Wait Until Keyword Succeeds    ${timeout}    2s    Get Device ID From SN    ${src['onu']}
        # Remove subscriber access
        Remove Subscriber Access    ${of_id}    ${onu_port}
        Run Keyword If    ${has_dataplane}    Run Keyword And Continue On Failure
        ...    Wait Until Keyword Succeeds    ${timeout}    2s
        ...    Check Ping    False    ${dst['dp_iface_ip_qinq']}    ${src['dp_iface_name']}
        ...    ${src['ip']}    ${src['user']}    ${src['pass']}    ${src['container_type']}    ${src['container_name']}
        # Disable and re-enable the ONU (to replicate the current DT workflow)
        # TODO: Delete and Auto-Discovery Add of ONU (not yet supported)
        Disable Device    ${onu_device_id}
        Wait Until Keyword Succeeds    ${timeout}    5s
        ...    Validate Device    DISABLED    UNKNOWN
        ...    REACHABLE    ${src['onu']}
        Enable Device    ${onu_device_id}
        Wait Until Keyword Succeeds    ${timeout}    5s
        ...    Validate Device    ENABLED    ACTIVE
        ...    REACHABLE    ${src['onu']}
        # Add subscriber access
        Add Subscriber Details    ${of_id}    ${onu_port}
        # Verify subscriber access flows are added for the ONU port
        Run Keyword And Continue On Failure    Wait Until Keyword Succeeds    ${timeout}    5s
        ...    Verify Subscriber Access Flows Added for ONU DT in VGC    ${VGC_SSH_IP}    ${VGC_SSH_PORT}    ${of_id}
        ...    ${onu_port}    ${nni_port}    ${src['s_tag']}
        Wait Until Keyword Succeeds    ${timeout}    5s
        ...    Validate Device    ENABLED    ACTIVE
        ...    REACHABLE    ${src['onu']}    onu=True    onu_reason=omci-flows-pushed
        # Workaround for the issue seen in VOL-4489. Keep this workaround until VOL-4489 is fixed.
        Run Keyword If    ${has_dataplane}    Reboot XGSPON ONU    ${src['olt']}    ${src['onu']}    omci-flows-pushed
        # Workaround ends here for the issue seen in VOL-4489.
        Run Keyword If    ${has_dataplane}    Run Keyword And Continue On Failure
        ...    Wait Until Keyword Succeeds    ${timeout}    2s
        ...    Check Ping    True    ${dst['dp_iface_ip_qinq']}    ${src['dp_iface_name']}
        ...    ${src['ip']}    ${src['user']}    ${src['pass']}    ${src['container_type']}    ${src['container_name']}
    END