# Copyright 2022-2023 Open Networking Foundation (ONF) and the ONF Contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# FIXME Can we use the same test against BBSim and Hardware?

*** Settings ***
Documentation       Test various end-to-end scenarios
Suite Setup         Setup Suite
Test Setup          Setup
Test Teardown       Teardown
Suite Teardown      Teardown Suite
Library             Collections
Library             String
Library             OperatingSystem
Library             XML
Library             RequestsLibrary
Library             ../../libraries/DependencyLibrary.py
Resource            ../../libraries/vgc.robot
Resource            ../../libraries/voltctl.robot
Resource            ../../libraries/voltha.robot
Resource            ../../libraries/utils_vgc.robot
Resource            ../../libraries/k8s.robot
Resource            ../../variables/variables.robot
Resource            ../../libraries/power_switch.robot

*** Variables ***
${POD_NAME}                 flex-ocp-cord
${VOLTHA_POD_NUM}           8
${NAMESPACE}                voltha
${INFRA_NAMESPACE}          default
${STACK_NAME}               voltha
# The deployment name is used for the variable below because the radius pod name is
# parsed with grep; the full radius pod name can also be used.
${RESTART_POD_NAME}         radius
${timeout}                  120s
${of_id}                    0
${logical_id}               0
${has_dataplane}            True
${teardown_device}          False
${scripts}                  ../../scripts

# Per-test logging on failure is turned off by default; set this variable to enable it
${container_log_dir}        ${None}

# Logging flag to enable Collect Logs; it can also be passed via the command line,
# for example: -v logging:False
${logging}                  True

# Flag specific to Soak Jobs
${SOAK_TEST}                False
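
# Example invocation (a sketch only; the suite file name and variable values below are
# placeholders to be adapted to the deployment under test):
#   robot -v logging:False -v has_dataplane:False -v NAMESPACE:voltha <this_suite>.robot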

*** Test Cases ***
Verify restart openonu-adapter container after subscriber provisioning for DT
    [Documentation]    Restart openonu-adapter container after VOLTHA is operational.
    ...    Prerequisite: ONUs are authenticated and pingable.
    [Tags]    Restart-OpenOnu-Dt    soak    raj
    [Setup]    Start Logging    Restart-OpenOnu-Dt
    [Teardown]    Run Keywords    Run Keyword If    ${logging}    Collect Logs
    ...    AND    Stop Logging    Restart-OpenOnu-Dt
    # Add OLT device
    Run Keyword If    '${SOAK_TEST}'=='False'    Setup
    # Perform the sanity test to make sure all subscribers have completed DHCP and are pingable
    Run Keyword If    ${has_dataplane}    Clean Up Linux
    Perform Sanity Test DT
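    # Snapshot the number of Running pods so the same count can be verified after the restart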
    ${podStatusOutput}=    Run    kubectl get pods -n ${NAMESPACE}
    Log    ${podStatusOutput}
    ${countBeforeRestart}=    Run    kubectl get pods -n ${NAMESPACE} | grep Running | wc -l
    ${podName}    Set Variable    adapter-open-onu
    Wait Until Keyword Succeeds    ${timeout}    15s    Delete K8s Pods By Label    ${NAMESPACE}    app    ${podName}
    Wait Until Keyword Succeeds    ${timeout}    2s    Validate Pods Status By Label    ${NAMESPACE}
    ...    app    ${podName}    Running
    # Re-verify the subscribers after the openonu adapter is restarted
    Run Keyword If    ${has_dataplane}    Clean Up Linux
    Perform Sanity Test DT
    ${podStatusOutput}=    Run    kubectl get pods -n ${NAMESPACE}
    Log    ${podStatusOutput}
    ${countAfterRestart}=    Run    kubectl get pods -n ${NAMESPACE} | grep Running | wc -l
    Should Be Equal As Strings    ${countAfterRestart}    ${countBeforeRestart}
    Log to console    Pod ${podName} restarted and sanity checks passed successfully
    Run Keyword If    '${SOAK_TEST}'=='False'    Delete All Devices and Verify

Verify restart openolt-adapter container after subscriber provisioning for DT
    [Documentation]    Restart openolt-adapter container after VOLTHA is operational.
    ...    Prerequisite: ONUs are authenticated and pingable.
    [Tags]    Restart-OpenOlt-Dt    soak    raj
    [Setup]    Start Logging    Restart-OpenOlt-Dt
    [Teardown]    Run Keywords    Run Keyword If    ${logging}    Collect Logs
    ...    AND    Stop Logging    Restart-OpenOlt-Dt
    # Add OLT device
    Run Keyword If    '${SOAK_TEST}'=='False'    Setup
    # Perform the sanity test to make sure all subscribers have completed DHCP and are pingable
    Run Keyword If    ${has_dataplane}    Clean Up Linux
    Perform Sanity Test DT
    ${podStatusOutput}=    Run    kubectl get pods -n ${NAMESPACE}
    Log    ${podStatusOutput}
    ${countBeforeRestart}=    Run    kubectl get pods -n ${NAMESPACE} | grep Running | wc -l
    ${podName}    Set Variable    ${OLT_ADAPTER_APP_LABEL}
    Wait Until Keyword Succeeds    ${timeout}    15s    Delete K8s Pods By Label    ${NAMESPACE}    app    ${podName}
    Wait Until Keyword Succeeds    ${timeout}    2s    Validate Pods Status By Label    ${NAMESPACE}
    ...    app    ${podName}    Running
    # Re-verify the subscribers after the openolt adapter is restarted
    Run Keyword If    ${has_dataplane}    Clean Up Linux
    Perform Sanity Test DT
    ${podStatusOutput}=    Run    kubectl get pods -n ${NAMESPACE}
    Log    ${podStatusOutput}
    ${countAfterRestart}=    Run    kubectl get pods -n ${NAMESPACE} | grep Running | wc -l
    Should Be Equal As Strings    ${countAfterRestart}    ${countBeforeRestart}
    Log to console    Pod ${podName} restarted and sanity checks passed successfully

Verify openolt adapter restart before subscriber provisioning for DT
    [Documentation]    Restart openolt-adapter container before adding the subscriber.
    [Tags]    functionalDt    olt-adapter-restart-Dt    raj
    [Setup]    Start Logging    OltAdapterRestart-Dt
    [Teardown]    Run Keywords    Run Keyword If    ${logging}    Collect Logs
    ...    AND    Stop Logging    OltAdapterRestart-Dt
    # Add OLT device
    Clear All Devices Then Create New Device
    Run Keyword If    ${has_dataplane}    Clean Up Linux
    Set Global Variable    ${of_id}
    FOR    ${I}    IN RANGE    0    ${num_all_onus}
        ${src}=    Set Variable    ${hosts.src[${I}]}
        ${dst}=    Set Variable    ${hosts.dst[${I}]}
        ${of_id}=    Get ofID From OLT List    ${src['olt']}
        ${onu_port}=    Wait Until Keyword Succeeds    ${timeout}    2s    Get ONU Port in VGC    ${src['onu']}
        ...    ${of_id}    ${src['uni_id']}
        ${onu_device_id}=    Get Device ID From SN    ${src['onu']}
        Wait Until Keyword Succeeds    ${timeout}    5s
        ...    Validate Device    ENABLED    ACTIVE    REACHABLE
        ...    ${onu_device_id}    onu=True    onu_reason=initial-mib-downloaded    by_dev_id=True
    END
    # Scale the openolt adapter deployment down to 0 pods and, once confirmed, back up to 1
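    # (Assumption: the Scale K8s Deployment by Pod Label keyword wraps a kubectl scale of the
    # matching deployment, roughly `kubectl -n ${NAMESPACE} scale deployment -l app=${OLT_ADAPTER_APP_LABEL} --replicas=<n>`)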
    Scale K8s Deployment by Pod Label    ${NAMESPACE}    app    ${OLT_ADAPTER_APP_LABEL}    0
    Wait Until Keyword Succeeds    ${timeout}    2s    Pods Do Not Exist By Label    ${NAMESPACE}    app
    ...    ${OLT_ADAPTER_APP_LABEL}
    # Scale the openolt adapter deployment back up and make sure it is available and ready again
    Scale K8s Deployment by Pod Label    ${NAMESPACE}    app    ${OLT_ADAPTER_APP_LABEL}    1
    Wait Until Keyword Succeeds    ${timeout}    2s
    ...    Check Expected Available Deployment Replicas By Pod Label    ${NAMESPACE}    app    ${OLT_ADAPTER_APP_LABEL}    1
    Wait Until Keyword Succeeds    ${timeout}    3s    Pods Are Ready By Label    ${NAMESPACE}    app    ${OLT_ADAPTER_APP_LABEL}
    # Ensure the device is available in VGC; this represents system connectivity being restored
    FOR    ${I}    IN RANGE    0    ${olt_count}
        ${olt_serial_number}=    Get From Dictionary    ${olt_ids}[${I}]    sn
        ${olt_device_id}=    Get OLTDeviceID From OLT List    ${olt_serial_number}
        ${of_id}=    Wait Until Keyword Succeeds    ${timeout}    15s    Validate OLT Device in VGC
        ...    ${olt_serial_number}
        Wait Until Keyword Succeeds    120s    2s    Device Is Available In VGC
        ...    ${of_id}
    END
    FOR    ${I}    IN RANGE    0    ${num_all_onus}
        ${src}=    Set Variable    ${hosts.src[${I}]}
        ${dst}=    Set Variable    ${hosts.dst[${I}]}
        ${of_id}=    Get ofID From OLT List    ${src['olt']}
        ${nni_port}=    Wait Until Keyword Succeeds    ${timeout}    2s    Get NNI Port in VGC    ${of_id}
        ${onu_port}=    Wait Until Keyword Succeeds    ${timeout}    2s    Get ONU Port in VGC    ${src['onu']}
        ...    ${of_id}    ${src['uni_id']}
        # Add subscriber access and verify that DHCP completes to ensure the system is still functioning properly
        Add Subscriber Details    ${of_id}    ${onu_port}
        # Verify meters in VGC
        Run Keyword And Continue On Failure    Wait Until Keyword Succeeds    ${timeout}    5s
        ...    Verify Meters in VGC Ietf    ${VGC_SSH_IP}    ${VGC_SSH_PORT}    ${of_id}    ${onu_port}
        # Verify subscriber access flows are added for the ONU port
        Run Keyword And Continue On Failure    Wait Until Keyword Succeeds    ${timeout}    5s
        ...    Verify Subscriber Access Flows Added for ONU DT in VGC    ${VGC_SSH_IP}    ${VGC_SSH_PORT}    ${of_id}
        ...    ${onu_port}    ${nni_port}    ${src['s_tag']}
        Wait Until Keyword Succeeds    ${timeout}    5s    Validate Device
        ...    ENABLED    ACTIVE    REACHABLE
        ...    ${src['onu']}    onu=True    onu_reason=omci-flows-pushed
        Run Keyword If    ${has_dataplane}    Run Keyword And Continue On Failure    Validate DHCP and Ping    True
        ...    True    ${src['dp_iface_name']}    ${src['s_tag']}    ${src['c_tag']}    ${dst['dp_iface_ip_qinq']}
        ...    ${src['ip']}    ${src['user']}    ${src['pass']}    ${src['container_type']}    ${src['container_name']}
        ...    ${dst['dp_iface_name']}    ${dst['ip']}    ${dst['user']}    ${dst['pass']}    ${dst['container_type']}
        ...    ${dst['container_name']}
    END

Sanity E2E Test for OLT/ONU on POD With Core Fail and Restart for DT
    [Documentation]    Deploys a device instance and waits for it to authenticate. After
    ...    authentication is successful the rw-core deployment is scaled to 0 instances to
    ...    simulate a POD crash. The test then scales the rw-core back to a single instance
    ...    and configures VGC for access. The test succeeds if the device is able to
    ...    complete the DHCP sequence.
    [Tags]    functionalDt    rwcore-restart-Dt    raj
    [Setup]    Run Keywords    Start Logging    RwCoreFailAndRestart-Dt
    ...    AND    Clear All Devices Then Create New Device
    [Teardown]    Run Keywords    Run Keyword If    ${logging}    Collect Logs
    ...    AND    Stop Logging    RwCoreFailAndRestart-Dt
    #...    AND    Delete Device and Verify
    Run Keyword If    ${has_dataplane}    Clean Up Linux
    FOR    ${I}    IN RANGE    0    ${olt_count}
        ${olt_serial_number}=    Get From Dictionary    ${olt_ids}[${I}]    sn
        ${olt_device_id}=    Get OLTDeviceID From OLT List    ${olt_serial_number}
        ${of_id}=    Wait Until Keyword Succeeds    ${timeout}    15s    Validate OLT Device in VGC
        ...    ${olt_serial_number}
        ${nni_port}=    Wait Until Keyword Succeeds    ${timeout}    2s    Get NNI Port in VGC    ${of_id}
    END
    FOR    ${I}    IN RANGE    0    ${num_all_onus}
        ${src}=    Set Variable    ${hosts.src[${I}]}
        ${dst}=    Set Variable    ${hosts.dst[${I}]}
        ${of_id}=    Get ofID From OLT List    ${src['olt']}
        ${onu_port}=    Wait Until Keyword Succeeds    ${timeout}    2s
        ...    Get ONU Port in VGC    ${src['onu']}    ${of_id}    ${src['uni_id']}
        ${onu_device_id}=    Get Device ID From SN    ${src['onu']}
        # Bring up the device and verify it authenticates
        Wait Until Keyword Succeeds    360s    5s    Validate Device    ENABLED    ACTIVE    REACHABLE
        ...    ${onu_device_id}    onu=True    onu_reason=initial-mib-downloaded    by_dev_id=True
    END
    # Scale down the rw-core deployment to 0 PODs and, once confirmed, scale it back up to 1
    Scale K8s Deployment    voltha    voltha-voltha-rw-core    0
    Wait Until Keyword Succeeds    ${timeout}    2s    Pod Does Not Exist    voltha    voltha-voltha-rw-core
    # Ensure the ofagent POD goes "not-ready" as expected
    Wait Until Keyword Succeeds    ${timeout}    2s
    ...    Check Expected Available Deployment Replicas    voltha    voltha-voltha-ofagent    0
    # Scale up the core deployment and make sure both it and the ofagent deployment are back
    Scale K8s Deployment    voltha    voltha-voltha-rw-core    1
    Wait Until Keyword Succeeds    ${timeout}    2s
    ...    Check Expected Available Deployment Replicas    voltha    voltha-voltha-rw-core    1
    Wait Until Keyword Succeeds    ${timeout}    2s
    ...    Check Expected Available Deployment Replicas    voltha    voltha-voltha-ofagent    1
    # Scaling the POD behind a service down and up causes the port forward to stop working,
    # so restart the port forwarding for the API service
    Restart VOLTHA Port Forward    voltha-api
    # Ensure that the ofagent pod is up and ready and the device is available in VGC; this
    # represents system connectivity being restored
    FOR    ${I}    IN RANGE    0    ${olt_count}
        ${olt_serial_number}=    Get From Dictionary    ${olt_ids}[${I}]    sn
        ${olt_device_id}=    Get OLTDeviceID From OLT List    ${olt_serial_number}
        ${of_id}=    Wait Until Keyword Succeeds    ${timeout}    15s    Validate OLT Device in VGC
        ...    ${olt_serial_number}
        Wait Until Keyword Succeeds    120s    2s    Device Is Available In VGC
        ...    ${of_id}
    END
    FOR    ${I}    IN RANGE    0    ${num_all_onus}
        ${src}=    Set Variable    ${hosts.src[${I}]}
        ${dst}=    Set Variable    ${hosts.dst[${I}]}
        ${of_id}=    Get ofID From OLT List    ${src['olt']}
        ${nni_port}=    Wait Until Keyword Succeeds    ${timeout}    2s    Get NNI Port in VGC    ${of_id}
        ${onu_port}=    Wait Until Keyword Succeeds    ${timeout}    2s
        ...    Get ONU Port in VGC    ${src['onu']}    ${of_id}    ${src['uni_id']}
        # Add subscriber access and verify that DHCP completes to ensure the system is still functioning properly
        Post Request VGC    services/${of_id}/${onu_port}
        # Verify subscriber access flows are added for the ONU port
        Wait Until Keyword Succeeds    ${timeout}    5s
        ...    Verify Subscriber Access Flows Added for ONU DT in VGC    ${VGC_SSH_IP}    ${VGC_SSH_PORT}    ${of_id}
        ...    ${onu_port}    ${nni_port}    ${src['s_tag']}
        Run Keyword If    ${has_dataplane}    Run Keyword And Continue On Failure    Validate DHCP and Ping    True
        ...    True    ${src['dp_iface_name']}    ${src['s_tag']}    ${src['c_tag']}    ${dst['dp_iface_ip_qinq']}
        ...    ${src['ip']}    ${src['user']}    ${src['pass']}    ${src['container_type']}    ${src['container_name']}
        ...    ${dst['dp_iface_name']}    ${dst['ip']}    ${dst['user']}    ${dst['pass']}    ${dst['container_type']}
        ...    ${dst['container_name']}
    END

Verify OLT Soft Reboot for DT
    [Documentation]    Test soft reboot of the OLT using the voltctl command
    [Tags]    VOL-2818    OLTSoftRebootDt    functionalDt    raj
    [Setup]    Start Logging    OLTSoftRebootDt
    [Teardown]    Run Keywords    Run Keyword If    ${logging}    Collect Logs
    ...    AND    Stop Logging    OLTSoftRebootDt
    FOR    ${I}    IN RANGE    0    ${olt_count}
        ${olt_serial_number}=    Get From Dictionary    ${olt_ids}[${I}]    sn
        ${olt_device_id}=    Get OLTDeviceID From OLT List    ${olt_serial_number}
        Run Keyword And Continue On Failure    Wait Until Keyword Succeeds    360s    5s
        ...    Validate OLT Device    ENABLED    ACTIVE
        ...    REACHABLE    ${olt_serial_number}
        # Reboot the OLT using the "voltctl device reboot" command
        Reboot Device    ${olt_device_id}
        # Wait for the OLT to actually go down
        Wait Until Keyword Succeeds    360s    5s    Validate OLT Device    ENABLED    UNKNOWN    UNREACHABLE
        ...    ${olt_serial_number}
    END
    # Verify that ping fails
    FOR    ${I}    IN RANGE    0    ${num_all_onus}
        ${src}=    Set Variable    ${hosts.src[${I}]}
        ${dst}=    Set Variable    ${hosts.dst[${I}]}
        Run Keyword If    ${has_dataplane}    Run Keyword And Continue On Failure
        ...    Wait Until Keyword Succeeds    ${timeout}    2s
        ...    Check Ping    False    ${dst['dp_iface_ip_qinq']}    ${src['dp_iface_name']}
        ...    ${src['ip']}    ${src['user']}    ${src['pass']}    ${src['container_type']}    ${src['container_name']}
    END
    # Check OLT states
    FOR    ${I}    IN RANGE    0    ${olt_count}
        ${olt_serial_number}=    Get From Dictionary    ${list_olts}[${I}]    sn
        ${olt_ssh_ip}=    Get From Dictionary    ${list_olts}[${I}]    sship
        ${olt_device_id}=    Get OLTDeviceID From OLT List    ${olt_serial_number}
        # Wait for the OLT to come back up
        Run Keyword If    ${has_dataplane}    Wait Until Keyword Succeeds    120s    10s
        ...    Check Remote System Reachability    True    ${olt_ssh_ip}
        # Check OLT states
        Wait Until Keyword Succeeds    360s    5s
        ...    Validate OLT Device    ENABLED    ACTIVE
        ...    REACHABLE    ${olt_serial_number}
    END
    # Wait extra time for the ONUs to come up
    Sleep    60s
    # After the reboot, check that the ONUs are active and DHCP/pingable
    Run Keyword If    ${has_dataplane}    Clean Up Linux
    Perform Sanity Test DT

Verify restart openonu-adapter container for DT
    [Documentation]    Restart openonu-adapter container after VOLTHA is operational.
    ...    Run ping continuously in the background during the container restart,
    ...    and verify that there is no effect on the dataplane.
    ...    Also verify that the voltha control plane functionality is not affected.
    [Tags]    functionalDt    RestartOpenOnuPingDt    raj
    [Setup]    Start Logging    RestartOpenOnuPingDt
    [Teardown]    Run Keywords    Run Keyword If    ${logging}    Collect Logs
    ...    AND    Stop Logging    RestartOpenOnuPingDt
    Clear All Devices Then Create New Device
    # Perform the sanity test to make sure all subscribers have completed DHCP and are pingable
    Run Keyword If    ${has_dataplane}    Clean Up Linux
    Perform Sanity Test DT
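    # Start a continuous ping in the background on every subscriber host; its output file is
    # retrieved after the restart to confirm the dataplane was not disturbed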
    FOR    ${I}    IN RANGE    0    ${num_all_onus}
        ${src}=    Set Variable    ${hosts.src[${I}]}
        ${dst}=    Set Variable    ${hosts.dst[${I}]}
        ${ping_output_file}=    Set Variable    /tmp/${src['onu']}_ping
        Run Keyword If    ${has_dataplane}    Run Keyword And Continue On Failure
        ...    Wait Until Keyword Succeeds    ${timeout}    2s
        ...    Run Ping In Background    ${ping_output_file}    ${dst['dp_iface_ip_qinq']}    ${src['dp_iface_name']}
        ...    ${src['ip']}    ${src['user']}    ${src['pass']}    ${src['container_type']}    ${src['container_name']}
    END
    ${podStatusOutput}=    Run    kubectl get pods -n ${NAMESPACE}
    Log    ${podStatusOutput}
    ${countBeforeRestart}=    Run    kubectl get pods -n ${NAMESPACE} | grep Running | wc -l
    ${podName}    Set Variable    adapter-open-onu
    Wait Until Keyword Succeeds    ${timeout}    15s    Delete K8s Pods By Label    ${NAMESPACE}    app    ${podName}
    Wait Until Keyword Succeeds    ${timeout}    2s    Validate Pods Status By Label    ${NAMESPACE}
    ...    app    ${podName}    Running
    # Verify the pod count is restored after the openonu adapter is restarted
    ${podStatusOutput}=    Run    kubectl get pods -n ${NAMESPACE}
    Log    ${podStatusOutput}
    ${countAfterRestart}=    Run    kubectl get pods -n ${NAMESPACE} | grep Running | wc -l
    Should Be Equal As Strings    ${countAfterRestart}    ${countBeforeRestart}
    FOR    ${I}    IN RANGE    0    ${num_all_onus}
        ${src}=    Set Variable    ${hosts.src[${I}]}
        ${dst}=    Set Variable    ${hosts.dst[${I}]}
        Run Keyword If    ${has_dataplane}    Run Keyword And Continue On Failure
        ...    Wait Until Keyword Succeeds    ${timeout}    2s
        ...    Stop Ping Running In Background    ${src['ip']}    ${src['user']}    ${src['pass']}
        ...    ${src['container_type']}    ${src['container_name']}
    END
    FOR    ${I}    IN RANGE    0    ${num_all_onus}
        ${src}=    Set Variable    ${hosts.src[${I}]}
        ${dst}=    Set Variable    ${hosts.dst[${I}]}
        ${ping_output_file}=    Set Variable    /tmp/${src['onu']}_ping
        ${ping_output}=    Run Keyword If    ${has_dataplane}    Run Keyword And Continue On Failure
        ...    Wait Until Keyword Succeeds    ${timeout}    2s
        ...    Retrieve Remote File Contents    ${ping_output_file}    ${src['ip']}    ${src['user']}    ${src['pass']}
        ...    ${src['container_type']}    ${src['container_name']}
        Run Keyword If    ${has_dataplane}    Check Ping Result    True    ${ping_output}
    END
    # Verify control plane functionality by deleting and re-adding the subscriber
    Verify Control Plane After Pod Restart DT

Verify restart openolt-adapter container for DT
    [Documentation]    Restart openolt-adapter container after VOLTHA is operational.
    ...    Run ping continuously in the background during the container restart,
    ...    and verify that there is no effect on the dataplane.
    ...    Also verify that the voltha control plane functionality is not affected.
    [Tags]    functionalDt    RestartOpenOltPingDt    raj
    [Setup]    Start Logging    RestartOpenOltPingDt
    [Teardown]    Run Keywords    Run Keyword If    ${logging}    Collect Logs
    ...    AND    Stop Logging    RestartOpenOltPingDt
    Clear All Devices Then Create New Device
    # Perform the sanity test to make sure all subscribers have completed DHCP and are pingable
    Run Keyword If    ${has_dataplane}    Clean Up Linux
    Perform Sanity Test DT
    FOR    ${I}    IN RANGE    0    ${num_all_onus}
        ${src}=    Set Variable    ${hosts.src[${I}]}
        ${dst}=    Set Variable    ${hosts.dst[${I}]}
        ${ping_output_file}=    Set Variable    /tmp/${src['onu']}_ping
        Run Keyword If    ${has_dataplane}    Run Keyword And Continue On Failure
        ...    Wait Until Keyword Succeeds    ${timeout}    2s
        ...    Run Ping In Background    ${ping_output_file}    ${dst['dp_iface_ip_qinq']}    ${src['dp_iface_name']}
        ...    ${src['ip']}    ${src['user']}    ${src['pass']}    ${src['container_type']}    ${src['container_name']}
    END
    ${podStatusOutput}=    Run    kubectl get pods -n ${NAMESPACE}
    Log    ${podStatusOutput}
    ${countBeforeRestart}=    Run    kubectl get pods -n ${NAMESPACE} | grep Running | wc -l
    ${podName}    Set Variable    ${OLT_ADAPTER_APP_LABEL}
    Wait Until Keyword Succeeds    ${timeout}    15s    Delete K8s Pods By Label    ${NAMESPACE}    app    ${podName}
    Wait Until Keyword Succeeds    ${timeout}    2s    Validate Pods Status By Label    ${NAMESPACE}
    ...    app    ${podName}    Running
    # Wait for 1 min after the openolt adapter is restarted
    Sleep    60s
    ${podStatusOutput}=    Run    kubectl get pods -n ${NAMESPACE}
    Log    ${podStatusOutput}
    ${countAfterRestart}=    Run    kubectl get pods -n ${NAMESPACE} | grep Running | wc -l
    Should Be Equal As Strings    ${countAfterRestart}    ${countBeforeRestart}
    FOR    ${I}    IN RANGE    0    ${num_all_onus}
        ${src}=    Set Variable    ${hosts.src[${I}]}
        ${dst}=    Set Variable    ${hosts.dst[${I}]}
        Run Keyword If    ${has_dataplane}    Run Keyword And Continue On Failure
        ...    Wait Until Keyword Succeeds    ${timeout}    2s
        ...    Stop Ping Running In Background    ${src['ip']}    ${src['user']}    ${src['pass']}
        ...    ${src['container_type']}    ${src['container_name']}
    END
    FOR    ${I}    IN RANGE    0    ${num_all_onus}
        ${src}=    Set Variable    ${hosts.src[${I}]}
        ${dst}=    Set Variable    ${hosts.dst[${I}]}
        ${ping_output_file}=    Set Variable    /tmp/${src['onu']}_ping
        ${ping_output}=    Run Keyword If    ${has_dataplane}    Run Keyword And Continue On Failure
        ...    Wait Until Keyword Succeeds    ${timeout}    2s
        ...    Retrieve Remote File Contents    ${ping_output_file}    ${src['ip']}    ${src['user']}    ${src['pass']}
        ...    ${src['container_type']}    ${src['container_name']}
        Run Keyword If    ${has_dataplane}    Check Ping Result    True    ${ping_output}
    END
    # Verify control plane functionality by deleting and re-adding the subscriber
    Verify Control Plane After Pod Restart DT

Verify restart rw-core container for DT
    [Documentation]    Restart rw-core container after VOLTHA is operational.
    ...    Run ping continuously in the background during the container restart,
    ...    and verify that there is no effect on the dataplane.
    ...    Also verify that the voltha control plane functionality is not affected.
    [Tags]    functionalDt    RestartRwCorePingDt    raj
    [Setup]    Start Logging    RestartRwCorePingDt
    [Teardown]    Run Keywords    Run Keyword If    ${logging}    Collect Logs
    ...    AND    Stop Logging    RestartRwCorePingDt
    Clear All Devices Then Create New Device
    # Perform the sanity test to make sure all subscribers have completed DHCP and are pingable
    Run Keyword If    ${has_dataplane}    Clean Up Linux
    Perform Sanity Test DT
    FOR    ${I}    IN RANGE    0    ${num_all_onus}
        ${src}=    Set Variable    ${hosts.src[${I}]}
        ${dst}=    Set Variable    ${hosts.dst[${I}]}
        ${ping_output_file}=    Set Variable    /tmp/${src['onu']}_ping
        Run Keyword If    ${has_dataplane}    Run Keyword And Continue On Failure
        ...    Wait Until Keyword Succeeds    ${timeout}    2s
        ...    Run Ping In Background    ${ping_output_file}    ${dst['dp_iface_ip_qinq']}    ${src['dp_iface_name']}
        ...    ${src['ip']}    ${src['user']}    ${src['pass']}    ${src['container_type']}    ${src['container_name']}
    END
    ${podStatusOutput}=    Run    kubectl get pods -n ${NAMESPACE}
    Log    ${podStatusOutput}
    ${countBeforeRestart}=    Run    kubectl get pods -n ${NAMESPACE} | grep Running | wc -l
    ${podName}    Set Variable    rw-core
    Wait Until Keyword Succeeds    ${timeout}    15s    Delete K8s Pods By Label    ${NAMESPACE}    app    ${podName}
    Wait Until Keyword Succeeds    ${timeout}    2s    Validate Pods Status By Label    ${NAMESPACE}
    ...    app    ${podName}    Running
    # Wait for 1 min after the rw-core is restarted
    Sleep    60s
    # Scaling the POD behind a service down and up causes the port forward to stop working,
    # so restart the port forwarding for the API service
    Restart VOLTHA Port Forward    voltha-api
    ${podStatusOutput}=    Run    kubectl get pods -n ${NAMESPACE}
    Log    ${podStatusOutput}
    ${countAfterRestart}=    Run    kubectl get pods -n ${NAMESPACE} | grep Running | wc -l
    Should Be Equal As Strings    ${countAfterRestart}    ${countBeforeRestart}
    FOR    ${I}    IN RANGE    0    ${num_all_onus}
        ${src}=    Set Variable    ${hosts.src[${I}]}
        ${dst}=    Set Variable    ${hosts.dst[${I}]}
        Run Keyword If    ${has_dataplane}    Run Keyword And Continue On Failure
        ...    Wait Until Keyword Succeeds    ${timeout}    2s
        ...    Stop Ping Running In Background    ${src['ip']}    ${src['user']}    ${src['pass']}
        ...    ${src['container_type']}    ${src['container_name']}
    END
    FOR    ${I}    IN RANGE    0    ${num_all_onus}
        ${src}=    Set Variable    ${hosts.src[${I}]}
        ${dst}=    Set Variable    ${hosts.dst[${I}]}
        ${ping_output_file}=    Set Variable    /tmp/${src['onu']}_ping
        ${ping_output}=    Run Keyword If    ${has_dataplane}    Run Keyword And Continue On Failure
        ...    Wait Until Keyword Succeeds    ${timeout}    2s
        ...    Retrieve Remote File Contents    ${ping_output_file}    ${src['ip']}    ${src['user']}    ${src['pass']}
        ...    ${src['container_type']}    ${src['container_name']}
        Run Keyword If    ${has_dataplane}    Check Ping Result    True    ${ping_output}
    END
    # Verify control plane functionality by deleting and re-adding the subscriber
    Verify Control Plane After Pod Restart DT

*** Keywords ***
Setup Suite
    [Documentation]    Set up the test suite
    Common Test Suite Setup
    # power_switch.robot needs this to support different vendors' power switches
    ${switch_type}=    Get Variable Value    ${web_power_switch.type}
    Run Keyword If    "${switch_type}"!=""    Set Global Variable    ${powerswitch_type}    ${switch_type}
    # Run pre-test setup for the soak job
    # Note: as a soak requirement, the devices under test are expected to be already created and enabled
    Run Keyword If    '${SOAK_TEST}'=='True'    Setup Soak

Clear All Devices Then Create New Device
    [Documentation]    Remove any devices from VOLTHA and VGC
    # Remove all devices from voltha and VGC
    Delete All Devices and Verify
    # Execute the normal test Setup keyword
    Setup

Verify Control Plane After Pod Restart DT
    [Documentation]    Verifies the control plane functionality after the voltha pod restart
    ...    by deleting and re-adding the subscriber
    FOR    ${I}    IN RANGE    0    ${num_all_onus}
        ${src}=    Set Variable    ${hosts.src[${I}]}
        ${dst}=    Set Variable    ${hosts.dst[${I}]}
        ${of_id}=    Get ofID From OLT List    ${src['olt']}
        ${nni_port}=    Wait Until Keyword Succeeds    ${timeout}    2s    Get NNI Port in VGC    ${of_id}
        ${onu_port}=    Wait Until Keyword Succeeds    ${timeout}    2s
        ...    Get ONU Port in VGC    ${src['onu']}    ${of_id}    ${src['uni_id']}
        ${onu_device_id}=    Wait Until Keyword Succeeds    ${timeout}    2s    Get Device ID From SN    ${src['onu']}
        # Remove Subscriber Access
        Remove Subscriber Access    ${of_id}    ${onu_port}
        Run Keyword If    ${has_dataplane}    Run Keyword And Continue On Failure
        ...    Wait Until Keyword Succeeds    ${timeout}    2s
        ...    Check Ping    False    ${dst['dp_iface_ip_qinq']}    ${src['dp_iface_name']}
        ...    ${src['ip']}    ${src['user']}    ${src['pass']}    ${src['container_type']}    ${src['container_name']}
        # Disable and Re-Enable the ONU (to replicate the current DT workflow)
        # TODO: Delete and Auto-Discovery Add of ONU (not yet supported)
        Disable Device    ${onu_device_id}
        Wait Until Keyword Succeeds    ${timeout}    5s
        ...    Validate Device    DISABLED    UNKNOWN
        ...    REACHABLE    ${src['onu']}
        Enable Device    ${onu_device_id}
        Wait Until Keyword Succeeds    ${timeout}    5s
        ...    Validate Device    ENABLED    ACTIVE
        ...    REACHABLE    ${src['onu']}
        # Add Subscriber Access
        Add Subscriber Details    ${of_id}    ${onu_port}
        # Verify subscriber access flows are added for the ONU port
        Run Keyword And Continue On Failure    Wait Until Keyword Succeeds    ${timeout}    5s
        ...    Verify Subscriber Access Flows Added for ONU DT in VGC    ${VGC_SSH_IP}    ${VGC_SSH_PORT}    ${of_id}
        ...    ${onu_port}    ${nni_port}    ${src['s_tag']}
        Wait Until Keyword Succeeds    ${timeout}    5s
        ...    Validate Device    ENABLED    ACTIVE
        ...    REACHABLE    ${src['onu']}    onu=True    onu_reason=omci-flows-pushed
        # Workaround for the issue seen in VOL-4489. Keep this workaround until VOL-4489 is fixed.
        Run Keyword If    ${has_dataplane}    Reboot XGSPON ONU    ${src['olt']}    ${src['onu']}    omci-flows-pushed
        # Workaround for VOL-4489 ends here.
        Run Keyword If    ${has_dataplane}    Run Keyword And Continue On Failure
        ...    Wait Until Keyword Succeeds    ${timeout}    2s
        ...    Check Ping    True    ${dst['dp_iface_ip_qinq']}    ${src['dp_iface_name']}
        ...    ${src['ip']}    ${src['user']}    ${src['pass']}    ${src['container_type']}    ${src['container_name']}
    END