# Copyright 2022-2023 Open Networking Foundation (ONF) and the ONF Contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# FIXME Can we use the same test against BBSim and Hardware?

*** Settings ***
Documentation     Test various end-to-end scenarios
Suite Setup       Setup Suite
Test Setup        Setup
Test Teardown     Teardown
Suite Teardown    Teardown Suite
Library           Collections
Library           String
Library           OperatingSystem
Library           XML
Library           RequestsLibrary
Library           ../../libraries/DependencyLibrary.py
Resource          ../../libraries/vgc.robot
Resource          ../../libraries/voltctl.robot
Resource          ../../libraries/voltha.robot
Resource          ../../libraries/utils_vgc.robot
Resource          ../../libraries/k8s.robot
Resource          ../../variables/variables.robot
Resource          ../../libraries/power_switch.robot

*** Variables ***
${POD_NAME}             flex-ocp-cord
${VOLTHA_POD_NUM}       8
${NAMESPACE}            voltha
${INFRA_NAMESPACE}      default
${STACK_NAME}           voltha
# For the variable below we use the deployment name, since the radius pod name is
# parsed with grep; the full radius pod name could also be used.
${RESTART_POD_NAME}     radius
${timeout}              120s
${of_id}                0
${logical_id}           0
${has_dataplane}        True
${teardown_device}      False
${scripts}              ../../scripts

# Per-test logging on failure is turned off by default; set this variable to enable it.
${container_log_dir}    ${None}

# Logging flag to enable Collect Logs; it can also be passed via the command line,
# for example: -v logging:False
${logging}              True

# Flag specific to soak jobs
${SOAK_TEST}            False

*** Test Cases ***
Verify restart openonu-adapter container after subscriber provisioning for DT
    [Documentation]    Restart the openonu-adapter container after VOLTHA is operational.
    ...    Prerequisite: ONUs are authenticated and pingable.
    [Tags]    Restart-OpenOnu-Dt    soak    raj
    [Setup]    Start Logging    Restart-OpenOnu-Dt
    [Teardown]    Run Keywords    Run Keyword If    ${logging}    Collect Logs
    ...    AND    Stop Logging    Restart-OpenOnu-Dt

    # Add OLT device
    Run Keyword If    '${SOAK_TEST}'=='False'    Setup
    # Perform the sanity test to make sure all subscribers have DHCP leases and are pingable
    Run Keyword If    ${has_dataplane}    Clean Up Linux
    Perform Sanity Test DT
    ${podStatusOutput}=    Run    kubectl get pods -n ${NAMESPACE}
    Log    ${podStatusOutput}
    ${countBeforeRestart}=    Run    kubectl get pods -n ${NAMESPACE} | grep Running | wc -l
    ${podName}    Set Variable    adapter-open-onu
    Wait Until Keyword Succeeds    ${timeout}    15s    Delete K8s Pods By Label    ${NAMESPACE}    app    ${podName}
    Wait Until Keyword Succeeds    ${timeout}    2s    Validate Pods Status By Label    ${NAMESPACE}
    ...    app    ${podName}    Running
    # Wait for 1 min after the openonu adapter is restarted
    Run Keyword If    ${has_dataplane}    Clean Up Linux
    Perform Sanity Test DT
    ${podStatusOutput}=    Run    kubectl get pods -n ${NAMESPACE}
    Log    ${podStatusOutput}
    ${countAfterRestart}=    Run    kubectl get pods -n ${NAMESPACE} | grep Running | wc -l
    Should Be Equal As Strings    ${countAfterRestart}    ${countBeforeRestart}
    Log to console    Pod ${podName} restarted and sanity checks passed successfully
    # Once the onu adapter is restarted, it takes some time for the OLTs/ONUs to reconcile; if the OLT is deleted
    # before the ONUs are reconciled successfully, there would be stale entries. This scenario is not handled in
    # VOLTHA as of now, and there is no other way to check whether the reconcile has happened for all the ONUs.
    # Due to this limitation, a sleep of 60s is introduced to give the onu adapter enough time to reconcile the ONUs.
    Sleep    60s
    Run Keyword If    '${SOAK_TEST}'=='False'    Delete All Devices and Verify

Verify restart openolt-adapter container after subscriber provisioning for DT
    [Documentation]    Restart the openolt-adapter container after VOLTHA is operational.
    ...    Prerequisite: ONUs are authenticated and pingable.
    [Tags]    Restart-OpenOlt-Dt    soak    raj
    [Setup]    Start Logging    Restart-OpenOlt-Dt
    [Teardown]    Run Keywords    Run Keyword If    ${logging}    Collect Logs
    ...    AND    Stop Logging    Restart-OpenOlt-Dt
    # Add OLT device
    Run Keyword If    '${SOAK_TEST}'=='False'    Setup
    # Perform the sanity test to make sure all subscribers have DHCP leases and are pingable
    Run Keyword If    ${has_dataplane}    Clean Up Linux
    Perform Sanity Test DT
    ${podStatusOutput}=    Run    kubectl get pods -n ${NAMESPACE}
    Log    ${podStatusOutput}
    ${countBeforeRestart}=    Run    kubectl get pods -n ${NAMESPACE} | grep Running | wc -l
    ${podName}    Set Variable    ${OLT_ADAPTER_APP_LABEL}
    Wait Until Keyword Succeeds    ${timeout}    15s    Delete K8s Pods By Label    ${NAMESPACE}    app    ${podName}
    Wait Until Keyword Succeeds    ${timeout}    2s    Validate Pods Status By Label    ${NAMESPACE}
    ...    app    ${podName}    Running
    # Wait for 1 min after the openolt adapter is restarted
    Run Keyword If    ${has_dataplane}    Clean Up Linux
    Perform Sanity Test DT
    ${podStatusOutput}=    Run    kubectl get pods -n ${NAMESPACE}
    Log    ${podStatusOutput}
    ${countAfterRestart}=    Run    kubectl get pods -n ${NAMESPACE} | grep Running | wc -l
    Should Be Equal As Strings    ${countAfterRestart}    ${countBeforeRestart}
    # Once the olt adapter is restarted, it takes some time for the OLTs/ONUs to reconcile; if we try to delete the
    # OLT before the OLTs are reconciled successfully, there would be a reconcile error. This scenario is not handled
    # in VOLTHA as of now, and there is no other way to check whether the reconcile has happened for all the OLTs.
    # Due to this limitation, a sleep of 60s is introduced to give the OLT adapter enough time to reconcile the OLTs.
    Sleep    60s
    Log to console    Pod ${podName} restarted and sanity checks passed successfully

Verify openolt adapter restart before subscriber provisioning for DT
    [Documentation]    Restart the openolt-adapter container before adding the subscriber.
    [Tags]    functionalDt    olt-adapter-restart-Dt    raj
    [Setup]    Start Logging    OltAdapterRestart-Dt
    [Teardown]    Run Keywords    Run Keyword If    ${logging}    Collect Logs
    ...    AND    Stop Logging    OltAdapterRestart-Dt
    # Add OLT device
    Clear All Devices Then Create New Device
    Run Keyword If    ${has_dataplane}    Clean Up Linux
    Set Global Variable    ${of_id}

    FOR    ${I}    IN RANGE    0    ${num_all_onus}
        ${src}=    Set Variable    ${hosts.src[${I}]}
        ${dst}=    Set Variable    ${hosts.dst[${I}]}
        ${of_id}=    Get ofID From OLT List    ${src['olt']}
        ${onu_port}=    Wait Until Keyword Succeeds    ${timeout}    2s    Get ONU Port in VGC    ${src['onu']}
        ...    ${of_id}    ${src['uni_id']}
        ${onu_device_id}=    Get Device ID From SN    ${src['onu']}
        Wait Until Keyword Succeeds    ${timeout}    5s
        ...    Validate Device    ENABLED    ACTIVE    REACHABLE
        ...    ${onu_device_id}    onu=True    onu_reason=omci-flows-pushed    by_dev_id=True
    END
    # Scale down the open OLT adapter deployment to 0 PODs and, once confirmed, scale it back to 1
    Scale K8s Deployment by Pod Label    ${NAMESPACE}    app    ${OLT_ADAPTER_APP_LABEL}    0
    Wait Until Keyword Succeeds    ${timeout}    2s    Pods Do Not Exist By Label    ${NAMESPACE}    app
    ...    ${OLT_ADAPTER_APP_LABEL}
    # Scale up the open OLT adapter deployment and make sure both it and the ofagent deployment are back
    Scale K8s Deployment by Pod Label    ${NAMESPACE}    app    ${OLT_ADAPTER_APP_LABEL}    1
    Wait Until Keyword Succeeds    ${timeout}    2s
    ...    Check Expected Available Deployment Replicas By Pod Label    ${NAMESPACE}    app    ${OLT_ADAPTER_APP_LABEL}    1
    Wait Until Keyword Succeeds    ${timeout}    3s    Pods Are Ready By Label    ${NAMESPACE}    app    ${OLT_ADAPTER_APP_LABEL}

    # Ensure the device is available in ONOS; this represents system connectivity being restored
    FOR    ${I}    IN RANGE    0    ${olt_count}
        ${olt_serial_number}=    Get From Dictionary    ${olt_ids}[${I}]    sn
        ${olt_device_id}=    Get OLTDeviceID From OLT List    ${olt_serial_number}
        ${of_id}=    Wait Until Keyword Succeeds    ${timeout}    15s    Validate OLT Device in VGC
        ...    ${olt_serial_number}
        Wait Until Keyword Succeeds    120s    2s    Device Is Available In VGC
        ...    ${of_id}
    END

    FOR    ${I}    IN RANGE    0    ${num_all_onus}
        ${src}=    Set Variable    ${hosts.src[${I}]}
        ${dst}=    Set Variable    ${hosts.dst[${I}]}
        ${of_id}=    Get ofID From OLT List    ${src['olt']}
        ${nni_port}=    Wait Until Keyword Succeeds    ${timeout}    2s    Get NNI Port in VGC    ${of_id}
        ${onu_port}=    Wait Until Keyword Succeeds    ${timeout}    2s    Get ONU Port in VGC    ${src['onu']}
        ...    ${of_id}    ${src['uni_id']}
        # Add subscriber access and verify that DHCP completes to ensure the system is still functioning properly
        Add Subscriber Details    ${of_id}    ${onu_port}
        # Verify meters in VGC
        Run Keyword And Continue On Failure    Wait Until Keyword Succeeds    ${timeout}    5s
        ...    Verify Meters in VGC Ietf    ${VGC_SSH_IP}    ${VGC_SSH_PORT}    ${of_id}    ${onu_port}
        # Verify subscriber access flows are added for the ONU port
        Run Keyword And Continue On Failure    Wait Until Keyword Succeeds    ${timeout}    5s
        ...    Verify Subscriber Access Flows Added for ONU DT in VGC    ${VGC_SSH_IP}    ${VGC_SSH_PORT}    ${of_id}
        ...    ${onu_port}    ${nni_port}    ${src['s_tag']}
        Wait Until Keyword Succeeds    ${timeout}    5s    Validate Device
        ...    ENABLED    ACTIVE    REACHABLE
        ...    ${src['onu']}    onu=True    onu_reason=omci-flows-pushed
        Run Keyword If    ${has_dataplane}    Run Keyword And Continue On Failure    Validate DHCP and Ping    True
        ...    True    ${src['dp_iface_name']}    ${src['s_tag']}    ${src['c_tag']}    ${dst['dp_iface_ip_qinq']}
        ...    ${src['ip']}    ${src['user']}    ${src['pass']}    ${src['container_type']}    ${src['container_name']}
        ...    ${dst['dp_iface_name']}    ${dst['ip']}    ${dst['user']}    ${dst['pass']}    ${dst['container_type']}
        ...    ${dst['container_name']}
        # Once the olt adapter is restarted, it takes some time for the OLTs/ONUs to reconcile; if we try to delete
        # the OLT before the OLTs are reconciled successfully, there would be a reconcile error. This scenario is not
        # handled in VOLTHA as of now, and there is no other way to check whether the reconcile has happened for all
        # the OLTs. Due to this limitation, a sleep of 60s is introduced to give the OLT adapter enough time to
        # reconcile the OLTs.
        Sleep    60s
    END

Sanity E2E Test for OLT/ONU on POD With Core Fail and Restart for DT
    [Documentation]    Deploys a device instance and waits for it to authenticate. After
    ...    authentication succeeds, the rw-core deployment is scaled to 0 instances to
    ...    simulate a POD crash. The test then scales the rw-core back to a single instance
    ...    and configures ONOS for access. The test succeeds if the device is able to
    ...    complete the DHCP sequence.
    [Tags]    functionalDt    rwcore-restart-Dt    raj
    [Setup]    Run Keywords    Start Logging    RwCoreFailAndRestart-Dt
    ...    AND    Clear All Devices Then Create New Device
    [Teardown]    Run Keywords    Run Keyword If    ${logging}    Collect Logs
    ...    AND    Stop Logging    RwCoreFailAndRestart-Dt
    #...    AND    Delete Device and Verify
    Run Keyword If    ${has_dataplane}    Clean Up Linux
    FOR    ${I}    IN RANGE    0    ${olt_count}
        ${olt_serial_number}=    Get From Dictionary    ${olt_ids}[${I}]    sn
        ${olt_device_id}=    Get OLTDeviceID From OLT List    ${olt_serial_number}
        ${of_id}=    Wait Until Keyword Succeeds    ${timeout}    15s    Validate OLT Device in VGC
        ...    ${olt_serial_number}
        ${nni_port}=    Wait Until Keyword Succeeds    ${timeout}    2s    Get NNI Port in VGC    ${of_id}
    END
    FOR    ${I}    IN RANGE    0    ${num_all_onus}
        ${src}=    Set Variable    ${hosts.src[${I}]}
        ${dst}=    Set Variable    ${hosts.dst[${I}]}
        ${of_id}=    Get ofID From OLT List    ${src['olt']}
        ${onu_port}=    Wait Until Keyword Succeeds    ${timeout}    2s
        ...    Get ONU Port in VGC    ${src['onu']}    ${of_id}    ${src['uni_id']}
        ${onu_device_id}=    Get Device ID From SN    ${src['onu']}
        # Bring up the device and verify it authenticates
        Wait Until Keyword Succeeds    360s    5s    Validate Device    ENABLED    ACTIVE    REACHABLE
        ...    ${onu_device_id}    onu=True    onu_reason=omci-flows-pushed    by_dev_id=True
    END

    # Scale down the rw-core deployment to 0 PODs and, once confirmed, scale it back to 1
    Scale K8s Deployment    voltha    voltha-voltha-rw-core    0
    Wait Until Keyword Succeeds    ${timeout}    2s    Pod Does Not Exist    voltha    voltha-voltha-rw-core
    # Ensure the ofagent POD goes "not-ready" as expected
    Wait Until Keyword Succeeds    ${timeout}    2s
    ...    Check Expected Available Deployment Replicas    voltha    voltha-voltha-go-controller    1
    # Scale up the core deployment and make sure both it and the ofagent deployment are back
    Scale K8s Deployment    voltha    voltha-voltha-rw-core    1
    Wait Until Keyword Succeeds    ${timeout}    2s
    ...    Check Expected Available Deployment Replicas    voltha    voltha-voltha-rw-core    1
    Wait Until Keyword Succeeds    ${timeout}    2s
    ...    Check Expected Available Deployment Replicas    voltha    voltha-voltha-go-controller    1
    # For some reason scaling the POD behind a service down and up causes the port forward to stop working,
    # so restart the port forwarding for the API service
    Restart VOLTHA Port Forward    voltha-api
    # Ensure that the ofagent pod is up and ready and the device is available in ONOS; this
    # represents system connectivity being restored
    FOR    ${I}    IN RANGE    0    ${olt_count}
        ${olt_serial_number}=    Get From Dictionary    ${olt_ids}[${I}]    sn
        ${olt_device_id}=    Get OLTDeviceID From OLT List    ${olt_serial_number}
        ${of_id}=    Wait Until Keyword Succeeds    ${timeout}    15s    Validate OLT Device in VGC
        ...    ${olt_serial_number}
        Wait Until Keyword Succeeds    120s    2s    Device Is Available In VGC
        ...    ${of_id}
    END

    FOR    ${I}    IN RANGE    0    ${num_all_onus}
        ${src}=    Set Variable    ${hosts.src[${I}]}
        ${dst}=    Set Variable    ${hosts.dst[${I}]}
        ${of_id}=    Get ofID From OLT List    ${src['olt']}
        ${nni_port}=    Wait Until Keyword Succeeds    ${timeout}    2s    Get NNI Port in VGC    ${of_id}
        ${onu_port}=    Wait Until Keyword Succeeds    ${timeout}    2s
        ...    Get ONU Port in VGC    ${src['onu']}    ${of_id}    ${src['uni_id']}
        # Add subscriber access and verify that DHCP completes to ensure the system is still functioning properly
        Post Request VGC    services/${of_id}/${onu_port}
        # Verify subscriber access flows are added for the ONU port
        Wait Until Keyword Succeeds    ${timeout}    5s
        ...    Verify Subscriber Access Flows Added for ONU DT in VGC    ${VGC_SSH_IP}    ${VGC_SSH_PORT}    ${of_id}
        ...    ${onu_port}    ${nni_port}    ${src['s_tag']}
        Run Keyword If    ${has_dataplane}    Run Keyword And Continue On Failure    Validate DHCP and Ping    True
        ...    True    ${src['dp_iface_name']}    ${src['s_tag']}    ${src['c_tag']}    ${dst['dp_iface_ip_qinq']}
        ...    ${src['ip']}    ${src['user']}    ${src['pass']}    ${src['container_type']}    ${src['container_name']}
        ...    ${dst['dp_iface_name']}    ${dst['ip']}    ${dst['user']}    ${dst['pass']}    ${dst['container_type']}
        ...    ${dst['container_name']}
    END

Verify OLT Soft Reboot for DT
    [Documentation]    Test soft reboot of the OLT using the voltctl command
    [Tags]    VOL-2818    OLTSoftRebootDt    functionalDt    raj
    [Setup]    Start Logging    OLTSoftRebootDt
    [Teardown]    Run Keywords    Run Keyword If    ${logging}    Collect Logs
    ...    AND    Stop Logging    OLTSoftRebootDt
    FOR    ${I}    IN RANGE    0    ${olt_count}
        ${olt_serial_number}=    Get From Dictionary    ${olt_ids}[${I}]    sn
        ${olt_device_id}=    Get OLTDeviceID From OLT List    ${olt_serial_number}
        Run Keyword And Continue On Failure    Wait Until Keyword Succeeds    360s    5s
        ...    Validate OLT Device    ENABLED    ACTIVE
        ...    REACHABLE    ${olt_serial_number}
        # Reboot the OLT using the "voltctl device reboot" command
        Reboot Device    ${olt_device_id}
        # Wait for the OLT to actually go down
        Wait Until Keyword Succeeds    360s    5s    Validate OLT Device    ENABLED    UNKNOWN    UNREACHABLE
        ...    ${olt_serial_number}
    END
    # Verify that ping fails
    FOR    ${I}    IN RANGE    0    ${num_all_onus}
        ${src}=    Set Variable    ${hosts.src[${I}]}
        ${dst}=    Set Variable    ${hosts.dst[${I}]}
        Run Keyword If    ${has_dataplane}    Run Keyword And Continue On Failure
        ...    Wait Until Keyword Succeeds    ${timeout}    2s
        ...    Check Ping    False    ${dst['dp_iface_ip_qinq']}    ${src['dp_iface_name']}
        ...    ${src['ip']}    ${src['user']}    ${src['pass']}    ${src['container_type']}    ${src['container_name']}
    END
    # Check OLT states
    FOR    ${I}    IN RANGE    0    ${olt_count}
        ${olt_serial_number}=    Get From Dictionary    ${list_olts}[${I}]    sn
        ${olt_ssh_ip}=    Get From Dictionary    ${list_olts}[${I}]    sship
        ${olt_device_id}=    Get OLTDeviceID From OLT List    ${olt_serial_number}
        # Wait for the OLT to come back up
        Run Keyword If    ${has_dataplane}    Wait Until Keyword Succeeds    120s    10s
        ...    Check Remote System Reachability    True    ${olt_ssh_ip}
        # Check OLT states
        Wait Until Keyword Succeeds    360s    5s
        ...    Validate OLT Device    ENABLED    ACTIVE
        ...    REACHABLE    ${olt_serial_number}
    END
    # Wait extra time for the ONUs to come up
    Sleep    60s
    # Check after the reboot that the ONUs are active and DHCP/pingable
    Run Keyword If    ${has_dataplane}    Clean Up Linux
    Perform Sanity Test DT

Verify restart openonu-adapter container for DT
    [Documentation]    Restart the openonu-adapter container after VOLTHA is operational.
    ...    Run the ping continuously in the background during the container restart,
    ...    and verify that there is no effect on the dataplane.
    ...    Also verify that the voltha control plane functionality is not affected.
    [Tags]    functionalDt    RestartOpenOnuPingDt    raj
    [Setup]    Start Logging    RestartOpenOnuPingDt
    [Teardown]    Run Keywords    Run Keyword If    ${logging}    Collect Logs
    ...    AND    Stop Logging    RestartOpenOnuPingDt
    Clear All Devices Then Create New Device
    # Perform the sanity test to make sure all subscribers have DHCP leases and are pingable
    Run Keyword If    ${has_dataplane}    Clean Up Linux
    Perform Sanity Test DT
    FOR    ${I}    IN RANGE    0    ${num_all_onus}
        ${src}=    Set Variable    ${hosts.src[${I}]}
        ${dst}=    Set Variable    ${hosts.dst[${I}]}
        ${ping_output_file}=    Set Variable    /tmp/${src['onu']}_ping
        Run Keyword If    ${has_dataplane}    Run Keyword And Continue On Failure
        ...    Wait Until Keyword Succeeds    ${timeout}    2s
        ...    Run Ping In Background    ${ping_output_file}    ${dst['dp_iface_ip_qinq']}    ${src['dp_iface_name']}
        ...    ${src['ip']}    ${src['user']}    ${src['pass']}    ${src['container_type']}    ${src['container_name']}
    END
    ${podStatusOutput}=    Run    kubectl get pods -n ${NAMESPACE}
    Log    ${podStatusOutput}
    ${countBeforeRestart}=    Run    kubectl get pods -n ${NAMESPACE} | grep Running | wc -l
    ${podName}    Set Variable    adapter-open-onu
    Wait Until Keyword Succeeds    ${timeout}    15s    Delete K8s Pods By Label    ${NAMESPACE}    app    ${podName}
    Wait Until Keyword Succeeds    ${timeout}    2s    Validate Pods Status By Label    ${NAMESPACE}
    ...    app    ${podName}    Running
    # Wait for 1 min after the openonu adapter is restarted
    ${podStatusOutput}=    Run    kubectl get pods -n ${NAMESPACE}
    Log    ${podStatusOutput}
    ${countAfterRestart}=    Run    kubectl get pods -n ${NAMESPACE} | grep Running | wc -l
    Should Be Equal As Strings    ${countAfterRestart}    ${countBeforeRestart}
    FOR    ${I}    IN RANGE    0    ${num_all_onus}
        ${src}=    Set Variable    ${hosts.src[${I}]}
        ${dst}=    Set Variable    ${hosts.dst[${I}]}
        Run Keyword If    ${has_dataplane}    Run Keyword And Continue On Failure
        ...    Wait Until Keyword Succeeds    ${timeout}    2s
        ...    Stop Ping Running In Background    ${src['ip']}    ${src['user']}    ${src['pass']}
        ...    ${src['container_type']}    ${src['container_name']}
    END
    FOR    ${I}    IN RANGE    0    ${num_all_onus}
        ${src}=    Set Variable    ${hosts.src[${I}]}
        ${dst}=    Set Variable    ${hosts.dst[${I}]}
        ${ping_output_file}=    Set Variable    /tmp/${src['onu']}_ping
        ${ping_output}=    Run Keyword If    ${has_dataplane}    Run Keyword And Continue On Failure
        ...    Wait Until Keyword Succeeds    ${timeout}    2s
        ...    Retrieve Remote File Contents    ${ping_output_file}    ${src['ip']}    ${src['user']}    ${src['pass']}
        ...    ${src['container_type']}    ${src['container_name']}
        Run Keyword If    ${has_dataplane}    Check Ping Result    True    ${ping_output}
    END
    # Verify control plane functionality by deleting and re-adding the subscriber
    # Once the onu adapter is restarted, it takes some time for the OLTs/ONUs to reconcile; if the OLT is deleted
    # before the ONUs are reconciled successfully, there would be stale entries. This scenario is not handled in
    # VOLTHA as of now, and there is no other way to check whether the reconcile has happened for all the ONUs.
    # Due to this limitation, a sleep of 60s is introduced to give the onu adapter enough time to reconcile the ONUs.
    Sleep    60s
    Verify Control Plane After Pod Restart DT

Verify restart openolt-adapter container for DT
    [Documentation]    Restart the openolt-adapter container after VOLTHA is operational.
    ...    Run the ping continuously in the background during the container restart,
    ...    and verify that there is no effect on the dataplane.
    ...    Also verify that the voltha control plane functionality is not affected.
    [Tags]    functionalDt    RestartOpenOltPingDt    raj
    [Setup]    Start Logging    RestartOpenOltPingDt
    [Teardown]    Run Keywords    Run Keyword If    ${logging}    Collect Logs
    ...    AND    Stop Logging    RestartOpenOltPingDt
    Clear All Devices Then Create New Device
    # Perform the sanity test to make sure all subscribers have DHCP leases and are pingable
    Run Keyword If    ${has_dataplane}    Clean Up Linux
    Perform Sanity Test DT
    FOR    ${I}    IN RANGE    0    ${num_all_onus}
        ${src}=    Set Variable    ${hosts.src[${I}]}
        ${dst}=    Set Variable    ${hosts.dst[${I}]}
        ${ping_output_file}=    Set Variable    /tmp/${src['onu']}_ping
        Run Keyword If    ${has_dataplane}    Run Keyword And Continue On Failure
        ...    Wait Until Keyword Succeeds    ${timeout}    2s
        ...    Run Ping In Background    ${ping_output_file}    ${dst['dp_iface_ip_qinq']}    ${src['dp_iface_name']}
        ...    ${src['ip']}    ${src['user']}    ${src['pass']}    ${src['container_type']}    ${src['container_name']}
    END
    ${podStatusOutput}=    Run    kubectl get pods -n ${NAMESPACE}
    Log    ${podStatusOutput}
    ${countBeforeRestart}=    Run    kubectl get pods -n ${NAMESPACE} | grep Running | wc -l
    ${podName}    Set Variable    ${OLT_ADAPTER_APP_LABEL}
    Wait Until Keyword Succeeds    ${timeout}    15s    Delete K8s Pods By Label    ${NAMESPACE}    app    ${podName}
    Wait Until Keyword Succeeds    ${timeout}    2s    Validate Pods Status By Label    ${NAMESPACE}
    ...    app    ${podName}    Running
    # Wait for 1 min after the openolt adapter is restarted
    Sleep    60s
    ${podStatusOutput}=    Run    kubectl get pods -n ${NAMESPACE}
    Log    ${podStatusOutput}
    ${countAfterRestart}=    Run    kubectl get pods -n ${NAMESPACE} | grep Running | wc -l
    Should Be Equal As Strings    ${countAfterRestart}    ${countBeforeRestart}
    FOR    ${I}    IN RANGE    0    ${num_all_onus}
        ${src}=    Set Variable    ${hosts.src[${I}]}
        ${dst}=    Set Variable    ${hosts.dst[${I}]}
        Run Keyword If    ${has_dataplane}    Run Keyword And Continue On Failure
        ...    Wait Until Keyword Succeeds    ${timeout}    2s
        ...    Stop Ping Running In Background    ${src['ip']}    ${src['user']}    ${src['pass']}
        ...    ${src['container_type']}    ${src['container_name']}
    END
    FOR    ${I}    IN RANGE    0    ${num_all_onus}
        ${src}=    Set Variable    ${hosts.src[${I}]}
        ${dst}=    Set Variable    ${hosts.dst[${I}]}
        ${ping_output_file}=    Set Variable    /tmp/${src['onu']}_ping
        ${ping_output}=    Run Keyword If    ${has_dataplane}    Run Keyword And Continue On Failure
        ...    Wait Until Keyword Succeeds    ${timeout}    2s
        ...    Retrieve Remote File Contents    ${ping_output_file}    ${src['ip']}    ${src['user']}    ${src['pass']}
        ...    ${src['container_type']}    ${src['container_name']}
        Run Keyword If    ${has_dataplane}    Check Ping Result    True    ${ping_output}
    END
    # Verify control plane functionality by deleting and re-adding the subscriber
    # Once the olt adapter is restarted, it takes some time for the OLTs/ONUs to reconcile; if we try to delete the
    # OLT before the OLTs are reconciled successfully, there would be a reconcile error. This scenario is not handled
    # in VOLTHA as of now, and there is no other way to check whether the reconcile has happened for all the OLTs.
    # Due to this limitation, a sleep of 60s is introduced to give the OLT adapter enough time to reconcile the OLTs.
    Sleep    60s
    Verify Control Plane After Pod Restart DT

Verify restart rw-core container for DT
    [Documentation]    Restart the rw-core container after VOLTHA is operational.
    ...    Run the ping continuously in the background during the container restart,
    ...    and verify that there is no effect on the dataplane.
    ...    Also verify that the voltha control plane functionality is not affected.
    [Tags]    functionalDt    RestartRwCorePingDt    raj
    [Setup]    Start Logging    RestartRwCorePingDt
    [Teardown]    Run Keywords    Run Keyword If    ${logging}    Collect Logs
    ...    AND    Stop Logging    RestartRwCorePingDt
    Clear All Devices Then Create New Device
    # Perform the sanity test to make sure all subscribers have DHCP leases and are pingable
    Run Keyword If    ${has_dataplane}    Clean Up Linux
    Perform Sanity Test DT
    FOR    ${I}    IN RANGE    0    ${num_all_onus}
        ${src}=    Set Variable    ${hosts.src[${I}]}
        ${dst}=    Set Variable    ${hosts.dst[${I}]}
        ${ping_output_file}=    Set Variable    /tmp/${src['onu']}_ping
        Run Keyword If    ${has_dataplane}    Run Keyword And Continue On Failure
        ...    Wait Until Keyword Succeeds    ${timeout}    2s
        ...    Run Ping In Background    ${ping_output_file}    ${dst['dp_iface_ip_qinq']}    ${src['dp_iface_name']}
        ...    ${src['ip']}    ${src['user']}    ${src['pass']}    ${src['container_type']}    ${src['container_name']}
    END
    ${podStatusOutput}=    Run    kubectl get pods -n ${NAMESPACE}
    Log    ${podStatusOutput}
    ${countBeforeRestart}=    Run    kubectl get pods -n ${NAMESPACE} | grep Running | wc -l
    ${podName}    Set Variable    rw-core
    Wait Until Keyword Succeeds    ${timeout}    15s    Delete K8s Pods By Label    ${NAMESPACE}    app    ${podName}
    Wait Until Keyword Succeeds    ${timeout}    2s    Validate Pods Status By Label    ${NAMESPACE}
    ...    app    ${podName}    Running
    # Wait for 1 min after the rw-core is restarted
    Sleep    60s
    # For some reason scaling the POD behind a service down and up causes the port forward to stop working,
    # so restart the port forwarding for the API service
    Restart VOLTHA Port Forward    voltha-api
    ${podStatusOutput}=    Run    kubectl get pods -n ${NAMESPACE}
    Log    ${podStatusOutput}
    ${countAfterRestart}=    Run    kubectl get pods -n ${NAMESPACE} | grep Running | wc -l
    Should Be Equal As Strings    ${countAfterRestart}    ${countBeforeRestart}
    FOR    ${I}    IN RANGE    0    ${num_all_onus}
        ${src}=    Set Variable    ${hosts.src[${I}]}
        ${dst}=    Set Variable    ${hosts.dst[${I}]}
        Run Keyword If    ${has_dataplane}    Run Keyword And Continue On Failure
        ...    Wait Until Keyword Succeeds    ${timeout}    2s
        ...    Stop Ping Running In Background    ${src['ip']}    ${src['user']}    ${src['pass']}
        ...    ${src['container_type']}    ${src['container_name']}
    END
    FOR    ${I}    IN RANGE    0    ${num_all_onus}
        ${src}=    Set Variable    ${hosts.src[${I}]}
        ${dst}=    Set Variable    ${hosts.dst[${I}]}
        ${ping_output_file}=    Set Variable    /tmp/${src['onu']}_ping
        ${ping_output}=    Run Keyword If    ${has_dataplane}    Run Keyword And Continue On Failure
        ...    Wait Until Keyword Succeeds    ${timeout}    2s
        ...    Retrieve Remote File Contents    ${ping_output_file}    ${src['ip']}    ${src['user']}    ${src['pass']}
        ...    ${src['container_type']}    ${src['container_name']}
        Run Keyword If    ${has_dataplane}    Check Ping Result    True    ${ping_output}
    END
    # Verify control plane functionality by deleting and re-adding the subscriber
    # Once the rw core is restarted, it takes some time for the OLTs/ONUs to reconcile; if we try to delete the
    # OLT before the OLTs are reconciled successfully, there would be a reconcile error. This scenario is not handled
    # in VOLTHA as of now, and there is no other way to check whether the reconcile has happened for all the OLTs.
    # Due to this limitation, a sleep of 60s is introduced to give the rw core enough time to reconcile the OLTs.
    Sleep    60s
    Verify Control Plane After Pod Restart DT

*** Keywords ***
Setup Suite
    [Documentation]    Set up the test suite
    Common Test Suite Setup
    # power_switch.robot needs this to support different vendors' power switches
    ${switch_type}=    Get Variable Value    ${web_power_switch.type}
    Run Keyword If    "${switch_type}"!=""    Set Global Variable    ${powerswitch_type}    ${switch_type}
    # Run pre-test setup for the soak job
    # Note: As a soak requirement, it expects that the devices under test are already created and enabled
    Run Keyword If    '${SOAK_TEST}'=='True'    Setup Soak


Clear All Devices Then Create New Device
    [Documentation]    Remove any devices from VOLTHA and ONOS
    # Remove all devices from VOLTHA and ONOS
    Delete All Devices and Verify
    # Execute the normal test Setup keyword
    Setup

Verify Control Plane After Pod Restart DT
    [Documentation]    Verifies the control plane functionality after a voltha pod restart
    ...    by deleting and re-adding the subscriber
    FOR    ${I}    IN RANGE    0    ${num_all_onus}
        ${src}=    Set Variable    ${hosts.src[${I}]}
        ${dst}=    Set Variable    ${hosts.dst[${I}]}
        ${of_id}=    Get ofID From OLT List    ${src['olt']}
        ${nni_port}=    Wait Until Keyword Succeeds    ${timeout}    2s    Get NNI Port in VGC    ${of_id}
        ${onu_port}=    Wait Until Keyword Succeeds    ${timeout}    2s
        ...    Get ONU Port in VGC    ${src['onu']}    ${of_id}    ${src['uni_id']}
        ${onu_device_id}=    Wait Until Keyword Succeeds    ${timeout}    2s    Get Device ID From SN    ${src['onu']}
        # Remove subscriber access
        Remove Subscriber Access    ${of_id}    ${onu_port}
        Run Keyword If    ${has_dataplane}    Run Keyword And Continue On Failure
        ...    Wait Until Keyword Succeeds    ${timeout}    2s
        ...    Check Ping    False    ${dst['dp_iface_ip_qinq']}    ${src['dp_iface_name']}
        ...    ${src['ip']}    ${src['user']}    ${src['pass']}    ${src['container_type']}    ${src['container_name']}
        # Disable and re-enable the ONU (to replicate the current DT workflow)
        # TODO: Delete and auto-discovery add of the ONU (not yet supported)
        Disable Device    ${onu_device_id}
        Wait Until Keyword Succeeds    ${timeout}    5s
        ...    Validate Device    DISABLED    UNKNOWN
        ...    REACHABLE    ${src['onu']}
        Enable Device    ${onu_device_id}
        Wait Until Keyword Succeeds    ${timeout}    5s
        ...    Validate Device    ENABLED    ACTIVE
        ...    REACHABLE    ${src['onu']}
        # Add subscriber access
        Add Subscriber Details    ${of_id}    ${onu_port}
        # Verify subscriber access flows are added for the ONU port
        Run Keyword And Continue On Failure    Wait Until Keyword Succeeds    ${timeout}    5s
        ...    Verify Subscriber Access Flows Added for ONU DT in VGC    ${VGC_SSH_IP}    ${VGC_SSH_PORT}    ${of_id}
        ...    ${onu_port}    ${nni_port}    ${src['s_tag']}
        Wait Until Keyword Succeeds    ${timeout}    5s
        ...    Validate Device    ENABLED    ACTIVE
        ...    REACHABLE    ${src['onu']}    onu=True    onu_reason=omci-flows-pushed
        # Workaround for the issue seen in VOL-4489. Keep this workaround until VOL-4489 is fixed.
        Run Keyword If    ${has_dataplane}    Reboot XGSPON ONU    ${src['olt']}    ${src['onu']}    omci-flows-pushed
        # Workaround ends here for the issue seen in VOL-4489.
        Run Keyword If    ${has_dataplane}    Run Keyword And Continue On Failure
        ...    Wait Until Keyword Succeeds    ${timeout}    2s
        ...    Check Ping    True    ${dst['dp_iface_ip_qinq']}    ${src['dp_iface_name']}
        ...    ${src['ip']}    ${src['user']}    ${src['pass']}    ${src['container_type']}    ${src['container_name']}
    END