Update ofagentRestart scenario for ATT, DT and TT to verify pod status during test teardown
Change-Id: Ifa2c9eb46f1cc6fe44b6e0d16e156ff0529c7d91
diff --git a/tests/functional/Voltha_FailureScenarios.robot b/tests/functional/Voltha_FailureScenarios.robot
index e7a9143..b7140fb 100755
--- a/tests/functional/Voltha_FailureScenarios.robot
+++ b/tests/functional/Voltha_FailureScenarios.robot
@@ -357,6 +357,10 @@
[Teardown] Run Keywords Collect Logs
... AND Stop Logging ofagentRestart
... AND Scale K8s Deployment ${NAMESPACE} ${STACK_NAME}-voltha-ofagent 1
+ ... AND Wait Until Keyword Succeeds ${timeout} 2s
+ ... Validate Pods Status By Label ${NAMESPACE} app ofagent Running
+ ... AND Wait Until Keyword Succeeds ${timeout} 3s
+ ... Pods Are Ready By Label ${NAMESPACE} app ofagent
# set timeout value
${waitforRestart} Set Variable 120s
${podStatusOutput}= Run kubectl get pods -n ${NAMESPACE}
@@ -414,6 +418,7 @@
Scale K8s Deployment ${NAMESPACE} ${STACK_NAME}-voltha-ofagent 1
Wait Until Keyword Succeeds ${waitforRestart} 2s Validate Pod Status ofagent ${NAMESPACE}
... Running
+ Wait Until Keyword Succeeds ${timeout} 3s Pods Are Ready By Label ${NAMESPACE} app ${podName}
# Performing Sanity Test to make sure subscribers are all AUTH+DHCP and pingable
Run Keyword If ${has_dataplane} Clean Up Linux
Wait Until Keyword Succeeds ${timeout} 2s Perform Sanity Test ${suppressaddsubscriber}
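The teardown now polls two conditions on the ofagent pods: `Validate Pods Status By Label` (pod phase is `Running`) and `Pods Are Ready By Label` (all containers report ready). As a hypothetical sketch, not part of the patch, the combined condition those keywords are assumed to poll for can be expressed over the pod objects that `kubectl get pods -l app=ofagent -n <namespace> -o json` would return (the keyword internals and the exact JSON fields consulted are assumptions here):

```python
def pods_running_and_ready(pods: list) -> bool:
    """Return True only if every pod is in phase Running and all of its
    containers report ready — the condition the teardown waits for before
    declaring the scaled-up ofagent deployment healthy (assumed semantics)."""
    if not pods:
        # No matching pods yet: the deployment has not finished scaling up.
        return False
    for pod in pods:
        status = pod.get("status", {})
        if status.get("phase") != "Running":
            return False
        container_statuses = status.get("containerStatuses", [])
        if not container_statuses:
            return False
        if not all(c.get("ready") for c in container_statuses):
            return False
    return True
```

Wrapping this check in `Wait Until Keyword Succeeds ${timeout} 3s` retries it every 3 seconds until it passes or the timeout expires, which is why both phase and readiness are verified: a pod can be `Running` while its containers are still failing their readiness probes.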