  reconcile controllers deployment
  /tmp/tmp.3XEIfW31vl/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/operators/machine-api-operator.go:25
I0906 15:09:26.786512    5009 framework.go:406] >>> kubeConfig: /root/.kube/config
STEP: checking deployment "machine-api-controllers" is available
I0906 15:09:26.799549    5009 deployment.go:58] Deployment "machine-api-controllers" is available. Status: (replicas: 1, updated: 1, ready: 1, available: 1, unavailable: 0)
STEP: deleting deployment "machine-api-controllers"
STEP: checking deployment "machine-api-controllers" is available again
E0906 15:09:26.807416    5009 deployment.go:25] Error querying API for Deployment object "machine-api-controllers": deployments.apps "machine-api-controllers" not found, retrying...
E0906 15:09:27.810060    5009 deployment.go:55] Deployment "machine-api-controllers" is not available. Status: (replicas: 1, updated: 1, ready: 0, available: 0, unavailable: 1)
E0906 15:09:28.812152    5009 deployment.go:55] Deployment "machine-api-controllers" is not available. Status: (replicas: 1, updated: 1, ready: 0, available: 0, unavailable: 1)
I0906 15:09:29.812670    5009 deployment.go:58] Deployment "machine-api-controllers" is available. Status: (replicas: 1, updated: 1, ready: 1, available: 1, unavailable: 0)
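
The NotFound-then-retry sequence above is the usual availability poll: fetch the Deployment, treat IsNotFound as retriable while the operator recreates it, and compare the status counters against the desired replica count. A minimal sketch of that loop with a current client-go; the helper name, interval, and timeout are illustrative assumptions, not the suite's actual code.

package e2e

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForDeploymentAvailable polls until the Deployment reports every replica
// updated and available, tolerating NotFound while the operator recreates it.
func waitForDeploymentAvailable(c kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(time.Second, 5*time.Minute, func() (bool, error) {
		d, err := c.AppsV1().Deployments(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return false, nil // deleted; keep retrying until it is recreated
		}
		if err != nil {
			return false, err
		}
		desired := int32(1)
		if d.Spec.Replicas != nil {
			desired = *d.Spec.Replicas
		}
		return d.Status.UpdatedReplicas == desired &&
			d.Status.AvailableReplicas == desired &&
			d.Status.UnavailableReplicas == 0, nil
	})
}
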
•SS
Ran 7 of 16 Specs in 3.160 seconds
SUCCESS! -- 7 Passed | 0 Failed | 0 Pending | 9 Skipped
--- PASS: TestE2E (3.16s)
PASS
ok  	github.com/openshift/cluster-api-actuator-pkg/pkg/e2e	3.213s
hack/ci-integration.sh  -ginkgo.v -ginkgo.noColor=true -ginkgo.skip "Feature:Operators|TechPreview" -ginkgo.failFast -ginkgo.seed=1
=== RUN   TestE2E
Running Suite: Machine Suite
============================
Random Seed: 1
Will run 7 of 16 specs

SSSSSSSS
------------------------------
[Feature:Machines] Autoscaler should 
  scale up and down
  /tmp/tmp.3XEIfW31vl/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/autoscaler/autoscaler.go:234
I0906 15:09:33.015012    5527 framework.go:406] >>> kubeConfig: /root/.kube/config
I0906 15:09:33.021567    5527 framework.go:406] >>> kubeConfig: /root/.kube/config
I0906 15:09:33.052537    5527 framework.go:406] >>> kubeConfig: /root/.kube/config
STEP: Getting existing machinesets
STEP: Getting existing machines
STEP: Getting existing nodes
I0906 15:09:33.071035    5527 autoscaler.go:286] Have 4 existing machinesets
I0906 15:09:33.071058    5527 autoscaler.go:287] Have 5 existing machines
I0906 15:09:33.071069    5527 autoscaler.go:288] Have 5 existing nodes
STEP: Creating 3 transient machinesets
STEP: [15m0s remaining] Waiting for nodes to be Ready in 3 transient machinesets
E0906 15:09:33.099071    5527 utils.go:157] Machine "e2e-5508c-w-0-mlv9z" has no NodeRef
STEP: [14m57s remaining] Waiting for nodes to be Ready in 3 transient machinesets
I0906 15:09:36.121878    5527 utils.go:165] Machine "e2e-5508c-w-0-mlv9z" is backing node "00db6f98-a3ac-4b9d-9b06-baeefad63df4"
I0906 15:09:36.121907    5527 utils.go:149] MachineSet "e2e-5508c-w-0" has 1 node
E0906 15:09:36.131240    5527 utils.go:157] Machine "e2e-5508c-w-1-xxftr" has no NodeRef
STEP: [14m54s remaining] Waiting for nodes to be Ready in 3 transient machinesets
I0906 15:09:39.137754    5527 utils.go:165] Machine "e2e-5508c-w-0-mlv9z" is backing node "00db6f98-a3ac-4b9d-9b06-baeefad63df4"
I0906 15:09:39.137783    5527 utils.go:149] MachineSet "e2e-5508c-w-0" has 1 node
I0906 15:09:39.143301    5527 utils.go:165] Machine "e2e-5508c-w-1-xxftr" is backing node "7ab053ab-5975-4dd7-a60f-6db3990be26f"
I0906 15:09:39.143323    5527 utils.go:149] MachineSet "e2e-5508c-w-1" has 1 node
I0906 15:09:39.148525    5527 utils.go:165] Machine "e2e-5508c-w-2-wj4jh" is backing node "66cf1356-1533-4fee-8ea0-24d40b6aef5f"
I0906 15:09:39.148547    5527 utils.go:149] MachineSet "e2e-5508c-w-2" has 1 node
I0906 15:09:39.148555    5527 utils.go:177] Node "00db6f98-a3ac-4b9d-9b06-baeefad63df4" is ready. Conditions are: [{OutOfDisk False 2019-09-06 15:09:37 +0000 UTC 2019-09-06 15:09:35 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2019-09-06 15:09:37 +0000 UTC 2019-09-06 15:09:35 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-09-06 15:09:37 +0000 UTC 2019-09-06 15:09:35 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-09-06 15:09:37 +0000 UTC 2019-09-06 15:09:35 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-09-06 15:09:37 +0000 UTC 2019-09-06 15:09:35 +0000 UTC KubeletReady kubelet is posting ready status}]
I0906 15:09:39.148635    5527 utils.go:177] Node "7ab053ab-5975-4dd7-a60f-6db3990be26f" is ready. Conditions are: [{OutOfDisk False 2019-09-06 15:09:37 +0000 UTC 2019-09-06 15:09:35 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2019-09-06 15:09:37 +0000 UTC 2019-09-06 15:09:35 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-09-06 15:09:37 +0000 UTC 2019-09-06 15:09:35 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-09-06 15:09:37 +0000 UTC 2019-09-06 15:09:35 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-09-06 15:09:37 +0000 UTC 2019-09-06 15:09:35 +0000 UTC KubeletReady kubelet is posting ready status}]
I0906 15:09:39.148659    5527 utils.go:177] Node "66cf1356-1533-4fee-8ea0-24d40b6aef5f" is ready. Conditions are: [{OutOfDisk False 2019-09-06 15:09:38 +0000 UTC 2019-09-06 15:09:36 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2019-09-06 15:09:38 +0000 UTC 2019-09-06 15:09:36 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-09-06 15:09:38 +0000 UTC 2019-09-06 15:09:36 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-09-06 15:09:38 +0000 UTC 2019-09-06 15:09:36 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-09-06 15:09:38 +0000 UTC 2019-09-06 15:09:36 +0000 UTC KubeletReady kubelet is posting ready status}]
STEP: Getting nodes
STEP: Creating 3 machineautoscalers
I0906 15:09:39.151562    5527 autoscaler.go:340] Create MachineAutoscaler backed by MachineSet kube-system/e2e-5508c-w-0 - min:1, max:2
I0906 15:09:39.158479    5527 autoscaler.go:340] Create MachineAutoscaler backed by MachineSet kube-system/e2e-5508c-w-1 - min:1, max:2
I0906 15:09:39.162577    5527 autoscaler.go:340] Create MachineAutoscaler backed by MachineSet kube-system/e2e-5508c-w-2 - min:1, max:2
STEP: Creating ClusterAutoscaler configured with maxNodesTotal:10
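
Each of the three creations above pairs one MachineSet with a min:1/max:2 range, while the ClusterAutoscaler created next caps the whole cluster at maxNodesTotal:10. A sketch of what such a MachineAutoscaler object looks like, assuming the cluster-autoscaler-operator's v1beta1 API; the import path, field names, and target APIVersion are assumptions to verify against that operator's types before reuse.

package e2e

import (
	autoscalingv1beta1 "github.com/openshift/cluster-autoscaler-operator/pkg/apis/autoscaling/v1beta1" // assumed path
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newMachineAutoscaler builds a min:1/max:2 autoscaler targeting one
// MachineSet, mirroring the "Create MachineAutoscaler backed by MachineSet"
// lines above. Assumed API shape, not the suite's actual helper.
func newMachineAutoscaler(namespace, targetMachineSet string) *autoscalingv1beta1.MachineAutoscaler {
	return &autoscalingv1beta1.MachineAutoscaler{
		ObjectMeta: metav1.ObjectMeta{
			GenerateName: "autoscale-" + targetMachineSet, // cf. the cleanup names later in the log
			Namespace:    namespace,
		},
		Spec: autoscalingv1beta1.MachineAutoscalerSpec{
			MinReplicas: 1,
			MaxReplicas: 2,
			ScaleTargetRef: autoscalingv1beta1.CrossVersionObjectReference{
				APIVersion: "machine.openshift.io/v1beta1", // assumed group/version
				Kind:       "MachineSet",
				Name:       targetMachineSet,
			},
		},
	}
}
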
STEP: Deriving Memory capacity from machine "kubemark-actuator-testing-machineset"
I0906 15:09:39.276486    5527 autoscaler.go:377] Memory capacity of worker node "359b0676-397f-402c-b209-ed17aa0a216c" is 3840Mi
STEP: Creating scale-out workload: jobs: 11, memory: 2818572300
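
The request size is not arbitrary: 3840Mi is 4026531840 bytes, and 2818572300 is almost exactly 70% of that, so no two jobs fit on one node and 11 jobs demand more nodes than maxNodesTotal:10 allows. A quick check of that arithmetic (the 70% factor is inferred from the numbers, not confirmed from the suite's source):

package main

import "fmt"

func main() {
	capacity := int64(3840) * 1024 * 1024 // 3840Mi = 4026531840 bytes
	perJob := int64(2818572300)           // memory request per job from the log
	fmt.Printf("per-job / capacity = %.4f\n", float64(perJob)/float64(capacity)) // ~0.7000
	// At ~70% of node memory per job, two jobs never co-locate, so the 11
	// jobs would need 11 fresh nodes; the cluster stops at maxNodesTotal:10.
}
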
I0906 15:09:39.304637    5527 autoscaler.go:399] [15m0s remaining] Expecting 2 "ScaledUpGroup" events; observed 0
I0906 15:09:40.379733    5527 autoscaler.go:361] cluster-autoscaler: cluster-autoscaler-default-598c649f66-tgmls became leader
I0906 15:09:42.304866    5527 autoscaler.go:399] [14m57s remaining] Expecting 2 "ScaledUpGroup" events; observed 0
I0906 15:09:45.305082    5527 autoscaler.go:399] [14m54s remaining] Expecting 2 "ScaledUpGroup" events; observed 0
I0906 15:09:48.305196    5527 autoscaler.go:399] [14m51s remaining] Expecting 2 "ScaledUpGroup" events; observed 0
I0906 15:09:50.515652    5527 autoscaler.go:361] cluster-autoscaler-status: Max total nodes in cluster reached: 10
I0906 15:09:50.518206    5527 autoscaler.go:361] cluster-autoscaler-status: Scale-up: setting group kube-system/e2e-5508c-w-1 size to 2
I0906 15:09:50.523617    5527 autoscaler.go:361] cluster-autoscaler-status: Scale-up: group kube-system/e2e-5508c-w-1 size set to 2
I0906 15:09:50.526317    5527 autoscaler.go:361] e2e-autoscaler-workload-k7d25: pod triggered scale-up: [{kube-system/e2e-5508c-w-1 1->2 (max: 2)}]
I0906 15:09:50.532696    5527 autoscaler.go:361] e2e-autoscaler-workload-7745h: pod triggered scale-up: [{kube-system/e2e-5508c-w-1 1->2 (max: 2)}]
I0906 15:09:50.538502    5527 autoscaler.go:361] e2e-autoscaler-workload-x9srw: pod triggered scale-up: [{kube-system/e2e-5508c-w-1 1->2 (max: 2)}]
I0906 15:09:50.544645    5527 autoscaler.go:361] e2e-autoscaler-workload-n5xxj: pod triggered scale-up: [{kube-system/e2e-5508c-w-1 1->2 (max: 2)}]
I0906 15:09:50.552081    5527 autoscaler.go:361] e2e-autoscaler-workload-2h24c: pod triggered scale-up: [{kube-system/e2e-5508c-w-1 1->2 (max: 2)}]
I0906 15:09:50.563329    5527 autoscaler.go:361] e2e-autoscaler-workload-hl5bk: pod triggered scale-up: [{kube-system/e2e-5508c-w-1 1->2 (max: 2)}]
I0906 15:09:50.570678    5527 autoscaler.go:361] e2e-autoscaler-workload-2lbwq: pod triggered scale-up: [{kube-system/e2e-5508c-w-1 1->2 (max: 2)}]
I0906 15:09:50.715740    5527 autoscaler.go:361] e2e-autoscaler-workload-cks94: pod triggered scale-up: [{kube-system/e2e-5508c-w-1 1->2 (max: 2)}]
I0906 15:09:51.305407    5527 autoscaler.go:399] [14m48s remaining] Expecting 2 "ScaledUpGroup" events; observed 1
I0906 15:09:54.305658    5527 autoscaler.go:399] [14m45s remaining] Expecting 2 "ScaledUpGroup" events; observed 1
I0906 15:09:57.306516    5527 autoscaler.go:399] [14m42s remaining] Expecting 2 "ScaledUpGroup" events; observed 1
I0906 15:10:00.306733    5527 autoscaler.go:399] [14m39s remaining] Expecting 2 "ScaledUpGroup" events; observed 1
I0906 15:10:00.548986    5527 autoscaler.go:361] cluster-autoscaler-status: Scale-up: setting group kube-system/e2e-5508c-w-0 size to 2
I0906 15:10:00.553746    5527 autoscaler.go:361] e2e-autoscaler-workload-cks94: pod triggered scale-up: [{kube-system/e2e-5508c-w-0 1->2 (max: 2)}]
I0906 15:10:00.560145    5527 autoscaler.go:361] cluster-autoscaler-status: Scale-up: group kube-system/e2e-5508c-w-0 size set to 2
I0906 15:10:00.562190    5527 autoscaler.go:361] e2e-autoscaler-workload-7745h: pod triggered scale-up: [{kube-system/e2e-5508c-w-0 1->2 (max: 2)}]
I0906 15:10:00.564159    5527 autoscaler.go:361] e2e-autoscaler-workload-n5xxj: pod triggered scale-up: [{kube-system/e2e-5508c-w-0 1->2 (max: 2)}]
I0906 15:10:00.570030    5527 autoscaler.go:361] e2e-autoscaler-workload-k7d25: pod triggered scale-up: [{kube-system/e2e-5508c-w-0 1->2 (max: 2)}]
I0906 15:10:00.578727    5527 autoscaler.go:361] e2e-autoscaler-workload-x9srw: pod triggered scale-up: [{kube-system/e2e-5508c-w-0 1->2 (max: 2)}]
I0906 15:10:00.587666    5527 autoscaler.go:361] e2e-autoscaler-workload-hl5bk: pod triggered scale-up: [{kube-system/e2e-5508c-w-0 1->2 (max: 2)}]
I0906 15:10:00.591015    5527 autoscaler.go:361] e2e-autoscaler-workload-2lbwq: pod triggered scale-up: [{kube-system/e2e-5508c-w-0 1->2 (max: 2)}]
I0906 15:10:03.306991    5527 autoscaler.go:399] [14m36s remaining] Expecting 2 "ScaledUpGroup" events; observed 2
I0906 15:10:03.307900    5527 autoscaler.go:414] [1m0s remaining] Waiting for cluster-autoscaler to generate a "MaxNodesTotalReached" event; observed 1
I0906 15:10:03.307930    5527 autoscaler.go:422] [1m0s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0906 15:10:06.308135    5527 autoscaler.go:422] [57s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0906 15:10:09.308419    5527 autoscaler.go:422] [54s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0906 15:10:12.308836    5527 autoscaler.go:422] [51s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0906 15:10:15.309087    5527 autoscaler.go:422] [48s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0906 15:10:18.309347    5527 autoscaler.go:422] [45s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0906 15:10:21.309578    5527 autoscaler.go:422] [42s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0906 15:10:24.309822    5527 autoscaler.go:422] [39s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0906 15:10:27.310774    5527 autoscaler.go:422] [36s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0906 15:10:30.310996    5527 autoscaler.go:422] [33s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0906 15:10:33.311243    5527 autoscaler.go:422] [30s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0906 15:10:36.311532    5527 autoscaler.go:422] [27s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0906 15:10:39.311794    5527 autoscaler.go:422] [24s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0906 15:10:42.312009    5527 autoscaler.go:422] [21s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0906 15:10:45.312269    5527 autoscaler.go:422] [18s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0906 15:10:48.312537    5527 autoscaler.go:422] [15s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0906 15:10:51.312789    5527 autoscaler.go:422] [12s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0906 15:10:54.313064    5527 autoscaler.go:422] [9s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0906 15:10:57.313292    5527 autoscaler.go:422] [6s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0906 15:11:00.313459    5527 autoscaler.go:422] [3s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
STEP: Deleting workload
I0906 15:11:03.308151    5527 autoscaler.go:249] [cleanup] "e2e-autoscaler-workload" (*v1.Job)
I0906 15:11:03.313341    5527 autoscaler.go:434] [15m0s remaining] Expecting 2 "ScaleDownEmpty" events; observed 2
I0906 15:11:03.348034    5527 autoscaler.go:445] workload POD "e2e-autoscaler-workload-2h24c" still present
I0906 15:11:03.348073    5527 autoscaler.go:249] [cleanup] "default" (*v1.ClusterAutoscaler)
I0906 15:11:03.452490    5527 autoscaler.go:465] Waiting for cluster-autoscaler POD "cluster-autoscaler-default-598c649f66-tgmls" to disappear
STEP: Scaling transient machinesets to zero
I0906 15:11:03.452550    5527 autoscaler.go:474] Scaling transient machineset "e2e-5508c-w-0" to zero
I0906 15:11:03.458112    5527 autoscaler.go:474] Scaling transient machineset "e2e-5508c-w-1" to zero
I0906 15:11:03.466094    5527 autoscaler.go:474] Scaling transient machineset "e2e-5508c-w-2" to zero
STEP: Waiting for scaled up nodes to be deleted
I0906 15:11:03.522000    5527 autoscaler.go:491] [15m0s remaining] Waiting for cluster to reach original node count of 5; currently have 10
I0906 15:11:06.526461    5527 autoscaler.go:491] [14m57s remaining] Waiting for cluster to reach original node count of 5; currently have 8
I0906 15:11:09.530138    5527 autoscaler.go:491] [14m54s remaining] Waiting for cluster to reach original node count of 5; currently have 5
STEP: Waiting for scaled up machines to be deleted
I0906 15:11:09.533584    5527 autoscaler.go:501] [15m0s remaining] Waiting for cluster to reach original machine count of 5; currently have 5
I0906 15:11:09.533616    5527 autoscaler.go:249] [cleanup] "autoscale-e2e-5508c-w-0mtzfn" (*v1beta1.MachineAutoscaler)
I0906 15:11:09.536918    5527 autoscaler.go:249] [cleanup] "autoscale-e2e-5508c-w-1zmp8d" (*v1beta1.MachineAutoscaler)
I0906 15:11:09.540193    5527 autoscaler.go:249] [cleanup] "autoscale-e2e-5508c-w-2z6hhv" (*v1beta1.MachineAutoscaler)
I0906 15:11:09.545457    5527 autoscaler.go:249] [cleanup] "e2e-5508c-w-0" (*v1beta1.MachineSet)
I0906 15:11:09.549133    5527 autoscaler.go:249] [cleanup] "e2e-5508c-w-1" (*v1beta1.MachineSet)
I0906 15:11:09.554079    5527 autoscaler.go:249] [cleanup] "e2e-5508c-w-2" (*v1beta1.MachineSet)

• [SLOW TEST:96.546 seconds]
[Feature:Machines] Autoscaler should
/tmp/tmp.3XEIfW31vl/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/autoscaler/autoscaler.go:233
  scale up and down
  /tmp/tmp.3XEIfW31vl/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/autoscaler/autoscaler.go:234
------------------------------
S
------------------------------
[Feature:Machines] Managed cluster should 
  have machines linked with nodes
  /tmp/tmp.3XEIfW31vl/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/infra/infra.go:136
I0906 15:11:09.561108    5527 framework.go:406] >>> kubeConfig: /root/.kube/config
I0906 15:11:09.579106    5527 utils.go:47] [remaining 3m0s] Expecting the same number of machines and nodes, have 5 nodes and 5 machines
I0906 15:11:09.579139    5527 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-6pt7l" is linked to node "6ed3bc5e-d85d-4e5c-bce4-61d11ef633ab"
I0906 15:11:09.579152    5527 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-blue-hpgct" is linked to node "8d76d38d-5446-4aef-802c-ad0fcfdb4546"
I0906 15:11:09.579160    5527 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-green-nr9lx" is linked to node "359b0676-397f-402c-b209-ed17aa0a216c"
I0906 15:11:09.579169    5527 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-red-s4l9g" is linked to node "c81bafaa-7edf-4fb4-b5c9-b78f1548066b"
I0906 15:11:09.579185    5527 utils.go:70] [remaining 3m0s] Machine "minikube-static-machine" is linked to node "minikube"
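
The linkage lines above come from matching each Machine's status.nodeRef against the live node list, after first checking that machine and node counts agree. A sketch of that comparison, assuming the machine-api v1beta1 Machine type (the import path is an assumption):

package e2e

import (
	"fmt"

	machinev1 "github.com/openshift/machine-api-operator/pkg/apis/machine/v1beta1" // assumed path
	corev1 "k8s.io/api/core/v1"
)

// checkLinks verifies every machine's status.nodeRef names an existing node,
// as in the "is linked to node" lines above. Listing and retries elided.
func checkLinks(machines []machinev1.Machine, nodes map[string]corev1.Node) error {
	if len(machines) != len(nodes) {
		return fmt.Errorf("have %d machines but %d nodes", len(machines), len(nodes))
	}
	for _, m := range machines {
		ref := m.Status.NodeRef
		if ref == nil {
			return fmt.Errorf("machine %q has no NodeRef", m.Name)
		}
		if _, ok := nodes[ref.Name]; !ok {
			return fmt.Errorf("machine %q links to unknown node %q", m.Name, ref.Name)
		}
	}
	return nil
}
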
•
------------------------------
[Feature:Machines] Managed cluster should 
  have ability to additively reconcile taints from machine to nodes
  /tmp/tmp.3XEIfW31vl/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/infra/infra.go:145
I0906 15:11:09.579237    5527 framework.go:406] >>> kubeConfig: /root/.kube/config
STEP: getting machine "kubemark-actuator-testing-machineset-6pt7l"
I0906 15:11:09.598496    5527 utils.go:165] Machine "kubemark-actuator-testing-machineset-6pt7l" is backing node "6ed3bc5e-d85d-4e5c-bce4-61d11ef633ab"
STEP: getting the backed node "6ed3bc5e-d85d-4e5c-bce4-61d11ef633ab"
STEP: updating node "6ed3bc5e-d85d-4e5c-bce4-61d11ef633ab" with taint: {not-from-machine true NoSchedule <nil>}
STEP: updating machine "kubemark-actuator-testing-machineset-6pt7l" with taint: {from-machine-8e92327e-d0b8-11e9-978c-0a445740e986 true NoSchedule <nil>}
I0906 15:11:09.607997    5527 infra.go:184] Getting node from machine again for verification of taints
I0906 15:11:09.611944    5527 utils.go:165] Machine "kubemark-actuator-testing-machineset-6pt7l" is backing node "6ed3bc5e-d85d-4e5c-bce4-61d11ef633ab"
I0906 15:11:09.611980    5527 infra.go:194] Expected: map[not-from-machine:{} from-machine-8e92327e-d0b8-11e9-978c-0a445740e986:{}], observed: map[kubemark:{} not-from-machine:{} from-machine-8e92327e-d0b8-11e9-978c-0a445740e986:{}], difference: map[]
STEP: Getting the latest version of the original machine
STEP: Setting back the original machine taints
STEP: Getting the latest version of the node
I0906 15:11:09.625610    5527 utils.go:165] Machine "kubemark-actuator-testing-machineset-6pt7l" is backing node "6ed3bc5e-d85d-4e5c-bce4-61d11ef633ab"
STEP: Setting back the original node taints
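
The "difference: map[]" line above is the heart of the additive check: reconciliation only requires that every expected taint (the machine-spec taint plus the one set directly on the node) appears on the node, while extra node taints such as "kubemark" are allowed. A key-set sketch of that comparison (key-only matching is an assumption):

package e2e

import corev1 "k8s.io/api/core/v1"

// expectedMinusObserved returns expected taint keys missing from the node.
// An empty result means expected is a subset of observed, which is all the
// additive reconciliation test demands.
func expectedMinusObserved(expected, observed []corev1.Taint) map[string]struct{} {
	seen := make(map[string]struct{}, len(observed))
	for _, t := range observed {
		seen[t.Key] = struct{}{}
	}
	diff := map[string]struct{}{}
	for _, t := range expected {
		if _, ok := seen[t.Key]; !ok {
			diff[t.Key] = struct{}{}
		}
	}
	return diff
}
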
•
------------------------------
[Feature:Machines] Managed cluster should 
  recover from deleted worker machines
  /tmp/tmp.3XEIfW31vl/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/infra/infra.go:220
I0906 15:11:09.629879    5527 framework.go:406] >>> kubeConfig: /root/.kube/config
STEP: checking initial cluster state
I0906 15:11:09.657202    5527 utils.go:87] Cluster size is 5 nodes
I0906 15:11:09.657230    5527 utils.go:239] [remaining 15m0s] Cluster size expected to be 5 nodes
I0906 15:11:09.661265    5527 utils.go:99] MachineSet "e2e-5508c-w-0" replicas 0. Ready: 0, available 0
I0906 15:11:09.661290    5527 utils.go:99] MachineSet "e2e-5508c-w-1" replicas 0. Ready: 0, available 0
I0906 15:11:09.661299    5527 utils.go:99] MachineSet "e2e-5508c-w-2" replicas 0. Ready: 0, available 0
I0906 15:11:09.661307    5527 utils.go:99] MachineSet "kubemark-actuator-testing-machineset" replicas 1. Ready: 1, available 1
I0906 15:11:09.661316    5527 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-blue" replicas 1. Ready: 1, available 1
I0906 15:11:09.661325    5527 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-green" replicas 1. Ready: 1, available 1
I0906 15:11:09.661334    5527 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-red" replicas 1. Ready: 1, available 1
I0906 15:11:09.664335    5527 utils.go:231] Node "359b0676-397f-402c-b209-ed17aa0a216c". Ready: true. Unschedulable: false
I0906 15:11:09.664360    5527 utils.go:231] Node "6ed3bc5e-d85d-4e5c-bce4-61d11ef633ab". Ready: true. Unschedulable: false
I0906 15:11:09.664371    5527 utils.go:231] Node "8d76d38d-5446-4aef-802c-ad0fcfdb4546". Ready: true. Unschedulable: false
I0906 15:11:09.664376    5527 utils.go:231] Node "c81bafaa-7edf-4fb4-b5c9-b78f1548066b". Ready: true. Unschedulable: false
I0906 15:11:09.664382    5527 utils.go:231] Node "minikube". Ready: true. Unschedulable: false
I0906 15:11:09.667091    5527 utils.go:87] Cluster size is 5 nodes
I0906 15:11:09.667108    5527 utils.go:257] waiting for all nodes to be ready
I0906 15:11:09.670260    5527 utils.go:262] waiting for all nodes to be schedulable
I0906 15:11:09.674411    5527 utils.go:290] [remaining 1m0s] Node "359b0676-397f-402c-b209-ed17aa0a216c" is schedulable
I0906 15:11:09.674440    5527 utils.go:290] [remaining 1m0s] Node "6ed3bc5e-d85d-4e5c-bce4-61d11ef633ab" is schedulable
I0906 15:11:09.674450    5527 utils.go:290] [remaining 1m0s] Node "8d76d38d-5446-4aef-802c-ad0fcfdb4546" is schedulable
I0906 15:11:09.674457    5527 utils.go:290] [remaining 1m0s] Node "c81bafaa-7edf-4fb4-b5c9-b78f1548066b" is schedulable
I0906 15:11:09.674463    5527 utils.go:290] [remaining 1m0s] Node "minikube" is schedulable
I0906 15:11:09.674471    5527 utils.go:267] waiting for each node to be backed by a machine
I0906 15:11:09.684919    5527 utils.go:47] [remaining 3m0s] Expecting the same number of machines and nodes, have 5 nodes and 5 machines
I0906 15:11:09.684955    5527 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-6pt7l" is linked to node "6ed3bc5e-d85d-4e5c-bce4-61d11ef633ab"
I0906 15:11:09.684970    5527 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-blue-hpgct" is linked to node "8d76d38d-5446-4aef-802c-ad0fcfdb4546"
I0906 15:11:09.684984    5527 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-green-nr9lx" is linked to node "359b0676-397f-402c-b209-ed17aa0a216c"
I0906 15:11:09.684997    5527 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-red-s4l9g" is linked to node "c81bafaa-7edf-4fb4-b5c9-b78f1548066b"
I0906 15:11:09.685015    5527 utils.go:70] [remaining 3m0s] Machine "minikube-static-machine" is linked to node "minikube"
STEP: getting worker node
STEP: deleting machine object "kubemark-actuator-testing-machineset-green-nr9lx"
STEP: waiting for node object "359b0676-397f-402c-b209-ed17aa0a216c" to go away
I0906 15:11:09.699018    5527 infra.go:255] Node "359b0676-397f-402c-b209-ed17aa0a216c" still exists. Node conditions are: [{OutOfDisk False 2019-09-06 15:11:09 +0000 UTC 2019-09-06 15:08:33 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2019-09-06 15:11:09 +0000 UTC 2019-09-06 15:08:33 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-09-06 15:11:09 +0000 UTC 2019-09-06 15:08:33 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-09-06 15:11:09 +0000 UTC 2019-09-06 15:08:33 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-09-06 15:11:09 +0000 UTC 2019-09-06 15:08:33 +0000 UTC KubeletReady kubelet is posting ready status}]
STEP: waiting for new node object to come up
I0906 15:11:14.703992    5527 utils.go:239] [remaining 15m0s] Cluster size expected to be 5 nodes
I0906 15:11:14.707665    5527 utils.go:99] MachineSet "kubemark-actuator-testing-machineset" replicas 1. Ready: 1, available 1
I0906 15:11:14.707687    5527 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-blue" replicas 1. Ready: 1, available 1
I0906 15:11:14.707694    5527 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-green" replicas 1. Ready: 1, available 1
I0906 15:11:14.707699    5527 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-red" replicas 1. Ready: 1, available 1
I0906 15:11:14.710430    5527 utils.go:231] Node "6ed3bc5e-d85d-4e5c-bce4-61d11ef633ab". Ready: true. Unschedulable: false
I0906 15:11:14.710449    5527 utils.go:231] Node "8d76d38d-5446-4aef-802c-ad0fcfdb4546". Ready: true. Unschedulable: false
I0906 15:11:14.710454    5527 utils.go:231] Node "b3408843-b44c-4857-ab9d-3b13ab158aea". Ready: true. Unschedulable: false
I0906 15:11:14.710459    5527 utils.go:231] Node "c81bafaa-7edf-4fb4-b5c9-b78f1548066b". Ready: true. Unschedulable: false
I0906 15:11:14.710468    5527 utils.go:231] Node "minikube". Ready: true. Unschedulable: false
I0906 15:11:14.713015    5527 utils.go:87] Cluster size is 5 nodes
I0906 15:11:14.713041    5527 utils.go:257] waiting for all nodes to be ready
I0906 15:11:14.715847    5527 utils.go:262] waiting for all nodes to be schedulable
I0906 15:11:14.718805    5527 utils.go:290] [remaining 1m0s] Node "6ed3bc5e-d85d-4e5c-bce4-61d11ef633ab" is schedulable
I0906 15:11:14.718828    5527 utils.go:290] [remaining 1m0s] Node "8d76d38d-5446-4aef-802c-ad0fcfdb4546" is schedulable
I0906 15:11:14.718835    5527 utils.go:290] [remaining 1m0s] Node "b3408843-b44c-4857-ab9d-3b13ab158aea" is schedulable
I0906 15:11:14.718842    5527 utils.go:290] [remaining 1m0s] Node "c81bafaa-7edf-4fb4-b5c9-b78f1548066b" is schedulable
I0906 15:11:14.718862    5527 utils.go:290] [remaining 1m0s] Node "minikube" is schedulable
I0906 15:11:14.718868    5527 utils.go:267] waiting for each node to be backed by a machine
I0906 15:11:14.724556    5527 utils.go:47] [remaining 3m0s] Expecting the same number of machines and nodes, have 5 nodes and 5 machines
I0906 15:11:14.724583    5527 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-6pt7l" is linked to node "6ed3bc5e-d85d-4e5c-bce4-61d11ef633ab"
I0906 15:11:14.724594    5527 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-blue-hpgct" is linked to node "8d76d38d-5446-4aef-802c-ad0fcfdb4546"
I0906 15:11:14.724602    5527 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-green-scthk" is linked to node "b3408843-b44c-4857-ab9d-3b13ab158aea"
I0906 15:11:14.724613    5527 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-red-s4l9g" is linked to node "c81bafaa-7edf-4fb4-b5c9-b78f1548066b"
I0906 15:11:14.724634    5527 utils.go:70] [remaining 3m0s] Machine "minikube-static-machine" is linked to node "minikube"
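
The recovery sequence above chains two waits: first for the deleted machine's node object to disappear, then for the machineset controller's replacement to bring the cluster back to its original size. A sketch of the first wait (interval and timeout are assumptions):

package e2e

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForNodeGone polls until the node backing the deleted machine is gone;
// only then does the suite start counting nodes again.
func waitForNodeGone(c kubernetes.Interface, name string) error {
	return wait.PollImmediate(5*time.Second, 15*time.Minute, func() (bool, error) {
		_, err := c.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // node object went away; replacement comes next
		}
		return false, err // still present (err == nil) or a real error
	})
}
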

• [SLOW TEST:5.095 seconds]
[Feature:Machines] Managed cluster should
/tmp/tmp.3XEIfW31vl/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/infra/infra.go:126
  recover from deleted worker machines
  /tmp/tmp.3XEIfW31vl/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/infra/infra.go:220
------------------------------
[Feature:Machines] Managed cluster should 
  grow and decrease when scaling different machineSets simultaneously
  /tmp/tmp.3XEIfW31vl/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/infra/infra.go:267
I0906 15:11:14.724720    5527 framework.go:406] >>> kubeConfig: /root/.kube/config
STEP: checking existing cluster size
I0906 15:11:14.740937    5527 utils.go:87] Cluster size is 5 nodes
STEP: getting worker machineSets
I0906 15:11:14.743851    5527 infra.go:297] Creating transient MachineSet "e2e-91a2d-w-0"
I0906 15:11:14.748839    5527 infra.go:297] Creating transient MachineSet "e2e-91a2d-w-1"
STEP: scaling "e2e-91a2d-w-0" from 0 to 2 replicas
I0906 15:11:14.752871    5527 framework.go:406] >>> kubeConfig: /root/.kube/config
STEP: scaling "e2e-91a2d-w-1" from 0 to 2 replicas
I0906 15:11:14.772375    5527 framework.go:406] >>> kubeConfig: /root/.kube/config
E0906 15:11:14.812124    5527 utils.go:157] Machine "e2e-91a2d-w-0-bcdrm" has no NodeRef
I0906 15:11:19.829884    5527 utils.go:165] Machine "e2e-91a2d-w-0-bcdrm" is backing node "41e2bf6d-a04c-4354-afea-7e711d38300e"
I0906 15:11:19.838522    5527 utils.go:165] Machine "e2e-91a2d-w-0-v2vd5" is backing node "a18bf460-d110-4af6-91a0-4af7a5c1fe76"
I0906 15:11:19.838545    5527 utils.go:149] MachineSet "e2e-91a2d-w-0" has 2 nodes
E0906 15:11:19.852028    5527 utils.go:157] Machine "e2e-91a2d-w-1-kxg8f" has no NodeRef
I0906 15:11:24.860019    5527 utils.go:165] Machine "e2e-91a2d-w-0-bcdrm" is backing node "41e2bf6d-a04c-4354-afea-7e711d38300e"
I0906 15:11:24.862527    5527 utils.go:165] Machine "e2e-91a2d-w-0-v2vd5" is backing node "a18bf460-d110-4af6-91a0-4af7a5c1fe76"
I0906 15:11:24.862548    5527 utils.go:149] MachineSet "e2e-91a2d-w-0" has 2 nodes
I0906 15:11:24.868337    5527 utils.go:165] Machine "e2e-91a2d-w-1-kxg8f" is backing node "86eff62d-6aee-4907-b3a3-b0af551e243b"
I0906 15:11:24.870121    5527 utils.go:165] Machine "e2e-91a2d-w-1-z5zn4" is backing node "f94e2b84-7660-436d-b933-ce06e9220145"
I0906 15:11:24.870145    5527 utils.go:149] MachineSet "e2e-91a2d-w-1" has 2 nodes
I0906 15:11:24.870156    5527 utils.go:177] Node "41e2bf6d-a04c-4354-afea-7e711d38300e" is ready. Conditions are: [{OutOfDisk False 2019-09-06 15:11:23 +0000 UTC 2019-09-06 15:11:17 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2019-09-06 15:11:23 +0000 UTC 2019-09-06 15:11:17 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-09-06 15:11:23 +0000 UTC 2019-09-06 15:11:17 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-09-06 15:11:23 +0000 UTC 2019-09-06 15:11:17 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-09-06 15:11:23 +0000 UTC 2019-09-06 15:11:17 +0000 UTC KubeletReady kubelet is posting ready status}]
I0906 15:11:24.870250    5527 utils.go:177] Node "a18bf460-d110-4af6-91a0-4af7a5c1fe76" is ready. Conditions are: [{OutOfDisk False 2019-09-06 15:11:24 +0000 UTC 2019-09-06 15:11:17 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2019-09-06 15:11:24 +0000 UTC 2019-09-06 15:11:17 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-09-06 15:11:24 +0000 UTC 2019-09-06 15:11:17 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-09-06 15:11:24 +0000 UTC 2019-09-06 15:11:17 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-09-06 15:11:24 +0000 UTC 2019-09-06 15:11:17 +0000 UTC KubeletReady kubelet is posting ready status}]
I0906 15:11:24.870288    5527 utils.go:177] Node "86eff62d-6aee-4907-b3a3-b0af551e243b" is ready. Conditions are: [{OutOfDisk False 2019-09-06 15:11:24 +0000 UTC 2019-09-06 15:11:19 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2019-09-06 15:11:24 +0000 UTC 2019-09-06 15:11:19 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-09-06 15:11:24 +0000 UTC 2019-09-06 15:11:19 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-09-06 15:11:24 +0000 UTC 2019-09-06 15:11:19 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-09-06 15:11:24 +0000 UTC 2019-09-06 15:11:19 +0000 UTC KubeletReady kubelet is posting ready status}]
I0906 15:11:24.870315    5527 utils.go:177] Node "f94e2b84-7660-436d-b933-ce06e9220145" is ready. Conditions are: [{OutOfDisk False 2019-09-06 15:11:22 +0000 UTC 2019-09-06 15:11:18 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2019-09-06 15:11:22 +0000 UTC 2019-09-06 15:11:18 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-09-06 15:11:22 +0000 UTC 2019-09-06 15:11:18 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-09-06 15:11:22 +0000 UTC 2019-09-06 15:11:18 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-09-06 15:11:22 +0000 UTC 2019-09-06 15:11:18 +0000 UTC KubeletReady kubelet is posting ready status}]
STEP: scaling "e2e-91a2d-w-0" from 2 to 0 replicas
I0906 15:11:24.870364    5527 framework.go:406] >>> kubeConfig: /root/.kube/config
STEP: scaling "e2e-91a2d-w-1" from 2 to 0 replicas
I0906 15:11:24.892046    5527 framework.go:406] >>> kubeConfig: /root/.kube/config
STEP: waiting for cluster to get back to original size. Final size should be 5 nodes
I0906 15:11:24.924116    5527 utils.go:239] [remaining 15m0s] Cluster size expected to be 5 nodes
I0906 15:11:24.989484    5527 utils.go:99] MachineSet "e2e-91a2d-w-0" replicas 0. Ready: 2, available 2
I0906 15:11:24.989519    5527 utils.go:99] MachineSet "e2e-91a2d-w-1" replicas 0. Ready: 2, available 2
I0906 15:11:24.989529    5527 utils.go:99] MachineSet "kubemark-actuator-testing-machineset" replicas 1. Ready: 1, available 1
I0906 15:11:24.989539    5527 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-blue" replicas 1. Ready: 1, available 1
I0906 15:11:24.989548    5527 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-green" replicas 1. Ready: 1, available 1
I0906 15:11:24.989558    5527 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-red" replicas 1. Ready: 1, available 1
I0906 15:11:25.006743    5527 utils.go:231] Node "41e2bf6d-a04c-4354-afea-7e711d38300e". Ready: true. Unschedulable: false
I0906 15:11:25.006770    5527 utils.go:231] Node "6ed3bc5e-d85d-4e5c-bce4-61d11ef633ab". Ready: true. Unschedulable: false
I0906 15:11:25.006779    5527 utils.go:231] Node "86eff62d-6aee-4907-b3a3-b0af551e243b". Ready: true. Unschedulable: false
I0906 15:11:25.006787    5527 utils.go:231] Node "8d76d38d-5446-4aef-802c-ad0fcfdb4546". Ready: true. Unschedulable: false
I0906 15:11:25.006795    5527 utils.go:231] Node "a18bf460-d110-4af6-91a0-4af7a5c1fe76". Ready: true. Unschedulable: false
I0906 15:11:25.006803    5527 utils.go:231] Node "b3408843-b44c-4857-ab9d-3b13ab158aea". Ready: true. Unschedulable: false
I0906 15:11:25.006811    5527 utils.go:231] Node "c81bafaa-7edf-4fb4-b5c9-b78f1548066b". Ready: true. Unschedulable: false
I0906 15:11:25.006823    5527 utils.go:231] Node "f94e2b84-7660-436d-b933-ce06e9220145". Ready: true. Unschedulable: false
I0906 15:11:25.006831    5527 utils.go:231] Node "minikube". Ready: true. Unschedulable: false
I0906 15:11:25.023990    5527 utils.go:87] Cluster size is 9 nodes
I0906 15:11:30.024230    5527 utils.go:239] [remaining 14m55s] Cluster size expected to be 5 nodes
I0906 15:11:30.029565    5527 utils.go:99] MachineSet "e2e-91a2d-w-0" replicas 0. Ready: 0, available 0
I0906 15:11:30.029588    5527 utils.go:99] MachineSet "e2e-91a2d-w-1" replicas 0. Ready: 0, available 0
I0906 15:11:30.029598    5527 utils.go:99] MachineSet "kubemark-actuator-testing-machineset" replicas 1. Ready: 1, available 1
I0906 15:11:30.029607    5527 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-blue" replicas 1. Ready: 1, available 1
I0906 15:11:30.029613    5527 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-green" replicas 1. Ready: 1, available 1
I0906 15:11:30.029618    5527 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-red" replicas 1. Ready: 1, available 1
I0906 15:11:30.035465    5527 utils.go:231] Node "6ed3bc5e-d85d-4e5c-bce4-61d11ef633ab". Ready: true. Unschedulable: false
I0906 15:11:30.035485    5527 utils.go:231] Node "8d76d38d-5446-4aef-802c-ad0fcfdb4546". Ready: true. Unschedulable: false
I0906 15:11:30.035495    5527 utils.go:231] Node "b3408843-b44c-4857-ab9d-3b13ab158aea". Ready: true. Unschedulable: false
I0906 15:11:30.035503    5527 utils.go:231] Node "c81bafaa-7edf-4fb4-b5c9-b78f1548066b". Ready: true. Unschedulable: false
I0906 15:11:30.035512    5527 utils.go:231] Node "minikube". Ready: true. Unschedulable: false
I0906 15:11:30.038851    5527 utils.go:87] Cluster size is 5 nodes
I0906 15:11:30.038883    5527 utils.go:257] waiting for all nodes to be ready
I0906 15:11:30.042502    5527 utils.go:262] waiting for all nodes to be schedulable
I0906 15:11:30.049309    5527 utils.go:290] [remaining 1m0s] Node "6ed3bc5e-d85d-4e5c-bce4-61d11ef633ab" is schedulable
I0906 15:11:30.049334    5527 utils.go:290] [remaining 1m0s] Node "8d76d38d-5446-4aef-802c-ad0fcfdb4546" is schedulable
I0906 15:11:30.049346    5527 utils.go:290] [remaining 1m0s] Node "b3408843-b44c-4857-ab9d-3b13ab158aea" is schedulable
I0906 15:11:30.049357    5527 utils.go:290] [remaining 1m0s] Node "c81bafaa-7edf-4fb4-b5c9-b78f1548066b" is schedulable
I0906 15:11:30.049367    5527 utils.go:290] [remaining 1m0s] Node "minikube" is schedulable
I0906 15:11:30.049376    5527 utils.go:267] waiting for each node to be backed by a machine
I0906 15:11:30.058252    5527 utils.go:47] [remaining 3m0s] Expecting the same number of machines and nodes, have 5 nodes and 5 machines
I0906 15:11:30.058283    5527 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-6pt7l" is linked to node "6ed3bc5e-d85d-4e5c-bce4-61d11ef633ab"
I0906 15:11:30.058303    5527 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-blue-hpgct" is linked to node "8d76d38d-5446-4aef-802c-ad0fcfdb4546"
I0906 15:11:30.058318    5527 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-green-scthk" is linked to node "b3408843-b44c-4857-ab9d-3b13ab158aea"
I0906 15:11:30.058331    5527 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-red-s4l9g" is linked to node "c81bafaa-7edf-4fb4-b5c9-b78f1548066b"
I0906 15:11:30.058344    5527 utils.go:70] [remaining 3m0s] Machine "minikube-static-machine" is linked to node "minikube"
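
Scaling a transient MachineSet up or down, as in the "scaling ... from 0 to 2 replicas" steps above, is just an update of spec.replicas; the machine controller then converges the actual machine and node counts. A sketch with a controller-runtime client (client flavor and import path are assumptions):

package e2e

import (
	"context"

	machinev1 "github.com/openshift/machine-api-operator/pkg/apis/machine/v1beta1" // assumed path
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// scaleMachineSet sets spec.replicas on a MachineSet and lets the controller
// reconcile machines and nodes toward the new count.
func scaleMachineSet(ctx context.Context, c client.Client, namespace, name string, replicas int32) error {
	ms := &machinev1.MachineSet{}
	if err := c.Get(ctx, client.ObjectKey{Namespace: namespace, Name: name}, ms); err != nil {
		return err
	}
	ms.Spec.Replicas = &replicas
	return c.Update(ctx, ms)
}
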

• [SLOW TEST:15.344 seconds]
[Feature:Machines] Managed cluster should
/tmp/tmp.3XEIfW31vl/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/infra/infra.go:126
  grow and decrease when scaling different machineSets simultaneously
  /tmp/tmp.3XEIfW31vl/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/infra/infra.go:267
------------------------------
[Feature:Machines] Managed cluster should 
  drain node before removing machine resource
  /tmp/tmp.3XEIfW31vl/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/infra/infra.go:346
I0906 15:11:30.068510    5527 framework.go:406] >>> kubeConfig: /root/.kube/config
STEP: checking existing cluster size
I0906 15:11:30.085166    5527 utils.go:87] Cluster size is 5 nodes
STEP: Taking the first worker machineset (assuming only worker machines are backed by machinesets)
STEP: Creating two new machines: one for the node about to be drained, the other to take the workload moved off the drained node
STEP: Waiting until both new nodes are ready
E0906 15:11:30.096585    5527 utils.go:342] [remaining 15m0s] Expecting 2 nodes with map[string]string{"node-role.kubernetes.io/worker":"", "node-draining-test":"54ff90f1-d0b8-11e9-978c-0a445740e986"} labels in Ready state, got 0
I0906 15:11:35.100243    5527 utils.go:346] [14m55s remaining] Found the expected number (2) of nodes with map[node-draining-test:54ff90f1-d0b8-11e9-978c-0a445740e986 node-role.kubernetes.io/worker:] labels in Ready state
STEP: Creating RC with workload
STEP: Creating PDB for RC
STEP: Wait until all replicas are ready
I0906 15:11:35.141657    5527 utils.go:396] [15m0s remaining] Waiting for at least one RC ready replica, ReadyReplicas: 0, Replicas: 0
I0906 15:11:40.145072    5527 utils.go:396] [14m55s remaining] Waiting for at least one RC ready replica, ReadyReplicas: 0, Replicas: 20
I0906 15:11:45.143917    5527 utils.go:399] [14m50s remaining] Waiting for RC ready replicas, ReadyReplicas: 20, Replicas: 20
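
The PDB created above is what later forces the drain to evict the 20 pods gradually rather than all at once. A sketch of such an object using the policy/v1beta1 API current for this run; the MinAvailable value and names are assumptions, not the suite's exact spec:

package e2e

import (
	policyv1beta1 "k8s.io/api/policy/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// newPDB builds a PodDisruptionBudget over the RC's pods so voluntary
// evictions during the drain must keep minAvailable replicas running.
func newPDB(namespace string, minAvailable int) *policyv1beta1.PodDisruptionBudget {
	min := intstr.FromInt(minAvailable)
	return &policyv1beta1.PodDisruptionBudget{
		ObjectMeta: metav1.ObjectMeta{Name: "pdb-workload", Namespace: namespace},
		Spec: policyv1beta1.PodDisruptionBudgetSpec{
			MinAvailable: &min,
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"app": "nginx"}, // matches the pod labels below
			},
		},
	}
}
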
I0906 15:11:45.153706    5527 utils.go:416] POD #0/20: {
  "metadata": {
    "name": "pdb-workload-5wbhf",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-5wbhf",
    "uid": "9dce09a4-d0b8-11e9-b3bc-0a445740e986",
    "resourceVersion": "3788",
    "creationTimestamp": "2019-09-06T15:11:35Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "9dc65a09-d0b8-11e9-b3bc-0a445740e986",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-t266s",
        "secret": {
          "secretName": "default-token-t266s",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-t266s",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "54ff90f1-d0b8-11e9-978c-0a445740e986",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "927f2a33-8b87-455d-9a89-7c030aa4fcf2",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:35Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:41Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:35Z"
      }
    ],
    "hostIP": "172.17.0.23",
    "podIP": "10.211.234.220",
    "startTime": "2019-09-06T15:11:35Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-06T15:11:40Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://2e0cea5ea8d141c4"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0906 15:11:45.153865    5527 utils.go:416] POD #1/20: {
  "metadata": {
    "name": "pdb-workload-747sq",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-747sq",
    "uid": "9dcb94ff-d0b8-11e9-b3bc-0a445740e986",
    "resourceVersion": "3767",
    "creationTimestamp": "2019-09-06T15:11:35Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "9dc65a09-d0b8-11e9-b3bc-0a445740e986",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-t266s",
        "secret": {
          "secretName": "default-token-t266s",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-t266s",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "54ff90f1-d0b8-11e9-978c-0a445740e986",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "927f2a33-8b87-455d-9a89-7c030aa4fcf2",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:35Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:41Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:35Z"
      }
    ],
    "hostIP": "172.17.0.23",
    "podIP": "10.78.42.110",
    "startTime": "2019-09-06T15:11:35Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-06T15:11:39Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://31ab02dda3e57412"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0906 15:11:45.154031    5527 utils.go:416] POD #2/20: {
  "metadata": {
    "name": "pdb-workload-bmgt5",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-bmgt5",
    "uid": "9dcdb415-d0b8-11e9-b3bc-0a445740e986",
    "resourceVersion": "3816",
    "creationTimestamp": "2019-09-06T15:11:35Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "9dc65a09-d0b8-11e9-b3bc-0a445740e986",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-t266s",
        "secret": {
          "secretName": "default-token-t266s",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-t266s",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "54ff90f1-d0b8-11e9-978c-0a445740e986",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "310d2184-6584-443c-83cf-1df6982bea38",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:35Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:41Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:35Z"
      }
    ],
    "hostIP": "172.17.0.18",
    "podIP": "10.202.74.28",
    "startTime": "2019-09-06T15:11:35Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-06T15:11:40Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://b137ec1b132a04ce"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0906 15:11:45.154199    5527 utils.go:416] POD #3/20: {
  "metadata": {
    "name": "pdb-workload-bzxqt",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-bzxqt",
    "uid": "9dcd97eb-d0b8-11e9-b3bc-0a445740e986",
    "resourceVersion": "3804",
    "creationTimestamp": "2019-09-06T15:11:35Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "9dc65a09-d0b8-11e9-b3bc-0a445740e986",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-t266s",
        "secret": {
          "secretName": "default-token-t266s",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-t266s",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "54ff90f1-d0b8-11e9-978c-0a445740e986",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "310d2184-6584-443c-83cf-1df6982bea38",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:35Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:41Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:35Z"
      }
    ],
    "hostIP": "172.17.0.18",
    "podIP": "10.206.214.232",
    "startTime": "2019-09-06T15:11:35Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-06T15:11:39Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://7c95463b3528976d"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0906 15:11:45.154362    5527 utils.go:416] POD #4/20: {
  "metadata": {
    "name": "pdb-workload-csr24",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-csr24",
    "uid": "9dc9102d-d0b8-11e9-b3bc-0a445740e986",
    "resourceVersion": "3779",
    "creationTimestamp": "2019-09-06T15:11:35Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "9dc65a09-d0b8-11e9-b3bc-0a445740e986",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-t266s",
        "secret": {
          "secretName": "default-token-t266s",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-t266s",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "54ff90f1-d0b8-11e9-978c-0a445740e986",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "927f2a33-8b87-455d-9a89-7c030aa4fcf2",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:35Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:41Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:35Z"
      }
    ],
    "hostIP": "172.17.0.23",
    "podIP": "10.183.109.152",
    "startTime": "2019-09-06T15:11:35Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-06T15:11:39Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://6659cb427942f37b"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0906 15:11:45.154490    5527 utils.go:416] POD #5/20: {
  "metadata": {
    "name": "pdb-workload-cwsx6",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-cwsx6",
    "uid": "9dcbb286-d0b8-11e9-b3bc-0a445740e986",
    "resourceVersion": "3782",
    "creationTimestamp": "2019-09-06T15:11:35Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "9dc65a09-d0b8-11e9-b3bc-0a445740e986",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-t266s",
        "secret": {
          "secretName": "default-token-t266s",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-t266s",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "54ff90f1-d0b8-11e9-978c-0a445740e986",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "927f2a33-8b87-455d-9a89-7c030aa4fcf2",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:35Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:41Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:35Z"
      }
    ],
    "hostIP": "172.17.0.23",
    "podIP": "10.107.50.185",
    "startTime": "2019-09-06T15:11:35Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-06T15:11:39Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://e6662125b7fe70a2"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0906 15:11:45.154610    5527 utils.go:416] POD #6/20: {
  "metadata": {
    "name": "pdb-workload-d9knp",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-d9knp",
    "uid": "9dc9dc0a-d0b8-11e9-b3bc-0a445740e986",
    "resourceVersion": "3770",
    "creationTimestamp": "2019-09-06T15:11:35Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "9dc65a09-d0b8-11e9-b3bc-0a445740e986",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-t266s",
        "secret": {
          "secretName": "default-token-t266s",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-t266s",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "54ff90f1-d0b8-11e9-978c-0a445740e986",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "927f2a33-8b87-455d-9a89-7c030aa4fcf2",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:35Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:41Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:35Z"
      }
    ],
    "hostIP": "172.17.0.23",
    "podIP": "10.239.96.124",
    "startTime": "2019-09-06T15:11:35Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-06T15:11:38Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://733660889ad6ab4e"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0906 15:11:45.154717    5527 utils.go:416] POD #7/20: {
  "metadata": {
    "name": "pdb-workload-fzkkf",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-fzkkf",
    "uid": "9dc9c804-d0b8-11e9-b3bc-0a445740e986",
    "resourceVersion": "3841",
    "creationTimestamp": "2019-09-06T15:11:35Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "9dc65a09-d0b8-11e9-b3bc-0a445740e986",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-t266s",
        "secret": {
          "secretName": "default-token-t266s",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-t266s",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "54ff90f1-d0b8-11e9-978c-0a445740e986",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "310d2184-6584-443c-83cf-1df6982bea38",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:35Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:42Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:35Z"
      }
    ],
    "hostIP": "172.17.0.18",
    "podIP": "10.16.218.162",
    "startTime": "2019-09-06T15:11:35Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-06T15:11:41Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://84965017a9a309e5"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0906 15:11:45.154838    5527 utils.go:416] POD #8/20: {
  "metadata": {
    "name": "pdb-workload-hkgss",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-hkgss",
    "uid": "9dd0f0e1-d0b8-11e9-b3bc-0a445740e986",
    "resourceVersion": "3838",
    "creationTimestamp": "2019-09-06T15:11:35Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "9dc65a09-d0b8-11e9-b3bc-0a445740e986",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-t266s",
        "secret": {
          "secretName": "default-token-t266s",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-t266s",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "54ff90f1-d0b8-11e9-978c-0a445740e986",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "310d2184-6584-443c-83cf-1df6982bea38",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:35Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:41Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:35Z"
      }
    ],
    "hostIP": "172.17.0.18",
    "podIP": "10.24.253.172",
    "startTime": "2019-09-06T15:11:35Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-06T15:11:39Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://874196afe775dd5e"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0906 15:11:45.154958    5527 utils.go:416] POD #9/20: {
  "metadata": {
    "name": "pdb-workload-jq5l8",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-jq5l8",
    "uid": "9dcda9db-d0b8-11e9-b3bc-0a445740e986",
    "resourceVersion": "3773",
    "creationTimestamp": "2019-09-06T15:11:35Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "9dc65a09-d0b8-11e9-b3bc-0a445740e986",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-t266s",
        "secret": {
          "secretName": "default-token-t266s",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-t266s",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "54ff90f1-d0b8-11e9-978c-0a445740e986",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "927f2a33-8b87-455d-9a89-7c030aa4fcf2",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:35Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:41Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:35Z"
      }
    ],
    "hostIP": "172.17.0.23",
    "podIP": "10.188.188.167",
    "startTime": "2019-09-06T15:11:35Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-06T15:11:40Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://4f58ac0b6009aa49"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0906 15:11:45.155088    5527 utils.go:416] POD #10/20: {
  "metadata": {
    "name": "pdb-workload-jvzrv",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-jvzrv",
    "uid": "9dd1759c-d0b8-11e9-b3bc-0a445740e986",
    "resourceVersion": "3827",
    "creationTimestamp": "2019-09-06T15:11:35Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "9dc65a09-d0b8-11e9-b3bc-0a445740e986",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-t266s",
        "secret": {
          "secretName": "default-token-t266s",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-t266s",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "54ff90f1-d0b8-11e9-978c-0a445740e986",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "310d2184-6584-443c-83cf-1df6982bea38",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:35Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:41Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:35Z"
      }
    ],
    "hostIP": "172.17.0.18",
    "podIP": "10.83.195.238",
    "startTime": "2019-09-06T15:11:35Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-06T15:11:41Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://3e1150987debfa17"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0906 15:11:45.155225    5527 utils.go:416] POD #11/20: {
  "metadata": {
    "name": "pdb-workload-lf6xd",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-lf6xd",
    "uid": "9dcb7029-d0b8-11e9-b3bc-0a445740e986",
    "resourceVersion": "3811",
    "creationTimestamp": "2019-09-06T15:11:35Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "9dc65a09-d0b8-11e9-b3bc-0a445740e986",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-t266s",
        "secret": {
          "secretName": "default-token-t266s",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-t266s",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "54ff90f1-d0b8-11e9-978c-0a445740e986",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "310d2184-6584-443c-83cf-1df6982bea38",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:35Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:41Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:35Z"
      }
    ],
    "hostIP": "172.17.0.18",
    "podIP": "10.197.89.194",
    "startTime": "2019-09-06T15:11:35Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-06T15:11:38Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://1ed8492024500655"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0906 15:11:45.155374    5527 utils.go:416] POD #12/20: {
  "metadata": {
    "name": "pdb-workload-qpcqf",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-qpcqf",
    "uid": "9dcde4a1-d0b8-11e9-b3bc-0a445740e986",
    "resourceVersion": "3820",
    "creationTimestamp": "2019-09-06T15:11:35Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "9dc65a09-d0b8-11e9-b3bc-0a445740e986",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-t266s",
        "secret": {
          "secretName": "default-token-t266s",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-t266s",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "54ff90f1-d0b8-11e9-978c-0a445740e986",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "310d2184-6584-443c-83cf-1df6982bea38",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:35Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:41Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:35Z"
      }
    ],
    "hostIP": "172.17.0.18",
    "podIP": "10.61.205.104",
    "startTime": "2019-09-06T15:11:35Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-06T15:11:40Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://1ab2d89934fe92bd"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0906 15:11:45.155505    5527 utils.go:416] POD #13/20: {
  "metadata": {
    "name": "pdb-workload-rv85j",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-rv85j",
    "uid": "9dcbaeef-d0b8-11e9-b3bc-0a445740e986",
    "resourceVersion": "3807",
    "creationTimestamp": "2019-09-06T15:11:35Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "9dc65a09-d0b8-11e9-b3bc-0a445740e986",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-t266s",
        "secret": {
          "secretName": "default-token-t266s",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-t266s",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "54ff90f1-d0b8-11e9-978c-0a445740e986",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "310d2184-6584-443c-83cf-1df6982bea38",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:35Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:41Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:35Z"
      }
    ],
    "hostIP": "172.17.0.18",
    "podIP": "10.206.128.173",
    "startTime": "2019-09-06T15:11:35Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-06T15:11:41Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://68d2b560e2b67fc7"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0906 15:11:45.155647    5527 utils.go:416] POD #14/20: {
  "metadata": {
    "name": "pdb-workload-t968g",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-t968g",
    "uid": "9dcdd176-d0b8-11e9-b3bc-0a445740e986",
    "resourceVersion": "3800",
    "creationTimestamp": "2019-09-06T15:11:35Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "9dc65a09-d0b8-11e9-b3bc-0a445740e986",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-t266s",
        "secret": {
          "secretName": "default-token-t266s",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-t266s",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "54ff90f1-d0b8-11e9-978c-0a445740e986",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "927f2a33-8b87-455d-9a89-7c030aa4fcf2",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:35Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:41Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:35Z"
      }
    ],
    "hostIP": "172.17.0.23",
    "podIP": "10.103.218.85",
    "startTime": "2019-09-06T15:11:35Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-06T15:11:40Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://b117c42312eec019"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0906 15:11:45.155781    5527 utils.go:416] POD #15/20: {
  "metadata": {
    "name": "pdb-workload-tn4bp",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-tn4bp",
    "uid": "9dce0b68-d0b8-11e9-b3bc-0a445740e986",
    "resourceVersion": "3823",
    "creationTimestamp": "2019-09-06T15:11:35Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "9dc65a09-d0b8-11e9-b3bc-0a445740e986",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-t266s",
        "secret": {
          "secretName": "default-token-t266s",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-t266s",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "54ff90f1-d0b8-11e9-978c-0a445740e986",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "310d2184-6584-443c-83cf-1df6982bea38",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:35Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:41Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:35Z"
      }
    ],
    "hostIP": "172.17.0.18",
    "podIP": "10.236.24.225",
    "startTime": "2019-09-06T15:11:35Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-06T15:11:39Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://41e63de3e86f5c12"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0906 15:11:45.155912    5527 utils.go:416] POD #16/20: {
  "metadata": {
    "name": "pdb-workload-w4kh2",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-w4kh2",
    "uid": "9dd1592e-d0b8-11e9-b3bc-0a445740e986",
    "resourceVersion": "3795",
    "creationTimestamp": "2019-09-06T15:11:35Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "9dc65a09-d0b8-11e9-b3bc-0a445740e986",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-t266s",
        "secret": {
          "secretName": "default-token-t266s",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-t266s",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "54ff90f1-d0b8-11e9-978c-0a445740e986",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "927f2a33-8b87-455d-9a89-7c030aa4fcf2",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:35Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:41Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:35Z"
      }
    ],
    "hostIP": "172.17.0.23",
    "podIP": "10.142.223.35",
    "startTime": "2019-09-06T15:11:35Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-06T15:11:39Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://d279a033e940fab0"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0906 15:11:45.156065    5527 utils.go:416] POD #17/20: {
  "metadata": {
    "name": "pdb-workload-wfsmn",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-wfsmn",
    "uid": "9dd14795-d0b8-11e9-b3bc-0a445740e986",
    "resourceVersion": "3831",
    "creationTimestamp": "2019-09-06T15:11:35Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "9dc65a09-d0b8-11e9-b3bc-0a445740e986",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-t266s",
        "secret": {
          "secretName": "default-token-t266s",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-t266s",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "54ff90f1-d0b8-11e9-978c-0a445740e986",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "310d2184-6584-443c-83cf-1df6982bea38",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:35Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:41Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:35Z"
      }
    ],
    "hostIP": "172.17.0.18",
    "podIP": "10.220.235.32",
    "startTime": "2019-09-06T15:11:35Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-06T15:11:40Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://36b3b7994c4f0845"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0906 15:11:45.156207    5527 utils.go:416] POD #18/20: {
  "metadata": {
    "name": "pdb-workload-zs4hj",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-zs4hj",
    "uid": "9dcdc610-d0b8-11e9-b3bc-0a445740e986",
    "resourceVersion": "3776",
    "creationTimestamp": "2019-09-06T15:11:35Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "9dc65a09-d0b8-11e9-b3bc-0a445740e986",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-t266s",
        "secret": {
          "secretName": "default-token-t266s",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-t266s",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "54ff90f1-d0b8-11e9-978c-0a445740e986",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "927f2a33-8b87-455d-9a89-7c030aa4fcf2",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:35Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:41Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:35Z"
      }
    ],
    "hostIP": "172.17.0.23",
    "podIP": "10.28.179.7",
    "startTime": "2019-09-06T15:11:35Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-06T15:11:40Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://3d35c91aed42ae00"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0906 15:11:45.156365    5527 utils.go:416] POD #19/20: {
  "metadata": {
    "name": "pdb-workload-zvpdf",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-zvpdf",
    "uid": "9dd1a952-d0b8-11e9-b3bc-0a445740e986",
    "resourceVersion": "3785",
    "creationTimestamp": "2019-09-06T15:11:35Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "9dc65a09-d0b8-11e9-b3bc-0a445740e986",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-t266s",
        "secret": {
          "secretName": "default-token-t266s",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-t266s",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "54ff90f1-d0b8-11e9-978c-0a445740e986",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "927f2a33-8b87-455d-9a89-7c030aa4fcf2",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:35Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:41Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:11:35Z"
      }
    ],
    "hostIP": "172.17.0.23",
    "podIP": "10.218.46.158",
    "startTime": "2019-09-06T15:11:35Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-06T15:11:39Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://8be426a8c14c7d52"
      }
    ],
    "qosClass": "Burstable"
  }
}
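All twenty pods share the same spec shape: one busybox container with CPU and memory requests but no limits, which is why every status above reports "qosClass": "Burstable". The following is a simplified, self-contained sketch of the documented Kubernetes QoS classification rule, not the actual upstream helper; the example pod mirrors the "work" container above:

// qos.go: simplified sketch of the documented QoS rule, assuming
// BestEffort when no container sets requests or limits, Guaranteed
// when every container sets limits equal to requests for both cpu
// and memory, and Burstable otherwise. (The real kubelet helper
// also handles init containers and request defaulting.)
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func qosClass(pod *corev1.Pod) corev1.PodQOSClass {
	hasAny := false
	allGuaranteed := true
	for _, c := range pod.Spec.Containers {
		if len(c.Resources.Requests) > 0 || len(c.Resources.Limits) > 0 {
			hasAny = true
		}
		for _, res := range []corev1.ResourceName{corev1.ResourceCPU, corev1.ResourceMemory} {
			req, hasReq := c.Resources.Requests[res]
			lim, hasLim := c.Resources.Limits[res]
			if !hasReq || !hasLim || req.Cmp(lim) != 0 {
				allGuaranteed = false
			}
		}
	}
	switch {
	case !hasAny:
		return corev1.PodQOSBestEffort
	case allGuaranteed:
		return corev1.PodQOSGuaranteed
	default:
		return corev1.PodQOSBurstable
	}
}

func main() {
	// Same resources as the "work" container: requests set, limits unset.
	pod := &corev1.Pod{Spec: corev1.PodSpec{Containers: []corev1.Container{{
		Name: "work",
		Resources: corev1.ResourceRequirements{
			Requests: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("50m"),
				corev1.ResourceMemory: resource.MustParse("50Mi"),
			},
		},
	}}}}
	fmt.Println(qosClass(pod)) // prints: Burstable
}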
STEP: Delete machine to trigger node draining
STEP: Observing and verifying node draining
E0906 15:11:45.165841    5527 utils.go:451] Node "310d2184-6584-443c-83cf-1df6982bea38" is expected to be marked as unschedulable, but it is not
I0906 15:11:50.170823    5527 utils.go:455] [remaining 14m55s] Node "310d2184-6584-443c-83cf-1df6982bea38" is marked unschedulable as expected
I0906 15:11:50.177921    5527 utils.go:474] [remaining 14m55s] Have 9 pods scheduled to node "310d2184-6584-443c-83cf-1df6982bea38"
I0906 15:11:50.179578    5527 utils.go:490] [remaining 14m55s] RC ReadyReplicas: 20, Replicas: 20
I0906 15:11:50.179598    5527 utils.go:500] [remaining 14m55s] Expecting at most 2 pods to be scheduled to drained node "310d2184-6584-443c-83cf-1df6982bea38", got 9
I0906 15:11:55.177703    5527 utils.go:455] [remaining 14m50s] Node "310d2184-6584-443c-83cf-1df6982bea38" is marked unschedulable as expected
I0906 15:11:55.191140    5527 utils.go:474] [remaining 14m50s] Have 8 pods scheduled to node "310d2184-6584-443c-83cf-1df6982bea38"
I0906 15:11:55.195667    5527 utils.go:490] [remaining 14m50s] RC ReadyReplicas: 20, Replicas: 20
I0906 15:11:55.195696    5527 utils.go:500] [remaining 14m50s] Expecting at most 2 pods to be scheduled to drained node "310d2184-6584-443c-83cf-1df6982bea38", got 8
I0906 15:12:00.170299    5527 utils.go:455] [remaining 14m45s] Node "310d2184-6584-443c-83cf-1df6982bea38" is marked unschedulable as expected
I0906 15:12:00.177185    5527 utils.go:474] [remaining 14m45s] Have 7 pods scheduled to node "310d2184-6584-443c-83cf-1df6982bea38"
I0906 15:12:00.178921    5527 utils.go:490] [remaining 14m45s] RC ReadyReplicas: 20, Replicas: 20
I0906 15:12:00.178945    5527 utils.go:500] [remaining 14m45s] Expecting at most 2 pods to be scheduled to drained node "310d2184-6584-443c-83cf-1df6982bea38", got 7
I0906 15:12:05.169999    5527 utils.go:455] [remaining 14m40s] Node "310d2184-6584-443c-83cf-1df6982bea38" is marked unschedulable as expected
I0906 15:12:05.177028    5527 utils.go:474] [remaining 14m40s] Have 6 pods scheduled to node "310d2184-6584-443c-83cf-1df6982bea38"
I0906 15:12:05.179833    5527 utils.go:490] [remaining 14m40s] RC ReadyReplicas: 20, Replicas: 20
I0906 15:12:05.179861    5527 utils.go:500] [remaining 14m40s] Expecting at most 2 pods to be scheduled to drained node "310d2184-6584-443c-83cf-1df6982bea38", got 6
I0906 15:12:10.170902    5527 utils.go:455] [remaining 14m35s] Node "310d2184-6584-443c-83cf-1df6982bea38" is marked unschedulable as expected
I0906 15:12:10.177679    5527 utils.go:474] [remaining 14m35s] Have 5 pods scheduled to node "310d2184-6584-443c-83cf-1df6982bea38"
I0906 15:12:10.179435    5527 utils.go:490] [remaining 14m35s] RC ReadyReplicas: 20, Replicas: 20
I0906 15:12:10.179487    5527 utils.go:500] [remaining 14m35s] Expecting at most 2 pods to be scheduled to drained node "310d2184-6584-443c-83cf-1df6982bea38", got 5
I0906 15:12:15.169974    5527 utils.go:455] [remaining 14m30s] Node "310d2184-6584-443c-83cf-1df6982bea38" is marked unschedulable as expected
I0906 15:12:15.177332    5527 utils.go:474] [remaining 14m30s] Have 4 pods scheduled to node "310d2184-6584-443c-83cf-1df6982bea38"
I0906 15:12:15.178891    5527 utils.go:490] [remaining 14m30s] RC ReadyReplicas: 20, Replicas: 20
I0906 15:12:15.178918    5527 utils.go:500] [remaining 14m30s] Expecting at most 2 pods to be scheduled to drained node "310d2184-6584-443c-83cf-1df6982bea38", got 4
I0906 15:12:20.171174    5527 utils.go:455] [remaining 14m25s] Node "310d2184-6584-443c-83cf-1df6982bea38" is marked unschedulable as expected
I0906 15:12:20.177183    5527 utils.go:474] [remaining 14m25s] Have 3 pods scheduled to node "310d2184-6584-443c-83cf-1df6982bea38"
I0906 15:12:20.178915    5527 utils.go:490] [remaining 14m25s] RC ReadyReplicas: 20, Replicas: 20
I0906 15:12:20.178944    5527 utils.go:500] [remaining 14m25s] Expecting at most 2 pods to be scheduled to drained node "310d2184-6584-443c-83cf-1df6982bea38", got 3
I0906 15:12:25.170112    5527 utils.go:455] [remaining 14m20s] Node "310d2184-6584-443c-83cf-1df6982bea38" is marked unschedulable as expected
I0906 15:12:25.176608    5527 utils.go:474] [remaining 14m20s] Have 2 pods scheduled to node "310d2184-6584-443c-83cf-1df6982bea38"
I0906 15:12:25.178235    5527 utils.go:490] [remaining 14m20s] RC ReadyReplicas: 20, Replicas: 20
I0906 15:12:25.178259    5527 utils.go:504] [remaining 14m20s] Expected result: all pods from the RC up to last one or two got scheduled to a different node while respecting PDB
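
Each [remaining ...] line above is one tick of a poll: the node must stay cordoned while the count of pods on it falls, and the loop succeeds once at most two pods remain, since eviction of the last one or two is paced by the PodDisruptionBudget. A minimal sketch of such a loop, assuming a client-go clientset (the real utils.go helpers may differ):

package e2e

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForDrain polls until the node is cordoned and at most maxPods pods
// are still scheduled to it; eviction of the remainder is gated by the PDB.
func waitForDrain(c kubernetes.Interface, nodeName string, maxPods int) error {
	return wait.PollImmediate(5*time.Second, 15*time.Minute, func() (bool, error) {
		node, err := c.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		if !node.Spec.Unschedulable {
			return false, nil // not marked unschedulable yet, keep polling
		}
		pods, err := c.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
			FieldSelector: "spec.nodeName=" + nodeName,
		})
		if err != nil {
			return false, err
		}
		fmt.Printf("Have %d pods scheduled to node %q\n", len(pods.Items), nodeName)
		return len(pods.Items) <= maxPods, nil
	})
}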
STEP: Validating the machine is deleted
E0906 15:12:25.179998    5527 infra.go:454] Machine "machine1" not yet deleted
E0906 15:12:30.182527    5527 infra.go:454] Machine "machine1" not yet deleted
I0906 15:12:35.182231    5527 infra.go:463] Machine "machine1" successfully deleted
STEP: Validate underlying node corresponding to machine1 is removed as well
I0906 15:12:35.183733    5527 utils.go:530] [15m0s remaining] Node "310d2184-6584-443c-83cf-1df6982bea38" successfully deleted
STEP: Delete PDB
STEP: Delete machine2
STEP: waiting for cluster to get back to original size. Final size should be 5 nodes
I0906 15:12:35.191084    5527 utils.go:239] [remaining 15m0s] Cluster size expected to be 5 nodes
I0906 15:12:35.197497    5527 utils.go:99] MachineSet "kubemark-actuator-testing-machineset" replicas 1. Ready: 1, available 1
I0906 15:12:35.197522    5527 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-blue" replicas 1. Ready: 1, available 1
I0906 15:12:35.197532    5527 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-green" replicas 1. Ready: 1, available 1
I0906 15:12:35.197541    5527 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-red" replicas 1. Ready: 1, available 1
I0906 15:12:35.201850    5527 utils.go:231] Node "6ed3bc5e-d85d-4e5c-bce4-61d11ef633ab". Ready: true. Unschedulable: false
I0906 15:12:35.201871    5527 utils.go:231] Node "8d76d38d-5446-4aef-802c-ad0fcfdb4546". Ready: true. Unschedulable: false
I0906 15:12:35.201881    5527 utils.go:231] Node "927f2a33-8b87-455d-9a89-7c030aa4fcf2". Ready: true. Unschedulable: true
I0906 15:12:35.201889    5527 utils.go:231] Node "b3408843-b44c-4857-ab9d-3b13ab158aea". Ready: true. Unschedulable: false
I0906 15:12:35.201897    5527 utils.go:231] Node "c81bafaa-7edf-4fb4-b5c9-b78f1548066b". Ready: true. Unschedulable: false
I0906 15:12:35.201909    5527 utils.go:231] Node "minikube". Ready: true. Unschedulable: false
I0906 15:12:35.205873    5527 utils.go:87] Cluster size is 6 nodes
I0906 15:12:40.206153    5527 utils.go:239] [remaining 14m55s] Cluster size expected to be 5 nodes
I0906 15:12:40.209599    5527 utils.go:99] MachineSet "kubemark-actuator-testing-machineset" replicas 1. Ready: 1, available 1
I0906 15:12:40.209627    5527 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-blue" replicas 1. Ready: 1, available 1
I0906 15:12:40.209637    5527 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-green" replicas 1. Ready: 1, available 1
I0906 15:12:40.209646    5527 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-red" replicas 1. Ready: 1, available 1
I0906 15:12:40.212928    5527 utils.go:231] Node "6ed3bc5e-d85d-4e5c-bce4-61d11ef633ab". Ready: true. Unschedulable: false
I0906 15:12:40.212949    5527 utils.go:231] Node "8d76d38d-5446-4aef-802c-ad0fcfdb4546". Ready: true. Unschedulable: false
I0906 15:12:40.212955    5527 utils.go:231] Node "927f2a33-8b87-455d-9a89-7c030aa4fcf2". Ready: true. Unschedulable: true
I0906 15:12:40.212963    5527 utils.go:231] Node "b3408843-b44c-4857-ab9d-3b13ab158aea". Ready: true. Unschedulable: false
I0906 15:12:40.212973    5527 utils.go:231] Node "c81bafaa-7edf-4fb4-b5c9-b78f1548066b". Ready: true. Unschedulable: false
I0906 15:12:40.212981    5527 utils.go:231] Node "minikube". Ready: true. Unschedulable: false
I0906 15:12:40.216851    5527 utils.go:87] Cluster size is 6 nodes
I0906 15:12:45.206118    5527 utils.go:239] [remaining 14m50s] Cluster size expected to be 5 nodes
I0906 15:12:45.209024    5527 utils.go:99] MachineSet "kubemark-actuator-testing-machineset" replicas 1. Ready: 1, available 1
I0906 15:12:45.209051    5527 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-blue" replicas 1. Ready: 1, available 1
I0906 15:12:45.209061    5527 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-green" replicas 1. Ready: 1, available 1
I0906 15:12:45.209070    5527 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-red" replicas 1. Ready: 1, available 1
I0906 15:12:45.212171    5527 utils.go:231] Node "6ed3bc5e-d85d-4e5c-bce4-61d11ef633ab". Ready: true. Unschedulable: false
I0906 15:12:45.212193    5527 utils.go:231] Node "8d76d38d-5446-4aef-802c-ad0fcfdb4546". Ready: true. Unschedulable: false
I0906 15:12:45.212203    5527 utils.go:231] Node "927f2a33-8b87-455d-9a89-7c030aa4fcf2". Ready: true. Unschedulable: true
I0906 15:12:45.212212    5527 utils.go:231] Node "b3408843-b44c-4857-ab9d-3b13ab158aea". Ready: true. Unschedulable: false
I0906 15:12:45.212220    5527 utils.go:231] Node "c81bafaa-7edf-4fb4-b5c9-b78f1548066b". Ready: true. Unschedulable: false
I0906 15:12:45.212228    5527 utils.go:231] Node "minikube". Ready: true. Unschedulable: false
I0906 15:12:45.216040    5527 utils.go:87] Cluster size is 6 nodes
I0906 15:12:50.206134    5527 utils.go:239] [remaining 14m45s] Cluster size expected to be 5 nodes
I0906 15:12:50.209012    5527 utils.go:99] MachineSet "kubemark-actuator-testing-machineset" replicas 1. Ready: 1, available 1
I0906 15:12:50.209034    5527 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-blue" replicas 1. Ready: 1, available 1
I0906 15:12:50.209040    5527 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-green" replicas 1. Ready: 1, available 1
I0906 15:12:50.209046    5527 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-red" replicas 1. Ready: 1, available 1
I0906 15:12:50.211890    5527 utils.go:231] Node "6ed3bc5e-d85d-4e5c-bce4-61d11ef633ab". Ready: true. Unschedulable: false
I0906 15:12:50.211910    5527 utils.go:231] Node "8d76d38d-5446-4aef-802c-ad0fcfdb4546". Ready: true. Unschedulable: false
I0906 15:12:50.211916    5527 utils.go:231] Node "927f2a33-8b87-455d-9a89-7c030aa4fcf2". Ready: true. Unschedulable: true
I0906 15:12:50.211921    5527 utils.go:231] Node "b3408843-b44c-4857-ab9d-3b13ab158aea". Ready: true. Unschedulable: false
I0906 15:12:50.211929    5527 utils.go:231] Node "c81bafaa-7edf-4fb4-b5c9-b78f1548066b". Ready: true. Unschedulable: false
I0906 15:12:50.211937    5527 utils.go:231] Node "minikube". Ready: true. Unschedulable: false
I0906 15:12:50.218996    5527 utils.go:87] Cluster size is 6 nodes
I0906 15:12:55.206486    5527 utils.go:239] [remaining 14m40s] Cluster size expected to be 5 nodes
I0906 15:12:55.209887    5527 utils.go:99] MachineSet "kubemark-actuator-testing-machineset" replicas 1. Ready: 1, available 1
I0906 15:12:55.209921    5527 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-blue" replicas 1. Ready: 1, available 1
I0906 15:12:55.209933    5527 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-green" replicas 1. Ready: 1, available 1
I0906 15:12:55.209944    5527 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-red" replicas 1. Ready: 1, available 1
I0906 15:12:55.212888    5527 utils.go:231] Node "6ed3bc5e-d85d-4e5c-bce4-61d11ef633ab". Ready: true. Unschedulable: false
I0906 15:12:55.212917    5527 utils.go:231] Node "8d76d38d-5446-4aef-802c-ad0fcfdb4546". Ready: true. Unschedulable: false
I0906 15:12:55.212928    5527 utils.go:231] Node "b3408843-b44c-4857-ab9d-3b13ab158aea". Ready: true. Unschedulable: false
I0906 15:12:55.212937    5527 utils.go:231] Node "c81bafaa-7edf-4fb4-b5c9-b78f1548066b". Ready: true. Unschedulable: false
I0906 15:12:55.212947    5527 utils.go:231] Node "minikube". Ready: true. Unschedulable: false
I0906 15:12:55.216206    5527 utils.go:87] Cluster size is 5 nodes
I0906 15:12:55.216237    5527 utils.go:257] waiting for all nodes to be ready
I0906 15:12:55.220294    5527 utils.go:262] waiting for all nodes to be schedulable
I0906 15:12:55.223696    5527 utils.go:290] [remaining 1m0s] Node "6ed3bc5e-d85d-4e5c-bce4-61d11ef633ab" is schedulable
I0906 15:12:55.223727    5527 utils.go:290] [remaining 1m0s] Node "8d76d38d-5446-4aef-802c-ad0fcfdb4546" is schedulable
I0906 15:12:55.223740    5527 utils.go:290] [remaining 1m0s] Node "b3408843-b44c-4857-ab9d-3b13ab158aea" is schedulable
I0906 15:12:55.223750    5527 utils.go:290] [remaining 1m0s] Node "c81bafaa-7edf-4fb4-b5c9-b78f1548066b" is schedulable
I0906 15:12:55.223761    5527 utils.go:290] [remaining 1m0s] Node "minikube" is schedulable
I0906 15:12:55.223770    5527 utils.go:267] waiting for each node to be backed by a machine
I0906 15:12:55.232480    5527 utils.go:47] [remaining 3m0s] Expecting the same number of machines and nodes, have 5 nodes and 5 machines
I0906 15:12:55.232512    5527 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-6pt7l" is linked to node "6ed3bc5e-d85d-4e5c-bce4-61d11ef633ab"
I0906 15:12:55.232527    5527 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-blue-hpgct" is linked to node "8d76d38d-5446-4aef-802c-ad0fcfdb4546"
I0906 15:12:55.232541    5527 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-green-scthk" is linked to node "b3408843-b44c-4857-ab9d-3b13ab158aea"
I0906 15:12:55.232555    5527 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-red-s4l9g" is linked to node "c81bafaa-7edf-4fb4-b5c9-b78f1548066b"
I0906 15:12:55.232569    5527 utils.go:70] [remaining 3m0s] Machine "minikube-static-machine" is linked to node "minikube"
I0906 15:12:55.242816    5527 utils.go:378] [15m0s remaining] Found 0 nodes with map[node-role.kubernetes.io/worker: node-draining-test:54ff90f1-d0b8-11e9-978c-0a445740e986] label, as expected
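
The final checks above walk every machine and assert that its status.nodeRef points at a live node. Stripped of the client plumbing, that assertion is a set lookup; a rough sketch using the machine-api types (the import path shown is the current openshift/api one, while the 2019 suite vendored the cluster-api equivalents):

package e2e

import (
	"fmt"

	machinev1 "github.com/openshift/api/machine/v1beta1"
	corev1 "k8s.io/api/core/v1"
)

// verifyMachinesBackNodes checks that each machine's NodeRef names an
// existing node, matching the "Machine ... is linked to node ..." lines above.
func verifyMachinesBackNodes(machines []machinev1.Machine, nodes []corev1.Node) error {
	nodeNames := make(map[string]bool, len(nodes))
	for _, n := range nodes {
		nodeNames[n.Name] = true
	}
	for _, m := range machines {
		if m.Status.NodeRef == nil || !nodeNames[m.Status.NodeRef.Name] {
			return fmt.Errorf("machine %q is not linked to an existing node", m.Name)
		}
	}
	return nil
}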

• [SLOW TEST:85.174 seconds]
[Feature:Machines] Managed cluster should
/tmp/tmp.3XEIfW31vl/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/infra/infra.go:126
  drain node before removing machine resource
  /tmp/tmp.3XEIfW31vl/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/infra/infra.go:346
------------------------------
[Feature:Machines] Managed cluster should 
  reject invalid machinesets
  /tmp/tmp.3XEIfW31vl/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/infra/infra.go:487
I0906 15:12:55.242925    5527 framework.go:406] >>> kubeConfig: /root/.kube/config
STEP: Creating invalid machineset
STEP: Waiting for ReconcileError MachineSet event
I0906 15:12:55.327608    5527 infra.go:506] Fetching ReconcileError MachineSet invalid-machineset event
I0906 15:12:55.327648    5527 infra.go:512] Found ReconcileError event for "invalid-machineset" machine set with the following message: "invalid-machineset" machineset validation failed: spec.template.metadata.labels: Invalid value: map[string]string{"big-kitty":"i-am-bit-kitty"}: `selector` does not match template `labels`
STEP: Verify no machines from "invalid-machineset" machineset were created
I0906 15:12:55.330968    5527 infra.go:528] Have 0 machines generated from "invalid-machineset" machineset
STEP: Deleting invalid machineset
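The ReconcileError above is the expected outcome: the MachineSet is rejected because its spec.selector does not match spec.template.metadata.labels, so no machines are ever generated from it. The core of that validation reduces to a plain label-selector match, roughly:

package e2e

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
)

// validateSelector reproduces the failing check: the machineset's selector
// must select the labels carried by its own machine template.
func validateSelector(selector *metav1.LabelSelector, templateLabels map[string]string) error {
	sel, err := metav1.LabelSelectorAsSelector(selector)
	if err != nil {
		return err
	}
	if !sel.Matches(labels.Set(templateLabels)) {
		return fmt.Errorf("`selector` does not match template `labels`")
	}
	return nil
}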
•
Ran 7 of 16 Specs in 202.323 seconds
SUCCESS! -- 7 Passed | 0 Failed | 0 Pending | 9 Skipped
--- PASS: TestE2E (202.32s)
PASS
ok  	github.com/openshift/cluster-api-actuator-pkg/pkg/e2e	202.381s
make[1]: Leaving directory `/tmp/tmp.3XEIfW31vl/src/github.com/openshift/cluster-api-actuator-pkg'
+ set +o xtrace
########## FINISHED STAGE: SUCCESS: RUN E2E TESTS [00h 04m 28s] ##########
[PostBuildScript] - Executing post build scripts.
[workspace] $ /bin/bash /tmp/jenkins6506429167453858818.sh
########## STARTING STAGE: DOWNLOAD ARTIFACTS FROM THE REMOTE HOST ##########
+ [[ -s /var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/activate ]]
+ source /var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed
++ export PATH=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/.config
+ trap 'exit 0' EXIT
++ pwd
+ ARTIFACT_DIR=/var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/artifacts/gathered
+ rm -rf /var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/artifacts/gathered
+ mkdir -p /var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/artifacts/gathered
+ tree /var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/artifacts/gathered
/var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/artifacts/gathered

0 directories, 0 files
+ exit 0
[workspace] $ /bin/bash /tmp/jenkins5975698528833924011.sh
########## STARTING STAGE: GENERATE ARTIFACTS FROM THE REMOTE HOST ##########
+ [[ -s /var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/activate ]]
+ source /var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed
++ export PATH=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/.config
+ trap 'exit 0' EXIT
++ pwd
+ ARTIFACT_DIR=/var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/artifacts/generated
+ rm -rf /var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/artifacts/generated
+ mkdir /var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/artifacts/generated
+ ssh -F /var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo docker version && sudo docker info && sudo docker images && sudo docker ps -a 2>&1'
  WARNING: You're not using the default seccomp profile
+ ssh -F /var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo cat /etc/sysconfig/docker /etc/sysconfig/docker-network /etc/sysconfig/docker-storage /etc/sysconfig/docker-storage-setup /etc/systemd/system/docker.service 2>&1'
+ true
+ ssh -F /var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo find /var/lib/docker/containers -name *.log | sudo xargs tail -vn +1 2>&1'
+ ssh -F /var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo ausearch -m AVC -m SELINUX_ERR -m USER_AVC 2>&1'
+ true
+ ssh -F /var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo df -T -h && sudo pvs && sudo vgs && sudo lvs && sudo findmnt --all 2>&1'
+ ssh -F /var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo yum list installed 2>&1'
+ ssh -F /var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo journalctl --dmesg --no-pager --all --lines=all 2>&1'
+ ssh -F /var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo journalctl _PID=1 --no-pager --all --lines=all 2>&1'
+ tree /var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/artifacts/generated
/var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/artifacts/generated
├── avc_denials.log
├── containers.log
├── dmesg.log
├── docker.config
├── docker.info
├── filesystem.info
├── installed_packages.log
└── pid1.journal

0 directories, 8 files
+ exit 0
[workspace] $ /bin/bash /tmp/jenkins7054677787600306698.sh
########## STARTING STAGE: FETCH SYSTEMD JOURNALS FROM THE REMOTE HOST ##########
+ [[ -s /var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/activate ]]
+ source /var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed
++ export PATH=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/.config
+ trap 'exit 0' EXIT
++ pwd
+ ARTIFACT_DIR=/var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/artifacts/journals
+ rm -rf /var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/artifacts/journals
+ mkdir /var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/artifacts/journals
+ ssh -F /var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit docker.service --no-pager --all --lines=all
+ ssh -F /var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit dnsmasq.service --no-pager --all --lines=all
+ ssh -F /var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit systemd-journald.service --no-pager --all --lines=all
+ tree /var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/artifacts/journals
/var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/artifacts/journals
├── dnsmasq.service
├── docker.service
└── systemd-journald.service

0 directories, 3 files
+ exit 0
[workspace] $ /bin/bash /tmp/jenkins2021010689578215985.sh
########## STARTING STAGE: ASSEMBLE GCS OUTPUT ##########
+ [[ -s /var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/activate ]]
+ source /var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed
++ export PATH=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/.config
+ trap 'exit 0' EXIT
+ mkdir -p gcs/artifacts gcs/artifacts/generated gcs/artifacts/journals gcs/artifacts/gathered
++ python -c 'import json; import urllib; print json.load(urllib.urlopen('\''https://ci.openshift.redhat.com/jenkins/job/pull-ci-openshift-machine-api-operator-master-e2e/716/api/json'\''))['\''result'\'']'
+ result=SUCCESS
+ cat
++ date +%s
+ cat /var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/builds/716/log
+ cp artifacts/generated/avc_denials.log artifacts/generated/containers.log artifacts/generated/dmesg.log artifacts/generated/docker.config artifacts/generated/docker.info artifacts/generated/filesystem.info artifacts/generated/installed_packages.log artifacts/generated/pid1.journal gcs/artifacts/generated/
+ cp artifacts/journals/dnsmasq.service artifacts/journals/docker.service artifacts/journals/systemd-journald.service gcs/artifacts/journals/
+ cp -r 'artifacts/gathered/*' gcs/artifacts/
cp: cannot stat ‘artifacts/gathered/*’: No such file or directory
++ export status=FAILURE
++ status=FAILURE
+ exit 0
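The cp failure above is a quoting slip: because 'artifacts/gathered/*' is single-quoted, the * reaches cp as a literal character instead of being expanded by the shell, and with the gathered directory empty there would have been nothing to match anyway. The stage still ends with exit 0 because of the trap 'exit 0' EXIT, which is why status=FAILURE never fails the build; the missing files only resurface later as the gcsupload warning about /data/gcs/*. For contrast, a small Go sketch of the same expansion done explicitly, where an unmatched pattern is simply an empty result rather than an error:

package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	// filepath.Glob expands the pattern itself; it never passes the
	// literal "*" through, and an empty directory yields an empty slice.
	matches, err := filepath.Glob("artifacts/gathered/*")
	if err != nil { // only possible for a malformed pattern
		panic(err)
	}
	if len(matches) == 0 {
		fmt.Println("nothing to copy, skipping") // the case cp tripped over
		return
	}
	for _, m := range matches {
		fmt.Println("would copy", m)
	}
}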
[workspace] $ /bin/bash /tmp/jenkins5167898857619189381.sh
########## STARTING STAGE: PUSH THE ARTIFACTS AND METADATA ##########
+ [[ -s /var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/activate ]]
+ source /var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed
++ export PATH=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/.config
++ mktemp
+ script=/tmp/tmp.O7bm8Z9vJ4
+ cat
+ chmod +x /tmp/tmp.O7bm8Z9vJ4
+ scp -F /var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/.config/origin-ci-tool/inventory/.ssh_config /tmp/tmp.O7bm8Z9vJ4 openshiftdevel:/tmp/tmp.O7bm8Z9vJ4
+ ssh -F /var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/.config/origin-ci-tool/inventory/.ssh_config -t openshiftdevel 'bash -l -c "timeout 300 /tmp/tmp.O7bm8Z9vJ4"'
+ cd /home/origin
+ trap 'exit 0' EXIT
+ [[ -n {"type":"presubmit","job":"pull-ci-openshift-machine-api-operator-master-e2e","buildid":"1169987167749935104","prowjobid":"2695ec19-d0b6-11e9-a06a-0a58ac108d5e","refs":{"org":"openshift","repo":"machine-api-operator","repo_link":"https://github.com/openshift/machine-api-operator","base_ref":"master","base_sha":"474e14e4965a8c5e6788417c851ccc7fad1acb3a","base_link":"https://github.com/openshift/machine-api-operator/commit/474e14e4965a8c5e6788417c851ccc7fad1acb3a","pulls":[{"number":389,"author":"sadasu","sha":"229c7ea627e98ef3b7c1927a25352d366fea7023","link":"https://github.com/openshift/machine-api-operator/pull/389","commit_link":"https://github.com/openshift/machine-api-operator/pull/389/commits/229c7ea627e98ef3b7c1927a25352d366fea7023","author_link":"https://github.com/sadasu"}]}} ]]
++ jq --compact-output '.buildid |= "716"'
+ JOB_SPEC='{"type":"presubmit","job":"pull-ci-openshift-machine-api-operator-master-e2e","buildid":"716","prowjobid":"2695ec19-d0b6-11e9-a06a-0a58ac108d5e","refs":{"org":"openshift","repo":"machine-api-operator","repo_link":"https://github.com/openshift/machine-api-operator","base_ref":"master","base_sha":"474e14e4965a8c5e6788417c851ccc7fad1acb3a","base_link":"https://github.com/openshift/machine-api-operator/commit/474e14e4965a8c5e6788417c851ccc7fad1acb3a","pulls":[{"number":389,"author":"sadasu","sha":"229c7ea627e98ef3b7c1927a25352d366fea7023","link":"https://github.com/openshift/machine-api-operator/pull/389","commit_link":"https://github.com/openshift/machine-api-operator/pull/389/commits/229c7ea627e98ef3b7c1927a25352d366fea7023","author_link":"https://github.com/sadasu"}]}}'
+ docker run -e 'JOB_SPEC={"type":"presubmit","job":"pull-ci-openshift-machine-api-operator-master-e2e","buildid":"716","prowjobid":"2695ec19-d0b6-11e9-a06a-0a58ac108d5e","refs":{"org":"openshift","repo":"machine-api-operator","repo_link":"https://github.com/openshift/machine-api-operator","base_ref":"master","base_sha":"474e14e4965a8c5e6788417c851ccc7fad1acb3a","base_link":"https://github.com/openshift/machine-api-operator/commit/474e14e4965a8c5e6788417c851ccc7fad1acb3a","pulls":[{"number":389,"author":"sadasu","sha":"229c7ea627e98ef3b7c1927a25352d366fea7023","link":"https://github.com/openshift/machine-api-operator/pull/389","commit_link":"https://github.com/openshift/machine-api-operator/pull/389/commits/229c7ea627e98ef3b7c1927a25352d366fea7023","author_link":"https://github.com/sadasu"}]}}' -v /data:/data:z registry.svc.ci.openshift.org/ci/gcsupload:latest --dry-run=false --gcs-path=gs://origin-ci-test --gcs-credentials-file=/data/credentials.json --path-strategy=single --default-org=openshift --default-repo=origin '/data/gcs/*'
Unable to find image 'registry.svc.ci.openshift.org/ci/gcsupload:latest' locally
Trying to pull repository registry.svc.ci.openshift.org/ci/gcsupload ... 
latest: Pulling from registry.svc.ci.openshift.org/ci/gcsupload
a073c86ecf9e: Already exists
cc3fc741b1a9: Already exists
822bed51ba40: Pulling fs layer
85cea451eec0: Pulling fs layer
85cea451eec0: Verifying Checksum
85cea451eec0: Download complete
822bed51ba40: Download complete
822bed51ba40: Pull complete
85cea451eec0: Pull complete
Digest: sha256:03aad50d7ec631ee07c12ac2ba679bd48c7781f7d5754f9e0dcc4e7260e35208
Status: Downloaded newer image for registry.svc.ci.openshift.org/ci/gcsupload:latest
{"component":"gcsupload","file":"prow/gcsupload/run.go:107","func":"k8s.io/test-infra/prow/gcsupload.Options.assembleTargets","level":"warning","msg":"Encountered error in resolving items to upload for /data/gcs/*: stat /data/gcs/*: no such file or directory","time":"2019-09-06T15:13:16Z"}
{"component":"gcsupload","dest":"pr-logs/directory/pull-ci-openshift-machine-api-operator-master-e2e/716.txt","file":"prow/pod-utils/gcs/upload.go:64","func":"k8s.io/test-infra/prow/pod-utils/gcs.upload","level":"info","msg":"Queued for upload","time":"2019-09-06T15:13:16Z"}
{"component":"gcsupload","dest":"pr-logs/directory/pull-ci-openshift-machine-api-operator-master-e2e/latest-build.txt","file":"prow/pod-utils/gcs/upload.go:64","func":"k8s.io/test-infra/prow/pod-utils/gcs.upload","level":"info","msg":"Queued for upload","time":"2019-09-06T15:13:16Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_machine-api-operator/389/pull-ci-openshift-machine-api-operator-master-e2e/latest-build.txt","file":"prow/pod-utils/gcs/upload.go:64","func":"k8s.io/test-infra/prow/pod-utils/gcs.upload","level":"info","msg":"Queued for upload","time":"2019-09-06T15:13:16Z"}
{"component":"gcsupload","dest":"pr-logs/directory/pull-ci-openshift-machine-api-operator-master-e2e/716.txt","file":"prow/pod-utils/gcs/upload.go:70","func":"k8s.io/test-infra/prow/pod-utils/gcs.upload.func1","level":"info","msg":"Finished upload","time":"2019-09-06T15:13:17Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_machine-api-operator/389/pull-ci-openshift-machine-api-operator-master-e2e/latest-build.txt","file":"prow/pod-utils/gcs/upload.go:70","func":"k8s.io/test-infra/prow/pod-utils/gcs.upload.func1","level":"info","msg":"Finished upload","time":"2019-09-06T15:13:17Z"}
{"component":"gcsupload","dest":"pr-logs/directory/pull-ci-openshift-machine-api-operator-master-e2e/latest-build.txt","file":"prow/pod-utils/gcs/upload.go:70","func":"k8s.io/test-infra/prow/pod-utils/gcs.upload.func1","level":"info","msg":"Finished upload","time":"2019-09-06T15:13:17Z"}
{"component":"gcsupload","file":"prow/gcsupload/run.go:65","func":"k8s.io/test-infra/prow/gcsupload.Options.Run","level":"info","msg":"Finished upload to GCS","time":"2019-09-06T15:13:17Z"}
+ exit 0
+ set +o xtrace
########## FINISHED STAGE: SUCCESS: PUSH THE ARTIFACTS AND METADATA [00h 00m 06s] ##########
[workspace] $ /bin/bash /tmp/jenkins2649458443145145662.sh
########## STARTING STAGE: DEPROVISION CLOUD RESOURCES ##########
+ [[ -s /var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/activate ]]
+ source /var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed
++ export PATH=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/.config
+ oct deprovision

PLAYBOOK: main.yml *************************************************************
4 plays in /var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml

PLAY [ensure we have the parameters necessary to deprovision virtual hosts] ****

TASK [ensure all required variables are set] ***********************************
task path: /var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml:9
skipping: [localhost] => (item=origin_ci_inventory_dir)  => {
    "changed": false, 
    "generated_timestamp": "2019-09-06 11:13:18.727101", 
    "item": "origin_ci_inventory_dir", 
    "skip_reason": "Conditional check failed", 
    "skipped": true
}
skipping: [localhost] => (item=origin_ci_aws_region)  => {
    "changed": false, 
    "generated_timestamp": "2019-09-06 11:13:18.729657", 
    "item": "origin_ci_aws_region", 
    "skip_reason": "Conditional check failed", 
    "skipped": true
}

PLAY [deprovision virtual hosts in EC2] ****************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [deprovision a virtual EC2 host] ******************************************
task path: /var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml:28
included: /var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml for localhost

TASK [update the SSH configuration to remove AWS EC2 specifics] ****************
task path: /var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:2
ok: [localhost] => {
    "changed": false, 
    "generated_timestamp": "2019-09-06 11:13:19.545389", 
    "msg": ""
}

TASK [rename EC2 instance for termination reaper] ******************************
task path: /var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:8
changed: [localhost] => {
    "changed": true, 
    "generated_timestamp": "2019-09-06 11:13:20.212821", 
    "msg": "Tags {'Name': 'oct-terminate'} created for resource i-06550787d42cc325e."
}

TASK [tear down the EC2 instance] **********************************************
task path: /var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:15
changed: [localhost] => {
    "changed": true, 
    "generated_timestamp": "2019-09-06 11:13:21.285786", 
    "instance_ids": [
        "i-06550787d42cc325e"
    ], 
    "instances": [
        {
            "ami_launch_index": "0", 
            "architecture": "x86_64", 
            "block_device_mapping": {
                "/dev/sda1": {
                    "delete_on_termination": true, 
                    "status": "attached", 
                    "volume_id": "vol-03d9d644224906960"
                }, 
                "/dev/sdb": {
                    "delete_on_termination": true, 
                    "status": "attached", 
                    "volume_id": "vol-0bbfd421d51201f8f"
                }
            }, 
            "dns_name": "ec2-52-200-5-193.compute-1.amazonaws.com", 
            "ebs_optimized": false, 
            "groups": {
                "sg-7e73221a": "default"
            }, 
            "hypervisor": "xen", 
            "id": "i-06550787d42cc325e", 
            "image_id": "ami-0b77b87a37c3e662c", 
            "instance_type": "m4.xlarge", 
            "kernel": null, 
            "key_name": "libra", 
            "launch_time": "2019-09-06T14:54:32.000Z", 
            "placement": "us-east-1c", 
            "private_dns_name": "ip-172-18-28-208.ec2.internal", 
            "private_ip": "172.18.28.208", 
            "public_dns_name": "ec2-52-200-5-193.compute-1.amazonaws.com", 
            "public_ip": "52.200.5.193", 
            "ramdisk": null, 
            "region": "us-east-1", 
            "root_device_name": "/dev/sda1", 
            "root_device_type": "ebs", 
            "state": "running", 
            "state_code": 16, 
            "tags": {
                "Name": "oct-terminate", 
                "openshift_etcd": "", 
                "openshift_master": "", 
                "openshift_node": ""
            }, 
            "tenancy": "default", 
            "virtualization_type": "hvm"
        }
    ], 
    "tagged_instances": []
}

TASK [remove the serialized host variables] ************************************
task path: /var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:22
changed: [localhost] => {
    "changed": true, 
    "generated_timestamp": "2019-09-06 11:13:21.523102", 
    "path": "/var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/.config/origin-ci-tool/inventory/host_vars/172.18.28.208.yml", 
    "state": "absent"
}

PLAY [deprovision virtual hosts locally managed by Vagrant] ********************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

PLAY [clean up local configuration for deprovisioned instances] ****************

TASK [remove inventory configuration directory] ********************************
task path: /var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml:61
changed: [localhost] => {
    "changed": true, 
    "generated_timestamp": "2019-09-06 11:13:22.014626", 
    "path": "/var/lib/jenkins/jobs/pull-ci-openshift-machine-api-operator-master-e2e/workspace/.config/origin-ci-tool/inventory", 
    "state": "absent"
}

PLAY RECAP *********************************************************************
localhost                  : ok=8    changed=4    unreachable=0    failed=0   

+ set +o xtrace
########## FINISHED STAGE: SUCCESS: DEPROVISION CLOUD RESOURCES [00h 00m 05s] ##########
Archiving artifacts
Recording test results
[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] done
Finished: SUCCESS