809 framework.go:406] >>> kubeConfig: /root/.kube/config
I0904 16:08:09.635926   30809 deloyment.go:58] Deployment "machine-api-operator" is available. Status: (replicas: 1, updated: 1, ready: 1, available: 1, unavailable: 0)
•
------------------------------
[Feature:Operators] Machine API operator deployment should 
  reconcile controllers deployment
  /data/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/operators/machine-api-operator.go:25
I0904 16:08:09.635978   30809 framework.go:406] >>> kubeConfig: /root/.kube/config
STEP: checking deployment "machine-api-controllers" is available
I0904 16:08:09.652880   30809 deloyment.go:58] Deployment "machine-api-controllers" is available. Status: (replicas: 1, updated: 1, ready: 1, available: 1, unavailable: 0)
STEP: deleting deployment "machine-api-controllers"
STEP: checking deployment "machine-api-controllers" is available again
E0904 16:08:09.660282   30809 deloyment.go:25] Error querying api for Deployment object "machine-api-controllers": deployments.apps "machine-api-controllers" not found, retrying...
E0904 16:08:10.664898   30809 deloyment.go:55] Deployment "machine-api-controllers" is not available. Status: (replicas: 1, updated: 1, ready: 0, available: 0, unavailable: 1)
I0904 16:08:11.668245   30809 deloyment.go:58] Deployment "machine-api-controllers" is available. Status: (replicas: 1, updated: 1, ready: 1, available: 1, unavailable: 0)
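The three lines above are the whole reconcile check: the test deletes the operator-owned Deployment, then polls until the operator has recreated it and it reports fully available again (the "not found" and "not available" errors are the expected retries). A minimal sketch of such a poll, assuming client-go; the helper name and the interval/timeout values are hypothetical, not the suite's actual deloyment.go code:

```go
package e2e

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForDeploymentAvailable polls until the named Deployment exists again
// and all of its replicas are updated, available, and none unavailable.
func waitForDeploymentAvailable(client kubernetes.Interface, namespace, name string) error {
	return wait.PollImmediate(time.Second, 5*time.Minute, func() (bool, error) {
		d, err := client.AppsV1().Deployments(namespace).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // "not found" right after deletion is expected; retry
		}
		if d.Spec.Replicas == nil {
			return false, nil
		}
		// Mirror the status fields printed in the log above.
		return d.Status.UpdatedReplicas == *d.Spec.Replicas &&
			d.Status.AvailableReplicas == *d.Spec.Replicas &&
			d.Status.UnavailableReplicas == 0, nil
	})
}
```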
•SSSSSS
Ran 7 of 16 Specs in 2.171 seconds
SUCCESS! -- 7 Passed | 0 Failed | 0 Pending | 9 Skipped
--- PASS: TestE2E (2.17s)
PASS
ok  	github.com/openshift/cluster-api-actuator-pkg/pkg/e2e	2.232s
NAMESPACE=kube-system hack/ci-integration.sh  -ginkgo.v -ginkgo.noColor=true -ginkgo.skip "Feature:Operators|TechPreview" -ginkgo.failFast -ginkgo.seed=1
=== RUN   TestE2E
Running Suite: Machine Suite
============================
Random Seed: 1
Will run 7 of 16 specs

SSSSSSSS
------------------------------
[Feature:Machines] Autoscaler should 
  scale up and down
  /data/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/autoscaler/autoscaler.go:234
I0904 16:08:14.919232   31343 framework.go:406] >>> kubeConfig: /root/.kube/config
I0904 16:08:14.924660   31343 framework.go:406] >>> kubeConfig: /root/.kube/config
I0904 16:08:14.947914   31343 framework.go:406] >>> kubeConfig: /root/.kube/config
STEP: Getting existing machinesets
STEP: Getting existing machines
STEP: Getting existing nodes
I0904 16:08:14.960758   31343 autoscaler.go:286] Have 4 existing machinesets
I0904 16:08:14.960781   31343 autoscaler.go:287] Have 5 existing machines
I0904 16:08:14.960788   31343 autoscaler.go:288] Have 5 existing nodes
STEP: Creating 3 transient machinesets
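The transient MachineSets ("e2e-336b0-w-0" through "-w-2") exist only for the duration of this spec and are deleted in cleanup. A sketch of their shape using the machine.openshift.io/v1beta1 types; the import path, the label key, and the omitted kubemark ProviderSpec are assumptions based on this 2019-era suite, not verified code:

```go
package e2e

import (
	mapiv1beta1 "github.com/openshift/cluster-api/pkg/apis/machine/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// transientMachineSet is a hypothetical constructor for sets like "e2e-336b0-w-0".
func transientMachineSet(namespace, name string, replicas int32) *mapiv1beta1.MachineSet {
	labels := map[string]string{"e2e.openshift.io/machineset": name} // hypothetical label key
	return &mapiv1beta1.MachineSet{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: namespace},
		Spec: mapiv1beta1.MachineSetSpec{
			Replicas: &replicas,
			Selector: metav1.LabelSelector{MatchLabels: labels},
			// Template is elided in this sketch; it must carry the same
			// labels (so created Machines match the selector) plus the
			// kubemark ProviderSpec used by this suite.
		},
	}
}
```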
STEP: [15m0s remaining] Waiting for nodes to be Ready in 3 transient machinesets
E0904 16:08:14.990278   31343 utils.go:157] Machine "e2e-336b0-w-0-vf6h5" has no NodeRef
STEP: [14m57s remaining] Waiting for nodes to be Ready in 3 transient machinesets
I0904 16:08:18.006539   31343 utils.go:165] Machine "e2e-336b0-w-0-vf6h5" is backing node "f04d1ba0-086a-426e-b543-563e6e04838a"
I0904 16:08:18.006564   31343 utils.go:149] MachineSet "e2e-336b0-w-0" have 1 nodes
E0904 16:08:18.011713   31343 utils.go:157] Machine "e2e-336b0-w-1-2f49v" has no NodeRef
STEP: [14m54s remaining] Waiting for nodes to be Ready in 3 transient machinesets
I0904 16:08:21.019773   31343 utils.go:165] Machine "e2e-336b0-w-0-vf6h5" is backing node "f04d1ba0-086a-426e-b543-563e6e04838a"
I0904 16:08:21.019804   31343 utils.go:149] MachineSet "e2e-336b0-w-0" have 1 nodes
I0904 16:08:21.025347   31343 utils.go:165] Machine "e2e-336b0-w-1-2f49v" is backing node "f3d83ee7-21fd-4ef5-bf29-5dfa22ea38f4"
I0904 16:08:21.025373   31343 utils.go:149] MachineSet "e2e-336b0-w-1" have 1 nodes
I0904 16:08:21.030509   31343 utils.go:165] Machine "e2e-336b0-w-2-chf54" is backing node "5944e33a-b080-44b2-b9c0-b4bee9b818e6"
I0904 16:08:21.030536   31343 utils.go:149] MachineSet "e2e-336b0-w-2" have 1 nodes
I0904 16:08:21.030583   31343 utils.go:177] Node "f04d1ba0-086a-426e-b543-563e6e04838a" is ready. Conditions are: [{OutOfDisk False 2019-09-04 16:08:19 +0000 UTC 2019-09-04 16:08:17 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2019-09-04 16:08:19 +0000 UTC 2019-09-04 16:08:17 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-09-04 16:08:19 +0000 UTC 2019-09-04 16:08:17 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-09-04 16:08:19 +0000 UTC 2019-09-04 16:08:17 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-09-04 16:08:19 +0000 UTC 2019-09-04 16:08:17 +0000 UTC KubeletReady kubelet is posting ready status}]
I0904 16:08:21.030662   31343 utils.go:177] Node "f3d83ee7-21fd-4ef5-bf29-5dfa22ea38f4" is ready. Conditions are: [{OutOfDisk False 2019-09-04 16:08:20 +0000 UTC 2019-09-04 16:08:18 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2019-09-04 16:08:20 +0000 UTC 2019-09-04 16:08:18 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-09-04 16:08:20 +0000 UTC 2019-09-04 16:08:18 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-09-04 16:08:20 +0000 UTC 2019-09-04 16:08:18 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-09-04 16:08:20 +0000 UTC 2019-09-04 16:08:18 +0000 UTC KubeletReady kubelet is posting ready status}]
I0904 16:08:21.030706   31343 utils.go:177] Node "5944e33a-b080-44b2-b9c0-b4bee9b818e6" is ready. Conditions are: [{OutOfDisk False 2019-09-04 16:08:19 +0000 UTC 2019-09-04 16:08:19 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2019-09-04 16:08:19 +0000 UTC 2019-09-04 16:08:19 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-09-04 16:08:19 +0000 UTC 2019-09-04 16:08:19 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-09-04 16:08:19 +0000 UTC 2019-09-04 16:08:19 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-09-04 16:08:19 +0000 UTC 2019-09-04 16:08:19 +0000 UTC KubeletReady kubelet is posting ready status}]
STEP: Getting nodes
STEP: Creating 3 machineautoscalers
I0904 16:08:21.033787   31343 autoscaler.go:340] Create MachineAutoscaler backed by MachineSet kube-system/e2e-336b0-w-0 - min:1, max:2
I0904 16:08:21.040195   31343 autoscaler.go:340] Create MachineAutoscaler backed by MachineSet kube-system/e2e-336b0-w-1 - min:1, max:2
I0904 16:08:21.043839   31343 autoscaler.go:340] Create MachineAutoscaler backed by MachineSet kube-system/e2e-336b0-w-2 - min:1, max:2
STEP: Creating ClusterAutoscaler configured with maxNodesTotal:10
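One MachineAutoscaler per transient MachineSet bounds its replicas to [1, 2], and a single ClusterAutoscaler named "default" caps the whole cluster at 10 nodes. A sketch of both resources, assuming the cluster-autoscaler-operator API types; the import paths and field names follow that repository as recalled and should be treated as assumptions:

```go
package e2e

import (
	caov1 "github.com/openshift/cluster-autoscaler-operator/pkg/apis/autoscaling/v1"
	caov1beta1 "github.com/openshift/cluster-autoscaler-operator/pkg/apis/autoscaling/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// machineAutoscaler bounds one MachineSet, matching "min:1, max:2" above.
func machineAutoscaler(namespace, machineSet string, min, max int32) *caov1beta1.MachineAutoscaler {
	return &caov1beta1.MachineAutoscaler{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "autoscale-" + machineSet, Namespace: namespace},
		Spec: caov1beta1.MachineAutoscalerSpec{
			MinReplicas: min,
			MaxReplicas: max,
			ScaleTargetRef: caov1beta1.CrossVersionObjectReference{
				APIVersion: "machine.openshift.io/v1beta1",
				Kind:       "MachineSet",
				Name:       machineSet,
			},
		},
	}
}

// clusterAutoscaler caps total cluster size, matching "maxNodesTotal:10" above.
func clusterAutoscaler(maxNodesTotal int32) *caov1.ClusterAutoscaler {
	return &caov1.ClusterAutoscaler{
		ObjectMeta: metav1.ObjectMeta{Name: "default"},
		Spec: caov1.ClusterAutoscalerSpec{
			ResourceLimits: &caov1.ResourceLimits{MaxNodesTotal: &maxNodesTotal},
		},
	}
}
```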
STEP: Deriving Memory capacity from machine "kubemark-actuator-testing-machineset"
I0904 16:08:21.158022   31343 autoscaler.go:377] Memory capacity of worker node "1683dbc1-25e9-424c-94c0-5f6e2f3b5bab" is 3840Mi
STEP: Creating scale-out workload: jobs: 11, memory: 2818572300
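The workload is sized against the capacity derived above: 11 jobs each requesting 2818572300 bytes (roughly 2.6Gi against a 3840Mi worker) cannot all fit on the existing nodes, so some pods go Pending and the autoscaler reacts with the "pod triggered scale-up" events that follow. A hedged sketch of one such Job; the constructor name and image are illustrative, not the suite's actual code:

```go
package e2e

import (
	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// scaleOutJob builds a Job whose single pod requests enough memory that
// only one fits per worker node, forcing a scale-up when several are created.
func scaleOutJob(name string, memoryBytes int64) *batchv1.Job {
	return &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: batchv1.JobSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyNever,
					Containers: []corev1.Container{{
						Name:    "work",
						Image:   "busybox",
						Command: []string{"sleep", "10h"},
						Resources: corev1.ResourceRequirements{
							Requests: corev1.ResourceList{
								corev1.ResourceMemory: *resource.NewQuantity(memoryBytes, resource.DecimalSI),
							},
						},
					}},
				},
			},
		},
	}
}
```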
I0904 16:08:21.183317   31343 autoscaler.go:399] [15m0s remaining] Expecting 2 "ScaledUpGroup" events; observed 0
I0904 16:08:23.013209   31343 autoscaler.go:361] cluster-autoscaler: cluster-autoscaler-default-598c649f66-p5mnf became leader
I0904 16:08:24.183473   31343 autoscaler.go:399] [14m57s remaining] Expecting 2 "ScaledUpGroup" events; observed 0
I0904 16:08:27.183624   31343 autoscaler.go:399] [14m54s remaining] Expecting 2 "ScaledUpGroup" events; observed 0
I0904 16:08:30.183881   31343 autoscaler.go:399] [14m51s remaining] Expecting 2 "ScaledUpGroup" events; observed 0
I0904 16:08:33.172918   31343 autoscaler.go:361] cluster-autoscaler-status: Max total nodes in cluster reached: 10
I0904 16:08:33.175073   31343 autoscaler.go:361] cluster-autoscaler-status: Scale-up: setting group kube-system/e2e-336b0-w-1 size to 2
I0904 16:08:33.183536   31343 autoscaler.go:361] cluster-autoscaler-status: Scale-up: group kube-system/e2e-336b0-w-1 size set to 2
I0904 16:08:33.184608   31343 autoscaler.go:399] [14m48s remaining] Expecting 2 "ScaledUpGroup" events; observed 1
I0904 16:08:33.190835   31343 autoscaler.go:361] e2e-autoscaler-workload-c6k57: pod triggered scale-up: [{kube-system/e2e-336b0-w-1 1->2 (max: 2)}]
I0904 16:08:33.202568   31343 autoscaler.go:361] e2e-autoscaler-workload-4pksh: pod triggered scale-up: [{kube-system/e2e-336b0-w-1 1->2 (max: 2)}]
I0904 16:08:33.206871   31343 autoscaler.go:361] e2e-autoscaler-workload-pxx5t: pod triggered scale-up: [{kube-system/e2e-336b0-w-1 1->2 (max: 2)}]
I0904 16:08:33.210784   31343 autoscaler.go:361] e2e-autoscaler-workload-cvv84: pod triggered scale-up: [{kube-system/e2e-336b0-w-1 1->2 (max: 2)}]
I0904 16:08:33.219791   31343 autoscaler.go:361] e2e-autoscaler-workload-b5qtw: pod triggered scale-up: [{kube-system/e2e-336b0-w-1 1->2 (max: 2)}]
I0904 16:08:33.229732   31343 autoscaler.go:361] e2e-autoscaler-workload-gv8nf: pod triggered scale-up: [{kube-system/e2e-336b0-w-1 1->2 (max: 2)}]
I0904 16:08:33.236764   31343 autoscaler.go:361] e2e-autoscaler-workload-q9zt6: pod triggered scale-up: [{kube-system/e2e-336b0-w-1 1->2 (max: 2)}]
I0904 16:08:33.372051   31343 autoscaler.go:361] e2e-autoscaler-workload-q8d7r: pod triggered scale-up: [{kube-system/e2e-336b0-w-1 1->2 (max: 2)}]
I0904 16:08:36.184813   31343 autoscaler.go:399] [14m45s remaining] Expecting 2 "ScaledUpGroup" events; observed 1
I0904 16:08:39.185813   31343 autoscaler.go:399] [14m42s remaining] Expecting 2 "ScaledUpGroup" events; observed 1
I0904 16:08:42.186027   31343 autoscaler.go:399] [14m39s remaining] Expecting 2 "ScaledUpGroup" events; observed 1
I0904 16:08:43.210655   31343 autoscaler.go:361] cluster-autoscaler-status: Scale-up: setting group kube-system/e2e-336b0-w-0 size to 2
I0904 16:08:43.220194   31343 autoscaler.go:361] cluster-autoscaler-status: Scale-up: group kube-system/e2e-336b0-w-0 size set to 2
I0904 16:08:43.224832   31343 autoscaler.go:361] e2e-autoscaler-workload-q9zt6: pod triggered scale-up: [{kube-system/e2e-336b0-w-0 1->2 (max: 2)}]
I0904 16:08:43.231707   31343 autoscaler.go:361] e2e-autoscaler-workload-q8d7r: pod triggered scale-up: [{kube-system/e2e-336b0-w-0 1->2 (max: 2)}]
I0904 16:08:43.239883   31343 autoscaler.go:361] e2e-autoscaler-workload-4pksh: pod triggered scale-up: [{kube-system/e2e-336b0-w-0 1->2 (max: 2)}]
I0904 16:08:43.244711   31343 autoscaler.go:361] e2e-autoscaler-workload-pxx5t: pod triggered scale-up: [{kube-system/e2e-336b0-w-0 1->2 (max: 2)}]
I0904 16:08:43.247814   31343 autoscaler.go:361] e2e-autoscaler-workload-gv8nf: pod triggered scale-up: [{kube-system/e2e-336b0-w-0 1->2 (max: 2)}]
I0904 16:08:43.252787   31343 autoscaler.go:361] e2e-autoscaler-workload-cvv84: pod triggered scale-up: [{kube-system/e2e-336b0-w-0 1->2 (max: 2)}]
I0904 16:08:43.261014   31343 autoscaler.go:361] e2e-autoscaler-workload-c6k57: pod triggered scale-up: [{kube-system/e2e-336b0-w-0 1->2 (max: 2)}]
I0904 16:08:45.186244   31343 autoscaler.go:399] [14m36s remaining] Expecting 2 "ScaledUpGroup" events; observed 2
I0904 16:08:45.187101   31343 autoscaler.go:414] [1m0s remaining] Waiting for cluster-autoscaler to generate a "MaxNodesTotalReached" event; observed 1
I0904 16:08:45.187133   31343 autoscaler.go:422] [1m0s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0904 16:08:48.187330   31343 autoscaler.go:422] [57s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0904 16:08:51.187548   31343 autoscaler.go:422] [54s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0904 16:08:54.187776   31343 autoscaler.go:422] [51s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0904 16:08:57.188037   31343 autoscaler.go:422] [48s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0904 16:09:00.188281   31343 autoscaler.go:422] [45s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0904 16:09:03.189280   31343 autoscaler.go:422] [42s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0904 16:09:06.189561   31343 autoscaler.go:422] [39s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0904 16:09:09.189778   31343 autoscaler.go:422] [36s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0904 16:09:12.190023   31343 autoscaler.go:422] [33s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0904 16:09:15.190249   31343 autoscaler.go:422] [30s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0904 16:09:18.190464   31343 autoscaler.go:422] [27s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0904 16:09:21.190679   31343 autoscaler.go:422] [24s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0904 16:09:24.190915   31343 autoscaler.go:422] [21s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0904 16:09:27.191084   31343 autoscaler.go:422] [18s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0904 16:09:30.191341   31343 autoscaler.go:422] [15s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0904 16:09:33.191591   31343 autoscaler.go:422] [12s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0904 16:09:36.191898   31343 autoscaler.go:422] [9s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0904 16:09:39.192117   31343 autoscaler.go:422] [6s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0904 16:09:42.192360   31343 autoscaler.go:422] [3s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
STEP: Deleting workload
I0904 16:09:45.187291   31343 autoscaler.go:249] [cleanup] "e2e-autoscaler-workload" (*v1.Job)
I0904 16:09:45.192630   31343 autoscaler.go:434] [15m0s remaining] Expecting 2 "ScaleDownEmpty" events; observed 2
I0904 16:09:45.210456   31343 autoscaler.go:445] still have workload POD: "e2e-autoscaler-workload-4pksh"
I0904 16:09:45.210485   31343 autoscaler.go:249] [cleanup] "default" (*v1.ClusterAutoscaler)
I0904 16:09:45.256709   31343 autoscaler.go:465] Waiting for cluster-autoscaler POD "cluster-autoscaler-default-598c649f66-p5mnf" to disappear
STEP: Scaling transient machinesets to zero
I0904 16:09:45.256748   31343 autoscaler.go:474] Scaling transient machineset "e2e-336b0-w-0" to zero
I0904 16:09:45.266766   31343 autoscaler.go:474] Scaling transient machineset "e2e-336b0-w-1" to zero
I0904 16:09:45.273896   31343 autoscaler.go:474] Scaling transient machineset "e2e-336b0-w-2" to zero
STEP: Waiting for scaled up nodes to be deleted
I0904 16:09:45.294687   31343 autoscaler.go:491] [15m0s remaining] Waiting for cluster to reach original node count of 5; currently have 10
I0904 16:09:48.297755   31343 autoscaler.go:491] [14m57s remaining] Waiting for cluster to reach original node count of 5; currently have 5
STEP: Waiting for scaled up machines to be deleted
I0904 16:09:48.300969   31343 autoscaler.go:501] [15m0s remaining] Waiting for cluster to reach original machine count of 5; currently have 5
I0904 16:09:48.300997   31343 autoscaler.go:249] [cleanup] "e2e-336b0-w-0" (*v1beta1.MachineSet)
I0904 16:09:48.304663   31343 autoscaler.go:249] [cleanup] "e2e-336b0-w-1" (*v1beta1.MachineSet)
I0904 16:09:48.308589   31343 autoscaler.go:249] [cleanup] "e2e-336b0-w-2" (*v1beta1.MachineSet)
I0904 16:09:48.313711   31343 autoscaler.go:249] [cleanup] "autoscale-e2e-336b0-w-0clsbx" (*v1beta1.MachineAutoscaler)
I0904 16:09:48.323750   31343 autoscaler.go:249] [cleanup] "autoscale-e2e-336b0-w-1hj9gw" (*v1beta1.MachineAutoscaler)
I0904 16:09:48.333624   31343 autoscaler.go:249] [cleanup] "autoscale-e2e-336b0-w-2zpc4r" (*v1beta1.MachineAutoscaler)

• [SLOW TEST:93.420 seconds]
[Feature:Machines] Autoscaler should
/data/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/autoscaler/autoscaler.go:233
  scale up and down
  /data/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/autoscaler/autoscaler.go:234
------------------------------
S
------------------------------
[Feature:Machines] Managed cluster should 
  have machines linked with nodes
  /data/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/infra/infra.go:136
I0904 16:09:48.339487   31343 framework.go:406] >>> kubeConfig: /root/.kube/config
I0904 16:09:48.359164   31343 utils.go:47] [remaining 3m0s] Expecting the same number of machines and nodes, have 5 nodes and 5 machines
I0904 16:09:48.359198   31343 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-blue-8mk44" is linked to node "ddd54ef8-8cf1-4b6c-bf00-32919eabf62c"
I0904 16:09:48.359216   31343 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-green-zrxlb" is linked to node "7ef1e50f-8bcf-499a-afb4-fc2aca1370da"
I0904 16:09:48.359230   31343 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-lhs6r" is linked to node "1683dbc1-25e9-424c-94c0-5f6e2f3b5bab"
I0904 16:09:48.359244   31343 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-red-566rv" is linked to node "3f04fdee-11f3-437a-ba46-09f9e9ced5e1"
I0904 16:09:48.359258   31343 utils.go:70] [remaining 3m0s] Machine "minikube-static-machine" is linked to node "minikube"
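Each "is linked to node" line above is an assertion that Machine.Status.NodeRef names an existing Node, one per machine. A minimal sketch of that linkage check, assuming the machine.openshift.io/v1beta1 types (the import path matches this suite's 2019-era vendoring and is an assumption) and a hypothetical function name:

```go
package e2e

import (
	"fmt"

	mapiv1beta1 "github.com/openshift/cluster-api/pkg/apis/machine/v1beta1"
	corev1 "k8s.io/api/core/v1"
)

// machinesLinkedToNodes verifies every Machine's NodeRef points at a live Node.
func machinesLinkedToNodes(machines []mapiv1beta1.Machine, nodes []corev1.Node) error {
	byName := make(map[string]bool, len(nodes))
	for _, n := range nodes {
		byName[n.Name] = true
	}
	for _, m := range machines {
		if m.Status.NodeRef == nil {
			return fmt.Errorf("machine %q has no NodeRef", m.Name)
		}
		if !byName[m.Status.NodeRef.Name] {
			return fmt.Errorf("machine %q linked to unknown node %q", m.Name, m.Status.NodeRef.Name)
		}
	}
	return nil
}
```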
•
------------------------------
[Feature:Machines] Managed cluster should 
  have ability to additively reconcile taints from machine to nodes
  /data/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/infra/infra.go:145
I0904 16:09:48.359317   31343 framework.go:406] >>> kubeConfig: /root/.kube/config
STEP: getting machine "kubemark-actuator-testing-machineset-blue-8mk44"
I0904 16:09:48.384682   31343 utils.go:165] Machine "kubemark-actuator-testing-machineset-blue-8mk44" is backing node "ddd54ef8-8cf1-4b6c-bf00-32919eabf62c"
STEP: getting the backed node "ddd54ef8-8cf1-4b6c-bf00-32919eabf62c"
STEP: updating node "ddd54ef8-8cf1-4b6c-bf00-32919eabf62c" with taint: {not-from-machine true NoSchedule <nil>}
STEP: updating machine "kubemark-actuator-testing-machineset-blue-8mk44" with taint: {from-machine-6b1b07e3-cf2e-11e9-82ed-0a1de5b610ea true NoSchedule <nil>}
I0904 16:09:48.395004   31343 infra.go:184] Getting node from machine again for verification of taints
I0904 16:09:48.401828   31343 utils.go:165] Machine "kubemark-actuator-testing-machineset-blue-8mk44" is backing node "ddd54ef8-8cf1-4b6c-bf00-32919eabf62c"
I0904 16:09:48.401857   31343 infra.go:194] Expected : map[from-machine-6b1b07e3-cf2e-11e9-82ed-0a1de5b610ea:{} not-from-machine:{}], observed map[kubemark:{} not-from-machine:{} from-machine-6b1b07e3-cf2e-11e9-82ed-0a1de5b610ea:{}] , difference map[], 
STEP: Getting the latest version of the original machine
STEP: Setting back the original machine taints
STEP: Getting the latest version of the node
I0904 16:09:48.409444   31343 utils.go:165] Machine "kubemark-actuator-testing-machineset-blue-8mk44" is backing node "ddd54ef8-8cf1-4b6c-bf00-32919eabf62c"
STEP: Setting back the original node taints
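The reconciliation being tested is additive: the "not-from-machine" taint placed directly on the node must survive while the "from-machine-<uuid>" taint from the Machine spec is copied onto it (the observed map above contains both, plus the pre-existing "kubemark" taint, and the difference map is empty). A small helper sketch for that final assertion; the function name is illustrative:

```go
package e2e

import corev1 "k8s.io/api/core/v1"

// hasTaint reports whether the node carries a taint with the given key and effect.
func hasTaint(node *corev1.Node, key string, effect corev1.TaintEffect) bool {
	for _, t := range node.Spec.Taints {
		if t.Key == key && t.Effect == effect {
			return true
		}
	}
	return false
}

// Expected after reconciliation, per the log above:
//   hasTaint(node, "not-from-machine", corev1.TaintEffectNoSchedule)      == true
//   hasTaint(node, "from-machine-<uuid>", corev1.TaintEffectNoSchedule)   == true
```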
•
------------------------------
[Feature:Machines] Managed cluster should 
  recover from deleted worker machines
  /data/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/infra/infra.go:220
I0904 16:09:48.414743   31343 framework.go:406] >>> kubeConfig: /root/.kube/config
STEP: checking initial cluster state
I0904 16:09:48.440213   31343 utils.go:87] Cluster size is 5 nodes
I0904 16:09:48.440250   31343 utils.go:239] [remaining 15m0s] Cluster size expected to be 5 nodes
I0904 16:09:48.444434   31343 utils.go:99] MachineSet "e2e-336b0-w-0" replicas 0. Ready: 0, available 0
I0904 16:09:48.444462   31343 utils.go:99] MachineSet "e2e-336b0-w-1" replicas 0. Ready: 0, available 0
I0904 16:09:48.444472   31343 utils.go:99] MachineSet "e2e-336b0-w-2" replicas 0. Ready: 0, available 0
I0904 16:09:48.444480   31343 utils.go:99] MachineSet "kubemark-actuator-testing-machineset" replicas 1. Ready: 1, available 1
I0904 16:09:48.444489   31343 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-blue" replicas 1. Ready: 1, available 1
I0904 16:09:48.444498   31343 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-green" replicas 1. Ready: 1, available 1
I0904 16:09:48.444513   31343 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-red" replicas 1. Ready: 1, available 1
I0904 16:09:48.447505   31343 utils.go:231] Node "1683dbc1-25e9-424c-94c0-5f6e2f3b5bab". Ready: true. Unschedulable: false
I0904 16:09:48.447526   31343 utils.go:231] Node "3f04fdee-11f3-437a-ba46-09f9e9ced5e1". Ready: true. Unschedulable: false
I0904 16:09:48.447536   31343 utils.go:231] Node "7ef1e50f-8bcf-499a-afb4-fc2aca1370da". Ready: true. Unschedulable: false
I0904 16:09:48.447542   31343 utils.go:231] Node "ddd54ef8-8cf1-4b6c-bf00-32919eabf62c". Ready: true. Unschedulable: false
I0904 16:09:48.447589   31343 utils.go:231] Node "minikube". Ready: true. Unschedulable: false
I0904 16:09:48.452880   31343 utils.go:87] Cluster size is 5 nodes
I0904 16:09:48.452905   31343 utils.go:257] waiting for all nodes to be ready
I0904 16:09:48.459598   31343 utils.go:262] waiting for all nodes to be schedulable
I0904 16:09:48.466681   31343 utils.go:290] [remaining 1m0s] Node "1683dbc1-25e9-424c-94c0-5f6e2f3b5bab" is schedulable
I0904 16:09:48.466716   31343 utils.go:290] [remaining 1m0s] Node "3f04fdee-11f3-437a-ba46-09f9e9ced5e1" is schedulable
I0904 16:09:48.466727   31343 utils.go:290] [remaining 1m0s] Node "7ef1e50f-8bcf-499a-afb4-fc2aca1370da" is schedulable
I0904 16:09:48.466738   31343 utils.go:290] [remaining 1m0s] Node "ddd54ef8-8cf1-4b6c-bf00-32919eabf62c" is schedulable
I0904 16:09:48.466747   31343 utils.go:290] [remaining 1m0s] Node "minikube" is schedulable
I0904 16:09:48.466756   31343 utils.go:267] waiting for each node to be backed by a machine
I0904 16:09:48.478834   31343 utils.go:47] [remaining 3m0s] Expecting the same number of machines and nodes, have 5 nodes and 5 machines
I0904 16:09:48.478873   31343 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-blue-8mk44" is linked to node "ddd54ef8-8cf1-4b6c-bf00-32919eabf62c"
I0904 16:09:48.478884   31343 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-green-zrxlb" is linked to node "7ef1e50f-8bcf-499a-afb4-fc2aca1370da"
I0904 16:09:48.478892   31343 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-lhs6r" is linked to node "1683dbc1-25e9-424c-94c0-5f6e2f3b5bab"
I0904 16:09:48.478901   31343 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-red-566rv" is linked to node "3f04fdee-11f3-437a-ba46-09f9e9ced5e1"
I0904 16:09:48.478909   31343 utils.go:70] [remaining 3m0s] Machine "minikube-static-machine" is linked to node "minikube"
STEP: getting worker node
STEP: deleting machine object "kubemark-actuator-testing-machineset-lhs6r"
STEP: waiting for node object "1683dbc1-25e9-424c-94c0-5f6e2f3b5bab" to go away
I0904 16:09:48.501172   31343 infra.go:255] Node "1683dbc1-25e9-424c-94c0-5f6e2f3b5bab" still exists. Node conditions are: [{OutOfDisk False 2019-09-04 16:09:48 +0000 UTC 2019-09-04 16:07:23 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2019-09-04 16:09:48 +0000 UTC 2019-09-04 16:07:23 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-09-04 16:09:48 +0000 UTC 2019-09-04 16:07:23 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-09-04 16:09:48 +0000 UTC 2019-09-04 16:07:23 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-09-04 16:09:48 +0000 UTC 2019-09-04 16:07:23 +0000 UTC KubeletReady kubelet is posting ready status}]
STEP: waiting for new node object to come up
I0904 16:09:53.506286   31343 utils.go:239] [remaining 15m0s] Cluster size expected to be 5 nodes
I0904 16:09:53.509266   31343 utils.go:99] MachineSet "kubemark-actuator-testing-machineset" replicas 1. Ready: 1, available 1
I0904 16:09:53.509286   31343 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-blue" replicas 1. Ready: 1, available 1
I0904 16:09:53.509292   31343 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-green" replicas 1. Ready: 1, available 1
I0904 16:09:53.509298   31343 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-red" replicas 1. Ready: 1, available 1
I0904 16:09:53.511880   31343 utils.go:231] Node "3f04fdee-11f3-437a-ba46-09f9e9ced5e1". Ready: true. Unschedulable: false
I0904 16:09:53.511900   31343 utils.go:231] Node "7ef1e50f-8bcf-499a-afb4-fc2aca1370da". Ready: true. Unschedulable: false
I0904 16:09:53.511906   31343 utils.go:231] Node "80076895-9615-40cc-9535-18c66ad0cef2". Ready: true. Unschedulable: false
I0904 16:09:53.511911   31343 utils.go:231] Node "ddd54ef8-8cf1-4b6c-bf00-32919eabf62c". Ready: true. Unschedulable: false
I0904 16:09:53.511916   31343 utils.go:231] Node "minikube". Ready: true. Unschedulable: false
I0904 16:09:53.514496   31343 utils.go:87] Cluster size is 5 nodes
I0904 16:09:53.514515   31343 utils.go:257] waiting for all nodes to be ready
I0904 16:09:53.516985   31343 utils.go:262] waiting for all nodes to be schedulable
I0904 16:09:53.519703   31343 utils.go:290] [remaining 1m0s] Node "3f04fdee-11f3-437a-ba46-09f9e9ced5e1" is schedulable
I0904 16:09:53.519726   31343 utils.go:290] [remaining 1m0s] Node "7ef1e50f-8bcf-499a-afb4-fc2aca1370da" is schedulable
I0904 16:09:53.519733   31343 utils.go:290] [remaining 1m0s] Node "80076895-9615-40cc-9535-18c66ad0cef2" is schedulable
I0904 16:09:53.519741   31343 utils.go:290] [remaining 1m0s] Node "ddd54ef8-8cf1-4b6c-bf00-32919eabf62c" is schedulable
I0904 16:09:53.519752   31343 utils.go:290] [remaining 1m0s] Node "minikube" is schedulable
I0904 16:09:53.519760   31343 utils.go:267] waiting for each node to be backed by a machine
I0904 16:09:53.525171   31343 utils.go:47] [remaining 3m0s] Expecting the same number of machines and nodes, have 5 nodes and 5 machines
I0904 16:09:53.525198   31343 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-blue-8mk44" is linked to node "ddd54ef8-8cf1-4b6c-bf00-32919eabf62c"
I0904 16:09:53.525209   31343 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-green-zrxlb" is linked to node "7ef1e50f-8bcf-499a-afb4-fc2aca1370da"
I0904 16:09:53.525217   31343 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-red-566rv" is linked to node "3f04fdee-11f3-437a-ba46-09f9e9ced5e1"
I0904 16:09:53.525229   31343 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-zml5d" is linked to node "80076895-9615-40cc-9535-18c66ad0cef2"
I0904 16:09:53.525250   31343 utils.go:70] [remaining 3m0s] Machine "minikube-static-machine" is linked to node "minikube"
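Recovery here comes from the MachineSet controller: deleting machine "kubemark-actuator-testing-machineset-lhs6r" removes its node, the set's replica count drives creation of "-zml5d", and the cluster returns to five nodes. A sketch of the cluster-size poll the spec repeats, assuming client-go; waitForClusterSize is hypothetical (the suite's own helper lives in utils.go):

```go
package e2e

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForClusterSize polls until the node count matches the expected size.
func waitForClusterSize(client kubernetes.Interface, size int) error {
	return wait.Poll(5*time.Second, 15*time.Minute, func() (bool, error) {
		nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return false, nil // transient API errors: retry
		}
		return len(nodes.Items) == size, nil
	})
}
```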

• [SLOW TEST:5.111 seconds]
[Feature:Machines] Managed cluster should
/data/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/infra/infra.go:126
  recover from deleted worker machines
  /data/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/infra/infra.go:220
------------------------------
[Feature:Machines] Managed cluster should 
  grow and decrease when scaling different machineSets simultaneously
  /data/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/infra/infra.go:267
I0904 16:09:53.525367   31343 framework.go:406] >>> kubeConfig: /root/.kube/config
STEP: checking existing cluster size
I0904 16:09:53.540849   31343 utils.go:87] Cluster size is 5 nodes
STEP: getting worker machineSets
I0904 16:09:53.543748   31343 infra.go:297] Creating transient MachineSet "e2e-6e2d9-w-0"
I0904 16:09:53.547755   31343 infra.go:297] Creating transient MachineSet "e2e-6e2d9-w-1"
STEP: scaling "e2e-6e2d9-w-0" from 0 to 2 replicas
I0904 16:09:53.552225   31343 framework.go:406] >>> kubeConfig: /root/.kube/config
STEP: scaling "e2e-6e2d9-w-1" from 0 to 2 replicas
I0904 16:09:53.575402   31343 framework.go:406] >>> kubeConfig: /root/.kube/config
E0904 16:09:53.617056   31343 utils.go:157] Machine "e2e-6e2d9-w-0-2hfhb" has no NodeRef
I0904 16:09:58.629883   31343 utils.go:165] Machine "e2e-6e2d9-w-0-2hfhb" is backing node "5990abf0-6713-4894-b1a5-b644f715c0d3"
I0904 16:09:58.633382   31343 utils.go:165] Machine "e2e-6e2d9-w-0-6mp9h" is backing node "f0146bf9-e4da-4b54-b814-e8810d959059"
I0904 16:09:58.633408   31343 utils.go:149] MachineSet "e2e-6e2d9-w-0" have 2 nodes
I0904 16:09:58.645108   31343 utils.go:165] Machine "e2e-6e2d9-w-1-5swwn" is backing node "a1fdacb6-1a18-4da2-beec-a85047ca34c9"
I0904 16:09:58.649647   31343 utils.go:165] Machine "e2e-6e2d9-w-1-z7vkt" is backing node "0760bafd-0d42-4313-905a-b8cecc50b160"
I0904 16:09:58.649668   31343 utils.go:149] MachineSet "e2e-6e2d9-w-1" have 2 nodes
I0904 16:09:58.649679   31343 utils.go:177] Node "5990abf0-6713-4894-b1a5-b644f715c0d3" is ready. Conditions are: [{OutOfDisk False 2019-09-04 16:09:57 +0000 UTC 2019-09-04 16:09:57 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2019-09-04 16:09:57 +0000 UTC 2019-09-04 16:09:57 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-09-04 16:09:57 +0000 UTC 2019-09-04 16:09:57 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-09-04 16:09:57 +0000 UTC 2019-09-04 16:09:57 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-09-04 16:09:57 +0000 UTC 2019-09-04 16:09:57 +0000 UTC KubeletReady kubelet is posting ready status}]
I0904 16:09:58.649730   31343 utils.go:177] Node "f0146bf9-e4da-4b54-b814-e8810d959059" is ready. Conditions are: [{OutOfDisk False 2019-09-04 16:09:58 +0000 UTC 2019-09-04 16:09:56 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2019-09-04 16:09:58 +0000 UTC 2019-09-04 16:09:56 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-09-04 16:09:58 +0000 UTC 2019-09-04 16:09:56 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-09-04 16:09:58 +0000 UTC 2019-09-04 16:09:56 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-09-04 16:09:58 +0000 UTC 2019-09-04 16:09:56 +0000 UTC KubeletReady kubelet is posting ready status}]
I0904 16:09:58.649769   31343 utils.go:177] Node "a1fdacb6-1a18-4da2-beec-a85047ca34c9" is ready. Conditions are: [{OutOfDisk False 2019-09-04 16:09:57 +0000 UTC 2019-09-04 16:09:57 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2019-09-04 16:09:57 +0000 UTC 2019-09-04 16:09:57 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-09-04 16:09:57 +0000 UTC 2019-09-04 16:09:57 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-09-04 16:09:57 +0000 UTC 2019-09-04 16:09:57 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-09-04 16:09:57 +0000 UTC 2019-09-04 16:09:57 +0000 UTC KubeletReady kubelet is posting ready status}]
I0904 16:09:58.649804   31343 utils.go:177] Node "0760bafd-0d42-4313-905a-b8cecc50b160" is ready. Conditions are: [{OutOfDisk False 2019-09-04 16:09:58 +0000 UTC 2019-09-04 16:09:58 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2019-09-04 16:09:58 +0000 UTC 2019-09-04 16:09:58 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-09-04 16:09:58 +0000 UTC 2019-09-04 16:09:58 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-09-04 16:09:58 +0000 UTC 2019-09-04 16:09:58 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-09-04 16:09:58 +0000 UTC 2019-09-04 16:09:58 +0000 UTC KubeletReady kubelet is posting ready status}]
STEP: scaling "e2e-6e2d9-w-0" from 2 to 0 replicas
I0904 16:09:58.649851   31343 framework.go:406] >>> kubeConfig: /root/.kube/config
STEP: scaling "e2e-6e2d9-w-1" from 2 to 0 replicas
I0904 16:09:58.670220   31343 framework.go:406] >>> kubeConfig: /root/.kube/config
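Each "scaling ... replicas" step boils down to updating .Spec.Replicas on the MachineSet (the repeated ">>> kubeConfig" lines correspond to the framework building a fresh client per scale call). A simplified sketch using a controller-runtime client; the import paths and the helper name are assumptions, not the suite's actual code:

```go
package e2e

import (
	"context"

	mapiv1beta1 "github.com/openshift/cluster-api/pkg/apis/machine/v1beta1"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// scaleMachineSet fetches a MachineSet and rewrites its replica count.
func scaleMachineSet(ctx context.Context, c client.Client, namespace, name string, replicas int32) error {
	ms := &mapiv1beta1.MachineSet{}
	if err := c.Get(ctx, types.NamespacedName{Namespace: namespace, Name: name}, ms); err != nil {
		return err
	}
	ms.Spec.Replicas = &replicas
	return c.Update(ctx, ms)
}
```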
STEP: waiting for cluster to get back to original size. Final size should be 5 nodes
I0904 16:09:58.703731   31343 utils.go:239] [remaining 15m0s] Cluster size expected to be 5 nodes
I0904 16:09:58.712061   31343 utils.go:99] MachineSet "e2e-6e2d9-w-0" replicas 0. Ready: 0, available 0
I0904 16:09:58.712086   31343 utils.go:99] MachineSet "e2e-6e2d9-w-1" replicas 0. Ready: 2, available 2
I0904 16:09:58.712096   31343 utils.go:99] MachineSet "kubemark-actuator-testing-machineset" replicas 1. Ready: 1, available 1
I0904 16:09:58.712106   31343 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-blue" replicas 1. Ready: 1, available 1
I0904 16:09:58.712115   31343 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-green" replicas 1. Ready: 1, available 1
I0904 16:09:58.712125   31343 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-red" replicas 1. Ready: 1, available 1
I0904 16:09:58.718729   31343 utils.go:231] Node "0760bafd-0d42-4313-905a-b8cecc50b160". Ready: true. Unschedulable: false
I0904 16:09:58.718753   31343 utils.go:231] Node "3f04fdee-11f3-437a-ba46-09f9e9ced5e1". Ready: true. Unschedulable: false
I0904 16:09:58.718762   31343 utils.go:231] Node "5990abf0-6713-4894-b1a5-b644f715c0d3". Ready: true. Unschedulable: true
I0904 16:09:58.718770   31343 utils.go:231] Node "7ef1e50f-8bcf-499a-afb4-fc2aca1370da". Ready: true. Unschedulable: false
I0904 16:09:58.718779   31343 utils.go:231] Node "80076895-9615-40cc-9535-18c66ad0cef2". Ready: true. Unschedulable: false
I0904 16:09:58.718787   31343 utils.go:231] Node "a1fdacb6-1a18-4da2-beec-a85047ca34c9". Ready: true. Unschedulable: false
I0904 16:09:58.718799   31343 utils.go:231] Node "ddd54ef8-8cf1-4b6c-bf00-32919eabf62c". Ready: true. Unschedulable: false
I0904 16:09:58.718808   31343 utils.go:231] Node "f0146bf9-e4da-4b54-b814-e8810d959059". Ready: true. Unschedulable: false
I0904 16:09:58.718818   31343 utils.go:231] Node "minikube". Ready: true. Unschedulable: false
I0904 16:09:58.724720   31343 utils.go:87] Cluster size is 9 nodes
I0904 16:10:03.724952   31343 utils.go:239] [remaining 14m55s] Cluster size expected to be 5 nodes
I0904 16:10:03.730269   31343 utils.go:99] MachineSet "e2e-6e2d9-w-0" replicas 0. Ready: 0, available 0
I0904 16:10:03.730294   31343 utils.go:99] MachineSet "e2e-6e2d9-w-1" replicas 0. Ready: 0, available 0
I0904 16:10:03.730304   31343 utils.go:99] MachineSet "kubemark-actuator-testing-machineset" replicas 1. Ready: 1, available 1
I0904 16:10:03.730313   31343 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-blue" replicas 1. Ready: 1, available 1
I0904 16:10:03.730321   31343 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-green" replicas 1. Ready: 1, available 1
I0904 16:10:03.730331   31343 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-red" replicas 1. Ready: 1, available 1
I0904 16:10:03.733078   31343 utils.go:231] Node "3f04fdee-11f3-437a-ba46-09f9e9ced5e1". Ready: true. Unschedulable: false
I0904 16:10:03.733103   31343 utils.go:231] Node "7ef1e50f-8bcf-499a-afb4-fc2aca1370da". Ready: true. Unschedulable: false
I0904 16:10:03.733112   31343 utils.go:231] Node "80076895-9615-40cc-9535-18c66ad0cef2". Ready: true. Unschedulable: false
I0904 16:10:03.733120   31343 utils.go:231] Node "ddd54ef8-8cf1-4b6c-bf00-32919eabf62c". Ready: true. Unschedulable: false
I0904 16:10:03.733128   31343 utils.go:231] Node "minikube". Ready: true. Unschedulable: false
I0904 16:10:03.735744   31343 utils.go:87] Cluster size is 5 nodes
I0904 16:10:03.735776   31343 utils.go:257] waiting for all nodes to be ready
I0904 16:10:03.738335   31343 utils.go:262] waiting for all nodes to be schedulable
I0904 16:10:03.740803   31343 utils.go:290] [remaining 1m0s] Node "3f04fdee-11f3-437a-ba46-09f9e9ced5e1" is schedulable
I0904 16:10:03.740831   31343 utils.go:290] [remaining 1m0s] Node "7ef1e50f-8bcf-499a-afb4-fc2aca1370da" is schedulable
I0904 16:10:03.740842   31343 utils.go:290] [remaining 1m0s] Node "80076895-9615-40cc-9535-18c66ad0cef2" is schedulable
I0904 16:10:03.740853   31343 utils.go:290] [remaining 1m0s] Node "ddd54ef8-8cf1-4b6c-bf00-32919eabf62c" is schedulable
I0904 16:10:03.740862   31343 utils.go:290] [remaining 1m0s] Node "minikube" is schedulable
I0904 16:10:03.740871   31343 utils.go:267] waiting for each node to be backed by a machine
I0904 16:10:03.747154   31343 utils.go:47] [remaining 3m0s] Expecting the same number of machines and nodes, have 5 nodes and 5 machines
I0904 16:10:03.747181   31343 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-blue-8mk44" is linked to node "ddd54ef8-8cf1-4b6c-bf00-32919eabf62c"
I0904 16:10:03.747196   31343 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-green-zrxlb" is linked to node "7ef1e50f-8bcf-499a-afb4-fc2aca1370da"
I0904 16:10:03.747204   31343 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-red-566rv" is linked to node "3f04fdee-11f3-437a-ba46-09f9e9ced5e1"
I0904 16:10:03.747212   31343 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-zml5d" is linked to node "80076895-9615-40cc-9535-18c66ad0cef2"
I0904 16:10:03.747221   31343 utils.go:70] [remaining 3m0s] Machine "minikube-static-machine" is linked to node "minikube"

• [SLOW TEST:10.229 seconds]
[Feature:Machines] Managed cluster should
/data/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/infra/infra.go:126
  grow and decrease when scaling different machineSets simultaneously
  /data/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/infra/infra.go:267
------------------------------
[Feature:Machines] Managed cluster should 
  drain node before removing machine resource
  /data/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/infra/infra.go:346
I0904 16:10:03.754615   31343 framework.go:406] >>> kubeConfig: /root/.kube/config
STEP: checking existing cluster size
I0904 16:10:03.769480   31343 utils.go:87] Cluster size is 5 nodes
STEP: Taking the first worker machineset (assuming only worker machines are backed by machinesets)
STEP: Creating two new machines, one for node about to be drained, other for moving workload from drained node
STEP: Waiting until both new nodes are ready
E0904 16:10:03.783175   31343 utils.go:342] [remaining 15m0s] Expecting 2 nodes with map[string]string{"node-role.kubernetes.io/worker":"", "node-draining-test":"33644a25-cf2e-11e9-82ed-0a1de5b610ea"} labels in Ready state, got 0
I0904 16:10:08.787450   31343 utils.go:346] [14m55s remaining] Expected number (2) of nodes with map[node-role.kubernetes.io/worker: node-draining-test:33644a25-cf2e-11e9-82ed-0a1de5b610ea] label in Ready state found
STEP: Creating RC with workload
STEP: Creating PDB for RC
STEP: Wait until all replicas are ready
I0904 16:10:08.820431   31343 utils.go:396] [15m0s remaining] Waiting for at least one RC ready replica, ReadyReplicas: 0, Replicas: 0
I0904 16:10:13.822699   31343 utils.go:396] [14m55s remaining] Waiting for at least one RC ready replica, ReadyReplicas: 0, Replicas: 20
I0904 16:10:18.828453   31343 utils.go:399] [14m50s remaining] Waiting for RC ready replicas, ReadyReplicas: 20, Replicas: 20
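The fixture dumped below is a 20-replica ReplicationController pinned to the two freshly created nodes via nodeSelector, with a PodDisruptionBudget over the same "app: nginx" label so the subsequent drain must evict pods gradually instead of all at once. A hedged sketch of both objects; names and labels are taken from the dump, while the PDB threshold is not shown in the log, so it stays a parameter here:

```go
package e2e

import (
	corev1 "k8s.io/api/core/v1"
	policyv1beta1 "k8s.io/api/policy/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// pdbWorkload builds the RC and the PDB covering it; testID is the
// "node-draining-test" label value seen in the pod specs below.
func pdbWorkload(testID string, replicas int32, minAvailable intstr.IntOrString) (*corev1.ReplicationController, *policyv1beta1.PodDisruptionBudget) {
	labels := map[string]string{"app": "nginx"} // label visible on every pod below
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pdb-workload", Namespace: "default"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// Pin pods to the two nodes labelled for this test run.
					NodeSelector: map[string]string{
						"node-draining-test":             testID,
						"node-role.kubernetes.io/worker": "",
					},
					Containers: []corev1.Container{{
						Name:    "work",
						Image:   "busybox",
						Command: []string{"sleep", "10h"},
					}},
				},
			},
		},
	}
	pdb := &policyv1beta1.PodDisruptionBudget{
		ObjectMeta: metav1.ObjectMeta{Name: "pdb-workload", Namespace: "default"},
		Spec: policyv1beta1.PodDisruptionBudgetSpec{
			MinAvailable: &minAvailable,
			Selector:     &metav1.LabelSelector{MatchLabels: labels},
		},
	}
	return rc, pdb
}
```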
I0904 16:10:18.840783   31343 utils.go:416] POD #0/20: {
  "metadata": {
    "name": "pdb-workload-44skz",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-44skz",
    "uid": "774ce4a1-cf2e-11e9-b208-0a1de5b610ea",
    "resourceVersion": "3629",
    "creationTimestamp": "2019-09-04T16:10:08Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "7744bf9c-cf2e-11e9-b208-0a1de5b610ea",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-8gnqs",
        "secret": {
          "secretName": "default-token-8gnqs",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-8gnqs",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "33644a25-cf2e-11e9-82ed-0a1de5b610ea",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "e2b95e4f-f485-4be2-b168-06db1dc1c517",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:08Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:14Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:08Z"
      }
    ],
    "hostIP": "172.17.0.23",
    "podIP": "10.1.87.163",
    "startTime": "2019-09-04T16:10:08Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-04T16:10:13Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://659ad13e9da9b149"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0904 16:10:18.840960   31343 utils.go:416] POD #1/20: {
  "metadata": {
    "name": "pdb-workload-562sf",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-562sf",
    "uid": "77483249-cf2e-11e9-b208-0a1de5b610ea",
    "resourceVersion": "3614",
    "creationTimestamp": "2019-09-04T16:10:08Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "7744bf9c-cf2e-11e9-b208-0a1de5b610ea",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-8gnqs",
        "secret": {
          "secretName": "default-token-8gnqs",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-8gnqs",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "33644a25-cf2e-11e9-82ed-0a1de5b610ea",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "e2b95e4f-f485-4be2-b168-06db1dc1c517",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:08Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:14Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:08Z"
      }
    ],
    "hostIP": "172.17.0.23",
    "podIP": "10.96.142.133",
    "startTime": "2019-09-04T16:10:08Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-04T16:10:12Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://9eb5b14bdaf3b81f"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0904 16:10:18.841130   31343 utils.go:416] POD #2/20: {
  "metadata": {
    "name": "pdb-workload-8hkkh",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-8hkkh",
    "uid": "7749ed79-cf2e-11e9-b208-0a1de5b610ea",
    "resourceVersion": "3645",
    "creationTimestamp": "2019-09-04T16:10:08Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "7744bf9c-cf2e-11e9-b208-0a1de5b610ea",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-8gnqs",
        "secret": {
          "secretName": "default-token-8gnqs",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-8gnqs",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "33644a25-cf2e-11e9-82ed-0a1de5b610ea",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "e2b95e4f-f485-4be2-b168-06db1dc1c517",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:08Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:14Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:08Z"
      }
    ],
    "hostIP": "172.17.0.23",
    "podIP": "10.150.2.250",
    "startTime": "2019-09-04T16:10:08Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-04T16:10:12Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://30cb9364cce051bf"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0904 16:10:18.841293   31343 utils.go:416] POD #3/20: {
  "metadata": {
    "name": "pdb-workload-8mk8g",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-8mk8g",
    "uid": "77482348-cf2e-11e9-b208-0a1de5b610ea",
    "resourceVersion": "3651",
    "creationTimestamp": "2019-09-04T16:10:08Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "7744bf9c-cf2e-11e9-b208-0a1de5b610ea",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-8gnqs",
        "secret": {
          "secretName": "default-token-8gnqs",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-8gnqs",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "33644a25-cf2e-11e9-82ed-0a1de5b610ea",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "dd7372de-7bf8-4b42-b9aa-a32b9f251e36",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:08Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:15Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:08Z"
      }
    ],
    "hostIP": "172.17.0.22",
    "podIP": "10.83.140.22",
    "startTime": "2019-09-04T16:10:08Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-04T16:10:13Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://cfa38c985545536b"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0904 16:10:18.841465   31343 utils.go:416] POD #4/20: {
  "metadata": {
    "name": "pdb-workload-b64kw",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-b64kw",
    "uid": "7749ff9e-cf2e-11e9-b208-0a1de5b610ea",
    "resourceVersion": "3636",
    "creationTimestamp": "2019-09-04T16:10:08Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "7744bf9c-cf2e-11e9-b208-0a1de5b610ea",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-8gnqs",
        "secret": {
          "secretName": "default-token-8gnqs",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-8gnqs",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "33644a25-cf2e-11e9-82ed-0a1de5b610ea",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "dd7372de-7bf8-4b42-b9aa-a32b9f251e36",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:08Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:15Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:08Z"
      }
    ],
    "hostIP": "172.17.0.22",
    "podIP": "10.16.112.169",
    "startTime": "2019-09-04T16:10:08Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-04T16:10:14Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://94c0b5da7b038274"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0904 16:10:18.841652   31343 utils.go:416] POD #5/20: {
  "metadata": {
    "name": "pdb-workload-d9hmr",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-d9hmr",
    "uid": "7749a9d6-cf2e-11e9-b208-0a1de5b610ea",
    "resourceVersion": "3617",
    "creationTimestamp": "2019-09-04T16:10:08Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "7744bf9c-cf2e-11e9-b208-0a1de5b610ea",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-8gnqs",
        "secret": {
          "secretName": "default-token-8gnqs",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-8gnqs",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "33644a25-cf2e-11e9-82ed-0a1de5b610ea",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "e2b95e4f-f485-4be2-b168-06db1dc1c517",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:08Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:14Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:08Z"
      }
    ],
    "hostIP": "172.17.0.23",
    "podIP": "10.4.23.125",
    "startTime": "2019-09-04T16:10:08Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-04T16:10:13Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://3a5b9ebf8625c556"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0904 16:10:18.841824   31343 utils.go:416] POD #6/20: {
  "metadata": {
    "name": "pdb-workload-dpdhc",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-dpdhc",
    "uid": "774839fa-cf2e-11e9-b208-0a1de5b610ea",
    "resourceVersion": "3639",
    "creationTimestamp": "2019-09-04T16:10:08Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "7744bf9c-cf2e-11e9-b208-0a1de5b610ea",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-8gnqs",
        "secret": {
          "secretName": "default-token-8gnqs",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-8gnqs",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "33644a25-cf2e-11e9-82ed-0a1de5b610ea",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "dd7372de-7bf8-4b42-b9aa-a32b9f251e36",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:08Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:15Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:08Z"
      }
    ],
    "hostIP": "172.17.0.22",
    "podIP": "10.130.209.46",
    "startTime": "2019-09-04T16:10:08Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-04T16:10:12Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://8cbaa31cf930da75"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0904 16:10:18.841991   31343 utils.go:416] POD #7/20: {
  "metadata": {
    "name": "pdb-workload-fxh4d",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-fxh4d",
    "uid": "774c9727-cf2e-11e9-b208-0a1de5b610ea",
    "resourceVersion": "3626",
    "creationTimestamp": "2019-09-04T16:10:08Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "7744bf9c-cf2e-11e9-b208-0a1de5b610ea",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-8gnqs",
        "secret": {
          "secretName": "default-token-8gnqs",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-8gnqs",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "33644a25-cf2e-11e9-82ed-0a1de5b610ea",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "e2b95e4f-f485-4be2-b168-06db1dc1c517",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:08Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:14Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:08Z"
      }
    ],
    "hostIP": "172.17.0.23",
    "podIP": "10.120.196.208",
    "startTime": "2019-09-04T16:10:08Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-04T16:10:14Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://131055c35f161e1f"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0904 16:10:18.842156   31343 utils.go:416] POD #8/20: {
  "metadata": {
    "name": "pdb-workload-llfcf",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-llfcf",
    "uid": "7746da41-cf2e-11e9-b208-0a1de5b610ea",
    "resourceVersion": "3661",
    "creationTimestamp": "2019-09-04T16:10:08Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "7744bf9c-cf2e-11e9-b208-0a1de5b610ea",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-8gnqs",
        "secret": {
          "secretName": "default-token-8gnqs",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-8gnqs",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "33644a25-cf2e-11e9-82ed-0a1de5b610ea",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "dd7372de-7bf8-4b42-b9aa-a32b9f251e36",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:08Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:15Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:08Z"
      }
    ],
    "hostIP": "172.17.0.22",
    "podIP": "10.37.230.204",
    "startTime": "2019-09-04T16:10:08Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-04T16:10:12Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://51fd727defdc7440"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0904 16:10:18.842325   31343 utils.go:416] POD #9/20: {
  "metadata": {
    "name": "pdb-workload-lxjdl",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-lxjdl",
    "uid": "7749feb9-cf2e-11e9-b208-0a1de5b610ea",
    "resourceVersion": "3623",
    "creationTimestamp": "2019-09-04T16:10:08Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "7744bf9c-cf2e-11e9-b208-0a1de5b610ea",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-8gnqs",
        "secret": {
          "secretName": "default-token-8gnqs",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-8gnqs",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "33644a25-cf2e-11e9-82ed-0a1de5b610ea",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "e2b95e4f-f485-4be2-b168-06db1dc1c517",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:08Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:14Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:08Z"
      }
    ],
    "hostIP": "172.17.0.23",
    "podIP": "10.106.236.16",
    "startTime": "2019-09-04T16:10:08Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-04T16:10:12Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://76aabfeee9f550f2"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0904 16:10:18.842512   31343 utils.go:416] POD #10/20: {
  "metadata": {
    "name": "pdb-workload-nljcz",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-nljcz",
    "uid": "7746312f-cf2e-11e9-b208-0a1de5b610ea",
    "resourceVersion": "3653",
    "creationTimestamp": "2019-09-04T16:10:08Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "7744bf9c-cf2e-11e9-b208-0a1de5b610ea",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-8gnqs",
        "secret": {
          "secretName": "default-token-8gnqs",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-8gnqs",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "33644a25-cf2e-11e9-82ed-0a1de5b610ea",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "dd7372de-7bf8-4b42-b9aa-a32b9f251e36",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:08Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:15Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:08Z"
      }
    ],
    "hostIP": "172.17.0.22",
    "podIP": "10.138.192.35",
    "startTime": "2019-09-04T16:10:08Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-04T16:10:12Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://c619ba0844a858c6"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0904 16:10:18.842697   31343 utils.go:416] POD #11/20: {
  "metadata": {
    "name": "pdb-workload-qh4qm",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-qh4qm",
    "uid": "7749eeab-cf2e-11e9-b208-0a1de5b610ea",
    "resourceVersion": "3611",
    "creationTimestamp": "2019-09-04T16:10:08Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "7744bf9c-cf2e-11e9-b208-0a1de5b610ea",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-8gnqs",
        "secret": {
          "secretName": "default-token-8gnqs",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-8gnqs",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "33644a25-cf2e-11e9-82ed-0a1de5b610ea",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "e2b95e4f-f485-4be2-b168-06db1dc1c517",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:08Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:14Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:08Z"
      }
    ],
    "hostIP": "172.17.0.23",
    "podIP": "10.124.34.189",
    "startTime": "2019-09-04T16:10:08Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-04T16:10:13Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://11b83cab86be9fce"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0904 16:10:18.842865   31343 utils.go:416] POD #12/20: {
  "metadata": {
    "name": "pdb-workload-qr8vf",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-qr8vf",
    "uid": "774ccfe9-cf2e-11e9-b208-0a1de5b610ea",
    "resourceVersion": "3646",
    "creationTimestamp": "2019-09-04T16:10:08Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "7744bf9c-cf2e-11e9-b208-0a1de5b610ea",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-8gnqs",
        "secret": {
          "secretName": "default-token-8gnqs",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-8gnqs",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "33644a25-cf2e-11e9-82ed-0a1de5b610ea",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "dd7372de-7bf8-4b42-b9aa-a32b9f251e36",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:08Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:15Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:08Z"
      }
    ],
    "hostIP": "172.17.0.22",
    "podIP": "10.210.60.249",
    "startTime": "2019-09-04T16:10:08Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-04T16:10:13Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://7d22044ec73d37b6"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0904 16:10:18.843015   31343 utils.go:416] POD #13/20: {
  "metadata": {
    "name": "pdb-workload-tdckx",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-tdckx",
    "uid": "774cbaf9-cf2e-11e9-b208-0a1de5b610ea",
    "resourceVersion": "3647",
    "creationTimestamp": "2019-09-04T16:10:08Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "7744bf9c-cf2e-11e9-b208-0a1de5b610ea",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-8gnqs",
        "secret": {
          "secretName": "default-token-8gnqs",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-8gnqs",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "33644a25-cf2e-11e9-82ed-0a1de5b610ea",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "dd7372de-7bf8-4b42-b9aa-a32b9f251e36",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:08Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:15Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:08Z"
      }
    ],
    "hostIP": "172.17.0.22",
    "podIP": "10.3.79.215",
    "startTime": "2019-09-04T16:10:08Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-04T16:10:15Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://309ee52824ebd9bc"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0904 16:10:18.843175   31343 utils.go:416] POD #14/20: {
  "metadata": {
    "name": "pdb-workload-v957b",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-v957b",
    "uid": "7746cb15-cf2e-11e9-b208-0a1de5b610ea",
    "resourceVersion": "3605",
    "creationTimestamp": "2019-09-04T16:10:08Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "7744bf9c-cf2e-11e9-b208-0a1de5b610ea",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-8gnqs",
        "secret": {
          "secretName": "default-token-8gnqs",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-8gnqs",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "33644a25-cf2e-11e9-82ed-0a1de5b610ea",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "e2b95e4f-f485-4be2-b168-06db1dc1c517",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:08Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:14Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:08Z"
      }
    ],
    "hostIP": "172.17.0.23",
    "podIP": "10.145.44.67",
    "startTime": "2019-09-04T16:10:08Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-04T16:10:12Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://dafc656e30328bd5"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0904 16:10:18.843361   31343 utils.go:416] POD #15/20: {
  "metadata": {
    "name": "pdb-workload-w7d6c",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-w7d6c",
    "uid": "774ccaf5-cf2e-11e9-b208-0a1de5b610ea",
    "resourceVersion": "3608",
    "creationTimestamp": "2019-09-04T16:10:08Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "7744bf9c-cf2e-11e9-b208-0a1de5b610ea",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-8gnqs",
        "secret": {
          "secretName": "default-token-8gnqs",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-8gnqs",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "33644a25-cf2e-11e9-82ed-0a1de5b610ea",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "e2b95e4f-f485-4be2-b168-06db1dc1c517",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:08Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:14Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:08Z"
      }
    ],
    "hostIP": "172.17.0.23",
    "podIP": "10.253.106.182",
    "startTime": "2019-09-04T16:10:08Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-04T16:10:14Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://9ce1f3549383af64"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0904 16:10:18.843524   31343 utils.go:416] POD #16/20: {
  "metadata": {
    "name": "pdb-workload-x8bpj",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-x8bpj",
    "uid": "7749ebc7-cf2e-11e9-b208-0a1de5b610ea",
    "resourceVersion": "3658",
    "creationTimestamp": "2019-09-04T16:10:08Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "7744bf9c-cf2e-11e9-b208-0a1de5b610ea",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-8gnqs",
        "secret": {
          "secretName": "default-token-8gnqs",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-8gnqs",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "33644a25-cf2e-11e9-82ed-0a1de5b610ea",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "dd7372de-7bf8-4b42-b9aa-a32b9f251e36",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:08Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:15Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:08Z"
      }
    ],
    "hostIP": "172.17.0.22",
    "podIP": "10.75.58.116",
    "startTime": "2019-09-04T16:10:08Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-04T16:10:14Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://199f9e0605216c31"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0904 16:10:18.843845   31343 utils.go:416] POD #17/20: {
  "metadata": {
    "name": "pdb-workload-xvg24",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-xvg24",
    "uid": "774822b7-cf2e-11e9-b208-0a1de5b610ea",
    "resourceVersion": "3620",
    "creationTimestamp": "2019-09-04T16:10:08Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "7744bf9c-cf2e-11e9-b208-0a1de5b610ea",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-8gnqs",
        "secret": {
          "secretName": "default-token-8gnqs",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-8gnqs",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "33644a25-cf2e-11e9-82ed-0a1de5b610ea",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "e2b95e4f-f485-4be2-b168-06db1dc1c517",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:08Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:14Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:08Z"
      }
    ],
    "hostIP": "172.17.0.23",
    "podIP": "10.63.172.5",
    "startTime": "2019-09-04T16:10:08Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-04T16:10:13Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://56dbe2ec92b178b"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0904 16:10:18.844012   31343 utils.go:416] POD #18/20: {
  "metadata": {
    "name": "pdb-workload-z5x5d",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-z5x5d",
    "uid": "7749fdb9-cf2e-11e9-b208-0a1de5b610ea",
    "resourceVersion": "3633",
    "creationTimestamp": "2019-09-04T16:10:08Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "7744bf9c-cf2e-11e9-b208-0a1de5b610ea",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-8gnqs",
        "secret": {
          "secretName": "default-token-8gnqs",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-8gnqs",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "33644a25-cf2e-11e9-82ed-0a1de5b610ea",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "dd7372de-7bf8-4b42-b9aa-a32b9f251e36",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:08Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:15Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:08Z"
      }
    ],
    "hostIP": "172.17.0.22",
    "podIP": "10.111.64.5",
    "startTime": "2019-09-04T16:10:08Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-04T16:10:13Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://a6cec783c297c851"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0904 16:10:18.844167   31343 utils.go:416] POD #19/20: {
  "metadata": {
    "name": "pdb-workload-zsjsn",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-zsjsn",
    "uid": "7749efff-cf2e-11e9-b208-0a1de5b610ea",
    "resourceVersion": "3642",
    "creationTimestamp": "2019-09-04T16:10:08Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "7744bf9c-cf2e-11e9-b208-0a1de5b610ea",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-8gnqs",
        "secret": {
          "secretName": "default-token-8gnqs",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-8gnqs",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "33644a25-cf2e-11e9-82ed-0a1de5b610ea",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "dd7372de-7bf8-4b42-b9aa-a32b9f251e36",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:08Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:15Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-04T16:10:08Z"
      }
    ],
    "hostIP": "172.17.0.22",
    "podIP": "10.203.226.194",
    "startTime": "2019-09-04T16:10:08Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-04T16:10:14Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://7f97173b9c4f94c1"
      }
    ],
    "qosClass": "Burstable"
  }
}
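Editor's note: with the workload in place, the test now deletes the machine backing node "dd7372de-7bf8-4b42-b9aa-a32b9f251e36"; the machine controller then cordons and drains that node before removing the instance. A sketch of such a deletion via the dynamic client follows — machine.openshift.io/v1beta1 is the Machine API's usual group/version, but the namespace and the call itself are assumptions, since the log never shows this code.

package e2e

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
)

// machineGVR targets the OpenShift Machine API; the group/version is an
// assumption here, as the log never prints the machine object itself.
var machineGVR = schema.GroupVersionResource{
	Group:    "machine.openshift.io",
	Version:  "v1beta1",
	Resource: "machines",
}

// deleteMachine kicks off the drain: deleting the Machine causes its
// controller to cordon and drain the backing node before removing it.
func deleteMachine(dc dynamic.Interface, namespace, name string) error {
	return dc.Resource(machineGVR).Namespace(namespace).Delete(context.TODO(), name, metav1.DeleteOptions{})
}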
STEP: Delete machine to trigger node draining
STEP: Observing and verifying node draining
E0904 16:10:18.865119   31343 utils.go:451] Node "dd7372de-7bf8-4b42-b9aa-a32b9f251e36" is expected to be marked as unschedulable, but it is not
I0904 16:10:23.869428   31343 utils.go:455] [remaining 14m55s] Node "dd7372de-7bf8-4b42-b9aa-a32b9f251e36" is marked unschedulable as expected
I0904 16:10:23.876526   31343 utils.go:474] [remaining 14m55s] Have 9 pods scheduled to node "dd7372de-7bf8-4b42-b9aa-a32b9f251e36"
I0904 16:10:23.878510   31343 utils.go:490] [remaining 14m55s] RC ReadyReplicas: 20, Replicas: 20
I0904 16:10:23.878534   31343 utils.go:500] [remaining 14m55s] Expecting at most 2 pods to be scheduled to drained node "dd7372de-7bf8-4b42-b9aa-a32b9f251e36", got 9
I0904 16:10:28.869758   31343 utils.go:455] [remaining 14m50s] Node "dd7372de-7bf8-4b42-b9aa-a32b9f251e36" is marked unschedulable as expected
I0904 16:10:28.879334   31343 utils.go:474] [remaining 14m50s] Have 8 pods scheduled to node "dd7372de-7bf8-4b42-b9aa-a32b9f251e36"
I0904 16:10:28.882202   31343 utils.go:490] [remaining 14m50s] RC ReadyReplicas: 20, Replicas: 20
I0904 16:10:28.882238   31343 utils.go:500] [remaining 14m50s] Expecting at most 2 pods to be scheduled to drained node "dd7372de-7bf8-4b42-b9aa-a32b9f251e36", got 8
I0904 16:10:33.870502   31343 utils.go:455] [remaining 14m45s] Node "dd7372de-7bf8-4b42-b9aa-a32b9f251e36" is marked unschedulable as expected
I0904 16:10:33.877698   31343 utils.go:474] [remaining 14m45s] Have 7 pods scheduled to node "dd7372de-7bf8-4b42-b9aa-a32b9f251e36"
I0904 16:10:33.879315   31343 utils.go:490] [remaining 14m45s] RC ReadyReplicas: 20, Replicas: 20
I0904 16:10:33.879348   31343 utils.go:500] [remaining 14m45s] Expecting at most 2 pods to be scheduled to drained node "dd7372de-7bf8-4b42-b9aa-a32b9f251e36", got 7
I0904 16:10:38.868856   31343 utils.go:455] [remaining 14m40s] Node "dd7372de-7bf8-4b42-b9aa-a32b9f251e36" is marked unschedulable as expected
I0904 16:10:38.875034   31343 utils.go:474] [remaining 14m40s] Have 6 pods scheduled to node "dd7372de-7bf8-4b42-b9aa-a32b9f251e36"
I0904 16:10:38.876632   31343 utils.go:490] [remaining 14m40s] RC ReadyReplicas: 20, Replicas: 20
I0904 16:10:38.876657   31343 utils.go:500] [remaining 14m40s] Expecting at most 2 pods to be scheduled to drained node "dd7372de-7bf8-4b42-b9aa-a32b9f251e36", got 6
I0904 16:10:43.870005   31343 utils.go:455] [remaining 14m35s] Node "dd7372de-7bf8-4b42-b9aa-a32b9f251e36" is marked unschedulable as expected
I0904 16:10:43.876984   31343 utils.go:474] [remaining 14m35s] Have 5 pods scheduled to node "dd7372de-7bf8-4b42-b9aa-a32b9f251e36"
I0904 16:10:43.878526   31343 utils.go:490] [remaining 14m35s] RC ReadyReplicas: 20, Replicas: 20
I0904 16:10:43.878551   31343 utils.go:500] [remaining 14m35s] Expecting at most 2 pods to be scheduled to drained node "dd7372de-7bf8-4b42-b9aa-a32b9f251e36", got 5
I0904 16:10:48.870107   31343 utils.go:455] [remaining 14m30s] Node "dd7372de-7bf8-4b42-b9aa-a32b9f251e36" is marked unschedulable as expected
I0904 16:10:48.876600   31343 utils.go:474] [remaining 14m30s] Have 4 pods scheduled to node "dd7372de-7bf8-4b42-b9aa-a32b9f251e36"
I0904 16:10:48.878988   31343 utils.go:490] [remaining 14m30s] RC ReadyReplicas: 20, Replicas: 20
I0904 16:10:48.879016   31343 utils.go:500] [remaining 14m30s] Expecting at most 2 pods to be scheduled to drained node "dd7372de-7bf8-4b42-b9aa-a32b9f251e36", got 4
I0904 16:10:53.869350   31343 utils.go:455] [remaining 14m25s] Node "dd7372de-7bf8-4b42-b9aa-a32b9f251e36" is marked unschedulable as expected
I0904 16:10:53.875312   31343 utils.go:474] [remaining 14m25s] Have 3 pods scheduled to node "dd7372de-7bf8-4b42-b9aa-a32b9f251e36"
I0904 16:10:53.877048   31343 utils.go:490] [remaining 14m25s] RC ReadyReplicas: 20, Replicas: 20
I0904 16:10:53.877073   31343 utils.go:500] [remaining 14m25s] Expecting at most 2 pods to be scheduled to drained node "dd7372de-7bf8-4b42-b9aa-a32b9f251e36", got 3
I0904 16:10:58.869061   31343 utils.go:455] [remaining 14m20s] Node "dd7372de-7bf8-4b42-b9aa-a32b9f251e36" is marked unschedulable as expected
I0904 16:10:58.875170   31343 utils.go:474] [remaining 14m20s] Have 2 pods scheduled to node "dd7372de-7bf8-4b42-b9aa-a32b9f251e36"
I0904 16:10:58.876753   31343 utils.go:490] [remaining 14m20s] RC ReadyReplicas: 20, Replicas: 20
I0904 16:10:58.876777   31343 utils.go:504] [remaining 14m20s] Expected result: all pods from the RC up to last one or two got scheduled to a different node while respecting PDB
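For reference, the drain behaviour observed above can be checked by hand. A minimal sketch, assuming kubectl access to the same cluster; the node name is copied from the log and the thresholds mirror the test's expectations:

```bash
#!/bin/bash
# Sketch of the test's drain check: the node must be cordoned, and the
# pod count on it must fall to at most 2 while the RC stays at 20/20.
NODE="dd7372de-7bf8-4b42-b9aa-a32b9f251e36"

# Prints "true" once the machine controller has cordoned the node.
kubectl get node "$NODE" -o jsonpath='{.spec.unschedulable}'; echo

# Poll the number of pods still scheduled to the node.
while true; do
  count=$(kubectl get pods --all-namespaces \
    --field-selector "spec.nodeName=$NODE" --no-headers | wc -l)
  echo "pods on $NODE: $count"
  [ "$count" -le 2 ] && break
  sleep 5
done
```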
STEP: Validating the machine is deleted
E0904 16:10:58.878370   31343 infra.go:454] Machine "machine1" not yet deleted
E0904 16:11:03.880592   31343 infra.go:454] Machine "machine1" not yet deleted
I0904 16:11:08.880681   31343 infra.go:463] Machine "machine1" successfully deleted
STEP: Validate underlying node corresponding to machine1 is removed as well
I0904 16:11:08.882127   31343 utils.go:530] [15m0s remaining] Node "dd7372de-7bf8-4b42-b9aa-a32b9f251e36" successfully deleted
STEP: Delete PDB
STEP: Delete machine2
STEP: waiting for cluster to get back to original size. Final size should be 5 nodes
I0904 16:11:08.889608   31343 utils.go:239] [remaining 15m0s] Cluster size expected to be 5 nodes
I0904 16:11:08.893269   31343 utils.go:99] MachineSet "kubemark-actuator-testing-machineset" replicas 1. Ready: 1, available 1
I0904 16:11:08.893293   31343 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-blue" replicas 1. Ready: 1, available 1
I0904 16:11:08.893303   31343 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-green" replicas 1. Ready: 1, available 1
I0904 16:11:08.893312   31343 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-red" replicas 1. Ready: 1, available 1
I0904 16:11:08.896790   31343 utils.go:231] Node "3f04fdee-11f3-437a-ba46-09f9e9ced5e1". Ready: true. Unschedulable: false
I0904 16:11:08.896815   31343 utils.go:231] Node "7ef1e50f-8bcf-499a-afb4-fc2aca1370da". Ready: true. Unschedulable: false
I0904 16:11:08.896824   31343 utils.go:231] Node "80076895-9615-40cc-9535-18c66ad0cef2". Ready: true. Unschedulable: false
I0904 16:11:08.896832   31343 utils.go:231] Node "ddd54ef8-8cf1-4b6c-bf00-32919eabf62c". Ready: true. Unschedulable: false
I0904 16:11:08.896840   31343 utils.go:231] Node "e2b95e4f-f485-4be2-b168-06db1dc1c517". Ready: true. Unschedulable: false
I0904 16:11:08.896848   31343 utils.go:231] Node "minikube". Ready: true. Unschedulable: false
I0904 16:11:08.901776   31343 utils.go:87] Cluster size is 6 nodes
I0904 16:11:13.902038   31343 utils.go:239] [remaining 14m55s] Cluster size expected to be 5 nodes
I0904 16:11:13.905033   31343 utils.go:99] MachineSet "kubemark-actuator-testing-machineset" replicas 1. Ready: 1, available 1
I0904 16:11:13.905055   31343 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-blue" replicas 1. Ready: 1, available 1
I0904 16:11:13.905061   31343 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-green" replicas 1. Ready: 1, available 1
I0904 16:11:13.905067   31343 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-red" replicas 1. Ready: 1, available 1
I0904 16:11:13.908030   31343 utils.go:231] Node "3f04fdee-11f3-437a-ba46-09f9e9ced5e1". Ready: true. Unschedulable: false
I0904 16:11:13.908055   31343 utils.go:231] Node "7ef1e50f-8bcf-499a-afb4-fc2aca1370da". Ready: true. Unschedulable: false
I0904 16:11:13.908064   31343 utils.go:231] Node "80076895-9615-40cc-9535-18c66ad0cef2". Ready: true. Unschedulable: false
I0904 16:11:13.908072   31343 utils.go:231] Node "ddd54ef8-8cf1-4b6c-bf00-32919eabf62c". Ready: true. Unschedulable: false
I0904 16:11:13.908080   31343 utils.go:231] Node "e2b95e4f-f485-4be2-b168-06db1dc1c517". Ready: true. Unschedulable: true
I0904 16:11:13.908088   31343 utils.go:231] Node "minikube". Ready: true. Unschedulable: false
I0904 16:11:13.911405   31343 utils.go:87] Cluster size is 6 nodes
I0904 16:11:18.902023   31343 utils.go:239] [remaining 14m50s] Cluster size expected to be 5 nodes
I0904 16:11:18.905106   31343 utils.go:99] MachineSet "kubemark-actuator-testing-machineset" replicas 1. Ready: 1, available 1
I0904 16:11:18.905128   31343 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-blue" replicas 1. Ready: 1, available 1
I0904 16:11:18.905134   31343 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-green" replicas 1. Ready: 1, available 1
I0904 16:11:18.905140   31343 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-red" replicas 1. Ready: 1, available 1
I0904 16:11:18.907989   31343 utils.go:231] Node "3f04fdee-11f3-437a-ba46-09f9e9ced5e1". Ready: true. Unschedulable: false
I0904 16:11:18.908010   31343 utils.go:231] Node "7ef1e50f-8bcf-499a-afb4-fc2aca1370da". Ready: true. Unschedulable: false
I0904 16:11:18.908016   31343 utils.go:231] Node "80076895-9615-40cc-9535-18c66ad0cef2". Ready: true. Unschedulable: false
I0904 16:11:18.908021   31343 utils.go:231] Node "ddd54ef8-8cf1-4b6c-bf00-32919eabf62c". Ready: true. Unschedulable: false
I0904 16:11:18.908026   31343 utils.go:231] Node "e2b95e4f-f485-4be2-b168-06db1dc1c517". Ready: true. Unschedulable: true
I0904 16:11:18.908031   31343 utils.go:231] Node "minikube". Ready: true. Unschedulable: false
I0904 16:11:18.910742   31343 utils.go:87] Cluster size is 6 nodes
I0904 16:11:23.902003   31343 utils.go:239] [remaining 14m45s] Cluster size expected to be 5 nodes
I0904 16:11:23.905033   31343 utils.go:99] MachineSet "kubemark-actuator-testing-machineset" replicas 1. Ready: 1, available 1
I0904 16:11:23.905055   31343 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-blue" replicas 1. Ready: 1, available 1
I0904 16:11:23.905061   31343 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-green" replicas 1. Ready: 1, available 1
I0904 16:11:23.905066   31343 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-red" replicas 1. Ready: 1, available 1
I0904 16:11:23.908028   31343 utils.go:231] Node "3f04fdee-11f3-437a-ba46-09f9e9ced5e1". Ready: true. Unschedulable: false
I0904 16:11:23.908049   31343 utils.go:231] Node "7ef1e50f-8bcf-499a-afb4-fc2aca1370da". Ready: true. Unschedulable: false
I0904 16:11:23.908056   31343 utils.go:231] Node "80076895-9615-40cc-9535-18c66ad0cef2". Ready: true. Unschedulable: false
I0904 16:11:23.908061   31343 utils.go:231] Node "ddd54ef8-8cf1-4b6c-bf00-32919eabf62c". Ready: true. Unschedulable: false
I0904 16:11:23.908066   31343 utils.go:231] Node "e2b95e4f-f485-4be2-b168-06db1dc1c517". Ready: true. Unschedulable: true
I0904 16:11:23.908071   31343 utils.go:231] Node "minikube". Ready: true. Unschedulable: false
I0904 16:11:23.910849   31343 utils.go:87] Cluster size is 6 nodes
I0904 16:11:28.902028   31343 utils.go:239] [remaining 14m40s] Cluster size expected to be 5 nodes
I0904 16:11:28.904909   31343 utils.go:99] MachineSet "kubemark-actuator-testing-machineset" replicas 1. Ready: 1, available 1
I0904 16:11:28.904934   31343 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-blue" replicas 1. Ready: 1, available 1
I0904 16:11:28.904943   31343 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-green" replicas 1. Ready: 1, available 1
I0904 16:11:28.904952   31343 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-red" replicas 1. Ready: 1, available 1
I0904 16:11:28.907621   31343 utils.go:231] Node "3f04fdee-11f3-437a-ba46-09f9e9ced5e1". Ready: true. Unschedulable: false
I0904 16:11:28.907641   31343 utils.go:231] Node "7ef1e50f-8bcf-499a-afb4-fc2aca1370da". Ready: true. Unschedulable: false
I0904 16:11:28.907646   31343 utils.go:231] Node "80076895-9615-40cc-9535-18c66ad0cef2". Ready: true. Unschedulable: false
I0904 16:11:28.907651   31343 utils.go:231] Node "ddd54ef8-8cf1-4b6c-bf00-32919eabf62c". Ready: true. Unschedulable: false
I0904 16:11:28.907656   31343 utils.go:231] Node "minikube". Ready: true. Unschedulable: false
I0904 16:11:28.910148   31343 utils.go:87] Cluster size is 5 nodes
I0904 16:11:28.910176   31343 utils.go:257] waiting for all nodes to be ready
I0904 16:11:28.912858   31343 utils.go:262] waiting for all nodes to be schedulable
I0904 16:11:28.915441   31343 utils.go:290] [remaining 1m0s] Node "3f04fdee-11f3-437a-ba46-09f9e9ced5e1" is schedulable
I0904 16:11:28.915473   31343 utils.go:290] [remaining 1m0s] Node "7ef1e50f-8bcf-499a-afb4-fc2aca1370da" is schedulable
I0904 16:11:28.915485   31343 utils.go:290] [remaining 1m0s] Node "80076895-9615-40cc-9535-18c66ad0cef2" is schedulable
I0904 16:11:28.915495   31343 utils.go:290] [remaining 1m0s] Node "ddd54ef8-8cf1-4b6c-bf00-32919eabf62c" is schedulable
I0904 16:11:28.915505   31343 utils.go:290] [remaining 1m0s] Node "minikube" is schedulable
I0904 16:11:28.915520   31343 utils.go:267] waiting for each node to be backed by a machine
I0904 16:11:28.922648   31343 utils.go:47] [remaining 3m0s] Expecting the same number of machines and nodes, have 5 nodes and 5 machines
I0904 16:11:28.922677   31343 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-blue-8mk44" is linked to node "ddd54ef8-8cf1-4b6c-bf00-32919eabf62c"
I0904 16:11:28.922693   31343 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-green-zrxlb" is linked to node "7ef1e50f-8bcf-499a-afb4-fc2aca1370da"
I0904 16:11:28.922707   31343 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-red-566rv" is linked to node "3f04fdee-11f3-437a-ba46-09f9e9ced5e1"
I0904 16:11:28.922729   31343 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-zml5d" is linked to node "80076895-9615-40cc-9535-18c66ad0cef2"
I0904 16:11:28.922742   31343 utils.go:70] [remaining 3m0s] Machine "minikube-static-machine" is linked to node "minikube"
I0904 16:11:28.934139   31343 utils.go:378] [15m0s remaining] Found 0 nodes with map[node-role.kubernetes.io/worker: node-draining-test:33644a25-cf2e-11e9-82ed-0a1de5b610ea] label as expected
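The size and linkage checks above can be approximated from the command line. A sketch, assuming the machine objects live in kube-system (the suite's NAMESPACE) and expose status.nodeRef; the commands are illustrative, not what the test itself runs:

```bash
# List machine -> node links, as in "waiting for each node to be backed
# by a machine".
kubectl get machines -n kube-system \
  -o jsonpath='{range .items[*]}{.metadata.name}{" -> "}{.status.nodeRef.name}{"\n"}{end}'

# The test also expects machine and node counts to match (5 and 5 here).
machines=$(kubectl get machines -n kube-system --no-headers | wc -l)
nodes=$(kubectl get nodes --no-headers | wc -l)
[ "$machines" -eq "$nodes" ] && echo "counts match: $machines"
```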

• [SLOW TEST:85.180 seconds]
[Feature:Machines] Managed cluster should
/data/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/infra/infra.go:126
  drain node before removing machine resource
  /data/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/infra/infra.go:346
------------------------------
[Feature:Machines] Managed cluster should 
  reject invalid machinesets
  /data/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/infra/infra.go:487
I0904 16:11:28.934257   31343 framework.go:406] >>> kubeConfig: /root/.kube/config
STEP: Creating invalid machineset
STEP: Waiting for ReconcileError MachineSet event
I0904 16:11:28.990980   31343 infra.go:506] Fetching ReconcileError MachineSet invalid-machineset event
I0904 16:11:34.021999   31343 infra.go:506] Fetching ReconcileError MachineSet invalid-machineset event
I0904 16:11:34.022036   31343 infra.go:512] Found ReconcileError event for "invalid-machineset" machine set with the following message: "invalid-machineset" machineset validation failed: spec.template.metadata.labels: Invalid value: map[string]string{"big-kitty":"i-am-bit-kitty"}: `selector` does not match template `labels`
STEP: Verify no machines from "invalid-machineset" machineset were created
I0904 16:11:34.024915   31343 infra.go:528] Have 0 machines generated from "invalid-machineset" machineset
STEP: Deleting invalid machineset

• [SLOW TEST:5.094 seconds]
[Feature:Machines] Managed cluster should
/data/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/infra/infra.go:126
  reject invalid machinesets
  /data/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/infra/infra.go:487
------------------------------
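The rejected object in the spec above pairs a selector with non-matching template labels. A hedged reconstruction of such a MachineSet; the apiVersion, namespace, and selector value are illustrative, and only the big-kitty label is taken from the event message in the log:

```bash
cat <<'EOF' | kubectl apply -n kube-system -f -
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: invalid-machineset
spec:
  replicas: 1
  selector:
    matchLabels:
      little-kitty: i-am-little-kitty    # illustrative selector
  template:
    metadata:
      labels:
        big-kitty: i-am-bit-kitty        # does not match the selector
    spec: {}
EOF

# The controller creates no machines and instead emits the event seen above:
kubectl get events -n kube-system --field-selector reason=ReconcileError
```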

Ran 7 of 16 Specs in 199.110 seconds
SUCCESS! -- 7 Passed | 0 Failed | 0 Pending | 9 Skipped
--- PASS: TestE2E (199.11s)
PASS
ok  	github.com/openshift/cluster-api-actuator-pkg/pkg/e2e	199.156s
+ set +o xtrace
########## FINISHED STAGE: SUCCESS: RUN E2E TESTS [00h 04m 19s] ##########
[PostBuildScript] - Executing post build scripts.
[workspace] $ /bin/bash /tmp/jenkins5867762640308533448.sh
########## STARTING STAGE: DOWNLOAD ARTIFACTS FROM THE REMOTE HOST ##########
+ [[ -s /var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/activate ]]
+ source /var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed
++ export PATH=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/.config
+ trap 'exit 0' EXIT
++ pwd
+ ARTIFACT_DIR=/var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/artifacts/gathered
+ rm -rf /var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/artifacts/gathered
+ mkdir -p /var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/artifacts/gathered
+ tree /var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/artifacts/gathered
/var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/artifacts/gathered

0 directories, 0 files
+ exit 0
[workspace] $ /bin/bash /tmp/jenkins9210298648789660748.sh
########## STARTING STAGE: GENERATE ARTIFACTS FROM THE REMOTE HOST ##########
+ [[ -s /var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/activate ]]
+ source /var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed
++ export PATH=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/.config
+ trap 'exit 0' EXIT
++ pwd
+ ARTIFACT_DIR=/var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/artifacts/generated
+ rm -rf /var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/artifacts/generated
+ mkdir /var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/artifacts/generated
+ ssh -F /var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo docker version && sudo docker info && sudo docker images && sudo docker ps -a 2>&1'
  WARNING: You're not using the default seccomp profile
+ ssh -F /var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo cat /etc/sysconfig/docker /etc/sysconfig/docker-network /etc/sysconfig/docker-storage /etc/sysconfig/docker-storage-setup /etc/systemd/system/docker.service 2>&1'
+ true
+ ssh -F /var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo find /var/lib/docker/containers -name *.log | sudo xargs tail -vn +1 2>&1'
+ ssh -F /var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo ausearch -m AVC -m SELINUX_ERR -m USER_AVC 2>&1'
+ true
+ ssh -F /var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo df -T -h && sudo pvs && sudo vgs && sudo lvs && sudo findmnt --all 2>&1'
+ ssh -F /var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo yum list installed 2>&1'
+ ssh -F /var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo journalctl --dmesg --no-pager --all --lines=all 2>&1'
+ ssh -F /var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo journalctl _PID=1 --no-pager --all --lines=all 2>&1'
+ tree /var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/artifacts/generated
/var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/artifacts/generated
├── avc_denials.log
├── containers.log
├── dmesg.log
├── docker.config
├── docker.info
├── filesystem.info
├── installed_packages.log
└── pid1.journal

0 directories, 8 files
+ exit 0
[workspace] $ /bin/bash /tmp/jenkins8848910018268914327.sh
########## STARTING STAGE: FETCH SYSTEMD JOURNALS FROM THE REMOTE HOST ##########
+ [[ -s /var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/activate ]]
+ source /var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed
++ export PATH=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/.config
+ trap 'exit 0' EXIT
++ pwd
+ ARTIFACT_DIR=/var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/artifacts/journals
+ rm -rf /var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/artifacts/journals
+ mkdir /var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/artifacts/journals
+ ssh -F /var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit docker.service --no-pager --all --lines=all
+ ssh -F /var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit dnsmasq.service --no-pager --all --lines=all
+ ssh -F /var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit systemd-journald.service --no-pager --all --lines=all
+ tree /var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/artifacts/journals
/var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/artifacts/journals
├── dnsmasq.service
├── docker.service
└── systemd-journald.service

0 directories, 3 files
+ exit 0
[workspace] $ /bin/bash /tmp/jenkins2441593752441850400.sh
########## STARTING STAGE: ASSEMBLE GCS OUTPUT ##########
+ [[ -s /var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/activate ]]
+ source /var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed
++ export PATH=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/.config
+ trap 'exit 0' EXIT
+ mkdir -p gcs/artifacts gcs/artifacts/generated gcs/artifacts/journals gcs/artifacts/gathered
++ python -c 'import json; import urllib; print json.load(urllib.urlopen('\''https://ci.openshift.redhat.com/jenkins/job/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/548/api/json'\''))['\''result'\'']'
+ result=SUCCESS
+ cat
++ date +%s
+ cat /var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/builds/548/log
+ cp artifacts/generated/avc_denials.log artifacts/generated/containers.log artifacts/generated/dmesg.log artifacts/generated/docker.config artifacts/generated/docker.info artifacts/generated/filesystem.info artifacts/generated/installed_packages.log artifacts/generated/pid1.journal gcs/artifacts/generated/
+ cp artifacts/journals/dnsmasq.service artifacts/journals/docker.service artifacts/journals/systemd-journald.service gcs/artifacts/journals/
+ cp -r 'artifacts/gathered/*' gcs/artifacts/
cp: cannot stat ‘artifacts/gathered/*’: No such file or directory
++ export status=FAILURE
++ status=FAILURE
+ exit 0
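Two things go wrong in the stage above: the glob is single-quoted, so the shell never expands artifacts/gathered/* and cp receives the literal string, and the directory is empty anyway (the earlier tree output shows 0 files). The resulting status=FAILURE is then discarded by the trap 'exit 0' EXIT. A sketch of a more robust copy:

```bash
# Expand the glob unquoted; nullglob makes an empty directory yield an
# empty array instead of the literal pattern.
shopt -s nullglob
files=(artifacts/gathered/*)
if [ "${#files[@]}" -gt 0 ]; then
  cp -r "${files[@]}" gcs/artifacts/
else
  echo "nothing gathered; skipping copy"
fi
```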
[workspace] $ /bin/bash /tmp/jenkins7806775163298608468.sh
########## STARTING STAGE: PUSH THE ARTIFACTS AND METADATA ##########
+ [[ -s /var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/activate ]]
+ source /var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed
++ export PATH=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/.config
++ mktemp
+ script=/tmp/tmp.h6ZAPMH85u
+ cat
+ chmod +x /tmp/tmp.h6ZAPMH85u
+ scp -F /var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/.config/origin-ci-tool/inventory/.ssh_config /tmp/tmp.h6ZAPMH85u openshiftdevel:/tmp/tmp.h6ZAPMH85u
+ ssh -F /var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/.config/origin-ci-tool/inventory/.ssh_config -t openshiftdevel 'bash -l -c "timeout 300 /tmp/tmp.h6ZAPMH85u"'
+ cd /home/origin
+ trap 'exit 0' EXIT
+ [[ -n {"type":"presubmit","job":"pull-ci-openshift-cluster-api-actuator-pkg-master-e2e","buildid":"1169277617493250048","prowjobid":"47aee29c-cf2c-11e9-bd8d-0a58ac102ebf","refs":{"org":"openshift","repo":"cluster-api-actuator-pkg","repo_link":"https://github.com/openshift/cluster-api-actuator-pkg","base_ref":"master","base_sha":"b3dad42ded9cf0288809ca2cef3311c06339e749","base_link":"https://github.com/openshift/cluster-api-actuator-pkg/commit/b3dad42ded9cf0288809ca2cef3311c06339e749","pulls":[{"number":114,"author":"frobware","sha":"801a89559c2b745e456c44495e502136bfd3391b","link":"https://github.com/openshift/cluster-api-actuator-pkg/pull/114","commit_link":"https://github.com/openshift/cluster-api-actuator-pkg/pull/114/commits/801a89559c2b745e456c44495e502136bfd3391b","author_link":"https://github.com/frobware"}]}} ]]
++ jq --compact-output '.buildid |= "548"'
+ JOB_SPEC='{"type":"presubmit","job":"pull-ci-openshift-cluster-api-actuator-pkg-master-e2e","buildid":"548","prowjobid":"47aee29c-cf2c-11e9-bd8d-0a58ac102ebf","refs":{"org":"openshift","repo":"cluster-api-actuator-pkg","repo_link":"https://github.com/openshift/cluster-api-actuator-pkg","base_ref":"master","base_sha":"b3dad42ded9cf0288809ca2cef3311c06339e749","base_link":"https://github.com/openshift/cluster-api-actuator-pkg/commit/b3dad42ded9cf0288809ca2cef3311c06339e749","pulls":[{"number":114,"author":"frobware","sha":"801a89559c2b745e456c44495e502136bfd3391b","link":"https://github.com/openshift/cluster-api-actuator-pkg/pull/114","commit_link":"https://github.com/openshift/cluster-api-actuator-pkg/pull/114/commits/801a89559c2b745e456c44495e502136bfd3391b","author_link":"https://github.com/frobware"}]}}'
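The two lines above rewrite the Prow JOB_SPEC so its buildid matches the Jenkins build number (548), which keeps the GCS upload paths consistent with where the build log is filed. In isolation the transformation looks like this (the JSON is trimmed for brevity):

```bash
JOB_SPEC='{"type":"presubmit","buildid":"1169277617493250048"}'
JOB_SPEC=$(echo "$JOB_SPEC" | jq --compact-output '.buildid |= "548"')
echo "$JOB_SPEC"   # {"type":"presubmit","buildid":"548"}
```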
+ docker run -e 'JOB_SPEC={"type":"presubmit","job":"pull-ci-openshift-cluster-api-actuator-pkg-master-e2e","buildid":"548","prowjobid":"47aee29c-cf2c-11e9-bd8d-0a58ac102ebf","refs":{"org":"openshift","repo":"cluster-api-actuator-pkg","repo_link":"https://github.com/openshift/cluster-api-actuator-pkg","base_ref":"master","base_sha":"b3dad42ded9cf0288809ca2cef3311c06339e749","base_link":"https://github.com/openshift/cluster-api-actuator-pkg/commit/b3dad42ded9cf0288809ca2cef3311c06339e749","pulls":[{"number":114,"author":"frobware","sha":"801a89559c2b745e456c44495e502136bfd3391b","link":"https://github.com/openshift/cluster-api-actuator-pkg/pull/114","commit_link":"https://github.com/openshift/cluster-api-actuator-pkg/pull/114/commits/801a89559c2b745e456c44495e502136bfd3391b","author_link":"https://github.com/frobware"}]}}' -v /data:/data:z registry.svc.ci.openshift.org/ci/gcsupload:latest --dry-run=false --gcs-path=gs://origin-ci-test --gcs-credentials-file=/data/credentials.json --path-strategy=single --default-org=openshift --default-repo=origin '/data/gcs/*'
Unable to find image 'registry.svc.ci.openshift.org/ci/gcsupload:latest' locally
Trying to pull repository registry.svc.ci.openshift.org/ci/gcsupload ... 
latest: Pulling from registry.svc.ci.openshift.org/ci/gcsupload
a073c86ecf9e: Already exists
cc3fc741b1a9: Already exists
822bed51ba40: Pulling fs layer
85cea451eec0: Pulling fs layer
85cea451eec0: Verifying Checksum
85cea451eec0: Download complete
822bed51ba40: Verifying Checksum
822bed51ba40: Download complete
822bed51ba40: Pull complete
85cea451eec0: Pull complete
Digest: sha256:03aad50d7ec631ee07c12ac2ba679bd48c7781f7d5754f9e0dcc4e7260e35208
Status: Downloaded newer image for registry.svc.ci.openshift.org/ci/gcsupload:latest
{"component":"gcsupload","file":"prow/gcsupload/run.go:107","func":"k8s.io/test-infra/prow/gcsupload.Options.assembleTargets","level":"warning","msg":"Encountered error in resolving items to upload for /data/gcs/*: stat /data/gcs/*: no such file or directory","time":"2019-09-04T16:11:56Z"}
{"component":"gcsupload","dest":"pr-logs/directory/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/548.txt","file":"prow/pod-utils/gcs/upload.go:64","func":"k8s.io/test-infra/prow/pod-utils/gcs.upload","level":"info","msg":"Queued for upload","time":"2019-09-04T16:11:56Z"}
{"component":"gcsupload","dest":"pr-logs/directory/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/latest-build.txt","file":"prow/pod-utils/gcs/upload.go:64","func":"k8s.io/test-infra/prow/pod-utils/gcs.upload","level":"info","msg":"Queued for upload","time":"2019-09-04T16:11:56Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_cluster-api-actuator-pkg/114/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/latest-build.txt","file":"prow/pod-utils/gcs/upload.go:64","func":"k8s.io/test-infra/prow/pod-utils/gcs.upload","level":"info","msg":"Queued for upload","time":"2019-09-04T16:11:56Z"}
{"component":"gcsupload","dest":"pr-logs/directory/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/548.txt","file":"prow/pod-utils/gcs/upload.go:70","func":"k8s.io/test-infra/prow/pod-utils/gcs.upload.func1","level":"info","msg":"Finished upload","time":"2019-09-04T16:11:56Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_cluster-api-actuator-pkg/114/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/latest-build.txt","file":"prow/pod-utils/gcs/upload.go:70","func":"k8s.io/test-infra/prow/pod-utils/gcs.upload.func1","level":"info","msg":"Finished upload","time":"2019-09-04T16:11:56Z"}
{"component":"gcsupload","dest":"pr-logs/directory/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/latest-build.txt","file":"prow/pod-utils/gcs/upload.go:70","func":"k8s.io/test-infra/prow/pod-utils/gcs.upload.func1","level":"info","msg":"Finished upload","time":"2019-09-04T16:11:56Z"}
{"component":"gcsupload","file":"prow/gcsupload/run.go:65","func":"k8s.io/test-infra/prow/gcsupload.Options.Run","level":"info","msg":"Finished upload to GCS","time":"2019-09-04T16:11:56Z"}
+ exit 0
+ set +o xtrace
########## FINISHED STAGE: SUCCESS: PUSH THE ARTIFACTS AND METADATA [00h 00m 06s] ##########
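Note the warning at the top of the gcsupload output: /data/gcs/* resolved to nothing on the remote host, so only the build log and the latest-build.txt markers were uploaded. Assuming gsutil access to the bucket, the result can be inspected directly; the path mirrors the upload destinations in the log:

```bash
gsutil ls gs://origin-ci-test/pr-logs/directory/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/
```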
[workspace] $ /bin/bash /tmp/jenkins7542035323342997284.sh
########## STARTING STAGE: DEPROVISION CLOUD RESOURCES ##########
+ [[ -s /var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/activate ]]
+ source /var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed
++ export PATH=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/.config
+ oct deprovision

PLAYBOOK: main.yml *************************************************************
4 plays in /var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml

PLAY [ensure we have the parameters necessary to deprovision virtual hosts] ****

TASK [ensure all required variables are set] ***********************************
task path: /var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml:9
skipping: [localhost] => (item=origin_ci_inventory_dir)  => {
    "changed": false, 
    "generated_timestamp": "2019-09-04 12:11:57.949766", 
    "item": "origin_ci_inventory_dir", 
    "skip_reason": "Conditional check failed", 
    "skipped": true
}
skipping: [localhost] => (item=origin_ci_aws_region)  => {
    "changed": false, 
    "generated_timestamp": "2019-09-04 12:11:57.952962", 
    "item": "origin_ci_aws_region", 
    "skip_reason": "Conditional check failed", 
    "skipped": true
}

PLAY [deprovision virtual hosts in EC2] ****************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [deprovision a virtual EC2 host] ******************************************
task path: /var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml:28
included: /var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml for localhost

TASK [update the SSH configuration to remove AWS EC2 specifics] ****************
task path: /var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:2
ok: [localhost] => {
    "changed": false, 
    "generated_timestamp": "2019-09-04 12:11:58.793986", 
    "msg": ""
}

TASK [rename EC2 instance for termination reaper] ******************************
task path: /var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:8
changed: [localhost] => {
    "changed": true, 
    "generated_timestamp": "2019-09-04 12:11:59.480704", 
    "msg": "Tags {'Name': 'oct-terminate'} created for resource i-08faf914cd4493369."
}

TASK [tear down the EC2 instance] **********************************************
task path: /var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:15
changed: [localhost] => {
    "changed": true, 
    "generated_timestamp": "2019-09-04 12:12:00.385070", 
    "instance_ids": [
        "i-08faf914cd4493369"
    ], 
    "instances": [
        {
            "ami_launch_index": "0", 
            "architecture": "x86_64", 
            "block_device_mapping": {
                "/dev/sda1": {
                    "delete_on_termination": true, 
                    "status": "attached", 
                    "volume_id": "vol-00985a53f235b4484"
                }, 
                "/dev/sdb": {
                    "delete_on_termination": true, 
                    "status": "attached", 
                    "volume_id": "vol-02c3588bcfd4d1e4f"
                }
            }, 
            "dns_name": "ec2-3-95-66-137.compute-1.amazonaws.com", 
            "ebs_optimized": false, 
            "groups": {
                "sg-7e73221a": "default"
            }, 
            "hypervisor": "xen", 
            "id": "i-08faf914cd4493369", 
            "image_id": "ami-0b77b87a37c3e662c", 
            "instance_type": "m4.xlarge", 
            "kernel": null, 
            "key_name": "libra", 
            "launch_time": "2019-09-04T15:55:05.000Z", 
            "placement": "us-east-1c", 
            "private_dns_name": "ip-172-18-25-165.ec2.internal", 
            "private_ip": "172.18.25.165", 
            "public_dns_name": "ec2-3-95-66-137.compute-1.amazonaws.com", 
            "public_ip": "3.95.66.137", 
            "ramdisk": null, 
            "region": "us-east-1", 
            "root_device_name": "/dev/sda1", 
            "root_device_type": "ebs", 
            "state": "running", 
            "state_code": 16, 
            "tags": {
                "Name": "oct-terminate", 
                "openshift_etcd": "", 
                "openshift_master": "", 
                "openshift_node": ""
            }, 
            "tenancy": "default", 
            "virtualization_type": "hvm"
        }
    ], 
    "tagged_instances": []
}

TASK [remove the serialized host variables] ************************************
task path: /var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:22
changed: [localhost] => {
    "changed": true, 
    "generated_timestamp": "2019-09-04 12:12:00.631641", 
    "path": "/var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/.config/origin-ci-tool/inventory/host_vars/172.18.25.165.yml", 
    "state": "absent"
}

PLAY [deprovision virtual hosts locally managed by Vagrant] ********************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

PLAY [clean up local configuration for deprovisioned instances] ****************

TASK [remove inventory configuration directory] ********************************
task path: /var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml:61
changed: [localhost] => {
    "changed": true, 
    "generated_timestamp": "2019-09-04 12:12:01.147466", 
    "path": "/var/lib/jenkins/jobs/pull-ci-openshift-cluster-api-actuator-pkg-master-e2e/workspace/.config/origin-ci-tool/inventory", 
    "state": "absent"
}

PLAY RECAP *********************************************************************
localhost                  : ok=8    changed=4    unreachable=0    failed=0   
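The aws-down role's two EC2 steps, retagging the instance for the termination reaper and then tearing it down, map onto plain AWS CLI calls. A sketch using the instance ID and region from the log; these commands are illustrative, not what oct executes internally:

```bash
aws ec2 create-tags --region us-east-1 \
  --resources i-08faf914cd4493369 --tags Key=Name,Value=oct-terminate
aws ec2 terminate-instances --region us-east-1 \
  --instance-ids i-08faf914cd4493369
```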

+ set +o xtrace
########## FINISHED STAGE: SUCCESS: DEPROVISION CLOUD RESOURCES [00h 00m 05s] ##########
Archiving artifacts
Recording test results
[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] done
Finished: SUCCESS