Console Output (build: Success)

[Skipping 184 KB of log output]
  /data/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/operators/machine-api-operator.go:25
I0906 15:51:12.620467   30802 framework.go:406] >>> kubeConfig: /root/.kube/config
STEP: checking deployment "machine-api-controllers" is available
I0906 15:51:12.633404   30802 deloyment.go:58] Deployment "machine-api-controllers" is available. Status: (replicas: 1, updated: 1, ready: 1, available: 1, unavailable: 0)
STEP: deleting deployment "machine-api-controllers"
STEP: checking deployment "machine-api-controllers" is available again
E0906 15:51:12.642533   30802 deloyment.go:25] Error querying api for Deployment object "machine-api-controllers": deployments.apps "machine-api-controllers" not found, retrying...
E0906 15:51:13.645429   30802 deloyment.go:55] Deployment "machine-api-controllers" is not available. Status: (replicas: 1, updated: 1, ready: 0, available: 0, unavailable: 1)
I0906 15:51:14.652796   30802 deloyment.go:58] Deployment "machine-api-controllers" is available. Status: (replicas: 1, updated: 1, ready: 1, available: 1, unavailable: 0)
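The availability check above (deloyment.go:58) passes once the Deployment's status counters match its spec, which is how the suite confirms the machine-api-operator recreated the deleted "machine-api-controllers" deployment. A sketch of the standard apps/v1 status stanza the check reads, with values mirroring the log line above:

  # apps/v1 Deployment status fields compared by the e2e check
  status:
    replicas: 1            # total replicas created
    updatedReplicas: 1     # replicas at the latest template revision
    readyReplicas: 1       # replicas passing readiness probes
    availableReplicas: 1   # ready for at least minReadySeconds
    unavailableReplicas: 0 # replicas still missing
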
•
------------------------------
[Feature:Operators] Cluster autoscaler cluster operator status should 
  be available
  /data/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/operators/cluster-autoscaler-operator.go:90
I0906 15:51:14.652873   30802 framework.go:406] >>> kubeConfig: /root/.kube/config
•
------------------------------
[Feature:Operators] Machine API cluster operator status should 
  be available
  /data/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/operators/machine-api-operator.go:53
I0906 15:51:14.670509   30802 framework.go:406] >>> kubeConfig: /root/.kube/config
•
Ran 7 of 16 Specs in 2.182 seconds
SUCCESS! -- 7 Passed | 0 Failed | 0 Pending | 9 Skipped
--- PASS: TestE2E (2.18s)
PASS
ok  	github.com/openshift/cluster-api-actuator-pkg/pkg/e2e	2.240s
NAMESPACE=kube-system hack/ci-integration.sh  -ginkgo.v -ginkgo.noColor=true -ginkgo.skip "Feature:Operators|TechPreview" -ginkgo.failFast -ginkgo.seed=1
=== RUN   TestE2E
Running Suite: Machine Suite
============================
Random Seed: 1
Will run 7 of 16 specs

SSSSSSSS
------------------------------
[Feature:Machines] Autoscaler should 
  scale up and down
  /data/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/autoscaler/autoscaler.go:234
I0906 15:51:17.798305   31205 framework.go:406] >>> kubeConfig: /root/.kube/config
I0906 15:51:17.803136   31205 framework.go:406] >>> kubeConfig: /root/.kube/config
I0906 15:51:17.826238   31205 framework.go:406] >>> kubeConfig: /root/.kube/config
STEP: Getting existing machinesets
STEP: Getting existing machines
STEP: Getting existing nodes
I0906 15:51:17.839593   31205 autoscaler.go:286] Have 4 existing machinesets
I0906 15:51:17.839616   31205 autoscaler.go:287] Have 5 existing machines
I0906 15:51:17.839622   31205 autoscaler.go:288] Have 5 existing nodes
STEP: Creating 3 transient machinesets
STEP: [15m0s remaining] Waiting for nodes to be Ready in 3 transient machinesets
E0906 15:51:17.862011   31205 utils.go:157] Machine "e2e-29fe2-w-0-wjq5z" has no NodeRef
STEP: [14m57s remaining] Waiting for nodes to be Ready in 3 transient machinesets
I0906 15:51:20.871746   31205 utils.go:165] Machine "e2e-29fe2-w-0-wjq5z" is backing node "baf0662c-c75d-4915-b585-168375d3e877"
I0906 15:51:20.871770   31205 utils.go:149] MachineSet "e2e-29fe2-w-0" have 1 nodes
E0906 15:51:20.875721   31205 utils.go:157] Machine "e2e-29fe2-w-1-zh8bl" has no NodeRef
STEP: [14m54s remaining] Waiting for nodes to be Ready in 3 transient machinesets
I0906 15:51:23.882205   31205 utils.go:165] Machine "e2e-29fe2-w-0-wjq5z" is backing node "baf0662c-c75d-4915-b585-168375d3e877"
I0906 15:51:23.882235   31205 utils.go:149] MachineSet "e2e-29fe2-w-0" have 1 nodes
I0906 15:51:23.887807   31205 utils.go:165] Machine "e2e-29fe2-w-1-zh8bl" is backing node "3f5c1c24-bdd4-4aab-aaa3-62c71422f580"
I0906 15:51:23.887829   31205 utils.go:149] MachineSet "e2e-29fe2-w-1" have 1 nodes
I0906 15:51:23.892864   31205 utils.go:165] Machine "e2e-29fe2-w-2-6xt9z" is backing node "c94b17dc-568c-45c5-a83d-8a40e6ef47d2"
I0906 15:51:23.892887   31205 utils.go:149] MachineSet "e2e-29fe2-w-2" have 1 nodes
I0906 15:51:23.892895   31205 utils.go:177] Node "baf0662c-c75d-4915-b585-168375d3e877" is ready. Conditions are: [{OutOfDisk False 2019-09-06 15:51:22 +0000 UTC 2019-09-06 15:51:20 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2019-09-06 15:51:22 +0000 UTC 2019-09-06 15:51:20 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-09-06 15:51:22 +0000 UTC 2019-09-06 15:51:20 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-09-06 15:51:22 +0000 UTC 2019-09-06 15:51:20 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-09-06 15:51:22 +0000 UTC 2019-09-06 15:51:20 +0000 UTC KubeletReady kubelet is posting ready status}]
I0906 15:51:23.892953   31205 utils.go:177] Node "3f5c1c24-bdd4-4aab-aaa3-62c71422f580" is ready. Conditions are: [{OutOfDisk False 2019-09-06 15:51:22 +0000 UTC 2019-09-06 15:51:22 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2019-09-06 15:51:22 +0000 UTC 2019-09-06 15:51:22 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-09-06 15:51:22 +0000 UTC 2019-09-06 15:51:22 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-09-06 15:51:22 +0000 UTC 2019-09-06 15:51:22 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-09-06 15:51:22 +0000 UTC 2019-09-06 15:51:22 +0000 UTC KubeletReady kubelet is posting ready status}]
I0906 15:51:23.893071   31205 utils.go:177] Node "c94b17dc-568c-45c5-a83d-8a40e6ef47d2" is ready. Conditions are: [{OutOfDisk False 2019-09-06 15:51:23 +0000 UTC 2019-09-06 15:51:21 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2019-09-06 15:51:23 +0000 UTC 2019-09-06 15:51:21 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-09-06 15:51:23 +0000 UTC 2019-09-06 15:51:21 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-09-06 15:51:23 +0000 UTC 2019-09-06 15:51:21 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-09-06 15:51:23 +0000 UTC 2019-09-06 15:51:21 +0000 UTC KubeletReady kubelet is posting ready status}]
STEP: Getting nodes
STEP: Creating 3 machineautoscalers
I0906 15:51:23.896153   31205 autoscaler.go:340] Create MachineAutoscaler backed by MachineSet kube-system/e2e-29fe2-w-0 - min:1, max:2
I0906 15:51:23.903413   31205 autoscaler.go:340] Create MachineAutoscaler backed by MachineSet kube-system/e2e-29fe2-w-1 - min:1, max:2
I0906 15:51:23.907148   31205 autoscaler.go:340] Create MachineAutoscaler backed by MachineSet kube-system/e2e-29fe2-w-2 - min:1, max:2
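Each MachineAutoscaler created above pins one MachineSet to a min/max replica range via the autoscaling.openshift.io/v1beta1 API. A minimal sketch of the first resource, with names and namespace taken from the log (the suite actually uses a generated name, as the cleanup lines later show, so the exact manifest may differ):

  apiVersion: autoscaling.openshift.io/v1beta1
  kind: MachineAutoscaler
  metadata:
    name: autoscale-e2e-29fe2-w-0   # suite generates a suffixed name
    namespace: kube-system
  spec:
    minReplicas: 1
    maxReplicas: 2
    scaleTargetRef:
      apiVersion: machine.openshift.io/v1beta1
      kind: MachineSet
      name: e2e-29fe2-w-0
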
STEP: Creating ClusterAutoscaler configured with maxNodesTotal:10
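The ClusterAutoscaler resource caps total cluster size, which is what later produces the "Max total nodes in cluster reached: 10" event. A sketch of the autoscaling.openshift.io/v1 resource, assuming only the maxNodesTotal limit named in the log is set (the cleanup line later confirms the resource is named "default", the only name the operator reconciles):

  apiVersion: autoscaling.openshift.io/v1
  kind: ClusterAutoscaler
  metadata:
    name: default          # the operator only reconciles "default"
  spec:
    resourceLimits:
      maxNodesTotal: 10    # caps scale-up at 10 nodes cluster-wide
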
STEP: Deriving Memory capacity from machine "kubemark-actuator-testing-machineset"
I0906 15:51:24.021565   31205 autoscaler.go:377] Memory capacity of worker node "13421fc4-81a1-45e2-b819-ab55a5e6869c" is 3840Mi
STEP: Creating scale-out workload: jobs: 11, memory: 2818572300
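The scale-out workload is a batch Job whose pods each request more memory than the remaining free capacity, forcing the autoscaler to add nodes. A rough sketch under the assumption that "jobs: 11" maps to parallelism and that the pod template is a simple sleeper (only the name, the count, and the byte-exact memory request come from the log; the rest is illustrative):

  apiVersion: batch/v1
  kind: Job
  metadata:
    name: e2e-autoscaler-workload
    namespace: kube-system
  spec:
    parallelism: 11                    # "jobs: 11" in the log
    template:
      spec:
        restartPolicy: Never
        containers:
        - name: work
          image: busybox
          command: ["sleep", "300"]
          resources:
            requests:
              memory: "2818572300"     # ~2.6Gi, derived from the 3840Mi node capacity above
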
I0906 15:51:24.053685   31205 autoscaler.go:399] [15m0s remaining] Expecting 2 "ScaledUpGroup" events; observed 0
I0906 15:51:25.011725   31205 autoscaler.go:361] cluster-autoscaler: cluster-autoscaler-default-598c649f66-ffktq became leader
I0906 15:51:27.053906   31205 autoscaler.go:399] [14m57s remaining] Expecting 2 "ScaledUpGroup" events; observed 0
I0906 15:51:30.054076   31205 autoscaler.go:399] [14m54s remaining] Expecting 2 "ScaledUpGroup" events; observed 0
I0906 15:51:33.054337   31205 autoscaler.go:399] [14m51s remaining] Expecting 2 "ScaledUpGroup" events; observed 0
I0906 15:51:35.157265   31205 autoscaler.go:361] cluster-autoscaler-status: Max total nodes in cluster reached: 10
I0906 15:51:35.160626   31205 autoscaler.go:361] cluster-autoscaler-status: Scale-up: setting group kube-system/e2e-29fe2-w-1 size to 2
I0906 15:51:35.168294   31205 autoscaler.go:361] e2e-autoscaler-workload-vw6n7: pod triggered scale-up: [{kube-system/e2e-29fe2-w-1 1->2 (max: 2)}]
I0906 15:51:35.170642   31205 autoscaler.go:361] cluster-autoscaler-status: Scale-up: group kube-system/e2e-29fe2-w-1 size set to 2
I0906 15:51:35.175691   31205 autoscaler.go:361] e2e-autoscaler-workload-xgx9k: pod triggered scale-up: [{kube-system/e2e-29fe2-w-1 1->2 (max: 2)}]
I0906 15:51:35.178323   31205 autoscaler.go:361] e2e-autoscaler-workload-b5g5t: pod triggered scale-up: [{kube-system/e2e-29fe2-w-1 1->2 (max: 2)}]
I0906 15:51:35.189429   31205 autoscaler.go:361] e2e-autoscaler-workload-wlk77: pod triggered scale-up: [{kube-system/e2e-29fe2-w-1 1->2 (max: 2)}]
I0906 15:51:35.192968   31205 autoscaler.go:361] e2e-autoscaler-workload-sf2vz: pod triggered scale-up: [{kube-system/e2e-29fe2-w-1 1->2 (max: 2)}]
I0906 15:51:35.199674   31205 autoscaler.go:361] e2e-autoscaler-workload-xpml9: pod triggered scale-up: [{kube-system/e2e-29fe2-w-1 1->2 (max: 2)}]
I0906 15:51:35.204102   31205 autoscaler.go:361] e2e-autoscaler-workload-f4ql7: pod triggered scale-up: [{kube-system/e2e-29fe2-w-1 1->2 (max: 2)}]
I0906 15:51:35.357804   31205 autoscaler.go:361] e2e-autoscaler-workload-nndtb: pod triggered scale-up: [{kube-system/e2e-29fe2-w-1 1->2 (max: 2)}]
I0906 15:51:36.054561   31205 autoscaler.go:399] [14m48s remaining] Expecting 2 "ScaledUpGroup" events; observed 1
I0906 15:51:39.054808   31205 autoscaler.go:399] [14m45s remaining] Expecting 2 "ScaledUpGroup" events; observed 1
I0906 15:51:42.055678   31205 autoscaler.go:399] [14m42s remaining] Expecting 2 "ScaledUpGroup" events; observed 1
I0906 15:51:45.055896   31205 autoscaler.go:399] [14m39s remaining] Expecting 2 "ScaledUpGroup" events; observed 1
I0906 15:51:45.181499   31205 autoscaler.go:361] cluster-autoscaler-status: Scale-up: setting group kube-system/e2e-29fe2-w-0 size to 2
I0906 15:51:45.198986   31205 autoscaler.go:361] e2e-autoscaler-workload-nndtb: pod triggered scale-up: [{kube-system/e2e-29fe2-w-0 1->2 (max: 2)}]
I0906 15:51:45.208041   31205 autoscaler.go:361] cluster-autoscaler-status: Scale-up: group kube-system/e2e-29fe2-w-0 size set to 2
I0906 15:51:45.215316   31205 autoscaler.go:361] e2e-autoscaler-workload-wlk77: pod triggered scale-up: [{kube-system/e2e-29fe2-w-0 1->2 (max: 2)}]
I0906 15:51:45.221284   31205 autoscaler.go:361] e2e-autoscaler-workload-f4ql7: pod triggered scale-up: [{kube-system/e2e-29fe2-w-0 1->2 (max: 2)}]
I0906 15:51:45.230159   31205 autoscaler.go:361] e2e-autoscaler-workload-vw6n7: pod triggered scale-up: [{kube-system/e2e-29fe2-w-0 1->2 (max: 2)}]
I0906 15:51:45.247487   31205 autoscaler.go:361] e2e-autoscaler-workload-xgx9k: pod triggered scale-up: [{kube-system/e2e-29fe2-w-0 1->2 (max: 2)}]
I0906 15:51:45.258599   31205 autoscaler.go:361] e2e-autoscaler-workload-b5g5t: pod triggered scale-up: [{kube-system/e2e-29fe2-w-0 1->2 (max: 2)}]
I0906 15:51:45.262247   31205 autoscaler.go:361] e2e-autoscaler-workload-sf2vz: pod triggered scale-up: [{kube-system/e2e-29fe2-w-0 1->2 (max: 2)}]
I0906 15:51:48.057447   31205 autoscaler.go:399] [14m36s remaining] Expecting 2 "ScaledUpGroup" events; observed 2
I0906 15:51:48.058204   31205 autoscaler.go:414] [1m0s remaining] Waiting for cluster-autoscaler to generate a "MaxNodesTotalReached" event; observed 1
I0906 15:51:48.058233   31205 autoscaler.go:422] [1m0s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0906 15:51:51.058384   31205 autoscaler.go:422] [57s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0906 15:51:54.058677   31205 autoscaler.go:422] [54s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0906 15:51:57.058904   31205 autoscaler.go:422] [51s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0906 15:52:00.059109   31205 autoscaler.go:422] [48s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0906 15:52:03.059417   31205 autoscaler.go:422] [45s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0906 15:52:06.059636   31205 autoscaler.go:422] [42s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0906 15:52:09.059807   31205 autoscaler.go:422] [39s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0906 15:52:12.060053   31205 autoscaler.go:422] [36s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0906 15:52:15.060269   31205 autoscaler.go:422] [33s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0906 15:52:18.060526   31205 autoscaler.go:422] [30s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0906 15:52:21.060765   31205 autoscaler.go:422] [27s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0906 15:52:24.060983   31205 autoscaler.go:422] [24s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0906 15:52:27.061172   31205 autoscaler.go:422] [21s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0906 15:52:30.061412   31205 autoscaler.go:422] [18s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0906 15:52:33.061604   31205 autoscaler.go:422] [15s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0906 15:52:36.061817   31205 autoscaler.go:422] [12s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0906 15:52:39.062069   31205 autoscaler.go:422] [9s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0906 15:52:42.062292   31205 autoscaler.go:422] [6s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0906 15:52:45.062574   31205 autoscaler.go:422] [3s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
STEP: Deleting workload
I0906 15:52:48.058399   31205 autoscaler.go:249] [cleanup] "e2e-autoscaler-workload" (*v1.Job)
I0906 15:52:48.062005   31205 autoscaler.go:434] [15m0s remaining] Expecting 2 "ScaleDownEmpty" events; observed 2
I0906 15:52:48.079623   31205 autoscaler.go:445] still have workload POD: "e2e-autoscaler-workload-28ddq"
I0906 15:52:48.079657   31205 autoscaler.go:249] [cleanup] "default" (*v1.ClusterAutoscaler)
I0906 15:52:48.136581   31205 autoscaler.go:465] Waiting for cluster-autoscaler POD "cluster-autoscaler-default-598c649f66-ffktq" to disappear
STEP: Scaling transient machinesets to zero
I0906 15:52:48.136654   31205 autoscaler.go:474] Scaling transient machineset "e2e-29fe2-w-0" to zero
I0906 15:52:48.145148   31205 autoscaler.go:474] Scaling transient machineset "e2e-29fe2-w-1" to zero
I0906 15:52:48.162610   31205 autoscaler.go:474] Scaling transient machineset "e2e-29fe2-w-2" to zero
STEP: Waiting for scaled up nodes to be deleted
I0906 15:52:48.197229   31205 autoscaler.go:491] [15m0s remaining] Waiting for cluster to reach original node count of 5; currently have 10
I0906 15:52:51.200770   31205 autoscaler.go:491] [14m57s remaining] Waiting for cluster to reach original node count of 5; currently have 6
I0906 15:52:54.204030   31205 autoscaler.go:491] [14m54s remaining] Waiting for cluster to reach original node count of 5; currently have 5
STEP: Waiting for scaled up machines to be deleted
I0906 15:52:54.207891   31205 autoscaler.go:501] [15m0s remaining] Waiting for cluster to reach original machine count of 5; currently have 5
I0906 15:52:54.207933   31205 autoscaler.go:249] [cleanup] "autoscale-e2e-29fe2-w-0pvc9x" (*v1beta1.MachineAutoscaler)
I0906 15:52:54.211122   31205 autoscaler.go:249] [cleanup] "autoscale-e2e-29fe2-w-12bgd5" (*v1beta1.MachineAutoscaler)
I0906 15:52:54.214499   31205 autoscaler.go:249] [cleanup] "autoscale-e2e-29fe2-w-27d6rb" (*v1beta1.MachineAutoscaler)
I0906 15:52:54.220025   31205 autoscaler.go:249] [cleanup] "e2e-29fe2-w-0" (*v1beta1.MachineSet)
I0906 15:52:54.223460   31205 autoscaler.go:249] [cleanup] "e2e-29fe2-w-1" (*v1beta1.MachineSet)
I0906 15:52:54.228116   31205 autoscaler.go:249] [cleanup] "e2e-29fe2-w-2" (*v1beta1.MachineSet)

• [SLOW TEST:96.433 seconds]
[Feature:Machines] Autoscaler should
/data/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/autoscaler/autoscaler.go:233
  scale up and down
  /data/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/autoscaler/autoscaler.go:234
------------------------------
S
------------------------------
[Feature:Machines] Managed cluster should 
  have machines linked with nodes
  /data/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/infra/infra.go:136
I0906 15:52:54.231648   31205 framework.go:406] >>> kubeConfig: /root/.kube/config
I0906 15:52:54.249245   31205 utils.go:47] [remaining 3m0s] Expecting the same number of machines and nodes, have 5 nodes and 5 machines
I0906 15:52:54.249273   31205 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-blue-q2n9r" is linked to node "13421fc4-81a1-45e2-b819-ab55a5e6869c"
I0906 15:52:54.249288   31205 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-green-2kj9n" is linked to node "f6279809-ed73-41b2-8127-87cfb777a67a"
I0906 15:52:54.249301   31205 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-h4t9b" is linked to node "de17a193-07be-4165-bf27-a3510a938d6e"
I0906 15:52:54.249318   31205 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-red-ff79d" is linked to node "fb6245e1-f356-4fab-9152-f02e39073964"
I0906 15:52:54.249327   31205 utils.go:70] [remaining 3m0s] Machine "minikube-static-machine" is linked to node "minikube"
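The machine-node linkage asserted above is the Machine's status.nodeRef, set by the machine controller once a node registers. A sketch of the machine.openshift.io/v1beta1 stanza the check reads, using the first pairing from the log:

  # status stanza of Machine "kubemark-actuator-testing-machineset-blue-q2n9r"
  status:
    nodeRef:
      kind: Node
      name: 13421fc4-81a1-45e2-b819-ab55a5e6869c
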
•
------------------------------
[Feature:Machines] Managed cluster should 
  have ability to additively reconcile taints from machine to nodes
  /data/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/infra/infra.go:145
I0906 15:52:54.249431   31205 framework.go:406] >>> kubeConfig: /root/.kube/config
STEP: getting machine "kubemark-actuator-testing-machineset-blue-q2n9r"
I0906 15:52:54.271595   31205 utils.go:165] Machine "kubemark-actuator-testing-machineset-blue-q2n9r" is backing node "13421fc4-81a1-45e2-b819-ab55a5e6869c"
STEP: getting the backed node "13421fc4-81a1-45e2-b819-ab55a5e6869c"
STEP: updating node "13421fc4-81a1-45e2-b819-ab55a5e6869c" with taint: {not-from-machine true NoSchedule <nil>}
STEP: updating machine "kubemark-actuator-testing-machineset-blue-q2n9r" with taint: {from-machine-63791ce9-d0be-11e9-953e-0a2b6dea9fc2 true NoSchedule <nil>}
I0906 15:52:54.281984   31205 infra.go:184] Getting node from machine again for verification of taints
I0906 15:52:54.285206   31205 utils.go:165] Machine "kubemark-actuator-testing-machineset-blue-q2n9r" is backing node "13421fc4-81a1-45e2-b819-ab55a5e6869c"
I0906 15:52:54.285231   31205 infra.go:194] Expected : map[from-machine-63791ce9-d0be-11e9-953e-0a2b6dea9fc2:{} not-from-machine:{}], observed map[kubemark:{} not-from-machine:{} from-machine-63791ce9-d0be-11e9-953e-0a2b6dea9fc2:{}] , difference map[], 
STEP: Getting the latest version of the original machine
STEP: Setting back the original machine taints
STEP: Getting the latest version of the node
I0906 15:52:54.294018   31205 utils.go:165] Machine "kubemark-actuator-testing-machineset-blue-q2n9r" is backing node "13421fc4-81a1-45e2-b819-ab55a5e6869c"
STEP: Setting back the original node taints
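Taints flow one way in this test: a taint added to the Machine spec is reconciled onto the backing Node, while the not-from-machine taint placed directly on the Node is left in place, hence the additive expectation at infra.go:194 (the observed set is a superset of the expected set, with an empty difference). A sketch of the Machine stanza carrying the test taint, values from the STEP line above:

  # machine.openshift.io/v1beta1 Machine spec fragment
  spec:
    taints:
    - key: from-machine-63791ce9-d0be-11e9-953e-0a2b6dea9fc2
      value: "true"
      effect: NoSchedule
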
•
------------------------------
[Feature:Machines] Managed cluster should 
  recover from deleted worker machines
  /data/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/infra/infra.go:220
I0906 15:52:54.297187   31205 framework.go:406] >>> kubeConfig: /root/.kube/config
STEP: checking initial cluster state
I0906 15:52:54.317831   31205 utils.go:87] Cluster size is 5 nodes
I0906 15:52:54.317864   31205 utils.go:239] [remaining 15m0s] Cluster size expected to be 5 nodes
I0906 15:52:54.321331   31205 utils.go:99] MachineSet "e2e-29fe2-w-0" replicas 0. Ready: 0, available 0
I0906 15:52:54.321354   31205 utils.go:99] MachineSet "e2e-29fe2-w-1" replicas 0. Ready: 0, available 0
I0906 15:52:54.321363   31205 utils.go:99] MachineSet "e2e-29fe2-w-2" replicas 0. Ready: 0, available 0
I0906 15:52:54.321371   31205 utils.go:99] MachineSet "kubemark-actuator-testing-machineset" replicas 1. Ready: 1, available 1
I0906 15:52:54.321379   31205 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-blue" replicas 1. Ready: 1, available 1
I0906 15:52:54.321388   31205 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-green" replicas 1. Ready: 1, available 1
I0906 15:52:54.321408   31205 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-red" replicas 1. Ready: 1, available 1
I0906 15:52:54.323804   31205 utils.go:231] Node "13421fc4-81a1-45e2-b819-ab55a5e6869c". Ready: true. Unschedulable: false
I0906 15:52:54.323825   31205 utils.go:231] Node "de17a193-07be-4165-bf27-a3510a938d6e". Ready: true. Unschedulable: false
I0906 15:52:54.323831   31205 utils.go:231] Node "f6279809-ed73-41b2-8127-87cfb777a67a". Ready: true. Unschedulable: false
I0906 15:52:54.323836   31205 utils.go:231] Node "fb6245e1-f356-4fab-9152-f02e39073964". Ready: true. Unschedulable: false
I0906 15:52:54.323841   31205 utils.go:231] Node "minikube". Ready: true. Unschedulable: false
I0906 15:52:54.326244   31205 utils.go:87] Cluster size is 5 nodes
I0906 15:52:54.326261   31205 utils.go:257] waiting for all nodes to be ready
I0906 15:52:54.329147   31205 utils.go:262] waiting for all nodes to be schedulable
I0906 15:52:54.332376   31205 utils.go:290] [remaining 1m0s] Node "13421fc4-81a1-45e2-b819-ab55a5e6869c" is schedulable
I0906 15:52:54.332394   31205 utils.go:290] [remaining 1m0s] Node "de17a193-07be-4165-bf27-a3510a938d6e" is schedulable
I0906 15:52:54.332401   31205 utils.go:290] [remaining 1m0s] Node "f6279809-ed73-41b2-8127-87cfb777a67a" is schedulable
I0906 15:52:54.332412   31205 utils.go:290] [remaining 1m0s] Node "fb6245e1-f356-4fab-9152-f02e39073964" is schedulable
I0906 15:52:54.332422   31205 utils.go:290] [remaining 1m0s] Node "minikube" is schedulable
I0906 15:52:54.332434   31205 utils.go:267] waiting for each node to be backed by a machine
I0906 15:52:54.338337   31205 utils.go:47] [remaining 3m0s] Expecting the same number of machines and nodes, have 5 nodes and 5 machines
I0906 15:52:54.338364   31205 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-blue-q2n9r" is linked to node "13421fc4-81a1-45e2-b819-ab55a5e6869c"
I0906 15:52:54.338374   31205 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-green-2kj9n" is linked to node "f6279809-ed73-41b2-8127-87cfb777a67a"
I0906 15:52:54.338382   31205 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-h4t9b" is linked to node "de17a193-07be-4165-bf27-a3510a938d6e"
I0906 15:52:54.338389   31205 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-red-ff79d" is linked to node "fb6245e1-f356-4fab-9152-f02e39073964"
I0906 15:52:54.338397   31205 utils.go:70] [remaining 3m0s] Machine "minikube-static-machine" is linked to node "minikube"
STEP: getting worker node
STEP: deleting machine object "kubemark-actuator-testing-machineset-blue-q2n9r"
STEP: waiting for node object "13421fc4-81a1-45e2-b819-ab55a5e6869c" to go away
I0906 15:52:54.351340   31205 infra.go:255] Node "13421fc4-81a1-45e2-b819-ab55a5e6869c" still exists. Node conditions are: [{OutOfDisk False 2019-09-06 15:52:53 +0000 UTC 2019-09-06 15:50:28 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2019-09-06 15:52:53 +0000 UTC 2019-09-06 15:50:28 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-09-06 15:52:53 +0000 UTC 2019-09-06 15:50:28 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-09-06 15:52:53 +0000 UTC 2019-09-06 15:50:28 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-09-06 15:52:53 +0000 UTC 2019-09-06 15:50:28 +0000 UTC KubeletReady kubelet is posting ready status}]
STEP: waiting for new node object to come up
I0906 15:52:59.356163   31205 utils.go:239] [remaining 15m0s] Cluster size expected to be 5 nodes
I0906 15:52:59.359372   31205 utils.go:99] MachineSet "kubemark-actuator-testing-machineset" replicas 1. Ready: 1, available 1
I0906 15:52:59.359397   31205 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-blue" replicas 1. Ready: 1, available 1
I0906 15:52:59.359404   31205 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-green" replicas 1. Ready: 1, available 1
I0906 15:52:59.359410   31205 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-red" replicas 1. Ready: 1, available 1
I0906 15:52:59.362047   31205 utils.go:231] Node "dbe55a04-5eb1-42df-ae80-846f79da969a". Ready: true. Unschedulable: false
I0906 15:52:59.362070   31205 utils.go:231] Node "de17a193-07be-4165-bf27-a3510a938d6e". Ready: true. Unschedulable: false
I0906 15:52:59.362080   31205 utils.go:231] Node "f6279809-ed73-41b2-8127-87cfb777a67a". Ready: true. Unschedulable: false
I0906 15:52:59.362088   31205 utils.go:231] Node "fb6245e1-f356-4fab-9152-f02e39073964". Ready: true. Unschedulable: false
I0906 15:52:59.362096   31205 utils.go:231] Node "minikube". Ready: true. Unschedulable: false
I0906 15:52:59.364796   31205 utils.go:87] Cluster size is 5 nodes
I0906 15:52:59.364819   31205 utils.go:257] waiting for all nodes to be ready
I0906 15:52:59.368341   31205 utils.go:262] waiting for all nodes to be schedulable
I0906 15:52:59.372408   31205 utils.go:290] [remaining 1m0s] Node "dbe55a04-5eb1-42df-ae80-846f79da969a" is schedulable
I0906 15:52:59.372431   31205 utils.go:290] [remaining 1m0s] Node "de17a193-07be-4165-bf27-a3510a938d6e" is schedulable
I0906 15:52:59.372438   31205 utils.go:290] [remaining 1m0s] Node "f6279809-ed73-41b2-8127-87cfb777a67a" is schedulable
I0906 15:52:59.372444   31205 utils.go:290] [remaining 1m0s] Node "fb6245e1-f356-4fab-9152-f02e39073964" is schedulable
I0906 15:52:59.372450   31205 utils.go:290] [remaining 1m0s] Node "minikube" is schedulable
I0906 15:52:59.372455   31205 utils.go:267] waiting for each node to be backed by a machine
I0906 15:52:59.381314   31205 utils.go:47] [remaining 3m0s] Expecting the same number of machines and nodes, have 5 nodes and 5 machines
I0906 15:52:59.381344   31205 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-blue-c54hf" is linked to node "dbe55a04-5eb1-42df-ae80-846f79da969a"
I0906 15:52:59.381354   31205 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-green-2kj9n" is linked to node "f6279809-ed73-41b2-8127-87cfb777a67a"
I0906 15:52:59.381362   31205 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-h4t9b" is linked to node "de17a193-07be-4165-bf27-a3510a938d6e"
I0906 15:52:59.381370   31205 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-red-ff79d" is linked to node "fb6245e1-f356-4fab-9152-f02e39073964"
I0906 15:52:59.381378   31205 utils.go:70] [remaining 3m0s] Machine "minikube-static-machine" is linked to node "minikube"

• [SLOW TEST:5.084 seconds]
[Feature:Machines] Managed cluster should
/data/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/infra/infra.go:126
  recover from deleted worker machines
  /data/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/infra/infra.go:220
------------------------------
[Feature:Machines] Managed cluster should 
  grow and decrease when scaling different machineSets simultaneously
  /data/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/infra/infra.go:267
I0906 15:52:59.381476   31205 framework.go:406] >>> kubeConfig: /root/.kube/config
STEP: checking existing cluster size
I0906 15:52:59.397353   31205 utils.go:87] Cluster size is 5 nodes
STEP: getting worker machineSets
I0906 15:52:59.400298   31205 infra.go:297] Creating transient MachineSet "e2e-66871-w-0"
I0906 15:52:59.403194   31205 infra.go:297] Creating transient MachineSet "e2e-66871-w-1"
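A transient MachineSet like the two created above is an ordinary machine.openshift.io/v1beta1 MachineSet that the test scales 0 -> 2 -> 0. A skeleton under the assumption of the kubemark actuator this suite runs against (the selector labels and the elided providerSpec are illustrative, not taken from the log):

  apiVersion: machine.openshift.io/v1beta1
  kind: MachineSet
  metadata:
    name: e2e-66871-w-0
    namespace: kube-system
  spec:
    replicas: 0                      # scaled up to 2, then back to 0, by the test
    selector:
      matchLabels:
        machine.openshift.io/cluster-api-machineset: e2e-66871-w-0
    template:
      metadata:
        labels:
          machine.openshift.io/cluster-api-machineset: e2e-66871-w-0
      spec:
        providerSpec: {}             # kubemark provider config elided
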
STEP: scaling "e2e-66871-w-0" from 0 to 2 replicas
I0906 15:52:59.407065   31205 framework.go:406] >>> kubeConfig: /root/.kube/config
STEP: scaling "e2e-66871-w-1" from 0 to 2 replicas
I0906 15:52:59.427226   31205 framework.go:406] >>> kubeConfig: /root/.kube/config
E0906 15:52:59.467584   31205 utils.go:157] Machine "e2e-66871-w-0-7pnmc" has no NodeRef
I0906 15:53:04.519443   31205 utils.go:165] Machine "e2e-66871-w-0-7pnmc" is backing node "5f623183-6d0a-4623-8aaf-40106f8e4c82"
E0906 15:53:04.520189   31205 utils.go:157] Machine "e2e-66871-w-0-8mf5p" has no NodeRef
I0906 15:53:09.527000   31205 utils.go:165] Machine "e2e-66871-w-0-7pnmc" is backing node "5f623183-6d0a-4623-8aaf-40106f8e4c82"
I0906 15:53:09.529387   31205 utils.go:165] Machine "e2e-66871-w-0-8mf5p" is backing node "dc4e2890-f2e8-44ec-9f83-2b029b3c40c5"
I0906 15:53:09.529411   31205 utils.go:149] MachineSet "e2e-66871-w-0" have 2 nodes
I0906 15:53:09.535078   31205 utils.go:165] Machine "e2e-66871-w-1-5mc8v" is backing node "83b89346-0d2b-4efb-8058-c8aebddc8542"
I0906 15:53:09.536884   31205 utils.go:165] Machine "e2e-66871-w-1-nx7px" is backing node "76df007a-03ab-40b0-a473-d29caacf0c6b"
I0906 15:53:09.536910   31205 utils.go:149] MachineSet "e2e-66871-w-1" have 2 nodes
I0906 15:53:09.536920   31205 utils.go:177] Node "5f623183-6d0a-4623-8aaf-40106f8e4c82" is ready. Conditions are: [{OutOfDisk False 2019-09-06 15:53:08 +0000 UTC 2019-09-06 15:53:02 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2019-09-06 15:53:08 +0000 UTC 2019-09-06 15:53:02 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-09-06 15:53:08 +0000 UTC 2019-09-06 15:53:02 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-09-06 15:53:08 +0000 UTC 2019-09-06 15:53:02 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-09-06 15:53:08 +0000 UTC 2019-09-06 15:53:02 +0000 UTC KubeletReady kubelet is posting ready status}]
I0906 15:53:09.537003   31205 utils.go:177] Node "dc4e2890-f2e8-44ec-9f83-2b029b3c40c5" is ready. Conditions are: [{OutOfDisk False 2019-09-06 15:53:07 +0000 UTC 2019-09-06 15:53:03 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2019-09-06 15:53:07 +0000 UTC 2019-09-06 15:53:03 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-09-06 15:53:07 +0000 UTC 2019-09-06 15:53:03 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-09-06 15:53:07 +0000 UTC 2019-09-06 15:53:03 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-09-06 15:53:07 +0000 UTC 2019-09-06 15:53:03 +0000 UTC KubeletReady kubelet is posting ready status}]
I0906 15:53:09.537037   31205 utils.go:177] Node "83b89346-0d2b-4efb-8058-c8aebddc8542" is ready. Conditions are: [{OutOfDisk False 2019-09-06 15:53:08 +0000 UTC 2019-09-06 15:53:04 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2019-09-06 15:53:08 +0000 UTC 2019-09-06 15:53:04 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-09-06 15:53:08 +0000 UTC 2019-09-06 15:53:04 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-09-06 15:53:08 +0000 UTC 2019-09-06 15:53:04 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-09-06 15:53:08 +0000 UTC 2019-09-06 15:53:04 +0000 UTC KubeletReady kubelet is posting ready status}]
I0906 15:53:09.537059   31205 utils.go:177] Node "76df007a-03ab-40b0-a473-d29caacf0c6b" is ready. Conditions are: [{OutOfDisk False 2019-09-06 15:53:09 +0000 UTC 2019-09-06 15:53:03 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2019-09-06 15:53:09 +0000 UTC 2019-09-06 15:53:03 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-09-06 15:53:09 +0000 UTC 2019-09-06 15:53:03 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-09-06 15:53:09 +0000 UTC 2019-09-06 15:53:03 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-09-06 15:53:09 +0000 UTC 2019-09-06 15:53:03 +0000 UTC KubeletReady kubelet is posting ready status}]
STEP: scaling "e2e-66871-w-0" from 2 to 0 replicas
I0906 15:53:09.537109   31205 framework.go:406] >>> kubeConfig: /root/.kube/config
STEP: scaling "e2e-66871-w-1" from 2 to 0 replicas
I0906 15:53:09.558313   31205 framework.go:406] >>> kubeConfig: /root/.kube/config
STEP: waiting for cluster to get back to original size. Final size should be 5 nodes
I0906 15:53:09.591717   31205 utils.go:239] [remaining 15m0s] Cluster size expected to be 5 nodes
I0906 15:53:09.598372   31205 utils.go:99] MachineSet "e2e-66871-w-0" replicas 0. Ready: 2, available 2
I0906 15:53:09.598399   31205 utils.go:99] MachineSet "e2e-66871-w-1" replicas 0. Ready: 2, available 2
I0906 15:53:09.598408   31205 utils.go:99] MachineSet "kubemark-actuator-testing-machineset" replicas 1. Ready: 1, available 1
I0906 15:53:09.598416   31205 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-blue" replicas 1. Ready: 1, available 1
I0906 15:53:09.598425   31205 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-green" replicas 1. Ready: 1, available 1
I0906 15:53:09.598433   31205 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-red" replicas 1. Ready: 1, available 1
I0906 15:53:09.603322   31205 utils.go:231] Node "5f623183-6d0a-4623-8aaf-40106f8e4c82". Ready: true. Unschedulable: false
I0906 15:53:09.603352   31205 utils.go:231] Node "76df007a-03ab-40b0-a473-d29caacf0c6b". Ready: true. Unschedulable: false
I0906 15:53:09.603362   31205 utils.go:231] Node "83b89346-0d2b-4efb-8058-c8aebddc8542". Ready: true. Unschedulable: false
I0906 15:53:09.603370   31205 utils.go:231] Node "dbe55a04-5eb1-42df-ae80-846f79da969a". Ready: true. Unschedulable: false
I0906 15:53:09.603379   31205 utils.go:231] Node "dc4e2890-f2e8-44ec-9f83-2b029b3c40c5". Ready: true. Unschedulable: true
I0906 15:53:09.603388   31205 utils.go:231] Node "de17a193-07be-4165-bf27-a3510a938d6e". Ready: true. Unschedulable: false
I0906 15:53:09.603406   31205 utils.go:231] Node "f6279809-ed73-41b2-8127-87cfb777a67a". Ready: true. Unschedulable: false
I0906 15:53:09.603415   31205 utils.go:231] Node "fb6245e1-f356-4fab-9152-f02e39073964". Ready: true. Unschedulable: false
I0906 15:53:09.603423   31205 utils.go:231] Node "minikube". Ready: true. Unschedulable: false
I0906 15:53:09.607374   31205 utils.go:87] Cluster size is 9 nodes
I0906 15:53:14.607531   31205 utils.go:239] [remaining 14m55s] Cluster size expected to be 5 nodes
I0906 15:53:14.611809   31205 utils.go:99] MachineSet "e2e-66871-w-0" replicas 0. Ready: 0, available 0
I0906 15:53:14.611833   31205 utils.go:99] MachineSet "e2e-66871-w-1" replicas 0. Ready: 0, available 0
I0906 15:53:14.611839   31205 utils.go:99] MachineSet "kubemark-actuator-testing-machineset" replicas 1. Ready: 1, available 1
I0906 15:53:14.611845   31205 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-blue" replicas 1. Ready: 1, available 1
I0906 15:53:14.611850   31205 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-green" replicas 1. Ready: 1, available 1
I0906 15:53:14.611855   31205 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-red" replicas 1. Ready: 1, available 1
I0906 15:53:14.616847   31205 utils.go:231] Node "dbe55a04-5eb1-42df-ae80-846f79da969a". Ready: true. Unschedulable: false
I0906 15:53:14.616874   31205 utils.go:231] Node "de17a193-07be-4165-bf27-a3510a938d6e". Ready: true. Unschedulable: false
I0906 15:53:14.616883   31205 utils.go:231] Node "f6279809-ed73-41b2-8127-87cfb777a67a". Ready: true. Unschedulable: false
I0906 15:53:14.616891   31205 utils.go:231] Node "fb6245e1-f356-4fab-9152-f02e39073964". Ready: true. Unschedulable: false
I0906 15:53:14.616899   31205 utils.go:231] Node "minikube". Ready: true. Unschedulable: false
I0906 15:53:14.623395   31205 utils.go:87] Cluster size is 5 nodes
I0906 15:53:14.623426   31205 utils.go:257] waiting for all nodes to be ready
I0906 15:53:14.627403   31205 utils.go:262] waiting for all nodes to be schedulable
I0906 15:53:14.633097   31205 utils.go:290] [remaining 1m0s] Node "dbe55a04-5eb1-42df-ae80-846f79da969a" is schedulable
I0906 15:53:14.633134   31205 utils.go:290] [remaining 1m0s] Node "de17a193-07be-4165-bf27-a3510a938d6e" is schedulable
I0906 15:53:14.633146   31205 utils.go:290] [remaining 1m0s] Node "f6279809-ed73-41b2-8127-87cfb777a67a" is schedulable
I0906 15:53:14.633156   31205 utils.go:290] [remaining 1m0s] Node "fb6245e1-f356-4fab-9152-f02e39073964" is schedulable
I0906 15:53:14.633166   31205 utils.go:290] [remaining 1m0s] Node "minikube" is schedulable
I0906 15:53:14.633176   31205 utils.go:267] waiting for each node to be backed by a machine
I0906 15:53:14.647933   31205 utils.go:47] [remaining 3m0s] Expecting the same number of machines and nodes, have 5 nodes and 5 machines
I0906 15:53:14.647971   31205 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-blue-c54hf" is linked to node "dbe55a04-5eb1-42df-ae80-846f79da969a"
I0906 15:53:14.648020   31205 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-green-2kj9n" is linked to node "f6279809-ed73-41b2-8127-87cfb777a67a"
I0906 15:53:14.648034   31205 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-h4t9b" is linked to node "de17a193-07be-4165-bf27-a3510a938d6e"
I0906 15:53:14.648047   31205 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-red-ff79d" is linked to node "fb6245e1-f356-4fab-9152-f02e39073964"
I0906 15:53:14.648061   31205 utils.go:70] [remaining 3m0s] Machine "minikube-static-machine" is linked to node "minikube"

• [SLOW TEST:15.288 seconds]
[Feature:Machines] Managed cluster should
/data/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/infra/infra.go:126
  grow and decrease when scaling different machineSets simultaneously
  /data/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/infra/infra.go:267
------------------------------
[Feature:Machines] Managed cluster should 
  drain node before removing machine resource
  /data/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/infra/infra.go:346
I0906 15:53:14.669376   31205 framework.go:406] >>> kubeConfig: /root/.kube/config
STEP: checking existing cluster size
I0906 15:53:14.695637   31205 utils.go:87] Cluster size is 5 nodes
STEP: Taking the first worker machineset (assuming only worker machines are backed by machinesets)
STEP: Creating two new machines, one for node about to be drained, other for moving workload from drained node
STEP: Waiting until both new nodes are ready
E0906 15:53:14.722237   31205 utils.go:342] [remaining 15m0s] Expecting 2 nodes with map[string]string{"node-draining-test":"29f771af-d0be-11e9-953e-0a2b6dea9fc2", "node-role.kubernetes.io/worker":""} labels in Ready state, got 0
I0906 15:53:19.725948   31205 utils.go:346] [14m55s remaining] Expected number (2) of nodes with map[node-role.kubernetes.io/worker: node-draining-test:29f771af-d0be-11e9-953e-0a2b6dea9fc2] label in Ready state found
STEP: Creating RC with workload
STEP: Creating PDB for RC
STEP: Wait until all replicas are ready
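The drain test pins 20 small pods to the freshly labeled nodes and guards them with a PodDisruptionBudget, so removing the machine must drain pod-by-pod rather than evict everything at once. A sketch of the pair: the RC pod template matches the POD dumps below (kubemark tolerations elided), while the PDB threshold is an assumption, since the suite's actual value does not appear in the log:

  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: pdb-workload
    namespace: default
  spec:
    replicas: 20
    selector:
      app: nginx
    template:
      metadata:
        labels:
          app: nginx
      spec:
        nodeSelector:
          node-draining-test: 29f771af-d0be-11e9-953e-0a2b6dea9fc2
          node-role.kubernetes.io/worker: ""
        containers:
        - name: work
          image: busybox
          command: ["sleep", "10h"]
          resources:
            requests:
              cpu: 50m
              memory: 50Mi
  ---
  apiVersion: policy/v1beta1
  kind: PodDisruptionBudget
  metadata:
    name: pdb-workload
    namespace: default
  spec:
    minAvailable: 18        # illustrative threshold; forces a gradual drain
    selector:
      matchLabels:
        app: nginx
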
I0906 15:53:19.763639   31205 utils.go:396] [15m0s remaining] Waiting for at least one RC ready replica, ReadyReplicas: 0, Replicas: 0
I0906 15:53:24.766554   31205 utils.go:396] [14m55s remaining] Waiting for at least one RC ready replica, ReadyReplicas: 0, Replicas: 20
I0906 15:53:29.768951   31205 utils.go:399] [14m50s remaining] Waiting for RC ready replicas, ReadyReplicas: 20, Replicas: 20
I0906 15:53:29.777181   31205 utils.go:416] POD #0/20: {
  "metadata": {
    "name": "pdb-workload-6mzsg",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-6mzsg",
    "uid": "72afe262-d0be-11e9-81a4-0a2b6dea9fc2",
    "resourceVersion": "3928",
    "creationTimestamp": "2019-09-06T15:53:19Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "72a5ba31-d0be-11e9-81a4-0a2b6dea9fc2",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-8zl7s",
        "secret": {
          "secretName": "default-token-8zl7s",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-8zl7s",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "29f771af-d0be-11e9-953e-0a2b6dea9fc2",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "2681b5eb-5ab3-41f4-ae2a-df4454b1e4bf",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:19Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:27Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:19Z"
      }
    ],
    "hostIP": "172.17.0.23",
    "podIP": "10.242.115.158",
    "startTime": "2019-09-06T15:53:19Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-06T15:53:25Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://d5fe32c9cf0ea20e"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0906 15:53:29.777360   31205 utils.go:416] POD #1/20: {
  "metadata": {
    "name": "pdb-workload-6xfbb",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-6xfbb",
    "uid": "72b0398a-d0be-11e9-81a4-0a2b6dea9fc2",
    "resourceVersion": "3878",
    "creationTimestamp": "2019-09-06T15:53:19Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "72a5ba31-d0be-11e9-81a4-0a2b6dea9fc2",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-8zl7s",
        "secret": {
          "secretName": "default-token-8zl7s",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-8zl7s",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "29f771af-d0be-11e9-953e-0a2b6dea9fc2",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "984dc528-9f2c-48f8-a73c-92d338000376",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:19Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:26Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:19Z"
      }
    ],
    "hostIP": "172.17.0.22",
    "podIP": "10.218.158.209",
    "startTime": "2019-09-06T15:53:19Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-06T15:53:24Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://b8fe0f23db96470a"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0906 15:53:29.777467   31205 utils.go:416] POD #2/20: {
  "metadata": {
    "name": "pdb-workload-7h78b",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-7h78b",
    "uid": "72b02bd5-d0be-11e9-81a4-0a2b6dea9fc2",
    "resourceVersion": "3913",
    "creationTimestamp": "2019-09-06T15:53:19Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "72a5ba31-d0be-11e9-81a4-0a2b6dea9fc2",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-8zl7s",
        "secret": {
          "secretName": "default-token-8zl7s",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-8zl7s",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "29f771af-d0be-11e9-953e-0a2b6dea9fc2",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "2681b5eb-5ab3-41f4-ae2a-df4454b1e4bf",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:19Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:27Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:19Z"
      }
    ],
    "hostIP": "172.17.0.23",
    "podIP": "10.3.202.174",
    "startTime": "2019-09-06T15:53:19Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-06T15:53:24Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://dfba837a0b789be4"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0906 15:53:29.777568   31205 utils.go:416] POD #3/20: {
  "metadata": {
    "name": "pdb-workload-8gznh",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-8gznh",
    "uid": "72aac801-d0be-11e9-81a4-0a2b6dea9fc2",
    "resourceVersion": "3855",
    "creationTimestamp": "2019-09-06T15:53:19Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "72a5ba31-d0be-11e9-81a4-0a2b6dea9fc2",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-8zl7s",
        "secret": {
          "secretName": "default-token-8zl7s",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-8zl7s",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "29f771af-d0be-11e9-953e-0a2b6dea9fc2",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "984dc528-9f2c-48f8-a73c-92d338000376",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:19Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:25Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:19Z"
      }
    ],
    "hostIP": "172.17.0.22",
    "podIP": "10.42.167.43",
    "startTime": "2019-09-06T15:53:19Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-06T15:53:25Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://b3e49d5a53dffa00"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0906 15:53:29.777671   31205 utils.go:416] POD #4/20: {
  "metadata": {
    "name": "pdb-workload-96snx",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-96snx",
    "uid": "72abffe7-d0be-11e9-81a4-0a2b6dea9fc2",
    "resourceVersion": "3862",
    "creationTimestamp": "2019-09-06T15:53:19Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "72a5ba31-d0be-11e9-81a4-0a2b6dea9fc2",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-8zl7s",
        "secret": {
          "secretName": "default-token-8zl7s",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-8zl7s",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "29f771af-d0be-11e9-953e-0a2b6dea9fc2",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "2681b5eb-5ab3-41f4-ae2a-df4454b1e4bf",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:19Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:26Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:19Z"
      }
    ],
    "hostIP": "172.17.0.23",
    "podIP": "10.0.145.182",
    "startTime": "2019-09-06T15:53:19Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-06T15:53:23Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://e4ae83e9e4297ca"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0906 15:53:29.777806   31205 utils.go:416] POD #5/20: {
  "metadata": {
    "name": "pdb-workload-c6pgd",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-c6pgd",
    "uid": "72a7e106-d0be-11e9-81a4-0a2b6dea9fc2",
    "resourceVersion": "3846",
    "creationTimestamp": "2019-09-06T15:53:19Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "72a5ba31-d0be-11e9-81a4-0a2b6dea9fc2",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-8zl7s",
        "secret": {
          "secretName": "default-token-8zl7s",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-8zl7s",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "29f771af-d0be-11e9-953e-0a2b6dea9fc2",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "984dc528-9f2c-48f8-a73c-92d338000376",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:19Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:25Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:19Z"
      }
    ],
    "hostIP": "172.17.0.22",
    "podIP": "10.226.107.89",
    "startTime": "2019-09-06T15:53:19Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-06T15:53:23Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://f89a9b7dbd75c70f"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0906 15:53:29.777906   31205 utils.go:416] POD #6/20: {
  "metadata": {
    "name": "pdb-workload-cksdb",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-cksdb",
    "uid": "72abebeb-d0be-11e9-81a4-0a2b6dea9fc2",
    "resourceVersion": "3865",
    "creationTimestamp": "2019-09-06T15:53:19Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "72a5ba31-d0be-11e9-81a4-0a2b6dea9fc2",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-8zl7s",
        "secret": {
          "secretName": "default-token-8zl7s",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-8zl7s",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "29f771af-d0be-11e9-953e-0a2b6dea9fc2",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "2681b5eb-5ab3-41f4-ae2a-df4454b1e4bf",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:19Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:26Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:19Z"
      }
    ],
    "hostIP": "172.17.0.23",
    "podIP": "10.127.41.233",
    "startTime": "2019-09-06T15:53:19Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-06T15:53:23Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://5a0ef37224f0bf9b"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0906 15:53:29.778359   31205 utils.go:416] POD #7/20: {
  "metadata": {
    "name": "pdb-workload-jsrjw",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-jsrjw",
    "uid": "72b0025c-d0be-11e9-81a4-0a2b6dea9fc2",
    "resourceVersion": "3887",
    "creationTimestamp": "2019-09-06T15:53:19Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "72a5ba31-d0be-11e9-81a4-0a2b6dea9fc2",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-8zl7s",
        "secret": {
          "secretName": "default-token-8zl7s",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-8zl7s",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "29f771af-d0be-11e9-953e-0a2b6dea9fc2",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "984dc528-9f2c-48f8-a73c-92d338000376",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:19Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:26Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:19Z"
      }
    ],
    "hostIP": "172.17.0.22",
    "podIP": "10.179.140.209",
    "startTime": "2019-09-06T15:53:19Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-06T15:53:24Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://48f8cb6fd4f781a7"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0906 15:53:29.778483   31205 utils.go:416] POD #8/20: {
  "metadata": {
    "name": "pdb-workload-l7296",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-l7296",
    "uid": "72aa8cc5-d0be-11e9-81a4-0a2b6dea9fc2",
    "resourceVersion": "3919",
    "creationTimestamp": "2019-09-06T15:53:19Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "72a5ba31-d0be-11e9-81a4-0a2b6dea9fc2",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-8zl7s",
        "secret": {
          "secretName": "default-token-8zl7s",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-8zl7s",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "29f771af-d0be-11e9-953e-0a2b6dea9fc2",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "2681b5eb-5ab3-41f4-ae2a-df4454b1e4bf",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:19Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:27Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:19Z"
      }
    ],
    "hostIP": "172.17.0.23",
    "podIP": "10.176.46.224",
    "startTime": "2019-09-06T15:53:19Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-06T15:53:23Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://9c151bb46379b142"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0906 15:53:29.778626   31205 utils.go:416] POD #9/20: {
  "metadata": {
    "name": "pdb-workload-mxzsb",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-mxzsb",
    "uid": "72b3fe59-d0be-11e9-81a4-0a2b6dea9fc2",
    "resourceVersion": "3852",
    "creationTimestamp": "2019-09-06T15:53:19Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "72a5ba31-d0be-11e9-81a4-0a2b6dea9fc2",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-8zl7s",
        "secret": {
          "secretName": "default-token-8zl7s",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-8zl7s",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "29f771af-d0be-11e9-953e-0a2b6dea9fc2",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "984dc528-9f2c-48f8-a73c-92d338000376",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:19Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:25Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:19Z"
      }
    ],
    "hostIP": "172.17.0.22",
    "podIP": "10.255.206.213",
    "startTime": "2019-09-06T15:53:19Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-06T15:53:24Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://19cbe1bbdf180c2b"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0906 15:53:29.778753   31205 utils.go:416] POD #10/20: {
  "metadata": {
    "name": "pdb-workload-n5757",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-n5757",
    "uid": "72b3fb06-d0be-11e9-81a4-0a2b6dea9fc2",
    "resourceVersion": "3922",
    "creationTimestamp": "2019-09-06T15:53:19Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "72a5ba31-d0be-11e9-81a4-0a2b6dea9fc2",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-8zl7s",
        "secret": {
          "secretName": "default-token-8zl7s",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-8zl7s",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "29f771af-d0be-11e9-953e-0a2b6dea9fc2",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "2681b5eb-5ab3-41f4-ae2a-df4454b1e4bf",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:19Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:27Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:19Z"
      }
    ],
    "hostIP": "172.17.0.23",
    "podIP": "10.153.214.93",
    "startTime": "2019-09-06T15:53:19Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-06T15:53:24Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://59ff0eb97fbda054"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0906 15:53:29.778856   31205 utils.go:416] POD #11/20: {
  "metadata": {
    "name": "pdb-workload-qq4kz",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-qq4kz",
    "uid": "72ac07ef-d0be-11e9-81a4-0a2b6dea9fc2",
    "resourceVersion": "3884",
    "creationTimestamp": "2019-09-06T15:53:19Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "72a5ba31-d0be-11e9-81a4-0a2b6dea9fc2",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-8zl7s",
        "secret": {
          "secretName": "default-token-8zl7s",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-8zl7s",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "29f771af-d0be-11e9-953e-0a2b6dea9fc2",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "984dc528-9f2c-48f8-a73c-92d338000376",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:19Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:26Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:19Z"
      }
    ],
    "hostIP": "172.17.0.22",
    "podIP": "10.219.194.167",
    "startTime": "2019-09-06T15:53:19Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-06T15:53:25Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://e9df851cec3002f4"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0906 15:53:29.779010   31205 utils.go:416] POD #12/20: {
  "metadata": {
    "name": "pdb-workload-r94jg",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-r94jg",
    "uid": "72b31904-d0be-11e9-81a4-0a2b6dea9fc2",
    "resourceVersion": "3925",
    "creationTimestamp": "2019-09-06T15:53:19Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "72a5ba31-d0be-11e9-81a4-0a2b6dea9fc2",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-8zl7s",
        "secret": {
          "secretName": "default-token-8zl7s",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-8zl7s",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "29f771af-d0be-11e9-953e-0a2b6dea9fc2",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "2681b5eb-5ab3-41f4-ae2a-df4454b1e4bf",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:19Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:27Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:19Z"
      }
    ],
    "hostIP": "172.17.0.23",
    "podIP": "10.8.249.41",
    "startTime": "2019-09-06T15:53:19Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-06T15:53:25Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://1a773eaec329fda"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0906 15:53:29.779143   31205 utils.go:416] POD #13/20: {
  "metadata": {
    "name": "pdb-workload-rfdnd",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-rfdnd",
    "uid": "72b02ad2-d0be-11e9-81a4-0a2b6dea9fc2",
    "resourceVersion": "3890",
    "creationTimestamp": "2019-09-06T15:53:19Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "72a5ba31-d0be-11e9-81a4-0a2b6dea9fc2",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-8zl7s",
        "secret": {
          "secretName": "default-token-8zl7s",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-8zl7s",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "29f771af-d0be-11e9-953e-0a2b6dea9fc2",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "984dc528-9f2c-48f8-a73c-92d338000376",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:19Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:26Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:19Z"
      }
    ],
    "hostIP": "172.17.0.22",
    "podIP": "10.208.17.53",
    "startTime": "2019-09-06T15:53:19Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-06T15:53:23Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://ba62090ab2dd7c88"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0906 15:53:29.779254   31205 utils.go:416] POD #14/20: {
  "metadata": {
    "name": "pdb-workload-rs82d",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-rs82d",
    "uid": "72b062db-d0be-11e9-81a4-0a2b6dea9fc2",
    "resourceVersion": "3916",
    "creationTimestamp": "2019-09-06T15:53:19Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "72a5ba31-d0be-11e9-81a4-0a2b6dea9fc2",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-8zl7s",
        "secret": {
          "secretName": "default-token-8zl7s",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-8zl7s",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "29f771af-d0be-11e9-953e-0a2b6dea9fc2",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "2681b5eb-5ab3-41f4-ae2a-df4454b1e4bf",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:19Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:27Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:19Z"
      }
    ],
    "hostIP": "172.17.0.23",
    "podIP": "10.42.204.186",
    "startTime": "2019-09-06T15:53:19Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-06T15:53:25Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://70f6430ab35af1e6"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0906 15:53:29.779361   31205 utils.go:416] POD #15/20: {
  "metadata": {
    "name": "pdb-workload-s6c5q",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-s6c5q",
    "uid": "72b3516f-d0be-11e9-81a4-0a2b6dea9fc2",
    "resourceVersion": "3910",
    "creationTimestamp": "2019-09-06T15:53:19Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "72a5ba31-d0be-11e9-81a4-0a2b6dea9fc2",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-8zl7s",
        "secret": {
          "secretName": "default-token-8zl7s",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-8zl7s",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "29f771af-d0be-11e9-953e-0a2b6dea9fc2",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "2681b5eb-5ab3-41f4-ae2a-df4454b1e4bf",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:19Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:27Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:19Z"
      }
    ],
    "hostIP": "172.17.0.23",
    "podIP": "10.165.9.213",
    "startTime": "2019-09-06T15:53:19Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-06T15:53:24Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://6e2a950df3c36e12"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0906 15:53:29.779483   31205 utils.go:416] POD #16/20: {
  "metadata": {
    "name": "pdb-workload-s7p5x",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-s7p5x",
    "uid": "72b3f424-d0be-11e9-81a4-0a2b6dea9fc2",
    "resourceVersion": "3893",
    "creationTimestamp": "2019-09-06T15:53:19Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "72a5ba31-d0be-11e9-81a4-0a2b6dea9fc2",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-8zl7s",
        "secret": {
          "secretName": "default-token-8zl7s",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-8zl7s",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "29f771af-d0be-11e9-953e-0a2b6dea9fc2",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "984dc528-9f2c-48f8-a73c-92d338000376",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:19Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:26Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:19Z"
      }
    ],
    "hostIP": "172.17.0.22",
    "podIP": "10.131.27.249",
    "startTime": "2019-09-06T15:53:19Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-06T15:53:25Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://e68b96c35f64ad49"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0906 15:53:29.779639   31205 utils.go:416] POD #17/20: {
  "metadata": {
    "name": "pdb-workload-tlvcg",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-tlvcg",
    "uid": "72b04e0b-d0be-11e9-81a4-0a2b6dea9fc2",
    "resourceVersion": "3907",
    "creationTimestamp": "2019-09-06T15:53:19Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "72a5ba31-d0be-11e9-81a4-0a2b6dea9fc2",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-8zl7s",
        "secret": {
          "secretName": "default-token-8zl7s",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-8zl7s",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "29f771af-d0be-11e9-953e-0a2b6dea9fc2",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "2681b5eb-5ab3-41f4-ae2a-df4454b1e4bf",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:19Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:27Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:19Z"
      }
    ],
    "hostIP": "172.17.0.23",
    "podIP": "10.168.129.169",
    "startTime": "2019-09-06T15:53:19Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-06T15:53:25Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://a0f729110fb6b284"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0906 15:53:29.779785   31205 utils.go:416] POD #18/20: {
  "metadata": {
    "name": "pdb-workload-twsg7",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-twsg7",
    "uid": "72ac0a92-d0be-11e9-81a4-0a2b6dea9fc2",
    "resourceVersion": "3849",
    "creationTimestamp": "2019-09-06T15:53:19Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "72a5ba31-d0be-11e9-81a4-0a2b6dea9fc2",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-8zl7s",
        "secret": {
          "secretName": "default-token-8zl7s",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-8zl7s",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "29f771af-d0be-11e9-953e-0a2b6dea9fc2",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "984dc528-9f2c-48f8-a73c-92d338000376",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:19Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:25Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:19Z"
      }
    ],
    "hostIP": "172.17.0.22",
    "podIP": "10.56.25.63",
    "startTime": "2019-09-06T15:53:19Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-06T15:53:25Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://b7627cccb9e4ad23"
      }
    ],
    "qosClass": "Burstable"
  }
}
I0906 15:53:29.779924   31205 utils.go:416] POD #19/20: {
  "metadata": {
    "name": "pdb-workload-vg7x7",
    "generateName": "pdb-workload-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/pdb-workload-vg7x7",
    "uid": "72b057e2-d0be-11e9-81a4-0a2b6dea9fc2",
    "resourceVersion": "3881",
    "creationTimestamp": "2019-09-06T15:53:19Z",
    "labels": {
      "app": "nginx"
    },
    "ownerReferences": [
      {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": "pdb-workload",
        "uid": "72a5ba31-d0be-11e9-81a4-0a2b6dea9fc2",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "default-token-8zl7s",
        "secret": {
          "secretName": "default-token-8zl7s",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "work",
        "image": "busybox",
        "command": [
          "sleep",
          "10h"
        ],
        "resources": {
          "requests": {
            "cpu": "50m",
            "memory": "50Mi"
          }
        },
        "volumeMounts": [
          {
            "name": "default-token-8zl7s",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "nodeSelector": {
      "node-draining-test": "29f771af-d0be-11e9-953e-0a2b6dea9fc2",
      "node-role.kubernetes.io/worker": ""
    },
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "984dc528-9f2c-48f8-a73c-92d338000376",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "kubemark",
        "operator": "Exists"
      },
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:19Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:26Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": null
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-09-06T15:53:19Z"
      }
    ],
    "hostIP": "172.17.0.22",
    "podIP": "10.181.241.206",
    "startTime": "2019-09-06T15:53:19Z",
    "containerStatuses": [
      {
        "name": "work",
        "state": {
          "running": {
            "startedAt": "2019-09-06T15:53:24Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "busybox:latest",
        "imageID": "docker://busybox:latest",
        "containerID": "docker://5d02b2778d1b20e9"
      }
    ],
    "qosClass": "Burstable"
  }
}
STEP: Delete machine to trigger node draining
STEP: Observing and verifying node draining
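The verification loop below polls the drained node every 5 seconds until its spec reports unschedulable. A minimal sketch of such a check, assuming a client-go clientset — the function name and polling bounds are illustrative, chosen to mirror the cadence and 15m budget visible in the log:

// Sketch only: wait until the named node is cordoned (spec.unschedulable).
package e2esketch

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForNodeUnschedulable(clientset kubernetes.Interface, nodeName string) error {
	// Poll every 5s (the interval seen in the log), up to 15m.
	return wait.PollImmediate(5*time.Second, 15*time.Minute, func() (bool, error) {
		node, err := clientset.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat errors as transient and retry
		}
		return node.Spec.Unschedulable, nil
	})
}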
E0906 15:53:29.794226   31205 utils.go:451] Node "984dc528-9f2c-48f8-a73c-92d338000376" is expected to be marked as unschedulable, but it is not
I0906 15:53:34.798476   31205 utils.go:455] [remaining 14m55s] Node "984dc528-9f2c-48f8-a73c-92d338000376" is marked unschedulable as expected
I0906 15:53:34.804745   31205 utils.go:474] [remaining 14m55s] Have 9 pods scheduled to node "984dc528-9f2c-48f8-a73c-92d338000376"
I0906 15:53:34.806365   31205 utils.go:490] [remaining 14m55s] RC ReadyReplicas: 20, Replicas: 20
I0906 15:53:34.806391   31205 utils.go:500] [remaining 14m55s] Expecting at most 2 pods to be scheduled to drained node "984dc528-9f2c-48f8-a73c-92d338000376", got 9
I0906 15:53:39.810053   31205 utils.go:455] [remaining 14m50s] Node "984dc528-9f2c-48f8-a73c-92d338000376" is marked unschedulable as expected
I0906 15:53:39.824570   31205 utils.go:474] [remaining 14m50s] Have 8 pods scheduled to node "984dc528-9f2c-48f8-a73c-92d338000376"
I0906 15:53:39.831813   31205 utils.go:490] [remaining 14m50s] RC ReadyReplicas: 20, Replicas: 20
I0906 15:53:39.831843   31205 utils.go:500] [remaining 14m50s] Expecting at most 2 pods to be scheduled to drained node "984dc528-9f2c-48f8-a73c-92d338000376", got 8
I0906 15:53:44.799299   31205 utils.go:455] [remaining 14m45s] Node "984dc528-9f2c-48f8-a73c-92d338000376" is marked unschedulable as expected
I0906 15:53:44.806334   31205 utils.go:474] [remaining 14m45s] Have 7 pods scheduled to node "984dc528-9f2c-48f8-a73c-92d338000376"
I0906 15:53:44.808008   31205 utils.go:490] [remaining 14m45s] RC ReadyReplicas: 20, Replicas: 20
I0906 15:53:44.808030   31205 utils.go:500] [remaining 14m45s] Expecting at most 2 pods to be scheduled to drained node "984dc528-9f2c-48f8-a73c-92d338000376", got 7
I0906 15:53:49.800777   31205 utils.go:455] [remaining 14m40s] Node "984dc528-9f2c-48f8-a73c-92d338000376" is marked unschedulable as expected
I0906 15:53:49.810625   31205 utils.go:474] [remaining 14m40s] Have 6 pods scheduled to node "984dc528-9f2c-48f8-a73c-92d338000376"
I0906 15:53:49.812716   31205 utils.go:490] [remaining 14m40s] RC ReadyReplicas: 20, Replicas: 20
I0906 15:53:49.812744   31205 utils.go:500] [remaining 14m40s] Expecting at most 2 pods to be scheduled to drained node "984dc528-9f2c-48f8-a73c-92d338000376", got 6
I0906 15:53:54.798419   31205 utils.go:455] [remaining 14m35s] Node "984dc528-9f2c-48f8-a73c-92d338000376" is marked unschedulable as expected
I0906 15:53:54.804692   31205 utils.go:474] [remaining 14m35s] Have 5 pods scheduled to node "984dc528-9f2c-48f8-a73c-92d338000376"
I0906 15:53:54.806344   31205 utils.go:490] [remaining 14m35s] RC ReadyReplicas: 20, Replicas: 20
I0906 15:53:54.806370   31205 utils.go:500] [remaining 14m35s] Expecting at most 2 pods to be scheduled to drained node "984dc528-9f2c-48f8-a73c-92d338000376", got 5
I0906 15:53:59.799072   31205 utils.go:455] [remaining 14m30s] Node "984dc528-9f2c-48f8-a73c-92d338000376" is marked unschedulable as expected
I0906 15:53:59.805632   31205 utils.go:474] [remaining 14m30s] Have 4 pods scheduled to node "984dc528-9f2c-48f8-a73c-92d338000376"
I0906 15:53:59.807192   31205 utils.go:490] [remaining 14m30s] RC ReadyReplicas: 20, Replicas: 20
I0906 15:53:59.807215   31205 utils.go:500] [remaining 14m30s] Expecting at most 2 pods to be scheduled to drained node "984dc528-9f2c-48f8-a73c-92d338000376", got 4
I0906 15:54:04.798543   31205 utils.go:455] [remaining 14m25s] Node "984dc528-9f2c-48f8-a73c-92d338000376" is marked unschedulable as expected
I0906 15:54:04.805194   31205 utils.go:474] [remaining 14m25s] Have 3 pods scheduled to node "984dc528-9f2c-48f8-a73c-92d338000376"
I0906 15:54:04.806905   31205 utils.go:490] [remaining 14m25s] RC ReadyReplicas: 20, Replicas: 20
I0906 15:54:04.806934   31205 utils.go:500] [remaining 14m25s] Expecting at most 2 pods to be scheduled to drained node "984dc528-9f2c-48f8-a73c-92d338000376", got 3
I0906 15:54:09.799172   31205 utils.go:455] [remaining 14m20s] Node "984dc528-9f2c-48f8-a73c-92d338000376" is marked unschedulable as expected
I0906 15:54:09.805313   31205 utils.go:474] [remaining 14m20s] Have 2 pods scheduled to node "984dc528-9f2c-48f8-a73c-92d338000376"
I0906 15:54:09.807006   31205 utils.go:490] [remaining 14m20s] RC ReadyReplicas: 20, Replicas: 20
I0906 15:54:09.807030   31205 utils.go:504] [remaining 14m20s] Expected result: all pods from the RC, up to the last one or two, were scheduled to a different node while respecting the PDB
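The messages above come from a 5-second polling loop: once the node is cordoned, the test repeatedly counts the pods still scheduled to it and succeeds when at most 2 remain. A minimal stand-in with kubectl (node name and threshold copied from this run; the real check lives in utils.go):

  NODE=984dc528-9f2c-48f8-a73c-92d338000376
  # Poll every 5s until the drained node holds at most 2 pods.
  while :; do
    count=$(kubectl get pods --all-namespaces \
      --field-selector spec.nodeName="$NODE" --no-headers | wc -l)
    echo "Have $count pods scheduled to node $NODE"
    [ "$count" -le 2 ] && break
    sleep 5
  done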
STEP: Validating the machine is deleted
E0906 15:54:09.808635   31205 infra.go:454] Machine "machine1" not yet deleted
E0906 15:54:14.810892   31205 infra.go:454] Machine "machine1" not yet deleted
I0906 15:54:19.810914   31205 infra.go:463] Machine "machine1" successfully deleted
STEP: Validate underlying node corresponding to machine1 is removed as well
I0906 15:54:19.812678   31205 utils.go:530] [15m0s remaining] Node "984dc528-9f2c-48f8-a73c-92d338000376" successfully deleted
STEP: Delete PDB
STEP: Delete machine2
STEP: waiting for cluster to get back to original size. Final size should be 5 nodes
I0906 15:54:19.820252   31205 utils.go:239] [remaining 15m0s] Cluster size expected to be 5 nodes
I0906 15:54:19.825376   31205 utils.go:99] MachineSet "kubemark-actuator-testing-machineset" replicas 1. Ready: 1, available 1
I0906 15:54:19.825396   31205 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-blue" replicas 1. Ready: 1, available 1
I0906 15:54:19.825402   31205 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-green" replicas 1. Ready: 1, available 1
I0906 15:54:19.825408   31205 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-red" replicas 1. Ready: 1, available 1
I0906 15:54:19.828772   31205 utils.go:231] Node "2681b5eb-5ab3-41f4-ae2a-df4454b1e4bf". Ready: true. Unschedulable: false
I0906 15:54:19.828795   31205 utils.go:231] Node "dbe55a04-5eb1-42df-ae80-846f79da969a". Ready: true. Unschedulable: false
I0906 15:54:19.828804   31205 utils.go:231] Node "de17a193-07be-4165-bf27-a3510a938d6e". Ready: true. Unschedulable: false
I0906 15:54:19.828812   31205 utils.go:231] Node "f6279809-ed73-41b2-8127-87cfb777a67a". Ready: true. Unschedulable: false
I0906 15:54:19.828820   31205 utils.go:231] Node "fb6245e1-f356-4fab-9152-f02e39073964". Ready: true. Unschedulable: false
I0906 15:54:19.828828   31205 utils.go:231] Node "minikube". Ready: true. Unschedulable: false
I0906 15:54:19.835160   31205 utils.go:87] Cluster size is 6 nodes
I0906 15:54:24.835346   31205 utils.go:239] [remaining 14m55s] Cluster size expected to be 5 nodes
I0906 15:54:24.839236   31205 utils.go:99] MachineSet "kubemark-actuator-testing-machineset" replicas 1. Ready: 1, available 1
I0906 15:54:24.839265   31205 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-blue" replicas 1. Ready: 1, available 1
I0906 15:54:24.839276   31205 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-green" replicas 1. Ready: 1, available 1
I0906 15:54:24.839285   31205 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-red" replicas 1. Ready: 1, available 1
I0906 15:54:24.843463   31205 utils.go:231] Node "2681b5eb-5ab3-41f4-ae2a-df4454b1e4bf". Ready: true. Unschedulable: true
I0906 15:54:24.843490   31205 utils.go:231] Node "dbe55a04-5eb1-42df-ae80-846f79da969a". Ready: true. Unschedulable: false
I0906 15:54:24.843500   31205 utils.go:231] Node "de17a193-07be-4165-bf27-a3510a938d6e". Ready: true. Unschedulable: false
I0906 15:54:24.843509   31205 utils.go:231] Node "f6279809-ed73-41b2-8127-87cfb777a67a". Ready: true. Unschedulable: false
I0906 15:54:24.843518   31205 utils.go:231] Node "fb6245e1-f356-4fab-9152-f02e39073964". Ready: true. Unschedulable: false
I0906 15:54:24.843526   31205 utils.go:231] Node "minikube". Ready: true. Unschedulable: false
I0906 15:54:24.847143   31205 utils.go:87] Cluster size is 6 nodes
I0906 15:54:29.835385   31205 utils.go:239] [remaining 14m50s] Cluster size expected to be 5 nodes
I0906 15:54:29.838538   31205 utils.go:99] MachineSet "kubemark-actuator-testing-machineset" replicas 1. Ready: 1, available 1
I0906 15:54:29.838561   31205 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-blue" replicas 1. Ready: 1, available 1
I0906 15:54:29.838568   31205 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-green" replicas 1. Ready: 1, available 1
I0906 15:54:29.838573   31205 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-red" replicas 1. Ready: 1, available 1
I0906 15:54:29.841478   31205 utils.go:231] Node "2681b5eb-5ab3-41f4-ae2a-df4454b1e4bf". Ready: true. Unschedulable: true
I0906 15:54:29.841501   31205 utils.go:231] Node "dbe55a04-5eb1-42df-ae80-846f79da969a". Ready: true. Unschedulable: false
I0906 15:54:29.841511   31205 utils.go:231] Node "de17a193-07be-4165-bf27-a3510a938d6e". Ready: true. Unschedulable: false
I0906 15:54:29.841518   31205 utils.go:231] Node "f6279809-ed73-41b2-8127-87cfb777a67a". Ready: true. Unschedulable: false
I0906 15:54:29.841527   31205 utils.go:231] Node "fb6245e1-f356-4fab-9152-f02e39073964". Ready: true. Unschedulable: false
I0906 15:54:29.841539   31205 utils.go:231] Node "minikube". Ready: true. Unschedulable: false
I0906 15:54:29.844317   31205 utils.go:87] Cluster size is 6 nodes
I0906 15:54:34.835446   31205 utils.go:239] [remaining 14m45s] Cluster size expected to be 5 nodes
I0906 15:54:34.838475   31205 utils.go:99] MachineSet "kubemark-actuator-testing-machineset" replicas 1. Ready: 1, available 1
I0906 15:54:34.838497   31205 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-blue" replicas 1. Ready: 1, available 1
I0906 15:54:34.838503   31205 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-green" replicas 1. Ready: 1, available 1
I0906 15:54:34.838508   31205 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-red" replicas 1. Ready: 1, available 1
I0906 15:54:34.841381   31205 utils.go:231] Node "2681b5eb-5ab3-41f4-ae2a-df4454b1e4bf". Ready: true. Unschedulable: true
I0906 15:54:34.841401   31205 utils.go:231] Node "dbe55a04-5eb1-42df-ae80-846f79da969a". Ready: true. Unschedulable: false
I0906 15:54:34.841407   31205 utils.go:231] Node "de17a193-07be-4165-bf27-a3510a938d6e". Ready: true. Unschedulable: false
I0906 15:54:34.841412   31205 utils.go:231] Node "f6279809-ed73-41b2-8127-87cfb777a67a". Ready: true. Unschedulable: false
I0906 15:54:34.841417   31205 utils.go:231] Node "fb6245e1-f356-4fab-9152-f02e39073964". Ready: true. Unschedulable: false
I0906 15:54:34.841422   31205 utils.go:231] Node "minikube". Ready: true. Unschedulable: false
I0906 15:54:34.844205   31205 utils.go:87] Cluster size is 6 nodes
I0906 15:54:39.835391   31205 utils.go:239] [remaining 14m40s] Cluster size expected to be 5 nodes
I0906 15:54:39.838613   31205 utils.go:99] MachineSet "kubemark-actuator-testing-machineset" replicas 1. Ready: 1, available 1
I0906 15:54:39.838635   31205 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-blue" replicas 1. Ready: 1, available 1
I0906 15:54:39.838642   31205 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-green" replicas 1. Ready: 1, available 1
I0906 15:54:39.838647   31205 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-red" replicas 1. Ready: 1, available 1
I0906 15:54:39.841465   31205 utils.go:231] Node "dbe55a04-5eb1-42df-ae80-846f79da969a". Ready: true. Unschedulable: false
I0906 15:54:39.841485   31205 utils.go:231] Node "de17a193-07be-4165-bf27-a3510a938d6e". Ready: true. Unschedulable: false
I0906 15:54:39.841491   31205 utils.go:231] Node "f6279809-ed73-41b2-8127-87cfb777a67a". Ready: true. Unschedulable: false
I0906 15:54:39.841496   31205 utils.go:231] Node "fb6245e1-f356-4fab-9152-f02e39073964". Ready: true. Unschedulable: false
I0906 15:54:39.841501   31205 utils.go:231] Node "minikube". Ready: true. Unschedulable: false
I0906 15:54:39.847040   31205 utils.go:87] Cluster size is 5 nodes
I0906 15:54:39.847069   31205 utils.go:257] waiting for all nodes to be ready
I0906 15:54:39.850004   31205 utils.go:262] waiting for all nodes to be schedulable
I0906 15:54:39.852769   31205 utils.go:290] [remaining 1m0s] Node "dbe55a04-5eb1-42df-ae80-846f79da969a" is schedulable
I0906 15:54:39.852798   31205 utils.go:290] [remaining 1m0s] Node "de17a193-07be-4165-bf27-a3510a938d6e" is schedulable
I0906 15:54:39.852810   31205 utils.go:290] [remaining 1m0s] Node "f6279809-ed73-41b2-8127-87cfb777a67a" is schedulable
I0906 15:54:39.852820   31205 utils.go:290] [remaining 1m0s] Node "fb6245e1-f356-4fab-9152-f02e39073964" is schedulable
I0906 15:54:39.852830   31205 utils.go:290] [remaining 1m0s] Node "minikube" is schedulable
I0906 15:54:39.852843   31205 utils.go:267] waiting for each node to be backed by a machine
I0906 15:54:39.860864   31205 utils.go:47] [remaining 3m0s] Expecting the same number of machines and nodes, have 5 nodes and 5 machines
I0906 15:54:39.860900   31205 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-blue-c54hf" is linked to node "dbe55a04-5eb1-42df-ae80-846f79da969a"
I0906 15:54:39.860916   31205 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-green-2kj9n" is linked to node "f6279809-ed73-41b2-8127-87cfb777a67a"
I0906 15:54:39.860929   31205 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-h4t9b" is linked to node "de17a193-07be-4165-bf27-a3510a938d6e"
I0906 15:54:39.860944   31205 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-red-ff79d" is linked to node "fb6245e1-f356-4fab-9152-f02e39073964"
I0906 15:54:39.860956   31205 utils.go:70] [remaining 3m0s] Machine "minikube-static-machine" is linked to node "minikube"
I0906 15:54:39.871218   31205 utils.go:378] [15m0s remaining] Found 0 nodes with the map[node-role.kubernetes.io/worker: node-draining-test:29f771af-d0be-11e9-953e-0a2b6dea9fc2] label, as expected
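The size check above follows the same polling pattern: list machinesets and nodes every 5 seconds until the node count returns to the pre-test value, then verify that every node is schedulable and backed by a machine. A rough kubectl equivalent of the size wait alone, assuming access to the same cluster:

  EXPECTED=5
  # Re-check every 5s until the cluster shrinks back to its original size.
  until [ "$(kubectl get nodes --no-headers | wc -l)" -eq "$EXPECTED" ]; do
    echo "Cluster size expected to be $EXPECTED nodes"
    sleep 5
  done
  echo "Cluster size is $EXPECTED nodes"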

• [SLOW TEST:85.202 seconds]
[Feature:Machines] Managed cluster should
/data/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/infra/infra.go:126
  drain node before removing machine resource
  /data/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/infra/infra.go:346
------------------------------
[Feature:Machines] Managed cluster should 
  reject invalid machinesets
  /data/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/infra/infra.go:487
I0906 15:54:39.871336   31205 framework.go:406] >>> kubeConfig: /root/.kube/config
STEP: Creating invalid machineset
STEP: Waiting for ReconcileError MachineSet event
I0906 15:54:39.948426   31205 infra.go:506] Fetching ReconcileError MachineSet invalid-machineset event
I0906 15:54:39.948474   31205 infra.go:512] Found ReconcileError event for "invalid-machineset" machine set with the following message: "invalid-machineset" machineset validation failed: spec.template.metadata.labels: Invalid value: map[string]string{"big-kitty":"i-am-bit-kitty"}: `selector` does not match template `labels`
STEP: Verify no machines from "invalid-machineset" machineset were created
I0906 15:54:39.951489   31205 infra.go:528] Have 0 machines generated from "invalid-machineset" machineset
STEP: Deleting invalid machineset
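The rejected object pairs a label selector with template labels that do not satisfy it, which is exactly what the ReconcileError event reports. A sketch of such a machineset (the selector value is an assumption, and providerSpec plus other required fields are elided, so this illustrates only the mismatch):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: machine.openshift.io/v1beta1
  kind: MachineSet
  metadata:
    name: invalid-machineset
  spec:
    replicas: 1
    selector:
      matchLabels:
        little-kitty: i-am-little-kitty   # assumed selector value
    template:
      metadata:
        labels:
          big-kitty: i-am-bit-kitty       # does not match the selector above
      spec: {}                            # providerSpec elided
  EOF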
•
Ran 7 of 16 Specs in 202.157 seconds
SUCCESS! -- 7 Passed | 0 Failed | 0 Pending | 9 Skipped
--- PASS: TestE2E (202.16s)
PASS
ok  	github.com/openshift/cluster-api-actuator-pkg/pkg/e2e	202.203s
+ set +o xtrace
########## FINISHED STAGE: SUCCESS: RUN E2E TESTS [00h 04m 22s] ##########
[PostBuildScript] - Executing post build scripts.
[workspace] $ /bin/bash /tmp/jenkins4543774126737117882.sh
########## STARTING STAGE: DOWNLOAD ARTIFACTS FROM THE REMOTE HOST ##########
+ [[ -s /var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/activate ]]
+ source /var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed
++ export PATH=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/.config
+ trap 'exit 0' EXIT
++ pwd
+ ARTIFACT_DIR=/var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/artifacts/gathered
+ rm -rf /var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/artifacts/gathered
+ mkdir -p /var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/artifacts/gathered
+ tree /var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/artifacts/gathered
/var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/artifacts/gathered

0 directories, 0 files
+ exit 0
[workspace] $ /bin/bash /tmp/jenkins1104027302954288158.sh
########## STARTING STAGE: GENERATE ARTIFACTS FROM THE REMOTE HOST ##########
+ [[ -s /var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/activate ]]
+ source /var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed
++ export PATH=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/.config
+ trap 'exit 0' EXIT
++ pwd
+ ARTIFACT_DIR=/var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/artifacts/generated
+ rm -rf /var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/artifacts/generated
+ mkdir /var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/artifacts/generated
+ ssh -F /var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo docker version && sudo docker info && sudo docker images && sudo docker ps -a 2>&1'
  WARNING: You're not using the default seccomp profile
+ ssh -F /var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo cat /etc/sysconfig/docker /etc/sysconfig/docker-network /etc/sysconfig/docker-storage /etc/sysconfig/docker-storage-setup /etc/systemd/system/docker.service 2>&1'
+ true
+ ssh -F /var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo find /var/lib/docker/containers -name *.log | sudo xargs tail -vn +1 2>&1'
+ ssh -F /var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo ausearch -m AVC -m SELINUX_ERR -m USER_AVC 2>&1'
+ true
+ ssh -F /var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo df -T -h && sudo pvs && sudo vgs && sudo lvs && sudo findmnt --all 2>&1'
+ ssh -F /var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo yum list installed 2>&1'
+ ssh -F /var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo journalctl --dmesg --no-pager --all --lines=all 2>&1'
+ ssh -F /var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo journalctl _PID=1 --no-pager --all --lines=all 2>&1'
+ tree /var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/artifacts/generated
/var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/artifacts/generated
├── avc_denials.log
├── containers.log
├── dmesg.log
├── docker.config
├── docker.info
├── filesystem.info
├── installed_packages.log
└── pid1.journal

0 directories, 8 files
+ exit 0
[workspace] $ /bin/bash /tmp/jenkins1219475881723780557.sh
########## STARTING STAGE: FETCH SYSTEMD JOURNALS FROM THE REMOTE HOST ##########
+ [[ -s /var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/activate ]]
+ source /var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed
++ export PATH=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/.config
+ trap 'exit 0' EXIT
++ pwd
+ ARTIFACT_DIR=/var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/artifacts/journals
+ rm -rf /var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/artifacts/journals
+ mkdir /var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/artifacts/journals
+ ssh -F /var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit docker.service --no-pager --all --lines=all
+ ssh -F /var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit dnsmasq.service --no-pager --all --lines=all
+ ssh -F /var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit systemd-journald.service --no-pager --all --lines=all
+ tree /var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/artifacts/journals
/var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/artifacts/journals
├── dnsmasq.service
├── docker.service
└── systemd-journald.service

0 directories, 3 files
+ exit 0
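Each journal above is pulled with its own ssh invocation; a loop form of the same fetches, assuming an SSH_CONF variable for the inventory .ssh_config and adding the redirection that the harness otherwise handles on its own:

  # Hypothetical loop form of the per-unit journal fetches.
  SSH_CONF=/var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/.config/origin-ci-tool/inventory/.ssh_config
  for unit in docker dnsmasq systemd-journald; do
    ssh -F "$SSH_CONF" openshiftdevel \
      sudo journalctl --unit "${unit}.service" --no-pager --all --lines=all \
      > "artifacts/journals/${unit}.service"
  done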
[workspace] $ /bin/bash /tmp/jenkins8317497395116082701.sh
########## STARTING STAGE: ASSEMBLE GCS OUTPUT ##########
+ [[ -s /var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/activate ]]
+ source /var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed
++ export PATH=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/.config
+ trap 'exit 0' EXIT
+ mkdir -p gcs/artifacts gcs/artifacts/generated gcs/artifacts/journals gcs/artifacts/gathered
++ python -c 'import json; import urllib; print json.load(urllib.urlopen('\''https://ci.openshift.redhat.com/jenkins/job/pull-ci-openshift-kubernetes-autoscaler-master-e2e/41/api/json'\''))['\''result'\'']'
+ result=SUCCESS
+ cat
++ date +%s
+ cat /var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/builds/41/log
+ cp artifacts/generated/avc_denials.log artifacts/generated/containers.log artifacts/generated/dmesg.log artifacts/generated/docker.config artifacts/generated/docker.info artifacts/generated/filesystem.info artifacts/generated/installed_packages.log artifacts/generated/pid1.journal gcs/artifacts/generated/
+ cp artifacts/journals/dnsmasq.service artifacts/journals/docker.service artifacts/journals/systemd-journald.service gcs/artifacts/journals/
+ cp -r 'artifacts/gathered/*' gcs/artifacts/
cp: cannot stat ‘artifacts/gathered/*’: No such file or directory
++ export status=FAILURE
++ status=FAILURE
+ exit 0
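The cp failure above is a quoting bug: the single quotes around 'artifacts/gathered/*' prevent glob expansion, so cp looks for a literal file named *. Because the script traps EXIT with exit 0, the stage still finishes as a success even though status=FAILURE was recorded, and the upload stage later warns that /data/gcs/* matched nothing. The fix is to leave the glob unquoted, guarding against an empty directory (artifacts/gathered held 0 files in this run):

  # Let the shell expand the glob; copy only if something matched.
  if compgen -G 'artifacts/gathered/*' > /dev/null; then
    cp -r artifacts/gathered/* gcs/artifacts/
  fi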
[workspace] $ /bin/bash /tmp/jenkins6009427369200317087.sh
########## STARTING STAGE: PUSH THE ARTIFACTS AND METADATA ##########
+ [[ -s /var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/activate ]]
+ source /var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed
++ export PATH=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/.config
++ mktemp
+ script=/tmp/tmp.1lvabC4ORu
+ cat
+ chmod +x /tmp/tmp.1lvabC4ORu
+ scp -F /var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/.config/origin-ci-tool/inventory/.ssh_config /tmp/tmp.1lvabC4ORu openshiftdevel:/tmp/tmp.1lvabC4ORu
+ ssh -F /var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/.config/origin-ci-tool/inventory/.ssh_config -t openshiftdevel 'bash -l -c "timeout 300 /tmp/tmp.1lvabC4ORu"'
+ cd /home/origin
+ trap 'exit 0' EXIT
+ [[ -n {"type":"presubmit","job":"pull-ci-openshift-kubernetes-autoscaler-master-e2e","buildid":"1169998239923965952","prowjobid":"44835668-d0bc-11e9-a06a-0a58ac108d5e","refs":{"org":"openshift","repo":"kubernetes-autoscaler","repo_link":"https://github.com/openshift/kubernetes-autoscaler","base_ref":"master","base_sha":"18a08df116691fed1236f4e53a67614dbc85b1fb","base_link":"https://github.com/openshift/kubernetes-autoscaler/commit/18a08df116691fed1236f4e53a67614dbc85b1fb","pulls":[{"number":116,"author":"frobware","sha":"470bd635e18fe2399da3e23cf71c3d649266c164","link":"https://github.com/openshift/kubernetes-autoscaler/pull/116","commit_link":"https://github.com/openshift/kubernetes-autoscaler/pull/116/commits/470bd635e18fe2399da3e23cf71c3d649266c164","author_link":"https://github.com/frobware"}]}} ]]
++ jq --compact-output '.buildid |= "41"'
+ JOB_SPEC='{"type":"presubmit","job":"pull-ci-openshift-kubernetes-autoscaler-master-e2e","buildid":"41","prowjobid":"44835668-d0bc-11e9-a06a-0a58ac108d5e","refs":{"org":"openshift","repo":"kubernetes-autoscaler","repo_link":"https://github.com/openshift/kubernetes-autoscaler","base_ref":"master","base_sha":"18a08df116691fed1236f4e53a67614dbc85b1fb","base_link":"https://github.com/openshift/kubernetes-autoscaler/commit/18a08df116691fed1236f4e53a67614dbc85b1fb","pulls":[{"number":116,"author":"frobware","sha":"470bd635e18fe2399da3e23cf71c3d649266c164","link":"https://github.com/openshift/kubernetes-autoscaler/pull/116","commit_link":"https://github.com/openshift/kubernetes-autoscaler/pull/116/commits/470bd635e18fe2399da3e23cf71c3d649266c164","author_link":"https://github.com/frobware"}]}}'
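The two lines above rewrite only the buildid field of the Prow JOB_SPEC, leaving the rest of the JSON intact; jq's |= operator updates a path in place:

  # Replace .buildid with "41" and emit compact JSON on one line.
  echo "$JOB_SPEC" | jq --compact-output '.buildid |= "41"'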
+ docker run -e 'JOB_SPEC={"type":"presubmit","job":"pull-ci-openshift-kubernetes-autoscaler-master-e2e","buildid":"41","prowjobid":"44835668-d0bc-11e9-a06a-0a58ac108d5e","refs":{"org":"openshift","repo":"kubernetes-autoscaler","repo_link":"https://github.com/openshift/kubernetes-autoscaler","base_ref":"master","base_sha":"18a08df116691fed1236f4e53a67614dbc85b1fb","base_link":"https://github.com/openshift/kubernetes-autoscaler/commit/18a08df116691fed1236f4e53a67614dbc85b1fb","pulls":[{"number":116,"author":"frobware","sha":"470bd635e18fe2399da3e23cf71c3d649266c164","link":"https://github.com/openshift/kubernetes-autoscaler/pull/116","commit_link":"https://github.com/openshift/kubernetes-autoscaler/pull/116/commits/470bd635e18fe2399da3e23cf71c3d649266c164","author_link":"https://github.com/frobware"}]}}' -v /data:/data:z registry.svc.ci.openshift.org/ci/gcsupload:latest --dry-run=false --gcs-path=gs://origin-ci-test --gcs-credentials-file=/data/credentials.json --path-strategy=single --default-org=openshift --default-repo=origin '/data/gcs/*'
Unable to find image 'registry.svc.ci.openshift.org/ci/gcsupload:latest' locally
Trying to pull repository registry.svc.ci.openshift.org/ci/gcsupload ... 
latest: Pulling from registry.svc.ci.openshift.org/ci/gcsupload
a073c86ecf9e: Already exists
cc3fc741b1a9: Already exists
822bed51ba40: Pulling fs layer
85cea451eec0: Pulling fs layer
85cea451eec0: Verifying Checksum
85cea451eec0: Download complete
822bed51ba40: Verifying Checksum
822bed51ba40: Download complete
822bed51ba40: Pull complete
85cea451eec0: Pull complete
Digest: sha256:03aad50d7ec631ee07c12ac2ba679bd48c7781f7d5754f9e0dcc4e7260e35208
Status: Downloaded newer image for registry.svc.ci.openshift.org/ci/gcsupload:latest
{"component":"gcsupload","file":"prow/gcsupload/run.go:107","func":"k8s.io/test-infra/prow/gcsupload.Options.assembleTargets","level":"warning","msg":"Encountered error in resolving items to upload for /data/gcs/*: stat /data/gcs/*: no such file or directory","time":"2019-09-06T15:55:01Z"}
{"component":"gcsupload","dest":"pr-logs/directory/pull-ci-openshift-kubernetes-autoscaler-master-e2e/41.txt","file":"prow/pod-utils/gcs/upload.go:64","func":"k8s.io/test-infra/prow/pod-utils/gcs.upload","level":"info","msg":"Queued for upload","time":"2019-09-06T15:55:01Z"}
{"component":"gcsupload","dest":"pr-logs/directory/pull-ci-openshift-kubernetes-autoscaler-master-e2e/latest-build.txt","file":"prow/pod-utils/gcs/upload.go:64","func":"k8s.io/test-infra/prow/pod-utils/gcs.upload","level":"info","msg":"Queued for upload","time":"2019-09-06T15:55:01Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_kubernetes-autoscaler/116/pull-ci-openshift-kubernetes-autoscaler-master-e2e/latest-build.txt","file":"prow/pod-utils/gcs/upload.go:64","func":"k8s.io/test-infra/prow/pod-utils/gcs.upload","level":"info","msg":"Queued for upload","time":"2019-09-06T15:55:01Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_kubernetes-autoscaler/116/pull-ci-openshift-kubernetes-autoscaler-master-e2e/latest-build.txt","file":"prow/pod-utils/gcs/upload.go:70","func":"k8s.io/test-infra/prow/pod-utils/gcs.upload.func1","level":"info","msg":"Finished upload","time":"2019-09-06T15:55:02Z"}
{"component":"gcsupload","dest":"pr-logs/directory/pull-ci-openshift-kubernetes-autoscaler-master-e2e/latest-build.txt","file":"prow/pod-utils/gcs/upload.go:70","func":"k8s.io/test-infra/prow/pod-utils/gcs.upload.func1","level":"info","msg":"Finished upload","time":"2019-09-06T15:55:02Z"}
{"component":"gcsupload","dest":"pr-logs/directory/pull-ci-openshift-kubernetes-autoscaler-master-e2e/41.txt","file":"prow/pod-utils/gcs/upload.go:70","func":"k8s.io/test-infra/prow/pod-utils/gcs.upload.func1","level":"info","msg":"Finished upload","time":"2019-09-06T15:55:02Z"}
{"component":"gcsupload","file":"prow/gcsupload/run.go:65","func":"k8s.io/test-infra/prow/gcsupload.Options.Run","level":"info","msg":"Finished upload to GCS","time":"2019-09-06T15:55:02Z"}
+ exit 0
+ set +o xtrace
########## FINISHED STAGE: SUCCESS: PUSH THE ARTIFACTS AND METADATA [00h 00m 06s] ##########
[workspace] $ /bin/bash /tmp/jenkins47189649540297075.sh
########## STARTING STAGE: DEPROVISION CLOUD RESOURCES ##########
+ [[ -s /var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/activate ]]
+ source /var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed
++ export PATH=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/.config
+ oct deprovision

PLAYBOOK: main.yml *************************************************************
4 plays in /var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml

PLAY [ensure we have the parameters necessary to deprovision virtual hosts] ****

TASK [ensure all required variables are set] ***********************************
task path: /var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml:9
skipping: [localhost] => (item=origin_ci_inventory_dir)  => {
    "changed": false, 
    "generated_timestamp": "2019-09-06 11:55:03.401697", 
    "item": "origin_ci_inventory_dir", 
    "skip_reason": "Conditional check failed", 
    "skipped": true
}
skipping: [localhost] => (item=origin_ci_aws_region)  => {
    "changed": false, 
    "generated_timestamp": "2019-09-06 11:55:03.406036", 
    "item": "origin_ci_aws_region", 
    "skip_reason": "Conditional check failed", 
    "skipped": true
}

PLAY [deprovision virtual hosts in EC2] ****************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [deprovision a virtual EC2 host] ******************************************
task path: /var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml:28
included: /var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml for localhost

TASK [update the SSH configuration to remove AWS EC2 specifics] ****************
task path: /var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:2
ok: [localhost] => {
    "changed": false, 
    "generated_timestamp": "2019-09-06 11:55:04.224685", 
    "msg": ""
}

TASK [rename EC2 instance for termination reaper] ******************************
task path: /var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:8
changed: [localhost] => {
    "changed": true, 
    "generated_timestamp": "2019-09-06 11:55:04.911253", 
    "msg": "Tags {'Name': 'oct-terminate'} created for resource i-03f1b8259c215731f."
}

TASK [tear down the EC2 instance] **********************************************
task path: /var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:15
changed: [localhost] => {
    "changed": true, 
    "generated_timestamp": "2019-09-06 11:55:05.810097", 
    "instance_ids": [
        "i-03f1b8259c215731f"
    ], 
    "instances": [
        {
            "ami_launch_index": "0", 
            "architecture": "x86_64", 
            "block_device_mapping": {
                "/dev/sda1": {
                    "delete_on_termination": true, 
                    "status": "attached", 
                    "volume_id": "vol-03eae6f84a19d5d4a"
                }, 
                "/dev/sdb": {
                    "delete_on_termination": true, 
                    "status": "attached", 
                    "volume_id": "vol-0afc8078905a83f21"
                }
            }, 
            "dns_name": "ec2-18-215-182-23.compute-1.amazonaws.com", 
            "ebs_optimized": false, 
            "groups": {
                "sg-7e73221a": "default"
            }, 
            "hypervisor": "xen", 
            "id": "i-03f1b8259c215731f", 
            "image_id": "ami-0b77b87a37c3e662c", 
            "instance_type": "m4.xlarge", 
            "kernel": null, 
            "key_name": "libra", 
            "launch_time": "2019-09-06T15:38:33.000Z", 
            "placement": "us-east-1c", 
            "private_dns_name": "ip-172-18-17-249.ec2.internal", 
            "private_ip": "172.18.17.249", 
            "public_dns_name": "ec2-18-215-182-23.compute-1.amazonaws.com", 
            "public_ip": "18.215.182.23", 
            "ramdisk": null, 
            "region": "us-east-1", 
            "root_device_name": "/dev/sda1", 
            "root_device_type": "ebs", 
            "state": "running", 
            "state_code": 16, 
            "tags": {
                "Name": "oct-terminate", 
                "openshift_etcd": "", 
                "openshift_master": "", 
                "openshift_node": ""
            }, 
            "tenancy": "default", 
            "virtualization_type": "hvm"
        }
    ], 
    "tagged_instances": []
}

TASK [remove the serialized host variables] ************************************
task path: /var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:22
changed: [localhost] => {
    "changed": true, 
    "generated_timestamp": "2019-09-06 11:55:06.049339", 
    "path": "/var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/.config/origin-ci-tool/inventory/host_vars/172.18.17.249.yml", 
    "state": "absent"
}

PLAY [deprovision virtual hosts locally managed by Vagrant] ********************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

PLAY [clean up local configuration for deprovisioned instances] ****************

TASK [remove inventory configuration directory] ********************************
task path: /var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml:61
changed: [localhost] => {
    "changed": true, 
    "generated_timestamp": "2019-09-06 11:55:06.549505", 
    "path": "/var/lib/jenkins/jobs/pull-ci-openshift-kubernetes-autoscaler-master-e2e/workspace/.config/origin-ci-tool/inventory", 
    "state": "absent"
}

PLAY RECAP *********************************************************************
localhost                  : ok=8    changed=4    unreachable=0    failed=0   

+ set +o xtrace
########## FINISHED STAGE: SUCCESS: DEPROVISION CLOUD RESOURCES [00h 00m 04s] ##########
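The deprovision play uses a rename-then-terminate pattern: the instance is first tagged Name=oct-terminate so a separate reaper can collect it if termination fails, then torn down. With the AWS CLI the same two steps would be roughly:

  # Tag for the termination reaper, then terminate (instance ID from this run).
  aws ec2 create-tags --resources i-03f1b8259c215731f --tags Key=Name,Value=oct-terminate
  aws ec2 terminate-instances --instance-ids i-03f1b8259c215731f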
Archiving artifacts
Recording test results
[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] done
Finished: SUCCESS