------------------------------
[Feature:Operators] Machine API operator deployment should
be available
/tmp/tmp.sOq0VYkqJs/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/operators/machine-api-operator.go:18
I0904 11:21:13.390201 31231 framework.go:406] >>> kubeConfig: /root/.kube/config
I0904 11:21:13.402035 31231 deloyment.go:58] Deployment "machine-api-operator" is available. Status: (replicas: 1, updated: 1, ready: 1, available: 1, unavailable: 0)
•
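The two operator specs here boil down to polling a Deployment until its status reports every replica updated, ready and available, which is what the deloyment.go:58 line above summarizes. A minimal sketch of such a check, assuming a recent client-go and the kube-system namespace used elsewhere in this run (the names, namespace and timeouts are illustrative, not the suite's own helper):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForDeploymentAvailable polls until the named Deployment reports all of
// its replicas updated and available, mirroring the status line logged above.
func waitForDeploymentAvailable(client kubernetes.Interface, namespace, name string) error {
	return wait.PollImmediate(3*time.Second, 5*time.Minute, func() (bool, error) {
		d, err := client.AppsV1().Deployments(namespace).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			// Matches the "not found, retrying..." behaviour seen below: keep polling.
			return false, nil
		}
		want := int32(1)
		if d.Spec.Replicas != nil {
			want = *d.Spec.Replicas
		}
		fmt.Printf("Deployment %q status: (replicas: %d, updated: %d, ready: %d, available: %d, unavailable: %d)\n",
			name, d.Status.Replicas, d.Status.UpdatedReplicas, d.Status.ReadyReplicas,
			d.Status.AvailableReplicas, d.Status.UnavailableReplicas)
		return d.Status.UpdatedReplicas == want &&
			d.Status.AvailableReplicas == want &&
			d.Status.UnavailableReplicas == 0, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	// Namespace is an assumption for this kubemark environment.
	if err := waitForDeploymentAvailable(client, "kube-system", "machine-api-operator"); err != nil {
		panic(err)
	}
}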
------------------------------
[Feature:Operators] Machine API operator deployment should
reconcile controllers deployment
/tmp/tmp.sOq0VYkqJs/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/operators/machine-api-operator.go:25
I0904 11:21:13.402146 31231 framework.go:406] >>> kubeConfig: /root/.kube/config
STEP: checking deployment "machine-api-controllers" is available
I0904 11:21:13.415492 31231 deloyment.go:58] Deployment "machine-api-controllers" is available. Status: (replicas: 1, updated: 1, ready: 1, available: 1, unavailable: 0)
STEP: deleting deployment "machine-api-controllers"
STEP: checking deployment "machine-api-controllers" is available again
E0904 11:21:13.424132 31231 deloyment.go:25] Error querying api for Deployment object "machine-api-controllers": deployments.apps "machine-api-controllers" not found, retrying...
E0904 11:21:14.427695 31231 deloyment.go:55] Deployment "machine-api-controllers" is not available. Status: (replicas: 1, updated: 1, ready: 0, available: 0, unavailable: 1)
I0904 11:21:15.430997 31231 deloyment.go:58] Deployment "machine-api-controllers" is available. Status: (replicas: 1, updated: 1, ready: 1, available: 1, unavailable: 0)
•
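The reconcile spec adds one step on top of the availability check: delete the controllers Deployment and expect the machine-api-operator to recreate it, which is why the log briefly shows "not found, retrying..." and an unavailable status before the Deployment is available again. A sketch, reusing the client and waitForDeploymentAvailable helper from the sketch above:

// Delete the controllers Deployment, then expect the machine-api-operator to
// recreate it; reuses client and waitForDeploymentAvailable from the sketch above.
if err := client.AppsV1().Deployments("kube-system").Delete(
	context.TODO(), "machine-api-controllers", metav1.DeleteOptions{}); err != nil {
	panic(err)
}
// Right after deletion the object is gone, then briefly unavailable, then
// available again once the operator has reconciled it back into place.
if err := waitForDeploymentAvailable(client, "kube-system", "machine-api-controllers"); err != nil {
	panic(err)
}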
------------------------------
[Feature:Operators] Cluster autoscaler cluster operator status should
be available
/tmp/tmp.sOq0VYkqJs/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/operators/cluster-autoscaler-operator.go:90
I0904 11:21:15.431078 31231 framework.go:406] >>> kubeConfig: /root/.kube/config
•SSSSSSSS
Ran 7 of 16 Specs in 2.149 seconds
SUCCESS! -- 7 Passed | 0 Failed | 0 Pending | 9 Skipped
--- PASS: TestE2E (2.15s)
PASS
ok github.com/openshift/cluster-api-actuator-pkg/pkg/e2e 2.195s
hack/ci-integration.sh -ginkgo.v -ginkgo.noColor=true -ginkgo.skip "Feature:Operators|TechPreview" -ginkgo.failFast -ginkgo.seed=1
=== RUN TestE2E
Running Suite: Machine Suite
============================
Random Seed: 1
Will run 7 of 16 specs
SSSSSSSS
------------------------------
[Feature:Machines] Autoscaler should
scale up and down
/tmp/tmp.sOq0VYkqJs/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/autoscaler/autoscaler.go:233
I0904 11:21:18.541875 31689 framework.go:406] >>> kubeConfig: /root/.kube/config
I0904 11:21:18.546461 31689 framework.go:406] >>> kubeConfig: /root/.kube/config
I0904 11:21:18.569071 31689 framework.go:406] >>> kubeConfig: /root/.kube/config
STEP: Getting existing machinesets
STEP: Getting existing machines
STEP: Getting existing nodes
I0904 11:21:18.581530 31689 autoscaler.go:283] Have 4 existing machinesets
I0904 11:21:18.581553 31689 autoscaler.go:284] Have 5 existing machines
I0904 11:21:18.581560 31689 autoscaler.go:285] Have 5 existing nodes
STEP: Creating 3 transient machinesets
STEP: [15m0s remaining] Waiting for nodes to be Ready in 3 transient machinesets
E0904 11:21:18.603947 31689 utils.go:157] Machine "e2e-1da80-w-0-6c589" has no NodeRef
STEP: [14m57s remaining] Waiting for nodes to be Ready in 3 transient machinesets
I0904 11:21:21.698815 31689 utils.go:165] Machine "e2e-1da80-w-0-6c589" is backing node "4298df12-a5cb-4a86-a580-159eb479c4a5"
I0904 11:21:21.698846 31689 utils.go:149] MachineSet "e2e-1da80-w-0" have 1 nodes
I0904 11:21:21.743401 31689 utils.go:165] Machine "e2e-1da80-w-1-khf5z" is backing node "6099b7bf-6c51-4bb5-99d6-475c3a0cdb75"
I0904 11:21:21.743432 31689 utils.go:149] MachineSet "e2e-1da80-w-1" have 1 nodes
E0904 11:21:21.754308 31689 utils.go:157] Machine "e2e-1da80-w-2-f2xrd" has no NodeRef
STEP: [14m54s remaining] Waiting for nodes to be Ready in 3 transient machinesets
I0904 11:21:24.762978 31689 utils.go:165] Machine "e2e-1da80-w-0-6c589" is backing node "4298df12-a5cb-4a86-a580-159eb479c4a5"
I0904 11:21:24.763002 31689 utils.go:149] MachineSet "e2e-1da80-w-0" have 1 nodes
I0904 11:21:24.768620 31689 utils.go:165] Machine "e2e-1da80-w-1-khf5z" is backing node "6099b7bf-6c51-4bb5-99d6-475c3a0cdb75"
I0904 11:21:24.768646 31689 utils.go:149] MachineSet "e2e-1da80-w-1" have 1 nodes
I0904 11:21:24.774126 31689 utils.go:165] Machine "e2e-1da80-w-2-f2xrd" is backing node "22efef53-29f9-4f0f-a3f7-af3147b31bdd"
I0904 11:21:24.774151 31689 utils.go:149] MachineSet "e2e-1da80-w-2" have 1 nodes
I0904 11:21:24.774189 31689 utils.go:177] Node "4298df12-a5cb-4a86-a580-159eb479c4a5" is ready. Conditions are: [{OutOfDisk False 2019-09-04 11:21:23 +0000 UTC 2019-09-04 11:21:21 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2019-09-04 11:21:23 +0000 UTC 2019-09-04 11:21:21 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-09-04 11:21:23 +0000 UTC 2019-09-04 11:21:21 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-09-04 11:21:23 +0000 UTC 2019-09-04 11:21:21 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-09-04 11:21:23 +0000 UTC 2019-09-04 11:21:21 +0000 UTC KubeletReady kubelet is posting ready status}]
I0904 11:21:24.774270 31689 utils.go:177] Node "6099b7bf-6c51-4bb5-99d6-475c3a0cdb75" is ready. Conditions are: [{OutOfDisk False 2019-09-04 11:21:23 +0000 UTC 2019-09-04 11:21:21 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2019-09-04 11:21:23 +0000 UTC 2019-09-04 11:21:21 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-09-04 11:21:23 +0000 UTC 2019-09-04 11:21:21 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-09-04 11:21:23 +0000 UTC 2019-09-04 11:21:21 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-09-04 11:21:23 +0000 UTC 2019-09-04 11:21:21 +0000 UTC KubeletReady kubelet is posting ready status}]
I0904 11:21:24.774379 31689 utils.go:177] Node "22efef53-29f9-4f0f-a3f7-af3147b31bdd" is ready. Conditions are: [{OutOfDisk False 2019-09-04 11:21:24 +0000 UTC 2019-09-04 11:21:22 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2019-09-04 11:21:24 +0000 UTC 2019-09-04 11:21:22 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-09-04 11:21:24 +0000 UTC 2019-09-04 11:21:22 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-09-04 11:21:24 +0000 UTC 2019-09-04 11:21:22 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-09-04 11:21:24 +0000 UTC 2019-09-04 11:21:22 +0000 UTC KubeletReady kubelet is posting ready status}]
STEP: Getting nodes
STEP: Creating 3 machineautoscalers
I0904 11:21:24.777723 31689 autoscaler.go:337] Create MachineAutoscaler backed by MachineSet kube-system/e2e-1da80-w-0 - min:1, max:2
I0904 11:21:24.785138 31689 autoscaler.go:337] Create MachineAutoscaler backed by MachineSet kube-system/e2e-1da80-w-1 - min:1, max:2
I0904 11:21:24.790208 31689 autoscaler.go:337] Create MachineAutoscaler backed by MachineSet kube-system/e2e-1da80-w-2 - min:1, max:2
STEP: Creating ClusterAutoscaler configured with maxNodesTotal:10
STEP: Deriving Memory capacity from machine "kubemark-actuator-testing-machineset"
I0904 11:21:24.905663 31689 autoscaler.go:374] Memory capacity of worker node "166234dd-4999-47ec-b113-6931509cece9" is 3840Mi
STEP: Creating scale-out workload: jobs: 11, memory: 2818572300
I0904 11:21:24.932305 31689 autoscaler.go:396] [15m0s remaining] Expecting 2 "ScaledUpGroup" events; observed 0
I0904 11:21:25.920519 31689 autoscaler.go:358] cluster-autoscaler: cluster-autoscaler-default-598c649f66-7rbkx became leader
I0904 11:21:27.932553 31689 autoscaler.go:396] [14m57s remaining] Expecting 2 "ScaledUpGroup" events; observed 0
I0904 11:21:30.933197 31689 autoscaler.go:396] [14m54s remaining] Expecting 2 "ScaledUpGroup" events; observed 0
I0904 11:21:33.933430 31689 autoscaler.go:396] [14m51s remaining] Expecting 2 "ScaledUpGroup" events; observed 0
I0904 11:21:36.077048 31689 autoscaler.go:358] cluster-autoscaler-status: Max total nodes in cluster reached: 10
I0904 11:21:36.078816 31689 autoscaler.go:358] cluster-autoscaler-status: Scale-up: setting group kube-system/e2e-1da80-w-1 size to 2
I0904 11:21:36.089127 31689 autoscaler.go:358] workload-lznjj: pod triggered scale-up: [{kube-system/e2e-1da80-w-1 1->2 (max: 2)}]
I0904 11:21:36.095772 31689 autoscaler.go:358] cluster-autoscaler-status: Scale-up: group kube-system/e2e-1da80-w-1 size set to 2
I0904 11:21:36.102734 31689 autoscaler.go:358] workload-g6j9j: pod triggered scale-up: [{kube-system/e2e-1da80-w-1 1->2 (max: 2)}]
I0904 11:21:36.107747 31689 autoscaler.go:358] workload-v97gq: pod triggered scale-up: [{kube-system/e2e-1da80-w-1 1->2 (max: 2)}]
I0904 11:21:36.111364 31689 autoscaler.go:358] workload-thqdf: pod triggered scale-up: [{kube-system/e2e-1da80-w-1 1->2 (max: 2)}]
I0904 11:21:36.119918 31689 autoscaler.go:358] workload-qhq29: pod triggered scale-up: [{kube-system/e2e-1da80-w-1 1->2 (max: 2)}]
I0904 11:21:36.123795 31689 autoscaler.go:358] workload-72l8n: pod triggered scale-up: [{kube-system/e2e-1da80-w-1 1->2 (max: 2)}]
I0904 11:21:36.126080 31689 autoscaler.go:358] workload-vqh28: pod triggered scale-up: [{kube-system/e2e-1da80-w-1 1->2 (max: 2)}]
I0904 11:21:36.276609 31689 autoscaler.go:358] workload-njgzw: pod triggered scale-up: [{kube-system/e2e-1da80-w-1 1->2 (max: 2)}]
I0904 11:21:36.933645 31689 autoscaler.go:396] [14m48s remaining] Expecting 2 "ScaledUpGroup" events; observed 1
I0904 11:21:39.934653 31689 autoscaler.go:396] [14m45s remaining] Expecting 2 "ScaledUpGroup" events; observed 1
I0904 11:21:42.934882 31689 autoscaler.go:396] [14m42s remaining] Expecting 2 "ScaledUpGroup" events; observed 1
I0904 11:21:45.935131 31689 autoscaler.go:396] [14m39s remaining] Expecting 2 "ScaledUpGroup" events; observed 1
I0904 11:21:46.102392 31689 autoscaler.go:358] cluster-autoscaler-status: Scale-up: setting group kube-system/e2e-1da80-w-0 size to 2
I0904 11:21:46.111248 31689 autoscaler.go:358] workload-v97gq: pod triggered scale-up: [{kube-system/e2e-1da80-w-0 1->2 (max: 2)}]
I0904 11:21:46.115274 31689 autoscaler.go:358] cluster-autoscaler-status: Scale-up: group kube-system/e2e-1da80-w-0 size set to 2
I0904 11:21:46.117297 31689 autoscaler.go:358] workload-qhq29: pod triggered scale-up: [{kube-system/e2e-1da80-w-0 1->2 (max: 2)}]
I0904 11:21:46.121886 31689 autoscaler.go:358] workload-vqh28: pod triggered scale-up: [{kube-system/e2e-1da80-w-0 1->2 (max: 2)}]
I0904 11:21:46.127055 31689 autoscaler.go:358] workload-72l8n: pod triggered scale-up: [{kube-system/e2e-1da80-w-0 1->2 (max: 2)}]
I0904 11:21:46.129344 31689 autoscaler.go:358] workload-thqdf: pod triggered scale-up: [{kube-system/e2e-1da80-w-0 1->2 (max: 2)}]
I0904 11:21:46.131534 31689 autoscaler.go:358] workload-lznjj: pod triggered scale-up: [{kube-system/e2e-1da80-w-0 1->2 (max: 2)}]
I0904 11:21:46.136155 31689 autoscaler.go:358] workload-g6j9j: pod triggered scale-up: [{kube-system/e2e-1da80-w-0 1->2 (max: 2)}]
I0904 11:21:48.935321 31689 autoscaler.go:396] [14m36s remaining] Expecting 2 "ScaledUpGroup" events; observed 2
I0904 11:21:48.935914 31689 autoscaler.go:411] [1m0s remaining] Waiting for cluster-autoscaler to generate a "MaxNodesTotalReached" event; observed 1
I0904 11:21:48.935931 31689 autoscaler.go:419] [1m0s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0904 11:21:51.936841 31689 autoscaler.go:419] [57s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0904 11:21:54.937572 31689 autoscaler.go:419] [54s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0904 11:21:57.937822 31689 autoscaler.go:419] [51s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0904 11:22:00.938042 31689 autoscaler.go:419] [48s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0904 11:22:03.939203 31689 autoscaler.go:419] [45s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0904 11:22:06.939343 31689 autoscaler.go:419] [42s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0904 11:22:09.939580 31689 autoscaler.go:419] [39s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0904 11:22:12.939816 31689 autoscaler.go:419] [36s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0904 11:22:15.940083 31689 autoscaler.go:419] [33s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0904 11:22:18.940259 31689 autoscaler.go:419] [30s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0904 11:22:21.940479 31689 autoscaler.go:419] [27s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0904 11:22:24.940753 31689 autoscaler.go:419] [24s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0904 11:22:27.940956 31689 autoscaler.go:419] [21s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0904 11:22:30.941213 31689 autoscaler.go:419] [18s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0904 11:22:33.941482 31689 autoscaler.go:419] [15s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0904 11:22:36.941719 31689 autoscaler.go:419] [12s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0904 11:22:39.942283 31689 autoscaler.go:419] [9s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0904 11:22:42.942517 31689 autoscaler.go:419] [6s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
I0904 11:22:45.942788 31689 autoscaler.go:419] [3s remaining] At max cluster size and expecting no more "ScaledUpGroup" events; currently have 2, max=2
STEP: Deleting workload
I0904 11:22:48.947960 31689 autoscaler.go:433] [15m0s remaining] Expecting 2 "ScaleDownEmpty" events; observed 2
STEP: Scaling transient machinesets to zero
I0904 11:22:48.948005 31689 autoscaler.go:440] Scaling transient machineset "e2e-1da80-w-0" to zero
I0904 11:22:48.963142 31689 autoscaler.go:440] Scaling transient machineset "e2e-1da80-w-1" to zero
I0904 11:22:48.985850 31689 autoscaler.go:440] Scaling transient machineset "e2e-1da80-w-2" to zero
STEP: Waiting for scaled up nodes to be deleted
I0904 11:22:49.029485 31689 autoscaler.go:457] [15m0s remaining] Waiting for cluster to reach original node count of 5; currently have 10
I0904 11:22:52.034833 31689 autoscaler.go:457] [14m57s remaining] Waiting for cluster to reach original node count of 5; currently have 10
I0904 11:22:55.037992 31689 autoscaler.go:457] [14m54s remaining] Waiting for cluster to reach original node count of 5; currently have 5
STEP: Waiting for scaled up machines to be deleted
I0904 11:22:55.041671 31689 autoscaler.go:467] [15m0s remaining] Waiting for cluster to reach original machine count of 5; currently have 5
• [SLOW TEST:96.571 seconds]
[Feature:Machines] Autoscaler should
/tmp/tmp.sOq0VYkqJs/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/autoscaler/autoscaler.go:232
scale up and down
/tmp/tmp.sOq0VYkqJs/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/autoscaler/autoscaler.go:233
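This spec drives the cluster-autoscaler through two custom resources: one MachineAutoscaler per transient MachineSet (min 1, max 2 replicas) and a ClusterAutoscaler capping the cluster at maxNodesTotal: 10; a memory-hungry batch workload then forces the ScaledUpGroup events, and deleting it produces the ScaleDownEmpty events counted above. A minimal sketch of how the two CRs might be created with the dynamic client; the MachineSet name, the namespace and the scaleTargetRef apiVersion are illustrative and must match whatever the cluster actually serves:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	dyn := dynamic.NewForConfigOrDie(cfg)

	// One MachineAutoscaler per transient MachineSet: min 1, max 2 replicas.
	machineAutoscaler := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "autoscaling.openshift.io/v1beta1",
		"kind":       "MachineAutoscaler",
		"metadata":   map[string]interface{}{"name": "e2e-w-0", "namespace": "kube-system"},
		"spec": map[string]interface{}{
			"minReplicas": int64(1),
			"maxReplicas": int64(2),
			"scaleTargetRef": map[string]interface{}{
				// Illustrative: must match the group/version the MachineSet is served under.
				"apiVersion": "machine.openshift.io/v1beta1",
				"kind":       "MachineSet",
				"name":       "e2e-w-0",
			},
		},
	}}
	maGVR := schema.GroupVersionResource{Group: "autoscaling.openshift.io", Version: "v1beta1", Resource: "machineautoscalers"}
	if _, err := dyn.Resource(maGVR).Namespace("kube-system").Create(context.TODO(), machineAutoscaler, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// A single, cluster-scoped ClusterAutoscaler capping the whole cluster at 10 nodes.
	clusterAutoscaler := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "autoscaling.openshift.io/v1",
		"kind":       "ClusterAutoscaler",
		"metadata":   map[string]interface{}{"name": "default"},
		"spec": map[string]interface{}{
			"resourceLimits": map[string]interface{}{"maxNodesTotal": int64(10)},
		},
	}}
	caGVR := schema.GroupVersionResource{Group: "autoscaling.openshift.io", Version: "v1", Resource: "clusterautoscalers"}
	if _, err := dyn.Resource(caGVR).Create(context.TODO(), clusterAutoscaler, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}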
------------------------------
S
------------------------------
[Feature:Machines] Managed cluster should
have machines linked with nodes
/tmp/tmp.sOq0VYkqJs/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/infra/infra.go:136
I0904 11:22:55.112756 31689 framework.go:406] >>> kubeConfig: /root/.kube/config
I0904 11:22:55.139287 31689 utils.go:47] [remaining 3m0s] Expecting the same number of machines and nodes, have 5 nodes and 5 machines
I0904 11:22:55.139322 31689 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-8drlz" is linked to node "595f4b23-c3ad-4106-bf5c-21ec235eb424"
I0904 11:22:55.139337 31689 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-blue-kx7wv" is linked to node "cbe9202c-babb-4a83-bd70-8f21d75df033"
I0904 11:22:55.139349 31689 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-green-rchkh" is linked to node "166234dd-4999-47ec-b113-6931509cece9"
I0904 11:22:55.139362 31689 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-red-kpmfw" is linked to node "d1f145c4-1ed2-493f-97af-c24f5256cf46"
I0904 11:22:55.139374 31689 utils.go:70] [remaining 3m0s] Machine "minikube-static-machine" is linked to node "minikube"
•
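The link check above pairs every Machine with the Node named in its status.nodeRef and requires the two counts to match. A sketch, reusing client and dyn from the earlier sketches; the machine API group/version is an assumption and must match what the cluster actually serves:

// Reuses client (typed clientset) and dyn (dynamic client) from the sketches above.
machineGVR := schema.GroupVersionResource{Group: "machine.openshift.io", Version: "v1beta1", Resource: "machines"}
machines, err := dyn.Resource(machineGVR).Namespace("kube-system").List(context.TODO(), metav1.ListOptions{})
if err != nil {
	panic(err)
}
nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
if err != nil {
	panic(err)
}
if len(machines.Items) != len(nodes.Items) {
	panic(fmt.Sprintf("have %d machines and %d nodes", len(machines.Items), len(nodes.Items)))
}
for _, m := range machines.Items {
	nodeName, found, _ := unstructured.NestedString(m.Object, "status", "nodeRef", "name")
	if !found || nodeName == "" {
		panic(fmt.Sprintf("Machine %q has no NodeRef", m.GetName()))
	}
	fmt.Printf("Machine %q is linked to node %q\n", m.GetName(), nodeName)
}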
------------------------------
[Feature:Machines] Managed cluster should
have ability to additively reconcile taints from machine to nodes
/tmp/tmp.sOq0VYkqJs/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/infra/infra.go:145
I0904 11:22:55.139440 31689 framework.go:406] >>> kubeConfig: /root/.kube/config
STEP: getting machine "kubemark-actuator-testing-machineset-8drlz"
I0904 11:22:55.176481 31689 utils.go:165] Machine "kubemark-actuator-testing-machineset-8drlz" is backing node "595f4b23-c3ad-4106-bf5c-21ec235eb424"
STEP: getting the backed node "595f4b23-c3ad-4106-bf5c-21ec235eb424"
STEP: updating node "595f4b23-c3ad-4106-bf5c-21ec235eb424" with taint: {not-from-machine true NoSchedule <nil>}
STEP: updating machine "kubemark-actuator-testing-machineset-8drlz" with taint: {from-machine-573c4d4f-cf06-11e9-95fe-0ac9a22f5366 true NoSchedule <nil>}
I0904 11:22:55.197529 31689 infra.go:184] Getting node from machine again for verification of taints
I0904 11:22:55.207514 31689 utils.go:165] Machine "kubemark-actuator-testing-machineset-8drlz" is backing node "595f4b23-c3ad-4106-bf5c-21ec235eb424"
I0904 11:22:55.207559 31689 infra.go:194] Expected : map[from-machine-573c4d4f-cf06-11e9-95fe-0ac9a22f5366:{} not-from-machine:{}], observed map[kubemark:{} not-from-machine:{} from-machine-573c4d4f-cf06-11e9-95fe-0ac9a22f5366:{}] , difference map[],
STEP: Getting the latest version of the original machine
STEP: Setting back the original machine taints
STEP: Getting the latest version of the node
I0904 11:22:55.224880 31689 utils.go:165] Machine "kubemark-actuator-testing-machineset-8drlz" is backing node "595f4b23-c3ad-4106-bf5c-21ec235eb424"
STEP: Setting back the original node taints
•
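The taint spec writes one taint directly onto the Node and a different one onto the Machine, then checks that the Node ends up carrying both, i.e. machine taints are reconciled onto the node additively rather than overwriting the node's own taints. A sketch of the two writes, reusing client, dyn and machineGVR from the sketches above; the machine and node names are the ones from this run, while the taint keys are illustrative:

// Reuses client, dyn and machineGVR from the sketches above.
nodeName := "595f4b23-c3ad-4106-bf5c-21ec235eb424"
machineName := "kubemark-actuator-testing-machineset-8drlz"

// Taint written only on the Node: must still be there after reconciliation.
node, err := client.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
if err != nil {
	panic(err)
}
node.Spec.Taints = append(node.Spec.Taints, corev1.Taint{Key: "not-from-machine", Value: "true", Effect: corev1.TaintEffectNoSchedule})
if _, err := client.CoreV1().Nodes().Update(context.TODO(), node, metav1.UpdateOptions{}); err != nil {
	panic(err)
}

// Taint written on the Machine: expected to be added to the Node by the controller.
machine, err := dyn.Resource(machineGVR).Namespace("kube-system").Get(context.TODO(), machineName, metav1.GetOptions{})
if err != nil {
	panic(err)
}
taints := []interface{}{map[string]interface{}{"key": "from-machine-test", "value": "true", "effect": "NoSchedule"}}
if err := unstructured.SetNestedSlice(machine.Object, taints, "spec", "taints"); err != nil {
	panic(err)
}
if _, err := dyn.Resource(machineGVR).Namespace("kube-system").Update(context.TODO(), machine, metav1.UpdateOptions{}); err != nil {
	panic(err)
}
// The "Expected : ... observed ... difference map[]" line above is the assertion
// that the node now carries both taint keys.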
------------------------------
[Feature:Machines] Managed cluster should
recover from deleted worker machines
/tmp/tmp.sOq0VYkqJs/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/infra/infra.go:220
I0904 11:22:55.230511 31689 framework.go:406] >>> kubeConfig: /root/.kube/config
STEP: checking initial cluster state
I0904 11:22:55.247452 31689 utils.go:87] Cluster size is 5 nodes
I0904 11:22:55.247479 31689 utils.go:239] [remaining 15m0s] Cluster size expected to be 5 nodes
I0904 11:22:55.250905 31689 utils.go:99] MachineSet "kubemark-actuator-testing-machineset" replicas 1. Ready: 1, available 1
I0904 11:22:55.250933 31689 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-blue" replicas 1. Ready: 1, available 1
I0904 11:22:55.250943 31689 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-green" replicas 1. Ready: 1, available 1
I0904 11:22:55.250952 31689 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-red" replicas 1. Ready: 1, available 1
I0904 11:22:55.253923 31689 utils.go:231] Node "166234dd-4999-47ec-b113-6931509cece9". Ready: true. Unschedulable: false
I0904 11:22:55.253946 31689 utils.go:231] Node "595f4b23-c3ad-4106-bf5c-21ec235eb424". Ready: true. Unschedulable: false
I0904 11:22:55.253956 31689 utils.go:231] Node "cbe9202c-babb-4a83-bd70-8f21d75df033". Ready: true. Unschedulable: false
I0904 11:22:55.253964 31689 utils.go:231] Node "d1f145c4-1ed2-493f-97af-c24f5256cf46". Ready: true. Unschedulable: false
I0904 11:22:55.253972 31689 utils.go:231] Node "minikube". Ready: true. Unschedulable: false
I0904 11:22:55.257009 31689 utils.go:87] Cluster size is 5 nodes
I0904 11:22:55.257033 31689 utils.go:257] waiting for all nodes to be ready
I0904 11:22:55.263452 31689 utils.go:262] waiting for all nodes to be schedulable
I0904 11:22:55.266890 31689 utils.go:290] [remaining 1m0s] Node "166234dd-4999-47ec-b113-6931509cece9" is schedulable
I0904 11:22:55.266919 31689 utils.go:290] [remaining 1m0s] Node "595f4b23-c3ad-4106-bf5c-21ec235eb424" is schedulable
I0904 11:22:55.266931 31689 utils.go:290] [remaining 1m0s] Node "cbe9202c-babb-4a83-bd70-8f21d75df033" is schedulable
I0904 11:22:55.266941 31689 utils.go:290] [remaining 1m0s] Node "d1f145c4-1ed2-493f-97af-c24f5256cf46" is schedulable
I0904 11:22:55.266950 31689 utils.go:290] [remaining 1m0s] Node "minikube" is schedulable
I0904 11:22:55.266959 31689 utils.go:267] waiting for each node to be backed by a machine
I0904 11:22:55.273183 31689 utils.go:47] [remaining 3m0s] Expecting the same number of machines and nodes, have 5 nodes and 5 machines
I0904 11:22:55.273214 31689 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-8drlz" is linked to node "595f4b23-c3ad-4106-bf5c-21ec235eb424"
I0904 11:22:55.273230 31689 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-blue-kx7wv" is linked to node "cbe9202c-babb-4a83-bd70-8f21d75df033"
I0904 11:22:55.273245 31689 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-green-rchkh" is linked to node "166234dd-4999-47ec-b113-6931509cece9"
I0904 11:22:55.273261 31689 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-red-kpmfw" is linked to node "d1f145c4-1ed2-493f-97af-c24f5256cf46"
I0904 11:22:55.273274 31689 utils.go:70] [remaining 3m0s] Machine "minikube-static-machine" is linked to node "minikube"
STEP: getting worker node
STEP: deleting machine object "kubemark-actuator-testing-machineset-green-rchkh"
STEP: waiting for node object "166234dd-4999-47ec-b113-6931509cece9" to go away
I0904 11:22:55.289102 31689 infra.go:255] Node "166234dd-4999-47ec-b113-6931509cece9" still exists. Node conditions are: [{OutOfDisk False 2019-09-04 11:22:53 +0000 UTC 2019-09-04 11:20:19 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2019-09-04 11:22:53 +0000 UTC 2019-09-04 11:20:19 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-09-04 11:22:53 +0000 UTC 2019-09-04 11:20:19 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-09-04 11:22:53 +0000 UTC 2019-09-04 11:20:19 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-09-04 11:22:53 +0000 UTC 2019-09-04 11:20:19 +0000 UTC KubeletReady kubelet is posting ready status}]
STEP: waiting for new node object to come up
I0904 11:23:00.293878 31689 utils.go:239] [remaining 15m0s] Cluster size expected to be 5 nodes
I0904 11:23:00.296894 31689 utils.go:99] MachineSet "kubemark-actuator-testing-machineset" replicas 1. Ready: 1, available 1
I0904 11:23:00.296912 31689 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-blue" replicas 1. Ready: 1, available 1
I0904 11:23:00.296919 31689 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-green" replicas 1. Ready: 1, available 1
I0904 11:23:00.296924 31689 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-red" replicas 1. Ready: 1, available 1
I0904 11:23:00.299793 31689 utils.go:231] Node "4169660f-ac78-4b87-b90f-0358fa5efca9". Ready: true. Unschedulable: false
I0904 11:23:00.299815 31689 utils.go:231] Node "595f4b23-c3ad-4106-bf5c-21ec235eb424". Ready: true. Unschedulable: false
I0904 11:23:00.299821 31689 utils.go:231] Node "cbe9202c-babb-4a83-bd70-8f21d75df033". Ready: true. Unschedulable: false
I0904 11:23:00.299826 31689 utils.go:231] Node "d1f145c4-1ed2-493f-97af-c24f5256cf46". Ready: true. Unschedulable: false
I0904 11:23:00.299831 31689 utils.go:231] Node "minikube". Ready: true. Unschedulable: false
I0904 11:23:00.302488 31689 utils.go:87] Cluster size is 5 nodes
I0904 11:23:00.302511 31689 utils.go:257] waiting for all nodes to be ready
I0904 11:23:00.311154 31689 utils.go:262] waiting for all nodes to be schedulable
I0904 11:23:00.323120 31689 utils.go:290] [remaining 1m0s] Node "4169660f-ac78-4b87-b90f-0358fa5efca9" is schedulable
I0904 11:23:00.323152 31689 utils.go:290] [remaining 1m0s] Node "595f4b23-c3ad-4106-bf5c-21ec235eb424" is schedulable
I0904 11:23:00.323182 31689 utils.go:290] [remaining 1m0s] Node "cbe9202c-babb-4a83-bd70-8f21d75df033" is schedulable
I0904 11:23:00.323198 31689 utils.go:290] [remaining 1m0s] Node "d1f145c4-1ed2-493f-97af-c24f5256cf46" is schedulable
I0904 11:23:00.323208 31689 utils.go:290] [remaining 1m0s] Node "minikube" is schedulable
I0904 11:23:00.323219 31689 utils.go:267] waiting for each node to be backed by a machine
I0904 11:23:00.330953 31689 utils.go:47] [remaining 3m0s] Expecting the same number of machines and nodes, have 5 nodes and 5 machines
I0904 11:23:00.330985 31689 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-8drlz" is linked to node "595f4b23-c3ad-4106-bf5c-21ec235eb424"
I0904 11:23:00.331002 31689 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-blue-kx7wv" is linked to node "cbe9202c-babb-4a83-bd70-8f21d75df033"
I0904 11:23:00.331015 31689 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-green-2twbz" is linked to node "4169660f-ac78-4b87-b90f-0358fa5efca9"
I0904 11:23:00.331028 31689 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-red-kpmfw" is linked to node "d1f145c4-1ed2-493f-97af-c24f5256cf46"
I0904 11:23:00.331040 31689 utils.go:70] [remaining 3m0s] Machine "minikube-static-machine" is linked to node "minikube"
• [SLOW TEST:5.101 seconds]
[Feature:Machines] Managed cluster should
/tmp/tmp.sOq0VYkqJs/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/infra/infra.go:126
recover from deleted worker machines
/tmp/tmp.sOq0VYkqJs/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/infra/infra.go:220
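The recovery spec deletes one worker Machine, waits for its Node object to disappear, and then waits for the owning MachineSet to bring the cluster back to its original size with a replacement machine and node. A sketch of the delete-and-wait, reusing dyn, client and machineGVR from the sketches above; the timeout is illustrative:

// Reuses dyn, client and machineGVR from the sketches above.
if err := dyn.Resource(machineGVR).Namespace("kube-system").Delete(
	context.TODO(), "kubemark-actuator-testing-machineset-green-rchkh", metav1.DeleteOptions{}); err != nil {
	panic(err)
}
// The suite first waits for the deleted machine's node object to go away, then
// for the MachineSet to replace it; the end state is simply the original count.
if err := wait.PollImmediate(5*time.Second, 15*time.Minute, func() (bool, error) {
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return false, nil
	}
	return len(nodes.Items) == 5, nil // original cluster size in this run
}); err != nil {
	panic(err)
}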
------------------------------
[Feature:Machines] Managed cluster should
grow and decrease when scaling different machineSets simultaneously
/tmp/tmp.sOq0VYkqJs/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/infra/infra.go:267
I0904 11:23:00.331134 31689 framework.go:406] >>> kubeConfig: /root/.kube/config
STEP: checking existing cluster size
I0904 11:23:00.355220 31689 utils.go:87] Cluster size is 5 nodes
STEP: getting worker machineSets
I0904 11:23:00.361022 31689 infra.go:297] Creating transient MachineSet "e2e-5a525-w-0"
I0904 11:23:00.368505 31689 infra.go:297] Creating transient MachineSet "e2e-5a525-w-1"
STEP: scaling "e2e-5a525-w-0" from 0 to 2 replicas
I0904 11:23:00.374122 31689 framework.go:406] >>> kubeConfig: /root/.kube/config
STEP: scaling "e2e-5a525-w-1" from 0 to 2 replicas
I0904 11:23:00.407243 31689 framework.go:406] >>> kubeConfig: /root/.kube/config
E0904 11:23:00.447542 31689 utils.go:157] Machine "e2e-5a525-w-0-bgz4t" has no NodeRef
I0904 11:23:05.458397 31689 utils.go:165] Machine "e2e-5a525-w-0-bgz4t" is backing node "3c21bb7a-4f3e-444a-b521-dd4dbeef8007"
I0904 11:23:05.462166 31689 utils.go:165] Machine "e2e-5a525-w-0-klz8g" is backing node "e969fe05-b8d5-427b-8cc1-86207add2106"
I0904 11:23:05.462192 31689 utils.go:149] MachineSet "e2e-5a525-w-0" have 2 nodes
E0904 11:23:05.467615 31689 utils.go:157] Machine "e2e-5a525-w-1-bkn26" has no NodeRef
I0904 11:23:10.475346 31689 utils.go:165] Machine "e2e-5a525-w-0-bgz4t" is backing node "3c21bb7a-4f3e-444a-b521-dd4dbeef8007"
I0904 11:23:10.477765 31689 utils.go:165] Machine "e2e-5a525-w-0-klz8g" is backing node "e969fe05-b8d5-427b-8cc1-86207add2106"
I0904 11:23:10.477789 31689 utils.go:149] MachineSet "e2e-5a525-w-0" have 2 nodes
I0904 11:23:10.483570 31689 utils.go:165] Machine "e2e-5a525-w-1-bkn26" is backing node "8084551b-1aa4-4ec3-a41c-e09f6c3b541c"
I0904 11:23:10.485195 31689 utils.go:165] Machine "e2e-5a525-w-1-hgxsw" is backing node "4a823af9-c90b-4bae-a66a-afee939f78ea"
I0904 11:23:10.485226 31689 utils.go:149] MachineSet "e2e-5a525-w-1" have 2 nodes
I0904 11:23:10.485237 31689 utils.go:177] Node "3c21bb7a-4f3e-444a-b521-dd4dbeef8007" is ready. Conditions are: [{OutOfDisk False 2019-09-04 11:23:09 +0000 UTC 2019-09-04 11:23:03 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2019-09-04 11:23:09 +0000 UTC 2019-09-04 11:23:03 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-09-04 11:23:09 +0000 UTC 2019-09-04 11:23:03 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-09-04 11:23:09 +0000 UTC 2019-09-04 11:23:03 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-09-04 11:23:09 +0000 UTC 2019-09-04 11:23:03 +0000 UTC KubeletReady kubelet is posting ready status}]
I0904 11:23:10.485294 31689 utils.go:177] Node "e969fe05-b8d5-427b-8cc1-86207add2106" is ready. Conditions are: [{OutOfDisk False 2019-09-04 11:23:10 +0000 UTC 2019-09-04 11:23:04 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2019-09-04 11:23:10 +0000 UTC 2019-09-04 11:23:04 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-09-04 11:23:10 +0000 UTC 2019-09-04 11:23:04 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-09-04 11:23:10 +0000 UTC 2019-09-04 11:23:04 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-09-04 11:23:10 +0000 UTC 2019-09-04 11:23:04 +0000 UTC KubeletReady kubelet is posting ready status}]
I0904 11:23:10.485335 31689 utils.go:177] Node "8084551b-1aa4-4ec3-a41c-e09f6c3b541c" is ready. Conditions are: [{OutOfDisk False 2019-09-04 11:23:09 +0000 UTC 2019-09-04 11:23:05 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2019-09-04 11:23:09 +0000 UTC 2019-09-04 11:23:05 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-09-04 11:23:09 +0000 UTC 2019-09-04 11:23:05 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-09-04 11:23:09 +0000 UTC 2019-09-04 11:23:05 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-09-04 11:23:09 +0000 UTC 2019-09-04 11:23:05 +0000 UTC KubeletReady kubelet is posting ready status}]
I0904 11:23:10.485372 31689 utils.go:177] Node "4a823af9-c90b-4bae-a66a-afee939f78ea" is ready. Conditions are: [{OutOfDisk False 2019-09-04 11:23:09 +0000 UTC 2019-09-04 11:23:05 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2019-09-04 11:23:09 +0000 UTC 2019-09-04 11:23:05 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-09-04 11:23:09 +0000 UTC 2019-09-04 11:23:05 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-09-04 11:23:09 +0000 UTC 2019-09-04 11:23:05 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-09-04 11:23:09 +0000 UTC 2019-09-04 11:23:05 +0000 UTC KubeletReady kubelet is posting ready status}]
STEP: scaling "e2e-5a525-w-0" from 2 to 0 replicas
I0904 11:23:10.485412 31689 framework.go:406] >>> kubeConfig: /root/.kube/config
STEP: scaling "e2e-5a525-w-1" from 2 to 0 replicas
I0904 11:23:10.502654 31689 framework.go:406] >>> kubeConfig: /root/.kube/config
STEP: waiting for cluster to get back to original size. Final size should be 5 nodes
I0904 11:23:10.529022 31689 utils.go:239] [remaining 15m0s] Cluster size expected to be 5 nodes
I0904 11:23:10.540856 31689 utils.go:99] MachineSet "e2e-5a525-w-0" replicas 0. Ready: 0, available 0
I0904 11:23:10.540893 31689 utils.go:99] MachineSet "e2e-5a525-w-1" replicas 0. Ready: 2, available 2
I0904 11:23:10.540903 31689 utils.go:99] MachineSet "kubemark-actuator-testing-machineset" replicas 1. Ready: 1, available 1
I0904 11:23:10.540912 31689 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-blue" replicas 1. Ready: 1, available 1
I0904 11:23:10.540922 31689 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-green" replicas 1. Ready: 1, available 1
I0904 11:23:10.540930 31689 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-red" replicas 1. Ready: 1, available 1
I0904 11:23:10.552258 31689 utils.go:231] Node "3c21bb7a-4f3e-444a-b521-dd4dbeef8007". Ready: true. Unschedulable: false
I0904 11:23:10.552286 31689 utils.go:231] Node "4169660f-ac78-4b87-b90f-0358fa5efca9". Ready: true. Unschedulable: false
I0904 11:23:10.552296 31689 utils.go:231] Node "4a823af9-c90b-4bae-a66a-afee939f78ea". Ready: true. Unschedulable: false
I0904 11:23:10.552304 31689 utils.go:231] Node "595f4b23-c3ad-4106-bf5c-21ec235eb424". Ready: true. Unschedulable: false
I0904 11:23:10.552312 31689 utils.go:231] Node "8084551b-1aa4-4ec3-a41c-e09f6c3b541c". Ready: true. Unschedulable: false
I0904 11:23:10.552326 31689 utils.go:231] Node "cbe9202c-babb-4a83-bd70-8f21d75df033". Ready: true. Unschedulable: false
I0904 11:23:10.552335 31689 utils.go:231] Node "d1f145c4-1ed2-493f-97af-c24f5256cf46". Ready: true. Unschedulable: false
I0904 11:23:10.552343 31689 utils.go:231] Node "e969fe05-b8d5-427b-8cc1-86207add2106". Ready: true. Unschedulable: true
I0904 11:23:10.552351 31689 utils.go:231] Node "minikube". Ready: true. Unschedulable: false
I0904 11:23:10.560662 31689 utils.go:87] Cluster size is 9 nodes
I0904 11:23:15.560914 31689 utils.go:239] [remaining 14m55s] Cluster size expected to be 5 nodes
I0904 11:23:15.564364 31689 utils.go:99] MachineSet "e2e-5a525-w-0" replicas 0. Ready: 0, available 0
I0904 11:23:15.564392 31689 utils.go:99] MachineSet "e2e-5a525-w-1" replicas 0. Ready: 0, available 0
I0904 11:23:15.564402 31689 utils.go:99] MachineSet "kubemark-actuator-testing-machineset" replicas 1. Ready: 1, available 1
I0904 11:23:15.564408 31689 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-blue" replicas 1. Ready: 1, available 1
I0904 11:23:15.564413 31689 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-green" replicas 1. Ready: 1, available 1
I0904 11:23:15.564419 31689 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-red" replicas 1. Ready: 1, available 1
I0904 11:23:15.567109 31689 utils.go:231] Node "4169660f-ac78-4b87-b90f-0358fa5efca9". Ready: true. Unschedulable: false
I0904 11:23:15.567130 31689 utils.go:231] Node "595f4b23-c3ad-4106-bf5c-21ec235eb424". Ready: true. Unschedulable: false
I0904 11:23:15.567136 31689 utils.go:231] Node "cbe9202c-babb-4a83-bd70-8f21d75df033". Ready: true. Unschedulable: false
I0904 11:23:15.567141 31689 utils.go:231] Node "d1f145c4-1ed2-493f-97af-c24f5256cf46". Ready: true. Unschedulable: false
I0904 11:23:15.567147 31689 utils.go:231] Node "minikube". Ready: true. Unschedulable: false
I0904 11:23:15.569725 31689 utils.go:87] Cluster size is 5 nodes
I0904 11:23:15.569754 31689 utils.go:257] waiting for all nodes to be ready
I0904 11:23:15.572226 31689 utils.go:262] waiting for all nodes to be schedulable
I0904 11:23:15.574889 31689 utils.go:290] [remaining 1m0s] Node "4169660f-ac78-4b87-b90f-0358fa5efca9" is schedulable
I0904 11:23:15.574915 31689 utils.go:290] [remaining 1m0s] Node "595f4b23-c3ad-4106-bf5c-21ec235eb424" is schedulable
I0904 11:23:15.574926 31689 utils.go:290] [remaining 1m0s] Node "cbe9202c-babb-4a83-bd70-8f21d75df033" is schedulable
I0904 11:23:15.574936 31689 utils.go:290] [remaining 1m0s] Node "d1f145c4-1ed2-493f-97af-c24f5256cf46" is schedulable
I0904 11:23:15.574945 31689 utils.go:290] [remaining 1m0s] Node "minikube" is schedulable
I0904 11:23:15.574954 31689 utils.go:267] waiting for each node to be backed by a machine
I0904 11:23:15.580408 31689 utils.go:47] [remaining 3m0s] Expecting the same number of machines and nodes, have 5 nodes and 5 machines
I0904 11:23:15.580434 31689 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-8drlz" is linked to node "595f4b23-c3ad-4106-bf5c-21ec235eb424"
I0904 11:23:15.580451 31689 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-blue-kx7wv" is linked to node "cbe9202c-babb-4a83-bd70-8f21d75df033"
I0904 11:23:15.580466 31689 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-green-2twbz" is linked to node "4169660f-ac78-4b87-b90f-0358fa5efca9"
I0904 11:23:15.580486 31689 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-red-kpmfw" is linked to node "d1f145c4-1ed2-493f-97af-c24f5256cf46"
I0904 11:23:15.580494 31689 utils.go:70] [remaining 3m0s] Machine "minikube-static-machine" is linked to node "minikube"
• [SLOW TEST:15.257 seconds]
[Feature:Machines] Managed cluster should
/tmp/tmp.sOq0VYkqJs/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/infra/infra.go:126
grow and decrease when scaling different machineSets simultaneously
/tmp/tmp.sOq0VYkqJs/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/infra/infra.go:267
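The simultaneous-scaling spec creates two throwaway MachineSets, scales each 0 to 2, waits for the new nodes to become Ready, then scales both back to 0 and waits for the cluster to return to 5 nodes. Scaling amounts to bumping spec.replicas; a sketch for one MachineSet, reusing dyn from the sketches above (the GVR, name and replica count are illustrative):

// Reuses dyn from the sketches above.
machineSetGVR := schema.GroupVersionResource{Group: "machine.openshift.io", Version: "v1beta1", Resource: "machinesets"}
ms, err := dyn.Resource(machineSetGVR).Namespace("kube-system").Get(context.TODO(), "e2e-w-0", metav1.GetOptions{})
if err != nil {
	panic(err)
}
if err := unstructured.SetNestedField(ms.Object, int64(2), "spec", "replicas"); err != nil {
	panic(err)
}
if _, err := dyn.Resource(machineSetGVR).Namespace("kube-system").Update(context.TODO(), ms, metav1.UpdateOptions{}); err != nil {
	panic(err)
}
// Scaling back down is the same write with int64(0); the spec then waits for the
// node count to settle back at 5, as the "Cluster size expected to be 5 nodes"
// lines above show.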
------------------------------
[Feature:Machines] Managed cluster should
drain node before removing machine resource
/tmp/tmp.sOq0VYkqJs/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/infra/infra.go:346
I0904 11:23:15.588118 31689 framework.go:406] >>> kubeConfig: /root/.kube/config
STEP: checking existing cluster size
I0904 11:23:15.604141 31689 utils.go:87] Cluster size is 5 nodes
STEP: Taking the first worker machineset (assuming only worker machines are backed by machinesets)
STEP: Creating two new machines, one for node about to be drained, other for moving workload from drained node
STEP: Waiting until both new nodes are ready
E0904 11:23:15.616829 31689 utils.go:342] [remaining 15m0s] Expecting 2 nodes with map[string]string{"node-role.kubernetes.io/worker":"", "node-draining-test":"1da1871e-cf06-11e9-95fe-0ac9a22f5366"} labels in Ready state, got 0
I0904 11:23:20.621042 31689 utils.go:346] [14m55s remaining] Expected number (2) of nodes with map[node-role.kubernetes.io/worker: node-draining-test:1da1871e-cf06-11e9-95fe-0ac9a22f5366] label in Ready state found
STEP: Creating RC with workload
STEP: Creating PDB for RC
STEP: Wait until all replicas are ready
I0904 11:23:20.667296 31689 utils.go:396] [15m0s remaining] Waiting for at least one RC ready replica, ReadyReplicas: 0, Replicas: 0
I0904 11:23:25.671679 31689 utils.go:396] [14m55s remaining] Waiting for at least one RC ready replica, ReadyReplicas: 0, Replicas: 20
I0904 11:23:30.669459 31689 utils.go:399] [14m50s remaining] Waiting for RC ready replicas, ReadyReplicas: 20, Replicas: 20
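The pod dumps that follow are the 20 replicas of the drain fixture: a ReplicationController pinned by nodeSelector to the two freshly created drain-test nodes, plus a PodDisruptionBudget over the same pods so that the drain triggered by deleting the Machine has to respect the budget. A sketch of the PDB half, reusing client from the sketches above; the minAvailable value is illustrative, while the app=nginx selector and the default namespace match the pods below:

// Reuses client from the sketches above.
minAvailable := intstr.FromInt(3)
pdb := &policyv1.PodDisruptionBudget{
	ObjectMeta: metav1.ObjectMeta{Name: "pdb-workload", Namespace: "default"},
	Spec: policyv1.PodDisruptionBudgetSpec{
		MinAvailable: &minAvailable,
		Selector:     &metav1.LabelSelector{MatchLabels: map[string]string{"app": "nginx"}},
	},
}
if _, err := client.PolicyV1().PodDisruptionBudgets("default").Create(context.TODO(), pdb, metav1.CreateOptions{}); err != nil {
	panic(err)
}
// With the PDB in place, deleting the Machine behind the first drain-test node
// forces a drain that can only evict pods as fast as the budget allows, which is
// what the remainder of this spec verifies.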
I0904 11:23:30.677461 31689 utils.go:416] POD #0/20: {
"metadata": {
"name": "pdb-workload-4w47v",
"generateName": "pdb-workload-",
"namespace": "default",
"selfLink": "/api/v1/namespaces/default/pods/pdb-workload-4w47v",
"uid": "666db241-cf06-11e9-9d68-0ac9a22f5366",
"resourceVersion": "3873",
"creationTimestamp": "2019-09-04T11:23:20Z",
"labels": {
"app": "nginx"
},
"ownerReferences": [
{
"apiVersion": "v1",
"kind": "ReplicationController",
"name": "pdb-workload",
"uid": "66676fbe-cf06-11e9-9d68-0ac9a22f5366",
"controller": true,
"blockOwnerDeletion": true
}
]
},
"spec": {
"volumes": [
{
"name": "default-token-ffdcj",
"secret": {
"secretName": "default-token-ffdcj",
"defaultMode": 420
}
}
],
"containers": [
{
"name": "work",
"image": "busybox",
"command": [
"sleep",
"10h"
],
"resources": {
"requests": {
"cpu": "50m",
"memory": "50Mi"
}
},
"volumeMounts": [
{
"name": "default-token-ffdcj",
"readOnly": true,
"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
}
],
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "Always"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"nodeSelector": {
"node-draining-test": "1da1871e-cf06-11e9-95fe-0ac9a22f5366",
"node-role.kubernetes.io/worker": ""
},
"serviceAccountName": "default",
"serviceAccount": "default",
"nodeName": "bebdfc33-fa59-4fb2-8ba9-aeb9624e3db4",
"securityContext": {},
"schedulerName": "default-scheduler",
"tolerations": [
{
"key": "kubemark",
"operator": "Exists"
},
{
"key": "node.kubernetes.io/not-ready",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
},
{
"key": "node.kubernetes.io/unreachable",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
}
],
"priority": 0
},
"status": {
"phase": "Running",
"conditions": [
{
"type": "Initialized",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:20Z"
},
{
"type": "Ready",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:26Z"
},
{
"type": "ContainersReady",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": null
},
{
"type": "PodScheduled",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:20Z"
}
],
"hostIP": "172.17.0.15",
"podIP": "10.98.148.170",
"startTime": "2019-09-04T11:23:20Z",
"containerStatuses": [
{
"name": "work",
"state": {
"running": {
"startedAt": "2019-09-04T11:23:26Z"
}
},
"lastState": {},
"ready": true,
"restartCount": 0,
"image": "busybox:latest",
"imageID": "docker://busybox:latest",
"containerID": "docker://4e82b8628edddfd9"
}
],
"qosClass": "Burstable"
}
}
I0904 11:23:30.677674 31689 utils.go:416] POD #1/20: {
"metadata": {
"name": "pdb-workload-5c7cd",
"generateName": "pdb-workload-",
"namespace": "default",
"selfLink": "/api/v1/namespaces/default/pods/pdb-workload-5c7cd",
"uid": "666af806-cf06-11e9-9d68-0ac9a22f5366",
"resourceVersion": "3884",
"creationTimestamp": "2019-09-04T11:23:20Z",
"labels": {
"app": "nginx"
},
"ownerReferences": [
{
"apiVersion": "v1",
"kind": "ReplicationController",
"name": "pdb-workload",
"uid": "66676fbe-cf06-11e9-9d68-0ac9a22f5366",
"controller": true,
"blockOwnerDeletion": true
}
]
},
"spec": {
"volumes": [
{
"name": "default-token-ffdcj",
"secret": {
"secretName": "default-token-ffdcj",
"defaultMode": 420
}
}
],
"containers": [
{
"name": "work",
"image": "busybox",
"command": [
"sleep",
"10h"
],
"resources": {
"requests": {
"cpu": "50m",
"memory": "50Mi"
}
},
"volumeMounts": [
{
"name": "default-token-ffdcj",
"readOnly": true,
"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
}
],
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "Always"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"nodeSelector": {
"node-draining-test": "1da1871e-cf06-11e9-95fe-0ac9a22f5366",
"node-role.kubernetes.io/worker": ""
},
"serviceAccountName": "default",
"serviceAccount": "default",
"nodeName": "bebdfc33-fa59-4fb2-8ba9-aeb9624e3db4",
"securityContext": {},
"schedulerName": "default-scheduler",
"tolerations": [
{
"key": "kubemark",
"operator": "Exists"
},
{
"key": "node.kubernetes.io/not-ready",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
},
{
"key": "node.kubernetes.io/unreachable",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
}
],
"priority": 0
},
"status": {
"phase": "Running",
"conditions": [
{
"type": "Initialized",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:20Z"
},
{
"type": "Ready",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:26Z"
},
{
"type": "ContainersReady",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": null
},
{
"type": "PodScheduled",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:20Z"
}
],
"hostIP": "172.17.0.15",
"podIP": "10.180.138.248",
"startTime": "2019-09-04T11:23:20Z",
"containerStatuses": [
{
"name": "work",
"state": {
"running": {
"startedAt": "2019-09-04T11:23:24Z"
}
},
"lastState": {},
"ready": true,
"restartCount": 0,
"image": "busybox:latest",
"imageID": "docker://busybox:latest",
"containerID": "docker://16188ab58ccb3d61"
}
],
"qosClass": "Burstable"
}
}
I0904 11:23:30.677835 31689 utils.go:416] POD #2/20: {
"metadata": {
"name": "pdb-workload-5f54q",
"generateName": "pdb-workload-",
"namespace": "default",
"selfLink": "/api/v1/namespaces/default/pods/pdb-workload-5f54q",
"uid": "666b5e57-cf06-11e9-9d68-0ac9a22f5366",
"resourceVersion": "3862",
"creationTimestamp": "2019-09-04T11:23:20Z",
"labels": {
"app": "nginx"
},
"ownerReferences": [
{
"apiVersion": "v1",
"kind": "ReplicationController",
"name": "pdb-workload",
"uid": "66676fbe-cf06-11e9-9d68-0ac9a22f5366",
"controller": true,
"blockOwnerDeletion": true
}
]
},
"spec": {
"volumes": [
{
"name": "default-token-ffdcj",
"secret": {
"secretName": "default-token-ffdcj",
"defaultMode": 420
}
}
],
"containers": [
{
"name": "work",
"image": "busybox",
"command": [
"sleep",
"10h"
],
"resources": {
"requests": {
"cpu": "50m",
"memory": "50Mi"
}
},
"volumeMounts": [
{
"name": "default-token-ffdcj",
"readOnly": true,
"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
}
],
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "Always"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"nodeSelector": {
"node-draining-test": "1da1871e-cf06-11e9-95fe-0ac9a22f5366",
"node-role.kubernetes.io/worker": ""
},
"serviceAccountName": "default",
"serviceAccount": "default",
"nodeName": "bebdfc33-fa59-4fb2-8ba9-aeb9624e3db4",
"securityContext": {},
"schedulerName": "default-scheduler",
"tolerations": [
{
"key": "kubemark",
"operator": "Exists"
},
{
"key": "node.kubernetes.io/not-ready",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
},
{
"key": "node.kubernetes.io/unreachable",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
}
],
"priority": 0
},
"status": {
"phase": "Running",
"conditions": [
{
"type": "Initialized",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:20Z"
},
{
"type": "Ready",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:26Z"
},
{
"type": "ContainersReady",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": null
},
{
"type": "PodScheduled",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:20Z"
}
],
"hostIP": "172.17.0.15",
"podIP": "10.21.228.50",
"startTime": "2019-09-04T11:23:20Z",
"containerStatuses": [
{
"name": "work",
"state": {
"running": {
"startedAt": "2019-09-04T11:23:24Z"
}
},
"lastState": {},
"ready": true,
"restartCount": 0,
"image": "busybox:latest",
"imageID": "docker://busybox:latest",
"containerID": "docker://dd6dc66fa6768e97"
}
],
"qosClass": "Burstable"
}
}
I0904 11:23:30.677963 31689 utils.go:416] POD #3/20: {
"metadata": {
"name": "pdb-workload-5hjx4",
"generateName": "pdb-workload-",
"namespace": "default",
"selfLink": "/api/v1/namespaces/default/pods/pdb-workload-5hjx4",
"uid": "66691cea-cf06-11e9-9d68-0ac9a22f5366",
"resourceVersion": "3867",
"creationTimestamp": "2019-09-04T11:23:20Z",
"labels": {
"app": "nginx"
},
"ownerReferences": [
{
"apiVersion": "v1",
"kind": "ReplicationController",
"name": "pdb-workload",
"uid": "66676fbe-cf06-11e9-9d68-0ac9a22f5366",
"controller": true,
"blockOwnerDeletion": true
}
]
},
"spec": {
"volumes": [
{
"name": "default-token-ffdcj",
"secret": {
"secretName": "default-token-ffdcj",
"defaultMode": 420
}
}
],
"containers": [
{
"name": "work",
"image": "busybox",
"command": [
"sleep",
"10h"
],
"resources": {
"requests": {
"cpu": "50m",
"memory": "50Mi"
}
},
"volumeMounts": [
{
"name": "default-token-ffdcj",
"readOnly": true,
"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
}
],
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "Always"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"nodeSelector": {
"node-draining-test": "1da1871e-cf06-11e9-95fe-0ac9a22f5366",
"node-role.kubernetes.io/worker": ""
},
"serviceAccountName": "default",
"serviceAccount": "default",
"nodeName": "bebdfc33-fa59-4fb2-8ba9-aeb9624e3db4",
"securityContext": {},
"schedulerName": "default-scheduler",
"tolerations": [
{
"key": "kubemark",
"operator": "Exists"
},
{
"key": "node.kubernetes.io/not-ready",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
},
{
"key": "node.kubernetes.io/unreachable",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
}
],
"priority": 0
},
"status": {
"phase": "Running",
"conditions": [
{
"type": "Initialized",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:20Z"
},
{
"type": "Ready",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:26Z"
},
{
"type": "ContainersReady",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": null
},
{
"type": "PodScheduled",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:20Z"
}
],
"hostIP": "172.17.0.15",
"podIP": "10.49.69.160",
"startTime": "2019-09-04T11:23:20Z",
"containerStatuses": [
{
"name": "work",
"state": {
"running": {
"startedAt": "2019-09-04T11:23:23Z"
}
},
"lastState": {},
"ready": true,
"restartCount": 0,
"image": "busybox:latest",
"imageID": "docker://busybox:latest",
"containerID": "docker://c89d391cdf8a2e5a"
}
],
"qosClass": "Burstable"
}
}
I0904 11:23:30.678130 31689 utils.go:416] POD #4/20: {
"metadata": {
"name": "pdb-workload-86g26",
"generateName": "pdb-workload-",
"namespace": "default",
"selfLink": "/api/v1/namespaces/default/pods/pdb-workload-86g26",
"uid": "666dc0e0-cf06-11e9-9d68-0ac9a22f5366",
"resourceVersion": "3904",
"creationTimestamp": "2019-09-04T11:23:20Z",
"labels": {
"app": "nginx"
},
"ownerReferences": [
{
"apiVersion": "v1",
"kind": "ReplicationController",
"name": "pdb-workload",
"uid": "66676fbe-cf06-11e9-9d68-0ac9a22f5366",
"controller": true,
"blockOwnerDeletion": true
}
]
},
"spec": {
"volumes": [
{
"name": "default-token-ffdcj",
"secret": {
"secretName": "default-token-ffdcj",
"defaultMode": 420
}
}
],
"containers": [
{
"name": "work",
"image": "busybox",
"command": [
"sleep",
"10h"
],
"resources": {
"requests": {
"cpu": "50m",
"memory": "50Mi"
}
},
"volumeMounts": [
{
"name": "default-token-ffdcj",
"readOnly": true,
"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
}
],
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "Always"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"nodeSelector": {
"node-draining-test": "1da1871e-cf06-11e9-95fe-0ac9a22f5366",
"node-role.kubernetes.io/worker": ""
},
"serviceAccountName": "default",
"serviceAccount": "default",
"nodeName": "cae9ea51-ebbf-486e-a863-4da5f73358d7",
"securityContext": {},
"schedulerName": "default-scheduler",
"tolerations": [
{
"key": "kubemark",
"operator": "Exists"
},
{
"key": "node.kubernetes.io/not-ready",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
},
{
"key": "node.kubernetes.io/unreachable",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
}
],
"priority": 0
},
"status": {
"phase": "Running",
"conditions": [
{
"type": "Initialized",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:20Z"
},
{
"type": "Ready",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:27Z"
},
{
"type": "ContainersReady",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": null
},
{
"type": "PodScheduled",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:20Z"
}
],
"hostIP": "172.17.0.23",
"podIP": "10.152.63.108",
"startTime": "2019-09-04T11:23:20Z",
"containerStatuses": [
{
"name": "work",
"state": {
"running": {
"startedAt": "2019-09-04T11:23:25Z"
}
},
"lastState": {},
"ready": true,
"restartCount": 0,
"image": "busybox:latest",
"imageID": "docker://busybox:latest",
"containerID": "docker://a797ebfbc27fc293"
}
],
"qosClass": "Burstable"
}
}
I0904 11:23:30.678296 31689 utils.go:416] POD #5/20: {
"metadata": {
"name": "pdb-workload-8lrnl",
"generateName": "pdb-workload-",
"namespace": "default",
"selfLink": "/api/v1/namespaces/default/pods/pdb-workload-8lrnl",
"uid": "666da5ad-cf06-11e9-9d68-0ac9a22f5366",
"resourceVersion": "3935",
"creationTimestamp": "2019-09-04T11:23:20Z",
"labels": {
"app": "nginx"
},
"ownerReferences": [
{
"apiVersion": "v1",
"kind": "ReplicationController",
"name": "pdb-workload",
"uid": "66676fbe-cf06-11e9-9d68-0ac9a22f5366",
"controller": true,
"blockOwnerDeletion": true
}
]
},
"spec": {
"volumes": [
{
"name": "default-token-ffdcj",
"secret": {
"secretName": "default-token-ffdcj",
"defaultMode": 420
}
}
],
"containers": [
{
"name": "work",
"image": "busybox",
"command": [
"sleep",
"10h"
],
"resources": {
"requests": {
"cpu": "50m",
"memory": "50Mi"
}
},
"volumeMounts": [
{
"name": "default-token-ffdcj",
"readOnly": true,
"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
}
],
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "Always"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"nodeSelector": {
"node-draining-test": "1da1871e-cf06-11e9-95fe-0ac9a22f5366",
"node-role.kubernetes.io/worker": ""
},
"serviceAccountName": "default",
"serviceAccount": "default",
"nodeName": "bebdfc33-fa59-4fb2-8ba9-aeb9624e3db4",
"securityContext": {},
"schedulerName": "default-scheduler",
"tolerations": [
{
"key": "kubemark",
"operator": "Exists"
},
{
"key": "node.kubernetes.io/not-ready",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
},
{
"key": "node.kubernetes.io/unreachable",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
}
],
"priority": 0
},
"status": {
"phase": "Running",
"conditions": [
{
"type": "Initialized",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:20Z"
},
{
"type": "Ready",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:27Z"
},
{
"type": "ContainersReady",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": null
},
{
"type": "PodScheduled",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:20Z"
}
],
"hostIP": "172.17.0.15",
"podIP": "10.36.74.157",
"startTime": "2019-09-04T11:23:20Z",
"containerStatuses": [
{
"name": "work",
"state": {
"running": {
"startedAt": "2019-09-04T11:23:26Z"
}
},
"lastState": {},
"ready": true,
"restartCount": 0,
"image": "busybox:latest",
"imageID": "docker://busybox:latest",
"containerID": "docker://ddaff08aa04da5df"
}
],
"qosClass": "Burstable"
}
}
I0904 11:23:30.678453 31689 utils.go:416] POD #6/20: {
"metadata": {
"name": "pdb-workload-b8g8h",
"generateName": "pdb-workload-",
"namespace": "default",
"selfLink": "/api/v1/namespaces/default/pods/pdb-workload-b8g8h",
"uid": "666db8b7-cf06-11e9-9d68-0ac9a22f5366",
"resourceVersion": "3876",
"creationTimestamp": "2019-09-04T11:23:20Z",
"labels": {
"app": "nginx"
},
"ownerReferences": [
{
"apiVersion": "v1",
"kind": "ReplicationController",
"name": "pdb-workload",
"uid": "66676fbe-cf06-11e9-9d68-0ac9a22f5366",
"controller": true,
"blockOwnerDeletion": true
}
]
},
"spec": {
"volumes": [
{
"name": "default-token-ffdcj",
"secret": {
"secretName": "default-token-ffdcj",
"defaultMode": 420
}
}
],
"containers": [
{
"name": "work",
"image": "busybox",
"command": [
"sleep",
"10h"
],
"resources": {
"requests": {
"cpu": "50m",
"memory": "50Mi"
}
},
"volumeMounts": [
{
"name": "default-token-ffdcj",
"readOnly": true,
"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
}
],
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "Always"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"nodeSelector": {
"node-draining-test": "1da1871e-cf06-11e9-95fe-0ac9a22f5366",
"node-role.kubernetes.io/worker": ""
},
"serviceAccountName": "default",
"serviceAccount": "default",
"nodeName": "bebdfc33-fa59-4fb2-8ba9-aeb9624e3db4",
"securityContext": {},
"schedulerName": "default-scheduler",
"tolerations": [
{
"key": "kubemark",
"operator": "Exists"
},
{
"key": "node.kubernetes.io/not-ready",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
},
{
"key": "node.kubernetes.io/unreachable",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
}
],
"priority": 0
},
"status": {
"phase": "Running",
"conditions": [
{
"type": "Initialized",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:20Z"
},
{
"type": "Ready",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:26Z"
},
{
"type": "ContainersReady",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": null
},
{
"type": "PodScheduled",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:20Z"
}
],
"hostIP": "172.17.0.15",
"podIP": "10.109.5.97",
"startTime": "2019-09-04T11:23:20Z",
"containerStatuses": [
{
"name": "work",
"state": {
"running": {
"startedAt": "2019-09-04T11:23:25Z"
}
},
"lastState": {},
"ready": true,
"restartCount": 0,
"image": "busybox:latest",
"imageID": "docker://busybox:latest",
"containerID": "docker://d049e49c28167cd0"
}
],
"qosClass": "Burstable"
}
}
I0904 11:23:30.678600 31689 utils.go:416] POD #7/20: {
"metadata": {
"name": "pdb-workload-bpp8w",
"generateName": "pdb-workload-",
"namespace": "default",
"selfLink": "/api/v1/namespaces/default/pods/pdb-workload-bpp8w",
"uid": "666b2185-cf06-11e9-9d68-0ac9a22f5366",
"resourceVersion": "3842",
"creationTimestamp": "2019-09-04T11:23:20Z",
"labels": {
"app": "nginx"
},
"ownerReferences": [
{
"apiVersion": "v1",
"kind": "ReplicationController",
"name": "pdb-workload",
"uid": "66676fbe-cf06-11e9-9d68-0ac9a22f5366",
"controller": true,
"blockOwnerDeletion": true
}
]
},
"spec": {
"volumes": [
{
"name": "default-token-ffdcj",
"secret": {
"secretName": "default-token-ffdcj",
"defaultMode": 420
}
}
],
"containers": [
{
"name": "work",
"image": "busybox",
"command": [
"sleep",
"10h"
],
"resources": {
"requests": {
"cpu": "50m",
"memory": "50Mi"
}
},
"volumeMounts": [
{
"name": "default-token-ffdcj",
"readOnly": true,
"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
}
],
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "Always"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"nodeSelector": {
"node-draining-test": "1da1871e-cf06-11e9-95fe-0ac9a22f5366",
"node-role.kubernetes.io/worker": ""
},
"serviceAccountName": "default",
"serviceAccount": "default",
"nodeName": "cae9ea51-ebbf-486e-a863-4da5f73358d7",
"securityContext": {},
"schedulerName": "default-scheduler",
"tolerations": [
{
"key": "kubemark",
"operator": "Exists"
},
{
"key": "node.kubernetes.io/not-ready",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
},
{
"key": "node.kubernetes.io/unreachable",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
}
],
"priority": 0
},
"status": {
"phase": "Running",
"conditions": [
{
"type": "Initialized",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:20Z"
},
{
"type": "Ready",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:25Z"
},
{
"type": "ContainersReady",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": null
},
{
"type": "PodScheduled",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:20Z"
}
],
"hostIP": "172.17.0.23",
"podIP": "10.184.204.15",
"startTime": "2019-09-04T11:23:20Z",
"containerStatuses": [
{
"name": "work",
"state": {
"running": {
"startedAt": "2019-09-04T11:23:24Z"
}
},
"lastState": {},
"ready": true,
"restartCount": 0,
"image": "busybox:latest",
"imageID": "docker://busybox:latest",
"containerID": "docker://8d78eeb9878a3d69"
}
],
"qosClass": "Burstable"
}
}
I0904 11:23:30.678758 31689 utils.go:416] POD #8/20: {
"metadata": {
"name": "pdb-workload-dd4l5",
"generateName": "pdb-workload-",
"namespace": "default",
"selfLink": "/api/v1/namespaces/default/pods/pdb-workload-dd4l5",
"uid": "666a0cea-cf06-11e9-9d68-0ac9a22f5366",
"resourceVersion": "3915",
"creationTimestamp": "2019-09-04T11:23:20Z",
"labels": {
"app": "nginx"
},
"ownerReferences": [
{
"apiVersion": "v1",
"kind": "ReplicationController",
"name": "pdb-workload",
"uid": "66676fbe-cf06-11e9-9d68-0ac9a22f5366",
"controller": true,
"blockOwnerDeletion": true
}
]
},
"spec": {
"volumes": [
{
"name": "default-token-ffdcj",
"secret": {
"secretName": "default-token-ffdcj",
"defaultMode": 420
}
}
],
"containers": [
{
"name": "work",
"image": "busybox",
"command": [
"sleep",
"10h"
],
"resources": {
"requests": {
"cpu": "50m",
"memory": "50Mi"
}
},
"volumeMounts": [
{
"name": "default-token-ffdcj",
"readOnly": true,
"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
}
],
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "Always"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"nodeSelector": {
"node-draining-test": "1da1871e-cf06-11e9-95fe-0ac9a22f5366",
"node-role.kubernetes.io/worker": ""
},
"serviceAccountName": "default",
"serviceAccount": "default",
"nodeName": "cae9ea51-ebbf-486e-a863-4da5f73358d7",
"securityContext": {},
"schedulerName": "default-scheduler",
"tolerations": [
{
"key": "kubemark",
"operator": "Exists"
},
{
"key": "node.kubernetes.io/not-ready",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
},
{
"key": "node.kubernetes.io/unreachable",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
}
],
"priority": 0
},
"status": {
"phase": "Running",
"conditions": [
{
"type": "Initialized",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:20Z"
},
{
"type": "Ready",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:27Z"
},
{
"type": "ContainersReady",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": null
},
{
"type": "PodScheduled",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:20Z"
}
],
"hostIP": "172.17.0.23",
"podIP": "10.129.247.243",
"startTime": "2019-09-04T11:23:20Z",
"containerStatuses": [
{
"name": "work",
"state": {
"running": {
"startedAt": "2019-09-04T11:23:24Z"
}
},
"lastState": {},
"ready": true,
"restartCount": 0,
"image": "busybox:latest",
"imageID": "docker://busybox:latest",
"containerID": "docker://cbfbb96d5dfbfcf5"
}
],
"qosClass": "Burstable"
}
}
I0904 11:23:30.678900 31689 utils.go:416] POD #9/20: {
"metadata": {
"name": "pdb-workload-ddklf",
"generateName": "pdb-workload-",
"namespace": "default",
"selfLink": "/api/v1/namespaces/default/pods/pdb-workload-ddklf",
"uid": "66704ea4-cf06-11e9-9d68-0ac9a22f5366",
"resourceVersion": "3901",
"creationTimestamp": "2019-09-04T11:23:20Z",
"labels": {
"app": "nginx"
},
"ownerReferences": [
{
"apiVersion": "v1",
"kind": "ReplicationController",
"name": "pdb-workload",
"uid": "66676fbe-cf06-11e9-9d68-0ac9a22f5366",
"controller": true,
"blockOwnerDeletion": true
}
]
},
"spec": {
"volumes": [
{
"name": "default-token-ffdcj",
"secret": {
"secretName": "default-token-ffdcj",
"defaultMode": 420
}
}
],
"containers": [
{
"name": "work",
"image": "busybox",
"command": [
"sleep",
"10h"
],
"resources": {
"requests": {
"cpu": "50m",
"memory": "50Mi"
}
},
"volumeMounts": [
{
"name": "default-token-ffdcj",
"readOnly": true,
"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
}
],
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "Always"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"nodeSelector": {
"node-draining-test": "1da1871e-cf06-11e9-95fe-0ac9a22f5366",
"node-role.kubernetes.io/worker": ""
},
"serviceAccountName": "default",
"serviceAccount": "default",
"nodeName": "cae9ea51-ebbf-486e-a863-4da5f73358d7",
"securityContext": {},
"schedulerName": "default-scheduler",
"tolerations": [
{
"key": "kubemark",
"operator": "Exists"
},
{
"key": "node.kubernetes.io/not-ready",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
},
{
"key": "node.kubernetes.io/unreachable",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
}
],
"priority": 0
},
"status": {
"phase": "Running",
"conditions": [
{
"type": "Initialized",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:20Z"
},
{
"type": "Ready",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:27Z"
},
{
"type": "ContainersReady",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": null
},
{
"type": "PodScheduled",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:20Z"
}
],
"hostIP": "172.17.0.23",
"podIP": "10.160.50.61",
"startTime": "2019-09-04T11:23:20Z",
"containerStatuses": [
{
"name": "work",
"state": {
"running": {
"startedAt": "2019-09-04T11:23:26Z"
}
},
"lastState": {},
"ready": true,
"restartCount": 0,
"image": "busybox:latest",
"imageID": "docker://busybox:latest",
"containerID": "docker://d325d4677e5748da"
}
],
"qosClass": "Burstable"
}
}
I0904 11:23:30.679066 31689 utils.go:416] POD #10/20: {
"metadata": {
"name": "pdb-workload-ddnwb",
"generateName": "pdb-workload-",
"namespace": "default",
"selfLink": "/api/v1/namespaces/default/pods/pdb-workload-ddnwb",
"uid": "666d4106-cf06-11e9-9d68-0ac9a22f5366",
"resourceVersion": "3912",
"creationTimestamp": "2019-09-04T11:23:20Z",
"labels": {
"app": "nginx"
},
"ownerReferences": [
{
"apiVersion": "v1",
"kind": "ReplicationController",
"name": "pdb-workload",
"uid": "66676fbe-cf06-11e9-9d68-0ac9a22f5366",
"controller": true,
"blockOwnerDeletion": true
}
]
},
"spec": {
"volumes": [
{
"name": "default-token-ffdcj",
"secret": {
"secretName": "default-token-ffdcj",
"defaultMode": 420
}
}
],
"containers": [
{
"name": "work",
"image": "busybox",
"command": [
"sleep",
"10h"
],
"resources": {
"requests": {
"cpu": "50m",
"memory": "50Mi"
}
},
"volumeMounts": [
{
"name": "default-token-ffdcj",
"readOnly": true,
"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
}
],
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "Always"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"nodeSelector": {
"node-draining-test": "1da1871e-cf06-11e9-95fe-0ac9a22f5366",
"node-role.kubernetes.io/worker": ""
},
"serviceAccountName": "default",
"serviceAccount": "default",
"nodeName": "cae9ea51-ebbf-486e-a863-4da5f73358d7",
"securityContext": {},
"schedulerName": "default-scheduler",
"tolerations": [
{
"key": "kubemark",
"operator": "Exists"
},
{
"key": "node.kubernetes.io/not-ready",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
},
{
"key": "node.kubernetes.io/unreachable",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
}
],
"priority": 0
},
"status": {
"phase": "Running",
"conditions": [
{
"type": "Initialized",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:20Z"
},
{
"type": "Ready",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:27Z"
},
{
"type": "ContainersReady",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": null
},
{
"type": "PodScheduled",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:20Z"
}
],
"hostIP": "172.17.0.23",
"podIP": "10.221.138.233",
"startTime": "2019-09-04T11:23:20Z",
"containerStatuses": [
{
"name": "work",
"state": {
"running": {
"startedAt": "2019-09-04T11:23:25Z"
}
},
"lastState": {},
"ready": true,
"restartCount": 0,
"image": "busybox:latest",
"imageID": "docker://busybox:latest",
"containerID": "docker://3a5f0a545c728810"
}
],
"qosClass": "Burstable"
}
}
I0904 11:23:30.679214 31689 utils.go:416] POD #11/20: {
"metadata": {
"name": "pdb-workload-dhrnf",
"generateName": "pdb-workload-",
"namespace": "default",
"selfLink": "/api/v1/namespaces/default/pods/pdb-workload-dhrnf",
"uid": "6670280e-cf06-11e9-9d68-0ac9a22f5366",
"resourceVersion": "3945",
"creationTimestamp": "2019-09-04T11:23:20Z",
"labels": {
"app": "nginx"
},
"ownerReferences": [
{
"apiVersion": "v1",
"kind": "ReplicationController",
"name": "pdb-workload",
"uid": "66676fbe-cf06-11e9-9d68-0ac9a22f5366",
"controller": true,
"blockOwnerDeletion": true
}
]
},
"spec": {
"volumes": [
{
"name": "default-token-ffdcj",
"secret": {
"secretName": "default-token-ffdcj",
"defaultMode": 420
}
}
],
"containers": [
{
"name": "work",
"image": "busybox",
"command": [
"sleep",
"10h"
],
"resources": {
"requests": {
"cpu": "50m",
"memory": "50Mi"
}
},
"volumeMounts": [
{
"name": "default-token-ffdcj",
"readOnly": true,
"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
}
],
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "Always"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"nodeSelector": {
"node-draining-test": "1da1871e-cf06-11e9-95fe-0ac9a22f5366",
"node-role.kubernetes.io/worker": ""
},
"serviceAccountName": "default",
"serviceAccount": "default",
"nodeName": "cae9ea51-ebbf-486e-a863-4da5f73358d7",
"securityContext": {},
"schedulerName": "default-scheduler",
"tolerations": [
{
"key": "kubemark",
"operator": "Exists"
},
{
"key": "node.kubernetes.io/not-ready",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
},
{
"key": "node.kubernetes.io/unreachable",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
}
],
"priority": 0
},
"status": {
"phase": "Running",
"conditions": [
{
"type": "Initialized",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:20Z"
},
{
"type": "Ready",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:28Z"
},
{
"type": "ContainersReady",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": null
},
{
"type": "PodScheduled",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:20Z"
}
],
"hostIP": "172.17.0.23",
"podIP": "10.147.143.34",
"startTime": "2019-09-04T11:23:20Z",
"containerStatuses": [
{
"name": "work",
"state": {
"running": {
"startedAt": "2019-09-04T11:23:26Z"
}
},
"lastState": {},
"ready": true,
"restartCount": 0,
"image": "busybox:latest",
"imageID": "docker://busybox:latest",
"containerID": "docker://8c84c4854d59a4f2"
}
],
"qosClass": "Burstable"
}
}
I0904 11:23:30.679344 31689 utils.go:416] POD #12/20: {
"metadata": {
"name": "pdb-workload-f2tr2",
"generateName": "pdb-workload-",
"namespace": "default",
"selfLink": "/api/v1/namespaces/default/pods/pdb-workload-f2tr2",
"uid": "66703d4d-cf06-11e9-9d68-0ac9a22f5366",
"resourceVersion": "3942",
"creationTimestamp": "2019-09-04T11:23:20Z",
"labels": {
"app": "nginx"
},
"ownerReferences": [
{
"apiVersion": "v1",
"kind": "ReplicationController",
"name": "pdb-workload",
"uid": "66676fbe-cf06-11e9-9d68-0ac9a22f5366",
"controller": true,
"blockOwnerDeletion": true
}
]
},
"spec": {
"volumes": [
{
"name": "default-token-ffdcj",
"secret": {
"secretName": "default-token-ffdcj",
"defaultMode": 420
}
}
],
"containers": [
{
"name": "work",
"image": "busybox",
"command": [
"sleep",
"10h"
],
"resources": {
"requests": {
"cpu": "50m",
"memory": "50Mi"
}
},
"volumeMounts": [
{
"name": "default-token-ffdcj",
"readOnly": true,
"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
}
],
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "Always"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"nodeSelector": {
"node-draining-test": "1da1871e-cf06-11e9-95fe-0ac9a22f5366",
"node-role.kubernetes.io/worker": ""
},
"serviceAccountName": "default",
"serviceAccount": "default",
"nodeName": "cae9ea51-ebbf-486e-a863-4da5f73358d7",
"securityContext": {},
"schedulerName": "default-scheduler",
"tolerations": [
{
"key": "kubemark",
"operator": "Exists"
},
{
"key": "node.kubernetes.io/not-ready",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
},
{
"key": "node.kubernetes.io/unreachable",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
}
],
"priority": 0
},
"status": {
"phase": "Running",
"conditions": [
{
"type": "Initialized",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:20Z"
},
{
"type": "Ready",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:28Z"
},
{
"type": "ContainersReady",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": null
},
{
"type": "PodScheduled",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:20Z"
}
],
"hostIP": "172.17.0.23",
"podIP": "10.119.0.192",
"startTime": "2019-09-04T11:23:20Z",
"containerStatuses": [
{
"name": "work",
"state": {
"running": {
"startedAt": "2019-09-04T11:23:26Z"
}
},
"lastState": {},
"ready": true,
"restartCount": 0,
"image": "busybox:latest",
"imageID": "docker://busybox:latest",
"containerID": "docker://ee7b4f2c7a83f40e"
}
],
"qosClass": "Burstable"
}
}
I0904 11:23:30.679469 31689 utils.go:416] POD #13/20: {
"metadata": {
"name": "pdb-workload-f7bz4",
"generateName": "pdb-workload-",
"namespace": "default",
"selfLink": "/api/v1/namespaces/default/pods/pdb-workload-f7bz4",
"uid": "666ff6dd-cf06-11e9-9d68-0ac9a22f5366",
"resourceVersion": "3878",
"creationTimestamp": "2019-09-04T11:23:20Z",
"labels": {
"app": "nginx"
},
"ownerReferences": [
{
"apiVersion": "v1",
"kind": "ReplicationController",
"name": "pdb-workload",
"uid": "66676fbe-cf06-11e9-9d68-0ac9a22f5366",
"controller": true,
"blockOwnerDeletion": true
}
]
},
"spec": {
"volumes": [
{
"name": "default-token-ffdcj",
"secret": {
"secretName": "default-token-ffdcj",
"defaultMode": 420
}
}
],
"containers": [
{
"name": "work",
"image": "busybox",
"command": [
"sleep",
"10h"
],
"resources": {
"requests": {
"cpu": "50m",
"memory": "50Mi"
}
},
"volumeMounts": [
{
"name": "default-token-ffdcj",
"readOnly": true,
"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
}
],
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "Always"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"nodeSelector": {
"node-draining-test": "1da1871e-cf06-11e9-95fe-0ac9a22f5366",
"node-role.kubernetes.io/worker": ""
},
"serviceAccountName": "default",
"serviceAccount": "default",
"nodeName": "bebdfc33-fa59-4fb2-8ba9-aeb9624e3db4",
"securityContext": {},
"schedulerName": "default-scheduler",
"tolerations": [
{
"key": "kubemark",
"operator": "Exists"
},
{
"key": "node.kubernetes.io/not-ready",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
},
{
"key": "node.kubernetes.io/unreachable",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
}
],
"priority": 0
},
"status": {
"phase": "Running",
"conditions": [
{
"type": "Initialized",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:20Z"
},
{
"type": "Ready",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:26Z"
},
{
"type": "ContainersReady",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": null
},
{
"type": "PodScheduled",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:20Z"
}
],
"hostIP": "172.17.0.15",
"podIP": "10.117.4.9",
"startTime": "2019-09-04T11:23:20Z",
"containerStatuses": [
{
"name": "work",
"state": {
"running": {
"startedAt": "2019-09-04T11:23:26Z"
}
},
"lastState": {},
"ready": true,
"restartCount": 0,
"image": "busybox:latest",
"imageID": "docker://busybox:latest",
"containerID": "docker://7b65b80630451f0d"
}
],
"qosClass": "Burstable"
}
}
I0904 11:23:30.679593 31689 utils.go:416] POD #14/20: {
"metadata": {
"name": "pdb-workload-jxgnk",
"generateName": "pdb-workload-",
"namespace": "default",
"selfLink": "/api/v1/namespaces/default/pods/pdb-workload-jxgnk",
"uid": "666a0ca1-cf06-11e9-9d68-0ac9a22f5366",
"resourceVersion": "3887",
"creationTimestamp": "2019-09-04T11:23:20Z",
"labels": {
"app": "nginx"
},
"ownerReferences": [
{
"apiVersion": "v1",
"kind": "ReplicationController",
"name": "pdb-workload",
"uid": "66676fbe-cf06-11e9-9d68-0ac9a22f5366",
"controller": true,
"blockOwnerDeletion": true
}
]
},
"spec": {
"volumes": [
{
"name": "default-token-ffdcj",
"secret": {
"secretName": "default-token-ffdcj",
"defaultMode": 420
}
}
],
"containers": [
{
"name": "work",
"image": "busybox",
"command": [
"sleep",
"10h"
],
"resources": {
"requests": {
"cpu": "50m",
"memory": "50Mi"
}
},
"volumeMounts": [
{
"name": "default-token-ffdcj",
"readOnly": true,
"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
}
],
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "Always"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"nodeSelector": {
"node-draining-test": "1da1871e-cf06-11e9-95fe-0ac9a22f5366",
"node-role.kubernetes.io/worker": ""
},
"serviceAccountName": "default",
"serviceAccount": "default",
"nodeName": "bebdfc33-fa59-4fb2-8ba9-aeb9624e3db4",
"securityContext": {},
"schedulerName": "default-scheduler",
"tolerations": [
{
"key": "kubemark",
"operator": "Exists"
},
{
"key": "node.kubernetes.io/not-ready",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
},
{
"key": "node.kubernetes.io/unreachable",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
}
],
"priority": 0
},
"status": {
"phase": "Running",
"conditions": [
{
"type": "Initialized",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:20Z"
},
{
"type": "Ready",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:26Z"
},
{
"type": "ContainersReady",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": null
},
{
"type": "PodScheduled",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:20Z"
}
],
"hostIP": "172.17.0.15",
"podIP": "10.49.175.248",
"startTime": "2019-09-04T11:23:20Z",
"containerStatuses": [
{
"name": "work",
"state": {
"running": {
"startedAt": "2019-09-04T11:23:24Z"
}
},
"lastState": {},
"ready": true,
"restartCount": 0,
"image": "busybox:latest",
"imageID": "docker://busybox:latest",
"containerID": "docker://96c603a0a87fac0c"
}
],
"qosClass": "Burstable"
}
}
I0904 11:23:30.679713 31689 utils.go:416] POD #15/20: {
"metadata": {
"name": "pdb-workload-rdwbj",
"generateName": "pdb-workload-",
"namespace": "default",
"selfLink": "/api/v1/namespaces/default/pods/pdb-workload-rdwbj",
"uid": "666d8ce4-cf06-11e9-9d68-0ac9a22f5366",
"resourceVersion": "3907",
"creationTimestamp": "2019-09-04T11:23:20Z",
"labels": {
"app": "nginx"
},
"ownerReferences": [
{
"apiVersion": "v1",
"kind": "ReplicationController",
"name": "pdb-workload",
"uid": "66676fbe-cf06-11e9-9d68-0ac9a22f5366",
"controller": true,
"blockOwnerDeletion": true
}
]
},
"spec": {
"volumes": [
{
"name": "default-token-ffdcj",
"secret": {
"secretName": "default-token-ffdcj",
"defaultMode": 420
}
}
],
"containers": [
{
"name": "work",
"image": "busybox",
"command": [
"sleep",
"10h"
],
"resources": {
"requests": {
"cpu": "50m",
"memory": "50Mi"
}
},
"volumeMounts": [
{
"name": "default-token-ffdcj",
"readOnly": true,
"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
}
],
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "Always"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"nodeSelector": {
"node-draining-test": "1da1871e-cf06-11e9-95fe-0ac9a22f5366",
"node-role.kubernetes.io/worker": ""
},
"serviceAccountName": "default",
"serviceAccount": "default",
"nodeName": "cae9ea51-ebbf-486e-a863-4da5f73358d7",
"securityContext": {},
"schedulerName": "default-scheduler",
"tolerations": [
{
"key": "kubemark",
"operator": "Exists"
},
{
"key": "node.kubernetes.io/not-ready",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
},
{
"key": "node.kubernetes.io/unreachable",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
}
],
"priority": 0
},
"status": {
"phase": "Running",
"conditions": [
{
"type": "Initialized",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:20Z"
},
{
"type": "Ready",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:27Z"
},
{
"type": "ContainersReady",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": null
},
{
"type": "PodScheduled",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:20Z"
}
],
"hostIP": "172.17.0.23",
"podIP": "10.207.3.207",
"startTime": "2019-09-04T11:23:20Z",
"containerStatuses": [
{
"name": "work",
"state": {
"running": {
"startedAt": "2019-09-04T11:23:26Z"
}
},
"lastState": {},
"ready": true,
"restartCount": 0,
"image": "busybox:latest",
"imageID": "docker://busybox:latest",
"containerID": "docker://8f334d3bd706bdc5"
}
],
"qosClass": "Burstable"
}
}
I0904 11:23:30.679856 31689 utils.go:416] POD #16/20: {
"metadata": {
"name": "pdb-workload-ssc4q",
"generateName": "pdb-workload-",
"namespace": "default",
"selfLink": "/api/v1/namespaces/default/pods/pdb-workload-ssc4q",
"uid": "666dc085-cf06-11e9-9d68-0ac9a22f5366",
"resourceVersion": "3918",
"creationTimestamp": "2019-09-04T11:23:20Z",
"labels": {
"app": "nginx"
},
"ownerReferences": [
{
"apiVersion": "v1",
"kind": "ReplicationController",
"name": "pdb-workload",
"uid": "66676fbe-cf06-11e9-9d68-0ac9a22f5366",
"controller": true,
"blockOwnerDeletion": true
}
]
},
"spec": {
"volumes": [
{
"name": "default-token-ffdcj",
"secret": {
"secretName": "default-token-ffdcj",
"defaultMode": 420
}
}
],
"containers": [
{
"name": "work",
"image": "busybox",
"command": [
"sleep",
"10h"
],
"resources": {
"requests": {
"cpu": "50m",
"memory": "50Mi"
}
},
"volumeMounts": [
{
"name": "default-token-ffdcj",
"readOnly": true,
"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
}
],
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "Always"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"nodeSelector": {
"node-draining-test": "1da1871e-cf06-11e9-95fe-0ac9a22f5366",
"node-role.kubernetes.io/worker": ""
},
"serviceAccountName": "default",
"serviceAccount": "default",
"nodeName": "cae9ea51-ebbf-486e-a863-4da5f73358d7",
"securityContext": {},
"schedulerName": "default-scheduler",
"tolerations": [
{
"key": "kubemark",
"operator": "Exists"
},
{
"key": "node.kubernetes.io/not-ready",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
},
{
"key": "node.kubernetes.io/unreachable",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
}
],
"priority": 0
},
"status": {
"phase": "Running",
"conditions": [
{
"type": "Initialized",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:20Z"
},
{
"type": "Ready",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:27Z"
},
{
"type": "ContainersReady",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": null
},
{
"type": "PodScheduled",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:20Z"
}
],
"hostIP": "172.17.0.23",
"podIP": "10.147.245.166",
"startTime": "2019-09-04T11:23:20Z",
"containerStatuses": [
{
"name": "work",
"state": {
"running": {
"startedAt": "2019-09-04T11:23:25Z"
}
},
"lastState": {},
"ready": true,
"restartCount": 0,
"image": "busybox:latest",
"imageID": "docker://busybox:latest",
"containerID": "docker://517068b0ab704408"
}
],
"qosClass": "Burstable"
}
}
I0904 11:23:30.680017 31689 utils.go:416] POD #17/20: {
"metadata": {
"name": "pdb-workload-t9wqs",
"generateName": "pdb-workload-",
"namespace": "default",
"selfLink": "/api/v1/namespaces/default/pods/pdb-workload-t9wqs",
"uid": "666b1170-cf06-11e9-9d68-0ac9a22f5366",
"resourceVersion": "3898",
"creationTimestamp": "2019-09-04T11:23:20Z",
"labels": {
"app": "nginx"
},
"ownerReferences": [
{
"apiVersion": "v1",
"kind": "ReplicationController",
"name": "pdb-workload",
"uid": "66676fbe-cf06-11e9-9d68-0ac9a22f5366",
"controller": true,
"blockOwnerDeletion": true
}
]
},
"spec": {
"volumes": [
{
"name": "default-token-ffdcj",
"secret": {
"secretName": "default-token-ffdcj",
"defaultMode": 420
}
}
],
"containers": [
{
"name": "work",
"image": "busybox",
"command": [
"sleep",
"10h"
],
"resources": {
"requests": {
"cpu": "50m",
"memory": "50Mi"
}
},
"volumeMounts": [
{
"name": "default-token-ffdcj",
"readOnly": true,
"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
}
],
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "Always"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"nodeSelector": {
"node-draining-test": "1da1871e-cf06-11e9-95fe-0ac9a22f5366",
"node-role.kubernetes.io/worker": ""
},
"serviceAccountName": "default",
"serviceAccount": "default",
"nodeName": "cae9ea51-ebbf-486e-a863-4da5f73358d7",
"securityContext": {},
"schedulerName": "default-scheduler",
"tolerations": [
{
"key": "kubemark",
"operator": "Exists"
},
{
"key": "node.kubernetes.io/not-ready",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
},
{
"key": "node.kubernetes.io/unreachable",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
}
],
"priority": 0
},
"status": {
"phase": "Running",
"conditions": [
{
"type": "Initialized",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:20Z"
},
{
"type": "Ready",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:27Z"
},
{
"type": "ContainersReady",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": null
},
{
"type": "PodScheduled",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:20Z"
}
],
"hostIP": "172.17.0.23",
"podIP": "10.204.209.11",
"startTime": "2019-09-04T11:23:20Z",
"containerStatuses": [
{
"name": "work",
"state": {
"running": {
"startedAt": "2019-09-04T11:23:24Z"
}
},
"lastState": {},
"ready": true,
"restartCount": 0,
"image": "busybox:latest",
"imageID": "docker://busybox:latest",
"containerID": "docker://1e12391546c927c3"
}
],
"qosClass": "Burstable"
}
}
I0904 11:23:30.680194 31689 utils.go:416] POD #18/20: {
"metadata": {
"name": "pdb-workload-wz8rf",
"generateName": "pdb-workload-",
"namespace": "default",
"selfLink": "/api/v1/namespaces/default/pods/pdb-workload-wz8rf",
"uid": "667060ae-cf06-11e9-9d68-0ac9a22f5366",
"resourceVersion": "3870",
"creationTimestamp": "2019-09-04T11:23:20Z",
"labels": {
"app": "nginx"
},
"ownerReferences": [
{
"apiVersion": "v1",
"kind": "ReplicationController",
"name": "pdb-workload",
"uid": "66676fbe-cf06-11e9-9d68-0ac9a22f5366",
"controller": true,
"blockOwnerDeletion": true
}
]
},
"spec": {
"volumes": [
{
"name": "default-token-ffdcj",
"secret": {
"secretName": "default-token-ffdcj",
"defaultMode": 420
}
}
],
"containers": [
{
"name": "work",
"image": "busybox",
"command": [
"sleep",
"10h"
],
"resources": {
"requests": {
"cpu": "50m",
"memory": "50Mi"
}
},
"volumeMounts": [
{
"name": "default-token-ffdcj",
"readOnly": true,
"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
}
],
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "Always"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"nodeSelector": {
"node-draining-test": "1da1871e-cf06-11e9-95fe-0ac9a22f5366",
"node-role.kubernetes.io/worker": ""
},
"serviceAccountName": "default",
"serviceAccount": "default",
"nodeName": "bebdfc33-fa59-4fb2-8ba9-aeb9624e3db4",
"securityContext": {},
"schedulerName": "default-scheduler",
"tolerations": [
{
"key": "kubemark",
"operator": "Exists"
},
{
"key": "node.kubernetes.io/not-ready",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
},
{
"key": "node.kubernetes.io/unreachable",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
}
],
"priority": 0
},
"status": {
"phase": "Running",
"conditions": [
{
"type": "Initialized",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:20Z"
},
{
"type": "Ready",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:26Z"
},
{
"type": "ContainersReady",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": null
},
{
"type": "PodScheduled",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:20Z"
}
],
"hostIP": "172.17.0.15",
"podIP": "10.72.198.79",
"startTime": "2019-09-04T11:23:20Z",
"containerStatuses": [
{
"name": "work",
"state": {
"running": {
"startedAt": "2019-09-04T11:23:25Z"
}
},
"lastState": {},
"ready": true,
"restartCount": 0,
"image": "busybox:latest",
"imageID": "docker://busybox:latest",
"containerID": "docker://bebcdfa74f538be0"
}
],
"qosClass": "Burstable"
}
}
I0904 11:23:30.680377 31689 utils.go:416] POD #19/20: {
"metadata": {
"name": "pdb-workload-z5dmt",
"generateName": "pdb-workload-",
"namespace": "default",
"selfLink": "/api/v1/namespaces/default/pods/pdb-workload-z5dmt",
"uid": "666db6ff-cf06-11e9-9d68-0ac9a22f5366",
"resourceVersion": "3881",
"creationTimestamp": "2019-09-04T11:23:20Z",
"labels": {
"app": "nginx"
},
"ownerReferences": [
{
"apiVersion": "v1",
"kind": "ReplicationController",
"name": "pdb-workload",
"uid": "66676fbe-cf06-11e9-9d68-0ac9a22f5366",
"controller": true,
"blockOwnerDeletion": true
}
]
},
"spec": {
"volumes": [
{
"name": "default-token-ffdcj",
"secret": {
"secretName": "default-token-ffdcj",
"defaultMode": 420
}
}
],
"containers": [
{
"name": "work",
"image": "busybox",
"command": [
"sleep",
"10h"
],
"resources": {
"requests": {
"cpu": "50m",
"memory": "50Mi"
}
},
"volumeMounts": [
{
"name": "default-token-ffdcj",
"readOnly": true,
"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
}
],
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "Always"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"nodeSelector": {
"node-draining-test": "1da1871e-cf06-11e9-95fe-0ac9a22f5366",
"node-role.kubernetes.io/worker": ""
},
"serviceAccountName": "default",
"serviceAccount": "default",
"nodeName": "bebdfc33-fa59-4fb2-8ba9-aeb9624e3db4",
"securityContext": {},
"schedulerName": "default-scheduler",
"tolerations": [
{
"key": "kubemark",
"operator": "Exists"
},
{
"key": "node.kubernetes.io/not-ready",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
},
{
"key": "node.kubernetes.io/unreachable",
"operator": "Exists",
"effect": "NoExecute",
"tolerationSeconds": 300
}
],
"priority": 0
},
"status": {
"phase": "Running",
"conditions": [
{
"type": "Initialized",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:20Z"
},
{
"type": "Ready",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:26Z"
},
{
"type": "ContainersReady",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": null
},
{
"type": "PodScheduled",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2019-09-04T11:23:20Z"
}
],
"hostIP": "172.17.0.15",
"podIP": "10.255.10.70",
"startTime": "2019-09-04T11:23:20Z",
"containerStatuses": [
{
"name": "work",
"state": {
"running": {
"startedAt": "2019-09-04T11:23:25Z"
}
},
"lastState": {},
"ready": true,
"restartCount": 0,
"image": "busybox:latest",
"imageID": "docker://busybox:latest",
"containerID": "docker://89b30c15b51b295a"
}
],
"qosClass": "Burstable"
}
}
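(The per-pod dumps above are the suite's pre-drain snapshot of the 20 pdb-workload pods. For reference only, roughly the same view can be taken by hand with kubectl; this is a sketch, not part of the test, with the node name copied from the dump above.)

# List the workload pods currently scheduled to the node that is about to be drained,
# filtering server-side on spec.nodeName.
kubectl get pods -n default -l app=nginx \
  --field-selector spec.nodeName=bebdfc33-fa59-4fb2-8ba9-aeb9624e3db4 -o wide

# Dump the full objects, similar to the JSON the test prints above.
kubectl get pods -n default -l app=nginx \
  --field-selector spec.nodeName=bebdfc33-fa59-4fb2-8ba9-aeb9624e3db4 -o json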
STEP: Delete machine to trigger node draining
STEP: Observing and verifying node draining
E0904 11:23:30.696664 31689 utils.go:451] Node "bebdfc33-fa59-4fb2-8ba9-aeb9624e3db4" is expected to be marked as unschedulable, but it is not
I0904 11:23:35.707191 31689 utils.go:455] [remaining 14m55s] Node "bebdfc33-fa59-4fb2-8ba9-aeb9624e3db4" is marked unschedulable as expected
I0904 11:23:35.721886 31689 utils.go:474] [remaining 14m55s] Have 9 pods scheduled to node "bebdfc33-fa59-4fb2-8ba9-aeb9624e3db4"
I0904 11:23:35.723801 31689 utils.go:490] [remaining 14m55s] RC ReadyReplicas: 20, Replicas: 20
I0904 11:23:35.723826 31689 utils.go:500] [remaining 14m55s] Expecting at most 2 pods to be scheduled to drained node "bebdfc33-fa59-4fb2-8ba9-aeb9624e3db4", got 9
I0904 11:23:40.705056 31689 utils.go:455] [remaining 14m50s] Node "bebdfc33-fa59-4fb2-8ba9-aeb9624e3db4" is marked unschedulable as expected
I0904 11:23:40.719795 31689 utils.go:474] [remaining 14m50s] Have 8 pods scheduled to node "bebdfc33-fa59-4fb2-8ba9-aeb9624e3db4"
I0904 11:23:40.721745 31689 utils.go:490] [remaining 14m50s] RC ReadyReplicas: 20, Replicas: 20
I0904 11:23:40.721770 31689 utils.go:500] [remaining 14m50s] Expecting at most 2 pods to be scheduled to drained node "bebdfc33-fa59-4fb2-8ba9-aeb9624e3db4", got 8
I0904 11:23:45.701516 31689 utils.go:455] [remaining 14m45s] Node "bebdfc33-fa59-4fb2-8ba9-aeb9624e3db4" is marked unschedulable as expected
I0904 11:23:45.709821 31689 utils.go:474] [remaining 14m45s] Have 7 pods scheduled to node "bebdfc33-fa59-4fb2-8ba9-aeb9624e3db4"
I0904 11:23:45.715030 31689 utils.go:490] [remaining 14m45s] RC ReadyReplicas: 20, Replicas: 20
I0904 11:23:45.715056 31689 utils.go:500] [remaining 14m45s] Expecting at most 2 pods to be scheduled to drained node "bebdfc33-fa59-4fb2-8ba9-aeb9624e3db4", got 7
I0904 11:23:50.700780 31689 utils.go:455] [remaining 14m40s] Node "bebdfc33-fa59-4fb2-8ba9-aeb9624e3db4" is marked unschedulable as expected
I0904 11:23:50.706789 31689 utils.go:474] [remaining 14m40s] Have 6 pods scheduled to node "bebdfc33-fa59-4fb2-8ba9-aeb9624e3db4"
I0904 11:23:50.708448 31689 utils.go:490] [remaining 14m40s] RC ReadyReplicas: 20, Replicas: 20
I0904 11:23:50.708476 31689 utils.go:500] [remaining 14m40s] Expecting at most 2 pods to be scheduled to drained node "bebdfc33-fa59-4fb2-8ba9-aeb9624e3db4", got 6
I0904 11:23:55.700992 31689 utils.go:455] [remaining 14m35s] Node "bebdfc33-fa59-4fb2-8ba9-aeb9624e3db4" is marked unschedulable as expected
I0904 11:23:55.706970 31689 utils.go:474] [remaining 14m35s] Have 5 pods scheduled to node "bebdfc33-fa59-4fb2-8ba9-aeb9624e3db4"
I0904 11:23:55.708685 31689 utils.go:490] [remaining 14m35s] RC ReadyReplicas: 20, Replicas: 20
I0904 11:23:55.708724 31689 utils.go:500] [remaining 14m35s] Expecting at most 2 pods to be scheduled to drained node "bebdfc33-fa59-4fb2-8ba9-aeb9624e3db4", got 5
I0904 11:24:00.702049 31689 utils.go:455] [remaining 14m30s] Node "bebdfc33-fa59-4fb2-8ba9-aeb9624e3db4" is marked unschedulable as expected
I0904 11:24:00.708927 31689 utils.go:474] [remaining 14m30s] Have 4 pods scheduled to node "bebdfc33-fa59-4fb2-8ba9-aeb9624e3db4"
I0904 11:24:00.711148 31689 utils.go:490] [remaining 14m30s] RC ReadyReplicas: 20, Replicas: 20
I0904 11:24:00.711209 31689 utils.go:500] [remaining 14m30s] Expecting at most 2 pods to be scheduled to drained node "bebdfc33-fa59-4fb2-8ba9-aeb9624e3db4", got 4
I0904 11:24:05.701875 31689 utils.go:455] [remaining 14m25s] Node "bebdfc33-fa59-4fb2-8ba9-aeb9624e3db4" is marked unschedulable as expected
I0904 11:24:05.710049 31689 utils.go:474] [remaining 14m25s] Have 3 pods scheduled to node "bebdfc33-fa59-4fb2-8ba9-aeb9624e3db4"
I0904 11:24:05.711779 31689 utils.go:490] [remaining 14m25s] RC ReadyReplicas: 20, Replicas: 20
I0904 11:24:05.711803 31689 utils.go:500] [remaining 14m25s] Expecting at most 2 pods to be scheduled to drained node "bebdfc33-fa59-4fb2-8ba9-aeb9624e3db4", got 3
I0904 11:24:10.700788 31689 utils.go:455] [remaining 14m20s] Node "bebdfc33-fa59-4fb2-8ba9-aeb9624e3db4" is marked unschedulable as expected
I0904 11:24:10.706848 31689 utils.go:474] [remaining 14m20s] Have 2 pods scheduled to node "bebdfc33-fa59-4fb2-8ba9-aeb9624e3db4"
I0904 11:24:10.708499 31689 utils.go:490] [remaining 14m20s] RC ReadyReplicas: 20, Replicas: 20
I0904 11:24:10.708526 31689 utils.go:504] [remaining 14m20s] Expected result: all pods from the RC up to last one or two got scheduled to a different node while respecting PDB
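(The drain observed above is driven by the machine controller once the machine is deleted: the node is cordoned and its pods are evicted a few at a time, while the PodDisruptionBudget keeps the ReplicationController at 20 ready replicas. A comparable manual drain, purely as a sketch and not what the controller runs internally, would be:)

# Cordon the node so no new pods land on it, then evict its pods while
# honouring PodDisruptionBudgets; grace period matches the pod spec above.
kubectl cordon bebdfc33-fa59-4fb2-8ba9-aeb9624e3db4
kubectl drain bebdfc33-fa59-4fb2-8ba9-aeb9624e3db4 \
  --ignore-daemonsets --grace-period=30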
STEP: Validating the machine is deleted
E0904 11:24:10.710254 31689 infra.go:454] Machine "machine1" not yet deleted
E0904 11:24:15.712384 31689 infra.go:454] Machine "machine1" not yet deleted
I0904 11:24:20.713577 31689 infra.go:463] Machine "machine1" successfully deleted
STEP: Validate underlying node corresponding to machine1 is removed as well
I0904 11:24:20.715866 31689 utils.go:530] [15m0s remaining] Node "bebdfc33-fa59-4fb2-8ba9-aeb9624e3db4" successfully deleted
STEP: Delete PDB
STEP: Delete machine2
STEP: waiting for cluster to get back to original size. Final size should be 5 nodes
I0904 11:24:20.725769 31689 utils.go:239] [remaining 15m0s] Cluster size expected to be 5 nodes
I0904 11:24:20.730223 31689 utils.go:99] MachineSet "kubemark-actuator-testing-machineset" replicas 1. Ready: 1, available 1
I0904 11:24:20.730246 31689 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-blue" replicas 1. Ready: 1, available 1
I0904 11:24:20.730256 31689 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-green" replicas 1. Ready: 1, available 1
I0904 11:24:20.730266 31689 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-red" replicas 1. Ready: 1, available 1
I0904 11:24:20.741498 31689 utils.go:231] Node "4169660f-ac78-4b87-b90f-0358fa5efca9". Ready: true. Unschedulable: false
I0904 11:24:20.741519 31689 utils.go:231] Node "595f4b23-c3ad-4106-bf5c-21ec235eb424". Ready: true. Unschedulable: false
I0904 11:24:20.741529 31689 utils.go:231] Node "cae9ea51-ebbf-486e-a863-4da5f73358d7". Ready: true. Unschedulable: true
I0904 11:24:20.741537 31689 utils.go:231] Node "cbe9202c-babb-4a83-bd70-8f21d75df033". Ready: true. Unschedulable: false
I0904 11:24:20.741545 31689 utils.go:231] Node "d1f145c4-1ed2-493f-97af-c24f5256cf46". Ready: true. Unschedulable: false
I0904 11:24:20.741552 31689 utils.go:231] Node "minikube". Ready: true. Unschedulable: false
I0904 11:24:20.748603 31689 utils.go:87] Cluster size is 6 nodes
I0904 11:24:25.748841 31689 utils.go:239] [remaining 14m55s] Cluster size expected to be 5 nodes
I0904 11:24:25.754140 31689 utils.go:99] MachineSet "kubemark-actuator-testing-machineset" replicas 1. Ready: 1, available 1
I0904 11:24:25.754185 31689 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-blue" replicas 1. Ready: 1, available 1
I0904 11:24:25.754196 31689 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-green" replicas 1. Ready: 1, available 1
I0904 11:24:25.754205 31689 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-red" replicas 1. Ready: 1, available 1
I0904 11:24:25.758117 31689 utils.go:231] Node "4169660f-ac78-4b87-b90f-0358fa5efca9". Ready: true. Unschedulable: false
I0904 11:24:25.758139 31689 utils.go:231] Node "595f4b23-c3ad-4106-bf5c-21ec235eb424". Ready: true. Unschedulable: false
I0904 11:24:25.758145 31689 utils.go:231] Node "cae9ea51-ebbf-486e-a863-4da5f73358d7". Ready: true. Unschedulable: true
I0904 11:24:25.758151 31689 utils.go:231] Node "cbe9202c-babb-4a83-bd70-8f21d75df033". Ready: true. Unschedulable: false
I0904 11:24:25.758156 31689 utils.go:231] Node "d1f145c4-1ed2-493f-97af-c24f5256cf46". Ready: true. Unschedulable: false
I0904 11:24:25.758189 31689 utils.go:231] Node "minikube". Ready: true. Unschedulable: false
I0904 11:24:25.761548 31689 utils.go:87] Cluster size is 6 nodes
I0904 11:24:30.748785 31689 utils.go:239] [remaining 14m50s] Cluster size expected to be 5 nodes
I0904 11:24:30.753562 31689 utils.go:99] MachineSet "kubemark-actuator-testing-machineset" replicas 1. Ready: 1, available 1
I0904 11:24:30.753583 31689 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-blue" replicas 1. Ready: 1, available 1
I0904 11:24:30.753590 31689 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-green" replicas 1. Ready: 1, available 1
I0904 11:24:30.753595 31689 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-red" replicas 1. Ready: 1, available 1
I0904 11:24:30.756943 31689 utils.go:231] Node "4169660f-ac78-4b87-b90f-0358fa5efca9". Ready: true. Unschedulable: false
I0904 11:24:30.756976 31689 utils.go:231] Node "595f4b23-c3ad-4106-bf5c-21ec235eb424". Ready: true. Unschedulable: false
I0904 11:24:30.756986 31689 utils.go:231] Node "cae9ea51-ebbf-486e-a863-4da5f73358d7". Ready: true. Unschedulable: true
I0904 11:24:30.756995 31689 utils.go:231] Node "cbe9202c-babb-4a83-bd70-8f21d75df033". Ready: true. Unschedulable: false
I0904 11:24:30.757004 31689 utils.go:231] Node "d1f145c4-1ed2-493f-97af-c24f5256cf46". Ready: true. Unschedulable: false
I0904 11:24:30.757016 31689 utils.go:231] Node "minikube". Ready: true. Unschedulable: false
I0904 11:24:30.760620 31689 utils.go:87] Cluster size is 6 nodes
I0904 11:24:35.748792 31689 utils.go:239] [remaining 14m45s] Cluster size expected to be 5 nodes
I0904 11:24:35.752826 31689 utils.go:99] MachineSet "kubemark-actuator-testing-machineset" replicas 1. Ready: 1, available 1
I0904 11:24:35.752854 31689 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-blue" replicas 1. Ready: 1, available 1
I0904 11:24:35.752865 31689 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-green" replicas 1. Ready: 1, available 1
I0904 11:24:35.752874 31689 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-red" replicas 1. Ready: 1, available 1
I0904 11:24:35.756607 31689 utils.go:231] Node "4169660f-ac78-4b87-b90f-0358fa5efca9". Ready: true. Unschedulable: false
I0904 11:24:35.756631 31689 utils.go:231] Node "595f4b23-c3ad-4106-bf5c-21ec235eb424". Ready: true. Unschedulable: false
I0904 11:24:35.756641 31689 utils.go:231] Node "cae9ea51-ebbf-486e-a863-4da5f73358d7". Ready: true. Unschedulable: true
I0904 11:24:35.756649 31689 utils.go:231] Node "cbe9202c-babb-4a83-bd70-8f21d75df033". Ready: true. Unschedulable: false
I0904 11:24:35.756657 31689 utils.go:231] Node "d1f145c4-1ed2-493f-97af-c24f5256cf46". Ready: true. Unschedulable: false
I0904 11:24:35.756665 31689 utils.go:231] Node "minikube". Ready: true. Unschedulable: false
I0904 11:24:35.759792 31689 utils.go:87] Cluster size is 6 nodes
I0904 11:24:40.749123 31689 utils.go:239] [remaining 14m40s] Cluster size expected to be 5 nodes
I0904 11:24:40.754793 31689 utils.go:99] MachineSet "kubemark-actuator-testing-machineset" replicas 1. Ready: 1, available 1
I0904 11:24:40.754820 31689 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-blue" replicas 1. Ready: 1, available 1
I0904 11:24:40.754830 31689 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-green" replicas 1. Ready: 1, available 1
I0904 11:24:40.754840 31689 utils.go:99] MachineSet "kubemark-actuator-testing-machineset-red" replicas 1. Ready: 1, available 1
I0904 11:24:40.762190 31689 utils.go:231] Node "4169660f-ac78-4b87-b90f-0358fa5efca9". Ready: true. Unschedulable: false
I0904 11:24:40.762218 31689 utils.go:231] Node "595f4b23-c3ad-4106-bf5c-21ec235eb424". Ready: true. Unschedulable: false
I0904 11:24:40.762227 31689 utils.go:231] Node "cbe9202c-babb-4a83-bd70-8f21d75df033". Ready: true. Unschedulable: false
I0904 11:24:40.762236 31689 utils.go:231] Node "d1f145c4-1ed2-493f-97af-c24f5256cf46". Ready: true. Unschedulable: false
I0904 11:24:40.762244 31689 utils.go:231] Node "minikube". Ready: true. Unschedulable: false
I0904 11:24:40.765326 31689 utils.go:87] Cluster size is 5 nodes
I0904 11:24:40.765347 31689 utils.go:257] waiting for all nodes to be ready
I0904 11:24:40.768429 31689 utils.go:262] waiting for all nodes to be schedulable
I0904 11:24:40.771467 31689 utils.go:290] [remaining 1m0s] Node "4169660f-ac78-4b87-b90f-0358fa5efca9" is schedulable
I0904 11:24:40.771493 31689 utils.go:290] [remaining 1m0s] Node "595f4b23-c3ad-4106-bf5c-21ec235eb424" is schedulable
I0904 11:24:40.771505 31689 utils.go:290] [remaining 1m0s] Node "cbe9202c-babb-4a83-bd70-8f21d75df033" is schedulable
I0904 11:24:40.771515 31689 utils.go:290] [remaining 1m0s] Node "d1f145c4-1ed2-493f-97af-c24f5256cf46" is schedulable
I0904 11:24:40.771525 31689 utils.go:290] [remaining 1m0s] Node "minikube" is schedulable
I0904 11:24:40.771538 31689 utils.go:267] waiting for each node to be backed by a machine
I0904 11:24:40.779298 31689 utils.go:47] [remaining 3m0s] Expecting the same number of machines and nodes, have 5 nodes and 5 machines
I0904 11:24:40.779335 31689 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-8drlz" is linked to node "595f4b23-c3ad-4106-bf5c-21ec235eb424"
I0904 11:24:40.779350 31689 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-blue-kx7wv" is linked to node "cbe9202c-babb-4a83-bd70-8f21d75df033"
I0904 11:24:40.779363 31689 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-green-2twbz" is linked to node "4169660f-ac78-4b87-b90f-0358fa5efca9"
I0904 11:24:40.779379 31689 utils.go:70] [remaining 3m0s] Machine "kubemark-actuator-testing-machineset-red-kpmfw" is linked to node "d1f145c4-1ed2-493f-97af-c24f5256cf46"
I0904 11:24:40.779392 31689 utils.go:70] [remaining 3m0s] Machine "minikube-static-machine" is linked to node "minikube"
I0904 11:24:40.790403 31689 utils.go:378] [15m0s remaining] Found 0 nodes with map[node-role.kubernetes.io/worker: node-draining-test:1da1871e-cf06-11e9-95fe-0ac9a22f5366] label, as expected
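(The checks above compare MachineSet replica counts, node readiness, and machine-to-node links until the cluster settles back at 5 nodes. Verified by hand it would look roughly like the sketch below; the machine-api namespace is an assumption and may differ in this kubemark environment.)

# Count nodes and confirm none are left cordoned.
kubectl get nodes -o wide

# Confirm every machine is linked to a node via status.nodeRef
# (namespace is an assumption for this environment).
kubectl get machines -n openshift-machine-api \
  -o custom-columns=MACHINE:.metadata.name,NODE:.status.nodeRef.name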
• [SLOW TEST:85.202 seconds]
[Feature:Machines] Managed cluster should
/tmp/tmp.sOq0VYkqJs/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/infra/infra.go:126
drain node before removing machine resource
/tmp/tmp.sOq0VYkqJs/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/infra/infra.go:346
------------------------------
[Feature:Machines] Managed cluster should
reject invalid machinesets
/tmp/tmp.sOq0VYkqJs/src/github.com/openshift/cluster-api-actuator-pkg/pkg/e2e/infra/infra.go:487
I0904 11:24:40.790514 31689 framework.go:406] >>> kubeConfig: /root/.kube/config
STEP: Creating invalid machineset
STEP: Waiting for ReconcileError MachineSet event
I0904 11:24:40.908036 31689 infra.go:506] Fetching ReconcileError MachineSet invalid-machineset event
I0904 11:24:40.908094 31689 infra.go:512] Found ReconcileError event for "invalid-machineset" machine set with the following message: "invalid-machineset" machineset validation failed: spec.template.metadata.labels: Invalid value: map[string]string{"big-kitty":"i-am-bit-kitty"}: `selector` does not match template `labels`
STEP: Verify no machines from "invalid-machineset" machineset were created
I0904 11:24:40.911099 31689 infra.go:528] Have 0 machines generated from "invalid-machineset" machineset
STEP: Deleting invalid machineset
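(The ReconcileError above comes from MachineSet validation: spec.selector must match spec.template.metadata.labels. A minimal manifest reproducing the rejected case might look like the sketch below; the apiVersion, namespace, and selector value are assumptions, since the object the test actually creates is not shown in the log.)

# Apply a MachineSet whose selector does not match its template labels;
# the controller should emit a ReconcileError event and create no machines.
cat <<'EOF' | kubectl apply -f -
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: invalid-machineset
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      little-kitty: i-am-little-kitty
  template:
    metadata:
      labels:
        big-kitty: i-am-bit-kitty   # does not satisfy the selector above
    spec:
      providerSpec: {}
EOF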
•
Ran 7 of 16 Specs in 202.375 seconds
SUCCESS! -- 7 Passed | 0 Failed | 0 Pending | 9 Skipped
--- PASS: TestE2E (202.38s)
PASS
ok github.com/openshift/cluster-api-actuator-pkg/pkg/e2e 202.422s
make[1]: Leaving directory `/tmp/tmp.sOq0VYkqJs/src/github.com/openshift/cluster-api-actuator-pkg'
+ set +o xtrace
########## FINISHED STAGE: SUCCESS: RUN E2E TESTS [00h 04m 26s] ##########
[PostBuildScript] - Executing post build scripts.
[workspace@2] $ /bin/bash /tmp/jenkins3573783162949132135.sh
########## STARTING STAGE: DOWNLOAD ARTIFACTS FROM THE REMOTE HOST ##########
+ [[ -s /var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/activate ]]
+ source /var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed
++ export PATH=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/.config
+ trap 'exit 0' EXIT
++ pwd
+ ARTIFACT_DIR=/var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/artifacts/gathered
+ rm -rf /var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/artifacts/gathered
+ mkdir -p /var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/artifacts/gathered
+ tree /var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/artifacts/gathered
/var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/artifacts/gathered
0 directories, 0 files
+ exit 0
[workspace@2] $ /bin/bash /tmp/jenkins8501752695808421797.sh
########## STARTING STAGE: GENERATE ARTIFACTS FROM THE REMOTE HOST ##########
+ [[ -s /var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/activate ]]
+ source /var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed
++ export PATH=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/.config
+ trap 'exit 0' EXIT
++ pwd
+ ARTIFACT_DIR=/var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/artifacts/generated
+ rm -rf /var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/artifacts/generated
+ mkdir /var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/artifacts/generated
+ ssh -F /var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo docker version && sudo docker info && sudo docker images && sudo docker ps -a 2>&1'
WARNING: You're not using the default seccomp profile
+ ssh -F /var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo cat /etc/sysconfig/docker /etc/sysconfig/docker-network /etc/sysconfig/docker-storage /etc/sysconfig/docker-storage-setup /etc/systemd/system/docker.service 2>&1'
+ true
+ ssh -F /var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo find /var/lib/docker/containers -name *.log | sudo xargs tail -vn +1 2>&1'
+ ssh -F /var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo ausearch -m AVC -m SELINUX_ERR -m USER_AVC 2>&1'
+ true
+ ssh -F /var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo df -T -h && sudo pvs && sudo vgs && sudo lvs && sudo findmnt --all 2>&1'
+ ssh -F /var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo yum list installed 2>&1'
+ ssh -F /var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo journalctl --dmesg --no-pager --all --lines=all 2>&1'
+ ssh -F /var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo journalctl _PID=1 --no-pager --all --lines=all 2>&1'
+ tree /var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/artifacts/generated
/var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/artifacts/generated
├── avc_denials.log
├── containers.log
├── dmesg.log
├── docker.config
├── docker.info
├── filesystem.info
├── installed_packages.log
└── pid1.journal
0 directories, 8 files
+ exit 0
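Note on the stage above: xtrace does not echo output redirections, but the tree listing shows that each remote command's output was captured into a file under the generated-artifacts directory. A minimal sketch of that pattern follows; the redirection target and file name are assumptions inferred from the tree output, not copied from the job's actual script:

    # Run a diagnostic command on the remote host and capture its output as an artifact.
    ARTIFACT_DIR="$(pwd)/artifacts/generated"
    ssh -F "${OCT_CONFIG_HOME}/origin-ci-tool/inventory/.ssh_config" openshiftdevel \
      'sudo journalctl --dmesg --no-pager --all --lines=all 2>&1' > "${ARTIFACT_DIR}/dmesg.log"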
[workspace@2] $ /bin/bash /tmp/jenkins8101141956822573995.sh
########## STARTING STAGE: FETCH SYSTEMD JOURNALS FROM THE REMOTE HOST ##########
+ [[ -s /var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/activate ]]
+ source /var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed
++ export PATH=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/.config
+ trap 'exit 0' EXIT
++ pwd
+ ARTIFACT_DIR=/var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/artifacts/journals
+ rm -rf /var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/artifacts/journals
+ mkdir /var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/artifacts/journals
+ ssh -F /var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit docker.service --no-pager --all --lines=all
+ ssh -F /var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit dnsmasq.service --no-pager --all --lines=all
+ ssh -F /var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit systemd-journald.service --no-pager --all --lines=all
+ ssh -F /var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit systemd-journald.service --no-pager --all --lines=all
+ tree /var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/artifacts/journals
/var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/artifacts/journals
├── dnsmasq.service
├── docker.service
└── systemd-journald.service
0 directories, 3 files
+ exit 0
[workspace@2] $ /bin/bash /tmp/jenkins7415123878710042857.sh
########## STARTING STAGE: ASSEMBLE GCS OUTPUT ##########
+ [[ -s /var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/activate ]]
+ source /var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed
++ export PATH=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/.config
+ trap 'exit 0' EXIT
+ mkdir -p gcs/artifacts gcs/artifacts/generated gcs/artifacts/journals gcs/artifacts/gathered
++ python -c 'import json; import urllib; print json.load(urllib.urlopen('\''https://ci.openshift.redhat.com/jenkins/job/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/276/api/json'\''))['\''result'\'']'
+ result=SUCCESS
+ cat
++ date +%s
+ cat /var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/builds/276/log
+ cp artifacts/generated/avc_denials.log artifacts/generated/containers.log artifacts/generated/dmesg.log artifacts/generated/docker.config artifacts/generated/docker.info artifacts/generated/filesystem.info artifacts/generated/installed_packages.log artifacts/generated/pid1.journal gcs/artifacts/generated/
+ cp artifacts/journals/dnsmasq.service artifacts/journals/docker.service artifacts/journals/systemd-journald.service gcs/artifacts/journals/
+ cp -r 'artifacts/gathered/*' gcs/artifacts/
cp: cannot stat ‘artifacts/gathered/*’: No such file or directory
++ export status=FAILURE
++ status=FAILURE
+ exit 0
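Note on the ASSEMBLE GCS OUTPUT stage above: the literal pattern artifacts/gathered/* reached cp unexpanded. The gathered directory was empty (the earlier tree call reported 0 directories, 0 files), so the glob had nothing to match, cp failed, and the script set status=FAILURE; the stage still ends with exit 0 because of the trap 'exit 0' EXIT installed at its top. A minimal sketch of a copy that tolerates an empty gathered directory; the guard is an assumption, not the job's actual script:

    # Only copy gathered artifacts when something was actually collected.
    if [ -d artifacts/gathered ] && [ -n "$(ls -A artifacts/gathered 2>/dev/null)" ]; then
        cp -r artifacts/gathered/* gcs/artifacts/
    fi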
[workspace@2] $ /bin/bash /tmp/jenkins1080292076551695019.sh
########## STARTING STAGE: PUSH THE ARTIFACTS AND METADATA ##########
+ [[ -s /var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/activate ]]
+ source /var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed
++ export PATH=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/.config
++ mktemp
+ script=/tmp/tmp.Su74RFtHzJ
+ cat
+ chmod +x /tmp/tmp.Su74RFtHzJ
+ scp -F /var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/.config/origin-ci-tool/inventory/.ssh_config /tmp/tmp.Su74RFtHzJ openshiftdevel:/tmp/tmp.Su74RFtHzJ
+ ssh -F /var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/.config/origin-ci-tool/inventory/.ssh_config -t openshiftdevel 'bash -l -c "timeout 300 /tmp/tmp.Su74RFtHzJ"'
+ cd /home/origin
+ trap 'exit 0' EXIT
+ [[ -n {"type":"presubmit","job":"pull-ci-openshift-cluster-autoscaler-operator-master-e2e","buildid":"1169205390341050368","prowjobid":"33b339da-cf04-11e9-ab71-0a58ac108d31","refs":{"org":"openshift","repo":"cluster-autoscaler-operator","repo_link":"https://github.com/openshift/cluster-autoscaler-operator","base_ref":"master","base_sha":"5408e9b6aa7c16908e7cdd5dc75d647c449601f3","base_link":"https://github.com/openshift/cluster-autoscaler-operator/commit/5408e9b6aa7c16908e7cdd5dc75d647c449601f3","pulls":[{"number":117,"author":"enxebre","sha":"b85c52e35f8ac23efb92c6c7a5d503b05f0f55a3","link":"https://github.com/openshift/cluster-autoscaler-operator/pull/117","commit_link":"https://github.com/openshift/cluster-autoscaler-operator/pull/117/commits/b85c52e35f8ac23efb92c6c7a5d503b05f0f55a3","author_link":"https://github.com/enxebre"}]}} ]]
++ jq --compact-output '.buildid |= "276"'
+ JOB_SPEC='{"type":"presubmit","job":"pull-ci-openshift-cluster-autoscaler-operator-master-e2e","buildid":"276","prowjobid":"33b339da-cf04-11e9-ab71-0a58ac108d31","refs":{"org":"openshift","repo":"cluster-autoscaler-operator","repo_link":"https://github.com/openshift/cluster-autoscaler-operator","base_ref":"master","base_sha":"5408e9b6aa7c16908e7cdd5dc75d647c449601f3","base_link":"https://github.com/openshift/cluster-autoscaler-operator/commit/5408e9b6aa7c16908e7cdd5dc75d647c449601f3","pulls":[{"number":117,"author":"enxebre","sha":"b85c52e35f8ac23efb92c6c7a5d503b05f0f55a3","link":"https://github.com/openshift/cluster-autoscaler-operator/pull/117","commit_link":"https://github.com/openshift/cluster-autoscaler-operator/pull/117/commits/b85c52e35f8ac23efb92c6c7a5d503b05f0f55a3","author_link":"https://github.com/enxebre"}]}}'
+ docker run -e 'JOB_SPEC={"type":"presubmit","job":"pull-ci-openshift-cluster-autoscaler-operator-master-e2e","buildid":"276","prowjobid":"33b339da-cf04-11e9-ab71-0a58ac108d31","refs":{"org":"openshift","repo":"cluster-autoscaler-operator","repo_link":"https://github.com/openshift/cluster-autoscaler-operator","base_ref":"master","base_sha":"5408e9b6aa7c16908e7cdd5dc75d647c449601f3","base_link":"https://github.com/openshift/cluster-autoscaler-operator/commit/5408e9b6aa7c16908e7cdd5dc75d647c449601f3","pulls":[{"number":117,"author":"enxebre","sha":"b85c52e35f8ac23efb92c6c7a5d503b05f0f55a3","link":"https://github.com/openshift/cluster-autoscaler-operator/pull/117","commit_link":"https://github.com/openshift/cluster-autoscaler-operator/pull/117/commits/b85c52e35f8ac23efb92c6c7a5d503b05f0f55a3","author_link":"https://github.com/enxebre"}]}}' -v /data:/data:z registry.svc.ci.openshift.org/ci/gcsupload:latest --dry-run=false --gcs-path=gs://origin-ci-test --gcs-credentials-file=/data/credentials.json --path-strategy=single --default-org=openshift --default-repo=origin '/data/gcs/*'
Unable to find image 'registry.svc.ci.openshift.org/ci/gcsupload:latest' locally
Trying to pull repository registry.svc.ci.openshift.org/ci/gcsupload ...
latest: Pulling from registry.svc.ci.openshift.org/ci/gcsupload
a073c86ecf9e: Already exists
cc3fc741b1a9: Already exists
822bed51ba40: Pulling fs layer
85cea451eec0: Pulling fs layer
85cea451eec0: Verifying Checksum
85cea451eec0: Download complete
822bed51ba40: Verifying Checksum
822bed51ba40: Download complete
822bed51ba40: Pull complete
85cea451eec0: Pull complete
Digest: sha256:03aad50d7ec631ee07c12ac2ba679bd48c7781f7d5754f9e0dcc4e7260e35208
Status: Downloaded newer image for registry.svc.ci.openshift.org/ci/gcsupload:latest
{"component":"gcsupload","file":"prow/gcsupload/run.go:107","func":"k8s.io/test-infra/prow/gcsupload.Options.assembleTargets","level":"warning","msg":"Encountered error in resolving items to upload for /data/gcs/*: stat /data/gcs/*: no such file or directory","time":"2019-09-04T11:25:02Z"}
{"component":"gcsupload","dest":"pr-logs/directory/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/276.txt","file":"prow/pod-utils/gcs/upload.go:64","func":"k8s.io/test-infra/prow/pod-utils/gcs.upload","level":"info","msg":"Queued for upload","time":"2019-09-04T11:25:02Z"}
{"component":"gcsupload","dest":"pr-logs/directory/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/latest-build.txt","file":"prow/pod-utils/gcs/upload.go:64","func":"k8s.io/test-infra/prow/pod-utils/gcs.upload","level":"info","msg":"Queued for upload","time":"2019-09-04T11:25:02Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_cluster-autoscaler-operator/117/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/latest-build.txt","file":"prow/pod-utils/gcs/upload.go:64","func":"k8s.io/test-infra/prow/pod-utils/gcs.upload","level":"info","msg":"Queued for upload","time":"2019-09-04T11:25:02Z"}
{"component":"gcsupload","dest":"pr-logs/directory/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/276.txt","file":"prow/pod-utils/gcs/upload.go:70","func":"k8s.io/test-infra/prow/pod-utils/gcs.upload.func1","level":"info","msg":"Finished upload","time":"2019-09-04T11:25:03Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_cluster-autoscaler-operator/117/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/latest-build.txt","file":"prow/pod-utils/gcs/upload.go:70","func":"k8s.io/test-infra/prow/pod-utils/gcs.upload.func1","level":"info","msg":"Finished upload","time":"2019-09-04T11:25:03Z"}
{"component":"gcsupload","dest":"pr-logs/directory/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/latest-build.txt","file":"prow/pod-utils/gcs/upload.go:70","func":"k8s.io/test-infra/prow/pod-utils/gcs.upload.func1","level":"info","msg":"Finished upload","time":"2019-09-04T11:25:03Z"}
{"component":"gcsupload","file":"prow/gcsupload/run.go:65","func":"k8s.io/test-infra/prow/gcsupload.Options.Run","level":"info","msg":"Finished upload to GCS","time":"2019-09-04T11:25:03Z"}
+ exit 0
+ set +o xtrace
########## FINISHED STAGE: SUCCESS: PUSH THE ARTIFACTS AND METADATA [00h 00m 06s] ##########
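Note on the push stage above: gcsupload warned that /data/gcs/* resolved to nothing, consistent with the empty artifact assembly in the previous stage, so only the build log (276.txt) and the latest-build.txt markers were uploaded. One way to confirm what actually landed in the bucket, assuming gsutil is available and the origin-ci-test bucket permits listing (both assumptions; the prefix is taken from the dest paths logged above):

    # List the objects uploaded under this PR job's prefix.
    gsutil ls gs://origin-ci-test/pr-logs/pull/openshift_cluster-autoscaler-operator/117/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/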
[workspace@2] $ /bin/bash /tmp/jenkins8496912179408766053.sh
########## STARTING STAGE: DEPROVISION CLOUD RESOURCES ##########
+ [[ -s /var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/activate ]]
+ source /var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed
++ export PATH=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/.config
+ oct deprovision
PLAYBOOK: main.yml *************************************************************
4 plays in /var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml
PLAY [ensure we have the parameters necessary to deprovision virtual hosts] ****
TASK [ensure all required variables are set] ***********************************
task path: /var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml:9
skipping: [localhost] => (item=origin_ci_inventory_dir) => {
"changed": false,
"generated_timestamp": "2019-09-04 07:25:04.587474",
"item": "origin_ci_inventory_dir",
"skip_reason": "Conditional check failed",
"skipped": true
}
skipping: [localhost] => (item=origin_ci_aws_region) => {
"changed": false,
"generated_timestamp": "2019-09-04 07:25:04.591859",
"item": "origin_ci_aws_region",
"skip_reason": "Conditional check failed",
"skipped": true
}
PLAY [deprovision virtual hosts in EC2] ****************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [deprovision a virtual EC2 host] ******************************************
task path: /var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml:28
included: /var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml for localhost
TASK [update the SSH configuration to remove AWS EC2 specifics] ****************
task path: /var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:2
ok: [localhost] => {
"changed": false,
"generated_timestamp": "2019-09-04 07:25:05.414477",
"msg": ""
}
TASK [rename EC2 instance for termination reaper] ******************************
task path: /var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:8
changed: [localhost] => {
"changed": true,
"generated_timestamp": "2019-09-04 07:25:06.198072",
"msg": "Tags {'Name': 'oct-terminate'} created for resource i-00026d36a3df1ef96."
}
TASK [tear down the EC2 instance] **********************************************
task path: /var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:15
changed: [localhost] => {
"changed": true,
"generated_timestamp": "2019-09-04 07:25:07.164784",
"instance_ids": [
"i-00026d36a3df1ef96"
],
"instances": [
{
"ami_launch_index": "0",
"architecture": "x86_64",
"block_device_mapping": {
"/dev/sda1": {
"delete_on_termination": true,
"status": "attached",
"volume_id": "vol-028c56dcaee239954"
},
"/dev/sdb": {
"delete_on_termination": true,
"status": "attached",
"volume_id": "vol-0c9838ad75698812b"
}
},
"dns_name": "ec2-54-227-18-68.compute-1.amazonaws.com",
"ebs_optimized": false,
"groups": {
"sg-7e73221a": "default"
},
"hypervisor": "xen",
"id": "i-00026d36a3df1ef96",
"image_id": "ami-0b77b87a37c3e662c",
"instance_type": "m4.xlarge",
"kernel": null,
"key_name": "libra",
"launch_time": "2019-09-04T11:08:01.000Z",
"placement": "us-east-1c",
"private_dns_name": "ip-172-18-19-207.ec2.internal",
"private_ip": "172.18.19.207",
"public_dns_name": "ec2-54-227-18-68.compute-1.amazonaws.com",
"public_ip": "54.227.18.68",
"ramdisk": null,
"region": "us-east-1",
"root_device_name": "/dev/sda1",
"root_device_type": "ebs",
"state": "running",
"state_code": 16,
"tags": {
"Name": "oct-terminate",
"openshift_etcd": "",
"openshift_master": "",
"openshift_node": ""
},
"tenancy": "default",
"virtualization_type": "hvm"
}
],
"tagged_instances": []
}
TASK [remove the serialized host variables] ************************************
task path: /var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:22
changed: [localhost] => {
"changed": true,
"generated_timestamp": "2019-09-04 07:25:07.406764",
"path": "/var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/.config/origin-ci-tool/inventory/host_vars/172.18.19.207.yml",
"state": "absent"
}
PLAY [deprovision virtual hosts locally managed by Vagrant] ********************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
PLAY [clean up local configuration for deprovisioned instances] ****************
TASK [remove inventory configuration directory] ********************************
task path: /var/lib/jenkins/origin-ci-tool/2b40f3e11aadb569dc9c0c9fb90e7273658ce6ed/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml:61
changed: [localhost] => {
"changed": true,
"generated_timestamp": "2019-09-04 07:25:07.913409",
"path": "/var/lib/jenkins/jobs/pull-ci-openshift-cluster-autoscaler-operator-master-e2e/workspace@2/.config/origin-ci-tool/inventory",
"state": "absent"
}
PLAY RECAP *********************************************************************
localhost : ok=8 changed=4 unreachable=0 failed=0
+ set +o xtrace
########## FINISHED STAGE: SUCCESS: DEPROVISION CLOUD RESOURCES [00h 00m 04s] ##########
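Note on the deprovision stage above: the ec2 module's response still reports instance i-00026d36a3df1ef96 in the running state because EC2 termination completes asynchronously; the instance was first re-tagged Name=oct-terminate so the termination reaper can catch anything that slips through. A hedged way to verify it eventually reached the terminated state, assuming AWS CLI access to the account (an assumption, this check is not part of the job):

    # Query the final state of the torn-down instance (ID taken from the log above).
    aws ec2 describe-instances --instance-ids i-00026d36a3df1ef96 \
      --query 'Reservations[].Instances[].State.Name' --output text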
Archiving artifacts
Recording test results
[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] done
Finished: SUCCESS