
[BeforeEach] [Top Level]
  /data/src/github.com/openshift/origin/test/extended/util/test.go:47
[BeforeEach] [k8s.io] SchedulerPredicates [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Mar  7 01:09:23.132: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig

STEP: Building a namespace api object
Mar  7 01:09:23.277: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] SchedulerPredicates [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Mar  7 01:09:23.920: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Mar  7 01:09:24.023: INFO: Waiting for terminating namespaces to be deleted...
Mar  7 01:09:24.173: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar  7 01:09:24.222: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Mar  7 01:09:24.322: INFO: 0 / 0 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar  7 01:09:24.322: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Mar  7 01:09:24.322: INFO: 
Logging pods the kubelet thinks is on node ci-primg148-ig-m-nrvp before test
Mar  7 01:09:24.425: INFO: registry-console-1-4qzm5 from default started at 2017-03-07 00:39:04 -0500 EST (1 container statuses recorded)
Mar  7 01:09:24.425: INFO: 	Container registry-console ready: true, restart count 0
Mar  7 01:09:24.425: INFO: router-1-p9p1l from default started at 2017-03-07 00:37:56 -0500 EST (1 container statuses recorded)
Mar  7 01:09:24.425: INFO: 	Container router ready: true, restart count 0
Mar  7 01:09:24.425: INFO: docker-registry-2-ldrtp from default started at 2017-03-07 00:38:56 -0500 EST (1 container statuses recorded)
Mar  7 01:09:24.425: INFO: 	Container registry ready: true, restart count 0
Mar  7 01:09:24.425: INFO: 
Logging pods the kubelet thinks is on node ci-primg148-ig-n-qhms before test
Mar  7 01:09:24.527: INFO: 
Logging pods the kubelet thinks is on node ci-primg148-ig-n-w5x2 before test
Mar  7 01:09:24.630: INFO: 
Logging pods the kubelet thinks is on node ci-primg148-ig-n-wvkh before test
[It] validates resource limits of pods that are allowed to run [Conformance]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:214
Mar  7 01:09:24.838: INFO: Pod docker-registry-2-ldrtp requesting resource cpu=100m on Node ci-primg148-ig-m-nrvp
Mar  7 01:09:24.838: INFO: Pod registry-console-1-4qzm5 requesting resource cpu=0m on Node ci-primg148-ig-m-nrvp
Mar  7 01:09:24.838: INFO: Pod router-1-p9p1l requesting resource cpu=100m on Node ci-primg148-ig-m-nrvp
Mar  7 01:09:24.838: INFO: Using pod capacity: 500m
Mar  7 01:09:24.838: INFO: Node: ci-primg148-ig-n-wvkh has cpu capacity: 2000m
Mar  7 01:09:24.838: INFO: Node: ci-primg148-ig-m-nrvp has cpu capacity: 1800m
Mar  7 01:09:24.838: INFO: Node: ci-primg148-ig-n-qhms has cpu capacity: 2000m
Mar  7 01:09:24.838: INFO: Node: ci-primg148-ig-n-w5x2 has cpu capacity: 2000m
STEP: Starting additional 15 Pods to fully saturate the cluster CPU and trying to start another one
Mar  7 01:09:25.713: INFO: Waiting for running...
I0307 01:09:25.713744   18842 reflector.go:196] Starting reflector *api.Pod (0) from github.com/openshift/origin/vendor/k8s.io/kubernetes/test/utils/pod_store.go:52
I0307 01:09:25.713824   18842 reflector.go:234] Listing and watching *api.Pod from github.com/openshift/origin/vendor/k8s.io/kubernetes/test/utils/pod_store.go:52
Mar  7 01:09:30.816: INFO: Sleeping 10 seconds and crossing our fingers that scheduler will run in that time.
[AfterEach] [k8s.io] SchedulerPredicates [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar  7 01:09:40.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-jv1xm" for this suite.
Mar  7 01:10:07.419: INFO: namespace: e2e-tests-sched-pred-jv1xm, resource: bindings, ignored listing per whitelist
[AfterEach] [k8s.io] SchedulerPredicates [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:67
I0307 01:10:08.028733   18842 request.go:769] Error in request: resource name may not be empty

• [SLOW TEST:44.896 seconds]
[k8s.io] SchedulerPredicates [Serial]
/data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:826
  validates resource limits of pods that are allowed to run [Conformance]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:214
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
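The resource-limits test above saturates the cluster by arithmetic visible in the log: it sums node CPU capacity, subtracts the CPU already requested by running pods, and divides by a per-pod request of 500m. A minimal sketch of that calculation using the figures from the log (function name is illustrative, not the actual e2e helper):

```python
def pods_to_saturate(node_capacity_m, existing_requests_m, pod_cpu_m):
    """How many pod_cpu_m-sized pods fit in the cluster's remaining CPU."""
    total = sum(node_capacity_m.values())
    used = sum(existing_requests_m)
    return (total - used) // pod_cpu_m

# Node CPU capacities reported in the log (millicores)
capacity = {
    "ci-primg148-ig-n-wvkh": 2000,
    "ci-primg148-ig-m-nrvp": 1800,
    "ci-primg148-ig-n-qhms": 2000,
    "ci-primg148-ig-n-w5x2": 2000,
}
# Existing requests: docker-registry (100m), registry-console (0m), router (100m)
requests = [100, 0, 100]

print(pods_to_saturate(capacity, requests, 500))  # 15, matching "Starting additional 15 Pods"
```

With 7800m total and 200m in use, 15 pods of 500m fit (7500m ≤ 7600m) and a 16th does not, which is why the test then expects one more pod to stay unschedulable.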
[k8s.io] SchedulerPredicates [Serial] 
  validates that InterPodAffinity is respected if matching with multiple Affinities
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:615
[BeforeEach] [Top Level]
  /data/src/github.com/openshift/origin/test/extended/util/test.go:47
[BeforeEach] [k8s.io] SchedulerPredicates [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Mar  7 01:10:08.029: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig

STEP: Building a namespace api object
Mar  7 01:10:08.179: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] SchedulerPredicates [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Mar  7 01:10:08.866: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Mar  7 01:10:08.968: INFO: Waiting for terminating namespaces to be deleted...
Mar  7 01:10:09.118: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar  7 01:10:09.169: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Mar  7 01:10:09.269: INFO: 0 / 0 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar  7 01:10:09.269: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Mar  7 01:10:09.269: INFO: 
Logging pods the kubelet thinks is on node ci-primg148-ig-m-nrvp before test
Mar  7 01:10:09.371: INFO: router-1-p9p1l from default started at 2017-03-07 00:37:56 -0500 EST (1 container statuses recorded)
Mar  7 01:10:09.371: INFO: 	Container router ready: true, restart count 0
Mar  7 01:10:09.371: INFO: registry-console-1-4qzm5 from default started at 2017-03-07 00:39:04 -0500 EST (1 container statuses recorded)
Mar  7 01:10:09.371: INFO: 	Container registry-console ready: true, restart count 0
Mar  7 01:10:09.371: INFO: docker-registry-2-ldrtp from default started at 2017-03-07 00:38:56 -0500 EST (1 container statuses recorded)
Mar  7 01:10:09.371: INFO: 	Container registry ready: true, restart count 0
Mar  7 01:10:09.371: INFO: 
Logging pods the kubelet thinks is on node ci-primg148-ig-n-qhms before test
Mar  7 01:10:09.473: INFO: 
Logging pods the kubelet thinks is on node ci-primg148-ig-n-w5x2 before test
Mar  7 01:10:09.575: INFO: 
Logging pods the kubelet thinks is on node ci-primg148-ig-n-wvkh before test
[It] validates that InterPodAffinity is respected if matching with multiple Affinities
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:615
STEP: Trying to launch a pod with a label to get a node which can launch it.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label e2e.inter-pod-affinity.kubernetes.io/zone kubernetes-e2e
STEP: Trying to launch the pod, now with multiple pod affinities with diff LabelOperators.
STEP: removing the label e2e.inter-pod-affinity.kubernetes.io/zone off the node ci-primg148-ig-n-w5x2
STEP: verifying the node doesn't have the label e2e.inter-pod-affinity.kubernetes.io/zone
[AfterEach] [k8s.io] SchedulerPredicates [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar  7 01:10:14.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-zx3hn" for this suite.
Mar  7 01:10:41.126: INFO: namespace: e2e-tests-sched-pred-zx3hn, resource: bindings, ignored listing per whitelist
[AfterEach] [k8s.io] SchedulerPredicates [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:67
I0307 01:10:41.764309   18842 request.go:769] Error in request: resource name may not be empty

• [SLOW TEST:33.735 seconds]
[k8s.io] SchedulerPredicates [Serial]
/data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:826
  validates that InterPodAffinity is respected if matching with multiple Affinities
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:615
------------------------------
S
------------------------------
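The InterPodAffinity test launches a pod whose affinity combines multiple terms with different label-selector operators, keyed to the zone label applied to the node (`e2e.inter-pod-affinity.kubernetes.io/zone`, from the log). A sketch of the shape of such an affinity stanza; the field names follow the Kubernetes PodSpec, but the `security` key and its values are examples, not the exact selector the test generates:

```python
# Illustrative podAffinity with two matchExpressions using different operators.
affinity = {
    "podAffinity": {
        "requiredDuringSchedulingIgnoredDuringExecution": [
            {
                "labelSelector": {
                    "matchExpressions": [
                        {"key": "security", "operator": "In", "values": ["S1"]},
                        {"key": "security", "operator": "NotIn", "values": ["S2"]},
                    ]
                },
                # topologyKey is the zone label the test stamped onto the node
                "topologyKey": "e2e.inter-pod-affinity.kubernetes.io/zone",
            }
        ]
    }
}

terms = affinity["podAffinity"]["requiredDuringSchedulingIgnoredDuringExecution"]
ops = [e["operator"] for t in terms for e in t["labelSelector"]["matchExpressions"]]
print(ops)  # ['In', 'NotIn']
```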
[k8s.io] Daemon set [Serial] 
  should run and stop simple daemon
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/daemon_set.go:148
[BeforeEach] [Top Level]
  /data/src/github.com/openshift/origin/test/extended/util/test.go:47
[BeforeEach] [k8s.io] Daemon set [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Mar  7 01:10:41.764: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig

STEP: Building a namespace api object
Mar  7 01:10:41.949: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Daemon set [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/daemon_set.go:89
[It] should run and stop simple daemon
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/daemon_set.go:148
Mar  7 01:10:50.746: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Mar  7 01:10:52.949: INFO: nodesToPodCount: map[string]int{"ci-primg148-ig-n-w5x2":1, "ci-primg148-ig-m-nrvp":1, "ci-primg148-ig-n-wvkh":1, "ci-primg148-ig-n-qhms":1}
STEP: Stop a daemon pod, check that the daemon pod is revived.
Mar  7 01:10:55.261: INFO: nodesToPodCount: map[string]int{"ci-primg148-ig-n-wvkh":1, "ci-primg148-ig-n-qhms":1, "ci-primg148-ig-n-w5x2":1, "ci-primg148-ig-m-nrvp":1}
Mar  7 01:10:55.261: INFO: Check that reaper kills all daemon pods for daemon-set
Mar  7 01:10:59.519: INFO: nodesToPodCount: map[string]int{}
[AfterEach] [k8s.io] Daemon set [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/daemon_set.go:73
Mar  7 01:10:59.570: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"extensions/v1beta1","metadata":{"selfLink":"/apis/extensions/v1beta1/namespaces/e2e-tests-daemonsets-5wtrp/daemonsets","resourceVersion":"20869"},"items":null}

Mar  7 01:10:59.621: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-5wtrp/pods","resourceVersion":"20869"},"items":null}

[AfterEach] [k8s.io] Daemon set [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar  7 01:11:07.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-5wtrp" for this suite.
Mar  7 01:11:19.477: INFO: namespace: e2e-tests-daemonsets-5wtrp, resource: bindings, ignored listing per whitelist

• [SLOW TEST:38.319 seconds]
[k8s.io] Daemon set [Serial]
/data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:826
  should run and stop simple daemon
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/daemon_set.go:148
------------------------------
SSSSSSSSSSS
------------------------------
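The daemon-set test's pass condition is visible in its `nodesToPodCount` lines: after creation every node carries exactly one daemon pod, and after the reaper runs the map is empty. A minimal sketch of that check (not the actual e2e helper):

```python
def daemon_pods_cover_cluster(nodes_to_pod_count, expected_nodes):
    """True when every expected node runs exactly one daemon pod."""
    return (set(nodes_to_pod_count) == set(expected_nodes)
            and all(c == 1 for c in nodes_to_pod_count.values()))

# The map reported in the log after the DaemonSet is created
counts = {"ci-primg148-ig-n-w5x2": 1, "ci-primg148-ig-m-nrvp": 1,
          "ci-primg148-ig-n-wvkh": 1, "ci-primg148-ig-n-qhms": 1}
nodes = list(counts)

print(daemon_pods_cover_cluster(counts, nodes))  # True
print(daemon_pods_cover_cluster({}, nodes))      # False: the reaper has killed all pods
```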
[k8s.io] SchedulerPredicates [Serial] 
  validates that required NodeAffinity setting is respected if matching
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:373
[BeforeEach] [Top Level]
  /data/src/github.com/openshift/origin/test/extended/util/test.go:47
[BeforeEach] [k8s.io] SchedulerPredicates [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Mar  7 01:11:20.084: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig

STEP: Building a namespace api object
Mar  7 01:11:20.266: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] SchedulerPredicates [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Mar  7 01:11:21.020: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Mar  7 01:11:21.124: INFO: Waiting for terminating namespaces to be deleted...
Mar  7 01:11:21.274: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar  7 01:11:21.325: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Mar  7 01:11:21.426: INFO: 0 / 0 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar  7 01:11:21.426: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Mar  7 01:11:21.426: INFO: 
Logging pods the kubelet thinks is on node ci-primg148-ig-m-nrvp before test
Mar  7 01:11:21.531: INFO: docker-registry-2-ldrtp from default started at 2017-03-07 00:38:56 -0500 EST (1 container statuses recorded)
Mar  7 01:11:21.531: INFO: 	Container registry ready: true, restart count 0
Mar  7 01:11:21.531: INFO: registry-console-1-4qzm5 from default started at 2017-03-07 00:39:04 -0500 EST (1 container statuses recorded)
Mar  7 01:11:21.531: INFO: 	Container registry-console ready: true, restart count 0
Mar  7 01:11:21.531: INFO: router-1-p9p1l from default started at 2017-03-07 00:37:56 -0500 EST (1 container statuses recorded)
Mar  7 01:11:21.531: INFO: 	Container router ready: true, restart count 0
Mar  7 01:11:21.531: INFO: 
Logging pods the kubelet thinks is on node ci-primg148-ig-n-qhms before test
Mar  7 01:11:21.635: INFO: 
Logging pods the kubelet thinks is on node ci-primg148-ig-n-w5x2 before test
Mar  7 01:11:21.741: INFO: 
Logging pods the kubelet thinks is on node ci-primg148-ig-n-wvkh before test
[It] validates that required NodeAffinity setting is respected if matching
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:373
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-e48f1ad4-02fc-11e7-b06e-0ea57314f988 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-e48f1ad4-02fc-11e7-b06e-0ea57314f988 off the node ci-primg148-ig-n-wvkh
STEP: verifying the node doesn't have the label kubernetes.io/e2e-e48f1ad4-02fc-11e7-b06e-0ea57314f988
[AfterEach] [k8s.io] SchedulerPredicates [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar  7 01:11:26.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-xczsf" for this suite.
Mar  7 01:11:53.483: INFO: namespace: e2e-tests-sched-pred-xczsf, resource: bindings, ignored listing per whitelist
[AfterEach] [k8s.io] SchedulerPredicates [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:67
I0307 01:11:54.089899   18842 request.go:769] Error in request: resource name may not be empty

• [SLOW TEST:34.006 seconds]
[k8s.io] SchedulerPredicates [Serial]
/data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:826
  validates that required NodeAffinity setting is respected if matching
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:373
------------------------------
SSSSSSSSSSS
------------------------------
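The NodeAffinity test relaunches the pod with a required node-affinity term keyed to the random label it applied (`kubernetes.io/e2e-e48f1ad4-… = 42`, from the log). A sketch of that stanza and of the matching rule, with the structure following the Kubernetes PodSpec; the exact term the test builds is an assumption:

```python
label_key = "kubernetes.io/e2e-e48f1ad4-02fc-11e7-b06e-0ea57314f988"

node_affinity = {
    "requiredDuringSchedulingIgnoredDuringExecution": {
        "nodeSelectorTerms": [
            {"matchExpressions": [
                {"key": label_key, "operator": "In", "values": ["42"]},
            ]}
        ]
    }
}

# A node satisfies the term when its labels match every expression in it.
node_labels = {label_key: "42"}
term = node_affinity["requiredDuringSchedulingIgnoredDuringExecution"]["nodeSelectorTerms"][0]
matches = all(node_labels.get(e["key"]) in e["values"]
              for e in term["matchExpressions"])
print(matches)  # True: the pod lands back on the labeled node
```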
[k8s.io] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching [Conformance]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:289
[BeforeEach] [Top Level]
  /data/src/github.com/openshift/origin/test/extended/util/test.go:47
[BeforeEach] [k8s.io] SchedulerPredicates [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Mar  7 01:11:54.090: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig

STEP: Building a namespace api object
Mar  7 01:11:54.262: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] SchedulerPredicates [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Mar  7 01:11:54.899: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Mar  7 01:11:55.002: INFO: Waiting for terminating namespaces to be deleted...
Mar  7 01:11:55.152: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar  7 01:11:55.203: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Mar  7 01:11:55.304: INFO: 0 / 0 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar  7 01:11:55.304: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Mar  7 01:11:55.304: INFO: 
Logging pods the kubelet thinks is on node ci-primg148-ig-m-nrvp before test
Mar  7 01:11:55.406: INFO: docker-registry-2-ldrtp from default started at 2017-03-07 00:38:56 -0500 EST (1 container statuses recorded)
Mar  7 01:11:55.406: INFO: 	Container registry ready: true, restart count 0
Mar  7 01:11:55.406: INFO: router-1-p9p1l from default started at 2017-03-07 00:37:56 -0500 EST (1 container statuses recorded)
Mar  7 01:11:55.406: INFO: 	Container router ready: true, restart count 0
Mar  7 01:11:55.406: INFO: registry-console-1-4qzm5 from default started at 2017-03-07 00:39:04 -0500 EST (1 container statuses recorded)
Mar  7 01:11:55.406: INFO: 	Container registry-console ready: true, restart count 0
Mar  7 01:11:55.406: INFO: 
Logging pods the kubelet thinks is on node ci-primg148-ig-n-qhms before test
Mar  7 01:11:55.509: INFO: 
Logging pods the kubelet thinks is on node ci-primg148-ig-n-w5x2 before test
Mar  7 01:11:55.612: INFO: 
Logging pods the kubelet thinks is on node ci-primg148-ig-n-wvkh before test
[It] validates that NodeSelector is respected if matching [Conformance]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:289
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-f8a214d2-02fc-11e7-b06e-0ea57314f988 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-f8a214d2-02fc-11e7-b06e-0ea57314f988 off the node ci-primg148-ig-n-qhms
STEP: verifying the node doesn't have the label kubernetes.io/e2e-f8a214d2-02fc-11e7-b06e-0ea57314f988
[AfterEach] [k8s.io] SchedulerPredicates [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar  7 01:12:00.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-31l24" for this suite.
Mar  7 01:12:27.203: INFO: namespace: e2e-tests-sched-pred-31l24, resource: bindings, ignored listing per whitelist
[AfterEach] [k8s.io] SchedulerPredicates [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:67
I0307 01:12:27.810398   18842 request.go:769] Error in request: resource name may not be empty

• [SLOW TEST:33.720 seconds]
[k8s.io] SchedulerPredicates [Serial]
/data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:826
  validates that NodeSelector is respected if matching [Conformance]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:289
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
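Unlike affinity expressions, `nodeSelector` is a plain exact-match map: the pod schedules only onto a node whose labels contain every key/value pair. A sketch of that predicate using the random label from the log (the helper name is illustrative):

```python
def node_selector_matches(node_labels, node_selector):
    """Exact-match semantics of PodSpec.nodeSelector."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())

selector = {"kubernetes.io/e2e-f8a214d2-02fc-11e7-b06e-0ea57314f988": "42"}

labeled = {**selector, "kubernetes.io/hostname": "ci-primg148-ig-n-qhms"}
unlabeled = {"kubernetes.io/hostname": "ci-primg148-ig-n-qhms"}

print(node_selector_matches(labeled, selector))    # True: relaunched pod schedules
print(node_selector_matches(unlabeled, selector))  # False: no other node qualifies
```

The "not matching" variant in the next block relies on the False case: a pod whose nonempty selector matches no node simply stays Pending, which is why that test only sleeps and then checks the pod never ran.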
[k8s.io] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching [Conformance]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:234
[BeforeEach] [Top Level]
  /data/src/github.com/openshift/origin/test/extended/util/test.go:47
[BeforeEach] [k8s.io] SchedulerPredicates [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Mar  7 01:12:27.811: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig

STEP: Building a namespace api object
Mar  7 01:12:27.961: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] SchedulerPredicates [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Mar  7 01:12:28.613: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Mar  7 01:12:28.716: INFO: Waiting for terminating namespaces to be deleted...
Mar  7 01:12:28.867: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar  7 01:12:28.917: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Mar  7 01:12:29.018: INFO: 0 / 0 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar  7 01:12:29.018: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Mar  7 01:12:29.018: INFO: 
Logging pods the kubelet thinks is on node ci-primg148-ig-m-nrvp before test
Mar  7 01:12:29.122: INFO: registry-console-1-4qzm5 from default started at 2017-03-07 00:39:04 -0500 EST (1 container statuses recorded)
Mar  7 01:12:29.122: INFO: 	Container registry-console ready: true, restart count 0
Mar  7 01:12:29.122: INFO: router-1-p9p1l from default started at 2017-03-07 00:37:56 -0500 EST (1 container statuses recorded)
Mar  7 01:12:29.122: INFO: 	Container router ready: true, restart count 0
Mar  7 01:12:29.122: INFO: docker-registry-2-ldrtp from default started at 2017-03-07 00:38:56 -0500 EST (1 container statuses recorded)
Mar  7 01:12:29.122: INFO: 	Container registry ready: true, restart count 0
Mar  7 01:12:29.122: INFO: 
Logging pods the kubelet thinks is on node ci-primg148-ig-n-qhms before test
Mar  7 01:12:29.225: INFO: 
Logging pods the kubelet thinks is on node ci-primg148-ig-n-w5x2 before test
Mar  7 01:12:29.328: INFO: 
Logging pods the kubelet thinks is on node ci-primg148-ig-n-wvkh before test
[It] validates that NodeSelector is respected if not matching [Conformance]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:234
STEP: Trying to schedule Pod with nonempty NodeSelector.
Mar  7 01:12:29.589: INFO: Sleeping 10 seconds and crossing our fingers that scheduler will run in that time.
[AfterEach] [k8s.io] SchedulerPredicates [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar  7 01:12:39.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-qsxq2" for this suite.
Mar  7 01:13:06.191: INFO: namespace: e2e-tests-sched-pred-qsxq2, resource: bindings, ignored listing per whitelist
[AfterEach] [k8s.io] SchedulerPredicates [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:67
I0307 01:13:06.793945   18842 request.go:769] Error in request: resource name may not be empty

• [SLOW TEST:38.983 seconds]
[k8s.io] SchedulerPredicates [Serial]
/data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:826
  validates that NodeSelector is respected if not matching [Conformance]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:234
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] SchedulerPredicates [Serial] 
  validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:434
[BeforeEach] [Top Level]
  /data/src/github.com/openshift/origin/test/extended/util/test.go:47
[BeforeEach] [k8s.io] SchedulerPredicates [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Mar  7 01:13:06.794: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig

STEP: Building a namespace api object
Mar  7 01:13:06.943: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] SchedulerPredicates [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Mar  7 01:13:07.577: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Mar  7 01:13:07.680: INFO: Waiting for terminating namespaces to be deleted...
Mar  7 01:13:07.831: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar  7 01:13:07.882: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Mar  7 01:13:07.986: INFO: 0 / 0 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar  7 01:13:07.986: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Mar  7 01:13:07.986: INFO: 
Logging pods the kubelet thinks is on node ci-primg148-ig-m-nrvp before test
Mar  7 01:13:08.089: INFO: router-1-p9p1l from default started at 2017-03-07 00:37:56 -0500 EST (1 container statuses recorded)
Mar  7 01:13:08.089: INFO: 	Container router ready: true, restart count 0
Mar  7 01:13:08.089: INFO: registry-console-1-4qzm5 from default started at 2017-03-07 00:39:04 -0500 EST (1 container statuses recorded)
Mar  7 01:13:08.089: INFO: 	Container registry-console ready: true, restart count 0
Mar  7 01:13:08.089: INFO: docker-registry-2-ldrtp from default started at 2017-03-07 00:38:56 -0500 EST (1 container statuses recorded)
Mar  7 01:13:08.089: INFO: 	Container registry ready: true, restart count 0
Mar  7 01:13:08.089: INFO: 
Logging pods the kubelet thinks is on node ci-primg148-ig-n-qhms before test
Mar  7 01:13:08.191: INFO: 
Logging pods the kubelet thinks is on node ci-primg148-ig-n-w5x2 before test
Mar  7 01:13:08.294: INFO: 
Logging pods the kubelet thinks is on node ci-primg148-ig-n-wvkh before test
[It] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:434
STEP: Trying to launch a pod with an invalid pod Affinity data.
Mar  7 01:13:08.499: INFO: Sleeping 10 seconds and crossing our fingers that scheduler will run in that time.
[AfterEach] [k8s.io] SchedulerPredicates [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar  7 01:13:18.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-zgzwk" for this suite.
Mar  7 01:13:30.047: INFO: namespace: e2e-tests-sched-pred-zgzwk, resource: bindings, ignored listing per whitelist
[AfterEach] [k8s.io] SchedulerPredicates [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:67
I0307 01:13:30.651942   18842 request.go:769] Error in request: resource name may not be empty

• [SLOW TEST:23.857 seconds]
[k8s.io] SchedulerPredicates [Serial]
/data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:826
  validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:434
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
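The invalid-podAffinity test is rejected at validation time rather than at scheduling time. The relevant rule: a `LabelSelectorRequirement` with operator `In`/`NotIn` must carry at least one value, while `Exists`/`DoesNotExist` must carry none. A minimal sketch of that validation; the exact invalid selector the test submits is an assumption here:

```python
def validate_requirement(req):
    """Return an error string for an invalid LabelSelectorRequirement, else None."""
    op, values = req.get("operator"), req.get("values", [])
    if op in ("In", "NotIn") and not values:
        return f"values: must be specified when operator is {op}"
    if op in ("Exists", "DoesNotExist") and values:
        return f"values: may not be specified when operator is {op}"
    if op not in ("In", "NotIn", "Exists", "DoesNotExist"):
        return f"operator: not a valid selector operator: {op!r}"
    return None

print(validate_requirement({"key": "service", "operator": "In", "values": []}))
# values: must be specified when operator is In
print(validate_requirement({"key": "service", "operator": "Exists"}))  # None
```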
[k8s.io] Namespaces [Serial] 
  should delete fast enough (90 percent of 100 namespaces in 150 seconds)
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/namespace.go:222
[BeforeEach] [Top Level]
  /data/src/github.com/openshift/origin/test/extended/util/test.go:47
[BeforeEach] [k8s.io] Namespaces [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Mar  7 01:13:30.653: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig

STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete fast enough (90 percent of 100 namespaces in 150 seconds)
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/namespace.go:222
STEP: Creating testing namespaces
I0307 01:13:31.054486   18842 request.go:632] Throttling request took 98.787384ms, request: POST:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces
I0307 01:13:31.104469   18842 request.go:632] Throttling request took 148.752053ms, request: POST:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces
I0307 01:13:31.154455   18842 request.go:632] Throttling request took 198.736473ms, request: POST:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces
I0307 01:13:31.204482   18842 request.go:632] Throttling request took 248.759881ms, request: POST:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces
I0307 01:13:31.254466   18842 request.go:632] Throttling request took 298.73226ms, request: POST:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces
I0307 01:13:31.304487   18842 request.go:632] Throttling request took 348.72237ms, request: POST:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces
I0307 01:13:31.354482   18842 request.go:632] Throttling request took 398.706025ms, request: POST:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces
I0307 01:13:31.404489   18842 request.go:632] Throttling request took 448.713953ms, request: POST:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces
I0307 01:13:31.454455   18842 request.go:632] Throttling request took 498.682588ms, request: POST:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces
I0307 01:13:31.504501   18842 request.go:632] Throttling request took 548.678567ms, request: POST:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces
I0307 01:13:31.554460   18842 request.go:632] Throttling request took 598.648464ms, request: POST:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces
I0307 01:13:31.604494   18842 request.go:632] Throttling request took 648.677776ms, request: POST:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces
I0307 01:13:31.654506   18842 request.go:632] Throttling request took 698.684591ms, request: POST:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces
I0307 01:13:31.704453   18842 request.go:632] Throttling request took 748.603589ms, request: POST:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces
I0307 01:13:31.754497   18842 request.go:632] Throttling request took 798.644871ms, request: POST:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces
I0307 01:13:31.804492   18842 request.go:632] Throttling request took 848.635169ms, request: POST:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces
I0307 01:13:31.854456   18842 request.go:632] Throttling request took 898.599136ms, request: POST:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces
I0307 01:13:31.904452   18842 request.go:632] Throttling request took 948.535819ms, request: POST:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces
I0307 01:13:31.954470   18842 request.go:632] Throttling request took 998.554074ms, request: POST:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces
I0307 01:13:32.004469   18842 request.go:632] Throttling request took 1.048546759s, request: POST:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces
I0307 01:13:32.054510   18842 request.go:632] Throttling request took 1.098586057s, request: POST:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces
I0307 01:13:32.104501   18842 request.go:632] Throttling request took 1.148534671s, request: POST:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces
I0307 01:13:32.154500   18842 request.go:632] Throttling request took 1.198526299s, request: POST:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces
I0307 01:13:32.204466   18842 request.go:632] Throttling request took 1.2485001s, request: POST:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces
I0307 01:13:32.254487   18842 request.go:632] Throttling request took 1.298503187s, request: POST:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces
I0307 01:13:32.304483   18842 request.go:632] Throttling request took 1.34846676s, request: POST:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces
I0307 01:13:32.354473   18842 request.go:632] Throttling request took 1.398468575s, request: POST:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces
I0307 01:13:32.404497   18842 request.go:632] Throttling request took 1.448490878s, request: POST:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces
I0307 01:13:32.454477   18842 request.go:632] Throttling request took 1.49846385s, request: POST:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces
I0307 01:13:32.504463   18842 request.go:632] Throttling request took 1.548409463s, request: POST:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces
I0307 01:13:32.554468   18842 request.go:632] Throttling request took 1.598406615s, request: POST:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces
I0307 01:13:32.604480   18842 request.go:632] Throttling request took 1.648416976s, request: POST:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces
I0307 01:13:32.654463   18842 request.go:632] Throttling request took 1.698407005s, request: POST:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces
I0307 01:13:32.704484   18842 request.go:632] Throttling request took 1.748373226s, request: POST:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces
I0307 01:13:32.754516   18842 request.go:632] Throttling request took 1.798399457s, request: POST:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces
I0307 01:13:32.804469   18842 request.go:632] Throttling request took 1.848347168s, request: POST:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces
I0307 01:13:32.854455   18842 request.go:632] Throttling request took 1.898345058s, request: POST:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces
I0307 01:13:32.904474   18842 request.go:632] Throttling request took 1.948338362s, request: POST:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces
I0307 01:13:32.954459   18842 request.go:632] Throttling request took 1.998325373s, request: POST:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces
I0307 01:13:33.004464   18842 request.go:632] Throttling request took 2.048309392s, request: POST:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces
I0307 01:13:33.054457   18842 request.go:632] Throttling request took 2.09830541s, request: POST:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces
I0307 01:13:33.104467   18842 request.go:632] Throttling request took 2.148285091s, request: POST:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces
I0307 01:13:33.154477   18842 request.go:632] Throttling request took 2.198282417s, request: POST:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces
I0307 01:13:33.204520   18842 request.go:632] Throttling request took 2.248291159s, request: POST:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces
I0307 01:13:33.254482   18842 request.go:632] Throttling request took 2.298253103s, request: POST:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces
I0307 01:13:33.304466   18842 request.go:632] Throttling request took 2.348244328s, request: POST:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces
I0307 01:13:33.354485   18842 request.go:632] Throttling request took 2.398237891s, request: POST:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces
I0307 01:13:33.404468   18842 request.go:632] Throttling request took 2.448204823s, request: POST:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces
I0307 01:13:33.454478   18842 request.go:632] Throttling request took 2.498196882s, request: POST:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces
STEP: Waiting 10 seconds
STEP: Deleting namespaces
Mar  7 01:13:51.633: INFO: namespace : e2e-tests-nslifetest-35-l52cs api call to delete is complete 
Mar  7 01:13:51.638: INFO: namespace : e2e-tests-nslifetest-11-b173m api call to delete is complete 
Mar  7 01:13:51.640: INFO: namespace : e2e-tests-nslifetest-23-rd1rx api call to delete is complete 
Mar  7 01:13:51.641: INFO: namespace : e2e-tests-nslifetest-31-60h6n api call to delete is complete 
Mar  7 01:13:51.641: INFO: namespace : e2e-tests-nslifetest-2-0s3cb api call to delete is complete 
Mar  7 01:13:51.642: INFO: namespace : e2e-tests-nslifetest-12-cbmlb api call to delete is complete 
Mar  7 01:13:51.642: INFO: namespace : e2e-tests-nslifetest-99-zfc36 api call to delete is complete 
Mar  7 01:13:51.642: INFO: namespace : e2e-tests-nslifetest-20-lz0xx api call to delete is complete 
Mar  7 01:13:51.642: INFO: namespace : e2e-tests-nslifetest-5-g8stl api call to delete is complete 
I0307 01:13:51.654483   18842 request.go:632] Throttling request took 86.179255ms, request: DELETE:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces/e2e-tests-nslifetest-62-lmjss
I0307 01:13:51.704502   18842 request.go:632] Throttling request took 136.188442ms, request: DELETE:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces/e2e-tests-nslifetest-26-dzs79
Mar  7 01:13:51.727: INFO: namespace : e2e-tests-nslifetest-10-r47fd api call to delete is complete 
Mar  7 01:13:51.729: INFO: namespace : e2e-tests-nslifetest-33-3m5tp api call to delete is complete 
Mar  7 01:13:51.729: INFO: namespace : e2e-tests-nslifetest-14-tmdzq api call to delete is complete 
Mar  7 01:13:51.733: INFO: namespace : e2e-tests-nslifetest-0-cwjlc api call to delete is complete 
Mar  7 01:13:51.744: INFO: namespace : e2e-tests-nslifetest-48-qbszs api call to delete is complete 
Mar  7 01:13:51.744: INFO: namespace : e2e-tests-nslifetest-13-z0hwj api call to delete is complete 
Mar  7 01:13:51.744: INFO: namespace : e2e-tests-nslifetest-49-7hlgf api call to delete is complete 
Mar  7 01:13:51.744: INFO: namespace : e2e-tests-nslifetest-37-8gd4l api call to delete is complete 
Mar  7 01:13:51.744: INFO: namespace : e2e-tests-nslifetest-52-0b4sj api call to delete is complete 
Mar  7 01:13:51.744: INFO: namespace : e2e-tests-nslifetest-58-rzwrg api call to delete is complete 
Mar  7 01:13:51.744: INFO: namespace : e2e-tests-nslifetest-16-5c6qk api call to delete is complete 
Mar  7 01:13:51.744: INFO: namespace : e2e-tests-nslifetest-57-43653 api call to delete is complete 
Mar  7 01:13:51.744: INFO: namespace : e2e-tests-nslifetest-32-jw856 api call to delete is complete 
Mar  7 01:13:51.744: INFO: namespace : e2e-tests-nslifetest-22-gz9jt api call to delete is complete 
Mar  7 01:13:51.744: INFO: namespace : e2e-tests-nslifetest-38-c6xq1 api call to delete is complete 
I0307 01:13:51.754489   18842 request.go:632] Throttling request took 186.181974ms, request: DELETE:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces/e2e-tests-nslifetest-27-44xhz
Mar  7 01:13:51.772: INFO: namespace : e2e-tests-nslifetest-50-m8rmh api call to delete is complete 
Mar  7 01:13:51.773: INFO: namespace : e2e-tests-nslifetest-56-m65hs api call to delete is complete 
Mar  7 01:13:51.773: INFO: namespace : e2e-tests-nslifetest-53-815nv api call to delete is complete 
Mar  7 01:13:51.773: INFO: namespace : e2e-tests-nslifetest-4-7jgf9 api call to delete is complete 
Mar  7 01:13:51.773: INFO: namespace : e2e-tests-nslifetest-21-cftjz api call to delete is complete 
Mar  7 01:13:51.775: INFO: namespace : e2e-tests-nslifetest-51-512h8 api call to delete is complete 
Mar  7 01:13:51.775: INFO: namespace : e2e-tests-nslifetest-17-p5pf9 api call to delete is complete 
Mar  7 01:13:51.775: INFO: namespace : e2e-tests-nslifetest-39-k0zm1 api call to delete is complete 
Mar  7 01:13:51.775: INFO: namespace : e2e-tests-nslifetest-47-99gc7 api call to delete is complete 
Mar  7 01:13:51.775: INFO: namespace : e2e-tests-nslifetest-42-12jmr api call to delete is complete 
Mar  7 01:13:51.775: INFO: namespace : e2e-tests-nslifetest-6-wf186 api call to delete is complete 
Mar  7 01:13:51.775: INFO: namespace : e2e-tests-nslifetest-59-88n5d api call to delete is complete 
Mar  7 01:13:51.776: INFO: namespace : e2e-tests-nslifetest-60-t6gt8 api call to delete is complete 
Mar  7 01:13:51.776: INFO: namespace : e2e-tests-nslifetest-3-77f8j api call to delete is complete 
Mar  7 01:13:51.776: INFO: namespace : e2e-tests-nslifetest-61-z6mr6 api call to delete is complete 
Mar  7 01:13:51.776: INFO: namespace : e2e-tests-nslifetest-54-bjkr4 api call to delete is complete 
Mar  7 01:13:51.781: INFO: namespace : e2e-tests-nslifetest-24-4rr4s api call to delete is complete 
Mar  7 01:13:51.781: INFO: namespace : e2e-tests-nslifetest-36-mbs2d api call to delete is complete 
Mar  7 01:13:51.781: INFO: namespace : e2e-tests-nslifetest-34-sgzcr api call to delete is complete 
Mar  7 01:13:51.781: INFO: namespace : e2e-tests-nslifetest-55-vg0mh api call to delete is complete 
Mar  7 01:13:51.781: INFO: namespace : e2e-tests-nslifetest-1-21svw api call to delete is complete 
Mar  7 01:13:51.781: INFO: namespace : e2e-tests-nslifetest-18-n51fq api call to delete is complete 
Mar  7 01:13:51.782: INFO: namespace : e2e-tests-nslifetest-41-dfcjc api call to delete is complete 
Mar  7 01:13:51.782: INFO: namespace : e2e-tests-nslifetest-19-16bzg api call to delete is complete 
Mar  7 01:13:51.782: INFO: namespace : e2e-tests-nslifetest-15-2mnf9 api call to delete is complete 
Mar  7 01:13:51.782: INFO: namespace : e2e-tests-nslifetest-25-k45dz api call to delete is complete 
Mar  7 01:13:51.782: INFO: namespace : e2e-tests-nslifetest-62-lmjss api call to delete is complete 
Mar  7 01:13:51.782: INFO: namespace : e2e-tests-nslifetest-40-x8hkb api call to delete is complete 
Mar  7 01:13:51.791: INFO: namespace : e2e-tests-nslifetest-26-dzs79 api call to delete is complete 
I0307 01:13:51.804452   18842 request.go:632] Throttling request took 236.141757ms, request: DELETE:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces/e2e-tests-nslifetest-43-z734m
Mar  7 01:13:51.807: INFO: namespace : e2e-tests-nslifetest-27-44xhz api call to delete is complete 
I0307 01:13:51.854479   18842 request.go:632] Throttling request took 286.144295ms, request: DELETE:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces/e2e-tests-nslifetest-63-lbvsj
Mar  7 01:13:51.858: INFO: namespace : e2e-tests-nslifetest-43-z734m api call to delete is complete 
I0307 01:13:51.904516   18842 request.go:632] Throttling request took 336.183493ms, request: DELETE:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces/e2e-tests-nslifetest-28-mbqtz
Mar  7 01:13:51.907: INFO: namespace : e2e-tests-nslifetest-63-lbvsj api call to delete is complete 
I0307 01:13:51.954491   18842 request.go:632] Throttling request took 386.152424ms, request: DELETE:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces/e2e-tests-nslifetest-44-5pshp
Mar  7 01:13:51.957: INFO: namespace : e2e-tests-nslifetest-28-mbqtz api call to delete is complete 
I0307 01:13:52.004481   18842 request.go:632] Throttling request took 436.142491ms, request: DELETE:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces/e2e-tests-nslifetest-64-wqmp9
Mar  7 01:13:52.007: INFO: namespace : e2e-tests-nslifetest-44-5pshp api call to delete is complete 
I0307 01:13:52.054501   18842 request.go:632] Throttling request took 486.158139ms, request: DELETE:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces/e2e-tests-nslifetest-45-8f5nd
Mar  7 01:13:52.057: INFO: namespace : e2e-tests-nslifetest-64-wqmp9 api call to delete is complete 
I0307 01:13:52.104449   18842 request.go:632] Throttling request took 536.102656ms, request: DELETE:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces/e2e-tests-nslifetest-29-512xf
Mar  7 01:13:52.107: INFO: namespace : e2e-tests-nslifetest-45-8f5nd api call to delete is complete 
I0307 01:13:52.154460   18842 request.go:632] Throttling request took 586.094708ms, request: DELETE:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces/e2e-tests-nslifetest-65-9h205
Mar  7 01:13:52.157: INFO: namespace : e2e-tests-nslifetest-29-512xf api call to delete is complete 
I0307 01:13:52.204502   18842 request.go:632] Throttling request took 636.119788ms, request: DELETE:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces/e2e-tests-nslifetest-46-mwhr8
Mar  7 01:13:52.208: INFO: namespace : e2e-tests-nslifetest-65-9h205 api call to delete is complete 
I0307 01:13:52.254462   18842 request.go:632] Throttling request took 686.091823ms, request: DELETE:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces/e2e-tests-nslifetest-81-z5v3d
Mar  7 01:13:52.258: INFO: namespace : e2e-tests-nslifetest-46-mwhr8 api call to delete is complete 
I0307 01:13:52.304488   18842 request.go:632] Throttling request took 736.105175ms, request: DELETE:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces/e2e-tests-nslifetest-82-1dvrf
Mar  7 01:13:52.307: INFO: namespace : e2e-tests-nslifetest-81-z5v3d api call to delete is complete 
I0307 01:13:52.354470   18842 request.go:632] Throttling request took 786.086161ms, request: DELETE:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces/e2e-tests-nslifetest-90-99rmw
Mar  7 01:13:52.358: INFO: namespace : e2e-tests-nslifetest-82-1dvrf api call to delete is complete 
I0307 01:13:52.404464   18842 request.go:632] Throttling request took 836.075332ms, request: DELETE:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces/e2e-tests-nslifetest-66-fmw1z
Mar  7 01:13:52.407: INFO: namespace : e2e-tests-nslifetest-90-99rmw api call to delete is complete 
I0307 01:13:52.454475   18842 request.go:632] Throttling request took 886.077953ms, request: DELETE:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces/e2e-tests-nslifetest-91-szlb4
Mar  7 01:13:52.457: INFO: namespace : e2e-tests-nslifetest-66-fmw1z api call to delete is complete 
I0307 01:13:52.504462   18842 request.go:632] Throttling request took 936.056186ms, request: DELETE:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces/e2e-tests-nslifetest-83-v10k9
Mar  7 01:13:52.508: INFO: namespace : e2e-tests-nslifetest-91-szlb4 api call to delete is complete 
I0307 01:13:52.554488   18842 request.go:632] Throttling request took 986.078783ms, request: DELETE:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces/e2e-tests-nslifetest-92-xsfzv
Mar  7 01:13:52.557: INFO: namespace : e2e-tests-nslifetest-83-v10k9 api call to delete is complete 
I0307 01:13:52.604511   18842 request.go:632] Throttling request took 1.036091531s, request: DELETE:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces/e2e-tests-nslifetest-67-3wfc8
Mar  7 01:13:52.607: INFO: namespace : e2e-tests-nslifetest-92-xsfzv api call to delete is complete 
I0307 01:13:52.654473   18842 request.go:632] Throttling request took 1.086041805s, request: DELETE:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces/e2e-tests-nslifetest-84-05n27
Mar  7 01:13:52.657: INFO: namespace : e2e-tests-nslifetest-67-3wfc8 api call to delete is complete 
I0307 01:13:52.704495   18842 request.go:632] Throttling request took 1.136062857s, request: DELETE:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces/e2e-tests-nslifetest-93-f0lgs
Mar  7 01:13:52.707: INFO: namespace : e2e-tests-nslifetest-84-05n27 api call to delete is complete 
I0307 01:13:52.754504   18842 request.go:632] Throttling request took 1.186055195s, request: DELETE:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces/e2e-tests-nslifetest-68-khwqh
Mar  7 01:13:52.757: INFO: namespace : e2e-tests-nslifetest-93-f0lgs api call to delete is complete 
I0307 01:13:52.804472   18842 request.go:632] Throttling request took 1.236023219s, request: DELETE:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces/e2e-tests-nslifetest-85-t1vfx
Mar  7 01:13:52.807: INFO: namespace : e2e-tests-nslifetest-68-khwqh api call to delete is complete 
I0307 01:13:52.854463   18842 request.go:632] Throttling request took 1.286016239s, request: DELETE:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces/e2e-tests-nslifetest-69-rthqg
Mar  7 01:13:52.857: INFO: namespace : e2e-tests-nslifetest-85-t1vfx api call to delete is complete 
I0307 01:13:52.904502   18842 request.go:632] Throttling request took 1.336041295s, request: DELETE:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces/e2e-tests-nslifetest-94-4d9jk
Mar  7 01:13:52.908: INFO: namespace : e2e-tests-nslifetest-69-rthqg api call to delete is complete 
I0307 01:13:52.954458   18842 request.go:632] Throttling request took 1.386007564s, request: DELETE:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces/e2e-tests-nslifetest-86-l41x8
Mar  7 01:13:52.958: INFO: namespace : e2e-tests-nslifetest-94-4d9jk api call to delete is complete 
I0307 01:13:53.004489   18842 request.go:632] Throttling request took 1.436022537s, request: DELETE:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces/e2e-tests-nslifetest-7-j0k4b
Mar  7 01:13:53.007: INFO: namespace : e2e-tests-nslifetest-86-l41x8 api call to delete is complete 
I0307 01:13:53.054484   18842 request.go:632] Throttling request took 1.486017454s, request: DELETE:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces/e2e-tests-nslifetest-95-c4m35
Mar  7 01:13:53.057: INFO: namespace : e2e-tests-nslifetest-7-j0k4b api call to delete is complete 
I0307 01:13:53.104462   18842 request.go:632] Throttling request took 1.535996465s, request: DELETE:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces/e2e-tests-nslifetest-87-n0cw5
Mar  7 01:13:53.108: INFO: namespace : e2e-tests-nslifetest-95-c4m35 api call to delete is complete 
I0307 01:13:53.154495   18842 request.go:632] Throttling request took 1.586012679s, request: DELETE:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces/e2e-tests-nslifetest-96-q5026
Mar  7 01:13:53.162: INFO: namespace : e2e-tests-nslifetest-87-n0cw5 api call to delete is complete 
I0307 01:13:53.204495   18842 request.go:632] Throttling request took 1.636010641s, request: DELETE:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces/e2e-tests-nslifetest-70-rcths
Mar  7 01:13:53.208: INFO: namespace : e2e-tests-nslifetest-96-q5026 api call to delete is complete 
I0307 01:13:53.254458   18842 request.go:632] Throttling request took 1.685988521s, request: DELETE:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces/e2e-tests-nslifetest-88-svlqw
Mar  7 01:13:53.257: INFO: namespace : e2e-tests-nslifetest-70-rcths api call to delete is complete 
I0307 01:13:53.304455   18842 request.go:632] Throttling request took 1.735982714s, request: DELETE:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces/e2e-tests-nslifetest-97-kjbdk
Mar  7 01:13:53.307: INFO: namespace : e2e-tests-nslifetest-88-svlqw api call to delete is complete 
I0307 01:13:53.354447   18842 request.go:632] Throttling request took 1.78596741s, request: DELETE:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces/e2e-tests-nslifetest-71-flwlm
Mar  7 01:13:53.357: INFO: namespace : e2e-tests-nslifetest-97-kjbdk api call to delete is complete 
I0307 01:13:53.404471   18842 request.go:632] Throttling request took 1.835967465s, request: DELETE:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces/e2e-tests-nslifetest-89-jd47b
Mar  7 01:13:53.408: INFO: namespace : e2e-tests-nslifetest-71-flwlm api call to delete is complete 
I0307 01:13:53.454477   18842 request.go:632] Throttling request took 1.885984446s, request: DELETE:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces/e2e-tests-nslifetest-72-lrln6
Mar  7 01:13:53.457: INFO: namespace : e2e-tests-nslifetest-89-jd47b api call to delete is complete 
I0307 01:13:53.504521   18842 request.go:632] Throttling request took 1.93597475s, request: DELETE:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces/e2e-tests-nslifetest-73-f05t5
Mar  7 01:13:53.508: INFO: namespace : e2e-tests-nslifetest-72-lrln6 api call to delete is complete 
I0307 01:13:53.554452   18842 request.go:632] Throttling request took 1.985941085s, request: DELETE:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces/e2e-tests-nslifetest-9-h66kp
Mar  7 01:13:53.558: INFO: namespace : e2e-tests-nslifetest-73-f05t5 api call to delete is complete 
I0307 01:13:53.604488   18842 request.go:632] Throttling request took 2.035976114s, request: DELETE:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces/e2e-tests-nslifetest-98-hl4fx
Mar  7 01:13:53.608: INFO: namespace : e2e-tests-nslifetest-9-h66kp api call to delete is complete 
I0307 01:13:53.654519   18842 request.go:632] Throttling request took 2.085983491s, request: DELETE:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces/e2e-tests-nslifetest-74-z9zcx
Mar  7 01:13:53.657: INFO: namespace : e2e-tests-nslifetest-98-hl4fx api call to delete is complete 
I0307 01:13:53.704453   18842 request.go:632] Throttling request took 2.135921903s, request: DELETE:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces/e2e-tests-nslifetest-30-x4v19
Mar  7 01:13:53.706: INFO: namespace : e2e-tests-nslifetest-74-z9zcx api call to delete is complete 
I0307 01:13:53.754504   18842 request.go:632] Throttling request took 2.185967951s, request: DELETE:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces/e2e-tests-nslifetest-78-kjv5d
Mar  7 01:13:53.757: INFO: namespace : e2e-tests-nslifetest-30-x4v19 api call to delete is complete 
I0307 01:13:53.804533   18842 request.go:632] Throttling request took 2.235966742s, request: DELETE:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces/e2e-tests-nslifetest-79-6cq7n
Mar  7 01:13:53.806: INFO: namespace : e2e-tests-nslifetest-78-kjv5d api call to delete is complete 
I0307 01:13:53.854476   18842 request.go:632] Throttling request took 2.28592412s, request: DELETE:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces/e2e-tests-nslifetest-75-jlx6m
Mar  7 01:13:53.858: INFO: namespace : e2e-tests-nslifetest-79-6cq7n api call to delete is complete 
I0307 01:13:53.904512   18842 request.go:632] Throttling request took 2.335946517s, request: DELETE:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces/e2e-tests-nslifetest-8-kn81r
Mar  7 01:13:53.908: INFO: namespace : e2e-tests-nslifetest-75-jlx6m api call to delete is complete 
I0307 01:13:53.954458   18842 request.go:632] Throttling request took 2.385909478s, request: DELETE:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces/e2e-tests-nslifetest-80-bzhgr
Mar  7 01:13:53.958: INFO: namespace : e2e-tests-nslifetest-8-kn81r api call to delete is complete 
I0307 01:13:54.004509   18842 request.go:632] Throttling request took 2.435944053s, request: DELETE:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces/e2e-tests-nslifetest-76-w1tjf
Mar  7 01:13:54.007: INFO: namespace : e2e-tests-nslifetest-80-bzhgr api call to delete is complete 
I0307 01:13:54.054481   18842 request.go:632] Throttling request took 2.485911939s, request: DELETE:https://internal-api.primg148.origin-ci-int-gce.dev.rhcloud.com:8443/api/v1/namespaces/e2e-tests-nslifetest-77-dmphm
Mar  7 01:13:54.058: INFO: namespace : e2e-tests-nslifetest-76-w1tjf api call to delete is complete 
Mar  7 01:13:54.107: INFO: namespace : e2e-tests-nslifetest-77-dmphm api call to delete is complete 
STEP: Waiting for namespaces to vanish
Mar  7 01:13:56.263: INFO: Remaining namespaces : 100
Mar  7 01:13:58.266: INFO: Remaining namespaces : 96
Mar  7 01:14:00.263: INFO: Remaining namespaces : 86
Mar  7 01:14:02.215: INFO: Remaining namespaces : 80
Mar  7 01:14:04.218: INFO: Remaining namespaces : 72
Mar  7 01:14:06.217: INFO: Remaining namespaces : 62
Mar  7 01:14:08.213: INFO: Remaining namespaces : 54
Mar  7 01:14:10.212: INFO: Remaining namespaces : 46
Mar  7 01:14:12.254: INFO: Remaining namespaces : 38
Mar  7 01:14:14.211: INFO: Remaining namespaces : 30
Mar  7 01:14:16.212: INFO: Remaining namespaces : 22
Mar  7 01:14:18.159: INFO: Remaining namespaces : 14
[AfterEach] [k8s.io] Namespaces [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar  7 01:14:20.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-8hbvh" for this suite.
Mar  7 01:14:31.742: INFO: namespace: e2e-tests-namespaces-8hbvh, resource: bindings, ignored listing per whitelist
STEP: Destroying namespace "e2e-tests-nslifetest-23-rd1rx" for this suite.
Mar  7 01:14:32.405: INFO: Namespace e2e-tests-nslifetest-23-rd1rx was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-48-qbszs" for this suite.
Mar  7 01:14:32.456: INFO: Namespace e2e-tests-nslifetest-48-qbszs was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-99-zfc36" for this suite.
Mar  7 01:14:32.506: INFO: Namespace e2e-tests-nslifetest-99-zfc36 was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-24-4rr4s" for this suite.
Mar  7 01:14:32.556: INFO: Namespace e2e-tests-nslifetest-24-4rr4s was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-49-7hlgf" for this suite.
Mar  7 01:14:32.607: INFO: Namespace e2e-tests-nslifetest-49-7hlgf was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-75-jlx6m" for this suite.
Mar  7 01:14:32.657: INFO: Namespace e2e-tests-nslifetest-75-jlx6m was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-74-z9zcx" for this suite.
Mar  7 01:14:32.708: INFO: Namespace e2e-tests-nslifetest-74-z9zcx was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-25-k45dz" for this suite.
Mar  7 01:14:32.758: INFO: Namespace e2e-tests-nslifetest-25-k45dz was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-0-cwjlc" for this suite.
Mar  7 01:14:32.808: INFO: Namespace e2e-tests-nslifetest-0-cwjlc was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-50-m8rmh" for this suite.
Mar  7 01:14:32.859: INFO: Namespace e2e-tests-nslifetest-50-m8rmh was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-9-h66kp" for this suite.
Mar  7 01:14:32.908: INFO: Namespace e2e-tests-nslifetest-9-h66kp was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-26-dzs79" for this suite.
Mar  7 01:14:32.958: INFO: Namespace e2e-tests-nslifetest-26-dzs79 was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-76-w1tjf" for this suite.
Mar  7 01:14:33.008: INFO: Namespace e2e-tests-nslifetest-76-w1tjf was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-52-0b4sj" for this suite.
Mar  7 01:14:33.058: INFO: Namespace e2e-tests-nslifetest-52-0b4sj was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-51-512h8" for this suite.
Mar  7 01:14:33.109: INFO: Namespace e2e-tests-nslifetest-51-512h8 was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-77-dmphm" for this suite.
Mar  7 01:14:33.159: INFO: Namespace e2e-tests-nslifetest-77-dmphm was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-1-21svw" for this suite.
Mar  7 01:14:33.209: INFO: Namespace e2e-tests-nslifetest-1-21svw was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-27-44xhz" for this suite.
Mar  7 01:14:33.259: INFO: Namespace e2e-tests-nslifetest-27-44xhz was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-78-kjv5d" for this suite.
Mar  7 01:14:33.309: INFO: Namespace e2e-tests-nslifetest-78-kjv5d was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-28-mbqtz" for this suite.
Mar  7 01:14:33.359: INFO: Namespace e2e-tests-nslifetest-28-mbqtz was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-29-512xf" for this suite.
Mar  7 01:14:33.409: INFO: Namespace e2e-tests-nslifetest-29-512xf was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-53-815nv" for this suite.
Mar  7 01:14:33.459: INFO: Namespace e2e-tests-nslifetest-53-815nv was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-79-6cq7n" for this suite.
Mar  7 01:14:33.510: INFO: Namespace e2e-tests-nslifetest-79-6cq7n was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-2-0s3cb" for this suite.
Mar  7 01:14:33.560: INFO: Namespace e2e-tests-nslifetest-2-0s3cb was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-30-x4v19" for this suite.
Mar  7 01:14:33.610: INFO: Namespace e2e-tests-nslifetest-30-x4v19 was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-81-z5v3d" for this suite.
Mar  7 01:14:33.660: INFO: Namespace e2e-tests-nslifetest-81-z5v3d was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-54-bjkr4" for this suite.
Mar  7 01:14:33.710: INFO: Namespace e2e-tests-nslifetest-54-bjkr4 was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-80-bzhgr" for this suite.
Mar  7 01:14:33.760: INFO: Namespace e2e-tests-nslifetest-80-bzhgr was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-3-77f8j" for this suite.
Mar  7 01:14:33.810: INFO: Namespace e2e-tests-nslifetest-3-77f8j was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-31-60h6n" for this suite.
Mar  7 01:14:33.860: INFO: Namespace e2e-tests-nslifetest-31-60h6n was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-55-vg0mh" for this suite.
Mar  7 01:14:33.910: INFO: Namespace e2e-tests-nslifetest-55-vg0mh was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-4-7jgf9" for this suite.
Mar  7 01:14:33.960: INFO: Namespace e2e-tests-nslifetest-4-7jgf9 was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-32-jw856" for this suite.
Mar  7 01:14:34.011: INFO: Namespace e2e-tests-nslifetest-32-jw856 was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-82-1dvrf" for this suite.
Mar  7 01:14:34.061: INFO: Namespace e2e-tests-nslifetest-82-1dvrf was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-56-m65hs" for this suite.
Mar  7 01:14:34.110: INFO: Namespace e2e-tests-nslifetest-56-m65hs was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-7-j0k4b" for this suite.
Mar  7 01:14:34.161: INFO: Namespace e2e-tests-nslifetest-7-j0k4b was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-5-g8stl" for this suite.
Mar  7 01:14:34.211: INFO: Namespace e2e-tests-nslifetest-5-g8stl was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-33-3m5tp" for this suite.
Mar  7 01:14:34.261: INFO: Namespace e2e-tests-nslifetest-33-3m5tp was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-83-v10k9" for this suite.
Mar  7 01:14:34.311: INFO: Namespace e2e-tests-nslifetest-83-v10k9 was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-57-43653" for this suite.
Mar  7 01:14:34.361: INFO: Namespace e2e-tests-nslifetest-57-43653 was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-6-wf186" for this suite.
Mar  7 01:14:34.411: INFO: Namespace e2e-tests-nslifetest-6-wf186 was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-34-sgzcr" for this suite.
Mar  7 01:14:34.461: INFO: Namespace e2e-tests-nslifetest-34-sgzcr was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-84-05n27" for this suite.
Mar  7 01:14:34.511: INFO: Namespace e2e-tests-nslifetest-84-05n27 was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-58-rzwrg" for this suite.
Mar  7 01:14:34.561: INFO: Namespace e2e-tests-nslifetest-58-rzwrg was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-35-l52cs" for this suite.
Mar  7 01:14:34.611: INFO: Namespace e2e-tests-nslifetest-35-l52cs was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-59-88n5d" for this suite.
Mar  7 01:14:34.662: INFO: Namespace e2e-tests-nslifetest-59-88n5d was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-36-mbs2d" for this suite.
Mar  7 01:14:34.712: INFO: Namespace e2e-tests-nslifetest-36-mbs2d was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-85-t1vfx" for this suite.
Mar  7 01:14:34.762: INFO: Namespace e2e-tests-nslifetest-85-t1vfx was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-8-kn81r" for this suite.
Mar  7 01:14:34.812: INFO: Namespace e2e-tests-nslifetest-8-kn81r was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-86-l41x8" for this suite.
Mar  7 01:14:34.863: INFO: Namespace e2e-tests-nslifetest-86-l41x8 was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-60-t6gt8" for this suite.
Mar  7 01:14:34.913: INFO: Namespace e2e-tests-nslifetest-60-t6gt8 was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-37-8gd4l" for this suite.
Mar  7 01:14:34.964: INFO: Namespace e2e-tests-nslifetest-37-8gd4l was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-87-n0cw5" for this suite.
Mar  7 01:14:35.014: INFO: Namespace e2e-tests-nslifetest-87-n0cw5 was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-61-z6mr6" for this suite.
Mar  7 01:14:35.064: INFO: Namespace e2e-tests-nslifetest-61-z6mr6 was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-10-r47fd" for this suite.
Mar  7 01:14:35.115: INFO: Namespace e2e-tests-nslifetest-10-r47fd was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-38-c6xq1" for this suite.
Mar  7 01:14:35.165: INFO: Namespace e2e-tests-nslifetest-38-c6xq1 was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-88-svlqw" for this suite.
Mar  7 01:14:35.215: INFO: Namespace e2e-tests-nslifetest-88-svlqw was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-62-lmjss" for this suite.
Mar  7 01:14:35.265: INFO: Namespace e2e-tests-nslifetest-62-lmjss was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-11-b173m" for this suite.
Mar  7 01:14:35.316: INFO: Namespace e2e-tests-nslifetest-11-b173m was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-39-k0zm1" for this suite.
Mar  7 01:14:35.366: INFO: Namespace e2e-tests-nslifetest-39-k0zm1 was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-63-lbvsj" for this suite.
Mar  7 01:14:35.417: INFO: Namespace e2e-tests-nslifetest-63-lbvsj was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-89-jd47b" for this suite.
Mar  7 01:14:35.467: INFO: Namespace e2e-tests-nslifetest-89-jd47b was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-12-cbmlb" for this suite.
Mar  7 01:14:35.519: INFO: Namespace e2e-tests-nslifetest-12-cbmlb was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-40-x8hkb" for this suite.
Mar  7 01:14:35.570: INFO: Namespace e2e-tests-nslifetest-40-x8hkb was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-90-99rmw" for this suite.
Mar  7 01:14:35.620: INFO: Namespace e2e-tests-nslifetest-90-99rmw was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-64-wqmp9" for this suite.
Mar  7 01:14:35.670: INFO: Namespace e2e-tests-nslifetest-64-wqmp9 was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-13-z0hwj" for this suite.
Mar  7 01:14:35.721: INFO: Namespace e2e-tests-nslifetest-13-z0hwj was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-41-dfcjc" for this suite.
Mar  7 01:14:35.771: INFO: Namespace e2e-tests-nslifetest-41-dfcjc was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-91-szlb4" for this suite.
Mar  7 01:14:35.821: INFO: Namespace e2e-tests-nslifetest-91-szlb4 was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-65-9h205" for this suite.
Mar  7 01:14:35.872: INFO: Namespace e2e-tests-nslifetest-65-9h205 was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-14-tmdzq" for this suite.
Mar  7 01:14:35.922: INFO: Namespace e2e-tests-nslifetest-14-tmdzq was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-42-12jmr" for this suite.
Mar  7 01:14:35.973: INFO: Namespace e2e-tests-nslifetest-42-12jmr was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-92-xsfzv" for this suite.
Mar  7 01:14:36.023: INFO: Namespace e2e-tests-nslifetest-92-xsfzv was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-66-fmw1z" for this suite.
Mar  7 01:14:36.073: INFO: Namespace e2e-tests-nslifetest-66-fmw1z was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-43-z734m" for this suite.
Mar  7 01:14:36.123: INFO: Namespace e2e-tests-nslifetest-43-z734m was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-15-2mnf9" for this suite.
Mar  7 01:14:36.173: INFO: Namespace e2e-tests-nslifetest-15-2mnf9 was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-93-f0lgs" for this suite.
Mar  7 01:14:36.224: INFO: Namespace e2e-tests-nslifetest-93-f0lgs was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-16-5c6qk" for this suite.
Mar  7 01:14:36.274: INFO: Namespace e2e-tests-nslifetest-16-5c6qk was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-44-5pshp" for this suite.
Mar  7 01:14:36.324: INFO: Namespace e2e-tests-nslifetest-44-5pshp was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-67-3wfc8" for this suite.
Mar  7 01:14:36.374: INFO: Namespace e2e-tests-nslifetest-67-3wfc8 was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-17-p5pf9" for this suite.
Mar  7 01:14:36.424: INFO: Namespace e2e-tests-nslifetest-17-p5pf9 was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-45-8f5nd" for this suite.
Mar  7 01:14:36.474: INFO: Namespace e2e-tests-nslifetest-45-8f5nd was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-94-4d9jk" for this suite.
Mar  7 01:14:36.524: INFO: Namespace e2e-tests-nslifetest-94-4d9jk was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-68-khwqh" for this suite.
Mar  7 01:14:36.575: INFO: Namespace e2e-tests-nslifetest-68-khwqh was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-69-rthqg" for this suite.
Mar  7 01:14:36.625: INFO: Namespace e2e-tests-nslifetest-69-rthqg was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-95-c4m35" for this suite.
Mar  7 01:14:36.675: INFO: Namespace e2e-tests-nslifetest-95-c4m35 was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-18-n51fq" for this suite.
Mar  7 01:14:36.725: INFO: Namespace e2e-tests-nslifetest-18-n51fq was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-46-mwhr8" for this suite.
Mar  7 01:14:36.775: INFO: Namespace e2e-tests-nslifetest-46-mwhr8 was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-96-q5026" for this suite.
Mar  7 01:14:36.825: INFO: Namespace e2e-tests-nslifetest-96-q5026 was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-70-rcths" for this suite.
Mar  7 01:14:36.875: INFO: Namespace e2e-tests-nslifetest-70-rcths was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-47-99gc7" for this suite.
Mar  7 01:14:36.925: INFO: Namespace e2e-tests-nslifetest-47-99gc7 was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-19-16bzg" for this suite.
Mar  7 01:14:36.975: INFO: Namespace e2e-tests-nslifetest-19-16bzg was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-97-kjbdk" for this suite.
Mar  7 01:14:37.025: INFO: Namespace e2e-tests-nslifetest-97-kjbdk was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-71-flwlm" for this suite.
Mar  7 01:14:37.075: INFO: Namespace e2e-tests-nslifetest-71-flwlm was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-20-lz0xx" for this suite.
Mar  7 01:14:37.125: INFO: Namespace e2e-tests-nslifetest-20-lz0xx was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-98-hl4fx" for this suite.
Mar  7 01:14:37.175: INFO: Namespace e2e-tests-nslifetest-98-hl4fx was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-72-lrln6" for this suite.
Mar  7 01:14:37.226: INFO: Namespace e2e-tests-nslifetest-72-lrln6 was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-21-cftjz" for this suite.
Mar  7 01:14:37.276: INFO: Namespace e2e-tests-nslifetest-21-cftjz was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-22-gz9jt" for this suite.
Mar  7 01:14:37.326: INFO: Namespace e2e-tests-nslifetest-22-gz9jt was already deleted
STEP: Destroying namespace "e2e-tests-nslifetest-73-f05t5" for this suite.
Mar  7 01:14:37.376: INFO: Namespace e2e-tests-nslifetest-73-f05t5 was already deleted

• [SLOW TEST:66.723 seconds]
[k8s.io] Namespaces [Serial]
/data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:826
  should delete fast enough (90 percent of 100 namespaces in 150 seconds)
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/namespace.go:222
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] SchedulerPredicates [Serial] 
  validates that NodeAffinity is respected if not matching
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:327
[BeforeEach] [Top Level]
  /data/src/github.com/openshift/origin/test/extended/util/test.go:47
[BeforeEach] [k8s.io] SchedulerPredicates [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Mar  7 01:14:37.376: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig

STEP: Building a namespace api object
Mar  7 01:14:37.541: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] SchedulerPredicates [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Mar  7 01:14:38.181: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Mar  7 01:14:38.284: INFO: Waiting for terminating namespaces to be deleted...
Mar  7 01:14:38.435: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar  7 01:14:38.485: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Mar  7 01:14:38.587: INFO: 0 / 0 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar  7 01:14:38.587: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Mar  7 01:14:38.587: INFO: 
Logging pods the kubelet thinks is on node ci-primg148-ig-m-nrvp before test
Mar  7 01:14:38.689: INFO: router-1-p9p1l from default started at 2017-03-07 00:37:56 -0500 EST (1 container statuses recorded)
Mar  7 01:14:38.689: INFO: 	Container router ready: true, restart count 0
Mar  7 01:14:38.689: INFO: registry-console-1-4qzm5 from default started at 2017-03-07 00:39:04 -0500 EST (1 container statuses recorded)
Mar  7 01:14:38.689: INFO: 	Container registry-console ready: true, restart count 0
Mar  7 01:14:38.689: INFO: docker-registry-2-ldrtp from default started at 2017-03-07 00:38:56 -0500 EST (1 container statuses recorded)
Mar  7 01:14:38.689: INFO: 	Container registry ready: true, restart count 0
Mar  7 01:14:38.689: INFO: 
Logging pods the kubelet thinks is on node ci-primg148-ig-n-qhms before test
Mar  7 01:14:38.791: INFO: 
Logging pods the kubelet thinks is on node ci-primg148-ig-n-w5x2 before test
Mar  7 01:14:38.894: INFO: 
Logging pods the kubelet thinks is on node ci-primg148-ig-n-wvkh before test
[It] validates that NodeAffinity is respected if not matching
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:327
STEP: Trying to schedule Pod with nonempty NodeSelector.
Mar  7 01:14:39.152: INFO: Sleeping 10 seconds and crossing our fingers that scheduler will run in that time.
[AfterEach] [k8s.io] SchedulerPredicates [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar  7 01:14:49.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-93l7z" for this suite.
Mar  7 01:15:15.754: INFO: namespace: e2e-tests-sched-pred-93l7z, resource: bindings, ignored listing per whitelist
[AfterEach] [k8s.io] SchedulerPredicates [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:67
I0307 01:15:16.359985   18842 request.go:769] Error in request: resource name may not be empty

• [SLOW TEST:38.983 seconds]
[k8s.io] SchedulerPredicates [Serial]
/data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:826
  validates that NodeAffinity is respected if not matching
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:327
------------------------------
SSSSSSS
------------------------------
[k8s.io] SchedulerPredicates [Serial] 
  validates that taints-tolerations is respected if not matching
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:757
[BeforeEach] [Top Level]
  /data/src/github.com/openshift/origin/test/extended/util/test.go:47
[BeforeEach] [k8s.io] SchedulerPredicates [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Mar  7 01:15:16.360: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig

STEP: Building a namespace api object
Mar  7 01:15:16.534: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] SchedulerPredicates [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Mar  7 01:15:17.244: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Mar  7 01:15:17.348: INFO: Waiting for terminating namespaces to be deleted...
Mar  7 01:15:17.499: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar  7 01:15:17.550: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Mar  7 01:15:17.651: INFO: 0 / 0 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar  7 01:15:17.651: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Mar  7 01:15:17.651: INFO: 
Logging pods the kubelet thinks is on node ci-primg148-ig-m-nrvp before test
Mar  7 01:15:17.754: INFO: router-1-p9p1l from default started at 2017-03-07 00:37:56 -0500 EST (1 container statuses recorded)
Mar  7 01:15:17.754: INFO: 	Container router ready: true, restart count 0
Mar  7 01:15:17.754: INFO: registry-console-1-4qzm5 from default started at 2017-03-07 00:39:04 -0500 EST (1 container statuses recorded)
Mar  7 01:15:17.754: INFO: 	Container registry-console ready: true, restart count 0
Mar  7 01:15:17.754: INFO: docker-registry-2-ldrtp from default started at 2017-03-07 00:38:56 -0500 EST (1 container statuses recorded)
Mar  7 01:15:17.754: INFO: 	Container registry ready: true, restart count 0
Mar  7 01:15:17.754: INFO: 
Logging pods the kubelet thinks is on node ci-primg148-ig-n-qhms before test
Mar  7 01:15:17.857: INFO: 
Logging pods the kubelet thinks is on node ci-primg148-ig-n-w5x2 before test
Mar  7 01:15:17.959: INFO: 
Logging pods the kubelet thinks is on node ci-primg148-ig-n-wvkh before test
[It] validates that taints-tolerations is respected if not matching
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:757
STEP: Trying to launch a pod without a toleration to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random taint on the found node.
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-711366c4-02fd-11e7-b06e-0ea57314f988=testing-taint-value:NoSchedule
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-label-key-712b9bc7-02fd-11e7-b06e-0ea57314f988 testing-label-value
STEP: Trying to relaunch the pod, still no tolerations.
Mar  7 01:15:20.887: INFO: Sleeping 10 seconds and crossing our fingers that scheduler will run in that time.
STEP: Removing taint off the node
STEP: removing the taint kubernetes.io/e2e-taint-key-711366c4-02fd-11e7-b06e-0ea57314f988=testing-taint-value:NoSchedule off the node ci-primg148-ig-n-w5x2
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-711366c4-02fd-11e7-b06e-0ea57314f988=testing-taint-value:NoSchedule
Mar  7 01:15:31.095: INFO: Sleeping 10 seconds and crossing our fingers that scheduler will run in that time.
STEP: removing the label kubernetes.io/e2e-label-key-712b9bc7-02fd-11e7-b06e-0ea57314f988 off the node ci-primg148-ig-n-w5x2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-712b9bc7-02fd-11e7-b06e-0ea57314f988
STEP: removing the taint kubernetes.io/e2e-taint-key-711366c4-02fd-11e7-b06e-0ea57314f988=testing-taint-value:NoSchedule off the node ci-primg148-ig-n-w5x2
[AfterEach] [k8s.io] SchedulerPredicates [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar  7 01:15:41.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-qnlvf" for this suite.
Mar  7 01:16:07.898: INFO: namespace: e2e-tests-sched-pred-qnlvf, resource: bindings, ignored listing per whitelist
[AfterEach] [k8s.io] SchedulerPredicates [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:67
I0307 01:16:08.500013   18842 request.go:769] Error in request: resource name may not be empty

• [SLOW TEST:52.140 seconds]
[k8s.io] SchedulerPredicates [Serial]
/data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:826
  validates that taints-tolerations is respected if not matching
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:757
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] SchedulerPredicates [Serial] 
  validates that embedding the JSON NodeAffinity setting as a string in the annotation value work
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:398
[BeforeEach] [Top Level]
  /data/src/github.com/openshift/origin/test/extended/util/test.go:47
[BeforeEach] [k8s.io] SchedulerPredicates [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Mar  7 01:16:08.500: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig

STEP: Building a namespace api object
Mar  7 01:16:08.660: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] SchedulerPredicates [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Mar  7 01:16:09.293: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Mar  7 01:16:09.395: INFO: Waiting for terminating namespaces to be deleted...
Mar  7 01:16:09.547: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar  7 01:16:09.597: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Mar  7 01:16:09.697: INFO: 0 / 0 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar  7 01:16:09.697: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Mar  7 01:16:09.697: INFO: 
Logging pods the kubelet thinks is on node ci-primg148-ig-m-nrvp before test
Mar  7 01:16:09.799: INFO: docker-registry-2-ldrtp from default started at 2017-03-07 00:38:56 -0500 EST (1 container statuses recorded)
Mar  7 01:16:09.799: INFO: 	Container registry ready: true, restart count 0
Mar  7 01:16:09.799: INFO: router-1-p9p1l from default started at 2017-03-07 00:37:56 -0500 EST (1 container statuses recorded)
Mar  7 01:16:09.799: INFO: 	Container router ready: true, restart count 0
Mar  7 01:16:09.799: INFO: registry-console-1-4qzm5 from default started at 2017-03-07 00:39:04 -0500 EST (1 container statuses recorded)
Mar  7 01:16:09.799: INFO: 	Container registry-console ready: true, restart count 0
Mar  7 01:16:09.799: INFO: 
Logging pods the kubelet thinks is on node ci-primg148-ig-n-qhms before test
Mar  7 01:16:09.901: INFO: 
Logging pods the kubelet thinks is on node ci-primg148-ig-n-w5x2 before test
Mar  7 01:16:10.003: INFO: 
Logging pods the kubelet thinks is on node ci-primg148-ig-n-wvkh before test
[It] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:398
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a label with fake az info on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-az-name e2e-az1
STEP: Trying to launch a pod that with NodeAffinity setting as embedded JSON string in the annotation value.
STEP: removing the label kubernetes.io/e2e-az-name off the node ci-primg148-ig-n-wvkh
STEP: verifying the node doesn't have the label kubernetes.io/e2e-az-name
[AfterEach] [k8s.io] SchedulerPredicates [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar  7 01:16:14.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-2w9h3" for this suite.
Mar  7 01:16:36.344: INFO: namespace: e2e-tests-sched-pred-2w9h3, resource: bindings, ignored listing per whitelist
[AfterEach] [k8s.io] SchedulerPredicates [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:67
I0307 01:16:36.945929   18842 request.go:769] Error in request: resource name may not be empty

• [SLOW TEST:28.446 seconds]
[k8s.io] SchedulerPredicates [Serial]
/data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:826
  validates that embedding the JSON NodeAffinity setting as a string in the annotation value work
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:398
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] SchedulerPredicates [Serial] 
  validates that Inter-pod-Affinity is respected if not matching
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:461
[BeforeEach] [Top Level]
  /data/src/github.com/openshift/origin/test/extended/util/test.go:47
[BeforeEach] [k8s.io] SchedulerPredicates [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Mar  7 01:16:36.946: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig

STEP: Building a namespace api object
Mar  7 01:16:37.097: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] SchedulerPredicates [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Mar  7 01:16:37.728: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Mar  7 01:16:37.831: INFO: Waiting for terminating namespaces to be deleted...
Mar  7 01:16:37.981: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar  7 01:16:38.031: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Mar  7 01:16:38.131: INFO: 0 / 0 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar  7 01:16:38.131: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Mar  7 01:16:38.131: INFO: 
Logging pods the kubelet thinks is on node ci-primg148-ig-m-nrvp before test
Mar  7 01:16:38.242: INFO: docker-registry-2-ldrtp from default started at 2017-03-07 00:38:56 -0500 EST (1 container statuses recorded)
Mar  7 01:16:38.242: INFO: 	Container registry ready: true, restart count 0
Mar  7 01:16:38.242: INFO: router-1-p9p1l from default started at 2017-03-07 00:37:56 -0500 EST (1 container statuses recorded)
Mar  7 01:16:38.242: INFO: 	Container router ready: true, restart count 0
Mar  7 01:16:38.242: INFO: registry-console-1-4qzm5 from default started at 2017-03-07 00:39:04 -0500 EST (1 container statuses recorded)
Mar  7 01:16:38.242: INFO: 	Container registry-console ready: true, restart count 0
Mar  7 01:16:38.242: INFO: 
Logging pods the kubelet thinks is on node ci-primg148-ig-n-qhms before test
Mar  7 01:16:38.347: INFO: 
Logging pods the kubelet thinks is on node ci-primg148-ig-n-w5x2 before test
Mar  7 01:16:38.452: INFO: 
Logging pods the kubelet thinks is on node ci-primg148-ig-n-wvkh before test
[It] validates that Inter-pod-Affinity is respected if not matching
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:461
STEP: Trying to schedule Pod with nonempty Pod Affinity.
Mar  7 01:16:38.771: INFO: Sleeping 10 seconds and crossing our fingers that scheduler will run in that time.
[AfterEach] [k8s.io] SchedulerPredicates [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar  7 01:16:48.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-lfcnr" for this suite.
Mar  7 01:17:15.369: INFO: namespace: e2e-tests-sched-pred-lfcnr, resource: bindings, ignored listing per whitelist
[AfterEach] [k8s.io] SchedulerPredicates [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:67
I0307 01:17:15.976743   18842 request.go:769] Error in request: resource name may not be empty

• [SLOW TEST:39.030 seconds]
[k8s.io] SchedulerPredicates [Serial]
/data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:826
  validates that Inter-pod-Affinity is respected if not matching
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:461
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] SchedulerPredicates [Serial] 
  validates that a pod with an invalid NodeAffinity is rejected
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:258
[BeforeEach] [Top Level]
  /data/src/github.com/openshift/origin/test/extended/util/test.go:47
[BeforeEach] [k8s.io] SchedulerPredicates [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Mar  7 01:17:15.977: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig

STEP: Building a namespace api object
Mar  7 01:17:16.139: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] SchedulerPredicates [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Mar  7 01:17:16.781: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Mar  7 01:17:16.884: INFO: Waiting for terminating namespaces to be deleted...
Mar  7 01:17:17.033: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar  7 01:17:17.084: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Mar  7 01:17:17.184: INFO: 0 / 0 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar  7 01:17:17.184: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Mar  7 01:17:17.184: INFO: 
Logging pods the kubelet thinks are on node ci-primg148-ig-m-nrvp before test
Mar  7 01:17:17.287: INFO: router-1-p9p1l from default started at 2017-03-07 00:37:56 -0500 EST (1 container statuses recorded)
Mar  7 01:17:17.287: INFO: 	Container router ready: true, restart count 0
Mar  7 01:17:17.287: INFO: registry-console-1-4qzm5 from default started at 2017-03-07 00:39:04 -0500 EST (1 container statuses recorded)
Mar  7 01:17:17.287: INFO: 	Container registry-console ready: true, restart count 0
Mar  7 01:17:17.287: INFO: docker-registry-2-ldrtp from default started at 2017-03-07 00:38:56 -0500 EST (1 container statuses recorded)
Mar  7 01:17:17.287: INFO: 	Container registry ready: true, restart count 0
Mar  7 01:17:17.287: INFO: 
Logging pods the kubelet thinks are on node ci-primg148-ig-n-qhms before test
Mar  7 01:17:17.389: INFO: 
Logging pods the kubelet thinks are on node ci-primg148-ig-n-w5x2 before test
Mar  7 01:17:17.491: INFO: 
Logging pods the kubelet thinks are on node ci-primg148-ig-n-wvkh before test
[It] validates that a pod with an invalid NodeAffinity is rejected
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:258
STEP: Trying to launch a pod with invalid Affinity data.
Mar  7 01:17:17.698: INFO: Sleeping 10 seconds and crossing our fingers that scheduler will run in that time.
[AfterEach] [k8s.io] SchedulerPredicates [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar  7 01:17:27.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-j7dgh" for this suite.
Mar  7 01:17:39.260: INFO: namespace: e2e-tests-sched-pred-j7dgh, resource: bindings, ignored listing per whitelist
[AfterEach] [k8s.io] SchedulerPredicates [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:67
I0307 01:17:39.867743   18842 request.go:769] Error in request: resource name may not be empty

• [SLOW TEST:23.890 seconds]
[k8s.io] SchedulerPredicates [Serial]
/data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:826
  validates that a pod with an invalid NodeAffinity is rejected
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:258
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] kubelet [k8s.io] Clean up pods on node 
  kubelet should be able to delete 10 pods per node in 1m0s.
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubelet.go:226
[BeforeEach] [Top Level]
  /data/src/github.com/openshift/origin/test/extended/util/test.go:47
[BeforeEach] [k8s.io] kubelet
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Mar  7 01:17:39.868: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig

STEP: Building a namespace api object
Mar  7 01:17:40.158: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] kubelet
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubelet.go:165
[It] kubelet should be able to delete 10 pods per node in 1m0s.
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubelet.go:226
STEP: Creating a RC of 40 pods and wait until all pods of this RC are running
STEP: creating replication controller cleanup40-c50390f0-02fd-11e7-b06e-0ea57314f988 in namespace e2e-tests-kubelet-k1mft
I0307 01:17:41.446809   18842 runners.go:103] Created replication controller with name: cleanup40-c50390f0-02fd-11e7-b06e-0ea57314f988, namespace: e2e-tests-kubelet-k1mft, replica count: 40
I0307 01:17:41.446898   18842 reflector.go:196] Starting reflector *api.Pod (0) from github.com/openshift/origin/vendor/k8s.io/kubernetes/test/utils/pod_store.go:52
I0307 01:17:41.446937   18842 reflector.go:234] Listing and watching *api.Pod from github.com/openshift/origin/vendor/k8s.io/kubernetes/test/utils/pod_store.go:52
I0307 01:17:51.447224   18842 runners.go:103] cleanup40-c50390f0-02fd-11e7-b06e-0ea57314f988 Pods: 40 out of 40 created, 0 running, 40 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0307 01:18:01.447538   18842 runners.go:103] cleanup40-c50390f0-02fd-11e7-b06e-0ea57314f988 Pods: 40 out of 40 created, 40 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Mar  7 01:18:02.447: INFO: Checking pods on node ci-primg148-ig-n-wvkh via /runningpods endpoint
Mar  7 01:18:02.447: INFO: Checking pods on node ci-primg148-ig-n-qhms via /runningpods endpoint
Mar  7 01:18:02.448: INFO: Checking pods on node ci-primg148-ig-n-w5x2 via /runningpods endpoint
Mar  7 01:18:02.448: INFO: Checking pods on node ci-primg148-ig-m-nrvp via /runningpods endpoint
Mar  7 01:18:02.702: INFO: Resource usage on node "ci-primg148-ig-m-nrvp":
container cpu(cores) memory_working_set(MB) memory_rss(MB)
"/"       0.991      2062.98                240.08
"runtime" 0.444      111.20                 102.25
"kubelet" 0.132      59.50                  59.19

Resource usage on node "ci-primg148-ig-n-qhms":
container cpu(cores) memory_working_set(MB) memory_rss(MB)
"/"       0.485      1092.02                120.12
"runtime" 0.545      136.18                 124.70
"kubelet" 0.157      62.35                  61.68

Resource usage on node "ci-primg148-ig-n-w5x2":
container cpu(cores) memory_working_set(MB) memory_rss(MB)
"/"       0.931      1125.71                121.53
"runtime" 0.442      135.12                 123.45
"kubelet" 0.192      62.30                  61.64

Resource usage on node "ci-primg148-ig-n-wvkh":
container cpu(cores) memory_working_set(MB) memory_rss(MB)
"runtime" 0.287      127.06                 113.76
"kubelet" 0.171      65.75                  65.17
"/"       0.347      1082.75                118.50

STEP: Deleting the RC
STEP: deleting replication controller cleanup40-c50390f0-02fd-11e7-b06e-0ea57314f988 in namespace e2e-tests-kubelet-k1mft
I0307 01:18:02.753574   18842 reflector.go:196] Starting reflector *api.Pod (0) from github.com/openshift/origin/vendor/k8s.io/kubernetes/test/utils/pod_store.go:52
I0307 01:18:02.753633   18842 reflector.go:234] Listing and watching *api.Pod from github.com/openshift/origin/vendor/k8s.io/kubernetes/test/utils/pod_store.go:52
Mar  7 01:18:04.751: INFO: Deleting RC cleanup40-c50390f0-02fd-11e7-b06e-0ea57314f988 took: 998.018754ms
Mar  7 01:18:04.751: INFO: Terminating RC cleanup40-c50390f0-02fd-11e7-b06e-0ea57314f988 pods took: 50.39µs
Mar  7 01:18:15.752: INFO: Checking pods on node ci-primg148-ig-n-wvkh via /runningpods endpoint
Mar  7 01:18:15.752: INFO: Checking pods on node ci-primg148-ig-m-nrvp via /runningpods endpoint
Mar  7 01:18:15.752: INFO: Checking pods on node ci-primg148-ig-n-qhms via /runningpods endpoint
Mar  7 01:18:15.752: INFO: Checking pods on node ci-primg148-ig-n-w5x2 via /runningpods endpoint
Mar  7 01:18:15.872: INFO: Deleting 40 pods on 4 nodes completed in 1.120139895s after the RC was deleted
Mar  7 01:18:15.872: INFO: CPU usage of containers on node "ci-primg148-ig-n-qhms":
container 5th%  20th% 50th% 70th% 90th% 95th% 99th%
"/"       0.000 0.000 0.485 0.485 0.485 0.485 0.485
"runtime" 0.000 0.000 0.119 0.125 0.125 0.125 0.125
"kubelet" 0.000 0.000 0.157 0.157 0.157 0.157 0.157

CPU usage of containers on node "ci-primg148-ig-n-w5x2":
container 5th%  20th% 50th% 70th% 90th% 95th% 99th%
"/"       0.000 0.000 0.000 0.000 0.000 0.000 0.000
"runtime" 0.000 0.000 0.210 0.210 0.210 0.210 0.210
"kubelet" 0.000 0.000 0.181 0.181 0.181 0.181 0.181

CPU usage of containers on node "ci-primg148-ig-n-wvkh":
container 5th%  20th% 50th% 70th% 90th% 95th% 99th%
"/"       0.000 0.000 0.347 0.347 0.347 0.347 0.347
"runtime" 0.000 0.000 0.287 0.287 0.287 0.287 0.287
"kubelet" 0.000 0.000 0.076 0.076 0.076 0.076 0.076

CPU usage of containers on node "ci-primg148-ig-m-nrvp":
container 5th%  20th% 50th% 70th% 90th% 95th% 99th%
"/"       0.000 0.000 0.620 0.620 0.620 0.620 0.620
"runtime" 0.000 0.000 0.092 0.092 0.092 0.092 0.092
"kubelet" 0.000 0.000 0.101 0.101 0.101 0.101 0.101

[AfterEach] [k8s.io] kubelet
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar  7 01:18:15.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-k1mft" for this suite.
Mar  7 01:18:27.383: INFO: namespace: e2e-tests-kubelet-k1mft, resource: bindings, ignored listing per whitelist
[AfterEach] [k8s.io] kubelet
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubelet.go:173

• [SLOW TEST:48.559 seconds]
[k8s.io] kubelet
/data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:826
  [k8s.io] Clean up pods on node
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:826
    kubelet should be able to delete 10 pods per node in 1m0s.
    /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubelet.go:226
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted.
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/namespace.go:219
[BeforeEach] [Top Level]
  /data/src/github.com/openshift/origin/test/extended/util/test.go:47
[BeforeEach] [k8s.io] Namespaces [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Mar  7 01:18:28.428: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig

STEP: Building a namespace api object
Mar  7 01:18:28.635: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted.
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/namespace.go:219
STEP: Creating a test namespace
Mar  7 01:18:29.336: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Verifying there is no service in the namespace
[AfterEach] [k8s.io] Namespaces [Serial]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar  7 01:18:35.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-3j81l" for this suite.
Mar  7 01:18:46.645: INFO: namespace: e2e-tests-namespaces-3j81l, resource: bindings, ignored listing per whitelist
STEP: Destroying namespace "e2e-tests-nsdeletetest-gd50p" for this suite.
Mar  7 01:18:47.301: INFO: Namespace e2e-tests-nsdeletetest-gd50p was already deleted

• [SLOW TEST:18.873 seconds]
[k8s.io] Namespaces [Serial]
/data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:826
  should ensure that all services are removed when a namespace is deleted.
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/namespace.go:219
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Service endpoints latency 
  should not be very high [Conformance]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/service_latency.go:116
[BeforeEach] [Top Level]
  /data/src/github.com/openshift/origin/test/extended/util/test.go:47
[BeforeEach] [k8s.io] Service endpoints latency
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Mar  7 01:18:47.301: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig

STEP: Building a namespace api object
Mar  7 01:18:47.465: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/service_latency.go:116
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-h7gs3
I0307 01:18:48.093795   18842 runners.go:103] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-h7gs3, replica count: 1
I0307 01:18:48.093896   18842 reflector.go:196] Starting reflector *api.Pod (0) from github.com/openshift/origin/vendor/k8s.io/kubernetes/test/utils/pod_store.go:52
I0307 01:18:48.093946   18842 reflector.go:234] Listing and watching *api.Pod from github.com/openshift/origin/vendor/k8s.io/kubernetes/test/utils/pod_store.go:52
I0307 01:18:49.094162   18842 runners.go:103] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0307 01:18:50.094420   18842 runners.go:103] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0307 01:18:51.094674   18842 runners.go:103] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0307 01:18:51.095039   18842 reflector.go:196] Starting reflector *api.Endpoints (0) from github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/service_latency.go:308
I0307 01:18:51.095085   18842 reflector.go:234] Listing and watching *api.Endpoints from github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/service_latency.go:308
Mar  7 01:18:51.251: INFO: Created: latency-svc-pr6qh
Mar  7 01:18:51.267: INFO: Got endpoints: latency-svc-pr6qh [72.487651ms]
Mar  7 01:18:51.330: INFO: Created: latency-svc-hzdjc
Mar  7 01:18:51.341: INFO: Created: latency-svc-8r6zv
Mar  7 01:18:51.361: INFO: Created: latency-svc-fb3bk
Mar  7 01:18:51.377: INFO: Created: latency-svc-94573
Mar  7 01:18:51.389: INFO: Got endpoints: latency-svc-hzdjc [121.798603ms]
Mar  7 01:18:51.405: INFO: Created: latency-svc-79w00
Mar  7 01:18:51.413: INFO: Created: latency-svc-mz3g8
Mar  7 01:18:51.415: INFO: Got endpoints: latency-svc-8r6zv [148.279392ms]
Mar  7 01:18:51.424: INFO: Created: latency-svc-mb1h7
Mar  7 01:18:51.441: INFO: Got endpoints: latency-svc-fb3bk [174.141975ms]
Mar  7 01:18:51.451: INFO: Created: latency-svc-8687n
Mar  7 01:18:51.461: INFO: Got endpoints: latency-svc-94573 [194.06759ms]
Mar  7 01:18:51.500: INFO: Created: latency-svc-xd9ml
Mar  7 01:18:51.505: INFO: Created: latency-svc-dvrxd
Mar  7 01:18:51.511: INFO: Got endpoints: latency-svc-79w00 [243.570093ms]
Mar  7 01:18:51.513: INFO: Got endpoints: latency-svc-mz3g8 [245.924671ms]
Mar  7 01:18:51.530: INFO: Got endpoints: latency-svc-mb1h7 [262.929481ms]
Mar  7 01:18:51.534: INFO: Created: latency-svc-nf7c0
Mar  7 01:18:51.548: INFO: Got endpoints: latency-svc-dvrxd [280.656489ms]
Mar  7 01:18:51.554: INFO: Created: latency-svc-td10m
Mar  7 01:18:51.554: INFO: Got endpoints: latency-svc-xd9ml [164.924799ms]
Mar  7 01:18:51.558: INFO: Got endpoints: latency-svc-8687n [290.595045ms]
Mar  7 01:18:51.572: INFO: Created: latency-svc-bgln8
Mar  7 01:18:51.583: INFO: Created: latency-svc-89j1p
Mar  7 01:18:51.594: INFO: Created: latency-svc-gqj8p
Mar  7 01:18:51.601: INFO: Got endpoints: latency-svc-nf7c0 [139.98298ms]
Mar  7 01:18:51.605: INFO: Got endpoints: latency-svc-td10m [337.702474ms]
Mar  7 01:18:51.613: INFO: Created: latency-svc-8zbh0
Mar  7 01:18:51.625: INFO: Got endpoints: latency-svc-bgln8 [356.844391ms]
Mar  7 01:18:51.625: INFO: Created: latency-svc-zl4vx
Mar  7 01:18:51.631: INFO: Created: latency-svc-m9dbq
Mar  7 01:18:51.651: INFO: Created: latency-svc-j58tj
Mar  7 01:18:51.654: INFO: Got endpoints: latency-svc-gqj8p [386.257128ms]
Mar  7 01:18:51.658: INFO: Got endpoints: latency-svc-89j1p [390.417861ms]
Mar  7 01:18:51.661: INFO: Created: latency-svc-qz22p
Mar  7 01:18:51.661: INFO: Got endpoints: latency-svc-8zbh0 [393.848856ms]
Mar  7 01:18:51.681: INFO: Created: latency-svc-3fv0m
Mar  7 01:18:51.688: INFO: Got endpoints: latency-svc-zl4vx [272.741531ms]
Mar  7 01:18:51.690: INFO: Created: latency-svc-5qv89
Mar  7 01:18:51.693: INFO: Got endpoints: latency-svc-m9dbq [252.044832ms]
Mar  7 01:18:51.698: INFO: Got endpoints: latency-svc-j58tj [430.217456ms]
Mar  7 01:18:51.702: INFO: Got endpoints: latency-svc-qz22p [190.663386ms]
Mar  7 01:18:51.720: INFO: Created: latency-svc-ns65n
Mar  7 01:18:51.739: INFO: Created: latency-svc-115b7
Mar  7 01:18:51.740: INFO: Got endpoints: latency-svc-3fv0m [226.236303ms]
Mar  7 01:18:51.746: INFO: Created: latency-svc-7pblc
Mar  7 01:18:51.755: INFO: Got endpoints: latency-svc-5qv89 [224.817949ms]
Mar  7 01:18:51.762: INFO: Created: latency-svc-dpzm9
Mar  7 01:18:51.780: INFO: Created: latency-svc-jmrd6
Mar  7 01:18:51.787: INFO: Got endpoints: latency-svc-ns65n [239.084661ms]
Mar  7 01:18:51.794: INFO: Got endpoints: latency-svc-115b7 [240.180849ms]
Mar  7 01:18:51.800: INFO: Created: latency-svc-khj3z
Mar  7 01:18:51.800: INFO: Got endpoints: latency-svc-7pblc [241.851591ms]
Mar  7 01:18:51.802: INFO: Created: latency-svc-5mrrl
Mar  7 01:18:51.816: INFO: Created: latency-svc-vctrf
Mar  7 01:18:51.833: INFO: Created: latency-svc-2772k
Mar  7 01:18:51.833: INFO: Got endpoints: latency-svc-dpzm9 [232.046809ms]
Mar  7 01:18:51.847: INFO: Created: latency-svc-n24dq
Mar  7 01:18:51.855: INFO: Got endpoints: latency-svc-khj3z [230.724531ms]
Mar  7 01:18:51.859: INFO: Got endpoints: latency-svc-5mrrl [205.538586ms]
Mar  7 01:18:51.866: INFO: Got endpoints: latency-svc-jmrd6 [260.46048ms]
Mar  7 01:18:51.876: INFO: Created: latency-svc-l5dwm
Mar  7 01:18:51.883: INFO: Got endpoints: latency-svc-2772k [221.163244ms]
Mar  7 01:18:51.891: INFO: Created: latency-svc-jdm91
Mar  7 01:18:51.896: INFO: Got endpoints: latency-svc-vctrf [156.217554ms]
Mar  7 01:18:51.897: INFO: Created: latency-svc-xxvl8
Mar  7 01:18:51.913: INFO: Created: latency-svc-m0vln
Mar  7 01:18:51.921: INFO: Got endpoints: latency-svc-n24dq [233.195551ms]
Mar  7 01:18:51.933: INFO: Created: latency-svc-7t47w
Mar  7 01:18:51.951: INFO: Got endpoints: latency-svc-l5dwm [257.576815ms]
Mar  7 01:18:51.964: INFO: Created: latency-svc-vzh0n
Mar  7 01:18:51.970: INFO: Created: latency-svc-rflg9
Mar  7 01:18:51.977: INFO: Got endpoints: latency-svc-jdm91 [279.648482ms]
Mar  7 01:18:51.986: INFO: Got endpoints: latency-svc-m0vln [327.540257ms]
Mar  7 01:18:51.986: INFO: Created: latency-svc-4m2vw
Mar  7 01:18:51.994: INFO: Got endpoints: latency-svc-xxvl8 [292.175988ms]
Mar  7 01:18:52.005: INFO: Got endpoints: latency-svc-7t47w [249.469894ms]
Mar  7 01:18:52.005: INFO: Created: latency-svc-8x8x9
Mar  7 01:18:52.024: INFO: Created: latency-svc-rr00z
Mar  7 01:18:52.039: INFO: Created: latency-svc-r54v8
Mar  7 01:18:52.059: INFO: Created: latency-svc-kdzz5
Mar  7 01:18:52.059: INFO: Got endpoints: latency-svc-vzh0n [272.150221ms]
Mar  7 01:18:52.065: INFO: Got endpoints: latency-svc-rflg9 [271.40432ms]
Mar  7 01:18:52.080: INFO: Got endpoints: latency-svc-4m2vw [279.780915ms]
Mar  7 01:18:52.080: INFO: Created: latency-svc-r5g97
Mar  7 01:18:52.085: INFO: Got endpoints: latency-svc-8x8x9 [251.227192ms]
Mar  7 01:18:52.096: INFO: Got endpoints: latency-svc-rr00z [240.291893ms]
Mar  7 01:18:52.099: INFO: Created: latency-svc-0bz4n
Mar  7 01:18:52.119: INFO: Created: latency-svc-rx4pd
Mar  7 01:18:52.124: INFO: Got endpoints: latency-svc-kdzz5 [258.306163ms]
Mar  7 01:18:52.128: INFO: Created: latency-svc-m62rq
Mar  7 01:18:52.146: INFO: Got endpoints: latency-svc-r54v8 [286.418042ms]
Mar  7 01:18:52.154: INFO: Created: latency-svc-dqznd
Mar  7 01:18:52.154: INFO: Got endpoints: latency-svc-r5g97 [271.419328ms]
Mar  7 01:18:52.164: INFO: Created: latency-svc-kzpg3
Mar  7 01:18:52.183: INFO: Created: latency-svc-md84v
Mar  7 01:18:52.183: INFO: Got endpoints: latency-svc-0bz4n [287.065422ms]
Mar  7 01:18:52.194: INFO: Got endpoints: latency-svc-rx4pd [272.880782ms]
Mar  7 01:18:52.198: INFO: Got endpoints: latency-svc-m62rq [247.074372ms]
Mar  7 01:18:52.200: INFO: Created: latency-svc-91j17
Mar  7 01:18:52.214: INFO: Got endpoints: latency-svc-dqznd [236.681679ms]
Mar  7 01:18:52.221: INFO: Created: latency-svc-xd30t
Mar  7 01:18:52.245: INFO: Got endpoints: latency-svc-md84v [259.534975ms]
Mar  7 01:18:52.246: INFO: Created: latency-svc-w9h10
Mar  7 01:18:52.246: INFO: Got endpoints: latency-svc-kzpg3 [166.508154ms]
Mar  7 01:18:52.258: INFO: Created: latency-svc-qnmfd
Mar  7 01:18:52.259: INFO: Got endpoints: latency-svc-91j17 [254.164117ms]
Mar  7 01:18:52.273: INFO: Created: latency-svc-rrcck
Mar  7 01:18:52.276: INFO: Got endpoints: latency-svc-xd30t [216.624385ms]
Mar  7 01:18:52.286: INFO: Created: latency-svc-tvmsm
Mar  7 01:18:52.287: INFO: Got endpoints: latency-svc-w9h10 [221.20888ms]
Mar  7 01:18:52.304: INFO: Got endpoints: latency-svc-qnmfd [310.222822ms]
Mar  7 01:18:52.314: INFO: Created: latency-svc-jn3k8
Mar  7 01:18:52.323: INFO: Created: latency-svc-t7s8p
Mar  7 01:18:52.343: INFO: Created: latency-svc-hvtvc
Mar  7 01:18:52.349: INFO: Got endpoints: latency-svc-tvmsm [253.302128ms]
Mar  7 01:18:52.360: INFO: Created: latency-svc-1t423
Mar  7 01:18:52.361: INFO: Got endpoints: latency-svc-rrcck [275.882902ms]
Mar  7 01:18:52.371: INFO: Created: latency-svc-xg533
Mar  7 01:18:52.384: INFO: Got endpoints: latency-svc-t7s8p [238.501549ms]
Mar  7 01:18:52.392: INFO: Created: latency-svc-788ln
Mar  7 01:18:52.395: INFO: Got endpoints: latency-svc-jn3k8 [270.757577ms]
Mar  7 01:18:52.409: INFO: Created: latency-svc-j7zn7
Mar  7 01:18:52.415: INFO: Got endpoints: latency-svc-hvtvc [260.946155ms]
Mar  7 01:18:52.426: INFO: Created: latency-svc-hg42p
Mar  7 01:18:52.434: INFO: Got endpoints: latency-svc-1t423 [250.956511ms]
Mar  7 01:18:52.449: INFO: Created: latency-svc-2g8zz
Mar  7 01:18:52.450: INFO: Got endpoints: latency-svc-xg533 [255.691696ms]
Mar  7 01:18:52.459: INFO: Created: latency-svc-bdqt8
Mar  7 01:18:52.471: INFO: Created: latency-svc-f4z34
Mar  7 01:18:52.486: INFO: Got endpoints: latency-svc-j7zn7 [272.305562ms]
Mar  7 01:18:52.495: INFO: Got endpoints: latency-svc-788ln [296.430933ms]
Mar  7 01:18:52.505: INFO: Created: latency-svc-whc7t
Mar  7 01:18:52.509: INFO: Got endpoints: latency-svc-bdqt8 [249.592319ms]
Mar  7 01:18:52.511: INFO: Created: latency-svc-8k7r0
Mar  7 01:18:52.534: INFO: Got endpoints: latency-svc-2g8zz [287.328555ms]
Mar  7 01:18:52.550: INFO: Got endpoints: latency-svc-hg42p [305.30457ms]
Mar  7 01:18:52.551: INFO: Created: latency-svc-w638q
Mar  7 01:18:52.566: INFO: Created: latency-svc-d4p39
Mar  7 01:18:52.566: INFO: Got endpoints: latency-svc-f4z34 [290.030977ms]
Mar  7 01:18:52.583: INFO: Created: latency-svc-dwztl
Mar  7 01:18:52.583: INFO: Got endpoints: latency-svc-8k7r0 [278.767312ms]
Mar  7 01:18:52.586: INFO: Got endpoints: latency-svc-whc7t [298.916164ms]
Mar  7 01:18:52.593: INFO: Created: latency-svc-2jtll
Mar  7 01:18:52.617: INFO: Got endpoints: latency-svc-w638q [268.356583ms]
Mar  7 01:18:52.621: INFO: Created: latency-svc-71m8r
Mar  7 01:18:52.621: INFO: Created: latency-svc-g8wpk
Mar  7 01:18:52.638: INFO: Created: latency-svc-k8q3b
Mar  7 01:18:52.644: INFO: Got endpoints: latency-svc-dwztl [259.140951ms]
Mar  7 01:18:52.648: INFO: Got endpoints: latency-svc-d4p39 [287.084428ms]
Mar  7 01:18:52.673: INFO: Created: latency-svc-ptf9r
Mar  7 01:18:52.674: INFO: Created: latency-svc-fvgpc
Mar  7 01:18:52.695: INFO: Created: latency-svc-ww406
Mar  7 01:18:52.720: INFO: Created: latency-svc-w10sb
Mar  7 01:18:52.744: INFO: Got endpoints: latency-svc-71m8r [193.372212ms]
Mar  7 01:18:52.744: INFO: Got endpoints: latency-svc-2jtll [349.183924ms]
Mar  7 01:18:52.758: INFO: Got endpoints: latency-svc-g8wpk [342.56379ms]
Mar  7 01:18:52.758: INFO: Created: latency-svc-3543f
Mar  7 01:18:52.772: INFO: Created: latency-svc-lqnxs
Mar  7 01:18:52.776: INFO: Got endpoints: latency-svc-k8q3b [326.012095ms]
Mar  7 01:18:52.780: INFO: Got endpoints: latency-svc-ptf9r [293.84333ms]
Mar  7 01:18:52.782: INFO: Created: latency-svc-lk3nx
Mar  7 01:18:52.805: INFO: Created: latency-svc-kgd52
Mar  7 01:18:52.836: INFO: Created: latency-svc-sbhft
Mar  7 01:18:52.836: INFO: Created: latency-svc-r492n
Mar  7 01:18:52.857: INFO: Got endpoints: latency-svc-ww406 [348.562399ms]
Mar  7 01:18:52.862: INFO: Got endpoints: latency-svc-fvgpc [367.143222ms]
Mar  7 01:18:52.863: INFO: Created: latency-svc-z6k74
Mar  7 01:18:52.866: INFO: Created: latency-svc-s6w0f
Mar  7 01:18:52.869: INFO: Got endpoints: latency-svc-w10sb [334.883981ms]
Mar  7 01:18:52.876: INFO: Got endpoints: latency-svc-lqnxs [310.250213ms]
Mar  7 01:18:52.883: INFO: Got endpoints: latency-svc-3543f [449.369502ms]
Mar  7 01:18:52.890: INFO: Created: latency-svc-2jmsh
Mar  7 01:18:52.910: INFO: Created: latency-svc-881v8
Mar  7 01:18:53.040: INFO: Got endpoints: latency-svc-kgd52 [454.817897ms]
Mar  7 01:18:53.063: INFO: Created: latency-svc-kl66j
Mar  7 01:18:53.109: INFO: Got endpoints: latency-svc-lk3nx [525.327592ms]
Mar  7 01:18:53.115: INFO: Got endpoints: latency-svc-sbhft [497.706207ms]
Mar  7 01:18:53.142: INFO: Got endpoints: latency-svc-z6k74 [494.452509ms]
Mar  7 01:18:53.188: INFO: Got endpoints: latency-svc-r492n [430.87515ms]
Mar  7 01:18:53.223: INFO: Created: latency-svc-n379g
Mar  7 01:18:53.238: INFO: Got endpoints: latency-svc-s6w0f [494.57236ms]
Mar  7 01:18:53.255: INFO: Created: latency-svc-3gqwb
Mar  7 01:18:53.333: INFO: Got endpoints: latency-svc-881v8 [688.947278ms]
Mar  7 01:18:53.335: INFO: Got endpoints: latency-svc-2jmsh [590.503217ms]
Mar  7 01:18:53.357: INFO: Got endpoints: latency-svc-kl66j [580.529016ms]
Mar  7 01:18:53.380: INFO: Created: latency-svc-cfdhq
Mar  7 01:18:53.436: INFO: Created: latency-svc-th8tj
Mar  7 01:18:53.457: INFO: Created: latency-svc-dmg3w
Mar  7 01:18:53.484: INFO: Got endpoints: latency-svc-n379g [703.802411ms]
Mar  7 01:18:53.494: INFO: Created: latency-svc-c1r62
Mar  7 01:18:53.503: INFO: Created: latency-svc-94gv1
Mar  7 01:18:53.517: INFO: Got endpoints: latency-svc-3gqwb [660.010986ms]
Mar  7 01:18:53.519: INFO: Created: latency-svc-gvr1q
Mar  7 01:18:53.530: INFO: Got endpoints: latency-svc-cfdhq [341.57108ms]
Mar  7 01:18:53.539: INFO: Got endpoints: latency-svc-th8tj [670.799237ms]
Mar  7 01:18:53.551: INFO: Created: latency-svc-c34hh
Mar  7 01:18:53.568: INFO: Created: latency-svc-9638h
Mar  7 01:18:53.576: INFO: Got endpoints: latency-svc-dmg3w [241.593242ms]
Mar  7 01:18:53.578: INFO: Got endpoints: latency-svc-c1r62 [694.403996ms]
Mar  7 01:18:53.591: INFO: Created: latency-svc-jb96p
Mar  7 01:18:53.600: INFO: Got endpoints: latency-svc-gvr1q [491.502749ms]
Mar  7 01:18:53.603: INFO: Created: latency-svc-psv9d
Mar  7 01:18:53.619: INFO: Created: latency-svc-f595h
Mar  7 01:18:53.637: INFO: Created: latency-svc-kts6x
Mar  7 01:18:53.643: INFO: Got endpoints: latency-svc-c34hh [528.000562ms]
Mar  7 01:18:53.648: INFO: Got endpoints: latency-svc-9638h [506.10148ms]
Mar  7 01:18:53.656: INFO: Created: latency-svc-0t4b2
Mar  7 01:18:53.663: INFO: Got endpoints: latency-svc-94gv1 [622.894269ms]
Mar  7 01:18:53.679: INFO: Got endpoints: latency-svc-jb96p [817.245174ms]
Mar  7 01:18:53.692: INFO: Created: latency-svc-hjgk3
Mar  7 01:18:53.694: INFO: Created: latency-svc-50j8w
Mar  7 01:18:53.698: INFO: Got endpoints: latency-svc-f595h [821.942281ms]
Mar  7 01:18:53.711: INFO: Got endpoints: latency-svc-kts6x [378.207337ms]
Mar  7 01:18:53.716: INFO: Got endpoints: latency-svc-psv9d [477.19485ms]
Mar  7 01:18:53.719: INFO: Created: latency-svc-trlw4
Mar  7 01:18:53.719: INFO: Got endpoints: latency-svc-0t4b2 [362.656912ms]
Mar  7 01:18:53.742: INFO: Created: latency-svc-dnfq5
Mar  7 01:18:53.775: INFO: Created: latency-svc-pqshw
Mar  7 01:18:53.775: INFO: Got endpoints: latency-svc-hjgk3 [257.618198ms]
Mar  7 01:18:53.782: INFO: Got endpoints: latency-svc-50j8w [298.052945ms]
Mar  7 01:18:53.783: INFO: Created: latency-svc-6wklp
Mar  7 01:18:53.794: INFO: Got endpoints: latency-svc-trlw4 [264.344217ms]
Mar  7 01:18:53.807: INFO: Created: latency-svc-sthq4
Mar  7 01:18:53.819: INFO: Got endpoints: latency-svc-dnfq5 [279.94734ms]
Mar  7 01:18:53.828: INFO: Created: latency-svc-jmv4b
Mar  7 01:18:53.848: INFO: Got endpoints: latency-svc-6wklp [270.369024ms]
Mar  7 01:18:53.850: INFO: Got endpoints: latency-svc-pqshw [273.256362ms]
Mar  7 01:18:53.860: INFO: Created: latency-svc-5pjs2
Mar  7 01:18:53.883: INFO: Got endpoints: latency-svc-sthq4 [282.746214ms]
Mar  7 01:18:53.896: INFO: Got endpoints: latency-svc-jmv4b [252.790883ms]
Mar  7 01:18:53.899: INFO: Created: latency-svc-k6n8q
Mar  7 01:18:53.906: INFO: Got endpoints: latency-svc-5pjs2 [258.087975ms]
Mar  7 01:18:53.908: INFO: Created: latency-svc-d46vc
Mar  7 01:18:53.924: INFO: Created: latency-svc-x882p
Mar  7 01:18:53.935: INFO: Created: latency-svc-ssfj4
Mar  7 01:18:53.964: INFO: Got endpoints: latency-svc-k6n8q [300.788651ms]
Mar  7 01:18:53.967: INFO: Created: latency-svc-n4cj8
Mar  7 01:18:53.994: INFO: Created: latency-svc-w1g36
Mar  7 01:18:54.005: INFO: Got endpoints: latency-svc-d46vc [325.897959ms]
Mar  7 01:18:54.016: INFO: Created: latency-svc-wwtc4
Mar  7 01:18:54.023: INFO: Got endpoints: latency-svc-ssfj4 [312.075524ms]
Mar  7 01:18:54.024: INFO: Got endpoints: latency-svc-x882p [325.784836ms]
Mar  7 01:18:54.042: INFO: Created: latency-svc-0jz82
Mar  7 01:18:54.057: INFO: Got endpoints: latency-svc-n4cj8 [337.322202ms]
Mar  7 01:18:54.057: INFO: Created: latency-svc-8r0ps
Mar  7 01:18:54.066: INFO: Got endpoints: latency-svc-w1g36 [350.186902ms]
Mar  7 01:18:54.078: INFO: Created: latency-svc-0rg95
Mar  7 01:18:54.094: INFO: Got endpoints: latency-svc-wwtc4 [319.691125ms]
Mar  7 01:18:54.134: INFO: Got endpoints: latency-svc-8r0ps [339.80813ms]
Mar  7 01:18:54.134: INFO: Created: latency-svc-rsmsf
Mar  7 01:18:54.141: INFO: Got endpoints: latency-svc-0jz82 [358.958577ms]
Mar  7 01:18:54.145: INFO: Created: latency-svc-4sns0
Mar  7 01:18:54.157: INFO: Got endpoints: latency-svc-0rg95 [337.534197ms]
Mar  7 01:18:54.170: INFO: Created: latency-svc-1jj3z
Mar  7 01:18:54.202: INFO: Created: latency-svc-tr8q5
Mar  7 01:18:54.225: INFO: Created: latency-svc-2r74g
Mar  7 01:18:54.240: INFO: Created: latency-svc-b688j
Mar  7 01:18:54.248: INFO: Got endpoints: latency-svc-1jj3z [365.523015ms]
Mar  7 01:18:54.249: INFO: Got endpoints: latency-svc-4sns0 [399.002997ms]
Mar  7 01:18:54.249: INFO: Got endpoints: latency-svc-rsmsf [400.483802ms]
Mar  7 01:18:54.261: INFO: Created: latency-svc-n80jk
Mar  7 01:18:54.278: INFO: Created: latency-svc-519l7
Mar  7 01:18:54.296: INFO: Got endpoints: latency-svc-2r74g [389.650934ms]
Mar  7 01:18:54.314: INFO: Got endpoints: latency-svc-tr8q5 [418.527615ms]
Mar  7 01:18:54.321: INFO: Created: latency-svc-qxgd8
Mar  7 01:18:54.329: INFO: Created: latency-svc-snvw0
Mar  7 01:18:54.357: INFO: Created: latency-svc-bd3ff
Mar  7 01:18:54.390: INFO: Created: latency-svc-p65wd
Mar  7 01:18:54.391: INFO: Got endpoints: latency-svc-b688j [426.37091ms]
Mar  7 01:18:54.402: INFO: Got endpoints: latency-svc-n80jk [396.870521ms]
Mar  7 01:18:54.411: INFO: Got endpoints: latency-svc-519l7 [387.61851ms]
Mar  7 01:18:54.418: INFO: Created: latency-svc-d9zjj
Mar  7 01:18:54.434: INFO: Created: latency-svc-49fn8
Mar  7 01:18:54.445: INFO: Got endpoints: latency-svc-qxgd8 [388.210692ms]
Mar  7 01:18:54.447: INFO: Got endpoints: latency-svc-snvw0 [423.083217ms]
Mar  7 01:18:54.462: INFO: Created: latency-svc-838mr
Mar  7 01:18:54.469: INFO: Got endpoints: latency-svc-bd3ff [403.225087ms]
Mar  7 01:18:54.482: INFO: Got endpoints: latency-svc-p65wd [387.734365ms]
Mar  7 01:18:54.485: INFO: Got endpoints: latency-svc-d9zjj [350.916421ms]
Mar  7 01:18:54.487: INFO: Created: latency-svc-h813r
Mar  7 01:18:54.502: INFO: Created: latency-svc-8jpxr
Mar  7 01:18:54.513: INFO: Created: latency-svc-l0jj5
Mar  7 01:18:54.527: INFO: Got endpoints: latency-svc-49fn8 [385.256565ms]
Mar  7 01:18:54.548: INFO: Got endpoints: latency-svc-838mr [391.142379ms]
Mar  7 01:18:54.548: INFO: Created: latency-svc-7wxhq
Mar  7 01:18:54.561: INFO: Created: latency-svc-90lxm
Mar  7 01:18:54.566: INFO: Got endpoints: latency-svc-8jpxr [317.89316ms]
Mar  7 01:18:54.581: INFO: Created: latency-svc-g8l2q
Mar  7 01:18:54.584: INFO: Got endpoints: latency-svc-h813r [336.08313ms]
Mar  7 01:18:54.618: INFO: Got endpoints: latency-svc-l0jj5 [369.151186ms]
Mar  7 01:18:54.623: INFO: Created: latency-svc-98ndw
Mar  7 01:18:54.641: INFO: Got endpoints: latency-svc-90lxm [326.473819ms]
Mar  7 01:18:54.653: INFO: Created: latency-svc-ssbpj
Mar  7 01:18:54.656: INFO: Got endpoints: latency-svc-7wxhq [359.823313ms]
Mar  7 01:18:54.671: INFO: Got endpoints: latency-svc-g8l2q [280.846411ms]
Mar  7 01:18:54.673: INFO: Created: latency-svc-tk94n
Mar  7 01:18:54.684: INFO: Got endpoints: latency-svc-98ndw [281.709735ms]
Mar  7 01:18:54.696: INFO: Created: latency-svc-p4vvq
Mar  7 01:18:54.713: INFO: Created: latency-svc-l880p
Mar  7 01:18:54.725: INFO: Created: latency-svc-h6fsn
Mar  7 01:18:54.737: INFO: Got endpoints: latency-svc-tk94n [292.240226ms]
Mar  7 01:18:54.742: INFO: Got endpoints: latency-svc-ssbpj [331.906138ms]
Mar  7 01:18:54.751: INFO: Created: latency-svc-j912z
Mar  7 01:18:54.760: INFO: Created: latency-svc-22pgf
Mar  7 01:18:54.776: INFO: Got endpoints: latency-svc-p4vvq [328.987495ms]
Mar  7 01:18:54.784: INFO: Created: latency-svc-4x4wq
Mar  7 01:18:54.787: INFO: Got endpoints: latency-svc-l880p [317.546708ms]
Mar  7 01:18:54.798: INFO: Created: latency-svc-d6r5z
Mar  7 01:18:54.821: INFO: Got endpoints: latency-svc-h6fsn [339.110711ms]
Mar  7 01:18:54.831: INFO: Got endpoints: latency-svc-j912z [345.511853ms]
Mar  7 01:18:54.832: INFO: Created: latency-svc-lln6m
Mar  7 01:18:54.859: INFO: Created: latency-svc-jlnsj
Mar  7 01:18:54.872: INFO: Got endpoints: latency-svc-22pgf [345.659282ms]
Mar  7 01:18:54.883: INFO: Got endpoints: latency-svc-4x4wq [334.887854ms]
Mar  7 01:18:54.886: INFO: Created: latency-svc-4gbf4
Mar  7 01:18:54.891: INFO: Got endpoints: latency-svc-d6r5z [324.767748ms]
Mar  7 01:18:54.899: INFO: Got endpoints: latency-svc-lln6m [314.166955ms]
Mar  7 01:18:54.913: INFO: Created: latency-svc-xxgb9
Mar  7 01:18:54.926: INFO: Created: latency-svc-dw1sp
Mar  7 01:18:54.938: INFO: Got endpoints: latency-svc-jlnsj [319.891026ms]
Mar  7 01:18:54.957: INFO: Created: latency-svc-v7bn7
Mar  7 01:18:54.958: INFO: Got endpoints: latency-svc-4gbf4 [317.016324ms]
Mar  7 01:18:54.980: INFO: Created: latency-svc-5z9nl
Mar  7 01:18:54.989: INFO: Created: latency-svc-wt3gb
Mar  7 01:18:55.003: INFO: Got endpoints: latency-svc-xxgb9 [347.284928ms]
Mar  7 01:18:55.010: INFO: Created: latency-svc-3bcjw
Mar  7 01:18:55.012: INFO: Got endpoints: latency-svc-dw1sp [340.834273ms]
Mar  7 01:18:55.026: INFO: Got endpoints: latency-svc-v7bn7 [342.249656ms]
Mar  7 01:18:55.042: INFO: Created: latency-svc-j5l7k
Mar  7 01:18:55.052: INFO: Created: latency-svc-dz45n
Mar  7 01:18:55.059: INFO: Got endpoints: latency-svc-wt3gb [316.234202ms]
Mar  7 01:18:55.066: INFO: Got endpoints: latency-svc-5z9nl [329.284559ms]
Mar  7 01:18:55.080: INFO: Created: latency-svc-lkf4c
Mar  7 01:18:55.099: INFO: Created: latency-svc-vmhn7
Mar  7 01:18:55.101: INFO: Created: latency-svc-9jf92
Mar  7 01:18:55.102: INFO: Got endpoints: latency-svc-3bcjw [325.635854ms]
Mar  7 01:18:55.126: INFO: Created: latency-svc-h2sgt
Mar  7 01:18:55.126: INFO: Got endpoints: latency-svc-j5l7k [338.947801ms]
Mar  7 01:18:55.147: INFO: Got endpoints: latency-svc-dz45n [326.071714ms]
Mar  7 01:18:55.149: INFO: Created: latency-svc-qdbcg
Mar  7 01:18:55.169: INFO: Got endpoints: latency-svc-lkf4c [338.634557ms]
Mar  7 01:18:55.192: INFO: Created: latency-svc-32pn9
Mar  7 01:18:55.202: INFO: Created: latency-svc-0pbvk
Mar  7 01:18:55.214: INFO: Got endpoints: latency-svc-9jf92 [342.241802ms]
Mar  7 01:18:55.222: INFO: Got endpoints: latency-svc-qdbcg [323.309775ms]
Mar  7 01:18:55.223: INFO: Created: latency-svc-t6j85
Mar  7 01:18:55.237: INFO: Got endpoints: latency-svc-vmhn7 [353.834656ms]
Mar  7 01:18:55.248: INFO: Created: latency-svc-bln3n
Mar  7 01:18:55.248: INFO: Got endpoints: latency-svc-h2sgt [357.214891ms]
Mar  7 01:18:55.274: INFO: Got endpoints: latency-svc-32pn9 [315.599834ms]
Mar  7 01:18:55.275: INFO: Created: latency-svc-llm8l
Mar  7 01:18:55.284: INFO: Got endpoints: latency-svc-0pbvk [346.665569ms]
Mar  7 01:18:55.293: INFO: Created: latency-svc-f99bj
Mar  7 01:18:55.296: INFO: Got endpoints: latency-svc-t6j85 [292.445492ms]
Mar  7 01:18:55.303: INFO: Created: latency-svc-0n9n7
Mar  7 01:18:55.304: INFO: Got endpoints: latency-svc-bln3n [291.38253ms]
Mar  7 01:18:55.318: INFO: Created: latency-svc-fpc2b
Mar  7 01:18:55.340: INFO: Created: latency-svc-wlwdb
Mar  7 01:18:55.355: INFO: Got endpoints: latency-svc-llm8l [329.247704ms]
Mar  7 01:18:55.360: INFO: Got endpoints: latency-svc-f99bj [300.99257ms]
Mar  7 01:18:55.364: INFO: Created: latency-svc-skbbb
Mar  7 01:18:55.375: INFO: Got endpoints: latency-svc-fpc2b [272.847341ms]
Mar  7 01:18:55.378: INFO: Got endpoints: latency-svc-0n9n7 [311.361044ms]
Mar  7 01:18:55.390: INFO: Created: latency-svc-bvvsg
Mar  7 01:18:55.401: INFO: Created: latency-svc-kt30d
Mar  7 01:18:55.405: INFO: Got endpoints: latency-svc-wlwdb [279.431397ms]
Mar  7 01:18:55.415: INFO: Created: latency-svc-krm7r
Mar  7 01:18:55.437: INFO: Created: latency-svc-rd0sx
Mar  7 01:18:55.440: INFO: Got endpoints: latency-svc-skbbb [166.550126ms]
Mar  7 01:18:55.473: INFO: Created: latency-svc-nxbz6
Mar  7 01:18:55.479: INFO: Got endpoints: latency-svc-bvvsg [309.588781ms]
Mar  7 01:18:55.485: INFO: Created: latency-svc-49l54
Mar  7 01:18:55.493: INFO: Got endpoints: latency-svc-krm7r [270.766154ms]
Mar  7 01:18:55.494: INFO: Got endpoints: latency-svc-kt30d [279.387382ms]
Mar  7 01:18:55.504: INFO: Created: latency-svc-nflnz
Mar  7 01:18:55.522: INFO: Got endpoints: latency-svc-rd0sx [285.271798ms]
Mar  7 01:18:55.536: INFO: Got endpoints: latency-svc-49l54 [388.063405ms]
Mar  7 01:18:55.538: INFO: Got endpoints: latency-svc-nxbz6 [289.193088ms]
Mar  7 01:18:55.540: INFO: Created: latency-svc-4x6kb
Mar  7 01:18:55.555: INFO: Created: latency-svc-k41l0
Mar  7 01:18:55.561: INFO: Got endpoints: latency-svc-nflnz [276.622554ms]
Mar  7 01:18:55.570: INFO: Created: latency-svc-317zd
Mar  7 01:18:55.583: INFO: Created: latency-svc-rjcqz
Mar  7 01:18:55.590: INFO: Got endpoints: latency-svc-4x6kb [294.587941ms]
Mar  7 01:18:55.602: INFO: Created: latency-svc-nn07b
Mar  7 01:18:55.626: INFO: Created: latency-svc-9xn6t
Mar  7 01:18:55.631: INFO: Got endpoints: latency-svc-317zd [275.41456ms]
Mar  7 01:18:55.649: INFO: Created: latency-svc-d2kt3
Mar  7 01:18:55.654: INFO: Got endpoints: latency-svc-rjcqz [294.13834ms]
Mar  7 01:18:55.663: INFO: Got endpoints: latency-svc-k41l0 [359.634509ms]
Mar  7 01:18:55.667: INFO: Created: latency-svc-g2qj4
Mar  7 01:18:55.685: INFO: Created: latency-svc-299q0
Mar  7 01:18:55.689: INFO: Got endpoints: latency-svc-nn07b [314.15248ms]
Mar  7 01:18:55.701: INFO: Created: latency-svc-nnvv5
Mar  7 01:18:55.709: INFO: Got endpoints: latency-svc-d2kt3 [304.163367ms]
Mar  7 01:18:55.711: INFO: Got endpoints: latency-svc-9xn6t [333.071493ms]
Mar  7 01:18:55.733: INFO: Got endpoints: latency-svc-g2qj4 [292.670873ms]
Mar  7 01:18:55.746: INFO: Got endpoints: latency-svc-299q0 [266.498661ms]
Mar  7 01:18:55.768: INFO: Got endpoints: latency-svc-nnvv5 [275.03305ms]
Mar  7 01:18:55.768: INFO: Latencies: [121.798603ms 139.98298ms 148.279392ms 156.217554ms 164.924799ms 166.508154ms 166.550126ms 174.141975ms 190.663386ms 193.372212ms 194.06759ms 205.538586ms 216.624385ms 221.163244ms 221.20888ms 224.817949ms 226.236303ms 230.724531ms 232.046809ms 233.195551ms 236.681679ms 238.501549ms 239.084661ms 240.180849ms 240.291893ms 241.593242ms 241.851591ms 243.570093ms 245.924671ms 247.074372ms 249.469894ms 249.592319ms 250.956511ms 251.227192ms 252.044832ms 252.790883ms 253.302128ms 254.164117ms 255.691696ms 257.576815ms 257.618198ms 258.087975ms 258.306163ms 259.140951ms 259.534975ms 260.46048ms 260.946155ms 262.929481ms 264.344217ms 266.498661ms 268.356583ms 270.369024ms 270.757577ms 270.766154ms 271.40432ms 271.419328ms 272.150221ms 272.305562ms 272.741531ms 272.847341ms 272.880782ms 273.256362ms 275.03305ms 275.41456ms 275.882902ms 276.622554ms 278.767312ms 279.387382ms 279.431397ms 279.648482ms 279.780915ms 279.94734ms 280.656489ms 280.846411ms 281.709735ms 282.746214ms 285.271798ms 286.418042ms 287.065422ms 287.084428ms 287.328555ms 289.193088ms 290.030977ms 290.595045ms 291.38253ms 292.175988ms 292.240226ms 292.445492ms 292.670873ms 293.84333ms 294.13834ms 294.587941ms 296.430933ms 298.052945ms 298.916164ms 300.788651ms 300.99257ms 304.163367ms 305.30457ms 309.588781ms 310.222822ms 310.250213ms 311.361044ms 312.075524ms 314.15248ms 314.166955ms 315.599834ms 316.234202ms 317.016324ms 317.546708ms 317.89316ms 319.691125ms 319.891026ms 323.309775ms 324.767748ms 325.635854ms 325.784836ms 325.897959ms 326.012095ms 326.071714ms 326.473819ms 327.540257ms 328.987495ms 329.247704ms 329.284559ms 331.906138ms 333.071493ms 334.883981ms 334.887854ms 336.08313ms 337.322202ms 337.534197ms 337.702474ms 338.634557ms 338.947801ms 339.110711ms 339.80813ms 340.834273ms 341.57108ms 342.241802ms 342.249656ms 342.56379ms 345.511853ms 345.659282ms 346.665569ms 347.284928ms 348.562399ms 349.183924ms 350.186902ms 350.916421ms 353.834656ms 356.844391ms 
357.214891ms 358.958577ms 359.634509ms 359.823313ms 362.656912ms 365.523015ms 367.143222ms 369.151186ms 378.207337ms 385.256565ms 386.257128ms 387.61851ms 387.734365ms 388.063405ms 388.210692ms 389.650934ms 390.417861ms 391.142379ms 393.848856ms 396.870521ms 399.002997ms 400.483802ms 403.225087ms 418.527615ms 423.083217ms 426.37091ms 430.217456ms 430.87515ms 449.369502ms 454.817897ms 477.19485ms 491.502749ms 494.452509ms 494.57236ms 497.706207ms 506.10148ms 525.327592ms 528.000562ms 580.529016ms 590.503217ms 622.894269ms 660.010986ms 670.799237ms 688.947278ms 694.403996ms 703.802411ms 817.245174ms 821.942281ms]
Mar  7 01:18:55.768: INFO: 50 %ile: 310.222822ms
Mar  7 01:18:55.768: INFO: 90 %ile: 449.369502ms
Mar  7 01:18:55.768: INFO: 99 %ile: 817.245174ms
Mar  7 01:18:55.768: INFO: Total sample count: 200
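The 50/90/99 %ile lines above are computed from the sorted list of 200 endpoint latencies. A minimal sketch of that summary, assuming a nearest-rank percentile convention (the e2e framework's exact indexing may differ slightly):

```python
import math

def percentile(sorted_latencies, p):
    # Nearest-rank percentile over an already-sorted sample:
    # take the ceil(p% * n)-th smallest value.
    n = len(sorted_latencies)
    idx = max(0, math.ceil(p / 100.0 * n) - 1)
    return sorted_latencies[idx]

def summarize(sorted_latencies):
    # Mirrors the three summary lines the suite logs after collecting samples.
    return {p: percentile(sorted_latencies, p) for p in (50, 90, 99)}
```

With the suite's 200 samples, `summarize` would yield the three values logged above (310ms, 449ms, 817ms).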
[AfterEach] [k8s.io] Service endpoints latency
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar  7 01:18:55.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-h7gs3" for this suite.
Mar  7 01:19:27.279: INFO: namespace: e2e-tests-svc-latency-h7gs3, resource: bindings, ignored listing per whitelist

• [SLOW TEST:40.583 seconds]
[k8s.io] Service endpoints latency
/data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:826
  should not be very high [Conformance]
  /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/service_latency.go:116
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Ran 22 of 702 Specs in 832.400 seconds
SUCCESS! -- 22 Passed | 0 Failed | 0 Pending | 680 Skipped
Mar  7 01:19:27.891: INFO: Error running cluster/log-dump.sh: fork/exec /data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/cluster/log-dump.sh: no such file or directory
PASS

Ginkgo ran 1 suite in 13m53.285367895s
Test Suite Passed
grep: /tmp/openshift/test-extended/core/logs/openshift.log: No such file or directory
[INFO] Dumping etcd contents to /tmp/openshift/test-extended/core/artifacts/etcd_dump.json
[INFO] Dumping container logs to /tmp/openshift/test-extended/core/logs/containers
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
grep: /tmp/openshift/test-extended/core/logs/openshift.log: No such file or directory
[INFO] Cleanup complete
[INFO] Exiting

real	39m12.313s
user	46m56.147s
sys	3m25.194s
+ [[ branch_success == \b\r\a\n\c\h\_\s\u\c\c\e\s\s ]]
+ [[ '' != 1 ]]
+ [[ 1 == 1 ]]
+ to=openshift/origin-gce:latest
+ sudo docker tag openshift/origin-gce:latest openshift/origin-gce:latest
+ sudo docker push openshift/origin-gce:latest
The push refers to a repository [docker.io/openshift/origin-gce]
a28821d7ce39: Preparing
fe8f0810c022: Preparing
0a997c4c8e87: Preparing
6b98319e79a5: Preparing
34e7b85d83e4: Preparing
34e7b85d83e4: Mounted from openshift/origin-base
6b98319e79a5: Mounted from openshift/origin-base
fe8f0810c022: Pushed
0a997c4c8e87: Pushed
a28821d7ce39: Pushed
latest: digest: sha256:0cd2fb99b87fb1f2d319d4990a5823d3aa6a1882c60bccd1d9b309834dd06b23 size: 1377
+ exit 0
+ gather
+ set +e
+ hack/build-go.sh cmd/oc
++ Building go targets for linux/amd64: cmd/oc
hack/build-go.sh took 3 seconds
++ pwd
+ export PATH=/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64:/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/_output/local/bin/linux/amd64:/usr/lib64/qt-3.3/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/ec2-user/.local/bin:/home/ec2-user/bin
+ PATH=/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64:/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/_output/local/bin/linux/amd64:/usr/lib64/qt-3.3/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/ec2-user/.local/bin:/home/ec2-user/bin
+ oc get nodes --template '{{ range .items }}{{ .metadata.name }}{{ "\n" }}{{ end }}'
+ xargs -L 1 -I X bash -c 'oc get --raw /api/v1/nodes/X/proxy/metrics > /tmp/artifacts/X.metrics' ''
+ set -e
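The `oc get nodes | xargs` pipeline above dumps each kubelet's metrics through the API server's node proxy subresource. A rough Python equivalent, assuming `oc` is on PATH and authenticated against the cluster (the `dump_node_metrics` helper and output directory are illustrative, not from the job):

```python
import subprocess

def metrics_path(node_name):
    # Kubelet metrics exposed via the API server node proxy,
    # matching the `oc get --raw` path used in the log above.
    return "/api/v1/nodes/%s/proxy/metrics" % node_name

def dump_node_metrics(node_name, out_dir="/tmp/artifacts"):
    # Assumption: requires a live cluster and an authenticated `oc` binary.
    raw = subprocess.check_output(["oc", "get", "--raw", metrics_path(node_name)])
    with open("%s/%s.metrics" % (out_dir, node_name), "wb") as f:
        f.write(raw)
```

This produces the per-node `*.metrics` artifacts archived later in the log.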
[PostBuildScript] - Execution post build scripts.
[workspace] $ /bin/bash /tmp/hudson7213084419962625771.sh
~/jobs/zz_origin_gce_image/workspace ~/jobs/zz_origin_gce_image/workspace
Activated service account credentials for: [jenkins-ci-provisioner@openshift-gce-devel.iam.gserviceaccount.com]

PLAY [Terminate running cluster and remove all supporting resources in GCE] ****

TASK [setup] *******************************************************************
Tuesday 07 March 2017  06:21:17 +0000 (0:00:00.037)       0:00:00.037 ********* 
ok: [localhost]

TASK [deprovision : Templatize de-provision script] ****************************
Tuesday 07 March 2017  06:21:20 +0000 (0:00:03.778)       0:00:03.816 ********* 
changed: [localhost]

TASK [deprovision : De-provision GCE resources] ********************************
Tuesday 07 March 2017  06:21:21 +0000 (0:00:00.450)       0:00:04.266 ********* 
changed: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=3    changed=2    unreachable=0    failed=0   

Tuesday 07 March 2017  06:27:34 +0000 (0:06:13.219)       0:06:17.485 ********* 
=============================================================================== 
deprovision : De-provision GCE resources ------------------------------ 373.22s
setup ------------------------------------------------------------------- 3.78s
deprovision : Templatize de-provision script ---------------------------- 0.45s
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/junit
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/junit/conformance_parallel_01.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/junit/conformance_parallel_02.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/junit/conformance_parallel_03.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/junit/conformance_parallel_04.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/junit/conformance_parallel_05.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/junit/conformance_parallel_06.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/junit/conformance_parallel_07.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/junit/conformance_parallel_08.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/junit/conformance_parallel_09.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/junit/conformance_parallel_10.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/junit/conformance_parallel_11.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/junit/conformance_parallel_12.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/junit/conformance_parallel_13.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/junit/conformance_parallel_14.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/junit/conformance_parallel_15.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/junit/conformance_parallel_16.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/junit/conformance_parallel_17.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/junit/conformance_parallel_18.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/junit/conformance_parallel_19.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/junit/conformance_parallel_20.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/junit/conformance_parallel_21.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/junit/conformance_parallel_22.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/junit/conformance_parallel_23.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/junit/conformance_parallel_24.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/junit/conformance_parallel_25.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/junit/conformance_serial_01.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/ci-primg148-ig-m-nrvp.metrics
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/ci-primg148-ig-n-qhms.metrics
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/ci-primg148-ig-n-w5x2.metrics
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/ci-primg148-ig-n-wvkh.metrics
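The junit XML files listed above are what Jenkins ingests in the "Recording test results" step. A small sketch of aggregating them, assuming the standard junit attributes (`tests`, `failures`, `skipped`) on each `testsuite` element:

```python
import xml.etree.ElementTree as ET

def summarize_junit(xml_text):
    # Sum counts across all <testsuite> elements, whether the root is a
    # single suite or a <testsuites> wrapper.
    root = ET.fromstring(xml_text)
    suites = [root] if root.tag == "testsuite" else list(root.iter("testsuite"))
    total = failures = skipped = 0
    for s in suites:
        total += int(s.get("tests", 0))
        failures += int(s.get("failures", 0))
        skipped += int(s.get("skipped", 0))
    return total, failures, skipped
```

Run over the 26 conformance files, this would reproduce the suite-level tally ("22 Passed | 0 Failed | 680 Skipped") seen earlier in the log.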

PLAYBOOK: main.yml *************************************************************
4 plays in /var/lib/jenkins/jobs/zz_origin_gce_image/workspace/.venv/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml

PLAY [ensure we have the parameters necessary to deprovision virtual hosts] ****

TASK [ensure all required variables are set] ***********************************
task path: /var/lib/jenkins/jobs/zz_origin_gce_image/workspace/.venv/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml:9
skipping: [localhost] => (item=origin_ci_inventory_dir)  => {
    "changed": false, 
    "item": "origin_ci_inventory_dir", 
    "skip_reason": "Conditional check failed", 
    "skipped": true
}
skipping: [localhost] => (item=origin_ci_aws_region)  => {
    "changed": false, 
    "item": "origin_ci_aws_region", 
    "skip_reason": "Conditional check failed", 
    "skipped": true
}

PLAY [deprovision virtual hosts in EC2] ****************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [deprovision a virtual EC2 host] ******************************************
task path: /var/lib/jenkins/jobs/zz_origin_gce_image/workspace/.venv/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml:28
included: /var/lib/jenkins/jobs/zz_origin_gce_image/workspace/.venv/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml for localhost

TASK [update the SSH configuration to remove AWS EC2 specifics] ****************
task path: /var/lib/jenkins/jobs/zz_origin_gce_image/workspace/.venv/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:2
ok: [localhost] => {
    "changed": false, 
    "msg": ""
}

TASK [rename EC2 instance for termination reaper] ******************************
task path: /var/lib/jenkins/jobs/zz_origin_gce_image/workspace/.venv/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:8
changed: [localhost] => {
    "changed": true, 
    "msg": "Tags {'Name': 'terminate'} created for resource i-043d782cfbff006bd."
}

TASK [tear down the EC2 instance] **********************************************
task path: /var/lib/jenkins/jobs/zz_origin_gce_image/workspace/.venv/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:15
changed: [localhost] => {
    "changed": true, 
    "instance_ids": [
        "i-043d782cfbff006bd"
    ], 
    "instances": [
        {
            "ami_launch_index": "0", 
            "architecture": "x86_64", 
            "block_device_mapping": {
                "/dev/sda1": {
                    "delete_on_termination": true, 
                    "status": "attached", 
                    "volume_id": "vol-04ba1e5c2de2787ef"
                }, 
                "/dev/sdb": {
                    "delete_on_termination": true, 
                    "status": "attached", 
                    "volume_id": "vol-01f34a6e725220dbd"
                }
            }, 
            "dns_name": "ec2-54-173-222-172.compute-1.amazonaws.com", 
            "ebs_optimized": false, 
            "groups": {
                "sg-7e73221a": "default"
            }, 
            "hypervisor": "xen", 
            "id": "i-043d782cfbff006bd", 
            "image_id": "ami-a409d1b2", 
            "instance_type": "m4.xlarge", 
            "kernel": null, 
            "key_name": "libra", 
            "launch_time": "2017-03-07T05:13:28.000Z", 
            "placement": "us-east-1d", 
            "private_dns_name": "ip-172-18-10-43.ec2.internal", 
            "private_ip": "172.18.10.43", 
            "public_dns_name": "ec2-54-173-222-172.compute-1.amazonaws.com", 
            "public_ip": "54.173.222.172", 
            "ramdisk": null, 
            "region": "us-east-1", 
            "root_device_name": "/dev/sda1", 
            "root_device_type": "ebs", 
            "state": "running", 
            "state_code": 16, 
            "tags": {
                "Name": "terminate", 
                "openshift_etcd": "", 
                "openshift_master": "", 
                "openshift_node": ""
            }, 
            "tenancy": "default", 
            "virtualization_type": "hvm"
        }
    ], 
    "tagged_instances": []
}

TASK [remove the serialized host variables] ************************************
task path: /var/lib/jenkins/jobs/zz_origin_gce_image/workspace/.venv/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:21
changed: [localhost] => {
    "changed": true, 
    "path": "/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/.config/origin-ci-tool/inventory/host_vars/172.18.10.43.yml", 
    "state": "absent"
}

PLAY [deprovision virtual hosts locally managed by Vagrant] ********************

PLAY [clean up local configuration for deprovisioned instances] ****************

TASK [remove inventory configuration directory] ********************************
task path: /var/lib/jenkins/jobs/zz_origin_gce_image/workspace/.venv/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml:61
changed: [localhost] => {
    "changed": true, 
    "path": "/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/.config/origin-ci-tool/inventory", 
    "state": "absent"
}

PLAY RECAP *********************************************************************
localhost                  : ok=7    changed=4    unreachable=0    failed=0   

~/jobs/zz_origin_gce_image/workspace
Recording test results
Archiving artifacts
Finished: SUCCESS