Console Output

Skipping 6,728 KB of earlier log output.
    /go/src/github.com/openshift/origin/test/extended/networking/isolation.go:15
------------------------------
S
------------------------------
[sig-storage] Projected 
  should provide podname as non-root with fsgroup [Feature:FSGroup] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:907

[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:53
[BeforeEach] [sig-storage] Projected
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134
STEP: Creating a kubernetes client
Dec 20 23:19:56.507: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
Dec 20 23:19:56.655: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should provide podname as non-root with fsgroup [Feature:FSGroup] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:907
STEP: Creating a pod to test downward API volume plugin
Dec 20 23:19:56.946: INFO: Waiting up to 5m0s for pod "metadata-volume-4ad09e85-e5dc-11e7-b211-0e785a65cbca" in namespace "e2e-tests-projected-tpmnm" to be "success or failure"
Dec 20 23:19:56.965: INFO: Pod "metadata-volume-4ad09e85-e5dc-11e7-b211-0e785a65cbca": Phase="Pending", Reason="", readiness=false. Elapsed: 19.530069ms
Dec 20 23:19:58.993: INFO: Pod "metadata-volume-4ad09e85-e5dc-11e7-b211-0e785a65cbca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0473414s
Dec 20 23:20:01.008: INFO: Pod "metadata-volume-4ad09e85-e5dc-11e7-b211-0e785a65cbca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062189738s
Dec 20 23:20:03.025: INFO: Pod "metadata-volume-4ad09e85-e5dc-11e7-b211-0e785a65cbca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.07887867s
Dec 20 23:20:05.040: INFO: Pod "metadata-volume-4ad09e85-e5dc-11e7-b211-0e785a65cbca": Phase="Pending", Reason="", readiness=false. Elapsed: 8.094226754s
Dec 20 23:20:07.064: INFO: Pod "metadata-volume-4ad09e85-e5dc-11e7-b211-0e785a65cbca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.117760112s
STEP: Saw pod success
Dec 20 23:20:07.064: INFO: Pod "metadata-volume-4ad09e85-e5dc-11e7-b211-0e785a65cbca" satisfied condition "success or failure"
Dec 20 23:20:07.089: INFO: Trying to get logs from node ci-primg624-ig-n-jf6c pod metadata-volume-4ad09e85-e5dc-11e7-b211-0e785a65cbca container client-container: <nil>
STEP: delete the pod
Dec 20 23:20:07.219: INFO: Waiting for pod metadata-volume-4ad09e85-e5dc-11e7-b211-0e785a65cbca to disappear
Dec 20 23:20:07.236: INFO: Pod metadata-volume-4ad09e85-e5dc-11e7-b211-0e785a65cbca no longer exists
[AfterEach] [sig-storage] Projected
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135
Dec 20 23:20:07.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-tpmnm" for this suite.
Dec 20 23:20:13.329: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 23:20:14.552: INFO: namespace: e2e-tests-projected-tpmnm, resource: bindings, ignored listing per whitelist
Dec 20 23:20:14.941: INFO: namespace e2e-tests-projected-tpmnm deletion completed in 7.673448245s


• [SLOW TEST:18.434 seconds]
[sig-storage] Projected
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34
  should provide podname as non-root with fsgroup [Feature:FSGroup] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:907
------------------------------
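Editorial note: the repeated "Waiting up to 5m0s for pod … to be 'success or failure'" lines in this log (and in the test blocks that follow) come from the e2e framework polling the pod phase every couple of seconds until it reaches Succeeded or Failed. Below is a minimal sketch of that pattern, assuming a recent client-go; the helper name and the 2s/5m intervals are illustrative and are not the framework's actual implementation.

```go
package podwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodSuccessOrFailure polls the pod phase until it is Succeeded (done)
// or Failed (error), roughly matching the cadence seen in the log above.
// Illustrative sketch only, not the origin/kubernetes framework code.
func waitForPodSuccessOrFailure(c kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil
		case corev1.PodFailed:
			return false, fmt.Errorf("pod %s/%s failed", ns, name)
		default:
			return false, nil // Pending or Running: keep polling
		}
	})
}
```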
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings  [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648

[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:53
[BeforeEach] [sig-storage] ConfigMap
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134
STEP: Creating a kubernetes client
Dec 20 23:20:00.633: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
Dec 20 23:20:00.750: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings  [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648
STEP: Creating configMap with name configmap-test-volume-map-4d38c38f-e5dc-11e7-9888-0e785a65cbca
STEP: Creating a pod to test consume configMaps
Dec 20 23:20:00.969: INFO: Waiting up to 5m0s for pod "pod-configmaps-4d3c1639-e5dc-11e7-9888-0e785a65cbca" in namespace "e2e-tests-configmap-94s5v" to be "success or failure"
Dec 20 23:20:00.984: INFO: Pod "pod-configmaps-4d3c1639-e5dc-11e7-9888-0e785a65cbca": Phase="Pending", Reason="", readiness=false. Elapsed: 14.944814ms
Dec 20 23:20:02.999: INFO: Pod "pod-configmaps-4d3c1639-e5dc-11e7-9888-0e785a65cbca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030403149s
Dec 20 23:20:05.026: INFO: Pod "pod-configmaps-4d3c1639-e5dc-11e7-9888-0e785a65cbca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056675359s
Dec 20 23:20:07.044: INFO: Pod "pod-configmaps-4d3c1639-e5dc-11e7-9888-0e785a65cbca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.074573184s
STEP: Saw pod success
Dec 20 23:20:07.044: INFO: Pod "pod-configmaps-4d3c1639-e5dc-11e7-9888-0e785a65cbca" satisfied condition "success or failure"
Dec 20 23:20:07.071: INFO: Trying to get logs from node ci-primg624-ig-n-9k4l pod pod-configmaps-4d3c1639-e5dc-11e7-9888-0e785a65cbca container configmap-volume-test: <nil>
STEP: delete the pod
Dec 20 23:20:07.172: INFO: Waiting for pod pod-configmaps-4d3c1639-e5dc-11e7-9888-0e785a65cbca to disappear
Dec 20 23:20:07.213: INFO: Pod pod-configmaps-4d3c1639-e5dc-11e7-9888-0e785a65cbca no longer exists
[AfterEach] [sig-storage] ConfigMap
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135
Dec 20 23:20:07.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-94s5v" for this suite.
Dec 20 23:20:13.325: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 23:20:14.867: INFO: namespace: e2e-tests-configmap-94s5v, resource: bindings, ignored listing per whitelist
Dec 20 23:20:14.970: INFO: namespace e2e-tests-configmap-94s5v deletion completed in 7.718866919s


• [SLOW TEST:14.336 seconds]
[sig-storage] ConfigMap
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings  [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648
------------------------------
[Conformance][templates] templateinstance impersonation tests 
  should pass impersonation deletion tests [Suite:openshift/conformance/parallel]
  /go/src/github.com/openshift/origin/test/extended/templates/templateinstance_impersonation.go:352

[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:53
[BeforeEach] [Conformance][templates] templateinstance impersonation tests
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134
STEP: Creating a kubernetes client
Dec 20 23:20:03.544: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
Dec 20 23:20:03.721: INFO: configPath is now "/tmp/extended-test-templates-d4xfc-24hzr-user.kubeconfig"
Dec 20 23:20:03.721: INFO: The user is now "extended-test-templates-d4xfc-24hzr-user"
Dec 20 23:20:03.721: INFO: Creating project "extended-test-templates-d4xfc-24hzr"
Dec 20 23:20:03.805: INFO: Waiting on permissions in project "extended-test-templates-d4xfc-24hzr" ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [Conformance][templates] templateinstance impersonation tests
  /go/src/github.com/openshift/origin/test/extended/templates/templateinstance_impersonation.go:57
Dec 20 23:20:04.610: INFO: configPath is now "/tmp/extended-test-templates-d4xfc-24hzr-adminuser.kubeconfig"
Dec 20 23:20:04.858: INFO: configPath is now "/tmp/extended-test-templates-d4xfc-24hzr-impersonateuser.kubeconfig"
Dec 20 23:20:05.047: INFO: configPath is now "/tmp/extended-test-templates-d4xfc-24hzr-impersonatebygroupuser.kubeconfig"
Dec 20 23:20:05.342: INFO: configPath is now "/tmp/extended-test-templates-d4xfc-24hzr-edituser1.kubeconfig"
Dec 20 23:20:05.514: INFO: configPath is now "/tmp/extended-test-templates-d4xfc-24hzr-edituser2.kubeconfig"
Dec 20 23:20:05.787: INFO: configPath is now "/tmp/extended-test-templates-d4xfc-24hzr-viewuser.kubeconfig"
Dec 20 23:20:05.970: INFO: configPath is now "/tmp/extended-test-templates-d4xfc-24hzr-impersonatebygroupuser.kubeconfig"
[It] should pass impersonation deletion tests [Suite:openshift/conformance/parallel]
  /go/src/github.com/openshift/origin/test/extended/templates/templateinstance_impersonation.go:352
STEP: testing as system:admin user
STEP: testing as extended-test-templates-d4xfc-24hzr-adminuser user
Dec 20 23:20:06.253: INFO: configPath is now "/tmp/extended-test-templates-d4xfc-24hzr-adminuser.kubeconfig"
STEP: testing as extended-test-templates-d4xfc-24hzr-impersonateuser user
Dec 20 23:20:06.557: INFO: configPath is now "/tmp/extended-test-templates-d4xfc-24hzr-impersonateuser.kubeconfig"
STEP: testing as extended-test-templates-d4xfc-24hzr-impersonatebygroupuser user
Dec 20 23:20:06.780: INFO: configPath is now "/tmp/extended-test-templates-d4xfc-24hzr-impersonatebygroupuser.kubeconfig"
STEP: testing as extended-test-templates-d4xfc-24hzr-edituser1 user
Dec 20 23:20:06.977: INFO: configPath is now "/tmp/extended-test-templates-d4xfc-24hzr-edituser1.kubeconfig"
STEP: testing as extended-test-templates-d4xfc-24hzr-edituser2 user
Dec 20 23:20:07.336: INFO: configPath is now "/tmp/extended-test-templates-d4xfc-24hzr-edituser2.kubeconfig"
STEP: testing as extended-test-templates-d4xfc-24hzr-viewuser user
Dec 20 23:20:07.503: INFO: configPath is now "/tmp/extended-test-templates-d4xfc-24hzr-viewuser.kubeconfig"
[AfterEach] [Conformance][templates] templateinstance impersonation tests
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135
Dec 20 23:20:07.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "extended-test-templates-d4xfc-24hzr" for this suite.
Dec 20 23:20:13.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 23:20:15.196: INFO: namespace: extended-test-templates-d4xfc-24hzr, resource: bindings, ignored listing per whitelist
Dec 20 23:20:15.348: INFO: namespace extended-test-templates-d4xfc-24hzr deletion completed in 7.762986796s
[AfterEach] [Conformance][templates] templateinstance impersonation tests
  /go/src/github.com/openshift/origin/test/extended/templates/templateinstance_impersonation.go:221


• [SLOW TEST:11.954 seconds]
[Conformance][templates] templateinstance impersonation tests
/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_impersonation.go:27
  should pass impersonation deletion tests [Suite:openshift/conformance/parallel]
  /go/src/github.com/openshift/origin/test/extended/templates/templateinstance_impersonation.go:352
------------------------------
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root with FSGroup [Feature:FSGroup] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:100

[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:53
[BeforeEach] [sig-storage] ConfigMap
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134
STEP: Creating a kubernetes client
Dec 20 23:20:06.239: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
Dec 20 23:20:06.344: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root with FSGroup [Feature:FSGroup] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:100
STEP: Creating configMap with name configmap-test-volume-map-509b76ea-e5dc-11e7-9a64-0e785a65cbca
STEP: Creating a pod to test consume configMaps
Dec 20 23:20:06.642: INFO: Waiting up to 5m0s for pod "pod-configmaps-509e45bc-e5dc-11e7-9a64-0e785a65cbca" in namespace "e2e-tests-configmap-6q757" to be "success or failure"
Dec 20 23:20:06.659: INFO: Pod "pod-configmaps-509e45bc-e5dc-11e7-9a64-0e785a65cbca": Phase="Pending", Reason="", readiness=false. Elapsed: 17.045398ms
Dec 20 23:20:08.684: INFO: Pod "pod-configmaps-509e45bc-e5dc-11e7-9a64-0e785a65cbca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041792826s
Dec 20 23:20:10.705: INFO: Pod "pod-configmaps-509e45bc-e5dc-11e7-9a64-0e785a65cbca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062987536s
Dec 20 23:20:12.729: INFO: Pod "pod-configmaps-509e45bc-e5dc-11e7-9a64-0e785a65cbca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.086611833s
STEP: Saw pod success
Dec 20 23:20:12.729: INFO: Pod "pod-configmaps-509e45bc-e5dc-11e7-9a64-0e785a65cbca" satisfied condition "success or failure"
Dec 20 23:20:12.772: INFO: Trying to get logs from node ci-primg624-ig-n-h6tg pod pod-configmaps-509e45bc-e5dc-11e7-9a64-0e785a65cbca container configmap-volume-test: <nil>
STEP: delete the pod
Dec 20 23:20:12.854: INFO: Waiting for pod pod-configmaps-509e45bc-e5dc-11e7-9a64-0e785a65cbca to disappear
Dec 20 23:20:12.884: INFO: Pod pod-configmaps-509e45bc-e5dc-11e7-9a64-0e785a65cbca no longer exists
[AfterEach] [sig-storage] ConfigMap
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135
Dec 20 23:20:12.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-6q757" for this suite.
Dec 20 23:20:18.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 23:20:20.156: INFO: namespace: e2e-tests-configmap-6q757, resource: bindings, ignored listing per whitelist
Dec 20 23:20:20.397: INFO: namespace e2e-tests-configmap-6q757 deletion completed in 7.477026329s


• [SLOW TEST:14.158 seconds]
[sig-storage] ConfigMap
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root with FSGroup [Feature:FSGroup] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:100
------------------------------
Dec 20 23:20:20.398: INFO: Running AfterSuite actions on all node


[k8s.io] InitContainer 
  should invoke init containers on a RestartAlways pod [Suite:openshift/conformance/parallel] [Suite:k8s]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/init_container.go:103

[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:53
[BeforeEach] [k8s.io] InitContainer
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134
STEP: Creating a kubernetes client
Dec 20 23:18:48.154: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
Dec 20 23:18:48.271: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/init_container.go:40
[It] should invoke init containers on a RestartAlways pod [Suite:openshift/conformance/parallel] [Suite:k8s]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/init_container.go:103
STEP: creating the pod
Dec 20 23:18:48.521: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135
Dec 20 23:19:09.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-bklmf" for this suite.
Dec 20 23:20:19.304: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 23:20:20.200: INFO: namespace: e2e-tests-init-container-bklmf, resource: bindings, ignored listing per whitelist
Dec 20 23:20:20.690: INFO: namespace e2e-tests-init-container-bklmf deletion completed in 1m11.471166835s


• [SLOW TEST:92.536 seconds]
[k8s.io] InitContainer
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:643
  should invoke init containers on a RestartAlways pod [Suite:openshift/conformance/parallel] [Suite:k8s]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/init_container.go:103
------------------------------
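Editorial note: the InitContainer test above creates a pod whose spec.initContainers must run to completion, in order, before the regular containers start, with RestartPolicy Always. The sketch below shows the general shape of such a pod using the Kubernetes API types; the names, images, and commands are placeholders, not the fixture the test actually creates.

```go
package podwait

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// initContainerPod sketches a RestartAlways pod with two init containers
// that must exit successfully before the regular container runs.
func initContainerPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "init-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init-1", Image: "busybox", Command: []string{"true"}},
				{Name: "init-2", Image: "busybox", Command: []string{"true"}},
			},
			Containers: []corev1.Container{
				{Name: "run-1", Image: "busybox", Command: []string{"sleep", "3600"}},
			},
		},
	}
}
```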
Dec 20 23:20:20.692: INFO: Running AfterSuite actions on all node


[Feature:Builds][Conformance] oc new-app  
  should succeed with a --name of 58 characters [Suite:openshift/conformance/parallel]
  /go/src/github.com/openshift/origin/test/extended/builds/new_app.go:45

[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:53
[BeforeEach] [Feature:Builds][Conformance] oc new-app
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134
STEP: Creating a kubernetes client
Dec 20 23:18:35.969: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
Dec 20 23:18:36.181: INFO: configPath is now "/tmp/extended-test-new-app-5mj4z-tj7p9-user.kubeconfig"
Dec 20 23:18:36.181: INFO: The user is now "extended-test-new-app-5mj4z-tj7p9-user"
Dec 20 23:18:36.181: INFO: Creating project "extended-test-new-app-5mj4z-tj7p9"
Dec 20 23:18:36.350: INFO: Waiting on permissions in project "extended-test-new-app-5mj4z-tj7p9" ...
STEP: Waiting for a default service account to be provisioned in namespace
[JustBeforeEach] 
  /go/src/github.com/openshift/origin/test/extended/builds/new_app.go:26
STEP: waiting for builder service account
STEP: waiting for openshift namespace imagestreams
Dec 20 23:18:36.525: INFO: Running scan #0
Dec 20 23:18:36.525: INFO: Checking language ruby
Dec 20 23:18:36.543: INFO: Checking tag 2.0
Dec 20 23:18:36.543: INFO: Checking tag 2.2
Dec 20 23:18:36.543: INFO: Checking tag 2.3
Dec 20 23:18:36.543: INFO: Checking tag 2.4
Dec 20 23:18:36.543: INFO: Checking tag latest
Dec 20 23:18:36.543: INFO: Checking language nodejs
Dec 20 23:18:36.560: INFO: Checking tag 0.10
Dec 20 23:18:36.561: INFO: Checking tag 4
Dec 20 23:18:36.561: INFO: Checking tag 6
Dec 20 23:18:36.561: INFO: Checking tag latest
Dec 20 23:18:36.561: INFO: Checking language perl
Dec 20 23:18:36.578: INFO: Checking tag 5.16
Dec 20 23:18:36.578: INFO: Checking tag 5.20
Dec 20 23:18:36.578: INFO: Checking tag 5.24
Dec 20 23:18:36.578: INFO: Checking tag latest
Dec 20 23:18:36.578: INFO: Checking language php
Dec 20 23:18:36.594: INFO: Checking tag 7.0
Dec 20 23:18:36.594: INFO: Checking tag latest
Dec 20 23:18:36.594: INFO: Checking tag 5.5
Dec 20 23:18:36.594: INFO: Checking tag 5.6
Dec 20 23:18:36.594: INFO: Checking language python
Dec 20 23:18:36.613: INFO: Checking tag 3.4
Dec 20 23:18:36.613: INFO: Checking tag 3.5
Dec 20 23:18:36.613: INFO: Checking tag latest
Dec 20 23:18:36.613: INFO: Checking tag 2.7
Dec 20 23:18:36.613: INFO: Checking tag 3.3
Dec 20 23:18:36.613: INFO: Checking language wildfly
Dec 20 23:18:36.630: INFO: Checking tag 10.0
Dec 20 23:18:36.630: INFO: Checking tag 10.1
Dec 20 23:18:36.630: INFO: Checking tag 8.1
Dec 20 23:18:36.630: INFO: Checking tag 9.0
Dec 20 23:18:36.630: INFO: Checking tag latest
Dec 20 23:18:36.630: INFO: Checking language mysql
Dec 20 23:18:36.647: INFO: Checking tag latest
Dec 20 23:18:36.647: INFO: Checking tag 5.5
Dec 20 23:18:36.647: INFO: Checking tag 5.6
Dec 20 23:18:36.647: INFO: Checking tag 5.7
Dec 20 23:18:36.647: INFO: Checking language postgresql
Dec 20 23:18:36.664: INFO: Checking tag 9.4
Dec 20 23:18:36.664: INFO: Checking tag 9.5
Dec 20 23:18:36.664: INFO: Checking tag latest
Dec 20 23:18:36.664: INFO: Checking tag 9.2
Dec 20 23:18:36.664: INFO: Checking language mongodb
Dec 20 23:18:36.680: INFO: Checking tag 2.4
Dec 20 23:18:36.680: INFO: Checking tag 2.6
Dec 20 23:18:36.680: INFO: Checking tag 3.2
Dec 20 23:18:36.680: INFO: Checking tag latest
Dec 20 23:18:36.680: INFO: Checking language jenkins
Dec 20 23:18:36.697: INFO: Checking tag 1
Dec 20 23:18:36.697: INFO: Checking tag 2
Dec 20 23:18:36.697: INFO: Checking tag latest
Dec 20 23:18:36.697: INFO: Success!

[It] should succeed with a --name of 58 characters [Suite:openshift/conformance/parallel]
  /go/src/github.com/openshift/origin/test/extended/builds/new_app.go:45
STEP: calling oc new-app
Dec 20 23:18:36.697: INFO: Running 'oc new-app --config=/tmp/extended-test-new-app-5mj4z-tj7p9-user.kubeconfig --namespace=extended-test-new-app-5mj4z-tj7p9 https://github.com/openshift/nodejs-ex --name a234567890123456789012345678901234567890123456789012345678'
--> Found image d5b68e7 (15 hours old) in image stream "openshift/nodejs" under tag "6" for "nodejs"

    Node.js 6 
    --------- 
    Node.js 6 available as docker container is a base platform for building and running various Node.js 6 applications and frameworks. Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.

    Tags: builder, nodejs, nodejs6

    * The source repository appears to match: nodejs
    * A source build using source code from https://github.com/openshift/nodejs-ex will be created
      * The resulting image will be pushed to image stream "a234567890123456789012345678901234567890123456789012345678:latest"
      * Use 'start-build' to trigger a new build
    * This image will be deployed in deployment config "a234567890123456789012345678901234567890123456789012345678"
    * Port 8080/tcp will be load balanced by service "a234567890123456789012345678901234567890123456789012345678"
      * Other containers can access this service through the hostname "a234567890123456789012345678901234567890123456789012345678"

--> Creating resources ...
    imagestream "a234567890123456789012345678901234567890123456789012345678" created
    buildconfig "a234567890123456789012345678901234567890123456789012345678" created
    deploymentconfig "a234567890123456789012345678901234567890123456789012345678" created
    service "a234567890123456789012345678901234567890123456789012345678" created
--> Success
    Build scheduled, use 'oc logs -f bc/a234567890123456789012345678901234567890123456789012345678' to track its progress.
    Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
     'oc expose svc/a234567890123456789012345678901234567890123456789012345678' 
    Run 'oc status' to view your app.
STEP: waiting for the build to complete
STEP: waiting for the deployment to complete
Dec 20 23:19:49.092: INFO: waiting for deploymentconfig extended-test-new-app-5mj4z-tj7p9/a234567890123456789012345678901234567890123456789012345678 to be available with version 1

Dec 20 23:19:56.147: INFO: deploymentconfig extended-test-new-app-5mj4z-tj7p9/a234567890123456789012345678901234567890123456789012345678 available after 7.054321962s
pods: a2345678901234567890123456789012345678901234567890123456784mwcp

[AfterEach] 
  /go/src/github.com/openshift/origin/test/extended/builds/new_app.go:36
[AfterEach] [Feature:Builds][Conformance] oc new-app
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135
Dec 20 23:19:56.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "extended-test-new-app-5mj4z-tj7p9" for this suite.
Dec 20 23:20:20.287: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 23:20:21.409: INFO: namespace: extended-test-new-app-5mj4z-tj7p9, resource: bindings, ignored listing per whitelist
Dec 20 23:20:21.723: INFO: namespace extended-test-new-app-5mj4z-tj7p9 deletion completed in 25.522778773s


• [SLOW TEST:105.755 seconds]
[Feature:Builds][Conformance] oc new-app
/go/src/github.com/openshift/origin/test/extended/builds/new_app.go:16
  
  /go/src/github.com/openshift/origin/test/extended/builds/new_app.go:24
    should succeed with a --name of 58 characters [Suite:openshift/conformance/parallel]
    /go/src/github.com/openshift/origin/test/extended/builds/new_app.go:45
------------------------------
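Editorial note: the oc new-app test above reuses its 58-character --name for the generated imagestream, buildconfig, deploymentconfig, and service. Kubernetes service names must be valid DNS-1035 labels, which allow at most 63 characters, and the generated pod name in the log ("…6784mwcp") comes out at exactly 63, which is presumably why 58 was chosen. Below is a minimal sketch, assuming the k8s.io/apimachinery validation helpers, that checks such a name against the DNS-1035 rules; it is an illustration, not part of the test.

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/validation"
)

func main() {
	// The 58-character name used by the test above; DNS-1035 labels
	// (used for service names) allow at most 63 characters.
	name := "a234567890123456789012345678901234567890123456789012345678"
	fmt.Println("length:", len(name))
	if errs := validation.IsDNS1035Label(name); len(errs) == 0 {
		fmt.Println("valid DNS-1035 label")
	} else {
		fmt.Println("invalid:", errs)
	}
}
```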
Dec 20 23:20:21.725: INFO: Running AfterSuite actions on all node


[sig-storage] Projected 
  should provide container's cpu limit [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648

[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:53
[BeforeEach] [sig-storage] Projected
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134
STEP: Creating a kubernetes client
Dec 20 23:20:08.854: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
Dec 20 23:20:08.961: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should provide container's cpu limit [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648
STEP: Creating a pod to test downward API volume plugin
Dec 20 23:20:09.202: INFO: Waiting up to 5m0s for pod "downwardapi-volume-52243ad8-e5dc-11e7-83a9-0e785a65cbca" in namespace "e2e-tests-projected-kjnm9" to be "success or failure"
Dec 20 23:20:09.217: INFO: Pod "downwardapi-volume-52243ad8-e5dc-11e7-83a9-0e785a65cbca": Phase="Pending", Reason="", readiness=false. Elapsed: 15.235641ms
Dec 20 23:20:11.234: INFO: Pod "downwardapi-volume-52243ad8-e5dc-11e7-83a9-0e785a65cbca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03240852s
Dec 20 23:20:13.251: INFO: Pod "downwardapi-volume-52243ad8-e5dc-11e7-83a9-0e785a65cbca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049054238s
Dec 20 23:20:15.276: INFO: Pod "downwardapi-volume-52243ad8-e5dc-11e7-83a9-0e785a65cbca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.074296786s
STEP: Saw pod success
Dec 20 23:20:15.276: INFO: Pod "downwardapi-volume-52243ad8-e5dc-11e7-83a9-0e785a65cbca" satisfied condition "success or failure"
Dec 20 23:20:15.296: INFO: Trying to get logs from node ci-primg624-ig-n-jf6c pod downwardapi-volume-52243ad8-e5dc-11e7-83a9-0e785a65cbca container client-container: <nil>
STEP: delete the pod
Dec 20 23:20:15.357: INFO: Waiting for pod downwardapi-volume-52243ad8-e5dc-11e7-83a9-0e785a65cbca to disappear
Dec 20 23:20:15.377: INFO: Pod downwardapi-volume-52243ad8-e5dc-11e7-83a9-0e785a65cbca no longer exists
[AfterEach] [sig-storage] Projected
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135
Dec 20 23:20:15.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-kjnm9" for this suite.
Dec 20 23:20:21.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 23:20:22.436: INFO: namespace: e2e-tests-projected-kjnm9, resource: bindings, ignored listing per whitelist
Dec 20 23:20:23.010: INFO: namespace e2e-tests-projected-kjnm9 deletion completed in 7.603577555s


• [SLOW TEST:14.156 seconds]
[sig-storage] Projected
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34
  should provide container's cpu limit [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648
------------------------------
Dec 20 23:20:23.012: INFO: Running AfterSuite actions on all node


[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root  [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648

[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:53
[BeforeEach] [sig-storage] ConfigMap
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134
STEP: Creating a kubernetes client
Dec 20 23:20:10.656: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
Dec 20 23:20:10.716: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root  [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648
STEP: Creating configMap with name configmap-test-volume-map-5329bb2c-e5dc-11e7-899b-0e785a65cbca
STEP: Creating a pod to test consume configMaps
Dec 20 23:20:10.926: INFO: Waiting up to 5m0s for pod "pod-configmaps-532c1db7-e5dc-11e7-899b-0e785a65cbca" in namespace "e2e-tests-configmap-ncbjc" to be "success or failure"
Dec 20 23:20:10.942: INFO: Pod "pod-configmaps-532c1db7-e5dc-11e7-899b-0e785a65cbca": Phase="Pending", Reason="", readiness=false. Elapsed: 15.727224ms
Dec 20 23:20:12.960: INFO: Pod "pod-configmaps-532c1db7-e5dc-11e7-899b-0e785a65cbca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034387457s
Dec 20 23:20:14.977: INFO: Pod "pod-configmaps-532c1db7-e5dc-11e7-899b-0e785a65cbca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050653202s
Dec 20 23:20:16.993: INFO: Pod "pod-configmaps-532c1db7-e5dc-11e7-899b-0e785a65cbca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.066570964s
STEP: Saw pod success
Dec 20 23:20:16.993: INFO: Pod "pod-configmaps-532c1db7-e5dc-11e7-899b-0e785a65cbca" satisfied condition "success or failure"
Dec 20 23:20:17.009: INFO: Trying to get logs from node ci-primg624-ig-n-jf6c pod pod-configmaps-532c1db7-e5dc-11e7-899b-0e785a65cbca container configmap-volume-test: <nil>
STEP: delete the pod
Dec 20 23:20:17.101: INFO: Waiting for pod pod-configmaps-532c1db7-e5dc-11e7-899b-0e785a65cbca to disappear
Dec 20 23:20:17.116: INFO: Pod pod-configmaps-532c1db7-e5dc-11e7-899b-0e785a65cbca no longer exists
[AfterEach] [sig-storage] ConfigMap
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135
Dec 20 23:20:17.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-ncbjc" for this suite.
Dec 20 23:20:23.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 23:20:24.271: INFO: namespace: e2e-tests-configmap-ncbjc, resource: bindings, ignored listing per whitelist
Dec 20 23:20:24.676: INFO: namespace e2e-tests-configmap-ncbjc deletion completed in 7.530500771s


• [SLOW TEST:14.020 seconds]
[sig-storage] ConfigMap
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root  [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648
------------------------------
Dec 20 23:20:24.677: INFO: Running AfterSuite actions on all node


[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set  [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648

[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:53
[BeforeEach] [sig-storage] Secrets
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134
STEP: Creating a kubernetes client
Dec 20 23:20:12.300: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
Dec 20 23:20:12.398: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set  [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648
STEP: Creating secret with name secret-test-map-542ff06b-e5dc-11e7-90e9-0e785a65cbca
STEP: Creating a pod to test consume secrets
Dec 20 23:20:12.655: INFO: Waiting up to 5m0s for pod "pod-secrets-5432829e-e5dc-11e7-90e9-0e785a65cbca" in namespace "e2e-tests-secrets-q6frw" to be "success or failure"
Dec 20 23:20:12.675: INFO: Pod "pod-secrets-5432829e-e5dc-11e7-90e9-0e785a65cbca": Phase="Pending", Reason="", readiness=false. Elapsed: 19.668911ms
Dec 20 23:20:14.748: INFO: Pod "pod-secrets-5432829e-e5dc-11e7-90e9-0e785a65cbca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092511152s
Dec 20 23:20:16.763: INFO: Pod "pod-secrets-5432829e-e5dc-11e7-90e9-0e785a65cbca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.108136744s
Dec 20 23:20:18.779: INFO: Pod "pod-secrets-5432829e-e5dc-11e7-90e9-0e785a65cbca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.123353191s
STEP: Saw pod success
Dec 20 23:20:18.779: INFO: Pod "pod-secrets-5432829e-e5dc-11e7-90e9-0e785a65cbca" satisfied condition "success or failure"
Dec 20 23:20:18.793: INFO: Trying to get logs from node ci-primg624-ig-n-h6tg pod pod-secrets-5432829e-e5dc-11e7-90e9-0e785a65cbca container secret-volume-test: <nil>
STEP: delete the pod
Dec 20 23:20:18.840: INFO: Waiting for pod pod-secrets-5432829e-e5dc-11e7-90e9-0e785a65cbca to disappear
Dec 20 23:20:18.855: INFO: Pod pod-secrets-5432829e-e5dc-11e7-90e9-0e785a65cbca no longer exists
[AfterEach] [sig-storage] Secrets
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135
Dec 20 23:20:18.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-q6frw" for this suite.
Dec 20 23:20:24.931: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 23:20:25.648: INFO: namespace: e2e-tests-secrets-q6frw, resource: bindings, ignored listing per whitelist
Dec 20 23:20:26.326: INFO: namespace e2e-tests-secrets-q6frw deletion completed in 7.442342258s


• [SLOW TEST:14.026 seconds]
[sig-storage] Secrets
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set  [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648
------------------------------
Dec 20 23:20:26.328: INFO: Running AfterSuite actions on all node


[sig-storage] Projected 
  should be consumable from pods in volume with defaultMode set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648

[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:53
[BeforeEach] [sig-storage] Projected
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134
STEP: Creating a kubernetes client
Dec 20 23:20:14.111: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
Dec 20 23:20:14.167: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should be consumable from pods in volume with defaultMode set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648
STEP: Creating projection with secret that has name projected-secret-test-553b97cc-e5dc-11e7-a311-0e785a65cbca
STEP: Creating a pod to test consume secrets
Dec 20 23:20:14.400: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-553e22bd-e5dc-11e7-a311-0e785a65cbca" in namespace "e2e-tests-projected-q8pn7" to be "success or failure"
Dec 20 23:20:14.415: INFO: Pod "pod-projected-secrets-553e22bd-e5dc-11e7-a311-0e785a65cbca": Phase="Pending", Reason="", readiness=false. Elapsed: 14.677221ms
Dec 20 23:20:16.456: INFO: Pod "pod-projected-secrets-553e22bd-e5dc-11e7-a311-0e785a65cbca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055884524s
Dec 20 23:20:18.472: INFO: Pod "pod-projected-secrets-553e22bd-e5dc-11e7-a311-0e785a65cbca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071371228s
Dec 20 23:20:20.489: INFO: Pod "pod-projected-secrets-553e22bd-e5dc-11e7-a311-0e785a65cbca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.088844057s
Dec 20 23:20:22.507: INFO: Pod "pod-projected-secrets-553e22bd-e5dc-11e7-a311-0e785a65cbca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.106657794s
STEP: Saw pod success
Dec 20 23:20:22.507: INFO: Pod "pod-projected-secrets-553e22bd-e5dc-11e7-a311-0e785a65cbca" satisfied condition "success or failure"
Dec 20 23:20:22.526: INFO: Trying to get logs from node ci-primg624-ig-n-h6tg pod pod-projected-secrets-553e22bd-e5dc-11e7-a311-0e785a65cbca container projected-secret-volume-test: <nil>
STEP: delete the pod
Dec 20 23:20:22.607: INFO: Waiting for pod pod-projected-secrets-553e22bd-e5dc-11e7-a311-0e785a65cbca to disappear
Dec 20 23:20:22.631: INFO: Pod pod-projected-secrets-553e22bd-e5dc-11e7-a311-0e785a65cbca no longer exists
[AfterEach] [sig-storage] Projected
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135
Dec 20 23:20:22.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-q8pn7" for this suite.
Dec 20 23:20:28.720: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 23:20:29.369: INFO: namespace: e2e-tests-projected-q8pn7, resource: bindings, ignored listing per whitelist
Dec 20 23:20:30.133: INFO: namespace e2e-tests-projected-q8pn7 deletion completed in 7.464685866s


• [SLOW TEST:16.023 seconds]
[sig-storage] Projected
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34
  should be consumable from pods in volume with defaultMode set [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648
------------------------------
Dec 20 23:20:30.134: INFO: Running AfterSuite actions on all node


[Feature:Builds][Conformance] s2i build with a quota  Building from a template 
  should create an s2i build with a quota and run it [Suite:openshift/conformance/parallel]
  /go/src/github.com/openshift/origin/test/extended/builds/s2i_quota.go:41

[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:53
[BeforeEach] [Feature:Builds][Conformance] s2i build with a quota
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134
STEP: Creating a kubernetes client
Dec 20 23:19:50.927: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
Dec 20 23:19:51.248: INFO: configPath is now "/tmp/extended-test-s2i-build-quota-nzbbl-6jm7x-user.kubeconfig"
Dec 20 23:19:51.248: INFO: The user is now "extended-test-s2i-build-quota-nzbbl-6jm7x-user"
Dec 20 23:19:51.248: INFO: Creating project "extended-test-s2i-build-quota-nzbbl-6jm7x"
Dec 20 23:19:51.465: INFO: Waiting on permissions in project "extended-test-s2i-build-quota-nzbbl-6jm7x" ...
STEP: Waiting for a default service account to be provisioned in namespace
[JustBeforeEach] 
  /go/src/github.com/openshift/origin/test/extended/builds/s2i_quota.go:27
STEP: waiting for builder service account
[It] should create an s2i build with a quota and run it [Suite:openshift/conformance/parallel]
  /go/src/github.com/openshift/origin/test/extended/builds/s2i_quota.go:41
STEP: calling oc create -f "/tmp/fixture-testdata-dir183074261/test/extended/testdata/builds/test-s2i-build-quota.json"
Dec 20 23:19:52.127: INFO: Running 'oc create --config=/tmp/extended-test-s2i-build-quota-nzbbl-6jm7x-user.kubeconfig --namespace=extended-test-s2i-build-quota-nzbbl-6jm7x -f /tmp/fixture-testdata-dir183074261/test/extended/testdata/builds/test-s2i-build-quota.json'
buildconfig "s2i-build-quota" created
STEP: starting a test build
Dec 20 23:19:52.411: INFO: Running 'oc start-build --config=/tmp/extended-test-s2i-build-quota-nzbbl-6jm7x-user.kubeconfig --namespace=extended-test-s2i-build-quota-nzbbl-6jm7x s2i-build-quota --from-dir /tmp/fixture-testdata-dir183074261/test/extended/testdata/builds/build-quota -o=name'
Dec 20 23:19:56.372: INFO: 

start-build output with args [s2i-build-quota --from-dir /tmp/fixture-testdata-dir183074261/test/extended/testdata/builds/build-quota -o=name]:
Error><nil>
StdOut>
build/s2i-build-quota-1
StdErr>
Uploading directory "/tmp/fixture-testdata-dir183074261/test/extended/testdata/builds/build-quota" as binary input for the build ...


Dec 20 23:19:56.373: INFO: Waiting for s2i-build-quota-1 to complete

Dec 20 23:20:22.435: INFO: Done waiting for s2i-build-quota-1: util.BuildResult{BuildPath:"build/s2i-build-quota-1", BuildName:"s2i-build-quota-1", StartBuildStdErr:"Uploading directory \"/tmp/fixture-testdata-dir183074261/test/extended/testdata/builds/build-quota\" as binary input for the build ...", StartBuildStdOut:"build/s2i-build-quota-1", StartBuildErr:error(nil), BuildConfigName:"", Build:(*build.Build)(0xc421ee9800), BuildAttempt:true, BuildSuccess:true, BuildFailure:false, BuildCancelled:false, BuildTimeout:false, LogDumper:(util.LogDumperFunc)(nil), Oc:(*util.CLI)(0xc421752d20)}
 with error: <nil>

STEP: expecting the build logs to contain the correct cgroups values
Dec 20 23:20:22.435: INFO: Running 'oc logs --config=/tmp/extended-test-s2i-build-quota-nzbbl-6jm7x-user.kubeconfig --namespace=extended-test-s2i-build-quota-nzbbl-6jm7x -f build/s2i-build-quota-1 --timestamps'
Dec 20 23:20:22.836: INFO: Found event v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"s2i-build-quota-1.150223fd764af859", GenerateName:"", Namespace:"extended-test-s2i-build-quota-nzbbl-6jm7x", SelfLink:"/api/v1/namespaces/extended-test-s2i-build-quota-nzbbl-6jm7x/events/s2i-build-quota-1.150223fd764af859", UID:"4a753036-e5dc-11e7-99e6-42010a8e0005", ResourceVersion:"19238", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63649408796, loc:(*time.Location)(0x6930d80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Build", Namespace:"extended-test-s2i-build-quota-nzbbl-6jm7x", Name:"s2i-build-quota-1", UID:"48e864e1-e5dc-11e7-99e6-42010a8e0005", APIVersion:"build.openshift.io", ResourceVersion:"19220", FieldPath:""}, Reason:"BuildStarted", Message:"Build extended-test-s2i-build-quota-nzbbl-6jm7x/s2i-build-quota-1 is now running", Source:v1.EventSource{Component:"build-controller", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63649408796, loc:(*time.Location)(0x6930d80)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63649408796, loc:(*time.Location)(0x6930d80)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}
Dec 20 23:20:22.853: INFO: Found event v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"s2i-build-quota-1.150223fd764af859", GenerateName:"", Namespace:"extended-test-s2i-build-quota-nzbbl-6jm7x", SelfLink:"/api/v1/namespaces/extended-test-s2i-build-quota-nzbbl-6jm7x/events/s2i-build-quota-1.150223fd764af859", UID:"4a753036-e5dc-11e7-99e6-42010a8e0005", ResourceVersion:"19238", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63649408796, loc:(*time.Location)(0x6930d80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Build", Namespace:"extended-test-s2i-build-quota-nzbbl-6jm7x", Name:"s2i-build-quota-1", UID:"48e864e1-e5dc-11e7-99e6-42010a8e0005", APIVersion:"build.openshift.io", ResourceVersion:"19220", FieldPath:""}, Reason:"BuildStarted", Message:"Build extended-test-s2i-build-quota-nzbbl-6jm7x/s2i-build-quota-1 is now running", Source:v1.EventSource{Component:"build-controller", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63649408796, loc:(*time.Location)(0x6930d80)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63649408796, loc:(*time.Location)(0x6930d80)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}
Dec 20 23:20:22.853: INFO: Found event v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"s2i-build-quota-1.15022402fd57386c", GenerateName:"", Namespace:"extended-test-s2i-build-quota-nzbbl-6jm7x", SelfLink:"/api/v1/namespaces/extended-test-s2i-build-quota-nzbbl-6jm7x/events/s2i-build-quota-1.15022402fd57386c", UID:"5899439c-e5dc-11e7-99e6-42010a8e0005", ResourceVersion:"21216", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63649408820, loc:(*time.Location)(0x6930d80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Build", Namespace:"extended-test-s2i-build-quota-nzbbl-6jm7x", Name:"s2i-build-quota-1", UID:"48e864e1-e5dc-11e7-99e6-42010a8e0005", APIVersion:"build.openshift.io", ResourceVersion:"21215", FieldPath:""}, Reason:"BuildCompleted", Message:"Build extended-test-s2i-build-quota-nzbbl-6jm7x/s2i-build-quota-1 completed successfully", Source:v1.EventSource{Component:"build-controller", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63649408820, loc:(*time.Location)(0x6930d80)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63649408820, loc:(*time.Location)(0x6930d80)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}
[AfterEach] 
  /go/src/github.com/openshift/origin/test/extended/builds/s2i_quota.go:33
[AfterEach] [Feature:Builds][Conformance] s2i build with a quota
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135
Dec 20 23:20:22.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "extended-test-s2i-build-quota-nzbbl-6jm7x" for this suite.
Dec 20 23:20:28.929: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 23:20:29.534: INFO: namespace: extended-test-s2i-build-quota-nzbbl-6jm7x, resource: bindings, ignored listing per whitelist
Dec 20 23:20:30.352: INFO: namespace extended-test-s2i-build-quota-nzbbl-6jm7x deletion completed in 7.470358751s


• [SLOW TEST:39.426 seconds]
[Feature:Builds][Conformance] s2i build with a quota
/go/src/github.com/openshift/origin/test/extended/builds/s2i_quota.go:14
  
  /go/src/github.com/openshift/origin/test/extended/builds/s2i_quota.go:26
    Building from a template
    /go/src/github.com/openshift/origin/test/extended/builds/s2i_quota.go:40
      should create an s2i build with a quota and run it [Suite:openshift/conformance/parallel]
      /go/src/github.com/openshift/origin/test/extended/builds/s2i_quota.go:41
------------------------------
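Editorial note: the s2i build-quota test above expects the build log to echo the cgroup limits imposed on the build pod by the quota fixture; the exact values it checks come from test-s2i-build-quota.json and are not shown here. As a rough illustration of where such values come from, the sketch below reads the cgroup v1 limit files as a process inside the build container might; the paths assume cgroup v1 (cgroup v2 uses memory.max and cpu.max instead).

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// readLimit returns the trimmed contents of a cgroup limit file, or the
// error text if the file cannot be read (e.g. on a cgroup v2 host).
func readLimit(path string) string {
	b, err := os.ReadFile(path)
	if err != nil {
		return "unavailable: " + err.Error()
	}
	return strings.TrimSpace(string(b))
}

func main() {
	fmt.Println("memory limit:", readLimit("/sys/fs/cgroup/memory/memory.limit_in_bytes"))
	fmt.Println("cpu quota:   ", readLimit("/sys/fs/cgroup/cpu/cpu.cfs_quota_us"))
	fmt.Println("cpu period:  ", readLimit("/sys/fs/cgroup/cpu/cpu.cfs_period_us"))
}
```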
Dec 20 23:20:30.353: INFO: Running AfterSuite actions on all node


[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod  [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648

[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:53
[BeforeEach] [sig-storage] Secrets
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134
STEP: Creating a kubernetes client
Dec 20 23:20:14.971: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
Dec 20 23:20:15.098: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod  [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648
STEP: Creating secret with name secret-test-55d36156-e5dc-11e7-9888-0e785a65cbca
STEP: Creating a pod to test consume secrets
Dec 20 23:20:15.400: INFO: Waiting up to 5m0s for pod "pod-secrets-55d63b71-e5dc-11e7-9888-0e785a65cbca" in namespace "e2e-tests-secrets-pwjb6" to be "success or failure"
Dec 20 23:20:15.415: INFO: Pod "pod-secrets-55d63b71-e5dc-11e7-9888-0e785a65cbca": Phase="Pending", Reason="", readiness=false. Elapsed: 14.874493ms
Dec 20 23:20:17.430: INFO: Pod "pod-secrets-55d63b71-e5dc-11e7-9888-0e785a65cbca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030487951s
Dec 20 23:20:19.446: INFO: Pod "pod-secrets-55d63b71-e5dc-11e7-9888-0e785a65cbca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04580605s
Dec 20 23:20:21.461: INFO: Pod "pod-secrets-55d63b71-e5dc-11e7-9888-0e785a65cbca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06148549s
Dec 20 23:20:23.477: INFO: Pod "pod-secrets-55d63b71-e5dc-11e7-9888-0e785a65cbca": Phase="Pending", Reason="", readiness=false. Elapsed: 8.076865539s
Dec 20 23:20:25.492: INFO: Pod "pod-secrets-55d63b71-e5dc-11e7-9888-0e785a65cbca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.09223096s
STEP: Saw pod success
Dec 20 23:20:25.492: INFO: Pod "pod-secrets-55d63b71-e5dc-11e7-9888-0e785a65cbca" satisfied condition "success or failure"
Dec 20 23:20:25.507: INFO: Trying to get logs from node ci-primg624-ig-n-jf6c pod pod-secrets-55d63b71-e5dc-11e7-9888-0e785a65cbca container secret-volume-test: <nil>
STEP: delete the pod
Dec 20 23:20:25.555: INFO: Waiting for pod pod-secrets-55d63b71-e5dc-11e7-9888-0e785a65cbca to disappear
Dec 20 23:20:25.570: INFO: Pod pod-secrets-55d63b71-e5dc-11e7-9888-0e785a65cbca no longer exists
[AfterEach] [sig-storage] Secrets
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135
Dec 20 23:20:25.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-pwjb6" for this suite.
Dec 20 23:20:31.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 23:20:32.762: INFO: namespace: e2e-tests-secrets-pwjb6, resource: bindings, ignored listing per whitelist
Dec 20 23:20:33.103: INFO: namespace e2e-tests-secrets-pwjb6 deletion completed in 7.504383254s


• [SLOW TEST:18.133 seconds]
[sig-storage] Secrets
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod  [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648
------------------------------
Dec 20 23:20:33.104: INFO: Running AfterSuite actions on all node


[Feature:Builds] build have source revision metadata  started build 
  should contain source revision information [Suite:openshift/conformance/parallel]
  /go/src/github.com/openshift/origin/test/extended/builds/revision.go:37

[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:53
[BeforeEach] [Feature:Builds] build have source revision metadata
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134
STEP: Creating a kubernetes client
Dec 20 23:19:58.869: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
Dec 20 23:19:59.046: INFO: configPath is now "/tmp/extended-test-cli-build-revision-4sshq-g4fsf-user.kubeconfig"
Dec 20 23:19:59.046: INFO: The user is now "extended-test-cli-build-revision-4sshq-g4fsf-user"
Dec 20 23:19:59.046: INFO: Creating project "extended-test-cli-build-revision-4sshq-g4fsf"
Dec 20 23:19:59.127: INFO: Waiting on permissions in project "extended-test-cli-build-revision-4sshq-g4fsf" ...
STEP: Waiting for a default service account to be provisioned in namespace
[JustBeforeEach] 
  /go/src/github.com/openshift/origin/test/extended/builds/revision.go:22
STEP: waiting for builder service account
Dec 20 23:19:59.280: INFO: Running 'oc create --config=/tmp/extended-test-cli-build-revision-4sshq-g4fsf-user.kubeconfig --namespace=extended-test-cli-build-revision-4sshq-g4fsf -f /tmp/fixture-testdata-dir864138440/test/extended/testdata/builds/test-build-revision.json'
buildconfig "sample-build" created
[It] should contain source revision information [Suite:openshift/conformance/parallel]
  /go/src/github.com/openshift/origin/test/extended/builds/revision.go:37
STEP: starting the build
Dec 20 23:19:59.540: INFO: Running 'oc start-build --config=/tmp/extended-test-cli-build-revision-4sshq-g4fsf-user.kubeconfig --namespace=extended-test-cli-build-revision-4sshq-g4fsf sample-build -o=name'
Dec 20 23:19:59.818: INFO: 

start-build output with args [sample-build -o=name]:
Error><nil>
StdOut>
build/sample-build-1
StdErr>



Dec 20 23:19:59.819: INFO: Waiting for sample-build-1 to complete

Dec 20 23:20:25.853: INFO: Done waiting for sample-build-1: util.BuildResult{BuildPath:"build/sample-build-1", BuildName:"sample-build-1", StartBuildStdErr:"", StartBuildStdOut:"build/sample-build-1", StartBuildErr:error(nil), BuildConfigName:"", Build:(*build.Build)(0xc421785b00), BuildAttempt:true, BuildSuccess:true, BuildFailure:false, BuildCancelled:false, BuildTimeout:false, LogDumper:(util.LogDumperFunc)(nil), Oc:(*util.CLI)(0xc421558540)}
 with error: <nil>

STEP: verifying the status of "build/sample-build-1"
[AfterEach] 
  /go/src/github.com/openshift/origin/test/extended/builds/revision.go:29
[AfterEach] [Feature:Builds] build have source revision metadata
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135
Dec 20 23:20:25.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "extended-test-cli-build-revision-4sshq-g4fsf" for this suite.
Dec 20 23:20:31.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 23:20:32.761: INFO: namespace: extended-test-cli-build-revision-4sshq-g4fsf, resource: bindings, ignored listing per whitelist
Dec 20 23:20:33.485: INFO: namespace extended-test-cli-build-revision-4sshq-g4fsf deletion completed in 7.579338645s


• [SLOW TEST:34.616 seconds]
[Feature:Builds] build have source revision metadata
/go/src/github.com/openshift/origin/test/extended/builds/revision.go:14
  
  /go/src/github.com/openshift/origin/test/extended/builds/revision.go:21
    started build
    /go/src/github.com/openshift/origin/test/extended/builds/revision.go:36
      should contain source revision information [Suite:openshift/conformance/parallel]
      /go/src/github.com/openshift/origin/test/extended/builds/revision.go:37
------------------------------
Dec 20 23:20:33.486: INFO: Running AfterSuite actions on all node
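
Editor's note: the test drives the build with the `oc start-build ... -o=name` invocation shown above and then reads the build name from stdout. A minimal way to reproduce that step outside the suite, assuming `oc` is on PATH; the kubeconfig and namespace values are placeholders.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// startBuild shells out to `oc start-build <bc> -o=name`, the same invocation
// recorded in the log, and returns the created build's name (e.g. "build/sample-build-1").
func startBuild(buildConfig, kubeconfig, namespace string) (string, error) {
	out, err := exec.Command("oc", "start-build", buildConfig,
		"--config="+kubeconfig, "--namespace="+namespace, "-o=name").Output()
	if err != nil {
		return "", fmt.Errorf("oc start-build failed: %v", err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	name, err := startBuild("sample-build", "/tmp/admin.kubeconfig", "my-namespace")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("started", name)
}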


[k8s.io] PrivilegedPod 
  should enable privileged commands [Suite:openshift/conformance/parallel] [Suite:k8s]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/privileged.go:47

[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:53
[BeforeEach] [k8s.io] PrivilegedPod
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134
STEP: Creating a kubernetes client
Dec 20 23:19:54.849: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
Dec 20 23:19:55.048: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[It] should enable privileged commands [Suite:openshift/conformance/parallel] [Suite:k8s]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/privileged.go:47
STEP: Creating a pod with a privileged container
STEP: Executing in the privileged container
Dec 20 23:19:59.493: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-tests-e2e-privileged-pod-7gzzd PodName:privileged-pod ContainerName:privileged-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 20 23:19:59.493: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
Dec 20 23:19:59.705: INFO: ExecWithOptions {Command:[ip link del dummy1] Namespace:e2e-tests-e2e-privileged-pod-7gzzd PodName:privileged-pod ContainerName:privileged-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 20 23:19:59.705: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Executing in the non-privileged container
Dec 20 23:19:59.946: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-tests-e2e-privileged-pod-7gzzd PodName:privileged-pod ContainerName:not-privileged-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 20 23:19:59.946: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
[AfterEach] [k8s.io] PrivilegedPod
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135
Dec 20 23:20:00.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-privileged-pod-7gzzd" for this suite.
Dec 20 23:20:38.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 23:20:39.718: INFO: namespace: e2e-tests-e2e-privileged-pod-7gzzd, resource: bindings, ignored listing per whitelist
Dec 20 23:20:39.893: INFO: namespace e2e-tests-e2e-privileged-pod-7gzzd deletion completed in 39.661564582s


• [SLOW TEST:45.045 seconds]
[k8s.io] PrivilegedPod
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:643
  should enable privileged commands [Suite:openshift/conformance/parallel] [Suite:k8s]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/privileged.go:47
------------------------------
Dec 20 23:20:39.895: INFO: Running AfterSuite actions on all node
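
Editor's note: the `ip link add dummy1 type dummy` exec only succeeds in the first container because it runs privileged. A small sketch of the relevant container spec using client-go's corev1 types; the name, image, and command are placeholders, not taken from the test fixture.

package podspec

import corev1 "k8s.io/api/core/v1"

// privilegedContainer returns a container spec with privileged mode enabled,
// which is what allows host-level operations such as creating a dummy link.
func privilegedContainer() corev1.Container {
	privileged := true
	return corev1.Container{
		Name:    "privileged-container",
		Image:   "busybox",
		Command: []string{"sleep", "3600"},
		SecurityContext: &corev1.SecurityContext{
			Privileged: &privileged,
		},
	}
}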


DNS 
  should answer endpoint and wildcard queries for the cluster [Conformance] [Suite:openshift/conformance/parallel]
  /go/src/github.com/openshift/origin/test/extended/dns/dns.go:298

[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:53
[BeforeEach] DNS
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134
STEP: Creating a kubernetes client
Dec 20 23:20:15.500: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should answer endpoint and wildcard queries for the cluster [Conformance] [Suite:openshift/conformance/parallel]
  /go/src/github.com/openshift/origin/test/extended/dns/dns.go:298
STEP: Running these commands:for i in `seq 1 10`; do test -n "$$(dig +notcp +noall +answer +search prefix.kubernetes.default A)" && echo "test_udp@prefix.kubernetes.default";test -n "$$(dig +tcp +noall +answer +search prefix.kubernetes.default A)" && echo "test_tcp@prefix.kubernetes.default";test -n "$$(dig +notcp +noall +answer +search prefix.kubernetes.default.svc A)" && echo "test_udp@prefix.kubernetes.default.svc";test -n "$$(dig +tcp +noall +answer +search prefix.kubernetes.default.svc A)" && echo "test_tcp@prefix.kubernetes.default.svc";test -n "$$(dig +notcp +noall +answer +search prefix.kubernetes.default.svc.cluster.local A)" && echo "test_udp@prefix.kubernetes.default.svc.cluster.local";test -n "$$(dig +tcp +noall +answer +search prefix.kubernetes.default.svc.cluster.local A)" && echo "test_tcp@prefix.kubernetes.default.svc.cluster.local";test -n "$$(dig +notcp +noall +answer +search prefix.clusterip.e2e-tests-dns-jc2fb A)" && echo "test_udp@prefix.clusterip.e2e-tests-dns-jc2fb";test -n "$$(dig +tcp +noall +answer +search prefix.clusterip.e2e-tests-dns-jc2fb A)" && echo "test_tcp@prefix.clusterip.e2e-tests-dns-jc2fb"; test -n "$$(dig +notcp +noall +additional +search _http._tcp.externalname.e2e-tests-dns-jc2fb.svc SRV)" && echo "test_udp@_http._tcp.externalname.e2e-tests-dns-jc2fb.svc";test -n "$$(dig +tcp +noall +additional +search _http._tcp.externalname.e2e-tests-dns-jc2fb.svc SRV)" && echo "test_tcp@_http._tcp.externalname.e2e-tests-dns-jc2fb.svc"; test -n "$$(dig +notcp +noall +answer +search externalname.e2e-tests-dns-jc2fb.svc CNAME)" && echo "test_udp@externalname.e2e-tests-dns-jc2fb.svc";test -n "$$(dig +tcp +noall +answer +search externalname.e2e-tests-dns-jc2fb.svc CNAME)" && echo "test_tcp@externalname.e2e-tests-dns-jc2fb.svc"; [ "$$(dig +short +notcp +noall +answer +search headless.e2e-tests-dns-jc2fb.endpoints A | sort | xargs echo)" = "1.1.1.1 1.1.1.2" ] && echo "test_endpoints@headless.e2e-tests-dns-jc2fb.endpoints";[ "$$(dig +short +notcp +noall +answer +search clusterip.e2e-tests-dns-jc2fb.endpoints A | sort | xargs echo)" = "1.1.1.1 1.1.1.2" ] && echo "test_endpoints@clusterip.e2e-tests-dns-jc2fb.endpoints";[ "$$(dig +short +notcp +noall +answer +search endpoint1.headless.e2e-tests-dns-jc2fb.endpoints A | sort | xargs echo)" = "1.1.1.1" ] && echo "test_endpoints@endpoint1.headless.e2e-tests-dns-jc2fb.endpoints";[ "$$(dig +short +notcp +noall +answer +search endpoint1.clusterip.e2e-tests-dns-jc2fb.endpoints A | sort | xargs echo)" = "1.1.1.1" ] && echo "test_endpoints@endpoint1.clusterip.e2e-tests-dns-jc2fb.endpoints";[ "$$(dig +short +notcp +noall +answer +search kubernetes.default.endpoints A | sort | xargs echo)" = "10.142.0.5" ] && echo "test_endpoints@kubernetes.default.endpoints";[ "$$(dig +short +notcp +noall +answer +search headless.e2e-tests-dns-jc2fb.svc A | sort | xargs echo)" = "1.1.1.1 1.1.1.2" ] && echo "test_endpoints@headless.e2e-tests-dns-jc2fb.svc"; [ "$(dig +short +notcp +noall +answer +search 2.1.1.1.in-addr.arpa PTR)" = "" ] && echo "test_ptr@1.1.1.2";[ "$(dig +short +notcp +noall +answer +search 1.1.1.2.in-addr.arpa PTR)" = "" ] && echo "test_ptr@2.1.1.1";[ "$(dig +short +notcp +noall +answer +search 1.1.1.1.in-addr.arpa PTR)" = "endpoint1.headless.e2e-tests-dns-jc2fb.svc.cluster.local." ] && echo "test_ptr@1.1.1.1"; podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-jc2fb.pod.cluster.local"}');test -n "$$(dig +notcp +noall +answer +search $${podARec} A)" && echo "test_udp@PodARecord";test -n "$$(dig +tcp +noall +answer +search $${podARec} A)" && echo "test_tcp@PodARecord";sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod logs
STEP: looking for the results for each expected name from probers
Dec 20 23:20:34.817: INFO: DNS probes using dns-test-56077d15-e5dc-11e7-819f-0e785a65cbca succeeded

STEP: deleting the pod
[AfterEach] DNS
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135
Dec 20 23:20:34.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-jc2fb" for this suite.
Dec 20 23:20:40.898: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 23:20:41.578: INFO: namespace: e2e-tests-dns-jc2fb, resource: bindings, ignored listing per whitelist
Dec 20 23:20:42.511: INFO: namespace e2e-tests-dns-jc2fb deletion completed in 7.658761212s


• [SLOW TEST:27.011 seconds]
DNS
/go/src/github.com/openshift/origin/test/extended/dns/dns.go:295
  should answer endpoint and wildcard queries for the cluster [Conformance] [Suite:openshift/conformance/parallel]
  /go/src/github.com/openshift/origin/test/extended/dns/dns.go:298
------------------------------
Dec 20 23:20:42.512: INFO: Running AfterSuite actions on all node
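
Editor's note: the probe pod above exercises cluster DNS with a long dig loop. A much smaller sketch of the same idea, intended to run from inside a pod so the cluster resolver in /etc/resolv.conf is used; the namespace placeholder stands in for the test's randomly generated one.

package main

import (
	"fmt"
	"net"
)

// Resolve a few of the names the probe pod queries with dig and print what the
// cluster DNS returns for each.
func main() {
	names := []string{
		"kubernetes.default",
		"kubernetes.default.svc",
		"kubernetes.default.svc.cluster.local",
		"clusterip.<namespace>.svc.cluster.local", // placeholder namespace
	}
	for _, name := range names {
		addrs, err := net.LookupHost(name)
		if err != nil {
			fmt.Printf("%s: lookup failed: %v\n", name, err)
			continue
		}
		fmt.Printf("%s -> %v\n", name, addrs)
	}
}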


[Area:Networking] services when using a plugin that does not isolate namespaces by default 
  should allow connections to pods in different namespaces on the same node via service IPs [Suite:openshift/conformance/parallel]
  /go/src/github.com/openshift/origin/test/extended/networking/services.go:27

[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:53
[BeforeEach] when using a plugin that does not isolate namespaces by default
  /go/src/github.com/openshift/origin/test/extended/networking/util.go:369
[BeforeEach] when using a plugin that does not isolate namespaces by default
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134
STEP: Creating a kubernetes client
Dec 20 23:20:08.677: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when using a plugin that does not isolate namespaces by default
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134
STEP: Creating a kubernetes client
Dec 20 23:20:08.779: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow connections to pods in different namespaces on the same node via service IPs [Suite:openshift/conformance/parallel]
  /go/src/github.com/openshift/origin/test/extended/networking/services.go:27
Dec 20 23:20:09.000: INFO: Using ci-primg624-ig-n-9k4l for test ([ci-primg624-ig-n-9k4l ci-primg624-ig-n-h6tg ci-primg624-ig-n-jf6c] out of [ci-primg624-ig-m-n72c ci-primg624-ig-n-9k4l ci-primg624-ig-n-h6tg ci-primg624-ig-n-jf6c])
Dec 20 23:20:15.100: INFO: Target pod IP:port is 172.16.2.66:8080
Dec 20 23:20:15.193: INFO: Endpoint e2e-tests-net-services1-kppx7/service-7q5p6 is not ready yet
Dec 20 23:20:20.223: INFO: Target service IP:port is 172.30.159.57:8080
Dec 20 23:20:20.223: INFO: Creating an exec pod on node ci-primg624-ig-n-9k4l
Dec 20 23:20:20.223: INFO: Creating new exec pod
Dec 20 23:20:26.294: INFO: Waiting up to 10s to wget 172.30.159.57:8080
Dec 20 23:20:26.294: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://internal-api.primg624.origin-ci-int-gce.dev.rhcloud.com:8443 --kubeconfig=/tmp/cluster-admin.kubeconfig exec --namespace=e2e-tests-net-services2-6w64c execpod-sourceip-ci-primg624-ig-n-9k4lc6jzv -- /bin/sh -c wget -T 30 -qO- 172.30.159.57:8080'
Dec 20 23:20:26.797: INFO: stderr: ""
Dec 20 23:20:26.797: INFO: Cleaning up the exec pod
[AfterEach] when using a plugin that does not isolate namespaces by default
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135
Dec 20 23:20:26.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-net-services1-kppx7" for this suite.
Dec 20 23:20:34.989: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 23:20:35.623: INFO: namespace: e2e-tests-net-services1-kppx7, resource: bindings, ignored listing per whitelist
Dec 20 23:20:36.441: INFO: namespace e2e-tests-net-services1-kppx7 deletion completed in 9.499742381s
[AfterEach] when using a plugin that does not isolate namespaces by default
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135
Dec 20 23:20:36.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-net-services2-6w64c" for this suite.
Dec 20 23:20:42.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 23:20:44.015: INFO: namespace: e2e-tests-net-services2-6w64c, resource: bindings, ignored listing per whitelist
Dec 20 23:20:44.078: INFO: namespace e2e-tests-net-services2-6w64c deletion completed in 7.607485455s


• [SLOW TEST:35.401 seconds]
[Area:Networking] services
/go/src/github.com/openshift/origin/test/extended/networking/services.go:10
  when using a plugin that does not isolate namespaces by default
  /go/src/github.com/openshift/origin/test/extended/networking/util.go:368
    should allow connections to pods in different namespaces on the same node via service IPs [Suite:openshift/conformance/parallel]
    /go/src/github.com/openshift/origin/test/extended/networking/services.go:27
------------------------------
Dec 20 23:20:44.079: INFO: Running AfterSuite actions on all node
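
Editor's note: the connectivity check above boils down to `wget -T 30 -qO- <serviceIP>:8080` run from an exec pod in the other namespace. A rough Go equivalent of that single request; the IP below is the one this run happened to allocate, so substitute whatever your cluster assigns.

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// Issue one HTTP GET against the service IP with a 30s timeout, mirroring the
// exec pod's wget invocation.
func main() {
	client := &http.Client{Timeout: 30 * time.Second}
	resp, err := client.Get("http://172.30.159.57:8080") // placeholder service IP:port
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status %d, %d bytes\n", resp.StatusCode, len(body))
}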


[sig-storage] Projected 
  should update annotations on modification [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648

[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:53
[BeforeEach] [sig-storage] Projected
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134
STEP: Creating a kubernetes client
Dec 20 23:20:09.194: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
Dec 20 23:20:09.349: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should update annotations on modification [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648
STEP: Creating the pod
Dec 20 23:20:18.172: INFO: Successfully updated pod "annotationupdate52589280-e5dc-11e7-97a9-0e785a65cbca"
[AfterEach] [sig-storage] Projected
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135
Dec 20 23:20:22.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-6hwpr" for this suite.
Dec 20 23:20:44.345: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 23:20:44.933: INFO: namespace: e2e-tests-projected-6hwpr, resource: bindings, ignored listing per whitelist
Dec 20 23:20:45.789: INFO: namespace e2e-tests-projected-6hwpr deletion completed in 23.497138315s


• [SLOW TEST:36.594 seconds]
[sig-storage] Projected
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34
  should update annotations on modification [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648
------------------------------
Dec 20 23:20:45.790: INFO: Running AfterSuite actions on all node
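
Editor's note: the annotation-update test relies on a projected downward API volume, so that editing the pod's annotations eventually changes the mounted file. A sketch of that volume source with client-go types; the item path is a placeholder, not the path the e2e fixture uses.

package podspec

import corev1 "k8s.io/api/core/v1"

// annotationsProjection exposes the pod's annotations as a file inside a
// projected volume, so annotation edits show up in the mounted content.
func annotationsProjection() corev1.VolumeSource {
	return corev1.VolumeSource{
		Projected: &corev1.ProjectedVolumeSource{
			Sources: []corev1.VolumeProjection{{
				DownwardAPI: &corev1.DownwardAPIProjection{
					Items: []corev1.DownwardAPIVolumeFile{{
						Path: "annotations", // placeholder file name
						FieldRef: &corev1.ObjectFieldSelector{
							FieldPath: "metadata.annotations",
						},
					}},
				},
			}},
		},
	}
}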


[Area:Networking] multicast when using a plugin that does not isolate namespaces by default 
  should block multicast traffic [Suite:openshift/conformance/parallel]
  /go/src/github.com/openshift/origin/test/extended/networking/multicast.go:25

[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:53
[BeforeEach] when using a plugin that does not isolate namespaces by default
  /go/src/github.com/openshift/origin/test/extended/networking/util.go:369
[BeforeEach] when using a plugin that does not isolate namespaces by default
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134
STEP: Creating a kubernetes client
Dec 20 23:15:40.355: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
Dec 20 23:15:40.587: INFO: configPath is now "/tmp/extended-test-multicast-rr5bc-pw92l-user.kubeconfig"
Dec 20 23:15:40.587: INFO: The user is now "extended-test-multicast-rr5bc-pw92l-user"
Dec 20 23:15:40.587: INFO: Creating project "extended-test-multicast-rr5bc-pw92l"
Dec 20 23:15:40.673: INFO: Waiting on permissions in project "extended-test-multicast-rr5bc-pw92l" ...
STEP: Waiting for a default service account to be provisioned in namespace
[It] should block multicast traffic [Suite:openshift/conformance/parallel]
  /go/src/github.com/openshift/origin/test/extended/networking/multicast.go:25
Dec 20 23:15:40.768: INFO: Waiting up to 5m0s for pod multicast-0                                             status to be running
Dec 20 23:15:40.794: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (25.392813ms elapsed)
Dec 20 23:15:45.809: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (5.040382389s elapsed)
Dec 20 23:15:50.825: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (10.056360501s elapsed)
Dec 20 23:15:55.842: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (15.073471818s elapsed)
Dec 20 23:16:00.859: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (20.090335576s elapsed)
Dec 20 23:16:05.875: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (25.10631613s elapsed)
Dec 20 23:16:10.890: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (30.121612616s elapsed)
Dec 20 23:16:15.905: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (35.136813492s elapsed)
Dec 20 23:16:20.921: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (40.152283462s elapsed)
Dec 20 23:16:25.936: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (45.167723893s elapsed)
Dec 20 23:16:30.953: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (50.184217033s elapsed)
Dec 20 23:16:35.977: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (55.208257928s elapsed)
Dec 20 23:16:40.992: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (1m0.22382574s elapsed)
Dec 20 23:16:46.007: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (1m5.23908032s elapsed)
Dec 20 23:16:51.023: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (1m10.254355432s elapsed)
Dec 20 23:16:56.040: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (1m15.271367269s elapsed)
Dec 20 23:17:01.056: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (1m20.28812166s elapsed)
Dec 20 23:17:06.072: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (1m25.303360745s elapsed)
Dec 20 23:17:11.087: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (1m30.31897745s elapsed)
Dec 20 23:17:16.104: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (1m35.335247521s elapsed)
Dec 20 23:17:21.121: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (1m40.352715331s elapsed)
Dec 20 23:17:26.136: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (1m45.368041663s elapsed)
Dec 20 23:17:31.156: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (1m50.38761167s elapsed)
Dec 20 23:17:36.171: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (1m55.402703704s elapsed)
Dec 20 23:17:41.186: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (2m0.418126785s elapsed)
Dec 20 23:17:46.206: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (2m5.43746703s elapsed)
Dec 20 23:17:51.224: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (2m10.455581893s elapsed)
Dec 20 23:17:56.244: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (2m15.475281448s elapsed)
Dec 20 23:18:01.259: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (2m20.490803926s elapsed)
Dec 20 23:18:06.274: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (2m25.505799874s elapsed)
Dec 20 23:18:11.292: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (2m30.523935228s elapsed)
Dec 20 23:18:16.309: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (2m35.54060866s elapsed)
Dec 20 23:18:21.328: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (2m40.559675584s elapsed)
Dec 20 23:18:26.344: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (2m45.575215487s elapsed)
Dec 20 23:18:31.360: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (2m50.591799167s elapsed)
Dec 20 23:18:36.386: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (2m55.617843295s elapsed)
Dec 20 23:18:41.415: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (3m0.646599366s elapsed)
Dec 20 23:18:46.434: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (3m5.666025845s elapsed)
Dec 20 23:18:51.455: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (3m10.687053879s elapsed)
Dec 20 23:18:56.480: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (3m15.711498885s elapsed)
Dec 20 23:19:01.495: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (3m20.726796484s elapsed)
Dec 20 23:19:06.549: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (3m25.781117784s elapsed)
Dec 20 23:19:11.570: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (3m30.802074285s elapsed)
Dec 20 23:19:16.588: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (3m35.819579691s elapsed)
Dec 20 23:19:21.606: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (3m40.837450121s elapsed)
Dec 20 23:19:26.626: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (3m45.857722301s elapsed)
Dec 20 23:19:31.641: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (3m50.872960249s elapsed)
Dec 20 23:19:36.660: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (3m55.891297643s elapsed)
Dec 20 23:19:41.676: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (4m0.907249231s elapsed)
Dec 20 23:19:46.693: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (4m5.924584763s elapsed)
Dec 20 23:19:51.716: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (4m10.947921673s elapsed)
Dec 20 23:19:56.738: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (4m15.969537418s elapsed)
Dec 20 23:20:01.845: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (4m21.077126801s elapsed)
Dec 20 23:20:06.861: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (4m26.093005719s elapsed)
Dec 20 23:20:11.884: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (4m31.115524267s elapsed)
Dec 20 23:20:16.900: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (4m36.131262644s elapsed)
Dec 20 23:20:21.915: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (4m41.146342926s elapsed)
Dec 20 23:20:26.930: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (4m46.161667562s elapsed)
Dec 20 23:20:31.945: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (4m51.17682362s elapsed)
Dec 20 23:20:36.961: INFO: Waiting for pod multicast-0                                             in namespace 'extended-test-multicast-rr5bc-pw92l' status to be 'running'(found phase: "Failed", readiness: false) (4m56.192356569s elapsed)
[AfterEach] when using a plugin that does not isolate namespaces by default
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135
Dec 20 23:20:41.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "extended-test-multicast-rr5bc-pw92l" for this suite.
Dec 20 23:20:48.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 23:20:49.024: INFO: namespace: extended-test-multicast-rr5bc-pw92l, resource: bindings, ignored listing per whitelist
Dec 20 23:20:49.819: INFO: namespace extended-test-multicast-rr5bc-pw92l deletion completed in 7.828752915s


• [SLOW TEST:309.464 seconds]
[Area:Networking] multicast
/go/src/github.com/openshift/origin/test/extended/networking/multicast.go:20
  when using a plugin that does not isolate namespaces by default
  /go/src/github.com/openshift/origin/test/extended/networking/util.go:368
    should block multicast traffic [Suite:openshift/conformance/parallel]
    /go/src/github.com/openshift/origin/test/extended/networking/multicast.go:25
------------------------------
Dec 20 23:20:49.821: INFO: Running AfterSuite actions on all node
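
Editor's note: the test asserts that multicast traffic between pods is blocked on this plugin. It uses its own probe pods rather than the code below, but as a rough standalone sketch of the scenario: a receiver joins an arbitrary group (not one taken from this run) and expects the read to time out when multicast is blocked.

package main

import (
	"fmt"
	"net"
	"time"
)

// Join a multicast group and wait briefly for any datagram. On a network
// plugin that blocks multicast, the read should fail with a timeout.
func main() {
	group := &net.UDPAddr{IP: net.ParseIP("224.2.2.4"), Port: 5001} // placeholder group/port
	conn, err := net.ListenMulticastUDP("udp4", nil, group)
	if err != nil {
		fmt.Println("join failed:", err)
		return
	}
	defer conn.Close()

	conn.SetReadDeadline(time.Now().Add(10 * time.Second))
	buf := make([]byte, 1500)
	n, src, err := conn.ReadFromUDP(buf)
	if err != nil {
		fmt.Println("no multicast traffic received:", err)
		return
	}
	fmt.Printf("received %d bytes from %v\n", n, src)
}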


[Area:Networking] network isolation when using a plugin that does not isolate namespaces by default 
  should allow communication between pods in different namespaces on different nodes [Suite:openshift/conformance/parallel]
  /go/src/github.com/openshift/origin/test/extended/networking/isolation.go:19

[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:53
[BeforeEach] when using a plugin that does not isolate namespaces by default
  /go/src/github.com/openshift/origin/test/extended/networking/util.go:369
[BeforeEach] when using a plugin that does not isolate namespaces by default
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134
STEP: Creating a kubernetes client
Dec 20 23:20:05.518: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when using a plugin that does not isolate namespaces by default
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134
STEP: Creating a kubernetes client
Dec 20 23:20:05.672: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow communication between pods in different namespaces on different nodes [Suite:openshift/conformance/parallel]
  /go/src/github.com/openshift/origin/test/extended/networking/isolation.go:19
Dec 20 23:20:05.884: INFO: Using ci-primg624-ig-n-9k4l and ci-primg624-ig-n-h6tg for test ([ci-primg624-ig-n-9k4l ci-primg624-ig-n-h6tg ci-primg624-ig-n-jf6c] out of [ci-primg624-ig-m-n72c ci-primg624-ig-n-9k4l ci-primg624-ig-n-h6tg ci-primg624-ig-n-jf6c])
Dec 20 23:20:13.969: INFO: Target pod IP:port is 172.16.2.65:8080
Dec 20 23:20:13.969: INFO: Creating an exec pod on node ci-primg624-ig-n-h6tg
Dec 20 23:20:13.969: INFO: Creating new exec pod
Dec 20 23:20:22.045: INFO: Waiting up to 10s to wget 172.16.2.65:8080
Dec 20 23:20:22.046: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://internal-api.primg624.origin-ci-int-gce.dev.rhcloud.com:8443 --kubeconfig=/tmp/cluster-admin.kubeconfig exec --namespace=e2e-tests-net-isolation2-4p5gj execpod-sourceip-ci-primg624-ig-n-h6tgtqwsg -- /bin/sh -c wget -T 30 -qO- 172.16.2.65:8080'
Dec 20 23:20:22.639: INFO: stderr: ""
Dec 20 23:20:22.639: INFO: Cleaning up the exec pod
[AfterEach] when using a plugin that does not isolate namespaces by default
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135
Dec 20 23:20:22.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-net-isolation1-9lgjb" for this suite.
Dec 20 23:20:44.771: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 23:20:45.633: INFO: namespace: e2e-tests-net-isolation1-9lgjb, resource: bindings, ignored listing per whitelist
Dec 20 23:20:46.192: INFO: namespace e2e-tests-net-isolation1-9lgjb deletion completed in 23.470398425s
[AfterEach] when using a plugin that does not isolate namespaces by default
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135
Dec 20 23:20:46.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-net-isolation2-4p5gj" for this suite.
Dec 20 23:20:52.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 23:20:53.322: INFO: namespace: e2e-tests-net-isolation2-4p5gj, resource: bindings, ignored listing per whitelist
Dec 20 23:20:53.698: INFO: namespace e2e-tests-net-isolation2-4p5gj deletion completed in 7.475499982s


• [SLOW TEST:48.180 seconds]
[Area:Networking] network isolation
/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:10
  when using a plugin that does not isolate namespaces by default
  /go/src/github.com/openshift/origin/test/extended/networking/util.go:368
    should allow communication between pods in different namespaces on different nodes [Suite:openshift/conformance/parallel]
    /go/src/github.com/openshift/origin/test/extended/networking/isolation.go:19
------------------------------
Dec 20 23:20:53.699: INFO: Running AfterSuite actions on all node


[Feature:DeploymentConfig] deploymentconfigs  
  should adhere to Three Laws of Controllers [Conformance] [Suite:openshift/conformance/parallel]
  /go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1059

[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:53
[BeforeEach] [Feature:DeploymentConfig] deploymentconfigs
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134
STEP: Creating a kubernetes client
Dec 20 23:19:31.574: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
Dec 20 23:19:31.873: INFO: configPath is now "/tmp/extended-test-cli-deployment-8bshs-92ck5-user.kubeconfig"
Dec 20 23:19:31.873: INFO: The user is now "extended-test-cli-deployment-8bshs-92ck5-user"
Dec 20 23:19:31.873: INFO: Creating project "extended-test-cli-deployment-8bshs-92ck5"
Dec 20 23:19:31.973: INFO: Waiting on permissions in project "extended-test-cli-deployment-8bshs-92ck5" ...
STEP: Waiting for a default service account to be provisioned in namespace
[JustBeforeEach] [Feature:DeploymentConfig] deploymentconfigs
  /go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:43
[It] should adhere to Three Laws of Controllers [Conformance] [Suite:openshift/conformance/parallel]
  /go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1059
STEP: should create ControllerRef in RCs it creates
Dec 20 23:20:06.936: INFO: Latest rollout of dc/deployment-simple (rc/deployment-simple-1) is complete.
STEP: releasing RCs that no longer match its selector
STEP: adopting RCs that match its selector and have no ControllerRef
STEP: deleting owned RCs when deleted
[AfterEach] 
  /go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1054
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
  /go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:62
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135
Dec 20 23:20:49.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "extended-test-cli-deployment-8bshs-92ck5" for this suite.
Dec 20 23:20:55.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 23:20:56.061: INFO: namespace: extended-test-cli-deployment-8bshs-92ck5, resource: bindings, ignored listing per whitelist
Dec 20 23:20:56.650: INFO: namespace extended-test-cli-deployment-8bshs-92ck5 deletion completed in 7.488383501s


• [SLOW TEST:85.076 seconds]
[Feature:DeploymentConfig] deploymentconfigs
/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:37
  
  /go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1052
    should adhere to Three Laws of Controllers [Conformance] [Suite:openshift/conformance/parallel]
    /go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:1059
------------------------------
Dec 20 23:20:56.651: INFO: Running AfterSuite actions on all node
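
Editor's note: the "Three Laws" steps above revolve around the ControllerRef (an ownerReference with controller=true) that the deploymentconfig controller sets on the RCs it creates. A sketch of what such a reference looks like, built with apimachinery types; the UID is a placeholder that a real cluster takes from the owning DeploymentConfig, and the API group/version may differ by OpenShift release.

package ownerref

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

// controllerRef builds the kind of ControllerRef the test checks for on the
// replication controllers a deploymentconfig owns.
func controllerRef(dcName string, uid types.UID) metav1.OwnerReference {
	isController := true
	blockDeletion := true
	return metav1.OwnerReference{
		APIVersion:         "apps.openshift.io/v1", // may vary by cluster version
		Kind:               "DeploymentConfig",
		Name:               dcName,
		UID:                uid,
		Controller:         &isController,
		BlockOwnerDeletion: &blockDeletion,
	}
}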


[Feature:DeploymentConfig] deploymentconfigs with failing hook [Conformance] 
  should get all logs from retried hooks [Suite:openshift/conformance/parallel]
  /go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:757

[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:53
[BeforeEach] [Feature:DeploymentConfig] deploymentconfigs
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134
STEP: Creating a kubernetes client
Dec 20 23:20:14.318: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
Dec 20 23:20:14.503: INFO: configPath is now "/tmp/extended-test-cli-deployment-tsss9-nvp8b-user.kubeconfig"
Dec 20 23:20:14.503: INFO: The user is now "extended-test-cli-deployment-tsss9-nvp8b-user"
Dec 20 23:20:14.503: INFO: Creating project "extended-test-cli-deployment-tsss9-nvp8b"
Dec 20 23:20:14.611: INFO: Waiting on permissions in project "extended-test-cli-deployment-tsss9-nvp8b" ...
STEP: Waiting for a default service account to be provisioned in namespace
[JustBeforeEach] [Feature:DeploymentConfig] deploymentconfigs
  /go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:43
[It] should get all logs from retried hooks [Suite:openshift/conformance/parallel]
  /go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:757
Dec 20 23:20:14.763: INFO: Running 'oc create --config=/tmp/extended-test-cli-deployment-tsss9-nvp8b-user.kubeconfig --namespace=extended-test-cli-deployment-tsss9-nvp8b -f /tmp/fixture-testdata-dir233739697/test/extended/testdata/deployments/failing-pre-hook.yaml -o name'
Dec 20 23:20:26.359: INFO: Running 'oc logs --config=/tmp/extended-test-cli-deployment-tsss9-nvp8b-user.kubeconfig --namespace=extended-test-cli-deployment-tsss9-nvp8b deploymentconfig/hook'
STEP: checking the logs for substrings
--> pre: Running hook pod ...
pre hook logs
--> pre: Retrying hook pod (retry #1)
pre hook logs
[AfterEach] with failing hook [Conformance]
  /go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:753
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
  /go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:62
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135
Dec 20 23:20:28.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "extended-test-cli-deployment-tsss9-nvp8b" for this suite.
Dec 20 23:21:06.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 23:21:08.073: INFO: namespace: extended-test-cli-deployment-tsss9-nvp8b, resource: bindings, ignored listing per whitelist
Dec 20 23:21:08.238: INFO: namespace extended-test-cli-deployment-tsss9-nvp8b deletion completed in 39.506344746s


• [SLOW TEST:53.920 seconds]
[Feature:DeploymentConfig] deploymentconfigs
/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:37
  with failing hook [Conformance]
  /go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:752
    should get all logs from retried hooks [Suite:openshift/conformance/parallel]
    /go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:757
------------------------------
Dec 20 23:21:08.239: INFO: Running AfterSuite actions on all node
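
Editor's note: the assertion above is that `oc logs deploymentconfig/hook` contains output from both the first hook attempt and the retry. A small way to repeat that check by hand, assuming `oc` is on PATH; the kubeconfig and namespace values are placeholders.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Fetch the deployment hook logs the same way the test does and verify that
// both the initial attempt banner and the retry banner appear.
func main() {
	out, err := exec.Command("oc", "logs",
		"--config=/tmp/user.kubeconfig", "--namespace=my-namespace",
		"deploymentconfig/hook").CombinedOutput()
	if err != nil {
		fmt.Println("oc logs failed:", err)
		return
	}
	logs := string(out)
	for _, want := range []string{
		"--> pre: Running hook pod ...",
		"--> pre: Retrying hook pod (retry #1)",
	} {
		fmt.Printf("%q found: %v\n", want, strings.Contains(logs, want))
	}
}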


[Feature:Builds][Conformance] s2i build with a root user image  
  should create a root build and pass with a privileged SCC [Suite:openshift/conformance/parallel]
  /go/src/github.com/openshift/origin/test/extended/builds/s2i_root.go:75

[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:53
[BeforeEach] [Feature:Builds][Conformance] s2i build with a root user image
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134
STEP: Creating a kubernetes client
Dec 20 23:19:13.321: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
Dec 20 23:19:13.623: INFO: configPath is now "/tmp/extended-test-s2i-build-root-f27mh-swqbs-user.kubeconfig"
Dec 20 23:19:13.623: INFO: The user is now "extended-test-s2i-build-root-f27mh-swqbs-user"
Dec 20 23:19:13.623: INFO: Creating project "extended-test-s2i-build-root-f27mh-swqbs"
Dec 20 23:19:13.896: INFO: Waiting on permissions in project "extended-test-s2i-build-root-f27mh-swqbs" ...
STEP: Waiting for a default service account to be provisioned in namespace
[JustBeforeEach] 
  /go/src/github.com/openshift/origin/test/extended/builds/s2i_root.go:24
STEP: waiting for builder service account
STEP: creating a root build container
Dec 20 23:19:14.454: INFO: Running 'oc new-build --config=/tmp/extended-test-s2i-build-root-f27mh-swqbs-user.kubeconfig --namespace=extended-test-s2i-build-root-f27mh-swqbs -D FROM centos/nodejs-6-centos7
USER 0 --name nodejsroot'
--> Found Docker image d5b68e7 (15 hours old) from Docker Hub for "centos/nodejs-6-centos7"

    Node.js 6 
    --------- 
    Node.js 6 available as docker container is a base platform for building and running various Node.js 6 applications and frameworks. Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.

    Tags: builder, nodejs, nodejs6

    * An image stream will be created as "nodejs-6-centos7:latest" that will track the source image
    * A Docker build using a predefined Dockerfile will be created
      * The resulting image will be pushed to image stream "nodejsroot:latest"
      * Every time "nodejs-6-centos7:latest" changes a new build will be triggered

--> Creating resources with label build=nodejsroot ...
    imagestream "nodejs-6-centos7" created
    imagestream "nodejsroot" created
    buildconfig "nodejsroot" created
--> Success
    Build configuration "nodejsroot" created and build triggered.
    Run 'oc logs -f bc/nodejsroot' to stream the build progress.
[It] should create a root build and pass with a privileged SCC [Suite:openshift/conformance/parallel]
  /go/src/github.com/openshift/origin/test/extended/builds/s2i_root.go:75
STEP: adding builder account to privileged SCC
Dec 20 23:20:06.610: INFO: Running 'oc new-app --config=/tmp/extended-test-s2i-build-root-f27mh-swqbs-user.kubeconfig --namespace=extended-test-s2i-build-root-f27mh-swqbs nodejsroot~https://github.com/openshift/nodejs-ex --name nodejspass'
--> Found image 03dd1e6 (38 seconds old) in image stream "extended-test-s2i-build-root-f27mh-swqbs/nodejsroot" under tag "latest" for "nodejsroot"

    Node.js 6 
    --------- 
    Node.js 6 available as docker container is a base platform for building and running various Node.js 6 applications and frameworks. Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.

    Tags: builder, nodejs, nodejs6

    * A source build using source code from https://github.com/openshift/nodejs-ex will be created
      * The resulting image will be pushed to image stream "nodejspass:latest"
      * Use 'start-build' to trigger a new build
    * This image will be deployed in deployment config "nodejspass"
    * Port 8080/tcp will be load balanced by service "nodejspass"
      * Other containers can access this service through the hostname "nodejspass"
    * WARNING: Image "extended-test-s2i-build-root-f27mh-swqbs/nodejsroot:latest" runs as the 'root' user which may not be permitted by your cluster administrator

--> Creating resources ...
    imagestream "nodejspass" created
    buildconfig "nodejspass" created
    deploymentconfig "nodejspass" created
    service "nodejspass" created
--> Success
    Build scheduled, use 'oc logs -f bc/nodejspass' to track its progress.
    Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
     'oc expose svc/nodejspass' 
    Run 'oc status' to view your app.
[AfterEach] 
  /go/src/github.com/openshift/origin/test/extended/builds/s2i_root.go:37
[AfterEach] [Feature:Builds][Conformance] s2i build with a root user image
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135
Dec 20 23:20:43.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "extended-test-s2i-build-root-f27mh-swqbs" for this suite.
Dec 20 23:21:07.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 23:21:08.592: INFO: namespace: extended-test-s2i-build-root-f27mh-swqbs, resource: bindings, ignored listing per whitelist
Dec 20 23:21:08.768: INFO: namespace extended-test-s2i-build-root-f27mh-swqbs deletion completed in 25.525532252s


• [SLOW TEST:115.447 seconds]
[Feature:Builds][Conformance] s2i build with a root user image
/go/src/github.com/openshift/origin/test/extended/builds/s2i_root.go:16
  
  /go/src/github.com/openshift/origin/test/extended/builds/s2i_root.go:23
    should create a root build and pass with a privileged SCC [Suite:openshift/conformance/parallel]
    /go/src/github.com/openshift/origin/test/extended/builds/s2i_root.go:75
------------------------------
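
Editor's note: the "adding builder account to privileged SCC" step is what lets the root (USER 0) build above pass. The test grants it through the API; one hand-run equivalent, with a placeholder namespace, is the `oc adm policy add-scc-to-user` command wrapped below.

package main

import (
	"fmt"
	"os/exec"
)

// Grant the namespace's builder service account access to the privileged SCC,
// mirroring what the test does programmatically.
func main() {
	cmd := exec.Command("oc", "adm", "policy", "add-scc-to-user", "privileged",
		"-z", "builder", "--namespace=my-namespace")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("command failed:", err)
	}
}
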
[sig-storage] Projected 
  updates should be reflected in volume [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648

[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:53
[BeforeEach] [sig-storage] Projected
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134
STEP: Creating a kubernetes client
Dec 20 23:19:14.492: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
Dec 20 23:19:14.594: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] updates should be reflected in volume [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648
Dec 20 23:19:14.855: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node
STEP: Creating projection with configMap that has name projected-configmap-test-upd-31c37fcf-e5dc-11e7-9f09-0e785a65cbca
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-31c37fcf-e5dc-11e7-9f09-0e785a65cbca
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135
Dec 20 23:20:46.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-sjhrx" for this suite.
Dec 20 23:21:08.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 23:21:10.165: INFO: namespace: e2e-tests-projected-sjhrx, resource: bindings, ignored listing per whitelist
Dec 20 23:21:10.443: INFO: namespace e2e-tests-projected-sjhrx deletion completed in 23.493400083s


• [SLOW TEST:115.951 seconds]
[sig-storage] Projected
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:34
  updates should be reflected in volume [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648
------------------------------
Dec 20 23:21:10.444: INFO: Running AfterSuite actions on all node
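
Editor's note: from the client side, "waiting to observe update in volume" amounts to re-reading the projected file until its content changes after the ConfigMap edit. A minimal sketch of that poll; the mount path is a placeholder, and kubelet sync can take on the order of a minute.

package main

import (
	"fmt"
	"os"
	"time"
)

// Poll the projected file until its content differs from the initial read or
// the polling window expires.
func main() {
	const path = "/etc/projected-configmap-volume/data-1" // placeholder mount path
	initial, err := os.ReadFile(path)
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	for i := 0; i < 120; i++ {
		cur, err := os.ReadFile(path)
		if err == nil && string(cur) != string(initial) {
			fmt.Printf("update observed: %q -> %q\n", initial, cur)
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("no update observed within the polling window")
}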


[Feature:DeploymentConfig] deploymentconfigs initially [Conformance] 
  should not deploy if pods never transition to ready [Suite:openshift/conformance/parallel]
  /go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:862

[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:53
[BeforeEach] [Feature:DeploymentConfig] deploymentconfigs
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134
STEP: Creating a kubernetes client
Dec 20 23:19:55.926: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
Dec 20 23:19:56.115: INFO: configPath is now "/tmp/extended-test-cli-deployment-m6tnm-qll2c-user.kubeconfig"
Dec 20 23:19:56.115: INFO: The user is now "extended-test-cli-deployment-m6tnm-qll2c-user"
Dec 20 23:19:56.115: INFO: Creating project "extended-test-cli-deployment-m6tnm-qll2c"
Dec 20 23:19:56.306: INFO: Waiting on permissions in project "extended-test-cli-deployment-m6tnm-qll2c" ...
STEP: Waiting for a default service account to be provisioned in namespace
[JustBeforeEach] [Feature:DeploymentConfig] deploymentconfigs
  /go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:43
[It] should not deploy if pods never transition to ready [Suite:openshift/conformance/parallel]
  /go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:862
Dec 20 23:19:56.457: INFO: Running 'oc create --config=/tmp/extended-test-cli-deployment-m6tnm-qll2c-user.kubeconfig --namespace=extended-test-cli-deployment-m6tnm-qll2c -f /tmp/fixture-testdata-dir763500669/test/extended/testdata/deployments/readiness-test.yaml -o name'
STEP: waiting for the deployment to fail
[AfterEach] initially [Conformance]
  /go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:858
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
  /go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:62
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135
Dec 20 23:20:34.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "extended-test-cli-deployment-m6tnm-qll2c" for this suite.
Dec 20 23:21:12.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 23:21:13.408: INFO: namespace: extended-test-cli-deployment-m6tnm-qll2c, resource: bindings, ignored listing per whitelist
Dec 20 23:21:13.578: INFO: namespace extended-test-cli-deployment-m6tnm-qll2c deletion completed in 39.489273455s


• [SLOW TEST:77.652 seconds]
[Feature:DeploymentConfig] deploymentconfigs
/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:37
  initially [Conformance]
  /go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:857
    should not deploy if pods never transition to ready [Suite:openshift/conformance/parallel]
    /go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:862
------------------------------
Dec 20 23:21:13.578: INFO: Running AfterSuite actions on all node


[Feature:DeploymentConfig] deploymentconfigs with revision history limits [Conformance] 
  should never persist more old deployments than acceptable after being observed by the controller [Suite:openshift/conformance/parallel]
  /go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:877

[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:53
[BeforeEach] [Feature:DeploymentConfig] deploymentconfigs
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134
STEP: Creating a kubernetes client
Dec 20 23:18:07.163: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
Dec 20 23:18:07.342: INFO: configPath is now "/tmp/extended-test-cli-deployment-cldkd-z4xd8-user.kubeconfig"
Dec 20 23:18:07.342: INFO: The user is now "extended-test-cli-deployment-cldkd-z4xd8-user"
Dec 20 23:18:07.342: INFO: Creating project "extended-test-cli-deployment-cldkd-z4xd8"
Dec 20 23:18:07.421: INFO: Waiting on permissions in project "extended-test-cli-deployment-cldkd-z4xd8" ...
STEP: Waiting for a default service account to be provisioned in namespace
[JustBeforeEach] [Feature:DeploymentConfig] deploymentconfigs
  /go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:43
[It] should never persist more old deployments than acceptable after being observed by the controller [Suite:openshift/conformance/parallel]
  /go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:877
Dec 20 23:18:07.468: INFO: Running 'oc create --config=/tmp/extended-test-cli-deployment-cldkd-z4xd8-user.kubeconfig --namespace=extended-test-cli-deployment-cldkd-z4xd8 -f /tmp/fixture-testdata-dir737804120/test/extended/testdata/deployments/deployment-history-limit.yaml -o name'
Dec 20 23:18:23.502: INFO: Latest rollout of dc/history-limit (rc/history-limit-1) is complete.
Dec 20 23:18:23.502: INFO: 00: triggering a new deployment with config change
Dec 20 23:18:23.502: INFO: Running 'oc set env --config=/tmp/extended-test-cli-deployment-cldkd-z4xd8-user.kubeconfig --namespace=extended-test-cli-deployment-cldkd-z4xd8 dc/history-limit A=0'
Dec 20 23:18:45.731: INFO: Latest rollout of dc/history-limit (rc/history-limit-2) is complete.
Dec 20 23:18:45.731: INFO: 01: triggering a new deployment with config change
Dec 20 23:18:45.731: INFO: Running 'oc set env --config=/tmp/extended-test-cli-deployment-cldkd-z4xd8-user.kubeconfig --namespace=extended-test-cli-deployment-cldkd-z4xd8 dc/history-limit A=1'
Dec 20 23:19:06.805: INFO: Latest rollout of dc/history-limit (rc/history-limit-3) is complete.
Dec 20 23:19:06.805: INFO: 02: triggering a new deployment with config change
Dec 20 23:19:06.805: INFO: Running 'oc set env --config=/tmp/extended-test-cli-deployment-cldkd-z4xd8-user.kubeconfig --namespace=extended-test-cli-deployment-cldkd-z4xd8 dc/history-limit A=2'
Dec 20 23:19:22.087: INFO: Latest rollout of dc/history-limit (rc/history-limit-4) is complete.
Dec 20 23:19:22.087: INFO: 03: triggering a new deployment with config change
Dec 20 23:19:22.087: INFO: Running 'oc set env --config=/tmp/extended-test-cli-deployment-cldkd-z4xd8-user.kubeconfig --namespace=extended-test-cli-deployment-cldkd-z4xd8 dc/history-limit A=3'
Dec 20 23:19:34.526: INFO: Latest rollout of dc/history-limit (rc/history-limit-5) is complete.
Dec 20 23:19:34.526: INFO: 04: triggering a new deployment with config change
Dec 20 23:19:34.526: INFO: Running 'oc set env --config=/tmp/extended-test-cli-deployment-cldkd-z4xd8-user.kubeconfig --namespace=extended-test-cli-deployment-cldkd-z4xd8 dc/history-limit A=4'
Dec 20 23:19:55.799: INFO: Latest rollout of dc/history-limit (rc/history-limit-6) is complete.
Dec 20 23:19:55.799: INFO: 05: triggering a new deployment with config change
Dec 20 23:19:55.799: INFO: Running 'oc set env --config=/tmp/extended-test-cli-deployment-cldkd-z4xd8-user.kubeconfig --namespace=extended-test-cli-deployment-cldkd-z4xd8 dc/history-limit A=5'
Dec 20 23:20:12.482: INFO: Latest rollout of dc/history-limit (rc/history-limit-7) is complete.
Dec 20 23:20:12.482: INFO: 06: triggering a new deployment with config change
Dec 20 23:20:12.482: INFO: Running 'oc set env --config=/tmp/extended-test-cli-deployment-cldkd-z4xd8-user.kubeconfig --namespace=extended-test-cli-deployment-cldkd-z4xd8 dc/history-limit A=6'
Dec 20 23:20:24.366: INFO: Latest rollout of dc/history-limit (rc/history-limit-8) is complete.
Dec 20 23:20:24.366: INFO: 07: triggering a new deployment with config change
Dec 20 23:20:24.366: INFO: Running 'oc set env --config=/tmp/extended-test-cli-deployment-cldkd-z4xd8-user.kubeconfig --namespace=extended-test-cli-deployment-cldkd-z4xd8 dc/history-limit A=7'
Dec 20 23:20:35.000: INFO: Latest rollout of dc/history-limit (rc/history-limit-9) is complete.
Dec 20 23:20:35.000: INFO: 08: triggering a new deployment with config change
Dec 20 23:20:35.000: INFO: Running 'oc set env --config=/tmp/extended-test-cli-deployment-cldkd-z4xd8-user.kubeconfig --namespace=extended-test-cli-deployment-cldkd-z4xd8 dc/history-limit A=8'
Dec 20 23:20:44.433: INFO: Latest rollout of dc/history-limit (rc/history-limit-10) is complete.
Dec 20 23:20:44.433: INFO: 09: triggering a new deployment with config change
Dec 20 23:20:44.433: INFO: Running 'oc set env --config=/tmp/extended-test-cli-deployment-cldkd-z4xd8-user.kubeconfig --namespace=extended-test-cli-deployment-cldkd-z4xd8 dc/history-limit A=9'
STEP: waiting for the deployment to complete
Dec 20 23:20:58.770: INFO: Latest rollout of dc/history-limit (rc/history-limit-11) is complete.
[AfterEach] with revision history limits [Conformance]
  /go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:873
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
  /go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:62
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135
Dec 20 23:21:00.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "extended-test-cli-deployment-cldkd-z4xd8" for this suite.
Dec 20 23:21:45.013: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 23:21:45.989: INFO: namespace: extended-test-cli-deployment-cldkd-z4xd8, resource: bindings, ignored listing per whitelist
Dec 20 23:21:46.424: INFO: namespace extended-test-cli-deployment-cldkd-z4xd8 deletion completed in 45.459696111s


• [SLOW TEST:219.261 seconds]
[Feature:DeploymentConfig] deploymentconfigs
/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:37
  with revision history limits [Conformance]
  /go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:872
    should never persist more old deployments than acceptable after being observed by the controller [Suite:openshift/conformance/parallel]
    /go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:877
------------------------------
Dec 20 23:21:46.426: INFO: Running AfterSuite actions on all node


[Feature:DeploymentConfig] deploymentconfigs rolled back [Conformance] 
  should rollback to an older deployment [Suite:openshift/conformance/parallel]
  /go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:778

[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:53
[BeforeEach] [Feature:DeploymentConfig] deploymentconfigs
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134
STEP: Creating a kubernetes client
Dec 20 23:20:01.298: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
Dec 20 23:20:01.468: INFO: configPath is now "/tmp/extended-test-cli-deployment-p6xwv-l4s49-user.kubeconfig"
Dec 20 23:20:01.468: INFO: The user is now "extended-test-cli-deployment-p6xwv-l4s49-user"
Dec 20 23:20:01.468: INFO: Creating project "extended-test-cli-deployment-p6xwv-l4s49"
Dec 20 23:20:01.573: INFO: Waiting on permissions in project "extended-test-cli-deployment-p6xwv-l4s49" ...
STEP: Waiting for a default service account to be provisioned in namespace
[JustBeforeEach] [Feature:DeploymentConfig] deploymentconfigs
  /go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:43
[It] should rollback to an older deployment [Suite:openshift/conformance/parallel]
  /go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:778
Dec 20 23:20:01.606: INFO: Running 'oc create --config=/tmp/extended-test-cli-deployment-p6xwv-l4s49-user.kubeconfig --namespace=extended-test-cli-deployment-p6xwv-l4s49 -f /tmp/fixture-testdata-dir080860640/test/extended/testdata/deployments/deployment-simple.yaml -o name'
Dec 20 23:20:22.647: INFO: Latest rollout of dc/deployment-simple (rc/deployment-simple-1) is complete.
Dec 20 23:20:22.647: INFO: Running 'oc rollout --config=/tmp/extended-test-cli-deployment-p6xwv-l4s49-user.kubeconfig --namespace=extended-test-cli-deployment-p6xwv-l4s49 latest deployment-simple'
STEP: verifying that we are on the second version
Dec 20 23:20:22.973: INFO: Running 'oc get --config=/tmp/extended-test-cli-deployment-p6xwv-l4s49-user.kubeconfig --namespace=extended-test-cli-deployment-p6xwv-l4s49 deploymentconfig/deployment-simple --output=jsonpath="{.status.latestVersion}"'
Dec 20 23:20:45.536: INFO: Latest rollout of dc/deployment-simple (rc/deployment-simple-2) is complete.
STEP: verifying that we can rollback
Dec 20 23:20:45.536: INFO: Running 'oc rollout --config=/tmp/extended-test-cli-deployment-p6xwv-l4s49-user.kubeconfig --namespace=extended-test-cli-deployment-p6xwv-l4s49 undo deploymentconfig/deployment-simple'
Dec 20 23:21:08.955: INFO: Latest rollout of dc/deployment-simple (rc/deployment-simple-3) is complete.
STEP: verifying that we are on the third version
Dec 20 23:21:08.955: INFO: Running 'oc get --config=/tmp/extended-test-cli-deployment-p6xwv-l4s49-user.kubeconfig --namespace=extended-test-cli-deployment-p6xwv-l4s49 deploymentconfig/deployment-simple --output=jsonpath="{.status.latestVersion}"'
[AfterEach] rolled back [Conformance]
  /go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:774
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
  /go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:62
[AfterEach] [Feature:DeploymentConfig] deploymentconfigs
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135
Dec 20 23:21:11.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "extended-test-cli-deployment-p6xwv-l4s49" for this suite.
Dec 20 23:21:49.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 23:21:50.473: INFO: namespace: extended-test-cli-deployment-p6xwv-l4s49, resource: bindings, ignored listing per whitelist
Dec 20 23:21:50.714: INFO: namespace extended-test-cli-deployment-p6xwv-l4s49 deletion completed in 39.466506038s


• [SLOW TEST:109.417 seconds]
[Feature:DeploymentConfig] deploymentconfigs
/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:37
  rolled back [Conformance]
  /go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:773
    should rollback to an older deployment [Suite:openshift/conformance/parallel]
    /go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:778
------------------------------
Dec 20 23:21:50.716: INFO: Running AfterSuite actions on all node


[sig-storage] ConfigMap 
  optional updates should be reflected in volume  [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648

[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:53
[BeforeEach] [sig-storage] ConfigMap
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134
STEP: Creating a kubernetes client
Dec 20 23:20:05.753: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
Dec 20 23:20:05.826: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume  [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648
Dec 20 23:20:06.063: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node
STEP: Creating configMap with name cm-test-opt-del-50493c62-e5dc-11e7-b70b-0e785a65cbca
STEP: Creating configMap with name cm-test-opt-upd-50493d67-e5dc-11e7-b70b-0e785a65cbca
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-50493c62-e5dc-11e7-b70b-0e785a65cbca
STEP: Updating configmap cm-test-opt-upd-50493d67-e5dc-11e7-b70b-0e785a65cbca
STEP: Creating configMap with name cm-test-opt-create-50493da1-e5dc-11e7-b70b-0e785a65cbca
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135
Dec 20 23:21:43.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-4jqtd" for this suite.
Dec 20 23:22:05.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 23:22:06.747: INFO: namespace: e2e-tests-configmap-4jqtd, resource: bindings, ignored listing per whitelist
Dec 20 23:22:07.092: INFO: namespace e2e-tests-configmap-4jqtd deletion completed in 23.482632033s


• [SLOW TEST:121.339 seconds]
[sig-storage] ConfigMap
/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume  [Conformance] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:648
------------------------------
Dec 20 23:22:07.093: INFO: Running AfterSuite actions on all node


[Conformance][templates] templateinstance readiness test  
  should report ready soon after all annotated objects are ready [Suite:openshift/conformance/parallel]
  /go/src/github.com/openshift/origin/test/extended/templates/templateinstance_readiness.go:119

[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/test/extended/util/test.go:53
[BeforeEach] [Conformance][templates] templateinstance readiness test
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:134
STEP: Creating a kubernetes client
Dec 20 23:20:14.943: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
Dec 20 23:20:15.185: INFO: configPath is now "/tmp/extended-test-templates-xbr2c-g489w-user.kubeconfig"
Dec 20 23:20:15.185: INFO: The user is now "extended-test-templates-xbr2c-g489w-user"
Dec 20 23:20:15.185: INFO: Creating project "extended-test-templates-xbr2c-g489w"
Dec 20 23:20:15.257: INFO: Waiting on permissions in project "extended-test-templates-xbr2c-g489w" ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] 
  /go/src/github.com/openshift/origin/test/extended/templates/templateinstance_readiness.go:101
Dec 20 23:20:15.924: INFO: Running 'oc create --config=/tmp/extended-test-templates-xbr2c-g489w-user.kubeconfig --namespace=extended-test-templates-xbr2c-g489w -f /tmp/fixture-testdata-dir446324089/examples/quickstarts/cakephp-mysql.json'
template "cakephp-mysql-example" created
[It] should report ready soon after all annotated objects are ready [Suite:openshift/conformance/parallel]
  /go/src/github.com/openshift/origin/test/extended/templates/templateinstance_readiness.go:119
STEP: instantiating the templateinstance
STEP: waiting for build and dc to settle
STEP: waiting for the templateinstance to indicate ready
[AfterEach] 
  /go/src/github.com/openshift/origin/test/extended/templates/templateinstance_readiness.go:112
[AfterEach] [Conformance][templates] templateinstance readiness test
  /go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:135
Dec 20 23:21:47.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "extended-test-templates-xbr2c-g489w" for this suite.
Dec 20 23:22:09.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 20 23:22:10.649: INFO: namespace: extended-test-templates-xbr2c-g489w, resource: bindings, ignored listing per whitelist
Dec 20 23:22:10.811: INFO: namespace extended-test-templates-xbr2c-g489w deletion completed in 23.436566107s


• [SLOW TEST:115.869 seconds]
[Conformance][templates] templateinstance readiness test
/go/src/github.com/openshift/origin/test/extended/templates/templateinstance_readiness.go:24
  
  /go/src/github.com/openshift/origin/test/extended/templates/templateinstance_readiness.go:100
    should report ready soon after all annotated objects are ready [Suite:openshift/conformance/parallel]
    /go/src/github.com/openshift/origin/test/extended/templates/templateinstance_readiness.go:119
------------------------------
Dec 20 23:22:10.813: INFO: Running AfterSuite actions on all node


Dec 20 23:21:08.770: INFO: Running AfterSuite actions on all node
Dec 20 23:22:10.892: INFO: Running AfterSuite actions on node 1


Ran 194 of 440 Specs in 421.539 seconds
SUCCESS! -- 194 Passed | 0 Failed | 0 Pending | 246 Skipped 

Ginkgo ran 1 suite in 7m5.239238175s
Test Suite Passed
[INFO] Running serial tests
I1220 23:22:11.217622    9114 test.go:94] Extended test version v3.9.0-alpha.0+31367ee-263
I1220 23:22:12.983162   10066 test.go:94] Extended test version v3.9.0-alpha.0+31367ee-263
Running Suite: Extended
=======================
Random Seed: 1513812132 - Will randomize all specs
Will run 0 of 440 specs

Dec 20 23:22:13.113: INFO: Fetching cloud provider for "gce"

I1220 23:22:13.114043   10066 gce.go:805] Using DefaultTokenSource &oauth2.reuseTokenSource{new:jwt.jwtSource{ctx:(*context.emptyCtx)(0xc420016180), conf:(*jwt.Config)(0xc4202f5980)}, mu:sync.Mutex{state:0, sema:0x0}, t:(*oauth2.Token)(nil)}
I1220 23:22:13.214445   10066 gce.go:805] Using DefaultTokenSource &oauth2.reuseTokenSource{new:jwt.jwtSource{ctx:(*context.emptyCtx)(0xc420016180), conf:(*jwt.Config)(0xc42040b900)}, mu:sync.Mutex{state:0, sema:0x0}, t:(*oauth2.Token)(nil)}
I1220 23:22:13.259107   10066 gce.go:805] Using DefaultTokenSource &oauth2.reuseTokenSource{new:jwt.jwtSource{ctx:(*context.emptyCtx)(0xc420016180), conf:(*jwt.Config)(0xc42036c100)}, mu:sync.Mutex{state:0, sema:0x0}, t:(*oauth2.Token)(nil)}
W1220 23:22:13.308248   10066 gce.go:430] No network name or URL specified.
Dec 20 23:22:13.308: INFO: lookupDiskImageSources: gcloud error with [[]string{"instance-groups", "list-instances", "", "--format=get(instance)"}]; err:exec: "gcloud": executable file not found in $PATH
Dec 20 23:22:13.308: INFO:  > 
Dec 20 23:22:13.308: INFO: Cluster image sources lookup failed: exec: "gcloud": executable file not found in $PATH

Dec 20 23:22:13.308: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
Dec 20 23:22:13.310: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
Dec 20 23:22:13.474: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Dec 20 23:22:13.521: INFO: 1 / 1 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Dec 20 23:22:13.521: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Dec 20 23:22:13.537: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Dec 20 23:22:13.537: INFO: Dumping network health container logs from all nodes...
Dec 20 23:22:13.552: INFO: e2e test version: v1.9.0-beta1
Dec 20 23:22:13.566: INFO: kube-apiserver version: v1.9.0-beta1
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSDec 20 23:22:13.575: INFO: Running AfterSuite actions on all node
Dec 20 23:22:13.575: INFO: Running AfterSuite actions on node 1

Ran 0 of 440 Specs in 0.462 seconds
SUCCESS! -- 0 Passed | 0 Failed | 0 Pending | 440 Skipped
Dec 20 23:22:13.578: INFO: Dumping logs locally to: /data/src/github.com/openshift/origin/_output/scripts/conformance/artifacts/junit
Checking for custom logdump instances, if any
Sourcing kube-util.sh
Detecting project
/data/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/cluster/log-dump/../../cluster/../cluster/gce/util.sh: line 147: gcloud: command not found
Dec 20 23:22:13.614: INFO: Error running cluster/log-dump/log-dump.sh: exit status 127
PASS

Ginkgo ran 1 suite in 917.548997ms
Test Suite Passed
[INFO] [CLEANUP] Beginning cleanup routines...
[INFO] [CLEANUP] Dumping cluster events to _output/scripts/conformance/artifacts/events.txt
Logged into "https://internal-api.primg624.origin-ci-int-gce.dev.rhcloud.com:8443" as "system:admin" using existing credentials.

You have access to the following projects and can switch between them with 'oc project <projectname>':

  * default
    kube-public
    kube-system
    logging
    management-infra
    openshift
    openshift-infra
    openshift-node

Using project "default".
[INFO] [CLEANUP] Dumping container logs to _output/scripts/conformance/logs/containers
[INFO] [CLEANUP] Truncating log files over 200M
[INFO] [CLEANUP] Stopping docker containers
[INFO] [CLEANUP] Removing docker containers
[INFO] [CLEANUP] Killing child processes
[INFO] test/extended/conformance.sh exited with code 0 after 00h 07m 29s

real	7m29.396s
user	2m40.919s
sys	0m21.806s
+ [[ branch_success == \b\r\a\n\c\h\_\s\u\c\c\e\s\s ]]
+ [[ '' != 1 ]]
+ [[ 1 == 1 ]]
+ to=docker.io/openshift/origin-gce:latest
+ sudo docker tag openshift/origin-gce:latest docker.io/openshift/origin-gce:latest
+ sudo docker push docker.io/openshift/origin-gce:latest
The push refers to a repository [docker.io/openshift/origin-gce]
c3bdb60ea822: Preparing
cd9066570881: Preparing
37fa9b0e9711: Preparing
34e0c805337c: Preparing
bba07d132cf6: Preparing
d1be66a59bc5: Preparing
d1be66a59bc5: Waiting
34e0c805337c: Mounted from openshift/origin-base
bba07d132cf6: Mounted from openshift/origin-base
d1be66a59bc5: Layer already exists
cd9066570881: Pushed
c3bdb60ea822: Pushed
37fa9b0e9711: Pushed
latest: digest: sha256:cae511e62522d932f11163b902b50256220743b9a454781e6aa5c6b3b571babb size: 1581
+ exit 0
+ gather
+ set +e
++ pwd
+ export PATH=/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64:/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/_output/local/bin/linux/amd64:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/origin/.local/bin:/home/origin/bin
+ PATH=/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64:/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/_output/local/bin/linux/amd64:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/origin/.local/bin:/home/origin/bin
+ oc get nodes --template '{{ range .items }}{{ .metadata.name }}{{ "\n" }}{{ end }}'
+ xargs -L 1 -I X bash -c 'oc get --raw /api/v1/nodes/X/proxy/metrics > /tmp/artifacts/X.metrics' ''
+ oc get --raw /metrics
+ set -e
[PostBuildScript] - Execution post build scripts.
[workspace] $ /bin/bash /tmp/jenkins5500547070221352699.sh
~/jobs/zz_origin_gce_image/workspace ~/jobs/zz_origin_gce_image/workspace
Activated service account credentials for: [jenkins-ci-provisioner@openshift-gce-devel.iam.gserviceaccount.com]

PLAY [Terminate running cluster and remove all supporting resources in GCE] ****

TASK [Gathering Facts] *********************************************************
Wednesday 20 December 2017  23:23:23 +0000 (0:00:00.059)       0:00:00.059 **** 
ok: [localhost]

TASK [include_role] ************************************************************
Wednesday 20 December 2017  23:23:27 +0000 (0:00:04.408)       0:00:04.468 **** 

TASK [openshift_gcp : Templatize DNS script] ***********************************
Wednesday 20 December 2017  23:23:28 +0000 (0:00:00.085)       0:00:04.554 **** 
changed: [localhost]

TASK [openshift_gcp : Templatize provision script] *****************************
Wednesday 20 December 2017  23:23:28 +0000 (0:00:00.502)       0:00:05.056 **** 
changed: [localhost]

TASK [openshift_gcp : Templatize de-provision script] **************************
Wednesday 20 December 2017  23:23:28 +0000 (0:00:00.308)       0:00:05.364 **** 
changed: [localhost]

TASK [openshift_gcp : Provision GCP DNS domain] ********************************
Wednesday 20 December 2017  23:23:29 +0000 (0:00:00.276)       0:00:05.640 **** 
skipping: [localhost]

TASK [openshift_gcp : Ensure that DNS resolves to the hosted zone] *************
Wednesday 20 December 2017  23:23:29 +0000 (0:00:00.023)       0:00:05.664 **** 
skipping: [localhost]

TASK [openshift_gcp : Provision GCP resources] *********************************
Wednesday 20 December 2017  23:23:29 +0000 (0:00:00.022)       0:00:05.686 **** 
skipping: [localhost]

TASK [openshift_gcp : De-provision GCP resources] ******************************
Wednesday 20 December 2017  23:23:29 +0000 (0:00:00.022)       0:00:05.708 **** 
changed: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=5    changed=4    unreachable=0    failed=0   

Wednesday 20 December 2017  23:30:43 +0000 (0:07:14.014)       0:07:19.723 **** 
=============================================================================== 
openshift_gcp : De-provision GCP resources ---------------------------- 434.01s
Gathering Facts --------------------------------------------------------- 4.41s
openshift_gcp : Templatize DNS script ----------------------------------- 0.50s
openshift_gcp : Templatize provision script ----------------------------- 0.31s
openshift_gcp : Templatize de-provision script -------------------------- 0.28s
include_role ------------------------------------------------------------ 0.09s
openshift_gcp : Provision GCP DNS domain -------------------------------- 0.02s
openshift_gcp : Ensure that DNS resolves to the hosted zone ------------- 0.02s
openshift_gcp : Provision GCP resources --------------------------------- 0.02s
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/junit
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/ci-primg624-ig-m-n72c.metrics
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/ci-primg624-ig-n-9k4l.metrics
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/ci-primg624-ig-n-h6tg.metrics
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/ci-primg624-ig-n-jf6c.metrics
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/master.metrics
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/openshift
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/openshift/conformance
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/openshift/conformance/volumes
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/openshift/shell
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/openshift/shell/volumes
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/conformance
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/conformance/artifacts
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/conformance/artifacts/junit
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/conformance/artifacts/junit/conformance_parallel_01.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/conformance/artifacts/junit/conformance_parallel_02.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/conformance/artifacts/junit/conformance_parallel_03.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/conformance/artifacts/junit/conformance_parallel_04.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/conformance/artifacts/junit/conformance_parallel_05.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/conformance/artifacts/junit/conformance_parallel_06.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/conformance/artifacts/junit/conformance_parallel_07.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/conformance/artifacts/junit/conformance_parallel_08.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/conformance/artifacts/junit/conformance_parallel_09.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/conformance/artifacts/junit/conformance_parallel_10.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/conformance/artifacts/junit/conformance_parallel_11.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/conformance/artifacts/junit/conformance_parallel_12.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/conformance/artifacts/junit/conformance_parallel_13.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/conformance/artifacts/junit/conformance_parallel_14.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/conformance/artifacts/junit/conformance_parallel_15.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/conformance/artifacts/junit/conformance_parallel_16.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/conformance/artifacts/junit/conformance_parallel_17.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/conformance/artifacts/junit/conformance_parallel_18.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/conformance/artifacts/junit/conformance_parallel_19.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/conformance/artifacts/junit/conformance_parallel_20.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/conformance/artifacts/junit/conformance_parallel_21.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/conformance/artifacts/junit/conformance_parallel_22.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/conformance/artifacts/junit/conformance_parallel_23.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/conformance/artifacts/junit/conformance_parallel_24.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/conformance/artifacts/junit/conformance_parallel_25.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/conformance/artifacts/junit/conformance_serial_01.xml
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/conformance/artifacts/events.txt
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/conformance/logs
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/conformance/logs/containers
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/conformance/logs/scripts.log
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/conformance/openshift.local.home
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/shell
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/shell/artifacts
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/shell/logs
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/shell/logs/scripts.log
/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/artifacts/scripts/shell/openshift.local.home

PLAYBOOK: main.yml *************************************************************
4 plays in /var/lib/jenkins/origin-ci-tool/ea8a196871d600537924bb4317199814dfe8ce4f/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml

PLAY [ensure we have the parameters necessary to deprovision virtual hosts] ****

TASK [ensure all required variables are set] ***********************************
task path: /var/lib/jenkins/origin-ci-tool/ea8a196871d600537924bb4317199814dfe8ce4f/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml:9
skipping: [localhost] => (item=origin_ci_inventory_dir)  => {
    "changed": false, 
    "generated_timestamp": "2017-12-20 18:30:45.790382", 
    "item": "origin_ci_inventory_dir", 
    "skip_reason": "Conditional check failed", 
    "skipped": true
}
skipping: [localhost] => (item=origin_ci_aws_region)  => {
    "changed": false, 
    "generated_timestamp": "2017-12-20 18:30:45.794828", 
    "item": "origin_ci_aws_region", 
    "skip_reason": "Conditional check failed", 
    "skipped": true
}

PLAY [deprovision virtual hosts in EC2] ****************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [deprovision a virtual EC2 host] ******************************************
task path: /var/lib/jenkins/origin-ci-tool/ea8a196871d600537924bb4317199814dfe8ce4f/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml:28
included: /var/lib/jenkins/origin-ci-tool/ea8a196871d600537924bb4317199814dfe8ce4f/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml for localhost

TASK [update the SSH configuration to remove AWS EC2 specifics] ****************
task path: /var/lib/jenkins/origin-ci-tool/ea8a196871d600537924bb4317199814dfe8ce4f/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:2
ok: [localhost] => {
    "changed": false, 
    "generated_timestamp": "2017-12-20 18:30:47.040314", 
    "msg": ""
}

TASK [rename EC2 instance for termination reaper] ******************************
task path: /var/lib/jenkins/origin-ci-tool/ea8a196871d600537924bb4317199814dfe8ce4f/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:8
changed: [localhost] => {
    "changed": true, 
    "generated_timestamp": "2017-12-20 18:30:47.613606", 
    "msg": "Tags {'Name': 'oct-terminate'} created for resource i-071c4240b295315ef."
}

TASK [tear down the EC2 instance] **********************************************
task path: /var/lib/jenkins/origin-ci-tool/ea8a196871d600537924bb4317199814dfe8ce4f/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:15
changed: [localhost] => {
    "changed": true, 
    "generated_timestamp": "2017-12-20 18:30:48.386959", 
    "instance_ids": [
        "i-071c4240b295315ef"
    ], 
    "instances": [
        {
            "ami_launch_index": "0", 
            "architecture": "x86_64", 
            "block_device_mapping": {
                "/dev/sda1": {
                    "delete_on_termination": true, 
                    "status": "attached", 
                    "volume_id": "vol-0dcc53918301509a3"
                }, 
                "/dev/sdb": {
                    "delete_on_termination": true, 
                    "status": "attached", 
                    "volume_id": "vol-08123253293bb0f9b"
                }
            }, 
            "dns_name": "ec2-54-85-223-144.compute-1.amazonaws.com", 
            "ebs_optimized": false, 
            "groups": {
                "sg-7e73221a": "default"
            }, 
            "hypervisor": "xen", 
            "id": "i-071c4240b295315ef", 
            "image_id": "ami-259cef5f", 
            "instance_type": "m4.xlarge", 
            "kernel": null, 
            "key_name": "libra", 
            "launch_time": "2017-12-20T22:41:27.000Z", 
            "placement": "us-east-1d", 
            "private_dns_name": "ip-172-18-10-192.ec2.internal", 
            "private_ip": "172.18.10.192", 
            "public_dns_name": "ec2-54-85-223-144.compute-1.amazonaws.com", 
            "public_ip": "54.85.223.144", 
            "ramdisk": null, 
            "region": "us-east-1", 
            "root_device_name": "/dev/sda1", 
            "root_device_type": "ebs", 
            "state": "running", 
            "state_code": 16, 
            "tags": {
                "Name": "oct-terminate", 
                "openshift_etcd": "", 
                "openshift_master": "", 
                "openshift_node": ""
            }, 
            "tenancy": "default", 
            "virtualization_type": "hvm"
        }
    ], 
    "tagged_instances": []
}

TASK [remove the serialized host variables] ************************************
task path: /var/lib/jenkins/origin-ci-tool/ea8a196871d600537924bb4317199814dfe8ce4f/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:22
changed: [localhost] => {
    "changed": true, 
    "generated_timestamp": "2017-12-20 18:30:48.644352", 
    "path": "/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/.config/origin-ci-tool/inventory/host_vars/172.18.10.192.yml", 
    "state": "absent"
}

PLAY [deprovision virtual hosts locally managed by Vagrant] ********************

PLAY [clean up local configuration for deprovisioned instances] ****************

TASK [remove inventory configuration directory] ********************************
task path: /var/lib/jenkins/origin-ci-tool/ea8a196871d600537924bb4317199814dfe8ce4f/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml:61
changed: [localhost] => {
    "changed": true, 
    "generated_timestamp": "2017-12-20 18:30:48.808777", 
    "path": "/var/lib/jenkins/jobs/zz_origin_gce_image/workspace/.config/origin-ci-tool/inventory", 
    "state": "absent"
}

PLAY RECAP *********************************************************************
localhost                  : ok=7    changed=4    unreachable=0    failed=0   

~/jobs/zz_origin_gce_image/workspace
Recording test results
Archiving artifacts
Finished: SUCCESS