Console Output (build result: Success)

[Skipping 1,652 KB of earlier log output]
[AfterEach] [sig-storage] Projected
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jun 23 02:55:38.625: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-c4phl" for this suite.
Jun 23 02:55:44.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 23 02:55:45.578: INFO: namespace: e2e-tests-projected-c4phl, resource: bindings, ignored listing per whitelist
Jun 23 02:55:46.133: INFO: namespace e2e-tests-projected-c4phl deletion completed in 7.477764308s

• [SLOW TEST:12.393 seconds]
[sig-storage] Projected
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:34
  should provide container's memory request [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [sig-storage] ConfigMap
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jun 23 02:55:46.133: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating configMap with name configmap-test-volume-eda8ec08-7690-11e8-aba4-0e3c01d665e8
STEP: Creating a pod to test consume configMaps
Jun 23 02:55:46.998: INFO: Waiting up to 5m0s for pod "pod-configmaps-edada0aa-7690-11e8-aba4-0e3c01d665e8" in namespace "e2e-tests-configmap-zcd52" to be "success or failure"
Jun 23 02:55:47.013: INFO: Pod "pod-configmaps-edada0aa-7690-11e8-aba4-0e3c01d665e8": Phase="Pending", Reason="", readiness=false. Elapsed: 15.444926ms
Jun 23 02:55:49.029: INFO: Pod "pod-configmaps-edada0aa-7690-11e8-aba4-0e3c01d665e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031629858s
Jun 23 02:55:51.046: INFO: Pod "pod-configmaps-edada0aa-7690-11e8-aba4-0e3c01d665e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048044437s
STEP: Saw pod success
Jun 23 02:55:51.046: INFO: Pod "pod-configmaps-edada0aa-7690-11e8-aba4-0e3c01d665e8" satisfied condition "success or failure"
Jun 23 02:55:51.062: INFO: Trying to get logs from node prtest-cc63063-575-ig-n-2jvs pod pod-configmaps-edada0aa-7690-11e8-aba4-0e3c01d665e8 container configmap-volume-test: <nil>
STEP: delete the pod
Jun 23 02:55:51.107: INFO: Waiting for pod pod-configmaps-edada0aa-7690-11e8-aba4-0e3c01d665e8 to disappear
Jun 23 02:55:51.122: INFO: Pod pod-configmaps-edada0aa-7690-11e8-aba4-0e3c01d665e8 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jun 23 02:55:51.122: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-zcd52" for this suite.
Jun 23 02:55:57.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 23 02:55:58.256: INFO: namespace: e2e-tests-configmap-zcd52, resource: bindings, ignored listing per whitelist
Jun 23 02:55:58.610: INFO: namespace e2e-tests-configmap-zcd52 deletion completed in 7.458005931s

• [SLOW TEST:12.477 seconds]
[sig-storage] ConfigMap
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [sig-storage] HostPath
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jun 23 02:55:58.610: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test hostPath mode
Jun 23 02:55:59.383: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-5vbpw" to be "success or failure"
Jun 23 02:55:59.398: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 15.402092ms
Jun 23 02:56:01.414: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031008594s
Jun 23 02:56:03.430: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047333559s
STEP: Saw pod success
Jun 23 02:56:03.430: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jun 23 02:56:03.446: INFO: Trying to get logs from node prtest-cc63063-575-ig-n-3hm6 pod pod-host-path-test container test-container-1: <nil>
STEP: delete the pod
Jun 23 02:56:03.502: INFO: Waiting for pod pod-host-path-test to disappear
Jun 23 02:56:03.518: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jun 23 02:56:03.518: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-5vbpw" for this suite.
Jun 23 02:56:09.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 23 02:56:10.638: INFO: namespace: e2e-tests-hostpath-5vbpw, resource: bindings, ignored listing per whitelist
Jun 23 02:56:11.004: INFO: namespace e2e-tests-hostpath-5vbpw deletion completed in 7.456332393s

• [SLOW TEST:12.394 seconds]
[sig-storage] HostPath
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [sig-storage] Downward API volume
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jun 23 02:56:11.004: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:38
[It] should provide container's cpu request  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test downward API volume plugin
Jun 23 02:56:11.798: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fc736f22-7690-11e8-aba4-0e3c01d665e8" in namespace "e2e-tests-downward-api-zg8sm" to be "success or failure"
Jun 23 02:56:11.818: INFO: Pod "downwardapi-volume-fc736f22-7690-11e8-aba4-0e3c01d665e8": Phase="Pending", Reason="", readiness=false. Elapsed: 19.927662ms
Jun 23 02:56:13.834: INFO: Pod "downwardapi-volume-fc736f22-7690-11e8-aba4-0e3c01d665e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036247702s
Jun 23 02:56:15.853: INFO: Pod "downwardapi-volume-fc736f22-7690-11e8-aba4-0e3c01d665e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055176518s
STEP: Saw pod success
Jun 23 02:56:15.853: INFO: Pod "downwardapi-volume-fc736f22-7690-11e8-aba4-0e3c01d665e8" satisfied condition "success or failure"
Jun 23 02:56:15.869: INFO: Trying to get logs from node prtest-cc63063-575-ig-n-2jvs pod downwardapi-volume-fc736f22-7690-11e8-aba4-0e3c01d665e8 container client-container: <nil>
STEP: delete the pod
Jun 23 02:56:15.922: INFO: Waiting for pod downwardapi-volume-fc736f22-7690-11e8-aba4-0e3c01d665e8 to disappear
Jun 23 02:56:15.938: INFO: Pod downwardapi-volume-fc736f22-7690-11e8-aba4-0e3c01d665e8 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jun 23 02:56:15.938: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-zg8sm" for this suite.
Jun 23 02:56:22.016: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 23 02:56:22.992: INFO: namespace: e2e-tests-downward-api-zg8sm, resource: bindings, ignored listing per whitelist
Jun 23 02:56:23.447: INFO: namespace e2e-tests-downward-api-zg8sm deletion completed in 7.480249462s

• [SLOW TEST:12.443 seconds]
[sig-storage] Downward API volume
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:33
  should provide container's cpu request  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [k8s.io] Docker Containers
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jun 23 02:56:23.448: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test override all
Jun 23 02:56:24.150: INFO: Waiting up to 5m0s for pod "client-containers-03d0dd35-7691-11e8-aba4-0e3c01d665e8" in namespace "e2e-tests-containers-wq75q" to be "success or failure"
Jun 23 02:56:24.167: INFO: Pod "client-containers-03d0dd35-7691-11e8-aba4-0e3c01d665e8": Phase="Pending", Reason="", readiness=false. Elapsed: 17.05684ms
Jun 23 02:56:26.183: INFO: Pod "client-containers-03d0dd35-7691-11e8-aba4-0e3c01d665e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032852724s
Jun 23 02:56:28.199: INFO: Pod "client-containers-03d0dd35-7691-11e8-aba4-0e3c01d665e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048707465s
STEP: Saw pod success
Jun 23 02:56:28.199: INFO: Pod "client-containers-03d0dd35-7691-11e8-aba4-0e3c01d665e8" satisfied condition "success or failure"
Jun 23 02:56:28.214: INFO: Trying to get logs from node prtest-cc63063-575-ig-n-3hm6 pod client-containers-03d0dd35-7691-11e8-aba4-0e3c01d665e8 container test-container: <nil>
STEP: delete the pod
Jun 23 02:56:28.257: INFO: Waiting for pod client-containers-03d0dd35-7691-11e8-aba4-0e3c01d665e8 to disappear
Jun 23 02:56:28.272: INFO: Pod client-containers-03d0dd35-7691-11e8-aba4-0e3c01d665e8 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jun 23 02:56:28.272: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-wq75q" for this suite.
Jun 23 02:56:34.349: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 23 02:56:35.379: INFO: namespace: e2e-tests-containers-wq75q, resource: bindings, ignored listing per whitelist
Jun 23 02:56:35.752: INFO: namespace e2e-tests-containers-wq75q deletion completed in 7.450699995s

• [SLOW TEST:12.305 seconds]
[k8s.io] Docker Containers
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:669
  should be able to override the image's default command and arguments  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
SS
------------------------------
[sig-storage] Projected 
  should update annotations on modification [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [sig-storage] Projected
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jun 23 02:56:35.752: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should update annotations on modification [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating the pod
Jun 23 02:56:41.125: INFO: Successfully updated pod "annotationupdate0b2b7f6c-7691-11e8-aba4-0e3c01d665e8"
[AfterEach] [sig-storage] Projected
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jun 23 02:56:43.170: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-l5gp6" for this suite.
Jun 23 02:57:05.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 23 02:57:06.034: INFO: namespace: e2e-tests-projected-l5gp6, resource: bindings, ignored listing per whitelist
Jun 23 02:57:06.665: INFO: namespace e2e-tests-projected-l5gp6 deletion completed in 23.464926843s

• [SLOW TEST:30.913 seconds]
[sig-storage] Projected
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:34
  should update annotations on modification [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [sig-apps] Daemon set [Serial]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jun 23 02:57:06.665: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
Jun 23 02:57:07.464: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jun 23 02:57:07.514: INFO: Number of nodes with available pods: 0
Jun 23 02:57:07.514: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jun 23 02:57:07.593: INFO: Number of nodes with available pods: 0
Jun 23 02:57:07.593: INFO: Node prtest-cc63063-575-ig-n-1n10 is running more than one daemon pod
Jun 23 02:57:08.609: INFO: Number of nodes with available pods: 0
Jun 23 02:57:08.609: INFO: Node prtest-cc63063-575-ig-n-1n10 is running more than one daemon pod
Jun 23 02:57:09.609: INFO: Number of nodes with available pods: 0
Jun 23 02:57:09.609: INFO: Node prtest-cc63063-575-ig-n-1n10 is running more than one daemon pod
Jun 23 02:57:10.609: INFO: Number of nodes with available pods: 1
Jun 23 02:57:10.609: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jun 23 02:57:10.690: INFO: Number of nodes with available pods: 0
Jun 23 02:57:10.690: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jun 23 02:57:10.735: INFO: Number of nodes with available pods: 0
Jun 23 02:57:10.735: INFO: Node prtest-cc63063-575-ig-n-1n10 is running more than one daemon pod
Jun 23 02:57:11.751: INFO: Number of nodes with available pods: 0
Jun 23 02:57:11.751: INFO: Node prtest-cc63063-575-ig-n-1n10 is running more than one daemon pod
Jun 23 02:57:12.752: INFO: Number of nodes with available pods: 0
Jun 23 02:57:12.752: INFO: Node prtest-cc63063-575-ig-n-1n10 is running more than one daemon pod
Jun 23 02:57:13.751: INFO: Number of nodes with available pods: 0
Jun 23 02:57:13.751: INFO: Node prtest-cc63063-575-ig-n-1n10 is running more than one daemon pod
Jun 23 02:57:14.751: INFO: Number of nodes with available pods: 0
Jun 23 02:57:14.751: INFO: Node prtest-cc63063-575-ig-n-1n10 is running more than one daemon pod
Jun 23 02:57:15.753: INFO: Number of nodes with available pods: 0
Jun 23 02:57:15.753: INFO: Node prtest-cc63063-575-ig-n-1n10 is running more than one daemon pod
Jun 23 02:57:16.753: INFO: Number of nodes with available pods: 1
Jun 23 02:57:16.753: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:66
STEP: Deleting DaemonSet "daemon-set" with reaper
Jun 23 02:57:27.867: INFO: Number of nodes with available pods: 0
Jun 23 02:57:27.867: INFO: Number of running nodes: 0, number of available pods: 0
Jun 23 02:57:27.882: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-66hnk/daemonsets","resourceVersion":"26390"},"items":null}

Jun 23 02:57:27.897: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-66hnk/pods","resourceVersion":"26390"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jun 23 02:57:28.004: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-66hnk" for this suite.
Jun 23 02:57:34.093: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 23 02:57:35.262: INFO: namespace: e2e-tests-daemonsets-66hnk, resource: bindings, ignored listing per whitelist
Jun 23 02:57:35.536: INFO: namespace e2e-tests-daemonsets-66hnk deletion completed in 7.503746229s

• [SLOW TEST:28.871 seconds]
[sig-apps] Daemon set [Serial]
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
SSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jun 23 02:57:35.537: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:199
[It] should be submitted and removed  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jun 23 02:57:36.384: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-49j94" for this suite.
Jun 23 02:57:58.487: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 23 02:57:59.723: INFO: namespace: e2e-tests-pods-49j94, resource: bindings, ignored listing per whitelist
Jun 23 02:57:59.902: INFO: namespace e2e-tests-pods-49j94 deletion completed in 23.481531022s

• [SLOW TEST:24.366 seconds]
[k8s.io] [sig-node] Pods Extended
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:669
  [k8s.io] Pods Set QOS Class
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:669
    should be submitted and removed  [Conformance]
    /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected 
  should set DefaultMode on files [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [sig-storage] Projected
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jun 23 02:57:59.903: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should set DefaultMode on files [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test downward API volume plugin
Jun 23 02:58:00.599: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3d4fdad1-7691-11e8-aba4-0e3c01d665e8" in namespace "e2e-tests-projected-cc67d" to be "success or failure"
Jun 23 02:58:00.617: INFO: Pod "downwardapi-volume-3d4fdad1-7691-11e8-aba4-0e3c01d665e8": Phase="Pending", Reason="", readiness=false. Elapsed: 17.528124ms
Jun 23 02:58:02.634: INFO: Pod "downwardapi-volume-3d4fdad1-7691-11e8-aba4-0e3c01d665e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034544213s
Jun 23 02:58:04.650: INFO: Pod "downwardapi-volume-3d4fdad1-7691-11e8-aba4-0e3c01d665e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051326035s
STEP: Saw pod success
Jun 23 02:58:04.650: INFO: Pod "downwardapi-volume-3d4fdad1-7691-11e8-aba4-0e3c01d665e8" satisfied condition "success or failure"
Jun 23 02:58:04.667: INFO: Trying to get logs from node prtest-cc63063-575-ig-n-2jvs pod downwardapi-volume-3d4fdad1-7691-11e8-aba4-0e3c01d665e8 container client-container: <nil>
STEP: delete the pod
Jun 23 02:58:04.718: INFO: Waiting for pod downwardapi-volume-3d4fdad1-7691-11e8-aba4-0e3c01d665e8 to disappear
Jun 23 02:58:04.734: INFO: Pod downwardapi-volume-3d4fdad1-7691-11e8-aba4-0e3c01d665e8 no longer exists
[AfterEach] [sig-storage] Projected
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jun 23 02:58:04.734: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-cc67d" for this suite.
Jun 23 02:58:10.813: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 23 02:58:12.218: INFO: namespace: e2e-tests-projected-cc67d, resource: bindings, ignored listing per whitelist
Jun 23 02:58:12.279: INFO: namespace e2e-tests-projected-cc67d deletion completed in 7.51644629s

• [SLOW TEST:12.377 seconds]
[sig-storage] Projected
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:34
  should set DefaultMode on files [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
S
------------------------------
[sig-storage] Projected 
  should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [sig-storage] Projected
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jun 23 02:58:12.279: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test downward API volume plugin
Jun 23 02:58:12.974: INFO: Waiting up to 5m0s for pod "downwardapi-volume-44adc669-7691-11e8-aba4-0e3c01d665e8" in namespace "e2e-tests-projected-5x6r2" to be "success or failure"
Jun 23 02:58:13.000: INFO: Pod "downwardapi-volume-44adc669-7691-11e8-aba4-0e3c01d665e8": Phase="Pending", Reason="", readiness=false. Elapsed: 26.074015ms
Jun 23 02:58:15.017: INFO: Pod "downwardapi-volume-44adc669-7691-11e8-aba4-0e3c01d665e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042469113s
Jun 23 02:58:17.033: INFO: Pod "downwardapi-volume-44adc669-7691-11e8-aba4-0e3c01d665e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058682475s
STEP: Saw pod success
Jun 23 02:58:17.033: INFO: Pod "downwardapi-volume-44adc669-7691-11e8-aba4-0e3c01d665e8" satisfied condition "success or failure"
Jun 23 02:58:17.049: INFO: Trying to get logs from node prtest-cc63063-575-ig-n-3hm6 pod downwardapi-volume-44adc669-7691-11e8-aba4-0e3c01d665e8 container client-container: <nil>
STEP: delete the pod
Jun 23 02:58:17.098: INFO: Waiting for pod downwardapi-volume-44adc669-7691-11e8-aba4-0e3c01d665e8 to disappear
Jun 23 02:58:17.113: INFO: Pod downwardapi-volume-44adc669-7691-11e8-aba4-0e3c01d665e8 no longer exists
[AfterEach] [sig-storage] Projected
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jun 23 02:58:17.113: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-5x6r2" for this suite.
Jun 23 02:58:23.193: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 23 02:58:24.007: INFO: namespace: e2e-tests-projected-5x6r2, resource: bindings, ignored listing per whitelist
Jun 23 02:58:24.619: INFO: namespace e2e-tests-projected-5x6r2 deletion completed in 7.475582489s

• [SLOW TEST:12.339 seconds]
[sig-storage] Projected
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
SS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [sig-network] DNS
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jun 23 02:58:24.619: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(dig +notcp +noall +answer +search kubernetes.default A)" && echo OK > /results/wheezy_udp@kubernetes.default;test -n "$$(dig +tcp +noall +answer +search kubernetes.default A)" && echo OK > /results/wheezy_tcp@kubernetes.default;test -n "$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && echo OK > /results/wheezy_udp@kubernetes.default.svc;test -n "$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;test -n "$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;test -n "$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-f2wxv.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-f2wxv.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-f2wxv.pod.cluster.local"}');test -n "$$(dig +notcp +noall +answer +search $${podARec} A)" && echo OK > /results/wheezy_udp@PodARecord;test -n "$$(dig +tcp +noall +answer +search $${podARec} A)" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(dig +notcp +noall +answer +search kubernetes.default A)" && echo OK > /results/jessie_udp@kubernetes.default;test -n "$$(dig +tcp +noall +answer +search kubernetes.default A)" && echo OK > /results/jessie_tcp@kubernetes.default;test -n "$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && echo OK > /results/jessie_udp@kubernetes.default.svc;test -n "$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && echo OK > /results/jessie_tcp@kubernetes.default.svc;test -n "$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;test -n "$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-f2wxv.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-f2wxv.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-f2wxv.pod.cluster.local"}');test -n "$$(dig +notcp +noall +answer +search $${podARec} A)" && echo OK > /results/jessie_udp@PodARecord;test -n "$$(dig +tcp +noall +answer +search $${podARec} A)" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 23 02:58:39.727: INFO: DNS probes using dns-test-4c07943d-7691-11e8-aba4-0e3c01d665e8 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jun 23 02:58:39.755: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-f2wxv" for this suite.
Jun 23 02:58:45.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 23 02:58:46.523: INFO: namespace: e2e-tests-dns-f2wxv, resource: bindings, ignored listing per whitelist
Jun 23 02:58:47.276: INFO: namespace e2e-tests-dns-f2wxv deletion completed in 7.491998657s

• [SLOW TEST:22.658 seconds]
[sig-network] DNS
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [sig-cli] Kubectl client
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jun 23 02:58:47.277: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should create a job from an image, then delete the job  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: executing a command with run --rm and attach with stdin
Jun 23 02:58:47.976: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig --namespace=e2e-tests-kubectl-knk4c run e2e-test-rm-busybox-job --image=busybox --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jun 23 02:58:53.911: INFO: stderr: "If you don't see a command prompt, try pressing enter.\n"
Jun 23 02:58:53.911: INFO: stdout: "abcd1234stdin closed\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jun 23 02:58:55.943: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-knk4c" for this suite.
Jun 23 02:59:02.042: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 23 02:59:03.149: INFO: namespace: e2e-tests-kubectl-knk4c, resource: bindings, ignored listing per whitelist
Jun 23 02:59:03.493: INFO: namespace e2e-tests-kubectl-knk4c deletion completed in 7.511554112s

• [SLOW TEST:16.217 seconds]
[sig-cli] Kubectl client
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run --rm job
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:669
    should create a job from an image, then delete the job  [Conformance]
    /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [sig-storage] EmptyDir volumes
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jun 23 02:59:03.493: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test emptydir 0666 on node default medium
Jun 23 02:59:04.215: INFO: Waiting up to 5m0s for pod "pod-633a2eba-7691-11e8-aba4-0e3c01d665e8" in namespace "e2e-tests-emptydir-c96z4" to be "success or failure"
Jun 23 02:59:04.232: INFO: Pod "pod-633a2eba-7691-11e8-aba4-0e3c01d665e8": Phase="Pending", Reason="", readiness=false. Elapsed: 17.490969ms
Jun 23 02:59:06.248: INFO: Pod "pod-633a2eba-7691-11e8-aba4-0e3c01d665e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033254385s
Jun 23 02:59:08.264: INFO: Pod "pod-633a2eba-7691-11e8-aba4-0e3c01d665e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049267039s
STEP: Saw pod success
Jun 23 02:59:08.264: INFO: Pod "pod-633a2eba-7691-11e8-aba4-0e3c01d665e8" satisfied condition "success or failure"
Jun 23 02:59:08.280: INFO: Trying to get logs from node prtest-cc63063-575-ig-n-2jvs pod pod-633a2eba-7691-11e8-aba4-0e3c01d665e8 container test-container: <nil>
STEP: delete the pod
Jun 23 02:59:08.323: INFO: Waiting for pod pod-633a2eba-7691-11e8-aba4-0e3c01d665e8 to disappear
Jun 23 02:59:08.338: INFO: Pod pod-633a2eba-7691-11e8-aba4-0e3c01d665e8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jun 23 02:59:08.338: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-c96z4" for this suite.
Jun 23 02:59:14.417: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 23 02:59:15.066: INFO: namespace: e2e-tests-emptydir-c96z4, resource: bindings, ignored listing per whitelist
Jun 23 02:59:15.842: INFO: namespace e2e-tests-emptydir-c96z4 deletion completed in 7.474184984s

• [SLOW TEST:12.348 seconds]
[sig-storage] EmptyDir volumes
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected 
  should be consumable from pods in volume with defaultMode set [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [sig-storage] Projected
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jun 23 02:59:15.842: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should be consumable from pods in volume with defaultMode set [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating projection with secret that has name projected-secret-test-6a978996-7691-11e8-aba4-0e3c01d665e8
STEP: Creating a pod to test consume secrets
Jun 23 02:59:16.588: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6a9a770c-7691-11e8-aba4-0e3c01d665e8" in namespace "e2e-tests-projected-d2pkp" to be "success or failure"
Jun 23 02:59:16.607: INFO: Pod "pod-projected-secrets-6a9a770c-7691-11e8-aba4-0e3c01d665e8": Phase="Pending", Reason="", readiness=false. Elapsed: 18.380083ms
Jun 23 02:59:18.623: INFO: Pod "pod-projected-secrets-6a9a770c-7691-11e8-aba4-0e3c01d665e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034188967s
Jun 23 02:59:20.639: INFO: Pod "pod-projected-secrets-6a9a770c-7691-11e8-aba4-0e3c01d665e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050506297s
STEP: Saw pod success
Jun 23 02:59:20.639: INFO: Pod "pod-projected-secrets-6a9a770c-7691-11e8-aba4-0e3c01d665e8" satisfied condition "success or failure"
Jun 23 02:59:20.655: INFO: Trying to get logs from node prtest-cc63063-575-ig-n-3hm6 pod pod-projected-secrets-6a9a770c-7691-11e8-aba4-0e3c01d665e8 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jun 23 02:59:20.706: INFO: Waiting for pod pod-projected-secrets-6a9a770c-7691-11e8-aba4-0e3c01d665e8 to disappear
Jun 23 02:59:20.723: INFO: Pod pod-projected-secrets-6a9a770c-7691-11e8-aba4-0e3c01d665e8 no longer exists
[AfterEach] [sig-storage] Projected
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jun 23 02:59:20.723: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-d2pkp" for this suite.
Jun 23 02:59:26.802: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 23 02:59:27.516: INFO: namespace: e2e-tests-projected-d2pkp, resource: bindings, ignored listing per whitelist
Jun 23 02:59:28.225: INFO: namespace e2e-tests-projected-d2pkp deletion completed in 7.471642217s

• [SLOW TEST:12.383 seconds]
[sig-storage] Projected
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:34
  should be consumable from pods in volume with defaultMode set [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [sig-cli] Kubectl client
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jun 23 02:59:28.225: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[BeforeEach] [k8s.io] Update Demo
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:264
[It] should create and stop a replication controller  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: creating a replication controller
Jun 23 02:59:28.915: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig create -f - --namespace=e2e-tests-kubectl-jjgcp'
Jun 23 02:59:29.195: INFO: stderr: ""
Jun 23 02:59:29.195: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jun 23 02:59:29.195: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jjgcp'
Jun 23 02:59:29.351: INFO: stderr: ""
Jun 23 02:59:29.351: INFO: stdout: "update-demo-nautilus-khfm4 update-demo-nautilus-pw782 "
Jun 23 02:59:29.351: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get pods update-demo-nautilus-khfm4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jjgcp'
Jun 23 02:59:29.506: INFO: stderr: ""
Jun 23 02:59:29.506: INFO: stdout: ""
Jun 23 02:59:29.506: INFO: update-demo-nautilus-khfm4 is created but not running
Jun 23 02:59:34.506: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-jjgcp'
Jun 23 02:59:34.665: INFO: stderr: ""
Jun 23 02:59:34.665: INFO: stdout: "update-demo-nautilus-khfm4 update-demo-nautilus-pw782 "
Jun 23 02:59:34.665: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get pods update-demo-nautilus-khfm4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jjgcp'
Jun 23 02:59:34.824: INFO: stderr: ""
Jun 23 02:59:34.824: INFO: stdout: "true"
Jun 23 02:59:34.824: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get pods update-demo-nautilus-khfm4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jjgcp'
Jun 23 02:59:34.986: INFO: stderr: ""
Jun 23 02:59:34.986: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus-amd64:1.0"
Jun 23 02:59:34.986: INFO: validating pod update-demo-nautilus-khfm4
Jun 23 02:59:35.053: INFO: got data: {
  "image": "nautilus.jpg"
}

Jun 23 02:59:35.053: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jun 23 02:59:35.053: INFO: update-demo-nautilus-khfm4 is verified up and running
Jun 23 02:59:35.053: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get pods update-demo-nautilus-pw782 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jjgcp'
Jun 23 02:59:35.213: INFO: stderr: ""
Jun 23 02:59:35.213: INFO: stdout: "true"
Jun 23 02:59:35.213: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get pods update-demo-nautilus-pw782 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-jjgcp'
Jun 23 02:59:35.371: INFO: stderr: ""
Jun 23 02:59:35.371: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus-amd64:1.0"
Jun 23 02:59:35.371: INFO: validating pod update-demo-nautilus-pw782
Jun 23 02:59:35.398: INFO: got data: {
  "image": "nautilus.jpg"
}

Jun 23 02:59:35.398: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jun 23 02:59:35.398: INFO: update-demo-nautilus-pw782 is verified up and running
STEP: using delete to clean up resources
Jun 23 02:59:35.398: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-jjgcp'
Jun 23 02:59:35.677: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 23 02:59:35.677: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" deleted\n"
Jun 23 02:59:35.677: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-jjgcp'
Jun 23 02:59:35.854: INFO: stderr: "No resources found.\n"
Jun 23 02:59:35.854: INFO: stdout: ""
Jun 23 02:59:35.854: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get pods -l name=update-demo --namespace=e2e-tests-kubectl-jjgcp -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jun 23 02:59:36.014: INFO: stderr: ""
Jun 23 02:59:36.014: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jun 23 02:59:36.014: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-jjgcp" for this suite.
Jun 23 02:59:58.093: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 23 02:59:59.236: INFO: namespace: e2e-tests-kubectl-jjgcp, resource: bindings, ignored listing per whitelist
Jun 23 02:59:59.507: INFO: namespace e2e-tests-kubectl-jjgcp deletion completed in 23.463654347s

• [SLOW TEST:31.283 seconds]
[sig-cli] Kubectl client
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:669
    should create and stop a replication controller  [Conformance]
    /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
SS
------------------------------
[k8s.io] Probing container 
  should be restarted with a docker exec liveness probe with timeout  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [k8s.io] Probing container
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jun 23 02:59:59.508: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a docker exec liveness probe with timeout  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
Jun 23 03:00:00.151: INFO: The default exec handler, dockertools.NativeExecHandler, does not support timeouts due to a limitation in the Docker Remote API
[AfterEach] [k8s.io] Probing container
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jun 23 03:00:00.151: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-gdcpg" for this suite.
Jun 23 03:00:06.234: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 23 03:00:07.360: INFO: namespace: e2e-tests-container-probe-gdcpg, resource: bindings, ignored listing per whitelist
Jun 23 03:00:07.647: INFO: namespace e2e-tests-container-probe-gdcpg deletion completed in 7.466690499s

S [SKIPPING] [8.139 seconds]
[k8s.io] Probing container
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:669
  should be restarted with a docker exec liveness probe with timeout  [Conformance] [It]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674

  Jun 23 03:00:00.151: The default exec handler, dockertools.NativeExecHandler, does not support timeouts due to a limitation in the Docker Remote API

  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:299
------------------------------
[sig-storage] Projected 
  should be consumable in multiple volumes in a pod [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [sig-storage] Projected
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jun 23 03:00:07.647: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should be consumable in multiple volumes in a pod [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating secret with name projected-secret-test-89844066-7691-11e8-aba4-0e3c01d665e8
STEP: Creating a pod to test consume secrets
Jun 23 03:00:08.475: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-89876e0b-7691-11e8-aba4-0e3c01d665e8" in namespace "e2e-tests-projected-5z22t" to be "success or failure"
Jun 23 03:00:08.493: INFO: Pod "pod-projected-secrets-89876e0b-7691-11e8-aba4-0e3c01d665e8": Phase="Pending", Reason="", readiness=false. Elapsed: 18.104221ms
Jun 23 03:00:10.509: INFO: Pod "pod-projected-secrets-89876e0b-7691-11e8-aba4-0e3c01d665e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033906088s
Jun 23 03:00:12.525: INFO: Pod "pod-projected-secrets-89876e0b-7691-11e8-aba4-0e3c01d665e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050105904s
STEP: Saw pod success
Jun 23 03:00:12.525: INFO: Pod "pod-projected-secrets-89876e0b-7691-11e8-aba4-0e3c01d665e8" satisfied condition "success or failure"
Jun 23 03:00:12.541: INFO: Trying to get logs from node prtest-cc63063-575-ig-n-2jvs pod pod-projected-secrets-89876e0b-7691-11e8-aba4-0e3c01d665e8 container secret-volume-test: <nil>
STEP: delete the pod
Jun 23 03:00:12.588: INFO: Waiting for pod pod-projected-secrets-89876e0b-7691-11e8-aba4-0e3c01d665e8 to disappear
Jun 23 03:00:12.608: INFO: Pod pod-projected-secrets-89876e0b-7691-11e8-aba4-0e3c01d665e8 no longer exists
[AfterEach] [sig-storage] Projected
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jun 23 03:00:12.608: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-5z22t" for this suite.
Jun 23 03:00:18.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 23 03:00:19.608: INFO: namespace: e2e-tests-projected-5z22t, resource: bindings, ignored listing per whitelist
Jun 23 03:00:20.087: INFO: namespace e2e-tests-projected-5z22t deletion completed in 7.438798776s

• [SLOW TEST:12.440 seconds]
[sig-storage] Projected
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:34
  should be consumable in multiple volumes in a pod [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [sig-network] Services
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jun 23 03:00:20.087: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:53
[It] should serve a basic endpoint from pods  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: creating service endpoint-test2 in namespace e2e-tests-services-6nqb5
STEP: waiting up to 1m0s for service endpoint-test2 in namespace e2e-tests-services-6nqb5 to expose endpoints map[]
Jun 23 03:00:20.872: INFO: Get endpoints failed (36.249684ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jun 23 03:00:21.887: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-6nqb5 exposes endpoints map[] (1.051837051s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-6nqb5
STEP: waiting up to 1m0s for service endpoint-test2 in namespace e2e-tests-services-6nqb5 to expose endpoints map[pod1:[80]]
Jun 23 03:00:26.083: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-6nqb5 exposes endpoints map[pod1:[80]] (4.159447798s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-6nqb5
STEP: waiting up to 1m0s for service endpoint-test2 in namespace e2e-tests-services-6nqb5 to expose endpoints map[pod1:[80] pod2:[80]]
Jun 23 03:00:30.357: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-6nqb5 exposes endpoints map[pod1:[80] pod2:[80]] (4.2377424s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-6nqb5
STEP: waiting up to 1m0s for service endpoint-test2 in namespace e2e-tests-services-6nqb5 to expose endpoints map[pod2:[80]]
Jun 23 03:00:30.461: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-6nqb5 exposes endpoints map[pod2:[80]] (86.799389ms elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-6nqb5
STEP: waiting up to 1m0s for service endpoint-test2 in namespace e2e-tests-services-6nqb5 to expose endpoints map[]
Jun 23 03:00:30.509: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-6nqb5 exposes endpoints map[] (20.016068ms elapsed)
[AfterEach] [sig-network] Services
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jun 23 03:00:30.600: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-6nqb5" for this suite.
Jun 23 03:00:36.698: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 23 03:00:38.104: INFO: namespace: e2e-tests-services-6nqb5, resource: bindings, ignored listing per whitelist
Jun 23 03:00:38.104: INFO: namespace e2e-tests-services-6nqb5 deletion completed in 7.471163897s
[AfterEach] [sig-network] Services
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:58

• [SLOW TEST:18.017 seconds]
[sig-network] Services
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [sig-api-machinery] Secrets
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jun 23 03:00:38.104: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating secret with name secret-test-9ba4a6b1-7691-11e8-aba4-0e3c01d665e8
STEP: Creating a pod to test consume secrets
Jun 23 03:00:38.896: INFO: Waiting up to 5m0s for pod "pod-secrets-9ba73576-7691-11e8-aba4-0e3c01d665e8" in namespace "e2e-tests-secrets-vj95k" to be "success or failure"
Jun 23 03:00:38.912: INFO: Pod "pod-secrets-9ba73576-7691-11e8-aba4-0e3c01d665e8": Phase="Pending", Reason="", readiness=false. Elapsed: 16.793105ms
Jun 23 03:00:40.931: INFO: Pod "pod-secrets-9ba73576-7691-11e8-aba4-0e3c01d665e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035010417s
Jun 23 03:00:42.947: INFO: Pod "pod-secrets-9ba73576-7691-11e8-aba4-0e3c01d665e8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050910227s
Jun 23 03:00:44.963: INFO: Pod "pod-secrets-9ba73576-7691-11e8-aba4-0e3c01d665e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.067161292s
STEP: Saw pod success
Jun 23 03:00:44.963: INFO: Pod "pod-secrets-9ba73576-7691-11e8-aba4-0e3c01d665e8" satisfied condition "success or failure"
Jun 23 03:00:44.978: INFO: Trying to get logs from node prtest-cc63063-575-ig-n-3hm6 pod pod-secrets-9ba73576-7691-11e8-aba4-0e3c01d665e8 container secret-env-test: <nil>
STEP: delete the pod
Jun 23 03:00:45.021: INFO: Waiting for pod pod-secrets-9ba73576-7691-11e8-aba4-0e3c01d665e8 to disappear
Jun 23 03:00:45.036: INFO: Pod pod-secrets-9ba73576-7691-11e8-aba4-0e3c01d665e8 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jun 23 03:00:45.036: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-vj95k" for this suite.
Jun 23 03:00:51.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 23 03:00:51.862: INFO: namespace: e2e-tests-secrets-vj95k, resource: bindings, ignored listing per whitelist
Jun 23 03:00:52.533: INFO: namespace e2e-tests-secrets-vj95k deletion completed in 7.468102501s

• [SLOW TEST:14.430 seconds]
[sig-api-machinery] Secrets
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:30
  should be consumable from pods in env vars  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [sig-storage] Downward API volume
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jun 23 03:00:52.534: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:38
[It] should provide container's memory request  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test downward API volume plugin
Jun 23 03:00:53.287: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a43c29e2-7691-11e8-aba4-0e3c01d665e8" in namespace "e2e-tests-downward-api-pjdc4" to be "success or failure"
Jun 23 03:00:53.307: INFO: Pod "downwardapi-volume-a43c29e2-7691-11e8-aba4-0e3c01d665e8": Phase="Pending", Reason="", readiness=false. Elapsed: 19.353352ms
Jun 23 03:00:55.323: INFO: Pod "downwardapi-volume-a43c29e2-7691-11e8-aba4-0e3c01d665e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035238516s
Jun 23 03:00:57.339: INFO: Pod "downwardapi-volume-a43c29e2-7691-11e8-aba4-0e3c01d665e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051812334s
STEP: Saw pod success
Jun 23 03:00:57.339: INFO: Pod "downwardapi-volume-a43c29e2-7691-11e8-aba4-0e3c01d665e8" satisfied condition "success or failure"
Jun 23 03:00:57.355: INFO: Trying to get logs from node prtest-cc63063-575-ig-n-1n10 pod downwardapi-volume-a43c29e2-7691-11e8-aba4-0e3c01d665e8 container client-container: <nil>
STEP: delete the pod
Jun 23 03:00:57.407: INFO: Waiting for pod downwardapi-volume-a43c29e2-7691-11e8-aba4-0e3c01d665e8 to disappear
Jun 23 03:00:57.422: INFO: Pod downwardapi-volume-a43c29e2-7691-11e8-aba4-0e3c01d665e8 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jun 23 03:00:57.422: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-pjdc4" for this suite.
Jun 23 03:01:03.503: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 23 03:01:04.893: INFO: namespace: e2e-tests-downward-api-pjdc4, resource: bindings, ignored listing per whitelist
Jun 23 03:01:04.970: INFO: namespace e2e-tests-downward-api-pjdc4 deletion completed in 7.51875384s

• [SLOW TEST:12.437 seconds]
[sig-storage] Downward API volume
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:33
  should provide container's memory request  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
[sig-api-machinery] ConfigMap 
  should be consumable via the environment  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [sig-api-machinery] ConfigMap
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Jun 23 03:01:04.970: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating configMap e2e-tests-configmap-p88fk/configmap-test-aba73095-7691-11e8-aba4-0e3c01d665e8
STEP: Creating a pod to test consume configMaps
Jun 23 03:01:05.764: INFO: Waiting up to 5m0s for pod "pod-configmaps-abaa560e-7691-11e8-aba4-0e3c01d665e8" in namespace "e2e-tests-configmap-p88fk" to be "success or failure"
Jun 23 03:01:05.798: INFO: Pod "pod-configmaps-abaa560e-7691-11e8-aba4-0e3c01d665e8": Phase="Pending", Reason="", readiness=false. Elapsed: 33.3784ms
Jun 23 03:01:07.814: INFO: Pod "pod-configmaps-abaa560e-7691-11e8-aba4-0e3c01d665e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049783806s
Jun 23 03:01:09.830: INFO: Pod "pod-configmaps-abaa560e-7691-11e8-aba4-0e3c01d665e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065787217s
STEP: Saw pod success
Jun 23 03:01:09.830: INFO: Pod "pod-configmaps-abaa560e-7691-11e8-aba4-0e3c01d665e8" satisfied condition "success or failure"
Jun 23 03:01:09.846: INFO: Trying to get logs from node prtest-cc63063-575-ig-n-3hm6 pod pod-configmaps-abaa560e-7691-11e8-aba4-0e3c01d665e8 container env-test: <nil>
STEP: delete the pod
Jun 23 03:01:09.893: INFO: Waiting for pod pod-configmaps-abaa560e-7691-11e8-aba4-0e3c01d665e8 to disappear
Jun 23 03:01:09.908: INFO: Pod pod-configmaps-abaa560e-7691-11e8-aba4-0e3c01d665e8 no longer exists
[AfterEach] [sig-api-machinery] ConfigMap
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jun 23 03:01:09.908: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-p88fk" for this suite.
Jun 23 03:01:15.989: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 23 03:01:17.379: INFO: namespace: e2e-tests-configmap-p88fk, resource: bindings, ignored listing per whitelist
Jun 23 03:01:17.410: INFO: namespace e2e-tests-configmap-p88fk deletion completed in 7.472929862s

• [SLOW TEST:12.440 seconds]
[sig-api-machinery] ConfigMap
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:29
  should be consumable via the environment  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
SS
Jun 23 03:01:17.411: INFO: Running AfterSuite actions on all nodes
Jun 23 03:01:17.411: INFO: Running AfterSuite actions on node 1
Jun 23 03:01:17.411: INFO: Dumping logs locally to: /data/src/github.com/openshift/origin/_output/scripts/conformance-k8s/artifacts
Jun 23 03:01:17.412: INFO: Error running cluster/log-dump/log-dump.sh: fork/exec ../../cluster/log-dump/log-dump.sh: no such file or directory

Ran 161 of 890 Specs in 5091.137 seconds
SUCCESS! -- 161 Passed | 0 Failed | 0 Pending | 729 Skipped
PASS

Run complete, results in /data/src/github.com/openshift/origin/_output/scripts/conformance-k8s/artifacts
+ gather
+ set +e
++ pwd
+ export PATH=/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/origin/.local/bin:/home/origin/bin
+ PATH=/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/origin/.local/bin:/home/origin/bin
+ oc get nodes --template '{{ range .items }}{{ .metadata.name }}{{ "\n" }}{{ end }}'
+ xargs -L 1 -I X bash -c 'oc get --raw /api/v1/nodes/X/proxy/metrics > /tmp/artifacts/X.metrics' ''
+ oc get --raw /metrics
+ set -e
+ set +o xtrace
########## FINISHED STAGE: SUCCESS: RUN EXTENDED TESTS [01h 32m 56s] ##########
[workspace] $ /bin/bash /tmp/jenkins5946757115730350203.sh
########## STARTING STAGE: TAG THE LATEST CONFORMANCE RESULTS ##########
+ [[ -s /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/activate ]]
+ source /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e
++ export PATH=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/.config
+ location=origin-ci-test/logs/test_branch_origin_extended_conformance_k8s/575/artifacts/scripts/conformance-k8s/artifacts
+ location_url=https://storage.googleapis.com/origin-ci-test/logs/test_branch_origin_extended_conformance_k8s/575/artifacts/scripts/conformance-k8s/artifacts
+ echo https://storage.googleapis.com/origin-ci-test/logs/test_branch_origin_extended_conformance_k8s/575/artifacts/scripts/conformance-k8s/artifacts
+ gsutil cp .latest-conformance gs://origin-ci-test/releases/openshift/origin/master/.latest-conformance
Copying file://.latest-conformance [Content-Type=application/octet-stream]...
Operation completed over 1 objects/143.0 B.                                      
+ set +o xtrace
########## FINISHED STAGE: SUCCESS: TAG THE LATEST CONFORMANCE RESULTS [00h 00m 01s] ##########
[PostBuildScript] - Executing post build scripts.
[workspace] $ /bin/bash /tmp/jenkins1834535516869964329.sh
########## STARTING STAGE: DOWNLOAD ARTIFACTS FROM THE REMOTE HOST ##########
+ [[ -s /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/activate ]]
+ source /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e
++ export PATH=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/.config
+ trap 'exit 0' EXIT
++ pwd
+ ARTIFACT_DIR=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/artifacts/gathered
+ rm -rf /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/artifacts/gathered
+ mkdir -p /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/artifacts/gathered
+ ssh -F /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo stat /data/src/github.com/openshift/origin/_output/scripts
  File: ‘/data/src/github.com/openshift/origin/_output/scripts’
  Size: 87        	Blocks: 0          IO Block: 4096   directory
Device: ca02h/51714d	Inode: 176489351   Links: 6
Access: (2755/drwxr-sr-x)  Uid: ( 1001/  origin)   Gid: ( 1003/origin-git)
Context: unconfined_u:object_r:container_file_t:s0
Access: 2018-06-23 00:37:03.897643385 +0000
Modify: 2018-06-23 01:29:19.527273689 +0000
Change: 2018-06-23 01:29:19.527273689 +0000
 Birth: -
+ ssh -F /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo chmod -R o+rX /data/src/github.com/openshift/origin/_output/scripts
+ scp -r -F /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel:/data/src/github.com/openshift/origin/_output/scripts /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/artifacts/gathered
+ ssh -F /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo stat /tmp/artifacts
  File: ‘/tmp/artifacts’
  Size: 160       	Blocks: 0          IO Block: 4096   directory
Device: 27h/39d	Inode: 324215      Links: 3
Access: (0755/drwxr-xr-x)  Uid: ( 1001/  origin)   Gid: ( 1002/  docker)
Context: unconfined_u:object_r:user_tmp_t:s0
Access: 2018-06-23 01:28:23.962981510 +0000
Modify: 2018-06-23 03:01:18.553549357 +0000
Change: 2018-06-23 03:01:18.553549357 +0000
 Birth: -
+ ssh -F /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo chmod -R o+rX /tmp/artifacts
+ scp -r -F /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel:/tmp/artifacts /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/artifacts/gathered
+ tree /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/artifacts/gathered
/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/artifacts/gathered
├── artifacts
│   ├── junit
│   ├── master.metrics
│   ├── prtest-cc63063-575-ig-m-wcqs.metrics
│   ├── prtest-cc63063-575-ig-n-1n10.metrics
│   ├── prtest-cc63063-575-ig-n-2jvs.metrics
│   └── prtest-cc63063-575-ig-n-3hm6.metrics
└── scripts
    ├── build-base-images
    │   ├── artifacts
    │   ├── logs
    │   └── openshift.local.home
    ├── conformance-k8s
    │   ├── artifacts
    │   │   ├── e2e.log
    │   │   ├── junit_01.xml
    │   │   ├── nethealth.txt
    │   │   ├── README.md
    │   │   └── version.txt
    │   ├── logs
    │   │   └── scripts.log
    │   └── openshift.local.home
    ├── push-release
    │   ├── artifacts
    │   ├── logs
    │   │   └── scripts.log
    │   └── openshift.local.home
    └── shell
        ├── artifacts
        ├── logs
        │   ├── 384b9049d55e38284965464d8a446c24d545d3d0a58ca23217e128c3bb61074a.json
        │   ├── 66c2e3f7b6dd192c3d5d9bbe3dc08416db49a52ab98da2128b31a222ae6c7443.json
        │   ├── f7e4619091685634110234e2e92182471d5927955ce9b16ce42952666be8e572.json
        │   └── scripts.log
        └── openshift.local.home

19 directories, 16 files
+ exit 0
[workspace] $ /bin/bash /tmp/jenkins3684722063341412814.sh
########## STARTING STAGE: GENERATE ARTIFACTS FROM THE REMOTE HOST ##########
+ [[ -s /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/activate ]]
+ source /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e
++ export PATH=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/.config
+ trap 'exit 0' EXIT
++ pwd
+ ARTIFACT_DIR=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/artifacts/generated
+ rm -rf /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/artifacts/generated
+ mkdir /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/artifacts/generated
+ ssh -F /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo docker version && sudo docker info && sudo docker images && sudo docker ps -a 2>&1'
  WARNING: You're not using the default seccomp profile
+ ssh -F /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo cat /etc/sysconfig/docker /etc/sysconfig/docker-network /etc/sysconfig/docker-storage /etc/sysconfig/docker-storage-setup /etc/systemd/system/docker.service 2>&1'
+ true
+ ssh -F /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo find /var/lib/docker/containers -name *.log | sudo xargs tail -vn +1 2>&1'
+ ssh -F /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'oc get --raw /metrics --server=https://$( uname --nodename ):10250 --config=/etc/origin/master/admin.kubeconfig 2>&1'
+ true
+ ssh -F /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo ausearch -m AVC -m SELINUX_ERR -m USER_AVC 2>&1'
+ true
+ ssh -F /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'oc get --raw /metrics --config=/etc/origin/master/admin.kubeconfig 2>&1'
+ true
+ ssh -F /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo df -T -h && sudo pvs && sudo vgs && sudo lvs && sudo findmnt --all 2>&1'
+ ssh -F /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo yum list installed 2>&1'
+ ssh -F /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo journalctl --dmesg --no-pager --all --lines=all 2>&1'
+ ssh -F /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo journalctl _PID=1 --no-pager --all --lines=all 2>&1'
+ tree /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/artifacts/generated
/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/artifacts/generated
├── avc_denials.log
├── containers.log
├── dmesg.log
├── docker.config
├── docker.info
├── filesystem.info
├── installed_packages.log
├── master-metrics.log
├── node-metrics.log
└── pid1.journal

0 directories, 10 files
+ exit 0
[workspace] $ /bin/bash /tmp/jenkins8742832636648990657.sh
########## STARTING STAGE: FETCH SYSTEMD JOURNALS FROM THE REMOTE HOST ##########
+ [[ -s /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/activate ]]
+ source /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e
++ export PATH=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/.config
+ trap 'exit 0' EXIT
++ pwd
+ ARTIFACT_DIR=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/artifacts/journals
+ rm -rf /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/artifacts/journals
+ mkdir /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/artifacts/journals
+ ssh -F /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit docker.service --no-pager --all --lines=all
+ ssh -F /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit dnsmasq.service --no-pager --all --lines=all
+ ssh -F /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit systemd-journald.service --no-pager --all --lines=all
+ tree /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/artifacts/journals
/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/artifacts/journals
├── dnsmasq.service
├── docker.service
└── systemd-journald.service

0 directories, 3 files
+ exit 0
[workspace] $ /bin/bash /tmp/jenkins5125177171690258991.sh
########## STARTING STAGE: ASSEMBLE GCS OUTPUT ##########
+ [[ -s /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/activate ]]
+ source /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e
++ export PATH=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/.config
+ trap 'exit 0' EXIT
+ mkdir -p gcs/artifacts gcs/artifacts/generated gcs/artifacts/journals gcs/artifacts/gathered
++ python -c 'import json; import urllib; print json.load(urllib.urlopen('\''https://ci.openshift.redhat.com/jenkins/job/test_branch_origin_extended_conformance_k8s/575/api/json'\''))['\''result'\'']'
+ result=SUCCESS
+ cat
++ date +%s
+ cat /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/builds/575/log
+ cp artifacts/generated/avc_denials.log artifacts/generated/containers.log artifacts/generated/dmesg.log artifacts/generated/docker.config artifacts/generated/docker.info artifacts/generated/filesystem.info artifacts/generated/installed_packages.log artifacts/generated/master-metrics.log artifacts/generated/node-metrics.log artifacts/generated/pid1.journal gcs/artifacts/generated/
+ cp artifacts/journals/dnsmasq.service artifacts/journals/docker.service artifacts/journals/systemd-journald.service gcs/artifacts/journals/
+ cp -r artifacts/gathered/artifacts artifacts/gathered/scripts gcs/artifacts/
++ pwd
+ scp -F /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/.config/origin-ci-tool/inventory/.ssh_config -r /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/gcs openshiftdevel:/data
+ scp -F /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/.config/origin-ci-tool/inventory/.ssh_config /var/lib/jenkins/.config/gcloud/gcs-publisher-credentials.json openshiftdevel:/data/credentials.json
+ exit 0
[workspace] $ /bin/bash /tmp/jenkins642303187841021412.sh
########## STARTING STAGE: PUSH THE ARTIFACTS AND METADATA ##########
+ [[ -s /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/activate ]]
+ source /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e
++ export PATH=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/.config
++ mktemp
+ script=/tmp/tmp.dUskn6MiQ6
+ cat
+ chmod +x /tmp/tmp.dUskn6MiQ6
+ scp -F /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/.config/origin-ci-tool/inventory/.ssh_config /tmp/tmp.dUskn6MiQ6 openshiftdevel:/tmp/tmp.dUskn6MiQ6
+ ssh -F /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/.config/origin-ci-tool/inventory/.ssh_config -t openshiftdevel 'bash -l -c "timeout 300 /tmp/tmp.dUskn6MiQ6"'
+ cd /home/origin
+ trap 'exit 0' EXIT
+ [[ -n {"type":"periodic","job":"test_branch_origin_extended_conformance_k8s","buildid":"b9cc5762-767c-11e8-96fd-0a58ac10fe2b","refs":{}} ]]
++ jq --compact-output .buildid
+ [[ "b9cc5762-767c-11e8-96fd-0a58ac10fe2b" =~ ^"[0-9]+"$ ]]
+ echo 'Using BUILD_NUMBER'
Using BUILD_NUMBER
++ jq --compact-output '.buildid |= "575"'
+ JOB_SPEC='{"type":"periodic","job":"test_branch_origin_extended_conformance_k8s","buildid":"575","refs":{}}'
+ docker run -e 'JOB_SPEC={"type":"periodic","job":"test_branch_origin_extended_conformance_k8s","buildid":"575","refs":{}}' -v /data:/data:z registry.svc.ci.openshift.org/ci/gcsupload:latest --dry-run=false --gcs-path=gs://origin-ci-test --gcs-credentials-file=/data/credentials.json --path-strategy=single --default-org=openshift --default-repo=origin /data/gcs/artifacts /data/gcs/build-log.txt /data/gcs/finished.json
Unable to find image 'registry.svc.ci.openshift.org/ci/gcsupload:latest' locally
Trying to pull repository registry.svc.ci.openshift.org/ci/gcsupload ... 
latest: Pulling from registry.svc.ci.openshift.org/ci/gcsupload
605ce1bd3f31: Already exists
dc6346da9948: Already exists
7377da2e59db: Pulling fs layer
7377da2e59db: Verifying Checksum
7377da2e59db: Download complete
7377da2e59db: Pull complete
Digest: sha256:cce318f50d4a815a3dc926962b02d39fd6cd33303c3a59035b743f19dd2bf6a7
Status: Downloaded newer image for registry.svc.ci.openshift.org/ci/gcsupload:latest
{"component":"gcsupload","level":"info","msg":"Gathering artifacts from artifact directory: /data/gcs/artifacts","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/artifacts/master.metrics in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s/575/artifacts/artifacts/master.metrics\n","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/artifacts/prtest-cc63063-575-ig-m-wcqs.metrics in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s/575/artifacts/artifacts/prtest-cc63063-575-ig-m-wcqs.metrics\n","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/artifacts/prtest-cc63063-575-ig-n-1n10.metrics in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s/575/artifacts/artifacts/prtest-cc63063-575-ig-n-1n10.metrics\n","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/artifacts/prtest-cc63063-575-ig-n-2jvs.metrics in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s/575/artifacts/artifacts/prtest-cc63063-575-ig-n-2jvs.metrics\n","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/artifacts/prtest-cc63063-575-ig-n-3hm6.metrics in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s/575/artifacts/artifacts/prtest-cc63063-575-ig-n-3hm6.metrics\n","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/generated/avc_denials.log in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s/575/artifacts/generated/avc_denials.log\n","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/generated/containers.log in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s/575/artifacts/generated/containers.log\n","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/generated/dmesg.log in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s/575/artifacts/generated/dmesg.log\n","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/generated/docker.config in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s/575/artifacts/generated/docker.config\n","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/generated/docker.info in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s/575/artifacts/generated/docker.info\n","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/generated/filesystem.info in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s/575/artifacts/generated/filesystem.info\n","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/generated/installed_packages.log in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s/575/artifacts/generated/installed_packages.log\n","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/generated/master-metrics.log in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s/575/artifacts/generated/master-metrics.log\n","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/generated/node-metrics.log in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s/575/artifacts/generated/node-metrics.log\n","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/generated/pid1.journal in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s/575/artifacts/generated/pid1.journal\n","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/journals/dnsmasq.service in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s/575/artifacts/journals/dnsmasq.service\n","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/journals/docker.service in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s/575/artifacts/journals/docker.service\n","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/journals/systemd-journald.service in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s/575/artifacts/journals/systemd-journald.service\n","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/conformance-k8s/artifacts/README.md in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s/575/artifacts/scripts/conformance-k8s/artifacts/README.md\n","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/conformance-k8s/artifacts/e2e.log in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s/575/artifacts/scripts/conformance-k8s/artifacts/e2e.log\n","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/conformance-k8s/artifacts/junit_01.xml in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s/575/artifacts/scripts/conformance-k8s/artifacts/junit_01.xml\n","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/conformance-k8s/artifacts/nethealth.txt in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s/575/artifacts/scripts/conformance-k8s/artifacts/nethealth.txt\n","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/conformance-k8s/artifacts/version.txt in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s/575/artifacts/scripts/conformance-k8s/artifacts/version.txt\n","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/conformance-k8s/logs/scripts.log in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s/575/artifacts/scripts/conformance-k8s/logs/scripts.log\n","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/push-release/logs/scripts.log in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s/575/artifacts/scripts/push-release/logs/scripts.log\n","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/shell/logs/384b9049d55e38284965464d8a446c24d545d3d0a58ca23217e128c3bb61074a.json in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s/575/artifacts/scripts/shell/logs/384b9049d55e38284965464d8a446c24d545d3d0a58ca23217e128c3bb61074a.json\n","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/shell/logs/66c2e3f7b6dd192c3d5d9bbe3dc08416db49a52ab98da2128b31a222ae6c7443.json in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s/575/artifacts/scripts/shell/logs/66c2e3f7b6dd192c3d5d9bbe3dc08416db49a52ab98da2128b31a222ae6c7443.json\n","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/shell/logs/f7e4619091685634110234e2e92182471d5927955ce9b16ce42952666be8e572.json in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s/575/artifacts/scripts/shell/logs/f7e4619091685634110234e2e92182471d5927955ce9b16ce42952666be8e572.json\n","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/shell/logs/scripts.log in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s/575/artifacts/scripts/shell/logs/scripts.log\n","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/artifacts/prtest-cc63063-575-ig-n-2jvs.metrics","level":"info","msg":"Queued for upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/generated/avc_denials.log","level":"info","msg":"Queued for upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/journals/docker.service","level":"info","msg":"Queued for upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/journals/systemd-journald.service","level":"info","msg":"Queued for upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/artifacts/master.metrics","level":"info","msg":"Queued for upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/artifacts/prtest-cc63063-575-ig-n-1n10.metrics","level":"info","msg":"Queued for upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/generated/docker.config","level":"info","msg":"Queued for upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/generated/docker.info","level":"info","msg":"Queued for upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/build-log.txt","level":"info","msg":"Queued for upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/finished.json","level":"info","msg":"Queued for upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/generated/dmesg.log","level":"info","msg":"Queued for upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/scripts/conformance-k8s/artifacts/README.md","level":"info","msg":"Queued for upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/scripts/push-release/logs/scripts.log","level":"info","msg":"Queued for upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/scripts/shell/logs/66c2e3f7b6dd192c3d5d9bbe3dc08416db49a52ab98da2128b31a222ae6c7443.json","level":"info","msg":"Queued for upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/journals/dnsmasq.service","level":"info","msg":"Queued for upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/scripts/conformance-k8s/artifacts/e2e.log","level":"info","msg":"Queued for upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/generated/containers.log","level":"info","msg":"Queued for upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/scripts/conformance-k8s/logs/scripts.log","level":"info","msg":"Queued for upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/artifacts/prtest-cc63063-575-ig-n-3hm6.metrics","level":"info","msg":"Queued for upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/generated/pid1.journal","level":"info","msg":"Queued for upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/scripts/conformance-k8s/artifacts/junit_01.xml","level":"info","msg":"Queued for upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/scripts/conformance-k8s/artifacts/version.txt","level":"info","msg":"Queued for upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/scripts/shell/logs/f7e4619091685634110234e2e92182471d5927955ce9b16ce42952666be8e572.json","level":"info","msg":"Queued for upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/latest-build.txt","level":"info","msg":"Queued for upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/artifacts/prtest-cc63063-575-ig-m-wcqs.metrics","level":"info","msg":"Queued for upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/generated/filesystem.info","level":"info","msg":"Queued for upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/generated/master-metrics.log","level":"info","msg":"Queued for upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/scripts/shell/logs/scripts.log","level":"info","msg":"Queued for upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/generated/installed_packages.log","level":"info","msg":"Queued for upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/generated/node-metrics.log","level":"info","msg":"Queued for upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/scripts/conformance-k8s/artifacts/nethealth.txt","level":"info","msg":"Queued for upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/scripts/shell/logs/384b9049d55e38284965464d8a446c24d545d3d0a58ca23217e128c3bb61074a.json","level":"info","msg":"Queued for upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/scripts/shell/logs/scripts.log","level":"info","msg":"Finished upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/scripts/conformance-k8s/logs/scripts.log","level":"info","msg":"Finished upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/scripts/conformance-k8s/artifacts/README.md","level":"info","msg":"Finished upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/scripts/shell/logs/f7e4619091685634110234e2e92182471d5927955ce9b16ce42952666be8e572.json","level":"info","msg":"Finished upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/generated/containers.log","level":"info","msg":"Finished upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/generated/avc_denials.log","level":"info","msg":"Finished upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/journals/systemd-journald.service","level":"info","msg":"Finished upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/scripts/push-release/logs/scripts.log","level":"info","msg":"Finished upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/journals/dnsmasq.service","level":"info","msg":"Finished upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/generated/filesystem.info","level":"info","msg":"Finished upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/generated/installed_packages.log","level":"info","msg":"Finished upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/generated/master-metrics.log","level":"info","msg":"Finished upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/finished.json","level":"info","msg":"Finished upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/generated/dmesg.log","level":"info","msg":"Finished upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/generated/docker.config","level":"info","msg":"Finished upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/scripts/conformance-k8s/artifacts/version.txt","level":"info","msg":"Finished upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/artifacts/prtest-cc63063-575-ig-m-wcqs.metrics","level":"info","msg":"Finished upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/artifacts/prtest-cc63063-575-ig-n-1n10.metrics","level":"info","msg":"Finished upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/scripts/conformance-k8s/artifacts/e2e.log","level":"info","msg":"Finished upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/scripts/shell/logs/66c2e3f7b6dd192c3d5d9bbe3dc08416db49a52ab98da2128b31a222ae6c7443.json","level":"info","msg":"Finished upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/scripts/conformance-k8s/artifacts/nethealth.txt","level":"info","msg":"Finished upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/scripts/shell/logs/384b9049d55e38284965464d8a446c24d545d3d0a58ca23217e128c3bb61074a.json","level":"info","msg":"Finished upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/generated/docker.info","level":"info","msg":"Finished upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/generated/node-metrics.log","level":"info","msg":"Finished upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/latest-build.txt","level":"info","msg":"Finished upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/artifacts/prtest-cc63063-575-ig-n-2jvs.metrics","level":"info","msg":"Finished upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/scripts/conformance-k8s/artifacts/junit_01.xml","level":"info","msg":"Finished upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/artifacts/prtest-cc63063-575-ig-n-3hm6.metrics","level":"info","msg":"Finished upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/generated/pid1.journal","level":"info","msg":"Finished upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/journals/docker.service","level":"info","msg":"Finished upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/build-log.txt","level":"info","msg":"Finished upload","time":"2018-06-23T03:01:58Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s/575/artifacts/artifacts/master.metrics","level":"info","msg":"Finished upload","time":"2018-06-23T03:01:59Z"}
{"component":"gcsupload","level":"info","msg":"Finished upload to GCS","time":"2018-06-23T03:01:59Z"}
+ exit 0
+ set +o xtrace
########## FINISHED STAGE: SUCCESS: PUSH THE ARTIFACTS AND METADATA [00h 00m 07s] ##########
[workspace] $ /bin/bash /tmp/jenkins1923728024639609856.sh
########## STARTING STAGE: GATHER ARTIFACTS FROM TEST CLUSTER ##########
+ [[ -s /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/activate ]]
+ source /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e
++ export PATH=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/.config
+ trap 'exit 0' EXIT
++ pwd
+ base_artifact_dir=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/artifacts
+ source ./INSTANCE_PREFIX
++ INSTANCE_PREFIX=prtest-cc63063-575
++ OS_TAG=84c76c1
++ OS_PUSH_BASE_REPO=ci-pr-images/prtest-cc63063-575-
++ gcloud compute instances list --regexp '.*prtest-cc63063-575.*' --uri
+ for instance in '$( gcloud compute instances list --regexp ".*${INSTANCE_PREFIX}.*" --uri )'
++ mktemp
+ info=/tmp/tmp.SotpbNfEir
+ gcloud compute instances describe https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-m-wcqs --format json
++ jq .name --raw-output /tmp/tmp.SotpbNfEir
++ tail -c 5
+ name=wcqs
+ jq '.tags.items | contains(["ocp-master"])' --exit-status /tmp/tmp.SotpbNfEir
true
+ artifact_dir=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/artifacts/masters/wcqs
+ mkdir -p /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/artifacts/masters/wcqs /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/artifacts/masters/wcqs/generated /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/artifacts/masters/wcqs/journals
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-m-wcqs -- sudo journalctl --unit origin-master.service --no-pager --all --lines=all
Warning: Permanently added 'compute.2064189489747599093' (ECDSA) to the list of known hosts.
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-m-wcqs -- sudo journalctl --unit origin-master-api.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-m-wcqs -- sudo journalctl --unit origin-master-controllers.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-m-wcqs -- sudo journalctl --unit etcd.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-m-wcqs -- oc get --raw /metrics --config=/etc/origin/master/admin.kubeconfig
error: Error loading config file "/etc/origin/master/admin.kubeconfig": open /etc/origin/master/admin.kubeconfig: permission denied
+ true
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-m-wcqs -- sudo journalctl --unit origin-node.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-m-wcqs -- sudo journalctl --unit openvswitch.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-m-wcqs -- sudo journalctl --unit ovs-vswitchd.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-m-wcqs -- sudo journalctl --unit ovsdb-server.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-m-wcqs -- sudo journalctl --unit docker.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-m-wcqs -- sudo journalctl --unit dnsmasq.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-m-wcqs -- sudo journalctl --unit systemd-journald.service --no-pager --all --lines=all
++ uname --nodename
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-m-wcqs -- oc get --raw /metrics --server=https://ip-172-18-8-64.ec2.internal:10250
Unable to connect to the server: dial tcp: lookup ip-172-18-8-64.ec2.internal on 10.142.0.5:53: no such host
+ true
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-m-wcqs -- 'sudo docker version && sudo docker info && sudo docker images && sudo docker ps -a'
  WARNING: You're not using the default seccomp profile
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-m-wcqs -- sudo yum history info origin
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-m-wcqs -- 'sudo df -h && sudo pvs && sudo vgs && sudo lvs'
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-m-wcqs -- sudo yum list installed
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-m-wcqs -- sudo ausearch -m AVC -m SELINUX_ERR -m USER_AVC
<no matches>
+ true
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-m-wcqs -- sudo journalctl _PID=1 --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-m-wcqs -- 'sudo find /var/lib/docker/containers -name *.log | sudo xargs tail -vn +1'
+ for instance in '$( gcloud compute instances list --regexp ".*${INSTANCE_PREFIX}.*" --uri )'
++ mktemp
+ info=/tmp/tmp.dh6N5Fg4Sf
+ gcloud compute instances describe https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-n-1n10 --format json
++ jq .name --raw-output /tmp/tmp.dh6N5Fg4Sf
++ tail -c 5
+ name=1n10
+ jq '.tags.items | contains(["ocp-master"])' --exit-status /tmp/tmp.dh6N5Fg4Sf
false
+ jq '.tags.items | contains(["ocp-node"])' --exit-status /tmp/tmp.dh6N5Fg4Sf
true
+ artifact_dir=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/artifacts/nodes/1n10
+ mkdir -p /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/artifacts/nodes/1n10 /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/artifacts/nodes/1n10/generated /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/artifacts/nodes/1n10/journals
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-n-1n10 -- sudo journalctl --unit origin-node.service --no-pager --all --lines=all
Warning: Permanently added 'compute.7654558989971103477' (ECDSA) to the list of known hosts.
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-n-1n10 -- sudo journalctl --unit openvswitch.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-n-1n10 -- sudo journalctl --unit ovs-vswitchd.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-n-1n10 -- sudo journalctl --unit ovsdb-server.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-n-1n10 -- sudo journalctl --unit docker.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-n-1n10 -- sudo journalctl --unit dnsmasq.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-n-1n10 -- sudo journalctl --unit systemd-journald.service --no-pager --all --lines=all
++ uname --nodename
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-n-1n10 -- oc get --raw /metrics --server=https://ip-172-18-8-64.ec2.internal:10250
Unable to connect to the server: dial tcp: lookup ip-172-18-8-64.ec2.internal on 10.142.0.4:53: no such host
+ true
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-n-1n10 -- 'sudo docker version && sudo docker info && sudo docker images && sudo docker ps -a'
  WARNING: You're not using the default seccomp profile
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-n-1n10 -- sudo yum history info origin
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-n-1n10 -- 'sudo df -h && sudo pvs && sudo vgs && sudo lvs'
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-n-1n10 -- sudo yum list installed
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-n-1n10 -- sudo ausearch -m AVC -m SELINUX_ERR -m USER_AVC
<no matches>
+ true
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-n-1n10 -- sudo journalctl _PID=1 --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-n-1n10 -- 'sudo find /var/lib/docker/containers -name *.log | sudo xargs tail -vn +1'
+ for instance in '$( gcloud compute instances list --regexp ".*${INSTANCE_PREFIX}.*" --uri )'
++ mktemp
+ info=/tmp/tmp.4bE4A9n0yQ
+ gcloud compute instances describe https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-n-2jvs --format json
++ jq .name --raw-output /tmp/tmp.4bE4A9n0yQ
++ tail -c 5
+ name=2jvs
+ jq '.tags.items | contains(["ocp-master"])' --exit-status /tmp/tmp.4bE4A9n0yQ
false
+ jq '.tags.items | contains(["ocp-node"])' --exit-status /tmp/tmp.4bE4A9n0yQ
true
+ artifact_dir=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/artifacts/nodes/2jvs
+ mkdir -p /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/artifacts/nodes/2jvs /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/artifacts/nodes/2jvs/generated /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/artifacts/nodes/2jvs/journals
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-n-2jvs -- sudo journalctl --unit origin-node.service --no-pager --all --lines=all
Warning: Permanently added 'compute.5753289856952311541' (ECDSA) to the list of known hosts.
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-n-2jvs -- sudo journalctl --unit openvswitch.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-n-2jvs -- sudo journalctl --unit ovs-vswitchd.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-n-2jvs -- sudo journalctl --unit ovsdb-server.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-n-2jvs -- sudo journalctl --unit docker.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-n-2jvs -- sudo journalctl --unit dnsmasq.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-n-2jvs -- sudo journalctl --unit systemd-journald.service --no-pager --all --lines=all
++ uname --nodename
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-n-2jvs -- oc get --raw /metrics --server=https://ip-172-18-8-64.ec2.internal:10250
Unable to connect to the server: dial tcp: lookup ip-172-18-8-64.ec2.internal on 10.142.0.2:53: no such host
+ true
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-n-2jvs -- 'sudo docker version && sudo docker info && sudo docker images && sudo docker ps -a'
  WARNING: You're not using the default seccomp profile
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-n-2jvs -- sudo yum history info origin
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-n-2jvs -- 'sudo df -h && sudo pvs && sudo vgs && sudo lvs'
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-n-2jvs -- sudo yum list installed
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-n-2jvs -- sudo ausearch -m AVC -m SELINUX_ERR -m USER_AVC
<no matches>
+ true
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-n-2jvs -- sudo journalctl _PID=1 --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-n-2jvs -- 'sudo find /var/lib/docker/containers -name *.log | sudo xargs tail -vn +1'
+ for instance in '$( gcloud compute instances list --regexp ".*${INSTANCE_PREFIX}.*" --uri )'
++ mktemp
+ info=/tmp/tmp.uDofOOkncY
+ gcloud compute instances describe https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-n-3hm6 --format json
++ jq .name --raw-output /tmp/tmp.uDofOOkncY
++ tail -c 5
+ name=3hm6
+ jq '.tags.items | contains(["ocp-master"])' --exit-status /tmp/tmp.uDofOOkncY
false
+ jq '.tags.items | contains(["ocp-node"])' --exit-status /tmp/tmp.uDofOOkncY
true
+ artifact_dir=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/artifacts/nodes/3hm6
+ mkdir -p /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/artifacts/nodes/3hm6 /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/artifacts/nodes/3hm6/generated /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/artifacts/nodes/3hm6/journals
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-n-3hm6 -- sudo journalctl --unit origin-node.service --no-pager --all --lines=all
Warning: Permanently added 'compute.325266985191481077' (ECDSA) to the list of known hosts.
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-n-3hm6 -- sudo journalctl --unit openvswitch.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-n-3hm6 -- sudo journalctl --unit ovs-vswitchd.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-n-3hm6 -- sudo journalctl --unit ovsdb-server.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-n-3hm6 -- sudo journalctl --unit docker.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-n-3hm6 -- sudo journalctl --unit dnsmasq.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-n-3hm6 -- sudo journalctl --unit systemd-journald.service --no-pager --all --lines=all
++ uname --nodename
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-n-3hm6 -- oc get --raw /metrics --server=https://ip-172-18-8-64.ec2.internal:10250
Unable to connect to the server: dial tcp: lookup ip-172-18-8-64.ec2.internal on 10.142.0.3:53: no such host
+ true
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-n-3hm6 -- 'sudo docker version && sudo docker info && sudo docker images && sudo docker ps -a'
  WARNING: You're not using the default seccomp profile
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-n-3hm6 -- sudo yum history info origin
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-n-3hm6 -- 'sudo df -h && sudo pvs && sudo vgs && sudo lvs'
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-n-3hm6 -- sudo yum list installed
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-n-3hm6 -- sudo ausearch -m AVC -m SELINUX_ERR -m USER_AVC
<no matches>
+ true
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-n-3hm6 -- sudo journalctl _PID=1 --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-cc63063-575-ig-n-3hm6 -- 'sudo find /var/lib/docker/containers -name *.log | sudo xargs tail -vn +1'
+ exit 0
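The per-instance gathering loop above (describe the instance, derive a four-character short name, check tags, pull journals over `gcloud compute ssh`) can be sketched as plain shell. This is a minimal sketch, not the job's actual script: the example URI is hypothetical, and the real `gcloud`/`jq` calls are left as comments so the shape runs anywhere.

```shell
#!/bin/sh
# Minimal sketch of the per-instance artifact-gathering loop in the log.
# Assumption: each instance URI ends in the instance name, and the short
# name used for the artifact directory is its last four characters
# (`tail -c 5` keeps 4 chars plus the trailing newline, which the $(...)
# substitution then strips -- matching `name=2jvs` above).

gather() {
    instance_uri="$1"
    # Last path segment of the URI is the full instance name; the job
    # itself reads it from the describe output with: jq .name --raw-output
    instance="${instance_uri##*/}"
    name="$(echo "$instance" | tail -c 5)"
    echo "$name"
    # The job then classifies the host by tag, e.g.:
    #   jq '.tags.items | contains(["ocp-node"])' --exit-status "$info"
    # and collects journals per systemd unit:
    #   gcloud compute ssh "$instance_uri" -- \
    #     sudo journalctl --unit origin-node.service --no-pager --all --lines=all
}

gather "https://www.googleapis.com/compute/v1/projects/p/zones/z/instances/prtest-cc63063-575-ig-n-2jvs"
# prints: 2jvs
```

Using `jq ... --exit-status` as the conditional means the tag check doubles as a shell test: `jq` exits non-zero when `contains()` is false, which is exactly the `false`/`true` pairs visible in the trace.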
[workspace] $ /bin/bash /tmp/jenkins6855933212383947357.sh
########## STARTING STAGE: DEPROVISION TEST CLUSTER ##########
+ [[ -s /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/activate ]]
+ source /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e
++ export PATH=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/.config
++ mktemp
+ script=/tmp/tmp.oyyYWn3gsE
+ cat
+ chmod +x /tmp/tmp.oyyYWn3gsE
+ scp -F /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/.config/origin-ci-tool/inventory/.ssh_config /tmp/tmp.oyyYWn3gsE openshiftdevel:/tmp/tmp.oyyYWn3gsE
+ ssh -F /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/.config/origin-ci-tool/inventory/.ssh_config -t openshiftdevel 'bash -l -c "timeout 900 /tmp/tmp.oyyYWn3gsE"'
+ cd /data/src/github.com/openshift/release
+ trap 'exit 0' EXIT
+ cd cluster/test-deploy/
+ make down WHAT=prtest-cc63063-575 PROFILE=gcp
cd gcp/ && ../../bin/ansible.sh ansible-playbook playbooks/gcp/openshift-cluster/deprovision.yml
Activated service account credentials for: [jenkins-ci-provisioner@openshift-gce-devel.iam.gserviceaccount.com]

PLAY [Terminate running cluster and remove all supporting resources in GCE] ****

TASK [Gathering Facts] *********************************************************
Saturday 23 June 2018  03:04:04 +0000 (0:00:00.069)       0:00:00.069 ********* 
ok: [localhost]

TASK [include_role] ************************************************************
Saturday 23 June 2018  03:04:10 +0000 (0:00:06.095)       0:00:06.165 ********* 

TASK [openshift_gcp : Templatize DNS script] ***********************************
Saturday 23 June 2018  03:04:10 +0000 (0:00:00.116)       0:00:06.282 ********* 
changed: [localhost]

TASK [openshift_gcp : Templatize provision script] *****************************
Saturday 23 June 2018  03:04:11 +0000 (0:00:00.550)       0:00:06.833 ********* 
changed: [localhost]

TASK [openshift_gcp : Templatize de-provision script] **************************
Saturday 23 June 2018  03:04:11 +0000 (0:00:00.361)       0:00:07.194 ********* 
changed: [localhost]

TASK [openshift_gcp : Provision GCP DNS domain] ********************************
Saturday 23 June 2018  03:04:11 +0000 (0:00:00.331)       0:00:07.526 ********* 
skipping: [localhost]

TASK [openshift_gcp : Ensure that DNS resolves to the hosted zone] *************
Saturday 23 June 2018  03:04:11 +0000 (0:00:00.029)       0:00:07.555 ********* 
skipping: [localhost]

TASK [openshift_gcp : Templatize SSH key provision script] *********************
Saturday 23 June 2018  03:04:12 +0000 (0:00:00.029)       0:00:07.584 ********* 
changed: [localhost]

TASK [openshift_gcp : Provision GCP SSH key resources] *************************
Saturday 23 June 2018  03:04:12 +0000 (0:00:00.306)       0:00:07.891 ********* 
skipping: [localhost]

TASK [openshift_gcp : Provision GCP resources] *********************************
Saturday 23 June 2018  03:04:12 +0000 (0:00:00.028)       0:00:07.920 ********* 
skipping: [localhost]

TASK [openshift_gcp : De-provision GCP resources] ******************************
Saturday 23 June 2018  03:04:12 +0000 (0:00:00.027)       0:00:07.948 ********* 
changed: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=6    changed=5    unreachable=0    failed=0   

Saturday 23 June 2018  03:08:37 +0000 (0:04:24.666)       0:04:32.615 ********* 
=============================================================================== 
openshift_gcp : De-provision GCP resources ---------------------------- 264.67s
Gathering Facts --------------------------------------------------------- 6.10s
openshift_gcp : Templatize DNS script ----------------------------------- 0.55s
openshift_gcp : Templatize provision script ----------------------------- 0.36s
openshift_gcp : Templatize de-provision script -------------------------- 0.33s
openshift_gcp : Templatize SSH key provision script --------------------- 0.31s
include_role ------------------------------------------------------------ 0.12s
openshift_gcp : Provision GCP DNS domain -------------------------------- 0.03s
openshift_gcp : Ensure that DNS resolves to the hosted zone ------------- 0.03s
openshift_gcp : Provision GCP SSH key resources ------------------------- 0.03s
openshift_gcp : Provision GCP resources --------------------------------- 0.03s
+ exit 0
+ set +o xtrace
########## FINISHED STAGE: SUCCESS: DEPROVISION TEST CLUSTER [00h 04m 39s] ##########
[workspace] $ /bin/bash /tmp/jenkins4980847890159796793.sh
########## STARTING STAGE: DELETE PR IMAGES ##########
+ [[ -s /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/activate ]]
+ source /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e
++ export PATH=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/.config
+ trap 'exit 0' EXIT
+ source ./INSTANCE_PREFIX
++ INSTANCE_PREFIX=prtest-cc63063-575
++ OS_TAG=84c76c1
++ OS_PUSH_BASE_REPO=ci-pr-images/prtest-cc63063-575-
+ export KUBECONFIG=/var/lib/jenkins/secrets/image-pr-push.kubeconfig
+ KUBECONFIG=/var/lib/jenkins/secrets/image-pr-push.kubeconfig
+ oc get is -o name -n ci-pr-images
+ grep prtest-cc63063-575
+ xargs -r oc delete
imagestream "prtest-cc63063-575-node" deleted
imagestream "prtest-cc63063-575-origin" deleted
imagestream "prtest-cc63063-575-origin-base" deleted
imagestream "prtest-cc63063-575-origin-cli" deleted
imagestream "prtest-cc63063-575-origin-control-plane" deleted
imagestream "prtest-cc63063-575-origin-deployer" deleted
imagestream "prtest-cc63063-575-origin-docker-builder" deleted
imagestream "prtest-cc63063-575-origin-docker-registry" deleted
imagestream "prtest-cc63063-575-origin-egress-dns-proxy" deleted
imagestream "prtest-cc63063-575-origin-egress-http-proxy" deleted
imagestream "prtest-cc63063-575-origin-egress-router" deleted
imagestream "prtest-cc63063-575-origin-f5-router" deleted
imagestream "prtest-cc63063-575-origin-haproxy-router" deleted
imagestream "prtest-cc63063-575-origin-hyperkube" deleted
imagestream "prtest-cc63063-575-origin-hypershift" deleted
imagestream "prtest-cc63063-575-origin-keepalived-ipfailover" deleted
imagestream "prtest-cc63063-575-origin-metrics-server" deleted
imagestream "prtest-cc63063-575-origin-node" deleted
imagestream "prtest-cc63063-575-origin-pod" deleted
imagestream "prtest-cc63063-575-origin-recycler" deleted
imagestream "prtest-cc63063-575-origin-template-service-broker" deleted
imagestream "prtest-cc63063-575-origin-tests" deleted
imagestream "prtest-cc63063-575-origin-web-console" deleted
+ exit 0
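The cleanup stage above is a single list-filter-delete pipeline: `oc get is -o name | grep $PREFIX | xargs -r oc delete`. A sketch with `oc` stubbed out (the stream names below are illustrative) shows why `xargs -r` matters: without `-r`, an empty `grep` result would still invoke `oc delete` with no arguments and fail the stage.

```shell
#!/bin/sh
# Sketch of the DELETE PR IMAGES pipeline, with `oc get is -o name`
# replaced by a stub so the example is runnable anywhere.
list_imagestreams() {
    printf '%s\n' \
        "imagestream/prtest-cc63063-575-node" \
        "imagestream/other-job-origin"
}

# Keep only this PR's streams; `xargs -r` (GNU xargs) skips the command
# entirely when grep matches nothing, so an empty list is not an error.
list_imagestreams | grep prtest-cc63063-575 | xargs -r echo would-delete
# prints: would-delete imagestream/prtest-cc63063-575-node
```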
[workspace] $ /bin/bash /tmp/jenkins5088872783690483440.sh
########## STARTING STAGE: DEPROVISION CLOUD RESOURCES ##########
+ [[ -s /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/activate ]]
+ source /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e
++ export PATH=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/.config
+ oct deprovision

PLAYBOOK: main.yml *************************************************************
4 plays in /var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml

PLAY [ensure we have the parameters necessary to deprovision virtual hosts] ****

TASK [ensure all required variables are set] ***********************************
task path: /var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml:9
skipping: [localhost] => (item=origin_ci_inventory_dir)  => {
    "changed": false, 
    "generated_timestamp": "2018-06-22 23:08:42.620399", 
    "item": "origin_ci_inventory_dir", 
    "skip_reason": "Conditional check failed", 
    "skipped": true
}
skipping: [localhost] => (item=origin_ci_aws_region)  => {
    "changed": false, 
    "generated_timestamp": "2018-06-22 23:08:42.624257", 
    "item": "origin_ci_aws_region", 
    "skip_reason": "Conditional check failed", 
    "skipped": true
}

PLAY [deprovision virtual hosts in EC2] ****************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [deprovision a virtual EC2 host] ******************************************
task path: /var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml:28
included: /var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml for localhost

TASK [update the SSH configuration to remove AWS EC2 specifics] ****************
task path: /var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:2
ok: [localhost] => {
    "changed": false, 
    "generated_timestamp": "2018-06-22 23:08:43.689550", 
    "msg": ""
}

TASK [rename EC2 instance for termination reaper] ******************************
task path: /var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:8
changed: [localhost] => {
    "changed": true, 
    "generated_timestamp": "2018-06-22 23:08:44.268779", 
    "msg": "Tags {'Name': 'oct-terminate'} created for resource i-0af61c2eae57f7ee5."
}

TASK [tear down the EC2 instance] **********************************************
task path: /var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:15
changed: [localhost] => {
    "changed": true, 
    "generated_timestamp": "2018-06-22 23:08:45.102936", 
    "instance_ids": [
        "i-0af61c2eae57f7ee5"
    ], 
    "instances": [
        {
            "ami_launch_index": "0", 
            "architecture": "x86_64", 
            "block_device_mapping": {
                "/dev/sda1": {
                    "delete_on_termination": true, 
                    "status": "attached", 
                    "volume_id": "vol-06e73f838ee024357"
                }, 
                "/dev/sdb": {
                    "delete_on_termination": true, 
                    "status": "attached", 
                    "volume_id": "vol-083b62dc3a651f82e"
                }
            }, 
            "dns_name": "ec2-18-209-28-149.compute-1.amazonaws.com", 
            "ebs_optimized": false, 
            "groups": {
                "sg-7e73221a": "default"
            }, 
            "hypervisor": "xen", 
            "id": "i-0af61c2eae57f7ee5", 
            "image_id": "ami-0f2178e5f060dbf2d", 
            "instance_type": "m4.xlarge", 
            "kernel": null, 
            "key_name": "libra", 
            "launch_time": "2018-06-23T00:31:31.000Z", 
            "placement": "us-east-1d", 
            "private_dns_name": "ip-172-18-0-175.ec2.internal", 
            "private_ip": "172.18.0.175", 
            "public_dns_name": "ec2-18-209-28-149.compute-1.amazonaws.com", 
            "public_ip": "18.209.28.149", 
            "ramdisk": null, 
            "region": "us-east-1", 
            "root_device_name": "/dev/sda1", 
            "root_device_type": "ebs", 
            "state": "running", 
            "state_code": 16, 
            "tags": {
                "Name": "oct-terminate", 
                "openshift_etcd": "", 
                "openshift_master": "", 
                "openshift_node": ""
            }, 
            "tenancy": "default", 
            "virtualization_type": "hvm"
        }
    ], 
    "tagged_instances": []
}

TASK [remove the serialized host variables] ************************************
task path: /var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:22
changed: [localhost] => {
    "changed": true, 
    "generated_timestamp": "2018-06-22 23:08:45.363773", 
    "path": "/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/.config/origin-ci-tool/inventory/host_vars/172.18.0.175.yml", 
    "state": "absent"
}

PLAY [deprovision virtual hosts locally managed by Vagrant] ********************
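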

TASK [Gathering Facts] *********************************************************
ok: [localhost]

PLAY [clean up local configuration for deprovisioned instances] ****************

TASK [remove inventory configuration directory] ********************************
task path: /var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml:61
changed: [localhost] => {
    "changed": true, 
    "generated_timestamp": "2018-06-22 23:08:46.169597", 
    "path": "/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s/workspace/.config/origin-ci-tool/inventory", 
    "state": "absent"
}

PLAY RECAP *********************************************************************
localhost                  : ok=8    changed=4    unreachable=0    failed=0   

+ set +o xtrace
########## FINISHED STAGE: SUCCESS: DEPROVISION CLOUD RESOURCES [00h 00m 05s] ##########
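The teardown order in the play above is worth noting: the instance is first retagged `Name=oct-terminate` and only then terminated, so a periodic reaper can still find and remove it if the terminate call itself fails. A sketch of that pattern, with the Ansible `ec2` module calls paraphrased as stubbed CLI-style echoes (the `aws_stub` function and its output format are inventions of this example):

```shell
#!/bin/sh
# Sketch of the rename-then-terminate teardown pattern from the
# deprovision play. Real AWS calls are stubbed: this only demonstrates
# the ordering, not an actual API binding.
aws_stub() { echo "aws $*"; }

teardown() {
    instance_id="$1"
    # Step 1: hand the instance to the termination reaper by name,
    # so a later sweep catches it even if step 2 fails.
    aws_stub ec2 create-tags --resources "$instance_id" \
        --tags Key=Name,Value=oct-terminate
    # Step 2: terminate immediately.
    aws_stub ec2 terminate-instances --instance-ids "$instance_id"
}

teardown i-0af61c2eae57f7ee5
```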
Archiving artifacts
Recording test results
Sending e-mails to: ccoleman@redhat.com
[WS-CLEANUP] Deleting project workspace...[WS-CLEANUP] done
Finished: SUCCESS