STEP: Creating secret with name secret-test-fd167c1b-a4d4-11e8-9347-0e1ce18119f6
STEP: Creating a pod to test consume secrets
Aug 20 23:58:52.137: INFO: Waiting up to 5m0s for pod "pod-secrets-fd1939c7-a4d4-11e8-9347-0e1ce18119f6" in namespace "e2e-tests-secrets-4hvgm" to be "success or failure"
Aug 20 23:58:52.152: INFO: Pod "pod-secrets-fd1939c7-a4d4-11e8-9347-0e1ce18119f6": Phase="Pending", Reason="", readiness=false. Elapsed: 15.196246ms
Aug 20 23:58:54.180: INFO: Pod "pod-secrets-fd1939c7-a4d4-11e8-9347-0e1ce18119f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043429876s
Aug 20 23:58:56.197: INFO: Pod "pod-secrets-fd1939c7-a4d4-11e8-9347-0e1ce18119f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.059796869s
STEP: Saw pod success
Aug 20 23:58:56.197: INFO: Pod "pod-secrets-fd1939c7-a4d4-11e8-9347-0e1ce18119f6" satisfied condition "success or failure"
Aug 20 23:58:56.212: INFO: Trying to get logs from node prtest-7ef3e0b-45-ig-n-m97r pod pod-secrets-fd1939c7-a4d4-11e8-9347-0e1ce18119f6 container secret-volume-test: <nil>
STEP: delete the pod
Aug 20 23:58:56.257: INFO: Waiting for pod pod-secrets-fd1939c7-a4d4-11e8-9347-0e1ce18119f6 to disappear
Aug 20 23:58:56.272: INFO: Pod pod-secrets-fd1939c7-a4d4-11e8-9347-0e1ce18119f6 no longer exists
[AfterEach] [sig-storage] Secrets
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Aug 20 23:58:56.272: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-4hvgm" for this suite.
Aug 20 23:59:02.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 23:59:03.607: INFO: namespace: e2e-tests-secrets-4hvgm, resource: bindings, ignored listing per whitelist
Aug 20 23:59:03.835: INFO: namespace e2e-tests-secrets-4hvgm deletion completed in 7.532748177s

• [SLOW TEST:12.599 seconds]
[sig-storage] Secrets
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
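The log above never shows the manifest the secret-volume test submitted; roughly, it creates a Secret, mounts it into a short-lived pod, and waits for "success or failure". A hypothetical reconstruction in plain Python dicts (the names, image, and mount path are illustrative, not taken from the log):

```python
# Hypothetical sketch of the kind of objects the secret-volume
# conformance test submits. Illustrative names/image only.
secret = {
    "apiVersion": "v1",
    "kind": "Secret",
    "metadata": {"name": "secret-test"},        # illustrative name
    "data": {"data-1": "dmFsdWUtMQ=="},         # base64("value-1")
}

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-secrets-test"},   # illustrative name
    "spec": {
        "restartPolicy": "Never",               # lets the pod reach Succeeded
        "volumes": [
            {"name": "secret-volume",
             "secret": {"secretName": secret["metadata"]["name"]}},
        ],
        "containers": [{
            "name": "secret-volume-test",
            "image": "busybox",                 # stand-in test image
            "command": ["cat", "/etc/secret-volume/data-1"],
            "volumeMounts": [
                {"name": "secret-volume",
                 "mountPath": "/etc/secret-volume",
                 "readOnly": True},
            ],
        }],
    },
}
```

The "Waiting up to 5m0s ... to be \"success or failure\"" lines in the log correspond to polling this pod's phase until it is Succeeded or Failed.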
SS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Aug 20 23:59:03.835: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:199
[It] should be submitted and removed  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Aug 20 23:59:04.609: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-km6wv" for this suite.
Aug 20 23:59:26.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 23:59:28.011: INFO: namespace: e2e-tests-pods-km6wv, resource: bindings, ignored listing per whitelist
Aug 20 23:59:28.188: INFO: namespace e2e-tests-pods-km6wv deletion completed in 23.550287173s

• [SLOW TEST:24.353 seconds]
[k8s.io] [sig-node] Pods Extended
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:669
  [k8s.io] Pods Set QOS Class
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:669
    should be submitted and removed  [Conformance]
    /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
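The "verifying QOS class is set on the pod" step checks the class the API server derives from the container resources. A simplified sketch of the classification rules (per the Kubernetes QoS documentation; this omits edge cases such as init containers):

```python
def qos_class(containers):
    """Simplified sketch of Kubernetes QoS classification.

    BestEffort: no container sets any request or limit.
    Guaranteed: every container sets cpu and memory limits, and
                requests (when set) equal the limits.
    Burstable:  everything else.
    """
    requests = [c.get("resources", {}).get("requests", {}) for c in containers]
    limits = [c.get("resources", {}).get("limits", {}) for c in containers]

    if all(not r for r in requests) and all(not l for l in limits):
        return "BestEffort"

    guaranteed = all(
        l.get("cpu") and l.get("memory")
        and all(r.get(k, l[k]) == l[k] for k in ("cpu", "memory"))
        for r, l in zip(requests, limits)
    )
    return "Guaranteed" if guaranteed else "Burstable"
```

For example, a single container with only `limits: {cpu: 100m, memory: 100Mi}` classifies as Guaranteed, because requests default to the limits.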
SSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a docker exec liveness probe with timeout  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [k8s.io] Probing container
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Aug 20 23:59:28.189: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a docker exec liveness probe with timeout  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
Aug 20 23:59:28.984: INFO: The default exec handler, dockertools.NativeExecHandler, does not support timeouts due to a limitation in the Docker Remote API
[AfterEach] [k8s.io] Probing container
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Aug 20 23:59:28.985: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-969q9" for this suite.
Aug 20 23:59:35.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 23:59:36.312: INFO: namespace: e2e-tests-container-probe-969q9, resource: bindings, ignored listing per whitelist
Aug 20 23:59:36.567: INFO: namespace e2e-tests-container-probe-969q9 deletion completed in 7.545800291s

S [SKIPPING] [8.378 seconds]
[k8s.io] Probing container
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:669
  should be restarted with a docker exec liveness probe with timeout  [Conformance] [It]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674

  Aug 20 23:59:28.984: The default exec handler, dockertools.NativeExecHandler, does not support timeouts due to a limitation in the Docker Remote API

  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:299
------------------------------
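The skip message above says the native Docker exec handler did not enforce probe timeouts, so the test could not run. The probe shape it would have exercised looks roughly like this (hypothetical values; the image and sleep durations are illustrative):

```python
# Hypothetical sketch of an exec liveness probe with a timeout.
# With dockertools.NativeExecHandler, timeoutSeconds on exec probes
# was silently not enforced, which is why the test was skipped.
container = {
    "name": "liveness-exec",
    "image": "busybox",                     # stand-in image
    "command": ["sleep", "600"],
    "livenessProbe": {
        # The probe command outlives timeoutSeconds, so an enforcing
        # runtime should mark the probe failed after 1 second...
        "exec": {"command": ["/bin/sh", "-c", "sleep 10"]},
        "timeoutSeconds": 1,
        # ...and a single failure should restart the container.
        "failureThreshold": 1,
    },
}
```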
SSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [k8s.io] Probing container
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Aug 20 23:59:36.567: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[AfterEach] [k8s.io] Probing container
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Aug 21 00:00:37.382: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-5msv6" for this suite.
Aug 21 00:00:59.461: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 00:01:00.603: INFO: namespace: e2e-tests-container-probe-5msv6, resource: bindings, ignored listing per whitelist
Aug 21 00:01:00.958: INFO: namespace e2e-tests-container-probe-5msv6 deletion completed in 23.546609237s

• [SLOW TEST:84.391 seconds]
[k8s.io] Probing container
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:669
  with readiness probe that fails should never be ready and never restart  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
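The readiness-probe test above asserts two things: the pod never becomes Ready, and the container is never restarted. That second property holds because readiness only gates traffic; it never triggers a restart (unlike liveness). A hypothetical sketch of the container under test:

```python
# Hypothetical sketch: a readiness probe that always fails.
# Readiness gates Service endpoints only; with no liveness probe,
# the kubelet never restarts the container.
container = {
    "name": "readiness-fail",
    "image": "busybox",                       # stand-in image
    "command": ["sleep", "600"],
    "readinessProbe": {
        "exec": {"command": ["/bin/false"]},  # always exits non-zero
        "periodSeconds": 5,
    },
    # No livenessProbe on purpose.
}
# Expected steady state: Ready=False, restartCount=0.
```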
SSSS
------------------------------
[sig-storage] Projected 
  should be consumable from pods in volume with mappings [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [sig-storage] Projected
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Aug 21 00:01:00.958: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should be consumable from pods in volume with mappings [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating projection with secret that has name projected-secret-test-map-4a4cd3c8-a4d5-11e8-9347-0e1ce18119f6
STEP: Creating a pod to test consume secrets
Aug 21 00:01:01.699: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4a5117fa-a4d5-11e8-9347-0e1ce18119f6" in namespace "e2e-tests-projected-z6tw6" to be "success or failure"
Aug 21 00:01:01.724: INFO: Pod "pod-projected-secrets-4a5117fa-a4d5-11e8-9347-0e1ce18119f6": Phase="Pending", Reason="", readiness=false. Elapsed: 25.24819ms
Aug 21 00:01:03.740: INFO: Pod "pod-projected-secrets-4a5117fa-a4d5-11e8-9347-0e1ce18119f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041443669s
Aug 21 00:01:05.756: INFO: Pod "pod-projected-secrets-4a5117fa-a4d5-11e8-9347-0e1ce18119f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057626551s
STEP: Saw pod success
Aug 21 00:01:05.756: INFO: Pod "pod-projected-secrets-4a5117fa-a4d5-11e8-9347-0e1ce18119f6" satisfied condition "success or failure"
Aug 21 00:01:05.775: INFO: Trying to get logs from node prtest-7ef3e0b-45-ig-n-276f pod pod-projected-secrets-4a5117fa-a4d5-11e8-9347-0e1ce18119f6 container projected-secret-volume-test: <nil>
STEP: delete the pod
Aug 21 00:01:05.821: INFO: Waiting for pod pod-projected-secrets-4a5117fa-a4d5-11e8-9347-0e1ce18119f6 to disappear
Aug 21 00:01:05.836: INFO: Pod pod-projected-secrets-4a5117fa-a4d5-11e8-9347-0e1ce18119f6 no longer exists
[AfterEach] [sig-storage] Projected
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Aug 21 00:01:05.836: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-z6tw6" for this suite.
Aug 21 00:01:11.916: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 00:01:12.889: INFO: namespace: e2e-tests-projected-z6tw6, resource: bindings, ignored listing per whitelist
Aug 21 00:01:13.395: INFO: namespace e2e-tests-projected-z6tw6 deletion completed in 7.528708007s

• [SLOW TEST:12.437 seconds]
[sig-storage] Projected
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:34
  should be consumable from pods in volume with mappings [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
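"With mappings" in the projected-volume test name refers to `items` entries that remap a secret key to a different file path inside the volume. A hypothetical sketch of the volume portion of the pod spec (names are illustrative):

```python
# Hypothetical sketch of a projected volume that maps a secret key
# to a custom path ("mappings" in the test name).
volumes = [{
    "name": "projected-secret-volume",
    "projected": {
        "sources": [{
            "secret": {
                "name": "projected-secret-test-map",   # illustrative name
                "items": [
                    # key "data-1" appears in the volume as
                    # /…/new-path-data-1 instead of /…/data-1
                    {"key": "data-1", "path": "new-path-data-1"},
                ],
            },
        }],
    },
}]
```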
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [sig-storage] ConfigMap
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Aug 21 00:01:13.395: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating configMap with name configmap-test-volume-map-51bdceaa-a4d5-11e8-9347-0e1ce18119f6
STEP: Creating a pod to test consume configMaps
Aug 21 00:01:14.169: INFO: Waiting up to 5m0s for pod "pod-configmaps-51c1bee2-a4d5-11e8-9347-0e1ce18119f6" in namespace "e2e-tests-configmap-sp72z" to be "success or failure"
Aug 21 00:01:14.190: INFO: Pod "pod-configmaps-51c1bee2-a4d5-11e8-9347-0e1ce18119f6": Phase="Pending", Reason="", readiness=false. Elapsed: 21.279692ms
Aug 21 00:01:16.207: INFO: Pod "pod-configmaps-51c1bee2-a4d5-11e8-9347-0e1ce18119f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03753339s
Aug 21 00:01:18.222: INFO: Pod "pod-configmaps-51c1bee2-a4d5-11e8-9347-0e1ce18119f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053457149s
STEP: Saw pod success
Aug 21 00:01:18.223: INFO: Pod "pod-configmaps-51c1bee2-a4d5-11e8-9347-0e1ce18119f6" satisfied condition "success or failure"
Aug 21 00:01:18.238: INFO: Trying to get logs from node prtest-7ef3e0b-45-ig-n-m97r pod pod-configmaps-51c1bee2-a4d5-11e8-9347-0e1ce18119f6 container configmap-volume-test: <nil>
STEP: delete the pod
Aug 21 00:01:18.290: INFO: Waiting for pod pod-configmaps-51c1bee2-a4d5-11e8-9347-0e1ce18119f6 to disappear
Aug 21 00:01:18.305: INFO: Pod pod-configmaps-51c1bee2-a4d5-11e8-9347-0e1ce18119f6 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Aug 21 00:01:18.305: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-sp72z" for this suite.
Aug 21 00:01:24.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 00:01:25.202: INFO: namespace: e2e-tests-configmap-sp72z, resource: bindings, ignored listing per whitelist
Aug 21 00:01:25.879: INFO: namespace e2e-tests-configmap-sp72z deletion completed in 7.544588879s

• [SLOW TEST:12.484 seconds]
[sig-storage] ConfigMap
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
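"Item mode set" in the configMap test name refers to the per-item `mode` field, which sets the file permission bits on the projected file. A hypothetical sketch of the volume (names and paths are illustrative):

```python
# Hypothetical sketch of a configMap volume with a key mapping and an
# explicit per-item file mode (the "Item mode" in the test name).
volume = {
    "name": "configmap-volume",
    "configMap": {
        "name": "configmap-test-volume-map",   # illustrative name
        "items": [{
            "key": "data-1",
            "path": "path/to/data-2",
            "mode": 0o400,   # file projected read-only for the owner
        }],
    },
}
```

In YAML this field is the decimal or octal integer 0400 (= 256); the test typically verifies the mode by stat-ing the projected file from inside the container.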
SSS
------------------------------
[sig-api-machinery] Downward API 
  should provide pod name, namespace and IP address as env vars  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [sig-api-machinery] Downward API
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Aug 21 00:01:25.879: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test downward api env vars
Aug 21 00:01:26.683: INFO: Waiting up to 5m0s for pod "downward-api-59348ba8-a4d5-11e8-9347-0e1ce18119f6" in namespace "e2e-tests-downward-api-sjccp" to be "success or failure"
Aug 21 00:01:26.699: INFO: Pod "downward-api-59348ba8-a4d5-11e8-9347-0e1ce18119f6": Phase="Pending", Reason="", readiness=false. Elapsed: 16.549354ms
Aug 21 00:01:28.715: INFO: Pod "downward-api-59348ba8-a4d5-11e8-9347-0e1ce18119f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032644102s
Aug 21 00:01:30.732: INFO: Pod "downward-api-59348ba8-a4d5-11e8-9347-0e1ce18119f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049654615s
STEP: Saw pod success
Aug 21 00:01:30.732: INFO: Pod "downward-api-59348ba8-a4d5-11e8-9347-0e1ce18119f6" satisfied condition "success or failure"
Aug 21 00:01:30.748: INFO: Trying to get logs from node prtest-7ef3e0b-45-ig-n-276f pod downward-api-59348ba8-a4d5-11e8-9347-0e1ce18119f6 container dapi-container: <nil>
STEP: delete the pod
Aug 21 00:01:30.795: INFO: Waiting for pod downward-api-59348ba8-a4d5-11e8-9347-0e1ce18119f6 to disappear
Aug 21 00:01:30.811: INFO: Pod downward-api-59348ba8-a4d5-11e8-9347-0e1ce18119f6 no longer exists
[AfterEach] [sig-api-machinery] Downward API
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Aug 21 00:01:30.811: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-sjccp" for this suite.
Aug 21 00:01:36.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 00:01:37.887: INFO: namespace: e2e-tests-downward-api-sjccp, resource: bindings, ignored listing per whitelist
Aug 21 00:01:38.379: INFO: namespace e2e-tests-downward-api-sjccp deletion completed in 7.538735985s

• [SLOW TEST:12.500 seconds]
[sig-api-machinery] Downward API
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:37
  should provide pod name, namespace and IP address as env vars  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
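The downward API env-var test injects pod metadata through `fieldRef` selectors. A hypothetical sketch of the `env` section the `dapi-container` would carry (variable names are illustrative):

```python
# Hypothetical sketch of downward API env vars exposing pod name,
# namespace, and IP, as the test name describes.
env = [
    {"name": "POD_NAME",
     "valueFrom": {"fieldRef": {"fieldPath": "metadata.name"}}},
    {"name": "POD_NAMESPACE",
     "valueFrom": {"fieldRef": {"fieldPath": "metadata.namespace"}}},
    {"name": "POD_IP",
     "valueFrom": {"fieldRef": {"fieldPath": "status.podIP"}}},
]
```

The container then just prints its environment, and the test greps the captured logs for the expected values.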
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [sig-cli] Kubectl client
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Aug 21 00:01:38.379: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[BeforeEach] [k8s.io] Kubectl run pod
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1355
[It] should create a pod from an image when restart is Never  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: running the image k8s.gcr.io/nginx-slim-amd64:0.20
Aug 21 00:01:39.139: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=k8s.gcr.io/nginx-slim-amd64:0.20 --namespace=e2e-tests-kubectl-kft2l'
Aug 21 00:01:39.896: INFO: stderr: ""
Aug 21 00:01:39.896: INFO: stdout: "pod \"e2e-test-nginx-pod\" created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1360
Aug 21 00:01:39.912: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-kft2l'
Aug 21 00:01:40.091: INFO: stderr: ""
Aug 21 00:01:40.091: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Aug 21 00:01:40.091: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-kft2l" for this suite.
Aug 21 00:01:46.169: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 00:01:47.352: INFO: namespace: e2e-tests-kubectl-kft2l, resource: bindings, ignored listing per whitelist
Aug 21 00:01:47.655: INFO: namespace e2e-tests-kubectl-kft2l deletion completed in 7.534951747s

• [SLOW TEST:9.276 seconds]
[sig-cli] Kubectl client
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:669
    should create a pod from an image when restart is Never  [Conformance]
    /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
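The `kubectl run ... --restart=Never --generator=run-pod/v1` invocation logged above creates a bare Pod rather than a controller-managed workload. Roughly, it is equivalent to submitting this object (a sketch; the label convention is the generator's usual behavior, not shown in the log):

```python
# Hypothetical sketch of the Pod the run-pod/v1 generator produces
# for the logged kubectl invocation.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "e2e-test-nginx-pod",
        "labels": {"run": "e2e-test-nginx-pod"},  # assumed generator default
    },
    "spec": {
        "restartPolicy": "Never",   # from --restart=Never
        "containers": [{
            "name": "e2e-test-nginx-pod",
            "image": "k8s.gcr.io/nginx-slim-amd64:0.20",
        }],
    },
}
```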
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [sig-cli] Kubectl client
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Aug 21 00:01:47.655: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[BeforeEach] [k8s.io] Kubectl run deployment
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1237
[It] should create a deployment from an image  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: running the image k8s.gcr.io/nginx-slim-amd64:0.20
Aug 21 00:01:48.372: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig run e2e-test-nginx-deployment --image=k8s.gcr.io/nginx-slim-amd64:0.20 --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-xcnz2'
Aug 21 00:01:48.534: INFO: stderr: ""
Aug 21 00:01:48.534: INFO: stdout: "deployment.apps \"e2e-test-nginx-deployment\" created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1242
Aug 21 00:01:50.571: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-xcnz2'
Aug 21 00:01:51.932: INFO: stderr: ""
[AfterEach] [sig-cli] Kubectl client
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Aug 21 00:01:51.932: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-xcnz2" for this suite.
Aug 21 00:01:58.012: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 00:01:58.950: INFO: namespace: e2e-tests-kubectl-xcnz2, resource: bindings, ignored listing per whitelist
Aug 21 00:01:59.529: INFO: namespace e2e-tests-kubectl-xcnz2 deletion completed in 7.567683885s

• [SLOW TEST:11.874 seconds]
[sig-cli] Kubectl client
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:669
    should create a deployment from an image  [Conformance]
    /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
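The `--generator=deployment/v1beta1` run above creates a Deployment plus a pod it controls (which is why the test verifies both). A rough sketch of the equivalent object (labels and replica count are the generator's assumed defaults, not shown in the log):

```python
# Hypothetical sketch of the Deployment the deployment/v1beta1
# generator produces for the logged kubectl invocation.
deployment = {
    "apiVersion": "apps/v1beta1",   # matches the generator version used
    "kind": "Deployment",
    "metadata": {"name": "e2e-test-nginx-deployment"},
    "spec": {
        "replicas": 1,              # assumed generator default
        "template": {
            "metadata": {"labels": {"run": "e2e-test-nginx-deployment"}},
            "spec": {
                "containers": [{
                    "name": "e2e-test-nginx-deployment",
                    "image": "k8s.gcr.io/nginx-slim-amd64:0.20",
                }],
            },
        },
    },
}
```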
SSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [k8s.io] Docker Containers
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Aug 21 00:01:59.530: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test override all
Aug 21 00:02:00.302: INFO: Waiting up to 5m0s for pod "client-containers-6d40b3e1-a4d5-11e8-9347-0e1ce18119f6" in namespace "e2e-tests-containers-fmhqf" to be "success or failure"
Aug 21 00:02:00.318: INFO: Pod "client-containers-6d40b3e1-a4d5-11e8-9347-0e1ce18119f6": Phase="Pending", Reason="", readiness=false. Elapsed: 15.425881ms
Aug 21 00:02:02.334: INFO: Pod "client-containers-6d40b3e1-a4d5-11e8-9347-0e1ce18119f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031332991s
Aug 21 00:02:04.350: INFO: Pod "client-containers-6d40b3e1-a4d5-11e8-9347-0e1ce18119f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047288143s
STEP: Saw pod success
Aug 21 00:02:04.350: INFO: Pod "client-containers-6d40b3e1-a4d5-11e8-9347-0e1ce18119f6" satisfied condition "success or failure"
Aug 21 00:02:04.365: INFO: Trying to get logs from node prtest-7ef3e0b-45-ig-n-m97r pod client-containers-6d40b3e1-a4d5-11e8-9347-0e1ce18119f6 container test-container: <nil>
STEP: delete the pod
Aug 21 00:02:04.411: INFO: Waiting for pod client-containers-6d40b3e1-a4d5-11e8-9347-0e1ce18119f6 to disappear
Aug 21 00:02:04.426: INFO: Pod client-containers-6d40b3e1-a4d5-11e8-9347-0e1ce18119f6 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Aug 21 00:02:04.426: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-fmhqf" for this suite.
Aug 21 00:02:10.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 00:02:11.212: INFO: namespace: e2e-tests-containers-fmhqf, resource: bindings, ignored listing per whitelist
Aug 21 00:02:12.015: INFO: namespace e2e-tests-containers-fmhqf deletion completed in 7.558967512s

• [SLOW TEST:12.485 seconds]
[k8s.io] Docker Containers
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:669
  should be able to override the image's default command and arguments  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
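"Override the image's default command and arguments" maps to the container's `command` and `args` fields, which replace the image's ENTRYPOINT and CMD respectively. A hypothetical sketch of the container (values are illustrative):

```python
# Hypothetical sketch: overriding both ENTRYPOINT and CMD.
container = {
    "name": "test-container",
    "image": "busybox",                  # stand-in image
    "command": ["/bin/echo"],            # replaces the image ENTRYPOINT
    "args": ["override", "arguments"],   # replaces the image CMD
}
```

Setting only `args` keeps the image ENTRYPOINT and replaces just CMD; setting only `command` replaces ENTRYPOINT and drops the image CMD.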
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [sig-storage] Downward API volume
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Aug 21 00:02:12.015: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:38
[It] should provide container's cpu limit  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test downward API volume plugin
Aug 21 00:02:12.808: INFO: Waiting up to 5m0s for pod "downwardapi-volume-74b4cb1c-a4d5-11e8-9347-0e1ce18119f6" in namespace "e2e-tests-downward-api-qpx9f" to be "success or failure"
Aug 21 00:02:12.834: INFO: Pod "downwardapi-volume-74b4cb1c-a4d5-11e8-9347-0e1ce18119f6": Phase="Pending", Reason="", readiness=false. Elapsed: 25.768656ms
Aug 21 00:02:14.850: INFO: Pod "downwardapi-volume-74b4cb1c-a4d5-11e8-9347-0e1ce18119f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042321068s
Aug 21 00:02:16.867: INFO: Pod "downwardapi-volume-74b4cb1c-a4d5-11e8-9347-0e1ce18119f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058888314s
STEP: Saw pod success
Aug 21 00:02:16.867: INFO: Pod "downwardapi-volume-74b4cb1c-a4d5-11e8-9347-0e1ce18119f6" satisfied condition "success or failure"
Aug 21 00:02:16.883: INFO: Trying to get logs from node prtest-7ef3e0b-45-ig-n-276f pod downwardapi-volume-74b4cb1c-a4d5-11e8-9347-0e1ce18119f6 container client-container: <nil>
STEP: delete the pod
Aug 21 00:02:16.933: INFO: Waiting for pod downwardapi-volume-74b4cb1c-a4d5-11e8-9347-0e1ce18119f6 to disappear
Aug 21 00:02:16.949: INFO: Pod downwardapi-volume-74b4cb1c-a4d5-11e8-9347-0e1ce18119f6 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Aug 21 00:02:16.949: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-qpx9f" for this suite.
Aug 21 00:02:23.030: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 00:02:24.447: INFO: namespace: e2e-tests-downward-api-qpx9f, resource: bindings, ignored listing per whitelist
Aug 21 00:02:24.557: INFO: namespace e2e-tests-downward-api-qpx9f deletion completed in 7.578388543s

• [SLOW TEST:12.542 seconds]
[sig-storage] Downward API volume
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:33
  should provide container's cpu limit  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected 
  should be consumable from pods in volume with defaultMode set [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [sig-storage] Projected
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Aug 21 00:02:24.557: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should be consumable from pods in volume with defaultMode set [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating projection with secret that has name projected-secret-test-7c2e6739-a4d5-11e8-9347-0e1ce18119f6
STEP: Creating a pod to test consume secrets
Aug 21 00:02:25.377: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7c30f793-a4d5-11e8-9347-0e1ce18119f6" in namespace "e2e-tests-projected-2g2bl" to be "success or failure"
Aug 21 00:02:25.404: INFO: Pod "pod-projected-secrets-7c30f793-a4d5-11e8-9347-0e1ce18119f6": Phase="Pending", Reason="", readiness=false. Elapsed: 27.485214ms
Aug 21 00:02:27.420: INFO: Pod "pod-projected-secrets-7c30f793-a4d5-11e8-9347-0e1ce18119f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043763198s
Aug 21 00:02:29.437: INFO: Pod "pod-projected-secrets-7c30f793-a4d5-11e8-9347-0e1ce18119f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.059910898s
STEP: Saw pod success
Aug 21 00:02:29.437: INFO: Pod "pod-projected-secrets-7c30f793-a4d5-11e8-9347-0e1ce18119f6" satisfied condition "success or failure"
Aug 21 00:02:29.452: INFO: Trying to get logs from node prtest-7ef3e0b-45-ig-n-m97r pod pod-projected-secrets-7c30f793-a4d5-11e8-9347-0e1ce18119f6 container projected-secret-volume-test: <nil>
STEP: delete the pod
Aug 21 00:02:29.500: INFO: Waiting for pod pod-projected-secrets-7c30f793-a4d5-11e8-9347-0e1ce18119f6 to disappear
Aug 21 00:02:29.516: INFO: Pod pod-projected-secrets-7c30f793-a4d5-11e8-9347-0e1ce18119f6 no longer exists
[AfterEach] [sig-storage] Projected
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Aug 21 00:02:29.516: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-2g2bl" for this suite.
Aug 21 00:02:35.595: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 00:02:36.704: INFO: namespace: e2e-tests-projected-2g2bl, resource: bindings, ignored listing per whitelist
Aug 21 00:02:37.071: INFO: namespace e2e-tests-projected-2g2bl deletion completed in 7.525729658s

• [SLOW TEST:12.514 seconds]
[sig-storage] Projected
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:34
  should be consumable from pods in volume with defaultMode set [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
SSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [k8s.io] Variable Expansion
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Aug 21 00:02:37.071: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test substitution in container's args
Aug 21 00:02:37.972: INFO: Waiting up to 5m0s for pod "var-expansion-83b30433-a4d5-11e8-9347-0e1ce18119f6" in namespace "e2e-tests-var-expansion-ltb2p" to be "success or failure"
Aug 21 00:02:37.990: INFO: Pod "var-expansion-83b30433-a4d5-11e8-9347-0e1ce18119f6": Phase="Pending", Reason="", readiness=false. Elapsed: 17.375079ms
Aug 21 00:02:40.005: INFO: Pod "var-expansion-83b30433-a4d5-11e8-9347-0e1ce18119f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033091843s
Aug 21 00:02:42.021: INFO: Pod "var-expansion-83b30433-a4d5-11e8-9347-0e1ce18119f6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049211979s
Aug 21 00:02:44.038: INFO: Pod "var-expansion-83b30433-a4d5-11e8-9347-0e1ce18119f6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065561267s
Aug 21 00:02:46.054: INFO: Pod "var-expansion-83b30433-a4d5-11e8-9347-0e1ce18119f6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.081580917s
Aug 21 00:02:48.070: INFO: Pod "var-expansion-83b30433-a4d5-11e8-9347-0e1ce18119f6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.097659568s
Aug 21 00:02:50.086: INFO: Pod "var-expansion-83b30433-a4d5-11e8-9347-0e1ce18119f6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.113776447s
Aug 21 00:02:52.102: INFO: Pod "var-expansion-83b30433-a4d5-11e8-9347-0e1ce18119f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.129790398s
STEP: Saw pod success
Aug 21 00:02:52.102: INFO: Pod "var-expansion-83b30433-a4d5-11e8-9347-0e1ce18119f6" satisfied condition "success or failure"
Aug 21 00:02:52.118: INFO: Trying to get logs from node prtest-7ef3e0b-45-ig-n-276f pod var-expansion-83b30433-a4d5-11e8-9347-0e1ce18119f6 container dapi-container: <nil>
STEP: delete the pod
Aug 21 00:02:52.162: INFO: Waiting for pod var-expansion-83b30433-a4d5-11e8-9347-0e1ce18119f6 to disappear
Aug 21 00:02:52.177: INFO: Pod var-expansion-83b30433-a4d5-11e8-9347-0e1ce18119f6 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Aug 21 00:02:52.177: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-ltb2p" for this suite.
Aug 21 00:02:58.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 00:02:59.313: INFO: namespace: e2e-tests-var-expansion-ltb2p, resource: bindings, ignored listing per whitelist
Aug 21 00:02:59.797: INFO: namespace e2e-tests-var-expansion-ltb2p deletion completed in 7.591277339s

• [SLOW TEST:22.727 seconds]
[k8s.io] Variable Expansion
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:669
  should allow substituting values in a container's args  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [sig-cli] Kubectl client
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Aug 21 00:02:59.798: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[BeforeEach] [k8s.io] Update Demo
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:264
[It] should create and stop a replication controller  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: creating a replication controller
Aug 21 00:03:00.503: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig create -f - --namespace=e2e-tests-kubectl-xdwvn'
Aug 21 00:03:00.862: INFO: stderr: ""
Aug 21 00:03:00.862: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 21 00:03:00.863: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-xdwvn'
Aug 21 00:03:01.018: INFO: stderr: ""
Aug 21 00:03:01.018: INFO: stdout: "update-demo-nautilus-xrf8j update-demo-nautilus-zqx9r "
Aug 21 00:03:01.018: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get pods update-demo-nautilus-xrf8j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xdwvn'
Aug 21 00:03:01.173: INFO: stderr: ""
Aug 21 00:03:01.173: INFO: stdout: ""
Aug 21 00:03:01.173: INFO: update-demo-nautilus-xrf8j is created but not running
Aug 21 00:03:06.173: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-xdwvn'
Aug 21 00:03:06.330: INFO: stderr: ""
Aug 21 00:03:06.330: INFO: stdout: "update-demo-nautilus-xrf8j update-demo-nautilus-zqx9r "
Aug 21 00:03:06.330: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get pods update-demo-nautilus-xrf8j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xdwvn'
Aug 21 00:03:06.483: INFO: stderr: ""
Aug 21 00:03:06.483: INFO: stdout: "true"
Aug 21 00:03:06.483: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get pods update-demo-nautilus-xrf8j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xdwvn'
Aug 21 00:03:06.637: INFO: stderr: ""
Aug 21 00:03:06.637: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus-amd64:1.0"
Aug 21 00:03:06.637: INFO: validating pod update-demo-nautilus-xrf8j
Aug 21 00:03:06.657: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 21 00:03:06.657: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 21 00:03:06.657: INFO: update-demo-nautilus-xrf8j is verified up and running
Aug 21 00:03:06.657: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get pods update-demo-nautilus-zqx9r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xdwvn'
Aug 21 00:03:06.811: INFO: stderr: ""
Aug 21 00:03:06.811: INFO: stdout: "true"
Aug 21 00:03:06.811: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get pods update-demo-nautilus-zqx9r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xdwvn'
Aug 21 00:03:06.964: INFO: stderr: ""
Aug 21 00:03:06.964: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus-amd64:1.0"
Aug 21 00:03:06.964: INFO: validating pod update-demo-nautilus-zqx9r
Aug 21 00:03:06.984: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 21 00:03:06.984: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 21 00:03:06.984: INFO: update-demo-nautilus-zqx9r is verified up and running
STEP: using delete to clean up resources
Aug 21 00:03:06.984: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-xdwvn'
Aug 21 00:03:07.246: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 21 00:03:07.246: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" deleted\n"
Aug 21 00:03:07.246: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-xdwvn'
Aug 21 00:03:07.423: INFO: stderr: "No resources found.\n"
Aug 21 00:03:07.423: INFO: stdout: ""
Aug 21 00:03:07.423: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get pods -l name=update-demo --namespace=e2e-tests-kubectl-xdwvn -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 21 00:03:07.580: INFO: stderr: ""
Aug 21 00:03:07.580: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Aug 21 00:03:07.580: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-xdwvn" for this suite.
Aug 21 00:03:13.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 00:03:14.430: INFO: namespace: e2e-tests-kubectl-xdwvn, resource: bindings, ignored listing per whitelist
Aug 21 00:03:15.158: INFO: namespace e2e-tests-kubectl-xdwvn deletion completed in 7.547165732s

• [SLOW TEST:15.360 seconds]
[sig-cli] Kubectl client
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:669
    should create and stop a replication controller  [Conformance]
    /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
SSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [sig-network] DNS
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Aug 21 00:03:15.158: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(dig +notcp +noall +answer +search kubernetes.default A)" && echo OK > /results/wheezy_udp@kubernetes.default;test -n "$$(dig +tcp +noall +answer +search kubernetes.default A)" && echo OK > /results/wheezy_tcp@kubernetes.default;test -n "$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && echo OK > /results/wheezy_udp@kubernetes.default.svc;test -n "$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;test -n "$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;test -n "$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-57fvg.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-57fvg.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-57fvg.pod.cluster.local"}');test -n "$$(dig +notcp +noall +answer +search $${podARec} A)" && echo OK > /results/wheezy_udp@PodARecord;test -n "$$(dig +tcp +noall +answer +search $${podARec} A)" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(dig +notcp +noall +answer +search kubernetes.default A)" && echo OK > /results/jessie_udp@kubernetes.default;test -n "$$(dig +tcp +noall +answer +search kubernetes.default A)" && echo OK > /results/jessie_tcp@kubernetes.default;test -n "$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && echo OK > /results/jessie_udp@kubernetes.default.svc;test -n "$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && echo OK > /results/jessie_tcp@kubernetes.default.svc;test -n "$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;test -n "$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-57fvg.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-57fvg.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-57fvg.pod.cluster.local"}');test -n "$$(dig +notcp +noall +answer +search $${podARec} A)" && echo OK > /results/jessie_udp@PodARecord;test -n "$$(dig +tcp +noall +answer +search $${podARec} A)" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 21 00:03:36.295: INFO: DNS probes using dns-test-9a4f8a79-a4d5-11e8-9347-0e1ce18119f6 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Aug 21 00:03:36.316: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-57fvg" for this suite.
Aug 21 00:03:42.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 00:03:43.083: INFO: namespace: e2e-tests-dns-57fvg, resource: bindings, ignored listing per whitelist
Aug 21 00:03:43.876: INFO: namespace e2e-tests-dns-57fvg deletion completed in 7.53083169s

• [SLOW TEST:28.718 seconds]
[sig-network] DNS
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [sig-apps] Daemon set [Serial]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Aug 21 00:03:43.876: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Aug 21 00:03:44.753: INFO: Number of nodes with available pods: 0
Aug 21 00:03:44.753: INFO: Node prtest-7ef3e0b-45-ig-m-gvf6 is running more than one daemon pod
Aug 21 00:03:45.799: INFO: Number of nodes with available pods: 0
Aug 21 00:03:45.799: INFO: Node prtest-7ef3e0b-45-ig-m-gvf6 is running more than one daemon pod
Aug 21 00:03:46.799: INFO: Number of nodes with available pods: 0
Aug 21 00:03:46.799: INFO: Node prtest-7ef3e0b-45-ig-m-gvf6 is running more than one daemon pod
Aug 21 00:03:47.801: INFO: Number of nodes with available pods: 3
Aug 21 00:03:47.801: INFO: Node prtest-7ef3e0b-45-ig-n-276f is running more than one daemon pod
Aug 21 00:03:48.801: INFO: Number of nodes with available pods: 4
Aug 21 00:03:48.801: INFO: Number of running nodes: 4, number of available pods: 4
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Aug 21 00:03:48.952: INFO: Number of nodes with available pods: 3
Aug 21 00:03:48.952: INFO: Node prtest-7ef3e0b-45-ig-n-276f is running more than one daemon pod
Aug 21 00:03:49.999: INFO: Number of nodes with available pods: 3
Aug 21 00:03:49.999: INFO: Node prtest-7ef3e0b-45-ig-n-276f is running more than one daemon pod
Aug 21 00:03:50.999: INFO: Number of nodes with available pods: 3
Aug 21 00:03:50.999: INFO: Node prtest-7ef3e0b-45-ig-n-276f is running more than one daemon pod
Aug 21 00:03:51.998: INFO: Number of nodes with available pods: 4
Aug 21 00:03:51.998: INFO: Number of running nodes: 4, number of available pods: 4
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:66
STEP: Deleting DaemonSet "daemon-set" with reaper
Aug 21 00:04:03.115: INFO: Number of nodes with available pods: 0
Aug 21 00:04:03.115: INFO: Number of running nodes: 0, number of available pods: 0
Aug 21 00:04:03.130: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-x7qns/daemonsets","resourceVersion":"26929"},"items":null}

Aug 21 00:04:03.146: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-x7qns/pods","resourceVersion":"26929"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Aug 21 00:04:03.223: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-x7qns" for this suite.
Aug 21 00:04:09.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 00:04:10.502: INFO: namespace: e2e-tests-daemonsets-x7qns, resource: bindings, ignored listing per whitelist
Aug 21 00:04:10.790: INFO: namespace e2e-tests-daemonsets-x7qns deletion completed in 7.538515921s

• [SLOW TEST:26.915 seconds]
[sig-apps] Daemon set [Serial]
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
SSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [sig-network] Services
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Aug 21 00:04:10.791: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:53
[It] should provide secure master service  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[AfterEach] [sig-network] Services
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Aug 21 00:04:11.531: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-fjlnk" for this suite.
Aug 21 00:04:17.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 00:04:19.130: INFO: namespace: e2e-tests-services-fjlnk, resource: bindings, ignored listing per whitelist
Aug 21 00:04:19.130: INFO: namespace e2e-tests-services-fjlnk deletion completed in 7.567925364s
[AfterEach] [sig-network] Services
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:58

• [SLOW TEST:8.339 seconds]
[sig-network] Services
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
SSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [k8s.io] Probing container
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Aug 21 00:04:19.130: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-22xb5
Aug 21 00:04:25.927: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-22xb5
STEP: checking the pod's current state and verifying that restartCount is present
Aug 21 00:04:25.942: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Aug 21 00:06:27.197: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-22xb5" for this suite.
Aug 21 00:06:33.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 00:06:34.438: INFO: namespace: e2e-tests-container-probe-22xb5, resource: bindings, ignored listing per whitelist
Aug 21 00:06:34.790: INFO: namespace e2e-tests-container-probe-22xb5 deletion completed in 7.550420252s

• [SLOW TEST:135.660 seconds]
[k8s.io] Probing container
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:669
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
SS
------------------------------
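The probe tests above all follow the same shape: start a pod, record the initial restart count, then poll until a condition holds or a timeout elapses (e.g. "Waiting up to 5m0s for pod … to be 'success or failure'"). A minimal sketch of that poll loop, assuming only bash and coreutils; `wait_for` and the marker-file condition are hypothetical stand-ins for the framework's pod-state checks:

```shell
#!/bin/bash
# Sketch of the e2e framework's wait pattern: re-check a condition every
# interval, give up after a timeout, and report elapsed time either way.
set -euo pipefail

wait_for() {
  local timeout=$1 interval=$2; shift 2
  local start elapsed
  start=$(date +%s)
  while true; do
    if "$@"; then
      elapsed=$(( $(date +%s) - start ))
      echo "condition met after ${elapsed}s"
      return 0
    fi
    elapsed=$(( $(date +%s) - start ))
    if [ "$elapsed" -ge "$timeout" ]; then
      echo "timed out after ${elapsed}s" >&2
      return 1
    fi
    sleep "$interval"
  done
}

# Stand-in for "restartCount is now 1": a marker file appears after a delay.
marker=$(mktemp -u)
( sleep 1; touch "$marker" ) &
wait_for 10 1 test -f "$marker"
rm -f "$marker"
```

The two probe tests differ only in the condition: one expects the restart count to stay at 0 for the full window, the other expects it to reach 1 before the timeout.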
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [k8s.io] Probing container
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Aug 21 00:06:34.790: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-k627j
Aug 21 00:06:39.612: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-k627j
STEP: checking the pod's current state and verifying that restartCount is present
Aug 21 00:06:39.628: INFO: Initial restart count of pod liveness-exec is 0
Aug 21 00:07:36.102: INFO: Restart count of pod e2e-tests-container-probe-k627j/liveness-exec is now 1 (56.473319399s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Aug 21 00:07:36.124: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-k627j" for this suite.
Aug 21 00:07:42.217: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 00:07:43.532: INFO: namespace: e2e-tests-container-probe-k627j, resource: bindings, ignored listing per whitelist
Aug 21 00:07:43.698: INFO: namespace e2e-tests-container-probe-k627j deletion completed in 7.531815104s

• [SLOW TEST:68.908 seconds]
[k8s.io] Probing container
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:669
  should be restarted with a exec "cat /tmp/health" liveness probe [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [sig-storage] EmptyDir volumes
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Aug 21 00:07:43.699: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 21 00:07:44.460: INFO: Waiting up to 5m0s for pod "pod-3a62f84f-a4d6-11e8-9347-0e1ce18119f6" in namespace "e2e-tests-emptydir-l4wq4" to be "success or failure"
Aug 21 00:07:44.482: INFO: Pod "pod-3a62f84f-a4d6-11e8-9347-0e1ce18119f6": Phase="Pending", Reason="", readiness=false. Elapsed: 22.338091ms
Aug 21 00:07:46.498: INFO: Pod "pod-3a62f84f-a4d6-11e8-9347-0e1ce18119f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038166178s
Aug 21 00:07:48.515: INFO: Pod "pod-3a62f84f-a4d6-11e8-9347-0e1ce18119f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054992613s
STEP: Saw pod success
Aug 21 00:07:48.515: INFO: Pod "pod-3a62f84f-a4d6-11e8-9347-0e1ce18119f6" satisfied condition "success or failure"
Aug 21 00:07:48.531: INFO: Trying to get logs from node prtest-7ef3e0b-45-ig-n-276f pod pod-3a62f84f-a4d6-11e8-9347-0e1ce18119f6 container test-container: <nil>
STEP: delete the pod
Aug 21 00:07:48.598: INFO: Waiting for pod pod-3a62f84f-a4d6-11e8-9347-0e1ce18119f6 to disappear
Aug 21 00:07:48.613: INFO: Pod pod-3a62f84f-a4d6-11e8-9347-0e1ce18119f6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Aug 21 00:07:48.613: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-l4wq4" for this suite.
Aug 21 00:07:54.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 00:07:55.983: INFO: namespace: e2e-tests-emptydir-l4wq4, resource: bindings, ignored listing per whitelist
Aug 21 00:07:56.242: INFO: namespace e2e-tests-emptydir-l4wq4 deletion completed in 7.586971601s

• [SLOW TEST:12.544 seconds]
[sig-storage] EmptyDir volumes
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [sig-storage] Downward API volume
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Aug 21 00:07:56.243: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:38
[It] should provide container's memory limit  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test downward API volume plugin
Aug 21 00:07:56.996: INFO: Waiting up to 5m0s for pod "downwardapi-volume-41dbbcda-a4d6-11e8-9347-0e1ce18119f6" in namespace "e2e-tests-downward-api-qzjsw" to be "success or failure"
Aug 21 00:07:57.015: INFO: Pod "downwardapi-volume-41dbbcda-a4d6-11e8-9347-0e1ce18119f6": Phase="Pending", Reason="", readiness=false. Elapsed: 18.003634ms
Aug 21 00:07:59.031: INFO: Pod "downwardapi-volume-41dbbcda-a4d6-11e8-9347-0e1ce18119f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034375517s
Aug 21 00:08:01.047: INFO: Pod "downwardapi-volume-41dbbcda-a4d6-11e8-9347-0e1ce18119f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050858968s
STEP: Saw pod success
Aug 21 00:08:01.047: INFO: Pod "downwardapi-volume-41dbbcda-a4d6-11e8-9347-0e1ce18119f6" satisfied condition "success or failure"
Aug 21 00:08:01.063: INFO: Trying to get logs from node prtest-7ef3e0b-45-ig-n-276f pod downwardapi-volume-41dbbcda-a4d6-11e8-9347-0e1ce18119f6 container client-container: <nil>
STEP: delete the pod
Aug 21 00:08:01.112: INFO: Waiting for pod downwardapi-volume-41dbbcda-a4d6-11e8-9347-0e1ce18119f6 to disappear
Aug 21 00:08:01.127: INFO: Pod downwardapi-volume-41dbbcda-a4d6-11e8-9347-0e1ce18119f6 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Aug 21 00:08:01.127: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-qzjsw" for this suite.
Aug 21 00:08:07.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 00:08:08.084: INFO: namespace: e2e-tests-downward-api-qzjsw, resource: bindings, ignored listing per whitelist
Aug 21 00:08:08.705: INFO: namespace e2e-tests-downward-api-qzjsw deletion completed in 7.548065372s

• [SLOW TEST:12.462 seconds]
[sig-storage] Downward API volume
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:33
  should provide container's memory limit  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
Aug 21 00:08:08.705: INFO: Running AfterSuite actions on all node
Aug 21 00:08:08.705: INFO: Running AfterSuite actions on node 1
Aug 21 00:08:08.705: INFO: Dumping logs locally to: /data/src/github.com/openshift/origin/_output/scripts/conformance-k8s/artifacts
Aug 21 00:08:08.705: INFO: Error running cluster/log-dump/log-dump.sh: fork/exec ../../cluster/log-dump/log-dump.sh: no such file or directory

Ran 161 of 892 Specs in 4579.088 seconds
SUCCESS! -- 161 Passed | 0 Failed | 0 Pending | 731 Skipped PASS

Run complete, results in /data/src/github.com/openshift/origin/_output/scripts/conformance-k8s/artifacts
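The summary line is internally consistent: 161 Passed + 0 Failed + 0 Pending + 731 Skipped = 892 Specs. A small sketch, assuming bash, that pulls the four counts out of such a line and checks the arithmetic:

```shell
#!/bin/bash
# Sanity-check a ginkgo summary line: the per-outcome counts must add up
# to the total number of specs reported for the run (here 892).
set -euo pipefail

summary='SUCCESS! -- 161 Passed | 0 Failed | 0 Pending | 731 Skipped PASS'

# Word-split the extracted numbers into positional parameters.
set -- $(grep -oE '[0-9]+' <<< "$summary")
passed=$1 failed=$2 pending=$3 skipped=$4

total=$(( passed + failed + pending + skipped ))
echo "total specs: $total"   # → total specs: 892
```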
+ gather
+ set +e
++ pwd
+ export PATH=/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/origin/.local/bin:/home/origin/bin
+ PATH=/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/origin/.local/bin:/home/origin/bin
+ oc get nodes --template '{{ range .items }}{{ .metadata.name }}{{ "\n" }}{{ end }}'
+ xargs -L 1 -I X bash -c 'oc get --raw /api/v1/nodes/X/proxy/metrics > /tmp/artifacts/X.metrics' ''
+ oc get --raw /metrics
+ set -e
+ set +o xtrace
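The `gather` function above fans out one metrics scrape per node: `oc get nodes` emits one name per line, and `xargs -L 1 -I X` runs a capture command per name, writing each result to its own file. A runnable sketch of the same fan-out, assuming bash and GNU xargs; `fake_scrape` is a hypothetical stand-in for `oc get --raw /api/v1/nodes/X/proxy/metrics`, and `{}` replaces the `X` placeholder so it cannot collide with characters in the temp path:

```shell
#!/bin/bash
# One capture command per input line, each result in its own .metrics file.
set -euo pipefail

outdir=$(mktemp -d)

fake_scrape() {  # hypothetical stand-in for the oc node-proxy call
  echo "metrics_for $1"
}
export -f fake_scrape

printf '%s\n' node-a node-b node-c |
  xargs -L 1 -I '{}' bash -c 'fake_scrape {} > '"$outdir"'/{}.metrics'

ls "$outdir"
```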
########## FINISHED STAGE: SUCCESS: RUN EXTENDED TESTS [01h 24m 18s] ##########
[workspace] $ /bin/bash /tmp/jenkins2717067926925388702.sh
########## STARTING STAGE: TAG THE LATEST CONFORMANCE RESULTS ##########
+ [[ -s /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/activate ]]
+ source /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/dae8b1fdd92e4c6b040802a9f6893334ae0660fd
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/dae8b1fdd92e4c6b040802a9f6893334ae0660fd
++ export PATH=/var/lib/jenkins/origin-ci-tool/dae8b1fdd92e4c6b040802a9f6893334ae0660fd/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/dae8b1fdd92e4c6b040802a9f6893334ae0660fd/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/.config
+ location=origin-ci-test/logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/scripts/conformance-k8s/artifacts
+ location_url=https://storage.googleapis.com/origin-ci-test/logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/scripts/conformance-k8s/artifacts
+ echo https://storage.googleapis.com/origin-ci-test/logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/scripts/conformance-k8s/artifacts
+ gsutil cp .latest-conformance gs://origin-ci-test/releases/openshift/origin/master/.latest-conformance
Copying file://.latest-conformance [Content-Type=application/octet-stream]...
Operation completed over 1 objects/146.0 B.                                      
+ set +o xtrace
########## FINISHED STAGE: SUCCESS: TAG THE LATEST CONFORMANCE RESULTS [00h 00m 01s] ##########
[PostBuildScript] - Executing post build scripts.
[workspace] $ /bin/bash /tmp/jenkins4600648251847658271.sh
########## STARTING STAGE: DOWNLOAD ARTIFACTS FROM THE REMOTE HOST ##########
+ [[ -s /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/activate ]]
+ source /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/dae8b1fdd92e4c6b040802a9f6893334ae0660fd
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/dae8b1fdd92e4c6b040802a9f6893334ae0660fd
++ export PATH=/var/lib/jenkins/origin-ci-tool/dae8b1fdd92e4c6b040802a9f6893334ae0660fd/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/dae8b1fdd92e4c6b040802a9f6893334ae0660fd/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/.config
+ trap 'exit 0' EXIT
++ pwd
+ ARTIFACT_DIR=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/artifacts/gathered
+ rm -rf /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/artifacts/gathered
+ mkdir -p /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/artifacts/gathered
+ ssh -F /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo stat /data/src/github.com/openshift/origin/_output/scripts
  File: ‘/data/src/github.com/openshift/origin/_output/scripts’
  Size: 87        	Blocks: 0          IO Block: 4096   directory
Device: ca02h/51714d	Inode: 25419629    Links: 6
Access: (2755/drwxr-sr-x)  Uid: ( 1001/  origin)   Gid: ( 1003/origin-git)
Context: unconfined_u:object_r:container_file_t:s0
Access: 2018-08-20 21:52:41.927138357 +0000
Modify: 2018-08-20 22:44:46.745894453 +0000
Change: 2018-08-20 22:44:46.745894453 +0000
 Birth: -
+ ssh -F /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo chmod -R o+rX /data/src/github.com/openshift/origin/_output/scripts
+ scp -r -F /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel:/data/src/github.com/openshift/origin/_output/scripts /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/artifacts/gathered
+ ssh -F /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo stat /tmp/artifacts
  File: ‘/tmp/artifacts’
  Size: 160       	Blocks: 0          IO Block: 4096   directory
Device: 27h/39d	Inode: 320042      Links: 3
Access: (0755/drwxr-xr-x)  Uid: ( 1001/  origin)   Gid: ( 1002/  docker)
Context: unconfined_u:object_r:user_tmp_t:s0
Access: 2018-08-20 22:43:53.444262275 +0000
Modify: 2018-08-21 00:08:09.852368016 +0000
Change: 2018-08-21 00:08:09.852368016 +0000
 Birth: -
+ ssh -F /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo chmod -R o+rX /tmp/artifacts
+ scp -r -F /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel:/tmp/artifacts /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/artifacts/gathered
+ tree /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/artifacts/gathered
/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/artifacts/gathered
├── artifacts
│   ├── junit
│   ├── master.metrics
│   ├── prtest-7ef3e0b-45-ig-m-gvf6.metrics
│   ├── prtest-7ef3e0b-45-ig-n-276f.metrics
│   ├── prtest-7ef3e0b-45-ig-n-4t3s.metrics
│   └── prtest-7ef3e0b-45-ig-n-m97r.metrics
└── scripts
    ├── build-base-images
    │   ├── artifacts
    │   ├── logs
    │   └── openshift.local.home
    ├── conformance-k8s
    │   ├── artifacts
    │   │   ├── e2e.log
    │   │   ├── junit_01.xml
    │   │   ├── nethealth.txt
    │   │   ├── README.md
    │   │   └── version.txt
    │   ├── logs
    │   │   └── scripts.log
    │   └── openshift.local.home
    ├── push-release
    │   ├── artifacts
    │   ├── logs
    │   │   └── scripts.log
    │   └── openshift.local.home
    └── shell
        ├── artifacts
        ├── logs
        │   ├── 39a917e79859b93f9bc96440bb365e687fbf326ebd4cd00efe0f7cce75e8e42b.json
        │   ├── 76d01185ba98302ae0fbfc03cea6c566e18d888bdee29f50fc1763b8e516475a.json
        │   ├── 826491397d7ab0153d5bdb94bd3a48622d1080fd36a9b3022b4dbc8e8785d322.json
        │   └── scripts.log
        └── openshift.local.home

19 directories, 16 files
+ exit 0
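The download stage above always runs `chmod -R o+rX` on the remote tree before `scp -r`: capital `X` grants "other" execute permission only on directories (and files that already have an execute bit), so directories become traversable without marking plain log files executable. A local sketch of that permission fix-up, assuming GNU coreutils (`stat -c`):

```shell
#!/bin/bash
# Make a tree world-readable before copying: r everywhere, x on dirs only.
set -euo pipefail

src=$(mktemp -d)
mkdir -p "$src/logs"
echo data > "$src/logs/scripts.log"
chmod 700 "$src/logs"            # simulate a dir others cannot enter
chmod 600 "$src/logs/scripts.log"

chmod -R o+rX "$src"             # X: execute only where it makes sense

stat -c '%A' "$src/logs"             # dir gains o=rx
stat -c '%A' "$src/logs/scripts.log" # file gains only o=r
```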
[workspace] $ /bin/bash /tmp/jenkins1292749428551771203.sh
########## STARTING STAGE: GENERATE ARTIFACTS FROM THE REMOTE HOST ##########
+ [[ -s /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/activate ]]
+ source /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/dae8b1fdd92e4c6b040802a9f6893334ae0660fd
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/dae8b1fdd92e4c6b040802a9f6893334ae0660fd
++ export PATH=/var/lib/jenkins/origin-ci-tool/dae8b1fdd92e4c6b040802a9f6893334ae0660fd/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/dae8b1fdd92e4c6b040802a9f6893334ae0660fd/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/.config
+ trap 'exit 0' EXIT
++ pwd
+ ARTIFACT_DIR=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/artifacts/generated
+ rm -rf /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/artifacts/generated
+ mkdir /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/artifacts/generated
+ ssh -F /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo docker version && sudo docker info && sudo docker images && sudo docker ps -a 2>&1'
  WARNING: You're not using the default seccomp profile
+ ssh -F /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo cat /etc/sysconfig/docker /etc/sysconfig/docker-network /etc/sysconfig/docker-storage /etc/sysconfig/docker-storage-setup /etc/systemd/system/docker.service 2>&1'
+ true
+ ssh -F /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo find /var/lib/docker/containers -name *.log | sudo xargs tail -vn +1 2>&1'
+ ssh -F /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'oc get --raw /metrics --server=https://$( uname --nodename ):10250 --config=/etc/origin/master/admin.kubeconfig 2>&1'
+ true
+ ssh -F /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo ausearch -m AVC -m SELINUX_ERR -m USER_AVC 2>&1'
+ true
+ ssh -F /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'oc get --raw /metrics --config=/etc/origin/master/admin.kubeconfig 2>&1'
+ true
+ ssh -F /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo df -T -h && sudo pvs && sudo vgs && sudo lvs && sudo findmnt --all 2>&1'
+ ssh -F /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo yum list installed 2>&1'
+ ssh -F /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo journalctl --dmesg --no-pager --all --lines=all 2>&1'
+ ssh -F /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo journalctl _PID=1 --no-pager --all --lines=all 2>&1'
+ tree /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/artifacts/generated
/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/artifacts/generated
├── avc_denials.log
├── containers.log
├── dmesg.log
├── docker.config
├── docker.info
├── filesystem.info
├── installed_packages.log
├── master-metrics.log
├── node-metrics.log
└── pid1.journal

0 directories, 10 files
+ exit 0
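Several diagnostic commands in the stage above are immediately followed by `+ true` in the trace: each probe redirects its output (with `2>&1`) into an artifact file and is guarded so a failure (missing binary, denied access, unreachable endpoint) never aborts the rest of the collection. A minimal sketch of that best-effort pattern; the `capture` helper is hypothetical, since the real stage inlines each command:

```shell
#!/bin/bash
# Best-effort diagnostics: capture stdout+stderr per command, tolerate failure.
set -euo pipefail

outdir=$(mktemp -d)

capture() {   # hypothetical helper wrapping the `cmd > file 2>&1 || true` shape
  local out=$1; shift
  "$@" > "$outdir/$out" 2>&1 || true
}

capture ok.log      echo "diagnostics collected"
capture broken.log  ls /no/such/path      # fails, but the run continues

echo "still running"
ls "$outdir"
```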
[workspace] $ /bin/bash /tmp/jenkins6996466218585515639.sh
########## STARTING STAGE: FETCH SYSTEMD JOURNALS FROM THE REMOTE HOST ##########
+ [[ -s /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/activate ]]
+ source /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/dae8b1fdd92e4c6b040802a9f6893334ae0660fd
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/dae8b1fdd92e4c6b040802a9f6893334ae0660fd
++ export PATH=/var/lib/jenkins/origin-ci-tool/dae8b1fdd92e4c6b040802a9f6893334ae0660fd/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/dae8b1fdd92e4c6b040802a9f6893334ae0660fd/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/.config
+ trap 'exit 0' EXIT
++ pwd
+ ARTIFACT_DIR=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/artifacts/journals
+ rm -rf /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/artifacts/journals
+ mkdir /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/artifacts/journals
+ ssh -F /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit docker.service --no-pager --all --lines=all
+ ssh -F /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit dnsmasq.service --no-pager --all --lines=all
+ ssh -F /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit systemd-journald.service --no-pager --all --lines=all
+ tree /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/artifacts/journals
/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/artifacts/journals
├── dnsmasq.service
├── docker.service
└── systemd-journald.service

0 directories, 3 files
+ exit 0
[workspace] $ /bin/bash /tmp/jenkins1711332605068127104.sh
########## STARTING STAGE: ASSEMBLE GCS OUTPUT ##########
+ [[ -s /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/activate ]]
+ source /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/dae8b1fdd92e4c6b040802a9f6893334ae0660fd
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/dae8b1fdd92e4c6b040802a9f6893334ae0660fd
++ export PATH=/var/lib/jenkins/origin-ci-tool/dae8b1fdd92e4c6b040802a9f6893334ae0660fd/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/dae8b1fdd92e4c6b040802a9f6893334ae0660fd/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/.config
+ trap 'exit 0' EXIT
+ mkdir -p gcs/artifacts gcs/artifacts/generated gcs/artifacts/journals gcs/artifacts/gathered
++ python -c 'import json; import urllib; print json.load(urllib.urlopen('\''https://ci.openshift.redhat.com/jenkins/job/test_branch_origin_extended_conformance_k8s_310/45/api/json'\''))['\''result'\'']'
+ result=SUCCESS
+ cat
++ date +%s
+ cat /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/builds/45/log
+ cp artifacts/generated/avc_denials.log artifacts/generated/containers.log artifacts/generated/dmesg.log artifacts/generated/docker.config artifacts/generated/docker.info artifacts/generated/filesystem.info artifacts/generated/installed_packages.log artifacts/generated/master-metrics.log artifacts/generated/node-metrics.log artifacts/generated/pid1.journal gcs/artifacts/generated/
+ cp artifacts/journals/dnsmasq.service artifacts/journals/docker.service artifacts/journals/systemd-journald.service gcs/artifacts/journals/
+ cp -r artifacts/gathered/artifacts artifacts/gathered/scripts gcs/artifacts/
++ pwd
+ scp -F /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/.config/origin-ci-tool/inventory/.ssh_config -r /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/gcs openshiftdevel:/data
+ scp -F /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/.config/origin-ci-tool/inventory/.ssh_config /var/lib/jenkins/.config/gcloud/gcs-publisher-credentials.json openshiftdevel:/data/credentials.json
+ exit 0
[workspace] $ /bin/bash /tmp/jenkins3974759208584505366.sh
########## STARTING STAGE: PUSH THE ARTIFACTS AND METADATA ##########
+ [[ -s /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/activate ]]
+ source /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/dae8b1fdd92e4c6b040802a9f6893334ae0660fd
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/dae8b1fdd92e4c6b040802a9f6893334ae0660fd
++ export PATH=/var/lib/jenkins/origin-ci-tool/dae8b1fdd92e4c6b040802a9f6893334ae0660fd/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/dae8b1fdd92e4c6b040802a9f6893334ae0660fd/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/.config
++ mktemp
+ script=/tmp/tmp.cZ0dX154Oq
+ cat
+ chmod +x /tmp/tmp.cZ0dX154Oq
+ scp -F /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/.config/origin-ci-tool/inventory/.ssh_config /tmp/tmp.cZ0dX154Oq openshiftdevel:/tmp/tmp.cZ0dX154Oq
+ ssh -F /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/.config/origin-ci-tool/inventory/.ssh_config -t openshiftdevel 'bash -l -c "timeout 300 /tmp/tmp.cZ0dX154Oq"'
+ cd /home/origin
+ trap 'exit 0' EXIT
+ [[ -n {"type":"periodic","job":"test_branch_origin_extended_conformance_k8s_310","buildid":"1031659447979610112","prowjobid":"d0c0fcb2-a4c2-11e8-9fa0-0a58ac100ccf","refs":{}} ]]
++ jq --compact-output '.buildid |= "45"'
+ JOB_SPEC='{"type":"periodic","job":"test_branch_origin_extended_conformance_k8s_310","buildid":"45","prowjobid":"d0c0fcb2-a4c2-11e8-9fa0-0a58ac100ccf","refs":{}}'
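The step above rewrites the prow-assigned `buildid` in `JOB_SPEC` to the Jenkins build number with jq's update-assignment (`.buildid |= "45"`), so gcsupload files the artifacts under `…/45/` rather than the long prow id. A sketch of the same rewrite, with python3 standing in for jq purely so the example runs without extra tooling (an assumption, since the trace itself uses jq):

```shell
#!/bin/bash
# Rewrite one field of a compact JSON job spec before handing it to the uploader.
set -euo pipefail

JOB_SPEC='{"type":"periodic","job":"test_branch_origin_extended_conformance_k8s_310","buildid":"1031659447979610112","prowjobid":"d0c0fcb2-a4c2-11e8-9fa0-0a58ac100ccf","refs":{}}'

JOB_SPEC=$(printf '%s' "$JOB_SPEC" | python3 -c '
import json, sys
spec = json.load(sys.stdin)
spec["buildid"] = "45"   # replace the prow build id with the Jenkins one
print(json.dumps(spec, separators=(",", ":")))   # compact, like jq -c
')

echo "$JOB_SPEC"
```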
+ docker run -e 'JOB_SPEC={"type":"periodic","job":"test_branch_origin_extended_conformance_k8s_310","buildid":"45","prowjobid":"d0c0fcb2-a4c2-11e8-9fa0-0a58ac100ccf","refs":{}}' -v /data:/data:z registry.svc.ci.openshift.org/ci/gcsupload:latest --dry-run=false --gcs-path=gs://origin-ci-test --gcs-credentials-file=/data/credentials.json --path-strategy=single --default-org=openshift --default-repo=origin /data/gcs/artifacts /data/gcs/build-log.txt /data/gcs/finished.json
Unable to find image 'registry.svc.ci.openshift.org/ci/gcsupload:latest' locally
Trying to pull repository registry.svc.ci.openshift.org/ci/gcsupload ... 
latest: Pulling from registry.svc.ci.openshift.org/ci/gcsupload
605ce1bd3f31: Already exists
dc6346da9948: Already exists
dd66899f62ac: Pulling fs layer
dd66899f62ac: Verifying Checksum
dd66899f62ac: Download complete
dd66899f62ac: Pull complete
Digest: sha256:7e52bce67d44818596e6bfe3f5592299663ec4f23016cfa2858fe1fda547a0ee
Status: Downloaded newer image for registry.svc.ci.openshift.org/ci/gcsupload:latest
{"component":"gcsupload","level":"info","msg":"Gathering artifacts from artifact directory: /data/gcs/artifacts","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/artifacts/master.metrics in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/artifacts/master.metrics\n","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/artifacts/prtest-7ef3e0b-45-ig-m-gvf6.metrics in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/artifacts/prtest-7ef3e0b-45-ig-m-gvf6.metrics\n","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/artifacts/prtest-7ef3e0b-45-ig-n-276f.metrics in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/artifacts/prtest-7ef3e0b-45-ig-n-276f.metrics\n","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/artifacts/prtest-7ef3e0b-45-ig-n-4t3s.metrics in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/artifacts/prtest-7ef3e0b-45-ig-n-4t3s.metrics\n","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/artifacts/prtest-7ef3e0b-45-ig-n-m97r.metrics in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/artifacts/prtest-7ef3e0b-45-ig-n-m97r.metrics\n","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/generated/avc_denials.log in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/generated/avc_denials.log\n","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/generated/containers.log in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/generated/containers.log\n","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/generated/dmesg.log in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/generated/dmesg.log\n","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/generated/docker.config in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/generated/docker.config\n","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/generated/docker.info in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/generated/docker.info\n","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/generated/filesystem.info in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/generated/filesystem.info\n","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/generated/installed_packages.log in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/generated/installed_packages.log\n","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/generated/master-metrics.log in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/generated/master-metrics.log\n","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/generated/node-metrics.log in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/generated/node-metrics.log\n","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/generated/pid1.journal in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/generated/pid1.journal\n","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/journals/dnsmasq.service in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/journals/dnsmasq.service\n","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/journals/docker.service in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/journals/docker.service\n","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/journals/systemd-journald.service in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/journals/systemd-journald.service\n","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/conformance-k8s/artifacts/README.md in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/scripts/conformance-k8s/artifacts/README.md\n","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/conformance-k8s/artifacts/e2e.log in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/scripts/conformance-k8s/artifacts/e2e.log\n","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/conformance-k8s/artifacts/junit_01.xml in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/scripts/conformance-k8s/artifacts/junit_01.xml\n","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/conformance-k8s/artifacts/nethealth.txt in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/scripts/conformance-k8s/artifacts/nethealth.txt\n","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/conformance-k8s/artifacts/version.txt in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/scripts/conformance-k8s/artifacts/version.txt\n","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/conformance-k8s/logs/scripts.log in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/scripts/conformance-k8s/logs/scripts.log\n","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/push-release/logs/scripts.log in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/scripts/push-release/logs/scripts.log\n","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/shell/logs/39a917e79859b93f9bc96440bb365e687fbf326ebd4cd00efe0f7cce75e8e42b.json in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/scripts/shell/logs/39a917e79859b93f9bc96440bb365e687fbf326ebd4cd00efe0f7cce75e8e42b.json\n","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/shell/logs/76d01185ba98302ae0fbfc03cea6c566e18d888bdee29f50fc1763b8e516475a.json in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/scripts/shell/logs/76d01185ba98302ae0fbfc03cea6c566e18d888bdee29f50fc1763b8e516475a.json\n","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/shell/logs/826491397d7ab0153d5bdb94bd3a48622d1080fd36a9b3022b4dbc8e8785d322.json in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/scripts/shell/logs/826491397d7ab0153d5bdb94bd3a48622d1080fd36a9b3022b4dbc8e8785d322.json\n","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/shell/logs/scripts.log in artifact directory. Uploading as logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/scripts/shell/logs/scripts.log\n","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/artifacts/prtest-7ef3e0b-45-ig-n-m97r.metrics","level":"info","msg":"Queued for upload","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/scripts/conformance-k8s/logs/scripts.log","level":"info","msg":"Queued for upload","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/scripts/conformance-k8s/artifacts/README.md","level":"info","msg":"Queued for upload","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/scripts/conformance-k8s/artifacts/e2e.log","level":"info","msg":"Queued for upload","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/scripts/conformance-k8s/artifacts/version.txt","level":"info","msg":"Queued for upload","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/scripts/shell/logs/826491397d7ab0153d5bdb94bd3a48622d1080fd36a9b3022b4dbc8e8785d322.json","level":"info","msg":"Queued for upload","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/artifacts/master.metrics","level":"info","msg":"Queued for upload","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/generated/dmesg.log","level":"info","msg":"Queued for upload","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/generated/filesystem.info","level":"info","msg":"Queued for upload","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/scripts/conformance-k8s/artifacts/nethealth.txt","level":"info","msg":"Queued for upload","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/scripts/shell/logs/39a917e79859b93f9bc96440bb365e687fbf326ebd4cd00efe0f7cce75e8e42b.json","level":"info","msg":"Queued for upload","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/scripts/shell/logs/scripts.log","level":"info","msg":"Queued for upload","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/finished.json","level":"info","msg":"Queued for upload","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/generated/avc_denials.log","level":"info","msg":"Queued for upload","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/generated/docker.info","level":"info","msg":"Queued for upload","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/generated/pid1.journal","level":"info","msg":"Queued for upload","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/generated/installed_packages.log","level":"info","msg":"Queued for upload","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/generated/node-metrics.log","level":"info","msg":"Queued for upload","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/journals/docker.service","level":"info","msg":"Queued for upload","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/build-log.txt","level":"info","msg":"Queued for upload","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/latest-build.txt","level":"info","msg":"Queued for upload","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/artifacts/prtest-7ef3e0b-45-ig-n-4t3s.metrics","level":"info","msg":"Queued for upload","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/generated/docker.config","level":"info","msg":"Queued for upload","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/artifacts/prtest-7ef3e0b-45-ig-n-276f.metrics","level":"info","msg":"Queued for upload","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/scripts/conformance-k8s/artifacts/junit_01.xml","level":"info","msg":"Queued for upload","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/scripts/push-release/logs/scripts.log","level":"info","msg":"Queued for upload","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/journals/dnsmasq.service","level":"info","msg":"Queued for upload","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/journals/systemd-journald.service","level":"info","msg":"Queued for upload","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/scripts/shell/logs/76d01185ba98302ae0fbfc03cea6c566e18d888bdee29f50fc1763b8e516475a.json","level":"info","msg":"Queued for upload","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/artifacts/prtest-7ef3e0b-45-ig-m-gvf6.metrics","level":"info","msg":"Queued for upload","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/generated/containers.log","level":"info","msg":"Queued for upload","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/generated/master-metrics.log","level":"info","msg":"Queued for upload","time":"2018-08-21T00:08:52Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/generated/master-metrics.log","level":"info","msg":"Finished upload","time":"2018-08-21T00:08:53Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/scripts/shell/logs/826491397d7ab0153d5bdb94bd3a48622d1080fd36a9b3022b4dbc8e8785d322.json","level":"info","msg":"Finished upload","time":"2018-08-21T00:08:53Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/generated/filesystem.info","level":"info","msg":"Finished upload","time":"2018-08-21T00:08:53Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/journals/systemd-journald.service","level":"info","msg":"Finished upload","time":"2018-08-21T00:08:53Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/generated/containers.log","level":"info","msg":"Finished upload","time":"2018-08-21T00:08:53Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/generated/docker.config","level":"info","msg":"Finished upload","time":"2018-08-21T00:08:53Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/journals/dnsmasq.service","level":"info","msg":"Finished upload","time":"2018-08-21T00:08:53Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/artifacts/prtest-7ef3e0b-45-ig-n-276f.metrics","level":"info","msg":"Finished upload","time":"2018-08-21T00:08:53Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/scripts/conformance-k8s/artifacts/nethealth.txt","level":"info","msg":"Finished upload","time":"2018-08-21T00:08:53Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/generated/avc_denials.log","level":"info","msg":"Finished upload","time":"2018-08-21T00:08:53Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/generated/docker.info","level":"info","msg":"Finished upload","time":"2018-08-21T00:08:53Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/artifacts/prtest-7ef3e0b-45-ig-n-4t3s.metrics","level":"info","msg":"Finished upload","time":"2018-08-21T00:08:53Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/generated/installed_packages.log","level":"info","msg":"Finished upload","time":"2018-08-21T00:08:53Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/finished.json","level":"info","msg":"Finished upload","time":"2018-08-21T00:08:53Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/generated/dmesg.log","level":"info","msg":"Finished upload","time":"2018-08-21T00:08:53Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/scripts/shell/logs/39a917e79859b93f9bc96440bb365e687fbf326ebd4cd00efe0f7cce75e8e42b.json","level":"info","msg":"Finished upload","time":"2018-08-21T00:08:53Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/latest-build.txt","level":"info","msg":"Finished upload","time":"2018-08-21T00:08:53Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/scripts/conformance-k8s/logs/scripts.log","level":"info","msg":"Finished upload","time":"2018-08-21T00:08:53Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/artifacts/prtest-7ef3e0b-45-ig-n-m97r.metrics","level":"info","msg":"Finished upload","time":"2018-08-21T00:08:53Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/scripts/conformance-k8s/artifacts/version.txt","level":"info","msg":"Finished upload","time":"2018-08-21T00:08:53Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/scripts/shell/logs/76d01185ba98302ae0fbfc03cea6c566e18d888bdee29f50fc1763b8e516475a.json","level":"info","msg":"Finished upload","time":"2018-08-21T00:08:53Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/generated/node-metrics.log","level":"info","msg":"Finished upload","time":"2018-08-21T00:08:53Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/scripts/push-release/logs/scripts.log","level":"info","msg":"Finished upload","time":"2018-08-21T00:08:53Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/journals/docker.service","level":"info","msg":"Finished upload","time":"2018-08-21T00:08:53Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/scripts/shell/logs/scripts.log","level":"info","msg":"Finished upload","time":"2018-08-21T00:08:53Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/scripts/conformance-k8s/artifacts/README.md","level":"info","msg":"Finished upload","time":"2018-08-21T00:08:53Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/scripts/conformance-k8s/artifacts/junit_01.xml","level":"info","msg":"Finished upload","time":"2018-08-21T00:08:53Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/artifacts/prtest-7ef3e0b-45-ig-m-gvf6.metrics","level":"info","msg":"Finished upload","time":"2018-08-21T00:08:53Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/generated/pid1.journal","level":"info","msg":"Finished upload","time":"2018-08-21T00:08:53Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/scripts/conformance-k8s/artifacts/e2e.log","level":"info","msg":"Finished upload","time":"2018-08-21T00:08:53Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/build-log.txt","level":"info","msg":"Finished upload","time":"2018-08-21T00:08:53Z"}
{"component":"gcsupload","dest":"logs/test_branch_origin_extended_conformance_k8s_310/45/artifacts/artifacts/master.metrics","level":"info","msg":"Finished upload","time":"2018-08-21T00:08:53Z"}
{"component":"gcsupload","level":"info","msg":"Finished upload to GCS","time":"2018-08-21T00:08:53Z"}
+ exit 0
+ set +o xtrace
########## FINISHED STAGE: SUCCESS: PUSH THE ARTIFACTS AND METADATA [00h 00m 06s] ##########
[workspace] $ /bin/bash /tmp/jenkins3821919339618979389.sh
########## STARTING STAGE: GATHER ARTIFACTS FROM TEST CLUSTER ##########
+ [[ -s /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/activate ]]
+ source /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/dae8b1fdd92e4c6b040802a9f6893334ae0660fd
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/dae8b1fdd92e4c6b040802a9f6893334ae0660fd
++ export PATH=/var/lib/jenkins/origin-ci-tool/dae8b1fdd92e4c6b040802a9f6893334ae0660fd/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/dae8b1fdd92e4c6b040802a9f6893334ae0660fd/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/.config
+ trap 'exit 0' EXIT
++ pwd
+ base_artifact_dir=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/artifacts
+ source ./INSTANCE_PREFIX
++ INSTANCE_PREFIX=prtest-7ef3e0b-45
++ OS_TAG=a5e4ac9
++ OS_PUSH_BASE_REPO=ci-pr-images/prtest-7ef3e0b-45-
++ gcloud compute instances list --regexp '.*prtest-7ef3e0b-45.*' --uri
+ for instance in '$( gcloud compute instances list --regexp ".*${INSTANCE_PREFIX}.*" --uri )'
++ mktemp
+ info=/tmp/tmp.9DxJvLKvjY
+ gcloud compute instances describe https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-m-gvf6 --format json
++ jq .name --raw-output /tmp/tmp.9DxJvLKvjY
++ tail -c 5
+ name=gvf6
+ jq '.tags.items | contains(["ocp-master"])' --exit-status /tmp/tmp.9DxJvLKvjY
true
+ artifact_dir=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/artifacts/masters/gvf6
+ mkdir -p /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/artifacts/masters/gvf6 /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/artifacts/masters/gvf6/generated /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/artifacts/masters/gvf6/journals
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-m-gvf6 -- sudo journalctl --unit origin-master.service --no-pager --all --lines=all
Warning: Permanently added 'compute.8624454741052030746' (ECDSA) to the list of known hosts.
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-m-gvf6 -- sudo journalctl --unit origin-master-api.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-m-gvf6 -- sudo journalctl --unit origin-master-controllers.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-m-gvf6 -- sudo journalctl --unit etcd.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-m-gvf6 -- oc get --raw /metrics --config=/etc/origin/master/admin.kubeconfig
error: Error loading config file "/etc/origin/master/admin.kubeconfig": open /etc/origin/master/admin.kubeconfig: permission denied
+ true
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-m-gvf6 -- sudo journalctl --unit origin-node.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-m-gvf6 -- sudo journalctl --unit openvswitch.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-m-gvf6 -- sudo journalctl --unit ovs-vswitchd.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-m-gvf6 -- sudo journalctl --unit ovsdb-server.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-m-gvf6 -- sudo journalctl --unit docker.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-m-gvf6 -- sudo journalctl --unit dnsmasq.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-m-gvf6 -- sudo journalctl --unit systemd-journald.service --no-pager --all --lines=all
++ uname --nodename
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-m-gvf6 -- oc get --raw /metrics --server=https://ip-172-18-8-64.ec2.internal:10250
Unable to connect to the server: dial tcp: lookup ip-172-18-8-64.ec2.internal on 10.142.0.5:53: no such host
+ true
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-m-gvf6 -- 'sudo docker version && sudo docker info && sudo docker images && sudo docker ps -a'
  WARNING: You're not using the default seccomp profile
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-m-gvf6 -- sudo yum history info origin
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-m-gvf6 -- 'sudo df -h && sudo pvs && sudo vgs && sudo lvs'
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-m-gvf6 -- sudo yum list installed
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-m-gvf6 -- sudo ausearch -m AVC -m SELINUX_ERR -m USER_AVC
<no matches>
+ true
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-m-gvf6 -- sudo journalctl _PID=1 --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-m-gvf6 -- 'sudo find /var/lib/docker/containers -name *.log | sudo xargs tail -vn +1'
+ for instance in '$( gcloud compute instances list --regexp ".*${INSTANCE_PREFIX}.*" --uri )'
++ mktemp
+ info=/tmp/tmp.gdYohEGF9x
+ gcloud compute instances describe https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-n-276f --format json
++ jq .name --raw-output /tmp/tmp.gdYohEGF9x
++ tail -c 5
+ name=276f
+ jq '.tags.items | contains(["ocp-master"])' --exit-status /tmp/tmp.gdYohEGF9x
false
+ jq '.tags.items | contains(["ocp-node"])' --exit-status /tmp/tmp.gdYohEGF9x
true
+ artifact_dir=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/artifacts/nodes/276f
+ mkdir -p /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/artifacts/nodes/276f /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/artifacts/nodes/276f/generated /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/artifacts/nodes/276f/journals
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-n-276f -- sudo journalctl --unit origin-node.service --no-pager --all --lines=all
Warning: Permanently added 'compute.2017204918356999961' (ECDSA) to the list of known hosts.
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-n-276f -- sudo journalctl --unit openvswitch.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-n-276f -- sudo journalctl --unit ovs-vswitchd.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-n-276f -- sudo journalctl --unit ovsdb-server.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-n-276f -- sudo journalctl --unit docker.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-n-276f -- sudo journalctl --unit dnsmasq.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-n-276f -- sudo journalctl --unit systemd-journald.service --no-pager --all --lines=all
++ uname --nodename
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-n-276f -- oc get --raw /metrics --server=https://ip-172-18-8-64.ec2.internal:10250
Unable to connect to the server: dial tcp: lookup ip-172-18-8-64.ec2.internal on 10.142.0.4:53: no such host
+ true
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-n-276f -- 'sudo docker version && sudo docker info && sudo docker images && sudo docker ps -a'
  WARNING: You're not using the default seccomp profile
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-n-276f -- sudo yum history info origin
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-n-276f -- 'sudo df -h && sudo pvs && sudo vgs && sudo lvs'
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-n-276f -- sudo yum list installed
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-n-276f -- sudo ausearch -m AVC -m SELINUX_ERR -m USER_AVC
<no matches>
+ true
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-n-276f -- sudo journalctl _PID=1 --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-n-276f -- 'sudo find /var/lib/docker/containers -name *.log | sudo xargs tail -vn +1'
+ for instance in '$( gcloud compute instances list --regexp ".*${INSTANCE_PREFIX}.*" --uri )'
++ mktemp
+ info=/tmp/tmp.zYuagbEemB
+ gcloud compute instances describe https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-n-4t3s --format json
++ jq .name --raw-output /tmp/tmp.zYuagbEemB
++ tail -c 5
+ name=4t3s
+ jq '.tags.items | contains(["ocp-master"])' --exit-status /tmp/tmp.zYuagbEemB
false
+ jq '.tags.items | contains(["ocp-node"])' --exit-status /tmp/tmp.zYuagbEemB
true
+ artifact_dir=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/artifacts/nodes/4t3s
+ mkdir -p /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/artifacts/nodes/4t3s /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/artifacts/nodes/4t3s/generated /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/artifacts/nodes/4t3s/journals
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-n-4t3s -- sudo journalctl --unit origin-node.service --no-pager --all --lines=all
Warning: Permanently added 'compute.8151943128003575578' (ECDSA) to the list of known hosts.
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-n-4t3s -- sudo journalctl --unit openvswitch.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-n-4t3s -- sudo journalctl --unit ovs-vswitchd.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-n-4t3s -- sudo journalctl --unit ovsdb-server.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-n-4t3s -- sudo journalctl --unit docker.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-n-4t3s -- sudo journalctl --unit dnsmasq.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-n-4t3s -- sudo journalctl --unit systemd-journald.service --no-pager --all --lines=all
++ uname --nodename
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-n-4t3s -- oc get --raw /metrics --server=https://ip-172-18-8-64.ec2.internal:10250
Unable to connect to the server: dial tcp: lookup ip-172-18-8-64.ec2.internal on 10.142.0.2:53: no such host
+ true
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-n-4t3s -- 'sudo docker version && sudo docker info && sudo docker images && sudo docker ps -a'
  WARNING: You're not using the default seccomp profile
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-n-4t3s -- sudo yum history info origin
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-n-4t3s -- 'sudo df -h && sudo pvs && sudo vgs && sudo lvs'
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-n-4t3s -- sudo yum list installed
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-n-4t3s -- sudo ausearch -m AVC -m SELINUX_ERR -m USER_AVC
<no matches>
+ true
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-n-4t3s -- sudo journalctl _PID=1 --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-n-4t3s -- 'sudo find /var/lib/docker/containers -name *.log | sudo xargs tail -vn +1'
+ for instance in '$( gcloud compute instances list --regexp ".*${INSTANCE_PREFIX}.*" --uri )'
++ mktemp
+ info=/tmp/tmp.lZCpuwnkO4
+ gcloud compute instances describe https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-n-m97r --format json
++ jq .name --raw-output /tmp/tmp.lZCpuwnkO4
++ tail -c 5
+ name=m97r
+ jq '.tags.items | contains(["ocp-master"])' --exit-status /tmp/tmp.lZCpuwnkO4
false
+ jq '.tags.items | contains(["ocp-node"])' --exit-status /tmp/tmp.lZCpuwnkO4
true
+ artifact_dir=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/artifacts/nodes/m97r
+ mkdir -p /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/artifacts/nodes/m97r /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/artifacts/nodes/m97r/generated /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/artifacts/nodes/m97r/journals
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-n-m97r -- sudo journalctl --unit origin-node.service --no-pager --all --lines=all
Warning: Permanently added 'compute.5990745615067709209' (ECDSA) to the list of known hosts.
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-n-m97r -- sudo journalctl --unit openvswitch.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-n-m97r -- sudo journalctl --unit ovs-vswitchd.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-n-m97r -- sudo journalctl --unit ovsdb-server.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-n-m97r -- sudo journalctl --unit docker.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-n-m97r -- sudo journalctl --unit dnsmasq.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-n-m97r -- sudo journalctl --unit systemd-journald.service --no-pager --all --lines=all
++ uname --nodename
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-n-m97r -- oc get --raw /metrics --server=https://ip-172-18-8-64.ec2.internal:10250
Unable to connect to the server: dial tcp: lookup ip-172-18-8-64.ec2.internal on 10.142.0.3:53: no such host
+ true
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-n-m97r -- 'sudo docker version && sudo docker info && sudo docker images && sudo docker ps -a'
  WARNING: You're not using the default seccomp profile
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-n-m97r -- sudo yum history info origin
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-n-m97r -- 'sudo df -h && sudo pvs && sudo vgs && sudo lvs'
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-n-m97r -- sudo yum list installed
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-n-m97r -- sudo ausearch -m AVC -m SELINUX_ERR -m USER_AVC
<no matches>
+ true
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-n-m97r -- sudo journalctl _PID=1 --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/prtest-7ef3e0b-45-ig-n-m97r -- 'sudo find /var/lib/docker/containers -name *.log | sudo xargs tail -vn +1'
+ exit 0
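The three near-identical blocks above come from one loop over `gcloud compute instances list --uri`: each instance is described to a temp file, the short node name is taken from the last four characters of `.name` (the `tail -c 5` in the trace keeps four characters plus the trailing newline), the `ocp-master`/`ocp-node` tags select the artifact directory, and a fixed battery of `journalctl`, `docker`, and `yum` commands is run over SSH. A minimal sketch of that loop, with the `gcloud`/`jq` cluster calls left as comments since they assume live GCE credentials:

```shell
#!/bin/bash
# Sketch of the per-instance artifact-gathering loop traced above.
# The gcloud/jq calls are shown as comments (they need a live GCE
# project); only the pure-shell pieces run here.
set -o errexit -o nounset -o pipefail

# The trace derives the short node name with `tail -c 5` on the jq
# output: the instance name's last 4 characters plus the newline.
short_name() {
    printf '%s\n' "$1" | tail -c 5 | tr -d '\n'
}

gather_instance() {
    local instance=$1 artifact_root=$2
    local dir
    # info="$(mktemp)"
    # gcloud compute instances describe "$instance" --format json > "$info"
    # jq '.tags.items | contains(["ocp-node"])' --exit-status "$info"
    dir="$artifact_root/nodes/$(short_name "$instance")"
    mkdir -p "$dir" "$dir/generated" "$dir/journals"
    # gcloud compute ssh "$instance" -- \
    #     sudo journalctl --unit origin-node.service --no-pager --all --lines=all \
    #     > "$dir/journals/origin-node.service.log"
    echo "$dir"
}

# for instance in $( gcloud compute instances list --regexp ".*${INSTANCE_PREFIX}.*" --uri ); do
#     gather_instance "$instance" "$artifact_root"
# done
```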
[workspace] $ /bin/bash /tmp/jenkins6968221284918125128.sh
########## STARTING STAGE: DEPROVISION TEST CLUSTER ##########
+ [[ -s /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/activate ]]
+ source /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/dae8b1fdd92e4c6b040802a9f6893334ae0660fd
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/dae8b1fdd92e4c6b040802a9f6893334ae0660fd
++ export PATH=/var/lib/jenkins/origin-ci-tool/dae8b1fdd92e4c6b040802a9f6893334ae0660fd/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/dae8b1fdd92e4c6b040802a9f6893334ae0660fd/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/.config
++ mktemp
+ script=/tmp/tmp.etFGqKWZQn
+ cat
+ chmod +x /tmp/tmp.etFGqKWZQn
+ scp -F /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/.config/origin-ci-tool/inventory/.ssh_config /tmp/tmp.etFGqKWZQn openshiftdevel:/tmp/tmp.etFGqKWZQn
+ ssh -F /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/.config/origin-ci-tool/inventory/.ssh_config -t openshiftdevel 'bash -l -c "timeout 900 /tmp/tmp.etFGqKWZQn"'
+ cd /data/src/github.com/openshift/release
+ trap 'exit 0' EXIT
+ cd cluster/test-deploy/
+ make down WHAT=prtest-7ef3e0b-45 PROFILE=gcp
cd gcp/ && ../../bin/ansible.sh ansible-playbook playbooks/gcp/openshift-cluster/deprovision.yml
Activated service account credentials for: [jenkins-ci-provisioner@openshift-gce-devel.iam.gserviceaccount.com]

PLAY [Terminate running cluster and remove all supporting resources in GCE] ****

TASK [Gathering Facts] *********************************************************
Tuesday 21 August 2018  00:11:05 +0000 (0:00:00.068)       0:00:00.068 ******** 
ok: [localhost]

TASK [include_role] ************************************************************
Tuesday 21 August 2018  00:11:11 +0000 (0:00:06.067)       0:00:06.135 ******** 

TASK [openshift_gcp : Templatize DNS script] ***********************************
Tuesday 21 August 2018  00:11:11 +0000 (0:00:00.113)       0:00:06.249 ******** 
changed: [localhost]

TASK [openshift_gcp : Templatize provision script] *****************************
Tuesday 21 August 2018  00:11:11 +0000 (0:00:00.538)       0:00:06.788 ******** 
changed: [localhost]

TASK [openshift_gcp : Templatize de-provision script] **************************
Tuesday 21 August 2018  00:11:12 +0000 (0:00:00.346)       0:00:07.135 ******** 
changed: [localhost]

TASK [openshift_gcp : Provision GCP DNS domain] ********************************
Tuesday 21 August 2018  00:11:12 +0000 (0:00:00.326)       0:00:07.461 ******** 
skipping: [localhost]

TASK [openshift_gcp : Ensure that DNS resolves to the hosted zone] *************
Tuesday 21 August 2018  00:11:12 +0000 (0:00:00.028)       0:00:07.490 ******** 
skipping: [localhost]

TASK [openshift_gcp : Templatize SSH key provision script] *********************
Tuesday 21 August 2018  00:11:12 +0000 (0:00:00.027)       0:00:07.517 ******** 
changed: [localhost]

TASK [openshift_gcp : Provision GCP SSH key resources] *************************
Tuesday 21 August 2018  00:11:13 +0000 (0:00:00.305)       0:00:07.822 ******** 
skipping: [localhost]

TASK [openshift_gcp : Provision GCP resources] *********************************
Tuesday 21 August 2018  00:11:13 +0000 (0:00:00.028)       0:00:07.851 ******** 
skipping: [localhost]

TASK [openshift_gcp : De-provision GCP resources] ******************************
Tuesday 21 August 2018  00:11:13 +0000 (0:00:00.025)       0:00:07.876 ******** 
changed: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=6    changed=5    unreachable=0    failed=0   

Tuesday 21 August 2018  00:15:30 +0000 (0:04:17.681)       0:04:25.558 ******** 
=============================================================================== 
openshift_gcp : De-provision GCP resources ---------------------------- 257.68s
Gathering Facts --------------------------------------------------------- 6.07s
openshift_gcp : Templatize DNS script ----------------------------------- 0.54s
openshift_gcp : Templatize provision script ----------------------------- 0.35s
openshift_gcp : Templatize de-provision script -------------------------- 0.33s
openshift_gcp : Templatize SSH key provision script --------------------- 0.31s
include_role ------------------------------------------------------------ 0.11s
openshift_gcp : Provision GCP DNS domain -------------------------------- 0.03s
openshift_gcp : Provision GCP SSH key resources ------------------------- 0.03s
openshift_gcp : Ensure that DNS resolves to the hosted zone ------------- 0.03s
openshift_gcp : Provision GCP resources --------------------------------- 0.03s
+ exit 0
+ set +o xtrace
########## FINISHED STAGE: SUCCESS: DEPROVISION TEST CLUSTER [00h 04m 33s] ##########
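Each stage wrapper above follows the same pattern, visible in its `mktemp`/`scp`/`ssh` trace: write the stage body to a local temp script, copy it to the remote host, and run it there under `timeout` so a hung stage cannot wedge the job, with `trap 'exit 0' EXIT` guaranteeing a clean exit code. A hedged sketch of the pattern; the `scp`/`ssh` transfer is left commented because the `openshiftdevel` alias and ssh_config path are specific to this CI host:

```shell
#!/bin/bash
# Sketch of the "stage a script, then run it remotely under a timeout"
# pattern each Jenkins stage uses above. The scp/ssh lines stay as
# comments: they assume the CI host's ssh_config and host alias.
set -o errexit -o nounset -o pipefail

script="$(mktemp)"
cat > "$script" <<'EOF'
#!/bin/bash
trap 'exit 0' EXIT        # the traced stages always force a clean exit
echo "stage body runs here"
EOF
chmod +x "$script"

# scp -F "$ssh_config" "$script" openshiftdevel:"$script"
# ssh -F "$ssh_config" -t openshiftdevel "bash -l -c \"timeout 900 $script\""
```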
[workspace] $ /bin/bash /tmp/jenkins4797799160271665408.sh
########## STARTING STAGE: DELETE PR IMAGES ##########
+ [[ -s /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/activate ]]
+ source /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/dae8b1fdd92e4c6b040802a9f6893334ae0660fd
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/dae8b1fdd92e4c6b040802a9f6893334ae0660fd
++ export PATH=/var/lib/jenkins/origin-ci-tool/dae8b1fdd92e4c6b040802a9f6893334ae0660fd/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/dae8b1fdd92e4c6b040802a9f6893334ae0660fd/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/.config
+ trap 'exit 0' EXIT
+ source ./INSTANCE_PREFIX
++ INSTANCE_PREFIX=prtest-7ef3e0b-45
++ OS_TAG=a5e4ac9
++ OS_PUSH_BASE_REPO=ci-pr-images/prtest-7ef3e0b-45-
+ export KUBECONFIG=/var/lib/jenkins/secrets/image-pr-push.kubeconfig
+ KUBECONFIG=/var/lib/jenkins/secrets/image-pr-push.kubeconfig
+ oc get is -o name -n ci-pr-images
+ grep prtest-7ef3e0b-45
+ xargs -r oc delete
imagestream "prtest-7ef3e0b-45-node" deleted
imagestream "prtest-7ef3e0b-45-origin" deleted
imagestream "prtest-7ef3e0b-45-origin-base" deleted
imagestream "prtest-7ef3e0b-45-origin-cli" deleted
imagestream "prtest-7ef3e0b-45-origin-control-plane" deleted
imagestream "prtest-7ef3e0b-45-origin-deployer" deleted
imagestream "prtest-7ef3e0b-45-origin-docker-builder" deleted
imagestream "prtest-7ef3e0b-45-origin-docker-registry" deleted
imagestream "prtest-7ef3e0b-45-origin-egress-dns-proxy" deleted
imagestream "prtest-7ef3e0b-45-origin-egress-http-proxy" deleted
imagestream "prtest-7ef3e0b-45-origin-egress-router" deleted
imagestream "prtest-7ef3e0b-45-origin-f5-router" deleted
imagestream "prtest-7ef3e0b-45-origin-haproxy-router" deleted
imagestream "prtest-7ef3e0b-45-origin-hyperkube" deleted
imagestream "prtest-7ef3e0b-45-origin-hypershift" deleted
imagestream "prtest-7ef3e0b-45-origin-keepalived-ipfailover" deleted
imagestream "prtest-7ef3e0b-45-origin-metrics-server" deleted
imagestream "prtest-7ef3e0b-45-origin-node" deleted
imagestream "prtest-7ef3e0b-45-origin-pod" deleted
imagestream "prtest-7ef3e0b-45-origin-recycler" deleted
imagestream "prtest-7ef3e0b-45-origin-template-service-broker" deleted
imagestream "prtest-7ef3e0b-45-origin-tests" deleted
imagestream "prtest-7ef3e0b-45-origin-web-console" deleted
+ exit 0
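The DELETE PR IMAGES stage above is a single pipeline: list every imagestream in `ci-pr-images`, keep only names carrying this run's `INSTANCE_PREFIX`, and hand the survivors to `oc delete` via `xargs -r` (GNU `--no-run-if-empty`, so `oc delete` never runs with no arguments when nothing matches). The filtering core can be sketched against sample input; the real `oc` producer and consumer are commented since they assume cluster credentials:

```shell
#!/bin/bash
# Sketch of the PR-image cleanup pipeline. The real producer is
# `oc get is -o name -n ci-pr-images`; sample names stand in here
# so the grep/xargs core runs without cluster access.
set -o errexit -o nounset -o pipefail

INSTANCE_PREFIX=prtest-7ef3e0b-45

streams='imagestream/prtest-7ef3e0b-45-node
imagestream/unrelated-job-origin
imagestream/prtest-7ef3e0b-45-origin-cli'

# oc get is -o name -n ci-pr-images | grep "$INSTANCE_PREFIX" | xargs -r oc delete
matched="$(printf '%s\n' "$streams" | grep "$INSTANCE_PREFIX" | xargs -r echo)"
echo "$matched"
```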
[workspace] $ /bin/bash /tmp/jenkins4204110324475279984.sh
########## STARTING STAGE: DEPROVISION CLOUD RESOURCES ##########
+ [[ -s /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/activate ]]
+ source /var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/dae8b1fdd92e4c6b040802a9f6893334ae0660fd
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/dae8b1fdd92e4c6b040802a9f6893334ae0660fd
++ export PATH=/var/lib/jenkins/origin-ci-tool/dae8b1fdd92e4c6b040802a9f6893334ae0660fd/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/dae8b1fdd92e4c6b040802a9f6893334ae0660fd/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/.config
+ oct deprovision

PLAYBOOK: main.yml *************************************************************
4 plays in /var/lib/jenkins/origin-ci-tool/dae8b1fdd92e4c6b040802a9f6893334ae0660fd/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml

PLAY [ensure we have the parameters necessary to deprovision virtual hosts] ****

TASK [ensure all required variables are set] ***********************************
task path: /var/lib/jenkins/origin-ci-tool/dae8b1fdd92e4c6b040802a9f6893334ae0660fd/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml:9
skipping: [localhost] => (item=origin_ci_inventory_dir)  => {
    "changed": false, 
    "generated_timestamp": "2018-08-20 20:15:37.293665", 
    "item": "origin_ci_inventory_dir", 
    "skip_reason": "Conditional check failed", 
    "skipped": true
}
skipping: [localhost] => (item=origin_ci_aws_region)  => {
    "changed": false, 
    "generated_timestamp": "2018-08-20 20:15:37.296477", 
    "item": "origin_ci_aws_region", 
    "skip_reason": "Conditional check failed", 
    "skipped": true
}

PLAY [deprovision virtual hosts in EC2] ****************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [deprovision a virtual EC2 host] ******************************************
task path: /var/lib/jenkins/origin-ci-tool/dae8b1fdd92e4c6b040802a9f6893334ae0660fd/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml:28
included: /var/lib/jenkins/origin-ci-tool/dae8b1fdd92e4c6b040802a9f6893334ae0660fd/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml for localhost

TASK [update the SSH configuration to remove AWS EC2 specifics] ****************
task path: /var/lib/jenkins/origin-ci-tool/dae8b1fdd92e4c6b040802a9f6893334ae0660fd/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:2
ok: [localhost] => {
    "changed": false, 
    "generated_timestamp": "2018-08-20 20:15:38.054095", 
    "msg": ""
}

TASK [rename EC2 instance for termination reaper] ******************************
task path: /var/lib/jenkins/origin-ci-tool/dae8b1fdd92e4c6b040802a9f6893334ae0660fd/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:8
changed: [localhost] => {
    "changed": true, 
    "generated_timestamp": "2018-08-20 20:15:38.614729", 
    "msg": "Tags {'Name': 'oct-terminate'} created for resource i-06548e9bcfbb4b337."
}

TASK [tear down the EC2 instance] **********************************************
task path: /var/lib/jenkins/origin-ci-tool/dae8b1fdd92e4c6b040802a9f6893334ae0660fd/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:15
changed: [localhost] => {
    "changed": true, 
    "generated_timestamp": "2018-08-20 20:15:39.468525", 
    "instance_ids": [
        "i-06548e9bcfbb4b337"
    ], 
    "instances": [
        {
            "ami_launch_index": "0", 
            "architecture": "x86_64", 
            "block_device_mapping": {
                "/dev/sda1": {
                    "delete_on_termination": true, 
                    "status": "attached", 
                    "volume_id": "vol-07c627b746cd55a35"
                }, 
                "/dev/sdb": {
                    "delete_on_termination": true, 
                    "status": "attached", 
                    "volume_id": "vol-0d9db0b63efd5fc18"
                }
            }, 
            "dns_name": "ec2-54-146-141-161.compute-1.amazonaws.com", 
            "ebs_optimized": false, 
            "groups": {
                "sg-7e73221a": "default"
            }, 
            "hypervisor": "xen", 
            "id": "i-06548e9bcfbb4b337", 
            "image_id": "ami-0b77b87a37c3e662c", 
            "instance_type": "m4.xlarge", 
            "kernel": null, 
            "key_name": "libra", 
            "launch_time": "2018-08-20T21:49:30.000Z", 
            "placement": "us-east-1d", 
            "private_dns_name": "ip-172-18-12-8.ec2.internal", 
            "private_ip": "172.18.12.8", 
            "public_dns_name": "ec2-54-146-141-161.compute-1.amazonaws.com", 
            "public_ip": "54.146.141.161", 
            "ramdisk": null, 
            "region": "us-east-1", 
            "root_device_name": "/dev/sda1", 
            "root_device_type": "ebs", 
            "state": "running", 
            "state_code": 16, 
            "tags": {
                "Name": "oct-terminate", 
                "openshift_etcd": "", 
                "openshift_master": "", 
                "openshift_node": ""
            }, 
            "tenancy": "default", 
            "virtualization_type": "hvm"
        }
    ], 
    "tagged_instances": []
}

TASK [remove the serialized host variables] ************************************
task path: /var/lib/jenkins/origin-ci-tool/dae8b1fdd92e4c6b040802a9f6893334ae0660fd/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:22
changed: [localhost] => {
    "changed": true, 
    "generated_timestamp": "2018-08-20 20:15:39.706287", 
    "path": "/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/.config/origin-ci-tool/inventory/host_vars/172.18.12.8.yml", 
    "state": "absent"
}

PLAY [deprovision virtual hosts locally managed by Vagrant] ********************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

PLAY [clean up local configuration for deprovisioned instances] ****************

TASK [remove inventory configuration directory] ********************************
task path: /var/lib/jenkins/origin-ci-tool/dae8b1fdd92e4c6b040802a9f6893334ae0660fd/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml:61
changed: [localhost] => {
    "changed": true, 
    "generated_timestamp": "2018-08-20 20:15:40.150995", 
    "path": "/var/lib/jenkins/jobs/test_branch_origin_extended_conformance_k8s_310/workspace/.config/origin-ci-tool/inventory", 
    "state": "absent"
}

PLAY RECAP *********************************************************************
localhost                  : ok=8    changed=4    unreachable=0    failed=0   

+ set +o xtrace
########## FINISHED STAGE: SUCCESS: DEPROVISION CLOUD RESOURCES [00h 00m 04s] ##########
Archiving artifacts
Recording test results
[WS-CLEANUP] Deleting project workspace...[WS-CLEANUP] done
Finished: SUCCESS