Console Output

STEP: Saw pod success
Apr 17 06:49:50.184: INFO: Pod "downwardapi-volume-83fa6648-420b-11e8-be33-0effa0d0e830" satisfied condition "success or failure"
Apr 17 06:49:50.199: INFO: Trying to get logs from node ci-prtest-060e08d-14-ig-n-bpk1 pod downwardapi-volume-83fa6648-420b-11e8-be33-0effa0d0e830 container client-container: <nil>
STEP: delete the pod
Apr 17 06:49:50.242: INFO: Waiting for pod downwardapi-volume-83fa6648-420b-11e8-be33-0effa0d0e830 to disappear
Apr 17 06:49:50.256: INFO: Pod downwardapi-volume-83fa6648-420b-11e8-be33-0effa0d0e830 no longer exists
[AfterEach] [sig-storage] Projected
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 06:49:50.256: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-6zvr2" for this suite.
Apr 17 06:49:56.333: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 17 06:49:57.044: INFO: namespace: e2e-tests-projected-6zvr2, resource: bindings, ignored listing per whitelist
Apr 17 06:49:57.752: INFO: namespace e2e-tests-projected-6zvr2 deletion completed in 7.466798338s

• [SLOW TEST:12.357 seconds]
[sig-storage] Projected
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
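For reference, a minimal sketch of the kind of pod the downward API test above creates (pod name and image are illustrative, not taken from the log): a projected downwardAPI volume exposes the container's memory limit via resourceFieldRef, and because the container declares no memory limit, the kubelet substitutes the node's allocatable memory.

  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-volume-example     # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox                     # assumed image; the e2e suite uses its own test image
      command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: memory_limit
              resourceFieldRef:
                containerName: client-container
                resource: limits.memory
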
SSSSS
------------------------------
[sig-storage] Projected 
  should be consumable from pods in volume [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [sig-storage] Projected
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Apr 17 06:49:57.752: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should be consumable from pods in volume [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating projection with secret that has name projected-secret-test-8b5c8527-420b-11e8-be33-0effa0d0e830
STEP: Creating a pod to test consume secrets
Apr 17 06:49:58.534: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8b5f4b90-420b-11e8-be33-0effa0d0e830" in namespace "e2e-tests-projected-8vwkf" to be "success or failure"
Apr 17 06:49:58.552: INFO: Pod "pod-projected-secrets-8b5f4b90-420b-11e8-be33-0effa0d0e830": Phase="Pending", Reason="", readiness=false. Elapsed: 17.929208ms
Apr 17 06:50:00.569: INFO: Pod "pod-projected-secrets-8b5f4b90-420b-11e8-be33-0effa0d0e830": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034498267s
Apr 17 06:50:02.585: INFO: Pod "pod-projected-secrets-8b5f4b90-420b-11e8-be33-0effa0d0e830": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050844902s
STEP: Saw pod success
Apr 17 06:50:02.585: INFO: Pod "pod-projected-secrets-8b5f4b90-420b-11e8-be33-0effa0d0e830" satisfied condition "success or failure"
Apr 17 06:50:02.601: INFO: Trying to get logs from node ci-prtest-060e08d-14-ig-n-9zw9 pod pod-projected-secrets-8b5f4b90-420b-11e8-be33-0effa0d0e830 container projected-secret-volume-test: <nil>
STEP: delete the pod
Apr 17 06:50:02.651: INFO: Waiting for pod pod-projected-secrets-8b5f4b90-420b-11e8-be33-0effa0d0e830 to disappear
Apr 17 06:50:02.667: INFO: Pod pod-projected-secrets-8b5f4b90-420b-11e8-be33-0effa0d0e830 no longer exists
[AfterEach] [sig-storage] Projected
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 06:50:02.667: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8vwkf" for this suite.
Apr 17 06:50:08.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 17 06:50:09.692: INFO: namespace: e2e-tests-projected-8vwkf, resource: bindings, ignored listing per whitelist
Apr 17 06:50:10.163: INFO: namespace e2e-tests-projected-8vwkf deletion completed in 7.467259416s

• [SLOW TEST:12.411 seconds]
[sig-storage] Projected
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:34
  should be consumable from pods in volume [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
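A minimal sketch of what "Creating projection with secret" amounts to (secret name, key, and image are assumptions): a Secret is created, a pod mounts it through a projected volume, and the test container reads the key's file back.

  apiVersion: v1
  kind: Secret
  metadata:
    name: projected-secret-example       # hypothetical name
  data:
    data-1: dmFsdWUtMQ==                 # base64 of "value-1"
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-projected-secrets-example  # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: projected-secret-volume-test
      image: busybox                     # assumed image
      command: ["sh", "-c", "cat /etc/projected-secret-volume/data-1"]
      volumeMounts:
      - name: projected-secret-volume
        mountPath: /etc/projected-secret-volume
    volumes:
    - name: projected-secret-volume
      projected:
        sources:
        - secret:
            name: projected-secret-example
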
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [sig-network] Services
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Apr 17 06:50:10.163: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:53
[It] should serve multiport endpoints from pods  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-7kfx6
STEP: waiting up to 1m0s for service multi-endpoint-test in namespace e2e-tests-services-7kfx6 to expose endpoints map[]
Apr 17 06:50:10.950: INFO: Get endpoints failed (22.296461ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Apr 17 06:50:11.966: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-7kfx6 exposes endpoints map[] (1.038129385s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-7kfx6
STEP: waiting up to 1m0s for service multi-endpoint-test in namespace e2e-tests-services-7kfx6 to expose endpoints map[pod1:[100]]
Apr 17 06:50:15.127: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-7kfx6 exposes endpoints map[pod1:[100]] (3.128008407s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-7kfx6
STEP: waiting up to 1m0s for service multi-endpoint-test in namespace e2e-tests-services-7kfx6 to expose endpoints map[pod1:[100] pod2:[101]]
Apr 17 06:50:17.306: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-7kfx6 exposes endpoints map[pod1:[100] pod2:[101]] (2.143493515s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-7kfx6
STEP: waiting up to 1m0s for service multi-endpoint-test in namespace e2e-tests-services-7kfx6 to expose endpoints map[pod2:[101]]
Apr 17 06:50:17.358: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-7kfx6 exposes endpoints map[pod2:[101]] (34.552027ms elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-7kfx6
STEP: waiting up to 1m0s for service multi-endpoint-test in namespace e2e-tests-services-7kfx6 to expose endpoints map[]
Apr 17 06:50:17.400: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-7kfx6 exposes endpoints map[] (21.362008ms elapsed)
[AfterEach] [sig-network] Services
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 06:50:17.438: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-7kfx6" for this suite.
Apr 17 06:50:23.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 17 06:50:24.841: INFO: namespace: e2e-tests-services-7kfx6, resource: bindings, ignored listing per whitelist
Apr 17 06:50:24.901: INFO: namespace e2e-tests-services-7kfx6 deletion completed in 7.431056013s
[AfterEach] [sig-network] Services
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:58

• [SLOW TEST:14.738 seconds]
[sig-network] Services
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
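The multiport endpoints check pairs a two-port Service with pods listening on the container ports seen in the endpoints maps above (100 and 101). A sketch, with the selector label assumed rather than read from the log:

  apiVersion: v1
  kind: Service
  metadata:
    name: multi-endpoint-test
  spec:
    selector:
      name: multi-endpoint-test          # assumed selector label
    ports:
    - name: portname1
      port: 80
      targetPort: 100
    - name: portname2
      port: 81
      targetPort: 101

The endpoints the test validates can be inspected by hand with:

  kubectl get endpoints multi-endpoint-test -o yaml
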
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [sig-cli] Kubectl client
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Apr 17 06:50:24.901: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[BeforeEach] [k8s.io] Update Demo
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:264
[It] should scale a replication controller  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: creating a replication controller
Apr 17 06:50:25.634: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig create -f - --namespace=e2e-tests-kubectl-pmv8j'
Apr 17 06:50:26.540: INFO: stderr: ""
Apr 17 06:50:26.540: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 17 06:50:26.540: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-pmv8j'
Apr 17 06:50:26.690: INFO: stderr: ""
Apr 17 06:50:26.690: INFO: stdout: "update-demo-nautilus-skffr update-demo-nautilus-zttlz "
Apr 17 06:50:26.690: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get pods update-demo-nautilus-skffr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pmv8j'
Apr 17 06:50:26.835: INFO: stderr: ""
Apr 17 06:50:26.835: INFO: stdout: ""
Apr 17 06:50:26.835: INFO: update-demo-nautilus-skffr is created but not running
Apr 17 06:50:31.836: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-pmv8j'
Apr 17 06:50:31.984: INFO: stderr: ""
Apr 17 06:50:31.984: INFO: stdout: "update-demo-nautilus-skffr update-demo-nautilus-zttlz "
Apr 17 06:50:31.984: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get pods update-demo-nautilus-skffr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pmv8j'
Apr 17 06:50:32.127: INFO: stderr: ""
Apr 17 06:50:32.127: INFO: stdout: "true"
Apr 17 06:50:32.127: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get pods update-demo-nautilus-skffr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pmv8j'
Apr 17 06:50:32.273: INFO: stderr: ""
Apr 17 06:50:32.273: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus-amd64:1.0"
Apr 17 06:50:32.273: INFO: validating pod update-demo-nautilus-skffr
Apr 17 06:50:32.292: INFO: got data: {
  "image": "nautilus.jpg"
}

Apr 17 06:50:32.292: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 17 06:50:32.292: INFO: update-demo-nautilus-skffr is verified up and running
Apr 17 06:50:32.292: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get pods update-demo-nautilus-zttlz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pmv8j'
Apr 17 06:50:32.435: INFO: stderr: ""
Apr 17 06:50:32.435: INFO: stdout: "true"
Apr 17 06:50:32.435: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get pods update-demo-nautilus-zttlz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pmv8j'
Apr 17 06:50:32.579: INFO: stderr: ""
Apr 17 06:50:32.579: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus-amd64:1.0"
Apr 17 06:50:32.579: INFO: validating pod update-demo-nautilus-zttlz
Apr 17 06:50:32.599: INFO: got data: {
  "image": "nautilus.jpg"
}

Apr 17 06:50:32.599: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 17 06:50:32.599: INFO: update-demo-nautilus-zttlz is verified up and running
STEP: scaling down the replication controller
Apr 17 06:50:32.600: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-pmv8j'
Apr 17 06:50:32.811: INFO: stderr: ""
Apr 17 06:50:32.811: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 17 06:50:32.811: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-pmv8j'
Apr 17 06:50:32.959: INFO: stderr: ""
Apr 17 06:50:32.959: INFO: stdout: "update-demo-nautilus-skffr update-demo-nautilus-zttlz "
STEP: Replicas for name=update-demo: expected=1 actual=2
Apr 17 06:50:37.959: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-pmv8j'
Apr 17 06:50:38.106: INFO: stderr: ""
Apr 17 06:50:38.106: INFO: stdout: "update-demo-nautilus-skffr "
Apr 17 06:50:38.106: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get pods update-demo-nautilus-skffr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pmv8j'
Apr 17 06:50:38.250: INFO: stderr: ""
Apr 17 06:50:38.250: INFO: stdout: "true"
Apr 17 06:50:38.250: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get pods update-demo-nautilus-skffr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pmv8j'
Apr 17 06:50:38.393: INFO: stderr: ""
Apr 17 06:50:38.393: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus-amd64:1.0"
Apr 17 06:50:38.393: INFO: validating pod update-demo-nautilus-skffr
Apr 17 06:50:38.409: INFO: got data: {
  "image": "nautilus.jpg"
}

Apr 17 06:50:38.409: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 17 06:50:38.409: INFO: update-demo-nautilus-skffr is verified up and running
STEP: scaling up the replication controller
Apr 17 06:50:38.409: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-pmv8j'
Apr 17 06:50:39.646: INFO: stderr: ""
Apr 17 06:50:39.646: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 17 06:50:39.646: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-pmv8j'
Apr 17 06:50:39.794: INFO: stderr: ""
Apr 17 06:50:39.794: INFO: stdout: "update-demo-nautilus-skffr update-demo-nautilus-vk8xz "
Apr 17 06:50:39.795: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get pods update-demo-nautilus-skffr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pmv8j'
Apr 17 06:50:39.937: INFO: stderr: ""
Apr 17 06:50:39.937: INFO: stdout: "true"
Apr 17 06:50:39.937: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get pods update-demo-nautilus-skffr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pmv8j'
Apr 17 06:50:40.080: INFO: stderr: ""
Apr 17 06:50:40.080: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus-amd64:1.0"
Apr 17 06:50:40.080: INFO: validating pod update-demo-nautilus-skffr
Apr 17 06:50:40.095: INFO: got data: {
  "image": "nautilus.jpg"
}

Apr 17 06:50:40.095: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 17 06:50:40.095: INFO: update-demo-nautilus-skffr is verified up and running
Apr 17 06:50:40.095: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get pods update-demo-nautilus-vk8xz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pmv8j'
Apr 17 06:50:40.240: INFO: stderr: ""
Apr 17 06:50:40.240: INFO: stdout: ""
Apr 17 06:50:40.240: INFO: update-demo-nautilus-vk8xz is created but not running
Apr 17 06:50:45.240: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-pmv8j'
Apr 17 06:50:45.389: INFO: stderr: ""
Apr 17 06:50:45.389: INFO: stdout: "update-demo-nautilus-skffr update-demo-nautilus-vk8xz "
Apr 17 06:50:45.389: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get pods update-demo-nautilus-skffr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pmv8j'
Apr 17 06:50:45.533: INFO: stderr: ""
Apr 17 06:50:45.533: INFO: stdout: "true"
Apr 17 06:50:45.533: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get pods update-demo-nautilus-skffr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pmv8j'
Apr 17 06:50:45.678: INFO: stderr: ""
Apr 17 06:50:45.678: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus-amd64:1.0"
Apr 17 06:50:45.678: INFO: validating pod update-demo-nautilus-skffr
Apr 17 06:50:45.695: INFO: got data: {
  "image": "nautilus.jpg"
}

Apr 17 06:50:45.695: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 17 06:50:45.695: INFO: update-demo-nautilus-skffr is verified up and running
Apr 17 06:50:45.695: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get pods update-demo-nautilus-vk8xz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pmv8j'
Apr 17 06:50:45.841: INFO: stderr: ""
Apr 17 06:50:45.841: INFO: stdout: "true"
Apr 17 06:50:45.841: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get pods update-demo-nautilus-vk8xz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pmv8j'
Apr 17 06:50:45.986: INFO: stderr: ""
Apr 17 06:50:45.986: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus-amd64:1.0"
Apr 17 06:50:45.986: INFO: validating pod update-demo-nautilus-vk8xz
Apr 17 06:50:46.005: INFO: got data: {
  "image": "nautilus.jpg"
}

Apr 17 06:50:46.005: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 17 06:50:46.005: INFO: update-demo-nautilus-vk8xz is verified up and running
STEP: using delete to clean up resources
Apr 17 06:50:46.006: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-pmv8j'
Apr 17 06:50:47.314: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 17 06:50:47.314: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" deleted\n"
Apr 17 06:50:47.314: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-pmv8j'
Apr 17 06:50:47.476: INFO: stderr: "No resources found.\n"
Apr 17 06:50:47.476: INFO: stdout: ""
Apr 17 06:50:47.476: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get pods -l name=update-demo --namespace=e2e-tests-kubectl-pmv8j -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 17 06:50:47.622: INFO: stderr: ""
Apr 17 06:50:47.622: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 06:50:47.622: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-pmv8j" for this suite.
Apr 17 06:51:09.701: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 17 06:51:10.530: INFO: namespace: e2e-tests-kubectl-pmv8j, resource: bindings, ignored listing per whitelist
Apr 17 06:51:11.092: INFO: namespace e2e-tests-kubectl-pmv8j deletion completed in 23.441279289s

• [SLOW TEST:46.191 seconds]
[sig-cli] Kubectl client
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:669
    should scale a replication controller  [Conformance]
    /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
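Stripped of the CI binary path and --kubeconfig flag, the scale steps in the log above reduce to two kubectl operations plus a pod listing:

  kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m
  kubectl get pods -l name=update-demo \
    -o template --template='{{range .items}}{{.metadata.name}} {{end}}'
  kubectl scale rc update-demo-nautilus --replicas=2 --timeout=5m

The per-pod readiness probe in the log is the same go-template trick: render "true" only when the update-demo container reports a running state, and retry while stdout is empty.
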
SSS
------------------------------
[k8s.io] Pods 
  should be updated  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [k8s.io] Pods
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Apr 17 06:51:11.092: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:127
[It] should be updated  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Apr 17 06:51:16.395: INFO: Successfully updated pod "pod-update-b7096b9b-420b-11e8-be33-0effa0d0e830"
STEP: verifying the updated pod is in kubernetes
Apr 17 06:51:16.425: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 06:51:16.425: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-z6w6k" for this suite.
Apr 17 06:51:38.503: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 17 06:51:39.329: INFO: namespace: e2e-tests-pods-z6w6k, resource: bindings, ignored listing per whitelist
Apr 17 06:51:39.915: INFO: namespace e2e-tests-pods-z6w6k deletion completed in 23.461347686s

• [SLOW TEST:28.823 seconds]
[k8s.io] Pods
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:669
  should be updated  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
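The update itself is not shown beyond "Successfully updated pod"; this test performs an in-place mutation of pod metadata via the API. A comparable operation from the CLI, with pod name and label purely hypothetical, would be:

  kubectl label pod pod-update-example time=updated --overwrite
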
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [sig-apps] Daemon set [Serial]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Apr 17 06:51:39.915: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Apr 17 06:51:40.761: INFO: Number of nodes with available pods: 0
Apr 17 06:51:40.761: INFO: Node ci-prtest-060e08d-14-ig-m-tb3m is running more than one daemon pod
Apr 17 06:51:41.807: INFO: Number of nodes with available pods: 0
Apr 17 06:51:41.807: INFO: Node ci-prtest-060e08d-14-ig-m-tb3m is running more than one daemon pod
Apr 17 06:51:42.807: INFO: Number of nodes with available pods: 0
Apr 17 06:51:42.807: INFO: Node ci-prtest-060e08d-14-ig-m-tb3m is running more than one daemon pod
Apr 17 06:51:43.807: INFO: Number of nodes with available pods: 2
Apr 17 06:51:43.807: INFO: Node ci-prtest-060e08d-14-ig-m-tb3m is running more than one daemon pod
Apr 17 06:51:44.807: INFO: Number of nodes with available pods: 4
Apr 17 06:51:44.807: INFO: Number of running nodes: 4, number of available pods: 4
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Apr 17 06:51:44.914: INFO: Number of nodes with available pods: 3
Apr 17 06:51:44.914: INFO: Node ci-prtest-060e08d-14-ig-n-zwgf is running more than one daemon pod
Apr 17 06:51:45.959: INFO: Number of nodes with available pods: 3
Apr 17 06:51:45.959: INFO: Node ci-prtest-060e08d-14-ig-n-zwgf is running more than one daemon pod
Apr 17 06:51:46.961: INFO: Number of nodes with available pods: 3
Apr 17 06:51:46.961: INFO: Node ci-prtest-060e08d-14-ig-n-zwgf is running more than one daemon pod
Apr 17 06:51:47.960: INFO: Number of nodes with available pods: 4
Apr 17 06:51:47.960: INFO: Number of running nodes: 4, number of available pods: 4
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:66
STEP: Deleting DaemonSet "daemon-set" with reaper
Apr 17 06:51:57.072: INFO: Number of nodes with available pods: 0
Apr 17 06:51:57.072: INFO: Number of running nodes: 0, number of available pods: 0
Apr 17 06:51:57.086: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-t24vw/daemonsets","resourceVersion":"26429"},"items":null}

Apr 17 06:51:57.101: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-t24vw/pods","resourceVersion":"26429"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 06:51:57.174: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-t24vw" for this suite.
Apr 17 06:52:03.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 17 06:52:04.384: INFO: namespace: e2e-tests-daemonsets-t24vw, resource: bindings, ignored listing per whitelist
Apr 17 06:52:04.632: INFO: namespace e2e-tests-daemonsets-t24vw deletion completed in 7.429335643s

• [SLOW TEST:24.717 seconds]
[sig-apps] Daemon set [Serial]
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
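"Creating a simple DaemonSet" corresponds to a manifest along these lines (label key and image are assumptions, not read from the log); the retry check then forces one daemon pod into the Failed phase through the API and waits for the controller to replace it:

  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: daemon-set
  spec:
    selector:
      matchLabels:
        daemonset-name: daemon-set       # assumed label
    template:
      metadata:
        labels:
          daemonset-name: daemon-set
      spec:
        containers:
        - name: app
          image: k8s.gcr.io/pause:3.1    # assumed image
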
S
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [sig-apps] Daemon set [Serial]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Apr 17 06:52:04.632: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
Apr 17 06:52:05.425: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Apr 17 06:52:05.458: INFO: Number of nodes with available pods: 0
Apr 17 06:52:05.458: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Apr 17 06:52:05.552: INFO: Number of nodes with available pods: 0
Apr 17 06:52:05.552: INFO: Node ci-prtest-060e08d-14-ig-n-9zw9 is running more than one daemon pod
Apr 17 06:52:06.570: INFO: Number of nodes with available pods: 0
Apr 17 06:52:06.570: INFO: Node ci-prtest-060e08d-14-ig-n-9zw9 is running more than one daemon pod
Apr 17 06:52:07.568: INFO: Number of nodes with available pods: 0
Apr 17 06:52:07.568: INFO: Node ci-prtest-060e08d-14-ig-n-9zw9 is running more than one daemon pod
Apr 17 06:52:08.568: INFO: Number of nodes with available pods: 1
Apr 17 06:52:08.568: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Apr 17 06:52:08.642: INFO: Number of nodes with available pods: 0
Apr 17 06:52:08.642: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Apr 17 06:52:08.674: INFO: Number of nodes with available pods: 0
Apr 17 06:52:08.674: INFO: Node ci-prtest-060e08d-14-ig-n-9zw9 is running more than one daemon pod
Apr 17 06:52:09.690: INFO: Number of nodes with available pods: 0
Apr 17 06:52:09.690: INFO: Node ci-prtest-060e08d-14-ig-n-9zw9 is running more than one daemon pod
Apr 17 06:52:10.690: INFO: Number of nodes with available pods: 0
Apr 17 06:52:10.690: INFO: Node ci-prtest-060e08d-14-ig-n-9zw9 is running more than one daemon pod
Apr 17 06:52:11.690: INFO: Number of nodes with available pods: 0
Apr 17 06:52:11.690: INFO: Node ci-prtest-060e08d-14-ig-n-9zw9 is running more than one daemon pod
Apr 17 06:52:12.690: INFO: Number of nodes with available pods: 0
Apr 17 06:52:12.690: INFO: Node ci-prtest-060e08d-14-ig-n-9zw9 is running more than one daemon pod
Apr 17 06:52:13.690: INFO: Number of nodes with available pods: 0
Apr 17 06:52:13.690: INFO: Node ci-prtest-060e08d-14-ig-n-9zw9 is running more than one daemon pod
Apr 17 06:52:14.690: INFO: Number of nodes with available pods: 1
Apr 17 06:52:14.690: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:66
STEP: Deleting DaemonSet "daemon-set" with reaper
Apr 17 06:52:17.802: INFO: Number of nodes with available pods: 0
Apr 17 06:52:17.802: INFO: Number of running nodes: 0, number of available pods: 0
Apr 17 06:52:17.817: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-bjkc9/daemonsets","resourceVersion":"26599"},"items":null}

Apr 17 06:52:17.832: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-bjkc9/pods","resourceVersion":"26599"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 06:52:17.934: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-bjkc9" for this suite.
Apr 17 06:52:24.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 17 06:52:25.039: INFO: namespace: e2e-tests-daemonsets-bjkc9, resource: bindings, ignored listing per whitelist
Apr 17 06:52:25.434: INFO: namespace e2e-tests-daemonsets-bjkc9 deletion completed in 7.471342962s

• [SLOW TEST:20.802 seconds]
[sig-apps] Daemon set [Serial]
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
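The node-selector dance in this test can be reproduced by hand; the label key here is an assumption (the log only names the values "blue" and "green"):

  kubectl label node <node-name> color=blue
  # daemon pod schedules once spec.template.spec.nodeSelector
  # (e.g. color: blue) matches the node
  kubectl label node <node-name> color=green --overwrite
  # daemon pod is unscheduled, then launched again after the
  # DaemonSet's nodeSelector is updated to color: green
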
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [sig-storage] EmptyDir volumes
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Apr 17 06:52:25.434: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test emptydir 0777 on node default medium
Apr 17 06:52:26.135: INFO: Waiting up to 5m0s for pod "pod-e35948fe-420b-11e8-be33-0effa0d0e830" in namespace "e2e-tests-emptydir-tmkkn" to be "success or failure"
Apr 17 06:52:26.152: INFO: Pod "pod-e35948fe-420b-11e8-be33-0effa0d0e830": Phase="Pending", Reason="", readiness=false. Elapsed: 16.898788ms
Apr 17 06:52:28.168: INFO: Pod "pod-e35948fe-420b-11e8-be33-0effa0d0e830": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032230628s
Apr 17 06:52:30.183: INFO: Pod "pod-e35948fe-420b-11e8-be33-0effa0d0e830": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047899448s
STEP: Saw pod success
Apr 17 06:52:30.183: INFO: Pod "pod-e35948fe-420b-11e8-be33-0effa0d0e830" satisfied condition "success or failure"
Apr 17 06:52:30.198: INFO: Trying to get logs from node ci-prtest-060e08d-14-ig-n-zwgf pod pod-e35948fe-420b-11e8-be33-0effa0d0e830 container test-container: <nil>
STEP: delete the pod
Apr 17 06:52:30.245: INFO: Waiting for pod pod-e35948fe-420b-11e8-be33-0effa0d0e830 to disappear
Apr 17 06:52:30.260: INFO: Pod pod-e35948fe-420b-11e8-be33-0effa0d0e830 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 06:52:30.260: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-tmkkn" for this suite.
Apr 17 06:52:36.337: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 17 06:52:37.031: INFO: namespace: e2e-tests-emptydir-tmkkn, resource: bindings, ignored listing per whitelist
Apr 17 06:52:37.727: INFO: namespace e2e-tests-emptydir-tmkkn deletion completed in 7.438195256s

• [SLOW TEST:12.293 seconds]
[sig-storage] EmptyDir volumes
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
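The emptyDir matrix tests all follow the pattern visible here: a short-lived pod writes to the volume with the stated mode and exits 0 on success. A sketch of the (non-root, 0777, default medium) case, with UID, image, and command assumed:

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-emptydir-example           # hypothetical name
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1001                    # assumed non-root UID
    containers:
    - name: test-container
      image: busybox                     # assumed image
      command: ["sh", "-c", "mkdir -m 0777 /test-volume/d && ls -ld /test-volume/d"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir: {}                       # default medium: node disk, not tmpfs
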
SSSSS
------------------------------
[sig-storage] Projected 
  should be consumable from pods in volume with mappings [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [sig-storage] Projected
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Apr 17 06:52:37.727: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should be consumable from pods in volume with mappings [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating projection with secret that has name projected-secret-test-map-eaac8f84-420b-11e8-be33-0effa0d0e830
STEP: Creating a pod to test consume secrets
Apr 17 06:52:38.447: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-eaaf6233-420b-11e8-be33-0effa0d0e830" in namespace "e2e-tests-projected-d26f9" to be "success or failure"
Apr 17 06:52:38.464: INFO: Pod "pod-projected-secrets-eaaf6233-420b-11e8-be33-0effa0d0e830": Phase="Pending", Reason="", readiness=false. Elapsed: 16.78272ms
Apr 17 06:52:40.479: INFO: Pod "pod-projected-secrets-eaaf6233-420b-11e8-be33-0effa0d0e830": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032545214s
Apr 17 06:52:42.495: INFO: Pod "pod-projected-secrets-eaaf6233-420b-11e8-be33-0effa0d0e830": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048401761s
STEP: Saw pod success
Apr 17 06:52:42.495: INFO: Pod "pod-projected-secrets-eaaf6233-420b-11e8-be33-0effa0d0e830" satisfied condition "success or failure"
Apr 17 06:52:42.512: INFO: Trying to get logs from node ci-prtest-060e08d-14-ig-n-bpk1 pod pod-projected-secrets-eaaf6233-420b-11e8-be33-0effa0d0e830 container projected-secret-volume-test: <nil>
STEP: delete the pod
Apr 17 06:52:42.574: INFO: Waiting for pod pod-projected-secrets-eaaf6233-420b-11e8-be33-0effa0d0e830 to disappear
Apr 17 06:52:42.594: INFO: Pod pod-projected-secrets-eaaf6233-420b-11e8-be33-0effa0d0e830 no longer exists
[AfterEach] [sig-storage] Projected
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 06:52:42.594: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-d26f9" for this suite.
Apr 17 06:52:48.678: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 17 06:52:49.779: INFO: namespace: e2e-tests-projected-d26f9, resource: bindings, ignored listing per whitelist
Apr 17 06:52:50.060: INFO: namespace e2e-tests-projected-d26f9 deletion completed in 7.432795176s

• [SLOW TEST:12.333 seconds]
[sig-storage] Projected
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:34
  should be consumable from pods in volume with mappings [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
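The "with mappings" variant differs from the earlier secret test only in that keys are remapped to new paths inside the volume. The items stanza is the relevant piece (secret name and paths assumed):

  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-map-example   # hypothetical name
          items:
          - key: data-1
            path: new-path-data-1              # key "data-1" surfaces at this path
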
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [sig-cli] Kubectl client
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Apr 17 06:52:50.060: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[BeforeEach] [k8s.io] Update Demo
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:264
[It] should do a rolling update of a replication controller  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: creating the initial replication controller
Apr 17 06:52:50.742: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig create -f - --namespace=e2e-tests-kubectl-lglth'
Apr 17 06:52:51.031: INFO: stderr: ""
Apr 17 06:52:51.031: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 17 06:52:51.031: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-lglth'
Apr 17 06:52:51.178: INFO: stderr: ""
Apr 17 06:52:51.178: INFO: stdout: "update-demo-nautilus-2fc2r update-demo-nautilus-cnks7 "
Apr 17 06:52:51.178: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get pods update-demo-nautilus-2fc2r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lglth'
Apr 17 06:52:51.324: INFO: stderr: ""
Apr 17 06:52:51.324: INFO: stdout: ""
Apr 17 06:52:51.324: INFO: update-demo-nautilus-2fc2r is created but not running
Apr 17 06:52:56.324: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-lglth'
Apr 17 06:52:56.474: INFO: stderr: ""
Apr 17 06:52:56.474: INFO: stdout: "update-demo-nautilus-2fc2r update-demo-nautilus-cnks7 "
Apr 17 06:52:56.474: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get pods update-demo-nautilus-2fc2r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lglth'
Apr 17 06:52:56.619: INFO: stderr: ""
Apr 17 06:52:56.619: INFO: stdout: "true"
Apr 17 06:52:56.619: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get pods update-demo-nautilus-2fc2r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lglth'
Apr 17 06:52:56.762: INFO: stderr: ""
Apr 17 06:52:56.762: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus-amd64:1.0"
Apr 17 06:52:56.762: INFO: validating pod update-demo-nautilus-2fc2r
Apr 17 06:52:56.782: INFO: got data: {
  "image": "nautilus.jpg"
}

Apr 17 06:52:56.782: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 17 06:52:56.782: INFO: update-demo-nautilus-2fc2r is verified up and running
Apr 17 06:52:56.782: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get pods update-demo-nautilus-cnks7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lglth'
Apr 17 06:52:56.926: INFO: stderr: ""
Apr 17 06:52:56.926: INFO: stdout: "true"
Apr 17 06:52:56.926: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get pods update-demo-nautilus-cnks7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lglth'
Apr 17 06:52:57.071: INFO: stderr: ""
Apr 17 06:52:57.071: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus-amd64:1.0"
Apr 17 06:52:57.071: INFO: validating pod update-demo-nautilus-cnks7
Apr 17 06:52:57.090: INFO: got data: {
  "image": "nautilus.jpg"
}

Apr 17 06:52:57.090: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 17 06:52:57.090: INFO: update-demo-nautilus-cnks7 is verified up and running
STEP: rolling-update to new replication controller
Apr 17 06:52:57.090: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-lglth'
Apr 17 06:53:07.283: INFO: stderr: ""
Apr 17 06:53:07.283: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting update-demo-nautilus\nreplicationcontroller \"update-demo-kitten\" rolling updated to \"update-demo-kitten\"\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 17 06:53:07.283: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-lglth'
Apr 17 06:53:07.445: INFO: stderr: ""
Apr 17 06:53:07.445: INFO: stdout: "update-demo-kitten-4crjf update-demo-kitten-cjtht update-demo-nautilus-2fc2r "
STEP: Replicas for name=update-demo: expected=2 actual=3
Apr 17 06:53:12.445: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-lglth'
Apr 17 06:53:12.592: INFO: stderr: ""
Apr 17 06:53:12.592: INFO: stdout: "update-demo-kitten-4crjf update-demo-kitten-cjtht "
Apr 17 06:53:12.592: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get pods update-demo-kitten-4crjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lglth'
Apr 17 06:53:12.736: INFO: stderr: ""
Apr 17 06:53:12.736: INFO: stdout: "true"
Apr 17 06:53:12.736: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get pods update-demo-kitten-4crjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lglth'
Apr 17 06:53:12.879: INFO: stderr: ""
Apr 17 06:53:12.879: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten-amd64:1.0"
Apr 17 06:53:12.879: INFO: validating pod update-demo-kitten-4crjf
Apr 17 06:53:12.898: INFO: got data: {
  "image": "kitten.jpg"
}

Apr 17 06:53:12.898: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Apr 17 06:53:12.898: INFO: update-demo-kitten-4crjf is verified up and running
Apr 17 06:53:12.898: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get pods update-demo-kitten-cjtht -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lglth'
Apr 17 06:53:13.044: INFO: stderr: ""
Apr 17 06:53:13.044: INFO: stdout: "true"
Apr 17 06:53:13.044: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig get pods update-demo-kitten-cjtht -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lglth'
Apr 17 06:53:13.187: INFO: stderr: ""
Apr 17 06:53:13.187: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten-amd64:1.0"
Apr 17 06:53:13.187: INFO: validating pod update-demo-kitten-cjtht
Apr 17 06:53:13.206: INFO: got data: {
  "image": "kitten.jpg"
}

Apr 17 06:53:13.206: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Apr 17 06:53:13.206: INFO: update-demo-kitten-cjtht is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 06:53:13.206: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-lglth" for this suite.
Apr 17 06:53:35.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 17 06:53:36.611: INFO: namespace: e2e-tests-kubectl-lglth, resource: bindings, ignored listing per whitelist
Apr 17 06:53:36.670: INFO: namespace e2e-tests-kubectl-lglth deletion completed in 23.435950069s

• [SLOW TEST:46.610 seconds]
[sig-cli] Kubectl client
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:669
    should do a rolling update of a replication controller  [Conformance]
    /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
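Note: the container-state and image probes in the rolling-update validation above reduce to two kubectl Go-template queries. A standalone sketch (pod name and namespace are placeholders; "exists" is a helper kubectl registers for -o template output):

# Print "true" if the update-demo container in the pod is running:
kubectl get pods POD_NAME -n NAMESPACE -o template --template='{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'

# Print the image that container is actually running:
kubectl get pods POD_NAME -n NAMESPACE -o template --template='{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'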
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [sig-cli] Kubectl client
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Apr 17 06:53:36.670: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should check if v1 is in available api versions  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: validating api versions
Apr 17 06:53:37.324: INFO: Running '/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/bin/linux/amd64/kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig api-versions'
Apr 17 06:53:37.473: INFO: stderr: ""
Apr 17 06:53:37.473: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps.openshift.io/v1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nauthorization.openshift.io/v1\nautoscaling/v1\nautoscaling/v2beta1\nbatch/v1\nbatch/v1beta1\nbatch/v2alpha1\nbuild.openshift.io/v1\ncertificates.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nimage.openshift.io/v1\nnetwork.openshift.io/v1\nnetworking.k8s.io/v1\noauth.openshift.io/v1\npolicy/v1beta1\nproject.openshift.io/v1\nquota.openshift.io/v1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nroute.openshift.io/v1\nsecurity.openshift.io/v1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\ntemplate.openshift.io/v1\nuser.openshift.io/v1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 06:53:37.474: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-n89m8" for this suite.
Apr 17 06:53:43.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 17 06:53:44.402: INFO: namespace: e2e-tests-kubectl-n89m8, resource: bindings, ignored listing per whitelist
Apr 17 06:53:44.931: INFO: namespace e2e-tests-kubectl-n89m8 deletion completed in 7.42835348s

• [SLOW TEST:8.261 seconds]
[sig-cli] Kubectl client
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:669
    should check if v1 is in available api versions  [Conformance]
    /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
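Note: the api-versions check above only needs an exact-line match on "v1". The same assertion as a one-liner (kubeconfig path taken from the run above):

kubectl --kubeconfig=/tmp/cluster-admin.kubeconfig api-versions | grep -x v1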
[sig-storage] Projected 
  should provide container's memory request [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [sig-storage] Projected
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Apr 17 06:53:44.931: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should provide container's memory request [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test downward API volume plugin
Apr 17 06:53:45.681: INFO: Waiting up to 5m0s for pod "downwardapi-volume-12c2709c-420c-11e8-be33-0effa0d0e830" in namespace "e2e-tests-projected-7swtn" to be "success or failure"
Apr 17 06:53:45.701: INFO: Pod "downwardapi-volume-12c2709c-420c-11e8-be33-0effa0d0e830": Phase="Pending", Reason="", readiness=false. Elapsed: 18.995681ms
Apr 17 06:53:47.718: INFO: Pod "downwardapi-volume-12c2709c-420c-11e8-be33-0effa0d0e830": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036176594s
Apr 17 06:53:49.735: INFO: Pod "downwardapi-volume-12c2709c-420c-11e8-be33-0effa0d0e830": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053926936s
STEP: Saw pod success
Apr 17 06:53:49.735: INFO: Pod "downwardapi-volume-12c2709c-420c-11e8-be33-0effa0d0e830" satisfied condition "success or failure"
Apr 17 06:53:49.751: INFO: Trying to get logs from node ci-prtest-060e08d-14-ig-n-zwgf pod downwardapi-volume-12c2709c-420c-11e8-be33-0effa0d0e830 container client-container: <nil>
STEP: delete the pod
Apr 17 06:53:49.799: INFO: Waiting for pod downwardapi-volume-12c2709c-420c-11e8-be33-0effa0d0e830 to disappear
Apr 17 06:53:49.815: INFO: Pod downwardapi-volume-12c2709c-420c-11e8-be33-0effa0d0e830 no longer exists
[AfterEach] [sig-storage] Projected
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 06:53:49.815: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-7swtn" for this suite.
Apr 17 06:53:55.898: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 17 06:53:57.022: INFO: namespace: e2e-tests-projected-7swtn, resource: bindings, ignored listing per whitelist
Apr 17 06:53:57.323: INFO: namespace e2e-tests-projected-7swtn deletion completed in 7.476919051s

• [SLOW TEST:12.392 seconds]
[sig-storage] Projected
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:34
  should provide container's memory request [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
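Note: the downward API volume plugin exposes the container's memory request as a file in the pod. A minimal sketch of the kind of pod this test exercises, assuming a busybox image and hypothetical names; the resourceFieldRef is the part under test (the file holds the request in bytes, so 32Mi reads back as 33554432):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memory-demo          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                       # assumption: any image with /bin/sh
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: "32Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
EOF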
[sig-storage] ConfigMap 
  should be consumable from pods in volume  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [sig-storage] ConfigMap
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Apr 17 06:53:57.323: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating configMap with name configmap-test-volume-1a28a84b-420c-11e8-be33-0effa0d0e830
STEP: Creating a pod to test consume configMaps
Apr 17 06:53:58.117: INFO: Waiting up to 5m0s for pod "pod-configmaps-1a2be12d-420c-11e8-be33-0effa0d0e830" in namespace "e2e-tests-configmap-994wd" to be "success or failure"
Apr 17 06:53:58.136: INFO: Pod "pod-configmaps-1a2be12d-420c-11e8-be33-0effa0d0e830": Phase="Pending", Reason="", readiness=false. Elapsed: 19.460951ms
Apr 17 06:54:00.152: INFO: Pod "pod-configmaps-1a2be12d-420c-11e8-be33-0effa0d0e830": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035857865s
Apr 17 06:54:02.169: INFO: Pod "pod-configmaps-1a2be12d-420c-11e8-be33-0effa0d0e830": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052321456s
STEP: Saw pod success
Apr 17 06:54:02.169: INFO: Pod "pod-configmaps-1a2be12d-420c-11e8-be33-0effa0d0e830" satisfied condition "success or failure"
Apr 17 06:54:02.184: INFO: Trying to get logs from node ci-prtest-060e08d-14-ig-n-bpk1 pod pod-configmaps-1a2be12d-420c-11e8-be33-0effa0d0e830 container configmap-volume-test: <nil>
STEP: delete the pod
Apr 17 06:54:02.233: INFO: Waiting for pod pod-configmaps-1a2be12d-420c-11e8-be33-0effa0d0e830 to disappear
Apr 17 06:54:02.248: INFO: Pod pod-configmaps-1a2be12d-420c-11e8-be33-0effa0d0e830 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 06:54:02.248: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-994wd" for this suite.
Apr 17 06:54:08.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 17 06:54:09.724: INFO: namespace: e2e-tests-configmap-994wd, resource: bindings, ignored listing per whitelist
Apr 17 06:54:09.755: INFO: namespace e2e-tests-configmap-994wd deletion completed in 7.477232979s

• [SLOW TEST:12.431 seconds]
[sig-storage] ConfigMap
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
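Note: the pattern being verified here is that a ConfigMap's keys appear as files under the volume's mount path. A minimal reproduction with hypothetical names (busybox assumed):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config                      # hypothetical name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/config/data-1"]   # should print "value-1"
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: demo-config
EOF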
[sig-storage] Secrets 
  optional updates should be reflected in volume  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [sig-storage] Secrets
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Apr 17 06:54:09.755: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
Apr 17 06:54:10.523: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node
STEP: Creating secret with name s-test-opt-del-2194c8e8-420c-11e8-be33-0effa0d0e830
STEP: Creating secret with name s-test-opt-upd-2194c94a-420c-11e8-be33-0effa0d0e830
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-2194c8e8-420c-11e8-be33-0effa0d0e830
STEP: Updating secret s-test-opt-upd-2194c94a-420c-11e8-be33-0effa0d0e830
STEP: Creating secret with name s-test-opt-create-2194c974-420c-11e8-be33-0effa0d0e830
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 06:55:27.674: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-ckhrv" for this suite.
Apr 17 06:55:49.750: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 17 06:55:50.871: INFO: namespace: e2e-tests-secrets-ckhrv, resource: bindings, ignored listing per whitelist
Apr 17 06:55:51.134: INFO: namespace e2e-tests-secrets-ckhrv deletion completed in 23.430660238s

• [SLOW TEST:101.379 seconds]
[sig-storage] Secrets
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume  [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
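Note: the "optional" flag is what lets this test delete and late-create secrets without breaking the pod: an optional secret volume mounts empty when the secret is absent, and the kubelet syncs file contents as the secrets change (the long "waiting to observe update in volume" step above is that sync). A sketch of the volume shape, with hypothetical names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: optional-secret-demo             # hypothetical name
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "while true; do ls /etc/secret-volume; sleep 5; done"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: s-test-opt-create      # may not exist yet
      optional: true                     # pod still starts; mount stays empty until the secret appears
EOF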
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [sig-storage] EmptyDir volumes
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Apr 17 06:55:51.134: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating a pod to test emptydir volume type on tmpfs
Apr 17 06:55:51.835: INFO: Waiting up to 5m0s for pod "pod-5df49079-420c-11e8-be33-0effa0d0e830" in namespace "e2e-tests-emptydir-764xd" to be "success or failure"
Apr 17 06:55:51.851: INFO: Pod "pod-5df49079-420c-11e8-be33-0effa0d0e830": Phase="Pending", Reason="", readiness=false. Elapsed: 16.135407ms
Apr 17 06:55:53.867: INFO: Pod "pod-5df49079-420c-11e8-be33-0effa0d0e830": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032712259s
Apr 17 06:55:55.884: INFO: Pod "pod-5df49079-420c-11e8-be33-0effa0d0e830": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04897191s
STEP: Saw pod success
Apr 17 06:55:55.884: INFO: Pod "pod-5df49079-420c-11e8-be33-0effa0d0e830" satisfied condition "success or failure"
Apr 17 06:55:55.898: INFO: Trying to get logs from node ci-prtest-060e08d-14-ig-n-zwgf pod pod-5df49079-420c-11e8-be33-0effa0d0e830 container test-container: <nil>
STEP: delete the pod
Apr 17 06:55:55.943: INFO: Waiting for pod pod-5df49079-420c-11e8-be33-0effa0d0e830 to disappear
Apr 17 06:55:55.958: INFO: Pod pod-5df49079-420c-11e8-be33-0effa0d0e830 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 06:55:55.958: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-764xd" for this suite.
Apr 17 06:56:02.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 17 06:56:02.685: INFO: namespace: e2e-tests-emptydir-764xd, resource: bindings, ignored listing per whitelist
Apr 17 06:56:03.450: INFO: namespace e2e-tests-emptydir-764xd deletion completed in 7.463198753s

• [SLOW TEST:12.316 seconds]
[sig-storage] EmptyDir volumes
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
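Note: "volume on tmpfs" means an emptyDir with medium Memory; the test asserts both the mount type and the default 0777 mode on the mount point. A condensed sketch (busybox assumed, names hypothetical):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo              # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "mount | grep '/test-volume' && stat -c '%a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                     # backs the volume with tmpfs
EOF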
SSSSSSSSS
------------------------------
[sig-storage] Projected 
  should be consumable from pods in volume with defaultMode set [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
[BeforeEach] [sig-storage] Projected
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Apr 17 06:56:03.450: INFO: >>> kubeConfig: /tmp/cluster-admin.kubeconfig
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:858
[It] should be consumable from pods in volume with defaultMode set [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
STEP: Creating projection with secret that has name projected-secret-test-655326fd-420c-11e8-be33-0effa0d0e830
STEP: Creating a pod to test consume secrets
Apr 17 06:56:04.218: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-65560a79-420c-11e8-be33-0effa0d0e830" in namespace "e2e-tests-projected-bl8js" to be "success or failure"
Apr 17 06:56:04.234: INFO: Pod "pod-projected-secrets-65560a79-420c-11e8-be33-0effa0d0e830": Phase="Pending", Reason="", readiness=false. Elapsed: 15.568114ms
Apr 17 06:56:06.249: INFO: Pod "pod-projected-secrets-65560a79-420c-11e8-be33-0effa0d0e830": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031377119s
Apr 17 06:56:08.266: INFO: Pod "pod-projected-secrets-65560a79-420c-11e8-be33-0effa0d0e830": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047792753s
STEP: Saw pod success
Apr 17 06:56:08.266: INFO: Pod "pod-projected-secrets-65560a79-420c-11e8-be33-0effa0d0e830" satisfied condition "success or failure"
Apr 17 06:56:08.281: INFO: Trying to get logs from node ci-prtest-060e08d-14-ig-n-bpk1 pod pod-projected-secrets-65560a79-420c-11e8-be33-0effa0d0e830 container projected-secret-volume-test: <nil>
STEP: delete the pod
Apr 17 06:56:08.326: INFO: Waiting for pod pod-projected-secrets-65560a79-420c-11e8-be33-0effa0d0e830 to disappear
Apr 17 06:56:08.341: INFO: Pod pod-projected-secrets-65560a79-420c-11e8-be33-0effa0d0e830 no longer exists
[AfterEach] [sig-storage] Projected
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Apr 17 06:56:08.341: INFO: Waiting up to 3m0s for all (but 1) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-bl8js" for this suite.
Apr 17 06:56:14.420: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 17 06:56:15.157: INFO: namespace: e2e-tests-projected-bl8js, resource: bindings, ignored listing per whitelist
Apr 17 06:56:15.882: INFO: namespace e2e-tests-projected-bl8js deletion completed in 7.511815663s

• [SLOW TEST:12.432 seconds]
[sig-storage] Projected
/data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected.go:34
  should be consumable from pods in volume with defaultMode set [Conformance]
  /data/src/github.com/openshift/origin/_output/components/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:674
------------------------------
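Note: defaultMode on a projected volume sets the permission bits on every projected file; this test mounts a secret that way and stats the result. A minimal sketch with hypothetical names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-demo            # hypothetical name
data:
  data-1: dmFsdWUtMQ==                   # base64("value-1")
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-defaultmode-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /etc/projected/data-1"]   # expect 400
    volumeMounts:
    - name: proj
      mountPath: /etc/projected
  volumes:
  - name: proj
    projected:
      defaultMode: 0400                  # applied to each projected file
      sources:
      - secret:
          name: projected-secret-demo
EOF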
SSSSSSSSSS
Apr 17 06:56:15.882: INFO: Running AfterSuite actions on all nodes
Apr 17 06:56:15.882: INFO: Running AfterSuite actions on node 1
Apr 17 06:56:15.882: INFO: Dumping logs locally to: /data/src/github.com/openshift/origin/_output/scripts/conformance-k8s/artifacts
Apr 17 06:56:15.882: INFO: Error running cluster/log-dump/log-dump.sh: fork/exec ../../cluster/log-dump/log-dump.sh: no such file or directory

Ran 161 of 853 Specs in 4646.018 seconds
SUCCESS! -- 161 Passed | 0 Failed | 0 Pending | 692 Skipped PASS

Run complete, results in /data/src/github.com/openshift/origin/_output/scripts/conformance-k8s/artifacts
+ gather
+ set +e
++ pwd
+ export PATH=/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/origin/.local/bin:/home/origin/bin
+ PATH=/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/origin/.local/bin:/home/origin/bin
+ oc get nodes --template '{{ range .items }}{{ .metadata.name }}{{ "\n" }}{{ end }}'
+ xargs -L 1 -I X bash -c 'oc get --raw /api/v1/nodes/X/proxy/metrics > /tmp/artifacts/X.metrics' ''
+ oc get --raw /metrics
+ set -e
+ set +o xtrace
########## FINISHED STAGE: SUCCESS: RUN EXTENDED TESTS [01h 25m 33s] ##########
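Note: the gather function above pulls Prometheus metrics for every node through the apiserver's node proxy, plus the master's own /metrics endpoint. The core pattern, standalone:

# One metrics file per node, fetched via the apiserver node proxy:
oc get nodes --template '{{ range .items }}{{ .metadata.name }}{{ "\n" }}{{ end }}' \
  | xargs -L 1 -I X bash -c 'oc get --raw /api/v1/nodes/X/proxy/metrics > /tmp/artifacts/X.metrics'
# Plus the master API server's own metrics:
oc get --raw /metrics > /tmp/artifacts/master.metrics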
[PostBuildScript] - Execution post build scripts.
[workspace] $ /bin/bash /tmp/jenkins7315487243882710698.sh
########## STARTING STAGE: DOWNLOAD ARTIFACTS FROM THE REMOTE HOST ##########
+ [[ -s /var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/activate ]]
+ source /var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2ac8264613c881a254c455aa448c414c3f984c4c
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2ac8264613c881a254c455aa448c414c3f984c4c
++ export PATH=/var/lib/jenkins/origin-ci-tool/2ac8264613c881a254c455aa448c414c3f984c4c/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/2ac8264613c881a254c455aa448c414c3f984c4c/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/.config
+ trap 'exit 0' EXIT
++ pwd
+ ARTIFACT_DIR=/var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/artifacts/gathered
+ rm -rf /var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/artifacts/gathered
+ mkdir -p /var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/artifacts/gathered
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo stat /data/src/github.com/openshift/origin/_output/scripts
  File: ‘/data/src/github.com/openshift/origin/_output/scripts’
  Size: 87        	Blocks: 0          IO Block: 4096   directory
Device: ca02h/51714d	Inode: 75554185    Links: 6
Access: (2755/drwxr-sr-x)  Uid: ( 1001/  origin)   Gid: ( 1003/origin-git)
Context: unconfined_u:object_r:svirt_sandbox_file_t:s0
Access: 2018-04-17 04:39:20.503266285 +0000
Modify: 2018-04-17 05:31:48.436320627 +0000
Change: 2018-04-17 05:31:48.436320627 +0000
 Birth: -
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo chmod -R o+rX /data/src/github.com/openshift/origin/_output/scripts
+ scp -r -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel:/data/src/github.com/openshift/origin/_output/scripts /var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/artifacts/gathered
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo stat /tmp/artifacts
  File: ‘/tmp/artifacts’
  Size: 160       	Blocks: 0          IO Block: 4096   directory
Device: 26h/38d	Inode: 210566      Links: 3
Access: (0755/drwxr-xr-x)  Uid: ( 1001/  origin)   Gid: ( 1002/  docker)
Context: unconfined_u:object_r:user_tmp_t:s0
Access: 2018-04-17 05:30:44.993244508 +0000
Modify: 2018-04-17 06:56:17.128266801 +0000
Change: 2018-04-17 06:56:17.128266801 +0000
 Birth: -
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo chmod -R o+rX /tmp/artifacts
+ scp -r -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel:/tmp/artifacts /var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/artifacts/gathered
+ tree /var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/artifacts/gathered
/var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/artifacts/gathered
├── artifacts
│   ├── ci-prtest-060e08d-14-ig-m-tb3m.metrics
│   ├── ci-prtest-060e08d-14-ig-n-9zw9.metrics
│   ├── ci-prtest-060e08d-14-ig-n-bpk1.metrics
│   ├── ci-prtest-060e08d-14-ig-n-zwgf.metrics
│   ├── junit
│   └── master.metrics
└── scripts
    ├── build-base-images
    │   ├── artifacts
    │   ├── logs
    │   └── openshift.local.home
    ├── conformance-k8s
    │   ├── artifacts
    │   │   ├── e2e.log
    │   │   ├── junit_01.xml
    │   │   ├── nethealth.txt
    │   │   ├── README.md
    │   │   └── version.txt
    │   ├── logs
    │   │   └── scripts.log
    │   └── openshift.local.home
    ├── push-release
    │   ├── artifacts
    │   ├── logs
    │   │   └── scripts.log
    │   └── openshift.local.home
    └── shell
        ├── artifacts
        ├── logs
        │   ├── 39aef6e3f6e72620afd8cc4e8b4a0028a9c90ce9c8fa6e3af4d1ab591d89e67f.json
        │   ├── 3e8757575fe8b35259ce737d3e3f1f1473b71c1d8a204f6bd76f0a9e88584d5d.json
        │   ├── 7ea49bb84217de64cdc5142e7ea63138981334e2b9a93b085105353e9a2f622e.json
        │   └── scripts.log
        └── openshift.local.home

19 directories, 16 files
+ exit 0
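Note: each download above follows the same stat / chmod / scp sequence against the remote host named in the oct ssh_config: verify the directory exists, make it world-readable, then copy it into the gathered-artifacts tree. Condensed, with paths from this run:

ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel \
  sudo chmod -R o+rX /data/src/github.com/openshift/origin/_output/scripts
scp -r -F ./.config/origin-ci-tool/inventory/.ssh_config \
  openshiftdevel:/data/src/github.com/openshift/origin/_output/scripts \
  "$ARTIFACT_DIR"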
[workspace] $ /bin/bash /tmp/jenkins6299816325849391488.sh
########## STARTING STAGE: GENERATE ARTIFACTS FROM THE REMOTE HOST ##########
+ [[ -s /var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/activate ]]
+ source /var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2ac8264613c881a254c455aa448c414c3f984c4c
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2ac8264613c881a254c455aa448c414c3f984c4c
++ export PATH=/var/lib/jenkins/origin-ci-tool/2ac8264613c881a254c455aa448c414c3f984c4c/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/2ac8264613c881a254c455aa448c414c3f984c4c/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/.config
+ trap 'exit 0' EXIT
++ pwd
+ ARTIFACT_DIR=/var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/artifacts/generated
+ rm -rf /var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/artifacts/generated
+ mkdir /var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/artifacts/generated
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo docker version && sudo docker info && sudo docker images && sudo docker ps -a 2>&1'
  WARNING: You're not using the default seccomp profile
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo cat /etc/sysconfig/docker /etc/sysconfig/docker-network /etc/sysconfig/docker-storage /etc/sysconfig/docker-storage-setup /etc/systemd/system/docker.service 2>&1'
+ true
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo find /var/lib/docker/containers -name *.log | sudo xargs tail -vn +1 2>&1'
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'oc get --raw /metrics --server=https://$( uname --nodename ):10250 --config=/etc/origin/master/admin.kubeconfig 2>&1'
+ true
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo ausearch -m AVC -m SELINUX_ERR -m USER_AVC 2>&1'
+ true
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'oc get --raw /metrics --config=/etc/origin/master/admin.kubeconfig 2>&1'
+ true
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo df -T -h && sudo pvs && sudo vgs && sudo lvs && sudo findmnt --all 2>&1'
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo yum list installed 2>&1'
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo journalctl --dmesg --no-pager --all --lines=all 2>&1'
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo journalctl _PID=1 --no-pager --all --lines=all 2>&1'
+ tree /var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/artifacts/generated
/var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/artifacts/generated
├── avc_denials.log
├── containers.log
├── dmesg.log
├── docker.config
├── docker.info
├── filesystem.info
├── installed_packages.log
├── master-metrics.log
├── node-metrics.log
└── pid1.journal

0 directories, 10 files
+ exit 0
[workspace] $ /bin/bash /tmp/jenkins8059286428486078955.sh
########## STARTING STAGE: FETCH SYSTEMD JOURNALS FROM THE REMOTE HOST ##########
+ [[ -s /var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/activate ]]
+ source /var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2ac8264613c881a254c455aa448c414c3f984c4c
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2ac8264613c881a254c455aa448c414c3f984c4c
++ export PATH=/var/lib/jenkins/origin-ci-tool/2ac8264613c881a254c455aa448c414c3f984c4c/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/2ac8264613c881a254c455aa448c414c3f984c4c/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/.config
+ trap 'exit 0' EXIT
++ pwd
+ ARTIFACT_DIR=/var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/artifacts/journals
+ rm -rf /var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/artifacts/journals
+ mkdir /var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/artifacts/journals
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit docker.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit dnsmasq.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit systemd-journald.service --no-pager --all --lines=all
+ tree /var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/artifacts/journals
/var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/artifacts/journals
├── dnsmasq.service
├── docker.service
└── systemd-journald.service

0 directories, 3 files
+ exit 0
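Note: each journal is captured with one journalctl invocation per unit; the redirect into $ARTIFACT_DIR happens on the Jenkins side of the ssh session, which is why xtrace does not echo it. The pattern for one unit:

ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel \
  sudo journalctl --unit docker.service --no-pager --all --lines=all \
  > "$ARTIFACT_DIR/docker.service"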
[workspace] $ /bin/bash /tmp/jenkins2120741069798750167.sh
########## STARTING STAGE: ASSEMBLE GCS OUTPUT ##########
+ [[ -s /var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/activate ]]
+ source /var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2ac8264613c881a254c455aa448c414c3f984c4c
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2ac8264613c881a254c455aa448c414c3f984c4c
++ export PATH=/var/lib/jenkins/origin-ci-tool/2ac8264613c881a254c455aa448c414c3f984c4c/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/2ac8264613c881a254c455aa448c414c3f984c4c/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/.config
+ trap 'exit 0' EXIT
+ mkdir -p gcs/artifacts gcs/artifacts/generated gcs/artifacts/journals gcs/artifacts/gathered
++ python -c 'import json; import urllib; print json.load(urllib.urlopen('\''https://ci.openshift.redhat.com/jenkins/job/test_pull_request_origin_extended_conformance_k8s/14/api/json'\''))['\''result'\'']'
+ result=SUCCESS
+ cat
++ date +%s
+ cat /var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/builds/14/log
+ cp artifacts/generated/avc_denials.log artifacts/generated/containers.log artifacts/generated/dmesg.log artifacts/generated/docker.config artifacts/generated/docker.info artifacts/generated/filesystem.info artifacts/generated/installed_packages.log artifacts/generated/master-metrics.log artifacts/generated/node-metrics.log artifacts/generated/pid1.journal gcs/artifacts/generated/
+ cp artifacts/journals/dnsmasq.service artifacts/journals/docker.service artifacts/journals/systemd-journald.service gcs/artifacts/journals/
+ cp -r artifacts/gathered/artifacts artifacts/gathered/scripts gcs/artifacts/
++ pwd
+ scp -F ./.config/origin-ci-tool/inventory/.ssh_config -r /var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/gcs openshiftdevel:/data
+ scp -F ./.config/origin-ci-tool/inventory/.ssh_config /var/lib/jenkins/.config/gcloud/gcs-publisher-credentials.json openshiftdevel:/data/credentials.json
+ exit 0
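Note: the assemble stage queries the Jenkins JSON API for the build's result so it can be embedded in finished.json. The python2 one-liner above is equivalent to this curl/jq form (assumption: jq is available, as it is later in the push stage):

result="$(curl -s 'https://ci.openshift.redhat.com/jenkins/job/test_pull_request_origin_extended_conformance_k8s/14/api/json' | jq -r .result)"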
[workspace] $ /bin/bash /tmp/jenkins49945402331297138.sh
########## STARTING STAGE: PUSH THE ARTIFACTS AND METADATA ##########
+ [[ -s /var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/activate ]]
+ source /var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2ac8264613c881a254c455aa448c414c3f984c4c
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2ac8264613c881a254c455aa448c414c3f984c4c
++ export PATH=/var/lib/jenkins/origin-ci-tool/2ac8264613c881a254c455aa448c414c3f984c4c/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/2ac8264613c881a254c455aa448c414c3f984c4c/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/.config
++ mktemp
+ script=/tmp/tmp.rxGNamWwxP
+ cat
+ chmod +x /tmp/tmp.rxGNamWwxP
+ scp -F ./.config/origin-ci-tool/inventory/.ssh_config /tmp/tmp.rxGNamWwxP openshiftdevel:/tmp/tmp.rxGNamWwxP
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config -t openshiftdevel 'bash -l -c "timeout 300 /tmp/tmp.rxGNamWwxP"'
+ cd /home/origin
+ trap 'exit 0' EXIT
+ [[ -n {"type":"presubmit","job":"test_pull_request_origin_extended_conformance_k8s","buildid":"a17dc755-41f8-11e8-a9a6-0a58ac100267","refs":{"org":"openshift","repo":"origin","base_ref":"master","base_sha":"221863fa39645925acd01f43ebd815aeae453160","pulls":[{"number":19381,"author":"smarterclayton","sha":"fd46200a0f20bfbe1f6a93537ca58fd13104ec52"}]}} ]]
++ jq --compact-output .buildid
+ [[ "a17dc755-41f8-11e8-a9a6-0a58ac100267" =~ ^"[0-9]+"$ ]]
Using BUILD_NUMBER
+ echo 'Using BUILD_NUMBER'
++ jq --compact-output '.buildid |= "14"'
+ JOB_SPEC='{"type":"presubmit","job":"test_pull_request_origin_extended_conformance_k8s","buildid":"14","refs":{"org":"openshift","repo":"origin","base_ref":"master","base_sha":"221863fa39645925acd01f43ebd815aeae453160","pulls":[{"number":19381,"author":"smarterclayton","sha":"fd46200a0f20bfbe1f6a93537ca58fd13104ec52"}]}}'
+ docker run -e 'JOB_SPEC={"type":"presubmit","job":"test_pull_request_origin_extended_conformance_k8s","buildid":"14","refs":{"org":"openshift","repo":"origin","base_ref":"master","base_sha":"221863fa39645925acd01f43ebd815aeae453160","pulls":[{"number":19381,"author":"smarterclayton","sha":"fd46200a0f20bfbe1f6a93537ca58fd13104ec52"}]}}' -v /data:/data:z registry.svc.ci.openshift.org/ci/gcsupload:latest --dry-run=false --gcs-bucket=origin-ci-test --gcs-credentials-file=/data/credentials.json --path-strategy=single --default-org=openshift --default-repo=origin /data/gcs/artifacts /data/gcs/build-log.txt /data/gcs/finished.json
Unable to find image 'registry.svc.ci.openshift.org/ci/gcsupload:latest' locally
Trying to pull repository registry.svc.ci.openshift.org/ci/gcsupload ... 
latest: Pulling from registry.svc.ci.openshift.org/ci/gcsupload
6d987f6f4279: Already exists
4cccebe844ee: Already exists
deb4d9262c8e: Pulling fs layer
deb4d9262c8e: Download complete
deb4d9262c8e: Pull complete
Digest: sha256:937cfc74efbe5f99ac6b54a8837ce0c1ba72f9f12cf4bf484c6fb7323727f623
Status: Downloaded newer image for registry.svc.ci.openshift.org/ci/gcsupload:latest
{"component":"gcsupload","level":"info","msg":"Gathering artifacts from artifact directory: /data/gcs/artifacts","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/artifacts/ci-prtest-060e08d-14-ig-m-tb3m.metrics in artifact directory. Uploading as artifacts/artifacts/ci-prtest-060e08d-14-ig-m-tb3m.metrics\n","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/artifacts/ci-prtest-060e08d-14-ig-n-9zw9.metrics in artifact directory. Uploading as artifacts/artifacts/ci-prtest-060e08d-14-ig-n-9zw9.metrics\n","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/artifacts/ci-prtest-060e08d-14-ig-n-bpk1.metrics in artifact directory. Uploading as artifacts/artifacts/ci-prtest-060e08d-14-ig-n-bpk1.metrics\n","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/artifacts/ci-prtest-060e08d-14-ig-n-zwgf.metrics in artifact directory. Uploading as artifacts/artifacts/ci-prtest-060e08d-14-ig-n-zwgf.metrics\n","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/artifacts/master.metrics in artifact directory. Uploading as artifacts/artifacts/master.metrics\n","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/generated/avc_denials.log in artifact directory. Uploading as artifacts/generated/avc_denials.log\n","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/generated/containers.log in artifact directory. Uploading as artifacts/generated/containers.log\n","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/generated/dmesg.log in artifact directory. Uploading as artifacts/generated/dmesg.log\n","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/generated/docker.config in artifact directory. Uploading as artifacts/generated/docker.config\n","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/generated/docker.info in artifact directory. Uploading as artifacts/generated/docker.info\n","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/generated/filesystem.info in artifact directory. Uploading as artifacts/generated/filesystem.info\n","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/generated/installed_packages.log in artifact directory. Uploading as artifacts/generated/installed_packages.log\n","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/generated/master-metrics.log in artifact directory. Uploading as artifacts/generated/master-metrics.log\n","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/generated/node-metrics.log in artifact directory. Uploading as artifacts/generated/node-metrics.log\n","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/generated/pid1.journal in artifact directory. Uploading as artifacts/generated/pid1.journal\n","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/journals/dnsmasq.service in artifact directory. Uploading as artifacts/journals/dnsmasq.service\n","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/journals/docker.service in artifact directory. Uploading as artifacts/journals/docker.service\n","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/journals/systemd-journald.service in artifact directory. Uploading as artifacts/journals/systemd-journald.service\n","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/conformance-k8s/artifacts/README.md in artifact directory. Uploading as artifacts/scripts/conformance-k8s/artifacts/README.md\n","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/conformance-k8s/artifacts/e2e.log in artifact directory. Uploading as artifacts/scripts/conformance-k8s/artifacts/e2e.log\n","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/conformance-k8s/artifacts/junit_01.xml in artifact directory. Uploading as artifacts/scripts/conformance-k8s/artifacts/junit_01.xml\n","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/conformance-k8s/artifacts/nethealth.txt in artifact directory. Uploading as artifacts/scripts/conformance-k8s/artifacts/nethealth.txt\n","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/conformance-k8s/artifacts/version.txt in artifact directory. Uploading as artifacts/scripts/conformance-k8s/artifacts/version.txt\n","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/conformance-k8s/logs/scripts.log in artifact directory. Uploading as artifacts/scripts/conformance-k8s/logs/scripts.log\n","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/push-release/logs/scripts.log in artifact directory. Uploading as artifacts/scripts/push-release/logs/scripts.log\n","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/shell/logs/39aef6e3f6e72620afd8cc4e8b4a0028a9c90ce9c8fa6e3af4d1ab591d89e67f.json in artifact directory. Uploading as artifacts/scripts/shell/logs/39aef6e3f6e72620afd8cc4e8b4a0028a9c90ce9c8fa6e3af4d1ab591d89e67f.json\n","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/shell/logs/3e8757575fe8b35259ce737d3e3f1f1473b71c1d8a204f6bd76f0a9e88584d5d.json in artifact directory. Uploading as artifacts/scripts/shell/logs/3e8757575fe8b35259ce737d3e3f1f1473b71c1d8a204f6bd76f0a9e88584d5d.json\n","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/shell/logs/7ea49bb84217de64cdc5142e7ea63138981334e2b9a93b085105353e9a2f622e.json in artifact directory. Uploading as artifacts/scripts/shell/logs/7ea49bb84217de64cdc5142e7ea63138981334e2b9a93b085105353e9a2f622e.json\n","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/shell/logs/scripts.log in artifact directory. Uploading as artifacts/scripts/shell/logs/scripts.log\n","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/artifacts/ci-prtest-060e08d-14-ig-n-bpk1.metrics","level":"info","msg":"Queued for upload","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/artifacts/ci-prtest-060e08d-14-ig-n-zwgf.metrics","level":"info","msg":"Queued for upload","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/journals/systemd-journald.service","level":"info","msg":"Queued for upload","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/scripts/conformance-k8s/artifacts/junit_01.xml","level":"info","msg":"Queued for upload","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/scripts/conformance-k8s/artifacts/nethealth.txt","level":"info","msg":"Queued for upload","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/scripts/conformance-k8s/artifacts/version.txt","level":"info","msg":"Queued for upload","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/scripts/shell/logs/39aef6e3f6e72620afd8cc4e8b4a0028a9c90ce9c8fa6e3af4d1ab591d89e67f.json","level":"info","msg":"Queued for upload","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","dest":"pr-logs/directory/test_pull_request_origin_extended_conformance_k8s/latest-build.txt","level":"info","msg":"Queued for upload","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/generated/avc_denials.log","level":"info","msg":"Queued for upload","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/generated/docker.info","level":"info","msg":"Queued for upload","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/generated/installed_packages.log","level":"info","msg":"Queued for upload","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/generated/pid1.journal","level":"info","msg":"Queued for upload","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/scripts/shell/logs/3e8757575fe8b35259ce737d3e3f1f1473b71c1d8a204f6bd76f0a9e88584d5d.json","level":"info","msg":"Queued for upload","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/generated/filesystem.info","level":"info","msg":"Queued for upload","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/scripts/shell/logs/scripts.log","level":"info","msg":"Queued for upload","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","dest":"pr-logs/directory/test_pull_request_origin_extended_conformance_k8s/14.txt","level":"info","msg":"Queued for upload","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/latest-build.txt","level":"info","msg":"Queued for upload","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/generated/dmesg.log","level":"info","msg":"Queued for upload","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/generated/master-metrics.log","level":"info","msg":"Queued for upload","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/scripts/conformance-k8s/artifacts/README.md","level":"info","msg":"Queued for upload","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/scripts/shell/logs/7ea49bb84217de64cdc5142e7ea63138981334e2b9a93b085105353e9a2f622e.json","level":"info","msg":"Queued for upload","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/build-log.txt","level":"info","msg":"Queued for upload","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/finished.json","level":"info","msg":"Queued for upload","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/artifacts/ci-prtest-060e08d-14-ig-m-tb3m.metrics","level":"info","msg":"Queued for upload","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/generated/docker.config","level":"info","msg":"Queued for upload","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/journals/dnsmasq.service","level":"info","msg":"Queued for upload","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/journals/docker.service","level":"info","msg":"Queued for upload","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/scripts/conformance-k8s/logs/scripts.log","level":"info","msg":"Queued for upload","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/artifacts/ci-prtest-060e08d-14-ig-n-9zw9.metrics","level":"info","msg":"Queued for upload","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/artifacts/master.metrics","level":"info","msg":"Queued for upload","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/generated/containers.log","level":"info","msg":"Queued for upload","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/scripts/conformance-k8s/artifacts/e2e.log","level":"info","msg":"Queued for upload","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/generated/node-metrics.log","level":"info","msg":"Queued for upload","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/scripts/push-release/logs/scripts.log","level":"info","msg":"Queued for upload","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/scripts/shell/logs/7ea49bb84217de64cdc5142e7ea63138981334e2b9a93b085105353e9a2f622e.json","level":"info","msg":"Finished upload","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/generated/containers.log","level":"info","msg":"Finished upload","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/scripts/push-release/logs/scripts.log","level":"info","msg":"Finished upload","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/scripts/conformance-k8s/artifacts/version.txt","level":"info","msg":"Finished upload","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/generated/installed_packages.log","level":"info","msg":"Finished upload","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/generated/pid1.journal","level":"info","msg":"Finished upload","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/scripts/conformance-k8s/artifacts/README.md","level":"info","msg":"Finished upload","time":"2018-04-17T06:56:36Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/generated/docker.info","level":"info","msg":"Finished upload","time":"2018-04-17T06:56:37Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/finished.json","level":"info","msg":"Finished upload","time":"2018-04-17T06:56:37Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/scripts/shell/logs/3e8757575fe8b35259ce737d3e3f1f1473b71c1d8a204f6bd76f0a9e88584d5d.json","level":"info","msg":"Finished upload","time":"2018-04-17T06:56:37Z"}
{"component":"gcsupload","dest":"pr-logs/directory/test_pull_request_origin_extended_conformance_k8s/latest-build.txt","level":"info","msg":"Finished upload","time":"2018-04-17T06:56:37Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/generated/avc_denials.log","level":"info","msg":"Finished upload","time":"2018-04-17T06:56:37Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/scripts/conformance-k8s/logs/scripts.log","level":"info","msg":"Finished upload","time":"2018-04-17T06:56:37Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/generated/filesystem.info","level":"info","msg":"Finished upload","time":"2018-04-17T06:56:37Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/generated/docker.config","level":"info","msg":"Finished upload","time":"2018-04-17T06:56:37Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/scripts/shell/logs/39aef6e3f6e72620afd8cc4e8b4a0028a9c90ce9c8fa6e3af4d1ab591d89e67f.json","level":"info","msg":"Finished upload","time":"2018-04-17T06:56:37Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/artifacts/ci-prtest-060e08d-14-ig-n-9zw9.metrics","level":"info","msg":"Finished upload","time":"2018-04-17T06:56:37Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/journals/systemd-journald.service","level":"info","msg":"Finished upload","time":"2018-04-17T06:56:37Z"}
{"component":"gcsupload","dest":"pr-logs/directory/test_pull_request_origin_extended_conformance_k8s/14.txt","level":"info","msg":"Finished upload","time":"2018-04-17T06:56:37Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/journals/dnsmasq.service","level":"info","msg":"Finished upload","time":"2018-04-17T06:56:37Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/artifacts/ci-prtest-060e08d-14-ig-n-zwgf.metrics","level":"info","msg":"Finished upload","time":"2018-04-17T06:56:37Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/scripts/conformance-k8s/artifacts/junit_01.xml","level":"info","msg":"Finished upload","time":"2018-04-17T06:56:37Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/generated/node-metrics.log","level":"info","msg":"Finished upload","time":"2018-04-17T06:56:37Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/generated/dmesg.log","level":"info","msg":"Finished upload","time":"2018-04-17T06:56:37Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/artifacts/ci-prtest-060e08d-14-ig-m-tb3m.metrics","level":"info","msg":"Finished upload","time":"2018-04-17T06:56:37Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/scripts/conformance-k8s/artifacts/e2e.log","level":"info","msg":"Finished upload","time":"2018-04-17T06:56:37Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/scripts/conformance-k8s/artifacts/nethealth.txt","level":"info","msg":"Finished upload","time":"2018-04-17T06:56:37Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/scripts/shell/logs/scripts.log","level":"info","msg":"Finished upload","time":"2018-04-17T06:56:37Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/build-log.txt","level":"info","msg":"Finished upload","time":"2018-04-17T06:56:37Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/latest-build.txt","level":"info","msg":"Finished upload","time":"2018-04-17T06:56:37Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/generated/master-metrics.log","level":"info","msg":"Finished upload","time":"2018-04-17T06:56:37Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/journals/docker.service","level":"info","msg":"Finished upload","time":"2018-04-17T06:56:37Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/artifacts/ci-prtest-060e08d-14-ig-n-bpk1.metrics","level":"info","msg":"Finished upload","time":"2018-04-17T06:56:37Z"}
{"component":"gcsupload","dest":"pr-logs/pull/19381/test_pull_request_origin_extended_conformance_k8s/14/artifacts/artifacts/master.metrics","level":"info","msg":"Finished upload","time":"2018-04-17T06:56:37Z"}
{"component":"gcsupload","level":"info","msg":"Finished upload to GCS","time":"2018-04-17T06:56:37Z"}
+ exit 0
+ set +o xtrace
########## FINISHED STAGE: SUCCESS: PUSH THE ARTIFACTS AND METADATA [00h 00m 09s] ##########
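Every gcsupload line in the stage above is a single self-contained JSON record, so the full upload manifest survives in the console log itself. A minimal sketch for recovering it, assuming the log has been saved locally (build.log is a hypothetical filename); fromjson? quietly skips the non-JSON lines:

jq -rR 'fromjson? | select(.component == "gcsupload" and .msg == "Finished upload") | .dest' build.log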
[workspace] $ /bin/bash /tmp/jenkins1235263566538276721.sh
########## STARTING STAGE: GATHER ARTIFACTS FROM TEST CLUSTER ##########
+ [[ -s /var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/activate ]]
+ source /var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2ac8264613c881a254c455aa448c414c3f984c4c
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2ac8264613c881a254c455aa448c414c3f984c4c
++ export PATH=/var/lib/jenkins/origin-ci-tool/2ac8264613c881a254c455aa448c414c3f984c4c/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/2ac8264613c881a254c455aa448c414c3f984c4c/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/.config
+ trap 'exit 0' EXIT
++ pwd
+ base_artifact_dir=/var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/artifacts
+ source ./INSTANCE_PREFIX
++ INSTANCE_PREFIX=prtest-060e08d-14
++ OS_TAG=fd46200
++ OS_PUSH_BASE_REPO=ci-pr-images/prtest-060e08d-14-
++ gcloud compute instances list --regexp '.*prtest-060e08d-14.*' --uri
+ for instance in '$( gcloud compute instances list --regexp ".*${INSTANCE_PREFIX}.*" --uri )'
++ mktemp
+ info=/tmp/tmp.6uFeTKoPgv
+ gcloud compute instances describe https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-m-tb3m --format json
++ jq .name --raw-output /tmp/tmp.6uFeTKoPgv
++ tail -c 5
+ name=tb3m
+ jq '.tags.items | contains(["ocp-master"])' --exit-status /tmp/tmp.6uFeTKoPgv
true
+ artifact_dir=/var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/artifacts/masters/tb3m
+ mkdir -p /var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/artifacts/masters/tb3m /var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/artifacts/masters/tb3m/generated /var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/artifacts/masters/tb3m/journals
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-m-tb3m -- sudo journalctl --unit origin-master.service --no-pager --all --lines=all
Warning: Permanently added 'compute.2745624701651401263' (ECDSA) to the list of known hosts.
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-m-tb3m -- sudo journalctl --unit origin-master-api.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-m-tb3m -- sudo journalctl --unit origin-master-controllers.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-m-tb3m -- sudo journalctl --unit etcd.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-m-tb3m -- oc get --raw /metrics --config=/etc/origin/master/admin.kubeconfig
error: Error loading config file "/etc/origin/master/admin.kubeconfig": open /etc/origin/master/admin.kubeconfig: permission denied
+ true
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-m-tb3m -- sudo journalctl --unit origin-node.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-m-tb3m -- sudo journalctl --unit openvswitch.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-m-tb3m -- sudo journalctl --unit ovs-vswitchd.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-m-tb3m -- sudo journalctl --unit ovsdb-server.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-m-tb3m -- sudo journalctl --unit docker.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-m-tb3m -- sudo journalctl --unit dnsmasq.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-m-tb3m -- sudo journalctl --unit systemd-journald.service --no-pager --all --lines=all
++ uname --nodename
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-m-tb3m -- oc get --raw /metrics --server=https://ip-172-18-8-64.ec2.internal:10250
Unable to connect to the server: dial tcp: lookup ip-172-18-8-64.ec2.internal on 10.142.0.5:53: no such host
+ true
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-m-tb3m -- 'sudo docker version && sudo docker info && sudo docker images && sudo docker ps -a'
  WARNING: You're not using the default seccomp profile
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-m-tb3m -- sudo yum history info origin
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-m-tb3m -- 'sudo df -h && sudo pvs && sudo vgs && sudo lvs'
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-m-tb3m -- sudo yum list installed
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-m-tb3m -- sudo ausearch -m AVC -m SELINUX_ERR -m USER_AVC
<no matches>
+ true
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-m-tb3m -- sudo journalctl _PID=1 --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-m-tb3m -- 'sudo find /var/lib/docker/containers -name *.log | sudo xargs tail -vn +1'
+ for instance in '$( gcloud compute instances list --regexp ".*${INSTANCE_PREFIX}.*" --uri )'
++ mktemp
+ info=/tmp/tmp.ABneNVsfor
+ gcloud compute instances describe https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-n-9zw9 --format json
++ jq .name --raw-output /tmp/tmp.ABneNVsfor
++ tail -c 5
+ name=9zw9
+ jq '.tags.items | contains(["ocp-master"])' --exit-status /tmp/tmp.ABneNVsfor
false
+ jq '.tags.items | contains(["ocp-node"])' --exit-status /tmp/tmp.ABneNVsfor
true
+ artifact_dir=/var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/artifacts/nodes/9zw9
+ mkdir -p /var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/artifacts/nodes/9zw9 /var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/artifacts/nodes/9zw9/generated /var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/artifacts/nodes/9zw9/journals
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-n-9zw9 -- sudo journalctl --unit origin-node.service --no-pager --all --lines=all
Warning: Permanently added 'compute.7976505224451340881' (ECDSA) to the list of known hosts.
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-n-9zw9 -- sudo journalctl --unit openvswitch.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-n-9zw9 -- sudo journalctl --unit ovs-vswitchd.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-n-9zw9 -- sudo journalctl --unit ovsdb-server.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-n-9zw9 -- sudo journalctl --unit docker.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-n-9zw9 -- sudo journalctl --unit dnsmasq.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-n-9zw9 -- sudo journalctl --unit systemd-journald.service --no-pager --all --lines=all
++ uname --nodename
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-n-9zw9 -- oc get --raw /metrics --server=https://ip-172-18-8-64.ec2.internal:10250
Unable to connect to the server: dial tcp: lookup ip-172-18-8-64.ec2.internal on 10.142.0.3:53: no such host
+ true
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-n-9zw9 -- 'sudo docker version && sudo docker info && sudo docker images && sudo docker ps -a'
  WARNING: You're not using the default seccomp profile
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-n-9zw9 -- sudo yum history info origin
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-n-9zw9 -- 'sudo df -h && sudo pvs && sudo vgs && sudo lvs'
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-n-9zw9 -- sudo yum list installed
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-n-9zw9 -- sudo ausearch -m AVC -m SELINUX_ERR -m USER_AVC
<no matches>
+ true
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-n-9zw9 -- sudo journalctl _PID=1 --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-n-9zw9 -- 'sudo find /var/lib/docker/containers -name *.log | sudo xargs tail -vn +1'
+ for instance in '$( gcloud compute instances list --regexp ".*${INSTANCE_PREFIX}.*" --uri )'
++ mktemp
+ info=/tmp/tmp.KDvKOnoJe7
+ gcloud compute instances describe https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-n-bpk1 --format json
++ jq .name --raw-output /tmp/tmp.KDvKOnoJe7
++ tail -c 5
+ name=bpk1
+ jq '.tags.items | contains(["ocp-master"])' --exit-status /tmp/tmp.KDvKOnoJe7
false
+ jq '.tags.items | contains(["ocp-node"])' --exit-status /tmp/tmp.KDvKOnoJe7
true
+ artifact_dir=/var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/artifacts/nodes/bpk1
+ mkdir -p /var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/artifacts/nodes/bpk1 /var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/artifacts/nodes/bpk1/generated /var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/artifacts/nodes/bpk1/journals
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-n-bpk1 -- sudo journalctl --unit origin-node.service --no-pager --all --lines=all
Warning: Permanently added 'compute.6758315631582919249' (ECDSA) to the list of known hosts.
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-n-bpk1 -- sudo journalctl --unit openvswitch.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-n-bpk1 -- sudo journalctl --unit ovs-vswitchd.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-n-bpk1 -- sudo journalctl --unit ovsdb-server.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-n-bpk1 -- sudo journalctl --unit docker.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-n-bpk1 -- sudo journalctl --unit dnsmasq.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-n-bpk1 -- sudo journalctl --unit systemd-journald.service --no-pager --all --lines=all
++ uname --nodename
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-n-bpk1 -- oc get --raw /metrics --server=https://ip-172-18-8-64.ec2.internal:10250
Unable to connect to the server: dial tcp: lookup ip-172-18-8-64.ec2.internal on 10.142.0.2:53: no such host
+ true
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-n-bpk1 -- 'sudo docker version && sudo docker info && sudo docker images && sudo docker ps -a'
  WARNING: You're not using the default seccomp profile
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-n-bpk1 -- sudo yum history info origin
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-n-bpk1 -- 'sudo df -h && sudo pvs && sudo vgs && sudo lvs'
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-n-bpk1 -- sudo yum list installed
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-n-bpk1 -- sudo ausearch -m AVC -m SELINUX_ERR -m USER_AVC
<no matches>
+ true
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-n-bpk1 -- sudo journalctl _PID=1 --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-n-bpk1 -- 'sudo find /var/lib/docker/containers -name *.log | sudo xargs tail -vn +1'
+ for instance in '$( gcloud compute instances list --regexp ".*${INSTANCE_PREFIX}.*" --uri )'
++ mktemp
+ info=/tmp/tmp.cjVTHDuR6G
+ gcloud compute instances describe https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-n-zwgf --format json
++ jq .name --raw-output /tmp/tmp.cjVTHDuR6G
++ tail -c 5
+ name=zwgf
+ jq '.tags.items | contains(["ocp-master"])' --exit-status /tmp/tmp.cjVTHDuR6G
false
+ jq '.tags.items | contains(["ocp-node"])' --exit-status /tmp/tmp.cjVTHDuR6G
true
+ artifact_dir=/var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/artifacts/nodes/zwgf
+ mkdir -p /var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/artifacts/nodes/zwgf /var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/artifacts/nodes/zwgf/generated /var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/artifacts/nodes/zwgf/journals
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-n-zwgf -- sudo journalctl --unit origin-node.service --no-pager --all --lines=all
Warning: Permanently added 'compute.2787963737636092497' (ECDSA) to the list of known hosts.
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-n-zwgf -- sudo journalctl --unit openvswitch.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-n-zwgf -- sudo journalctl --unit ovs-vswitchd.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-n-zwgf -- sudo journalctl --unit ovsdb-server.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-n-zwgf -- sudo journalctl --unit docker.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-n-zwgf -- sudo journalctl --unit dnsmasq.service --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-n-zwgf -- sudo journalctl --unit systemd-journald.service --no-pager --all --lines=all
++ uname --nodename
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-n-zwgf -- oc get --raw /metrics --server=https://ip-172-18-8-64.ec2.internal:10250
Unable to connect to the server: dial tcp: lookup ip-172-18-8-64.ec2.internal on 10.142.0.4:53: no such host
+ true
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-n-zwgf -- 'sudo docker version && sudo docker info && sudo docker images && sudo docker ps -a'
  WARNING: You're not using the default seccomp profile
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-n-zwgf -- sudo yum history info origin
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-n-zwgf -- 'sudo df -h && sudo pvs && sudo vgs && sudo lvs'
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-n-zwgf -- sudo yum list installed
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-n-zwgf -- sudo ausearch -m AVC -m SELINUX_ERR -m USER_AVC
<no matches>
+ true
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-n-zwgf -- sudo journalctl _PID=1 --no-pager --all --lines=all
+ gcloud compute ssh https://www.googleapis.com/compute/v1/projects/openshift-gce-devel-ci/zones/us-east1-c/instances/ci-prtest-060e08d-14-ig-n-zwgf -- 'sudo find /var/lib/docker/containers -name *.log | sudo xargs tail -vn +1'
+ exit 0
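The gather stage above repeats the same per-instance pattern four times: list instances matching the prefix, classify each by its GCE tags, then pull journals and diagnostics over SSH. A condensed sketch of that loop, reusing the gcloud/jq calls from the trace; the redirections into ${artifact_dir} are an assumption (xtrace does not show redirects), the journal units and diagnostics are trimmed for brevity, and the find pattern is quoted here unlike the unquoted *.log glob in the trace:

for instance in $( gcloud compute instances list --regexp ".*${INSTANCE_PREFIX}.*" --uri ); do
  info="$( mktemp )"
  gcloud compute instances describe "${instance}" --format json > "${info}"
  # instance names end in a four-character suffix, e.g. tb3m
  name="$( jq .name --raw-output "${info}" | tail -c 5 )"
  if jq '.tags.items | contains(["ocp-master"])' --exit-status "${info}" >/dev/null; then
    artifact_dir="${base_artifact_dir}/masters/${name}"
  else
    artifact_dir="${base_artifact_dir}/nodes/${name}"
  fi
  mkdir -p "${artifact_dir}/generated" "${artifact_dir}/journals"
  for unit in origin-node.service docker.service; do
    gcloud compute ssh "${instance}" -- sudo journalctl --unit "${unit}" \
      --no-pager --all --lines=all > "${artifact_dir}/journals/${unit}"
  done
  # uname runs where this script runs (apparently the Jenkins host, an EC2
  # machine), which is why the trace shows EC2 hostnames that GCE DNS cannot
  # resolve; the failures are tolerated, as the "+ true" lines show
  gcloud compute ssh "${instance}" -- oc get --raw /metrics \
    --server="https://$( uname --nodename ):10250" || true
  gcloud compute ssh "${instance}" -- \
    'sudo find /var/lib/docker/containers -name "*.log" | sudo xargs tail -vn +1' || true
done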
[workspace] $ /bin/bash /tmp/jenkins58864394237408106.sh
########## STARTING STAGE: DEPROVISION TEST CLUSTER ##########
+ [[ -s /var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/activate ]]
+ source /var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2ac8264613c881a254c455aa448c414c3f984c4c
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2ac8264613c881a254c455aa448c414c3f984c4c
++ export PATH=/var/lib/jenkins/origin-ci-tool/2ac8264613c881a254c455aa448c414c3f984c4c/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/2ac8264613c881a254c455aa448c414c3f984c4c/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/.config
++ mktemp
+ script=/tmp/tmp.M7hZTjWzwq
+ cat
+ chmod +x /tmp/tmp.M7hZTjWzwq
+ scp -F ./.config/origin-ci-tool/inventory/.ssh_config /tmp/tmp.M7hZTjWzwq openshiftdevel:/tmp/tmp.M7hZTjWzwq
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config -t openshiftdevel 'bash -l -c "timeout 900 /tmp/tmp.M7hZTjWzwq"'
+ cd /data/src/github.com/openshift/release
+ trap 'exit 0' EXIT
+ cd cluster/test-deploy/
+ make down WHAT=prtest-060e08d-14 PROFILE=gcp
cd gcp/ && ../../bin/ansible.sh ansible-playbook playbooks/gcp/openshift-cluster/deprovision.yml
Activated service account credentials for: [jenkins-ci-provisioner@openshift-gce-devel.iam.gserviceaccount.com]

PLAY [Terminate running cluster and remove all supporting resources in GCE] ****

TASK [Gathering Facts] *********************************************************
Tuesday 17 April 2018  06:58:57 +0000 (0:00:00.065)       0:00:00.065 ********* 
ok: [localhost]

TASK [include_role] ************************************************************
Tuesday 17 April 2018  06:59:03 +0000 (0:00:06.279)       0:00:06.344 ********* 

TASK [openshift_gcp : Templatize DNS script] ***********************************
Tuesday 17 April 2018  06:59:03 +0000 (0:00:00.101)       0:00:06.446 ********* 
changed: [localhost]

TASK [openshift_gcp : Templatize provision script] *****************************
Tuesday 17 April 2018  06:59:04 +0000 (0:00:00.536)       0:00:06.983 ********* 
changed: [localhost]

TASK [openshift_gcp : Templatize de-provision script] **************************
Tuesday 17 April 2018  06:59:04 +0000 (0:00:00.355)       0:00:07.338 ********* 
changed: [localhost]

TASK [openshift_gcp : Provision GCP DNS domain] ********************************
Tuesday 17 April 2018  06:59:04 +0000 (0:00:00.323)       0:00:07.662 ********* 
skipping: [localhost]

TASK [openshift_gcp : Ensure that DNS resolves to the hosted zone] *************
Tuesday 17 April 2018  06:59:04 +0000 (0:00:00.028)       0:00:07.691 ********* 
skipping: [localhost]

TASK [openshift_gcp : Templatize SSH key provision script] *********************
Tuesday 17 April 2018  06:59:04 +0000 (0:00:00.028)       0:00:07.719 ********* 
changed: [localhost]

TASK [openshift_gcp : Provision GCP SSH key resources] *************************
Tuesday 17 April 2018  06:59:05 +0000 (0:00:00.305)       0:00:08.024 ********* 
skipping: [localhost]

TASK [openshift_gcp : Provision GCP resources] *********************************
Tuesday 17 April 2018  06:59:05 +0000 (0:00:00.030)       0:00:08.055 ********* 
skipping: [localhost]

TASK [openshift_gcp : De-provision GCP resources] ******************************
Tuesday 17 April 2018  06:59:05 +0000 (0:00:00.030)       0:00:08.086 ********* 
changed: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=6    changed=5    unreachable=0    failed=0   

Tuesday 17 April 2018  07:02:29 +0000 (0:03:24.161)       0:03:32.248 ********* 
=============================================================================== 
openshift_gcp : De-provision GCP resources ---------------------------- 204.16s
Gathering Facts --------------------------------------------------------- 6.28s
openshift_gcp : Templatize DNS script ----------------------------------- 0.54s
openshift_gcp : Templatize provision script ----------------------------- 0.36s
openshift_gcp : Templatize de-provision script -------------------------- 0.32s
openshift_gcp : Templatize SSH key provision script --------------------- 0.31s
include_role ------------------------------------------------------------ 0.10s
openshift_gcp : Provision GCP resources --------------------------------- 0.03s
openshift_gcp : Provision GCP SSH key resources ------------------------- 0.03s
openshift_gcp : Provision GCP DNS domain -------------------------------- 0.03s
openshift_gcp : Ensure that DNS resolves to the hosted zone ------------- 0.03s
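The per-task timing block above is produced by Ansible's profile_tasks callback plugin; a minimal ansible.cfg sketch that would enable it (assuming Ansible 2.x-era configuration keys, as used here):

[defaults]
callback_whitelist = profile_tasks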
+ exit 0
+ set +o xtrace
########## FINISHED STAGE: SUCCESS: DEPROVISION TEST CLUSTER [00h 03m 40s] ##########
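The deprovision stage follows the pattern visible at its start in the trace: write a throwaway script, copy it to the remote build host, and run it under a timeout. A condensed sketch, assuming the same origin-ci-tool SSH config and the openshiftdevel host alias shown above:

script="$( mktemp )"
cat > "${script}" <<'EOF'
# the original script traps EXIT so this stage can never fail the build
trap 'exit 0' EXIT
cd /data/src/github.com/openshift/release/cluster/test-deploy/
# make expands to: cd gcp/ && ../../bin/ansible.sh ansible-playbook playbooks/gcp/openshift-cluster/deprovision.yml
make down WHAT=prtest-060e08d-14 PROFILE=gcp
EOF
chmod +x "${script}"
scp -F ./.config/origin-ci-tool/inventory/.ssh_config "${script}" "openshiftdevel:${script}"
ssh -F ./.config/origin-ci-tool/inventory/.ssh_config -t openshiftdevel "bash -l -c \"timeout 900 ${script}\""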
[workspace] $ /bin/bash /tmp/jenkins1738162980380194100.sh
########## STARTING STAGE: DELETE PR IMAGES ##########
+ [[ -s /var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/activate ]]
+ source /var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2ac8264613c881a254c455aa448c414c3f984c4c
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2ac8264613c881a254c455aa448c414c3f984c4c
++ export PATH=/var/lib/jenkins/origin-ci-tool/2ac8264613c881a254c455aa448c414c3f984c4c/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/2ac8264613c881a254c455aa448c414c3f984c4c/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/.config
+ trap 'exit 0' EXIT
+ source ./INSTANCE_PREFIX
++ INSTANCE_PREFIX=prtest-060e08d-14
++ OS_TAG=fd46200
++ OS_PUSH_BASE_REPO=ci-pr-images/prtest-060e08d-14-
+ export KUBECONFIG=/var/lib/jenkins/secrets/image-pr-push.kubeconfig
+ KUBECONFIG=/var/lib/jenkins/secrets/image-pr-push.kubeconfig
+ oc get is -o name -n ci-pr-images
+ grep prtest-060e08d-14
+ xargs -r oc delete
imagestream "prtest-060e08d-14-hello-openshift" deleted
imagestream "prtest-060e08d-14-node" deleted
imagestream "prtest-060e08d-14-openvswitch" deleted
imagestream "prtest-060e08d-14-origin" deleted
imagestream "prtest-060e08d-14-origin-base" deleted
imagestream "prtest-060e08d-14-origin-deployer" deleted
imagestream "prtest-060e08d-14-origin-docker-builder" deleted
imagestream "prtest-060e08d-14-origin-docker-registry" deleted
imagestream "prtest-060e08d-14-origin-egress-dns-proxy" deleted
imagestream "prtest-060e08d-14-origin-egress-http-proxy" deleted
imagestream "prtest-060e08d-14-origin-egress-router" deleted
imagestream "prtest-060e08d-14-origin-f5-router" deleted
imagestream "prtest-060e08d-14-origin-haproxy-router" deleted
imagestream "prtest-060e08d-14-origin-keepalived-ipfailover" deleted
imagestream "prtest-060e08d-14-origin-metrics-server" deleted
imagestream "prtest-060e08d-14-origin-pod" deleted
imagestream "prtest-060e08d-14-origin-recycler" deleted
imagestream "prtest-060e08d-14-origin-sti-builder" deleted
imagestream "prtest-060e08d-14-origin-template-service-broker" deleted
imagestream "prtest-060e08d-14-origin-web-console" deleted
+ exit 0
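Image cleanup above is a single list-filter-delete pipeline. A minimal sketch of it; the trace shows no explicit namespace on the delete, so the kubeconfig's current namespace is assumed to already be ci-pr-images, and xargs -r keeps oc delete from running on an empty match:

export KUBECONFIG=/var/lib/jenkins/secrets/image-pr-push.kubeconfig
source ./INSTANCE_PREFIX   # sets INSTANCE_PREFIX=prtest-060e08d-14
oc get is -o name -n ci-pr-images \
  | grep "${INSTANCE_PREFIX}" \
  | xargs -r oc delete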
[workspace] $ /bin/bash /tmp/jenkins6728385973438439813.sh
########## STARTING STAGE: DEPROVISION CLOUD RESOURCES ##########
+ [[ -s /var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/activate ]]
+ source /var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2ac8264613c881a254c455aa448c414c3f984c4c
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/2ac8264613c881a254c455aa448c414c3f984c4c
++ export PATH=/var/lib/jenkins/origin-ci-tool/2ac8264613c881a254c455aa448c414c3f984c4c/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/2ac8264613c881a254c455aa448c414c3f984c4c/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/.config
+ oct deprovision

PLAYBOOK: main.yml *************************************************************
4 plays in /var/lib/jenkins/origin-ci-tool/2ac8264613c881a254c455aa448c414c3f984c4c/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml

PLAY [ensure we have the parameters necessary to deprovision virtual hosts] ****

TASK [ensure all required variables are set] ***********************************
task path: /var/lib/jenkins/origin-ci-tool/2ac8264613c881a254c455aa448c414c3f984c4c/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml:9
skipping: [localhost] => (item=origin_ci_inventory_dir)  => {
    "changed": false, 
    "generated_timestamp": "2018-04-17 03:02:34.355285", 
    "item": "origin_ci_inventory_dir", 
    "skip_reason": "Conditional check failed", 
    "skipped": true
}
skipping: [localhost] => (item=origin_ci_aws_region)  => {
    "changed": false, 
    "generated_timestamp": "2018-04-17 03:02:34.359435", 
    "item": "origin_ci_aws_region", 
    "skip_reason": "Conditional check failed", 
    "skipped": true
}

PLAY [deprovision virtual hosts in EC2] ****************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [deprovision a virtual EC2 host] ******************************************
task path: /var/lib/jenkins/origin-ci-tool/2ac8264613c881a254c455aa448c414c3f984c4c/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml:28
included: /var/lib/jenkins/origin-ci-tool/2ac8264613c881a254c455aa448c414c3f984c4c/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml for localhost

TASK [update the SSH configuration to remove AWS EC2 specifics] ****************
task path: /var/lib/jenkins/origin-ci-tool/2ac8264613c881a254c455aa448c414c3f984c4c/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:2
ok: [localhost] => {
    "changed": false, 
    "generated_timestamp": "2018-04-17 03:02:35.154553", 
    "msg": ""
}

TASK [rename EC2 instance for termination reaper] ******************************
task path: /var/lib/jenkins/origin-ci-tool/2ac8264613c881a254c455aa448c414c3f984c4c/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:8
changed: [localhost] => {
    "changed": true, 
    "generated_timestamp": "2018-04-17 03:02:35.758397", 
    "msg": "Tags {'Name': 'oct-terminate'} created for resource i-012cf4e46bd04884a."
}

TASK [tear down the EC2 instance] **********************************************
task path: /var/lib/jenkins/origin-ci-tool/2ac8264613c881a254c455aa448c414c3f984c4c/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:15
changed: [localhost] => {
    "changed": true, 
    "generated_timestamp": "2018-04-17 03:02:36.610849", 
    "instance_ids": [
        "i-012cf4e46bd04884a"
    ], 
    "instances": [
        {
            "ami_launch_index": "0", 
            "architecture": "x86_64", 
            "block_device_mapping": {
                "/dev/sda1": {
                    "delete_on_termination": true, 
                    "status": "attached", 
                    "volume_id": "vol-0f8ccead27020ad99"
                }, 
                "/dev/sdb": {
                    "delete_on_termination": true, 
                    "status": "attached", 
                    "volume_id": "vol-04d87ce1459aedff4"
                }
            }, 
            "dns_name": "ec2-54-84-2-57.compute-1.amazonaws.com", 
            "ebs_optimized": false, 
            "groups": {
                "sg-7e73221a": "default"
            }, 
            "hypervisor": "xen", 
            "id": "i-012cf4e46bd04884a", 
            "image_id": "ami-069c0ca6cc091e8fa", 
            "instance_type": "m4.xlarge", 
            "kernel": null, 
            "key_name": "libra", 
            "launch_time": "2018-04-17T04:35:13.000Z", 
            "placement": "us-east-1d", 
            "private_dns_name": "ip-172-18-15-137.ec2.internal", 
            "private_ip": "172.18.15.137", 
            "public_dns_name": "ec2-54-84-2-57.compute-1.amazonaws.com", 
            "public_ip": "54.84.2.57", 
            "ramdisk": null, 
            "region": "us-east-1", 
            "root_device_name": "/dev/sda1", 
            "root_device_type": "ebs", 
            "state": "running", 
            "state_code": 16, 
            "tags": {
                "Name": "oct-terminate", 
                "openshift_etcd": "", 
                "openshift_master": "", 
                "openshift_node": ""
            }, 
            "tenancy": "default", 
            "virtualization_type": "hvm"
        }
    ], 
    "tagged_instances": []
}

TASK [remove the serialized host variables] ************************************
task path: /var/lib/jenkins/origin-ci-tool/2ac8264613c881a254c455aa448c414c3f984c4c/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:22
changed: [localhost] => {
    "changed": true, 
    "generated_timestamp": "2018-04-17 03:02:36.857683", 
    "path": "/var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/.config/origin-ci-tool/inventory/host_vars/172.18.15.137.yml", 
    "state": "absent"
}

PLAY [deprovision virtual hosts locally managed by Vagrant] ********************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

PLAY [clean up local configuration for deprovisioned instances] ****************

TASK [remove inventory configuration directory] ********************************
task path: /var/lib/jenkins/origin-ci-tool/2ac8264613c881a254c455aa448c414c3f984c4c/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml:61
changed: [localhost] => {
    "changed": true, 
    "generated_timestamp": "2018-04-17 03:02:37.300820", 
    "path": "/var/lib/jenkins/jobs/test_pull_request_origin_extended_conformance_k8s/workspace/.config/origin-ci-tool/inventory", 
    "state": "absent"
}

PLAY RECAP *********************************************************************
localhost                  : ok=8    changed=4    unreachable=0    failed=0   

+ set +o xtrace
########## FINISHED STAGE: SUCCESS: DEPROVISION CLOUD RESOURCES [00h 00m 04s] ##########
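The EC2 teardown in this last stage is two operations: re-tag the instance so a termination reaper can find it, then terminate it. A hedged sketch of the same two steps using the plain AWS CLI rather than the Ansible ec2 module the playbook actually uses; the instance ID is the one from the log:

# rename the instance for the termination reaper
aws ec2 create-tags --resources i-012cf4e46bd04884a \
  --tags Key=Name,Value=oct-terminate
# tear down the EC2 instance
aws ec2 terminate-instances --instance-ids i-012cf4e46bd04884a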
Archiving artifacts
Recording test results
[WS-CLEANUP] Deleting project workspace...[WS-CLEANUP] done
Finished: SUCCESS