Started by timer
[EnvInject] - Loading node environment variables.
Building in workspace /var/lib/jenkins/jobs/origin_extended/workspace
[workspace] $ /bin/sh -xe /tmp/jenkins5964173157032212552.sh
+ vagrant origin-local-checkout --replace -u openshift -b master
You don't seem to have the GOPATH environment variable set on your system.
See: 'go help gopath' for more details about GOPATH.
Waiting for the cloning process to finish
Checking repo integrity for /var/lib/jenkins/jobs/origin_extended/workspace/origin
~/jobs/origin_extended/workspace/origin ~/jobs/origin_extended/workspace
# On branch master
# Untracked files:
#   (use "git add <file>..." to include in what will be committed)
#
#	artifacts/
nothing added to commit but untracked files present (use "git add" to track)
~/jobs/origin_extended/workspace
Replacing: /var/lib/jenkins/jobs/origin_extended/workspace/origin
~/jobs/origin_extended/workspace/origin ~/jobs/origin_extended/workspace
From github.com:openshift/origin
   7c47306..515979d  master       -> origin/master
   d67f7ab..3a34b96  release-3.11 -> origin/release-3.11
   7c47306..515979d  release-4.2  -> origin/release-4.2
   7c47306..515979d  release-4.3  -> origin/release-4.3
Already on 'master'
Your branch is behind 'origin/master' by 52 commits, and can be fast-forwarded.
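The checkout helper warns that GOPATH is unset on the Jenkins node. A minimal sketch of the fix, assuming a conventional per-user Go workspace (the exact location is illustrative, not what this job used):

```shell
#!/bin/sh
# Illustrative only: point GOPATH at a writable per-user workspace so Go
# tooling (and helpers that shell out to it) stop warning. The path is an
# assumption; any writable directory works.
export GOPATH="${HOME}/go"
mkdir -p "${GOPATH}/src" "${GOPATH}/bin" "${GOPATH}/pkg"
echo "GOPATH=${GOPATH}"
```

In a Jenkins job this would typically go in the node's environment configuration or at the top of the build shell step.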
  (use "git pull" to update your local branch)
HEAD is now at 515979d Merge pull request #23237 from deads2k/move-94-final-oas
Removing .vagrant-openshift.json
Removing .vagrant/
Removing artifacts/
fatal: branch name required
~/jobs/origin_extended/workspace
Origin repositories cloned into /var/lib/jenkins/jobs/origin_extended/workspace
+ pushd origin
~/jobs/origin_extended/workspace/origin ~/jobs/origin_extended/workspace
+ vagrant origin-init --stage inst --os rhel7 --instance-type m4.large origin_origin_extended_rhel7_1818
Reading AWS credentials from /var/lib/jenkins/.awscred
Searching devenv-rhel7_* for latest base AMI (required_name_tag=)
Found: ami-0b4c1d64a26a928c2 (devenv-rhel7_6324)
++ seq 0 2
+ for i in '$(seq 0 2)'
+ vagrant up --provider aws
Bringing machine 'openshiftdev' up with 'aws' provider...
==> openshiftdev: Warning! The AWS provider doesn't support any of the Vagrant
==> openshiftdev: high-level network configurations (`config.vm.network`). They
==> openshiftdev: will be silently ignored.
==> openshiftdev: Warning! You're launching this instance into a VPC without an
==> openshiftdev: elastic IP. Please verify you're properly connected to a VPN so
==> openshiftdev: you can access this machine, otherwise Vagrant will not be able
==> openshiftdev: to SSH into it.
==> openshiftdev: Launching an instance with the following settings...
==> openshiftdev:  -- Type: m4.large
==> openshiftdev:  -- AMI: ami-0b4c1d64a26a928c2
==> openshiftdev:  -- Region: us-east-1
==> openshiftdev:  -- Keypair: libra
==> openshiftdev:  -- Subnet ID: subnet-cf57c596
==> openshiftdev:  -- User Data: yes
==> openshiftdev:  -- User Data:
==> openshiftdev: # cloud-config
==> openshiftdev:
==> openshiftdev: growpart:
==> openshiftdev:   mode: auto
==> openshiftdev:   devices: ['/']
==> openshiftdev: runcmd:
==> openshiftdev: - [ sh, -xc, "sed -i s/^Defaults.*requiretty/#Defaults requiretty/g /etc/sudoers"]
==> openshiftdev:
==> openshiftdev:  -- Block Device Mapping: [{"DeviceName"=>"/dev/sda1", "Ebs.VolumeSize"=>25, "Ebs.VolumeType"=>"gp2"}, {"DeviceName"=>"/dev/sdb", "Ebs.VolumeSize"=>35, "Ebs.VolumeType"=>"gp2"}]
==> openshiftdev:  -- Terminate On Shutdown: false
==> openshiftdev:  -- Monitoring: false
==> openshiftdev:  -- EBS optimized: false
==> openshiftdev:  -- Assigning a public IP address in a VPC: false
==> openshiftdev: Waiting for instance to become "ready"...
==> openshiftdev: Waiting for SSH to become available...
==> openshiftdev: Machine is booted and ready for use!
==> openshiftdev: Running provisioner: setup (shell)...
    openshiftdev: Running: /tmp/vagrant-shell20190624-25693-12reqh1.sh
==> openshiftdev: Host: ec2-54-175-235-191.compute-1.amazonaws.com
+ break
+ vagrant ssh -c 'sudo mkdir /tmp || true'
mkdir: cannot create directory ‘/tmp’: File exists
+ vagrant ssh -c 'sudo mount -t tmpfs -o size=2048m tmpfs /tmp'
+ vagrant test-origin --extended core --env JUNIT_REPORT=true -d --skip-check --skip-image-cleanup
***************************************************
Running SKIP_IMAGE_CLEANUP=1 JUNIT_REPORT=true make...
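The cloud-config `runcmd` above comments out `Defaults requiretty` in /etc/sudoers so that non-interactive SSH sessions (such as the `vagrant ssh -c '...'` calls that follow) can use sudo without a TTY. A sketch of what that sed does, demonstrated on a scratch copy rather than the real /etc/sudoers:

```shell
#!/bin/sh
# Demonstrate the requiretty-disabling sed from the cloud-config runcmd on a
# throwaway file (editing the real /etc/sudoers requires root and visudo-style
# care; this is illustrative only).
tmp=$(mktemp)
printf 'Defaults    requiretty\nroot ALL=(ALL) ALL\n' > "$tmp"
# Same substitution as the runcmd: comment out any "Defaults ... requiretty" line.
sed -i 's/^Defaults.*requiretty/#Defaults requiretty/g' "$tmp"
cat "$tmp"
```

After the substitution the `Defaults requiretty` line is commented out and all other lines are untouched.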
hack/build-go.sh
++ Using release artifacts from bf447db for linux/amd64 instead of building
++ Extracting openshift-origin-server-v3.6.0-alpha.1-bf447db-1052-linux-64bit.tar.gz
tar: Ignoring unknown extended header keyword `LIBARCHIVE.xattr.security.selinux'
tar: Ignoring unknown extended header keyword `LIBARCHIVE.xattr.security.selinux'
tar: Ignoring unknown extended header keyword `LIBARCHIVE.xattr.security.selinux'
tar: Ignoring unknown extended header keyword `LIBARCHIVE.xattr.security.selinux'
tar: Ignoring unknown extended header keyword `LIBARCHIVE.xattr.security.selinux'
tar: Ignoring unknown extended header keyword `LIBARCHIVE.xattr.security.selinux'
tar: Ignoring unknown extended header keyword `LIBARCHIVE.xattr.security.selinux'
tar: Ignoring unknown extended header keyword `LIBARCHIVE.xattr.security.selinux'
tar: Ignoring unknown extended header keyword `LIBARCHIVE.xattr.security.selinux'
tar: Ignoring unknown extended header keyword `LIBARCHIVE.xattr.security.selinux'
tar: Ignoring unknown extended header keyword `LIBARCHIVE.xattr.security.selinux'
tar: Ignoring unknown extended header keyword `LIBARCHIVE.xattr.security.selinux'
++ Extracting openshift-origin-client-tools-v3.6.0-alpha.1-bf447db-1052-linux-64bit.tar.gz
tar: Ignoring unknown extended header keyword `LIBARCHIVE.xattr.security.selinux'
tar: Ignoring unknown extended header keyword `LIBARCHIVE.xattr.security.selinux'
tar: Ignoring unknown extended header keyword `LIBARCHIVE.xattr.security.selinux'
++ Extracting openshift-origin-image-v3.6.0-alpha.1-bf447db-1052-linux-64bit.tar.gz
tar: Ignoring unknown extended header keyword `LIBARCHIVE.xattr.security.selinux'
tar: Ignoring unknown extended header keyword `LIBARCHIVE.xattr.security.selinux'
tar: Ignoring unknown extended header keyword `LIBARCHIVE.xattr.security.selinux'
tar: Ignoring unknown extended header keyword `LIBARCHIVE.xattr.security.selinux'
tar: Ignoring unknown extended header keyword `LIBARCHIVE.xattr.security.selinux'
tar: Ignoring unknown extended header keyword `LIBARCHIVE.xattr.security.selinux'
tar: Ignoring unknown extended header keyword `LIBARCHIVE.xattr.security.selinux'
tar: Ignoring unknown extended header keyword `LIBARCHIVE.xattr.security.selinux'
hack/build-go.sh took 13 seconds

real	0m13.832s
user	0m8.730s
sys	0m2.493s
Finished SKIP_IMAGE_CLEANUP=1 JUNIT_REPORT=true make
***************************************************
***************************************************
Running TEST_REPORT_DIR=/tmp/openshift/test-extended/junit JUNIT_REPORT=true test/extended/core.sh...
[WARNING] No compiled `ginkgo` binary was found. Attempting to build one using:
[WARNING]   $ hack/build-go.sh vendor/github.com/onsi/ginkgo/ginkgo
++ Building go targets for linux/amd64: vendor/github.com/onsi/ginkgo/ginkgo
/data/src/github.com/openshift/origin/hack/build-go.sh took 51 seconds
[WARNING] No compiled `junitmerge` binary was found. Attempting to build one using:
[WARNING]   $ hack/build-go.sh tools/junitmerge
++ Building go targets for linux/amd64: tools/junitmerge
/data/src/github.com/openshift/origin/hack/build-go.sh took 3 seconds
[INFO] [CLEANUP] Cleaning up temporary directories
[INFO] Starting server
Generated new key pair as /tmp/openshift/core/openshift.local.config/master/serviceaccounts.public.key and /tmp/openshift/core/openshift.local.config/master/serviceaccounts.private.key
Generating node credentials ...
Created node config for 172.18.11.204 in /tmp/openshift/core/openshift.local.config/node-172.18.11.204
Wrote master config to: /tmp/openshift/core/openshift.local.config/master/master-config.yaml
[INFO] Turn on audit logging
[INFO] Using VOLUME_DIR=/mnt/openshift-xfs-vol-dir
Running hack/lib/start.sh:352: executing 'oc get --raw /healthz --config='/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig'' expecting any result and text 'ok'; re-trying every 0.25s until completion or 80.000s...
SUCCESS after 6.852s: hack/lib/start.sh:352: executing 'oc get --raw /healthz --config='/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig'' expecting any result and text 'ok'; re-trying every 0.25s until completion or 80.000s
Running hack/lib/start.sh:353: executing 'oc get --raw https://172.18.11.204:10250/healthz --config='/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig'' expecting any result and text 'ok'; re-trying every 0.5s until completion or 120.000s...
SUCCESS after 0.420s: hack/lib/start.sh:353: executing 'oc get --raw https://172.18.11.204:10250/healthz --config='/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig'' expecting any result and text 'ok'; re-trying every 0.5s until completion or 120.000s
Running hack/lib/start.sh:354: executing 'oc get --raw /healthz/ready --config='/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig'' expecting any result and text 'ok'; re-trying every 0.25s until completion or 80.000s...
SUCCESS after 0.685s: hack/lib/start.sh:354: executing 'oc get --raw /healthz/ready --config='/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig'' expecting any result and text 'ok'; re-trying every 0.25s until completion or 80.000s
Running hack/lib/start.sh:355: executing 'oc get service kubernetes --namespace default --config='/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig'' expecting success; re-trying every 0.25s until completion or 160.000s...
SUCCESS after 0.734s: hack/lib/start.sh:355: executing 'oc get service kubernetes --namespace default --config='/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig'' expecting success; re-trying every 0.25s until completion or 160.000s
Running hack/lib/start.sh:356: executing 'oc get --raw /api/v1/nodes/172.18.11.204 --config='/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig'' expecting success; re-trying every 0.25s until completion or 80.000s...
SUCCESS after 1.513s: hack/lib/start.sh:356: executing 'oc get --raw /api/v1/nodes/172.18.11.204 --config='/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig'' expecting success; re-trying every 0.25s until completion or 80.000s
serviceaccount "registry" created
clusterrolebinding "registry-registry-role" created
deploymentconfig "docker-registry" created
service "docker-registry" created
Waiting for rollout to finish: 0 out of 1 new replicas have been updated...
Waiting for rollout to finish: 0 out of 1 new replicas have been updated...
Waiting for rollout to finish: 0 of 1 updated replicas are available...
Waiting for latest deployment config spec to be observed by the controller loop...
replication controller "docker-registry-1" successfully rolled out
--> Creating router router ...
    info: password for stats user admin has been set to M3Czto4twK
    secret "router-certs" created
    serviceaccount "router" created
    clusterrolebinding "router-router-role" created
    deploymentconfig "router" created
    service "router" created
--> Success
deploymentconfig "router" patched
deploymentconfig "router" updated
[INFO] Creating image streams
imagestream "ruby" created
imagestream "nodejs" created
imagestream "perl" created
imagestream "php" created
imagestream "python" created
imagestream "wildfly" created
imagestream "mysql" created
imagestream "mariadb" created
imagestream "postgresql" created
imagestream "mongodb" created
imagestream "redis" created
imagestream "jenkins" created
[INFO] Creating quickstart templates
template "cakephp-mysql-persistent" created
template "cakephp-mysql-example" created
template "dancer-mysql-persistent" created
template "dancer-mysql-example" created
template "django-psql-persistent" created
template "django-psql-example" created
template "nodejs-mongo-persistent" created
template "nodejs-mongodb-example" created
template "rails-pgsql-persistent" created
template "rails-postgresql-example" created
[INFO] Running parallel tests N=<default>
I0624 01:49:31.699336    6283 plugins.go:72] Registered admission plugin "LimitRanger"
I0624 01:49:31.699364    6283 plugins.go:72] Registered admission plugin "openshift.io/ImageLimitRange"
I0624 01:49:31.797948    6283 test.go:92] Extended test version v3.6.0-alpha.1+bf447db-1052
Running Suite: Extended
=======================
Random Seed: 1561355374 - Will randomize all specs
Will run 850 specs

Running in parallel across 5 nodes

I0624 01:49:36.797028    8234 e2e.go:65] The --provider flag is not set. Treating as a conformance test. Some tests may not be run.
Jun 24 01:49:36.797: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
Jun 24 01:49:36.800: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
Jun 24 01:49:36.893: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 24 01:49:36.901: INFO: 0 / 0 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 24 01:49:36.901: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Jun 24 01:49:36.903: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Jun 24 01:49:36.903: INFO: Dumping network health container logs from all nodes
I0624 01:49:36.906121    8234 e2e.go:65] The --provider flag is not set. Treating as a conformance test. Some tests may not be run.
I0624 01:49:36.929152    8238 e2e.go:65] The --provider flag is not set. Treating as a conformance test. Some tests may not be run.
I0624 01:49:36.935268    8251 e2e.go:65] The --provider flag is not set. Treating as a conformance test. Some tests may not be run.
I0624 01:49:36.956737    8258 e2e.go:65] The --provider flag is not set. Treating as a conformance test. Some tests may not be run.
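The `hack/lib/start.sh` health checks above all follow one pattern: run an `oc get` probe, and if it fails, retry every 0.25s until it succeeds or a deadline (80s, 120s, or 160s) passes. A minimal sketch of that poll loop in plain sh (the `wait_for` function name and interface are illustrative, not the actual hack/lib implementation):

```shell
#!/bin/sh
# wait_for <max_attempts> <sleep_seconds> <command...>
# Retries <command> until it succeeds or <max_attempts> failures accumulate.
# e.g. 320 attempts at 0.25s ~= the 80s deadline seen in the log (illustrative).
wait_for() {
  attempts=$1; pause=$2; shift 2
  i=0
  while ! "$@"; do
    i=$((i + 1))
    if [ "$i" -ge "$attempts" ]; then
      echo "timed out" >&2
      return 1
    fi
    sleep "$pause"
  done
  echo "SUCCESS after $i retries"
}

# A health probe would look like (hypothetical invocation):
#   wait_for 320 0.25 oc get --raw /healthz --config="$KUBECONFIG"
wait_for 3 0 true
```

The real script additionally matches expected output text ("expecting any result and text 'ok'"), not just the exit code.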
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[image_ecosystem][jenkins][Slow] openshift pipeline plugin [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/jenkins_plugin.go:664
  jenkins-plugin test context
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/jenkins_plugin.go:663
    jenkins-plugin test trigger build DSL
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/jenkins_plugin.go:378

    skipping tests not in the Origin conformance suite
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
I0624 01:49:37.255665    8233 e2e.go:65] The --provider flag is not set. Treating as a conformance test. Some tests may not be run.
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[builds][Slow] starting a build using CLI [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/start.go:341
  override environment
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/start.go:118
    BUILD_LOGLEVEL in buildconfig should create verbose output
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/start.go:106

    skipping tests not in the Origin conformance suite
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[k8s.io] ResourceQuota
  should create a ResourceQuota and ensure its status is promptly calculated.
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/resource_quota.go:58
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] ResourceQuota
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:49:36.908: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:49:37.140: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated.
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/resource_quota.go:58
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [k8s.io] ResourceQuota
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 01:49:39.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-resourcequota-ktdmc" for this suite.
Jun 24 01:49:49.473: INFO: namespace: e2e-tests-resourcequota-ktdmc, resource: bindings, ignored listing per whitelist
• [SLOW TEST:12.652 seconds]
[k8s.io] ResourceQuota
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  should create a ResourceQuota and ensure its status is promptly calculated.
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/resource_quota.go:58
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[image_ecosystem][Slow] openshift images should be SCL enabled [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:76
  returning s2i usage when running the image
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:38
    "ci.dev.openshift.redhat.com:5000/openshift/perl-520-rhel7" should print the usage
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:37

    skipping tests not in the Origin conformance suite
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[builds][Slow] starting a build using CLI [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/start.go:341
  Setting build-args on Docker builds
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/start.go:320
    Should copy build args from BuildConfig to Build
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/start.go:301

    skipping tests not in the Origin conformance suite
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Downgrade [Feature:Downgrade] [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  [k8s.io] cluster downgrade
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
    should maintain a functioning cluster [Feature:ClusterDowngrade]
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:160

    skipping tests not in the Origin conformance suite
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
deploymentconfigs with env in params referencing the configmap [Conformance] should expand the config map key to a value
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:425
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] deploymentconfigs
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:49:36.937: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:49:37.300: INFO: configPath is now "/tmp/extended-test-cli-deployment-4bb3d-4xp8x-user.kubeconfig"
Jun 24 01:49:37.300: INFO: The user is now "extended-test-cli-deployment-4bb3d-4xp8x-user"
Jun 24 01:49:37.300: INFO: Creating project "extended-test-cli-deployment-4bb3d-4xp8x"
STEP: Waiting for a default service account to be provisioned in namespace
[It] should expand the config map key to a value
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:425
Jun 24 01:49:37.819: INFO: Running 'oc create --config=/tmp/extended-test-cli-deployment-4bb3d-4xp8x-user.kubeconfig --namespace=extended-test-cli-deployment-4bb3d-4xp8x configmap test --from-literal=foo=bar'
Jun 24 01:49:38.358: INFO: Running 'oc create --config=/tmp/extended-test-cli-deployment-4bb3d-4xp8x-user.kubeconfig --namespace=extended-test-cli-deployment-4bb3d-4xp8x -f /tmp/fixture-testdata-dir471114561/test/extended/testdata/deployments/deployment-with-ref-env.yaml -o name'
Jun 24 01:49:38.830: INFO: Running 'oc rollout --config=/tmp/extended-test-cli-deployment-4bb3d-4xp8x-user.kubeconfig --namespace=extended-test-cli-deployment-4bb3d-4xp8x latest dc/deployment-simple'
Jun 24 01:49:39.153: INFO: Running 'oc rollout --config=/tmp/extended-test-cli-deployment-4bb3d-4xp8x-user.kubeconfig --namespace=extended-test-cli-deployment-4bb3d-4xp8x status dc/deployment-simple'
Jun 24 01:50:30.214: INFO: Error running &{/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/oc [oc rollout --config=/tmp/extended-test-cli-deployment-4bb3d-4xp8x-user.kubeconfig --namespace=extended-test-cli-deployment-4bb3d-4xp8x status dc/deployment-simple] [] Waiting for rollout to finish: 0 out of 1 new replicas have been updated...
error: replication controller "deployment-simple-1" has failed progressing
 Waiting for rollout to finish: 0 out of 1 new replicas have been updated...
error: replication controller "deployment-simple-1" has failed progressing
 [] <nil> 0xc42161c270 exit status 1 <nil> <nil> true [0xc420473388 0xc4204733d0 0xc4204733d0] [0xc420473388 0xc4204733d0] [0xc4204733a0 0xc4204733c0] [0xdd8a30 0xdd8b30] 0xc4212ba2a0 <nil>}:
Waiting for rollout to finish: 0 out of 1 new replicas have been updated...
error: replication controller "deployment-simple-1" has failed progressing
Jun 24 01:50:30.214: INFO: Running 'oc logs --config=/tmp/extended-test-cli-deployment-4bb3d-4xp8x-user.kubeconfig --namespace=extended-test-cli-deployment-4bb3d-4xp8x dc/deployment-simple'
[AfterEach] with env in params referencing the configmap [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:407
[AfterEach] deploymentconfigs
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 01:50:30.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "extended-test-cli-deployment-4bb3d-4xp8x" for this suite.
Jun 24 01:50:40.825: INFO: namespace: extended-test-cli-deployment-4bb3d-4xp8x, resource: bindings, ignored listing per whitelist
• [SLOW TEST:63.960 seconds]
deploymentconfigs
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:979
  with env in params referencing the configmap [Conformance]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:426
    should expand the config map key to a value
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:425
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[cli][Slow] can use rsync to upload files to pods [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/cli/rsync.go:396
  rsync specific flags
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/cli/rsync.go:395
    should honor multiple --exclude flags
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/cli/rsync.go:319

    skipping tests not in the Origin conformance suite
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  [k8s.io] [Serial] [Slow] ReplicationController
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
    Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:62

    skipping tests not in the Origin conformance suite
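When `oc rollout status` fails, as it did above for `deployment-simple-1`, the suite immediately runs `oc logs` against the same deploymentconfig to capture diagnostics before tearing the namespace down. A generic sketch of that run-then-diagnose pattern (the function name is illustrative, and the two command strings stand in for e.g. `oc rollout status ...` and `oc logs ...`):

```shell
#!/bin/sh
# run_with_diagnostics <main-command> <diagnostic-command>
# Runs the main command; on failure, runs the diagnostic command to stderr
# before propagating the failure. Illustrative sketch, not the suite's code.
run_with_diagnostics() {
  main=$1; diag=$2
  if ! sh -c "$main"; then
    echo "--- command failed; collecting diagnostics ---" >&2
    sh -c "$diag" >&2
    return 1
  fi
}

# Hypothetical usage mirroring the log:
#   run_with_diagnostics \
#     "oc rollout status dc/deployment-simple" \
#     "oc logs dc/deployment-simple"
```

Collecting logs at the moment of failure matters here because the `AfterEach` hook destroys the namespace seconds later, taking the pods and their logs with it.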
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[builds][Slow] completed builds should have digest of the image in their status [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/digest.go:50
  S2I build
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/digest.go:39
    started with log level >5
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/digest.go:38
      should save the image digest when finished
      /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/digest.go:68

    skipping tests not in the Origin conformance suite
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Federation namespace [Feature:Federation] [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  Namespace objects
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/namespace.go:185
    should be deleted from underlying clusters when OrphanDependents is false
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/namespace.go:85

    skipping tests not in the Origin conformance suite
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[image_ecosystem][Slow] openshift sample application repositories [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/sample_repos.go:225
  [image_ecosystem][perl] test perl images with dancer-ex db repo
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/sample_repos.go:166
    Building dancer-mysql app from new-app
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/sample_repos.go:92
      should build a dancer-mysql image and run it in a pod
      /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/sample_repos.go:91

    skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [networking] OVS [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/ovs.go:168 when using a plugin that isolates namespaces by default /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:273 should add and remove flows when services are added and removed /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/ovs.go:166 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [builds][Slow] can use build secrets [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/secrets.go:99 build with secrets /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/secrets.go:98 should contain secrets during the source strategy 
build /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/secrets.go:64 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [image_ecosystem][Slow] openshift sample application repositories [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/sample_repos.go:225 [image_ecosystem][perl] test perl images with dancer-ex repo /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/sample_repos.go:223 Building dancer app from new-app /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/sample_repos.go:92 should build a dancer image and run it in a pod /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/sample_repos.go:91 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [k8s.io] ESIPP [Slow] [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 should handle updates to source ip annotation [Feature:ExternalTrafficLocalOnly] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/service.go:1529 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [cli][Slow] can use rsync to upload files to pods [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/cli/rsync.go:396 rsync specific flags /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/cli/rsync.go:395 should honor the --exclude flag /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/cli/rsync.go:302 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 
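Each `S [SKIPPING]` entry above is the suite's conformance filter rejecting specs that are not part of the conformance subset. A minimal, illustrative stand-in for that name-based filter (this is not the suite's actual implementation; spec names are taken from this log):

```shell
# Hypothetical sketch: keep only specs whose names carry the [Conformance]
# tag, the way the extended suite narrows a conformance run. The grep -F
# match is literal, so the bracketed tag is not treated as a regex class.
specs='[builds][Slow] completed builds should have digest of the image in their status
[k8s.io] Projected should be consumable from pods in volume with mappings as non-root [Conformance] [Volume]
[k8s.io] Pods should be submitted and removed [Conformance]'
printf '%s\n' "$specs" | grep -Fc '[Conformance]'   # prints 2: two runnable specs
```

In the real run, every spec failing this kind of filter is aborted in `BeforeEach` with the "skipping tests not in the Origin conformance suite" message seen throughout this log.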
------------------------------
[k8s.io] Projected should be consumable from pods in volume with mappings as non-root [Conformance] [Volume]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:418
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Projected
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:49:49.566: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:49:49.638: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Projected
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:803
[It] should be consumable from pods in volume with mappings as non-root [Conformance] [Volume]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:418
STEP: Creating configMap with name projected-configmap-test-volume-map-e134d8aa-9643-11e9-a60e-0e9110352016
STEP: Creating a pod to test consume configMaps
Jun 24 01:49:49.696: INFO: Waiting up to 5m0s for pod pod-projected-configmaps-e13531e2-9643-11e9-a60e-0e9110352016 status to be success or failure
Jun 24 01:49:49.699: INFO: Waiting for pod pod-projected-configmaps-e13531e2-9643-11e9-a60e-0e9110352016 in namespace 'e2e-tests-projected-n7g5w' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.639556ms elapsed)
Jun 24 01:49:51.701: INFO: Waiting for pod pod-projected-configmaps-e13531e2-9643-11e9-a60e-0e9110352016 in namespace 'e2e-tests-projected-n7g5w' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.005262167s elapsed)
Jun 24 01:49:53.713: INFO: Waiting for pod pod-projected-configmaps-e13531e2-9643-11e9-a60e-0e9110352016 in namespace 'e2e-tests-projected-n7g5w' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.017368408s elapsed)
Jun 24 01:49:55.716: INFO: Waiting for pod pod-projected-configmaps-e13531e2-9643-11e9-a60e-0e9110352016 in namespace 'e2e-tests-projected-n7g5w' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.0198169s elapsed)
Jun 24 01:49:57.718: INFO: Waiting for pod pod-projected-configmaps-e13531e2-9643-11e9-a60e-0e9110352016 in namespace 'e2e-tests-projected-n7g5w' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.022207634s elapsed)
Jun 24 01:49:59.720: INFO: Waiting for pod pod-projected-configmaps-e13531e2-9643-11e9-a60e-0e9110352016 in namespace 'e2e-tests-projected-n7g5w' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.024571358s elapsed)
Jun 24 01:50:01.725: INFO: Waiting for pod pod-projected-configmaps-e13531e2-9643-11e9-a60e-0e9110352016 in namespace 'e2e-tests-projected-n7g5w' status to be 'success or failure'(found phase: "Pending", readiness: false) (12.028898523s elapsed)
Jun 24 01:50:03.732: INFO: Waiting for pod pod-projected-configmaps-e13531e2-9643-11e9-a60e-0e9110352016 in namespace 'e2e-tests-projected-n7g5w' status to be 'success or failure'(found phase: "Pending", readiness: false) (14.036163145s elapsed)
Jun 24 01:50:05.735: INFO: Waiting for pod pod-projected-configmaps-e13531e2-9643-11e9-a60e-0e9110352016 in namespace 'e2e-tests-projected-n7g5w' status to be 'success or failure'(found phase: "Pending", readiness: false) (16.038859066s elapsed)
Jun 24 01:50:07.737: INFO: Waiting for pod pod-projected-configmaps-e13531e2-9643-11e9-a60e-0e9110352016 in namespace 'e2e-tests-projected-n7g5w' status to be 'success or failure'(found phase: "Pending", readiness: false) (18.041484435s elapsed)
Jun 24 01:50:09.740: INFO: Waiting for pod pod-projected-configmaps-e13531e2-9643-11e9-a60e-0e9110352016 in namespace 'e2e-tests-projected-n7g5w' status to be 'success or failure'(found phase: "Pending", readiness: false) (20.044515965s elapsed)
Jun 24 01:50:11.743: INFO: Waiting for pod pod-projected-configmaps-e13531e2-9643-11e9-a60e-0e9110352016 in namespace 'e2e-tests-projected-n7g5w' status to be 'success or failure'(found phase: "Pending", readiness: false) (22.046673521s elapsed)
Jun 24 01:50:13.745: INFO: Waiting for pod pod-projected-configmaps-e13531e2-9643-11e9-a60e-0e9110352016 in namespace 'e2e-tests-projected-n7g5w' status to be 'success or failure'(found phase: "Pending", readiness: false) (24.048892093s elapsed)
Jun 24 01:50:15.748: INFO: Waiting for pod pod-projected-configmaps-e13531e2-9643-11e9-a60e-0e9110352016 in namespace 'e2e-tests-projected-n7g5w' status to be 'success or failure'(found phase: "Pending", readiness: false) (26.051844003s elapsed)
Jun 24 01:50:17.752: INFO: Waiting for pod pod-projected-configmaps-e13531e2-9643-11e9-a60e-0e9110352016 in namespace 'e2e-tests-projected-n7g5w' status to be 'success or failure'(found phase: "Pending", readiness: false) (28.056147951s elapsed)
Jun 24 01:50:19.754: INFO: Waiting for pod pod-projected-configmaps-e13531e2-9643-11e9-a60e-0e9110352016 in namespace 'e2e-tests-projected-n7g5w' status to be 'success or failure'(found phase: "Pending", readiness: false) (30.058453674s elapsed)
Jun 24 01:50:21.760: INFO: Waiting for pod pod-projected-configmaps-e13531e2-9643-11e9-a60e-0e9110352016 in namespace 'e2e-tests-projected-n7g5w' status to be 'success or failure'(found phase: "Pending", readiness: false) (32.064005385s elapsed)
Jun 24 01:50:23.762: INFO: Waiting for pod pod-projected-configmaps-e13531e2-9643-11e9-a60e-0e9110352016 in namespace 'e2e-tests-projected-n7g5w' status to be 'success or failure'(found phase: "Pending", readiness: false) (34.066023584s elapsed)
Jun 24 01:50:25.764: INFO: Waiting for pod pod-projected-configmaps-e13531e2-9643-11e9-a60e-0e9110352016 in namespace 'e2e-tests-projected-n7g5w' status to be 'success or failure'(found phase: "Pending", readiness: false) (36.068113324s elapsed)
Jun 24 01:50:27.766: INFO: Waiting for pod pod-projected-configmaps-e13531e2-9643-11e9-a60e-0e9110352016 in namespace 'e2e-tests-projected-n7g5w' status to be 'success or failure'(found phase: "Pending", readiness: false) (38.069921517s elapsed)
Jun 24 01:50:29.768: INFO: Waiting for pod pod-projected-configmaps-e13531e2-9643-11e9-a60e-0e9110352016 in namespace 'e2e-tests-projected-n7g5w' status to be 'success or failure'(found phase: "Pending", readiness: false) (40.072513889s elapsed)
Jun 24 01:50:31.772: INFO: Waiting for pod pod-projected-configmaps-e13531e2-9643-11e9-a60e-0e9110352016 in namespace 'e2e-tests-projected-n7g5w' status to be 'success or failure'(found phase: "Pending", readiness: false) (42.075655126s elapsed)
Jun 24 01:50:33.774: INFO: Waiting for pod pod-projected-configmaps-e13531e2-9643-11e9-a60e-0e9110352016 in namespace 'e2e-tests-projected-n7g5w' status to be 'success or failure'(found phase: "Pending", readiness: false) (44.077645593s elapsed)
STEP: Saw pod success
Jun 24 01:50:35.777: INFO: Trying to get logs from node 172.18.11.204 pod pod-projected-configmaps-e13531e2-9643-11e9-a60e-0e9110352016 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jun 24 01:50:36.543: INFO: Waiting for pod pod-projected-configmaps-e13531e2-9643-11e9-a60e-0e9110352016 to disappear
Jun 24 01:50:36.549: INFO: Pod pod-projected-configmaps-e13531e2-9643-11e9-a60e-0e9110352016 no longer exists
[AfterEach] [k8s.io] Projected
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 01:50:36.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-n7g5w" for this suite.
Jun 24 01:50:46.710: INFO: namespace: e2e-tests-projected-n7g5w, resource: bindings, ignored listing per whitelist
• [SLOW TEST:57.148 seconds]
[k8s.io] Projected
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should be consumable from pods in volume with mappings as non-root [Conformance] [Volume]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:418
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[imageapis][registry] image signature workflow [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/registry/signature.go:110
can push a signed image to openshift registry and verify it
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/registry/signature.go:109
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] [Feature:Example] [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
[k8s.io] RethinkDB
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should create and stop rethinkdb servers
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/examples.go:537
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[builds][Slow] starting a build using CLI [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/start.go:341
Setting build-args on Docker builds
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/start.go:320
Should accept build args that are specified in the Dockerfile
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/start.go:310
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
deploymentconfigs with failing hook [Conformance] should get all logs from retried hooks
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:725
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] deploymentconfigs
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:49:37.259: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:49:37.801: INFO: configPath is now "/tmp/extended-test-cli-deployment-k08fq-2lztr-user.kubeconfig"
Jun 24 01:49:37.801: INFO: The user is now "extended-test-cli-deployment-k08fq-2lztr-user"
Jun 24 01:49:37.801: INFO: Creating project "extended-test-cli-deployment-k08fq-2lztr"
STEP: Waiting for a default service account to be provisioned in namespace
[It] should get all logs from retried hooks
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:725
Jun 24 01:49:38.051: INFO: Running 'oc create --config=/tmp/extended-test-cli-deployment-k08fq-2lztr-user.kubeconfig --namespace=extended-test-cli-deployment-k08fq-2lztr -f /tmp/fixture-testdata-dir557889249/test/extended/testdata/deployments/failing-pre-hook.yaml -o name'
Jun 24 01:50:37.758: INFO: Running 'oc logs --config=/tmp/extended-test-cli-deployment-k08fq-2lztr-user.kubeconfig --namespace=extended-test-cli-deployment-k08fq-2lztr deploymentconfig/hook'
STEP: checking the logs for substrings
--> pre: Running hook pod ...
pre hook logs
--> pre: Retrying hook pod (retry #1)
pre hook logs
[AfterEach] with failing hook [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:710
[AfterEach] deploymentconfigs
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 01:50:38.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "extended-test-cli-deployment-k08fq-2lztr" for this suite.
Jun 24 01:50:58.311: INFO: namespace: extended-test-cli-deployment-k08fq-2lztr, resource: bindings, ignored listing per whitelist
• [SLOW TEST:81.134 seconds]
deploymentconfigs
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:979
with failing hook [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:726
should get all logs from retried hooks
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:725
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Security Context [Feature:SecurityContext] [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should support volume SELinux relabeling when using hostIPC
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/security_context.go:104
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] CronJob [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should not schedule jobs when suspended [Slow]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/cronjob.go:108
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[networking] network isolation [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:58
when using a plugin that isolates namespaces by default
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:273
should allow communication from non-default to default namespace on the same node
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:52
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[builds][Slow] update failure status [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/failure_status.go:195
Build status Docker fetch source failure
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/failure_status.go:80
should contain the Docker build fetch source failure reason and message
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/failure_status.go:79
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [k8s.io] Pods should be submitted and removed [Conformance] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/pods.go:265 [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [k8s.io] Pods /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120 STEP: Creating a kubernetes client Jun 24 01:50:40.920: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Jun 24 01:50:40.986: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/pods.go:127 [It] should be submitted and removed [Conformance] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/pods.go:265 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed Jun 24 01:50:45.061: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-ffd0f5de-9643-11e9-906c-0e9110352016", GenerateName:"", 
Namespace:"e2e-tests-pods-qmlvp", SelfLink:"/api/v1/namespaces/e2e-tests-pods-qmlvp/pods/pod-submit-remove-ffd0f5de-9643-11e9-906c-0e9110352016", UID:"ffd20cdc-9643-11e9-9f9d-0e9110352016", ResourceVersion:"1650", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{sec:63696952241, nsec:44193271, loc:(*time.Location)(0x590a6e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"time":"37058957", "name":"foo"}, Annotations:map[string]string{"openshift.io/scc":"anyuid"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-fn4z3", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc420d31900), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), Metadata:(*v1.DeprecatedDownwardAPIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), 
Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"gcr.io/google_containers/nginx-slim:0.7", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-fn4z3", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:""}}, LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc420bd4960), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc420a5edd8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"172.18.11.204", HostNetwork:false, HostPID:false, HostIPC:false, SecurityContext:(*v1.PodSecurityContext)(0xc420d319c0), ImagePullSecrets:[]v1.LocalObjectReference{v1.LocalObjectReference{Name:"default-dockercfg-9zj72"}}, Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63696952241, nsec:0, loc:(*time.Location)(0x590a6e0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63696952244, nsec:0, 
loc:(*time.Location)(0x590a6e0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63696952241, nsec:0, loc:(*time.Location)(0x590a6e0)}}, Reason:"", Message:""}}, Message:"", Reason:"", HostIP:"172.18.11.204", PodIP:"172.17.0.7", StartTime:(*v1.Time)(0xc420a03400), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc420a03420), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"gcr.io/google_containers/nginx-slim:0.7", ImageID:"docker-pullable://gcr.io/google_containers/nginx-slim@sha256:dd4efd4c13bec2c6f3fe855deeab9524efe434505568421d4f31820485b3a795", ContainerID:"docker://17a6748d3f94cb7bdede4a397d372c501ab7467a89aa96708802692d1bc97b68"}}, QOSClass:"BestEffort"}} STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Jun 24 01:50:50.079: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121 Jun 24 01:50:50.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-qmlvp" for this suite. 
Jun 24 01:51:00.136: INFO: namespace: e2e-tests-pods-qmlvp, resource: bindings, ignored listing per whitelist
• [SLOW TEST:19.296 seconds]
[k8s.io] Pods
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should be submitted and removed [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/pods.go:265
------------------------------
[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/container_probe.go:75
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Probing container
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:49:36.931: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:49:37.288: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/container_probe.go:75
Jun 24 01:50:39.854: INFO: Container started at 2019-06-24 01:50:10 -0400 EDT, pod became ready at 2019-06-24 01:50:38 -0400 EDT
[AfterEach] [k8s.io] Probing container
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 01:50:39.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-8pzgc" for this suite.
Jun 24 01:51:04.982: INFO: namespace: e2e-tests-container-probe-8pzgc, resource: bindings, ignored listing per whitelist
• [SLOW TEST:88.078 seconds]
[k8s.io] Probing container
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
with readiness probe should not be ready before initial delay and never restart [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/container_probe.go:75
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Volumes [Feature:Volumes] [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
[k8s.io] Ceph RBD
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should be mountable [Volume]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/volumes.go:649
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[image_ecosystem][php][Slow] hot deploy for openshift php image [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/s2i_php.go:69
CakePHP example
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/s2i_php.go:68
should work with hot deploy
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/s2i_php.go:67
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Federation secrets [Feature:Federation] [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
Secret objects
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/secret.go:95
should be deleted from underlying clusters when OrphanDependents is false
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/secret.go:79
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] [Feature:Example] [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
[k8s.io] Storm
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should create and stop Zookeeper, Nimbus and Storm worker servers
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/examples.go:391
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[k8s.io] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:1249
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Kubectl client
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:50:58.403: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:50:58.477: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubectl client
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:289
[BeforeEach] [k8s.io] Kubectl run deployment
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:1207
[It] should create a deployment from an image [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:1249
STEP: running the image gcr.io/google_containers/nginx-slim:0.7
Jun 24 01:50:58.528: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.18.11.204:8443 --kubeconfig=/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig run e2e-test-nginx-deployment --image=gcr.io/google_containers/nginx-slim:0.7 --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-nc1t5'
Jun 24 01:50:58.884: INFO: stderr: ""
Jun 24 01:50:58.884: INFO: stdout: "deployment \"e2e-test-nginx-deployment\" created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:1221
Jun 24 01:51:00.912: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.18.11.204:8443 --kubeconfig=/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-nc1t5'
Jun 24 01:51:04.759: INFO: stderr: ""
[AfterEach] [k8s.io] Kubectl client
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 01:51:04.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-nc1t5" for this suite.
Jun 24 01:51:29.965: INFO: namespace: e2e-tests-kubectl-nc1t5, resource: bindings, ignored listing per whitelist
• [SLOW TEST:31.568 seconds]
[k8s.io] Kubectl client
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
[k8s.io] Kubectl run deployment
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should create a deployment from an image [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:1249
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[Conformance][image_ecosystem][mongodb][Slow] openshift mongodb replication (with statefulset) [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/mongodb_replica_statefulset.go:125
creating from a template
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/mongodb_replica_statefulset.go:123
should instantiate the template
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/mongodb_replica_statefulset.go:122
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Cluster level logging using Elasticsearch [Feature:Elasticsearch] [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should check that logs from containers are ingested into Elasticsearch
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/cluster_logging_es.go:65
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] SchedulerPredicates [Serial] [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
validates that NodeSelector is respected if matching [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:290
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Deployment [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
deployment should support rollback when there's replica set with no revision
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/deployment.go:89
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Network Partition [Disruptive] [Slow] [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
[k8s.io] [StatefulSet]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should not reschedule stateful pods if there is a network partition [Slow] [Disruptive]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/network_partition.go:437
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] CronJob [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should not schedule new jobs when ForbidConcurrent [Slow]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/cronjob.go:140
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[builds][Slow] starting a build using CLI [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/start.go:341
Trigger builds with branch refs matching directories on master branch
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/start.go:340
Should checkout the config branch, not config directory
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/start.go:339
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] ClusterDns [Feature:Example] [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should create pod that uses dns
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/example_cluster_dns.go:153
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[builds][Slow] completed builds should have digest of the image in their status [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/digest.go:50
Docker build
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/digest.go:49
started with normal log level
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/digest.go:44
should save the image digest when finished
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/digest.go:68
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
deploymentconfigs when run iteratively [Conformance] should immediately start a new deployment
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:201
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] deploymentconfigs
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:51:05.019: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:51:05.048: INFO: configPath is now "/tmp/extended-test-cli-deployment-n81dj-b2ffj-user.kubeconfig"
Jun 24 01:51:05.048: INFO: The user is now "extended-test-cli-deployment-n81dj-b2ffj-user"
Jun 24 01:51:05.048: INFO: Creating project "extended-test-cli-deployment-n81dj-b2ffj"
STEP: Waiting for a default service account to be provisioned in namespace
[It] should immediately start a new deployment
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:201
Jun 24 01:51:05.163: INFO: Running 'oc create --config=/tmp/extended-test-cli-deployment-n81dj-b2ffj-user.kubeconfig --namespace=extended-test-cli-deployment-n81dj-b2ffj -f /tmp/fixture-testdata-dir824123364/test/extended/testdata/deployments/deployment-simple.yaml -o name'
Jun 24 01:51:05.469: INFO: Running 'oc set env --config=/tmp/extended-test-cli-deployment-n81dj-b2ffj-user.kubeconfig --namespace=extended-test-cli-deployment-n81dj-b2ffj deploymentconfig/deployment-simple TRY=ONCE'
STEP: by checking that the deployment config has the correct version
STEP: by checking that the second deployment exists
STEP: by checking that the first deployer was deleted and the second deployer exists
[AfterEach] when run iteratively [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:56
[AfterEach] deploymentconfigs
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 01:51:08.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "extended-test-cli-deployment-n81dj-b2ffj" for this suite.
Jun 24 01:51:38.619: INFO: namespace: extended-test-cli-deployment-n81dj-b2ffj, resource: bindings, ignored listing per whitelist
• [SLOW TEST:33.638 seconds]
deploymentconfigs
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:979
when run iteratively [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:202
should immediately start a new deployment
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:201
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Garbage collector [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
[Feature:GarbageCollector] should not be blocked by dependency circle
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/garbage_collector.go:711
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[k8s.io] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:1390
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Kubectl client
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:51:38.669: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:51:38.729: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubectl client
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:289
[It] should create a job from an image, then delete the job [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:1390
STEP: executing a command with run --rm and attach with stdin
Jun 24 01:51:38.850: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.18.11.204:8443 --kubeconfig=/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig --namespace=e2e-tests-kubectl-xp16b run e2e-test-rm-busybox-job --image=gcr.io/google_containers/busybox:1.24 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jun 24 01:52:02.208: INFO: stderr: "If you don't see a command prompt, try pressing enter.\n"
Jun 24 01:52:02.208: INFO: stdout: "abcd1234stdin closed\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [k8s.io] Kubectl client
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 01:52:02.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-xp16b" for this suite.
Jun 24 01:52:12.692: INFO: namespace: e2e-tests-kubectl-xp16b, resource: bindings, ignored listing per whitelist
• [SLOW TEST:34.049 seconds]
[k8s.io] Kubectl client
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
[k8s.io] Kubectl run --rm job
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should create a job from an image, then delete the job [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:1390
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[builds][Slow] incremental s2i build [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/s2i_incremental.go:76
Building from a template
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/s2i_incremental.go:75
should create a build from "/tmp/fixture-testdata-dir824123364/test/extended/testdata/incremental-auth-build.json" template and run it
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/s2i_incremental.go:74
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[image_ecosystem][Slow] openshift images should be SCL enabled [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:76
returning s2i usage when running the image
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:38
"openshift/ruby-20-centos7" should print the usage
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:37
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[Conformance][networking][router] openshift routers The HAProxy router should serve the correct routes when scoped to a single namespace and label set
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:90
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Conformance][networking][router] openshift routers
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:51:00.218: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:51:00.248: INFO: configPath is now "/tmp/extended-test-scoped-router-n8f05-cb670-user.kubeconfig"
Jun 24 01:51:00.248: INFO: The user is now "extended-test-scoped-router-n8f05-cb670-user"
Jun 24 01:51:00.248: INFO: Creating project "extended-test-scoped-router-n8f05-cb670"
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [Conformance][networking][router] openshift routers
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:39
Jun 24 01:51:00.358: INFO: Running 'oc adm --config=/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig --namespace=extended-test-scoped-router-n8f05-cb670 policy add-cluster-role-to-user system:router extended-test-scoped-router-n8f05-cb670-user'
cluster role "system:router" added: "extended-test-scoped-router-n8f05-cb670-user"
Jun 24 01:51:00.661: INFO: Running 'oc new-app --config=/tmp/extended-test-scoped-router-n8f05-cb670-user.kubeconfig --namespace=extended-test-scoped-router-n8f05-cb670 -f /tmp/fixture-testdata-dir471114561/test/extended/testdata/scoped-router.yaml -p IMAGE=openshift/origin-haproxy-router'
--> Deploying template "extended-test-scoped-router-n8f05-cb670/" for "/tmp/fixture-testdata-dir471114561/test/extended/testdata/scoped-router.yaml" to project extended-test-scoped-router-n8f05-cb670
* With parameters:
* IMAGE=openshift/origin-haproxy-router
--> Creating resources ...
pod "scoped-router" created
pod "router-override" created
rolebinding "system-router" created
route "route-1" created
route "route-2" created
service "endpoints" created
pod "endpoint-1" created
--> Success
Run 'oc status' to view your app.
[It] should serve the correct routes when scoped to a single namespace and label set
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:90
Jun 24 01:51:01.113: INFO: Creating new exec pod
STEP: creating a scoped router from a config file "/tmp/fixture-testdata-dir471114561/test/extended/testdata/scoped-router.yaml"
STEP: waiting for the healthz endpoint to respond
Jun 24 01:52:02.145: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.18.11.204:8443 --kubeconfig=/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-scoped-router-n8f05-cb670 execpod -- /bin/sh -c
set -e
for i in $(seq 1 180); do
code=$( curl -k -s -o /dev/null -w '%{http_code}\n' --header 'Host: 172.17.0.7' "http://172.17.0.7:1936/healthz" ) || rc=$?
if [[ "${rc:-0}" -eq 0 ]]; then echo $code if [[ $code -eq 200 ]]; then exit 0 fi if [[ $code -ne 503 ]]; then exit 1 fi else echo "error ${rc}" 1>&2 fi sleep 1 done ' Jun 24 01:52:03.521: INFO: stderr: "" STEP: waiting for the valid route to respond Jun 24 01:52:03.522: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.18.11.204:8443 --kubeconfig=/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-scoped-router-n8f05-cb670 execpod -- /bin/sh -c set -e for i in $(seq 1 180); do code=$( curl -k -s -o /dev/null -w '%{http_code}\n' --header 'Host: first.example.com' "http://172.17.0.7" ) || rc=$? if [[ "${rc:-0}" -eq 0 ]]; then echo $code if [[ $code -eq 200 ]]; then exit 0 fi if [[ $code -ne 503 ]]; then exit 1 fi else echo "error ${rc}" 1>&2 fi sleep 1 done ' Jun 24 01:52:04.419: INFO: stderr: "" STEP: checking that second.example.com does not match a route Jun 24 01:52:04.419: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.18.11.204:8443 --kubeconfig=/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-scoped-router-n8f05-cb670 execpod -- /bin/sh -c curl -s -o /dev/null -w '%{http_code}' --header 'Host: second.example.com' "http://172.17.0.7"' Jun 24 01:52:04.852: INFO: stderr: "" STEP: checking that third.example.com does not match a route Jun 24 01:52:04.852: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.18.11.204:8443 --kubeconfig=/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-scoped-router-n8f05-cb670 execpod -- /bin/sh -c curl -s -o /dev/null -w '%{http_code}' --header 'Host: third.example.com' "http://172.17.0.7"' Jun 24 01:52:05.364: INFO: stderr: "" Jun 24 01:52:05.400: INFO: Scoped Router test [Conformance][networking][router] 
openshift routers The HAProxy router should serve the correct routes when scoped to a single namespace and label set logs: I0624 05:51:29.774169 1 merged_client_builder.go:123] Using in-cluster configuration I0624 05:51:29.778621 1 reflector.go:187] Starting reflector *api.Service (10m0s) from github.com/openshift/origin/pkg/router/template/service_lookup.go:31 I0624 05:51:29.779006 1 reflector.go:236] Listing and watching *api.Service from github.com/openshift/origin/pkg/router/template/service_lookup.go:31 I0624 05:51:29.813024 1 router.go:156] Creating a new template router, writing to /var/lib/haproxy/router I0624 05:51:29.813138 1 router.go:350] Template router will coalesce reloads within 5 seconds of each other I0624 05:51:29.813160 1 router.go:400] Router default cert from router container I0624 05:51:29.813167 1 router.go:214] Reading persisted state I0624 05:51:29.834652 1 router.go:218] Committing state I0624 05:51:29.834664 1 router.go:455] Writing the router state I0624 05:51:29.846657 1 router.go:462] Writing the router config I0624 05:51:29.849482 1 router.go:476] Reloading the router E0624 05:51:29.855129 1 reflector.go:190] github.com/openshift/origin/pkg/router/template/service_lookup.go:31: Failed to list *api.Service: User "system:serviceaccount:extended-test-scoped-router-n8f05-cb670:default" cannot list all services in the cluster I0624 05:51:30.134017 1 router.go:554] Router reloaded: - Proxy protocol 'FALSE'. Checking HAProxy /healthz on port 1936 ... - HAProxy port 1936 health check ok : 0 retry attempt(s). 
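The healthz and route probes this test pipes into the exec pod all follow one retry pattern: poll once per second, succeed on HTTP 200, keep waiting on 503 (the router is still reloading), and fail hard on anything else. A minimal standalone sketch of that loop, with the probe command made injectable for testing; the `wait_until_ok` helper name is a hypothetical refactoring, not code from the suite:

```shell
#!/bin/sh
# wait_until_ok ATTEMPTS CMD...
# Runs CMD (which must print an HTTP status code) up to ATTEMPTS times,
# one second apart, mirroring the loop the test runs inside the exec pod:
#   200           -> success: print the code, return 0
#   503           -> backend not ready yet: sleep 1s and retry
#   anything else -> hard failure: print the code, return 1
#   CMD fails     -> report the error on stderr and retry
wait_until_ok() {
  attempts=$1; shift
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if code=$("$@"); then
      echo "$code"
      if [ "$code" -eq 200 ]; then return 0; fi
      if [ "$code" -ne 503 ]; then return 1; fi
    else
      echo "error $?" 1>&2
    fi
    sleep 1
    i=$((i + 1))
  done
  return 1
}
```

In the log the probe is the `curl -k -s -o /dev/null -w '%{http_code}' --header 'Host: first.example.com' http://172.17.0.7` one-liner; HAProxy routes on the Host header, which is why the later second.example.com and third.example.com checks are expected not to match any route.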
I0624 05:51:30.134092 1 router.go:237] Router is only using resources in namespace extended-test-scoped-router-n8f05-cb670
I0624 05:51:30.134128 1 reflector.go:187] Starting reflector *api.Route (10m0s) from github.com/openshift/origin/pkg/router/controller/factory/factory.go:88
I0624 05:51:30.134164 1 reflector.go:187] Starting reflector *api.Endpoints (10m0s) from github.com/openshift/origin/pkg/router/controller/factory/factory.go:95
I0624 05:51:30.134213 1 router_controller.go:70] Running router controller
I0624 05:51:30.134230 1 reaper.go:17] Launching reaper
I0624 05:51:30.134332 1 reflector.go:236] Listing and watching *api.Route from github.com/openshift/origin/pkg/router/controller/factory/factory.go:88
I0624 05:51:30.134681 1 reflector.go:236] Listing and watching *api.Endpoints from github.com/openshift/origin/pkg/router/controller/factory/factory.go:95
I0624 05:51:30.174732 1 plugin.go:159] Processing 0 Endpoints for Name: endpoints (MODIFIED)
I0624 05:51:30.174763 1 plugin.go:171] Modifying endpoints for extended-test-scoped-router-n8f05-cb670/endpoints
I0624 05:51:30.174776 1 router.go:823] Ignoring change for extended-test-scoped-router-n8f05-cb670/endpoints, endpoints are the same
I0624 05:51:30.174784 1 router_controller.go:296] Router sync in progress
I0624 05:51:30.181714 1 router_controller.go:305] Processing Route: extended-test-scoped-router-n8f05-cb670/route-1 -> endpoints
I0624 05:51:30.181728 1 router_controller.go:306] Alias: first.example.com
I0624 05:51:30.181733 1 router_controller.go:307] Path:
I0624 05:51:30.181738 1 router_controller.go:308] Event: MODIFIED
I0624 05:51:30.181746 1 router.go:132] host first.example.com admitted
I0624 05:51:30.181853 1 unique_host.go:195] Route extended-test-scoped-router-n8f05-cb670/route-1 claims first.example.com
I0624 05:51:30.181871 1 status.go:179] has last touch <nil> for extended-test-scoped-router-n8f05-cb670/route-1
I0624 05:51:30.181902 1 status.go:269] admit: admitting route by updating status: route-1 (true): first.example.com
I0624 05:51:30.236439 1 router.go:781] Adding route extended-test-scoped-router-n8f05-cb670/route-1
I0624 05:51:30.236459 1 router_controller.go:298] Router sync complete
I0624 05:51:30.236466 1 router.go:435] Router state synchronized for the first time
I0624 05:51:30.236522 1 router.go:455] Writing the router state
I0624 05:51:30.236706 1 router_controller.go:305] Processing Route: extended-test-scoped-router-n8f05-cb670/route-1 -> endpoints
I0624 05:51:30.236715 1 router_controller.go:306] Alias: first.example.com
I0624 05:51:30.236720 1 router_controller.go:307] Path:
I0624 05:51:30.236724 1 router_controller.go:308] Event: MODIFIED
I0624 05:51:30.236733 1 router.go:132] host first.example.com admitted
I0624 05:51:30.236766 1 status.go:245] admit: route already admitted
I0624 05:51:30.306260 1 router.go:462] Writing the router config
I0624 05:51:30.308223 1 router.go:476] Reloading the router
I0624 05:51:30.371492 1 reaper.go:24] Signal received: child exited
I0624 05:51:30.371551 1 reaper.go:32] Reaped process with pid 28
I0624 05:51:30.382315 1 router.go:554] Router reloaded:
 - Proxy protocol 'FALSE'. Checking HAProxy /healthz on port 1936 ...
 - HAProxy port 1936 health check ok : 0 retry attempt(s).
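The "Failed to list *api.Service" reflector errors that repeat once per second below come from the router's service-lookup cache: the template plugin tries to list services cluster-wide, but the pod runs as the project's `default` service account, which has no cluster-scope read access. The scoped router keeps working regardless, since it only consumes resources from its own namespace. In a disposable test cluster one could quiet the reflector with a grant along these lines (the `cluster-reader` role here is an assumption about a sufficient role, not something the test actually does):

```shell
# Hypothetical: give the router pod's service account cluster-wide read
# access so the service_lookup.go reflector can list services.
oc adm policy add-cluster-role-to-user cluster-reader \
    system:serviceaccount:extended-test-scoped-router-n8f05-cb670:default \
    --config=/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
```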
I0624 05:51:30.382417 1 reaper.go:24] Signal received: child exited
I0624 05:51:30.858245 1 reflector.go:236] Listing and watching *api.Service from github.com/openshift/origin/pkg/router/template/service_lookup.go:31
E0624 05:51:30.862078 1 reflector.go:190] github.com/openshift/origin/pkg/router/template/service_lookup.go:31: Failed to list *api.Service: User "system:serviceaccount:extended-test-scoped-router-n8f05-cb670:default" cannot list all services in the cluster
I0624 05:51:36.890837 1 reflector.go:236] Listing and watching *api.Service from github.com/openshift/origin/pkg/router/template/service_lookup.go:31
E0624 05:51:36.893763 1 reflector.go:190] github.com/openshift/origin/pkg/router/template/service_lookup.go:31: Failed to list *api.Service: User "system:serviceaccount:extended-test-scoped-router-n8f05-cb670:default" cannot list all services in the cluster
I0624 05:51:37.540422 1 router_controller.go:305] Processing Route: extended-test-scoped-router-n8f05-cb670/route-1 -> endpoints
I0624 05:51:37.540445 1 router_controller.go:306] Alias: first.example.com
I0624 05:51:37.540450 1 router_controller.go:307] Path:
I0624 05:51:37.540455 1 router_controller.go:308] Event: MODIFIED
I0624 05:51:37.540466 1 router.go:132] host first.example.com admitted
I0624 05:51:37.540506 1 status.go:179] has last touch 2019-06-24 05:51:37 +0000 UTC for extended-test-scoped-router-n8f05-cb670/route-1
I0624 05:51:37.540529 1 status.go:196] different cached last touch of 2019-06-24 05:51:30 +0000 UTC
I0624 05:51:37.540558 1 status.go:261] admit: observed a route update from someone else: route extended-test-scoped-router-n8f05-cb670/route-1 has been updated to an inconsistent value, doing nothing
I0624 05:51:37.893988 1 reflector.go:236] Listing and watching *api.Service from github.com/openshift/origin/pkg/router/template/service_lookup.go:31
E0624 05:51:37.897427 1 reflector.go:190] github.com/openshift/origin/pkg/router/template/service_lookup.go:31: Failed to list *api.Service: User "system:serviceaccount:extended-test-scoped-router-n8f05-cb670:default" cannot list all services in the cluster
I0624 05:51:55.003279 1 reflector.go:236] Listing and watching *api.Service from github.com/openshift/origin/pkg/router/template/service_lookup.go:31
E0624 05:51:55.008712 1 reflector.go:190] github.com/openshift/origin/pkg/router/template/service_lookup.go:31: Failed to list *api.Service: User "system:serviceaccount:extended-test-scoped-router-n8f05-cb670:default" cannot list all services in the cluster
I0624 05:51:55.310480 1 plugin.go:159] Processing 1 Endpoints for Name: endpoints (MODIFIED)
I0624 05:51:55.310502 1 plugin.go:162] Subset 0 : api.EndpointSubset{Addresses:[]api.EndpointAddress{api.EndpointAddress{IP:"172.17.0.9", Hostname:"", NodeName:(*string)(0xc42113dbf0), TargetRef:(*api.ObjectReference)(0xc42070ca10)}}, NotReadyAddresses:[]api.EndpointAddress(nil), Ports:[]api.EndpointPort{api.EndpointPort{Name:"", Port:8080, Protocol:"TCP"}}}
I0624 05:51:55.310617 1 plugin.go:171] Modifying endpoints for extended-test-scoped-router-n8f05-cb670/endpoints
I0624 05:51:55.310667 1 router.go:455] Writing the router state
I0624 05:51:55.310849 1 router.go:462] Writing the router config
I0624 05:51:55.316464 1 router.go:476] Reloading the router
I0624 05:51:55.439705 1 reaper.go:24] Signal received: child exited
I0624 05:51:55.439751 1 reaper.go:32] Reaped process with pid 53
I0624 05:51:55.479127 1 router.go:554] Router reloaded:
 - Proxy protocol 'FALSE'. Checking HAProxy /healthz on port 1936 ...
 - HAProxy port 1936 health check ok : 0 retry attempt(s).
I0624 05:51:55.479224 1 reaper.go:24] Signal received: child exited
I0624 05:51:56.012337 1 reflector.go:236] Listing and watching *api.Service from github.com/openshift/origin/pkg/router/template/service_lookup.go:31
E0624 05:51:56.016078 1 reflector.go:190] github.com/openshift/origin/pkg/router/template/service_lookup.go:31: Failed to list *api.Service: User "system:serviceaccount:extended-test-scoped-router-n8f05-cb670:default" cannot list all services in the cluster
I0624 05:52:05.096356 1 reflector.go:236] Listing and watching *api.Service from github.com/openshift/origin/pkg/router/template/service_lookup.go:31
E0624 05:52:05.101787 1 reflector.go:190] github.com/openshift/origin/pkg/router/template/service_lookup.go:31: Failed to list *api.Service: User "system:serviceaccount:extended-test-scoped-router-n8f05-cb670:default" cannot list all services in the cluster
[AfterEach] [Conformance][networking][router] openshift routers
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 01:52:05.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "extended-test-scoped-router-n8f05-cb670" for this suite.
Jun 24 01:52:30.593: INFO: namespace: extended-test-scoped-router-n8f05-cb670, resource: bindings, ignored listing per whitelist
• [SLOW TEST:90.566 seconds]
[Conformance][networking][router] openshift routers
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:147
  The HAProxy router
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:146
    should serve the correct routes when scoped to a single namespace and label set
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:90
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
idling and unidling [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/idling/idling.go:468
  unidling
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/idling/idling.go:467
    should work with TCP (while idling) [local]
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/idling/idling.go:335

  skipping tests not in the Origin conformance suite
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[Feature:ImageQuota] Image limit range [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/imageapis/limitrange_admission.go:225
  should deny a docker image reference exceeding limit on openshift.io/image-tags resource
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/imageapis/limitrange_admission.go:183

  skipping tests not in the Origin conformance suite
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[builds][Slow] openshift pipeline build [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/pipeline.go:395
  Pipeline with maven slave
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/pipeline.go:156
    should build and complete successfully
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/pipeline.go:155

  skipping tests not in the Origin conformance suite
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds]
[k8s.io] Dynamic provisioning [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  [k8s.io] DynamicProvisioner External
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
    should let an external dynamic provisioner create and delete persistent volumes [Slow] [Volume]
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/volume_provisioning.go:281

  skipping tests not in the Origin conformance suite
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] PersistentVolumes [Volume][Serial] [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  [k8s.io] PersistentVolumes:NFS[Flaky]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
    when invoking the Recycle reclaim policy
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/persistent_volumes.go:291
      should test that a PV becomes Available and is clean after the PVC is deleted. [Volume][Serial][Flaky]
      /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/persistent_volumes.go:290

  skipping tests not in the Origin conformance suite
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
deploymentconfigs should respect image stream tag reference policy [Conformance] resolve the image pull spec
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:238
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] deploymentconfigs
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:52:12.726: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:52:12.792: INFO: configPath is now "/tmp/extended-test-cli-deployment-n81dj-9h333-user.kubeconfig"
Jun 24 01:52:12.792: INFO: The user is now "extended-test-cli-deployment-n81dj-9h333-user"
Jun 24 01:52:12.792: INFO: Creating project "extended-test-cli-deployment-n81dj-9h333"
STEP: Waiting for a default service account to be provisioned in namespace
[It] resolve the image pull spec
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:238
Jun 24 01:52:12.982: INFO: Running 'oc create
--config=/tmp/extended-test-cli-deployment-n81dj-9h333-user.kubeconfig --namespace=extended-test-cli-deployment-n81dj-9h333 -f /tmp/fixture-testdata-dir824123364/test/extended/testdata/deployments/deployment-image-resolution.yaml' imagestream "deployment-image-resolution" created deploymentconfig "deployment-image-resolution" created [AfterEach] should respect image stream tag reference policy [Conformance] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:207 [AfterEach] deploymentconfigs /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121 Jun 24 01:52:13.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-cli-deployment-n81dj-9h333" for this suite. Jun 24 01:52:34.080: INFO: namespace: extended-test-cli-deployment-n81dj-9h333, resource: bindings, ignored listing per whitelist • [SLOW TEST:21.484 seconds] deploymentconfigs /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:979 should respect image stream tag reference policy [Conformance] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:239 resolve the image pull spec /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:238 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup 
(BeforeEach) [0.001 seconds] [k8s.io] Daemon set [Serial] [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 should run and stop complex daemon /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/daemon_set.go:176 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [k8s.io] Upgrade [Feature:Upgrade] [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 [k8s.io] node upgrade /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 should maintain a functioning cluster [Feature:NodeUpgrade] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:98 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [k8s.io] DNS configMap nameserver [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 should be able to change stubDomain configuration /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/dns_configmap.go:195 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds] [networking] network isolation [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:58 when using a plugin that does not isolate namespaces by default /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:261 should allow communication between pods in different namespaces on different nodes /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:21 skipping tests not in the Origin conformance suite 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [k8s.io] ReplicaSet [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 should serve a basic image on each replica with a private image /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/replica_set.go:88 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [k8s.io] PersistentVolumes [Volume][Serial] [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 [k8s.io] PersistentVolumes:NFS[Flaky] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 with Single PV - PVC pairs 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/persistent_volumes.go:190 create a PVC and a pre-bound PV: test write access /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/persistent_volumes.go:181 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [k8s.io] Federation namespace [Feature:Federation] [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 Namespace objects /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/namespace.go:185 deletes replicasets in the namespace when the namespace is deleted /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/namespace.go:148 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [templates] templateservicebroker security test [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateservicebroker_security.go:189 should pass security tests /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateservicebroker_security.go:188 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [image_ecosystem][Slow] openshift images should be SCL enabled [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:76 returning s2i usage when running the image /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:38 "ci.dev.openshift.redhat.com:5000/openshift/nodejs-010-rhel7" should print the usage /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:37 skipping tests not in the Origin conformance suite 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [image_ecosystem][Slow] openshift images should be SCL enabled [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:76 returning s2i usage when running the image /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:38 "openshift/php-55-centos7" should print the usage /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:37 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should allow template updates /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/statefulset.go:284 [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [k8s.io] StatefulSet 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120 STEP: Creating a kubernetes client Jun 24 01:49:36.984: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Jun 24 01:49:37.288: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] StatefulSet /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/statefulset.go:63 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/statefulset.go:84 STEP: Creating service test in namespace e2e-tests-statefulset-86chj [It] should allow template updates /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/statefulset.go:284 STEP: Creating stateful set ss in namespace e2e-tests-statefulset-86chj Jun 24 01:49:37.909: INFO: Found 0 stateful pods, waiting for 2 Jun 24 01:49:47.911: INFO: Found 1 stateful pods, waiting for 2 Jun 24 01:49:57.912: INFO: Found 1 stateful pods, waiting for 2 Jun 24 01:50:07.912: INFO: Found 1 stateful pods, waiting for 2 Jun 24 01:50:17.912: INFO: Found 1 stateful pods, waiting for 2 Jun 24 01:50:27.911: INFO: Found 1 stateful pods, waiting for 2 Jun 24 01:50:37.914: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jun 24 01:50:37.914: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Pending - Ready=false Jun 24 01:50:47.912: INFO: 
Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jun 24 01:50:47.912: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from gcr.io/google_containers/nginx-slim:0.7 to gcr.io/google_containers/nginx-slim:0.8 Jun 24 01:50:47.935: INFO: Updating stateful set ss STEP: Deleting stateful pod at index 0 STEP: Waiting for all stateful pods to be running again Jun 24 01:50:47.965: INFO: Found 1 stateful pods, waiting for 2 Jun 24 01:50:57.968: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Jun 24 01:51:07.968: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Jun 24 01:51:17.968: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Jun 24 01:51:27.968: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Jun 24 01:51:37.969: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Jun 24 01:51:47.971: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jun 24 01:51:47.971: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying stateful pod at index 0 is updated [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/statefulset.go:92 Jun 24 01:51:47.974: INFO: Deleting all statefulset in ns e2e-tests-statefulset-86chj Jun 24 01:51:47.979: INFO: Scaling statefulset ss to 0 Jun 24 01:52:18.021: INFO: Waiting for statefulset status.replicas updated to 0 Jun 24 01:52:18.023: INFO: Deleting statefulset ss Jun 24 01:52:18.030: INFO: Deleting pvc: datadir-ss-0 with volume pvc-da378207-9643-11e9-9f9d-0e9110352016 
Jun 24 01:52:18.033: INFO: Deleting pvc: datadir-ss-1 with volume pvc-fa758d1e-9643-11e9-9f9d-0e9110352016 Jun 24 01:52:18.045: INFO: Still waiting for pvs of statefulset to disappear: pvc-da378207-9643-11e9-9f9d-0e9110352016: {Phase:Released Message: Reason:} pvc-fa758d1e-9643-11e9-9f9d-0e9110352016: {Phase:Bound Message: Reason:} [AfterEach] [k8s.io] StatefulSet /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121 Jun 24 01:52:28.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-86chj" for this suite. Jun 24 01:52:38.263: INFO: namespace: e2e-tests-statefulset-86chj, resource: bindings, ignored listing per whitelist • [SLOW TEST:181.340 seconds] [k8s.io] StatefulSet /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 should allow template updates /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/statefulset.go:284 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [k8s.io] kubelet [BeforeEach] 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 [k8s.io] host cleanup with volume mounts [HostCleanup][Flaky] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 Host cleanup after pod using NFS mount is deleted [Volume][NFS] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubelet.go:448 after deleting the nfs-server, the host should be cleaned-up when deleting sleeping pod which mounts an NFS vol [Serial] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubelet.go:446 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [k8s.io] Port forwarding [k8s.io] With a server listening on localhost [k8s.io] that expects a client request should support a client that connects, sends no data, and disconnects [Conformance] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/portforward.go:499 [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [k8s.io] Port forwarding 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120 STEP: Creating a kubernetes client Jun 24 01:51:29.994: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Jun 24 01:51:30.082: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Waiting for a default service account to be provisioned in namespace [It] should support a client that connects, sends no data, and disconnects [Conformance] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/portforward.go:499 STEP: creating the target pod STEP: Running 'kubectl port-forward' Jun 24 01:52:10.153: INFO: starting port-forward command and streaming output Jun 24 01:52:10.153: INFO: Asynchronously running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl kubectl --server=https://172.18.11.204:8443 --kubeconfig=/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig port-forward --namespace=e2e-tests-port-forwarding-jxqm3 pfpod :80' Jun 24 01:52:10.169: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.18.11.204:8443 --kubeconfig=/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig version --client' Jun 24 01:52:10.629: INFO: stderr: "" Jun 24 01:52:10.629: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"6\", GitVersion:\"v1.6.1+5115d708d7\", GitCommit:\"5115d70\", GitTreeState:\"clean\", BuildDate:\"2017-06-06T22:41:15Z\", GoVersion:\"go1.7.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" Jun 24 01:52:10.630: INFO: reading from `kubectl port-forward` command's stdout STEP: Dialing the local port STEP: Closing the connection to the local port STEP: 
Waiting for the target pod to stop running Jun 24 01:52:11.032: INFO: Waiting up to 5m0s for pod pfpod status to be container terminated Jun 24 01:52:11.065: INFO: Waiting for pod pfpod in namespace 'e2e-tests-port-forwarding-jxqm3' status to be 'container terminated'(found phase: "Running", readiness: true) (32.752618ms elapsed) Jun 24 01:52:13.074: INFO: Waiting for pod pfpod in namespace 'e2e-tests-port-forwarding-jxqm3' status to be 'container terminated'(found phase: "Running", readiness: true) (2.04161279s elapsed) STEP: Verifying logs Jun 24 01:52:16.475: INFO: Pod log: Accepted client connection Expected to read 3 bytes from client, but got 0 instead. err=EOF [AfterEach] [k8s.io] Port forwarding /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121 Jun 24 01:52:16.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-port-forwarding-jxqm3" for this suite. 
Jun 24 01:52:41.552: INFO: namespace: e2e-tests-port-forwarding-jxqm3, resource: bindings, ignored listing per whitelist • [SLOW TEST:71.679 seconds] [k8s.io] Port forwarding /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 [k8s.io] With a server listening on localhost /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 [k8s.io] that expects a client request /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 should support a client that connects, sends no data, and disconnects [Conformance] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/portforward.go:499 ------------------------------ [k8s.io] Security Context [Feature:SecurityContext] should support seccomp alpha unconfined annotation on the pod [Feature:Seccomp] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/security_context.go:125 [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [k8s.io] Security Context [Feature:SecurityContext] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120 STEP: Creating a kubernetes client Jun 24 01:52:34.251: INFO: >>> kubeConfig: 
/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Jun 24 01:52:34.324: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp alpha unconfined annotation on the pod [Feature:Seccomp] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/security_context.go:125 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Jun 24 01:52:34.467: INFO: Waiting up to 5m0s for pod security-context-4369f0a2-9644-11e9-a4d2-0e9110352016 status to be success or failure Jun 24 01:52:34.476: INFO: Waiting for pod security-context-4369f0a2-9644-11e9-a4d2-0e9110352016 in namespace 'e2e-tests-security-context-5r3ck' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.878119ms elapsed) Jun 24 01:52:36.481: INFO: Waiting for pod security-context-4369f0a2-9644-11e9-a4d2-0e9110352016 in namespace 'e2e-tests-security-context-5r3ck' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.014037861s elapsed) Jun 24 01:52:38.492: INFO: Waiting for pod security-context-4369f0a2-9644-11e9-a4d2-0e9110352016 in namespace 'e2e-tests-security-context-5r3ck' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.025036151s elapsed) Jun 24 01:52:40.502: INFO: Waiting for pod security-context-4369f0a2-9644-11e9-a4d2-0e9110352016 in namespace 'e2e-tests-security-context-5r3ck' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.035044224s elapsed) Jun 24 01:52:42.505: INFO: Waiting for pod security-context-4369f0a2-9644-11e9-a4d2-0e9110352016 in namespace 'e2e-tests-security-context-5r3ck' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.038431491s elapsed) Jun 24 01:52:44.511: INFO: 
Waiting for pod security-context-4369f0a2-9644-11e9-a4d2-0e9110352016 in namespace 'e2e-tests-security-context-5r3ck' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.044038801s elapsed) Jun 24 01:52:46.517: INFO: Waiting for pod security-context-4369f0a2-9644-11e9-a4d2-0e9110352016 in namespace 'e2e-tests-security-context-5r3ck' status to be 'success or failure'(found phase: "Pending", readiness: false) (12.050389156s elapsed) Jun 24 01:52:48.523: INFO: Waiting for pod security-context-4369f0a2-9644-11e9-a4d2-0e9110352016 in namespace 'e2e-tests-security-context-5r3ck' status to be 'success or failure'(found phase: "Pending", readiness: false) (14.056073394s elapsed) Jun 24 01:52:50.526: INFO: Waiting for pod security-context-4369f0a2-9644-11e9-a4d2-0e9110352016 in namespace 'e2e-tests-security-context-5r3ck' status to be 'success or failure'(found phase: "Pending", readiness: false) (16.059133181s elapsed) STEP: Saw pod success Jun 24 01:52:52.532: INFO: Trying to get logs from node 172.18.11.204 pod security-context-4369f0a2-9644-11e9-a4d2-0e9110352016 container test-container: <nil> STEP: delete the pod Jun 24 01:52:53.266: INFO: Waiting for pod security-context-4369f0a2-9644-11e9-a4d2-0e9110352016 to disappear Jun 24 01:52:53.268: INFO: Pod security-context-4369f0a2-9644-11e9-a4d2-0e9110352016 no longer exists [AfterEach] [k8s.io] Security Context [Feature:SecurityContext] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121 Jun 24 01:52:53.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-security-context-5r3ck" for this suite. 
Jun 24 01:53:03.465: INFO: namespace: e2e-tests-security-context-5r3ck, resource: bindings, ignored listing per whitelist
• [SLOW TEST:29.229 seconds]
[k8s.io] Security Context [Feature:SecurityContext]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  should support seccomp alpha unconfined annotation on the pod [Feature:Seccomp]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/security_context.go:125
------------------------------
[k8s.io] Downward API volume
  should provide podname as non-root with fsgroup [Feature:FSGroup] [Volume]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:83
[BeforeEach] [Top Level]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Downward API volume
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:52:38.330: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:52:38.504: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Downward API volume
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide podname as non-root with fsgroup [Feature:FSGroup] [Volume]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:83
STEP: Creating a pod to test downward API volume plugin
Jun 24 01:52:38.709: INFO: Waiting up to 5m0s for pod metadata-volume-45f1a63c-9644-11e9-a527-0e9110352016 status to be success or failure
Jun 24 01:52:38.733: INFO: Waiting for pod metadata-volume-45f1a63c-9644-11e9-a527-0e9110352016 in namespace 'e2e-tests-downward-api-rr35k' status to be 'success or failure'(found phase: "Pending", readiness: false) (23.703149ms elapsed)
Jun 24 01:52:40.737: INFO: Waiting for pod metadata-volume-45f1a63c-9644-11e9-a527-0e9110352016 in namespace 'e2e-tests-downward-api-rr35k' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.027867909s elapsed)
Jun 24 01:52:42.743: INFO: Waiting for pod metadata-volume-45f1a63c-9644-11e9-a527-0e9110352016 in namespace 'e2e-tests-downward-api-rr35k' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.033369082s elapsed)
Jun 24 01:52:44.750: INFO: Waiting for pod metadata-volume-45f1a63c-9644-11e9-a527-0e9110352016 in namespace 'e2e-tests-downward-api-rr35k' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.040512798s elapsed)
Jun 24 01:52:46.758: INFO: Waiting for pod metadata-volume-45f1a63c-9644-11e9-a527-0e9110352016 in namespace 'e2e-tests-downward-api-rr35k' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.048751792s elapsed)
Jun 24 01:52:48.763: INFO: Waiting for pod metadata-volume-45f1a63c-9644-11e9-a527-0e9110352016 in namespace 'e2e-tests-downward-api-rr35k' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.05375534s elapsed)
Jun 24 01:52:50.767: INFO: Waiting for pod metadata-volume-45f1a63c-9644-11e9-a527-0e9110352016 in namespace 'e2e-tests-downward-api-rr35k' status to be 'success or failure'(found phase: "Pending", readiness: false) (12.057888574s elapsed)
Jun 24 01:52:52.772: INFO: Waiting for pod metadata-volume-45f1a63c-9644-11e9-a527-0e9110352016 in namespace 'e2e-tests-downward-api-rr35k' status to be 'success or failure'(found phase: "Pending", readiness: false) (14.063098702s elapsed)
STEP: Saw pod success
Jun 24 01:52:54.781: INFO: Trying to get logs from node 172.18.11.204 pod metadata-volume-45f1a63c-9644-11e9-a527-0e9110352016 container client-container: <nil>
STEP: delete the pod
Jun 24 01:52:54.810: INFO: Waiting for pod metadata-volume-45f1a63c-9644-11e9-a527-0e9110352016 to disappear
Jun 24 01:52:54.814: INFO: Pod metadata-volume-45f1a63c-9644-11e9-a527-0e9110352016 no longer exists
[AfterEach] [k8s.io] Downward API volume
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 01:52:54.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-rr35k" for this suite.
Jun 24 01:53:04.992: INFO: namespace: e2e-tests-downward-api-rr35k, resource: bindings, ignored listing per whitelist
• [SLOW TEST:26.705 seconds]
[k8s.io] Downward API volume
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  should provide podname as non-root with fsgroup [Feature:FSGroup] [Volume]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:83
------------------------------
[BeforeEach] [Top Level]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] PersistentVolumes:vsphere [BeforeEach]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/persistent_volumes-vsphere.go:149
  skipping tests not in the Origin conformance suite
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Cadvisor [BeforeEach]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  should be healthy on every node.
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/cadvisor.go:36
  skipping tests not in the Origin conformance suite
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[builds][quota][Slow] docker build with a quota [BeforeEach]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/docker_quota.go:58
  Building from a template
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/docker_quota.go:57
  should create a docker build with a quota and run it
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/docker_quota.go:56
  skipping tests not in the Origin conformance suite
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[k8s.io] Secrets
  should be consumable from pods in volume [Conformance] [Volume]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/secrets.go:39
[BeforeEach] [Top Level]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Secrets
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:52:41.677: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:52:41.815: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [Conformance] [Volume]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/secrets.go:39
STEP: Creating secret with name secret-test-47d942ac-9644-11e9-afd4-0e9110352016
STEP: Creating a pod to test consume secrets
Jun 24 01:52:41.909: INFO: Waiting up to 5m0s for pod pod-secrets-47da5aac-9644-11e9-afd4-0e9110352016 status to be success or failure
Jun 24 01:52:41.911: INFO: Waiting for pod pod-secrets-47da5aac-9644-11e9-afd4-0e9110352016 in namespace 'e2e-tests-secrets-97q7n' status to be 'success or failure'(found phase: "Pending", readiness: false) (1.872388ms elapsed)
Jun 24 01:52:43.919: INFO: Waiting for pod pod-secrets-47da5aac-9644-11e9-afd4-0e9110352016 in namespace 'e2e-tests-secrets-97q7n' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.009572642s elapsed)
Jun 24 01:52:45.923: INFO: Waiting for pod pod-secrets-47da5aac-9644-11e9-afd4-0e9110352016 in namespace 'e2e-tests-secrets-97q7n' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.013567694s elapsed)
Jun 24 01:52:47.927: INFO: Waiting for pod pod-secrets-47da5aac-9644-11e9-afd4-0e9110352016 in namespace 'e2e-tests-secrets-97q7n' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.017179196s elapsed)
Jun 24 01:52:49.935: INFO: Waiting for pod pod-secrets-47da5aac-9644-11e9-afd4-0e9110352016 in namespace 'e2e-tests-secrets-97q7n' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.026066571s elapsed)
Jun 24 01:52:51.957: INFO: Waiting for pod pod-secrets-47da5aac-9644-11e9-afd4-0e9110352016 in namespace 'e2e-tests-secrets-97q7n' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.047221305s elapsed)
Jun 24 01:52:53.970: INFO: Waiting for pod pod-secrets-47da5aac-9644-11e9-afd4-0e9110352016 in namespace 'e2e-tests-secrets-97q7n' status to be 'success or failure'(found phase: "Pending", readiness: false) (12.06037824s elapsed)
STEP: Saw pod success
Jun 24 01:52:55.977: INFO: Trying to get logs from node 172.18.11.204 pod pod-secrets-47da5aac-9644-11e9-afd4-0e9110352016 container secret-volume-test: <nil>
STEP: delete the pod
Jun 24 01:52:56.113: INFO: Waiting for pod pod-secrets-47da5aac-9644-11e9-afd4-0e9110352016 to disappear
Jun 24 01:52:56.116: INFO: Pod pod-secrets-47da5aac-9644-11e9-afd4-0e9110352016 no longer exists
[AfterEach] [k8s.io] Secrets
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 01:52:56.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-97q7n" for this suite.
Jun 24 01:53:06.294: INFO: namespace: e2e-tests-secrets-97q7n, resource: bindings, ignored listing per whitelist
• [SLOW TEST:24.624 seconds]
[k8s.io] Secrets
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  should be consumable from pods in volume [Conformance] [Volume]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/secrets.go:39
------------------------------
[BeforeEach] [Top Level]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] PersistentVolumes [Volume][Disruptive][Flaky] [BeforeEach]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  when kubelet restarts
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/persistent_volumes-disruptive.go:146
  Should test that a volume mounted to a pod that is deleted while the kubelet is down unmounts when the kubelet returns.
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/persistent_volumes-disruptive.go:143
  skipping tests not in the Origin conformance suite
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[k8s.io] LimitRange
  should create a LimitRange with defaults and ensure pod has those defaults applied.
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/limit_range.go:103
[BeforeEach] [Top Level]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] LimitRange
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:53:05.044: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:53:05.133: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a LimitRange with defaults and ensure pod has those defaults applied.
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/limit_range.go:103
STEP: Creating a LimitRange
STEP: Fetching the LimitRange to ensure it has proper values
Jun 24 01:53:05.229: INFO: Verifying requests: expected map[cpu:{{100 -3} {<nil>} 100m DecimalSI} memory:{{209715200 0} {<nil>} BinarySI}] with actual map[memory:{{209715200 0} {<nil>} BinarySI} cpu:{{100 -3} {<nil>} 100m DecimalSI}]
Jun 24 01:53:05.229: INFO: Verifying limits: expected map[cpu:{{500 -3} {<nil>} 500m DecimalSI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {<nil>} 500m DecimalSI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}]
STEP: Creating a Pod with no resource requirements
STEP: Ensuring Pod has resource requirements applied from LimitRange
Jun 24 01:53:05.247: INFO: Verifying requests: expected map[cpu:{{100 -3} {<nil>} 100m DecimalSI} memory:{{209715200 0} {<nil>} BinarySI}] with actual map[memory:{{209715200 0} {<nil>} BinarySI} cpu:{{100 -3} {<nil>} 100m DecimalSI}]
Jun 24 01:53:05.247: INFO: Verifying limits: expected map[cpu:{{500 -3} {<nil>} 500m DecimalSI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}] with actual map[memory:{{524288000 0} {<nil>} 500Mi BinarySI} cpu:{{500 -3} {<nil>} 500m DecimalSI}]
STEP: Creating a Pod with partial resource requirements
STEP: Ensuring Pod has merged resource requirements applied from LimitRange
Jun 24 01:53:05.255: INFO: Verifying requests: expected map[cpu:{{300 -3} {<nil>} 300m DecimalSI} memory:{{157286400 0} {<nil>} 150Mi BinarySI}] with actual map[memory:{{157286400 0} {<nil>} 150Mi BinarySI} cpu:{{300 -3} {<nil>} 300m DecimalSI}]
Jun 24 01:53:05.255: INFO: Verifying limits: expected map[cpu:{{300 -3} {<nil>} 300m DecimalSI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {<nil>} 300m DecimalSI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}]
STEP: Failing to create a Pod with less than min resources
STEP: Failing to create a Pod with more than max resources
[AfterEach] [k8s.io] LimitRange
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 01:53:05.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-limitrange-cgvn9" for this suite.
Jun 24 01:53:30.368: INFO: namespace: e2e-tests-limitrange-cgvn9, resource: bindings, ignored listing per whitelist
• [SLOW TEST:25.345 seconds]
[k8s.io] LimitRange
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  should create a LimitRange with defaults and ensure pod has those defaults applied.
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/limit_range.go:103
------------------------------
P [PENDING]
idling and unidling
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/idling/idling.go:468
  unidling
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/idling/idling.go:467
  should handle many UDP senders (by continuing to drop all packets on the floor) [local]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/idling/idling.go:466
------------------------------
[BeforeEach] [Top Level]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
NetworkPolicy [BeforeEach]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:499
  when using a plugin that implements NetworkPolicy
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:285
  should support setting DefaultDeny namespace policy [Feature:NetworkPolicy]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:83
  skipping tests not in the Origin conformance suite
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Federation deployments [Feature:Federation] [BeforeEach]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  Federated Deployment
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/deployment.go:127
  should be deleted from underlying clusters when OrphanDependents is false
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/deployment.go:110
  skipping tests not in the Origin conformance suite
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[k8s.io] EmptyDir volumes
  should support (non-root,0666,default) [Conformance] [Volume]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:113
[BeforeEach] [Top Level]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] EmptyDir volumes
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:53:06.306: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:53:06.400: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [Conformance] [Volume]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:113
STEP: Creating a pod to test emptydir 0666 on node default medium
Jun 24 01:53:06.475: INFO: Waiting up to 5m0s for pod pod-567f1bbd-9644-11e9-afd4-0e9110352016 status to be success or failure
Jun 24 01:53:06.483: INFO: Waiting for pod pod-567f1bbd-9644-11e9-afd4-0e9110352016 in namespace 'e2e-tests-emptydir-2pccj' status to be 'success or failure'(found phase: "Pending", readiness: false) (7.716414ms elapsed)
Jun 24 01:53:08.486: INFO: Waiting for pod pod-567f1bbd-9644-11e9-afd4-0e9110352016 in namespace 'e2e-tests-emptydir-2pccj' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.010827531s elapsed)
Jun 24 01:53:10.506: INFO: Waiting for pod pod-567f1bbd-9644-11e9-afd4-0e9110352016 in namespace 'e2e-tests-emptydir-2pccj' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.031017243s elapsed)
Jun 24 01:53:12.512: INFO: Waiting for pod pod-567f1bbd-9644-11e9-afd4-0e9110352016 in namespace 'e2e-tests-emptydir-2pccj' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.03686105s elapsed)
Jun 24 01:53:14.522: INFO: Waiting for pod pod-567f1bbd-9644-11e9-afd4-0e9110352016 in namespace 'e2e-tests-emptydir-2pccj' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.046672555s elapsed)
Jun 24 01:53:16.532: INFO: Waiting for pod pod-567f1bbd-9644-11e9-afd4-0e9110352016 in namespace 'e2e-tests-emptydir-2pccj' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.056840932s elapsed)
Jun 24 01:53:18.535: INFO: Waiting for pod pod-567f1bbd-9644-11e9-afd4-0e9110352016 in namespace 'e2e-tests-emptydir-2pccj' status to be 'success or failure'(found phase: "Pending", readiness: false) (12.059933122s elapsed)
STEP: Saw pod success
Jun 24 01:53:20.539: INFO: Trying to get logs from node 172.18.11.204 pod pod-567f1bbd-9644-11e9-afd4-0e9110352016 container test-container: <nil>
STEP: delete the pod
Jun 24 01:53:21.297: INFO: Waiting for pod pod-567f1bbd-9644-11e9-afd4-0e9110352016 to disappear
Jun 24 01:53:21.302: INFO: Pod pod-567f1bbd-9644-11e9-afd4-0e9110352016 no longer exists
[AfterEach] [k8s.io] EmptyDir volumes
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 01:53:21.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-2pccj" for this suite.
Jun 24 01:53:31.356: INFO: namespace: e2e-tests-emptydir-2pccj, resource: bindings, ignored listing per whitelist
• [SLOW TEST:25.137 seconds]
[k8s.io] EmptyDir volumes
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  should support (non-root,0666,default) [Conformance] [Volume]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:113
------------------------------
[k8s.io] Projected
  should be consumable from pods in volume as non-root [Conformance] [Volume]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:401
[BeforeEach] [Top Level]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Projected
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:53:03.483: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:53:03.592: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Projected
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:803
[It] should be consumable from pods in volume as non-root [Conformance] [Volume]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:401
STEP: Creating configMap with name projected-configmap-test-volume-54d4a0f4-9644-11e9-a4d2-0e9110352016
STEP: Creating a pod to test consume configMaps
Jun 24 01:53:03.692: INFO: Waiting up to 5m0s for pod pod-projected-configmaps-54d51769-9644-11e9-a4d2-0e9110352016 status to be success or failure
Jun 24 01:53:03.695: INFO: Waiting for pod pod-projected-configmaps-54d51769-9644-11e9-a4d2-0e9110352016 in namespace 'e2e-tests-projected-l8hq7' status to be 'success or failure'(found phase: "Pending", readiness: false) (3.410975ms elapsed)
Jun 24 01:53:05.698: INFO: Waiting for pod pod-projected-configmaps-54d51769-9644-11e9-a4d2-0e9110352016 in namespace 'e2e-tests-projected-l8hq7' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.006093584s elapsed)
Jun 24 01:53:07.701: INFO: Waiting for pod pod-projected-configmaps-54d51769-9644-11e9-a4d2-0e9110352016 in namespace 'e2e-tests-projected-l8hq7' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.008925175s elapsed)
Jun 24 01:53:09.703: INFO: Waiting for pod pod-projected-configmaps-54d51769-9644-11e9-a4d2-0e9110352016 in namespace 'e2e-tests-projected-l8hq7' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.011056959s elapsed)
Jun 24 01:53:11.708: INFO: Waiting for pod pod-projected-configmaps-54d51769-9644-11e9-a4d2-0e9110352016 in namespace 'e2e-tests-projected-l8hq7' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.016028188s elapsed)
Jun 24 01:53:13.711: INFO: Waiting for pod pod-projected-configmaps-54d51769-9644-11e9-a4d2-0e9110352016 in namespace 'e2e-tests-projected-l8hq7' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.018881508s elapsed)
Jun 24 01:53:15.714: INFO: Waiting for pod pod-projected-configmaps-54d51769-9644-11e9-a4d2-0e9110352016 in namespace 'e2e-tests-projected-l8hq7' status to be 'success or failure'(found phase: "Pending", readiness: false) (12.022033304s elapsed)
Jun 24 01:53:17.717: INFO: Waiting for pod pod-projected-configmaps-54d51769-9644-11e9-a4d2-0e9110352016 in namespace 'e2e-tests-projected-l8hq7' status to be 'success or failure'(found phase: "Pending", readiness: false) (14.024699389s elapsed)
Jun 24 01:53:19.719: INFO: Waiting for pod pod-projected-configmaps-54d51769-9644-11e9-a4d2-0e9110352016 in namespace 'e2e-tests-projected-l8hq7' status to be 'success or failure'(found phase: "Pending", readiness: false) (16.027575719s elapsed)
STEP: Saw pod success
Jun 24 01:53:21.726: INFO: Trying to get logs from node 172.18.11.204 pod pod-projected-configmaps-54d51769-9644-11e9-a4d2-0e9110352016 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jun 24 01:53:21.824: INFO: Waiting for pod pod-projected-configmaps-54d51769-9644-11e9-a4d2-0e9110352016 to disappear
Jun 24 01:53:21.826: INFO: Pod pod-projected-configmaps-54d51769-9644-11e9-a4d2-0e9110352016 no longer exists
[AfterEach] [k8s.io] Projected
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 01:53:21.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-l8hq7" for this suite.
Jun 24 01:53:32.627: INFO: namespace: e2e-tests-projected-l8hq7, resource: bindings, ignored listing per whitelist
• [SLOW TEST:29.895 seconds]
[k8s.io] Projected
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  should be consumable from pods in volume as non-root [Conformance] [Volume]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:401
------------------------------
[BeforeEach] [Top Level]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[builds][Slow] using build configuration runPolicy [BeforeEach]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/run_policy.go:392
  build configuration with Serial build run policy handling failure
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/run_policy.go:297
  starts the next build immediately after one fails
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/run_policy.go:296
  skipping tests not in the Origin conformance suite
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Dynamic provisioning [BeforeEach]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  [k8s.io] DynamicProvisioner Default
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  should create and delete default persistent volumes [Slow] [Volume]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/volume_provisioning.go:299
  skipping tests not in the Origin conformance suite
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[k8s.io] Kubectl client [k8s.io] Kubectl run pod
  should create a pod from an image when restart is Never [Conformance]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:1315
[BeforeEach] [Top Level]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Kubectl client
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:53:31.444: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:53:31.527: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubectl client
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:289
[BeforeEach] [k8s.io] Kubectl run pod
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:1292
[It] should create a pod from an image when restart is Never [Conformance]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:1315
STEP: running the image gcr.io/google_containers/nginx-slim:0.7
Jun 24 01:53:31.578: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.18.11.204:8443 --kubeconfig=/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=gcr.io/google_containers/nginx-slim:0.7 --namespace=e2e-tests-kubectl-9lk7l'
Jun 24 01:53:32.503: INFO: stderr: ""
Jun 24 01:53:32.503: INFO: stdout: "pod \"e2e-test-nginx-pod\" created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:1296
Jun 24 01:53:32.525: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.18.11.204:8443 --kubeconfig=/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-9lk7l'
Jun 24 01:53:33.310: INFO: stderr: ""
Jun 24 01:53:33.310: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach]
[k8s.io] Kubectl client /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121 Jun 24 01:53:33.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-9lk7l" for this suite. Jun 24 01:53:43.459: INFO: namespace: e2e-tests-kubectl-9lk7l, resource: bindings, ignored listing per whitelist • [SLOW TEST:12.067 seconds] [k8s.io] Kubectl client /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 [k8s.io] Kubectl run pod /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 should create a pod from an image when restart is Never [Conformance] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:1315 ------------------------------ [builds][Conformance] oc new-app should succeed with a --name of 58 characters /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/new_app.go:44 [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [builds][Conformance] oc new-app /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120 STEP: Creating a kubernetes client Jun 24 01:50:46.721: INFO: >>> kubeConfig: 
/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Jun 24 01:50:46.753: INFO: configPath is now "/tmp/extended-test-new-app-rsdx5-klwtw-user.kubeconfig" Jun 24 01:50:46.753: INFO: The user is now "extended-test-new-app-rsdx5-klwtw-user" Jun 24 01:50:46.753: INFO: Creating project "extended-test-new-app-rsdx5-klwtw" STEP: Waiting for a default service account to be provisioned in namespace [JustBeforeEach] [builds][Conformance] oc new-app /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/new_app.go:27 STEP: waiting for builder service account [It] should succeed with a --name of 58 characters /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/new_app.go:44 STEP: calling oc new-app Jun 24 01:50:46.984: INFO: Running 'oc new-app --config=/tmp/extended-test-new-app-rsdx5-klwtw-user.kubeconfig --namespace=extended-test-new-app-rsdx5-klwtw https://github.com/openshift/nodejs-ex --name a234567890123456789012345678901234567890123456789012345678' --> Found image 27eba6e (2 months old) in image stream "openshift/nodejs" under tag "6" for "nodejs" Node.js 6 --------- Node.js 6 available as container is a base platform for building and running various Node.js 6 applications and frameworks. Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices. 
    Tags: builder, nodejs, nodejs6

    * The source repository appears to match: nodejs
    * A source build using source code from https://github.com/openshift/nodejs-ex will be created
      * The resulting image will be pushed to image stream "a234567890123456789012345678901234567890123456789012345678:latest"
      * Use 'start-build' to trigger a new build
    * This image will be deployed in deployment config "a234567890123456789012345678901234567890123456789012345678"
    * Port 8080/tcp will be load balanced by service "a234567890123456789012345678901234567890123456789012345678"
      * Other containers can access this service through the hostname "a234567890123456789012345678901234567890123456789012345678"

--> Creating resources ...
    imagestream "a234567890123456789012345678901234567890123456789012345678" created
    buildconfig "a234567890123456789012345678901234567890123456789012345678" created
    deploymentconfig "a234567890123456789012345678901234567890123456789012345678" created
    service "a234567890123456789012345678901234567890123456789012345678" created
--> Success
    Build scheduled, use 'oc logs -f bc/a234567890123456789012345678901234567890123456789012345678' to track its progress.
    Run 'oc status' to view your app.
STEP: waiting for the build to complete
STEP: waiting for the deployment to complete
[AfterEach] [builds][Conformance] oc new-app
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 01:53:26.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "extended-test-new-app-rsdx5-klwtw" for this suite.
Jun 24 01:53:51.739: INFO: namespace: extended-test-new-app-rsdx5-klwtw, resource: bindings, ignored listing per whitelist
• [SLOW TEST:185.071 seconds]
[builds][Conformance] oc new-app
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/new_app.go:52
  should succeed with a --name of 58 characters
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/new_app.go:44
------------------------------
[BeforeEach] [Top Level]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Cluster size autoscaling [Slow] [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  should disable node pool autoscaling [Feature:ClusterSizeAutoscalingScaleUp]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:180

  skipping tests not in the Origin conformance suite
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[k8s.io] InitContainer should invoke init containers on a RestartAlways pod
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/init_container.go:198
[BeforeEach] [Top Level]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] InitContainer
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:52:30.803: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:52:30.905: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/init_container.go:42
[It] should invoke init containers on a RestartAlways pod
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/init_container.go:198
STEP: creating the pod
Jun 24 01:52:31.412: INFO: PodSpec: initContainers in spec.initContainers
Jun 24 01:53:19.504: INFO: PodSpec: initContainers in metadata.annotation
[AfterEach] [k8s.io] InitContainer
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 01:53:28.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-mk3t2" for this suite.
Jun 24 01:53:53.862: INFO: namespace: e2e-tests-init-container-mk3t2, resource: bindings, ignored listing per whitelist
• [SLOW TEST:83.173 seconds]
[k8s.io] InitContainer
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  should invoke init containers on a RestartAlways pod
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/init_container.go:198
------------------------------
[BeforeEach] [Top Level]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[builds][Slow] starting a build using CLI [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/start.go:341
  binary builds
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/start.go:230
    should accept --from-repo as input
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/start.go:166

    skipping tests not in the Origin conformance suite
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Network [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  should set TCP CLOSE_WAIT timeout
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kube_proxy.go:206

  skipping tests not in the Origin conformance suite
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[builds] forcePull should affect pulling builder images [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/forcepull.go:134
  ForcePull test context
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/forcepull.go:132
    ForcePull test case execution s2i
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/forcepull.go:113

    skipping tests not in the Origin conformance suite
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Federation namespace [Feature:Federation] [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  Namespace objects
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/namespace.go:185
    all resources in the namespace should be deleted when namespace is deleted
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/namespace.go:184

    skipping tests not in the Origin conformance suite
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Federated ingresses [Feature:Federation] [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  Federated Ingresses
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/ingress.go:274
    Ingress connectivity and DNS
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/ingress.go:273
      should be able to connect to a federated ingress via its load balancer
      /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/ingress.go:271

      skipping tests not in the Origin conformance suite
      /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[builds][Slow] s2i extended build [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/s2i_extended_build.go:148
  with scripts from runtime image
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/s2i_extended_build.go:146
    should use assemble-runtime script from that image
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/s2i_extended_build.go:145

    skipping tests not in the Origin conformance suite
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Federated ReplicaSet [Feature:Federation] [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  Features
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/replicaset.go:199
    CRUD
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/replicaset.go:138
      should not be deleted from underlying clusters when OrphanDependents is nil
      /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/replicaset.go:137

      skipping tests not in the Origin conformance suite
      /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Upgrade [Feature:Upgrade] [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  [k8s.io] Federation Control Plane upgrade
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
    should maintain a functioning federation [Feature:FCPUpgrade]
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/upgrade.go:47

    skipping tests not in the Origin conformance suite
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Firewall rule [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  [Slow] [Serial] should create valid firewall rules for LoadBalancer type service
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/firewall.go:116

  skipping tests not in the Origin conformance suite
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] DaemonRestart [Disruptive] [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  Scheduler should continue assigning pods to nodes across restart
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/daemon_restart.go:302

  skipping tests not in the Origin conformance suite
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[image_ecosystem][Slow] openshift images should be SCL enabled [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:76
  using the SCL in s2i images
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:73
    "ci.dev.openshift.redhat.com:5000/openshift/python-27-rhel7" should be SCL enabled
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:72

    skipping tests not in the Origin conformance suite
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Proxy [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  version v1
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/proxy.go:275
    should proxy to cadvisor
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/proxy.go:64

    skipping tests not in the Origin conformance suite
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[builds][Slow] starting a build using CLI [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/start.go:341
  binary builds
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/start.go:230
    should reject binary build requests without a --from-xxxx value
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/start.go:208

    skipping tests not in the Origin conformance suite
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[Conformance][networking][router] openshift router metrics The HAProxy router should expose a health check on the metrics port
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:64
[BeforeEach] [Top Level]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Conformance][networking][router] openshift router metrics
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:53:30.396: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:53:30.422: INFO: configPath is now "/tmp/extended-test-router-metrics-z45zl-fmdkg-user.kubeconfig"
Jun 24 01:53:30.422: INFO: The user is now "extended-test-router-metrics-z45zl-fmdkg-user"
Jun 24 01:53:30.422: INFO: Creating project "extended-test-router-metrics-z45zl-fmdkg"
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [Conformance][networking][router] openshift router metrics
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:51
[It] should expose a health check on the metrics port
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:64
Jun 24 01:53:30.557: INFO: Creating new exec pod
STEP: listening on the health port
Jun 24 01:53:36.569: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.18.11.204:8443 --kubeconfig=/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-router-metrics-z45zl-fmdkg execpod -- /bin/sh -c curl -s -o /dev/null -w '%{http_code}' "http://172.18.11.204:1935/healthz"'
Jun 24 01:53:36.973: INFO: stderr: ""
[AfterEach] [Conformance][networking][router] openshift router metrics
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 01:53:36.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "extended-test-router-metrics-z45zl-fmdkg" for this suite.
Jun 24 01:53:57.165: INFO: namespace: extended-test-router-metrics-z45zl-fmdkg, resource: bindings, ignored listing per whitelist
• [SLOW TEST:26.804 seconds]
[Conformance][networking][router] openshift router metrics
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:172
  The HAProxy router
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:171
    should expose a health check on the metrics port
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:64
------------------------------
[k8s.io] Docker Containers should use the image defaults if command and args are blank [Conformance]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/docker_containers.go:35
[BeforeEach] [Top Level]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Docker Containers
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120 STEP: Creating a kubernetes client Jun 24 01:53:43.514: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Jun 24 01:53:43.597: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [Conformance] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/docker_containers.go:35 STEP: Creating a pod to test use defaults Jun 24 01:53:43.676: INFO: Waiting up to 5m0s for pod client-containers-6cabc7f9-9644-11e9-afd4-0e9110352016 status to be success or failure Jun 24 01:53:43.683: INFO: Waiting for pod client-containers-6cabc7f9-9644-11e9-afd4-0e9110352016 in namespace 'e2e-tests-containers-xj6q0' status to be 'success or failure'(found phase: "Pending", readiness: false) (7.453146ms elapsed) Jun 24 01:53:45.686: INFO: Waiting for pod client-containers-6cabc7f9-9644-11e9-afd4-0e9110352016 in namespace 'e2e-tests-containers-xj6q0' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.010259874s elapsed) Jun 24 01:53:47.688: INFO: Waiting for pod client-containers-6cabc7f9-9644-11e9-afd4-0e9110352016 in namespace 'e2e-tests-containers-xj6q0' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.012335988s elapsed) STEP: Saw pod success Jun 24 01:53:49.693: INFO: Trying to get logs from node 172.18.11.204 pod client-containers-6cabc7f9-9644-11e9-afd4-0e9110352016 container test-container: <nil> STEP: delete the pod Jun 24 01:53:49.711: INFO: Waiting for pod 
client-containers-6cabc7f9-9644-11e9-afd4-0e9110352016 to disappear Jun 24 01:53:49.716: INFO: Pod client-containers-6cabc7f9-9644-11e9-afd4-0e9110352016 no longer exists [AfterEach] [k8s.io] Docker Containers /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121 Jun 24 01:53:49.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-xj6q0" for this suite. Jun 24 01:53:59.789: INFO: namespace: e2e-tests-containers-xj6q0, resource: bindings, ignored listing per whitelist • [SLOW TEST:16.479 seconds] [k8s.io] Docker Containers /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 should use the image defaults if command and args are blank [Conformance] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/docker_containers.go:35 ------------------------------ [k8s.io] ResourceQuota should verify ResourceQuota with terminating scopes. 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/resource_quota.go:473 [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [k8s.io] ResourceQuota /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120 STEP: Creating a kubernetes client Jun 24 01:53:33.397: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Jun 24 01:53:33.527: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/resource_quota.go:473 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [k8s.io] ResourceQuota 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121 Jun 24 01:53:49.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-resourcequota-mx5t8" for this suite. Jun 24 01:54:00.120: INFO: namespace: e2e-tests-resourcequota-mx5t8, resource: bindings, ignored listing per whitelist • [SLOW TEST:26.746 seconds] [k8s.io] ResourceQuota /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 should verify ResourceQuota with terminating scopes. /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/resource_quota.go:473 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [k8s.io] DisruptionController [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 evictions: too few pods, replicaSet, percentage => should not allow an eviction /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/disruption.go:187 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [k8s.io] Load capacity [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 [Feature:ManualPerformance] should be able to handle 3 pods per node { ReplicationController} with 0 secrets and 0 daemons /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/load.go:265 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [k8s.io] EmptyDir volumes when FSGroup is specified [Feature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs) [Volume] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:52 [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [k8s.io] EmptyDir volumes /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120 STEP: Creating a kubernetes client Jun 24 01:53:54.003: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Jun 24 01:53:54.231: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: 
Waiting for a default service account to be provisioned in namespace [It] files with FSGroup ownership should support (root,0644,tmpfs) [Volume] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:52 STEP: Creating a pod to test emptydir 0644 on tmpfs Jun 24 01:53:54.356: INFO: Waiting up to 5m0s for pod pod-730a01c0-9644-11e9-906c-0e9110352016 status to be success or failure Jun 24 01:53:54.358: INFO: Waiting for pod pod-730a01c0-9644-11e9-906c-0e9110352016 in namespace 'e2e-tests-emptydir-g89dx' status to be 'success or failure'(found phase: "Pending", readiness: false) (1.809943ms elapsed) Jun 24 01:53:56.361: INFO: Waiting for pod pod-730a01c0-9644-11e9-906c-0e9110352016 in namespace 'e2e-tests-emptydir-g89dx' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.004511749s elapsed) STEP: Saw pod success Jun 24 01:53:58.365: INFO: Trying to get logs from node 172.18.11.204 pod pod-730a01c0-9644-11e9-906c-0e9110352016 container test-container: <nil> STEP: delete the pod Jun 24 01:53:58.403: INFO: Waiting for pod pod-730a01c0-9644-11e9-906c-0e9110352016 to disappear Jun 24 01:53:58.406: INFO: Pod pod-730a01c0-9644-11e9-906c-0e9110352016 no longer exists [AfterEach] [k8s.io] EmptyDir volumes /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121 Jun 24 01:53:58.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-g89dx" for this suite. 
Jun 24 01:54:08.497: INFO: namespace: e2e-tests-emptydir-g89dx, resource: bindings, ignored listing per whitelist • [SLOW TEST:14.552 seconds] [k8s.io] EmptyDir volumes /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 when FSGroup is specified [Feature:FSGroup] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:61 files with FSGroup ownership should support (root,0644,tmpfs) [Volume] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:52 ------------------------------ [builds][Conformance] oc new-app should fail with a --name longer than 58 characters /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/new_app.go:51 [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [builds][Conformance] oc new-app /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120 STEP: Creating a kubernetes client Jun 24 01:54:00.165: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Jun 24 01:54:00.211: INFO: configPath is now "/tmp/extended-test-new-app-6v5vh-9dw0t-user.kubeconfig" Jun 24 01:54:00.211: INFO: The user is now "extended-test-new-app-6v5vh-9dw0t-user" Jun 24 01:54:00.211: INFO: Creating project "extended-test-new-app-6v5vh-9dw0t" STEP: 
Waiting for a default service account to be provisioned in namespace [JustBeforeEach] [builds][Conformance] oc new-app /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/new_app.go:27 STEP: waiting for builder service account [It] should fail with a --name longer than 58 characters /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/new_app.go:51 STEP: calling oc new-app Jun 24 01:54:00.476: INFO: Running 'oc new-app --config=/tmp/extended-test-new-app-6v5vh-9dw0t-user.kubeconfig --namespace=extended-test-new-app-6v5vh-9dw0t https://github.com/openshift/nodejs-ex --name a2345678901234567890123456789012345678901234567890123456789' Jun 24 01:54:02.946: INFO: Error running &{/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/oc [oc new-app --config=/tmp/extended-test-new-app-6v5vh-9dw0t-user.kubeconfig --namespace=extended-test-new-app-6v5vh-9dw0t https://github.com/openshift/nodejs-ex --name a2345678901234567890123456789012345678901234567890123456789] [] error: invalid name: a2345678901234567890123456789012345678901234567890123456789. Must be an a lower case alphanumeric (a-z, and 0-9) string with a maximum length of 58 characters, where the first character is a letter (a-z), and the '-' character is allowed anywhere except the first or last character. error: invalid name: a2345678901234567890123456789012345678901234567890123456789. Must be an a lower case alphanumeric (a-z, and 0-9) string with a maximum length of 58 characters, where the first character is a letter (a-z), and the '-' character is allowed anywhere except the first or last character. 
[] <nil> 0xc42170c480 exit status 1 <nil> <nil> true [0xc42162e498 0xc42162e4c0 0xc42162e4c0] [0xc42162e498 0xc42162e4c0] [0xc42162e4a0 0xc42162e4b8] [0xdd8a30 0xdd8b30] 0xc4210a1f20 <nil>}: error: invalid name: a2345678901234567890123456789012345678901234567890123456789. Must be an a lower case alphanumeric (a-z, and 0-9) string with a maximum length of 58 characters, where the first character is a letter (a-z), and the '-' character is allowed anywhere except the first or last character. [AfterEach] [builds][Conformance] oc new-app /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121 Jun 24 01:54:02.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-new-app-6v5vh-9dw0t" for this suite. Jun 24 01:54:13.003: INFO: namespace: extended-test-new-app-6v5vh-9dw0t, resource: bindings, ignored listing per whitelist • [SLOW TEST:12.980 seconds] [builds][Conformance] oc new-app /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/new_app.go:52 should fail with a --name longer than 58 characters /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/new_app.go:51 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [networking] services [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:65 basic functionality 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:21 should allow connections to another pod on a different node via a service IP /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:20 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [k8s.io] Projected [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 should provide podname as non-root with fsgroup [Feature:FSGroup] [Volume] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:846 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [builds][Slow] update failure status [BeforeEach] 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/failure_status.go:195 Build status failed https proxy invalid url /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/failure_status.go:194 should contain the generic failure reason and message /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/failure_status.go:193 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [k8s.io] Monitoring [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 should verify monitoring pods and all cluster nodes are available on influxdb using heapster. 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/monitoring.go:45 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [k8s.io] Federation secrets [Feature:Federation] [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 Secret objects /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/secret.go:95 should not be deleted from underlying clusters when OrphanDependents is nil /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/secret.go:94 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [k8s.io] ConfigMap [BeforeEach] 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Feature:FSGroup] [Volume] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap.go:49 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [builds][Slow] builds should have deadlines [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/completiondeadlineseconds.go:76 oc start-build source-build --wait /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/completiondeadlineseconds.go:50 Source: should start a build and wait for the build failed and build pod being killed by kubelet /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/completiondeadlineseconds.go:49 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [k8s.io] Garbage collector [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 should orphan RS created by deployment when deleteOptions.OrphanDependents is true /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/garbage_collector.go:477 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [k8s.io] ESIPP [Slow] [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 should work for type=NodePort /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/service.go:1316 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [bldcompat][Slow][Compatibility] build controller [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/controller_compat.go:52 RunBuildRunningPodDeleteTest /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/controller_compat.go:46 should succeed /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/controller_compat.go:45 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [k8s.io] Sysctls [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 should not launch unsafe, but not explicitly enabled sysctls on the node /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/sysctl.go:208 skipping tests not in the Origin conformance suite 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [k8s.io] SchedulerPredicates [Serial] [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 validates that InterPodAntiAffinity is respected if matching 2 /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:556 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [k8s.io] ESIPP [Slow] [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 should only target nodes with endpoints [Feature:ExternalTrafficLocalOnly] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/service.go:1376 skipping tests not in the Origin conformance suite 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [k8s.io] Upgrade [Feature:Upgrade] [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 [k8s.io] master upgrade /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 should maintain a functioning cluster [Feature:MasterUpgrade] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:75 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [k8s.io] Services [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 should create endpoints for unready pods 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/service.go:1154
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/statefulset.go:514
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] StatefulSet
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:54:08.557: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:54:08.641: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] StatefulSet
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/statefulset.go:63
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/statefulset.go:84
STEP: Creating service test in namespace e2e-tests-statefulset-f0tbm
[It] Should recreate evicted statefulset
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/statefulset.go:514
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-f0tbm
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-f0tbm
STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-f0tbm
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-f0tbm
Jun 24 01:54:18.752: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-f0tbm, name: ss-0, uid: 818fad46-9644-11e9-9f9d-0e9110352016, status phase: Failed. Waiting for statefulset controller to delete.
Jun 24 01:54:18.752: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-f0tbm
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-f0tbm
STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-f0tbm and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/statefulset.go:92
Jun 24 01:54:26.961: INFO: Deleting all statefulset in ns e2e-tests-statefulset-f0tbm
Jun 24 01:54:26.962: INFO: Scaling statefulset ss to 0
Jun 24 01:54:36.975: INFO: Waiting for statefulset status.replicas updated to 0
Jun 24 01:54:36.977: INFO: Deleting statefulset ss
[AfterEach] [k8s.io] StatefulSet
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 01:54:36.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-f0tbm" for this suite.
Jun 24 01:54:47.091: INFO: namespace: e2e-tests-statefulset-f0tbm, resource: bindings, ignored listing per whitelist
• [SLOW TEST:38.586 seconds]
[k8s.io] StatefulSet
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
Should recreate evicted statefulset
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/statefulset.go:514
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Dynamic provisioning [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
[k8s.io] DynamicProvisioner Default
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should be disabled by changing the default annotation[Slow] [Serial] [Disruptive] [Volume]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/volume_provisioning.go:324
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/networking.go:52
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Networking
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:53:51.796: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:53:51.884: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/networking.go:52
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-ln3b3
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 24 01:53:51.933: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jun 24 01:54:26.016: INFO: ExecWithOptions {Command:[/bin/sh -c timeout -t 15 curl -q -s --connect-timeout 1 http://172.17.0.2:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-ln3b3 PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 24 01:54:26.016: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
Jun 24 01:54:26.545: INFO: Waiting for map[] endpoints, got endpoints map[netserver-0:{}]
[AfterEach] [k8s.io] Networking
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 01:54:26.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-ln3b3" for this suite.
Jun 24 01:54:51.641: INFO: namespace: e2e-tests-pod-network-test-ln3b3, resource: bindings, ignored listing per whitelist
• [SLOW TEST:59.922 seconds]
[k8s.io] Networking
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
[k8s.io] Granular Checks: Pods
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should function for node-pod communication: http [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/networking.go:52
------------------------------
[k8s.io] Docker Containers should be able to override the image's default commmand (docker entrypoint) [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/docker_containers.go:55
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Docker Containers
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:54:47.149: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:54:47.246: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default commmand (docker entrypoint) [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/docker_containers.go:55
STEP: Creating a pod to test override command
Jun 24 01:54:47.348: INFO: Waiting up to 5m0s for pod client-containers-929ed26f-9644-11e9-906c-0e9110352016 status to be success or failure
Jun 24 01:54:47.355: INFO: Waiting for pod client-containers-929ed26f-9644-11e9-906c-0e9110352016 in namespace 'e2e-tests-containers-vfgrd' status to be 'success or failure'(found phase: "Pending", readiness: false) (7.432071ms elapsed)
Jun 24 01:54:49.358: INFO: Waiting for pod client-containers-929ed26f-9644-11e9-906c-0e9110352016 in namespace 'e2e-tests-containers-vfgrd' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.009881882s elapsed)
STEP: Saw pod success
Jun 24 01:54:51.362: INFO: Trying to get logs from node 172.18.11.204 pod client-containers-929ed26f-9644-11e9-906c-0e9110352016 container test-container: <nil>
STEP: delete the pod
Jun 24 01:54:51.377: INFO: Waiting for pod client-containers-929ed26f-9644-11e9-906c-0e9110352016 to disappear
Jun 24 01:54:51.379: INFO: Pod client-containers-929ed26f-9644-11e9-906c-0e9110352016 no longer exists
[AfterEach] [k8s.io] Docker Containers
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 01:54:51.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-vfgrd" for this suite.
Jun 24 01:55:01.848: INFO: namespace: e2e-tests-containers-vfgrd, resource: bindings, ignored listing per whitelist
• [SLOW TEST:14.737 seconds]
[k8s.io] Docker Containers
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should be able to override the image's default commmand (docker entrypoint) [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/docker_containers.go:55
------------------------------
[k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/networking.go:38
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Networking
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:54:13.177: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:54:13.247: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/networking.go:38
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-pt1ll
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 24 01:54:13.433: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jun 24 01:54:39.556: INFO: ExecWithOptions {Command:[/bin/sh -c curl -q -s 'http://172.17.0.2:8080/dial?request=hostName&protocol=http&host=172.17.0.11&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-pt1ll PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 24 01:54:39.556: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
Jun 24 01:54:39.682: INFO: Waiting for endpoints: map[]
[AfterEach] [k8s.io] Networking
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 01:54:39.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-pt1ll" for this suite.
Jun 24 01:55:04.734: INFO: namespace: e2e-tests-pod-network-test-pt1ll, resource: bindings, ignored listing per whitelist
• [SLOW TEST:51.628 seconds]
[k8s.io] Networking
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
[k8s.io] Granular Checks: Pods
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should function for intra-pod communication: http [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/networking.go:38
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[networking] services [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:65
when using a plugin that isolates namespaces by default
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:273
should allow connections from pods in the default namespace to a service in another namespace on the same node
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:59
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Port forwarding [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
[k8s.io] With a server listening on 0.0.0.0
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
[k8s.io] that expects a client request
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should support a client that connects, sends no data, and disconnects
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/portforward.go:478
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Network Partition [Disruptive] [Slow] [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
[k8s.io] [Job]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should create new pods when node is partitioned
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/network_partition.go:481
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[k8s.io] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downward_api.go:130
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Downward API
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:54:51.720: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:54:51.800: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downward_api.go:130
STEP: Creating a pod to test downward api env vars
Jun 24 01:54:51.849: INFO: Waiting up to 5m0s for pod downward-api-954edd44-9644-11e9-a60e-0e9110352016 status to be success or failure
Jun 24 01:54:51.855: INFO: Waiting for pod downward-api-954edd44-9644-11e9-a60e-0e9110352016 in namespace 'e2e-tests-downward-api-n4d05' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.42351ms elapsed)
Jun 24 01:54:53.861: INFO: Waiting for pod downward-api-954edd44-9644-11e9-a60e-0e9110352016 in namespace 'e2e-tests-downward-api-n4d05' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.012158847s elapsed)
STEP: Saw pod success
Jun 24 01:54:55.866: INFO: Trying to get logs from node 172.18.11.204 pod downward-api-954edd44-9644-11e9-a60e-0e9110352016 container dapi-container: <nil>
STEP: delete the pod
Jun 24 01:54:55.882: INFO: Waiting for pod downward-api-954edd44-9644-11e9-a60e-0e9110352016 to disappear
Jun 24 01:54:55.885: INFO: Pod downward-api-954edd44-9644-11e9-a60e-0e9110352016 no longer exists
[AfterEach] [k8s.io] Downward API
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 01:54:55.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-n4d05" for this suite.
Jun 24 01:55:06.054: INFO: namespace: e2e-tests-downward-api-n4d05, resource: bindings, ignored listing per whitelist
• [SLOW TEST:14.405 seconds]
[k8s.io] Downward API
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should provide container's limits.cpu/memory and requests.cpu/memory as env vars [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downward_api.go:130
------------------------------
[k8s.io] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/proxy.go:67
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] version v1
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:55:06.126: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:55:06.202: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/proxy.go:67
Jun 24 01:55:06.467: INFO: (0) /api/v1/nodes/172.18.11.204/proxy/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 216.793501ms)
Jun 24 01:55:06.470: INFO: (1) /api/v1/nodes/172.18.11.204/proxy/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 3.010579ms)
Jun 24 01:55:06.472: INFO: (2) /api/v1/nodes/172.18.11.204/proxy/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.237477ms)
Jun 24 01:55:06.475: INFO: (3) /api/v1/nodes/172.18.11.204/proxy/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.356139ms)
Jun 24 01:55:06.477: INFO: (4) /api/v1/nodes/172.18.11.204/proxy/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.680519ms)
Jun 24 01:55:06.480: INFO: (5) /api/v1/nodes/172.18.11.204/proxy/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.432204ms)
Jun 24 01:55:06.482: INFO: (6) /api/v1/nodes/172.18.11.204/proxy/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.198697ms)
Jun 24 01:55:06.485: INFO: (7) /api/v1/nodes/172.18.11.204/proxy/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.388362ms)
Jun 24 01:55:06.487: INFO: (8) /api/v1/nodes/172.18.11.204/proxy/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.43541ms)
Jun 24 01:55:06.489: INFO: (9) /api/v1/nodes/172.18.11.204/proxy/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.42851ms)
Jun 24 01:55:06.492: INFO: (10) /api/v1/nodes/172.18.11.204/proxy/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.767427ms)
Jun 24 01:55:06.495: INFO: (11) /api/v1/nodes/172.18.11.204/proxy/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.784032ms)
Jun 24 01:55:06.498: INFO: (12) /api/v1/nodes/172.18.11.204/proxy/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.739881ms)
Jun 24 01:55:06.500: INFO: (13) /api/v1/nodes/172.18.11.204/proxy/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.649735ms)
Jun 24 01:55:06.504: INFO: (14) /api/v1/nodes/172.18.11.204/proxy/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 3.483441ms)
Jun 24 01:55:06.506: INFO: (15) /api/v1/nodes/172.18.11.204/proxy/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.322956ms)
Jun 24 01:55:06.509: INFO: (16) /api/v1/nodes/172.18.11.204/proxy/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.579118ms)
Jun 24 01:55:06.512: INFO: (17) /api/v1/nodes/172.18.11.204/proxy/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 3.184138ms)
Jun 24 01:55:06.515: INFO: (18) /api/v1/nodes/172.18.11.204/proxy/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.531027ms)
Jun 24 01:55:06.517: INFO: (19) /api/v1/nodes/172.18.11.204/proxy/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.680932ms)
[AfterEach] version v1
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 01:55:06.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-lgh1m" for this suite.
Jun 24 01:55:16.973: INFO: namespace: e2e-tests-proxy-lgh1m, resource: bindings, ignored listing per whitelist
• [SLOW TEST:10.871 seconds]
[k8s.io] Proxy
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
version v1
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/proxy.go:275
should proxy logs on node using proxy subresource [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/proxy.go:67
------------------------------
deploymentconfigs viewing rollout history [Conformance] should print the rollout history
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:557
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] deploymentconfigs
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:53:57.201: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:53:57.220: INFO: configPath is now "/tmp/extended-test-cli-deployment-xbj84-gltzr-user.kubeconfig"
Jun 24 01:53:57.220: INFO: The user is now "extended-test-cli-deployment-xbj84-gltzr-user"
Jun 24 01:53:57.220: INFO: Creating project "extended-test-cli-deployment-xbj84-gltzr"
STEP: Waiting for a default service account to be provisioned in namespace
[It] should print the rollout history
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:557
Jun 24 01:53:57.337: INFO: Running 'oc create --config=/tmp/extended-test-cli-deployment-xbj84-gltzr-user.kubeconfig --namespace=extended-test-cli-deployment-xbj84-gltzr -f /tmp/fixture-testdata-dir978613527/test/extended/testdata/deployments/deployment-simple.yaml -o name'
STEP: waiting for the first rollout to complete
Jun 24 01:54:12.466: INFO: Latest rollout of dc/deployment-simple (rc/deployment-simple-1) is complete.
STEP: updating the deployment config in order to trigger a new rollout
STEP: waiting for the second rollout to complete
Jun 24 01:54:40.075: INFO: Latest rollout of dc/deployment-simple (rc/deployment-simple-2) is complete.
Jun 24 01:54:40.076: INFO: Running 'oc rollout --config=/tmp/extended-test-cli-deployment-xbj84-gltzr-user.kubeconfig --namespace=extended-test-cli-deployment-xbj84-gltzr history deploymentconfig/deployment-simple'
STEP: checking the history for substrings
deploymentconfigs "deployment-simple"
REVISION        STATUS          CAUSE
1               Complete        config change
2               Complete        config change
[AfterEach] viewing rollout history [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:515
[AfterEach] deploymentconfigs
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 01:54:40.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "extended-test-cli-deployment-xbj84-gltzr" for this suite.
Jun 24 01:55:25.439: INFO: namespace: extended-test-cli-deployment-xbj84-gltzr, resource: bindings, ignored listing per whitelist
• [SLOW TEST:88.321 seconds]
deploymentconfigs
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:979
viewing rollout history [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:558
should print the rollout history
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:557
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Staging client repo client [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should create pods, delete pods, watch pods
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/generated_clientset.go:375
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Kubectl alpha client [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
[k8s.io] Kubectl run ScheduledJob
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should create a ScheduledJob
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:227
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [builds] build have source revision metadata [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/revision.go:45 started build /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/revision.go:44 should contain source revision information /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/revision.go:43 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [k8s.io] CronJob [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 should not emit unexpected warnings /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/cronjob.go:195 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [k8s.io] ConfigMap should 
be consumable from pods in volume as non-root [Conformance] [Volume] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap.go:53 [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [k8s.io] ConfigMap /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120 STEP: Creating a kubernetes client Jun 24 01:55:25.532: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Jun 24 01:55:25.610: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [Conformance] [Volume] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap.go:53 STEP: Creating configMap with name configmap-test-volume-a975f2d2-9644-11e9-a527-0e9110352016 STEP: Creating a pod to test consume configMaps Jun 24 01:55:25.660: INFO: Waiting up to 5m0s for pod pod-configmaps-a9764327-9644-11e9-a527-0e9110352016 status to be success or failure Jun 24 01:55:25.668: INFO: Waiting for pod pod-configmaps-a9764327-9644-11e9-a527-0e9110352016 in namespace 'e2e-tests-configmap-pmtks' status to be 'success or failure'(found phase: "Pending", readiness: false) (7.246608ms elapsed) Jun 24 01:55:27.675: INFO: Waiting for pod pod-configmaps-a9764327-9644-11e9-a527-0e9110352016 in namespace 'e2e-tests-configmap-pmtks' status to be 'success or failure'(found phase: "Pending", readiness: false) 
(2.014492258s elapsed) STEP: Saw pod success Jun 24 01:55:29.682: INFO: Trying to get logs from node 172.18.11.204 pod pod-configmaps-a9764327-9644-11e9-a527-0e9110352016 container configmap-volume-test: <nil> STEP: delete the pod Jun 24 01:55:29.723: INFO: Waiting for pod pod-configmaps-a9764327-9644-11e9-a527-0e9110352016 to disappear Jun 24 01:55:29.726: INFO: Pod pod-configmaps-a9764327-9644-11e9-a527-0e9110352016 no longer exists [AfterEach] [k8s.io] ConfigMap /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121 Jun 24 01:55:29.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-pmtks" for this suite. Jun 24 01:55:39.811: INFO: namespace: e2e-tests-configmap-pmtks, resource: bindings, ignored listing per whitelist • [SLOW TEST:14.329 seconds] [k8s.io] ConfigMap /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 should be consumable from pods in volume as non-root [Conformance] [Volume] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap.go:53 ------------------------------ [k8s.io] Pods should support remote command execution over websockets /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/pods.go:511 [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [k8s.io] Pods 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120 STEP: Creating a kubernetes client Jun 24 01:55:01.889: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Jun 24 01:55:01.959: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/pods.go:127 [It] should support remote command execution over websockets /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/pods.go:511 Jun 24 01:55:02.037: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121 Jun 24 01:55:06.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-hgcnh" for this suite. 
Jun 24 01:55:46.475: INFO: namespace: e2e-tests-pods-hgcnh, resource: bindings, ignored listing per whitelist
• [SLOW TEST:44.652 seconds]
[k8s.io] Pods
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  should support remote command execution over websockets
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/pods.go:511
------------------------------
[BeforeEach] [Top Level]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Projected [BeforeEach]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should provide podname as non-root with fsgroup and defaultMode [Feature:FSGroup] [Volume]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:861
skipping tests not in the Origin conformance suite
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[bldcompat][Slow][Compatibility] build controller [BeforeEach]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/controller_compat.go:52
RunBuildDeleteTest
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/controller_compat.go:41
should succeed
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/controller_compat.go:40
skipping tests not in the Origin conformance suite
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[builds][Slow] update failure status [BeforeEach]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/failure_status.go:195
Build status push image to registry failure
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/failure_status.go:137
should contain the image push to registry failure reason and message
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/failure_status.go:136
skipping tests not in the Origin conformance suite
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Services [BeforeEach]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should work after restarting apiserver [Disruptive]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/service.go:421
skipping tests not in the Origin conformance suite
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[Conformance][networking][router] openshift routers The HAProxy router should override the route host with a custom value
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:145
[BeforeEach] [Top Level]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Conformance][networking][router] openshift routers
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:55:04.812: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:55:04.840: INFO: configPath is now "/tmp/extended-test-scoped-router-dblhx-8x858-user.kubeconfig"
Jun 24 01:55:04.840: INFO: The user is now "extended-test-scoped-router-dblhx-8x858-user"
Jun 24 01:55:04.840: INFO: Creating project "extended-test-scoped-router-dblhx-8x858"
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [Conformance][networking][router] openshift routers
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:39
Jun 24 01:55:04.971: INFO: Running 'oc adm --config=/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig --namespace=extended-test-scoped-router-dblhx-8x858 policy add-cluster-role-to-user system:router extended-test-scoped-router-dblhx-8x858-user'
cluster role "system:router" added: "extended-test-scoped-router-dblhx-8x858-user"
Jun 24 01:55:05.262: INFO: Running 'oc new-app --config=/tmp/extended-test-scoped-router-dblhx-8x858-user.kubeconfig --namespace=extended-test-scoped-router-dblhx-8x858 -f /tmp/fixture-testdata-dir824123364/test/extended/testdata/scoped-router.yaml -p IMAGE=openshift/origin-haproxy-router'
--> Deploying template "extended-test-scoped-router-dblhx-8x858/" for "/tmp/fixture-testdata-dir824123364/test/extended/testdata/scoped-router.yaml" to project extended-test-scoped-router-dblhx-8x858

     * With parameters:
        * IMAGE=openshift/origin-haproxy-router

--> Creating resources ...
    pod "scoped-router" created
    pod "router-override" created
    rolebinding "system-router" created
    route "route-1" created
    route "route-2" created
    service "endpoints" created
    pod "endpoint-1" created
--> Success
    Run 'oc status' to view your app.
[It] should override the route host with a custom value
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:145
Jun 24 01:55:05.685: INFO: Creating new exec pod
STEP: creating a scoped router from a config file "/tmp/fixture-testdata-dir824123364/test/extended/testdata/scoped-router.yaml"
STEP: waiting for the healthz endpoint to respond
Jun 24 01:55:18.896: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.18.11.204:8443 --kubeconfig=/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-scoped-router-dblhx-8x858 execpod -- /bin/sh -c
set -e
for i in $(seq 1 180); do
  code=$( curl -k -s -o /dev/null -w '%{http_code}\n' --header 'Host: 172.17.0.10' "http://172.17.0.10:1936/healthz" ) || rc=$?
  if [[ "${rc:-0}" -eq 0 ]]; then
    echo $code
    if [[ $code -eq 200 ]]; then
      exit 0
    fi
    if [[ $code -ne 503 ]]; then
      exit 1
    fi
  else
    echo "error ${rc}" 1>&2
  fi
  sleep 1
done
'
Jun 24 01:55:20.099: INFO: stderr: ""
STEP: waiting for the valid route to respond
Jun 24 01:55:20.099: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.18.11.204:8443 --kubeconfig=/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-scoped-router-dblhx-8x858 execpod -- /bin/sh -c
set -e
for i in $(seq 1 180); do
  code=$( curl -k -s -o /dev/null -w '%{http_code}\n' --header 'Host: route-1-extended-test-scoped-router-dblhx-8x858.myapps.mycompany.com' "http://172.17.0.10" ) || rc=$?
  if [[ "${rc:-0}" -eq 0 ]]; then
    echo $code
    if [[ $code -eq 200 ]]; then
      exit 0
    fi
    if [[ $code -ne 503 ]]; then
      exit 1
    fi
  else
    echo "error ${rc}" 1>&2
  fi
  sleep 1
done
'
Jun 24 01:55:20.576: INFO: stderr: ""
STEP: checking that the stored domain name does not match a route
Jun 24 01:55:20.576: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.18.11.204:8443 --kubeconfig=/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-scoped-router-dblhx-8x858 execpod -- /bin/sh -c curl -s -o /dev/null -w '%{http_code}' --header 'Host: first.example.com' "http://172.17.0.10"'
Jun 24 01:55:21.017: INFO: stderr: ""
STEP: checking that route-1-extended-test-scoped-router-dblhx-8x858.myapps.mycompany.com matches a route
Jun 24 01:55:21.017: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.18.11.204:8443 --kubeconfig=/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-scoped-router-dblhx-8x858 execpod -- /bin/sh -c curl -s -o /dev/null -w '%{http_code}' --header 'Host: route-1-extended-test-scoped-router-dblhx-8x858.myapps.mycompany.com' "http://172.17.0.10"'
Jun 24 01:55:21.724: INFO: stderr: ""
STEP: checking that route-2-extended-test-scoped-router-dblhx-8x858.myapps.mycompany.com matches a route
Jun 24 01:55:21.724: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.18.11.204:8443 --kubeconfig=/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-scoped-router-dblhx-8x858 execpod -- /bin/sh -c curl -s -o /dev/null -w '%{http_code}' --header 'Host: route-2-extended-test-scoped-router-dblhx-8x858.myapps.mycompany.com' "http://172.17.0.10"'
Jun 24 01:55:22.696: INFO: stderr: ""
Jun 24 01:55:22.747: INFO: Scoped Router test [Conformance][networking][router] openshift routers The HAProxy router should override the route host with a custom value logs:
I0624 05:55:11.869492 1 merged_client_builder.go:123] Using in-cluster configuration
I0624 05:55:11.924438 1 reflector.go:187] Starting reflector *api.Service (10m0s) from github.com/openshift/origin/pkg/router/template/service_lookup.go:31
I0624 05:55:11.929642 1 reflector.go:236] Listing and watching *api.Service from github.com/openshift/origin/pkg/router/template/service_lookup.go:31
I0624 05:55:11.955609 1 router.go:156] Creating a new template router, writing to /var/lib/haproxy/router
I0624 05:55:11.955750 1 router.go:350] Template router will coalesce reloads within 5 seconds of each other
I0624 05:55:11.955777 1 router.go:400] Router default cert from router container
I0624 05:55:11.955783 1 router.go:214] Reading persisted state
I0624 05:55:11.981283 1 router.go:218] Committing state
I0624 05:55:11.981297 1 router.go:455] Writing the router state
I0624 05:55:12.001303 1 router.go:462] Writing the router config
I0624 05:55:12.009029 1 router.go:476] Reloading the router
E0624 05:55:12.024646 1 reflector.go:190] github.com/openshift/origin/pkg/router/template/service_lookup.go:31: Failed to list *api.Service: User "system:serviceaccount:extended-test-scoped-router-dblhx-8x858:default" cannot list all services in the cluster
I0624 05:55:12.428067 1 router.go:554] Router reloaded:
 - Proxy protocol 'FALSE'. Checking HAProxy /healthz on port 1936 ...
 - HAProxy port 1936 health check ok : 0 retry attempt(s).
I0624 05:55:12.428137 1 router.go:237] Router is only using resources in namespace extended-test-scoped-router-dblhx-8x858
I0624 05:55:12.428166 1 reflector.go:187] Starting reflector *api.Route (10m0s) from github.com/openshift/origin/pkg/router/controller/factory/factory.go:88
I0624 05:55:12.428215 1 reflector.go:187] Starting reflector *api.Endpoints (10m0s) from github.com/openshift/origin/pkg/router/controller/factory/factory.go:95
I0624 05:55:12.428250 1 router_controller.go:70] Running router controller
I0624 05:55:12.428267 1 reaper.go:17] Launching reaper
I0624 05:55:12.428345 1 reflector.go:236] Listing and watching *api.Route from github.com/openshift/origin/pkg/router/controller/factory/factory.go:88
I0624 05:55:12.428681 1 reflector.go:236] Listing and watching *api.Endpoints from github.com/openshift/origin/pkg/router/controller/factory/factory.go:95
I0624 05:55:12.439409 1 plugin.go:159] Processing 0 Endpoints for Name: endpoints (MODIFIED)
I0624 05:55:12.439439 1 plugin.go:171] Modifying endpoints for extended-test-scoped-router-dblhx-8x858/endpoints
I0624 05:55:12.439453 1 router.go:823] Ignoring change for extended-test-scoped-router-dblhx-8x858/endpoints, endpoints are the same
I0624 05:55:12.439461 1 router_controller.go:296] Router sync in progress
I0624 05:55:12.444657 1 router_controller.go:305] Processing Route: extended-test-scoped-router-dblhx-8x858/route-1 -> endpoints
I0624 05:55:12.444678 1 router_controller.go:306] Alias: first.example.com
I0624 05:55:12.444683 1 router_controller.go:307] Path:
I0624 05:55:12.444688 1 router_controller.go:308] Event: MODIFIED
I0624 05:55:12.444700 1 router.go:132] host first.example.com admitted
I0624 05:55:12.444820 1 unique_host.go:195] Route extended-test-scoped-router-dblhx-8x858/route-1 claims first.example.com
I0624 05:55:12.444843 1 status.go:179] has last touch <nil> for extended-test-scoped-router-dblhx-8x858/route-1
I0624 05:55:12.444877 1 status.go:269] admit: admitting route by updating status: route-1 (true): first.example.com
I0624 05:55:12.459646 1 router.go:781] Adding route extended-test-scoped-router-dblhx-8x858/route-1
I0624 05:55:12.459663 1 router_controller.go:298] Router sync complete
I0624 05:55:12.459670 1 router.go:435] Router state synchronized for the first time
I0624 05:55:12.459721 1 router.go:455] Writing the router state
I0624 05:55:12.461603 1 router_controller.go:305] Processing Route: extended-test-scoped-router-dblhx-8x858/route-1 -> endpoints
I0624 05:55:12.461618 1 router_controller.go:306] Alias: first.example.com
I0624 05:55:12.461623 1 router_controller.go:307] Path:
I0624 05:55:12.461628 1 router_controller.go:308] Event: MODIFIED
I0624 05:55:12.461636 1 router.go:132] host first.example.com admitted
I0624 05:55:12.461664 1 status.go:245] admit: route already admitted
I0624 05:55:12.466920 1 router.go:462] Writing the router config
I0624 05:55:12.468898 1 router.go:476] Reloading the router
I0624 05:55:12.500301 1 reaper.go:24] Signal received: child exited
I0624 05:55:12.500353 1 reaper.go:32] Reaped process with pid 28
I0624 05:55:12.511811 1 reaper.go:24] Signal received: child exited
I0624 05:55:12.511811 1 router.go:554] Router reloaded:
 - Proxy protocol 'FALSE'. Checking HAProxy /healthz on port 1936 ...
 - HAProxy port 1936 health check ok : 0 retry attempt(s).
I0624 05:55:13.024850 1 reflector.go:236] Listing and watching *api.Service from github.com/openshift/origin/pkg/router/template/service_lookup.go:31
E0624 05:55:13.028924 1 reflector.go:190] github.com/openshift/origin/pkg/router/template/service_lookup.go:31: Failed to list *api.Service: User "system:serviceaccount:extended-test-scoped-router-dblhx-8x858:default" cannot list all services in the cluster
I0624 05:55:14.029127 1 reflector.go:236] Listing and watching *api.Service from github.com/openshift/origin/pkg/router/template/service_lookup.go:31
E0624 05:55:14.032767 1 reflector.go:190] github.com/openshift/origin/pkg/router/template/service_lookup.go:31: Failed to list *api.Service: User "system:serviceaccount:extended-test-scoped-router-dblhx-8x858:default" cannot list all services in the cluster
I0624 05:55:15.033011 1 reflector.go:236] Listing and watching *api.Service from github.com/openshift/origin/pkg/router/template/service_lookup.go:31
E0624 05:55:15.038961 1 reflector.go:190] github.com/openshift/origin/pkg/router/template/service_lookup.go:31: Failed to list *api.Service: User "system:serviceaccount:extended-test-scoped-router-dblhx-8x858:default" cannot list all services in the cluster
I0624 05:55:16.039150 1 reflector.go:236] Listing and watching *api.Service from github.com/openshift/origin/pkg/router/template/service_lookup.go:31
E0624 05:55:16.042996 1 reflector.go:190] github.com/openshift/origin/pkg/router/template/service_lookup.go:31: Failed to list *api.Service: User "system:serviceaccount:extended-test-scoped-router-dblhx-8x858:default" cannot list all services in the cluster
I0624 05:55:16.558621 1 plugin.go:159] Processing 1 Endpoints for Name: endpoints (MODIFIED)
I0624 05:55:16.558644 1 plugin.go:162] Subset 0 : api.EndpointSubset{Addresses:[]api.EndpointAddress{api.EndpointAddress{IP:"172.17.0.7", Hostname:"", NodeName:(*string)(0xc4202d6200), TargetRef:(*api.ObjectReference)(0xc420e649a0)}}, NotReadyAddresses:[]api.EndpointAddress(nil), Ports:[]api.EndpointPort{api.EndpointPort{Name:"", Port:8080, Protocol:"TCP"}}}
I0624 05:55:16.558730 1 plugin.go:171] Modifying endpoints for extended-test-scoped-router-dblhx-8x858/endpoints
I0624 05:55:16.609091 1 router_controller.go:305] Processing Route: extended-test-scoped-router-dblhx-8x858/route-1 -> endpoints
I0624 05:55:16.609111 1 router_controller.go:306] Alias: first.example.com
I0624 05:55:16.609117 1 router_controller.go:307] Path:
I0624 05:55:16.609121 1 router_controller.go:308] Event: MODIFIED
I0624 05:55:16.609133 1 router.go:132] host first.example.com admitted
I0624 05:55:16.609200 1 status.go:179] has last touch 2019-06-24 05:55:16 +0000 UTC for extended-test-scoped-router-dblhx-8x858/route-1
I0624 05:55:16.609226 1 status.go:196] different cached last touch of 2019-06-24 05:55:12 +0000 UTC
I0624 05:55:16.609249 1 status.go:261] admit: observed a route update from someone else: route extended-test-scoped-router-dblhx-8x858/route-1 has been updated to an inconsistent value, doing nothing
I0624 05:55:16.955954 1 router.go:455] Writing the router state
I0624 05:55:16.956259 1 router.go:462] Writing the router config
I0624 05:55:16.958031 1 router.go:476] Reloading the router
I0624 05:55:17.000452 1 reaper.go:24] Signal received: child exited
I0624 05:55:17.000496 1 reaper.go:32] Reaped process with pid 54
I0624 05:55:17.024941 1 router.go:554] Router reloaded:
 - Proxy protocol 'FALSE'. Checking HAProxy /healthz on port 1936 ...
 - HAProxy port 1936 health check ok : 0 retry attempt(s).
I0624 05:55:17.033248 1 reaper.go:24] Signal received: child exited
I0624 05:55:17.043381 1 reflector.go:236] Listing and watching *api.Service from github.com/openshift/origin/pkg/router/template/service_lookup.go:31
E0624 05:55:17.068584 1 reflector.go:190] github.com/openshift/origin/pkg/router/template/service_lookup.go:31: Failed to list *api.Service: User "system:serviceaccount:extended-test-scoped-router-dblhx-8x858:default" cannot list all services in the cluster
I0624 05:55:18.074553 1 reflector.go:236] Listing and watching *api.Service from github.com/openshift/origin/pkg/router/template/service_lookup.go:31
E0624 05:55:18.077930 1 reflector.go:190] github.com/openshift/origin/pkg/router/template/service_lookup.go:31: Failed to list *api.Service: User "system:serviceaccount:extended-test-scoped-router-dblhx-8x858:default" cannot list all services in the cluster
I0624 05:55:19.078318 1 reflector.go:236] Listing and watching *api.Service from github.com/openshift/origin/pkg/router/template/service_lookup.go:31
E0624 05:55:19.081594 1 reflector.go:190] github.com/openshift/origin/pkg/router/template/service_lookup.go:31: Failed to list *api.Service: User "system:serviceaccount:extended-test-scoped-router-dblhx-8x858:default" cannot list all services in the cluster
I0624 05:55:20.081796 1 reflector.go:236] Listing and watching *api.Service from github.com/openshift/origin/pkg/router/template/service_lookup.go:31
E0624 05:55:20.085090 1 reflector.go:190] github.com/openshift/origin/pkg/router/template/service_lookup.go:31: Failed to list *api.Service: User "system:serviceaccount:extended-test-scoped-router-dblhx-8x858:default" cannot list all services in the cluster
I0624 05:55:21.092260 1 reflector.go:236] Listing and watching *api.Service from github.com/openshift/origin/pkg/router/template/service_lookup.go:31
E0624 05:55:21.096335 1 reflector.go:190] github.com/openshift/origin/pkg/router/template/service_lookup.go:31: Failed to list *api.Service: User "system:serviceaccount:extended-test-scoped-router-dblhx-8x858:default" cannot list all services in the cluster
I0624 05:55:22.096533 1 reflector.go:236] Listing and watching *api.Service from github.com/openshift/origin/pkg/router/template/service_lookup.go:31
E0624 05:55:22.137127 1 reflector.go:190] github.com/openshift/origin/pkg/router/template/service_lookup.go:31: Failed to list *api.Service: User "system:serviceaccount:extended-test-scoped-router-dblhx-8x858:default" cannot list all services in the cluster
[AfterEach] [Conformance][networking][router] openshift routers
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 01:55:22.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "extended-test-scoped-router-dblhx-8x858" for this suite.
Jun 24 01:55:47.815: INFO: namespace: extended-test-scoped-router-dblhx-8x858, resource: bindings, ignored listing per whitelist
• [SLOW TEST:43.063 seconds]
[Conformance][networking][router] openshift routers
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:147
  The HAProxy router
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:146
    should override the route host with a custom value
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/scoped.go:145
------------------------------
[BeforeEach] [Top Level]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] StatefulSet [BeforeEach]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
[k8s.io] Deploy clustered applications [Feature:StatefulSet] [Slow]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should creating a working CockroachDB cluster
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/statefulset.go:552
skipping tests not in the Origin conformance suite
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[k8s.io] ConfigMap should be consumable in multiple volumes in the same pod [Conformance] [Volume]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap.go:494
[BeforeEach] [Top Level]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] ConfigMap
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:55:39.863: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:55:39.931: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [Conformance] [Volume]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap.go:494
STEP: Creating configMap with name configmap-test-volume-b1fff659-9644-11e9-a527-0e9110352016
STEP: Creating a pod to test consume configMaps
Jun 24 01:55:39.988: INFO: Waiting up to 5m0s for pod pod-configmaps-b2006954-9644-11e9-a527-0e9110352016 status to be success or failure
Jun 24 01:55:39.990: INFO: Waiting for pod pod-configmaps-b2006954-9644-11e9-a527-0e9110352016 in namespace 'e2e-tests-configmap-25pz0' status to be 'success or failure'(found phase: "Pending", readiness: false) (1.844535ms elapsed)
Jun 24 01:55:41.992: INFO: Waiting for pod pod-configmaps-b2006954-9644-11e9-a527-0e9110352016 in namespace 'e2e-tests-configmap-25pz0' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.00444897s elapsed)
STEP: Saw pod success
Jun 24 01:55:43.996: INFO: Trying to get logs from node 172.18.11.204 pod pod-configmaps-b2006954-9644-11e9-a527-0e9110352016 container configmap-volume-test: <nil>
STEP: delete the pod
Jun 24 01:55:44.010: INFO: Waiting for pod pod-configmaps-b2006954-9644-11e9-a527-0e9110352016 to disappear
Jun 24 01:55:44.012: INFO: Pod pod-configmaps-b2006954-9644-11e9-a527-0e9110352016 no longer exists
[AfterEach] [k8s.io] ConfigMap
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 01:55:44.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-25pz0" for this suite.
Jun 24 01:55:54.188: INFO: namespace: e2e-tests-configmap-25pz0, resource: bindings, ignored listing per whitelist
• [SLOW TEST:14.542 seconds]
[k8s.io] ConfigMap
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  should be consumable in multiple volumes in the same pod [Conformance] [Volume]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap.go:494
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
NetworkPolicy
[BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:499
  when using a plugin that implements NetworkPolicy
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:285
    should enforce multiple, stacked policies with overlapping podSelectors [Feature:NetworkPolicy]
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:368

    skipping tests not in the Origin conformance suite
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/statefulset.go:348
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] StatefulSet
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:53:59.997: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:54:00.080: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] StatefulSet
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/statefulset.go:63
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/statefulset.go:84
STEP: Creating service test in namespace e2e-tests-statefulset-t0wzl
[It] Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/statefulset.go:348
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-t0wzl, and pausing scale operations after each pod
Jun 24 01:54:00.173: INFO: Found 0 stateful pods, waiting for 1
Jun 24 01:54:10.176: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scaling up stateful set ss to 3 replicas and pausing after 2nd pod
Jun 24 01:54:10.184: INFO: Set annotation pod.alpha.kubernetes.io/initialized to true on pod ss-0
Jun 24 01:54:10.203: INFO: Found 1 stateful pods, waiting for 2
Jun 24 01:54:20.206: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jun 24 01:54:20.206: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Pending - Ready=false
Jun 24 01:54:30.206: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jun 24 01:54:30.206: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=false
Jun 24 01:54:40.207: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jun 24 01:54:40.207: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
STEP: Before scale up finished setting 2nd pod to be not ready by breaking readiness probe
Jun 24 01:54:40.209: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.18.11.204:8443 --kubeconfig=/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig exec --namespace=e2e-tests-statefulset-t0wzl ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/'
Jun 24 01:54:40.656: INFO: stderr: ""
Jun 24 01:54:40.656: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jun 24 01:54:40.656: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.18.11.204:8443 --kubeconfig=/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig exec --namespace=e2e-tests-statefulset-t0wzl ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/'
Jun 24 01:54:41.028: INFO: stderr: ""
Jun 24 01:54:41.028: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jun 24 01:54:41.028: INFO: Waiting for statefulset status.replicas updated to 0
Jun 24 01:54:41.031: INFO: Waiting for stateful set status to become 0, currently 1
Jun 24 01:54:51.035: INFO: Waiting for stateful set status to become 0, currently 1
Jun 24 01:55:01.034: INFO: Waiting for stateful set status to become 0, currently 1
Jun 24 01:55:11.035: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jun 24 01:55:11.035: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
STEP: Continue scale operation after the 2nd pod, and scaling down to 1 replica
Jun 24 01:55:11.045: INFO: Set annotation pod.alpha.kubernetes.io/initialized to true on pod ss-1
STEP: Verifying that the 2nd pod wont be removed if it is not running and ready
Jun 24 01:55:11.061: INFO: Verifying statefulset ss doesn't scale past 2 for another 9.999999555s
Jun 24 01:55:12.064: INFO: Verifying statefulset ss doesn't scale past 2 for another 8.997903718s
Jun 24 01:55:13.066: INFO: Verifying statefulset ss doesn't scale past 2 for another 7.994956288s
Jun 24 01:55:14.069: INFO: Verifying statefulset ss doesn't scale past 2 for another 6.992098031s
Jun 24 01:55:15.072: INFO: Verifying statefulset ss doesn't scale past 2 for another 5.989462622s
Jun 24 01:55:16.075: INFO: Verifying statefulset ss doesn't scale past 2 for another 4.986807433s
Jun 24 01:55:17.087: INFO: Verifying statefulset ss doesn't scale past 2 for another 3.984106037s
Jun 24 01:55:18.090: INFO: Verifying statefulset ss doesn't scale past 2 for another 2.971355816s
Jun 24 01:55:19.097: INFO: Verifying statefulset ss doesn't scale past 2 for another 1.964978945s
Jun 24 01:55:20.100: INFO: Verifying statefulset ss doesn't scale past 2 for another 961.770972ms
STEP: Verifying the 2nd pod is removed only when it becomes running and ready
Jun 24 01:55:21.125: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.18.11.204:8443 --kubeconfig=/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig exec --namespace=e2e-tests-statefulset-t0wzl ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/'
Jun 24 01:55:21.775: INFO: stderr: ""
Jun 24 01:55:21.775: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jun 24 01:55:21.775: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.18.11.204:8443 --kubeconfig=/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig exec --namespace=e2e-tests-statefulset-t0wzl ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/'
Jun 24 01:55:22.770: INFO: stderr: ""
Jun 24 01:55:22.770: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jun 24 01:55:30.277: INFO: Observed event MODIFIED for pod ss-1. Phase Running, Pod is ready true
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/statefulset.go:92
Jun 24 01:55:30.277: INFO: Deleting all statefulset in ns e2e-tests-statefulset-t0wzl
Jun 24 01:55:30.297: INFO: Scaling statefulset ss to 0
Jun 24 01:55:50.351: INFO: Waiting for statefulset status.replicas updated to 0
Jun 24 01:55:50.353: INFO: Deleting statefulset ss
[AfterEach] [k8s.io] StatefulSet
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 01:55:50.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-t0wzl" for this suite.
Jun 24 01:56:00.465: INFO: namespace: e2e-tests-statefulset-t0wzl, resource: bindings, ignored listing per whitelist
• [SLOW TEST:120.527 seconds]
[k8s.io] StatefulSet
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
    Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/statefulset.go:348
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[builds][Slow] build can have Docker image source
[BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/image_source.go:81
  build with image source
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/image_source.go:55
    should complete successfully and contain the expected file
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/image_source.go:54

    skipping tests not in the Origin conformance suite
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim. [Volume]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/resource_quota.go:321
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] ResourceQuota
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:55:47.880: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:55:47.950: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a persistent volume claim. [Volume]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/resource_quota.go:321
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a PersistentVolumeClaim
STEP: Ensuring resource quota status captures persistent volume claim creation
STEP: Deleting a PersistentVolumeClaim
STEP: Ensuring resource quota status released usage
[AfterEach] [k8s.io] ResourceQuota
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 01:55:54.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-resourcequota-r1m8n" for this suite.
Jun 24 01:56:04.405: INFO: namespace: e2e-tests-resourcequota-r1m8n, resource: bindings, ignored listing per whitelist
• [SLOW TEST:16.525 seconds]
[k8s.io] ResourceQuota
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  should create a ResourceQuota and capture the life of a persistent volume claim. [Volume]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/resource_quota.go:321
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[image_ecosystem][Slow] openshift images should be SCL enabled
[BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:76
  returning s2i usage when running the image
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:38
    "ci.dev.openshift.redhat.com:5000/openshift/perl-516-rhel7" should print the usage
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:37

    skipping tests not in the Origin conformance suite
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Kubernetes Dashboard
[BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  should check that the kubernetes-dashboard instance is alive
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/dashboard.go:98

  skipping tests not in the Origin conformance suite
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[k8s.io] Services should serve multiport endpoints from pods [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/service.go:215
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Services
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:55:46.549: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:55:46.616: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Services
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/service.go:52
[It] should serve multiport endpoints from pods [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/service.go:215
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-2gx8c
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-2gx8c to expose endpoints map[]
Jun 24 01:55:46.694: INFO: Get endpoints failed (17.816256ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jun 24 01:55:47.696: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-2gx8c exposes endpoints map[] (1.020067705s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-2gx8c
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-2gx8c to expose endpoints map[pod1:[100]]
Jun 24 01:55:50.745: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-2gx8c exposes endpoints map[pod1:[100]] (3.041929761s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-2gx8c
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-2gx8c to expose endpoints map[pod1:[100] pod2:[101]]
Jun 24 01:55:53.800: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-2gx8c exposes endpoints map[pod2:[101] pod1:[100]] (3.050450354s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-2gx8c
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-2gx8c to expose endpoints map[pod2:[101]]
Jun 24 01:55:53.836: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-2gx8c exposes endpoints map[pod2:[101]] (25.954584ms elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-2gx8c
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-2gx8c to expose endpoints map[]
Jun 24 01:55:54.917: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-2gx8c exposes endpoints map[] (1.067960518s elapsed)
[AfterEach] [k8s.io] Services
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 01:55:54.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-2gx8c" for this suite.
Jun 24 01:56:05.150: INFO: namespace: e2e-tests-services-2gx8c, resource: bindings, ignored listing per whitelist
[AfterEach] [k8s.io] Services
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/service.go:64
• [SLOW TEST:18.689 seconds]
[k8s.io] Services
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  should serve multiport endpoints from pods [Conformance]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/service.go:215
------------------------------
[k8s.io] ConfigMap should be consumable via the environment [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap.go:422
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] ConfigMap
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:55:54.408: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:55:54.482: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap.go:422
STEP: Creating configMap e2e-tests-configmap-vzzqx/configmap-test-bac3771c-9644-11e9-a527-0e9110352016
STEP: Creating a pod to test consume configMaps
Jun 24 01:55:54.702: INFO: Waiting up to 5m0s for pod pod-configmaps-bac54478-9644-11e9-a527-0e9110352016 status to be success or failure
Jun 24 01:55:54.717: INFO: Waiting for pod pod-configmaps-bac54478-9644-11e9-a527-0e9110352016 in namespace 'e2e-tests-configmap-vzzqx' status to be 'success or failure'(found phase: "Pending", readiness: false) (14.767722ms elapsed)
Jun 24 01:55:56.719: INFO: Waiting for pod pod-configmaps-bac54478-9644-11e9-a527-0e9110352016 in namespace 'e2e-tests-configmap-vzzqx' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.016909213s elapsed)
STEP: Saw pod success
Jun 24 01:55:58.727: INFO: Trying to get logs from node 172.18.11.204 pod pod-configmaps-bac54478-9644-11e9-a527-0e9110352016 container env-test: <nil>
STEP: delete the pod
Jun 24 01:55:58.778: INFO: Waiting for pod pod-configmaps-bac54478-9644-11e9-a527-0e9110352016 to disappear
Jun 24 01:55:58.783: INFO: Pod pod-configmaps-bac54478-9644-11e9-a527-0e9110352016 no longer exists
[AfterEach] [k8s.io] ConfigMap
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 01:55:58.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-vzzqx" for this suite.
Jun 24 01:56:08.849: INFO: namespace: e2e-tests-configmap-vzzqx, resource: bindings, ignored listing per whitelist
• [SLOW TEST:14.615 seconds]
[k8s.io] ConfigMap
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  should be consumable via the environment [Conformance]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap.go:422
------------------------------
[k8s.io] EmptyDir volumes should support (non-root,0666,tmpfs) [Conformance] [Volume]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:85
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] EmptyDir volumes
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:56:04.416: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:56:04.516: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [Conformance] [Volume]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:85
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jun 24 01:56:04.594: INFO: Waiting up to 5m0s for pod pod-c0a67314-9644-11e9-a4d2-0e9110352016 status to be success or failure
Jun 24 01:56:04.596: INFO: Waiting for pod pod-c0a67314-9644-11e9-a4d2-0e9110352016 in namespace 'e2e-tests-emptydir-6v26r' status to be 'success or failure'(found phase: "Pending", readiness: false) (1.710391ms elapsed)
Jun 24 01:56:06.601: INFO: Waiting for pod pod-c0a67314-9644-11e9-a4d2-0e9110352016 in namespace 'e2e-tests-emptydir-6v26r' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.00711099s elapsed)
Jun 24 01:56:08.606: INFO: Waiting for pod pod-c0a67314-9644-11e9-a4d2-0e9110352016 in namespace 'e2e-tests-emptydir-6v26r' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.01207786s elapsed)
Jun 24 01:56:10.609: INFO: Waiting for pod pod-c0a67314-9644-11e9-a4d2-0e9110352016 in namespace 'e2e-tests-emptydir-6v26r' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.01465275s elapsed)
Jun 24 01:56:12.628: INFO: Waiting for pod pod-c0a67314-9644-11e9-a4d2-0e9110352016 in namespace 'e2e-tests-emptydir-6v26r' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.033924325s elapsed)
STEP: Saw pod success
Jun 24 01:56:14.633: INFO: Trying to get logs from node 172.18.11.204 pod pod-c0a67314-9644-11e9-a4d2-0e9110352016 container test-container: <nil>
STEP: delete the pod
Jun 24 01:56:14.665: INFO: Waiting for pod pod-c0a67314-9644-11e9-a4d2-0e9110352016 to disappear
Jun 24 01:56:14.670: INFO: Pod pod-c0a67314-9644-11e9-a4d2-0e9110352016 no longer exists
[AfterEach] [k8s.io] EmptyDir volumes
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 01:56:14.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-6v26r" for this suite.
Jun 24 01:56:24.834: INFO: namespace: e2e-tests-emptydir-6v26r, resource: bindings, ignored listing per whitelist
• [SLOW TEST:20.421 seconds]
[k8s.io] EmptyDir volumes
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  should support (non-root,0666,tmpfs) [Conformance] [Volume]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:85
------------------------------
[k8s.io] Projected should be consumable from pods in volume with mappings and Item Mode set [Conformance] [Volume]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:61
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Projected
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:56:09.025: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:56:09.179: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Projected
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:803
[It] should be consumable from pods in volume with mappings and Item Mode set [Conformance] [Volume]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:61
STEP: Creating projection with secret that has name projected-secret-test-map-c375187c-9644-11e9-a527-0e9110352016
STEP: Creating a pod to test consume secrets
Jun 24 01:56:09.285: INFO: Waiting up to 5m0s for pod pod-projected-secrets-c375a490-9644-11e9-a527-0e9110352016 status to be success or failure
Jun 24 01:56:09.292: INFO: Waiting for pod pod-projected-secrets-c375a490-9644-11e9-a527-0e9110352016 in namespace 'e2e-tests-projected-ltxww' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.398046ms elapsed)
Jun 24 01:56:11.295: INFO: Waiting for pod pod-projected-secrets-c375a490-9644-11e9-a527-0e9110352016 in namespace 'e2e-tests-projected-ltxww' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.00944991s elapsed)
Jun 24 01:56:13.302: INFO: Waiting for pod pod-projected-secrets-c375a490-9644-11e9-a527-0e9110352016 in namespace 'e2e-tests-projected-ltxww' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.016984801s elapsed)
STEP: Saw pod success
Jun 24 01:56:15.308: INFO: Trying to get logs from node 172.18.11.204 pod pod-projected-secrets-c375a490-9644-11e9-a527-0e9110352016 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jun 24 01:56:15.339: INFO: Waiting for pod pod-projected-secrets-c375a490-9644-11e9-a527-0e9110352016 to disappear
Jun 24 01:56:15.354: INFO: Pod pod-projected-secrets-c375a490-9644-11e9-a527-0e9110352016 no longer exists
[AfterEach] [k8s.io] Projected
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 01:56:15.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-ltxww" for this suite.
Jun 24 01:56:25.442: INFO: namespace: e2e-tests-projected-ltxww, resource: bindings, ignored listing per whitelist
• [SLOW TEST:16.519 seconds]
[k8s.io] Projected
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  should be consumable from pods in volume with mappings and Item Mode set [Conformance] [Volume]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:61
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Deployment
[BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  deployment should label adopted RSs and pods
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/deployment.go:92

  skipping tests not in the Origin conformance suite
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[image_ecosystem][ruby][Slow] hot deploy for openshift ruby image
[BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/s2i_ruby.go:91
  Rails example
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/s2i_ruby.go:90
    should work with hot deploy
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/s2i_ruby.go:89

    skipping tests not in the Origin conformance suite
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Federated Services [Feature:Federation]
[BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  with clusters
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/service.go:287 Federated Service /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/service.go:141 should not be deleted from underlying clusters when OrphanDependents is true /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/service.go:134 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [networking] network isolation [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:58 when using a plugin that isolates namespaces by default /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:273 should allow communication from default to non-default namespace on a different node /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:48 skipping tests not in the Origin conformance suite 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Probing container
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:56:25.559: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:56:25.660: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a docker exec liveness probe with timeout [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/container_probe.go:265
[AfterEach] [k8s.io] Probing container
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 01:56:25.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-tvqp9" for this suite.
Jun 24 01:56:35.843: INFO: namespace: e2e-tests-container-probe-tvqp9, resource: bindings, ignored listing per whitelist
S [SKIPPING] [10.302 seconds]
[k8s.io] Probing container
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should be restarted with a docker exec liveness probe with timeout [Conformance] [It]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/container_probe.go:265
The default exec handler, dockertools.NativeExecHandler, does not support timeouts due to a limitation in the Docker Remote API
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/container_probe.go:239
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[builds][timing] capture build stages and durations [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_timing.go:95
should record build stages and durations for s2i
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_timing.go:68
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[k8s.io] Projected should provide podname only [Conformance] [Volume]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:812
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Projected
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:56:24.839: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:56:24.972: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Projected
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:803
[It] should provide podname only [Conformance] [Volume]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:812
STEP: Creating a pod to test downward API volume plugin
Jun 24 01:56:25.055: INFO: Waiting up to 5m0s for pod downwardapi-volume-ccdc5019-9644-11e9-a4d2-0e9110352016 status to be success or failure
Jun 24 01:56:25.070: INFO: Waiting for pod downwardapi-volume-ccdc5019-9644-11e9-a4d2-0e9110352016 in namespace 'e2e-tests-projected-x6g1f' status to be 'success or failure'(found phase: "Pending", readiness: false) (14.909546ms elapsed)
Jun 24 01:56:27.076: INFO: Waiting for pod downwardapi-volume-ccdc5019-9644-11e9-a4d2-0e9110352016 in namespace 'e2e-tests-projected-x6g1f' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.021000712s elapsed)
Jun 24 01:56:29.082: INFO: Waiting for pod downwardapi-volume-ccdc5019-9644-11e9-a4d2-0e9110352016 in namespace 'e2e-tests-projected-x6g1f' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.026954897s elapsed)
STEP: Saw pod success
Jun 24 01:56:31.270: INFO: Trying to get logs from node 172.18.11.204 pod downwardapi-volume-ccdc5019-9644-11e9-a4d2-0e9110352016 container client-container: <nil>
STEP: delete the pod
Jun 24 01:56:31.522: INFO: Waiting for pod downwardapi-volume-ccdc5019-9644-11e9-a4d2-0e9110352016 to disappear
Jun 24 01:56:31.589: INFO: Pod downwardapi-volume-ccdc5019-9644-11e9-a4d2-0e9110352016 no longer exists
[AfterEach] [k8s.io] Projected
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 01:56:31.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-x6g1f" for this suite.
Jun 24 01:56:41.704: INFO: namespace: e2e-tests-projected-x6g1f, resource: bindings, ignored listing per whitelist
• [SLOW TEST:16.901 seconds]
[k8s.io] Projected
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should provide podname only [Conformance] [Volume]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:812
------------------------------
[k8s.io] ConfigMap updates should be reflected in volume [Conformance] [Volume]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap.go:156
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] ConfigMap
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:55:17.003: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:55:17.087: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [Conformance] [Volume]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap.go:156
STEP: Creating configMap with name configmap-test-upd-a46334fe-9644-11e9-a60e-0e9110352016
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-a46334fe-9644-11e9-a60e-0e9110352016
STEP: waiting to observe update in volume
[AfterEach] [k8s.io] ConfigMap
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 01:56:42.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-dgjq8" for this suite.
Jun 24 01:57:07.687: INFO: namespace: e2e-tests-configmap-dgjq8, resource: bindings, ignored listing per whitelist
• [SLOW TEST:110.704 seconds]
[k8s.io] ConfigMap
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
updates should be reflected in volume [Conformance] [Volume]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap.go:156
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Federation daemonsets [Feature:Federation] [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
DaemonSet objects
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/daemonset.go:101
should be created and deleted successfully
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/daemonset.go:77
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[image_ecosystem][Slow] openshift images should be SCL enabled [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:76
using the SCL in s2i images
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:73
"ci.dev.openshift.redhat.com:5000/openshift/nodejs-010-rhel7" should be SCL enabled
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:72
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[k8s.io] Security Context [Feature:SecurityContext] should support seccomp default which is unconfined [Feature:Seccomp]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/security_context.go:140
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Security Context [Feature:SecurityContext]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:57:07.713: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:57:07.792: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp default which is unconfined [Feature:Seccomp]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/security_context.go:140
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Jun 24 01:57:07.891: INFO: Waiting up to 5m0s for pod security-context-e664defd-9644-11e9-a60e-0e9110352016 status to be success or failure
Jun 24 01:57:07.905: INFO: Waiting for pod security-context-e664defd-9644-11e9-a60e-0e9110352016 in namespace 'e2e-tests-security-context-gxnsv' status to be 'success or failure'(found phase: "Pending", readiness: false) (13.805118ms elapsed)
Jun 24 01:57:09.928: INFO: Waiting for pod security-context-e664defd-9644-11e9-a60e-0e9110352016 in namespace 'e2e-tests-security-context-gxnsv' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.036517341s elapsed)
STEP: Saw pod success
Jun 24 01:57:11.932: INFO: Trying to get logs from node 172.18.11.204 pod security-context-e664defd-9644-11e9-a60e-0e9110352016 container test-container: <nil>
STEP: delete the pod
Jun 24 01:57:11.948: INFO: Waiting for pod security-context-e664defd-9644-11e9-a60e-0e9110352016 to disappear
Jun 24 01:57:11.952: INFO: Pod security-context-e664defd-9644-11e9-a60e-0e9110352016 no longer exists
[AfterEach] [k8s.io] Security Context [Feature:SecurityContext]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 01:57:11.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-security-context-gxnsv" for this suite.
Jun 24 01:57:22.191: INFO: namespace: e2e-tests-security-context-gxnsv, resource: bindings, ignored listing per whitelist
• [SLOW TEST:14.528 seconds]
[k8s.io] Security Context [Feature:SecurityContext]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should support seccomp default which is unconfined [Feature:Seccomp]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/security_context.go:140
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Garbage collector [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should orphan pods created by rc if delete options say so
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/garbage_collector.go:318
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[Feature:ImagePrune] Image prune [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/images/prune.go:117
with --all=false flag
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/images/prune.go:116
should prune only internally managed images
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/images/prune.go:115
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[builds][Conformance] build without output image building from templates should create an image from a S2i template without an output image reference defined
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/no_outputname.go:53
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [builds][Conformance] build without output image
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:56:00.529: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:56:00.548: INFO: configPath is now "extended-test-build-no-outputname-jhqbx-v10rd-user.kubeconfig"
Jun 24 01:56:00.548: INFO: The user is now "extended-test-build-no-outputname-jhqbx-v10rd-user"
Jun 24 01:56:00.548: INFO: Creating project "extended-test-build-no-outputname-jhqbx-v10rd"
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create an image from a S2i template without an output image reference defined
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/no_outputname.go:53
Jun 24 01:56:00.974: INFO: Running 'oc create --config=extended-test-build-no-outputname-jhqbx-v10rd-user.kubeconfig --namespace=extended-test-build-no-outputname-jhqbx-v10rd -f /tmp/fixture-testdata-dir557889249/test/extended/testdata/test-s2i-no-outputname.json'
buildconfig "test-sti" created
STEP: expecting build to pass without an output image reference specified
Jun 24 01:56:01.228: INFO: Running 'oc start-build --config=extended-test-build-no-outputname-jhqbx-v10rd-user.kubeconfig --namespace=extended-test-build-no-outputname-jhqbx-v10rd test-sti -o=name'
start-build output with args [test-sti -o=name]:
Error><nil>
StdOut> build/test-sti-1
StdErr>
Waiting for test-sti-1 to complete
Done waiting for test-sti-1: util.BuildResult{BuildPath:"build/test-sti-1", BuildName:"test-sti-1", StartBuildStdErr:"", StartBuildStdOut:"build/test-sti-1", StartBuildErr:error(nil), BuildConfigName:"", Build:(*api.Build)(0xc42067fb80), BuildAttempt:true, BuildSuccess:true, BuildFailure:false, BuildCancelled:false, BuildTimeout:false, LogDumper:(util.LogDumperFunc)(nil), oc:(*util.CLI)(0xc4202db860)} with error: <nil>
STEP: verifying the build test-sti-1 output
Jun 24 01:57:12.518: INFO: Running 'oc logs --config=extended-test-build-no-outputname-jhqbx-v10rd-user.kubeconfig --namespace=extended-test-build-no-outputname-jhqbx-v10rd -f build/test-sti-1 --timestamps'
Build log:
2019-06-24T05:56:05.086085000Z I0624 05:56:05.080640 1 builder.go:72] redacted build: {"kind":"Build","apiVersion":"v1","metadata":{"name":"test-sti-1","namespace":"extended-test-build-no-outputname-jhqbx-v10rd","selfLink":"/oapi/v1/namespaces/extended-test-build-no-outputname-jhqbx-v10rd/builds/test-sti-1","uid":"becf42e9-9644-11e9-9f9d-0e9110352016","resourceVersion":"5826","creationTimestamp":"2019-06-24T05:56:01Z","labels":{"buildconfig":"test-sti","name":"test-sti","openshift.io/build-config.name":"test-sti","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"test-sti","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"v1","kind":"BuildConfig","name":"test-sti","uid":"bea8b386-9644-11e9-9f9d-0e9110352016","controller":true}]},"spec":{"serviceAccount":"builder","source":{"type":"Git","git":{"uri":"https://github.com/openshift/ruby-hello-world"}},"strategy":{"type":"Source","sourceStrategy":{"from":{"kind":"DockerImage","name":"centos/ruby-22-centos7"},"env":[{"name":"BUILD_LOGLEVEL","value":"5"}]}},"output":{},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Manually triggered"}]},"status":{"phase":"New","config":{"kind":"BuildConfig","namespace":"extended-test-build-no-outputname-jhqbx-v10rd","name":"test-sti"},"output":{}}}
2019-06-24T05:56:05.087534000Z I0624 05:56:05.083579 1 builder.go:83] Master version "v3.6.0-alpha.1+bf447db-1052", Builder version "v3.6.0-alpha.1+bf447db-1052"
2019-06-24T05:56:05.087795000Z I0624 05:56:05.084946 1 builder.go:174] Running build with cgroup limits: api.CGroupLimits{MemoryLimitBytes:92233720368547, CPUShares:2, CPUPeriod:100000, CPUQuota:-1, MemorySwap:92233720368547}
2019-06-24T05:56:05.098477000Z Cloning "https://github.com/openshift/ruby-hello-world" ...
2019-06-24T05:56:05.098790000Z I0624 05:56:05.094010 1 source.go:138] git ls-remote --heads https://github.com/openshift/ruby-hello-world
2019-06-24T05:56:05.099107000Z I0624 05:56:05.094036 1 repository.go:385] Executing git ls-remote --heads https://github.com/openshift/ruby-hello-world
2019-06-24T05:56:05.565832000Z I0624 05:56:05.564543 1 source.go:141] cf1fa898d2a78685ccde72f14b4922b474f73cd1 refs/heads/beta2
2019-06-24T05:56:05.566174000Z 2602ace61490de0513dfbd7c7de949356cf9bd17 refs/heads/beta3
2019-06-24T05:56:05.566515000Z 394e0f7c0446d65d163ecae9cf5b559ad60de6dd refs/heads/beta4
2019-06-24T05:56:05.566823000Z 11e9bbac1dcf5a06df07f5a6ab893a3cb9448011 refs/heads/blog_part1
2019-06-24T05:56:05.567147000Z 5619f11232c0a623f7da419438539335d49acfa3 refs/heads/config
2019-06-24T05:56:05.567583000Z 787f1beae9956c959c6af62ee43bfdda73769cf7 refs/heads/master
2019-06-24T05:56:05.567923000Z 9f70e0daf56b57d7f3cc012020df06ba7f914d0f refs/heads/revert-64-feature/fix-for-ruby-2.5-compatibility
2019-06-24T05:56:05.568304000Z ffa3f8596f3f82c0ee224f1b1d0c23102b1ad1f1 refs/heads/revert-66-feature/fix-for-ruby-2.5-compatibility-with-ci
2019-06-24T05:56:05.568620000Z d71bdd56df54d7400e1f72dc0929280e43627138 refs/heads/revert-69-gemfile
2019-06-24T05:56:05.568928000Z faccd39c6857edb7a3015cc6837fb347613f23c3 refs/heads/undo
2019-06-24T05:56:05.569306000Z I0624 05:56:05.564992 1 source.go:240] Cloning source from https://github.com/openshift/ruby-hello-world
2019-06-24T05:56:05.569622000Z I0624 05:56:05.565011 1 repository.go:385] Executing git clone --recursive --depth=1 https://github.com/openshift/ruby-hello-world /tmp/s2i-build409151771/upload/src
2019-06-24T05:56:05.952450000Z I0624 05:56:05.951733 1 repository.go:385] Executing git rev-parse --abbrev-ref HEAD
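An aside on the records above: `git ls-remote --heads` prints one `<sha> <ref>` pair per line, which is what the cloner echoes back into the build log. A minimal sketch of turning that output into a branch-to-commit map (`parse_ls_remote` is an illustrative helper, not code from Origin or s2i; the sample input is copied from the log):

```python
def parse_ls_remote(output: str) -> dict:
    """Map branch name -> commit sha from `git ls-remote --heads` output."""
    branches = {}
    for line in output.strip().splitlines():
        sha, ref = line.split(None, 1)  # split once: sha, then full ref name
        prefix = "refs/heads/"
        if ref.startswith(prefix):
            branches[ref[len(prefix):]] = sha
    return branches

sample = """\
cf1fa898d2a78685ccde72f14b4922b474f73cd1 refs/heads/beta2
787f1beae9956c959c6af62ee43bfdda73769cf7 refs/heads/master
faccd39c6857edb7a3015cc6837fb347613f23c3 refs/heads/undo
"""
print(parse_ls_remote(sample)["master"])  # 787f1beae9956c959c6af62ee43bfdda73769cf7
```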
2019-06-24T05:56:05.954622000Z I0624 05:56:05.954123 1 repository.go:385] Executing git rev-parse --verify HEAD
2019-06-24T05:56:05.956320000Z I0624 05:56:05.955927 1 repository.go:385] Executing git --no-pager show -s --format=%an HEAD
2019-06-24T05:56:05.959017000Z I0624 05:56:05.958129 1 repository.go:385] Executing git --no-pager show -s --format=%ae HEAD
2019-06-24T05:56:05.960559000Z I0624 05:56:05.960120 1 repository.go:385] Executing git --no-pager show -s --format=%cn HEAD
2019-06-24T05:56:05.962739000Z I0624 05:56:05.962251 1 repository.go:385] Executing git --no-pager show -s --format=%ce HEAD
2019-06-24T05:56:05.965189000Z I0624 05:56:05.964200 1 repository.go:385] Executing git --no-pager show -s --format=%ad HEAD
2019-06-24T05:56:05.966784000Z I0624 05:56:05.966233 1 repository.go:385] Executing git --no-pager show -s --format=%<(80,trunc)%s HEAD
2019-06-24T05:56:05.968935000Z I0624 05:56:05.968522 1 repository.go:385] Executing git config --get remote.origin.url
2019-06-24T05:56:05.970721000Z Commit: 787f1beae9956c959c6af62ee43bfdda73769cf7 (Merge pull request #78 from bparees/v22)
2019-06-24T05:56:05.971041000Z Author: Ben Parees <bparees@users.noreply.github.com>
2019-06-24T05:56:05.971374000Z Date: Thu Jan 17 17:21:03 2019 -0500
2019-06-24T05:56:05.971673000Z I0624 05:56:05.970185 1 repository.go:385] Executing git rev-parse --abbrev-ref HEAD
2019-06-24T05:56:05.973380000Z I0624 05:56:05.972356 1 repository.go:385] Executing git rev-parse --verify HEAD
2019-06-24T05:56:05.974763000Z I0624 05:56:05.974324 1 repository.go:385] Executing git --no-pager show -s --format=%an HEAD
2019-06-24T05:56:05.977252000Z I0624 05:56:05.976277 1 repository.go:385] Executing git --no-pager show -s --format=%ae HEAD
2019-06-24T05:56:05.978766000Z I0624 05:56:05.978361 1 repository.go:385] Executing git --no-pager show -s --format=%cn HEAD
2019-06-24T05:56:05.981400000Z I0624 05:56:05.980867 1 repository.go:385] Executing git --no-pager show -s --format=%ce HEAD
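Worth noting: the cloner above shells out once per commit field (`%an`, `%ae`, `%cn`, `%ce`, `%ad`, `%s`). The same fields can come back in a single `git show` call by NUL-joining the format specifiers (`%x00`) and splitting the output. A hedged sketch of that parsing side (`parse_commit_fields`, the key names, and the sample string are assumptions for illustration, not Origin's actual implementation; the sample's committer values are made up):

```python
# One call such as
#   git --no-pager show -s --format=%H%x00%an%x00%ae%x00%cn%x00%ce%x00%ad%x00%s HEAD
# yields every field at once; %x00 renders as a NUL byte, which cannot appear
# inside the fields themselves, so splitting on it is unambiguous.
FIELD_KEYS = ["sha", "author", "author_email", "committer",
              "committer_email", "date", "subject"]

def parse_commit_fields(raw: str) -> dict:
    """Split NUL-delimited `git show -s` output into named commit fields."""
    return dict(zip(FIELD_KEYS, raw.rstrip("\n").split("\x00")))

# Sample output (author/date/subject copied from the log; committer invented):
sample = ("787f1beae9956c959c6af62ee43bfdda73769cf7\x00Ben Parees\x00"
          "bparees@users.noreply.github.com\x00GitHub\x00noreply@github.com\x00"
          "Thu Jan 17 17:21:03 2019 -0500\x00Merge pull request #78 from bparees/v22")
print(parse_commit_fields(sample)["subject"])  # Merge pull request #78 from bparees/v22
```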
2019-06-24T05:56:05.983792000Z I0624 05:56:05.983293 1 repository.go:385] Executing git --no-pager show -s --format=%ad HEAD
2019-06-24T05:56:05.986074000Z I0624 05:56:05.985629 1 repository.go:385] Executing git --no-pager show -s --format=%<(80,trunc)%s HEAD
2019-06-24T05:56:05.988623000Z I0624 05:56:05.987726 1 repository.go:385] Executing git config --get remote.origin.url
2019-06-24T05:56:06.026885000Z I0624 05:56:06.025560 1 sti.go:241] With force pull false, setting policies to if-not-present
2019-06-24T05:56:06.027312000Z I0624 05:56:06.025584 1 sti.go:248] The value of ALLOWED_UIDS is [1-]
2019-06-24T05:56:06.027630000Z I0624 05:56:06.025600 1 sti.go:256] The value of DROP_CAPS is [KILL,MKNOD,SETGID,SETUID,SYS_CHROOT]
2019-06-24T05:56:06.027965000Z I0624 05:56:06.025613 1 cfg.go:39] Locating docker auth for image centos/ruby-22-centos7 and type PULL_DOCKERCFG_PATH
2019-06-24T05:56:06.028282000Z I0624 05:56:06.025620 1 cfg.go:49] Getting docker auth in paths : []
2019-06-24T05:56:06.028567000Z I0624 05:56:06.025648 1 config.go:131] looking for config.json at /var/lib/origin/config.json
2019-06-24T05:56:06.028840000Z I0624 05:56:06.025668 1 config.go:131] looking for config.json at /var/lib/origin/config.json
2019-06-24T05:56:06.029108000Z I0624 05:56:06.025677 1 config.go:131] looking for config.json at /root/.docker/config.json
2019-06-24T05:56:06.029526000Z I0624 05:56:06.025688 1 config.go:131] looking for config.json at /.docker/config.json
2019-06-24T05:56:06.029827000Z I0624 05:56:06.025716 1 cfg.go:39] Locating docker auth for image test-sti-1 and type PUSH_DOCKERCFG_PATH
2019-06-24T05:56:06.030130000Z I0624 05:56:06.025723 1 cfg.go:49] Getting docker auth in paths : []
2019-06-24T05:56:06.030478000Z I0624 05:56:06.025734 1 config.go:131] looking for config.json at /var/lib/origin/config.json
2019-06-24T05:56:06.030788000Z I0624 05:56:06.025747 1 config.go:131] looking for config.json at /var/lib/origin/config.json
2019-06-24T05:56:06.031089000Z
I0624 05:56:06.025755 1 config.go:131] looking for config.json at /root/.docker/config.json
2019-06-24T05:56:06.031560000Z I0624 05:56:06.025763 1 config.go:131] looking for config.json at /.docker/config.json
2019-06-24T05:56:06.033999000Z I0624 05:56:06.033260 1 docker.go:514] error inspecting image centos/ruby-22-centos7:latest: Error: No such image: centos/ruby-22-centos7:latest
2019-06-24T05:56:06.034313000Z I0624 05:56:06.033283 1 docker.go:501] Image "centos/ruby-22-centos7:latest" not available locally, pulling ...
2019-06-24T05:56:06.302977000Z I0624 05:56:06.302091 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==================================================>] 4.744 kB/4.744 kB
[several hundred docker.go:573 progress-bar redraws omitted; the pull interleaves the remaining image layers of 10.22 MB, 73.17 MB, 173 kB, 18.57 MB, and 98.8 MB]
05:56:07.120285 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==================> ] 27.5 MB/73.17 MB 2019-06-24T05:56:07.130705000Z I0624 05:56:07.120344 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===============================> ] 11.87 MB/18.57 MB 2019-06-24T05:56:07.131423000Z I0624 05:56:07.122224 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [================================> ] 12.06 MB/18.57 MB 2019-06-24T05:56:07.132039000Z I0624 05:56:07.124873 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [================================> ] 12.25 MB/18.57 MB 2019-06-24T05:56:07.135237000Z I0624 05:56:07.128223 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=================================> ] 12.45 MB/18.57 MB 2019-06-24T05:56:07.141038000Z I0624 05:56:07.134218 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==================================> ] 12.64 MB/18.57 MB 2019-06-24T05:56:07.149499000Z I0624 05:56:07.141876 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==================================> ] 12.83 MB/18.57 MB 2019-06-24T05:56:07.149854000Z I0624 05:56:07.144779 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===================================> ] 13.02 MB/18.57 MB 2019-06-24T05:56:07.155576000Z I0624 05:56:07.147224 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===================> ] 28.04 MB/73.17 MB 2019-06-24T05:56:07.155906000Z I0624 05:56:07.147261 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===================================> ] 13.21 MB/18.57 MB 2019-06-24T05:56:07.156893000Z I0624 05:56:07.150942 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [====================================> ] 13.4 MB/18.57 MB 2019-06-24T05:56:07.159576000Z I0624 05:56:07.155242 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [====================================> ] 13.59 MB/18.57 MB 
2019-06-24T05:56:07.160322000Z I0624 05:56:07.159102 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==> ] 4.316 MB/98.8 MB 2019-06-24T05:56:07.173809000Z I0624 05:56:07.163001 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=====================================> ] 13.79 MB/18.57 MB 2019-06-24T05:56:07.178919000Z I0624 05:56:07.173263 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=====================================> ] 13.98 MB/18.57 MB 2019-06-24T05:56:07.179277000Z I0624 05:56:07.173303 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===================> ] 28.58 MB/73.17 MB 2019-06-24T05:56:07.179603000Z I0624 05:56:07.173324 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [======================================> ] 14.17 MB/18.57 MB 2019-06-24T05:56:07.179922000Z I0624 05:56:07.176722 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [======================================> ] 14.36 MB/18.57 MB 2019-06-24T05:56:07.184231000Z I0624 05:56:07.183441 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=======================================> ] 14.55 MB/18.57 MB 2019-06-24T05:56:07.187084000Z I0624 05:56:07.186147 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===================> ] 29.12 MB/73.17 MB 2019-06-24T05:56:07.191472000Z I0624 05:56:07.190260 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=======================================> ] 14.74 MB/18.57 MB 2019-06-24T05:56:07.194992000Z I0624 05:56:07.194243 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [========================================> ] 14.93 MB/18.57 MB 2019-06-24T05:56:07.200676000Z I0624 05:56:07.199224 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [========================================> ] 15.13 MB/18.57 MB 2019-06-24T05:56:07.206325000Z I0624 05:56:07.204872 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: 
[=========================================> ] 15.32 MB/18.57 MB 2019-06-24T05:56:07.208664000Z I0624 05:56:07.206462 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [====================> ] 29.66 MB/73.17 MB 2019-06-24T05:56:07.210674000Z I0624 05:56:07.209355 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==> ] 4.855 MB/98.8 MB 2019-06-24T05:56:07.212842000Z I0624 05:56:07.211171 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=========================================> ] 15.51 MB/18.57 MB 2019-06-24T05:56:07.223285000Z I0624 05:56:07.218606 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==========================================> ] 15.7 MB/18.57 MB 2019-06-24T05:56:07.223621000Z I0624 05:56:07.221439 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==========================================> ] 15.89 MB/18.57 MB 2019-06-24T05:56:07.227422000Z I0624 05:56:07.226080 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===========================================> ] 16.08 MB/18.57 MB 2019-06-24T05:56:07.231537000Z I0624 05:56:07.230410 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [====================> ] 30.2 MB/73.17 MB 2019-06-24T05:56:07.234157000Z I0624 05:56:07.232255 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===========================================> ] 16.27 MB/18.57 MB 2019-06-24T05:56:07.239278000Z I0624 05:56:07.237384 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [============================================> ] 16.47 MB/18.57 MB 2019-06-24T05:56:07.243929000Z I0624 05:56:07.242529 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [============================================> ] 16.66 MB/18.57 MB 2019-06-24T05:56:07.248037000Z I0624 05:56:07.246637 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==> ] 5.395 MB/98.8 MB 2019-06-24T05:56:07.249117000Z I0624 05:56:07.248394 1 docker.go:573] pulling 
image centos/ruby-22-centos7:latest: [=============================================> ] 16.85 MB/18.57 MB 2019-06-24T05:56:07.254927000Z I0624 05:56:07.253622 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=============================================> ] 17.04 MB/18.57 MB 2019-06-24T05:56:07.256852000Z I0624 05:56:07.255317 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=====================> ] 30.74 MB/73.17 MB 2019-06-24T05:56:07.260871000Z I0624 05:56:07.259630 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==============================================> ] 17.23 MB/18.57 MB 2019-06-24T05:56:07.264827000Z I0624 05:56:07.264235 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==============================================> ] 17.42 MB/18.57 MB 2019-06-24T05:56:07.274869000Z I0624 05:56:07.271870 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===============================================> ] 17.62 MB/18.57 MB 2019-06-24T05:56:07.277705000Z I0624 05:56:07.276313 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===============================================> ] 17.81 MB/18.57 MB 2019-06-24T05:56:07.280228000Z I0624 05:56:07.279409 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===> ] 5.935 MB/98.8 MB 2019-06-24T05:56:07.282748000Z I0624 05:56:07.281224 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [================================================> ] 18 MB/18.57 MB 2019-06-24T05:56:07.287747000Z I0624 05:56:07.284785 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=====================> ] 31.28 MB/73.17 MB 2019-06-24T05:56:07.288075000Z I0624 05:56:07.286088 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [================================================> ] 18.19 MB/18.57 MB 2019-06-24T05:56:07.292620000Z I0624 05:56:07.291353 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: 
[=================================================> ] 18.38 MB/18.57 MB 2019-06-24T05:56:07.298745000Z I0624 05:56:07.297548 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==================================================>] 18.57 MB/18.57 MB 2019-06-24T05:56:07.302761000Z I0624 05:56:07.301855 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===> ] 6.474 MB/98.8 MB 2019-06-24T05:56:07.309679000Z I0624 05:56:07.308689 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=====================> ] 31.82 MB/73.17 MB 2019-06-24T05:56:07.314792000Z I0624 05:56:07.313954 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===> ] 7.014 MB/98.8 MB 2019-06-24T05:56:07.329334000Z I0624 05:56:07.328153 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===> ] 7.553 MB/98.8 MB 2019-06-24T05:56:07.335675000Z I0624 05:56:07.334169 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [====> ] 8.093 MB/98.8 MB 2019-06-24T05:56:07.343923000Z I0624 05:56:07.342267 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [======================> ] 32.36 MB/73.17 MB 2019-06-24T05:56:07.348116000Z I0624 05:56:07.344638 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [====> ] 8.633 MB/98.8 MB 2019-06-24T05:56:07.354157000Z I0624 05:56:07.353605 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [====> ] 9.172 MB/98.8 MB 2019-06-24T05:56:07.370415000Z I0624 05:56:07.366253 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [====> ] 9.712 MB/98.8 MB 2019-06-24T05:56:07.373187000Z I0624 05:56:07.371692 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=====> ] 10.25 MB/98.8 MB 2019-06-24T05:56:07.378140000Z I0624 05:56:07.377133 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [======================> ] 32.9 MB/73.17 MB 2019-06-24T05:56:07.385558000Z I0624 05:56:07.383345 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=====> ] 10.79 
MB/98.8 MB 2019-06-24T05:56:07.402343000Z I0624 05:56:07.393111 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=====> ] 11.33 MB/98.8 MB 2019-06-24T05:56:07.403228000Z I0624 05:56:07.399421 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [======> ] 11.87 MB/98.8 MB 2019-06-24T05:56:07.416660000Z I0624 05:56:07.410252 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==================================================>] 1.839 kB/1.839 kB 2019-06-24T05:56:07.417127000Z I0624 05:56:07.411172 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [======> ] 12.41 MB/98.8 MB 2019-06-24T05:56:07.417824000Z I0624 05:56:07.412625 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [======================> ] 33.44 MB/73.17 MB 2019-06-24T05:56:07.573098000Z I0624 05:56:07.427352 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [======> ] 12.95 MB/98.8 MB 2019-06-24T05:56:07.573506000Z I0624 05:56:07.436797 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [======> ] 13.49 MB/98.8 MB 2019-06-24T05:56:07.573840000Z I0624 05:56:07.442282 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=======> ] 14.03 MB/98.8 MB 2019-06-24T05:56:07.574159000Z I0624 05:56:07.450368 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=======> ] 14.57 MB/98.8 MB 2019-06-24T05:56:07.574531000Z I0624 05:56:07.461335 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=======> ] 15.11 MB/98.8 MB 2019-06-24T05:56:07.574861000Z I0624 05:56:07.465728 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=======> ] 15.65 MB/98.8 MB 2019-06-24T05:56:07.575215000Z I0624 05:56:07.482389 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [========> ] 16.19 MB/98.8 MB 2019-06-24T05:56:07.575536000Z I0624 05:56:07.490938 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [========> ] 16.73 MB/98.8 MB 2019-06-24T05:56:07.575853000Z I0624 05:56:07.496785 1 
docker.go:573] pulling image centos/ruby-22-centos7:latest: [========> ] 17.27 MB/98.8 MB 2019-06-24T05:56:07.576171000Z I0624 05:56:07.502453 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=========> ] 17.81 MB/98.8 MB 2019-06-24T05:56:07.576525000Z I0624 05:56:07.508104 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=========> ] 18.35 MB/98.8 MB 2019-06-24T05:56:07.576847000Z I0624 05:56:07.519302 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=========> ] 18.89 MB/98.8 MB 2019-06-24T05:56:07.577166000Z I0624 05:56:07.523809 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=========> ] 19.43 MB/98.8 MB 2019-06-24T05:56:07.577503000Z I0624 05:56:07.529755 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==========> ] 19.97 MB/98.8 MB 2019-06-24T05:56:07.577826000Z I0624 05:56:07.535708 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==========> ] 20.51 MB/98.8 MB 2019-06-24T05:56:07.578149000Z I0624 05:56:07.541009 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==========> ] 21.04 MB/98.8 MB 2019-06-24T05:56:07.578523000Z I0624 05:56:07.549750 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=======================> ] 33.98 MB/73.17 MB 2019-06-24T05:56:07.578843000Z I0624 05:56:07.551511 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==========> ] 21.58 MB/98.8 MB 2019-06-24T05:56:07.579164000Z I0624 05:56:07.560267 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===========> ] 22.12 MB/98.8 MB 2019-06-24T05:56:07.580302000Z I0624 05:56:07.565436 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===========> ] 22.66 MB/98.8 MB 2019-06-24T05:56:07.588500000Z I0624 05:56:07.583784 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===========> ] 23.2 MB/98.8 MB 2019-06-24T05:56:07.601707000Z I0624 05:56:07.593173 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [============> ] 
23.74 MB/98.8 MB 2019-06-24T05:56:07.602280000Z I0624 05:56:07.598920 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=======================> ] 34.52 MB/73.17 MB 2019-06-24T05:56:07.615692000Z I0624 05:56:07.605035 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [============> ] 24.28 MB/98.8 MB 2019-06-24T05:56:07.617273000Z I0624 05:56:07.611019 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [============> ] 24.82 MB/98.8 MB 2019-06-24T05:56:07.621980000Z I0624 05:56:07.617199 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [============> ] 25.36 MB/98.8 MB 2019-06-24T05:56:07.631064000Z I0624 05:56:07.622288 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=============> ] 25.9 MB/98.8 MB 2019-06-24T05:56:07.637364000Z I0624 05:56:07.630148 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=============> ] 26.44 MB/98.8 MB 2019-06-24T05:56:07.643064000Z I0624 05:56:07.637245 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=============> ] 26.98 MB/98.8 MB 2019-06-24T05:56:07.643417000Z I0624 05:56:07.640239 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==================================================>] 3.87 kB/3.87 kB 2019-06-24T05:56:07.647159000Z I0624 05:56:07.644255 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=============> ] 27.52 MB/98.8 MB 2019-06-24T05:56:07.660247000Z I0624 05:56:07.654701 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==============> ] 28.06 MB/98.8 MB 2019-06-24T05:56:07.665337000Z I0624 05:56:07.664402 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=======================> ] 35.06 MB/73.17 MB 2019-06-24T05:56:07.665678000Z I0624 05:56:07.664471 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==============> ] 28.6 MB/98.8 MB 2019-06-24T05:56:07.682485000Z I0624 05:56:07.674454 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: 
[==============> ] 29.14 MB/98.8 MB 2019-06-24T05:56:07.682840000Z I0624 05:56:07.679966 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===============> ] 29.68 MB/98.8 MB 2019-06-24T05:56:07.699529000Z I0624 05:56:07.693160 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===============> ] 30.22 MB/98.8 MB 2019-06-24T05:56:07.702593000Z I0624 05:56:07.696461 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [========================> ] 35.6 MB/73.17 MB 2019-06-24T05:56:07.707585000Z I0624 05:56:07.703187 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===============> ] 30.76 MB/98.8 MB 2019-06-24T05:56:07.719256000Z I0624 05:56:07.716292 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [========================> ] 36.14 MB/73.17 MB 2019-06-24T05:56:07.727102000Z I0624 05:56:07.718375 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===============> ] 31.3 MB/98.8 MB 2019-06-24T05:56:07.748976000Z I0624 05:56:07.740308 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [================> ] 31.84 MB/98.8 MB 2019-06-24T05:56:07.773549000Z I0624 05:56:07.751330 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=========================> ] 36.68 MB/73.17 MB 2019-06-24T05:56:07.773896000Z I0624 05:56:07.767737 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=========================> ] 37.22 MB/73.17 MB 2019-06-24T05:56:07.783521000Z I0624 05:56:07.778241 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [================> ] 32.38 MB/98.8 MB 2019-06-24T05:56:07.783871000Z I0624 05:56:07.781397 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [================> ] 32.92 MB/98.8 MB 2019-06-24T05:56:07.807486000Z I0624 05:56:07.794242 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [================> ] 33.46 MB/98.8 MB 2019-06-24T05:56:07.807815000Z I0624 05:56:07.797553 1 docker.go:573] pulling image 
centos/ruby-22-centos7:latest: [=========================> ] 37.76 MB/73.17 MB 2019-06-24T05:56:07.819031000Z I0624 05:56:07.814794 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=================> ] 34 MB/98.8 MB 2019-06-24T05:56:07.819779000Z I0624 05:56:07.816249 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==========================> ] 38.3 MB/73.17 MB 2019-06-24T05:56:07.823905000Z I0624 05:56:07.818949 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=================> ] 34.54 MB/98.8 MB 2019-06-24T05:56:07.864645000Z I0624 05:56:07.830310 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=================> ] 35.08 MB/98.8 MB 2019-06-24T05:56:07.864978000Z I0624 05:56:07.837280 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==================> ] 35.62 MB/98.8 MB 2019-06-24T05:56:07.866944000Z I0624 05:56:07.843171 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==================> ] 36.15 MB/98.8 MB 2019-06-24T05:56:07.867627000Z I0624 05:56:07.850334 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==================> ] 36.69 MB/98.8 MB 2019-06-24T05:56:07.867961000Z I0624 05:56:07.857112 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==================> ] 37.23 MB/98.8 MB 2019-06-24T05:56:07.877473000Z I0624 05:56:07.865246 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==========================> ] 38.84 MB/73.17 MB 2019-06-24T05:56:07.881554000Z I0624 05:56:07.871246 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [====> ] 16.36 kB/175.2 kB 2019-06-24T05:56:07.881888000Z I0624 05:56:07.871277 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=========> ] 33.28 kB/175.2 kB 2019-06-24T05:56:07.882810000Z I0624 05:56:07.871301 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==============> ] 50.69 kB/175.2 kB 2019-06-24T05:56:07.883149000Z I0624 05:56:07.871322 1 docker.go:573] 
pulling image centos/ruby-22-centos7:latest: [===================> ] 37.77 MB/98.8 MB 2019-06-24T05:56:07.883500000Z I0624 05:56:07.875965 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===================> ] 38.31 MB/98.8 MB 2019-06-24T05:56:07.891418000Z I0624 05:56:07.881243 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===================> ] 68.1 kB/175.2 kB 2019-06-24T05:56:07.892048000Z I0624 05:56:07.885243 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [========================> ] 85.5 kB/175.2 kB 2019-06-24T05:56:07.892468000Z I0624 05:56:07.885274 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=============================> ] 102.9 kB/175.2 kB 2019-06-24T05:56:07.892795000Z I0624 05:56:07.885297 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==================================> ] 120.3 kB/175.2 kB 2019-06-24T05:56:07.893115000Z I0624 05:56:07.885317 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=======================================> ] 137.7 kB/175.2 kB 2019-06-24T05:56:07.893879000Z I0624 05:56:07.886448 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [============================================> ] 155.1 kB/175.2 kB 2019-06-24T05:56:07.894205000Z I0624 05:56:07.888333 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=================================================> ] 172.5 kB/175.2 kB 2019-06-24T05:56:07.894487000Z I0624 05:56:07.888361 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==================================================>] 175.2 kB/175.2 kB 2019-06-24T05:56:07.901560000Z I0624 05:56:07.896247 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==========================> ] 39.38 MB/73.17 MB 2019-06-24T05:56:07.912517000Z I0624 05:56:07.901256 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===================> ] 38.85 MB/98.8 MB 2019-06-24T05:56:07.912848000Z I0624 
05:56:07.908251 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===================> ] 39.39 MB/98.8 MB 2019-06-24T05:56:07.924564000Z I0624 05:56:07.914453 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===========================> ] 39.92 MB/73.17 MB 2019-06-24T05:56:07.924902000Z I0624 05:56:07.921242 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===========================> ] 40.46 MB/73.17 MB 2019-06-24T05:56:07.935468000Z I0624 05:56:07.928279 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [====================> ] 39.93 MB/98.8 MB 2019-06-24T05:56:07.935804000Z I0624 05:56:07.934220 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [====================> ] 40.47 MB/98.8 MB 2019-06-24T05:56:07.950848000Z I0624 05:56:07.943507 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [====================> ] 41.01 MB/98.8 MB 2019-06-24T05:56:07.955128000Z I0624 05:56:07.950323 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [============================> ] 41 MB/73.17 MB 2019-06-24T05:56:07.955496000Z I0624 05:56:07.951893 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=====================> ] 41.55 MB/98.8 MB 2019-06-24T05:56:07.966083000Z I0624 05:56:07.963137 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=====================> ] 42.09 MB/98.8 MB 2019-06-24T05:56:07.966439000Z I0624 05:56:07.964437 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [============================> ] 41.54 MB/73.17 MB 2019-06-24T05:56:07.979529000Z I0624 05:56:07.973421 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=====================> ] 42.63 MB/98.8 MB 2019-06-24T05:56:07.981550000Z I0624 05:56:07.979174 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=====================> ] 43.17 MB/98.8 MB 2019-06-24T05:56:07.984495000Z I0624 05:56:07.982588 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: 
[============================> ] 42.08 MB/73.17 MB 2019-06-24T05:56:07.991492000Z I0624 05:56:07.990245 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [======================> ] 43.71 MB/98.8 MB 2019-06-24T05:56:07.997207000Z I0624 05:56:07.996460 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [======================> ] 44.25 MB/98.8 MB 2019-06-24T05:56:08.009593000Z I0624 05:56:08.003624 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [======================> ] 44.79 MB/98.8 MB 2019-06-24T05:56:08.013337000Z I0624 05:56:08.009249 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [======================> ] 45.33 MB/98.8 MB 2019-06-24T05:56:08.015563000Z I0624 05:56:08.012935 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=============================> ] 42.62 MB/73.17 MB 2019-06-24T05:56:08.020170000Z I0624 05:56:08.017327 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=======================> ] 45.87 MB/98.8 MB 2019-06-24T05:56:08.031308000Z I0624 05:56:08.028532 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=======================> ] 46.41 MB/98.8 MB 2019-06-24T05:56:08.038082000Z I0624 05:56:08.031034 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=============================> ] 43.15 MB/73.17 MB 2019-06-24T05:56:08.041686000Z I0624 05:56:08.040626 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=======================> ] 46.95 MB/98.8 MB 2019-06-24T05:56:08.051887000Z I0624 05:56:08.050752 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [========================> ] 47.49 MB/98.8 MB 2019-06-24T05:56:08.066931000Z I0624 05:56:08.059608 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=============================> ] 43.69 MB/73.17 MB 2019-06-24T05:56:08.067295000Z I0624 05:56:08.065021 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [========================> ] 48.03 MB/98.8 MB 
2019-06-24T05:56:08.075106000Z I0624 05:56:08.072455       1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==============================>                   ] 44.23 MB/73.17 MB
[... repetitive docker pull progress lines trimmed: concurrent layer downloads for centos/ruby-22-centos7:latest (73.17 MB and 98.8 MB layers) between 05:56:08 and 05:56:13 ...]
2019-06-24T05:56:13.692499000Z I0624 05:56:13.680025       1 docker.go:573] pulling image 
centos/ruby-22-centos7:latest: [==================================> ] 50.14 MB/73.17 MB 2019-06-24T05:56:13.742357000Z I0624 05:56:13.737407 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==================================> ] 50.69 MB/73.17 MB 2019-06-24T05:56:13.814758000Z I0624 05:56:13.789301 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===================================> ] 51.25 MB/73.17 MB 2019-06-24T05:56:13.845040000Z I0624 05:56:13.844627 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===================================> ] 51.81 MB/73.17 MB 2019-06-24T05:56:13.932055000Z I0624 05:56:13.930677 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===================================> ] 52.36 MB/73.17 MB 2019-06-24T05:56:13.997102000Z I0624 05:56:13.993118 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [====================================> ] 52.92 MB/73.17 MB 2019-06-24T05:56:14.083495000Z I0624 05:56:14.081983 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [====================================> ] 53.48 MB/73.17 MB 2019-06-24T05:56:14.134532000Z I0624 05:56:14.127356 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [====================================> ] 54.03 MB/73.17 MB 2019-06-24T05:56:14.167011000Z I0624 05:56:14.161892 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=====================================> ] 54.59 MB/73.17 MB 2019-06-24T05:56:14.223898000Z I0624 05:56:14.199308 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=====================================> ] 55.15 MB/73.17 MB 2019-06-24T05:56:14.243435000Z I0624 05:56:14.237224 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [======================================> ] 55.71 MB/73.17 MB 2019-06-24T05:56:14.284631000Z I0624 05:56:14.267263 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [======================================> ] 56.26 MB/73.17 
MB 2019-06-24T05:56:14.328294000Z I0624 05:56:14.326902 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [======================================> ] 56.82 MB/73.17 MB 2019-06-24T05:56:14.373433000Z I0624 05:56:14.358289 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=======================================> ] 57.38 MB/73.17 MB 2019-06-24T05:56:14.402430000Z I0624 05:56:14.390275 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=======================================> ] 57.93 MB/73.17 MB 2019-06-24T05:56:14.434542000Z I0624 05:56:14.416203 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=======================================> ] 58.49 MB/73.17 MB 2019-06-24T05:56:14.450593000Z I0624 05:56:14.445268 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [========================================> ] 59.05 MB/73.17 MB 2019-06-24T05:56:14.492521000Z I0624 05:56:14.474270 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [========================================> ] 59.6 MB/73.17 MB 2019-06-24T05:56:14.513199000Z I0624 05:56:14.508688 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=========================================> ] 60.16 MB/73.17 MB 2019-06-24T05:56:14.567409000Z I0624 05:56:14.548255 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=========================================> ] 60.72 MB/73.17 MB 2019-06-24T05:56:14.584940000Z I0624 05:56:14.580520 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=========================================> ] 61.28 MB/73.17 MB 2019-06-24T05:56:14.609128000Z I0624 05:56:14.602463 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==========================================> ] 61.83 MB/73.17 MB 2019-06-24T05:56:14.642502000Z I0624 05:56:14.621109 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==========================================> ] 62.39 MB/73.17 MB 2019-06-24T05:56:14.672803000Z I0624 
05:56:14.667255 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===========================================> ] 62.95 MB/73.17 MB 2019-06-24T05:56:14.715475000Z I0624 05:56:14.701689 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===========================================> ] 63.5 MB/73.17 MB 2019-06-24T05:56:14.761919000Z I0624 05:56:14.760899 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===========================================> ] 64.06 MB/73.17 MB 2019-06-24T05:56:14.825552000Z I0624 05:56:14.812360 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [============================================> ] 64.62 MB/73.17 MB 2019-06-24T05:56:15.046920000Z I0624 05:56:15.044377 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [============================================> ] 65.18 MB/73.17 MB 2019-06-24T05:56:15.143095000Z I0624 05:56:15.141763 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [============================================> ] 65.73 MB/73.17 MB 2019-06-24T05:56:15.194136000Z I0624 05:56:15.183500 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=============================================> ] 66.29 MB/73.17 MB 2019-06-24T05:56:15.260578000Z I0624 05:56:15.240373 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=============================================> ] 66.85 MB/73.17 MB 2019-06-24T05:56:15.501592000Z I0624 05:56:15.485429 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==============================================> ] 67.4 MB/73.17 MB 2019-06-24T05:56:15.547545000Z I0624 05:56:15.525408 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==============================================> ] 67.96 MB/73.17 MB 2019-06-24T05:56:15.576547000Z I0624 05:56:15.567297 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==============================================> ] 68.52 MB/73.17 MB 2019-06-24T05:56:15.622987000Z 
I0624 05:56:15.608699 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===============================================> ] 69.07 MB/73.17 MB 2019-06-24T05:56:15.646427000Z I0624 05:56:15.645298 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===============================================> ] 69.63 MB/73.17 MB 2019-06-24T05:56:15.696507000Z I0624 05:56:15.677269 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===============================================> ] 70.19 MB/73.17 MB 2019-06-24T05:56:15.716529000Z I0624 05:56:15.711138 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [================================================> ] 70.75 MB/73.17 MB 2019-06-24T05:56:15.767565000Z I0624 05:56:15.747167 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [================================================> ] 71.3 MB/73.17 MB 2019-06-24T05:56:15.781434000Z I0624 05:56:15.780267 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=================================================> ] 71.86 MB/73.17 MB 2019-06-24T05:56:15.832126000Z I0624 05:56:15.818312 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=================================================> ] 72.42 MB/73.17 MB 2019-06-24T05:56:16.059776000Z I0624 05:56:16.045035 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=================================================> ] 72.97 MB/73.17 MB 2019-06-24T05:56:16.088965000Z I0624 05:56:16.086282 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==================================================>] 73.17 MB/73.17 MB 2019-06-24T05:56:20.920956000Z I0624 05:56:20.920335 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [> ] 131.1 kB/10.22 MB 2019-06-24T05:56:20.931539000Z I0624 05:56:20.928888 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=> ] 262.1 kB/10.22 MB 2019-06-24T05:56:20.937812000Z I0624 05:56:20.935879 1 docker.go:573] pulling 
image centos/ruby-22-centos7:latest: [=> ] 393.2 kB/10.22 MB 2019-06-24T05:56:20.942983000Z I0624 05:56:20.942419 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==> ] 524.3 kB/10.22 MB 2019-06-24T05:56:20.954060000Z I0624 05:56:20.951526 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===> ] 655.4 kB/10.22 MB 2019-06-24T05:56:20.964735000Z I0624 05:56:20.960353 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===> ] 786.4 kB/10.22 MB 2019-06-24T05:56:20.970515000Z I0624 05:56:20.969100 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [====> ] 917.5 kB/10.22 MB 2019-06-24T05:56:20.977730000Z I0624 05:56:20.975514 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=====> ] 1.049 MB/10.22 MB 2019-06-24T05:56:20.986538000Z I0624 05:56:20.985216 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=====> ] 1.18 MB/10.22 MB 2019-06-24T05:56:20.995375000Z I0624 05:56:20.991966 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [======> ] 1.311 MB/10.22 MB 2019-06-24T05:56:21.002384000Z I0624 05:56:20.998716 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=======> ] 1.442 MB/10.22 MB 2019-06-24T05:56:21.008399000Z I0624 05:56:21.006047 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=======> ] 1.573 MB/10.22 MB 2019-06-24T05:56:21.036255000Z I0624 05:56:21.017952 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [========> ] 1.704 MB/10.22 MB 2019-06-24T05:56:21.036669000Z I0624 05:56:21.028288 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [========> ] 1.835 MB/10.22 MB 2019-06-24T05:56:21.043316000Z I0624 05:56:21.035830 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=========> ] 1.966 MB/10.22 MB 2019-06-24T05:56:21.061275000Z I0624 05:56:21.060523 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==========> ] 2.097 MB/10.22 MB 2019-06-24T05:56:21.071562000Z I0624 05:56:21.068238 1 
docker.go:573] pulling image centos/ruby-22-centos7:latest: [==========> ] 2.228 MB/10.22 MB 2019-06-24T05:56:21.074896000Z I0624 05:56:21.073942 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===========> ] 2.359 MB/10.22 MB 2019-06-24T05:56:21.085430000Z I0624 05:56:21.082247 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [============> ] 2.49 MB/10.22 MB 2019-06-24T05:56:21.090476000Z I0624 05:56:21.088021 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [============> ] 2.621 MB/10.22 MB 2019-06-24T05:56:21.097403000Z I0624 05:56:21.094703 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=============> ] 2.753 MB/10.22 MB 2019-06-24T05:56:21.104247000Z I0624 05:56:21.100655 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==============> ] 2.884 MB/10.22 MB 2019-06-24T05:56:21.108906000Z I0624 05:56:21.108241 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==============> ] 3.015 MB/10.22 MB 2019-06-24T05:56:21.120418000Z I0624 05:56:21.114892 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===============> ] 3.146 MB/10.22 MB 2019-06-24T05:56:21.122727000Z I0624 05:56:21.121814 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [================> ] 3.277 MB/10.22 MB 2019-06-24T05:56:21.138885000Z I0624 05:56:21.132242 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [================> ] 3.408 MB/10.22 MB 2019-06-24T05:56:21.139855000Z I0624 05:56:21.139086 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=================> ] 3.539 MB/10.22 MB 2019-06-24T05:56:21.143873000Z I0624 05:56:21.143325 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=================> ] 3.67 MB/10.22 MB 2019-06-24T05:56:21.150328000Z I0624 05:56:21.149695 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==================> ] 3.801 MB/10.22 MB 2019-06-24T05:56:21.184506000Z I0624 05:56:21.182254 1 docker.go:573] pulling 
image centos/ruby-22-centos7:latest: [===================> ] 3.932 MB/10.22 MB 2019-06-24T05:56:21.202259000Z I0624 05:56:21.199288 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===================> ] 4.063 MB/10.22 MB 2019-06-24T05:56:21.218028000Z I0624 05:56:21.214430 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [====================> ] 4.194 MB/10.22 MB 2019-06-24T05:56:21.233694000Z I0624 05:56:21.231976 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=====================> ] 4.325 MB/10.22 MB 2019-06-24T05:56:21.247336000Z I0624 05:56:21.246901 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=====================> ] 4.456 MB/10.22 MB 2019-06-24T05:56:21.275513000Z I0624 05:56:21.257778 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [======================> ] 4.588 MB/10.22 MB 2019-06-24T05:56:21.276467000Z I0624 05:56:21.275140 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=======================> ] 4.719 MB/10.22 MB 2019-06-24T05:56:21.287917000Z I0624 05:56:21.286492 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=======================> ] 4.85 MB/10.22 MB 2019-06-24T05:56:21.300348000Z I0624 05:56:21.293866 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [========================> ] 4.981 MB/10.22 MB 2019-06-24T05:56:21.305807000Z I0624 05:56:21.304354 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=========================> ] 5.112 MB/10.22 MB 2019-06-24T05:56:21.317268000Z I0624 05:56:21.316159 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=========================> ] 5.243 MB/10.22 MB 2019-06-24T05:56:21.329714000Z I0624 05:56:21.324482 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==========================> ] 5.374 MB/10.22 MB 2019-06-24T05:56:21.333468000Z I0624 05:56:21.331831 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==========================> ] 5.505 
MB/10.22 MB 2019-06-24T05:56:21.344417000Z I0624 05:56:21.339507 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===========================> ] 5.636 MB/10.22 MB 2019-06-24T05:56:21.347133000Z I0624 05:56:21.346528 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [============================> ] 5.767 MB/10.22 MB 2019-06-24T05:56:21.354534000Z I0624 05:56:21.353932 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [============================> ] 5.898 MB/10.22 MB 2019-06-24T05:56:21.365444000Z I0624 05:56:21.362342 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=============================> ] 6.029 MB/10.22 MB 2019-06-24T05:56:21.369320000Z I0624 05:56:21.368680 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==============================> ] 6.16 MB/10.22 MB 2019-06-24T05:56:21.376891000Z I0624 05:56:21.376310 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==============================> ] 6.291 MB/10.22 MB 2019-06-24T05:56:21.384366000Z I0624 05:56:21.383779 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===============================> ] 6.423 MB/10.22 MB 2019-06-24T05:56:21.391720000Z I0624 05:56:21.391208 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [================================> ] 6.554 MB/10.22 MB 2019-06-24T05:56:21.399242000Z I0624 05:56:21.398737 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [================================> ] 6.685 MB/10.22 MB 2019-06-24T05:56:21.406354000Z I0624 05:56:21.405587 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=================================> ] 6.816 MB/10.22 MB 2019-06-24T05:56:21.422104000Z I0624 05:56:21.419266 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=================================> ] 6.947 MB/10.22 MB 2019-06-24T05:56:21.423502000Z I0624 05:56:21.422718 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: 
[==================================> ] 7.078 MB/10.22 MB 2019-06-24T05:56:21.438479000Z I0624 05:56:21.436242 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===================================> ] 7.209 MB/10.22 MB 2019-06-24T05:56:21.442869000Z I0624 05:56:21.442249 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===================================> ] 7.34 MB/10.22 MB 2019-06-24T05:56:21.449896000Z I0624 05:56:21.449244 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [====================================> ] 7.471 MB/10.22 MB 2019-06-24T05:56:21.457870000Z I0624 05:56:21.457239 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=====================================> ] 7.602 MB/10.22 MB 2019-06-24T05:56:21.464708000Z I0624 05:56:21.464162 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=====================================> ] 7.733 MB/10.22 MB 2019-06-24T05:56:21.473853000Z I0624 05:56:21.473236 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [======================================> ] 7.864 MB/10.22 MB 2019-06-24T05:56:21.487411000Z I0624 05:56:21.486273 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=======================================> ] 7.995 MB/10.22 MB 2019-06-24T05:56:21.492302000Z I0624 05:56:21.491763 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=======================================> ] 8.126 MB/10.22 MB 2019-06-24T05:56:21.508536000Z I0624 05:56:21.502315 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [========================================> ] 8.258 MB/10.22 MB 2019-06-24T05:56:21.510987000Z I0624 05:56:21.510415 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=========================================> ] 8.389 MB/10.22 MB 2019-06-24T05:56:21.519668000Z I0624 05:56:21.518946 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=========================================> ] 8.52 MB/10.22 MB 
2019-06-24T05:56:21.531494000Z I0624 05:56:21.528282 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==========================================> ] 8.651 MB/10.22 MB 2019-06-24T05:56:21.544776000Z I0624 05:56:21.540972 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==========================================> ] 8.782 MB/10.22 MB 2019-06-24T05:56:21.553333000Z I0624 05:56:21.552372 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===========================================> ] 8.913 MB/10.22 MB 2019-06-24T05:56:21.561981000Z I0624 05:56:21.561362 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [============================================> ] 9.044 MB/10.22 MB 2019-06-24T05:56:21.570937000Z I0624 05:56:21.570243 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [============================================> ] 9.175 MB/10.22 MB 2019-06-24T05:56:21.584531000Z I0624 05:56:21.575905 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=============================================> ] 9.306 MB/10.22 MB 2019-06-24T05:56:21.586544000Z I0624 05:56:21.584246 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==============================================> ] 9.437 MB/10.22 MB 2019-06-24T05:56:21.594136000Z I0624 05:56:21.593596 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==============================================> ] 9.568 MB/10.22 MB 2019-06-24T05:56:21.603651000Z I0624 05:56:21.602146 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===============================================> ] 9.699 MB/10.22 MB 2019-06-24T05:56:21.613146000Z I0624 05:56:21.612531 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [================================================> ] 9.83 MB/10.22 MB 2019-06-24T05:56:21.620872000Z I0624 05:56:21.620342 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [================================================> ] 9.961 
MB/10.22 MB 2019-06-24T05:56:21.631516000Z I0624 05:56:21.627468 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=================================================> ] 10.09 MB/10.22 MB 2019-06-24T05:56:21.640220000Z I0624 05:56:21.639675 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==================================================>] 10.22 MB/10.22 MB 2019-06-24T05:56:21.684416000Z I0624 05:56:21.683762 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==================================================>] 10.22 MB/10.22 MB 2019-06-24T05:56:22.116086000Z I0624 05:56:22.115264 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==================================================>] 4.744 kB/4.744 kB 2019-06-24T05:56:22.345866000Z I0624 05:56:22.344340 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==================================================>] 4.744 kB/4.744 kB 2019-06-24T05:56:22.448113000Z I0624 05:56:22.447466 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=========> ] 32.77 kB/173 kB 2019-06-24T05:56:22.658264000Z I0624 05:56:22.651291 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [============================> ] 98.3 kB/173 kB 2019-06-24T05:56:22.659358000Z I0624 05:56:22.658766 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===============================================> ] 163.8 kB/173 kB 2019-06-24T05:56:22.662373000Z I0624 05:56:22.661734 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==================================================>] 173 kB/173 kB 2019-06-24T05:56:22.666707000Z I0624 05:56:22.666177 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==================================================>] 173 kB/173 kB 2019-06-24T05:56:23.235829000Z I0624 05:56:23.218936 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [> ] 557.1 kB/98.8 MB 2019-06-24T05:56:23.251486000Z I0624 05:56:23.249261 1 
docker.go:573] pulling image centos/ruby-22-centos7:latest: [> ] 1.114 MB/98.8 MB 2019-06-24T05:56:23.292566000Z I0624 05:56:23.280793 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [> ] 1.671 MB/98.8 MB 2019-06-24T05:56:23.313467000Z I0624 05:56:23.311366 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=> ] 2.228 MB/98.8 MB 2019-06-24T05:56:23.349081000Z I0624 05:56:23.338232 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=> ] 2.785 MB/98.8 MB 2019-06-24T05:56:23.374556000Z I0624 05:56:23.366332 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=> ] 3.342 MB/98.8 MB 2019-06-24T05:56:23.405282000Z I0624 05:56:23.392314 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=> ] 3.899 MB/98.8 MB 2019-06-24T05:56:23.430069000Z I0624 05:56:23.423338 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==> ] 4.456 MB/98.8 MB 2019-06-24T05:56:23.473026000Z I0624 05:56:23.457468 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==> ] 5.014 MB/98.8 MB 2019-06-24T05:56:23.482826000Z I0624 05:56:23.482284 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==> ] 5.571 MB/98.8 MB 2019-06-24T05:56:23.524710000Z I0624 05:56:23.508128 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===> ] 6.128 MB/98.8 MB 2019-06-24T05:56:23.544541000Z I0624 05:56:23.541974 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===> ] 6.685 MB/98.8 MB 2019-06-24T05:56:23.591490000Z I0624 05:56:23.579955 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===> ] 7.242 MB/98.8 MB 2019-06-24T05:56:23.634007000Z I0624 05:56:23.621243 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===> ] 7.799 MB/98.8 MB 2019-06-24T05:56:23.652881000Z I0624 05:56:23.652339 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [====> ] 8.356 MB/98.8 MB 2019-06-24T05:56:23.713484000Z I0624 05:56:23.688260 1 docker.go:573] pulling image 
centos/ruby-22-centos7:latest: [====> ] 8.913 MB/98.8 MB 2019-06-24T05:56:23.757403000Z I0624 05:56:23.734741 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [====> ] 9.47 MB/98.8 MB 2019-06-24T05:56:23.923081000Z I0624 05:56:23.915612 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=====> ] 10.03 MB/98.8 MB 2019-06-24T05:56:24.088170000Z I0624 05:56:24.076454 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=====> ] 10.58 MB/98.8 MB 2019-06-24T05:56:24.276913000Z I0624 05:56:24.256800 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=====> ] 11.14 MB/98.8 MB 2019-06-24T05:56:24.313355000Z I0624 05:56:24.302202 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=====> ] 11.7 MB/98.8 MB 2019-06-24T05:56:24.421438000Z I0624 05:56:24.405368 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [======> ] 12.26 MB/98.8 MB 2019-06-24T05:56:24.522524000Z I0624 05:56:24.498416 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [======> ] 12.81 MB/98.8 MB 2019-06-24T05:56:24.600703000Z I0624 05:56:24.584543 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [======> ] 13.37 MB/98.8 MB 2019-06-24T05:56:24.744596000Z I0624 05:56:24.742118 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=======> ] 13.93 MB/98.8 MB 2019-06-24T05:56:24.912956000Z I0624 05:56:24.899307 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=======> ] 14.48 MB/98.8 MB 2019-06-24T05:56:25.132729000Z I0624 05:56:25.130321 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=======> ] 15.04 MB/98.8 MB 2019-06-24T05:56:25.258271000Z I0624 05:56:25.257294 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=======> ] 15.6 MB/98.8 MB 2019-06-24T05:56:25.307583000Z I0624 05:56:25.285071 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [========> ] 16.15 MB/98.8 MB 2019-06-24T05:56:25.339324000Z I0624 05:56:25.338533 1 docker.go:573] pulling image 
centos/ruby-22-centos7:latest: [========> ] 16.71 MB/98.8 MB 2019-06-24T05:56:25.383424000Z I0624 05:56:25.364216 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [========> ] 17.27 MB/98.8 MB 2019-06-24T05:56:25.409498000Z I0624 05:56:25.402980 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=========> ] 17.83 MB/98.8 MB 2019-06-24T05:56:25.460621000Z I0624 05:56:25.453507 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=========> ] 18.38 MB/98.8 MB 2019-06-24T05:56:25.499547000Z I0624 05:56:25.493834 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=========> ] 18.94 MB/98.8 MB 2019-06-24T05:56:25.546639000Z I0624 05:56:25.545396 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=========> ] 19.5 MB/98.8 MB 2019-06-24T05:56:25.615990000Z I0624 05:56:25.593301 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==========> ] 20.05 MB/98.8 MB 2019-06-24T05:56:25.711016000Z I0624 05:56:25.704293 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==========> ] 20.61 MB/98.8 MB 2019-06-24T05:56:25.760118000Z I0624 05:56:25.756293 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==========> ] 21.17 MB/98.8 MB 2019-06-24T05:56:25.803654000Z I0624 05:56:25.785486 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==========> ] 21.73 MB/98.8 MB 2019-06-24T05:56:25.825763000Z I0624 05:56:25.818238 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===========> ] 22.28 MB/98.8 MB 2019-06-24T05:56:25.867119000Z I0624 05:56:25.851245 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===========> ] 22.84 MB/98.8 MB 2019-06-24T05:56:25.887608000Z I0624 05:56:25.884268 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [===========> ] 23.4 MB/98.8 MB 2019-06-24T05:56:25.927765000Z I0624 05:56:25.913394 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [============> ] 23.95 MB/98.8 MB 2019-06-24T05:56:25.945093000Z 
I0624 05:56:25.944405 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [============> ] 24.51 MB/98.8 MB 2019-06-24T05:56:32.603501000Z I0624 05:56:32.600833 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==================================================>] 98.8 MB/98.8 MB 2019-06-24T05:56:39.439485000Z I0624 05:56:39.426345 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [> ] 196.6 kB/18.57 MB
2019-06-24T05:56:40.897613000Z I0624 05:56:40.895814 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==================================================>] 18.57 MB/18.57 MB 2019-06-24T05:56:42.570072000Z I0624 05:56:42.560277 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==================================================>] 1.839 kB/1.839 kB 2019-06-24T05:56:42.820520000Z I0624 05:56:42.817361 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==================================================>] 1.839 kB/1.839 kB 2019-06-24T05:56:43.090256000Z I0624 05:56:43.087956 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==================================================>] 3.87 kB/3.87 kB 2019-06-24T05:56:43.340175000Z I0624 05:56:43.339107 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==================================================>] 3.87 kB/3.87 kB 2019-06-24T05:56:43.739895000Z I0624 05:56:43.739193 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [=========> ] 32.77 kB/175.2 kB 2019-06-24T05:56:44.231458000Z I0624 05:56:44.217077 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [============================> ] 98.3 kB/175.2 kB 2019-06-24T05:56:44.234418000Z I0624 05:56:44.233434 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==============================================> ] 163.8 kB/175.2 kB 2019-06-24T05:56:44.235440000Z I0624 05:56:44.234621 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==================================================>] 175.2 kB/175.2 kB 2019-06-24T05:56:44.235760000Z I0624 05:56:44.234649 1 docker.go:573] pulling image centos/ruby-22-centos7:latest: [==================================================>] 175.2 kB/175.2 kB 2019-06-24T05:56:44.507952000Z I0624 05:56:44.504767 1 sti.go:290] Creating a new S2I builder with config: "Builder Name:\t\t\tRuby 2.2\nBuilder Image:\t\t\tcentos/ruby-22-centos7\nBuilder Image 
Version:\t\t\"c159276\"\nSource:\t\t\t\t/tmp/s2i-build409151771/upload/src\nOutput Image Tag:\t\textended-test-build-no-outputname-jhqbx-v10rd/test-sti-1:9b2d8d5e\nEnvironment:\t\t\tOPENSHIFT_BUILD_NAME=test-sti-1,OPENSHIFT_BUILD_NAMESPACE=extended-test-build-no-outputname-jhqbx-v10rd,OPENSHIFT_BUILD_SOURCE=https://github.com/openshift/ruby-hello-world,OPENSHIFT_BUILD_COMMIT=787f1beae9956c959c6af62ee43bfdda73769cf7,BUILD_LOGLEVEL=5\nLabels:\t\t\t\tio.openshift.build.name=\"test-sti-1\",io.openshift.build.namespace=\"extended-test-build-no-outputname-jhqbx-v10rd\"\nIncremental Build:\t\tdisabled\nRemove Old Build:\t\tdisabled\nBuilder Pull Policy:\t\tif-not-present\nPrevious Image Pull Policy:\talways\nQuiet:\t\t\t\tdisabled\nLayered Build:\t\t\tdisabled\nS2I Scripts URL:\t\t//redacted@\nWorkdir:\t\t\t/tmp/s2i-build409151771\nDocker NetworkMode:\t\tcontainer:4653bdf21e188f3953704faccfd05d7eafbe46c89f7197369830a3afa54ae30c\nDocker Endpoint:\t\tunix:///var/run/docker.sock\n" 2019-06-24T05:56:44.543937000Z I0624 05:56:44.528718 1 docker.go:505] Using locally available image "centos/ruby-22-centos7:latest" 2019-06-24T05:56:44.554852000Z I0624 05:56:44.554069 1 docker.go:505] Using locally available image "centos/ruby-22-centos7:latest" 2019-06-24T05:56:44.555208000Z I0624 05:56:44.554102 1 docker.go:736] Image sha256:e42d0dccf073123561d83ea8bbc9f0cc5e491cfd07130a464a416cdb99ced387 contains io.openshift.s2i.scripts-url set to "image:///usr/libexec/s2i" 2019-06-24T05:56:44.555541000Z I0624 05:56:44.554133 1 scm.go:21] DownloadForSource /tmp/s2i-build409151771/upload/src 2019-06-24T05:56:44.558177000Z I0624 05:56:44.554766 1 git.go:213] makePathAbsolute /tmp/s2i-build409151771/upload/src 2019-06-24T05:56:44.558524000Z I0624 05:56:44.554783 1 scm.go:28] return from ParseFile file exists true proto specified false use copy false 2019-06-24T05:56:44.558841000Z I0624 05:56:44.554791 1 scm.go:40] new source from parse file /tmp/s2i-build409151771/upload/src 
2019-06-24T05:56:44.559175000Z I0624 05:56:44.554821 1 sti.go:303] Starting S2I build from extended-test-build-no-outputname-jhqbx-v10rd/test-sti-1 BuildConfig ... 2019-06-24T05:56:44.560359000Z I0624 05:56:44.554836 1 sti.go:197] Preparing to build extended-test-build-no-outputname-jhqbx-v10rd/test-sti-1:9b2d8d5e 2019-06-24T05:56:44.560609000Z I0624 05:56:44.555014 1 download.go:30] Copying sources from "/tmp/s2i-build409151771/upload/src" to "/tmp/s2i-build409151771/upload/src" 2019-06-24T05:56:44.560838000Z I0624 05:56:44.555065 1 install.go:249] Using "assemble" installed from "image:///usr/libexec/s2i/assemble" 2019-06-24T05:56:44.561099000Z I0624 05:56:44.555094 1 install.go:249] Using "run" installed from "image:///usr/libexec/s2i/run" 2019-06-24T05:56:44.561359000Z I0624 05:56:44.555126 1 install.go:249] Using "save-artifacts" installed from "image:///usr/libexec/s2i/save-artifacts" 2019-06-24T05:56:44.561593000Z I0624 05:56:44.555153 1 ignore.go:63] .s2iignore file does not exist 2019-06-24T05:56:44.561826000Z I0624 05:56:44.555167 1 sti.go:206] Clean build will be performed 2019-06-24T05:56:44.562087000Z I0624 05:56:44.555174 1 sti.go:209] Performing source build from file:///tmp/s2i-build409151771/upload/src 2019-06-24T05:56:44.564501000Z I0624 05:56:44.555207 1 sti.go:220] Running "assemble" in "extended-test-build-no-outputname-jhqbx-v10rd/test-sti-1:9b2d8d5e" 2019-06-24T05:56:44.564832000Z I0624 05:56:44.555219 1 sti.go:556] Using image name centos/ruby-22-centos7 2019-06-24T05:56:44.565143000Z I0624 05:56:44.558489 1 docker.go:505] Using locally available image "centos/ruby-22-centos7:latest" 2019-06-24T05:56:44.565474000Z I0624 05:56:44.558544 1 environment.go:45] Setting 1 environment variables provided by environment file in sources 2019-06-24T05:56:44.565795000Z I0624 05:56:44.558683 1 sti.go:669] starting the source uploading ... 2019-06-24T05:56:44.566117000Z I0624 05:56:44.558702 1 tar.go:200] Adding "/tmp/s2i-build409151771/upload" to tar ... 
2019-06-24T05:56:44.566580000Z I0624 05:56:44.558819 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/.git/HEAD as src/.git/HEAD 2019-06-24T05:56:44.640231000Z I0624 05:56:44.639521 1 docker.go:736] Image sha256:e42d0dccf073123561d83ea8bbc9f0cc5e491cfd07130a464a416cdb99ced387 contains io.openshift.s2i.scripts-url set to "image:///usr/libexec/s2i" 2019-06-24T05:56:44.640603000Z I0624 05:56:44.639550 1 docker.go:811] Base directory for S2I scripts is '/usr/libexec/s2i'. Untarring destination is '/tmp'. 2019-06-24T05:56:44.640929000Z I0624 05:56:44.639567 1 docker.go:967] Setting "/bin/sh -c tar -C /tmp -xf - && /usr/libexec/s2i/assemble" command for container ... 2019-06-24T05:56:44.642295000Z I0624 05:56:44.639787 1 docker.go:976] Creating container with options {Name:"s2i_centos_ruby_22_centos7_b5f96706" Config:{Hostname: Domainname: User: AttachStdin:false AttachStdout:true AttachStderr:false ExposedPorts:map[] Tty:false OpenStdin:true StdinOnce:true Env:[RACK_ENV=production OPENSHIFT_BUILD_NAME=test-sti-1 OPENSHIFT_BUILD_NAMESPACE=extended-test-build-no-outputname-jhqbx-v10rd OPENSHIFT_BUILD_SOURCE=https://github.com/openshift/ruby-hello-world OPENSHIFT_BUILD_COMMIT=787f1beae9956c959c6af62ee43bfdda73769cf7 BUILD_LOGLEVEL=5] Cmd:[/bin/sh -c tar -C /tmp -xf - && /usr/libexec/s2i/assemble] ArgsEscaped:false Image:centos/ruby-22-centos7:latest Volumes:map[] WorkingDir: Entrypoint:[] NetworkDisabled:false MacAddress: OnBuild:[] Labels:map[] StopSignal:} HostConfig:&{Binds:[] ContainerIDFile: LogConfig:{Type: Config:map[]} NetworkMode:container:4653bdf21e188f3953704faccfd05d7eafbe46c89f7197369830a3afa54ae30c PortBindings:map[] RestartPolicy:{Name: MaximumRetryCount:0} AutoRemove:false VolumeDriver: VolumesFrom:[] CapAdd:[] CapDrop:[KILL MKNOD SETGID SETUID SYS_CHROOT] DNS:[] DNSOptions:[] DNSSearch:[] ExtraHosts:[] GroupAdd:[] IpcMode: Cgroup: Links:[] OomScoreAdj:0 PidMode: Privileged:false PublishAllPorts:false ReadonlyRootfs:false SecurityOpt:[] 
StorageOpt:map[] Tmpfs:map[] UTSMode: UsernsMode: ShmSize:67108864 Sysctls:map[] ConsoleSize:[0 0] Isolation: Resources:{CPUShares:2 Memory:92233720368547 CgroupParent: BlkioWeight:0 BlkioWeightDevice:[] BlkioDeviceReadBps:[] BlkioDeviceWriteBps:[] BlkioDeviceReadIOps:[] BlkioDeviceWriteIOps:[] CPUPeriod:100000 CPUQuota:-1 CpusetCpus: CpusetMems: Devices:[] DiskQuota:0 KernelMemory:0 MemoryReservation:0 MemorySwap:92233720368547 MemorySwappiness:<nil> OomKillDisable:<nil> PidsLimit:0 Ulimits:[] CPUCount:0 CPUPercent:0 IOMaximumIOps:0 IOMaximumBandwidth:0 NetworkMaximumBandwidth:0}}} ... 2019-06-24T05:56:45.576594000Z I0624 05:56:45.570340 1 docker.go:1008] Attaching to container "eac5801cc248a636adde6c70798c370653bd6c54e3d87c610ff23bac6c3fb74b" ... 2019-06-24T05:56:45.577248000Z I0624 05:56:45.574877 1 docker.go:1019] Starting container "eac5801cc248a636adde6c70798c370653bd6c54e3d87c610ff23bac6c3fb74b" ... 2019-06-24T05:56:45.852699000Z I0624 05:56:45.851987 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/.git/config as src/.git/config 2019-06-24T05:56:45.853064000Z I0624 05:56:45.852075 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/.git/description as src/.git/description 2019-06-24T05:56:45.853405000Z I0624 05:56:45.852167 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/.git/hooks/applypatch-msg.sample as src/.git/hooks/applypatch-msg.sample 2019-06-24T05:56:45.854203000Z I0624 05:56:45.852255 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/.git/hooks/commit-msg.sample as src/.git/hooks/commit-msg.sample 2019-06-24T05:56:45.854546000Z I0624 05:56:45.852314 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/.git/hooks/post-update.sample as src/.git/hooks/post-update.sample 2019-06-24T05:56:45.854857000Z I0624 05:56:45.852372 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/.git/hooks/pre-applypatch.sample as src/.git/hooks/pre-applypatch.sample 
2019-06-24T05:56:45.855167000Z I0624 05:56:45.852430 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/.git/hooks/pre-commit.sample as src/.git/hooks/pre-commit.sample 2019-06-24T05:56:45.856022000Z I0624 05:56:45.852492 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/.git/hooks/pre-push.sample as src/.git/hooks/pre-push.sample 2019-06-24T05:56:45.856367000Z I0624 05:56:45.852548 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/.git/hooks/pre-rebase.sample as src/.git/hooks/pre-rebase.sample 2019-06-24T05:56:45.856678000Z I0624 05:56:45.852611 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/.git/hooks/prepare-commit-msg.sample as src/.git/hooks/prepare-commit-msg.sample 2019-06-24T05:56:45.856989000Z I0624 05:56:45.852669 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/.git/hooks/update.sample as src/.git/hooks/update.sample 2019-06-24T05:56:45.857464000Z I0624 05:56:45.852722 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/.git/index as src/.git/index 2019-06-24T05:56:45.857779000Z I0624 05:56:45.852787 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/.git/info/exclude as src/.git/info/exclude 2019-06-24T05:56:45.858091000Z I0624 05:56:45.852853 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/.git/logs/HEAD as src/.git/logs/HEAD 2019-06-24T05:56:45.858428000Z I0624 05:56:45.852964 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/.git/logs/refs/heads/master as src/.git/logs/refs/heads/master 2019-06-24T05:56:45.858733000Z I0624 05:56:45.853059 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/.git/logs/refs/remotes/origin/HEAD as src/.git/logs/refs/remotes/origin/HEAD 2019-06-24T05:56:45.859036000Z I0624 05:56:45.853154 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/.git/objects/03/ea8db67ae0f2628e70b93d0ecb7ff6cb8839cc as src/.git/objects/03/ea8db67ae0f2628e70b93d0ecb7ff6cb8839cc 
2019-06-24T05:56:45.859493000Z I0624 05:56:45.853245 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/.git/objects/0a/5a0d2c143de3244295b7750eacdfff7b94d546 as src/.git/objects/0a/5a0d2c143de3244295b7750eacdfff7b94d546 2019-06-24T05:56:45.859825000Z I0624 05:56:45.853320 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/.git/objects/2b/b6c76c29870c9b4a9cff52cfc41f7e6bf44329 as src/.git/objects/2b/b6c76c29870c9b4a9cff52cfc41f7e6bf44329 2019-06-24T05:56:45.860149000Z I0624 05:56:45.853391 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/.git/objects/31/056df85336b9e9ffafad75681f817d9f33c7dd as src/.git/objects/31/056df85336b9e9ffafad75681f817d9f33c7dd 2019-06-24T05:56:45.860566000Z I0624 05:56:45.853496 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/.git/objects/3e/9aa8d62ebbe7f50c8e3aff4dd43d4726389f45 as src/.git/objects/3e/9aa8d62ebbe7f50c8e3aff4dd43d4726389f45 2019-06-24T05:56:45.860876000Z I0624 05:56:45.853565 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/.git/objects/40/35900d6eb9eb35af51d7a2d1fc71de9521083f as src/.git/objects/40/35900d6eb9eb35af51d7a2d1fc71de9521083f 2019-06-24T05:56:45.861203000Z I0624 05:56:45.853645 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/.git/objects/57/0bd16c41745891a5aabc60399d1a743c231236 as src/.git/objects/57/0bd16c41745891a5aabc60399d1a743c231236 2019-06-24T05:56:45.861519000Z I0624 05:56:45.853704 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/.git/objects/57/d3a71d22204c36e16f951d7317dcda004af5b0 as src/.git/objects/57/d3a71d22204c36e16f951d7317dcda004af5b0 2019-06-24T05:56:45.861832000Z I0624 05:56:45.853777 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/.git/objects/6c/d67a5b26558948520bfd7e803dcdedce0e7f92 as src/.git/objects/6c/d67a5b26558948520bfd7e803dcdedce0e7f92 2019-06-24T05:56:45.862135000Z I0624 05:56:45.853848 1 tar.go:286] Adding to tar: 
/tmp/s2i-build409151771/upload/src/.git/objects/6d/98b321f0f4d9d69aee86cb71247bdf78a18613 as src/.git/objects/6d/98b321f0f4d9d69aee86cb71247bdf78a18613 2019-06-24T05:56:45.862653000Z I0624 05:56:45.853921 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/.git/objects/78/7f1beae9956c959c6af62ee43bfdda73769cf7 as src/.git/objects/78/7f1beae9956c959c6af62ee43bfdda73769cf7 2019-06-24T05:56:45.862966000Z I0624 05:56:45.854000 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/.git/objects/7d/6bbc17aa73403f45f7e2b5548a8faf6795ffec as src/.git/objects/7d/6bbc17aa73403f45f7e2b5548a8faf6795ffec 2019-06-24T05:56:45.863298000Z I0624 05:56:45.854078 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/.git/objects/8f/1b5fb76498255f4d2fc2308209116c562324dc as src/.git/objects/8f/1b5fb76498255f4d2fc2308209116c562324dc 2019-06-24T05:56:45.863612000Z I0624 05:56:45.854153 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/.git/objects/9d/920bc2a425ce2ef02156fce1d79b4d4f091245 as src/.git/objects/9d/920bc2a425ce2ef02156fce1d79b4d4f091245 2019-06-24T05:56:45.863926000Z I0624 05:56:45.854248 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/.git/objects/ab/3c480ee0810c7e9cff9275857d63cdcf98db2d as src/.git/objects/ab/3c480ee0810c7e9cff9275857d63cdcf98db2d 2019-06-24T05:56:45.864399000Z I0624 05:56:45.854317 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/.git/objects/af/f947776b1769b6e667d154122d645f7b150a83 as src/.git/objects/af/f947776b1769b6e667d154122d645f7b150a83 2019-06-24T05:56:45.864717000Z I0624 05:56:45.854386 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/.git/objects/b3/5478f5ae420bdf313be0050f405a8957b97153 as src/.git/objects/b3/5478f5ae420bdf313be0050f405a8957b97153 2019-06-24T05:56:45.865014000Z I0624 05:56:45.854459 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/.git/objects/bc/0cb8f548e62100af9f815e72b1dafe9ba1974d as 
src/.git/objects/bc/0cb8f548e62100af9f815e72b1dafe9ba1974d 2019-06-24T05:56:45.865352000Z I0624 05:56:45.854515 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/.git/objects/bc/1356d49e0dc5f1688c6d91dd0bfca270b1d2dc as src/.git/objects/bc/1356d49e0dc5f1688c6d91dd0bfca270b1d2dc 2019-06-24T05:56:45.865663000Z I0624 05:56:45.854586 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/.git/objects/ca/70bdc8c829db40a044fed3133f45cbc58f9c21 as src/.git/objects/ca/70bdc8c829db40a044fed3133f45cbc58f9c21 2019-06-24T05:56:45.865968000Z I0624 05:56:45.854657 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/.git/objects/cd/b36b569837bce573dc3a1d8a951add228feeeb as src/.git/objects/cd/b36b569837bce573dc3a1d8a951add228feeeb 2019-06-24T05:56:45.866410000Z I0624 05:56:45.854724 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/.git/objects/da/602cd8902c7d3bb7d51099cbd1ea48daa44c74 as src/.git/objects/da/602cd8902c7d3bb7d51099cbd1ea48daa44c74 2019-06-24T05:56:45.866708000Z I0624 05:56:45.854803 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/.git/objects/db/84f883f29fb7c98b4309874a07ab0a91789f44 as src/.git/objects/db/84f883f29fb7c98b4309874a07ab0a91789f44 2019-06-24T05:56:45.866945000Z I0624 05:56:45.854874 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/.git/objects/df/6078b30a7a52303ee9e0ebe7cfba0c79178347 as src/.git/objects/df/6078b30a7a52303ee9e0ebe7cfba0c79178347 2019-06-24T05:56:45.867176000Z I0624 05:56:45.854952 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/.git/objects/e6/2d5f04ada6459f2ccd61eb6a1f37c99077a919 as src/.git/objects/e6/2d5f04ada6459f2ccd61eb6a1f37c99077a919 2019-06-24T05:56:45.867463000Z I0624 05:56:45.855023 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/.git/objects/f0/3b50e4f9ec339f7a75ec5fd4f3af255e3e74ec as src/.git/objects/f0/3b50e4f9ec339f7a75ec5fd4f3af255e3e74ec 2019-06-24T05:56:45.867722000Z I0624 05:56:45.855093 1 tar.go:286] 
Adding to tar: /tmp/s2i-build409151771/upload/src/.git/objects/f8/25659db8127d255def0af624073151662b09c3 as src/.git/objects/f8/25659db8127d255def0af624073151662b09c3 2019-06-24T05:56:45.867962000Z I0624 05:56:45.855173 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/.git/packed-refs as src/.git/packed-refs 2019-06-24T05:56:45.868276000Z I0624 05:56:45.855289 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/.git/refs/heads/master as src/.git/refs/heads/master 2019-06-24T05:56:45.868593000Z I0624 05:56:45.855392 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/.git/refs/remotes/origin/HEAD as src/.git/refs/remotes/origin/HEAD 2019-06-24T05:56:45.868906000Z I0624 05:56:45.855461 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/.git/shallow as src/.git/shallow 2019-06-24T05:56:45.869311000Z I0624 05:56:45.855515 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/.gitignore as src/.gitignore 2019-06-24T05:56:45.869637000Z I0624 05:56:45.855604 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/.s2i/bin/README as src/.s2i/bin/README 2019-06-24T05:56:45.869952000Z I0624 05:56:45.855660 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/.s2i/environment as src/.s2i/environment 2019-06-24T05:56:45.870372000Z I0624 05:56:45.855709 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/.travis.yml as src/.travis.yml 2019-06-24T05:56:45.870685000Z I0624 05:56:45.855761 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/Dockerfile as src/Dockerfile 2019-06-24T05:56:45.870988000Z I0624 05:56:45.855811 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/Gemfile as src/Gemfile 2019-06-24T05:56:45.871318000Z I0624 05:56:45.855866 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/Gemfile.lock as src/Gemfile.lock 2019-06-24T05:56:45.871640000Z I0624 05:56:45.855916 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/README.md 
as src/README.md 2019-06-24T05:56:45.871955000Z I0624 05:56:45.855967 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/Rakefile as src/Rakefile 2019-06-24T05:56:45.872350000Z I0624 05:56:45.856019 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/app.rb as src/app.rb 2019-06-24T05:56:45.872661000Z I0624 05:56:45.856096 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/config/database.rb as src/config/database.rb 2019-06-24T05:56:45.872950000Z I0624 05:56:45.856146 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/config/database.yml as src/config/database.yml 2019-06-24T05:56:45.873265000Z I0624 05:56:45.856223 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/config.ru as src/config.ru 2019-06-24T05:56:45.873575000Z I0624 05:56:45.856319 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/db/migrate/20141102191902_create_key_pair.rb as src/db/migrate/20141102191902_create_key_pair.rb 2019-06-24T05:56:45.873896000Z I0624 05:56:45.856400 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/models.rb as src/models.rb 2019-06-24T05:56:45.874218000Z I0624 05:56:45.856451 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/run.sh as src/run.sh 2019-06-24T05:56:45.874507000Z I0624 05:56:45.856519 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/test/sample_test.rb as src/test/sample_test.rb 2019-06-24T05:56:45.874754000Z I0624 05:56:45.856622 1 tar.go:286] Adding to tar: /tmp/s2i-build409151771/upload/src/views/main.erb as src/views/main.erb 2019-06-24T05:56:45.965339000Z I0624 05:56:45.964782 1 sti.go:677] ---> Installing application source ... 2019-06-24T05:56:45.971490000Z I0624 05:56:45.969825 1 sti.go:677] ---> Building your Ruby application from source ... 2019-06-24T05:56:45.971819000Z I0624 05:56:45.969838 1 sti.go:677] ---> Running 'bundle install --retry 2 --deployment --without development:test' ... 
2019-06-24T05:56:50.139776000Z I0624 05:56:50.138521 1 sti.go:677] Fetching gem metadata from https://rubygems.org/.......... 2019-06-24T05:56:50.312930000Z I0624 05:56:50.311672 1 sti.go:677] Installing rake 12.3.0 2019-06-24T05:56:50.421005000Z I0624 05:56:50.419506 1 sti.go:677] Installing concurrent-ruby 1.0.5 2019-06-24T05:56:50.509494000Z I0624 05:56:50.508276 1 sti.go:677] Installing i18n 0.9.3 2019-06-24T05:56:50.596623000Z I0624 05:56:50.594987 1 sti.go:677] Installing minitest 5.11.3 2019-06-24T05:56:50.749920000Z I0624 05:56:50.748859 1 sti.go:677] Installing thread_safe 0.3.6 2019-06-24T05:56:50.867557000Z I0624 05:56:50.866276 1 sti.go:677] Installing tzinfo 1.2.5 2019-06-24T05:56:51.249133000Z I0624 05:56:51.247704 1 sti.go:677] Installing activesupport 5.1.4 2019-06-24T05:56:51.347308000Z I0624 05:56:51.346315 1 sti.go:677] Installing activemodel 5.1.4 2019-06-24T05:56:51.416385000Z I0624 05:56:51.415264 1 sti.go:677] Installing arel 8.0.0 2019-06-24T05:56:51.656374000Z I0624 05:56:51.654930 1 sti.go:677] Installing activerecord 5.1.4 2019-06-24T05:56:51.707404000Z I0624 05:56:51.706299 1 sti.go:677] Installing mustermann 1.0.3 2019-06-24T05:56:59.280299000Z I0624 05:56:59.278274 1 sti.go:677] Installing mysql2 0.4.10 2019-06-24T05:56:59.522966000Z I0624 05:56:59.520804 1 sti.go:677] Installing rack 2.0.6 2019-06-24T05:56:59.627241000Z I0624 05:56:59.626272 1 sti.go:677] Installing rack-protection 2.0.5 2019-06-24T05:56:59.702487000Z I0624 05:56:59.701361 1 sti.go:677] Installing tilt 2.0.9 2019-06-24T05:56:59.785335000Z I0624 05:56:59.784330 1 sti.go:677] Installing sinatra 2.0.5 2019-06-24T05:56:59.819200000Z I0624 05:56:59.817939 1 sti.go:677] Installing sinatra-activerecord 2.0.13 2019-06-24T05:56:59.820340000Z I0624 05:56:59.819776 1 sti.go:677] Using bundler 1.7.8 2019-06-24T05:56:59.820665000Z I0624 05:56:59.819795 1 sti.go:677] Your bundle is complete! 
2019-06-24T05:56:59.822058000Z I0624 05:56:59.821597 1 sti.go:677] Gems in the groups development and test were not installed. 2019-06-24T05:56:59.822401000Z I0624 05:56:59.821610 1 sti.go:677] It was installed into ./bundle 2019-06-24T05:56:59.843159000Z I0624 05:56:59.842333 1 sti.go:677] ---> Cleaning up unused ruby gems ... 2019-06-24T05:57:01.048805000Z I0624 05:57:01.046294 1 sti.go:681] WARNING: If you plan to load any of ActiveSupport's core extensions to Hash, be 2019-06-24T05:57:01.049138000Z I0624 05:57:01.046317 1 sti.go:681] sure to do so *before* loading Sinatra::Application or Sinatra::Base. If not, 2019-06-24T05:57:01.051394000Z I0624 05:57:01.046325 1 sti.go:681] you may disregard this warning. 2019-06-24T05:57:01.051729000Z I0624 05:57:01.046332 1 sti.go:681] 2019-06-24T05:57:01.052061000Z I0624 05:57:01.046338 1 sti.go:681] Set SINATRA_ACTIVESUPPORT_WARNING=false in the environment to hide this warning. 2019-06-24T05:57:02.038292000Z I0624 05:57:02.029676 1 docker.go:1050] Waiting for container "eac5801cc248a636adde6c70798c370653bd6c54e3d87c610ff23bac6c3fb74b" to stop ... 
2019-06-24T05:57:02.419362000Z I0624 05:57:02.418336 1 docker.go:1067] Invoking PostExecute function 2019-06-24T05:57:02.419752000Z I0624 05:57:02.418369 1 postexecutorstep.go:63] Skipping step: store previous image 2019-06-24T05:57:02.420067000Z I0624 05:57:02.418378 1 postexecutorstep.go:110] Executing step: commit image 2019-06-24T05:57:02.429864000Z I0624 05:57:02.429453 1 docker.go:1101] Committing container with dockerOpts: {Reference:extended-test-build-no-outputname-jhqbx-v10rd/test-sti-1:9b2d8d5e Comment: Author: Changes:[] Pause:false Config:0xc42109d8c0}, config: {Hostname: Domainname: User:1001 AttachStdin:false AttachStdout:false AttachStderr:false ExposedPorts:map[] Tty:false OpenStdin:false StdinOnce:false Env:[RACK_ENV=production OPENSHIFT_BUILD_NAME=test-sti-1 OPENSHIFT_BUILD_NAMESPACE=extended-test-build-no-outputname-jhqbx-v10rd OPENSHIFT_BUILD_SOURCE=https://github.com/openshift/ruby-hello-world OPENSHIFT_BUILD_COMMIT=787f1beae9956c959c6af62ee43bfdda73769cf7 BUILD_LOGLEVEL=5] Cmd:[/usr/libexec/s2i/run] ArgsEscaped:false Image: Volumes:map[] WorkingDir: Entrypoint:[container-entrypoint] NetworkDisabled:false MacAddress: OnBuild:[] Labels:map[io.s2i.scripts-url:image:///usr/libexec/s2i description:Ruby 2.2 available as container is a base platform for building and running various Ruby 2.2 applications and frameworks. Ruby is the interpreted scripting language for quick and easy object-oriented programming. It has many features to process text files and to do system management tasks (as in Perl). It is simple, straight-forward, and extensible. 
version:2.2 io.openshift.builder-version:"c159276" maintainer:SoftwareCollections.org <sclorg@redhat.com> summary:Platform for building and running Ruby 2.2 applications io.openshift.build.commit.date:Thu Jan 17 17:21:03 2019 -0500 io.openshift.build.namespace:extended-test-build-no-outputname-jhqbx-v10rd name:centos/ruby-22-centos7 io.openshift.build.commit.ref:master io.openshift.build.source-location:https://github.com/openshift/ruby-hello-world io.openshift.build.commit.author:Ben Parees <bparees@users.noreply.github.com> io.openshift.tags:builder,ruby,ruby22 usage:s2i build https://github.com/sclorg/s2i-ruby-container.git --context-dir=2.4/test/puma-test-app/ centos/ruby-22-centos7 ruby-sample-app io.k8s.description:Ruby 2.2 available as container is a base platform for building and running various Ruby 2.2 applications and frameworks. Ruby is the interpreted scripting language for quick and easy object-oriented programming. It has many features to process text files and to do system management tasks (as in Perl). It is simple, straight-forward, and extensible. 
io.k8s.display-name:extended-test-build-no-outputname-jhqbx-v10rd/test-sti-1:9b2d8d5e io.openshift.s2i.scripts-url:image:///usr/libexec/s2i io.openshift.build.commit.id:787f1beae9956c959c6af62ee43bfdda73769cf7 io.openshift.build.image:centos/ruby-22-centos7 io.openshift.build.name:test-sti-1 com.redhat.component:rh-ruby22-docker release:1 io.openshift.expose-services:8080:http org.label-schema.schema-version:= 1.0 org.label-schema.name=CentOS Base Image org.label-schema.vendor=CentOS org.label-schema.license=GPLv2 org.label-schema.build-date=20180402 io.openshift.build.commit.message:Merge pull request #78 from bparees/v22] StopSignal:} 2019-06-24T05:57:06.855040000Z I0624 05:57:06.852171 1 postexecutorstep.go:413] Executing step: report success 2019-06-24T05:57:06.856021000Z I0624 05:57:06.852229 1 postexecutorstep.go:418] Successfully built extended-test-build-no-outputname-jhqbx-v10rd/test-sti-1:9b2d8d5e 2019-06-24T05:57:06.857789000Z I0624 05:57:06.852238 1 postexecutorstep.go:88] Skipping step: remove previous image 2019-06-24T05:57:06.858120000Z I0624 05:57:06.852272 1 docker.go:986] Removing container "eac5801cc248a636adde6c70798c370653bd6c54e3d87c610ff23bac6c3fb74b" ... 2019-06-24T05:57:06.898802000Z I0624 05:57:06.894829 1 docker.go:996] Removed container "eac5801cc248a636adde6c70798c370653bd6c54e3d87c610ff23bac6c3fb74b" 2019-06-24T05:57:06.899158000Z I0624 05:57:06.894922 1 cleanup.go:31] Temporary directory "/tmp/s2i-build409151771" will be saved, not deleted 2019-06-24T05:57:06.929511000Z Build complete, no image push requested [AfterEach] [builds][Conformance] build without output image /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121 Jun 24 01:57:12.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-build-no-outputname-jhqbx-v10rd" for this suite. 
Jun 24 01:57:23.040: INFO: namespace: extended-test-build-no-outputname-jhqbx-v10rd, resource: bindings, ignored listing per whitelist • [SLOW TEST:82.547 seconds] [builds][Conformance] build without output image /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/no_outputname.go:55 building from templates /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/no_outputname.go:54 should create an image from a S2i template without an output image reference defined /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/no_outputname.go:53 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [k8s.io] Network Partition [Disruptive] [Slow] [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 [k8s.io] [StatefulSet] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 should come back up if node goes down [Slow] [Disruptive] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/network_partition.go:407 skipping tests not in the Origin conformance suite 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [networking] services [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:65 when using a plugin that isolates namespaces by default /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:273 should prevent connections to pods in different namespaces on different nodes via service IPs /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:46 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [image_ecosystem][jenkins][Slow] openshift pipeline plugin [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/jenkins_plugin.go:664 jenkins-plugin test context 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/jenkins_plugin.go:663
jenkins-plugin test trigger build with envs
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/jenkins_plugin.go:340
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Sysctls [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should reject invalid sysctls
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/sysctl.go:181
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Pod garbage collector [Feature:PodGarbageCollector] [Slow] [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should handle the creation of 1000 pods
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/pod_gc.go:79
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command and arguments [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/docker_containers.go:65
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Docker Containers
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:57:22.249: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:57:22.349: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/docker_containers.go:65
STEP: Creating a pod to test override all
Jun 24 01:57:22.414: INFO: Waiting up to 5m0s for pod client-containers-ef0d7347-9644-11e9-a60e-0e9110352016 status to be success or failure
Jun 24 01:57:22.426: INFO: Waiting for pod client-containers-ef0d7347-9644-11e9-a60e-0e9110352016 in namespace 'e2e-tests-containers-wmlk7' status to be 'success or failure'(found phase: "Pending", readiness: false) (11.697292ms elapsed)
Jun 24 01:57:24.429: INFO: Waiting for pod client-containers-ef0d7347-9644-11e9-a60e-0e9110352016 in namespace 'e2e-tests-containers-wmlk7' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.014753472s elapsed)
STEP: Saw pod success
Jun 24 01:57:26.432: INFO: Trying to get logs from node 172.18.11.204 pod client-containers-ef0d7347-9644-11e9-a60e-0e9110352016 container test-container: <nil>
STEP: delete the pod
Jun 24 01:57:26.446: INFO: Waiting for pod client-containers-ef0d7347-9644-11e9-a60e-0e9110352016 to disappear
Jun 24 01:57:26.449: INFO: Pod client-containers-ef0d7347-9644-11e9-a60e-0e9110352016 no longer exists
[AfterEach] [k8s.io] Docker Containers
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 01:57:26.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-wmlk7" for this suite.
Jun 24 01:57:36.551: INFO: namespace: e2e-tests-containers-wmlk7, resource: bindings, ignored listing per whitelist
• [SLOW TEST:14.366 seconds]
[k8s.io] Docker Containers
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should be able to override the image's default command and arguments [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/docker_containers.go:65
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] etcd Upgrade [Feature:EtcdUpgrade] [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
[k8s.io] etcd upgrade
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should maintain a functioning cluster
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:194
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Port forwarding [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
[k8s.io] With a server listening on 0.0.0.0
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should support forwarding over websockets
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/portforward.go:492
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Garbage collector [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should delete pods created by rc when not orphaning
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/garbage_collector.go:252
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[k8s.io] Projected should be consumable in multiple volumes in a pod [Conformance] [Volume]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:171
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Projected
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:57:23.096: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:57:23.174: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Projected
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:803
[It] should be consumable in multiple volumes in a pod [Conformance] [Volume]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:171
STEP: Creating secret with name projected-secret-test-ef8a1672-9644-11e9-afd4-0e9110352016
STEP: Creating a pod to test consume secrets
Jun 24 01:57:23.234: INFO: Waiting up to 5m0s for pod pod-projected-secrets-ef8a74ec-9644-11e9-afd4-0e9110352016 status to be success or failure
Jun 24 01:57:23.236: INFO: Waiting for pod pod-projected-secrets-ef8a74ec-9644-11e9-afd4-0e9110352016 in namespace 'e2e-tests-projected-f5dcc' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.149782ms elapsed)
Jun 24 01:57:25.238: INFO: Waiting for pod pod-projected-secrets-ef8a74ec-9644-11e9-afd4-0e9110352016 in namespace 'e2e-tests-projected-f5dcc' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.004620701s elapsed)
STEP: Saw pod success
Jun 24 01:57:27.243: INFO: Trying to get logs from node 172.18.11.204 pod pod-projected-secrets-ef8a74ec-9644-11e9-afd4-0e9110352016 container secret-volume-test: <nil>
STEP: delete the pod
Jun 24 01:57:27.257: INFO: Waiting for pod pod-projected-secrets-ef8a74ec-9644-11e9-afd4-0e9110352016 to disappear
Jun 24 01:57:27.259: INFO: Pod pod-projected-secrets-ef8a74ec-9644-11e9-afd4-0e9110352016 no longer exists
[AfterEach] [k8s.io] Projected
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 01:57:27.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-f5dcc" for this suite.
Jun 24 01:57:37.400: INFO: namespace: e2e-tests-projected-f5dcc, resource: bindings, ignored listing per whitelist
• [SLOW TEST:14.307 seconds]
[k8s.io] Projected
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should be consumable in multiple volumes in a pod [Conformance] [Volume]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:171
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[image_ecosystem][Slow] openshift images should be SCL enabled [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:76
returning s2i usage when running the image
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:38
"openshift/perl-516-centos7" should print the usage
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:37
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Upgrade [Feature:Upgrade] [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
[k8s.io] cluster upgrade
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should maintain a functioning cluster [Feature:ClusterUpgrade]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:123
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[builds][Slow] starting a build using CLI [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/start.go:341
override environment
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/start.go:118
BUILD_LOGLEVEL in buildconfig can be overridden by build-loglevel
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/start.go:116
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Kubectl client [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
[k8s.io] Kubectl taint [Serial]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should update the taint on a node
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:1476
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
deploymentconfigs paused [Conformance] should disable actions on deployments
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:704
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] deploymentconfigs
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:56:41.741: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:56:41.761: INFO: configPath is now "/tmp/extended-test-cli-deployment-n81dj-t81vj-user.kubeconfig"
Jun 24 01:56:41.761: INFO: The user is now "extended-test-cli-deployment-n81dj-t81vj-user"
Jun 24 01:56:41.761: INFO: Creating project "extended-test-cli-deployment-n81dj-t81vj"
STEP: Waiting for a default service account to be provisioned in namespace
[It] should disable actions on deployments
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:704
Jun 24 01:56:41.884: INFO: Running 'oc create --config=/tmp/extended-test-cli-deployment-n81dj-t81vj-user.kubeconfig --namespace=extended-test-cli-deployment-n81dj-t81vj -f /tmp/fixture-testdata-dir824123364/test/extended/testdata/deployments/paused-deployment.yaml -o name'
STEP: verifying that we cannot start a new deployment via oc deploy
Jun 24 01:56:42.181: INFO: Running 'oc deploy --config=/tmp/extended-test-cli-deployment-n81dj-t81vj-user.kubeconfig --namespace=extended-test-cli-deployment-n81dj-t81vj deploymentconfig/paused --latest'
Jun 24 01:56:42.422: INFO: Error running &{/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/oc [oc deploy --config=/tmp/extended-test-cli-deployment-n81dj-t81vj-user.kubeconfig --namespace=extended-test-cli-deployment-n81dj-t81vj deploymentconfig/paused --latest] [] Flag --latest has been deprecated, use 'oc rollout latest' instead error: cannot deploy a paused deployment config Flag --latest has been deprecated, use 'oc rollout latest' instead error: cannot deploy a paused deployment config [] <nil> 0xc420b09470 exit status 1 <nil> <nil> true [0xc4217a2758 0xc4217a2780 0xc4217a2780] [0xc4217a2758 0xc4217a2780] [0xc4217a2760 0xc4217a2778] [0xdd8a30 0xdd8b30] 0xc421982540 <nil>}:
Flag --latest has been deprecated, use 'oc rollout latest' instead
error: cannot deploy a paused deployment config
STEP: verifying that we cannot start a new deployment via oc rollout
Jun 24 01:56:42.422: INFO: Running 'oc rollout --config=/tmp/extended-test-cli-deployment-n81dj-t81vj-user.kubeconfig --namespace=extended-test-cli-deployment-n81dj-t81vj latest deploymentconfig/paused'
Jun 24 01:56:42.721: INFO: Error running &{/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/oc [oc rollout --config=/tmp/extended-test-cli-deployment-n81dj-t81vj-user.kubeconfig --namespace=extended-test-cli-deployment-n81dj-t81vj latest deploymentconfig/paused] [] error: cannot deploy a paused deployment config error: cannot deploy a paused deployment config [] <nil> 0xc4207d1b00 exit status 1 <nil> <nil> true [0xc420fa8788 0xc420fa87b0 0xc420fa87b0] [0xc420fa8788 0xc420fa87b0] [0xc420fa8790 0xc420fa87a8] [0xdd8a30 0xdd8b30] 0xc421973bc0 <nil>}:
error: cannot deploy a paused deployment config
STEP: verifying that we cannot cancel a deployment
Jun 24 01:56:42.721: INFO: Running 'oc rollout --config=/tmp/extended-test-cli-deployment-n81dj-t81vj-user.kubeconfig --namespace=extended-test-cli-deployment-n81dj-t81vj cancel deploymentconfig/paused'
Jun 24 01:56:43.038: INFO: Error running &{/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/oc [oc rollout --config=/tmp/extended-test-cli-deployment-n81dj-t81vj-user.kubeconfig --namespace=extended-test-cli-deployment-n81dj-t81vj cancel deploymentconfig/paused] [] unable to cancel paused deployment extended-test-cli-deployment-n81dj-t81vj/paused there have been no replication controllers for extended-test-cli-deployment-n81dj-t81vj/paused unable to cancel paused deployment extended-test-cli-deployment-n81dj-t81vj/paused there have been no replication controllers for extended-test-cli-deployment-n81dj-t81vj/paused [] <nil> 0xc4207d1ef0 exit status 1 <nil> <nil> true [0xc420fa87b8 0xc420fa87e0 0xc420fa87e0] [0xc420fa87b8 0xc420fa87e0] [0xc420fa87c0 0xc420fa87d8] [0xdd8a30 0xdd8b30] 0xc421973f20 <nil>}:
unable to cancel paused deployment extended-test-cli-deployment-n81dj-t81vj/paused
there have been no replication controllers for extended-test-cli-deployment-n81dj-t81vj/paused
STEP: verifying that we cannot retry a deployment
Jun 24 01:56:43.039: INFO: Running 'oc deploy --config=/tmp/extended-test-cli-deployment-n81dj-t81vj-user.kubeconfig --namespace=extended-test-cli-deployment-n81dj-t81vj deploymentconfig/paused --retry'
Jun 24 01:56:43.302: INFO: Error running &{/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/oc [oc deploy --config=/tmp/extended-test-cli-deployment-n81dj-t81vj-user.kubeconfig --namespace=extended-test-cli-deployment-n81dj-t81vj deploymentconfig/paused --retry] [] error: cannot retry a paused deployment config error: cannot retry a paused deployment config [] <nil> 0xc420b097d0 exit status 1 <nil> <nil> true [0xc4217a2788 0xc4217a27b0 0xc4217a27b0] [0xc4217a2788 0xc4217a27b0] [0xc4217a2790 0xc4217a27a8] [0xdd8a30 0xdd8b30] 0xc4219827e0 <nil>}:
error: cannot retry a paused deployment config
STEP: verifying that we cannot rollout retry a deployment
Jun 24 01:56:43.302: INFO: Running 'oc rollout --config=/tmp/extended-test-cli-deployment-n81dj-t81vj-user.kubeconfig --namespace=extended-test-cli-deployment-n81dj-t81vj retry deploymentconfig/paused'
Jun 24 01:56:43.592: INFO: Error running &{/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/oc [oc rollout --config=/tmp/extended-test-cli-deployment-n81dj-t81vj-user.kubeconfig --namespace=extended-test-cli-deployment-n81dj-t81vj retry deploymentconfig/paused] [] error: unable to retry paused deployment config "paused" error: unable to retry paused deployment config "paused" [] <nil> 0xc421620300 exit status 1 <nil> <nil> true [0xc420fa87e8 0xc420fa8810 0xc420fa8810] [0xc420fa87e8 0xc420fa8810] [0xc420fa87f0 0xc420fa8808] [0xdd8a30 0xdd8b30] 0xc42171e360 <nil>}:
error: unable to retry paused deployment config "paused"
STEP: verifying that we cannot rollback a deployment
Jun 24 01:56:43.592: INFO: Running 'oc rollback --config=/tmp/extended-test-cli-deployment-n81dj-t81vj-user.kubeconfig --namespace=extended-test-cli-deployment-n81dj-t81vj deploymentconfig/paused --to-version 1'
Jun 24 01:56:43.859: INFO: Error running &{/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/oc [oc rollback --config=/tmp/extended-test-cli-deployment-n81dj-t81vj-user.kubeconfig --namespace=extended-test-cli-deployment-n81dj-t81vj deploymentconfig/paused --to-version 1] [] error: cannot rollback a paused deployment config error: cannot rollback a paused deployment config [] <nil> 0xc421620690 exit status 1 <nil> <nil> true [0xc420fa8818 0xc420fa8840 0xc420fa8840] [0xc420fa8818 0xc420fa8840] [0xc420fa8820 0xc420fa8838] [0xdd8a30 0xdd8b30] 0xc42171e5a0 <nil>}:
error: cannot rollback a paused deployment config
Jun 24 01:56:55.746: INFO: Latest rollout of dc/paused (rc/paused-1) is complete.
[AfterEach] paused [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:650
[AfterEach] deploymentconfigs
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 01:56:55.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "extended-test-cli-deployment-n81dj-t81vj" for this suite.
Jun 24 01:57:45.870: INFO: namespace: extended-test-cli-deployment-n81dj-t81vj, resource: bindings, ignored listing per whitelist
• [SLOW TEST:64.134 seconds]
deploymentconfigs
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:979
paused [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:705
should disable actions on deployments
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:704
------------------------------
[k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/statefulset.go:249
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] StatefulSet
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:56:35.864: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:56:35.947: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] StatefulSet
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/statefulset.go:63
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/statefulset.go:84
STEP: Creating service test in namespace e2e-tests-statefulset-vdzlq
[It] should not deadlock when a pod's predecessor fails
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/statefulset.go:249
STEP: Creating statefulset ss in namespace e2e-tests-statefulset-vdzlq
Jun 24 01:56:36.009: INFO: Found 0 stateful pods, waiting for 1
Jun 24 01:56:46.013: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Marking stateful pod at index 0 as healthy.
Jun 24 01:56:46.023: INFO: Set annotation pod.alpha.kubernetes.io/initialized to true on pod ss-0
STEP: Waiting for stateful pod at index 1 to enter running.
Jun 24 01:56:46.030: INFO: Found 1 stateful pods, waiting for 2
Jun 24 01:56:56.041: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jun 24 01:56:56.041: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
STEP: Deleting healthy stateful pod at index 0.
STEP: Confirming stateful pod at index 0 is recreated.
Jun 24 01:56:56.074: INFO: Found 1 stateful pods, waiting for 2
Jun 24 01:57:06.077: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jun 24 01:57:06.077: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
STEP: Deleting unhealthy stateful pod at index 1.
STEP: Confirming all stateful pods in statefulset are created.
Jun 24 01:57:06.088: INFO: Waiting for stateful pod at index 1 to enter Running
Jun 24 01:57:06.099: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jun 24 01:57:06.099: INFO: Marking stateful pod at index 0 healthy
Jun 24 01:57:06.174: INFO: Set annotation pod.alpha.kubernetes.io/initialized to true on pod ss-0
Jun 24 01:57:06.174: INFO: Waiting for stateful pod at index 2 to enter Running
Jun 24 01:57:06.196: INFO: Found 1 stateful pods, waiting for 2
Jun 24 01:57:16.199: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jun 24 01:57:16.199: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jun 24 01:57:16.199: INFO: Marking stateful pod at index 1 healthy
Jun 24 01:57:16.215: INFO: Set annotation pod.alpha.kubernetes.io/initialized to true on pod ss-1
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/statefulset.go:92
Jun 24 01:57:16.215: INFO: Deleting all statefulset in ns e2e-tests-statefulset-vdzlq
Jun 24 01:57:16.218: INFO: Scaling statefulset ss to 0
Jun 24 01:57:26.228: INFO: Waiting for statefulset status.replicas updated to 0
Jun 24 01:57:26.230: INFO: Deleting statefulset ss
Jun 24 01:57:26.236: INFO: Deleting pvc: datadir-ss-0 with volume pvc-d36695da-9644-11e9-9f9d-0e9110352016
Jun 24 01:57:26.239: INFO: Deleting pvc: datadir-ss-1 with volume pvc-d960c331-9644-11e9-9f9d-0e9110352016
Jun 24 01:57:26.246: INFO: Still waiting for pvs of statefulset to disappear:
pvc-d36695da-9644-11e9-9f9d-0e9110352016: {Phase:Released Message: Reason:}
pvc-d960c331-9644-11e9-9f9d-0e9110352016: {Phase:Bound Message: Reason:}
[AfterEach] [k8s.io] StatefulSet
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 01:57:36.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-vdzlq" for this suite.
Jun 24 01:57:46.324: INFO: namespace: e2e-tests-statefulset-vdzlq, resource: bindings, ignored listing per whitelist
• [SLOW TEST:70.514 seconds]
[k8s.io] StatefulSet
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should not deadlock when a pod's predecessor fails
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/statefulset.go:249
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Reboot [Disruptive] [Feature:Reboot] [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
each node by triggering kernel panic and ensure they function upon restart
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/reboot.go:108
skipping tests not in the Origin conformance suite
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[k8s.io] Secrets should be consumable in multiple volumes in a pod [Conformance] [Volume]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/secrets.go:154
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Secrets
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:57:37.411: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:57:37.487: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [Conformance] [Volume]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/secrets.go:154
STEP: Creating secret with name secret-test-f8145f02-9644-11e9-afd4-0e9110352016
STEP: Creating a pod to test consume secrets
Jun 24 01:57:37.562: INFO: Waiting up to 5m0s for pod pod-secrets-f814b96b-9644-11e9-afd4-0e9110352016 status to be success or failure
Jun 24 01:57:37.569: INFO: Waiting for pod pod-secrets-f814b96b-9644-11e9-afd4-0e9110352016 in namespace 'e2e-tests-secrets-b7slr' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.932218ms elapsed)
Jun 24 01:57:39.572: INFO: Waiting for pod pod-secrets-f814b96b-9644-11e9-afd4-0e9110352016 in namespace 'e2e-tests-secrets-b7slr' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.010334762s elapsed)
Jun 24 01:57:41.575: INFO: Waiting for pod pod-secrets-f814b96b-9644-11e9-afd4-0e9110352016 in namespace 'e2e-tests-secrets-b7slr' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.012866161s elapsed)
STEP: Saw pod success
Jun 24 01:57:43.580: INFO: Trying to get logs from node 172.18.11.204 pod pod-secrets-f814b96b-9644-11e9-afd4-0e9110352016 container secret-volume-test: <nil>
STEP: delete the pod
Jun 24 01:57:43.602: INFO: Waiting for pod pod-secrets-f814b96b-9644-11e9-afd4-0e9110352016 to disappear
Jun 24 01:57:43.606: INFO: Pod pod-secrets-f814b96b-9644-11e9-afd4-0e9110352016 no longer exists
[AfterEach] [k8s.io] Secrets
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 01:57:43.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-b7slr" for this suite.
Jun 24 01:57:53.681: INFO: namespace: e2e-tests-secrets-b7slr, resource: bindings, ignored listing per whitelist
• [SLOW TEST:16.324 seconds]
[k8s.io] Secrets
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  should be consumable in multiple volumes in a pod [Conformance] [Volume]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/secrets.go:154
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Kubectl client [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  [k8s.io] Guestbook application
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
    should create and stop a working application [Conformance]
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:373

    skipping tests not in the Origin conformance suite
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Federated ReplicaSet [Feature:Federation] [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  Features
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/replicaset.go:199
    Preferences
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/replicaset.go:198
      should create replicasets with min replicas preference
      /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/replicaset.go:165

      skipping tests not in the Origin conformance suite
      /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[job][Conformance] openshift can execute jobs controller should create and run a job in user project
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/jobs/jobs.go:53
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [job][Conformance] openshift can execute jobs
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:57:36.629: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:57:36.652: INFO: configPath is now "/tmp/extended-test-job-controller-b4d2w-p0l2j-user.kubeconfig"
Jun 24 01:57:36.652: INFO: The user is now "extended-test-job-controller-b4d2w-p0l2j-user"
Jun 24 01:57:36.652: INFO: Creating project "extended-test-job-controller-b4d2w-p0l2j"
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create and run a job in user project
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/jobs/jobs.go:53
STEP: creating a job from "/tmp/fixture-testdata-dir061046886/test/extended/testdata/jobs/v1.yaml"...
Jun 24 01:57:36.783: INFO: Running 'oc create --config=/tmp/extended-test-job-controller-b4d2w-p0l2j-user.kubeconfig --namespace=extended-test-job-controller-b4d2w-p0l2j -f /tmp/fixture-testdata-dir061046886/test/extended/testdata/jobs/v1.yaml'
job "simplev1" created
STEP: waiting for a pod...
STEP: waiting for a job...
STEP: checking job status...
STEP: removing a job...
Jun 24 01:57:43.126: INFO: Running 'oc delete --config=/tmp/extended-test-job-controller-b4d2w-p0l2j-user.kubeconfig --namespace=extended-test-job-controller-b4d2w-p0l2j job/simplev1'
job "simplev1" deleted
[AfterEach] [job][Conformance] openshift can execute jobs
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 01:57:45.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "extended-test-job-controller-b4d2w-p0l2j" for this suite.
Jun 24 01:57:55.597: INFO: namespace: extended-test-job-controller-b4d2w-p0l2j, resource: bindings, ignored listing per whitelist
• [SLOW TEST:18.974 seconds]
[job][Conformance] openshift can execute jobs
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/jobs/jobs.go:55
  controller
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/jobs/jobs.go:54
    should create and run a job in user project
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/jobs/jobs.go:53
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[image_ecosystem][mariadb][Slow] openshift mariadb image [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/mariadb_ephemeral.go:41
  Creating from a template
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/mariadb_ephemeral.go:39
    should instantiate the template
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/mariadb_ephemeral.go:38

    skipping tests not in the Origin conformance suite
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[image_ecosystem][Slow] openshift sample application repositories [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/sample_repos.go:225
  [image_ecosystem][nodejs] images with nodejs-ex repo
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/sample_repos.go:195
    Building nodejs app from new-app
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/sample_repos.go:92
      should build a nodejs image and run it in a pod
      /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/sample_repos.go:91

      skipping tests not in the Origin conformance suite
      /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[builds][Slow] build can have Dockerfile input [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/dockerfile.go:116
  being created from new-build
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/dockerfile.go:115
    should create a image via new-build
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/dockerfile.go:62

    skipping tests not in the Origin conformance suite
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[cli][Slow] oc debug [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/cli/debug.go:50
  should print the docker image-based container entrypoint/command
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/cli/debug.go:43

  skipping tests not in the Origin conformance suite
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[image_ecosystem][Slow] openshift images should be SCL enabled [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:76
  returning s2i usage when running the image
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:38
    "ci.dev.openshift.redhat.com:5000/openshift/ruby-22-rhel7" should print the usage
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:37

    skipping tests not in the Origin conformance suite
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] ReplicaSet [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  should surface a failure condition on a common issue like exceeded quota
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/replica_set.go:92

  skipping tests not in the Origin conformance suite
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Deployment [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  overlapping deployment should not fight with each other
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/deployment.go:101

  skipping tests not in the Origin conformance suite
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[builds][Slow] update failure status [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/failure_status.go:195
  Build status fetch S2I source failure
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/failure_status.go:99
    should contain the S2I fetch source failure reason and message
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/failure_status.go:98

    skipping tests not in the Origin conformance suite
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Deployment [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  RecreateDeployment should delete old pods and create new ones
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/deployment.go:74

  skipping tests not in the Origin conformance suite
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[k8s.io] Downward API volume should set DefaultMode on files [Conformance] [Volume]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:59
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Downward API volume
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:57:45.878: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:57:45.937: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Downward API volume
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set DefaultMode on files [Conformance] [Volume]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:59
STEP: Creating a pod to test downward API volume plugin
Jun 24 01:57:46.005: INFO: Waiting up to 5m0s for pod downwardapi-volume-fd1d09c0-9644-11e9-a4d2-0e9110352016 status to be success or failure
Jun 24 01:57:46.012: INFO: Waiting for pod downwardapi-volume-fd1d09c0-9644-11e9-a4d2-0e9110352016 in namespace 'e2e-tests-downward-api-dl7r5' status to be 'success or failure'(found phase: "Pending", readiness: false) (7.03825ms elapsed)
Jun 24 01:57:48.019: INFO: Waiting for pod downwardapi-volume-fd1d09c0-9644-11e9-a4d2-0e9110352016 in namespace 'e2e-tests-downward-api-dl7r5' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.013445591s elapsed)
STEP: Saw pod success
Jun 24 01:57:50.023: INFO: Trying to get logs from node 172.18.11.204 pod downwardapi-volume-fd1d09c0-9644-11e9-a4d2-0e9110352016 container client-container: <nil>
STEP: delete the pod
Jun 24 01:57:50.038: INFO: Waiting for pod downwardapi-volume-fd1d09c0-9644-11e9-a4d2-0e9110352016 to disappear
Jun 24 01:57:50.041: INFO: Pod downwardapi-volume-fd1d09c0-9644-11e9-a4d2-0e9110352016 no longer exists
[AfterEach] [k8s.io] Downward API volume
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 01:57:50.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-dl7r5" for this suite.
Jun 24 01:58:00.100: INFO: namespace: e2e-tests-downward-api-dl7r5, resource: bindings, ignored listing per whitelist
• [SLOW TEST:14.392 seconds]
[k8s.io] Downward API volume
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  should set DefaultMode on files [Conformance] [Volume]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:59
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Density [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  [Feature:ManualPerformance] should allow starting 30 pods per node using {extensions Deployment} with 2 secrets and 0 daemons
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/density.go:743

  skipping tests not in the Origin conformance suite
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[k8s.io] EmptyDir volumes when FSGroup is specified [Feature:FSGroup] volume on tmpfs should have the correct mode using FSGroup [Volume]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:60
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] EmptyDir volumes
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:57:46.382: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:57:46.468: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode using FSGroup [Volume]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:60
STEP: Creating a pod to test emptydir volume type on tmpfs
Jun 24 01:57:46.528: INFO: Waiting up to 5m0s for pod pod-fd6ce8c9-9644-11e9-a527-0e9110352016 status to be success or failure
Jun 24 01:57:46.540: INFO: Waiting for pod pod-fd6ce8c9-9644-11e9-a527-0e9110352016 in namespace 'e2e-tests-emptydir-0nm41' status to be 'success or failure'(found phase: "Pending", readiness: false) (11.930267ms elapsed)
Jun 24 01:57:48.543: INFO: Waiting for pod pod-fd6ce8c9-9644-11e9-a527-0e9110352016 in namespace 'e2e-tests-emptydir-0nm41' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.014663863s elapsed)
Jun 24 01:57:50.550: INFO: Waiting for pod pod-fd6ce8c9-9644-11e9-a527-0e9110352016 in namespace 'e2e-tests-emptydir-0nm41' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.021520646s elapsed)
STEP: Saw pod success
Jun 24 01:57:52.554: INFO: Trying to get logs from node 172.18.11.204 pod pod-fd6ce8c9-9644-11e9-a527-0e9110352016 container test-container: <nil>
STEP: delete the pod
Jun 24 01:57:52.567: INFO: Waiting for pod pod-fd6ce8c9-9644-11e9-a527-0e9110352016 to disappear
Jun 24 01:57:52.571: INFO: Pod pod-fd6ce8c9-9644-11e9-a527-0e9110352016 no longer exists
[AfterEach] [k8s.io] EmptyDir volumes
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 01:57:52.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-0nm41" for this suite.
Jun 24 01:58:02.705: INFO: namespace: e2e-tests-emptydir-0nm41, resource: bindings, ignored listing per whitelist
• [SLOW TEST:16.359 seconds]
[k8s.io] EmptyDir volumes
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  when FSGroup is specified [Feature:FSGroup]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:61
    volume on tmpfs should have the correct mode using FSGroup [Volume]
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:60
------------------------------
[k8s.io] Projected should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] [Volume]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:967
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Projected
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:57:53.743: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:57:53.835: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Projected
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:803
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] [Volume]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:967
STEP: Creating a pod to test downward API volume plugin
Jun 24 01:57:53.959: INFO: Waiting up to 5m0s for pod downwardapi-volume-01dad80e-9645-11e9-afd4-0e9110352016 status to be success or failure
Jun 24 01:57:53.961: INFO: Waiting for pod downwardapi-volume-01dad80e-9645-11e9-afd4-0e9110352016 in namespace 'e2e-tests-projected-nkxvb' status to be 'success or failure'(found phase: "Pending", readiness: false) (1.911597ms elapsed)
Jun 24 01:57:55.963: INFO: Waiting for pod downwardapi-volume-01dad80e-9645-11e9-afd4-0e9110352016 in namespace 'e2e-tests-projected-nkxvb' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.004424217s elapsed)
STEP: Saw pod success
Jun 24 01:57:57.983: INFO: Trying to get logs from node 172.18.11.204 pod downwardapi-volume-01dad80e-9645-11e9-afd4-0e9110352016 container client-container: <nil>
STEP: delete the pod
Jun 24 01:57:58.096: INFO: Waiting for pod downwardapi-volume-01dad80e-9645-11e9-afd4-0e9110352016 to disappear
Jun 24 01:57:58.102: INFO: Pod downwardapi-volume-01dad80e-9645-11e9-afd4-0e9110352016 no longer exists
[AfterEach] [k8s.io] Projected
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 01:57:58.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-nkxvb" for this suite.
Jun 24 01:58:08.162: INFO: namespace: e2e-tests-projected-nkxvb, resource: bindings, ignored listing per whitelist
• [SLOW TEST:14.516 seconds]
[k8s.io] Projected
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] [Volume]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:967
------------------------------
[k8s.io] EmptyDir volumes should support (non-root,0777,tmpfs) [Conformance] [Volume]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:89
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] EmptyDir volumes
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:58:02.744: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:58:02.841: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [Conformance] [Volume]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:89
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jun 24 01:58:02.945: INFO: Waiting up to 5m0s for pod pod-073520de-9645-11e9-a527-0e9110352016 status to be success or failure
Jun 24 01:58:02.960: INFO: Waiting for pod pod-073520de-9645-11e9-a527-0e9110352016 in namespace 'e2e-tests-emptydir-fsl8w' status to be 'success or failure'(found phase: "Pending", readiness: false) (14.821025ms elapsed)
Jun 24 01:58:04.962: INFO: Waiting for pod pod-073520de-9645-11e9-a527-0e9110352016 in namespace 'e2e-tests-emptydir-fsl8w' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.01676622s elapsed)
Jun 24 01:58:06.968: INFO: Waiting for pod pod-073520de-9645-11e9-a527-0e9110352016 in namespace 'e2e-tests-emptydir-fsl8w' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.02286619s elapsed)
Jun 24 01:58:08.973: INFO: Waiting for pod pod-073520de-9645-11e9-a527-0e9110352016 in namespace 'e2e-tests-emptydir-fsl8w' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.028530183s elapsed)
STEP: Saw pod success
Jun 24 01:58:10.977: INFO: Trying to get logs from node 172.18.11.204 pod pod-073520de-9645-11e9-a527-0e9110352016 container test-container: <nil>
STEP: delete the pod
Jun 24 01:58:11.194: INFO: Waiting for pod pod-073520de-9645-11e9-a527-0e9110352016 to disappear
Jun 24 01:58:11.201: INFO: Pod pod-073520de-9645-11e9-a527-0e9110352016 no longer exists
[AfterEach] [k8s.io] EmptyDir volumes
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 01:58:11.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-fsl8w" for this suite.
Jun 24 01:58:21.385: INFO: namespace: e2e-tests-emptydir-fsl8w, resource: bindings, ignored listing per whitelist
• [SLOW TEST:18.680 seconds]
[k8s.io] EmptyDir volumes
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  should support (non-root,0777,tmpfs) [Conformance] [Volume]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/empty_dir.go:89
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[image_ecosystem][Slow] openshift images should be SCL enabled [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:76
  using the SCL in s2i images
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:73
    "ci.dev.openshift.redhat.com:5000/openshift/ruby-22-rhel7" should be SCL enabled
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:72

    skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [k8s.io] Federated ReplicaSet [Feature:Federation] [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 Features /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/replicaset.go:199 CRUD /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/replicaset.go:138 should be deleted from underlying clusters when OrphanDependents is false /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/replicaset.go:122 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [k8s.io] PersistentVolumes [Volume][Serial] [BeforeEach] 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 [k8s.io] PersistentVolumes:NFS[Flaky] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 with multiple PVs and PVCs all in same ns /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/persistent_volumes.go:241 should create 4 PVs and 2 PVCs: test write access /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/persistent_volumes.go:240 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [k8s.io] Proxy version v1 should proxy logs on node [Conformance] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/proxy.go:63 [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] version v1 /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120 STEP: Creating a kubernetes client Jun 24 01:58:21.431: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Jun 24 01:58:21.510: INFO: About to run a Kube e2e 
test, ensuring namespace is privileged STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node [Conformance] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/proxy.go:63 Jun 24 01:58:21.567: INFO: (0) /api/v1/proxy/nodes/172.18.11.204/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 4.4667ms) Jun 24 01:58:21.570: INFO: (1) /api/v1/proxy/nodes/172.18.11.204/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.220474ms) Jun 24 01:58:21.572: INFO: (2) /api/v1/proxy/nodes/172.18.11.204/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.248151ms) Jun 24 01:58:21.574: INFO: (3) /api/v1/proxy/nodes/172.18.11.204/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.232354ms) Jun 24 01:58:21.577: INFO: (4) /api/v1/proxy/nodes/172.18.11.204/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.392313ms) Jun 24 01:58:21.579: INFO: (5) /api/v1/proxy/nodes/172.18.11.204/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.348093ms) Jun 24 01:58:21.581: INFO: (6) /api/v1/proxy/nodes/172.18.11.204/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.335666ms) Jun 24 01:58:21.584: INFO: (7) /api/v1/proxy/nodes/172.18.11.204/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... 
(200; 2.267212ms) Jun 24 01:58:21.586: INFO: (8) /api/v1/proxy/nodes/172.18.11.204/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.314679ms) Jun 24 01:58:21.588: INFO: (9) /api/v1/proxy/nodes/172.18.11.204/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.323029ms) Jun 24 01:58:21.591: INFO: (10) /api/v1/proxy/nodes/172.18.11.204/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.310241ms) Jun 24 01:58:21.593: INFO: (11) /api/v1/proxy/nodes/172.18.11.204/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.31835ms) Jun 24 01:58:21.595: INFO: (12) /api/v1/proxy/nodes/172.18.11.204/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.254794ms) Jun 24 01:58:21.597: INFO: (13) /api/v1/proxy/nodes/172.18.11.204/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.248929ms) Jun 24 01:58:21.600: INFO: (14) /api/v1/proxy/nodes/172.18.11.204/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.33901ms) Jun 24 01:58:21.602: INFO: (15) /api/v1/proxy/nodes/172.18.11.204/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.188507ms) Jun 24 01:58:21.604: INFO: (16) /api/v1/proxy/nodes/172.18.11.204/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.2917ms) Jun 24 01:58:21.607: INFO: (17) /api/v1/proxy/nodes/172.18.11.204/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... 
(200; 2.318296ms) Jun 24 01:58:21.609: INFO: (18) /api/v1/proxy/nodes/172.18.11.204/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.301097ms) Jun 24 01:58:21.611: INFO: (19) /api/v1/proxy/nodes/172.18.11.204/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.260323ms) [AfterEach] version v1 /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121 Jun 24 01:58:21.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-bx21x" for this suite. Jun 24 01:58:31.674: INFO: namespace: e2e-tests-proxy-bx21x, resource: bindings, ignored listing per whitelist • [SLOW TEST:10.351 seconds] [k8s.io] Proxy /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 version v1 /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/proxy.go:275 should proxy logs on node [Conformance] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/proxy.go:63 ------------------------------ [k8s.io] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/events.go:129 [BeforeEach] [Top Level] 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [k8s.io] Events /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120 STEP: Creating a kubernetes client Jun 24 01:58:00.274: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Jun 24 01:58:00.363: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/events.go:129 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-05bf4292-9645-11e9-a4d2-0e9110352016,GenerateName:,Namespace:e2e-tests-events-2n137,SelfLink:/api/v1/namespaces/e2e-tests-events-2n137/pods/send-events-05bf4292-9645-11e9-a4d2-0e9110352016,UID:05bfe13d-9645-11e9-9f9d-0e9110352016,ResourceVersion:7468,Generation:0,CreationTimestamp:2019-06-24 01:58:00.488171467 -0400 EDT,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 484115267,},Annotations:map[string]string{openshift.io/scc: anyuid,},OwnerReferences:[],Finalizers:[],ClusterName:,},Spec:PodSpec{Volumes:[{default-token-czx56 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-czx56,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p 
gcr.io/google_containers/serve_hostname:v1.4 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-czx56 true /var/run/secrets/kubernetes.io/serviceaccount }] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD SYS_CHROOT],},Privileged:*false,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c26,c0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:172.18.11.204,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c26,c0,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,},ImagePullSecrets:[{default-dockercfg-9c7gh}],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 01:58:00 -0400 EDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 01:58:19 -0400 EDT } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 01:58:00 -0400 EDT }],Message:,Reason:,HostIP:172.18.11.204,PodIP:172.17.0.6,StartTime:2019-06-24 01:58:00 -0400 EDT,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2019-06-24 01:58:16 -0400 EDT,} nil} {nil nil nil} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker-pullable://gcr.io/google_containers/serve_hostname@sha256:a49737ee84a3b94f0b977f32e60c5daf11f0b5636f1f7503a2981524f351c57a docker://dd473e30795dc01db5cc62edadf41c79682880cd2b6afe23effdc1094301900f}],QOSClass:BestEffort,InitContainerStatuses:[],},} STEP: checking for scheduler event about the pod Saw scheduler event for our pod. 
STEP: checking for kubelet event about the pod
Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] Events
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 01:58:24.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-2n137" for this suite.
Jun 24 01:58:34.599: INFO: namespace: e2e-tests-events-2n137, resource: bindings, ignored listing per whitelist
• [SLOW TEST:34.384 seconds]
[k8s.io] Events
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/events.go:129
------------------------------
[BeforeEach] [Top Level]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[image_ecosystem][Slow] openshift sample application repositories [BeforeEach]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/sample_repos.go:225
  [image_ecosystem][ruby] test ruby images with rails-ex db repo
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/sample_repos.go:110
    Building rails-postgresql app from new-app
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/sample_repos.go:92
      should build a rails-postgresql image and run it in a pod
      /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/sample_repos.go:91

      skipping tests not in the Origin conformance suite
      /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[k8s.io] Projected should be consumable in multiple volumes in the same pod [Conformance] [Volume]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:795
[BeforeEach] [Top Level]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Projected
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:58:31.791: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:58:31.893: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Projected
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:803
[It] should be consumable in multiple volumes in the same pod [Conformance] [Volume]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:795
STEP: Creating configMap with name projected-configmap-test-volume-18832881-9645-11e9-a527-0e9110352016
STEP: Creating a pod to test consume configMaps
Jun 24 01:58:31.986: INFO: Waiting up to 5m0s for pod pod-projected-configmaps-18846f84-9645-11e9-a527-0e9110352016 status to be success or failure
Jun 24 01:58:31.997: INFO: Waiting for pod pod-projected-configmaps-18846f84-9645-11e9-a527-0e9110352016 in namespace 'e2e-tests-projected-qt7wf' status to be 'success or failure'(found phase: "Pending", readiness: false) (11.246598ms elapsed)
Jun 24 01:58:34.003: INFO: Waiting for pod pod-projected-configmaps-18846f84-9645-11e9-a527-0e9110352016 in namespace 'e2e-tests-projected-qt7wf' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.016818273s elapsed)
STEP: Saw pod success
Jun 24 01:58:36.007: INFO: Trying to get logs from node 172.18.11.204 pod pod-projected-configmaps-18846f84-9645-11e9-a527-0e9110352016 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jun 24 01:58:36.124: INFO: Waiting for pod pod-projected-configmaps-18846f84-9645-11e9-a527-0e9110352016 to disappear
Jun 24 01:58:36.127: INFO: Pod pod-projected-configmaps-18846f84-9645-11e9-a527-0e9110352016 no longer exists
[AfterEach] [k8s.io] Projected
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 01:58:36.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-qt7wf" for this suite.
Jun 24 01:58:46.234: INFO: namespace: e2e-tests-projected-qt7wf, resource: bindings, ignored listing per whitelist
• [SLOW TEST:14.477 seconds]
[k8s.io] Projected
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  should be consumable in multiple volumes in the same pod [Conformance] [Volume]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:795
------------------------------
[BeforeEach] [Top Level]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Federated ReplicaSet [Feature:Federation] [BeforeEach]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  Features
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/replicaset.go:199
    Preferences
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/replicaset.go:198
      should create replicasets with weight preference
      /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/replicaset.go:160

      skipping tests not in the Origin conformance suite
      /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[Conformance][networking][router] weighted openshift router The HAProxy router should appropriately serve a route that points to two services
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/weighted.go:114
[BeforeEach] [Top Level]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Conformance][networking][router] weighted openshift router
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:58:08.261: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:58:08.280: INFO: configPath is now "/tmp/extended-test-weighted-router-fdcxm-gz935-user.kubeconfig"
Jun 24 01:58:08.280: INFO: The user is now "extended-test-weighted-router-fdcxm-gz935-user"
Jun 24 01:58:08.281: INFO: Creating project "extended-test-weighted-router-fdcxm-gz935"
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [Conformance][networking][router] weighted openshift router
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/weighted.go:38
Jun 24 01:58:08.399: INFO: Running 'oc adm --config=/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig --namespace=extended-test-weighted-router-fdcxm-gz935 policy add-cluster-role-to-user system:router extended-test-weighted-router-fdcxm-gz935-user'
cluster role "system:router" added: "extended-test-weighted-router-fdcxm-gz935-user"
Jun 24 01:58:08.779: INFO: Running 'oc new-app --config=/tmp/extended-test-weighted-router-fdcxm-gz935-user.kubeconfig --namespace=extended-test-weighted-router-fdcxm-gz935 -f /tmp/fixture-testdata-dir557889249/test/extended/testdata/weighted-router.yaml -p IMAGE=openshift/origin-haproxy-router'
--> Deploying template "extended-test-weighted-router-fdcxm-gz935/" for "/tmp/fixture-testdata-dir557889249/test/extended/testdata/weighted-router.yaml" to project extended-test-weighted-router-fdcxm-gz935

     * With parameters:
        * IMAGE=openshift/origin-haproxy-router

--> Creating resources ...
    pod "weighted-router" created
    rolebinding "system-router" created
    route "weightedroute" created
    route "zeroweightroute" created
    service "weightedendpoints1" created
    service "weightedendpoints2" created
    pod "endpoint-1" created
    pod "endpoint-2" created
--> Success
    Run 'oc status' to view your app.
[It] should appropriately serve a route that points to two services
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/weighted.go:114
Jun 24 01:58:09.431: INFO: Creating new exec pod
STEP: creating a weighted router from a config file "/tmp/fixture-testdata-dir557889249/test/extended/testdata/weighted-router.yaml"
STEP: waiting for the healthz endpoint to respond
Jun 24 01:58:20.648: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.18.11.204:8443 --kubeconfig=/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-weighted-router-fdcxm-gz935 execpod -- /bin/sh -c
set -e
for i in $(seq 1 180); do
  code=$( curl -k -s -o /dev/null -w '%{http_code}\n' --header 'Host: 172.17.0.7' "http://172.17.0.7:1936/healthz" ) || rc=$?
  if [[ "${rc:-0}" -eq 0 ]]; then
    echo $code
    if [[ $code -eq 200 ]]; then
      exit 0
    fi
    if [[ $code -ne 503 ]]; then
      exit 1
    fi
  else
    echo "error ${rc}" 1>&2
  fi
  sleep 1
done
'
Jun 24 01:58:22.354: INFO: stderr: ""
STEP: checking that 100 requests go through successfully
Jun 24 01:58:22.354: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.18.11.204:8443 --kubeconfig=/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-weighted-router-fdcxm-gz935 execpod -- /bin/sh -c
set -e
for i in $(seq 1 180); do
  code=$( curl -k -s -o /dev/null -w '%{http_code}\n' --header 'Host: weighted.example.com' "http://172.17.0.7" ) || rc=$?
  if [[ "${rc:-0}" -eq 0 ]]; then
    echo $code
    if [[ $code -eq 200 ]]; then
      exit 0
    fi
    if [[ $code -ne 503 ]]; then
      exit 1
    fi
  else
    echo "error ${rc}" 1>&2
  fi
  sleep 1
done
'
Jun 24 01:58:29.986: INFO: stderr: ""
Jun 24 01:58:29.986: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.18.11.204:8443 --kubeconfig=/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-weighted-router-fdcxm-gz935 execpod -- /bin/sh -c
set -e
for i in $(seq 1 100); do
  code=$( curl -s -o /dev/null -w '%{http_code}\n' --header 'Host: weighted.example.com' "http://172.17.0.7" ) || rc=$?
  if [[ "${rc:-0}" -eq 0 ]]; then
    echo $code
    if [[ $code -ne 200 ]]; then
      exit 1
    fi
  else
    echo "error ${rc}" 1>&2
  fi
done
'
Jun 24 01:58:31.350: INFO: stderr: ""
STEP: checking that there are two weighted backends in the router stats
Jun 24 01:58:31.350: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.18.11.204:8443 --kubeconfig=/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-weighted-router-fdcxm-gz935 execpod -- /bin/sh -c curl -s -u admin:password --header 'Host: weighted.example.com' "http://172.17.0.7:1936/;csv"'
Jun 24 01:58:31.898: INFO: stderr: ""
STEP: checking that zero weights are also respected by the router
Jun 24 01:58:31.898: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.18.11.204:8443 --kubeconfig=/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-weighted-router-fdcxm-gz935 execpod -- /bin/sh -c curl -s -o /dev/null -w '%{http_code}' --header 'Host: zeroweight.example.com' "http://172.17.0.7"'
Jun 24 01:58:32.411: INFO: stderr: ""
Jun 24 01:58:32.530: INFO: Weighted Router test [Conformance][networking][router] weighted openshift router The HAProxy router should appropriately serve a route that points to two services logs:
I0624 05:58:18.561780 1 merged_client_builder.go:123] Using in-cluster configuration
I0624 05:58:18.595703 1 reflector.go:187] Starting reflector *api.Service (10m0s) from github.com/openshift/origin/pkg/router/template/service_lookup.go:31
I0624 05:58:18.599101 1 reflector.go:236] Listing and watching *api.Service from github.com/openshift/origin/pkg/router/template/service_lookup.go:31
I0624 05:58:18.614512 1 router.go:156] Creating a new template router, writing to /var/lib/haproxy/router
I0624 05:58:18.614639 1 router.go:350] Template router will coalesce reloads within 5 seconds of each other
I0624 05:58:18.614669 1 router.go:400] Router default cert from router container
I0624 05:58:18.614676 1 router.go:214] Reading persisted state
I0624 05:58:18.619272 1 router.go:218] Committing state
I0624 05:58:18.619282 1 router.go:455] Writing the router state
I0624 05:58:18.626314 1 router.go:462] Writing the router config
I0624 05:58:18.631460 1 router.go:476] Reloading the router
E0624 05:58:18.661744 1 reflector.go:190] github.com/openshift/origin/pkg/router/template/service_lookup.go:31: Failed to list *api.Service: User "system:serviceaccount:extended-test-weighted-router-fdcxm-gz935:default" cannot list all services in the cluster
I0624 05:58:18.994260 1 router.go:554] Router reloaded:
 - Proxy protocol 'FALSE'. Checking HAProxy /healthz on port 1936 ...
 - HAProxy port 1936 health check ok : 0 retry attempt(s).
I0624 05:58:18.994327 1 router.go:237] Router is only using resources in namespace extended-test-weighted-router-fdcxm-gz935
I0624 05:58:18.994362 1 reflector.go:187] Starting reflector *api.Route (10m0s) from github.com/openshift/origin/pkg/router/controller/factory/factory.go:88
I0624 05:58:18.994402 1 reflector.go:187] Starting reflector *api.Endpoints (10m0s) from github.com/openshift/origin/pkg/router/controller/factory/factory.go:95
I0624 05:58:18.994543 1 reflector.go:236] Listing and watching *api.Route from github.com/openshift/origin/pkg/router/controller/factory/factory.go:88
I0624 05:58:18.996203 1 router_controller.go:70] Running router controller
I0624 05:58:18.996228 1 reaper.go:17] Launching reaper
I0624 05:58:18.996301 1 reflector.go:236] Listing and watching *api.Endpoints from github.com/openshift/origin/pkg/router/controller/factory/factory.go:95
I0624 05:58:19.001895 1 router_controller.go:305] Processing Route: extended-test-weighted-router-fdcxm-gz935/weightedroute -> weightedendpoints1
I0624 05:58:19.001910 1 router_controller.go:306] Alias: weighted.example.com
I0624 05:58:19.001915 1 router_controller.go:307] Path:
I0624 05:58:19.001920 1 router_controller.go:308] Event: MODIFIED
I0624 05:58:19.001961 1 router.go:132] host weighted.example.com admitted
I0624 05:58:19.004880 1 unique_host.go:195] Route extended-test-weighted-router-fdcxm-gz935/weightedroute claims weighted.example.com
I0624 05:58:19.004906 1 status.go:179] has last touch <nil> for extended-test-weighted-router-fdcxm-gz935/weightedroute
I0624 05:58:19.004938 1 status.go:269] admit: admitting route by updating status: weightedroute (true): weighted.example.com
I0624 05:58:19.013729 1 router.go:781] Adding route extended-test-weighted-router-fdcxm-gz935/weightedroute
I0624 05:58:19.013745 1 router.go:787] Creating new frontend for key: extended-test-weighted-router-fdcxm-gz935/weightedendpoints1
I0624 05:58:19.013755 1 router.go:787] Creating new frontend for key: extended-test-weighted-router-fdcxm-gz935/weightedendpoints2
I0624 05:58:19.013766 1 router_controller.go:296] Router sync in progress
I0624 05:58:19.013799 1 plugin.go:159] Processing 0 Endpoints for Name: weightedendpoints2 (MODIFIED)
I0624 05:58:19.013807 1 plugin.go:171] Modifying endpoints for extended-test-weighted-router-fdcxm-gz935/weightedendpoints2
I0624 05:58:19.013817 1 router.go:823] Ignoring change for extended-test-weighted-router-fdcxm-gz935/weightedendpoints2, endpoints are the same
I0624 05:58:19.015296 1 plugin.go:159] Processing 0 Endpoints for Name: weightedendpoints1 (MODIFIED)
I0624 05:58:19.015312 1 plugin.go:171] Modifying endpoints for extended-test-weighted-router-fdcxm-gz935/weightedendpoints1
I0624 05:58:19.015321 1 router.go:823] Ignoring change for extended-test-weighted-router-fdcxm-gz935/weightedendpoints1, endpoints are the same
I0624 05:58:19.015351 1 router_controller.go:305] Processing Route: extended-test-weighted-router-fdcxm-gz935/zeroweightroute -> weightedendpoints1
I0624 05:58:19.015358 1 router_controller.go:306] Alias: zeroweight.example.com
I0624 05:58:19.015362 1 router_controller.go:307] Path:
I0624
05:58:19.015366 1 router_controller.go:308] Event: MODIFIED I0624 05:58:19.015373 1 router.go:132] host zeroweight.example.com admitted I0624 05:58:19.015405 1 unique_host.go:195] Route extended-test-weighted-router-fdcxm-gz935/zeroweightroute claims zeroweight.example.com I0624 05:58:19.015414 1 status.go:179] has last touch <nil> for extended-test-weighted-router-fdcxm-gz935/zeroweightroute I0624 05:58:19.015429 1 status.go:269] admit: admitting route by updating status: zeroweightroute (true): zeroweight.example.com I0624 05:58:19.020700 1 router.go:781] Adding route extended-test-weighted-router-fdcxm-gz935/zeroweightroute I0624 05:58:19.020716 1 router_controller.go:298] Router sync complete I0624 05:58:19.020723 1 router.go:435] Router state synchronized for the first time I0624 05:58:19.020772 1 router.go:455] Writing the router state I0624 05:58:19.022257 1 router_controller.go:305] Processing Route: extended-test-weighted-router-fdcxm-gz935/weightedroute -> weightedendpoints1 I0624 05:58:19.022269 1 router_controller.go:306] Alias: weighted.example.com I0624 05:58:19.022274 1 router_controller.go:307] Path: I0624 05:58:19.022279 1 router_controller.go:308] Event: MODIFIED I0624 05:58:19.022286 1 router.go:132] host weighted.example.com admitted I0624 05:58:19.022312 1 status.go:245] admit: route already admitted I0624 05:58:19.024774 1 router.go:462] Writing the router config I0624 05:58:19.027797 1 router.go:476] Reloading the router I0624 05:58:19.028973 1 router_controller.go:305] Processing Route: extended-test-weighted-router-fdcxm-gz935/zeroweightroute -> weightedendpoints1 I0624 05:58:19.028998 1 router_controller.go:306] Alias: zeroweight.example.com I0624 05:58:19.029003 1 router_controller.go:307] Path: I0624 05:58:19.029008 1 router_controller.go:308] Event: MODIFIED I0624 05:58:19.029015 1 router.go:132] host zeroweight.example.com admitted I0624 05:58:19.029091 1 status.go:245] admit: route already admitted I0624 05:58:19.079400 1 
reaper.go:24] Signal received: child exited I0624 05:58:19.079444 1 reaper.go:32] Reaped process with pid 29 I0624 05:58:19.088282 1 router.go:554] Router reloaded: - Proxy protocol 'FALSE'. Checking HAProxy /healthz on port 1936 ... - HAProxy port 1936 health check ok : 0 retry attempt(s). I0624 05:58:19.088338 1 reaper.go:24] Signal received: child exited I0624 05:58:19.661954 1 reflector.go:236] Listing and watching *api.Service from github.com/openshift/origin/pkg/router/template/service_lookup.go:31 E0624 05:58:19.664912 1 reflector.go:190] github.com/openshift/origin/pkg/router/template/service_lookup.go:31: Failed to list *api.Service: User "system:serviceaccount:extended-test-weighted-router-fdcxm-gz935:default" cannot list all services in the cluster I0624 05:58:20.665242 1 reflector.go:236] Listing and watching *api.Service from github.com/openshift/origin/pkg/router/template/service_lookup.go:31 E0624 05:58:20.668576 1 reflector.go:190] github.com/openshift/origin/pkg/router/template/service_lookup.go:31: Failed to list *api.Service: User "system:serviceaccount:extended-test-weighted-router-fdcxm-gz935:default" cannot list all services in the cluster I0624 05:58:21.668774 1 reflector.go:236] Listing and watching *api.Service from github.com/openshift/origin/pkg/router/template/service_lookup.go:31 E0624 05:58:21.672397 1 reflector.go:190] github.com/openshift/origin/pkg/router/template/service_lookup.go:31: Failed to list *api.Service: User "system:serviceaccount:extended-test-weighted-router-fdcxm-gz935:default" cannot list all services in the cluster I0624 05:58:22.672629 1 reflector.go:236] Listing and watching *api.Service from github.com/openshift/origin/pkg/router/template/service_lookup.go:31 E0624 05:58:22.677823 1 reflector.go:190] github.com/openshift/origin/pkg/router/template/service_lookup.go:31: Failed to list *api.Service: User "system:serviceaccount:extended-test-weighted-router-fdcxm-gz935:default" cannot list all services in the cluster 
I0624 05:58:23.678018 1 reflector.go:236] Listing and watching *api.Service from github.com/openshift/origin/pkg/router/template/service_lookup.go:31 E0624 05:58:23.681247 1 reflector.go:190] github.com/openshift/origin/pkg/router/template/service_lookup.go:31: Failed to list *api.Service: User "system:serviceaccount:extended-test-weighted-router-fdcxm-gz935:default" cannot list all services in the cluster I0624 05:58:24.681475 1 reflector.go:236] Listing and watching *api.Service from github.com/openshift/origin/pkg/router/template/service_lookup.go:31 E0624 05:58:24.684890 1 reflector.go:190] github.com/openshift/origin/pkg/router/template/service_lookup.go:31: Failed to list *api.Service: User "system:serviceaccount:extended-test-weighted-router-fdcxm-gz935:default" cannot list all services in the cluster I0624 05:58:25.685035 1 reflector.go:236] Listing and watching *api.Service from github.com/openshift/origin/pkg/router/template/service_lookup.go:31 E0624 05:58:25.688505 1 reflector.go:190] github.com/openshift/origin/pkg/router/template/service_lookup.go:31: Failed to list *api.Service: User "system:serviceaccount:extended-test-weighted-router-fdcxm-gz935:default" cannot list all services in the cluster I0624 05:58:26.688740 1 reflector.go:236] Listing and watching *api.Service from github.com/openshift/origin/pkg/router/template/service_lookup.go:31 E0624 05:58:26.694555 1 reflector.go:190] github.com/openshift/origin/pkg/router/template/service_lookup.go:31: Failed to list *api.Service: User "system:serviceaccount:extended-test-weighted-router-fdcxm-gz935:default" cannot list all services in the cluster I0624 05:58:27.694762 1 reflector.go:236] Listing and watching *api.Service from github.com/openshift/origin/pkg/router/template/service_lookup.go:31 E0624 05:58:27.698263 1 reflector.go:190] github.com/openshift/origin/pkg/router/template/service_lookup.go:31: Failed to list *api.Service: User 
"system:serviceaccount:extended-test-weighted-router-fdcxm-gz935:default" cannot list all services in the cluster I0624 05:58:28.698465 1 reflector.go:236] Listing and watching *api.Service from github.com/openshift/origin/pkg/router/template/service_lookup.go:31 E0624 05:58:28.701877 1 reflector.go:190] github.com/openshift/origin/pkg/router/template/service_lookup.go:31: Failed to list *api.Service: User "system:serviceaccount:extended-test-weighted-router-fdcxm-gz935:default" cannot list all services in the cluster I0624 05:58:29.237096 1 plugin.go:159] Processing 1 Endpoints for Name: weightedendpoints2 (MODIFIED) I0624 05:58:29.237121 1 plugin.go:162] Subset 0 : api.EndpointSubset{Addresses:[]api.EndpointAddress{api.EndpointAddress{IP:"172.17.0.9", Hostname:"", NodeName:(*string)(0xc42112a1d0), TargetRef:(*api.ObjectReference)(0xc420270bd0)}}, NotReadyAddresses:[]api.EndpointAddress(nil), Ports:[]api.EndpointPort{api.EndpointPort{Name:"", Port:8080, Protocol:"TCP"}}} I0624 05:58:29.276772 1 plugin.go:171] Modifying endpoints for extended-test-weighted-router-fdcxm-gz935/weightedendpoints2 I0624 05:58:29.276843 1 router.go:455] Writing the router state I0624 05:58:29.277155 1 router.go:462] Writing the router config I0624 05:58:29.297274 1 router.go:476] Reloading the router I0624 05:58:29.328400 1 plugin.go:159] Processing 1 Endpoints for Name: weightedendpoints1 (MODIFIED) I0624 05:58:29.328428 1 plugin.go:162] Subset 0 : api.EndpointSubset{Addresses:[]api.EndpointAddress{api.EndpointAddress{IP:"172.17.0.8", Hostname:"", NodeName:(*string)(0xc42111c940), TargetRef:(*api.ObjectReference)(0xc420315b90)}}, NotReadyAddresses:[]api.EndpointAddress(nil), Ports:[]api.EndpointPort{api.EndpointPort{Name:"", Port:8080, Protocol:"TCP"}}} I0624 05:58:29.328461 1 plugin.go:171] Modifying endpoints for extended-test-weighted-router-fdcxm-gz935/weightedendpoints1 I0624 05:58:29.377001 1 reaper.go:24] Signal received: child exited I0624 05:58:29.377046 1 reaper.go:32] Reaped 
process with pid 53 I0624 05:58:29.405261 1 router.go:554] Router reloaded: - Proxy protocol 'FALSE'. Checking HAProxy /healthz on port 1936 ... - HAProxy port 1936 health check ok : 0 retry attempt(s). I0624 05:58:29.405353 1 reaper.go:24] Signal received: child exited I0624 05:58:29.408392 1 router.go:455] Writing the router state I0624 05:58:29.408725 1 router.go:462] Writing the router config I0624 05:58:29.418670 1 router.go:476] Reloading the router I0624 05:58:29.476034 1 reaper.go:24] Signal received: child exited I0624 05:58:29.476081 1 reaper.go:32] Reaped process with pid 77 I0624 05:58:29.514504 1 router.go:554] Router reloaded: - Proxy protocol 'FALSE'. Checking HAProxy /healthz on port 1936 ... - HAProxy port 1936 health check ok : 0 retry attempt(s). I0624 05:58:29.514653 1 reaper.go:24] Signal received: child exited I0624 05:58:29.705281 1 reflector.go:236] Listing and watching *api.Service from github.com/openshift/origin/pkg/router/template/service_lookup.go:31 E0624 05:58:29.739818 1 reflector.go:190] github.com/openshift/origin/pkg/router/template/service_lookup.go:31: Failed to list *api.Service: User "system:serviceaccount:extended-test-weighted-router-fdcxm-gz935:default" cannot list all services in the cluster I0624 05:58:30.742120 1 reflector.go:236] Listing and watching *api.Service from github.com/openshift/origin/pkg/router/template/service_lookup.go:31 E0624 05:58:30.771506 1 reflector.go:190] github.com/openshift/origin/pkg/router/template/service_lookup.go:31: Failed to list *api.Service: User "system:serviceaccount:extended-test-weighted-router-fdcxm-gz935:default" cannot list all services in the cluster I0624 05:58:31.771716 1 reflector.go:236] Listing and watching *api.Service from github.com/openshift/origin/pkg/router/template/service_lookup.go:31 E0624 05:58:31.776909 1 reflector.go:190] github.com/openshift/origin/pkg/router/template/service_lookup.go:31: Failed to list *api.Service: User 
"system:serviceaccount:extended-test-weighted-router-fdcxm-gz935:default" cannot list all services in the cluster [AfterEach] [Conformance][networking][router] weighted openshift router /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121 Jun 24 01:58:32.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-weighted-router-fdcxm-gz935" for this suite. Jun 24 01:58:57.577: INFO: namespace: extended-test-weighted-router-fdcxm-gz935, resource: bindings, ignored listing per whitelist • [SLOW TEST:49.403 seconds] [Conformance][networking][router] weighted openshift router /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/weighted.go:116 The HAProxy router /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/weighted.go:115 should appropriately serve a route that points to two services /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/weighted.go:114 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [bldcompat][Slow][Compatibility] build controller [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/controller_compat.go:52 RunBuildConfigChangeControllerTest 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/controller_compat.go:51 should succeed /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/controller_compat.go:50 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ deploymentconfigs with multiple image change triggers [Conformance] should run a successful deployment with multiple triggers /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:439 [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] deploymentconfigs /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120 STEP: Creating a kubernetes client Jun 24 01:57:55.620: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Jun 24 01:57:55.641: INFO: configPath is now "/tmp/extended-test-cli-deployment-p9x1g-j0g3g-user.kubeconfig" Jun 24 01:57:55.641: INFO: The user is now "extended-test-cli-deployment-p9x1g-j0g3g-user" Jun 24 01:57:55.641: INFO: Creating project "extended-test-cli-deployment-p9x1g-j0g3g" STEP: Waiting for a default service account to be provisioned in namespace [It] should run a successful deployment with multiple triggers 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:439 Jun 24 01:57:55.735: INFO: Running 'oc create --config=/tmp/extended-test-cli-deployment-p9x1g-j0g3g-user.kubeconfig --namespace=extended-test-cli-deployment-p9x1g-j0g3g -f /tmp/fixture-testdata-dir061046886/test/extended/testdata/deployments/deployment-example.yaml -o name' STEP: verifying the deployment is marked complete Jun 24 01:58:31.373: INFO: Latest rollout of dc/example (rc/example-1) is complete. [AfterEach] with multiple image change triggers [Conformance] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:431 [AfterEach] deploymentconfigs /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121 Jun 24 01:58:31.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-cli-deployment-p9x1g-j0g3g" for this suite. 
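The router tests earlier in this log repeatedly pass a one-line `/bin/sh -c` retry loop to `kubectl exec` (visible in the `Running '…kubectl … exec …'` entries above). A readable sketch of that loop, reconstructed from the log output — the `probe` helper, the environment-variable defaults, and the host/IP values are illustrative stand-ins, not part of the actual test source:

```shell
#!/bin/sh
# probe: one curl attempt against the router, printing only the HTTP status.
# Host header and target URL are example values taken from the log above.
probe() {
  curl -k -s -o /dev/null -w '%{http_code}\n' \
    --header "Host: ${ROUTE_HOST:-weighted.example.com}" \
    "${ROUTE_URL:-http://172.17.0.7}"
}

# wait_for_route: poll up to 180 times, once per second.
#   200 -> route is serving, success
#   503 -> router is up but the backend is not ready yet, keep retrying
#   any other status -> unexpected, fail fast
#   curl error -> log to stderr and retry
wait_for_route() {
  for i in $(seq 1 180); do
    rc=0
    code=$(probe) || rc=$?
    if [ "$rc" -eq 0 ]; then
      echo "$code"
      if [ "$code" -eq 200 ]; then return 0; fi
      if [ "$code" -ne 503 ]; then return 1; fi
    else
      echo "error $rc" 1>&2
    fi
    sleep 1
  done
  return 1  # timed out without ever seeing a 200
}

# Example usage (not run here):
# wait_for_route && echo "route ready"
```

The 503-is-retryable distinction matters: HAProxy returns 503 while it knows the route but has no ready endpoints, so the test treats it as "not yet" rather than a failure, while any other non-200 status aborts immediately.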
Jun 24 01:59:16.454: INFO: namespace: extended-test-cli-deployment-p9x1g-j0g3g, resource: bindings, ignored listing per whitelist • [SLOW TEST:80.913 seconds] deploymentconfigs /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:979 with multiple image change triggers [Conformance] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:448 should run a successful deployment with multiple triggers /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:439 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [image_ecosystem][perl][Slow] hot deploy for openshift perl image [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/s2i_perl.go:92 Dancer example /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/s2i_perl.go:91 should work with hot deploy /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/s2i_perl.go:90 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [k8s.io] CronJob [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 should schedule multiple jobs concurrently /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/cronjob.go:84 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ deploymentconfigs when tagging images [Conformance] should successfully tag the deployed image /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:401 [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] deploymentconfigs /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120 STEP: Creating a kubernetes client Jun 24 01:58:57.669: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Jun 24 01:58:57.691: INFO: configPath is now "/tmp/extended-test-cli-deployment-k08fq-95m64-user.kubeconfig" Jun 24 01:58:57.691: INFO: The user is now "extended-test-cli-deployment-k08fq-95m64-user" Jun 24 01:58:57.691: INFO: Creating project 
"extended-test-cli-deployment-k08fq-95m64" STEP: Waiting for a default service account to be provisioned in namespace [It] should successfully tag the deployed image /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:401 Jun 24 01:58:57.821: INFO: Running 'oc create --config=/tmp/extended-test-cli-deployment-k08fq-95m64-user.kubeconfig --namespace=extended-test-cli-deployment-k08fq-95m64 -f /tmp/fixture-testdata-dir557889249/test/extended/testdata/deployments/tag-images-deployment.yaml -o name' STEP: verifying the deployment is marked complete Jun 24 01:59:10.105: INFO: Latest rollout of dc/tag-images (rc/tag-images-1) is complete. STEP: verifying the post deployment action happened: tag is set [AfterEach] when tagging images [Conformance] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:371 [AfterEach] deploymentconfigs /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121 Jun 24 01:59:10.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-cli-deployment-k08fq-95m64" for this suite. 
Jun 24 01:59:20.187: INFO: namespace: extended-test-cli-deployment-k08fq-95m64, resource: bindings, ignored listing per whitelist • [SLOW TEST:22.657 seconds] deploymentconfigs /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:979 when tagging images [Conformance] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:402 should successfully tag the deployed image /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:401 ------------------------------ [Conformance][networking][router] The HAProxy router should support reencrypt to services backed by a serving certificate automatically /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/reencrypt.go:64 [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [Conformance][networking][router] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120 STEP: Creating a kubernetes client Jun 24 01:58:34.662: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Jun 24 01:58:34.681: INFO: configPath is now "/tmp/extended-test-router-reencrypt-jjmnl-04mwn-user.kubeconfig" Jun 24 01:58:34.681: INFO: The user is now "extended-test-router-reencrypt-jjmnl-04mwn-user" Jun 24 01:58:34.681: INFO: Creating project "extended-test-router-reencrypt-jjmnl-04mwn" 
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [Conformance][networking][router] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/reencrypt.go:35 [It] should support reencrypt to services backed by a serving certificate automatically /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/reencrypt.go:64 Jun 24 01:58:34.808: INFO: Creating new exec pod STEP: deploying a service using a reencrypt route without a destinationCACertificate Jun 24 01:58:38.822: INFO: Running 'oc create --config=/tmp/extended-test-router-reencrypt-jjmnl-04mwn-user.kubeconfig --namespace=extended-test-router-reencrypt-jjmnl-04mwn -f /tmp/fixture-testdata-dir824123364/test/extended/testdata/reencrypt-serving-cert.yaml' pod "serving-cert" created configmap "serving-cert" created service "serving-cert" created route "serving-cert" created Jun 24 01:58:40.750: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.18.11.204:8443 --kubeconfig=/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-router-reencrypt-jjmnl-04mwn execpod -- /bin/sh -c set -e for i in $(seq 1 180); do code=$( curl -k -s -o /dev/null -w '%{http_code}\n' --header 'Host: serving-cert-extended-test-router-reencrypt-jjmnl-04mwn.router.default.svc.cluster.local' "https://172.30.36.32" ) || rc=$? 
if [[ "${rc:-0}" -eq 0 ]]; then echo $code if [[ $code -eq 200 ]]; then exit 0 fi if [[ $code -ne 503 ]]; then exit 1 fi else echo "error ${rc}" 1>&2 fi sleep 1 done ' Jun 24 01:58:56.732: INFO: stderr: "" [AfterEach] [Conformance][networking][router] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121 Jun 24 01:58:56.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-router-reencrypt-jjmnl-04mwn" for this suite. Jun 24 01:59:21.880: INFO: namespace: extended-test-router-reencrypt-jjmnl-04mwn, resource: bindings, ignored listing per whitelist • [SLOW TEST:47.299 seconds] [Conformance][networking][router] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/reencrypt.go:66 The HAProxy router /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/reencrypt.go:65 should support reencrypt to services backed by a serving certificate automatically /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/reencrypt.go:64 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [builds][Slow] update failure status [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/failure_status.go:195 Build status fetch builder image failure 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/failure_status.go:118 should contain the fetch builder image failure reason and message /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/failure_status.go:117 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [cli][Slow] oc debug [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/cli/debug.go:50 should print the imagestream-based container entrypoint/command /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/cli/debug.go:31 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [k8s.io] Networking [BeforeEach] 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 [k8s.io] Granular Checks: Services [Slow] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 should update nodePort: udp [Slow] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/networking.go:187 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [image_ecosystem][jenkins][Slow] openshift pipeline plugin [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/jenkins_plugin.go:664 jenkins-plugin test context /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/jenkins_plugin.go:663 jenkins-plugin test create obj delete obj /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/jenkins_plugin.go:310 skipping tests not in the Origin conformance suite 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[Feature:ImageQuota] Image resource quota [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/imageapis/quota_admission.go:119
should deny a push of built image exceeding openshift.io/imagestreams quota
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/imageapis/quota_admission.go:118
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Network Partition [Disruptive] [Slow] [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
[k8s.io] Pods
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should be evicted from unready Node
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/network_partition.go:658
[Feature:TaintEviction] All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be evicted after eviction timeout passes
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/network_partition.go:657
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[cli][Slow] oc debug [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/cli/debug.go:50
should print the overridden imagestream-based container entrypoint/command
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/cli/debug.go:37
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] PersistentVolumes [Volume][Serial] [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
[k8s.io] PersistentVolumes:NFS[Flaky]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
with Single PV - PVC pairs
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/persistent_volumes.go:190
create a PV and a pre-bound PVC: test write access
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/persistent_volumes.go:189
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[builds][Slow] build can have Dockerfile input [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/dockerfile.go:116
being created from new-build
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/dockerfile.go:115
should be able to start a build from Dockerfile with FROM reference to scratch
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/dockerfile.go:114
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] ConfigMap [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should be consumable from pods in volume with mappings as non-root with FSGroup [Feature:FSGroup] [Volume]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap.go:74
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Daemon set [Serial] [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should retry creating failed daemon pods
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/daemon_set.go:248
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[builds][Slow] builds with a context directory [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/contextdir.go:107
docker context directory build
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/contextdir.go:106
should docker build an application using a context directory
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/contextdir.go:105
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[builds][Slow] using build configuration runPolicy [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/run_policy.go:392
build configuration with Parallel build run policy
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/run_policy.go:110
runs the builds in parallel
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/run_policy.go:109
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Upgrade [Feature:Upgrade] [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
[k8s.io] Federated clusters upgrade
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should maintain a functioning federation [Feature:FederatedClustersUpgrade]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/upgrade.go:64
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Load capacity [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
[Feature:ManualPerformance] should be able to handle 30 pods per node {extensions Deployment} with 0 secrets and 0 daemons
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/load.go:265
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Deployment [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
deployment should support rollover
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/deployment.go:80
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] HostPath [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should support r/w [Volume]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/host_path.go:82
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/proxy.go:66
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] version v1
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:59:16.542: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:59:16.627: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/proxy.go:66
Jun 24 01:59:16.915: INFO: (0) /api/v1/nodes/172.18.11.204:10250/proxy/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 23.912334ms)
Jun 24 01:59:16.932: INFO: (1) /api/v1/nodes/172.18.11.204:10250/proxy/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 16.959359ms)
Jun 24 01:59:16.943: INFO: (2) /api/v1/nodes/172.18.11.204:10250/proxy/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 11.015954ms)
Jun 24 01:59:16.960: INFO: (3) /api/v1/nodes/172.18.11.204:10250/proxy/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 16.887517ms)
Jun 24 01:59:17.011: INFO: (4) /api/v1/nodes/172.18.11.204:10250/proxy/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 51.117166ms)
Jun 24 01:59:17.016: INFO: (5) /api/v1/nodes/172.18.11.204:10250/proxy/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 4.87502ms)
Jun 24 01:59:17.019: INFO: (6) /api/v1/nodes/172.18.11.204:10250/proxy/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.799585ms)
Jun 24 01:59:17.024: INFO: (7) /api/v1/nodes/172.18.11.204:10250/proxy/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 5.087131ms)
Jun 24 01:59:17.034: INFO: (8) /api/v1/nodes/172.18.11.204:10250/proxy/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 10.017133ms)
Jun 24 01:59:17.037: INFO: (9) /api/v1/nodes/172.18.11.204:10250/proxy/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.697301ms)
Jun 24 01:59:17.046: INFO: (10) /api/v1/nodes/172.18.11.204:10250/proxy/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 9.495049ms)
Jun 24 01:59:17.049: INFO: (11) /api/v1/nodes/172.18.11.204:10250/proxy/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.585704ms)
Jun 24 01:59:17.051: INFO: (12) /api/v1/nodes/172.18.11.204:10250/proxy/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.511492ms)
Jun 24 01:59:17.055: INFO: (13) /api/v1/nodes/172.18.11.204:10250/proxy/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 3.620199ms)
Jun 24 01:59:17.058: INFO: (14) /api/v1/nodes/172.18.11.204:10250/proxy/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 3.160935ms)
Jun 24 01:59:17.061: INFO: (15) /api/v1/nodes/172.18.11.204:10250/proxy/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 3.182605ms)
Jun 24 01:59:17.064: INFO: (16) /api/v1/nodes/172.18.11.204:10250/proxy/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.395509ms)
Jun 24 01:59:17.067: INFO: (17) /api/v1/nodes/172.18.11.204:10250/proxy/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 3.165196ms)
Jun 24 01:59:17.070: INFO: (18) /api/v1/nodes/172.18.11.204:10250/proxy/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.683619ms)
Jun 24 01:59:17.074: INFO: (19) /api/v1/nodes/172.18.11.204:10250/proxy/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 4.224495ms)
[AfterEach] version v1
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 01:59:17.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-1hchh" for this suite.
Jun 24 01:59:27.196: INFO: namespace: e2e-tests-proxy-1hchh, resource: bindings, ignored listing per whitelist
• [SLOW TEST:10.674 seconds]
[k8s.io] Proxy
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
version v1
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/proxy.go:275
should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/proxy.go:66
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] GKE local SSD [Feature:GKELocalSSD] [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should write and read from node local SSD [Feature:GKELocalSSD]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/gke_local_ssd.go:44
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Pod Disks [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should schedule a pod w/ a RW PD shared between multiple containers, write to PD, delete pod, verify contents, and repeat in rapid succession [Slow] [Volume]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/pd.go:351
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[builds][Slow] build can have Dockerfile input [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/dockerfile.go:116
being created from new-build
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/dockerfile.go:115
should create a image via new-build and infer the origin tag
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/dockerfile.go:94
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Opaque resources [Feature:OpaqueResources] [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should not schedule pods that exceed the available amount of opaque integer resource.
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduling/opaque_resource.go:128
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] [Feature:Federation] [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
Federation API server authentication [NoCluster]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/authn.go:101
should accept cluster resources when the client has HTTP Basic authentication credentials
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/authn.go:59
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[templates] templateservicebroker end-to-end test [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateservicebroker_e2e.go:244
should pass an end-to-end test
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/templates/templateservicebroker_e2e.go:243
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[k8s.io] Services should prevent NodePort collisions
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/service.go:878
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Services
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:59:20.329: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:59:20.423: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Services
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/service.go:52
[It] should prevent NodePort collisions
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/service.go:878
STEP: creating service nodeport-collision-1 with type NodePort in namespace e2e-tests-services-th9qh
STEP: creating service nodeport-collision-2 with conflicting NodePort
STEP: deleting service nodeport-collision-1 to release NodePort
STEP: creating service nodeport-collision-2 with no-longer-conflicting NodePort
STEP: deleting service nodeport-collision-2 in namespace e2e-tests-services-th9qh
[AfterEach] [k8s.io] Services
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 01:59:20.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-th9qh" for this suite.
Jun 24 01:59:30.748: INFO: namespace: e2e-tests-services-th9qh, resource: bindings, ignored listing per whitelist
[AfterEach] [k8s.io] Services
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/service.go:64
• [SLOW TEST:10.447 seconds]
[k8s.io] Services
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should prevent NodePort collisions
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/service.go:878
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[builds][Slow] starting a build using CLI [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/start.go:341
cancel a build started by oc start-build --wait
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/start.go:290
should start a build and wait for the build to cancel
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/start.go:288
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[builds][Slow] the s2i build should support proxies [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/proxy.go:81
start build with broken proxy and a no_proxy override
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/proxy.go:79
should start an s2i build and wait for the build to succeed
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/proxy.go:64
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [k8s.io] PersistentVolumes [Volume][Serial] [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 [k8s.io] PersistentVolumes:NFS[Flaky] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 with Single PV - PVC pairs /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/persistent_volumes.go:190 create a PVC and non-pre-bound PV: test write access /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/persistent_volumes.go:173 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ deploymentconfigs ignores deployer and lets the config with a NewReplicationControllerCreated reason [Conformance] should let the deployment config with a NewReplicationControllerCreated reason /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:977 [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 
[BeforeEach] deploymentconfigs /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120 STEP: Creating a kubernetes client Jun 24 01:59:27.228: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Jun 24 01:59:27.247: INFO: configPath is now "/tmp/extended-test-cli-deployment-p9x1g-l1pdm-user.kubeconfig" Jun 24 01:59:27.247: INFO: The user is now "extended-test-cli-deployment-p9x1g-l1pdm-user" Jun 24 01:59:27.247: INFO: Creating project "extended-test-cli-deployment-p9x1g-l1pdm" STEP: Waiting for a default service account to be provisioned in namespace [It] should let the deployment config with a NewReplicationControllerCreated reason /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:977 Jun 24 01:59:27.376: INFO: Running 'oc create --config=/tmp/extended-test-cli-deployment-p9x1g-l1pdm-user.kubeconfig --namespace=extended-test-cli-deployment-p9x1g-l1pdm -f /tmp/fixture-testdata-dir061046886/test/extended/testdata/deployments/deployment-ignores-deployer.yaml -o name' STEP: verifying that the deployment config is bumped to the first version STEP: verifying that the deployment config has the desired condition and reason [AfterEach] ignores deployer and lets the config with a NewReplicationControllerCreated reason [Conformance] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:943 [AfterEach] deploymentconfigs /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121 Jun 24 01:59:27.698: INFO: Waiting up 
to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-cli-deployment-p9x1g-l1pdm" for this suite. Jun 24 01:59:37.810: INFO: namespace: extended-test-cli-deployment-p9x1g-l1pdm, resource: bindings, ignored listing per whitelist • [SLOW TEST:10.598 seconds] deploymentconfigs /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:979 ignores deployer and lets the config with a NewReplicationControllerCreated reason [Conformance] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:978 should let the deployment config with a NewReplicationControllerCreated reason /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:977 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [k8s.io] Loadbalancing: L7 [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 [k8s.io] GCE [Slow] [Feature:Ingress] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 should conform to Ingress spec /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/ingress.go:104 skipping tests not in the Origin conformance suite 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [bldcompat][Slow][Compatibility] build controller [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/controller_compat.go:52 RunImageChangeTriggerTest [SkipPrevControllers] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/controller_compat.go:36 should succeed /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/controller_compat.go:35 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [k8s.io] ConfigMap should be consumable via environment variable [Conformance] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap.go:382 [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [k8s.io] ConfigMap /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120 STEP: Creating 
a kubernetes client Jun 24 01:59:30.786: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Jun 24 01:59:30.864: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [Conformance] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap.go:382 STEP: Creating configMap e2e-tests-configmap-hh5v5/configmap-test-3bab12ab-9645-11e9-afd4-0e9110352016 STEP: Creating a pod to test consume configMaps Jun 24 01:59:30.961: INFO: Waiting up to 5m0s for pod pod-configmaps-3babe014-9645-11e9-afd4-0e9110352016 status to be success or failure Jun 24 01:59:30.963: INFO: Waiting for pod pod-configmaps-3babe014-9645-11e9-afd4-0e9110352016 in namespace 'e2e-tests-configmap-hh5v5' status to be 'success or failure'(found phase: "Pending", readiness: false) (1.879076ms elapsed) Jun 24 01:59:32.985: INFO: Waiting for pod pod-configmaps-3babe014-9645-11e9-afd4-0e9110352016 in namespace 'e2e-tests-configmap-hh5v5' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.024010126s elapsed) STEP: Saw pod success Jun 24 01:59:34.997: INFO: Trying to get logs from node 172.18.11.204 pod pod-configmaps-3babe014-9645-11e9-afd4-0e9110352016 container env-test: <nil> STEP: delete the pod Jun 24 01:59:35.048: INFO: Waiting for pod pod-configmaps-3babe014-9645-11e9-afd4-0e9110352016 to disappear Jun 24 01:59:35.052: INFO: Pod pod-configmaps-3babe014-9645-11e9-afd4-0e9110352016 no longer exists [AfterEach] [k8s.io] ConfigMap /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121 Jun 24 01:59:35.052: INFO: Waiting 
up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-hh5v5" for this suite. Jun 24 01:59:45.307: INFO: namespace: e2e-tests-configmap-hh5v5, resource: bindings, ignored listing per whitelist • [SLOW TEST:14.552 seconds] [k8s.io] ConfigMap /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 should be consumable via environment variable [Conformance] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap.go:382 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [k8s.io] Generated release_1_5 clientset [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/generated_clientset.go:213 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] 
[image_ecosystem][jenkins][Slow] openshift pipeline plugin [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/jenkins_plugin.go:664 jenkins-plugin test context /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/jenkins_plugin.go:663 jenkins-plugin test imagestream SCM DSL /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/jenkins_plugin.go:632 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [k8s.io] Density [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 [Feature:ManualPerformance] should allow starting 100 pods per node using { ReplicationController} with 0 secrets and 0 daemons /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/density.go:743 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [k8s.io] Secrets should be consumable from pods in env vars [Conformance] 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/secrets.go:385 [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [k8s.io] Secrets /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120 STEP: Creating a kubernetes client Jun 24 01:59:45.350: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Jun 24 01:59:45.482: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [Conformance] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/secrets.go:385 STEP: Creating secret with name secret-test-44603f88-9645-11e9-afd4-0e9110352016 STEP: Creating a pod to test consume secrets Jun 24 01:59:45.575: INFO: Waiting up to 5m0s for pod pod-secrets-446136fa-9645-11e9-afd4-0e9110352016 status to be success or failure Jun 24 01:59:45.592: INFO: Waiting for pod pod-secrets-446136fa-9645-11e9-afd4-0e9110352016 in namespace 'e2e-tests-secrets-4gvc9' status to be 'success or failure'(found phase: "Pending", readiness: false) (17.030706ms elapsed) Jun 24 01:59:47.595: INFO: Waiting for pod pod-secrets-446136fa-9645-11e9-afd4-0e9110352016 in namespace 'e2e-tests-secrets-4gvc9' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.020281316s elapsed) Jun 24 01:59:49.601: INFO: Waiting for pod pod-secrets-446136fa-9645-11e9-afd4-0e9110352016 in 
namespace 'e2e-tests-secrets-4gvc9' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.026003592s elapsed) Jun 24 01:59:51.603: INFO: Waiting for pod pod-secrets-446136fa-9645-11e9-afd4-0e9110352016 in namespace 'e2e-tests-secrets-4gvc9' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.028218842s elapsed) Jun 24 01:59:53.606: INFO: Waiting for pod pod-secrets-446136fa-9645-11e9-afd4-0e9110352016 in namespace 'e2e-tests-secrets-4gvc9' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.030871311s elapsed) Jun 24 01:59:55.608: INFO: Waiting for pod pod-secrets-446136fa-9645-11e9-afd4-0e9110352016 in namespace 'e2e-tests-secrets-4gvc9' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.033436017s elapsed) Jun 24 01:59:57.611: INFO: Waiting for pod pod-secrets-446136fa-9645-11e9-afd4-0e9110352016 in namespace 'e2e-tests-secrets-4gvc9' status to be 'success or failure'(found phase: "Pending", readiness: false) (12.036148693s elapsed) STEP: Saw pod success Jun 24 01:59:59.615: INFO: Trying to get logs from node 172.18.11.204 pod pod-secrets-446136fa-9645-11e9-afd4-0e9110352016 container secret-env-test: <nil> STEP: delete the pod Jun 24 01:59:59.631: INFO: Waiting for pod pod-secrets-446136fa-9645-11e9-afd4-0e9110352016 to disappear Jun 24 01:59:59.634: INFO: Pod pod-secrets-446136fa-9645-11e9-afd4-0e9110352016 no longer exists [AfterEach] [k8s.io] Secrets /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121 Jun 24 01:59:59.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-4gvc9" for this suite. 
Jun 24 02:00:09.742: INFO: namespace: e2e-tests-secrets-4gvc9, resource: bindings, ignored listing per whitelist • [SLOW TEST:24.409 seconds] [k8s.io] Secrets /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 should be consumable from pods in env vars [Conformance] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/secrets.go:385 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [k8s.io] StatefulSet [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 [k8s.io] Deploy clustered applications [Feature:StatefulSet] [Slow] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 should creating a working redis cluster /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/statefulset.go:542 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in 
Spec Setup (BeforeEach) [0.001 seconds] [k8s.io] Volume Placement [Feature:Volume] [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 [k8s.io] provision pod on node with matching labels /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 should create and delete pod with the same volume source on the same worker node /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/vsphere_volume_placement.go:133 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [Conformance] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/container_probe.go:235 [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [k8s.io] Probing container /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120 STEP: Creating a kubernetes client Jun 24 01:56:05.240: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Jun 24 01:56:05.314: INFO: About to run a Kube e2e test, ensuring 
namespace is privileged STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a /healthz http liveness probe [Conformance] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/container_probe.go:235 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-jk6bg Jun 24 01:56:15.398: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-jk6bg STEP: checking the pod's current state and verifying that restartCount is present Jun 24 01:56:15.400: INFO: Initial restart count of pod liveness-http is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121 Jun 24 02:00:16.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-jk6bg" for this suite. 
Jun 24 02:00:26.508: INFO: namespace: e2e-tests-container-probe-jk6bg, resource: bindings, ignored listing per whitelist • [SLOW TEST:261.272 seconds] [k8s.io] Probing container /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 should *not* be restarted with a /healthz http liveness probe [Conformance] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/container_probe.go:235 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [k8s.io] Cluster size autoscaling [Slow] [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 should scale up correct target pool [Feature:ClusterSizeAutoscalingScaleUp] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/cluster_size_autoscaling.go:292 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [k8s.io] Secrets should be consumable from pods in volume with mappings and Item Mode set [Conformance] [Volume] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/secrets.go:60 [BeforeEach] [Top Level] 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [k8s.io] Secrets /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120 STEP: Creating a kubernetes client Jun 24 02:00:09.767: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Jun 24 02:00:10.073: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [Conformance] [Volume] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/secrets.go:60 STEP: Creating secret with name secret-test-map-531cf3ec-9645-11e9-afd4-0e9110352016 STEP: Creating a pod to test consume secrets Jun 24 02:00:10.293: INFO: Waiting up to 5m0s for pod pod-secrets-531d6b3f-9645-11e9-afd4-0e9110352016 status to be success or failure Jun 24 02:00:10.301: INFO: Waiting for pod pod-secrets-531d6b3f-9645-11e9-afd4-0e9110352016 in namespace 'e2e-tests-secrets-5f04p' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.340002ms elapsed) Jun 24 02:00:12.308: INFO: Waiting for pod pod-secrets-531d6b3f-9645-11e9-afd4-0e9110352016 in namespace 'e2e-tests-secrets-5f04p' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.015101698s elapsed) Jun 24 02:00:14.311: INFO: Waiting for pod pod-secrets-531d6b3f-9645-11e9-afd4-0e9110352016 in namespace 'e2e-tests-secrets-5f04p' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.018299741s elapsed) Jun 24 02:00:16.314: INFO: Waiting 
for pod pod-secrets-531d6b3f-9645-11e9-afd4-0e9110352016 in namespace 'e2e-tests-secrets-5f04p' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.020907803s elapsed) STEP: Saw pod success Jun 24 02:00:18.329: INFO: Trying to get logs from node 172.18.11.204 pod pod-secrets-531d6b3f-9645-11e9-afd4-0e9110352016 container secret-volume-test: <nil> STEP: delete the pod Jun 24 02:00:18.380: INFO: Waiting for pod pod-secrets-531d6b3f-9645-11e9-afd4-0e9110352016 to disappear Jun 24 02:00:18.384: INFO: Pod pod-secrets-531d6b3f-9645-11e9-afd4-0e9110352016 no longer exists [AfterEach] [k8s.io] Secrets /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121 Jun 24 02:00:18.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-5f04p" for this suite. Jun 24 02:00:28.569: INFO: namespace: e2e-tests-secrets-5f04p, resource: bindings, ignored listing per whitelist • [SLOW TEST:18.803 seconds] [k8s.io] Secrets /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 should be consumable from pods in volume with mappings and Item Mode set [Conformance] [Volume] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/secrets.go:60 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [image_ecosystem][mysql][Slow] openshift mysql replication [BeforeEach] 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/mysql_replica.go:202 MySQL replication template for 5.7: https://raw.githubusercontent.com/sclorg/mysql-container/master/5.7/examples/replica/mysql_replica.json /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/mysql_replica.go:200 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [k8s.io] SchedulerPredicates [Serial] [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 validates that taints-tolerations is respected if not matching /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:740 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [k8s.io] Pod Disks [BeforeEach] 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 should be able to detach from a node whose api object was deleted [Slow] [Disruptive] [Volume] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/pd.go:518 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [Feature:ImageQuota] Image limit range [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/imageapis/limitrange_admission.go:225 should deny an import of a repository exceeding limit on openshift.io/image-tags resource /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/imageapis/limitrange_admission.go:224 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [k8s.io] PodPreset [BeforeEach] 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  should create a pod preset
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/podpreset.go:151
  skipping tests not in the Origin conformance suite
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[builds][Slow] build can have Docker image source [BeforeEach]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/image_source.go:81
  build with image docker
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/image_source.go:79
  should complete successfully and contain the expected file
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/image_source.go:77
  skipping tests not in the Origin conformance suite
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/container_probe.go:122
[BeforeEach] [Top Level]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Probing container
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:59:22.015: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:59:22.242: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [Conformance]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/container_probe.go:122
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-th1qz
Jun 24 01:59:28.550: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-th1qz
STEP: checking the pod's current state and verifying that restartCount is present
Jun 24 01:59:28.553: INFO: Initial restart count of pod liveness-exec is 0
Jun 24 02:00:18.802: INFO: Restart count of pod e2e-tests-container-probe-th1qz/liveness-exec is now 1 (50.249003225s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 02:00:18.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-th1qz" for this suite.
Jun 24 02:00:28.999: INFO: namespace: e2e-tests-container-probe-th1qz, resource: bindings, ignored listing per whitelist
• [SLOW TEST:67.048 seconds]
[k8s.io] Probing container
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  should be restarted with a exec "cat /tmp/health" liveness probe [Conformance]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/container_probe.go:122
------------------------------
[BeforeEach] [Top Level]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Volume Placement [Feature:Volume] [BeforeEach]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  [k8s.io] provision pod on node with matching labels
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  should create and delete pod with the same volume source attach/detach to different worker nodes
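Editor's note: the Probing-container spec above exercises the standard exec liveness-probe pattern. As a hedged sketch (this manifest is illustrative, not the e2e fixture itself; names, image, and timings are assumptions), a pod of roughly this shape reproduces what the log records: the probe runs `cat /tmp/health`, the file disappears after 10 seconds, the probe starts failing, and the kubelet restarts the container, which is the restart-count transition from 0 to 1 that the test asserts.

```yaml
# Illustrative only -- not the upstream e2e fixture.
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    # Create the health file, keep it for 10s, then remove it so the
    # exec probe begins failing and the kubelet restarts the container.
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
```

`kubectl get pod liveness-exec` would then show RESTARTS incrementing once the probe fails, matching the "Restart count ... is now 1" line in the log.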
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/vsphere_volume_placement.go:156
  skipping tests not in the Origin conformance suite
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[builds][pullsecret][Conformance] docker build using a pull secret Building from a template should create a docker build that pulls using a secret run it
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/docker_pullsecret.go:48
[BeforeEach] [Top Level]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [builds][pullsecret][Conformance] docker build using a pull secret
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:58:46.274: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:58:46.307: INFO: configPath is now "/tmp/extended-test-docker-build-pullsecret-80s54-n1s8p-user.kubeconfig"
Jun 24 01:58:46.307: INFO: The user is now "extended-test-docker-build-pullsecret-80s54-n1s8p-user"
Jun 24 01:58:46.307: INFO: Creating project "extended-test-docker-build-pullsecret-80s54-n1s8p"
STEP: Waiting for a default service account to be provisioned in namespace
[JustBeforeEach] [builds][pullsecret][Conformance] docker build using a pull secret
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/docker_pullsecret.go:28
STEP: waiting for builder service account
[It] should create a docker build that pulls using a secret run it
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/docker_pullsecret.go:48
STEP: calling oc create -f "/tmp/fixture-testdata-dir978613527/test/extended/testdata/test-docker-build-pullsecret.json"
Jun 24 01:58:46.562: INFO: Running 'oc create --config=/tmp/extended-test-docker-build-pullsecret-80s54-n1s8p-user.kubeconfig --namespace=extended-test-docker-build-pullsecret-80s54-n1s8p -f /tmp/fixture-testdata-dir978613527/test/extended/testdata/test-docker-build-pullsecret.json'
imagestream "image1" created
buildconfig "docker-build" created
buildconfig "docker-build-pull" created
STEP: starting a build
Jun 24 01:58:46.918: INFO: Running 'oc start-build --config=/tmp/extended-test-docker-build-pullsecret-80s54-n1s8p-user.kubeconfig --namespace=extended-test-docker-build-pullsecret-80s54-n1s8p docker-build -o=name'
start-build output with args [docker-build -o=name]:
Error><nil>
StdOut> build/docker-build-1
StdErr>
Waiting for docker-build-1 to complete
Done waiting for docker-build-1: util.BuildResult{BuildPath:"build/docker-build-1", BuildName:"docker-build-1", StartBuildStdErr:"", StartBuildStdOut:"build/docker-build-1", StartBuildErr:error(nil), BuildConfigName:"", Build:(*api.Build)(0xc421070580), BuildAttempt:true, BuildSuccess:true, BuildFailure:false, BuildCancelled:false, BuildTimeout:false, LogDumper:(util.LogDumperFunc)(nil), oc:(*util.CLI)(0xc4201db680)} with error: <nil>
STEP: starting a second build that pulls the image from the first build
Jun 24 01:59:48.353: INFO: Running 'oc start-build --config=/tmp/extended-test-docker-build-pullsecret-80s54-n1s8p-user.kubeconfig --namespace=extended-test-docker-build-pullsecret-80s54-n1s8p docker-build-pull -o=name'
start-build output with args [docker-build-pull -o=name]:
Error><nil>
StdOut> build/docker-build-pull-1
StdErr>
Waiting for docker-build-pull-1 to complete
Done waiting for docker-build-pull-1: util.BuildResult{BuildPath:"build/docker-build-pull-1", BuildName:"docker-build-pull-1", StartBuildStdErr:"", StartBuildStdOut:"build/docker-build-pull-1", StartBuildErr:error(nil), BuildConfigName:"", Build:(*api.Build)(0xc421071080), BuildAttempt:true, BuildSuccess:true, BuildFailure:false, BuildCancelled:false, BuildTimeout:false, LogDumper:(util.LogDumperFunc)(nil), oc:(*util.CLI)(0xc4201db680)} with error: <nil>
[AfterEach] [builds][pullsecret][Conformance] docker build using a pull secret
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 02:00:19.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "extended-test-docker-build-pullsecret-80s54-n1s8p" for this suite.
Jun 24 02:00:29.864: INFO: namespace: extended-test-docker-build-pullsecret-80s54-n1s8p, resource: bindings, ignored listing per whitelist
• [SLOW TEST:103.689 seconds]
[builds][pullsecret][Conformance] docker build using a pull secret
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/docker_pullsecret.go:50
  Building from a template
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/docker_pullsecret.go:49
  should create a docker build that pulls using a secret run it
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/docker_pullsecret.go:48
------------------------------
[BeforeEach] [Top Level]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] PersistentVolumes [Volume][Serial] [BeforeEach]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  [k8s.io] PersistentVolumes:GCEPD[Flaky]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/persistent_volumes.go:379
  skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Federated ReplicaSet [Feature:Federation] [BeforeEach]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  Features
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/replicaset.go:199
  CRUD
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/replicaset.go:138
  should not be deleted from underlying clusters when OrphanDependents is true
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/replicaset.go:130
  skipping tests not in the Origin conformance suite
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Pod Disks [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  should be able to detach from a node which was deleted [Slow] [Disruptive] [Volume]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/pd.go:466
  skipping tests not in the Origin conformance suite
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[k8s.io] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:843
[BeforeEach] [Top Level]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Kubectl client
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 01:59:37.831: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 01:59:37.899: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubectl client
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:289
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:843
Jun 24 01:59:37.958: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.18.11.204:8443 --kubeconfig=/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig version --client'
Jun 24 01:59:38.205: INFO: stderr: ""
Jun 24 01:59:38.205: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"6\", GitVersion:\"v1.6.1+5115d708d7\", GitCommit:\"5115d70\", GitTreeState:\"clean\", BuildDate:\"2017-06-06T22:41:15Z\", GoVersion:\"go1.7.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Jun 24 01:59:38.207: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.18.11.204:8443 --kubeconfig=/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig create -f - --namespace=e2e-tests-kubectl-qqmn5'
Jun 24 01:59:38.638: INFO: stderr: ""
Jun 24 01:59:38.638: INFO: stdout: "replicationcontroller \"redis-master\" created\n"
Jun 24 01:59:38.638: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.18.11.204:8443 --kubeconfig=/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig create -f - --namespace=e2e-tests-kubectl-qqmn5'
Jun 24 01:59:39.079: INFO: stderr: ""
Jun 24 01:59:39.079: INFO: stdout: "service \"redis-master\" created\n"
STEP: Waiting for Redis master to start.
Jun 24 01:59:40.082: INFO: Selector matched 1 pods for map[app:redis]
Jun 24 01:59:40.082: INFO: Found 0 / 1
Jun 24 01:59:41.082: INFO: Selector matched 1 pods for map[app:redis]
Jun 24 01:59:41.082: INFO: Found 0 / 1
Jun 24 01:59:42.082: INFO: Selector matched 1 pods for map[app:redis]
Jun 24 01:59:42.082: INFO: Found 0 / 1
Jun 24 01:59:43.083: INFO: Selector matched 1 pods for map[app:redis]
Jun 24 01:59:43.083: INFO: Found 0 / 1
Jun 24 01:59:44.087: INFO: Selector matched 1 pods for map[app:redis]
Jun 24 01:59:44.087: INFO: Found 0 / 1
Jun 24 01:59:45.084: INFO: Selector matched 1 pods for map[app:redis]
Jun 24 01:59:45.084: INFO: Found 0 / 1
Jun 24 01:59:46.086: INFO: Selector matched 1 pods for map[app:redis]
Jun 24 01:59:46.086: INFO: Found 0 / 1
Jun 24 01:59:47.082: INFO: Selector matched 1 pods for map[app:redis]
Jun 24 01:59:47.082: INFO: Found 0 / 1
Jun 24 01:59:48.103: INFO: Selector matched 1 pods for map[app:redis]
Jun 24 01:59:48.103: INFO: Found 0 / 1
Jun 24 01:59:49.083: INFO: Selector matched 1 pods for map[app:redis]
Jun 24 01:59:49.083: INFO: Found 0 / 1
Jun 24 01:59:50.090: INFO: Selector matched 1 pods for map[app:redis]
Jun 24 01:59:50.090: INFO: Found 0 / 1
Jun 24 01:59:51.082: INFO: Selector matched 1 pods for map[app:redis]
Jun 24 01:59:51.082: INFO: Found 0 / 1
Jun 24 01:59:52.090: INFO: Selector matched 1 pods for map[app:redis]
Jun 24 01:59:52.090: INFO: Found 0 / 1
Jun 24 01:59:53.082: INFO: Selector matched 1 pods for map[app:redis]
Jun 24 01:59:53.082: INFO: Found 0 / 1
Jun 24 01:59:54.081: INFO: Selector matched 1 pods for map[app:redis]
Jun 24 01:59:54.081: INFO: Found 0 / 1
Jun 24 01:59:55.082: INFO: Selector matched 1 pods for map[app:redis]
Jun 24 01:59:55.082: INFO: Found 0 / 1
Jun 24 01:59:56.082: INFO: Selector matched 1 pods for map[app:redis]
Jun 24 01:59:56.082: INFO: Found 0 / 1
Jun 24 01:59:57.090: INFO: Selector matched 1 pods for map[app:redis]
Jun 24 01:59:57.090: INFO: Found 0 / 1
Jun 24 01:59:58.082: INFO: Selector matched 1 pods for map[app:redis]
Jun 24 01:59:58.082: INFO: Found 0 / 1
Jun 24 01:59:59.082: INFO: Selector matched 1 pods for map[app:redis]
Jun 24 01:59:59.082: INFO: Found 0 / 1
Jun 24 02:00:00.082: INFO: Selector matched 1 pods for map[app:redis]
Jun 24 02:00:00.082: INFO: Found 0 / 1
Jun 24 02:00:01.090: INFO: Selector matched 1 pods for map[app:redis]
Jun 24 02:00:01.090: INFO: Found 0 / 1
Jun 24 02:00:02.097: INFO: Selector matched 1 pods for map[app:redis]
Jun 24 02:00:02.097: INFO: Found 0 / 1
Jun 24 02:00:03.087: INFO: Selector matched 1 pods for map[app:redis]
Jun 24 02:00:03.087: INFO: Found 0 / 1
Jun 24 02:00:04.082: INFO: Selector matched 1 pods for map[app:redis]
Jun 24 02:00:04.082: INFO: Found 0 / 1
Jun 24 02:00:05.082: INFO: Selector matched 1 pods for map[app:redis]
Jun 24 02:00:05.082: INFO: Found 0 / 1
Jun 24 02:00:06.083: INFO: Selector matched 1 pods for map[app:redis]
Jun 24 02:00:06.083: INFO: Found 0 / 1
Jun 24 02:00:07.082: INFO: Selector matched 1 pods for map[app:redis]
Jun 24 02:00:07.082: INFO: Found 0 / 1
Jun 24 02:00:08.082: INFO: Selector matched 1 pods for map[app:redis]
Jun 24 02:00:08.082: INFO: Found 0 / 1
Jun 24 02:00:09.082: INFO: Selector matched 1 pods for map[app:redis]
Jun 24 02:00:09.082: INFO: Found 0 / 1
Jun 24 02:00:10.107: INFO: Selector matched 1 pods for map[app:redis]
Jun 24 02:00:10.107: INFO: Found 0 / 1
Jun 24 02:00:11.083: INFO: Selector matched 1 pods for map[app:redis]
Jun 24 02:00:11.083: INFO: Found 0 / 1
Jun 24 02:00:12.082: INFO: Selector matched 1 pods for map[app:redis]
Jun 24 02:00:12.082: INFO: Found 0 / 1
Jun 24 02:00:13.081: INFO: Selector matched 1 pods for map[app:redis]
Jun 24 02:00:13.081: INFO: Found 0 / 1
Jun 24 02:00:14.085: INFO: Selector matched 1 pods for map[app:redis]
Jun 24 02:00:14.085: INFO: Found 0 / 1
Jun 24 02:00:15.082: INFO: Selector matched 1 pods for map[app:redis]
Jun 24 02:00:15.082: INFO: Found 0 / 1
Jun 24 02:00:16.082: INFO: Selector matched 1 pods for map[app:redis]
Jun 24 02:00:16.082: INFO: Found 0 / 1
Jun 24 02:00:17.090: INFO: Selector matched 1 pods for map[app:redis]
Jun 24 02:00:17.090: INFO: Found 0 / 1
Jun 24 02:00:18.084: INFO: Selector matched 1 pods for map[app:redis]
Jun 24 02:00:18.084: INFO: Found 0 / 1
Jun 24 02:00:19.082: INFO: Selector matched 1 pods for map[app:redis]
Jun 24 02:00:19.082: INFO: Found 0 / 1
Jun 24 02:00:20.082: INFO: Selector matched 1 pods for map[app:redis]
Jun 24 02:00:20.082: INFO: Found 1 / 1
Jun 24 02:00:20.082: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Jun 24 02:00:20.083: INFO: Selector matched 1 pods for map[app:redis]
Jun 24 02:00:20.083: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Jun 24 02:00:20.084: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.18.11.204:8443 --kubeconfig=/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig describe pod redis-master-6k5kq --namespace=e2e-tests-kubectl-qqmn5'
Jun 24 02:00:20.364: INFO: stderr: ""
Jun 24 02:00:20.364: INFO: stdout: "Name:\t\t\tredis-master-6k5kq\nNamespace:\t\te2e-tests-kubectl-qqmn5\nSecurity Policy:\tanyuid\nNode:\t\t\t172.18.11.204/172.18.11.204\nStart Time:\t\tMon, 24 Jun 2019 01:59:38 -0400\nLabels:\t\t\tapp=redis\n\t\t\trole=master\nAnnotations:\t\tkubernetes.io/created-by={\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-kubectl-qqmn5\",\"name\":\"redis-master\",\"uid\":\"403ef766-9645-11...\n\t\t\topenshift.io/scc=anyuid\nStatus:\t\t\tRunning\nIP:\t\t\t172.17.0.4\nControllers:\t\tReplicationController/redis-master\nContainers:\n redis-master:\n Container ID:\tdocker://a252622939306c09ed0a9563d3cfad07b909caa5b4182f2eba63ec1808992028\n Image:\t\tgcr.io/google_containers/redis:e2e\n Image ID:\t\tdocker-pullable://gcr.io/google_containers/redis@sha256:f066bcf26497fbc55b9bf0769cb13a35c0afa2aa42e737cc46b7fb04b23a2f25\n Port:\t\t6379/TCP\n State:\t\tRunning\n Started:\t\tMon, 24 Jun 2019 02:00:19 -0400\n Ready:\t\tTrue\n Restart Count:\t0\n Environment:\t<none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-wd2f0 (ro)\nConditions:\n Type\t\tStatus\n Initialized \tTrue \n Ready \tTrue \n PodScheduled \tTrue \nVolumes:\n default-token-wd2f0:\n Type:\tSecret (a volume populated by a Secret)\n SecretName:\tdefault-token-wd2f0\n Optional:\tfalse\nQoS Class:\tBestEffort\nNode-Selectors:\t<none>\nTolerations:\t<none>\nEvents:\n FirstSeen\tLastSeen\tCount\tFrom\t\t\tSubObjectPath\t\t\tType\t\tReason\t\tMessage\n ---------\t--------\t-----\t----\t\t\t-------------\t\t\t--------\t------\t\t-------\n 42s\t\t42s\t\t1\tdefault-scheduler\t\t\t\t\tNormal\t\tScheduled\tSuccessfully assigned redis-master-6k5kq to 172.18.11.204\n 40s\t\t40s\t\t1\tkubelet, 172.18.11.204\tspec.containers{redis-master}\tNormal\t\tPulling\t\tpulling image \"gcr.io/google_containers/redis:e2e\"\n 2s\t\t2s\t\t1\tkubelet, 172.18.11.204\tspec.containers{redis-master}\tNormal\t\tPulled\t\tSuccessfully pulled image \"gcr.io/google_containers/redis:e2e\"\n 2s\t\t2s\t\t1\tkubelet, 172.18.11.204\tspec.containers{redis-master}\tNormal\t\tCreated\t\tCreated container with id a252622939306c09ed0a9563d3cfad07b909caa5b4182f2eba63ec1808992028\n 1s\t\t1s\t\t1\tkubelet, 172.18.11.204\tspec.containers{redis-master}\tNormal\t\tStarted\t\tStarted container with id a252622939306c09ed0a9563d3cfad07b909caa5b4182f2eba63ec1808992028\n"
Jun 24 02:00:20.364: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.18.11.204:8443 --kubeconfig=/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig describe rc redis-master --namespace=e2e-tests-kubectl-qqmn5'
Jun 24 02:00:20.659: INFO: stderr: ""
Jun 24 02:00:20.659: INFO: stdout: "Name:\t\tredis-master\nNamespace:\te2e-tests-kubectl-qqmn5\nSelector:\tapp=redis,role=master\nLabels:\t\tapp=redis\n\t\trole=master\nAnnotations:\t<none>\nReplicas:\t1 current / 1 desired\nPods Status:\t1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels:\tapp=redis\n\t\trole=master\n Containers:\n redis-master:\n Image:\t\tgcr.io/google_containers/redis:e2e\n Port:\t\t6379/TCP\n Environment:\t<none>\n Mounts:\t\t<none>\n Volumes:\t\t<none>\nEvents:\n FirstSeen\tLastSeen\tCount\tFrom\t\t\tSubObjectPath\tType\t\tReason\t\t\tMessage\n ---------\t--------\t-----\t----\t\t\t-------------\t--------\t------\t\t\t-------\n 42s\t\t42s\t\t1\treplication-controller\t\t\tNormal\t\tSuccessfulCreate\tCreated pod: redis-master-6k5kq\n"
Jun 24 02:00:20.659: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.18.11.204:8443 --kubeconfig=/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig describe service redis-master --namespace=e2e-tests-kubectl-qqmn5'
Jun 24 02:00:21.009: INFO: stderr: ""
Jun 24 02:00:21.009: INFO: stdout: "Name:\t\t\tredis-master\nNamespace:\t\te2e-tests-kubectl-qqmn5\nLabels:\t\t\tapp=redis\n\t\t\trole=master\nAnnotations:\t\t<none>\nSelector:\t\tapp=redis,role=master\nType:\t\t\tClusterIP\nIP:\t\t\t172.30.57.104\nPort:\t\t\t<unset>\t6379/TCP\nEndpoints:\t\t172.17.0.4:6379\nSession Affinity:\tNone\nEvents:\t\t\t<none>\n"
Jun 24 02:00:21.012: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.18.11.204:8443 --kubeconfig=/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig describe node 172.18.11.204'
Jun 24 02:00:21.331: INFO: stderr: ""
Jun 24 02:00:21.331: INFO: stdout: "Name:\t\t\t172.18.11.204\nRole:\t\t\t\nLabels:\t\t\tbeta.kubernetes.io/arch=amd64\n\t\t\tbeta.kubernetes.io/os=linux\n\t\t\tkubernetes.io/hostname=172.18.11.204\nAnnotations:\t\tvolumes.kubernetes.io/controller-managed-attach-detach=true\nTaints:\t\t\t<none>\nCreationTimestamp:\tMon, 24 Jun 2019 01:48:53 -0400\nPhase:\t\t\t\nConditions:\n Type\t\t\tStatus\tLastHeartbeatTime\t\t\tLastTransitionTime\t\t\tReason\t\t\t\tMessage\n ----\t\t\t------\t-----------------\t\t\t------------------\t\t\t------\t\t\t\t-------\n OutOfDisk \t\tFalse \tMon, 24 Jun 2019 02:00:15 -0400 \tMon, 24 Jun 2019 01:48:53 -0400 \tKubeletHasSufficientDisk \tkubelet has sufficient disk space available\n MemoryPressure \tFalse \tMon, 24 Jun 2019 02:00:15 -0400 \tMon, 24 Jun 2019 01:48:53 -0400 \tKubeletHasSufficientMemory \tkubelet has sufficient memory available\n DiskPressure \t\tFalse \tMon, 24 Jun 2019 02:00:15 -0400 \tMon, 24 Jun 2019 01:48:53 -0400 \tKubeletHasNoDiskPressure \tkubelet has no disk pressure\n Ready \t\tTrue \tMon, 24 Jun 2019 02:00:15 -0400 \tMon, 24 Jun 2019 01:48:53 -0400 \tKubeletReady \t\t\tkubelet is posting ready status\nAddresses:\t\t172.18.11.204,172.18.11.204,172.18.11.204\nCapacity:\n cpu:\t\t2\n memory:\t7747388Ki\n pods:\t\t20\nAllocatable:\n cpu:\t\t2\n memory:\t7644988Ki\n pods:\t\t20\nSystem Info:\n Machine ID:\t\t\tf9370ed252a14f73b014c1301a9b6d1b\n System UUID:\t\t\t868228EC-F192-B389-15C2-61279A8BCDCC\n Boot ID:\t\t\tdedba9fd-fd51-4a65-bc8c-bb5e08a40ef7\n Kernel Version:\t\t3.10.0-327.22.2.el7.x86_64\n OS Image:\t\t\tRed Hat Enterprise Linux Server 7.3 (Maipo)\n Operating System:\t\tlinux\n Architecture:\t\t\tamd64\n Container Runtime Version:\tdocker://1.12.6\n Kubelet Version:\t\tv1.6.1+5115d708d7\n Kube-Proxy Version:\t\tv1.6.1+5115d708d7\nExternalID:\t\t\t172.18.11.204\nNon-terminated Pods:\t\t(3 in total)\n Namespace\t\t\tName\t\t\t\tCPU Requests\tCPU Limits\tMemory Requests\tMemory Limits\n ---------\t\t\t----\t\t\t\t------------\t----------\t---------------\t-------------\n default\t\t\tdocker-registry-1-cqttq\t\t100m (5%)\t0 (0%)\t\t256Mi (3%)\t0 (0%)\n default\t\t\trouter-2-mwj77\t\t\t100m (5%)\t0 (0%)\t\t256Mi (3%)\t0 (0%)\n e2e-tests-kubectl-qqmn5\tredis-master-6k5kq\t\t0 (0%)\t\t0 (0%)\t\t0 (0%)\t\t0 (0%)\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n CPU Requests\tCPU Limits\tMemory Requests\tMemory Limits\n ------------\t----------\t---------------\t-------------\n 200m (10%)\t0 (0%)\t\t512Mi (6%)\t0 (0%)\nEvents:\n FirstSeen\tLastSeen\tCount\tFrom\t\t\tSubObjectPath\tType\t\tReason\t\t\tMessage\n ---------\t--------\t-----\t----\t\t\t-------------\t--------\t------\t\t\t-------\n 11m\t\t11m\t\t1\tkubelet, 172.18.11.204\t\t\tNormal\t\tStarting\t\tStarting kubelet.\n 11m\t\t11m\t\t1\tkubelet, 172.18.11.204\t\t\tWarning\t\tImageGCFailed\t\tunable to find data for container /\n 11m\t\t11m\t\t2\tkubelet, 172.18.11.204\t\t\tNormal\t\tNodeHasSufficientDisk\tNode 172.18.11.204 status is now: NodeHasSufficientDisk\n 11m\t\t11m\t\t2\tkubelet, 172.18.11.204\t\t\tNormal\t\tNodeHasSufficientMemory\tNode 172.18.11.204 status is now: NodeHasSufficientMemory\n 11m\t\t11m\t\t2\tkubelet, 172.18.11.204\t\t\tNormal\t\tNodeHasNoDiskPressure\tNode 172.18.11.204 status is now: NodeHasNoDiskPressure\n"
Jun 24 02:00:21.332: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.18.11.204:8443 --kubeconfig=/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig describe namespace e2e-tests-kubectl-qqmn5'
Jun 24 02:00:21.702: INFO: stderr: ""
Jun 24 02:00:21.702: INFO: stdout: "Name:\t\te2e-tests-kubectl-qqmn5\nLabels:\t\te2e-framework=kubectl\n\t\te2e-run=d8590d13-9643-11e9-a60e-0e9110352016\nAnnotations:\topenshift.io/sa.scc.mcs=s0:c28,c12\n\t\topenshift.io/sa.scc.supplemental-groups=1000780000/10000\n\t\topenshift.io/sa.scc.uid-range=1000780000/10000\nStatus:\t\tActive\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [k8s.io] Kubectl client
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 02:00:21.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-qqmn5" for this suite.
Jun 24 02:00:46.807: INFO: namespace: e2e-tests-kubectl-qqmn5, resource: bindings, ignored listing per whitelist
• [SLOW TEST:69.003 seconds]
[k8s.io] Kubectl client
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  [k8s.io] Kubectl describe
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:843
------------------------------
[k8s.io] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] [Volume]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/secrets.go:51
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [k8s.io] Secrets /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120 STEP: Creating a kubernetes client Jun 24 02:00:28.584: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Jun 24 02:00:28.655: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] [Volume] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/secrets.go:51 STEP: Creating secret with name secret-test-5e18d708-9645-11e9-afd4-0e9110352016 STEP: Creating a pod to test consume secrets Jun 24 02:00:28.718: INFO: Waiting up to 5m0s for pod pod-secrets-5e192e1d-9645-11e9-afd4-0e9110352016 status to be success or failure Jun 24 02:00:28.721: INFO: Waiting for pod pod-secrets-5e192e1d-9645-11e9-afd4-0e9110352016 in namespace 'e2e-tests-secrets-vw1k9' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.054884ms elapsed) Jun 24 02:00:30.724: INFO: Waiting for pod pod-secrets-5e192e1d-9645-11e9-afd4-0e9110352016 in namespace 'e2e-tests-secrets-vw1k9' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.005408582s elapsed) Jun 24 02:00:32.726: INFO: Waiting for pod pod-secrets-5e192e1d-9645-11e9-afd4-0e9110352016 in namespace 'e2e-tests-secrets-vw1k9' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.00775976s elapsed) Jun 24 02:00:34.730: INFO: 
Waiting for pod pod-secrets-5e192e1d-9645-11e9-afd4-0e9110352016 in namespace 'e2e-tests-secrets-vw1k9' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.011293344s elapsed) Jun 24 02:00:36.732: INFO: Waiting for pod pod-secrets-5e192e1d-9645-11e9-afd4-0e9110352016 in namespace 'e2e-tests-secrets-vw1k9' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.013784431s elapsed) STEP: Saw pod success Jun 24 02:00:38.743: INFO: Trying to get logs from node 172.18.11.204 pod pod-secrets-5e192e1d-9645-11e9-afd4-0e9110352016 container secret-volume-test: <nil> STEP: delete the pod Jun 24 02:00:38.811: INFO: Waiting for pod pod-secrets-5e192e1d-9645-11e9-afd4-0e9110352016 to disappear Jun 24 02:00:38.814: INFO: Pod pod-secrets-5e192e1d-9645-11e9-afd4-0e9110352016 no longer exists [AfterEach] [k8s.io] Secrets /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121 Jun 24 02:00:38.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-vw1k9" for this suite. 
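The repeated "Waiting for pod ... status to be success or failure" lines above come from the e2e framework polling the pod phase at a fixed interval until it reaches a terminal state or the 5m0s timeout elapses, logging the elapsed time on each poll. A minimal sketch of that polling pattern (illustrative only; `wait_for_pod_phase` and the stubbed phase source are hypothetical, not the framework's actual code):

```python
import time

def wait_for_pod_phase(get_phase, timeout=300.0, interval=2.0):
    """Poll get_phase() until it returns a terminal phase
    ("Succeeded" or "Failed") or the timeout elapses, tracking
    elapsed time much like the framework log lines above."""
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        if phase in ("Succeeded", "Failed"):
            return phase, elapsed
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase!r} after {elapsed:.1f}s")
        time.sleep(interval)

# A stub phase source stands in for the API-server query: it reports
# "Pending" twice (as in the log above) before the pod succeeds.
phases = iter(["Pending", "Pending", "Succeeded"])
phase, _ = wait_for_pod_phase(lambda: next(phases), interval=0.01)
```

In the real test the closure would query the API server (via kubectl or client-go) rather than a stub iterator; the structure of the loop is the same.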
Jun 24 02:00:48.917: INFO: namespace: e2e-tests-secrets-vw1k9, resource: bindings, ignored listing per whitelist • [SLOW TEST:20.458 seconds] [k8s.io] Secrets /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] [Volume] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/secrets.go:51 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [k8s.io] SchedulerPredicates [Serial] [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 validates that InterPodAffinity is respected if matching with multiple Affinities /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:609 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] idling and unidling [BeforeEach] 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/idling/idling.go:468 unidling /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/idling/idling.go:467 should work with UDP [local] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/idling/idling.go:414 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [k8s.io] ConfigMap should be consumable from pods in volume with mappings and Item mode set[Conformance] [Volume] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap.go:66 [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [k8s.io] ConfigMap /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120 STEP: Creating a kubernetes client Jun 24 02:00:29.976: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Jun 24 02:00:30.056: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set[Conformance] [Volume] 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap.go:66 STEP: Creating configMap with name configmap-test-volume-map-5eecb2a0-9645-11e9-a527-0e9110352016 STEP: Creating a pod to test consume configMaps Jun 24 02:00:30.107: INFO: Waiting up to 5m0s for pod pod-configmaps-5eed09ec-9645-11e9-a527-0e9110352016 status to be success or failure Jun 24 02:00:30.118: INFO: Waiting for pod pod-configmaps-5eed09ec-9645-11e9-a527-0e9110352016 in namespace 'e2e-tests-configmap-7ks1z' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.991864ms elapsed) Jun 24 02:00:32.120: INFO: Waiting for pod pod-configmaps-5eed09ec-9645-11e9-a527-0e9110352016 in namespace 'e2e-tests-configmap-7ks1z' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.013430247s elapsed) Jun 24 02:00:34.124: INFO: Waiting for pod pod-configmaps-5eed09ec-9645-11e9-a527-0e9110352016 in namespace 'e2e-tests-configmap-7ks1z' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.016620877s elapsed) Jun 24 02:00:36.127: INFO: Waiting for pod pod-configmaps-5eed09ec-9645-11e9-a527-0e9110352016 in namespace 'e2e-tests-configmap-7ks1z' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.019594906s elapsed) Jun 24 02:00:38.129: INFO: Waiting for pod pod-configmaps-5eed09ec-9645-11e9-a527-0e9110352016 in namespace 'e2e-tests-configmap-7ks1z' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.02240594s elapsed) Jun 24 02:00:40.133: INFO: Waiting for pod pod-configmaps-5eed09ec-9645-11e9-a527-0e9110352016 in namespace 'e2e-tests-configmap-7ks1z' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.026056452s elapsed) STEP: Saw pod success Jun 24 02:00:42.137: INFO: Trying to get logs from node 172.18.11.204 pod 
pod-configmaps-5eed09ec-9645-11e9-a527-0e9110352016 container configmap-volume-test: <nil> STEP: delete the pod Jun 24 02:00:42.153: INFO: Waiting for pod pod-configmaps-5eed09ec-9645-11e9-a527-0e9110352016 to disappear Jun 24 02:00:42.156: INFO: Pod pod-configmaps-5eed09ec-9645-11e9-a527-0e9110352016 no longer exists [AfterEach] [k8s.io] ConfigMap /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121 Jun 24 02:00:42.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-7ks1z" for this suite. Jun 24 02:00:52.448: INFO: namespace: e2e-tests-configmap-7ks1z, resource: bindings, ignored listing per whitelist • [SLOW TEST:22.594 seconds] [k8s.io] ConfigMap /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 should be consumable from pods in volume with mappings and Item mode set[Conformance] [Volume] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/configmap.go:66 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [k8s.io] DNS configMap federations [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 should be able to change federation configuration 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/dns_configmap.go:43 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [k8s.io] Kubectl client [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 [k8s.io] Simple pod /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 should support exec through an HTTP proxy /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:464 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [k8s.io] Projected should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] [Volume] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:960 [BeforeEach] [Top Level] 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [k8s.io] Projected /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120 STEP: Creating a kubernetes client Jun 24 02:00:49.048: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Jun 24 02:00:49.125: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Projected /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:803 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] [Volume] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:960 STEP: Creating a pod to test downward API volume plugin Jun 24 02:00:49.192: INFO: Waiting up to 5m0s for pod downwardapi-volume-6a4d183b-9645-11e9-afd4-0e9110352016 status to be success or failure Jun 24 02:00:49.198: INFO: Waiting for pod downwardapi-volume-6a4d183b-9645-11e9-afd4-0e9110352016 in namespace 'e2e-tests-projected-7qj0b' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.06181ms elapsed) Jun 24 02:00:51.201: INFO: Waiting for pod downwardapi-volume-6a4d183b-9645-11e9-afd4-0e9110352016 in namespace 'e2e-tests-projected-7qj0b' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.00855657s elapsed) Jun 24 02:00:53.254: INFO: Waiting for pod 
downwardapi-volume-6a4d183b-9645-11e9-afd4-0e9110352016 in namespace 'e2e-tests-projected-7qj0b' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.061701169s elapsed) Jun 24 02:00:55.273: INFO: Waiting for pod downwardapi-volume-6a4d183b-9645-11e9-afd4-0e9110352016 in namespace 'e2e-tests-projected-7qj0b' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.080656088s elapsed) Jun 24 02:00:57.297: INFO: Waiting for pod downwardapi-volume-6a4d183b-9645-11e9-afd4-0e9110352016 in namespace 'e2e-tests-projected-7qj0b' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.104362112s elapsed) Jun 24 02:00:59.299: INFO: Waiting for pod downwardapi-volume-6a4d183b-9645-11e9-afd4-0e9110352016 in namespace 'e2e-tests-projected-7qj0b' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.106949732s elapsed) Jun 24 02:01:01.302: INFO: Waiting for pod downwardapi-volume-6a4d183b-9645-11e9-afd4-0e9110352016 in namespace 'e2e-tests-projected-7qj0b' status to be 'success or failure'(found phase: "Pending", readiness: false) (12.109663094s elapsed) STEP: Saw pod success Jun 24 02:01:03.319: INFO: Trying to get logs from node 172.18.11.204 pod downwardapi-volume-6a4d183b-9645-11e9-afd4-0e9110352016 container client-container: <nil> STEP: delete the pod Jun 24 02:01:03.360: INFO: Waiting for pod downwardapi-volume-6a4d183b-9645-11e9-afd4-0e9110352016 to disappear Jun 24 02:01:03.368: INFO: Pod downwardapi-volume-6a4d183b-9645-11e9-afd4-0e9110352016 no longer exists [AfterEach] [k8s.io] Projected /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121 Jun 24 02:01:03.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-7qj0b" for this suite. 
Jun 24 02:01:13.473: INFO: namespace: e2e-tests-projected-7qj0b, resource: bindings, ignored listing per whitelist • [SLOW TEST:24.521 seconds] [k8s.io] Projected /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] [Volume] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:960 ------------------------------ [k8s.io] Networking should provide Internet connection for containers [Conformance] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/networking.go:49 [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [k8s.io] Networking /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120 STEP: Creating a kubernetes client Jun 24 02:00:52.575: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Jun 24 02:00:52.645: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Networking /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/networking.go:44 STEP: Executing a successful http request from the external internet [It] should provide 
Internet connection for containers [Conformance] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/networking.go:49 STEP: Running container which tries to wget google.com Jun 24 02:00:53.334: INFO: Waiting up to 5m0s for pod wget-test status to be success or failure Jun 24 02:00:53.343: INFO: Waiting for pod wget-test in namespace 'e2e-tests-nettest-dg9hz' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.965305ms elapsed) Jun 24 02:00:55.347: INFO: Waiting for pod wget-test in namespace 'e2e-tests-nettest-dg9hz' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.012522276s elapsed) Jun 24 02:00:57.365: INFO: Waiting for pod wget-test in namespace 'e2e-tests-nettest-dg9hz' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.030375847s elapsed) Jun 24 02:00:59.367: INFO: Waiting for pod wget-test in namespace 'e2e-tests-nettest-dg9hz' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.032437971s elapsed) Jun 24 02:01:01.388: INFO: Waiting for pod wget-test in namespace 'e2e-tests-nettest-dg9hz' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.05392966s elapsed) STEP: Saw pod success [AfterEach] [k8s.io] Networking /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121 Jun 24 02:01:03.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-nettest-dg9hz" for this suite. 
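When triaging runs like this one, it can be useful to extract the per-poll wait durations from these "Waiting for pod ..." lines programmatically, e.g. to spot pods that sat in Pending unusually long. A small sketch, assuming the line format shown throughout this log (the helper and its regex are hypothetical, not part of Origin):

```python
import re

# Hypothetical helper: pull the pod name, namespace, and elapsed wait
# time out of a framework "Waiting for pod ... (N elapsed)" log line.
WAIT_RE = re.compile(
    r"Waiting for pod (?P<pod>\S+) in namespace '(?P<ns>[^']+)'"
    r".*\((?P<val>[\d.]+)(?P<unit>ms|s) elapsed\)"
)

def parse_wait(line):
    """Return (pod, namespace, elapsed_seconds), or None if no match."""
    m = WAIT_RE.search(line)
    if not m:
        return None
    secs = float(m.group("val"))
    if m.group("unit") == "ms":      # Go prints sub-second waits in ms
        secs /= 1000.0
    return m.group("pod"), m.group("ns"), secs

sample = ("Jun 24 02:00:55.347: INFO: Waiting for pod wget-test in namespace "
          "'e2e-tests-nettest-dg9hz' status to be 'success or failure'"
          "(found phase: \"Pending\", readiness: false) (2.012522276s elapsed)")
result = parse_wait(sample)
```

Fed the whole console log line by line, this yields a per-pod series of elapsed times from which the total Pending duration of each test pod can be read off.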
Jun 24 02:01:13.598: INFO: namespace: e2e-tests-nettest-dg9hz, resource: bindings, ignored listing per whitelist • [SLOW TEST:21.030 seconds] [k8s.io] Networking /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 should provide Internet connection for containers [Conformance] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/networking.go:49 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [k8s.io] Pod Disks [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 should be able to delete a non-existent PD without error /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/pd.go:525 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [builds] Optimized image builds should succeed [Conformance] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/optimized.go:75 [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [builds] Optimized 
image builds /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120 STEP: Creating a kubernetes client Jun 24 02:00:46.836: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Jun 24 02:00:46.856: INFO: configPath is now "/tmp/extended-test-build-dockerfile-env-03wnv-n08mh-user.kubeconfig" Jun 24 02:00:46.856: INFO: The user is now "extended-test-build-dockerfile-env-03wnv-n08mh-user" Jun 24 02:00:46.856: INFO: Creating project "extended-test-build-dockerfile-env-03wnv-n08mh" STEP: Waiting for a default service account to be provisioned in namespace [JustBeforeEach] [builds] Optimized image builds /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/optimized.go:34 STEP: waiting for builder service account [It] should succeed [Conformance] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/optimized.go:75 STEP: creating a build directly Waiting for optimized to complete Done waiting for optimized: util.BuildResult{BuildPath:"builds/optimized", BuildName:"optimized", StartBuildStdErr:"", StartBuildStdOut:"", StartBuildErr:error(nil), BuildConfigName:"", Build:(*api.Build)(0xc421abb080), BuildAttempt:true, BuildSuccess:true, BuildFailure:false, BuildCancelled:false, BuildTimeout:false, LogDumper:(util.LogDumperFunc)(nil), oc:(*util.CLI)(0xc420245b30)} with error: <nil> Jun 24 02:01:03.115: INFO: Running 'oc logs --config=/tmp/extended-test-build-dockerfile-env-03wnv-n08mh-user.kubeconfig --namespace=extended-test-build-dockerfile-env-03wnv-n08mh -f builds/optimized --timestamps' Jun 24 02:01:03.680: INFO: Build logs: &{builds/optimized optimized <nil> %!s(*api.Build=&{{ } 
{optimized extended-test-build-dockerfile-env-03wnv-n08mh /oapi/v1/namespaces/extended-test-build-dockerfile-env-03wnv-n08mh/builds/optimized 690f13fd-9645-11e9-9f9d-0e9110352016 9327 0 {{63696952847 0 0x590a6e0}} <nil> <nil> map[] map[openshift.io/build.pod-name:optimized-build] [] [] } {{ {<nil> 0xc420d563c0 <nil> [] <nil> []} <nil> {0xc4219b3500 <nil> <nil> <nil>} {<nil> <nil> []} {map[] map[]} {[] [] } <nil> map[]} []} {Complete false 0xc4224c14e0 0xc4224c1560 15000000000 <nil> {<nil>} []}}) %!s(bool=true) %!s(bool=true) %!s(bool=false) %!s(bool=false) %!s(bool=false) %!s(util.LogDumperFunc=<nil>) %!s(*util.CLI=&{oc /tmp/extended-test-build-dockerfile-env-03wnv-n08mh-user.kubeconfig /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig extended-test-build-dockerfile-env-03wnv-n08mh-user /tmp/openshift-extended-tests [] [] [] <nil> <nil> <nil> false false <nil> 0xc42008cb40})} [AfterEach] [builds] Optimized image builds /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121 Jun 24 02:01:03.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-build-dockerfile-env-03wnv-n08mh" for this suite. 
Jun 24 02:01:14.287: INFO: namespace: extended-test-build-dockerfile-env-03wnv-n08mh, resource: bindings, ignored listing per whitelist • [SLOW TEST:27.547 seconds] [builds] Optimized image builds /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/optimized.go:76 should succeed [Conformance] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/optimized.go:75 ------------------------------ [k8s.io] Downward API volume should provide container's memory limit [Conformance] [Volume] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:172 [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [k8s.io] Downward API volume /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120 STEP: Creating a kubernetes client Jun 24 02:01:13.611: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Jun 24 02:01:13.957: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Downward API volume /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [Conformance] [Volume] 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:172 STEP: Creating a pod to test downward API volume plugin Jun 24 02:01:14.261: INFO: Waiting up to 5m0s for pod downwardapi-volume-793e22d2-9645-11e9-a527-0e9110352016 status to be success or failure Jun 24 02:01:14.263: INFO: Waiting for pod downwardapi-volume-793e22d2-9645-11e9-a527-0e9110352016 in namespace 'e2e-tests-downward-api-2wg15' status to be 'success or failure'(found phase: "Pending", readiness: false) (1.995077ms elapsed) Jun 24 02:01:16.265: INFO: Waiting for pod downwardapi-volume-793e22d2-9645-11e9-a527-0e9110352016 in namespace 'e2e-tests-downward-api-2wg15' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.004168883s elapsed) Jun 24 02:01:18.268: INFO: Waiting for pod downwardapi-volume-793e22d2-9645-11e9-a527-0e9110352016 in namespace 'e2e-tests-downward-api-2wg15' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.006845503s elapsed) Jun 24 02:01:20.270: INFO: Waiting for pod downwardapi-volume-793e22d2-9645-11e9-a527-0e9110352016 in namespace 'e2e-tests-downward-api-2wg15' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.008917605s elapsed) STEP: Saw pod success Jun 24 02:01:22.303: INFO: Trying to get logs from node 172.18.11.204 pod downwardapi-volume-793e22d2-9645-11e9-a527-0e9110352016 container client-container: <nil> STEP: delete the pod Jun 24 02:01:22.370: INFO: Waiting for pod downwardapi-volume-793e22d2-9645-11e9-a527-0e9110352016 to disappear Jun 24 02:01:22.381: INFO: Pod downwardapi-volume-793e22d2-9645-11e9-a527-0e9110352016 no longer exists [AfterEach] [k8s.io] Downward API volume 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121 Jun 24 02:01:22.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-2wg15" for this suite. Jun 24 02:01:32.627: INFO: namespace: e2e-tests-downward-api-2wg15, resource: bindings, ignored listing per whitelist • [SLOW TEST:19.050 seconds] [k8s.io] Downward API volume /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 should provide container's memory limit [Conformance] [Volume] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:172 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [k8s.io] Kubectl client [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 [k8s.io] Kubectl create quota /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 should create a quota without scopes /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:1575 skipping tests not in the Origin conformance suite 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [image_ecosystem][Slow] openshift images should be SCL enabled [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:76 using the SCL in s2i images /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:73 "ci.dev.openshift.redhat.com:5000/openshift/php-56-rhel7" should be SCL enabled /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:72 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [Conformance][networking][router] openshift router metrics The HAProxy router should expose the profiling endpoints /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:170 [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [Conformance][networking][router] openshift router metrics 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 02:01:13.573: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 02:01:13.607: INFO: configPath is now "/tmp/extended-test-router-metrics-lrwv9-zcthj-user.kubeconfig"
Jun 24 02:01:13.607: INFO: The user is now "extended-test-router-metrics-lrwv9-zcthj-user"
Jun 24 02:01:13.607: INFO: Creating project "extended-test-router-metrics-lrwv9-zcthj"
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [Conformance][networking][router] openshift router metrics
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:51
[It] should expose the profiling endpoints
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:170
Jun 24 02:01:14.177: INFO: Creating new exec pod
STEP: preventing access without a username and password
Jun 24 02:01:22.225: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.18.11.204:8443 --kubeconfig=/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-router-metrics-lrwv9-zcthj execpod -- /bin/sh -c curl -s -o /dev/null -w '%{http_code}' "http://172.18.11.204:1935/debug/pprof/heap"'
Jun 24 02:01:22.751: INFO: stderr: ""
STEP: at /debug/pprof
Jun 24 02:01:22.751: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.18.11.204:8443 --kubeconfig=/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig exec --namespace=extended-test-router-metrics-lrwv9-zcthj execpod -- /bin/sh -c curl -s -u admin:M3Czto4twK "http://172.18.11.204:1935/debug/pprof/heap"'
Jun 24 02:01:23.415: INFO: stderr: ""
[AfterEach] [Conformance][networking][router] openshift router metrics
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 02:01:23.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "extended-test-router-metrics-lrwv9-zcthj" for this suite.
Jun 24 02:01:38.577: INFO: namespace: extended-test-router-metrics-lrwv9-zcthj, resource: bindings, ignored listing per whitelist
• [SLOW TEST:25.005 seconds]
[Conformance][networking][router] openshift router metrics
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:172
The HAProxy router
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:171
should expose the profiling endpoints
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/router/metrics.go:170
------------------------------
[k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/kubelet_etc_hosts.go:55
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 02:00:26.516: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 02:00:26.589: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/kubelet_etc_hosts.go:55
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jun 24 02:00:48.657: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-2gkj1 PodName:test-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 24 02:00:48.657: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
Jun 24 02:00:48.888: INFO: Exec stderr: ""
Jun 24 02:00:48.888: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-2gkj1 PodName:test-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 24 02:00:48.888: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
Jun 24 02:00:49.425: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jun 24 02:00:49.425: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-2gkj1 PodName:test-pod ContainerName:busybox-3 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 24 02:00:49.426: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
Jun 24 02:00:49.878: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jun 24 02:00:49.878: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-2gkj1 PodName:test-host-network-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 24 02:00:49.878: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
Jun 24 02:00:52.116: INFO: Exec stderr: ""
Jun 24 02:00:52.116: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-2gkj1 PodName:test-host-network-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 24 02:00:52.116: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
Jun 24 02:00:54.273: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 02:00:54.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-2gkj1" for this suite.
Jun 24 02:01:49.451: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-2gkj1, resource: bindings, ignored listing per whitelist
• [SLOW TEST:82.941 seconds]
[k8s.io] KubeletManagedEtcHosts
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should test kubelet managed /etc/hosts file [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/kubelet_etc_hosts.go:55
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[image_ecosystem][Slow] openshift images should be SCL enabled [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:76
using the SCL in s2i images
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:73
"openshift/php-56-centos7" should be SCL enabled
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:72
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[builds][Slow] starting a build using CLI [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/start.go:341
oc start-build --wait
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/start.go:54
should start a build and wait for the build to complete
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/start.go:45
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[cli][Slow] can use rsync to upload files to pods [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/cli/rsync.go:396
copy by strategy
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/cli/rsync.go:285
should copy files with the tar strategy
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/cli/rsync.go:283
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a pod.
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/resource_quota.go:213
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] ResourceQuota
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 02:01:32.674: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 02:01:32.801: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod.
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/resource_quota.go:213
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [k8s.io] ResourceQuota
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 02:01:41.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-resourcequota-dnjmp" for this suite.
Jun 24 02:01:51.469: INFO: namespace: e2e-tests-resourcequota-dnjmp, resource: bindings, ignored listing per whitelist
• [SLOW TEST:18.796 seconds]
[k8s.io] ResourceQuota
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should create a ResourceQuota and capture the life of a pod.
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/resource_quota.go:213
------------------------------
[k8s.io] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:1107
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Kubectl client
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 02:01:51.476: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 02:01:51.588: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubectl client
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:289
[BeforeEach] [k8s.io] Kubectl run default
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:1087
[It] should create an rc or deployment from an image [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:1107
STEP: running the image gcr.io/google_containers/nginx-slim:0.7
Jun 24 02:01:51.720: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.18.11.204:8443 --kubeconfig=/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig run e2e-test-nginx-deployment --image=gcr.io/google_containers/nginx-slim:0.7 --namespace=e2e-tests-kubectl-t7w3j'
Jun 24 02:01:52.490: INFO: stderr: ""
Jun 24 02:01:52.490: INFO: stdout: "deployment \"e2e-test-nginx-deployment\" created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:1091
Jun 24 02:01:54.506: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://172.18.11.204:8443 --kubeconfig=/tmp/openshift/core/openshift.local.config/master/admin.kubeconfig delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-t7w3j'
Jun 24 02:01:57.897: INFO: stderr: ""
Jun 24 02:01:57.897: INFO: stdout: "deployment \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [k8s.io] Kubectl client
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 02:01:57.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-t7w3j" for this suite.
Jun 24 02:02:07.961: INFO: namespace: e2e-tests-kubectl-t7w3j, resource: bindings, ignored listing per whitelist
• [SLOW TEST:16.577 seconds]
[k8s.io] Kubectl client
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
[k8s.io] Kubectl run default
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should create an rc or deployment from an image [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:1107
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Federation deployments [Feature:Federation] [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
Federated Deployment
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/deployment.go:127
should not be deleted from underlying clusters when OrphanDependents is nil
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/deployment.go:125
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Networking [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should provide unchanging, static URL paths for kubernetes api services
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/networking.go:74
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[cli][Slow] can use rsync to upload files to pods [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/cli/rsync.go:396
rsync specific flags
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/cli/rsync.go:395
should honor the --include flag
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/cli/rsync.go:335
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] [Feature:Example] [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
[k8s.io] CassandraStatefulSet
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should create statefulset
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/examples.go:333
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Kubectl alpha client [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
[k8s.io] Kubectl run CronJob
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should create a CronJob
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:264
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Federated ingresses [Feature:Federation] [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
Federated Ingresses
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/ingress.go:274
should create and update matching ingresses in underlying clusters
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/ingress.go:200
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[builds][pruning] prune builds based on settings in the buildconfig [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:122
should prune completed builds based on the successfulBuildsHistoryLimit setting
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:65
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[image_ecosystem][jenkins][Slow] openshift pipeline plugin [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/jenkins_plugin.go:664
jenkins-plugin test context
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/jenkins_plugin.go:663
jenkins-plugin test trigger build with slave
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/jenkins_plugin.go:285
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Deployment [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
deployment should create new pods
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/deployment.go:65
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[image_ecosystem][jenkins][Slow] openshift pipeline plugin [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/jenkins_plugin.go:664
jenkins-plugin test context
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/jenkins_plugin.go:663
jenkins-plugin test multitag DSL
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/jenkins_plugin.go:591
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[k8s.io] Downward API should provide pod name and namespace as env vars [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downward_api.go:63
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Downward API
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 02:01:49.468: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 02:01:49.624: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name and namespace as env vars [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downward_api.go:63
STEP: Creating a pod to test downward api env vars
Jun 24 02:01:50.312: INFO: Waiting up to 5m0s for pod downward-api-8eba13b1-9645-11e9-906c-0e9110352016 status to be success or failure
Jun 24 02:01:50.313: INFO: Waiting for pod downward-api-8eba13b1-9645-11e9-906c-0e9110352016 in namespace 'e2e-tests-downward-api-rp9nt' status to be 'success or failure'(found phase: "Pending", readiness: false) (1.771525ms elapsed)
Jun 24 02:01:52.351: INFO: Waiting for pod downward-api-8eba13b1-9645-11e9-906c-0e9110352016 in namespace 'e2e-tests-downward-api-rp9nt' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.039835498s elapsed)
Jun 24 02:01:54.354: INFO: Waiting for pod downward-api-8eba13b1-9645-11e9-906c-0e9110352016 in namespace 'e2e-tests-downward-api-rp9nt' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.042658276s elapsed)
Jun 24 02:01:56.357: INFO: Waiting for pod downward-api-8eba13b1-9645-11e9-906c-0e9110352016 in namespace 'e2e-tests-downward-api-rp9nt' status to be 'success or failure'(found phase: "Pending", readiness: false) (6.045132375s elapsed)
Jun 24 02:01:58.360: INFO: Waiting for pod downward-api-8eba13b1-9645-11e9-906c-0e9110352016 in namespace 'e2e-tests-downward-api-rp9nt' status to be 'success or failure'(found phase: "Pending", readiness: false) (8.048397779s elapsed)
Jun 24 02:02:00.365: INFO: Waiting for pod downward-api-8eba13b1-9645-11e9-906c-0e9110352016 in namespace 'e2e-tests-downward-api-rp9nt' status to be 'success or failure'(found phase: "Pending", readiness: false) (10.053510099s elapsed)
STEP: Saw pod success
Jun 24 02:02:02.372: INFO: Trying to get logs from node 172.18.11.204 pod downward-api-8eba13b1-9645-11e9-906c-0e9110352016 container dapi-container: <nil>
STEP: delete the pod
Jun 24 02:02:02.394: INFO: Waiting for pod downward-api-8eba13b1-9645-11e9-906c-0e9110352016 to disappear
Jun 24 02:02:02.396: INFO: Pod downward-api-8eba13b1-9645-11e9-906c-0e9110352016 no longer exists
[AfterEach] [k8s.io] Downward API
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 02:02:02.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-rp9nt" for this suite.
Jun 24 02:02:13.101: INFO: namespace: e2e-tests-downward-api-rp9nt, resource: bindings, ignored listing per whitelist
• [SLOW TEST:23.647 seconds]
[k8s.io] Downward API
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should provide pod name and namespace as env vars [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downward_api.go:63
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Load capacity [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
[Feature:ManualPerformance] should be able to handle 30 pods per node { ReplicationController} with 0 secrets and 2 daemons
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/load.go:265
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[builds][Slow] s2i extended build [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/s2i_extended_build.go:148
with scripts from the source repository
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/s2i_extended_build.go:72
should use assemble-runtime script from the source repository
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/s2i_extended_build.go:71
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds]
[networking] services [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:65
when using a plugin that isolates namespaces by default
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:273
should allow connections to services in the default namespace from a pod in another namespace on a different node
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:55
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [k8s.io] Probing container [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 should have monotonically increasing restart count [Conformance] [Slow] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/container_probe.go:206 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [k8s.io] PreStop should call prestop when killing a pod [Conformance] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/pre_stop.go:194 [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [k8s.io] PreStop /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120 STEP: Creating a kubernetes client Jun 24 02:01:14.385: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Jun 
24 02:01:14.459: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] PreStop /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/pre_stop.go:190 [It] should call prestop when killing a pod [Conformance] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/pre_stop.go:194 STEP: Creating server pod server in namespace e2e-tests-prestop-h4hdf STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace e2e-tests-prestop-h4hdf STEP: Deleting pre-stop pod Jun 24 02:01:33.799: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again.", "Unable to read the endpoints for default/nettest: endpoints \"nettest\" not found; will try again." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] PreStop /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121 Jun 24 02:01:33.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-prestop-h4hdf" for this suite. 
Jun 24 02:02:13.938: INFO: namespace: e2e-tests-prestop-h4hdf, resource: bindings, ignored listing per whitelist • [SLOW TEST:59.568 seconds] [k8s.io] PreStop /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 should call prestop when killing a pod [Conformance] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/pre_stop.go:194 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [k8s.io] Etcd failure [Disruptive] [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 should recover from network partition with master /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/etcd_failure.go:60 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [k8s.io] Generated release_1_5 clientset [BeforeEach] 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 should create v2alpha1 cronJobs, delete cronJobs, watch cronJobs /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/generated_clientset.go:337 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [image_ecosystem][Slow] openshift images should be SCL enabled [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:76 returning s2i usage when running the image /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:38 "openshift/python-33-centos7" should print the usage /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/scl.go:37 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [Feature:ImagePrune] Image prune [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/images/prune.go:117 of schema 2 /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/images/prune.go:94 should prune old image with config /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/images/prune.go:93 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [k8s.io] Pod Disks [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 should schedule a pod w/ a readonly PD on two hosts, then remove both ungracefully. 
[Slow] [Volume] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/pd.go:246 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port [Conformance] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/proxy.go:62 [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] version v1 /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120 STEP: Creating a kubernetes client Jun 24 02:02:13.964: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Jun 24 02:02:14.036: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port [Conformance] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/proxy.go:62 Jun 24 02:02:14.101: INFO: (0) /api/v1/proxy/nodes/172.18.11.204:10250/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... 
(200; 5.833928ms) Jun 24 02:02:14.103: INFO: (1) /api/v1/proxy/nodes/172.18.11.204:10250/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.39586ms) Jun 24 02:02:14.111: INFO: (2) /api/v1/proxy/nodes/172.18.11.204:10250/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 7.514317ms) Jun 24 02:02:14.113: INFO: (3) /api/v1/proxy/nodes/172.18.11.204:10250/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.51229ms) Jun 24 02:02:14.116: INFO: (4) /api/v1/proxy/nodes/172.18.11.204:10250/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.474255ms) Jun 24 02:02:14.118: INFO: (5) /api/v1/proxy/nodes/172.18.11.204:10250/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.426802ms) Jun 24 02:02:14.121: INFO: (6) /api/v1/proxy/nodes/172.18.11.204:10250/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.518245ms) Jun 24 02:02:14.123: INFO: (7) /api/v1/proxy/nodes/172.18.11.204:10250/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.512146ms) Jun 24 02:02:14.126: INFO: (8) /api/v1/proxy/nodes/172.18.11.204:10250/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.595322ms) Jun 24 02:02:14.129: INFO: (9) /api/v1/proxy/nodes/172.18.11.204:10250/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 3.530843ms) Jun 24 02:02:14.133: INFO: (10) /api/v1/proxy/nodes/172.18.11.204:10250/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... 
(200; 3.123765ms) Jun 24 02:02:14.135: INFO: (11) /api/v1/proxy/nodes/172.18.11.204:10250/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.39412ms) Jun 24 02:02:14.137: INFO: (12) /api/v1/proxy/nodes/172.18.11.204:10250/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.357831ms) Jun 24 02:02:14.140: INFO: (13) /api/v1/proxy/nodes/172.18.11.204:10250/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.404396ms) Jun 24 02:02:14.142: INFO: (14) /api/v1/proxy/nodes/172.18.11.204:10250/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.312492ms) Jun 24 02:02:14.144: INFO: (15) /api/v1/proxy/nodes/172.18.11.204:10250/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.224494ms) Jun 24 02:02:14.147: INFO: (16) /api/v1/proxy/nodes/172.18.11.204:10250/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.243187ms) Jun 24 02:02:14.149: INFO: (17) /api/v1/proxy/nodes/172.18.11.204:10250/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.44574ms) Jun 24 02:02:14.151: INFO: (18) /api/v1/proxy/nodes/172.18.11.204:10250/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... (200; 2.35128ms) Jun 24 02:02:14.154: INFO: (19) /api/v1/proxy/nodes/172.18.11.204:10250/logs/: <pre> <a href="anaconda/">anaconda/</a> <a href="audit/">audit/</a> <a href="boot.log">boot.log</... 
(200; 2.494646ms) [AfterEach] version v1 /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121 Jun 24 02:02:14.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-3jmqn" for this suite. Jun 24 02:02:24.240: INFO: namespace: e2e-tests-proxy-3jmqn, resource: bindings, ignored listing per whitelist • [SLOW TEST:10.320 seconds] [k8s.io] Proxy /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 version v1 /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/proxy.go:275 should proxy logs on node with explicit kubelet port [Conformance] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/proxy.go:62 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [k8s.io] Garbage collector [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 [Feature:GarbageCollector] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/garbage_collector.go:662 skipping tests not in the 
Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [k8s.io] Federation apiserver [Feature:Federation] [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 Cluster objects [Serial] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/apiserver.go:82 should be created and deleted successfully /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/apiserver.go:81 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [k8s.io] Networking [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 [k8s.io] Granular Checks: Services [Slow] 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 should function for endpoint-Service: http /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/networking.go:132 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [k8s.io] Projected should be consumable from pods in volume [Conformance] [Volume] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:40 [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [k8s.io] Projected /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120 STEP: Creating a kubernetes client Jun 24 02:02:08.074: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Jun 24 02:02:08.133: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Projected /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:803 [It] should be consumable from pods in volume [Conformance] [Volume] 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:40 STEP: Creating projection with secret that has name projected-secret-test-99658207-9645-11e9-a527-0e9110352016 STEP: Creating a pod to test consume secrets Jun 24 02:02:08.207: INFO: Waiting up to 5m0s for pod pod-projected-secrets-9965e7bd-9645-11e9-a527-0e9110352016 status to be success or failure Jun 24 02:02:08.214: INFO: Waiting for pod pod-projected-secrets-9965e7bd-9645-11e9-a527-0e9110352016 in namespace 'e2e-tests-projected-tvmmd' status to be 'success or failure'(found phase: "Pending", readiness: false) (7.165771ms elapsed) Jun 24 02:02:10.217: INFO: Waiting for pod pod-projected-secrets-9965e7bd-9645-11e9-a527-0e9110352016 in namespace 'e2e-tests-projected-tvmmd' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.009751205s elapsed) Jun 24 02:02:12.220: INFO: Waiting for pod pod-projected-secrets-9965e7bd-9645-11e9-a527-0e9110352016 in namespace 'e2e-tests-projected-tvmmd' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.012295792s elapsed) STEP: Saw pod success Jun 24 02:02:14.223: INFO: Trying to get logs from node 172.18.11.204 pod pod-projected-secrets-9965e7bd-9645-11e9-a527-0e9110352016 container projected-secret-volume-test: <nil> STEP: delete the pod Jun 24 02:02:14.705: INFO: Waiting for pod pod-projected-secrets-9965e7bd-9645-11e9-a527-0e9110352016 to disappear Jun 24 02:02:14.709: INFO: Pod pod-projected-secrets-9965e7bd-9645-11e9-a527-0e9110352016 no longer exists [AfterEach] [k8s.io] Projected /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121 Jun 24 02:02:14.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"e2e-tests-projected-tvmmd" for this suite. Jun 24 02:02:24.985: INFO: namespace: e2e-tests-projected-tvmmd, resource: bindings, ignored listing per whitelist • [SLOW TEST:17.124 seconds] [k8s.io] Projected /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 should be consumable from pods in volume [Conformance] [Volume] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:40 ------------------------------ [k8s.io] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] [Volume] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:204 [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 [BeforeEach] [k8s.io] Downward API volume /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120 STEP: Creating a kubernetes client Jun 24 02:02:13.146: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig STEP: Building a namespace api object Jun 24 02:02:13.367: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Downward API volume 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] [Volume] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:204 STEP: Creating a pod to test downward API volume plugin Jun 24 02:02:13.466: INFO: Waiting up to 5m0s for pod downwardapi-volume-9c87f7f2-9645-11e9-906c-0e9110352016 status to be success or failure Jun 24 02:02:13.469: INFO: Waiting for pod downwardapi-volume-9c87f7f2-9645-11e9-906c-0e9110352016 in namespace 'e2e-tests-downward-api-bzp4n' status to be 'success or failure'(found phase: "Pending", readiness: false) (3.218645ms elapsed) Jun 24 02:02:15.472: INFO: Waiting for pod downwardapi-volume-9c87f7f2-9645-11e9-906c-0e9110352016 in namespace 'e2e-tests-downward-api-bzp4n' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.005665729s elapsed) Jun 24 02:02:17.474: INFO: Waiting for pod downwardapi-volume-9c87f7f2-9645-11e9-906c-0e9110352016 in namespace 'e2e-tests-downward-api-bzp4n' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.008065937s elapsed) STEP: Saw pod success Jun 24 02:02:19.481: INFO: Trying to get logs from node 172.18.11.204 pod downwardapi-volume-9c87f7f2-9645-11e9-906c-0e9110352016 container client-container: <nil> STEP: delete the pod Jun 24 02:02:19.597: INFO: Waiting for pod downwardapi-volume-9c87f7f2-9645-11e9-906c-0e9110352016 to disappear Jun 24 02:02:19.600: INFO: Pod downwardapi-volume-9c87f7f2-9645-11e9-906c-0e9110352016 no longer exists [AfterEach] [k8s.io] Downward API volume 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 02:02:19.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-bzp4n" for this suite.
Jun 24 02:02:29.726: INFO: namespace: e2e-tests-downward-api-bzp4n, resource: bindings, ignored listing per whitelist
• [SLOW TEST:16.583 seconds]
[k8s.io] Downward API volume
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] [Volume]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:204
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] PersistentVolumes [Volume][Serial] [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
[k8s.io] PersistentVolumes:GCEPD[Flaky]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/persistent_volumes.go:363
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[k8s.io] DNS should provide DNS for ExternalName services
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/dns.go:512
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] DNS
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 02:01:38.580: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 02:01:38.652: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/dns.go:512
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: dig +short +tries=12 +norecurse dns-test-service-3.e2e-tests-dns-c3fj1.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.e2e-tests-dns-c3fj1.svc.cluster.local
STEP: Running these commands on jessie: dig +short +tries=12 +norecurse dns-test-service-3.e2e-tests-dns-c3fj1.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.e2e-tests-dns-c3fj1.svc.cluster.local
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 24 02:02:06.893: INFO: DNS probes using dns-test-87e66838-9645-11e9-afd4-0e9110352016 succeeded
STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: dig +short +tries=12 +norecurse dns-test-service-3.e2e-tests-dns-c3fj1.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.e2e-tests-dns-c3fj1.svc.cluster.local
STEP: Running these commands on jessie: dig +short +tries=12 +norecurse dns-test-service-3.e2e-tests-dns-c3fj1.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.e2e-tests-dns-c3fj1.svc.cluster.local
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 24 02:02:16.936: INFO: DNS probes using dns-test-98a0bbb3-9645-11e9-afd4-0e9110352016 succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: dig +short +tries=12 +norecurse dns-test-service-3.e2e-tests-dns-c3fj1.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.e2e-tests-dns-c3fj1.svc.cluster.local
STEP: Running these commands on jessie: dig +short +tries=12 +norecurse dns-test-service-3.e2e-tests-dns-c3fj1.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.e2e-tests-dns-c3fj1.svc.cluster.local
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 24 02:02:25.007: INFO: File jessie_udp@dns-test-service-3.e2e-tests-dns-c3fj1.svc.cluster.local from pod dns-test-9e9d61e8-9645-11e9-afd4-0e9110352016 contains '' instead of '127.1.2.3'
Jun 24 02:02:25.007: INFO: Lookups using dns-test-9e9d61e8-9645-11e9-afd4-0e9110352016 failed for: [jessie_udp@dns-test-service-3.e2e-tests-dns-c3fj1.svc.cluster.local]
Jun 24 02:02:26.993: INFO: DNS probes using dns-test-9e9d61e8-9645-11e9-afd4-0e9110352016 succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [k8s.io] DNS
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 02:02:27.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-c3fj1" for this suite.
Jun 24 02:02:37.087: INFO: namespace: e2e-tests-dns-c3fj1, resource: bindings, ignored listing per whitelist
• [SLOW TEST:58.592 seconds]
[k8s.io] DNS
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should provide DNS for ExternalName services
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/dns.go:512
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] DNS horizontal autoscaling [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios
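The DNS probe pattern above (write each `dig` result to a `/results/<name>` file, then poll until every file contains the expected answer, retrying empty lookups) can be sketched as a small poll-and-compare loop. This is an illustrative reimplementation, not the actual e2e framework code; `read_file` is a hypothetical callable standing in for reading a probe result file from the pod.

```python
import time

def assert_files_contain(read_file, expected, timeout=30.0, interval=2.0):
    """Poll probe result files until each holds its expected value.

    read_file: callable mapping a result-file name (e.g.
    'jessie_udp@dns-test-service-3...') to its current contents.
    expected: dict of file name -> expected string.
    An empty or wrong result is retried until the timeout, then
    reported as a failed lookup, as in the log above.
    """
    deadline = time.monotonic() + timeout
    while True:
        failed = [name for name, want in expected.items()
                  if read_file(name) != want]
        if not failed:
            return
        if time.monotonic() >= deadline:
            raise AssertionError("Lookups failed for: %s" % failed)
        time.sleep(interval)
```

Note how the log shows exactly this behavior: the third probe first reports `contains '' instead of '127.1.2.3'`, then succeeds on a later poll two seconds on.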
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:207
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[k8s.io] Security Context [Feature:SecurityContext] should support pod.Spec.SecurityContext.SupplementalGroups
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/security_context.go:71
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Security Context [Feature:SecurityContext]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 02:02:25.200: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 02:02:25.290: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.SupplementalGroups
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/security_context.go:71
STEP: Creating a pod to test pod.Spec.SecurityContext.SupplementalGroups
Jun 24 02:02:25.365: INFO: Waiting up to 5m0s for pod security-context-a39f9c82-9645-11e9-a527-0e9110352016 status to be success or failure
Jun 24 02:02:25.367: INFO: Waiting for pod security-context-a39f9c82-9645-11e9-a527-0e9110352016 in namespace 'e2e-tests-security-context-83ctf' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.036341ms elapsed)
Jun 24 02:02:27.370: INFO: Waiting for pod security-context-a39f9c82-9645-11e9-a527-0e9110352016 in namespace 'e2e-tests-security-context-83ctf' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.004917189s elapsed)
STEP: Saw pod success
Jun 24 02:02:29.375: INFO: Trying to get logs from node 172.18.11.204 pod security-context-a39f9c82-9645-11e9-a527-0e9110352016 container test-container: <nil>
STEP: delete the pod
Jun 24 02:02:29.390: INFO: Waiting for pod security-context-a39f9c82-9645-11e9-a527-0e9110352016 to disappear
Jun 24 02:02:29.393: INFO: Pod security-context-a39f9c82-9645-11e9-a527-0e9110352016 no longer exists
[AfterEach] [k8s.io] Security Context [Feature:SecurityContext]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 02:02:29.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-security-context-83ctf" for this suite.
Jun 24 02:02:39.522: INFO: namespace: e2e-tests-security-context-83ctf, resource: bindings, ignored listing per whitelist
• [SLOW TEST:14.409 seconds]
[k8s.io] Security Context [Feature:SecurityContext]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should support pod.Spec.SecurityContext.SupplementalGroups
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/security_context.go:71
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Dynamic provisioning [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
[k8s.io] DynamicProvisioner Beta
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should not provision a volume in an unmanaged GCE zone. [Slow] [Volume]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/volume_provisioning.go:216
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[bldcompat][Slow][Compatibility] build controller [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/controller_compat.go:52
RunBuildPodControllerTest
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/controller_compat.go:31
should succeed
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/controller_compat.go:30
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.002 seconds]
[networking] network isolation [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:58
when using a plugin that isolates namespaces by default
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:273
should allow communication from default to non-default namespace on the same node
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/isolation.go:44
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Kubectl client [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
[k8s.io] Simple pod
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should support exec
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:427
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[builds][Slow] starting a build using CLI [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/start.go:341
oc start-build --wait
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/start.go:54
should start a build and wait for the build to fail
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/start.go:53
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] DNS [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should provide DNS for pods for Hostname and Subdomain Annotation
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/dns.go:448
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] StatefulSet [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
[k8s.io] Deploy clustered applications [Feature:StatefulSet] [Slow]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
should creating a working zookeeper cluster
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/statefulset.go:537
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[networking] services [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:65
basic functionality
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:21
should allow connections to another pod on the same node via a service IP
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/services.go:16
skipping tests not in the Origin conformance suite
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[Conformance][registry][migration] manifest migration from etcd to registry storage registry can get access to manifest [local]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/registry/registry.go:122
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [Conformance][registry][migration] manifest migration from etcd to registry storage
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 02:02:29.733: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 02:02:29.752: INFO: configPath is now "/tmp/extended-test-registry-migration-jpl6g-68ds9-user.kubeconfig"
Jun 24 02:02:29.752: INFO: The user is now "extended-test-registry-migration-jpl6g-68ds9-user"
Jun 24 02:02:29.752: INFO: Creating project "extended-test-registry-migration-jpl6g-68ds9"
STEP: Waiting for a default service account to be provisioned in namespace
[It] registry can get access to manifest [local]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/registry/registry.go:122
STEP: set up policy for registry to have anonymous access to images
Jun 24 02:02:29.875: INFO: Running 'oc policy --config=/tmp/extended-test-registry-migration-jpl6g-68ds9-user.kubeconfig --namespace=extended-test-registry-migration-jpl6g-68ds9 add-role-to-user registry-viewer system:anonymous'
role "registry-viewer" added: "system:anonymous"
STEP: pushing image...
Step 1 : FROM scratch
--->
Step 2 : COPY data1 /data1
---> 4d865ce2722a
Removing intermediate container 60e2b0dc09c4
Successfully built 4d865ce2722a
Jun 24 02:02:31.538: INFO: Running 'oc whoami --config=/tmp/extended-test-registry-migration-jpl6g-68ds9-user.kubeconfig --namespace=extended-test-registry-migration-jpl6g-68ds9 -t'
The push refers to a repository [172.30.191.57:5000/extended-test-registry-migration-jpl6g-68ds9/app]
Preparing
Pushing [====================> ] 512 B/1.28 kB
Pushing
Pushing [==================================================>] 1.792 kB
Pushing
Pushing [==================================================>] 3.072 kB
Pushing
Pushing [==================================================>] 3.072 kB
Pushing
Pushed
latest: digest: sha256:48f565b174a4402ba9c8f46fef9202e521205e928d1f59f27e8c5059347c950e size: 1536
STEP: checking that the image converted...
STEP: getting image manifest from docker-registry...
I0624 02:02:32.495414 8251 client.go:334] Failed to get https, trying http: Get https://172.30.191.57:5000/v2/: http: server gave HTTP response to HTTPS client
I0624 02:02:32.506291 8251 client.go:353] Found registry v2 API at http://172.30.191.57:5000/v2/
STEP: restoring manifest...
STEP: checking that the manifest is present in the image...
STEP: getting image manifest from docker-registry one more time...
STEP: waiting until image is updated...
STEP: checking that the manifest was removed from the image...
STEP: getting image manifest from docker-registry to check if he's available...
STEP: pulling image...
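The `Failed to get https, trying http` lines above show the registry client probing the v2 API endpoint over TLS first and falling back to plain HTTP. A minimal sketch of that fallback, with `fetch` as a hypothetical stand-in for a real HTTP GET (the actual client lives in Origin's `client.go`):

```python
def find_v2_endpoint(addr, fetch):
    """Return the first reachable /v2/ URL for a registry address.

    Tries https before http, mirroring the fallback seen in the log;
    `fetch` is any callable that raises on connection failure.
    """
    last_err = None
    for url in ("https://%s/v2/" % addr, "http://%s/v2/" % addr):
        try:
            fetch(url)
            return url
        except Exception as err:
            last_err = err
    raise last_err
```

In the run above the https probe fails ("server gave HTTP response to HTTPS client"), so the client settles on `http://172.30.191.57:5000/v2/`.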
STEP: get secret list err <nil> STEP: secret name builder-dockercfg-d62qd STEP: docker cfg token json {"172.30.191.57:5000":{"username":"serviceaccount","password":"eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJleHRlbmRlZC10ZXN0LXJlZ2lzdHJ5LW1pZ3JhdGlvbi1qcGw2Zy02OGRzOSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJidWlsZGVyLXRva2VuLWpqczl4Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImJ1aWxkZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJhNjQxOWNiZS05NjQ1LTExZTktOWY5ZC0wZTkxMTAzNTIwMTYiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZXh0ZW5kZWQtdGVzdC1yZWdpc3RyeS1taWdyYXRpb24tanBsNmctNjhkczk6YnVpbGRlciJ9.f8KYitjqAUZNE0BcOmLPgqN68Eed4oQsUuYQ933mCAA-Psgx95QpUkRCrVG3G9blrWjCQEh_lYocJm6BY1oslQp1PtNbW9N_1ipLttME33BPku8sxy88_Fxf5De7WwGxgb0daHPv04Msq67fNI44J9AYn7wDQDDHsQicmwJm5Wo7MDld6HZSszHJEvVf4ZhzeG1gs8o0dy43CecOsOzfpwaJKuCmlWeZZtfsJ3H8iVWm5zmke-nmw1ZeUCmvw_xfklVTlu6ZX3j_8pGRGrZOPOGNGQFcCsry3971uFcT9fCwbxx5P5sOhScNBi84CmzbVyy8jsT6JU-Vtnf8c48MLA","email":"serviceaccount@example.org","auth":"c2VydmljZWFjY291bnQ6ZXlKaGJHY2lPaUpTVXpJMU5pSXNJblI1Y0NJNklrcFhWQ0o5LmV5SnBjM01pT2lKcmRXSmxjbTVsZEdWekwzTmxjblpwWTJWaFkyTnZkVzUwSWl3aWEzVmlaWEp1WlhSbGN5NXBieTl6WlhKMmFXTmxZV05qYjNWdWRDOXVZVzFsYzNCaFkyVWlPaUpsZUhSbGJtUmxaQzEwWlhOMExYSmxaMmx6ZEhKNUxXMXBaM0poZEdsdmJpMXFjR3cyWnkwMk9HUnpPU0lzSW10MVltVnlibVYwWlhNdWFXOHZjMlZ5ZG1salpXRmpZMjkxYm5RdmMyVmpjbVYwTG01aGJXVWlPaUppZFdsc1pHVnlMWFJ2YTJWdUxXcHFjemw0SWl3aWEzVmlaWEp1WlhSbGN5NXBieTl6WlhKMmFXTmxZV05qYjNWdWRDOXpaWEoyYVdObExXRmpZMjkxYm5RdWJtRnRaU0k2SW1KMWFXeGtaWElpTENKcmRXSmxjbTVsZEdWekxtbHZMM05sY25acFkyVmhZMk52ZFc1MEwzTmxjblpwWTJVdFlXTmpiM1Z1ZEM1MWFXUWlPaUpoTmpReE9XTmlaUzA1TmpRMUxURXhaVGt0T1dZNVpDMHdaVGt4TVRBek5USXdNVFlpTENKemRXSWlPaUp6ZVhOMFpXMDZjMlZ5ZG1salpXRmpZMjkxYm5RNlpYaDBaVzVrWldRdGRHVnpkQzF5WldkcGMzUnllUzF0YVdkeVlYUnBiMjR0YW5Cc05tY3ROamhrY3prNlluVnBiR1JsY2lKOS5mOEtZaXRqcUFVWk5FMEJj
T21MUGdxTjY4RWVkNG9Rc1V1WVE5MzNtQ0FBLVBzZ3g5NVFwVWtSQ3JWRzNHOWJscldqQ1FFaF9sWW9jSm02Qlkxb3NsUXAxUHROYlc5Tl8xaXBMdHRNRTMzQlBrdThzeHk4OF9GeGY1RGU3V3dHeGdiMGRhSFB2MDRNc3E2N2ZOSTQ0SjlBWW43d0RRRERIc1FpY213Sm01V283TURsZDZIWlNzekhKRXZWZjRaaHplRzFnczhvMGR5NDNDZWNPc096ZnB3YUpLdUNtbFdlWlp0ZnNKM0g4aVZXbTV6bWtlLW5tdzFaZVVDbXZ3X3hma2xWVGx1NlpYM2pfOHBHUkdyWk9QT0dOR1FGY0NzcnkzOTcxdUZjVDlmQ3dieHg1UDVzT2hTY05CaTg0Q216YlZ5eThqc1Q2SlUtVnRuZjhjNDhNTEE="},"docker-registry.default.svc:5000":{"username":"serviceaccount","password":"eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJleHRlbmRlZC10ZXN0LXJlZ2lzdHJ5LW1pZ3JhdGlvbi1qcGw2Zy02OGRzOSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJidWlsZGVyLXRva2VuLWpqczl4Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImJ1aWxkZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJhNjQxOWNiZS05NjQ1LTExZTktOWY5ZC0wZTkxMTAzNTIwMTYiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZXh0ZW5kZWQtdGVzdC1yZWdpc3RyeS1taWdyYXRpb24tanBsNmctNjhkczk6YnVpbGRlciJ9.f8KYitjqAUZNE0BcOmLPgqN68Eed4oQsUuYQ933mCAA-Psgx95QpUkRCrVG3G9blrWjCQEh_lYocJm6BY1oslQp1PtNbW9N_1ipLttME33BPku8sxy88_Fxf5De7WwGxgb0daHPv04Msq67fNI44J9AYn7wDQDDHsQicmwJm5Wo7MDld6HZSszHJEvVf4ZhzeG1gs8o0dy43CecOsOzfpwaJKuCmlWeZZtfsJ3H8iVWm5zmke-nmw1ZeUCmvw_xfklVTlu6ZX3j_8pGRGrZOPOGNGQFcCsry3971uFcT9fCwbxx5P5sOhScNBi84CmzbVyy8jsT6JU-Vtnf8c48MLA","email":"serviceaccount@example.org","auth":"c2VydmljZWFjY291bnQ6ZXlKaGJHY2lPaUpTVXpJMU5pSXNJblI1Y0NJNklrcFhWQ0o5LmV5SnBjM01pT2lKcmRXSmxjbTVsZEdWekwzTmxjblpwWTJWaFkyTnZkVzUwSWl3aWEzVmlaWEp1WlhSbGN5NXBieTl6WlhKMmFXTmxZV05qYjNWdWRDOXVZVzFsYzNCaFkyVWlPaUpsZUhSbGJtUmxaQzEwWlhOMExYSmxaMmx6ZEhKNUxXMXBaM0poZEdsdmJpMXFjR3cyWnkwMk9HUnpPU0lzSW10MVltVnlibVYwWlhNdWFXOHZjMlZ5ZG1salpXRmpZMjkxYm5RdmMyVmpjbVYwTG01aGJXVWlPaUppZFdsc1pHVnlMWFJ2YTJWdUxXcHFjemw0SWl3aWEzVmlaWEp1WlhSbGN5NXBieTl6WlhKMmFXTmxZV05qYjNWdWRDOXpaWEoyYVdObExXRmpZMjkxYm5RdWJtRnRaU0
k2SW1KMWFXeGtaWElpTENKcmRXSmxjbTVsZEdWekxtbHZMM05sY25acFkyVmhZMk52ZFc1MEwzTmxjblpwWTJVdFlXTmpiM1Z1ZEM1MWFXUWlPaUpoTmpReE9XTmlaUzA1TmpRMUxURXhaVGt0T1dZNVpDMHdaVGt4TVRBek5USXdNVFlpTENKemRXSWlPaUp6ZVhOMFpXMDZjMlZ5ZG1salpXRmpZMjkxYm5RNlpYaDBaVzVrWldRdGRHVnpkQzF5WldkcGMzUnllUzF0YVdkeVlYUnBiMjR0YW5Cc05tY3ROamhrY3prNlluVnBiR1JsY2lKOS5mOEtZaXRqcUFVWk5FMEJjT21MUGdxTjY4RWVkNG9Rc1V1WVE5MzNtQ0FBLVBzZ3g5NVFwVWtSQ3JWRzNHOWJscldqQ1FFaF9sWW9jSm02Qlkxb3NsUXAxUHROYlc5Tl8xaXBMdHRNRTMzQlBrdThzeHk4OF9GeGY1RGU3V3dHeGdiMGRhSFB2MDRNc3E2N2ZOSTQ0SjlBWW43d0RRRERIc1FpY213Sm01V283TURsZDZIWlNzekhKRXZWZjRaaHplRzFnczhvMGR5NDNDZWNPc096ZnB3YUpLdUNtbFdlWlp0ZnNKM0g4aVZXbTV6bWtlLW5tdzFaZVVDbXZ3X3hma2xWVGx1NlpYM2pfOHBHUkdyWk9QT0dOR1FGY0NzcnkzOTcxdUZjVDlmQ3dieHg1UDVzT2hTY05CaTg0Q216YlZ5eThqc1Q2SlUtVnRuZjhjNDhNTEE="}} STEP: json unmarshal err <nil> STEP: found auth true with auth cfg len 1 STEP: dockercfg with svrAddr 172.30.191.57:5000 user serviceaccount pass eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJleHRlbmRlZC10ZXN0LXJlZ2lzdHJ5LW1pZ3JhdGlvbi1qcGw2Zy02OGRzOSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJidWlsZGVyLXRva2VuLWpqczl4Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImJ1aWxkZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJhNjQxOWNiZS05NjQ1LTExZTktOWY5ZC0wZTkxMTAzNTIwMTYiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZXh0ZW5kZWQtdGVzdC1yZWdpc3RyeS1taWdyYXRpb24tanBsNmctNjhkczk6YnVpbGRlciJ9.f8KYitjqAUZNE0BcOmLPgqN68Eed4oQsUuYQ933mCAA-Psgx95QpUkRCrVG3G9blrWjCQEh_lYocJm6BY1oslQp1PtNbW9N_1ipLttME33BPku8sxy88_Fxf5De7WwGxgb0daHPv04Msq67fNI44J9AYn7wDQDDHsQicmwJm5Wo7MDld6HZSszHJEvVf4ZhzeG1gs8o0dy43CecOsOzfpwaJKuCmlWeZZtfsJ3H8iVWm5zmke-nmw1ZeUCmvw_xfklVTlu6ZX3j_8pGRGrZOPOGNGQFcCsry3971uFcT9fCwbxx5P5sOhScNBi84CmzbVyy8jsT6JU-Vtnf8c48MLA email serviceaccount@example.org STEP: removing image... 
STEP: Deleting images and image streams in project "extended-test-registry-migration-jpl6g-68ds9" [AfterEach] [Conformance][registry][migration] manifest migration from etcd to registry storage /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121 Jun 24 02:02:33.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "extended-test-registry-migration-jpl6g-68ds9" for this suite. Jun 24 02:02:43.408: INFO: namespace: extended-test-registry-migration-jpl6g-68ds9, resource: bindings, ignored listing per whitelist • [SLOW TEST:13.719 seconds] [Conformance][registry][migration] manifest migration from etcd to registry storage /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/registry/registry.go:123 registry can get access to manifest [local] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/registry/registry.go:122 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [k8s.io] Federated ingresses [Feature:Federation] [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 Federated Ingresses /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/ingress.go:274 should not be deleted from underlying clusters when OrphanDependents is nil 
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e_federation/ingress.go:220 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [builds][pruning] prune builds based on settings in the buildconfig [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:122 should prune canceled builds based on the failedBuildsHistoryLimit setting /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/build_pruning.go:121 skipping tests not in the Origin conformance suite /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361 ------------------------------ [BeforeEach] [Top Level] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [k8s.io] Pod Disks [BeforeEach] /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656 should schedule a pod w/ a RW PD, ungracefully remove it, then schedule it on another host [Slow] [Volume] 
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/pd.go:134
  skipping tests not in the Origin conformance suite
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
deploymentconfigs rolled back [Conformance] should rollback to an older deployment
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:770
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] deploymentconfigs
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 02:00:29.067: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 02:00:29.086: INFO: configPath is now "/tmp/extended-test-cli-deployment-n81dj-fx43r-user.kubeconfig"
Jun 24 02:00:29.086: INFO: The user is now "extended-test-cli-deployment-n81dj-fx43r-user"
Jun 24 02:00:29.086: INFO: Creating project "extended-test-cli-deployment-n81dj-fx43r"
STEP: Waiting for a default service account to be provisioned in namespace
[It] should rollback to an older deployment
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:770
Jun 24 02:00:29.201: INFO: Running 'oc create --config=/tmp/extended-test-cli-deployment-n81dj-fx43r-user.kubeconfig --namespace=extended-test-cli-deployment-n81dj-fx43r -f /tmp/fixture-testdata-dir824123364/test/extended/testdata/deployments/deployment-simple.yaml -o name'
Jun 24 02:00:49.933: INFO: Latest rollout of dc/deployment-simple (rc/deployment-simple-1) is complete.
Jun 24 02:00:49.934: INFO: Running 'oc rollout --config=/tmp/extended-test-cli-deployment-n81dj-fx43r-user.kubeconfig --namespace=extended-test-cli-deployment-n81dj-fx43r latest deployment-simple'
STEP: verifying that we are on the second version
Jun 24 02:00:50.215: INFO: Running 'oc get --config=/tmp/extended-test-cli-deployment-n81dj-fx43r-user.kubeconfig --namespace=extended-test-cli-deployment-n81dj-fx43r deploymentconfig/deployment-simple --output=jsonpath="{.status.latestVersion}"'
Jun 24 02:01:24.536: INFO: Latest rollout of dc/deployment-simple (rc/deployment-simple-2) is complete.
STEP: verifying that we can rollback
Jun 24 02:01:24.536: INFO: Running 'oc rollout --config=/tmp/extended-test-cli-deployment-n81dj-fx43r-user.kubeconfig --namespace=extended-test-cli-deployment-n81dj-fx43r undo deploymentconfig/deployment-simple'
Jun 24 02:01:58.381: INFO: Latest rollout of dc/deployment-simple (rc/deployment-simple-3) is complete.
STEP: verifying that we are on the third version
Jun 24 02:01:58.382: INFO: Running 'oc get --config=/tmp/extended-test-cli-deployment-n81dj-fx43r-user.kubeconfig --namespace=extended-test-cli-deployment-n81dj-fx43r deploymentconfig/deployment-simple --output=jsonpath="{.status.latestVersion}"'
[AfterEach] rolled back [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:731
[AfterEach] deploymentconfigs
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 02:01:58.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "extended-test-cli-deployment-n81dj-fx43r" for this suite.
Jun 24 02:02:48.779: INFO: namespace: extended-test-cli-deployment-n81dj-fx43r, resource: bindings, ignored listing per whitelist

• [SLOW TEST:139.740 seconds]
deploymentconfigs
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:979
  rolled back [Conformance]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:771
    should rollback to an older deployment
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/deployments/deployments.go:770
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] ServiceLoadBalancer [Feature:ServiceLoadBalancer] [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  should support simple GET on Ingress ips
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/serviceloadbalancers.go:239
  skipping tests not in the Origin conformance suite
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[builds][Slow] testing build configuration hooks [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/hooks.go:79
  testing postCommit hook
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/hooks.go:77
    failing postCommit script
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/builds/hooks.go:59
    skipping tests not in the Origin conformance suite
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] Kubectl client [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  [k8s.io] Kubectl apply
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
    should apply a new configuration to an existing RC
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/kubectl.go:700
    skipping tests not in the Origin conformance suite
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[k8s.io] Projected should set DefaultMode on files [Conformance] [Volume]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:822
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Projected
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 02:02:39.636: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 02:02:39.711: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Projected
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:803
[It] should set DefaultMode on files [Conformance] [Volume]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:822
STEP: Creating a pod to test downward API volume plugin
Jun 24 02:02:39.804: INFO: Waiting up to 5m0s for pod downwardapi-volume-ac3a7490-9645-11e9-a527-0e9110352016 status to be success or failure
Jun 24 02:02:39.817: INFO: Waiting for pod downwardapi-volume-ac3a7490-9645-11e9-a527-0e9110352016 in namespace 'e2e-tests-projected-xl6jn' status to be 'success or failure'(found phase: "Pending", readiness: false) (13.49194ms elapsed)
Jun 24 02:02:41.820: INFO: Waiting for pod downwardapi-volume-ac3a7490-9645-11e9-a527-0e9110352016 in namespace 'e2e-tests-projected-xl6jn' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.015874559s elapsed)
STEP: Saw pod success
Jun 24 02:02:43.830: INFO: Trying to get logs from node 172.18.11.204 pod downwardapi-volume-ac3a7490-9645-11e9-a527-0e9110352016 container client-container: <nil>
STEP: delete the pod
Jun 24 02:02:43.851: INFO: Waiting for pod downwardapi-volume-ac3a7490-9645-11e9-a527-0e9110352016 to disappear
Jun 24 02:02:43.856: INFO: Pod downwardapi-volume-ac3a7490-9645-11e9-a527-0e9110352016 no longer exists
[AfterEach] [k8s.io] Projected
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 02:02:43.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-xl6jn" for this suite.
Jun 24 02:02:53.942: INFO: namespace: e2e-tests-projected-xl6jn, resource: bindings, ignored listing per whitelist

• [SLOW TEST:14.410 seconds]
[k8s.io] Projected
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  should set DefaultMode on files [Conformance] [Volume]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:822
------------------------------
[k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a service.
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/resource_quota.go:93
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] ResourceQuota
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 02:02:43.459: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 02:02:43.529: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service.
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/resource_quota.go:93
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [k8s.io] ResourceQuota
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 02:02:49.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-resourcequota-01j8v" for this suite.
Jun 24 02:02:59.783: INFO: namespace: e2e-tests-resourcequota-01j8v, resource: bindings, ignored listing per whitelist

• [SLOW TEST:16.404 seconds]
[k8s.io] ResourceQuota
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  should create a ResourceQuota and capture the life of a service.
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/resource_quota.go:93
------------------------------
[k8s.io] Projected should update annotations on modification [Conformance] [Volume]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:917
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Projected
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 02:02:24.295: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 02:02:24.403: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Projected
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:803
[It] should update annotations on modification [Conformance] [Volume]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:917
STEP: Creating the pod
Jun 24 02:02:31.018: INFO: Successfully updated pod "annotationupdatea319fcf9-9645-11e9-a60e-0e9110352016"
[AfterEach] [k8s.io] Projected
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 02:02:35.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-gb3s4" for this suite.
Jun 24 02:03:00.449: INFO: namespace: e2e-tests-projected-gb3s4, resource: bindings, ignored listing per whitelist

• [SLOW TEST:36.154 seconds]
[k8s.io] Projected
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  should update annotations on modification [Conformance] [Volume]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/projected.go:917
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
NetworkPolicy [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:499
  when using a plugin that implements NetworkPolicy
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/util.go:285
    should enforce policy based on PodSelector [Feature:NetworkPolicy]
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/networking/networkpolicy.go:146
    skipping tests not in the Origin conformance suite
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[k8s.io] Downward API should provide pod IP as an env var [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downward_api.go:84
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
[BeforeEach] [k8s.io] Downward API
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:120
STEP: Creating a kubernetes client
Jun 24 02:02:54.047: INFO: >>> kubeConfig: /tmp/openshift/core/openshift.local.config/master/admin.kubeconfig
STEP: Building a namespace api object
Jun 24 02:02:54.115: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod IP as an env var [Conformance]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downward_api.go:84
STEP: Creating a pod to test downward api env vars
Jun 24 02:02:54.186: INFO: Waiting up to 5m0s for pod downward-api-b4cda908-9645-11e9-a527-0e9110352016 status to be success or failure
Jun 24 02:02:54.188: INFO: Waiting for pod downward-api-b4cda908-9645-11e9-a527-0e9110352016 in namespace 'e2e-tests-downward-api-mkt5b' status to be 'success or failure'(found phase: "Pending", readiness: false) (1.988611ms elapsed)
Jun 24 02:02:56.191: INFO: Waiting for pod downward-api-b4cda908-9645-11e9-a527-0e9110352016 in namespace 'e2e-tests-downward-api-mkt5b' status to be 'success or failure'(found phase: "Pending", readiness: false) (2.004654303s elapsed)
Jun 24 02:02:58.193: INFO: Waiting for pod downward-api-b4cda908-9645-11e9-a527-0e9110352016 in namespace 'e2e-tests-downward-api-mkt5b' status to be 'success or failure'(found phase: "Pending", readiness: false) (4.007252028s elapsed)
STEP: Saw pod success
Jun 24 02:03:00.198: INFO: Trying to get logs from node 172.18.11.204 pod downward-api-b4cda908-9645-11e9-a527-0e9110352016 container dapi-container: <nil>
STEP: delete the pod
Jun 24 02:03:00.219: INFO: Waiting for pod downward-api-b4cda908-9645-11e9-a527-0e9110352016 to disappear
Jun 24 02:03:00.222: INFO: Pod downward-api-b4cda908-9645-11e9-a527-0e9110352016 no longer exists
[AfterEach] [k8s.io] Downward API
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:121
Jun 24 02:03:00.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-mkt5b" for this suite.
Jun 24 02:03:10.429: INFO: namespace: e2e-tests-downward-api-mkt5b, resource: bindings, ignored listing per whitelist

• [SLOW TEST:16.480 seconds]
[k8s.io] Downward API
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  should provide pod IP as an env var [Conformance]
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downward_api.go:84
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[image_ecosystem][mysql][Slow] openshift mysql replication [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/mysql_replica.go:202
  MySQL replication template for 5.5: https://raw.githubusercontent.com/sclorg/mysql-container/master/5.5/examples/replica/mysql_replica.json
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/image_ecosystem/mysql_replica.go:200
  skipping tests not in the Origin conformance suite
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:361
------------------------------
[BeforeEach] [Top Level]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:51
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[k8s.io] kubelet [BeforeEach]
/tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
  [k8s.io] Clean up pods on node
  /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:656
    kubelet should be able to delete 10 pods per node in 1m0s.
    /tmp/openshift/build-rpm-release/tito/rpmbuild-origin93hL1f/BUILD/origin-3.6.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/ku