Started by remote host 50.17.198.52
[EnvInject] - Loading node environment variables.
Building in workspace /var/lib/jenkins/jobs/test-origin-metrics/workspace
Run condition [And] enabling prebuild for step [BuilderChain]
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
OS_ROOT=/data/src/github.com/openshift/origin
INSTANCE_TYPE=c4.xlarge
GIT_BRANCH=master
GITHUB_REPO=openshift
OS=centos7
TESTNAME=metrics
[EnvInject] - Variables injected successfully.
[workspace] $ /bin/sh -xe /tmp/jenkins218580410056815004.sh
+ true
+ approve.sh origin-metrics master none
+ vagrant origin-local-checkout --replace --repo origin-metrics
You don't seem to have the GOPATH environment variable set on your system.
See: 'go help gopath' for more details about GOPATH.
Waiting for the cloning process to finish
Checking repo integrity for /var/lib/jenkins/jobs/test-origin-metrics/workspace/origin-metrics
~/jobs/test-origin-metrics/workspace/origin-metrics ~/jobs/test-origin-metrics/workspace
# On branch master
# Your branch is ahead of 'origin/master' by 1 commit.
#   (use "git push" to publish your local commits)
#
nothing to commit, working directory clean
~/jobs/test-origin-metrics/workspace
Replacing: /var/lib/jenkins/jobs/test-origin-metrics/workspace/origin-metrics
~/jobs/test-origin-metrics/workspace/origin-metrics ~/jobs/test-origin-metrics/workspace
Already on 'master'
Your branch is ahead of 'origin/master' by 1 commit.
  (use "git push" to publish your local commits)
HEAD is now at 2242ca1 Merge pull request #418 from jsanda/bump-hawkular-metrics
Deleted branch tpr_keys_jsanda (was cc59617).
~/jobs/test-origin-metrics/workspace
Origin repositories cloned into /var/lib/jenkins/jobs/test-origin-metrics/workspace
+ '[' -n 419 ']'
+ set +x
*****Locally Merging Pull Request: https://github.com/openshift/origin-metrics/pull/419
+ test_pull_requests --local_merge_pull_request 419 --repo origin-metrics --config /var/lib/jenkins/.test_pull_requests_metrics.json
WARNING: Cache won't persist between runs.
*** Starting with empty cache
Rate limit remaining: 4963
Checking if current base repo commit ID matches what we expect
Local merging pull request #419 for repo 'origin-metrics' against base repo commit id 2242ca12746ea981b6d3ff75ec1273999555ba2a from pr branch commit id c18a0ad5f7059b66c64969cfb6963d2b4989d883
+ pushd origin-metrics
+ git checkout master
Already on 'master'
+ git checkout -b tpr_keys_jsanda
Switched to a new branch 'tpr_keys_jsanda'
+ git pull git@github.com:jsanda/origin-metrics.git keys
From github.com:jsanda/origin-metrics
 * branch keys -> FETCH_HEAD
+ git pull git@github.com:jsanda/origin-metrics.git keys --tags
From github.com:jsanda/origin-metrics
 * branch keys -> FETCH_HEAD
+ git checkout master
Switched to branch 'master'
+ git merge tpr_keys_jsanda
+ git submodule update --recursive
+ popd
Rate limit resets in: 2565s, at 2018-05-07T18:06:00-04:00 (1525730760)
Rate limit remaining: 4959; delta: 4
Rate limit is not too low. Would run.
Cache stats: Hits: 0, Misses: 4, Bypass: 0 (raw): [:fresh, 0, :invalid, 0, :miss, 4, :unacceptable, 0, :valid, 0]
Done
~/jobs/test-origin-metrics/workspace/origin-metrics ~/jobs/test-origin-metrics/workspace
Updating 2242ca1..c18a0ad
Fast-forward
 hack/build-images.sh | 2 +-
 hack/keys/create.sh | 50 ---------------
 hack/keys/hawkular/hawkular-noca.pem | 46 -------------
 hack/keys/hawkular/hawkular.cert | 37 -----------
 hack/keys/hawkular/hawkular.key | 27 --------
 hack/keys/hawkular/hawkular.pem | 64 ------------------
 .../hawkularWildCard/hawkularWildCard-noca.pem | 46 -------------
 hack/keys/hawkularWildCard/hawkularWildCard.cert | 37 -----------
 hack/keys/hawkularWildCard/hawkularWildCard.key | 27 --------
 hack/keys/hawkularWildCard/hawkularWildCard.pem | 64 ------------------
 hack/keys/intermediary_ca/ca-chain.pem | 41 ------------
 hack/keys/intermediary_ca/hawkular-metrics.cert | 48 --------------
 hack/keys/intermediary_ca/hawkular-metrics.key | 27 --------
 hack/keys/intermediary_ca/hawkular-metrics.pem | 75 ----------------------
 hack/keys/signer.ca | 18 ------
 hack/push-release.sh | 2 +-
 hack/tests/test_default_deploy.sh | 65 -------------------
 17 files changed, 2 insertions(+), 674 deletions(-)
 delete mode 100755 hack/keys/create.sh
 delete mode 100644 hack/keys/hawkular/hawkular-noca.pem
 delete mode 100644 hack/keys/hawkular/hawkular.cert
 delete mode 100644 hack/keys/hawkular/hawkular.key
 delete mode 100644 hack/keys/hawkular/hawkular.pem
 delete mode 100644 hack/keys/hawkularWildCard/hawkularWildCard-noca.pem
 delete mode 100644 hack/keys/hawkularWildCard/hawkularWildCard.cert
 delete mode 100644 hack/keys/hawkularWildCard/hawkularWildCard.key
 delete mode 100644 hack/keys/hawkularWildCard/hawkularWildCard.pem
 delete mode 100644 hack/keys/intermediary_ca/ca-chain.pem
 delete mode 100644 hack/keys/intermediary_ca/hawkular-metrics.cert
 delete mode 100644 hack/keys/intermediary_ca/hawkular-metrics.key
 delete mode 100644 hack/keys/intermediary_ca/hawkular-metrics.pem
 delete mode 100644 hack/keys/signer.ca
Already up-to-date.
Updating 2242ca1..c18a0ad
Fast-forward
 hack/build-images.sh | 2 +-
 hack/keys/create.sh | 50 ---------------
 hack/keys/hawkular/hawkular-noca.pem | 46 -------------
 hack/keys/hawkular/hawkular.cert | 37 -----------
 hack/keys/hawkular/hawkular.key | 27 --------
 hack/keys/hawkular/hawkular.pem | 64 ------------------
 .../hawkularWildCard/hawkularWildCard-noca.pem | 46 -------------
 hack/keys/hawkularWildCard/hawkularWildCard.cert | 37 -----------
 hack/keys/hawkularWildCard/hawkularWildCard.key | 27 --------
 hack/keys/hawkularWildCard/hawkularWildCard.pem | 64 ------------------
 hack/keys/intermediary_ca/ca-chain.pem | 41 ------------
 hack/keys/intermediary_ca/hawkular-metrics.cert | 48 --------------
 hack/keys/intermediary_ca/hawkular-metrics.key | 27 --------
 hack/keys/intermediary_ca/hawkular-metrics.pem | 75 ----------------------
 hack/keys/signer.ca | 18 ------
 hack/push-release.sh | 2 +-
 hack/tests/test_default_deploy.sh | 65 -------------------
 17 files changed, 2 insertions(+), 674 deletions(-)
 delete mode 100755 hack/keys/create.sh
 delete mode 100644 hack/keys/hawkular/hawkular-noca.pem
 delete mode 100644 hack/keys/hawkular/hawkular.cert
 delete mode 100644 hack/keys/hawkular/hawkular.key
 delete mode 100644 hack/keys/hawkular/hawkular.pem
 delete mode 100644 hack/keys/hawkularWildCard/hawkularWildCard-noca.pem
 delete mode 100644 hack/keys/hawkularWildCard/hawkularWildCard.cert
 delete mode 100644 hack/keys/hawkularWildCard/hawkularWildCard.key
 delete mode 100644 hack/keys/hawkularWildCard/hawkularWildCard.pem
 delete mode 100644 hack/keys/intermediary_ca/ca-chain.pem
 delete mode 100644 hack/keys/intermediary_ca/hawkular-metrics.cert
 delete mode 100644 hack/keys/intermediary_ca/hawkular-metrics.key
 delete mode 100644 hack/keys/intermediary_ca/hawkular-metrics.pem
 delete mode 100644 hack/keys/signer.ca
~/jobs/test-origin-metrics/workspace
+ vagrant origin-local-checkout --replace
You don't seem to have the GOPATH environment variable set on your system.
See: 'go help gopath' for more details about GOPATH.
Waiting for the cloning process to finish
Checking repo integrity for /var/lib/jenkins/jobs/test-origin-metrics/workspace/origin
~/jobs/test-origin-metrics/workspace/origin ~/jobs/test-origin-metrics/workspace
# On branch master
# Untracked files:
#   (use "git add <file>..." to include in what will be committed)
#
#	artifacts/
nothing added to commit but untracked files present (use "git add" to track)
~/jobs/test-origin-metrics/workspace
Replacing: /var/lib/jenkins/jobs/test-origin-metrics/workspace/origin
~/jobs/test-origin-metrics/workspace/origin ~/jobs/test-origin-metrics/workspace
Already on 'master'
HEAD is now at fc6ab10 Merge pull request #19616 from deads2k/cli-34-reap
Removing .vagrant-openshift.json
Removing .vagrant/
Removing artifacts/
fatal: branch name required
~/jobs/test-origin-metrics/workspace
Origin repositories cloned into /var/lib/jenkins/jobs/test-origin-metrics/workspace
+ pushd origin
~/jobs/test-origin-metrics/workspace/origin ~/jobs/test-origin-metrics/workspace
+ INSTANCE_NAME=origin_metrics-centos7-330
+ GIT_URL=https://github.com/openshift/origin-metrics
++ echo https://github.com/openshift/origin-metrics
++ sed s,https://,,
+ OMETRICS_LOCAL_PATH=github.com/openshift/origin-metrics
+ ORIGIN_METRICS_DIR=/data/src/github.com/openshift/origin-metrics
+ env
+ sort
_=/bin/env
BUILD_CAUSE=REMOTECAUSE
BUILD_CAUSE_REMOTECAUSE=true
BUILD_DISPLAY_NAME=#330
BUILD_ID=330
BUILD_NUMBER=330
BUILD_TAG=jenkins-test-origin-metrics-330
BUILD_URL=https://ci.openshift.redhat.com/jenkins/job/test-origin-metrics/330/
EXECUTOR_NUMBER=15
GIT_BRANCH=master
GITHUB_REPO=openshift
HOME=/var/lib/jenkins
HUDSON_COOKIE=12fa49f8-f426-4026-b9f5-da9265782665
HUDSON_HOME=/var/lib/jenkins
HUDSON_SERVER_COOKIE=ec11f8b2841c966f
HUDSON_URL=https://ci.openshift.redhat.com/jenkins/
INSTANCE_TYPE=c4.xlarge
JENKINS_HOME=/var/lib/jenkins
JENKINS_SERVER_COOKIE=ec11f8b2841c966f
JENKINS_URL=https://ci.openshift.redhat.com/jenkins/
JOB_BASE_NAME=test-origin-metrics
JOB_DISPLAY_URL=https://ci.openshift.redhat.com/jenkins/job/test-origin-metrics/display/redirect
JOB_NAME=test-origin-metrics
JOB_URL=https://ci.openshift.redhat.com/jenkins/job/test-origin-metrics/
LANG=en_US.UTF-8
LOGNAME=jenkins
MERGE_SEVERITY=none
MERGE=true
METRICS_PULL_ID=419
NLSPATH=/usr/dt/lib/nls/msg/%L/%N.cat
NODE_LABELS=master
NODE_NAME=master
OLDPWD=/var/lib/jenkins/jobs/test-origin-metrics/workspace
OS=centos7
OS_ROOT=/data/src/github.com/openshift/origin
PATH=/sbin:/usr/sbin:/bin:/usr/bin
PWD=/var/lib/jenkins/jobs/test-origin-metrics/workspace/origin
ROOT_BUILD_CAUSE=REMOTECAUSE
ROOT_BUILD_CAUSE_REMOTECAUSE=true
RUN_CHANGES_DISPLAY_URL=https://ci.openshift.redhat.com/jenkins/job/test-origin-metrics/330/display/redirect?page=changes
RUN_DISPLAY_URL=https://ci.openshift.redhat.com/jenkins/job/test-origin-metrics/330/display/redirect
SHELL=/bin/bash
SHLVL=3
TESTNAME=metrics
USER=jenkins
WORKSPACE=/var/lib/jenkins/jobs/test-origin-metrics/workspace
XFILESEARCHPATH=/usr/dt/app-defaults/%L/Dt
+ vagrant origin-init --stage inst --os centos7 --instance-type c4.xlarge origin_metrics-centos7-330
Reading AWS credentials from /var/lib/jenkins/.awscred
Searching devenv-centos7_* for latest base AMI (required_name_tag=)
Found: ami-2fb42b39 (devenv-centos7_6170)
++ seq 0 2
+ for i in '$(seq 0 2)'
+ vagrant up --provider aws
Bringing machine 'openshiftdev' up with 'aws' provider...
==> openshiftdev: Warning! The AWS provider doesn't support any of the Vagrant
==> openshiftdev: high-level network configurations (`config.vm.network`). They
==> openshiftdev: will be silently ignored.
==> openshiftdev: Warning! You're launching this instance into a VPC without an
==> openshiftdev: elastic IP. Please verify you're properly connected to a VPN so
==> openshiftdev: you can access this machine, otherwise Vagrant will not be able
==> openshiftdev: to SSH into it.
==> openshiftdev: Launching an instance with the following settings...
==> openshiftdev: -- Type: c4.xlarge
==> openshiftdev: -- AMI: ami-2fb42b39
==> openshiftdev: -- Region: us-east-1
==> openshiftdev: -- Keypair: libra
==> openshiftdev: -- Subnet ID: subnet-cf57c596
==> openshiftdev: -- User Data: yes
==> openshiftdev: -- User Data:
==> openshiftdev: # cloud-config
==> openshiftdev:
==> openshiftdev: growpart:
==> openshiftdev:   mode: auto
==> openshiftdev:   devices: ['/']
==> openshiftdev: runcmd:
==> openshiftdev: - [ sh, -xc, "sed -i s/^Defaults.*requiretty/#Defaults requiretty/g /etc/sudoers"]
==> openshiftdev:
==> openshiftdev: -- Block Device Mapping: [{"DeviceName"=>"/dev/sda1", "Ebs.VolumeSize"=>25, "Ebs.VolumeType"=>"gp2"}, {"DeviceName"=>"/dev/sdb", "Ebs.VolumeSize"=>35, "Ebs.VolumeType"=>"gp2"}]
==> openshiftdev: -- Terminate On Shutdown: false
==> openshiftdev: -- Monitoring: false
==> openshiftdev: -- EBS optimized: false
==> openshiftdev: -- Assigning a public IP address in a VPC: false
==> openshiftdev: Waiting for instance to become "ready"...
==> openshiftdev: Waiting for SSH to become available...
==> openshiftdev: Machine is booted and ready for use!
==> openshiftdev: Running provisioner: setup (shell)...
    openshiftdev: Running: /tmp/vagrant-shell20180507-13560-mnhrz7.sh
==> openshiftdev: Host: ec2-35-172-140-44.compute-1.amazonaws.com
+ break
+ vagrant sync-origin-metrics -c -s
Running ssh/sudo command 'rm -rf /data/src/github.com/openshift/origin-metrics-bare; ' with timeout 14400. Attempt #0
Running ssh/sudo command 'mkdir -p /centos/.ssh; mv /tmp/file20180507-15419-1nkzfrw /centos/.ssh/config && chown centos:centos /centos/.ssh/config && chmod 0600 /centos/.ssh/config' with timeout 14400. Attempt #0
Running ssh/sudo command 'mkdir -p /data/src/github.com/openshift/' with timeout 14400. Attempt #0
Running ssh/sudo command 'mkdir -p /data/src/github.com/openshift/builder && chown -R centos:centos /data/src/github.com/openshift/' with timeout 14400. Attempt #0
Running ssh/sudo command 'set -e
rm -fr /data/src/github.com/openshift/origin-metrics-bare;
if [ ! -d /data/src/github.com/openshift/origin-metrics-bare ]; then
git clone --quiet --bare https://github.com/openshift/origin-metrics.git /data/src/github.com/openshift/origin-metrics-bare >/dev/null
fi
' with timeout 14400. Attempt #0
Synchronizing local sources
Synchronizing [origin-metrics@master] from origin-metrics...
Warning: Permanently added '35.172.140.44' (ECDSA) to the list of known hosts.
Running ssh/sudo command 'set -e
if [ -d /data/src/github.com/openshift/origin-metrics-bare ]; then
rm -rf /data/src/github.com/openshift/origin-metrics
echo 'Cloning origin-metrics ...'
git clone --quiet --recurse-submodules /data/src/github.com/openshift/origin-metrics-bare /data/src/github.com/openshift/origin-metrics
else
MISSING_REPO+='origin-metrics-bare'
fi
if [ -n "$MISSING_REPO" ]; then
echo 'Missing required upstream repositories:'
echo $MISSING_REPO
echo 'To fix, execute command: vagrant clone-upstream-repos'
fi
' with timeout 14400. Attempt #0
Cloning origin-metrics ...
+ vagrant ssh -c 'if [ ! -d /tmp/openshift ] ; then mkdir /tmp/openshift ; fi ; sudo chmod 777 /tmp/openshift'
+ vagrant test-origin-metrics -d --env GIT_URL=https://github.com/openshift/origin-metrics --env GIT_BRANCH=master --env ORIGIN_METRICS_DIR=/data/src/github.com/openshift/origin-metrics --env OS_ROOT=/data/src/github.com/openshift/origin --env USE_LOCAL_SOURCE=true --env VERBOSE=1
***************************************************
Running GIT_URL=https://github.com/openshift/origin-metrics GIT_BRANCH=master ORIGIN_METRICS_DIR=/data/src/github.com/openshift/origin-metrics OS_ROOT=/data/src/github.com/openshift/origin USE_LOCAL_SOURCE=true VERBOSE=1 ./ci_test_every_pr.sh...
/data/src/github.com/openshift/origin /data/src/github.com/openshift/origin-metrics/hack/tests /data/src/github.com/openshift/origin-metrics/hack/tests
/data/src/github.com/openshift/origin-metrics /data/src/github.com/openshift/origin-metrics/hack/tests /data/src/github.com/openshift/origin-metrics/hack/tests
[INFO] Starting metrics tests at Mon May 7 21:25:50 UTC 2018
Generated new key pair as /tmp/openshift/ci_test_every_pr/openshift.local.config/master/serviceaccounts.public.key and /tmp/openshift/ci_test_every_pr/openshift.local.config/master/serviceaccounts.private.key
Generating node credentials ...
Created node config for 172.18.15.120 in /tmp/openshift/ci_test_every_pr/openshift.local.config/node-172.18.15.120
Wrote master config to: /tmp/openshift/ci_test_every_pr/openshift.local.config/master/master-config.yaml
Running hack/lib/start.sh:349: executing 'oc get --raw /healthz --config='/tmp/openshift/ci_test_every_pr/openshift.local.config/master/admin.kubeconfig'' expecting any result and text 'ok'; re-trying every 0.25s until completion or 80.000s...
SUCCESS after 2.167s: hack/lib/start.sh:349: executing 'oc get --raw /healthz --config='/tmp/openshift/ci_test_every_pr/openshift.local.config/master/admin.kubeconfig'' expecting any result and text 'ok'; re-trying every 0.25s until completion or 80.000s
Standard output from the command:
Standard error from the command:
The connection to the server 172.18.15.120:8443 was refused - did you specify the right host or port?
... repeated 2 times
Error from server (Forbidden): User "system:admin" cannot "get" on "/healthz"
... repeated 2 times
Error from server (InternalError): an error on the server ("[+]ping ok\n[-]poststarthook/bootstrap-controller failed: not finished\n[+]poststarthook/extensions/third-party-resources ok\n[-]poststarthook/ca-registration failed: not finished\nhealthz check failed") has prevented the request from succeeding
Running hack/lib/start.sh:350: executing 'oc get --raw https://172.18.15.120:10250/healthz --config='/tmp/openshift/ci_test_every_pr/openshift.local.config/master/admin.kubeconfig'' expecting any result and text 'ok'; re-trying every 0.5s until completion or 120.000s...
SUCCESS after 0.173s: hack/lib/start.sh:350: executing 'oc get --raw https://172.18.15.120:10250/healthz --config='/tmp/openshift/ci_test_every_pr/openshift.local.config/master/admin.kubeconfig'' expecting any result and text 'ok'; re-trying every 0.5s until completion or 120.000s
There was no output from the command.
Standard error from the command:
Error from server (InternalError): an error on the server ("[+]ping ok\n[-]poststarthook/bootstrap-controller failed: not finished\n[+]poststarthook/extensions/third-party-resources ok\n[-]poststarthook/ca-registration failed: not finished\nhealthz check failed") has prevented the request from succeeding
Running hack/lib/start.sh:351: executing 'oc get --raw /healthz/ready --config='/tmp/openshift/ci_test_every_pr/openshift.local.config/master/admin.kubeconfig'' expecting any result and text 'ok'; re-trying every 0.25s until completion or 80.000s...
SUCCESS after 1.220s: hack/lib/start.sh:351: executing 'oc get --raw /healthz/ready --config='/tmp/openshift/ci_test_every_pr/openshift.local.config/master/admin.kubeconfig'' expecting any result and text 'ok'; re-trying every 0.25s until completion or 80.000s
Standard output from the command:
ok
Standard error from the command:
Error from server (InternalError): an error on the server ("") has prevented the request from succeeding
... repeated 2 times
Running hack/lib/start.sh:352: executing 'oc get service kubernetes --namespace default --config='/tmp/openshift/ci_test_every_pr/openshift.local.config/master/admin.kubeconfig'' expecting success; re-trying every 0.25s until completion or 160.000s...
SUCCESS after 3.361s: hack/lib/start.sh:352: executing 'oc get service kubernetes --namespace default --config='/tmp/openshift/ci_test_every_pr/openshift.local.config/master/admin.kubeconfig'' expecting success; re-trying every 0.25s until completion or 160.000s
Standard output from the command:
NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)                 AGE
kubernetes   172.30.0.1   <none>        443/TCP,53/UDP,53/TCP   5s
There was no error output from the command.
Running hack/lib/start.sh:353: executing 'oc get --raw /api/v1/nodes/172.18.15.120 --config='/tmp/openshift/ci_test_every_pr/openshift.local.config/master/admin.kubeconfig'' expecting success; re-trying every 0.25s until completion or 80.000s...
SUCCESS after 0.170s: hack/lib/start.sh:353: executing 'oc get --raw /api/v1/nodes/172.18.15.120 --config='/tmp/openshift/ci_test_every_pr/openshift.local.config/master/admin.kubeconfig'' expecting success; re-trying every 0.25s until completion or 80.000s
Standard output from the command:
{"kind":"Node","apiVersion":"v1","metadata":{"name":"172.18.15.120","selfLink":"/api/v1/nodes172.18.15.120","uid":"43cdce2f-523d-11e8-aaf1-0eee037e39aa","resourceVersion":"459","creationTimestamp":"2018-05-07T21:26:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/hostname":"172.18.15.120"},"annotations":{"volumes.kubernetes.io/controller-managed-attach-detach":"true"}},"spec":{"externalID":"172.18.15.120","providerID":"aws:////i-0c4cd075bd3fb22bc"},"status":{"capacity":{"alpha.kubernetes.io/nvidia-gpu":"0","cpu":"4","memory":"7231292Ki","pods":"40"},"allocatable":{"alpha.kubernetes.io/nvidia-gpu":"0","cpu":"4","memory":"7231292Ki","pods":"40"},"conditions":[{"type":"OutOfDisk","status":"False","lastHeartbeatTime":"2018-05-07T21:26:11Z","lastTransitionTime":"2018-05-07T21:26:11Z","reason":"KubeletHasSufficientDisk","message":"kubelet has sufficient disk space available"},{"type":"MemoryPressure","status":"False","lastHeartbeatTime":"2018-05-07T21:26:11Z","lastTransitionTime":"2018-05-07T21:26:11Z","reason":"KubeletHasSufficientMemory","message":"kubelet has sufficient memory available"},{"type":"DiskPressure","status":"False","lastHeartbeatTime":"2018-05-07T21:26:11Z","lastTransitionTime":"2018-05-07T21:26:11Z","reason":"KubeletHasNoDiskPressure","message":"kubelet has no disk pressure"},{"type":"Ready","status":"False","lastHeartbeatTime":"2018-05-07T21:26:11Z","lastTransitionTime":"2018-05-07T21:26:11Z","reason":"KubeletNotReady","message":"container runtime is down"}],"addresses":[{"type":"LegacyHostIP","address":"172.18.15.120"},{"type":"InternalIP","address":"172.18.15.120"},{"type":"Hostname","address":"172.18.15.120"}],"daemonEndpoints":{"kubeletEndpoint":{"Port":10250}},"nodeInfo":{"machineID":"f9afeb75a5a382dce8269887a67fbf58","systemUUID":"50E126EC-075A-7788-CF9F-627D601BAFD6","bootID":"5c032032-0330-4487-8158-9b7eaa452797","kernelVersion":"3.10.0-327.22.2.el7.x86_64","osImage":"CentOS Linux 7 (Core)","containerRuntimeVersion":"docker://1.10.3","kubeletVersion":"v1.5.2+43a9be4","kubeProxyVersion":"v1.5.2+43a9be4","operatingSystem":"linux","architecture":"amd64"},"images":[{"names":["openshift/openvswitch:c106caf","openshift/openvswitch:latest"],"sizeBytes":1026697644},{"names":["openshift/node:c106caf","openshift/node:latest"],"sizeBytes":1025016185},{"names":["openshift/origin-gitserver:c106caf","openshift/origin-gitserver:latest"],"sizeBytes":1002075992},{"names":["openshift/origin-keepalived-ipfailover:c106caf","openshift/origin-keepalived-ipfailover:latest"],"sizeBytes":971721463},{"names":["openshift/origin-haproxy-router:c106caf","openshift/origin-haproxy-router:latest"],"sizeBytes":965766197},{"names":["openshift/origin-recycler:c106caf","openshift/origin-recycler:latest"],"sizeBytes":947041099},{"names":["openshift/origin-docker-builder:c106caf","openshift/origin-docker-builder:latest"],"sizeBytes":947041099},{"names":["openshift/origin-deployer:c106caf","openshift/origin-deployer:latest"],"sizeBytes":947041099},{"names":["openshift/origin-sti-builder:c106caf","openshift/origin-sti-builder:latest"],"sizeBytes":947041099},{"names":["openshift/origin-f5-router:c106caf","openshift/origin-f5-router:latest"],"sizeBytes":947041099},{"names":["openshift/origin:c106caf","openshift/origin:latest"],"sizeBytes":947041099},{"names":["docker.io/openshift/origin-release:golang-1.7"],"sizeBytes":857644817},{"names":["openshift/origin-release:latest"],"sizeBytes":719000618},{"names":["openshift/dind-master:latest"],"sizeBytes":474218996},{"names":["openshift/dind-node:latest"],"sizeBytes":474215272},{"names":["openshift/origin-docker-registry:c106caf","openshift/origin-docker-registry:latest"],"sizeBytes":468441298},{"names":["openshift/origin-egress-router:c106caf","openshift/origin-egress-router:latest"],"sizeBytes":420088533},{"names":["openshift/origin-base:latest"],"sizeBytes":402162201},{"names":["openshift/dind:latest"],"sizeBytes":384423296},{"names":["openshift/origin-haproxy-router-base:latest"],"sizeBytes":294278935},{"names":["docker.io/fedora:25"],"sizeBytes":230864375},{"names":["docker.io/centos:centos7"],"sizeBytes":196509652},{"names":["openshift/hello-openshift:c106caf","openshift/hello-openshift:latest"],"sizeBytes":5635062},{"names":["openshift/origin-pod:c106caf","openshift/origin-pod:latest"],"sizeBytes":1138998}]}}
There was no error output from the command.
info: password for stats user admin has been set to QngYGnLkiu
--> Creating router router ...
    serviceaccount "router" created
    clusterrolebinding "router-router-role" created
    deploymentconfig "router" created
    service "router" created
--> Success
serviceaccount "registry" created
clusterrolebinding "registry-registry-role" created
deploymentconfig "docker-registry" created
service "docker-registry" created
Running /data/src/github.com/openshift/origin/ci_test_every_pr.sh:90: executing 'oadm registry' expecting success...
SUCCESS after 0.352s: /data/src/github.com/openshift/origin/ci_test_every_pr.sh:90: executing 'oadm registry' expecting success
Standard output from the command:
Docker registry "docker-registry" service exists
There was no error output from the command.
Running /data/src/github.com/openshift/origin/ci_test_every_pr.sh:91: executing 'oadm router' expecting success...
SUCCESS after 0.218s: /data/src/github.com/openshift/origin/ci_test_every_pr.sh:91: executing 'oadm router' expecting success
Standard output from the command:
Router "router" service exists
There was no error output from the command.
Running /data/src/github.com/openshift/origin/ci_test_every_pr.sh:95: executing 'oadm policy add-cluster-role-to-user cluster-admin metrics-admin' expecting success...
SUCCESS after 0.213s: /data/src/github.com/openshift/origin/ci_test_every_pr.sh:95: executing 'oadm policy add-cluster-role-to-user cluster-admin metrics-admin' expecting success
Standard output from the command:
cluster role "cluster-admin" added: "metrics-admin"
There was no error output from the command.
Running /data/src/github.com/openshift/origin/ci_test_every_pr.sh:96: executing 'oadm policy add-cluster-role-to-user cluster-admin metrics-admin' expecting success...
SUCCESS after 0.209s: /data/src/github.com/openshift/origin/ci_test_every_pr.sh:96: executing 'oadm policy add-cluster-role-to-user cluster-admin metrics-admin' expecting success
Standard output from the command:
cluster role "cluster-admin" added: "metrics-admin"
There was no error output from the command.
Running /data/src/github.com/openshift/origin/ci_test_every_pr.sh:97: executing 'oc login -u metrics-admin -p g1b315H' expecting success...
SUCCESS after 0.200s: /data/src/github.com/openshift/origin/ci_test_every_pr.sh:97: executing 'oc login -u metrics-admin -p g1b315H' expecting success
Standard output from the command:
Login successful.

You have access to the following projects and can switch between them with 'oc project <projectname>':

  * default
    kube-system
    openshift
    openshift-infra

Using project "default".
There was no error output from the command.
+ Info ================================================================================
+ echo '[INFO]' ================================================================================
[INFO] ================================================================================
+ Info 'Starting Origin-Metric end-to-end test'
[INFO] Starting Origin-Metric end-to-end test
+ echo '[INFO]' Starting Origin-Metric end-to-end test
+ Info
[INFO]
+ echo '[INFO]'
+ Info Settings:
[INFO] Settings:
+ echo '[INFO]' Settings:
++ realpath /data/src/github.com/openshift/origin-metrics/hack/../hack/tests/../..
+ Info 'Base Directory: /data/src/github.com/openshift/origin-metrics'
[INFO] Base Directory: /data/src/github.com/openshift/origin-metrics
+ echo '[INFO]' Base Directory: /data/src/github.com/openshift/origin-metrics
+ Info ================================================================================
[INFO] ================================================================================
+ echo '[INFO]' ================================================================================
+ Info
[INFO]
+ echo '[INFO]'
++ date +%s
+ TEST_STARTTIME=1525728374
++ date +%s
+ export TEST_PROJECT=test-1525728374
+ TEST_PROJECT=test-1525728374
+ trap cleanup SIGINT SIGTERM EXIT
+ '[' true = true ']'
+ test.build
+ Info
[INFO]
+ echo '[INFO]'
+ Info 'Building new images'
[INFO] Building new images
+ echo '[INFO]' Building new images
+ sh /data/src/github.com/openshift/origin-metrics/hack/../hack/tests/../../hack/build-images.sh --no-cache
--- Building component '/data/src/github.com/openshift/origin-metrics/hack/../hack/tests/../../hack/../heapster/' with docker tag 'openshift/origin-metrics-heapster:latest' ---
Sending build context to Docker daemon 7.168 kB
Step 1 : FROM openshift/origin-metrics-heapster-base:v1.3.0-2
Trying to pull repository docker.io/openshift/origin-metrics-heapster-base ...
v1.3.0-2: Pulling from docker.io/openshift/origin-metrics-heapster-base
93857f76ae30: Already exists
4e61ce2c52dd: Pulling fs layer
dd390b33ea7b: Pulling fs layer
3fc0170be004: Pulling fs layer
f4e99900d503: Pulling fs layer
f4e99900d503: Waiting
dd390b33ea7b: Verifying Checksum
dd390b33ea7b: Download complete
f4e99900d503: Verifying Checksum
f4e99900d503: Download complete
3fc0170be004: Verifying Checksum
3fc0170be004: Download complete
4e61ce2c52dd: Verifying Checksum
4e61ce2c52dd: Download complete
4e61ce2c52dd: Pull complete
4e61ce2c52dd: Pull complete
dd390b33ea7b: Pull complete
dd390b33ea7b: Pull complete
3fc0170be004: Pull complete
3fc0170be004: Pull complete
f4e99900d503: Pull complete
f4e99900d503: Pull complete
Digest: sha256:749fd880577885a94daccc5646410bc7964061ed35574db634e23f3a7e477643
Status: Downloaded newer image for docker.io/openshift/origin-metrics-heapster-base:v1.3.0-2
 ---> 0ebd3407458b
Step 2 : MAINTAINER Hawkular Metrics
 ---> Running in 63e9ce0ac6f5
 ---> 73d5e56175d8
Removing intermediate container 63e9ce0ac6f5
Step 3 : ADD heapster-wrapper.sh heapster-readiness.sh /opt/
 ---> f22ccb899d47
Removing intermediate container ab5e256236bf
Step 4 : ENTRYPOINT opt/heapster-wrapper.sh
 ---> Running in ae12d624dfed
 ---> e5fa97c59c90
Removing intermediate container ae12d624dfed
Successfully built e5fa97c59c90
--- openshift/origin-metrics-heapster:latest took 104 seconds ---
--- Building component '/data/src/github.com/openshift/origin-metrics/hack/../hack/tests/../../hack/../hawkular-metrics/' with docker tag 'openshift/origin-metrics-hawkular-metrics:latest' ---
Sending build context to Docker daemon 63.49 kB
Step 1 : FROM jboss/wildfly:11.0.0.Final
Trying to pull repository docker.io/jboss/wildfly ...
11.0.0.Final: Pulling from docker.io/jboss/wildfly
469cfcc7a4b3: Pulling fs layer
05677e4d61f0: Pulling fs layer
a9520f492457: Pulling fs layer
4d201219d6b1: Pulling fs layer
b55e40a220af: Pulling fs layer
4d201219d6b1: Waiting
b55e40a220af: Waiting
a9520f492457: Download complete
05677e4d61f0: Verifying Checksum
05677e4d61f0: Download complete
469cfcc7a4b3: Verifying Checksum
469cfcc7a4b3: Download complete
4d201219d6b1: Verifying Checksum
4d201219d6b1: Download complete
b55e40a220af: Verifying Checksum
b55e40a220af: Download complete
469cfcc7a4b3: Pull complete
469cfcc7a4b3: Pull complete
05677e4d61f0: Pull complete
05677e4d61f0: Pull complete
a9520f492457: Pull complete
a9520f492457: Pull complete
4d201219d6b1: Pull complete
4d201219d6b1: Pull complete
b55e40a220af: Pull complete
b55e40a220af: Pull complete
Digest: sha256:57b2bcdfd63ddfa8cacb15f7ffb20a1c79539fe2b1c0eda2899c9ac4e0d01b6a
Status: Downloaded newer image for docker.io/jboss/wildfly:11.0.0.Final
 ---> 6926d48f2e5b
Step 2 : MAINTAINER Hawkular Metrics
 ---> Running in 06637a55f038
 ---> 3ae054c45fea
Removing intermediate container 06637a55f038
Step 3 : ENV HAWKULAR_METRICS_ENDPOINT_PORT "8080" HAWKULAR_METRICS_VERSION "0.30.5.Final" HAWKULAR_METRICS_DIRECTORY "/opt/hawkular" HAWKULAR_METRICS_SCRIPT_DIRECTORY "/opt/hawkular/scripts/" PATH $PATH:$HAWKULAR_METRICS_SCRIPT_DIRECTORY JAVA_OPTS_APPEND "-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/heapdump"
 ---> Running in 3f230c928724
 ---> 18a61e67177c
Removing intermediate container 3f230c928724
Step 4 : ARG DEV_BUILD="false"
 ---> Running in 876b646a58a0
 ---> e6b25926a835
Removing intermediate container 876b646a58a0
Step 5 : EXPOSE 8080 8444
 ---> Running in 95c07ad78d24
 ---> 12550481a1b6
Removing intermediate container 95c07ad78d24
Step 6 : RUN cd $JBOSS_HOME/bin && curl -Lo $JBOSS_HOME/bin/jmx_prometheus_javaagent.jar https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/0.9/jmx_prometheus_javaagent-0.9.jar
 ---> Running in 42a78c8400c7
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 1223k  100 1223k    0     0  4688k      0 --:--:-- --:--:-- --:--:-- 4688k
 ---> 42032ef8d59e
Removing intermediate container 42a78c8400c7
Step 7 : COPY prometheus.yaml $JBOSS_HOME/standalone/configuration/prometheus.yaml
 ---> 1cbf9d861820
Removing intermediate container 025de4e29343
Step 8 : RUN mkdir /tmp/hawkular
 ---> Running in e981b494e2b8
 ---> 9dd4ef16322a
Removing intermediate container e981b494e2b8
Step 9 : COPY dev/* /tmp/hawkular/
 ---> bd7bf6e0d122
Removing intermediate container 524969a1e267
Step 10 : RUN cd $JBOSS_HOME/standalone/deployments/ && if [ ${DEV_BUILD} = "true" ] && [ -s /tmp/hawkular/hawkular-metrics.war ]; then mv /tmp/hawkular/hawkular-metrics.war .; rm -rf /tmp/hawkular; else curl -Lo hawkular-metrics.war https://origin-repository.jboss.org/nexus/service/local/artifact/maven/content?r=public\&g=org.hawkular.metrics\&a=hawkular-metrics-openshift\&e=war\&v=${HAWKULAR_METRICS_VERSION}; fi
 ---> Running in 7a2d9e941875
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
curl: (56) TCP connection reset by peer
The command '/bin/sh -c cd $JBOSS_HOME/standalone/deployments/ && if [ ${DEV_BUILD} = "true" ] && [ -s /tmp/hawkular/hawkular-metrics.war ]; then mv /tmp/hawkular/hawkular-metrics.war .; rm -rf /tmp/hawkular; else curl -Lo hawkular-metrics.war https://origin-repository.jboss.org/nexus/service/local/artifact/maven/content?r=public\&g=org.hawkular.metrics\&a=hawkular-metrics-openshift\&e=war\&v=${HAWKULAR_METRICS_VERSION}; fi' returned a non-zero code: 56
+ cleanup
+ out=1
+ trap test.cleanup SIGINT SIGTERM EXIT
+ '[' 1 -ne 0 ']'
+ Error 'Test failed'
[ERROR] Test failed
+ echo '[ERROR]' Test failed
+ echo
++ date +%s
+ ENDTIME=1525728541
+ '[' '' = true ']'
+ test.cleanup
+ Info
[INFO]
+ echo '[INFO]'
+ Info 'Deleting test project test-1525728374'
[INFO] Deleting test project test-1525728374
+ echo '[INFO]' Deleting test project test-1525728374
+ oc delete project test-1525728374
Error from server (NotFound): namespaces "test-1525728374" not found
+ exit
[ERROR] PID 1975: /data/src/github.com/openshift/origin/ci_test_every_pr.sh:99: `"${ORIGIN_METRICS_DIR}/hack/e2e-tests.sh" -x --test=DefaultInstall` exited with status 1.
[INFO] Stack Trace:
[INFO]   1: /data/src/github.com/openshift/origin/ci_test_every_pr.sh:99: `"${ORIGIN_METRICS_DIR}/hack/e2e-tests.sh" -x --test=DefaultInstall`
[INFO]   Exiting with code 1.
>>>>>>>>>>>> ENV VARIABLES <<<<<<<<<<<<<<<<<<
ADMIN_KUBECONFIG=/tmp/openshift/ci_test_every_pr/openshift.local.config/master/admin.kubeconfig
API_BIND_HOST=172.18.15.120
API_HOST=172.18.15.120
API_PORT=8443
API_SCHEME=https
ARTIFACT_DIR=/tmp/openshift/ci_test_every_pr/artifacts
BASETMPDIR=/tmp/openshift/ci_test_every_pr
_=/bin/env
CLUSTER_ADMIN_CONTEXT=default/172-18-15-120:8443/system:admin
ETCD_DATA_DIR=/tmp/openshift/ci_test_every_pr/etcd
ETCD_HOST=127.0.0.1
ETCD_PEER_PORT=7001
ETCD_PORT=4001
FAKE_HOME_DIR=/tmp/openshift/ci_test_every_pr/openshift.local.home
GIT_BRANCH=master
GIT_URL=https://github.com/openshift/origin-metrics
GOPATH=/data
HISTCONTROL=ignoredups
HISTSIZE=1000
HOME=/home/centos
HOSTNAME=ip-172-18-15-120
KUBE_CACHE_MUTATION_DETECTOR=true
KUBECONFIG=/tmp/openshift/ci_test_every_pr/openshift.local.config/master/admin.kubeconfig
KUBELET_BIND_HOST=172.18.15.120
KUBELET_HOST=172.18.15.120
KUBELET_PORT=10250
KUBELET_SCHEME=https
LANG=en_US.UTF-8
LESSOPEN=||/usr/bin/lesspipe.sh %s
LOG_DIR=/tmp/openshift/ci_test_every_pr/logs
LOGGER_PID=2040
LOGNAME=centos
MAIL=/var/spool/mail/centos
MASTER_ADDR=https://172.18.15.120:8443
MASTER_CONFIG_DIR=/tmp/openshift/ci_test_every_pr/openshift.local.config/master
MAX_IMAGES_BULK_IMPORTED_PER_REPOSITORY=3
NODE_CONFIG_DIR=/tmp/openshift/ci_test_every_pr/openshift.local.config/node-172.18.15.120
NUM_OS_JUNIT_SUITES_IN_FLIGHT=1
NUM_OS_JUNIT_TESTS_IN_FLIGHT=0
OLDPWD=/data/src/github.com/openshift/origin-metrics/hack/tests
OPENSHIFT_ROUTER_IMAGE=openshift/origin-haproxy-router:c106caf
ORIGIN_METRICS_DIR=/data/src/github.com/openshift/origin-metrics
OS_ORIGINAL_WD=/data/src/github.com/openshift/origin
OS_PID=2246
OS_ROOT=/data/src/github.com/openshift/origin
OS_TMP_ENV_SET=ci_test_every_pr
OS_USE_STACKTRACE=true
PATH=/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64:/data/bin:/data/bin:/data/src/github.com/openshift/origin/_output/etcd/bin:/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64:/bin:/usr/local/sbin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/home/centos/.local/bin:/home/centos/bin
PUBLIC_MASTER_HOST=172.18.15.120
PWD=/data/src/github.com/openshift/origin
QTDIR=/usr/lib64/qt-3.3
QT_GRAPHICSSYSTEM_CHECKED=1
QT_GRAPHICSSYSTEM=native
QTINC=/usr/lib64/qt-3.3/include
QTLIB=/usr/lib64/qt-3.3/lib
SAR_LOGFILE=/tmp/openshift/ci_test_every_pr/logs/sar.log
SELINUX_LEVEL_REQUESTED=
SELINUX_ROLE_REQUESTED=
SELINUX_USE_CURRENT_RANGE=
SERVER_CONFIG_DIR=/tmp/openshift/ci_test_every_pr/openshift.local.config
SHELL=/bin/bash
SHLVL=3
SSH_CLIENT=50.17.198.52 53956 22
SSH_CONNECTION=50.17.198.52 53956 172.18.15.120 22
TAG=c106caf
TERM=vt100
TIME_MIN=60000
TIME_MS=1
TIME_SEC=1000
USE_IMAGES=openshift/origin-${component}:c106caf
USE_LOCAL_SOURCE=true
USER=centos
USE_SUDO=true
VERBOSE=1
VOLUME_DIR=/tmp/openshift/ci_test_every_pr/volumes
XDG_RUNTIME_DIR=/run/user/1000
XDG_SESSION_ID=5
>>>>>>>>>>>> END ENV VARIABLES <<<<<<<<<<<<<<
[FAIL] !!!!! Test Failed !!!!
[INFO] Dumping container logs to /tmp/openshift/ci_test_every_pr/logs/containers
[INFO] Dumping etcd contents to /tmp/openshift/ci_test_every_pr/artifacts/etcd_dump.json
[INFO] Tearing down test
[INFO] Stopping k8s docker containers
[INFO] Removing k8s docker containers
[INFO] Pruning etcd data directory...
[INFO] Cleanup complete
[INFO] Exiting at Mon May 7 21:29:07 UTC 2018
./ci_test_every_pr.sh took 199 seconds
Error while running ssh/sudo command:
set -e
pushd /data/src/github.com/openshift//origin-metrics/hack/tests >/dev/null
export PATH=$GOPATH/bin:$PATH
echo '***************************************************'
echo 'Running GIT_URL=https://github.com/openshift/origin-metrics GIT_BRANCH=master ORIGIN_METRICS_DIR=/data/src/github.com/openshift/origin-metrics OS_ROOT=/data/src/github.com/openshift/origin USE_LOCAL_SOURCE=true VERBOSE=1 ./ci_test_every_pr.sh...'
time GIT_URL=https://github.com/openshift/origin-metrics GIT_BRANCH=master ORIGIN_METRICS_DIR=/data/src/github.com/openshift/origin-metrics OS_ROOT=/data/src/github.com/openshift/origin USE_LOCAL_SOURCE=true VERBOSE=1 ./ci_test_every_pr.sh
echo 'Finished GIT_URL=https://github.com/openshift/origin-metrics GIT_BRANCH=master ORIGIN_METRICS_DIR=/data/src/github.com/openshift/origin-metrics OS_ROOT=/data/src/github.com/openshift/origin USE_LOCAL_SOURCE=true VERBOSE=1 ./ci_test_every_pr.sh'
echo '***************************************************'
popd >/dev/null
The SSH command responded with a non-zero exit status. Vagrant assumes that this means the command failed. The output for this command should be in the log above. Please read the output to determine what went wrong.
==> openshiftdev: Downloading logs
==> openshiftdev: Downloading artifacts from '/var/log/yum.log' to '/var/lib/jenkins/jobs/test-origin-metrics/workspace/origin/artifacts/yum.log'
==> openshiftdev: Downloading artifacts from '/var/log/secure' to '/var/lib/jenkins/jobs/test-origin-metrics/workspace/origin/artifacts/secure'
==> openshiftdev: Downloading artifacts from '/var/log/audit/audit.log' to '/var/lib/jenkins/jobs/test-origin-metrics/workspace/origin/artifacts/audit.log'
==> openshiftdev: /tmp/origin-metrics/ did not exist on the remote system. This is often the case for tests that were not run.
Build step 'Execute shell' marked build as failure
[description-setter] Description set: https://github.com/openshift/origin-metrics/pull/419
[PostBuildScript] - Execution post build scripts.
[workspace] $ /bin/sh -xe /tmp/jenkins685017112981602837.sh
+ INSTANCE_NAME=origin_metrics-centos7-330
+ pushd origin
~/jobs/test-origin-metrics/workspace/origin ~/jobs/test-origin-metrics/workspace
+ '[' -f .vagrant-openshift.json ']'
+ /usr/bin/vagrant destroy -f
==> openshiftdev: Terminating the instance...
==> openshiftdev: Running cleanup tasks for 'shell' provisioner...
+ popd
~/jobs/test-origin-metrics/workspace
[BFA] Scanning build for known causes...
[BFA] Found failure cause(s):
[BFA] Command Failure from category failure
[BFA] Done. 0s
Finished: FAILURE