I0622 21:26:57.767337   32660 wrap.go:42] GET /apis/user.openshift.io/v1/users/~: (273.636µs) 401 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:34554]
I0622 21:26:57.768120 32660 authorization.go:60] Forbidden: "/apis/user.openshift.io/v1/users/~", Reason: "User \"system:anonymous\" cannot get users.user.openshift.io at the cluster scope"
I0622 21:26:57.768212 32660 wrap.go:42] GET /apis/user.openshift.io/v1/users/~: (310.873µs) 403 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:34544]
I0622 21:26:57.770131 32660 wrap.go:42] GET /apis/oauth.openshift.io/v1/oauthaccesstokens/invalid: (1.092249ms) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:34380]
E0622 21:26:57.770376 32660 authentication.go:63] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, [x509: certificate signed by unknown authority, token lookup failed]]]
I0622 21:26:57.770425 32660 wrap.go:42] GET /apis/user.openshift.io/v1/users/~: (1.700666ms) 401 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:34548]
I0622 21:26:57.771937 32660 wrap.go:42] GET /apis/oauth.openshift.io/v1/oauthaccesstokens/invalid: (865.013µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:34380]
E0622 21:26:57.772173 32660 authentication.go:63] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, [x509: certificate signed by unknown authority, token lookup failed]]]
I0622 21:26:57.772223 32660 wrap.go:42] GET /apis/user.openshift.io/v1/users/~: (1.350328ms) 401 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:34554]
I0622 21:26:57.772912 32660 request.go:485] Throttling request took 159.533469ms, request: PUT:https://127.0.0.1:24818/api/v1/namespaces/openshift-infra/serviceaccounts/serviceaccount-pull-secrets-controller
I0622 21:26:57.773918 32660 wrap.go:42] GET /apis/oauth.openshift.io/v1/oauthaccesstokens/invalid: (1.055427ms) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:34380]
E0622 21:26:57.774139 32660 authentication.go:63] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, token lookup failed]]
I0622 21:26:57.774208 32660 wrap.go:42] GET /apis/user.openshift.io/v1/users/~: (1.58197ms) 401 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:34544]
I0622 21:26:57.775441 32660 wrap.go:42] PUT /api/v1/namespaces/openshift-infra/serviceaccounts/serviceaccount-pull-secrets-controller: (2.285741ms) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8/system:serviceaccount:openshift-infra:serviceaccount-pull-secrets-controller] 127.0.0.1:34544]
I0622 21:26:57.775644 32660 create_dockercfg_secrets.go:444] Creating token secret "serviceaccount-pull-secrets-controller-token-g6k4q" for service account openshift-infra/serviceaccount-pull-secrets-controller
I0622 21:26:57.775976 32660 wrap.go:42] GET /apis/oauth.openshift.io/v1/oauthaccesstokens/OurACC3kFUGWz2-vKZK1eLMUYMLXeLpK95aO5IWxuCg: (1.082532ms) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:34380]
I0622 21:26:57.777242 32660 wrap.go:42] GET /apis/user.openshift.io/v1/users/user: (914.289µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:34380]
I0622 21:26:57.778346 32660 wrap.go:42] GET /apis/user.openshift.io/v1/users/~: (3.706948ms) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:34548]
I0622 21:26:57.779646 32660 wrap.go:42] GET /apis/user.openshift.io/v1/users/~: (888.033µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:34554]
I0622 21:26:57.781010 32660 wrap.go:42] GET /apis/user.openshift.io/v1/users/~: (929.063µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:34544]
INFO: 2018/06/22 21:26:57 ccBalancerWrapper: updating state and picker called by balancer: IDLE, 0xc43b419320
INFO: 2018/06/22 21:26:57 dialing to target with scheme: ""
INFO: 2018/06/22 21:26:57 could not get resolver for scheme: ""
INFO: 2018/06/22 21:26:57 balancerWrapper: is pickfirst: false
INFO: 2018/06/22 21:26:57 balancerWrapper: got update addr from Notify: [{127.0.0.1:11402 <nil>}]
INFO: 2018/06/22 21:26:57 ccBalancerWrapper: new subconn: [{127.0.0.1:11402 0 <nil>}]
INFO: 2018/06/22 21:26:57 balancerWrapper: handle subconn state change: 0xc42b676070, CONNECTING
INFO: 2018/06/22 21:26:57 ccBalancerWrapper: updating state and picker called by balancer: CONNECTING, 0xc43b419320
INFO: 2018/06/22 21:26:57 balancerWrapper: handle subconn state change: 0xc42b676070, READY
INFO: 2018/06/22 21:26:57 ccBalancerWrapper: updating state and picker called by balancer: READY, 0xc43b419320
INFO: 2018/06/22 21:26:57 balancerWrapper: got update addr from Notify: [{127.0.0.1:11402 <nil>}]
--- PASS: TestIntegration/TestNewAppSourceAuthRequired (0.54s)
runner_test.go:187:
=== OUTPUT
I0622 21:26:57.984660 488 repository.go:388] Executing git init /tmp/openshift/test-integration/tmp-testnewappsourceauthrequired646939920/gitauth599548438/initial-repo
I0622 21:26:57.987827 488 repository.go:388] Executing git add .
I0622 21:26:57.989660 488 repository.go:388] Executing git commit -m initial commit
I0622 21:26:57.992410 488 repository.go:388] Executing git clone --bare /tmp/openshift/test-integration/tmp-testnewappsourceauthrequired646939920/gitauth599548438/initial-repo /tmp/openshift/test-integration/tmp-testnewappsourceauthrequired646939920/gitauth599548438/git-home/test-repo
I0622 21:26:58.003372 488 repository.go:388] Executing git clone --depth=1 --recursive http://127.0.0.1:44428/test-repo /tmp/openshift/test-integration/tmp-testnewappsourceauthrequired646939920/gen330670717
EVENT: {"fetch" "3f20ef15c943bd33b920c6e0b7722dc96c8ea8da" "/tmp/openshift/test-integration/tmp-testnewappsourceauthrequired646939920/gitauth599548438/git-home/test-repo" "" "" "" "exit status 128" %!q(*http.Request=&{POST 0xc420534380 HTTP/1.1 1 1 map[Content-Length:[219] User-Agent:[git/1.8.3.1] Accept-Encoding:[gzip] Content-Type:[application/x-git-upload-pack-request] Accept:[application/x-git-upload-pack-result]] 0xc4210fa280 <nil> 219 [] false 127.0.0.1:44428 map[] map[] <nil> map[] 127.0.0.1:42136 /test-repo/git-upload-pack <nil> <nil> <nil> 0xc4210fa2c0})}
EVENT: {"fetch" "3f20ef15c943bd33b920c6e0b7722dc96c8ea8da" "/tmp/openshift/test-integration/tmp-testnewappsourceauthrequired646939920/gitauth599548438/git-home/test-repo" "" "" "" <nil> %!q(*http.Request=&{POST 0xc4204e0100 HTTP/1.1 1 1 map[Accept-Encoding:[gzip] Content-Type:[application/x-git-upload-pack-request] Accept:[application/x-git-upload-pack-result] Content-Length:[228] User-Agent:[git/1.8.3.1]] 0xc421254000 <nil> 228 [] false 127.0.0.1:44428 map[] map[] <nil> map[] 127.0.0.1:42136 /test-repo/git-upload-pack <nil> <nil> <nil> 0xc421254040})}
I0622 21:26:58.065782 488 sourcelookup.go:313] Checking if http://127.0.0.1:44428/test-repo requires authentication
I0622 21:26:58.065822 488 repository.go:388] Executing git ls-remote --heads http://127.0.0.1:44428/test-repo
I0622 21:26:58.077068 488 repository.go:388] Executing git init /tmp/openshift/test-integration/tmp-testnewappsourceauthrequired646939920/gitauth648622506/initial-repo
I0622 21:26:58.079794 488 repository.go:388] Executing git add .
I0622 21:26:58.081821 488 repository.go:388] Executing git commit -m initial commit
I0622 21:26:58.084465 488 repository.go:388] Executing git clone --bare /tmp/openshift/test-integration/tmp-testnewappsourceauthrequired646939920/gitauth648622506/initial-repo /tmp/openshift/test-integration/tmp-testnewappsourceauthrequired646939920/gitauth648622506/git-home/test-repo
I0622 21:26:58.095016 488 repository.go:388] Executing git clone --depth=1 --recursive http://127.0.0.1:32865/test-repo /tmp/openshift/test-integration/tmp-testnewappsourceauthrequired646939920/gen195206401
EVENT: {"fetch" "0f09dc41fc6db12b631c8a5977b6248346e673c2" "/tmp/openshift/test-integration/tmp-testnewappsourceauthrequired646939920/gitauth648622506/git-home/test-repo" "" "" "" "exit status 128" %!q(*http.Request=&{POST 0xc420534700 HTTP/1.1 1 1 map[Accept-Encoding:[gzip] Content-Type:[application/x-git-upload-pack-request] Accept:[application/x-git-upload-pack-result] Content-Length:[219] Authorization:[Basic Z2l0dXNlcjpnaXRwYXNz] User-Agent:[git/1.8.3.1]] 0xc4210fa800 <nil> 219 [] false 127.0.0.1:32865 map[] map[] <nil> map[] 127.0.0.1:36850 /test-repo/git-upload-pack <nil> <nil> <nil> 0xc4210fa840})}
EVENT: {"fetch" "0f09dc41fc6db12b631c8a5977b6248346e673c2" "/tmp/openshift/test-integration/tmp-testnewappsourceauthrequired646939920/gitauth648622506/git-home/test-repo" "" "" "" <nil> %!q(*http.Request=&{POST 0xc420662280 HTTP/1.1 1 1 map[Content-Length:[228] Authorization:[Basic Z2l0dXNlcjpnaXRwYXNz] User-Agent:[git/1.8.3.1] Accept-Encoding:[gzip] Content-Type:[application/x-git-upload-pack-request] Accept:[application/x-git-upload-pack-result]] 0xc420888600 <nil> 228 [] false 127.0.0.1:32865 map[] map[] <nil> map[] 127.0.0.1:36850 /test-repo/git-upload-pack <nil> <nil> <nil> 0xc420888680})}
I0622 21:26:58.155333 488 sourcelookup.go:313] Checking if http://127.0.0.1:32865/test-repo requires authentication
I0622 21:26:58.155354 488 repository.go:388] Executing git ls-remote --heads http://127.0.0.1:32865/test-repo
I0622 21:26:58.164035 488 repository.go:456] Error executing command: exit status 128
warning: Cannot check if git requires authentication.
I0622 21:26:58.166190 488 repository.go:388] Executing git init /tmp/openshift/test-integration/tmp-testnewappsourceauthrequired646939920/gitauth792022654/initial-repo
I0622 21:26:58.168949 488 repository.go:388] Executing git add .
I0622 21:26:58.170690 488 repository.go:388] Executing git commit -m initial commit
I0622 21:26:58.173504 488 repository.go:388] Executing git clone --bare /tmp/openshift/test-integration/tmp-testnewappsourceauthrequired646939920/gitauth792022654/initial-repo /tmp/openshift/test-integration/tmp-testnewappsourceauthrequired646939920/gitauth792022654/git-home/test-repo
I0622 21:26:58.184135 488 repository.go:388] Executing git clone --depth=1 --recursive http://example.com/test-repo /tmp/openshift/test-integration/tmp-testnewappsourceauthrequired646939920/gen145046981
EVENT: {"fetch" "0f09dc41fc6db12b631c8a5977b6248346e673c2" "/tmp/openshift/test-integration/tmp-testnewappsourceauthrequired646939920/gitauth792022654/git-home/test-repo" "" "" "" "exit status 128" %!q(*http.Request=&{POST 0xc420662b00 HTTP/1.1 1 1 map[Accept:[application/x-git-upload-pack-result] Content-Type:[application/x-git-upload-pack-request] Accept-Encoding:[gzip] User-Agent:[git/1.8.3.1] Content-Length:[219]] 0xc420889580 <nil> 219 [] false example.com map[] map[] <nil> map[] 127.0.0.1:40226 /test-repo/git-upload-pack <nil> <nil> <nil> 0xc420889600})}
EVENT: {"fetch" "0f09dc41fc6db12b631c8a5977b6248346e673c2" "/tmp/openshift/test-integration/tmp-testnewappsourceauthrequired646939920/gitauth792022654/git-home/test-repo" "" "" "" <nil> %!q(*http.Request=&{POST 0xc420662d00 HTTP/1.1 1 1 map[Accept:[application/x-git-upload-pack-result] Content-Type:[application/x-git-upload-pack-request] Accept-Encoding:[gzip] User-Agent:[git/1.8.3.1] Content-Length:[228]] 0xc420f5a140 <nil> 228 [] false example.com map[] map[] <nil> map[] 127.0.0.1:40226 /test-repo/git-upload-pack <nil> <nil> <nil> 0xc420f5a180})}
I0622 21:26:58.256321 488 sourcelookup.go:313] Checking if http://example.com/test-repo requires authentication
I0622 21:26:58.256341 488 repository.go:388] Executing git ls-remote --heads http://example.com/test-repo
I0622 21:26:58.268728 488 repository.go:456] Error executing command: exit status 128
warning: Cannot check if git requires authentication.
I0622 21:26:58.271281 488 repository.go:388] Executing git init /tmp/openshift/test-integration/tmp-testnewappsourceauthrequired646939920/gitauth068402322/initial-repo
I0622 21:26:58.274021 488 repository.go:388] Executing git add .
I0622 21:26:58.275718 488 repository.go:388] Executing git commit -m initial commit
I0622 21:26:58.278560 488 repository.go:388] Executing git clone --bare /tmp/openshift/test-integration/tmp-testnewappsourceauthrequired646939920/gitauth068402322/initial-repo /tmp/openshift/test-integration/tmp-testnewappsourceauthrequired646939920/gitauth068402322/git-home/test-repo
I0622 21:26:58.288489 488 repository.go:388] Executing git clone --depth=1 --recursive http://example.com/test-repo /tmp/openshift/test-integration/tmp-testnewappsourceauthrequired646939920/gen848949449
EVENT: {"fetch" "0f09dc41fc6db12b631c8a5977b6248346e673c2" "/tmp/openshift/test-integration/tmp-testnewappsourceauthrequired646939920/gitauth068402322/git-home/test-repo" "" "" "" "exit status 128" %!q(*http.Request=&{POST 0xc4204e0c80 HTTP/1.1 1 1 map[Accept-Encoding:[gzip] User-Agent:[git/1.8.3.1] Content-Length:[219] Accept:[application/x-git-upload-pack-result] Authorization:[Basic Z2l0dXNlcjpnaXRwYXNz] Content-Type:[application/x-git-upload-pack-request]] 0xc421254c40 <nil> 219 [] false example.com map[] map[] <nil> map[] 127.0.0.1:34902 /test-repo/git-upload-pack <nil> <nil> <nil> 0xc421254c80})}
EVENT: {"fetch" "0f09dc41fc6db12b631c8a5977b6248346e673c2" "/tmp/openshift/test-integration/tmp-testnewappsourceauthrequired646939920/gitauth068402322/git-home/test-repo" "" "" "" <nil> %!q(*http.Request=&{POST 0xc420663400 HTTP/1.1 1 1 map[Authorization:[Basic Z2l0dXNlcjpnaXRwYXNz] Content-Type:[application/x-git-upload-pack-request] Accept-Encoding:[gzip] User-Agent:[git/1.8.3.1] Content-Length:[228] Accept:[application/x-git-upload-pack-result]] 0xc420f5a500 <nil> 228 [] false example.com map[] map[] <nil> map[] 127.0.0.1:34902 /test-repo/git-upload-pack <nil> <nil> <nil> 0xc420f5a540})}
I0622 21:26:58.351542 488 sourcelookup.go:313] Checking if http://example.com/test-repo requires authentication
I0622 21:26:58.351563 488 repository.go:388] Executing git ls-remote --heads http://example.com/test-repo
I0622 21:26:58.362704 488 repository.go:456] Error executing command: exit status 128
warning: Cannot check if git requires authentication.
--- PASS: TestIntegration/TestNodeAuthorizer (28.18s)
runner_test.go:187:
I0622 21:27:06.885748 476 wrap.go:42] PUT /api/v1/namespaces/kube-system/serviceaccounts/daemon-set-controller: (3.052226ms) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8/system:serviceaccount:openshift-infra:serviceaccount-pull-secrets-controller] 127.0.0.1:58000]
=== OUTPUT
476 pvc_protection_controller.go:141] Processing PVC ns/mypvc
I0622 21:27:06.842234 476 pvc_protection_controller.go:144] Finished processing PVC ns/mypvc (5.599µs)
I0622 21:27:06.842294 476 resource_quota_monitor.go:352] QuotaMonitor process object: /v1, Resource=pods, namespace ns, name node2normalpod, uid 0389cb8a-7663-11e8-9b47-0242ac110002, event type update
I0622 21:27:06.843605 476 wrap.go:42] DELETE /api/v1/namespaces/ns/pods/node2normalpod: (4.846584ms) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:58034]
I0622 21:27:06.844049 476 graph_populator.go:101] deletePod ns/node2normalpod for node node2
I0622 21:27:06.844141 476 disruption.go:369] deletePod called on pod "node2normalpod"
I0622 21:27:06.844164 476 disruption.go:403] No PodDisruptionBudgets found for pod node2normalpod, PodDisruptionBudget controller will avoid syncing.
I0622 21:27:06.844173 476 disruption.go:372] No matching pdb for pod "node2normalpod"
I0622 21:27:06.844193 476 util.go:186] Skipping processing of pod "ns"/"node2normalpod": it is scheduled to node "node2" which is not managed by the controller.
I0622 21:27:06.844224 476 resource_quota_monitor.go:352] QuotaMonitor process object: /v1, Resource=pods, namespace ns, name node2normalpod, uid 0389cb8a-7663-11e8-9b47-0242ac110002, event type delete
I0622 21:27:06.844255 476 pvc_protection_controller.go:276] Got event on pod ns/node2normalpod
I0622 21:27:06.844279 476 pvc_protection_controller.go:141] Processing PVC ns/mypvc
I0622 21:27:06.844298 476 pvc_protection_controller.go:144] Finished processing PVC ns/mypvc (3.671µs)
I0622 21:27:06.844339 476 deployment_controller.go:357] Pod node2normalpod deleted.
I0622 21:27:06.844366 476 taint_manager.go:338] Noticed pod deletion: types.NamespacedName{Namespace:"ns", Name:"node2normalpod"}
I0622 21:27:06.844428 476 resource_quota_monitor.go:352] QuotaMonitor process object: /v1, Resource=pods, namespace ns, name node2normalpod, uid 0389cb8a-7663-11e8-9b47-0242ac110002, event type delete
I0622 21:27:06.844589 476 admission.go:97] getting security context constraints for pod node2mirrorpod (generate: ) in namespace ns with user info &{system:node:node2 [system:nodes system:authenticated] map[]}
I0622 21:27:06.846260 476 wrap.go:42] GET /api/v1/namespaces/ns: (1.028129ms) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:57854]
I0622 21:27:06.846474 476 matcher.go:279] got preallocated values for min: 1000060000, max: 1000069999 for uid range in namespace ns
I0622 21:27:06.846494 476 matcher.go:292] got preallocated value for level: s0:c8,c2 for selinux options in namespace ns
I0622 21:27:06.846500 476 matcher.go:322] got preallocated value for groups: 1000060000/10000 in namespace ns
I0622 21:27:06.846514 476 admission.go:217] validating pod node2mirrorpod (generate: ) against providers restricted,privileged
I0622 21:27:06.846552 476 admission.go:170] pod node2mirrorpod (generate: ) validated against provider restricted
I0622 21:27:06.847832 476 wrap.go:42] POST /api/v1/namespaces/ns/pods: (3.713839ms) 201 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:58034]
I0622 21:27:06.848153 476 graph_populator.go:84] updatePod ns/node2mirrorpod for node node2
I0622 21:27:06.848230 476 disruption.go:328] addPod called on pod "node2mirrorpod"
I0622 21:27:06.848252 476 disruption.go:403] No PodDisruptionBudgets found for pod node2mirrorpod, PodDisruptionBudget controller will avoid syncing.
I0622 21:27:06.848260 476 disruption.go:331] No matching pdb for pod "node2mirrorpod"
I0622 21:27:06.848315 476 taint_manager.go:345] Noticed pod update: types.NamespacedName{Namespace:"ns", Name:"node2mirrorpod"}
I0622 21:27:06.849363 476 wrap.go:42] GET /api/v1/namespaces/ns/pods/node2mirrorpod?resourceVersion=0: (397.36µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:57854]
I0622 21:27:06.850073 476 request.go:485] Throttling request took 61.379619ms, request: POST:https://127.0.0.1:18563/api/v1/namespaces/kube-system/secrets
I0622 21:27:06.851975 476 disruption.go:340] updatePod called on pod "node2mirrorpod"
I0622 21:27:06.852002 476 disruption.go:403] No PodDisruptionBudgets found for pod node2mirrorpod, PodDisruptionBudget controller will avoid syncing.
I0622 21:27:06.852010 476 disruption.go:343] No matching pdb for pod "node2mirrorpod"
I0622 21:27:06.852011 476 resource_quota_monitor.go:352] QuotaMonitor process object: /v1, Resource=pods, namespace ns, name node2mirrorpod, uid 0390e114-7663-11e8-9b47-0242ac110002, event type update
I0622 21:27:06.852037 476 resource_quota_monitor.go:352] QuotaMonitor process object: /v1, Resource=pods, namespace ns, name node2mirrorpod, uid 0390e114-7663-11e8-9b47-0242ac110002, event type update
I0622 21:27:06.852058 476 pvc_protection_controller.go:276] Got event on pod ns/node2mirrorpod
I0622 21:27:06.852945 476 wrap.go:42] POST /api/v1/namespaces/kube-system/secrets: (2.668835ms) 201 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8/system:serviceaccount:openshift-infra:serviceaccount-pull-secrets-controller] 127.0.0.1:58000]
I0622 21:27:06.854329 476 wrap.go:42] DELETE /api/v1/namespaces/ns/pods/node2mirrorpod: (5.923416ms) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:58034]
I0622 21:27:06.854846 476 graph_populator.go:101] deletePod ns/node2mirrorpod for node node2
I0622 21:27:06.854917 476 disruption.go:369] deletePod called on pod "node2mirrorpod"
I0622 21:27:06.854940 476 disruption.go:403] No PodDisruptionBudgets found for pod node2mirrorpod, PodDisruptionBudget controller will avoid syncing.
I0622 21:27:06.854948 476 disruption.go:372] No matching pdb for pod "node2mirrorpod"
I0622 21:27:06.854994 476 resource_quota_monitor.go:352] QuotaMonitor process object: /v1, Resource=pods, namespace ns, name node2mirrorpod, uid 0390e114-7663-11e8-9b47-0242ac110002, event type delete
I0622 21:27:06.855027 476 pvc_protection_controller.go:276] Got event on pod ns/node2mirrorpod
I0622 21:27:06.855062 476 deployment_controller.go:357] Pod node2mirrorpod deleted.
I0622 21:27:06.855087 476 taint_manager.go:338] Noticed pod deletion: types.NamespacedName{Namespace:"ns", Name:"node2mirrorpod"}
I0622 21:27:06.855134 476 resource_quota_monitor.go:352] QuotaMonitor process object: /v1, Resource=pods, namespace ns, name node2mirrorpod, uid 0390e114-7663-11e8-9b47-0242ac110002, event type delete
I0622 21:27:06.855349 476 admission.go:97] getting security context constraints for pod node2normalpod (generate: ) in namespace ns with user info &{system:admin [system:masters system:cluster-admins system:authenticated] map[]}
I0622 21:27:06.855392 476 admission.go:108] getting security context constraints for pod node2normalpod (generate: ) with service account info &{system:serviceaccount:ns:default [system:serviceaccounts system:serviceaccounts:ns] map[]}
I0622 21:27:06.857085 476 wrap.go:42] GET /api/v1/namespaces/ns: (981.242µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:57854]
I0622 21:27:06.857296 476 matcher.go:292] got preallocated value for level: s0:c8,c2 for selinux options in namespace ns
I0622 21:27:06.857321 476 matcher.go:279] got preallocated values for min: 1000060000, max: 1000069999 for uid range in namespace ns
I0622 21:27:06.857328 476 matcher.go:292] got preallocated value for level: s0:c8,c2 for selinux options in namespace ns
I0622 21:27:06.857333 476 matcher.go:322] got preallocated value for groups: 1000060000/10000 in namespace ns
I0622 21:27:06.857344 476 matcher.go:292] got preallocated value for level: s0:c8,c2 for selinux options in namespace ns
I0622 21:27:06.857352 476 matcher.go:292] got preallocated value for level: s0:c8,c2 for selinux options in namespace ns
I0622 21:27:06.857361 476 matcher.go:279] got preallocated values for min: 1000060000, max: 1000069999 for uid range in namespace ns
I0622 21:27:06.857371 476 matcher.go:292] got preallocated value for level: s0:c8,c2 for selinux options in namespace ns
I0622 21:27:06.857376 476 matcher.go:322] got preallocated value for groups: 1000060000/10000 in namespace ns
I0622 21:27:06.857383 476 matcher.go:342] got preallocated value for groups: 1000060000/10000 in namespace ns
I0622 21:27:06.857404 476 matcher.go:279] got preallocated values for min: 1000060000, max: 1000069999 for uid range in namespace ns
I0622 21:27:06.857419 476 matcher.go:292] got preallocated value for level: s0:c8,c2 for selinux options in namespace ns
I0622 21:27:06.857427 476 matcher.go:322] got preallocated value for groups: 1000060000/10000 in namespace ns
I0622 21:27:06.857450 476 admission.go:217] validating pod node2normalpod (generate: ) against providers anyuid,restricted,nonroot,hostmount-anyuid,hostnetwork,hostaccess,privileged
I0622 21:27:06.857490 476 admission.go:170] pod node2normalpod (generate: ) validated against provider anyuid
I0622 21:27:06.858737 476 wrap.go:42] POST /api/v1/namespaces/ns/pods: (3.920491ms) 201 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:58002]
I0622 21:27:06.859189 476 graph_populator.go:84] updatePod ns/node2normalpod for node node2
I0622 21:27:06.859287 476 disruption.go:328] addPod called on pod "node2normalpod"
I0622 21:27:06.859311 476 disruption.go:403] No PodDisruptionBudgets found for pod node2normalpod, PodDisruptionBudget controller will avoid syncing.
I0622 21:27:06.859319 476 disruption.go:331] No matching pdb for pod "node2normalpod"
I0622 21:27:06.859341 476 util.go:186] Skipping processing of pod "ns"/"node2normalpod": it is scheduled to node "node2" which is not managed by the controller.
I0622 21:27:06.859399 476 taint_manager.go:345] Noticed pod update: types.NamespacedName{Namespace:"ns", Name:"node2normalpod"}
I0622 21:27:06.859852 476 admission.go:97] getting security context constraints for pod node2mirrorpod (generate: ) in namespace ns with user info &{system:admin [system:masters system:cluster-admins system:authenticated] map[]}
I0622 21:27:06.861077 476 wrap.go:42] GET /api/v1/namespaces/ns: (896.74µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:57854]
I0622 21:27:06.861273 476 matcher.go:292] got preallocated value for level: s0:c8,c2 for selinux options in namespace ns
I0622 21:27:06.861305 476 matcher.go:279] got preallocated values for min: 1000060000, max: 1000069999 for uid range in namespace ns
I0622 21:27:06.861313 476 matcher.go:292] got preallocated value for level: s0:c8,c2 for selinux options in namespace ns
I0622 21:27:06.861318 476 matcher.go:322] got preallocated value for groups: 1000060000/10000 in namespace ns
I0622 21:27:06.861334 476 matcher.go:292] got preallocated value for level: s0:c8,c2 for selinux options in namespace ns
I0622 21:27:06.861347 476 matcher.go:292] got preallocated value for level: s0:c8,c2 for selinux options in namespace ns
I0622 21:27:06.861357 476 matcher.go:279] got preallocated values for min: 1000060000, max: 1000069999 for uid range in namespace ns
I0622 21:27:06.861368 476 matcher.go:292] got preallocated value for level: s0:c8,c2 for selinux options in namespace ns
I0622 21:27:06.861373 476 matcher.go:322] got preallocated value for groups: 1000060000/10000 in namespace ns
I0622 21:27:06.861380 476 matcher.go:342] got preallocated value for groups: 1000060000/10000 in namespace ns
I0622 21:27:06.861396 476 matcher.go:279] got preallocated values for min: 1000060000, max: 1000069999 for uid range in namespace ns
I0622 21:27:06.861401 476 matcher.go:292] got preallocated value for level: s0:c8,c2 for selinux options in namespace ns
I0622 21:27:06.861410 476 matcher.go:322] got preallocated value for groups: 1000060000/10000 in namespace ns
I0622 21:27:06.861421 476 admission.go:217] validating pod node2mirrorpod (generate: ) against providers anyuid,restricted,nonroot,hostmount-anyuid,hostnetwork,hostaccess,privileged
I0622 21:27:06.861456 476 admission.go:170] pod node2mirrorpod (generate: ) validated against provider anyuid
I0622 21:27:06.862671 476 wrap.go:42] POST /api/v1/namespaces/ns/pods: (3.279977ms) 201 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:58002]
I0622 21:27:06.863224 476 graph_populator.go:84] updatePod ns/node2mirrorpod for node node2
I0622 21:27:06.863299 476 disruption.go:328] addPod called on pod "node2mirrorpod"
I0622 21:27:06.863323 476 disruption.go:403] No PodDisruptionBudgets found for pod node2mirrorpod, PodDisruptionBudget controller will avoid syncing.
I0622 21:27:06.863331 476 disruption.go:331] No matching pdb for pod "node2mirrorpod"
I0622 21:27:06.863386 476 taint_manager.go:345] Noticed pod update: types.NamespacedName{Namespace:"ns", Name:"node2mirrorpod"}
I0622 21:27:06.864303 476 wrap.go:42] GET /api/v1/namespaces/ns/pods/node2normalpod?resourceVersion=0: (427.679µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:57854]
I0622 21:27:06.867374 476 disruption.go:340] updatePod called on pod "node2normalpod"
I0622 21:27:06.867402 476 disruption.go:403] No PodDisruptionBudgets found for pod node2normalpod, PodDisruptionBudget controller will avoid syncing.
I0622 21:27:06.867412 476 disruption.go:343] No matching pdb for pod "node2normalpod"
I0622 21:27:06.867426 476 util.go:186] Skipping processing of pod "ns"/"node2normalpod": it is scheduled to node "node2" which is not managed by the controller.
I0622 21:27:06.867448 476 resource_quota_monitor.go:352] QuotaMonitor process object: /v1, Resource=pods, namespace ns, name node2normalpod, uid 03928c53-7663-11e8-9b47-0242ac110002, event type update
I0622 21:27:06.867454 476 resource_quota_monitor.go:352] QuotaMonitor process object: /v1, Resource=pods, namespace ns, name node2normalpod, uid 03928c53-7663-11e8-9b47-0242ac110002, event type update
I0622 21:27:06.867471 476 pvc_protection_controller.go:276] Got event on pod ns/node2normalpod
I0622 21:27:06.867546 476 pvc_protection_controller.go:141] Processing PVC ns/mypvc
I0622 21:27:06.867568 476 pvc_protection_controller.go:144] Finished processing PVC ns/mypvc (3.847µs)
I0622 21:27:06.868872 476 wrap.go:42] POST /api/v1/namespaces/ns/pods/node2normalpod/eviction: (5.529767ms) 201 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:58034]
I0622 21:27:06.869326 476 graph_populator.go:101] deletePod ns/node2normalpod for node node2
I0622 21:27:06.869435 476 disruption.go:369] deletePod called on pod "node2normalpod"
I0622 21:27:06.869460 476 disruption.go:403] No PodDisruptionBudgets found for pod node2normalpod, PodDisruptionBudget controller will avoid syncing.
I0622 21:27:06.869467 476 disruption.go:372] No matching pdb for pod "node2normalpod"
I0622 21:27:06.869486 476 util.go:186] Skipping processing of pod "ns"/"node2normalpod": it is scheduled to node "node2" which is not managed by the controller.
I0622 21:27:06.869517 476 resource_quota_monitor.go:352] QuotaMonitor process object: /v1, Resource=pods, namespace ns, name node2normalpod, uid 03928c53-7663-11e8-9b47-0242ac110002, event type delete
I0622 21:27:06.869563 476 pvc_protection_controller.go:276] Got event on pod ns/node2normalpod
I0622 21:27:06.869586 476 pvc_protection_controller.go:141] Processing PVC ns/mypvc
I0622 21:27:06.869600 476 pvc_protection_controller.go:144] Finished processing PVC ns/mypvc (4.029µs)
I0622 21:27:06.869634 476 deployment_controller.go:357] Pod node2normalpod deleted.
I0622 21:27:06.869660 476 taint_manager.go:338] Noticed pod deletion: types.NamespacedName{Namespace:"ns", Name:"node2normalpod"}
I0622 21:27:06.869706 476 resource_quota_monitor.go:352] QuotaMonitor process object: /v1, Resource=pods, namespace ns, name node2normalpod, uid 03928c53-7663-11e8-9b47-0242ac110002, event type delete
I0622 21:27:06.870340 476 wrap.go:42] GET /api/v1/namespaces/ns/pods/node2mirrorpod?resourceVersion=0: (355.037µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:57854]
I0622 21:27:06.873199 476 resource_quota_monitor.go:352] QuotaMonitor process object: /v1, Resource=pods, namespace ns, name node2mirrorpod, uid 03932728-7663-11e8-9b47-0242ac110002, event type update
I0622 21:27:06.873271 476 disruption.go:340] updatePod called on pod "node2mirrorpod"
I0622 21:27:06.873289 476 resource_quota_monitor.go:352] QuotaMonitor process object: /v1, Resource=pods, namespace ns, name node2mirrorpod, uid 03932728-7663-11e8-9b47-0242ac110002, event type update
I0622 21:27:06.873293 476 disruption.go:403] No PodDisruptionBudgets found for pod node2mirrorpod, PodDisruptionBudget controller will avoid syncing.
I0622 21:27:06.873324 476 disruption.go:343] No matching pdb for pod "node2mirrorpod"
I0622 21:27:06.873308 476 pvc_protection_controller.go:276] Got event on pod ns/node2mirrorpod
I0622 21:27:06.874758 476 wrap.go:42] POST /api/v1/namespaces/ns/pods/node2mirrorpod/eviction: (5.433644ms) 201 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:58034]
I0622 21:27:06.875122 476 graph_populator.go:101] deletePod ns/node2mirrorpod for node node2
I0622 21:27:06.875194 476 disruption.go:369] deletePod called on pod "node2mirrorpod"
I0622 21:27:06.875218 476 disruption.go:403] No PodDisruptionBudgets found for pod node2mirrorpod, PodDisruptionBudget controller will avoid syncing.
I0622 21:27:06.875226 476 disruption.go:372] No matching pdb for pod "node2mirrorpod"
I0622 21:27:06.875260 476 resource_quota_monitor.go:352] QuotaMonitor process object: /v1, Resource=pods, namespace ns, name node2mirrorpod, uid 03932728-7663-11e8-9b47-0242ac110002, event type delete
I0622 21:27:06.875286 476 pvc_protection_controller.go:276] Got event on pod ns/node2mirrorpod
I0622 21:27:06.875315 476 deployment_controller.go:357] Pod node2mirrorpod deleted.
I0622 21:27:06.875341 476 taint_manager.go:338] Noticed pod deletion: types.NamespacedName{Namespace:"ns", Name:"node2mirrorpod"}
I0622 21:27:06.875388 476 resource_quota_monitor.go:352] QuotaMonitor process object: /v1, Resource=pods, namespace ns, name node2mirrorpod, uid 03932728-7663-11e8-9b47-0242ac110002, event type delete
INFO: 2018/06/22 21:27:06 ccBalancerWrapper: updating state and picker called by balancer: IDLE, 0xc432c1a300
INFO: 2018/06/22 21:27:06 dialing to target with scheme: ""
INFO: 2018/06/22 21:27:06 could not get resolver for scheme: ""
INFO: 2018/06/22 21:27:06 balancerWrapper: is pickfirst: false
INFO: 2018/06/22 21:27:06 balancerWrapper: got update addr from Notify: [{127.0.0.1:12235 <nil>}]
INFO: 2018/06/22 21:27:06 ccBalancerWrapper: new subconn: [{127.0.0.1:12235 0 <nil>}]
INFO: 2018/06/22 21:27:06 balancerWrapper: handle subconn state change: 0xc424a1e800, CONNECTING
INFO: 2018/06/22 21:27:06 ccBalancerWrapper: updating state and picker called by balancer: CONNECTING, 0xc432c1a300
INFO: 2018/06/22 21:27:06 balancerWrapper: handle subconn state change: 0xc424a1e800, READY
INFO: 2018/06/22 21:27:06 ccBalancerWrapper: updating state and picker called by balancer: READY, 0xc432c1a300
--- PASS: TestIntegration/TestNewAppRunAll (8.61s)
runner_test.go:187:
--- PASS: TestNewAppRunAll/successful_ruby_app_generation (0.47s)
--- PASS: TestNewAppRunAll/successful_ruby_app_generation_with_labels (0.46s)
--- PASS: TestNewAppRunAll/successful_docker_app_generation (0.44s)
--- PASS: TestNewAppRunAll/app_generation_using_context_dir (0.96s)
--- PASS: TestNewAppRunAll/failed_app_generation_using_missing_context_dir (2.80s)
--- PASS: TestNewAppRunAll/insecure_registry_generation (1.59s)
--- PASS: TestNewAppRunAll/emptyDir_volumes (0.20s)
--- PASS: TestNewAppRunAll/Docker_build (0.92s)
--- PASS: TestNewAppRunAll/Docker_build_with_no_registry_image (0.57s)
--- PASS: TestNewAppRunAll/custom_name (0.00s)
newapp_test.go:863: &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:""}, Status:"Failure", Message:"Internal error occurred: test error", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc4212c1860), Code:500}}
--- PASS: TestNewAppRunAll/partial_matches (0.00s)
=== OUTPUT
I0622 21:26:59.007334 722 repository.go:388] Executing git clone --depth=1 --recursive https://github.com/openshift/ruby-hello-world /tmp/openshift/test-integration/tmp-testnewapprunall598887215/gen560185533
I0622 21:26:59.320164 722 sourcelookup.go:313] Checking if https://github.com/openshift/ruby-hello-world requires authentication
I0622 21:26:59.320188 722 repository.go:388] Executing git ls-remote --heads https://github.com/openshift/ruby-hello-world
I0622 21:26:59.465313 722 imagestreamlookup.go:61] checking ImageStreams default/ruby with ref "latest"
I0622 21:26:59.465373 722 dockerimagelookup.go:84] checking remote registry for "ruby"
I0622 21:26:59.465594 722 resolve.go:190] Code [https://github.com/openshift/ruby-hello-world]
I0622 21:26:59.465613 722 resolve.go:191] Components [ruby]
I0622 21:26:59.465652 722 newapp.go:431] found group: ruby
I0622 21:26:59.465666 722 newapp.go:440] will add "" secrets into a build for a source build of "https://github.com/openshift/ruby-hello-world"
I0622 21:26:59.465688 722 newapp.go:466] will use "ruby" as the base image for a source build of "https://github.com/openshift/ruby-hello-world"
--> Found Docker image ruby from for "ruby"
* An image stream tag will be created as "ruby:latest" that will track the source image
* The source repository appears to match: ruby
* A source build using source code from https://github.com/openshift/ruby-hello-world will be created
* The resulting image will be pushed to image stream tag "ruby-hello-world:latest"
* Every time "ruby:latest" changes a new build will be triggered
* This image will be deployed in deployment config "ruby-hello-world"
* Port 8080/tcp will be load balanced by service "ruby-hello-world"
* Other containers can access this service through the hostname "ruby-hello-world"
* WARNING: Image "ruby" runs as the 'root' user which may not be permitted by your cluster administrator
=== RUN TestNewAppRunAll/successful_docker_app_generation
I0622 21:26:59.466112 722 repository.go:388] Executing git clone --depth=1 --recursive https://github.com/openshift/ruby-hello-world /tmp/openshift/test-integration/tmp-testnewapprunall598887215/gen330153194
I0622 21:26:59.759385 722 sourcelookup.go:313] Checking if https://github.com/openshift/ruby-hello-world requires authentication
I0622 21:26:59.759409 722 repository.go:388] Executing git ls-remote --heads https://github.com/openshift/ruby-hello-world
I0622 21:26:59.903590 722 imagestreamlookup.go:61] checking ImageStreams centos/ruby-22-centos7 with ref "latest"
I0622 21:26:59.903632 722 dockerimagelookup.go:84] checking remote registry for "centos/ruby-22-centos7"
I0622 21:26:59.903788 722 resolve.go:190] Code [https://github.com/openshift/ruby-hello-world]
I0622 21:26:59.903804 722 resolve.go:191] Components [centos/ruby-22-centos7]
I0622 21:26:59.903844 722 newapp.go:431] found group: centos/ruby-22-centos7
I0622 21:26:59.903856 722 newapp.go:440] will add "" secrets into a build for a source build of "https://github.com/openshift/ruby-hello-world"
I0622 21:26:59.903871 722 newapp.go:466] will use "centos/ruby-22-centos7" as the base image for a source build of "https://github.com/openshift/ruby-hello-world"
--> Found Docker image ruby from for "centos/ruby-22-centos7"
* An image stream tag will be created as "ruby-22-centos7:latest" that will track the source image
* A Docker build using source code from https://github.com/openshift/ruby-hello-world will be created
* The resulting image will be pushed to image stream tag "ruby-hello-world:latest"
* Every time "ruby-22-centos7:latest" changes a new build will be triggered
* This image will be deployed in deployment config "ruby-hello-world"
* Port 8080/tcp will be load balanced by service "ruby-hello-world"
* Other containers can access this service through the hostname "ruby-hello-world"
* WARNING: Image "centos/ruby-22-centos7" runs as the 'root' user which may not be permitted by your cluster administrator
=== RUN TestNewAppRunAll/app_generation_using_context_dir
I0622 21:26:59.904259 722 repository.go:388] Executing git clone --depth=1 --recursive https://github.com/openshift/sti-ruby /tmp/openshift/test-integration/tmp-testnewapprunall598887215/gen990028059
I0622 21:27:00.628176 722 sourcelookup.go:313] Checking if https://github.com/openshift/sti-ruby requires authentication
I0622 21:27:00.628202 722 repository.go:388] Executing git ls-remote --heads https://github.com/openshift/sti-ruby
I0622 21:27:00.859690 722 imagestreamlookup.go:61] checking ImageStreams default/ruby with ref "latest"
I0622 21:27:00.859764 722 imagestreamlookup.go:116] Adding Image stream "ruby" (tag "latest") in project "default" as component match for "ruby" with score 0
I0622 21:27:00.859792 722 resolve.go:190] Code [https://github.com/openshift/sti-ruby]
I0622 21:27:00.859827 722 resolve.go:191] Components [ruby]
I0622 21:27:00.859852 722 newapp.go:431] found group: ruby
I0622 21:27:00.859858 722 newapp.go:440] will add "" secrets into a build for a source build of "https://github.com/openshift/sti-ruby"
I0622 21:27:00.859882 722 newapp.go:466] will use "ruby" as the base image for a source build of "https://github.com/openshift/sti-ruby"
--> Found image in image stream "default/ruby" under tag "latest" for "ruby"
* The source repository appears to match: ruby
* A source build using source code from https://github.com/openshift/sti-ruby will be created
* The resulting image will be pushed to image stream tag "sti-ruby:latest"
* Use 'start-build' to trigger a new build
* This image will be deployed in deployment config "sti-ruby"
* Port 8080/tcp will be load balanced by service "sti-ruby"
* Other containers can access this service through the hostname "sti-ruby"
* WARNING: Image "default/ruby:latest" runs as the 'root' user which may not be permitted by your cluster administrator
I0622 21:27:00.860004 722 pipeline.go:552] acceptor determined that imagestreamtag ruby:latest in namespace default exists so don't accept
=== RUN TestNewAppRunAll/failed_app_generation_using_missing_context_dir
I0622 21:27:00.860356 722 repository.go:388] Executing git clone --depth=1 --recursive https://github.com/openshift/sti-ruby /tmp/openshift/test-integration/tmp-testnewapprunall598887215/gen857552288
=== RUN TestNewAppRunAll/insecure_registry_generation
I0622 21:27:03.658177 722 repository.go:388] Executing git clone --depth=1 --recursive https://github.com/openshift/ruby-hello-world /tmp/openshift/test-integration/tmp-testnewapprunall598887215/gen409164927
I0622 21:27:04.718410 722 sourcelookup.go:313] Checking if https://github.com/openshift/ruby-hello-world requires authentication
I0622 21:27:04.718435 722 repository.go:388] Executing git ls-remote --heads https://github.com/openshift/ruby-hello-world
I0622 21:27:05.249868 722 imagestreamlookup.go:45] image streams must be of the form [<namespace>/]<name>[:<tag>|@<digest>], term "myrepo:5000/myco/example" did not qualify
I0622 21:27:05.249904 722 templatelookup.go:38] template references must be of the form [<namespace>/]<name>, term "myrepo:5000/myco/example" did not qualify
I0622 21:27:05.249969 722 dockerimagelookup.go:84] checking remote registry for "myrepo:5000/myco/example"
I0622 21:27:05.250288 722 resolve.go:492] Using "https://github.com/openshift/ruby-hello-world" as the source for build
I0622 21:27:05.250303 722 resolve.go:494] Pairing with component myrepo:5000/myco/example
I0622 21:27:05.250317 722 resolve.go:190] Code [https://github.com/openshift/ruby-hello-world]
I0622 21:27:05.250327 722 resolve.go:191] Components [myrepo:5000/myco/example]
I0622 21:27:05.250350 722 newapp.go:431] found group: myrepo:5000/myco/example
I0622 21:27:05.250359 722 newapp.go:440] will add "" secrets into a build for a source build of "https://github.com/openshift/ruby-hello-world"
I0622 21:27:05.250380 722 newapp.go:466] will use "myrepo:5000/myco/example" as the base image for a source build of "https://github.com/openshift/ruby-hello-world"
--> Found Docker image ruby from myrepo:5000 for "myrepo:5000/myco/example"
* An image stream tag will be created as "example:latest" that will track the source image
* The source repository appears to match: ruby
* A source build using source code from https://github.com/openshift/ruby-hello-world will be created
* The resulting image will be pushed to image stream tag "ruby-hello-world:latest"
* Every time "example:latest" changes a new build will be triggered
* This image will be deployed in deployment config "ruby-hello-world"
* Port 8080/tcp will be load balanced by service "ruby-hello-world"
* Other containers can access this service through the hostname "ruby-hello-world"
* WARNING: Image "myrepo:5000/myco/example" runs as the 'root' user which may not be permitted by your cluster administrator
=== RUN TestNewAppRunAll/emptyDir_volumes
I0622 21:27:05.250732 722 dockerimagelookup.go:304] checking Docker registry for "mysql", allow-insecure=false
I0622 21:27:05.447501 722 dockerimagelookup.go:333] found image: &dockerv1client.Image{Image:docker.Image{ID:"sha256:d60c13a2bfdbbeb9cf1c84fd3cb0a1577b2bbaeec11e44bf345f4da90586e9e1", RepoTags:[]string(nil), Parent:"", Comment:"", Created:time.Time{wall:0x22538e8e, ext:63661074076, loc:(*time.Location)(nil)}, Container:"db34fcb24f1c3b4aaca62f45903a3c05c32b97dc29ff405e8ec8e485cf3d6366", ContainerConfig:docker.Config{Hostname:"db34fcb24f1c", Domainname:"", User:"", Memory:0, MemorySwap:0, MemoryReservation:0, KernelMemory:0, CPUShares:0, CPUSet:"", PortSpecs:[]string(nil), ExposedPorts:map[docker.Port]struct {}{"3306/tcp":struct {}{}}, PublishService:"", StopSignal:"", Env:[]string{"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "GOSU_VERSION=1.7", "MYSQL_MAJOR=8.0", "MYSQL_VERSION=8.0.11-1debian9"}, Cmd:[]string{"/bin/sh", "-c", "#(nop) ", "CMD [\"mysqld\"]"}, Shell:[]string(nil), Healthcheck:(*docker.HealthConfig)(nil), DNS:[]string(nil), Image:"sha256:7534e869ac7783111183cc06c4c002c7f0e84fb0f0d519be3b076783193865ca", Volumes:map[string]struct {}{"/var/lib/mysql":struct {}{}}, VolumeDriver:"", WorkingDir:"", MacAddress:"", Entrypoint:[]string{"docker-entrypoint.sh"}, SecurityOpts:[]string(nil), OnBuild:[]string{}, Mounts:[]docker.Mount(nil), Labels:map[string]string{}, AttachStdin:false, AttachStdout:false, AttachStderr:false, ArgsEscaped:true, Tty:false, OpenStdin:false, StdinOnce:false, NetworkDisabled:false, VolumesFrom:""}, DockerVersion:"17.06.2-ce", Author:"", Config:(*docker.Config)(0xc420160780), Architecture:"amd64", Size:0, VirtualSize:0, RepoDigests:[]string(nil), RootFS:(*docker.RootFS)(nil), OS:""}, PullByID:true}
I0622 21:27:05.447937 722 dockerimagelookup.go:352] Adding Docker image "mysql" (tag "latest"), d60c13a, from Docker Hub as component match for "mysql" with score 0
I0622 21:27:05.447979 722 resolve.go:190] Code []
I0622 21:27:05.447994 722 resolve.go:191] Components [mysql]
I0622 21:27:05.448017 722 newapp.go:431] found group: mysql
I0622 21:27:05.448042 722 newapp.go:480] will include "mysql"
--> Found Docker image d60c13a (6 weeks old) from Docker Hub for "mysql"
* An image stream tag will be created as "mysql:latest" that will track this image
* This image will be deployed in deployment config "mysql"
* Port 3306/tcp will be load balanced by service "mysql"
* Other containers can access this service through the hostname "mysql"
* This image declares volumes and will default to use non-persistent, host-local storage.
You can add persistent volumes later by running 'volume dc/mysql --add ...'
* WARNING: Image "mysql" runs as the 'root' user which may not be permitted by your cluster administrator
=== RUN TestNewAppRunAll/Docker_build
I0622 21:27:05.448480 722 repository.go:388] Executing git clone --depth=1 --recursive https://github.com/openshift/ruby-hello-world /tmp/openshift/test-integration/tmp-testnewapprunall598887215/gen295471316
I0622 21:27:06.188681 722 sourcelookup.go:313] Checking if https://github.com/openshift/ruby-hello-world requires authentication
I0622 21:27:06.188704 722 repository.go:388] Executing git ls-remote --heads https://github.com/openshift/ruby-hello-world
I0622 21:27:06.365176 722 imagestreamlookup.go:61] checking ImageStreams centos/ruby-22-centos7 with ref "latest"
I0622 21:27:06.365220 722 dockerimagelookup.go:84] checking remote registry for "centos/ruby-22-centos7"
I0622 21:27:06.365452 722 resolve.go:190] Code [https://github.com/openshift/ruby-hello-world]
I0622 21:27:06.365468 722 resolve.go:191] Components [centos/ruby-22-centos7]
I0622 21:27:06.365491 722 newapp.go:431] found group: centos/ruby-22-centos7
I0622 21:27:06.365506 722 newapp.go:440] will add "" secrets into a build for a source build of "https://github.com/openshift/ruby-hello-world"
I0622 21:27:06.365521 722 newapp.go:466] will use "centos/ruby-22-centos7" as the base image for a source build of "https://github.com/openshift/ruby-hello-world"
--> Found Docker image ruby from for "centos/ruby-22-centos7"
* An image stream tag will be created as "ruby-22-centos7:latest" that will track the source image
* A Docker build using source code from https://github.com/openshift/ruby-hello-world will be created
* The resulting image will be pushed to image stream tag "ruby-hello-world:latest"
* Every time "ruby-22-centos7:latest" changes a new build will be triggered
* This image will be deployed in deployment config "ruby-hello-world"
* Port 8080/tcp will be load balanced by service "ruby-hello-world"
* Other containers can access this service through the hostname "ruby-hello-world"
* WARNING: Image "centos/ruby-22-centos7" runs as the 'root' user which may not be permitted by your cluster administrator
=== RUN TestNewAppRunAll/Docker_build_with_no_registry_image
I0622 21:27:06.365974 722 repository.go:388] Executing git clone --depth=1 --recursive https://github.com/openshift/ruby-hello-world /tmp/openshift/test-integration/tmp-testnewapprunall598887215/gen614963277
I0622 21:27:06.764336 722 sourcelookup.go:313] Checking if https://github.com/openshift/ruby-hello-world requires authentication
I0622 21:27:06.764357 722 repository.go:388] Executing git ls-remote --heads https://github.com/openshift/ruby-hello-world
I0622 21:27:06.936103 722 imagestreamlookup.go:61] checking ImageStreams centos/ruby-22-centos7 with ref "latest"
I0622 21:27:06.936159 722 dockerimagelookup.go:103] checking local Docker daemon for "centos/ruby-22-centos7"
I0622 21:27:06.936187 722 dockerimagelookup.go:420] partial match on "centos/ruby-22-centos7" with 0.000000
I0622 21:27:06.936488 722 resolve.go:190] Code [https://github.com/openshift/ruby-hello-world]
I0622 21:27:06.936511 722 resolve.go:191] Components [centos/ruby-22-centos7]
I0622 21:27:06.936536 722 newapp.go:431] found group: centos/ruby-22-centos7
I0622 21:27:06.936551 722 newapp.go:440] will add "" secrets into a build for a source build of "https://github.com/openshift/ruby-hello-world"
W0622 21:27:06.936584 722 newapp.go:461] Could not find an image stream match for "centos/ruby-22-centos7". Make sure that a Docker image with that tag is available on the node for the build to succeed.
I0622 21:27:06.936604 722 newapp.go:466] will use "centos/ruby-22-centos7" as the base image for a source build of "https://github.com/openshift/ruby-hello-world"
--> Found Docker image ruby from for "centos/ruby-22-centos7"
* A Docker build using source code from https://github.com/openshift/ruby-hello-world will be created
* The resulting image will be pushed to image stream tag "ruby-hello-world:latest"
* Use 'start-build' to trigger a new build
* This image will be deployed in deployment config "ruby-hello-world"
* Port 8080/tcp will be load balanced by service "ruby-hello-world"
* Other containers can access this service through the hostname "ruby-hello-world"
* WARNING: Image "centos/ruby-22-centos7" runs as the 'root' user which may not be permitted by your cluster administrator
=== RUN TestNewAppRunAll/custom_name
I0622 21:27:06.936994 722 dockerimagelookup.go:84] checking remote registry for "mysql"
I0622 21:27:06.937247 722 resolve.go:190] Code []
I0622 21:27:06.937269 722 resolve.go:191] Components [mysql]
I0622 21:27:06.937289 722 newapp.go:431] found group: mysql
I0622 21:27:06.937316 722 newapp.go:480] will include "mysql"
--> Found Docker image from for "mysql"
* An image stream tag will be created as "custom:latest" that will track this image
* This image will be deployed in deployment config "custom"
* Port 8080/tcp will be load balanced by service "custom"
* Other containers can access this service through the hostname "custom"
* WARNING: Image "mysql" runs as the 'root' user which may not be permitted by your cluster administrator
=== RUN TestNewAppRunAll/partial_matches
I0622 21:27:06.937541 722 dockerimagelookup.go:84] checking remote registry for "mysql"
--- PASS: TestIntegration/TestNewAppResolve (0.33s)
runner_test.go:187:
=== OUTPUT
I0622 21:27:07.100176 1286 dockerimagelookup.go:304] checking Docker registry for "mysql:invalid", allow-insecure=false
I0622 21:27:07.253621 1286 client.go:656] unable to find tagged image at "https://registry-1.docker.io/v2/library/mysql/manifests/invalid", 404 map[Content-Type:[application/json; charset=utf-8] Docker-Distribution-Api-Version:[registry/2.0] Date:[Fri, 22 Jun 2018 21:27:07 GMT] Content-Length:[97] Strict-Transport-Security:[max-age=31536000]]: {"errors":[{"code":"MANIFEST_UNKNOWN","message":"manifest unknown","detail":{"Tag":"invalid"}}]}
I0622 21:27:07.253678 1286 dockerimagelookup.go:319] tag not found: tag "invalid" has not been set on repository "library/mysql"
--- PASS: TestIntegration/TestNewAppNewBuildEnvVars (0.96s)
runner_test.go:187:
=== OUTPUT
I0622 21:27:07.153616 1293 repository.go:388] Executing git clone --depth=1 --recursive https://github.com/openshift/ruby-hello-world /tmp/openshift/test-integration/tmp-testnewappnewbuildenvvars222800697/gen171130560
I0622 21:27:07.538311 1293 sourcelookup.go:313] Checking if https://github.com/openshift/ruby-hello-world requires authentication
I0622 21:27:07.538337 1293 repository.go:388] Executing git ls-remote --heads https://github.com/openshift/ruby-hello-world
I0622 21:27:07.685257 1293 dockerimagelookup.go:304] checking Docker registry for "centos/ruby-22-centos7", allow-insecure=false
I0622 21:27:07.863427 1293 dockerimagelookup.go:333] found image: &dockerv1client.Image{Image:docker.Image{ID:"sha256:a18c8706118a5c4c9f1adf045024d2abf06ba632b5674b23421019ee4d3edcae", RepoTags:[]string(nil), Parent:"", Comment:"", Created:time.Time{wall:0x352f7acc, ext:63663450643, loc:(*time.Location)(nil)}, Container:"8942a55a6294197eb9a2b6306fd99664a8b40a91d29c1f58fdc4095eb2fcdc6f", ContainerConfig:docker.Config{Hostname:"42b8e6b59c1d", Domainname:"", User:"1001", Memory:0, MemorySwap:0, MemoryReservation:0, KernelMemory:0, CPUShares:0, CPUSet:"", PortSpecs:[]string(nil), ExposedPorts:map[docker.Port]struct {}{"8080/tcp":struct {}{}}, PublishService:"", StopSignal:"", Env:[]string{"PATH=/opt/app-root/src/bin:/opt/app-root/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "SUMMARY=Platform for building and running Ruby 2.2 applications", "DESCRIPTION=Ruby 2.2 available as container is a base platform for building and running various Ruby 2.2 applications and frameworks. Ruby is the interpreted scripting language for quick and easy object-oriented programming. It has many features to process text files and to do system management tasks (as in Perl). It is simple, straight-forward, and extensible.", "STI_SCRIPTS_URL=image:///usr/libexec/s2i", "STI_SCRIPTS_PATH=/usr/libexec/s2i", "APP_ROOT=/opt/app-root", "HOME=/opt/app-root/src", "BASH_ENV=/opt/app-root/etc/scl_enable", "ENV=/opt/app-root/etc/scl_enable", "PROMPT_COMMAND=. 
/opt/app-root/etc/scl_enable", "NODEJS_SCL=rh-nodejs8", "RUBY_VERSION=2.2"}, Cmd:[]string{"/bin/sh", "-c", "#(nop) ", "LABEL io.openshift.builder-version=\"c159276\""}, Shell:[]string(nil), Healthcheck:(*docker.HealthConfig)(nil), DNS:[]string(nil), Image:"sha256:b61cbe4541ddd8940382e5788d886902864e4582d20b2a2df74f402ea937f5d3", Volumes:map[string]struct {}(nil), VolumeDriver:"", WorkingDir:"/opt/app-root/src", MacAddress:"", Entrypoint:[]string{"container-entrypoint"}, SecurityOpts:[]string(nil), OnBuild:[]string{}, Mounts:[]docker.Mount(nil), Labels:map[string]string{"io.k8s.display-name":"Ruby 2.2", "io.openshift.tags":"builder,ruby,ruby22", "name":"centos/ruby-22-centos7", "usage":"s2i build https://github.com/sclorg/s2i-ruby-container.git --context-dir=2.4/test/puma-test-app/ centos/ruby-22-centos7 ruby-sample-app", "version":"2.2", "io.openshift.expose-services":"8080:http", "com.redhat.component":"rh-ruby22-docker", "description":"Ruby 2.2 available as container is a base platform for building and running various Ruby 2.2 applications and frameworks. Ruby is the interpreted scripting language for quick and easy object-oriented programming. It has many features to process text files and to do system management tasks (as in Perl). It is simple, straight-forward, and extensible.", "io.k8s.description":"Ruby 2.2 available as container is a base platform for building and running various Ruby 2.2 applications and frameworks. Ruby is the interpreted scripting language for quick and easy object-oriented programming. It has many features to process text files and to do system management tasks (as in Perl). 
It is simple, straight-forward, and extensible.", "io.s2i.scripts-url":"image:///usr/libexec/s2i", "summary":"Platform for building and running Ruby 2.2 applications", "io.openshift.builder-version":"\"c159276\"", "io.openshift.s2i.scripts-url":"image:///usr/libexec/s2i", "maintainer":"SoftwareCollections.org <sclorg@redhat.com>", "org.label-schema.schema-version":"= 1.0 org.label-schema.name=CentOS Base Image org.label-schema.vendor=CentOS org.label-schema.license=GPLv2 org.label-schema.build-date=20180402", "release":"1"}, AttachStdin:false, AttachStdout:false, AttachStderr:false, ArgsEscaped:true, Tty:false, OpenStdin:false, StdinOnce:false, NetworkDisabled:false, VolumesFrom:""}, DockerVersion:"1.13.1", Author:"", Config:(*docker.Config)(0xc42082a000), Architecture:"amd64", Size:0, VirtualSize:0, RepoDigests:[]string(nil), RootFS:(*docker.RootFS)(nil), OS:""}, PullByID:true}
I0622 21:27:07.863926 1293 dockerimagelookup.go:352] Adding Docker image "centos/ruby-22-centos7" (tag "latest"), a18c870, from Docker Hub as component match for "centos/ruby-22-centos7" with score 0
I0622 21:27:07.863970 1293 dockerimagelookup.go:304] checking Docker registry for "openshift/nodejs-010-centos7", allow-insecure=false
I0622 21:27:07.930302 1293 dockerimagelookup.go:333] found image: &dockerv1client.Image{Image:docker.Image{ID:"sha256:bd971b467b08b8dbbbfee26bad80dcaa0110b184e0a8dd6c1b0460a6d6f5d332", RepoTags:[]string(nil), Parent:"51c34328d22d2456e6d532f86e0c5e8d8606a0d75d0975fbadf3d41fb361925f", Comment:"", Created:time.Time{wall:0x16a9e077, ext:63614743020, loc:(*time.Location)(nil)}, Container:"6ed29b37771bea2fc0ace8727bdc7c060224e1fb4e0f1450018ad19728247e3e", ContainerConfig:docker.Config{Hostname:"719d8d68f3dc", Domainname:"", User:"1001", Memory:0, MemorySwap:0, MemoryReservation:0, KernelMemory:0, CPUShares:0, CPUSet:"", PortSpecs:[]string(nil), ExposedPorts:map[docker.Port]struct {}{"8080/tcp":struct {}{}}, PublishService:"", StopSignal:"", Env:[]string{"PATH=/opt/app-root/src/node_modules/.bin/:/opt/app-root/src/.npm-global/bin/:/opt/app-root/src/bin:/opt/app-root/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "STI_SCRIPTS_URL=image:///usr/libexec/s2i", "STI_SCRIPTS_PATH=/usr/libexec/s2i", "HOME=/opt/app-root/src", "BASH_ENV=/opt/app-root/etc/scl_enable", "ENV=/opt/app-root/etc/scl_enable", "PROMPT_COMMAND=. 
/opt/app-root/etc/scl_enable", "NPM_RUN=start", "NODEJS_VERSION=0.10", "NPM_CONFIG_PREFIX=/opt/app-root/src/.npm-global"}, Cmd:[]string{"/bin/sh", "-c", "#(nop) ", "LABEL io.openshift.builder-version=bbaf6847c02b403f1c216c4759e71735367056a0"}, Shell:[]string(nil), Healthcheck:(*docker.HealthConfig)(nil), DNS:[]string(nil), Image:"sha256:5bcf62dda02b2b58d6be59ef74af8dfcee0dbb582740a7b3789db83fe428781f", Volumes:map[string]struct {}(nil), VolumeDriver:"", WorkingDir:"/opt/app-root/src", MacAddress:"", Entrypoint:[]string{"container-entrypoint"}, SecurityOpts:[]string(nil), OnBuild:[]string{}, Mounts:[]docker.Mount(nil), Labels:map[string]string{"build-date":"20160906", "io.openshift.builder-version":"bbaf6847c02b403f1c216c4759e71735367056a0", "vendor":"CentOS", "com.redhat.deployments-dir":"/opt/app-root/src", "io.k8s.display-name":"Node.js 0.10", "io.openshift.expose-services":"8080:http", "io.openshift.s2i.scripts-url":"image:///usr/libexec/s2i", "name":"CentOS Base Image", "io.openshift.builder-base-version":"8c4d31f", "io.s2i.scripts-url":"image:///usr/libexec/s2i", "license":"GPLv2", "com.redhat.dev-mode":"DEV_MODE:false", "com.redhat.dev-mode.port":"DEBUG_PORT:5858", "io.k8s.description":"Platform for building and running Node.js 0.10 applications", "io.openshift.tags":"builder,nodejs,nodejs010"}, AttachStdin:false, AttachStdout:false, AttachStderr:false, ArgsEscaped:true, Tty:false, OpenStdin:false, StdinOnce:false, NetworkDisabled:false, VolumesFrom:""}, DockerVersion:"1.12.2-rc2", Author:"SoftwareCollections.org <sclorg@redhat.com>", Config:(*docker.Config)(0xc4207625a0), Architecture:"amd64", Size:0, VirtualSize:0, RepoDigests:[]string(nil), RootFS:(*docker.RootFS)(nil), OS:""}, PullByID:true}
I0622 21:27:07.930675 1293 dockerimagelookup.go:352] Adding Docker image "openshift/nodejs-010-centos7" (tag "latest"), bd971b4, from Docker Hub, author SoftwareCollections.org <sclorg@redhat.com> as component match for "openshift/nodejs-010-centos7" with score 0
I0622 21:27:07.930720 1293 resolve.go:492] Using "https://github.com/openshift/ruby-hello-world" as the source for build
I0622 21:27:07.930745 1293 resolve.go:494] Pairing with component centos/ruby-22-centos7
I0622 21:27:07.930765 1293 resolve.go:494] Pairing with component openshift/nodejs-010-centos7
I0622 21:27:07.930779 1293 resolve.go:190] Code [https://github.com/openshift/ruby-hello-world]
I0622 21:27:07.930830 1293 resolve.go:191] Components [centos/ruby-22-centos7,openshift/nodejs-010-centos7]
I0622 21:27:07.930889 1293 newapp.go:431] found group: centos/ruby-22-centos7
I0622 21:27:07.930903 1293 newapp.go:440] will add "" secrets into a build for a source build of "https://github.com/openshift/ruby-hello-world"
I0622 21:27:07.930928 1293 newapp.go:466] will use "centos/ruby-22-centos7" as the base image for a source build of "https://github.com/openshift/ruby-hello-world"
--> Found Docker image a18c870 (3 weeks old) from Docker Hub for "centos/ruby-22-centos7"
Ruby 2.2
--------
Ruby 2.2 available as container is a base platform for building and running various Ruby 2.2 applications and frameworks. Ruby is the interpreted scripting language for quick and easy object-oriented programming. It has many features to process text files and to do system management tasks (as in Perl). It is simple, straight-forward, and extensible.
Tags: builder, ruby, ruby22
* An image stream tag will be created as "ruby-22-centos7:latest" that will track the source image
* The source repository appears to match: ruby
* A source build using source code from https://github.com/openshift/ruby-hello-world will be created
* The resulting image will be pushed with Docker to "ruby-hello-world:latest"
* Every time "ruby-22-centos7:latest" changes a new build will be triggered
I0622 21:27:07.931094 1293 newapp.go:431] found group: openshift/nodejs-010-centos7
I0622 21:27:07.931108 1293 newapp.go:440] will add "" secrets into a build for a source build of "https://github.com/openshift/ruby-hello-world"
I0622 21:27:07.931127 1293 newapp.go:466] will use "openshift/nodejs-010-centos7" as the base image for a source build of "https://github.com/openshift/ruby-hello-world"
--> Found Docker image bd971b4 (19 months old) from Docker Hub for "openshift/nodejs-010-centos7"
Node.js 0.10
------------
Platform for building and running Node.js 0.10 applications
Tags: builder, nodejs, nodejs010
* An image stream tag will be created as "nodejs-010-centos7:latest" that will track the source image
* The source repository appears to match: ruby
* A source build using source code from https://github.com/openshift/ruby-hello-world will be created
* The resulting image will be pushed with Docker to "ruby-hello-world-1:latest"
* Every time "nodejs-010-centos7:latest" changes a new build will be triggered
I0622 21:27:07.931254 1293 pipeline.go:507] acceptor determined that imagestream ruby-22-centos7 in namespace default exists so don't accept: &image.ImageStream{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:image.ImageStreamSpec{LookupPolicy:image.ImageLookupPolicy{Local:false}, DockerImageRepository:"", Tags:map[string]image.TagReference(nil)}, Status:image.ImageStreamStatus{DockerImageRepository:"", PublicDockerImageRepository:"", Tags:map[string]image.TagEventList(nil)}}
I0622 21:27:07.931361 1293 pipeline.go:552] acceptor determined that imagestreamtag ruby-22-centos7:latest in namespace default exists so don't accept
I0622 21:27:07.931421 1293 pipeline.go:507] acceptor determined that imagestream nodejs-010-centos7 in namespace default exists so don't accept: &image.ImageStream{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:image.ImageStreamSpec{LookupPolicy:image.ImageLookupPolicy{Local:false}, DockerImageRepository:"", Tags:map[string]image.TagReference(nil)}, Status:image.ImageStreamStatus{DockerImageRepository:"", PublicDockerImageRepository:"", Tags:map[string]image.TagEventList(nil)}}
I0622 21:27:07.931459 1293 pipeline.go:552] acceptor determined that imagestreamtag nodejs-010-centos7:latest in namespace default exists so don't accept
--- PASS: TestIntegration/TestOAuthTimeoutNotEnabled (363.06s)
runner_test.go:187:
=== OUTPUT
64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:31.631436 32494 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (975.313µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:31.654930 32494 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (874.523µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:31.655342 32494 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (865.817µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:31.656701 32494 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (810.125µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:31.657502 32494 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (952.05µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:31.657534 32494 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (1.041998ms) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:31.713728 32494 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (810.67µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:31.717288 32494 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (872.234µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:31.731505 32494 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (871.459µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:31.743617 32494 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (832.787µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:31.744544 32494 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (823.457µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:31.745899 32494 wrap.go:42] GET /api/v1/namespaces/kube-system/configmaps/kube-controller-manager: (950.099µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8/leader-election] 127.0.0.1:49948]
I0622 21:29:31.748394 32494 wrap.go:42] PUT /api/v1/namespaces/kube-system/configmaps/kube-controller-manager: (2.074003ms) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8/leader-election] 127.0.0.1:49948]
I0622 21:29:31.748584 32494 leaderelection.go:199] successfully renewed lease kube-system/kube-controller-manager
I0622 21:29:31.750959 32494 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (879.246µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:31.754198 32494 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (793.399µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:31.755954 32494 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (977.384µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:31.756027 32494 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (1.108932ms) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:31.763426 32494 wrap.go:42] GET /api/v1/namespaces/kube-system/configmaps/openshift-master-controllers: (860.939µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:31.765681 32494 wrap.go:42] PUT /api/v1/namespaces/kube-system/configmaps/openshift-master-controllers: (1.829707ms) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:31.765929 32494 leaderelection.go:199] successfully renewed lease kube-system/openshift-master-controllers
I0622 21:29:31.789543 32494 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (970.998µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:32.429816 32494 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (1.000425ms) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:32.633234 32494 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (1.116215ms) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:32.656735 32494 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (1.065644ms) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:32.656798 32494 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (872.999µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:32.658117 32494 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (888.391µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:32.659078 32494 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (945.336µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:32.659104 32494 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (968.439µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:32.710576 32494 wrap.go:42] GET /api/v1/namespaces/kube-system/configmaps/kube-scheduler: (912.494µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8/leader-election] 127.0.0.1:49948]
I0622 21:29:32.712981 32494 wrap.go:42] PUT /api/v1/namespaces/kube-system/configmaps/kube-scheduler: (1.96369ms) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8/leader-election] 127.0.0.1:49948]
I0622 21:29:32.713165 32494 leaderelection.go:199] successfully renewed lease kube-system/kube-scheduler
I0622 21:29:32.715079 32494 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (804.947µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:32.718587 32494 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (802.949µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:32.732799 32494 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (787.267µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:32.745006 32494 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (830.441µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:32.745955 32494 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (909.031µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:32.752274 32494 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (801.57µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:32.755839 32494 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (883.765µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:32.757348 32494 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (879.807µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:32.757386 32494 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (851.505µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:32.790884 32494 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (874.236µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:33.195716 32494 wrap.go:42] GET /api?timeout=32s: (285.754µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:33.196345 32494 wrap.go:42] GET /apis?timeout=32s: (258.538µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:33.197144 32494 wrap.go:42] GET /api/v1?timeout=32s: (407.036µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:33.197862 32494 wrap.go:42] GET /apis/apiregistration.k8s.io/v1?timeout=32s: (284.981µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:33.198450 32494 wrap.go:42] GET /apis/apiregistration.k8s.io/v1beta1?timeout=32s: (237.44µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:33.199162 32494 wrap.go:42] GET /apis/extensions/v1beta1?timeout=32s: (296.592µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:33.199860 32494 wrap.go:42] GET /apis/apps/v1?timeout=32s: (340.014µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:33.200538 32494 wrap.go:42] GET /apis/apps/v1beta2?timeout=32s: (246.012µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:33.201220 32494 wrap.go:42] GET /apis/apps/v1beta1?timeout=32s: (238.131µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:33.201912 32494 wrap.go:42] GET /apis/events.k8s.io/v1beta1?timeout=32s: (277.337µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:33.202552 32494 wrap.go:42] GET /apis/authentication.k8s.io/v1?timeout=32s: (253.545µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:33.203135 32494 wrap.go:42] GET /apis/authentication.k8s.io/v1beta1?timeout=32s: (218.685µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:33.203674 32494 wrap.go:42] GET /apis/authorization.k8s.io/v1?timeout=32s: (234.148µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:33.204252 32494 wrap.go:42] GET /apis/authorization.k8s.io/v1beta1?timeout=32s: (218.334µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:33.204796 32494 wrap.go:42] GET /apis/autoscaling/v1?timeout=32s: (249.809µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:33.205398 32494 wrap.go:42] GET /apis/autoscaling/v2beta1?timeout=32s: (257.005µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:33.206015 32494 wrap.go:42] GET /apis/batch/v1?timeout=32s: (249.884µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:33.206581 32494 wrap.go:42] GET /apis/batch/v1beta1?timeout=32s: (225.062µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:33.207202 32494 wrap.go:42] GET /apis/certificates.k8s.io/v1beta1?timeout=32s: (267.416µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:33.207778 32494 wrap.go:42] GET /apis/networking.k8s.io/v1?timeout=32s: (229.476µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:33.208370 32494 wrap.go:42] GET /apis/policy/v1beta1?timeout=32s: (255.483µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:33.209078 32494 wrap.go:42] GET /apis/authorization.openshift.io/v1?timeout=32s: (325.649µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:33.209740 32494 wrap.go:42] GET /apis/rbac.authorization.k8s.io/v1?timeout=32s: (287.248µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:33.210362 32494 wrap.go:42] GET /apis/rbac.authorization.k8s.io/v1beta1?timeout=32s: (251.341µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:33.210972 32494 wrap.go:42] GET /apis/storage.k8s.io/v1?timeout=32s: (260.466µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:33.211504 32494 wrap.go:42] GET /apis/storage.k8s.io/v1beta1?timeout=32s: (222.635µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:33.212080 32494 wrap.go:42] GET /apis/admissionregistration.k8s.io/v1beta1?timeout=32s: (272.895µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:33.212732 32494 wrap.go:42] GET /apis/apiextensions.k8s.io/v1beta1?timeout=32s: (289.604µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:33.213373 32494 wrap.go:42] GET /apis/apps.openshift.io/v1?timeout=32s: (265.119µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:33.214024 32494 wrap.go:42] GET /apis/build.openshift.io/v1?timeout=32s: (284.418µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:33.214692 32494 wrap.go:42] GET /apis/image.openshift.io/v1?timeout=32s: (289.384µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:33.215348 32494 wrap.go:42] GET /apis/network.openshift.io/v1?timeout=32s: (320.801µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:33.215962 32494 wrap.go:42] GET /apis/oauth.openshift.io/v1?timeout=32s: (261.876µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:33.216557 32494 wrap.go:42] GET /apis/project.openshift.io/v1?timeout=32s: (226.199µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:33.217119 32494 wrap.go:42] GET /apis/quota.openshift.io/v1?timeout=32s: (224.567µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:33.217672 32494 wrap.go:42] GET /apis/route.openshift.io/v1?timeout=32s: (240.742µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:33.218254 32494 wrap.go:42] GET /apis/security.openshift.io/v1?timeout=32s: (255.744µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:33.218868 32494 wrap.go:42] GET /apis/template.openshift.io/v1?timeout=32s: (248.906µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:33.219525 32494 wrap.go:42] GET /apis/user.openshift.io/v1?timeout=32s: (248.834µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:33.431641 32494 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (1.091198ms) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:33.634769 32494 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (927.894µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:33.658199 32494 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (969.166µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:33.658518 32494 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (1.002859ms) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:33.659631 32494 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (765.304µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:33.660639 32494 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (1.054992ms) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:33.660742 32494 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (1.066857ms) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:33.714434 32494 wrap.go:42] GET /apis/oauth.openshift.io/v1/oauthaccesstokens/p7JwQOFRisIIlqvyaKtTWgCLcoGq-tBm-8o9Xax2vrk: (1.02891ms) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:33.715790 32494 wrap.go:42] GET /apis/user.openshift.io/v1/users/username: (970.565µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:33.716682 32494 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (814.423µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
I0622 21:29:33.716768 32494 wrap.go:42] GET /apis/user.openshift.io/v1/users/~: (3.638508ms) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:50402]
INFO: 2018/06/22 21:29:33 ccBalancerWrapper: updating state and picker called by balancer: IDLE, 0xc4344bcba0
INFO: 2018/06/22 21:29:33 dialing to target with scheme: ""
INFO: 2018/06/22 21:29:33 could not get resolver for scheme: ""
INFO: 2018/06/22 21:29:33 balancerWrapper: is pickfirst: false
INFO: 2018/06/22 21:29:33 balancerWrapper: got update addr from Notify: [{127.0.0.1:16568 <nil>}]
INFO: 2018/06/22 21:29:33 ccBalancerWrapper: new subconn: [{127.0.0.1:16568 0 <nil>}]
INFO: 2018/06/22 21:29:33 balancerWrapper: handle subconn state change: 0xc42fe4cc90, CONNECTING
INFO: 2018/06/22 21:29:33 ccBalancerWrapper: updating state and picker called by balancer: CONNECTING, 0xc4344bcba0
I0622 21:29:33.719914 32494 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (884.75µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:49948]
INFO: 2018/06/22 21:29:33 balancerWrapper: handle subconn state change: 0xc42fe4cc90, READY
INFO: 2018/06/22 21:29:33 ccBalancerWrapper: updating state and picker called by balancer: READY, 0xc4344bcba0
INFO: 2018/06/22 21:29:33 balancerWrapper: got update addr from Notify: [{127.0.0.1:16568 <nil>}]
--- PASS: TestIntegration/TestOAuthTimeout (487.19s)
runner_test.go:187:
=== OUTPUT
api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (872.552µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:50732]
I0622 21:31:37.972127 32503 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (773.814µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:50732]
I0622 21:31:37.973589 32503 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (860.135µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:50732]
I0622 21:31:37.974711 32503 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (1.010359ms) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:50732]
I0622 21:31:37.974787 32503 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (908.066µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:50732]
I0622 21:31:37.976244 32503 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (866.037µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:50732]
I0622 21:31:37.976543 32503 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (789.468µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:50732]
I0622 21:31:37.978264 32503 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (829.86µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:50732]
I0622 21:31:37.978940 32503 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (891.87µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:50732]
I0622 21:31:37.984111 32503 wrap.go:42] GET /api/v1/namespaces/kube-system/secrets/resourcequota-controller-token-vgqxl: (1.011947ms) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:50732]
I0622 21:31:37.985324 32503 wrap.go:42] GET /api/v1/namespaces/kube-system/serviceaccounts/resourcequota-controller: (842.807µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:50732]
I0622 21:31:37.985627 32503 wrap.go:42] GET /api?timeout=32s: (3.013246ms) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8/system:serviceaccount:kube-system:resourcequota-controller] 127.0.0.1:51070]
I0622 21:31:37.986149 32503 wrap.go:42] GET /apis?timeout=32s: (189.521µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8/system:serviceaccount:kube-system:resourcequota-controller] 127.0.0.1:51070]
I0622 21:31:37.986797 32503 wrap.go:42] GET /api/v1?timeout=32s: (247.505µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8/system:serviceaccount:kube-system:resourcequota-controller] 127.0.0.1:51070]
I0622 21:31:37.987383 32503 wrap.go:42] GET /apis/apiregistration.k8s.io/v1?timeout=32s: (152.768µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8/system:serviceaccount:kube-system:resourcequota-controller] 127.0.0.1:51070]
I0622 21:31:37.987911 32503 wrap.go:42] GET /apis/apiregistration.k8s.io/v1beta1?timeout=32s: (168.042µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8/system:serviceaccount:kube-system:resourcequota-controller] 127.0.0.1:51070]
I0622 21:31:37.988471 32503 wrap.go:42] GET /apis/extensions/v1beta1?timeout=32s: (233.078µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8/system:serviceaccount:kube-system:resourcequota-controller] 127.0.0.1:51070]
I0622 21:31:37.989014 32503 wrap.go:42] GET /apis/apps/v1?timeout=32s: (181.59µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8/system:serviceaccount:kube-system:resourcequota-controller] 127.0.0.1:51070]
I0622 21:31:37.989569 32503 wrap.go:42] GET /apis/apps/v1beta2?timeout=32s: (193.5µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8/system:serviceaccount:kube-system:resourcequota-controller] 127.0.0.1:51070]
I0622 21:31:37.990137 32503 wrap.go:42] GET /apis/apps/v1beta1?timeout=32s: (187.086µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8/system:serviceaccount:kube-system:resourcequota-controller] 127.0.0.1:51070]
I0622 21:31:37.990694 32503 wrap.go:42] GET /apis/events.k8s.io/v1beta1?timeout=32s: (191.642µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8/system:serviceaccount:kube-system:resourcequota-controller] 127.0.0.1:51070]
I0622 21:31:37.991183 32503 wrap.go:42] GET /apis/authentication.k8s.io/v1?timeout=32s: (153.618µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8/system:serviceaccount:kube-system:resourcequota-controller] 127.0.0.1:51070]
I0622 21:31:37.991678 32503 wrap.go:42] GET /apis/authentication.k8s.io/v1beta1?timeout=32s: (139.46µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8/system:serviceaccount:kube-system:resourcequota-controller] 127.0.0.1:51070]
I0622 21:31:37.992195 32503 wrap.go:42] GET /apis/authorization.k8s.io/v1?timeout=32s: (169.087µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8/system:serviceaccount:kube-system:resourcequota-controller] 127.0.0.1:51070]
I0622 21:31:37.992689 32503 wrap.go:42] GET /apis/authorization.k8s.io/v1beta1?timeout=32s: (139.786µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8/system:serviceaccount:kube-system:resourcequota-controller] 127.0.0.1:51070]
I0622 21:31:37.993202 32503 wrap.go:42] GET /apis/autoscaling/v1?timeout=32s: (156.985µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8/system:serviceaccount:kube-system:resourcequota-controller] 127.0.0.1:51070]
I0622 21:31:37.993659 32503 wrap.go:42] GET /apis/autoscaling/v2beta1?timeout=32s: (160.459µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8/system:serviceaccount:kube-system:resourcequota-controller] 127.0.0.1:51070]
I0622 21:31:37.994224 32503 wrap.go:42] GET /apis/batch/v1?timeout=32s: (155.664µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8/system:serviceaccount:kube-system:resourcequota-controller] 127.0.0.1:51070]
I0622 21:31:37.994726 32503 wrap.go:42] GET /apis/batch/v1beta1?timeout=32s: (155.792µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8/system:serviceaccount:kube-system:resourcequota-controller] 127.0.0.1:51070]
I0622 21:31:37.995283 32503 wrap.go:42] GET /apis/certificates.k8s.io/v1beta1?timeout=32s: (173.144µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8/system:serviceaccount:kube-system:resourcequota-controller] 127.0.0.1:51070]
I0622 21:31:37.995867 32503 wrap.go:42] GET /apis/networking.k8s.io/v1?timeout=32s: (189.004µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8/system:serviceaccount:kube-system:resourcequota-controller] 127.0.0.1:51070]
I0622 21:31:37.996361 32503 wrap.go:42] GET /apis/policy/v1beta1?timeout=32s: (154.376µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8/system:serviceaccount:kube-system:resourcequota-controller] 127.0.0.1:51070]
I0622 21:31:37.996988 32503 wrap.go:42] GET /apis/authorization.openshift.io/v1?timeout=32s: (225.796µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8/system:serviceaccount:kube-system:resourcequota-controller] 127.0.0.1:51070]
I0622 21:31:37.997512 32503 wrap.go:42] GET /apis/rbac.authorization.k8s.io/v1?timeout=32s: (166.047µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8/system:serviceaccount:kube-system:resourcequota-controller] 127.0.0.1:51070]
I0622 21:31:37.998029 32503 wrap.go:42] GET /apis/rbac.authorization.k8s.io/v1beta1?timeout=32s: (144.084µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8/system:serviceaccount:kube-system:resourcequota-controller] 127.0.0.1:51070]
I0622 21:31:37.998499 32503 wrap.go:42] GET /apis/storage.k8s.io/v1?timeout=32s: (158.476µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8/system:serviceaccount:kube-system:resourcequota-controller] 127.0.0.1:51070]
I0622 21:31:37.999049 32503 wrap.go:42] GET /apis/storage.k8s.io/v1beta1?timeout=32s: (181.849µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8/system:serviceaccount:kube-system:resourcequota-controller] 127.0.0.1:51070]
I0622 21:31:37.999560 32503 wrap.go:42] GET /apis/admissionregistration.k8s.io/v1beta1?timeout=32s: (166.335µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8/system:serviceaccount:kube-system:resourcequota-controller] 127.0.0.1:51070]
I0622 21:31:38.000085 32503 wrap.go:42] GET /apis/apiextensions.k8s.io/v1beta1?timeout=32s: (188.126µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8/system:serviceaccount:kube-system:resourcequota-controller] 127.0.0.1:51070]
I0622 21:31:38.000598 32503 wrap.go:42] GET /apis/apps.openshift.io/v1?timeout=32s: (179.724µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8/system:serviceaccount:kube-system:resourcequota-controller] 127.0.0.1:51070]
I0622 21:31:38.001162 32503 wrap.go:42] GET /apis/build.openshift.io/v1?timeout=32s: (197.946µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8/system:serviceaccount:kube-system:resourcequota-controller] 127.0.0.1:51070]
I0622 21:31:38.001709 32503 wrap.go:42] GET /apis/image.openshift.io/v1?timeout=32s: (173.88µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8/system:serviceaccount:kube-system:resourcequota-controller] 127.0.0.1:51070]
I0622 21:31:38.002315 32503 wrap.go:42] GET /apis/network.openshift.io/v1?timeout=32s: (204.119µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8/system:serviceaccount:kube-system:resourcequota-controller] 127.0.0.1:51070]
I0622 21:31:38.002881 32503 wrap.go:42] GET /apis/oauth.openshift.io/v1?timeout=32s: (179.316µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8/system:serviceaccount:kube-system:resourcequota-controller] 127.0.0.1:51070]
I0622 21:31:38.003351 32503 wrap.go:42] GET /apis/project.openshift.io/v1?timeout=32s: (148.012µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8/system:serviceaccount:kube-system:resourcequota-controller] 127.0.0.1:51070]
I0622 21:31:38.003840 32503 wrap.go:42] GET /apis/quota.openshift.io/v1?timeout=32s: (184.708µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8/system:serviceaccount:kube-system:resourcequota-controller] 127.0.0.1:51070]
I0622 21:31:38.004428 32503 wrap.go:42] GET /apis/route.openshift.io/v1?timeout=32s: (157.585µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8/system:serviceaccount:kube-system:resourcequota-controller] 127.0.0.1:51070]
I0622 21:31:38.005010 32503 wrap.go:42] GET /apis/security.openshift.io/v1?timeout=32s: (164.469µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8/system:serviceaccount:kube-system:resourcequota-controller] 127.0.0.1:51070]
I0622 21:31:38.005607 32503 wrap.go:42] GET /apis/template.openshift.io/v1?timeout=32s: (159.263µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8/system:serviceaccount:kube-system:resourcequota-controller] 127.0.0.1:51070]
I0622 21:31:38.006165 32503 wrap.go:42] GET /apis/user.openshift.io/v1?timeout=32s: (180.69µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8/system:serviceaccount:kube-system:resourcequota-controller] 127.0.0.1:51070]
I0622 21:31:38.006577 32503 resource_quota_controller.go:433] no resource updates from discovery, skipping resource quota sync
I0622 21:31:38.396995 32503 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (1.179422ms) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:50732]
I0622 21:31:38.419695 32503 wrap.go:42] GET /api/v1/namespaces?resourceVersion=417&timeoutSeconds=451&watch=true: (7m31.001205571s) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8/leader-election] 127.0.0.1:50732]
I0622 21:31:38.419831 32503 reflector.go:428] github.com/openshift/origin/vendor/k8s.io/client-go/informers/factory.go:87: Watch close - *v1.Namespace total 6 items received
I0622 21:31:38.420310 32503 get.go:238] Starting watch for /api/v1/namespaces, rv=465 labels= fields= timeout=5m41s
I0622 21:31:38.504736 32503 wrap.go:42] GET /api/v1/namespaces/kube-system/configmaps/openshift-master-controllers: (876.574µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:50732]
I0622 21:31:38.507332 32503 wrap.go:42] PUT /api/v1/namespaces/kube-system/configmaps/openshift-master-controllers: (2.174584ms) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:50732]
I0622 21:31:38.507540 32503 leaderelection.go:199] successfully renewed lease kube-system/openshift-master-controllers
I0622 21:31:38.608571 32503 wrap.go:42] GET /api/v1/namespaces/kube-system/configmaps/kube-controller-manager: (935.556µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8/leader-election] 127.0.0.1:50732]
I0622 21:31:38.611048 32503 wrap.go:42] PUT /api/v1/namespaces/kube-system/configmaps/kube-controller-manager: (1.992091ms) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8/leader-election] 127.0.0.1:50732]
I0622 21:31:38.611229 32503 leaderelection.go:199] successfully renewed lease kube-system/kube-controller-manager
I0622 21:31:38.701687 32503 wrap.go:42] GET /api/v1/namespaces/default: (1.036123ms) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:50720]
I0622 21:31:38.702920 32503 wrap.go:42] GET /api/v1/namespaces/default/services/kubernetes: (816.233µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:50720]
I0622 21:31:38.715189 32503 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (851.871µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:50732]
I0622 21:31:38.715691 32503 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (1.324536ms) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:50732]
I0622 21:31:38.715807 32503 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (1.358079ms) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:50732]
I0622 21:31:38.715925 32503 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (1.108333ms) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:50732]
I0622 21:31:38.780680 32503 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (815.088µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:50732]
I0622 21:31:38.966057 32503 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (815.978µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:50732]
I0622 21:31:38.967577 32503 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (865.53µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:50732]
I0622 21:31:38.968643 32503 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (854.693µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:50732]
I0622 21:31:38.973557 32503 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (775.829µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:50732]
I0622 21:31:38.975217 32503 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (933.598µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:50732]
I0622 21:31:38.976273 32503 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (838.345µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:50732]
I0622 21:31:38.976313 32503 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (911.396µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:50732]
I0622 21:31:38.977619 32503 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (867.467µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:50732]
I0622 21:31:38.977965 32503 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (937.34µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:50732]
I0622 21:31:38.979565 32503 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (797.447µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:50732]
I0622 21:31:38.980321 32503 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (895.416µs) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:50732]
I0622 21:31:39.399074 32503 wrap.go:42] GET /api/v1/namespaces/openshift-web-console/configmaps/webconsole-config: (1.135226ms) 404 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:50732]
I0622 21:31:39.684070 32503 wrap.go:42] GET /apis/oauth.openshift.io/v1/oauthaccesstokens/yGl-uVd9ksOWm2FBNtzqRfkd57WZvrgOmSZaPmPdIyU: (1.138784ms) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:50732]
I0622 21:31:39.685400 32503 wrap.go:42] GET /apis/user.openshift.io/v1/users/username: (914.284µs) 200 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:50732]
E0622 21:31:39.685620 32503 authentication.go:63] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, token timed out]]
I0622 21:31:39.685695 32503 wrap.go:42] GET /apis/user.openshift.io/v1/users/~: (3.122267ms) 401 [[integration.test/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8] 127.0.0.1:51070]
INFO: 2018/06/22 21:31:39 ccBalancerWrapper: updating state and picker called by balancer: IDLE, 0xc44294ca20
INFO: 2018/06/22 21:31:39 dialing to target with scheme: ""
INFO: 2018/06/22 21:31:39 could not get resolver for scheme: ""
INFO: 2018/06/22 21:31:39 balancerWrapper: is pickfirst: false
INFO: 2018/06/22 21:31:39 balancerWrapper: got update addr from Notify: [{127.0.0.1:16126 <nil>}]
INFO: 2018/06/22 21:31:39 ccBalancerWrapper: new subconn: [{127.0.0.1:16126 0 <nil>}]
INFO: 2018/06/22 21:31:39 balancerWrapper: handle subconn state change: 0xc42334c600, CONNECTING
INFO: 2018/06/22 21:31:39 ccBalancerWrapper: updating state and picker called by balancer: CONNECTING, 0xc44294ca20
INFO: 2018/06/22 21:31:39 balancerWrapper: handle subconn state change: 0xc42334c600, READY
INFO: 2018/06/22 21:31:39 ccBalancerWrapper: updating state and picker called by balancer: READY, 0xc44294ca20
INFO: 2018/06/22 21:31:39 balancerWrapper: got update addr from Notify: [{127.0.0.1:16126 <nil>}]
PASS
ok github.com/openshift/origin/test/integration/runner 2072.303s
[INFO] [21:31:39+0000] jUnit XML report placed at _output/scripts/test-integration/artifacts/gotest_report_78a5V.xml
Of 203 tests executed in 2072.303s, 203 succeeded, 0 failed, and 0 were skipped.
[INFO] [21:31:40+0000] hack/test-go.sh exited with code 0 after 00h 34m 34s
[WARNING] [21:31:41+0000] Copying _output/local/releases from the container failed!
[WARNING] [21:31:41+0000] Error response from daemon: lstat /var/lib/docker/overlay2/16e35ec5a608a6088c514f06da1d6d1bf8da043d24de6dc71da88683bf3eeeb7/merged/go/src/github.com/openshift/origin/_output/local/releases: no such file or directory
+ set +o xtrace
########## FINISHED STAGE: SUCCESS: RUN INTEGRATION TESTS [00h 46m 11s] ##########
[PostBuildScript] - Executing post build scripts.
[workspace@2] $ /bin/bash /tmp/jenkins4338800108870615044.sh
########## STARTING STAGE: DOWNLOAD ARTIFACTS FROM THE REMOTE HOST ##########
+ [[ -s /var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/activate ]]
+ source /var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e
++ export PATH=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/.config
+ trap 'exit 0' EXIT
++ pwd
+ ARTIFACT_DIR=/var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/artifacts/gathered
+ rm -rf /var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/artifacts/gathered
+ mkdir -p /var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/artifacts/gathered
+ ssh -F /var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo stat /data/src/github.com/openshift/origin/_output/scripts
File: ‘/data/src/github.com/openshift/origin/_output/scripts’
Size: 61 Blocks: 0 IO Block: 4096 directory
Device: ca02h/51714d Inode: 151223198 Links: 5
Access: (2755/drwxr-sr-x) Uid: ( 1001/ origin) Gid: ( 1003/origin-git)
Context: unconfined_u:object_r:container_file_t:s0
Access: 1970-01-01 00:00:00.000000000 +0000
Modify: 2018-06-22 20:46:49.000000000 +0000
Change: 2018-06-22 21:31:41.095377693 +0000
Birth: -
+ ssh -F /var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo chmod -R o+rX /data/src/github.com/openshift/origin/_output/scripts
+ scp -r -F /var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel:/data/src/github.com/openshift/origin/_output/scripts /var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/artifacts/gathered
+ tree /var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/artifacts/gathered
/var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/artifacts/gathered
└── scripts
├── shell
│ ├── artifacts
│ ├── logs
│ │ ├── 711305dea6416815d3b3af5b81711c65b8605da3b7c8db7c31ab6b24b8405473.json
│ │ └── scripts.log
│ └── openshift.local.home
├── test-integration
│ ├── artifacts
│ │ ├── gotest_report_78a5V
│ │ └── gotest_report_78a5V.xml
│ ├── logs
│ │ ├── raw_test_output.log
│ │ ├── scripts.log
│ │ └── test-go-err.log
│ └── openshift.local.home
└── test-tools
├── artifacts
├── logs
│ ├── raw_test_output.log
│ └── scripts.log
└── openshift.local.home
13 directories, 9 files
+ exit 0
[workspace@2] $ /bin/bash /tmp/jenkins3449497761871084445.sh
########## STARTING STAGE: GENERATE ARTIFACTS FROM THE REMOTE HOST ##########
+ [[ -s /var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/activate ]]
+ source /var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e
++ export PATH=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/.config
+ trap 'exit 0' EXIT
++ pwd
+ ARTIFACT_DIR=/var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/artifacts/generated
+ rm -rf /var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/artifacts/generated
+ mkdir /var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/artifacts/generated
+ ssh -F /var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo docker version && sudo docker info && sudo docker images && sudo docker ps -a 2>&1'
WARNING: You're not using the default seccomp profile
+ ssh -F /var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo cat /etc/sysconfig/docker /etc/sysconfig/docker-network /etc/sysconfig/docker-storage /etc/sysconfig/docker-storage-setup /etc/systemd/system/docker.service 2>&1'
+ true
+ ssh -F /var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo find /var/lib/docker/containers -name *.log | sudo xargs tail -vn +1 2>&1'
+ ssh -F /var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'oc get --raw /metrics --server=https://$( uname --nodename ):10250 --config=/etc/origin/master/admin.kubeconfig 2>&1'
+ true
+ ssh -F /var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo ausearch -m AVC -m SELINUX_ERR -m USER_AVC 2>&1'
+ true
+ ssh -F /var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'oc get --raw /metrics --config=/etc/origin/master/admin.kubeconfig 2>&1'
+ true
+ ssh -F /var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo df -T -h && sudo pvs && sudo vgs && sudo lvs && sudo findmnt --all 2>&1'
+ ssh -F /var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo yum list installed 2>&1'
+ ssh -F /var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo journalctl --dmesg --no-pager --all --lines=all 2>&1'
+ ssh -F /var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo journalctl _PID=1 --no-pager --all --lines=all 2>&1'
+ tree /var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/artifacts/generated
/var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/artifacts/generated
├── avc_denials.log
├── containers.log
├── dmesg.log
├── docker.config
├── docker.info
├── filesystem.info
├── installed_packages.log
├── master-metrics.log
├── node-metrics.log
└── pid1.journal
0 directories, 10 files
+ exit 0
[workspace@2] $ /bin/bash /tmp/jenkins1901113588501937256.sh
########## STARTING STAGE: FETCH SYSTEMD JOURNALS FROM THE REMOTE HOST ##########
+ [[ -s /var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/activate ]]
+ source /var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e
++ export PATH=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/.config
+ trap 'exit 0' EXIT
++ pwd
+ ARTIFACT_DIR=/var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/artifacts/journals
+ rm -rf /var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/artifacts/journals
+ mkdir /var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/artifacts/journals
+ ssh -F /var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit docker.service --no-pager --all --lines=all
+ ssh -F /var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit dnsmasq.service --no-pager --all --lines=all
+ ssh -F /var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit systemd-journald.service --no-pager --all --lines=all
+ tree /var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/artifacts/journals
/var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/artifacts/journals
├── dnsmasq.service
├── docker.service
└── systemd-journald.service
0 directories, 3 files
+ exit 0
[workspace@2] $ /bin/bash /tmp/jenkins2659698119670889156.sh
########## STARTING STAGE: ASSEMBLE GCS OUTPUT ##########
+ [[ -s /var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/activate ]]
+ source /var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e
++ export PATH=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/.config
+ trap 'exit 0' EXIT
+ mkdir -p gcs/artifacts gcs/artifacts/generated gcs/artifacts/journals gcs/artifacts/gathered
++ python -c 'import json; import urllib; print json.load(urllib.urlopen('\''https://ci.openshift.redhat.com/jenkins/job/test_pull_request_origin_integration/18782/api/json'\''))['\''result'\'']'
+ result=SUCCESS
+ cat
++ date +%s
+ cat /var/lib/jenkins/jobs/test_pull_request_origin_integration/builds/18782/log
+ cp artifacts/generated/avc_denials.log artifacts/generated/containers.log artifacts/generated/dmesg.log artifacts/generated/docker.config artifacts/generated/docker.info artifacts/generated/filesystem.info artifacts/generated/installed_packages.log artifacts/generated/master-metrics.log artifacts/generated/node-metrics.log artifacts/generated/pid1.journal gcs/artifacts/generated/
+ cp artifacts/journals/dnsmasq.service artifacts/journals/docker.service artifacts/journals/systemd-journald.service gcs/artifacts/journals/
+ cp -r artifacts/gathered/scripts gcs/artifacts/
++ pwd
+ scp -F /var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/.config/origin-ci-tool/inventory/.ssh_config -r /var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/gcs openshiftdevel:/data
+ scp -F /var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/.config/origin-ci-tool/inventory/.ssh_config /var/lib/jenkins/.config/gcloud/gcs-publisher-credentials.json openshiftdevel:/data/credentials.json
+ exit 0
[workspace@2] $ /bin/bash /tmp/jenkins8017777918180899574.sh
########## STARTING STAGE: PUSH THE ARTIFACTS AND METADATA ##########
+ [[ -s /var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/activate ]]
+ source /var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e
++ export PATH=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/.config
++ mktemp
+ script=/tmp/tmp.hVgYEW8Mij
+ cat
+ chmod +x /tmp/tmp.hVgYEW8Mij
+ scp -F /var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/.config/origin-ci-tool/inventory/.ssh_config /tmp/tmp.hVgYEW8Mij openshiftdevel:/tmp/tmp.hVgYEW8Mij
+ ssh -F /var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/.config/origin-ci-tool/inventory/.ssh_config -t openshiftdevel 'bash -l -c "timeout 300 /tmp/tmp.hVgYEW8Mij"'
+ cd /home/origin
+ trap 'exit 0' EXIT
+ [[ -n {"type":"presubmit","job":"test_pull_request_origin_integration","buildid":"a32d9e03-765c-11e8-a857-0a58ac10f8e5","refs":{"org":"openshift","repo":"origin","base_ref":"master","base_sha":"84c76c10047095d1a18731c6956439f66b69f1ff","pulls":[{"number":20083,"author":"juanvallejo","sha":"2b5020144282aa69dcfe9dbe260dc2e194b7f525"}]}} ]]
++ jq --compact-output .buildid
+ [[ "a32d9e03-765c-11e8-a857-0a58ac10f8e5" =~ ^"[0-9]+"$ ]]
+ echo 'Using BUILD_NUMBER'
Using BUILD_NUMBER
++ jq --compact-output '.buildid |= "18782"'
+ JOB_SPEC='{"type":"presubmit","job":"test_pull_request_origin_integration","buildid":"18782","refs":{"org":"openshift","repo":"origin","base_ref":"master","base_sha":"84c76c10047095d1a18731c6956439f66b69f1ff","pulls":[{"number":20083,"author":"juanvallejo","sha":"2b5020144282aa69dcfe9dbe260dc2e194b7f525"}]}}'
+ docker run -e 'JOB_SPEC={"type":"presubmit","job":"test_pull_request_origin_integration","buildid":"18782","refs":{"org":"openshift","repo":"origin","base_ref":"master","base_sha":"84c76c10047095d1a18731c6956439f66b69f1ff","pulls":[{"number":20083,"author":"juanvallejo","sha":"2b5020144282aa69dcfe9dbe260dc2e194b7f525"}]}}' -v /data:/data:z registry.svc.ci.openshift.org/ci/gcsupload:latest --dry-run=false --gcs-path=gs://origin-ci-test --gcs-credentials-file=/data/credentials.json --path-strategy=single --default-org=openshift --default-repo=origin /data/gcs/artifacts /data/gcs/build-log.txt /data/gcs/finished.json
Unable to find image 'registry.svc.ci.openshift.org/ci/gcsupload:latest' locally
Trying to pull repository registry.svc.ci.openshift.org/ci/gcsupload ...
latest: Pulling from registry.svc.ci.openshift.org/ci/gcsupload
605ce1bd3f31: Already exists
dc6346da9948: Already exists
7377da2e59db: Pulling fs layer
7377da2e59db: Verifying Checksum
7377da2e59db: Download complete
7377da2e59db: Pull complete
Digest: sha256:cce318f50d4a815a3dc926962b02d39fd6cd33303c3a59035b743f19dd2bf6a7
Status: Downloaded newer image for registry.svc.ci.openshift.org/ci/gcsupload:latest
{"component":"gcsupload","level":"info","msg":"Gathering artifacts from artifact directory: /data/gcs/artifacts","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/generated/avc_denials.log in artifact directory. Uploading as pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/generated/avc_denials.log\n","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/generated/containers.log in artifact directory. Uploading as pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/generated/containers.log\n","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/generated/dmesg.log in artifact directory. Uploading as pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/generated/dmesg.log\n","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/generated/docker.config in artifact directory. Uploading as pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/generated/docker.config\n","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/generated/docker.info in artifact directory. Uploading as pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/generated/docker.info\n","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/generated/filesystem.info in artifact directory. Uploading as pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/generated/filesystem.info\n","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/generated/installed_packages.log in artifact directory. Uploading as pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/generated/installed_packages.log\n","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/generated/master-metrics.log in artifact directory. Uploading as pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/generated/master-metrics.log\n","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/generated/node-metrics.log in artifact directory. Uploading as pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/generated/node-metrics.log\n","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/generated/pid1.journal in artifact directory. Uploading as pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/generated/pid1.journal\n","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/journals/dnsmasq.service in artifact directory. Uploading as pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/journals/dnsmasq.service\n","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/journals/docker.service in artifact directory. Uploading as pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/journals/docker.service\n","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/journals/systemd-journald.service in artifact directory. Uploading as pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/journals/systemd-journald.service\n","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/shell/logs/711305dea6416815d3b3af5b81711c65b8605da3b7c8db7c31ab6b24b8405473.json in artifact directory. Uploading as pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/scripts/shell/logs/711305dea6416815d3b3af5b81711c65b8605da3b7c8db7c31ab6b24b8405473.json\n","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/shell/logs/scripts.log in artifact directory. Uploading as pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/scripts/shell/logs/scripts.log\n","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/test-integration/artifacts/gotest_report_78a5V in artifact directory. Uploading as pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/scripts/test-integration/artifacts/gotest_report_78a5V\n","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/test-integration/artifacts/gotest_report_78a5V.xml in artifact directory. Uploading as pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/scripts/test-integration/artifacts/gotest_report_78a5V.xml\n","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/test-integration/logs/raw_test_output.log in artifact directory. Uploading as pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/scripts/test-integration/logs/raw_test_output.log\n","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/test-integration/logs/scripts.log in artifact directory. Uploading as pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/scripts/test-integration/logs/scripts.log\n","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/test-integration/logs/test-go-err.log in artifact directory. Uploading as pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/scripts/test-integration/logs/test-go-err.log\n","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/test-tools/logs/raw_test_output.log in artifact directory. Uploading as pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/scripts/test-tools/logs/raw_test_output.log\n","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/test-tools/logs/scripts.log in artifact directory. Uploading as pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/scripts/test-tools/logs/scripts.log\n","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/directory/test_pull_request_origin_integration/latest-build.txt","level":"info","msg":"Queued for upload","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/generated/dmesg.log","level":"info","msg":"Queued for upload","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/directory/test_pull_request_origin_integration/18782.txt","level":"info","msg":"Queued for upload","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/scripts/shell/logs/scripts.log","level":"info","msg":"Queued for upload","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/scripts/test-integration/logs/scripts.log","level":"info","msg":"Queued for upload","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/generated/containers.log","level":"info","msg":"Queued for upload","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/generated/master-metrics.log","level":"info","msg":"Queued for upload","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/generated/pid1.journal","level":"info","msg":"Queued for upload","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/scripts/test-integration/logs/test-go-err.log","level":"info","msg":"Queued for upload","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/pull/20083/test_pull_request_origin_integration/18782/finished.json","level":"info","msg":"Queued for upload","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/pull/20083/test_pull_request_origin_integration/latest-build.txt","level":"info","msg":"Queued for upload","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/generated/docker.config","level":"info","msg":"Queued for upload","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/generated/filesystem.info","level":"info","msg":"Queued for upload","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/journals/docker.service","level":"info","msg":"Queued for upload","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/journals/systemd-journald.service","level":"info","msg":"Queued for upload","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/scripts/test-tools/logs/raw_test_output.log","level":"info","msg":"Queued for upload","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/journals/dnsmasq.service","level":"info","msg":"Queued for upload","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/scripts/test-integration/artifacts/gotest_report_78a5V","level":"info","msg":"Queued for upload","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/scripts/test-integration/logs/raw_test_output.log","level":"info","msg":"Queued for upload","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/generated/installed_packages.log","level":"info","msg":"Queued for upload","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/scripts/test-tools/logs/scripts.log","level":"info","msg":"Queued for upload","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/generated/avc_denials.log","level":"info","msg":"Queued for upload","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/generated/docker.info","level":"info","msg":"Queued for upload","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/scripts/shell/logs/711305dea6416815d3b3af5b81711c65b8605da3b7c8db7c31ab6b24b8405473.json","level":"info","msg":"Queued for upload","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/pull/20083/test_pull_request_origin_integration/18782/build-log.txt","level":"info","msg":"Queued for upload","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/generated/node-metrics.log","level":"info","msg":"Queued for upload","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/scripts/test-integration/artifacts/gotest_report_78a5V.xml","level":"info","msg":"Queued for upload","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/scripts/test-integration/artifacts/gotest_report_78a5V","level":"info","msg":"Finished upload","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/scripts/test-integration/logs/test-go-err.log","level":"info","msg":"Finished upload","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/pull/20083/test_pull_request_origin_integration/latest-build.txt","level":"info","msg":"Finished upload","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/generated/docker.config","level":"info","msg":"Finished upload","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/generated/containers.log","level":"info","msg":"Finished upload","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/directory/test_pull_request_origin_integration/18782.txt","level":"info","msg":"Finished upload","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/scripts/test-tools/logs/raw_test_output.log","level":"info","msg":"Finished upload","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/generated/master-metrics.log","level":"info","msg":"Finished upload","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/scripts/test-integration/logs/scripts.log","level":"info","msg":"Finished upload","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/generated/avc_denials.log","level":"info","msg":"Finished upload","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/scripts/shell/logs/711305dea6416815d3b3af5b81711c65b8605da3b7c8db7c31ab6b24b8405473.json","level":"info","msg":"Finished upload","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/scripts/shell/logs/scripts.log","level":"info","msg":"Finished upload","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/pull/20083/test_pull_request_origin_integration/18782/finished.json","level":"info","msg":"Finished upload","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/generated/dmesg.log","level":"info","msg":"Finished upload","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/journals/dnsmasq.service","level":"info","msg":"Finished upload","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/generated/docker.info","level":"info","msg":"Finished upload","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/scripts/test-tools/logs/scripts.log","level":"info","msg":"Finished upload","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/scripts/test-integration/artifacts/gotest_report_78a5V.xml","level":"info","msg":"Finished upload","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/generated/installed_packages.log","level":"info","msg":"Finished upload","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/directory/test_pull_request_origin_integration/latest-build.txt","level":"info","msg":"Finished upload","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/generated/pid1.journal","level":"info","msg":"Finished upload","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/generated/filesystem.info","level":"info","msg":"Finished upload","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/generated/node-metrics.log","level":"info","msg":"Finished upload","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/journals/systemd-journald.service","level":"info","msg":"Finished upload","time":"2018-06-22T21:31:55Z"}
{"component":"gcsupload","dest":"pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/journals/docker.service","level":"info","msg":"Finished upload","time":"2018-06-22T21:31:56Z"}
{"component":"gcsupload","dest":"pr-logs/pull/20083/test_pull_request_origin_integration/18782/build-log.txt","level":"info","msg":"Finished upload","time":"2018-06-22T21:31:56Z"}
{"component":"gcsupload","dest":"pr-logs/pull/20083/test_pull_request_origin_integration/18782/artifacts/scripts/test-integration/logs/raw_test_output.log","level":"info","msg":"Finished upload","time":"2018-06-22T21:31:56Z"}
{"component":"gcsupload","level":"info","msg":"Finished upload to GCS","time":"2018-06-22T21:31:56Z"}
+ exit 0
+ set +o xtrace
########## FINISHED STAGE: SUCCESS: PUSH THE ARTIFACTS AND METADATA [00h 00m 05s] ##########
[workspace@2] $ /bin/bash /tmp/jenkins3468045264255764385.sh
########## STARTING STAGE: DEPROVISION CLOUD RESOURCES ##########
+ [[ -s /var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/activate ]]
+ source /var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e
++ export PATH=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/.config
+ oct deprovision
PLAYBOOK: main.yml *************************************************************
4 plays in /var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml
PLAY [ensure we have the parameters necessary to deprovision virtual hosts] ****
TASK [ensure all required variables are set] ***********************************
task path: /var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml:9
skipping: [localhost] => (item=origin_ci_inventory_dir) => {
"changed": false,
"generated_timestamp": "2018-06-22 17:31:57.373744",
"item": "origin_ci_inventory_dir",
"skip_reason": "Conditional check failed",
"skipped": true
}
skipping: [localhost] => (item=origin_ci_aws_region) => {
"changed": false,
"generated_timestamp": "2018-06-22 17:31:57.378341",
"item": "origin_ci_aws_region",
"skip_reason": "Conditional check failed",
"skipped": true
}
PLAY [deprovision virtual hosts in EC2] ****************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [deprovision a virtual EC2 host] ******************************************
task path: /var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml:28
included: /var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml for localhost
TASK [update the SSH configuration to remove AWS EC2 specifics] ****************
task path: /var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:2
ok: [localhost] => {
"changed": false,
"generated_timestamp": "2018-06-22 17:31:58.237323",
"msg": ""
}
TASK [rename EC2 instance for termination reaper] ******************************
task path: /var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:8
changed: [localhost] => {
"changed": true,
"generated_timestamp": "2018-06-22 17:31:58.838692",
"msg": "Tags {'Name': 'oct-terminate'} created for resource i-020717aab675451cf."
}
TASK [tear down the EC2 instance] **********************************************
task path: /var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:15
changed: [localhost] => {
"changed": true,
"generated_timestamp": "2018-06-22 17:31:59.652188",
"instance_ids": [
"i-020717aab675451cf"
],
"instances": [
{
"ami_launch_index": "0",
"architecture": "x86_64",
"block_device_mapping": {
"/dev/sda1": {
"delete_on_termination": true,
"status": "attached",
"volume_id": "vol-05d5f1ec75f3ed5b5"
},
"/dev/sdb": {
"delete_on_termination": true,
"status": "attached",
"volume_id": "vol-0ec11482bc7fb8420"
}
},
"dns_name": "ec2-54-147-99-15.compute-1.amazonaws.com",
"ebs_optimized": false,
"groups": {
"sg-7e73221a": "default"
},
"hypervisor": "xen",
"id": "i-020717aab675451cf",
"image_id": "ami-0f2178e5f060dbf2d",
"instance_type": "m4.xlarge",
"kernel": null,
"key_name": "libra",
"launch_time": "2018-06-22T20:42:14.000Z",
"placement": "us-east-1d",
"private_dns_name": "ip-172-18-14-209.ec2.internal",
"private_ip": "172.18.14.209",
"public_dns_name": "ec2-54-147-99-15.compute-1.amazonaws.com",
"public_ip": "54.147.99.15",
"ramdisk": null,
"region": "us-east-1",
"root_device_name": "/dev/sda1",
"root_device_type": "ebs",
"state": "running",
"state_code": 16,
"tags": {
"Name": "oct-terminate",
"openshift_etcd": "",
"openshift_master": "",
"openshift_node": ""
},
"tenancy": "default",
"virtualization_type": "hvm"
}
],
"tagged_instances": []
}
TASK [remove the serialized host variables] ************************************
task path: /var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:22
changed: [localhost] => {
"changed": true,
"generated_timestamp": "2018-06-22 17:31:59.915216",
"path": "/var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/.config/origin-ci-tool/inventory/host_vars/172.18.14.209.yml",
"state": "absent"
}
PLAY [deprovision virtual hosts locally managed by Vagrant] ********************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
PLAY [clean up local configuration for deprovisioned instances] ****************
TASK [remove inventory configuration directory] ********************************
task path: /var/lib/jenkins/origin-ci-tool/4b405957477ba1b70cfacd1cf43c6d41a605fc8e/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml:61
changed: [localhost] => {
"changed": true,
"generated_timestamp": "2018-06-22 17:32:00.441908",
"path": "/var/lib/jenkins/jobs/test_pull_request_origin_integration/workspace@2/.config/origin-ci-tool/inventory",
"state": "absent"
}
PLAY RECAP *********************************************************************
localhost : ok=8 changed=4 unreachable=0 failed=0
+ set +o xtrace
########## FINISHED STAGE: SUCCESS: DEPROVISION CLOUD RESOURCES [00h 00m 04s] ##########
Archiving artifacts
[WS-CLEANUP] Deleting project workspace...[WS-CLEANUP] done
Finished: SUCCESS