Success
Console Output

Skipping 4,570 KB of log output.
[INFO] ------- Test case 1 -------
[INFO] fluentd forwards kibana and system logs with tag project.testproj.external and CONTAINER values.
Running test/mux.sh:52: executing 'oc get pod logging-fluentd-33br8' expecting failure; re-trying every 0.2s until completion or 120.000s...
command terminated with exit code 137
No resources found.
SUCCESS after 8.361s: test/mux.sh:52: executing 'oc get pod logging-fluentd-33br8' expecting failure; re-trying every 0.2s until completion or 120.000s
No resources found.
Running test/mux.sh:190: executing 'flush_fluentd_pos_files' expecting success...
SUCCESS after 0.053s: test/mux.sh:190: executing 'flush_fluentd_pos_files' expecting success
Running test/mux.sh:193: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 3.414s: test/mux.sh:193: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
Running hack/testing/util.sh:281: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.234s: hack/testing/util.sh:281: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s
  WARNING: You're not using the default seccomp profile
Running test/mux.sh:263: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /project.testproj.*/_count -XPOST -d '{"query":{"bool":{"filter":{"match_phrase":{"message":"GET /dab33761fd7843369fa5906b0a563258 404 "}},"must_not":[{"exists":{"field":"docker"}},{"exists":{"field":"kubernetes"}},{"exists":{"field":"CONTAINER_NAME"}},{"exists":{"field":"CONTAINER_ID_FULL"}},{"exists":{"field":"mux_namespace_name"}},{"exists":{"field":"mux_need_k8s_meta"}},{"exists":{"field":"namespace_name"}},{"exists":{"field":"namespace_uuid"}}]}}}' | get_count_from_json' expecting any result and text '^1$'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 9.333s: test/mux.sh:263: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /project.testproj.*/_count -XPOST -d '{"query":{"bool":{"filter":{"match_phrase":{"message":"GET /dab33761fd7843369fa5906b0a563258 404 "}},"must_not":[{"exists":{"field":"docker"}},{"exists":{"field":"kubernetes"}},{"exists":{"field":"CONTAINER_NAME"}},{"exists":{"field":"CONTAINER_ID_FULL"}},{"exists":{"field":"mux_namespace_name"}},{"exists":{"field":"mux_need_k8s_meta"}},{"exists":{"field":"namespace_name"}},{"exists":{"field":"namespace_uuid"}}]}}}' | get_count_from_json' expecting any result and text '^1$'; re-trying every 0.2s until completion or 600.000s
Running test/mux.sh:304: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /project.testproj.*/_count?q=SYSLOG_IDENTIFIER:04036cfabb6c46d9ad10993695098da8 | get_count_from_json' expecting any result and text '^1$'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 0.606s: test/mux.sh:304: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /project.testproj.*/_count?q=SYSLOG_IDENTIFIER:04036cfabb6c46d9ad10993695098da8 | get_count_from_json' expecting any result and text '^1$'; re-trying every 0.2s until completion or 600.000s
Running test/mux.sh:314: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /_cat/indices' expecting success and not text 'project\.default'...
SUCCESS after 0.618s: test/mux.sh:314: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /_cat/indices' expecting success and not text 'project\.default'
Running test/mux.sh:315: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /_cat/indices' expecting success and not text 'project\.default'...
SUCCESS after 0.662s: test/mux.sh:315: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /_cat/indices' expecting success and not text 'project\.default'
[INFO] ------- Test case 2 -------
[INFO] fluentd forwards kibana and system logs with tag project.testproj.external without CONTAINER values.
Running test/mux.sh:52: executing 'oc get pod logging-fluentd-lpp1f' expecting failure; re-trying every 0.2s until completion or 120.000s...
command terminated with exit code 137
SUCCESS after 39.074s: test/mux.sh:52: executing 'oc get pod logging-fluentd-lpp1f' expecting failure; re-trying every 0.2s until completion or 120.000s
No resources found.
No resources found.
Running test/mux.sh:190: executing 'flush_fluentd_pos_files' expecting success...
SUCCESS after 0.067s: test/mux.sh:190: executing 'flush_fluentd_pos_files' expecting success
Running test/mux.sh:193: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 4.593s: test/mux.sh:193: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
Running hack/testing/util.sh:281: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 2.252s: hack/testing/util.sh:281: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s
  WARNING: You're not using the default seccomp profile
Running test/mux.sh:263: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /project.logging.*/_count -XPOST -d '{"query":{"bool":{"filter":{"match_phrase":{"message":"GET /5667da23c54c4f19895dbcfc7c48e6b3 404 "}},"must_not":[{"exists":{"field":"docker"}},{"exists":{"field":"kubernetes"}},{"exists":{"field":"CONTAINER_NAME"}},{"exists":{"field":"CONTAINER_ID_FULL"}},{"exists":{"field":"mux_namespace_name"}},{"exists":{"field":"mux_need_k8s_meta"}},{"exists":{"field":"namespace_name"}},{"exists":{"field":"namespace_uuid"}}]}}}' | get_count_from_json' expecting any result and text '^1$'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 3.901s: test/mux.sh:263: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /project.logging.*/_count -XPOST -d '{"query":{"bool":{"filter":{"match_phrase":{"message":"GET /5667da23c54c4f19895dbcfc7c48e6b3 404 "}},"must_not":[{"exists":{"field":"docker"}},{"exists":{"field":"kubernetes"}},{"exists":{"field":"CONTAINER_NAME"}},{"exists":{"field":"CONTAINER_ID_FULL"}},{"exists":{"field":"mux_namespace_name"}},{"exists":{"field":"mux_need_k8s_meta"}},{"exists":{"field":"namespace_name"}},{"exists":{"field":"namespace_uuid"}}]}}}' | get_count_from_json' expecting any result and text '^1$'; re-trying every 0.2s until completion or 600.000s
Running test/mux.sh:304: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /project.testproj.*/_count?q=SYSLOG_IDENTIFIER:6d1ccf93a5d94f2caa6926833d75feea | get_count_from_json' expecting any result and text '^1$'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 2.038s: test/mux.sh:304: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /project.testproj.*/_count?q=SYSLOG_IDENTIFIER:6d1ccf93a5d94f2caa6926833d75feea | get_count_from_json' expecting any result and text '^1$'; re-trying every 0.2s until completion or 600.000s
Running test/mux.sh:314: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /_cat/indices' expecting success and not text 'project\.default'...
SUCCESS after 0.491s: test/mux.sh:314: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /_cat/indices' expecting success and not text 'project\.default'
Running test/mux.sh:315: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /_cat/indices' expecting success and not text 'project\.default'...
SUCCESS after 0.723s: test/mux.sh:315: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /_cat/indices' expecting success and not text 'project\.default'
[INFO] ------- Test case 3 -------
[INFO] fluentd forwards kibana and system logs with tag project.testproj.external and CONTAINER values, which namespace names do not match.
Running test/mux.sh:52: executing 'oc get pod logging-fluentd-48fqb' expecting failure; re-trying every 0.2s until completion or 120.000s...
command terminated with exit code 137
SUCCESS after 9.708s: test/mux.sh:52: executing 'oc get pod logging-fluentd-48fqb' expecting failure; re-trying every 0.2s until completion or 120.000s
No resources found.
Running test/mux.sh:190: executing 'flush_fluentd_pos_files' expecting success...
SUCCESS after 0.108s: test/mux.sh:190: executing 'flush_fluentd_pos_files' expecting success
No resources found.
Running test/mux.sh:193: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 4.487s: test/mux.sh:193: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
Running hack/testing/util.sh:281: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 1.346s: hack/testing/util.sh:281: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s
  WARNING: You're not using the default seccomp profile
Running test/mux.sh:263: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /project.testproj.*/_count -XPOST -d '{"query":{"bool":{"filter":{"match_phrase":{"message":"GET /9c422f22fcd5444cbd2b6cc345dd59cd 404 "}},"must_not":[{"exists":{"field":"docker"}},{"exists":{"field":"kubernetes"}},{"exists":{"field":"CONTAINER_NAME"}},{"exists":{"field":"CONTAINER_ID_FULL"}},{"exists":{"field":"mux_namespace_name"}},{"exists":{"field":"mux_need_k8s_meta"}},{"exists":{"field":"namespace_name"}},{"exists":{"field":"namespace_uuid"}}]}}}' | get_count_from_json' expecting any result and text '^1$'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 7.221s: test/mux.sh:263: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /project.testproj.*/_count -XPOST -d '{"query":{"bool":{"filter":{"match_phrase":{"message":"GET /9c422f22fcd5444cbd2b6cc345dd59cd 404 "}},"must_not":[{"exists":{"field":"docker"}},{"exists":{"field":"kubernetes"}},{"exists":{"field":"CONTAINER_NAME"}},{"exists":{"field":"CONTAINER_ID_FULL"}},{"exists":{"field":"mux_namespace_name"}},{"exists":{"field":"mux_need_k8s_meta"}},{"exists":{"field":"namespace_name"}},{"exists":{"field":"namespace_uuid"}}]}}}' | get_count_from_json' expecting any result and text '^1$'; re-trying every 0.2s until completion or 600.000s
Running test/mux.sh:304: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /project.testproj.*/_count?q=SYSLOG_IDENTIFIER:50f9ee31f927419d90ea5614d0c0e588 | get_count_from_json' expecting any result and text '^1$'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 0.760s: test/mux.sh:304: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /project.testproj.*/_count?q=SYSLOG_IDENTIFIER:50f9ee31f927419d90ea5614d0c0e588 | get_count_from_json' expecting any result and text '^1$'; re-trying every 0.2s until completion or 600.000s
Running test/mux.sh:314: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /_cat/indices' expecting success and not text 'project\.default'...
SUCCESS after 0.570s: test/mux.sh:314: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /_cat/indices' expecting success and not text 'project\.default'
Running test/mux.sh:315: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /_cat/indices' expecting success and not text 'project\.default'...
SUCCESS after 0.626s: test/mux.sh:315: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /_cat/indices' expecting success and not text 'project\.default'
[INFO] ------- Test case 4 -------
[INFO] fluentd forwards kibana and system logs with tag test.bogus.external and no CONTAINER values, which will use a namespace of mux-undefined.
[INFO] using existing project mux-undefined
Running test/mux.sh:52: executing 'oc get pod logging-fluentd-rkbj6' expecting failure; re-trying every 0.2s until completion or 120.000s...
command terminated with exit code 137
SUCCESS after 6.347s: test/mux.sh:52: executing 'oc get pod logging-fluentd-rkbj6' expecting failure; re-trying every 0.2s until completion or 120.000s
No resources found.
No resources found.
Running test/mux.sh:190: executing 'flush_fluentd_pos_files' expecting success...
SUCCESS after 0.061s: test/mux.sh:190: executing 'flush_fluentd_pos_files' expecting success
Running test/mux.sh:193: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 4.572s: test/mux.sh:193: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
Running hack/testing/util.sh:281: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.251s: hack/testing/util.sh:281: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s
  WARNING: You're not using the default seccomp profile
Running test/mux.sh:263: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /project.logging.*/_count -XPOST -d '{"query":{"bool":{"filter":{"match_phrase":{"message":"GET /34bce03366224821a6f053b4657f2ebd 404 "}},"must_not":[{"exists":{"field":"docker"}},{"exists":{"field":"kubernetes"}},{"exists":{"field":"CONTAINER_NAME"}},{"exists":{"field":"CONTAINER_ID_FULL"}},{"exists":{"field":"mux_namespace_name"}},{"exists":{"field":"mux_need_k8s_meta"}},{"exists":{"field":"namespace_name"}},{"exists":{"field":"namespace_uuid"}}]}}}' | get_count_from_json' expecting any result and text '^1$'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 7.534s: test/mux.sh:263: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /project.logging.*/_count -XPOST -d '{"query":{"bool":{"filter":{"match_phrase":{"message":"GET /34bce03366224821a6f053b4657f2ebd 404 "}},"must_not":[{"exists":{"field":"docker"}},{"exists":{"field":"kubernetes"}},{"exists":{"field":"CONTAINER_NAME"}},{"exists":{"field":"CONTAINER_ID_FULL"}},{"exists":{"field":"mux_namespace_name"}},{"exists":{"field":"mux_need_k8s_meta"}},{"exists":{"field":"namespace_name"}},{"exists":{"field":"namespace_uuid"}}]}}}' | get_count_from_json' expecting any result and text '^1$'; re-trying every 0.2s until completion or 600.000s
Running test/mux.sh:304: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /project.mux-undefined.*/_count?q=SYSLOG_IDENTIFIER:217b6fe58dcf488697527a296b028ec3 | get_count_from_json' expecting any result and text '^1$'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 1.374s: test/mux.sh:304: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /project.mux-undefined.*/_count?q=SYSLOG_IDENTIFIER:217b6fe58dcf488697527a296b028ec3 | get_count_from_json' expecting any result and text '^1$'; re-trying every 0.2s until completion or 600.000s
Running test/mux.sh:314: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /_cat/indices' expecting success and not text 'project\.default'...
SUCCESS after 0.608s: test/mux.sh:314: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /_cat/indices' expecting success and not text 'project\.default'
Running test/mux.sh:315: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /_cat/indices' expecting success and not text 'project\.default'...
SUCCESS after 0.623s: test/mux.sh:315: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /_cat/indices' expecting success and not text 'project\.default'
Running test/mux.sh:319: executing 'oc set env dc logging-mux ES_HOST=logging-es OPS_HOST=logging-es-ops' expecting success...
SUCCESS after 0.503s: test/mux.sh:319: executing 'oc set env dc logging-mux ES_HOST=logging-es OPS_HOST=logging-es-ops' expecting success
Running test/mux.sh:322: executing 'oc get pods -l component=mux' expecting any result and text '^logging-mux-.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.259s: test/mux.sh:322: executing 'oc get pods -l component=mux' expecting any result and text '^logging-mux-.* Running '; re-trying every 0.2s until completion or 60.000s
[INFO] mux test finished at Tue Apr 10 10:24:39 UTC 2018
Running test/mux.sh:362: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s...
SUCCESS after 0.288s: test/mux.sh:362: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s
command terminated with exit code 137
{"acknowledged":true}{"acknowledged":true}{"acknowledged":true}Running test/mux.sh:373: executing 'flush_fluentd_pos_files' expecting success...
SUCCESS after 0.090s: test/mux.sh:373: executing 'flush_fluentd_pos_files' expecting success
Running test/mux.sh:376: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 4.463s: test/mux.sh:376: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
[INFO] Logging test suite test-mux succeeded at Tue Apr 10 10:24:57 UTC 2018
Running hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success...
SUCCESS after 0.416s: hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success
Running hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success...
SUCCESS after 0.447s: hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success
[INFO] Logging test suite test-read-throttling started at Tue Apr 10 10:24:58 UTC 2018
  WARNING: You're not using the default seccomp profile
[INFO] This test only works with the json-file docker log driver
[INFO] Logging test suite test-read-throttling succeeded at Tue Apr 10 10:24:58 UTC 2018
Running hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success...
SUCCESS after 0.354s: hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success
Running hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success...
SUCCESS after 0.341s: hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success
[INFO] Logging test suite test-utf8-characters started at Tue Apr 10 10:24:59 UTC 2018
[INFO] Starting utf8-characters test at Tue Apr 10 10:24:59 UTC 2018
Running hack/testing/util.sh:281: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.018s: hack/testing/util.sh:281: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s
  WARNING: You're not using the default seccomp profile
No resources found.
Running test/utf8-characters.sh:45: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"51d1a0a1b2384ce899f29af21922de16"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 300.000s...
SUCCESS after 5.295s: test/utf8-characters.sh:45: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"51d1a0a1b2384ce899f29af21922de16"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 300.000s
[INFO] Checking that message was successfully processed...
Running test/utf8-characters.sh:48: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /.operations.*/_search -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"51d1a0a1b2384ce899f29af21922de16"}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-utf8-characters.py '51d1a0a1b2384ce899f29af21922de16-µ' 51d1a0a1b2384ce899f29af21922de16' expecting success...
SUCCESS after 0.482s: test/utf8-characters.sh:48: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /.operations.*/_search -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"51d1a0a1b2384ce899f29af21922de16"}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-utf8-characters.py '51d1a0a1b2384ce899f29af21922de16-µ' 51d1a0a1b2384ce899f29af21922de16' expecting success
[INFO] utf8-characters test finished at Tue Apr 10 10:25:06 UTC 2018
[INFO] Logging test suite test-utf8-characters succeeded at Tue Apr 10 10:25:06 UTC 2018
Running hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success...
SUCCESS after 0.360s: hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success
Running hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success...
SUCCESS after 0.340s: hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success
[INFO] Logging test suite test-viaq-data-model started at Tue Apr 10 10:25:07 UTC 2018
Running test/viaq-data-model.sh:50: executing 'oc get pods -l component=es' expecting any result and text '^logging-es.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.264s: test/viaq-data-model.sh:50: executing 'oc get pods -l component=es' expecting any result and text '^logging-es.* Running '; re-trying every 0.2s until completion or 60.000s
Running test/viaq-data-model.sh:51: executing 'oc get pods -l component=kibana' expecting any result and text '^logging-kibana-.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.270s: test/viaq-data-model.sh:51: executing 'oc get pods -l component=kibana' expecting any result and text '^logging-kibana-.* Running '; re-trying every 0.2s until completion or 60.000s
Running test/viaq-data-model.sh:52: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.268s: test/viaq-data-model.sh:52: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
[INFO] Starting viaq-data-model test at Tue Apr 10 10:25:09 UTC 2018
No resources found.
No resources found.
Running test/viaq-data-model.sh:74: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s...
SUCCESS after 0.251s: test/viaq-data-model.sh:74: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s
command terminated with exit code 137
Running test/viaq-data-model.sh:113: executing 'flush_fluentd_pos_files' expecting success...
SUCCESS after 0.059s: test/viaq-data-model.sh:113: executing 'flush_fluentd_pos_files' expecting success
Running test/viaq-data-model.sh:115: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 9.420s: test/viaq-data-model.sh:115: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
No resources found.
No resources found.
Running hack/testing/util.sh:281: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.456s: hack/testing/util.sh:281: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s
  WARNING: You're not using the default seccomp profile
Running hack/testing/util.sh:212: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /project.logging.*/_count -X POST -d '{"query":{"match_phrase":{"message":"GET /ee09d450e6a947b68f9b9b28ab4bb62e 404 "}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 5.746s: hack/testing/util.sh:212: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /project.logging.*/_count -X POST -d '{"query":{"match_phrase":{"message":"GET /ee09d450e6a947b68f9b9b28ab4bb62e 404 "}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s
Running hack/testing/util.sh:232: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"d444a32cccb64a408ed21912297251ad"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 0.513s: hack/testing/util.sh:232: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"d444a32cccb64a408ed21912297251ad"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s
Running test/viaq-data-model.sh:131: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /project.logging.*/_search -X POST -d '{"query":{"match_phrase":{"message":"GET /ee09d450e6a947b68f9b9b28ab4bb62e 404 "}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test1' expecting success...
SUCCESS after 0.486s: test/viaq-data-model.sh:131: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /project.logging.*/_search -X POST -d '{"query":{"match_phrase":{"message":"GET /ee09d450e6a947b68f9b9b28ab4bb62e 404 "}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test1' expecting success
Running test/viaq-data-model.sh:134: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /.operations.*/_search -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"d444a32cccb64a408ed21912297251ad"}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test1' expecting success...
SUCCESS after 0.496s: test/viaq-data-model.sh:134: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /.operations.*/_search -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"d444a32cccb64a408ed21912297251ad"}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test1' expecting success
Running test/viaq-data-model.sh:145: executing 'oc get pod logging-fluentd-4p294' expecting failure; re-trying every 0.2s until completion or 60.000s...
command terminated with exit code 137
SUCCESS after 7.025s: test/viaq-data-model.sh:145: executing 'oc get pod logging-fluentd-4p294' expecting failure; re-trying every 0.2s until completion or 60.000s
Running test/viaq-data-model.sh:146: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 3.392s: test/viaq-data-model.sh:146: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
No resources found.
No resources found.
Running hack/testing/util.sh:281: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.022s: hack/testing/util.sh:281: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s
  WARNING: You're not using the default seccomp profile
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
Running hack/testing/util.sh:212: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /project.logging.*/_count -X POST -d '{"query":{"match_phrase":{"message":"GET /8be9d2703f61401cb921925a251fab58 404 "}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 5.052s: hack/testing/util.sh:212: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /project.logging.*/_count -X POST -d '{"query":{"match_phrase":{"message":"GET /8be9d2703f61401cb921925a251fab58 404 "}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s
Running hack/testing/util.sh:232: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"4b57bddc560f4a28ac4ee2d9cf3bb2af"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 0.543s: hack/testing/util.sh:232: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"4b57bddc560f4a28ac4ee2d9cf3bb2af"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s
Running test/viaq-data-model.sh:152: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /project.logging.*/_search -X POST -d '{"query":{"match_phrase":{"message":"GET /8be9d2703f61401cb921925a251fab58 404 "}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test2' expecting success...
SUCCESS after 0.502s: test/viaq-data-model.sh:152: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /project.logging.*/_search -X POST -d '{"query":{"match_phrase":{"message":"GET /8be9d2703f61401cb921925a251fab58 404 "}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test2' expecting success
Running test/viaq-data-model.sh:155: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /.operations.*/_search -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"4b57bddc560f4a28ac4ee2d9cf3bb2af"}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test2' expecting success...
SUCCESS after 0.549s: test/viaq-data-model.sh:155: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /.operations.*/_search -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"4b57bddc560f4a28ac4ee2d9cf3bb2af"}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test2' expecting success
Running test/viaq-data-model.sh:160: executing 'oc get pod logging-fluentd-21nc5' expecting failure; re-trying every 0.2s until completion or 60.000s...
command terminated with exit code 137
SUCCESS after 6.569s: test/viaq-data-model.sh:160: executing 'oc get pod logging-fluentd-21nc5' expecting failure; re-trying every 0.2s until completion or 60.000s
Running test/viaq-data-model.sh:161: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 3.341s: test/viaq-data-model.sh:161: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
No resources found.
No resources found.
Running hack/testing/util.sh:281: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.025s: hack/testing/util.sh:281: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s
  WARNING: You're not using the default seccomp profile
Running hack/testing/util.sh:212: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /project.logging.*/_count -X POST -d '{"query":{"match_phrase":{"message":"GET /8b6af2385e8e46efa232e84d2145ce32 404 "}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 5.112s: hack/testing/util.sh:212: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /project.logging.*/_count -X POST -d '{"query":{"match_phrase":{"message":"GET /8b6af2385e8e46efa232e84d2145ce32 404 "}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s
Running hack/testing/util.sh:232: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"2f5c9788fe8840caa8639c472c864899"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 0.544s: hack/testing/util.sh:232: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"2f5c9788fe8840caa8639c472c864899"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s
Running test/viaq-data-model.sh:167: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /project.logging.*/_search -X POST -d '{"query":{"match_phrase":{"message":"GET /8b6af2385e8e46efa232e84d2145ce32 404 "}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test3' expecting success...
SUCCESS after 0.472s: test/viaq-data-model.sh:167: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /project.logging.*/_search -X POST -d '{"query":{"match_phrase":{"message":"GET /8b6af2385e8e46efa232e84d2145ce32 404 "}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test3' expecting success
Running test/viaq-data-model.sh:171: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /.operations.*/_search -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"2f5c9788fe8840caa8639c472c864899"}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test3' expecting success...
SUCCESS after 0.481s: test/viaq-data-model.sh:171: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /.operations.*/_search -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"2f5c9788fe8840caa8639c472c864899"}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test3' expecting success
Running test/viaq-data-model.sh:176: executing 'oc get pod logging-fluentd-5bj9w' expecting failure; re-trying every 0.2s until completion or 60.000s...
command terminated with exit code 137
SUCCESS after 7.006s: test/viaq-data-model.sh:176: executing 'oc get pod logging-fluentd-5bj9w' expecting failure; re-trying every 0.2s until completion or 60.000s
Running test/viaq-data-model.sh:177: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 3.051s: test/viaq-data-model.sh:177: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
No resources found.
No resources found.
Running hack/testing/util.sh:281: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.020s: hack/testing/util.sh:281: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s
  WARNING: You're not using the default seccomp profile
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
Running hack/testing/util.sh:212: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /project.logging.*/_count -X POST -d '{"query":{"match_phrase":{"message":"GET /55047128d7d642cc9f33d037f8e55377 404 "}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 5.833s: hack/testing/util.sh:212: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /project.logging.*/_count -X POST -d '{"query":{"match_phrase":{"message":"GET /55047128d7d642cc9f33d037f8e55377 404 "}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s
Running hack/testing/util.sh:232: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"20532c2493e34c33b5e6df2de9e0998e"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 0.560s: hack/testing/util.sh:232: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"20532c2493e34c33b5e6df2de9e0998e"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s
Running test/viaq-data-model.sh:183: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /project.logging.*/_search -X POST -d '{"query":{"match_phrase":{"message":"GET /55047128d7d642cc9f33d037f8e55377 404 "}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test4' expecting success...
SUCCESS after 0.460s: test/viaq-data-model.sh:183: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /project.logging.*/_search -X POST -d '{"query":{"match_phrase":{"message":"GET /55047128d7d642cc9f33d037f8e55377 404 "}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test4' expecting success
Running test/viaq-data-model.sh:187: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /.operations.*/_search -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"20532c2493e34c33b5e6df2de9e0998e"}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test4' expecting success...
SUCCESS after 0.506s: test/viaq-data-model.sh:187: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /.operations.*/_search -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"20532c2493e34c33b5e6df2de9e0998e"}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test4' expecting success
Running test/viaq-data-model.sh:192: executing 'oc get pod logging-fluentd-p1g4j' expecting failure; re-trying every 0.2s until completion or 60.000s...
command terminated with exit code 137
SUCCESS after 6.516s: test/viaq-data-model.sh:192: executing 'oc get pod logging-fluentd-p1g4j' expecting failure; re-trying every 0.2s until completion or 60.000s
Running test/viaq-data-model.sh:193: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 2.897s: test/viaq-data-model.sh:193: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
No resources found.
No resources found.
Running hack/testing/util.sh:281: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.019s: hack/testing/util.sh:281: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s
  WARNING: You're not using the default seccomp profile
Running hack/testing/util.sh:212: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /project.logging.*/_count -X POST -d '{"query":{"match_phrase":{"message":"GET /c0e59db8e1ed405ebbc093d723bdb215 404 "}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 5.380s: hack/testing/util.sh:212: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /project.logging.*/_count -X POST -d '{"query":{"match_phrase":{"message":"GET /c0e59db8e1ed405ebbc093d723bdb215 404 "}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s
Running hack/testing/util.sh:232: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"fb1edb1671eb4d7b9a8f48513176f402"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 1.244s: hack/testing/util.sh:232: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"fb1edb1671eb4d7b9a8f48513176f402"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s
Running test/viaq-data-model.sh:207: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /project.logging.*/_search -X POST -d '{"query":{"match_phrase":{"message":"GET /c0e59db8e1ed405ebbc093d723bdb215 404 "}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test5 allow_empty' expecting success...
SUCCESS after 0.524s: test/viaq-data-model.sh:207: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /project.logging.*/_search -X POST -d '{"query":{"match_phrase":{"message":"GET /c0e59db8e1ed405ebbc093d723bdb215 404 "}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test5 allow_empty' expecting success
Running test/viaq-data-model.sh:211: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /.operations.*/_search -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"fb1edb1671eb4d7b9a8f48513176f402"}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test5 allow_empty' expecting success...
SUCCESS after 0.500s: test/viaq-data-model.sh:211: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /.operations.*/_search -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"fb1edb1671eb4d7b9a8f48513176f402"}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test5 allow_empty' expecting success
[INFO] viaq-data-model test finished at Tue Apr 10 10:26:52 UTC 2018
Running test/viaq-data-model.sh:33: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s...
SUCCESS after 0.347s: test/viaq-data-model.sh:33: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s
command terminated with exit code 137
Running test/viaq-data-model.sh:40: executing 'flush_fluentd_pos_files' expecting success...
SUCCESS after 0.081s: test/viaq-data-model.sh:40: executing 'flush_fluentd_pos_files' expecting success
Running test/viaq-data-model.sh:42: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 2.850s: test/viaq-data-model.sh:42: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
[INFO] Logging test suite test-viaq-data-model succeeded at Tue Apr 10 10:26:59 UTC 2018
Running hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success...
SUCCESS after 0.418s: hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success
Running hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success...
SUCCESS after 0.435s: hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success
[INFO] Logging test suite test-zzz-correct-index-names started at Tue Apr 10 10:27:00 UTC 2018
No resources found.
No resources found.
Running test/zzz-correct-index-names.sh:29: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /_cat/indices' expecting success and not text 'project\.default\.'...
SUCCESS after 0.497s: test/zzz-correct-index-names.sh:29: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /_cat/indices' expecting success and not text 'project\.default\.'
Running test/zzz-correct-index-names.sh:30: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /project.default.*/_count | get_count_from_json' expecting success and text '^0$'...
SUCCESS after 0.523s: test/zzz-correct-index-names.sh:30: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /project.default.*/_count | get_count_from_json' expecting success and text '^0$'
Running test/zzz-correct-index-names.sh:31: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /project.*/_count -X POST -d '{"query":{"term":{"kubernetes.namespace_name":"default"}}}' | get_count_from_json' expecting success and text '^0$'...
SUCCESS after 0.538s: test/zzz-correct-index-names.sh:31: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /project.*/_count -X POST -d '{"query":{"term":{"kubernetes.namespace_name":"default"}}}' | get_count_from_json' expecting success and text '^0$'
Running test/zzz-correct-index-names.sh:33: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /_cat/indices' expecting success and not text 'project\.default\.'...
SUCCESS after 0.664s: test/zzz-correct-index-names.sh:33: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /_cat/indices' expecting success and not text 'project\.default\.'
Running test/zzz-correct-index-names.sh:34: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /project.default.*/_count | get_count_from_json' expecting success and text '^0$'...
SUCCESS after 0.501s: test/zzz-correct-index-names.sh:34: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /project.default.*/_count | get_count_from_json' expecting success and text '^0$'
Running test/zzz-correct-index-names.sh:35: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /project.*/_count -X POST -d '{"query":{"term":{"kubernetes.namespace_name":"default"}}}' | get_count_from_json' expecting success and text '^0$'...
SUCCESS after 0.474s: test/zzz-correct-index-names.sh:35: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /project.*/_count -X POST -d '{"query":{"term":{"kubernetes.namespace_name":"default"}}}' | get_count_from_json' expecting success and text '^0$'
Running test/zzz-correct-index-names.sh:40: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /.operations.*/_count -X POST -d '{"query":{"term":{"kubernetes.namespace_name":"default"}}}' | get_count_from_json' expecting success and not text '^0$'...
SUCCESS after 0.508s: test/zzz-correct-index-names.sh:40: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /.operations.*/_count -X POST -d '{"query":{"term":{"kubernetes.namespace_name":"default"}}}' | get_count_from_json' expecting success and not text '^0$'
Running test/zzz-correct-index-names.sh:29: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /_cat/indices' expecting success and not text 'project\.openshift\.'...
SUCCESS after 0.480s: test/zzz-correct-index-names.sh:29: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /_cat/indices' expecting success and not text 'project\.openshift\.'
Running test/zzz-correct-index-names.sh:30: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /project.openshift.*/_count | get_count_from_json' expecting success and text '^0$'...
SUCCESS after 0.514s: test/zzz-correct-index-names.sh:30: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /project.openshift.*/_count | get_count_from_json' expecting success and text '^0$'
Running test/zzz-correct-index-names.sh:31: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /project.*/_count -X POST -d '{"query":{"term":{"kubernetes.namespace_name":"openshift"}}}' | get_count_from_json' expecting success and text '^0$'...
SUCCESS after 0.464s: test/zzz-correct-index-names.sh:31: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /project.*/_count -X POST -d '{"query":{"term":{"kubernetes.namespace_name":"openshift"}}}' | get_count_from_json' expecting success and text '^0$'
Running test/zzz-correct-index-names.sh:33: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /_cat/indices' expecting success and not text 'project\.openshift\.'...
SUCCESS after 0.511s: test/zzz-correct-index-names.sh:33: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /_cat/indices' expecting success and not text 'project\.openshift\.'
Running test/zzz-correct-index-names.sh:34: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /project.openshift.*/_count | get_count_from_json' expecting success and text '^0$'...
SUCCESS after 0.562s: test/zzz-correct-index-names.sh:34: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /project.openshift.*/_count | get_count_from_json' expecting success and text '^0$'
Running test/zzz-correct-index-names.sh:35: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /project.*/_count -X POST -d '{"query":{"term":{"kubernetes.namespace_name":"openshift"}}}' | get_count_from_json' expecting success and text '^0$'...
SUCCESS after 0.542s: test/zzz-correct-index-names.sh:35: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /project.*/_count -X POST -d '{"query":{"term":{"kubernetes.namespace_name":"openshift"}}}' | get_count_from_json' expecting success and text '^0$'
Running test/zzz-correct-index-names.sh:29: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /_cat/indices' expecting success and not text 'project\.openshift-infra\.'...
SUCCESS after 0.478s: test/zzz-correct-index-names.sh:29: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /_cat/indices' expecting success and not text 'project\.openshift-infra\.'
Running test/zzz-correct-index-names.sh:30: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /project.openshift-infra.*/_count | get_count_from_json' expecting success and text '^0$'...
SUCCESS after 0.522s: test/zzz-correct-index-names.sh:30: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /project.openshift-infra.*/_count | get_count_from_json' expecting success and text '^0$'
Running test/zzz-correct-index-names.sh:31: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /project.*/_count -X POST -d '{"query":{"term":{"kubernetes.namespace_name":"openshift-infra"}}}' | get_count_from_json' expecting success and text '^0$'...
SUCCESS after 0.500s: test/zzz-correct-index-names.sh:31: executing 'curl_es logging-es-data-master-fj82lvf3-1-pk244 /project.*/_count -X POST -d '{"query":{"term":{"kubernetes.namespace_name":"openshift-infra"}}}' | get_count_from_json' expecting success and text '^0$'
Running test/zzz-correct-index-names.sh:33: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /_cat/indices' expecting success and not text 'project\.openshift-infra\.'...
SUCCESS after 0.494s: test/zzz-correct-index-names.sh:33: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /_cat/indices' expecting success and not text 'project\.openshift-infra\.'
Running test/zzz-correct-index-names.sh:34: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /project.openshift-infra.*/_count | get_count_from_json' expecting success and text '^0$'...
SUCCESS after 0.495s: test/zzz-correct-index-names.sh:34: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /project.openshift-infra.*/_count | get_count_from_json' expecting success and text '^0$'
Running test/zzz-correct-index-names.sh:35: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /project.*/_count -X POST -d '{"query":{"term":{"kubernetes.namespace_name":"openshift-infra"}}}' | get_count_from_json' expecting success and text '^0$'...
SUCCESS after 0.481s: test/zzz-correct-index-names.sh:35: executing 'curl_es logging-es-ops-data-master-1x1nb16q-1-7d7vm /project.*/_count -X POST -d '{"query":{"term":{"kubernetes.namespace_name":"openshift-infra"}}}' | get_count_from_json' expecting success and text '^0$'
[INFO] Logging test suite test-zzz-correct-index-names succeeded at Tue Apr 10 10:27:12 UTC 2018
[INFO] [CLEANUP] Beginning cleanup routines...
/data/src/github.com/openshift/origin-aggregated-logging/hack/lib/cleanup.sh: line 258: 61667 Terminated              monitor_fluentd_top
/data/src/github.com/openshift/origin-aggregated-logging/hack/lib/cleanup.sh: line 258: 61668 Terminated              monitor_fluentd_pos
/data/src/github.com/openshift/origin-aggregated-logging/hack/lib/cleanup.sh: line 258: 61670 Terminated              monitor_journal_lograte
[INFO] [CLEANUP] Dumping cluster events to _output/scripts/entrypoint/artifacts/events.txt
Logged into "https://ip-172-18-6-56.ec2.internal:8443" as "system:admin" using existing credentials.

You have access to the following projects and can switch between them with 'oc project <projectname>':

    default
    kube-public
    kube-system
  * logging
    management-infra
    mux-undefined
    openshift
    openshift-infra

Using project "logging".
[INFO] [CLEANUP] Dumping container logs to _output/scripts/entrypoint/logs/containers
[INFO] [CLEANUP] Truncating log files over 200M
+ set +o xtrace
########## FINISHED STAGE: SUCCESS: RUN LOGGING TESTS [00h 31m 47s] ##########
[PostBuildScript] - Execution post build scripts.
[workspace] $ /bin/bash /tmp/jenkins4202214660574839080.sh
########## STARTING STAGE: DOWNLOAD ARTIFACTS FROM THE REMOTE HOST ##########
+ [[ -s /var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_36/workspace/activate ]]
+ source /var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_36/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66
++ export PATH=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_36/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_36/workspace/.config
+ trap 'exit 0' EXIT
++ pwd
+ ARTIFACT_DIR=/var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_36/workspace/artifacts/gathered
+ rm -rf /var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_36/workspace/artifacts/gathered
+ mkdir -p /var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_36/workspace/artifacts/gathered
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo stat /data/src/github.com/openshift/origin/_output/scripts
  File: ‘/data/src/github.com/openshift/origin/_output/scripts’
  Size: 107       	Blocks: 0          IO Block: 4096   directory
Device: ca02h/51714d	Inode: 25234435    Links: 7
Access: (2755/drwxr-sr-x)  Uid: ( 1001/  origin)   Gid: ( 1003/origin-git)
Context: unconfined_u:object_r:svirt_sandbox_file_t:s0
Access: 2018-04-10 08:37:04.341012667 +0000
Modify: 2018-04-10 09:47:08.294854935 +0000
Change: 2018-04-10 09:47:08.294854935 +0000
 Birth: -
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo chmod -R o+rX /data/src/github.com/openshift/origin/_output/scripts
+ scp -r -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel:/data/src/github.com/openshift/origin/_output/scripts /var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_36/workspace/artifacts/gathered
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo stat /data/src/github.com/openshift/origin-aggregated-logging/_output/scripts
  File: ‘/data/src/github.com/openshift/origin-aggregated-logging/_output/scripts’
  Size: 44        	Blocks: 0          IO Block: 4096   directory
Device: ca02h/51714d	Inode: 176392451   Links: 4
Access: (2755/drwxr-sr-x)  Uid: ( 1001/  origin)   Gid: ( 1003/origin-git)
Context: unconfined_u:object_r:svirt_sandbox_file_t:s0
Access: 2018-04-10 09:47:10.939761903 +0000
Modify: 2018-04-10 09:55:29.875212334 +0000
Change: 2018-04-10 09:55:29.875212334 +0000
 Birth: -
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo chmod -R o+rX /data/src/github.com/openshift/origin-aggregated-logging/_output/scripts
+ scp -r -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel:/data/src/github.com/openshift/origin-aggregated-logging/_output/scripts /var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_36/workspace/artifacts/gathered
+ tree /var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_36/workspace/artifacts/gathered
/var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_36/workspace/artifacts/gathered
└── scripts
    ├── ansible_junit
    │   ├── FRJdhrdvnw.xml
    │   ├── wFfEzUecXX.xml
    │   ├── xeRxYUejNl.xml
    │   └── xVFSFPMxVQ.xml
    ├── build-base-images
    │   ├── artifacts
    │   ├── logs
    │   └── openshift.local.home
    ├── build-images
    │   ├── artifacts
    │   ├── logs
    │   │   └── scripts.log
    │   └── openshift.local.home
    ├── entrypoint
    │   ├── artifacts
    │   │   ├── access_control.sh-artifacts.txt
    │   │   ├── es.indices.after
    │   │   ├── es.indices.before
    │   │   ├── es-ops.indices.after
    │   │   ├── es-ops.indices.before
    │   │   ├── events.txt
    │   │   ├── fluentd-forward.sh-artifacts.txt
    │   │   ├── logging-curator-ops-3-xfp6n.log
    │   │   ├── logging-fluentd-3bqts.log
    │   │   ├── logging-fluentd-gx38f.log
    │   │   ├── logging-fluentd-orig.yaml
    │   │   ├── logging-fluentd-zbz24.log
    │   │   ├── logging-mux-1-xldt4.log
    │   │   ├── monitor_fluentd_pos.log
    │   │   ├── monitor_fluentd_top.kubeconfig
    │   │   ├── monitor_fluentd_top.log
    │   │   ├── monitor_journal_lograte.log
    │   │   ├── multi_tenancy.sh-artifacts.txt
    │   │   ├── mux.logging-fluentd-djdgk.log
    │   │   └── mux.sh-artifacts.txt
    │   ├── logs
    │   │   ├── containers
    │   │   │   ├── k8s_curator_logging-curator-7-p4hd0_logging_d03ae60a-3ca7-11e8-8016-0e3510f2bc98_0.log
    │   │   │   ├── k8s_curator_logging-curator-ops-5-x38f0_logging_e85c5676-3ca7-11e8-8016-0e3510f2bc98_0.log
    │   │   │   ├── k8s_elasticsearch_logging-es-data-master-fj82lvf3-1-pk244_logging_01e3ba8d-3ca5-11e8-8016-0e3510f2bc98_0.log
    │   │   │   ├── k8s_elasticsearch_logging-es-ops-data-master-1x1nb16q-1-7d7vm_logging_0dc588ea-3ca5-11e8-8016-0e3510f2bc98_0.log
    │   │   │   ├── k8s_fluentd-elasticsearch_logging-fluentd-zbz24_logging_a70a64f5-3ca9-11e8-8016-0e3510f2bc98_0.log
    │   │   │   ├── k8s_fluentd-elasticsearch_logging-fluentd-zc8lj_logging_b1eeceb4-3ca9-11e8-8016-0e3510f2bc98_0.log
    │   │   │   ├── k8s_kibana_logging-kibana-1-5n7b3_logging_19adf8e3-3ca5-11e8-8016-0e3510f2bc98_0.log
    │   │   │   ├── k8s_kibana_logging-kibana-ops-1-xvn9m_logging_2216adf9-3ca5-11e8-8016-0e3510f2bc98_0.log
    │   │   │   ├── k8s_kibana-proxy_logging-kibana-1-5n7b3_logging_19adf8e3-3ca5-11e8-8016-0e3510f2bc98_0.log
    │   │   │   ├── k8s_kibana-proxy_logging-kibana-ops-1-xvn9m_logging_2216adf9-3ca5-11e8-8016-0e3510f2bc98_0.log
    │   │   │   ├── k8s_mux_logging-mux-1-xldt4_logging_3f1bbde1-3ca5-11e8-8016-0e3510f2bc98_0.log
    │   │   │   ├── k8s_POD_docker-registry-1-07rs8_default_10ac5dd2-3ca4-11e8-8016-0e3510f2bc98_0.log
    │   │   │   ├── k8s_POD_logging-curator-7-p4hd0_logging_d03ae60a-3ca7-11e8-8016-0e3510f2bc98_0.log
    │   │   │   ├── k8s_POD_logging-curator-ops-5-x38f0_logging_e85c5676-3ca7-11e8-8016-0e3510f2bc98_0.log
    │   │   │   ├── k8s_POD_logging-es-data-master-fj82lvf3-1-pk244_logging_01e3ba8d-3ca5-11e8-8016-0e3510f2bc98_0.log
    │   │   │   ├── k8s_POD_logging-es-ops-data-master-1x1nb16q-1-7d7vm_logging_0dc588ea-3ca5-11e8-8016-0e3510f2bc98_0.log
    │   │   │   ├── k8s_POD_logging-fluentd-zbz24_logging_a70a64f5-3ca9-11e8-8016-0e3510f2bc98_0.log
    │   │   │   ├── k8s_POD_logging-fluentd-zc8lj_logging_b1eeceb4-3ca9-11e8-8016-0e3510f2bc98_0.log
    │   │   │   ├── k8s_POD_logging-kibana-1-5n7b3_logging_19adf8e3-3ca5-11e8-8016-0e3510f2bc98_0.log
    │   │   │   ├── k8s_POD_logging-kibana-ops-1-xvn9m_logging_2216adf9-3ca5-11e8-8016-0e3510f2bc98_0.log
    │   │   │   ├── k8s_POD_logging-mux-1-xldt4_logging_3f1bbde1-3ca5-11e8-8016-0e3510f2bc98_0.log
    │   │   │   ├── k8s_POD_registry-console-1-t9g5j_default_1dd94162-3ca4-11e8-8016-0e3510f2bc98_0.log
    │   │   │   ├── k8s_POD_router-1-7ztjv_default_f5682b41-3ca3-11e8-8016-0e3510f2bc98_0.log
    │   │   │   ├── k8s_registry-console_registry-console-1-t9g5j_default_1dd94162-3ca4-11e8-8016-0e3510f2bc98_0.log
    │   │   │   ├── k8s_registry_docker-registry-1-07rs8_default_10ac5dd2-3ca4-11e8-8016-0e3510f2bc98_0.log
    │   │   │   └── k8s_router_router-1-7ztjv_default_f5682b41-3ca3-11e8-8016-0e3510f2bc98_0.log
    │   │   ├── raw_test_output.log
    │   │   └── scripts.log
    │   └── openshift.local.home
    ├── env
    │   ├── artifacts
    │   ├── logs
    │   │   └── scripts.log
    │   └── openshift.local.home
    ├── tmp.K0qsam1kjF
    │   ├── artifacts
    │   ├── logs
    │   └── openshift.local.home
    └── tmp.tu8mqQMkUY
        ├── artifacts
        ├── logs
        └── openshift.local.home

27 directories, 54 files
+ exit 0
[workspace] $ /bin/bash /tmp/jenkins4790088067329404176.sh
########## STARTING STAGE: GENERATE ARTIFACTS FROM THE REMOTE HOST ##########
+ [[ -s /var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_36/workspace/activate ]]
+ source /var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_36/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66
++ export PATH=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_36/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_36/workspace/.config
+ trap 'exit 0' EXIT
++ pwd
+ ARTIFACT_DIR=/var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_36/workspace/artifacts/generated
+ rm -rf /var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_36/workspace/artifacts/generated
+ mkdir /var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_36/workspace/artifacts/generated
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo docker version && sudo docker info && sudo docker images && sudo docker ps -a 2>&1'
  WARNING: You're not using the default seccomp profile
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo cat /etc/etcd/etcd.conf 2>&1'
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo cat /etc/sysconfig/docker /etc/sysconfig/docker-network /etc/sysconfig/docker-storage /etc/sysconfig/docker-storage-setup /etc/systemd/system/docker.service 2>&1'
+ true
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo find /var/lib/docker/containers -name *.log | sudo xargs tail -vn +1 2>&1'
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'oc get --raw /metrics --server=https://$( uname --nodename ):10250 --config=/etc/origin/master/admin.kubeconfig 2>&1'
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo ausearch -m AVC -m SELINUX_ERR -m USER_AVC 2>&1'
+ true
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'oc get --raw /metrics --config=/etc/origin/master/admin.kubeconfig 2>&1'
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo df -T -h && sudo pvs && sudo vgs && sudo lvs && sudo findmnt --all 2>&1'
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo yum list installed 2>&1'
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo journalctl --dmesg --no-pager --all --lines=all 2>&1'
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo journalctl _PID=1 --no-pager --all --lines=all 2>&1'
+ tree /var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_36/workspace/artifacts/generated
/var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_36/workspace/artifacts/generated
├── avc_denials.log
├── containers.log
├── dmesg.log
├── docker.config
├── docker.info
├── etcd.conf
├── filesystem.info
├── installed_packages.log
├── master-metrics.log
├── node-metrics.log
└── pid1.journal

0 directories, 11 files
+ exit 0
[workspace] $ /bin/bash /tmp/jenkins4413888227793314657.sh
########## STARTING STAGE: FETCH SYSTEMD JOURNALS FROM THE REMOTE HOST ##########
+ [[ -s /var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_36/workspace/activate ]]
+ source /var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_36/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66
++ export PATH=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_36/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_36/workspace/.config
+ trap 'exit 0' EXIT
++ pwd
+ ARTIFACT_DIR=/var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_36/workspace/artifacts/journals
+ rm -rf /var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_36/workspace/artifacts/journals
+ mkdir /var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_36/workspace/artifacts/journals
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit docker.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit dnsmasq.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit systemd-journald.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit origin-master.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit origin-master-api.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit origin-master-controllers.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit origin-node.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit openvswitch.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit ovs-vswitchd.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit ovsdb-server.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit etcd.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit systemd-journald.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit origin-master.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit origin-master-api.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit origin-master-controllers.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit origin-node.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit openvswitch.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit ovs-vswitchd.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit ovsdb-server.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit etcd.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit systemd-journald.service --no-pager --all --lines=all
+ tree /var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_36/workspace/artifacts/journals
/var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_36/workspace/artifacts/journals
├── dnsmasq.service
├── docker.service
├── etcd.service
├── openvswitch.service
├── origin-master-api.service
├── origin-master-controllers.service
├── origin-master.service
├── origin-node.service
├── ovsdb-server.service
├── ovs-vswitchd.service
└── systemd-journald.service

0 directories, 11 files
+ exit 0
[workspace] $ /bin/bash /tmp/jenkins4208542558900391645.sh
########## STARTING STAGE: ASSEMBLE GCS OUTPUT ##########
+ [[ -s /var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_36/workspace/activate ]]
+ source /var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_36/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66
++ export PATH=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_36/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_36/workspace/.config
+ trap 'exit 0' EXIT
+ mkdir -p gcs/artifacts gcs/artifacts/generated gcs/artifacts/journals gcs/artifacts/gathered
++ python -c 'import json; import urllib; print json.load(urllib.urlopen('\''https://ci.openshift.redhat.com/jenkins/job/test_pull_request_openshift_ansible_logging_36/29/api/json'\''))['\''result'\'']'
+ result=SUCCESS
+ cat
++ date +%s
+ cat /var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_36/builds/29/log
+ cp artifacts/generated/avc_denials.log artifacts/generated/containers.log artifacts/generated/dmesg.log artifacts/generated/docker.config artifacts/generated/docker.info artifacts/generated/etcd.conf artifacts/generated/filesystem.info artifacts/generated/installed_packages.log artifacts/generated/master-metrics.log artifacts/generated/node-metrics.log artifacts/generated/pid1.journal gcs/artifacts/generated/
+ cp artifacts/journals/dnsmasq.service artifacts/journals/docker.service artifacts/journals/etcd.service artifacts/journals/openvswitch.service artifacts/journals/origin-master-api.service artifacts/journals/origin-master-controllers.service artifacts/journals/origin-master.service artifacts/journals/origin-node.service artifacts/journals/ovsdb-server.service artifacts/journals/ovs-vswitchd.service artifacts/journals/systemd-journald.service gcs/artifacts/journals/
+ cp -r artifacts/gathered/scripts gcs/artifacts/
++ pwd
+ scp -F ./.config/origin-ci-tool/inventory/.ssh_config -r /var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_36/workspace/gcs openshiftdevel:/data
+ scp -F ./.config/origin-ci-tool/inventory/.ssh_config /var/lib/jenkins/.config/gcloud/gcs-publisher-credentials.json openshiftdevel:/data/credentials.json
+ exit 0
[workspace] $ /bin/bash /tmp/jenkins767966675582245669.sh
########## STARTING STAGE: PUSH THE ARTIFACTS AND METADATA ##########
+ [[ -s /var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_36/workspace/activate ]]
+ source /var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_36/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66
++ export PATH=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_36/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_36/workspace/.config
++ mktemp
+ script=/tmp/tmp.yydu74lwzG
+ cat
+ chmod +x /tmp/tmp.yydu74lwzG
+ scp -F ./.config/origin-ci-tool/inventory/.ssh_config /tmp/tmp.yydu74lwzG openshiftdevel:/tmp/tmp.yydu74lwzG
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config -t openshiftdevel 'bash -l -c "timeout 300 /tmp/tmp.yydu74lwzG"'
+ cd /home/origin
+ trap 'exit 0' EXIT
+ [[ -n {"type":"presubmit","job":"test_pull_request_openshift_ansible_logging_36","buildid":"1f08e19a-3c99-11e8-94bb-0a58ac10035a","refs":{"org":"openshift","repo":"openshift-ansible","base_ref":"release-3.6","base_sha":"75f05c5f2f26003c4d3263ecd483ba19f7dfead3","pulls":[{"number":7767,"author":"mjudeikis","sha":"6d36481847d2162d40d4a57ba98e380b3fa1a625"}]}} ]]
++ jq --compact-output .buildid
+ [[ "1f08e19a-3c99-11e8-94bb-0a58ac10035a" =~ ^"[0-9]+"$ ]]
Using BUILD_NUMBER
+ echo 'Using BUILD_NUMBER'
++ jq --compact-output '.buildid |= "29"'
+ JOB_SPEC='{"type":"presubmit","job":"test_pull_request_openshift_ansible_logging_36","buildid":"29","refs":{"org":"openshift","repo":"openshift-ansible","base_ref":"release-3.6","base_sha":"75f05c5f2f26003c4d3263ecd483ba19f7dfead3","pulls":[{"number":7767,"author":"mjudeikis","sha":"6d36481847d2162d40d4a57ba98e380b3fa1a625"}]}}'
+ docker run -e 'JOB_SPEC={"type":"presubmit","job":"test_pull_request_openshift_ansible_logging_36","buildid":"29","refs":{"org":"openshift","repo":"openshift-ansible","base_ref":"release-3.6","base_sha":"75f05c5f2f26003c4d3263ecd483ba19f7dfead3","pulls":[{"number":7767,"author":"mjudeikis","sha":"6d36481847d2162d40d4a57ba98e380b3fa1a625"}]}}' -v /data:/data:z registry.svc.ci.openshift.org/ci/gcsupload:latest --dry-run=false --gcs-bucket=origin-ci-test --gcs-credentials-file=/data/credentials.json --path-strategy=single --default-org=openshift --default-repo=origin /data/gcs/artifacts /data/gcs/build-log.txt /data/gcs/finished.json
Unable to find image 'registry.svc.ci.openshift.org/ci/gcsupload:latest' locally
Trying to pull repository registry.svc.ci.openshift.org/ci/gcsupload ... 
sha256:937cfc74efbe5f99ac6b54a8837ce0c1ba72f9f12cf4bf484c6fb7323727f623: Pulling from registry.svc.ci.openshift.org/ci/gcsupload
6d987f6f4279: Already exists
4cccebe844ee: Already exists
deb4d9262c8e: Pulling fs layer
deb4d9262c8e: Verifying Checksum
deb4d9262c8e: Download complete
deb4d9262c8e: Pull complete
Digest: sha256:937cfc74efbe5f99ac6b54a8837ce0c1ba72f9f12cf4bf484c6fb7323727f623
Status: Downloaded newer image for registry.svc.ci.openshift.org/ci/gcsupload:latest
{"component":"gcsupload","level":"info","msg":"Gathering artifacts from artifact directory: /data/gcs/artifacts","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/generated/avc_denials.log in artifact directory. Uploading as artifacts/generated/avc_denials.log\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/generated/containers.log in artifact directory. Uploading as artifacts/generated/containers.log\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/generated/dmesg.log in artifact directory. Uploading as artifacts/generated/dmesg.log\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/generated/docker.config in artifact directory. Uploading as artifacts/generated/docker.config\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/generated/docker.info in artifact directory. Uploading as artifacts/generated/docker.info\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/generated/etcd.conf in artifact directory. Uploading as artifacts/generated/etcd.conf\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/generated/filesystem.info in artifact directory. Uploading as artifacts/generated/filesystem.info\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/generated/installed_packages.log in artifact directory. Uploading as artifacts/generated/installed_packages.log\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/generated/master-metrics.log in artifact directory. Uploading as artifacts/generated/master-metrics.log\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/generated/node-metrics.log in artifact directory. Uploading as artifacts/generated/node-metrics.log\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/generated/pid1.journal in artifact directory. Uploading as artifacts/generated/pid1.journal\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/journals/dnsmasq.service in artifact directory. Uploading as artifacts/journals/dnsmasq.service\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/journals/docker.service in artifact directory. Uploading as artifacts/journals/docker.service\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/journals/etcd.service in artifact directory. Uploading as artifacts/journals/etcd.service\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/journals/openvswitch.service in artifact directory. Uploading as artifacts/journals/openvswitch.service\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/journals/origin-master-api.service in artifact directory. Uploading as artifacts/journals/origin-master-api.service\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/journals/origin-master-controllers.service in artifact directory. Uploading as artifacts/journals/origin-master-controllers.service\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/journals/origin-master.service in artifact directory. Uploading as artifacts/journals/origin-master.service\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/journals/origin-node.service in artifact directory. Uploading as artifacts/journals/origin-node.service\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/journals/ovs-vswitchd.service in artifact directory. Uploading as artifacts/journals/ovs-vswitchd.service\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/journals/ovsdb-server.service in artifact directory. Uploading as artifacts/journals/ovsdb-server.service\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/journals/systemd-journald.service in artifact directory. Uploading as artifacts/journals/systemd-journald.service\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/ansible_junit/FRJdhrdvnw.xml in artifact directory. Uploading as artifacts/scripts/ansible_junit/FRJdhrdvnw.xml\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/ansible_junit/wFfEzUecXX.xml in artifact directory. Uploading as artifacts/scripts/ansible_junit/wFfEzUecXX.xml\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/ansible_junit/xVFSFPMxVQ.xml in artifact directory. Uploading as artifacts/scripts/ansible_junit/xVFSFPMxVQ.xml\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/ansible_junit/xeRxYUejNl.xml in artifact directory. Uploading as artifacts/scripts/ansible_junit/xeRxYUejNl.xml\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/build-images/logs/scripts.log in artifact directory. Uploading as artifacts/scripts/build-images/logs/scripts.log\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/entrypoint/artifacts/access_control.sh-artifacts.txt in artifact directory. Uploading as artifacts/scripts/entrypoint/artifacts/access_control.sh-artifacts.txt\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/entrypoint/artifacts/es-ops.indices.after in artifact directory. Uploading as artifacts/scripts/entrypoint/artifacts/es-ops.indices.after\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/entrypoint/artifacts/es-ops.indices.before in artifact directory. Uploading as artifacts/scripts/entrypoint/artifacts/es-ops.indices.before\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/entrypoint/artifacts/es.indices.after in artifact directory. Uploading as artifacts/scripts/entrypoint/artifacts/es.indices.after\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/entrypoint/artifacts/es.indices.before in artifact directory. Uploading as artifacts/scripts/entrypoint/artifacts/es.indices.before\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/entrypoint/artifacts/events.txt in artifact directory. Uploading as artifacts/scripts/entrypoint/artifacts/events.txt\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/entrypoint/artifacts/fluentd-forward.sh-artifacts.txt in artifact directory. Uploading as artifacts/scripts/entrypoint/artifacts/fluentd-forward.sh-artifacts.txt\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/entrypoint/artifacts/logging-curator-ops-3-xfp6n.log in artifact directory. Uploading as artifacts/scripts/entrypoint/artifacts/logging-curator-ops-3-xfp6n.log\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/entrypoint/artifacts/logging-fluentd-3bqts.log in artifact directory. Uploading as artifacts/scripts/entrypoint/artifacts/logging-fluentd-3bqts.log\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/entrypoint/artifacts/logging-fluentd-gx38f.log in artifact directory. Uploading as artifacts/scripts/entrypoint/artifacts/logging-fluentd-gx38f.log\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/entrypoint/artifacts/logging-fluentd-orig.yaml in artifact directory. Uploading as artifacts/scripts/entrypoint/artifacts/logging-fluentd-orig.yaml\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/entrypoint/artifacts/logging-fluentd-zbz24.log in artifact directory. Uploading as artifacts/scripts/entrypoint/artifacts/logging-fluentd-zbz24.log\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/entrypoint/artifacts/logging-mux-1-xldt4.log in artifact directory. Uploading as artifacts/scripts/entrypoint/artifacts/logging-mux-1-xldt4.log\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/entrypoint/artifacts/monitor_fluentd_pos.log in artifact directory. Uploading as artifacts/scripts/entrypoint/artifacts/monitor_fluentd_pos.log\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/entrypoint/artifacts/monitor_fluentd_top.kubeconfig in artifact directory. Uploading as artifacts/scripts/entrypoint/artifacts/monitor_fluentd_top.kubeconfig\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/entrypoint/artifacts/monitor_fluentd_top.log in artifact directory. Uploading as artifacts/scripts/entrypoint/artifacts/monitor_fluentd_top.log\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/entrypoint/artifacts/monitor_journal_lograte.log in artifact directory. Uploading as artifacts/scripts/entrypoint/artifacts/monitor_journal_lograte.log\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/entrypoint/artifacts/multi_tenancy.sh-artifacts.txt in artifact directory. Uploading as artifacts/scripts/entrypoint/artifacts/multi_tenancy.sh-artifacts.txt\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/entrypoint/artifacts/mux.logging-fluentd-djdgk.log in artifact directory. Uploading as artifacts/scripts/entrypoint/artifacts/mux.logging-fluentd-djdgk.log\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/entrypoint/artifacts/mux.sh-artifacts.txt in artifact directory. Uploading as artifacts/scripts/entrypoint/artifacts/mux.sh-artifacts.txt\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/entrypoint/logs/containers/k8s_POD_docker-registry-1-07rs8_default_10ac5dd2-3ca4-11e8-8016-0e3510f2bc98_0.log in artifact directory. Uploading as artifacts/scripts/entrypoint/logs/containers/k8s_POD_docker-registry-1-07rs8_default_10ac5dd2-3ca4-11e8-8016-0e3510f2bc98_0.log\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/entrypoint/logs/containers/k8s_POD_logging-curator-7-p4hd0_logging_d03ae60a-3ca7-11e8-8016-0e3510f2bc98_0.log in artifact directory. Uploading as artifacts/scripts/entrypoint/logs/containers/k8s_POD_logging-curator-7-p4hd0_logging_d03ae60a-3ca7-11e8-8016-0e3510f2bc98_0.log\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/entrypoint/logs/containers/k8s_POD_logging-curator-ops-5-x38f0_logging_e85c5676-3ca7-11e8-8016-0e3510f2bc98_0.log in artifact directory. Uploading as artifacts/scripts/entrypoint/logs/containers/k8s_POD_logging-curator-ops-5-x38f0_logging_e85c5676-3ca7-11e8-8016-0e3510f2bc98_0.log\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/entrypoint/logs/containers/k8s_POD_logging-es-data-master-fj82lvf3-1-pk244_logging_01e3ba8d-3ca5-11e8-8016-0e3510f2bc98_0.log in artifact directory. Uploading as artifacts/scripts/entrypoint/logs/containers/k8s_POD_logging-es-data-master-fj82lvf3-1-pk244_logging_01e3ba8d-3ca5-11e8-8016-0e3510f2bc98_0.log\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/entrypoint/logs/containers/k8s_POD_logging-es-ops-data-master-1x1nb16q-1-7d7vm_logging_0dc588ea-3ca5-11e8-8016-0e3510f2bc98_0.log in artifact directory. Uploading as artifacts/scripts/entrypoint/logs/containers/k8s_POD_logging-es-ops-data-master-1x1nb16q-1-7d7vm_logging_0dc588ea-3ca5-11e8-8016-0e3510f2bc98_0.log\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/entrypoint/logs/containers/k8s_POD_logging-fluentd-zbz24_logging_a70a64f5-3ca9-11e8-8016-0e3510f2bc98_0.log in artifact directory. Uploading as artifacts/scripts/entrypoint/logs/containers/k8s_POD_logging-fluentd-zbz24_logging_a70a64f5-3ca9-11e8-8016-0e3510f2bc98_0.log\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/entrypoint/logs/containers/k8s_POD_logging-fluentd-zc8lj_logging_b1eeceb4-3ca9-11e8-8016-0e3510f2bc98_0.log in artifact directory. Uploading as artifacts/scripts/entrypoint/logs/containers/k8s_POD_logging-fluentd-zc8lj_logging_b1eeceb4-3ca9-11e8-8016-0e3510f2bc98_0.log\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/entrypoint/logs/containers/k8s_POD_logging-kibana-1-5n7b3_logging_19adf8e3-3ca5-11e8-8016-0e3510f2bc98_0.log in artifact directory. Uploading as artifacts/scripts/entrypoint/logs/containers/k8s_POD_logging-kibana-1-5n7b3_logging_19adf8e3-3ca5-11e8-8016-0e3510f2bc98_0.log\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/entrypoint/logs/containers/k8s_POD_logging-kibana-ops-1-xvn9m_logging_2216adf9-3ca5-11e8-8016-0e3510f2bc98_0.log in artifact directory. Uploading as artifacts/scripts/entrypoint/logs/containers/k8s_POD_logging-kibana-ops-1-xvn9m_logging_2216adf9-3ca5-11e8-8016-0e3510f2bc98_0.log\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/entrypoint/logs/containers/k8s_POD_logging-mux-1-xldt4_logging_3f1bbde1-3ca5-11e8-8016-0e3510f2bc98_0.log in artifact directory. Uploading as artifacts/scripts/entrypoint/logs/containers/k8s_POD_logging-mux-1-xldt4_logging_3f1bbde1-3ca5-11e8-8016-0e3510f2bc98_0.log\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/entrypoint/logs/containers/k8s_POD_registry-console-1-t9g5j_default_1dd94162-3ca4-11e8-8016-0e3510f2bc98_0.log in artifact directory. Uploading as artifacts/scripts/entrypoint/logs/containers/k8s_POD_registry-console-1-t9g5j_default_1dd94162-3ca4-11e8-8016-0e3510f2bc98_0.log\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/entrypoint/logs/containers/k8s_POD_router-1-7ztjv_default_f5682b41-3ca3-11e8-8016-0e3510f2bc98_0.log in artifact directory. Uploading as artifacts/scripts/entrypoint/logs/containers/k8s_POD_router-1-7ztjv_default_f5682b41-3ca3-11e8-8016-0e3510f2bc98_0.log\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/entrypoint/logs/containers/k8s_curator_logging-curator-7-p4hd0_logging_d03ae60a-3ca7-11e8-8016-0e3510f2bc98_0.log in artifact directory. Uploading as artifacts/scripts/entrypoint/logs/containers/k8s_curator_logging-curator-7-p4hd0_logging_d03ae60a-3ca7-11e8-8016-0e3510f2bc98_0.log\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/entrypoint/logs/containers/k8s_curator_logging-curator-ops-5-x38f0_logging_e85c5676-3ca7-11e8-8016-0e3510f2bc98_0.log in artifact directory. Uploading as artifacts/scripts/entrypoint/logs/containers/k8s_curator_logging-curator-ops-5-x38f0_logging_e85c5676-3ca7-11e8-8016-0e3510f2bc98_0.log\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/entrypoint/logs/containers/k8s_elasticsearch_logging-es-data-master-fj82lvf3-1-pk244_logging_01e3ba8d-3ca5-11e8-8016-0e3510f2bc98_0.log in artifact directory. Uploading as artifacts/scripts/entrypoint/logs/containers/k8s_elasticsearch_logging-es-data-master-fj82lvf3-1-pk244_logging_01e3ba8d-3ca5-11e8-8016-0e3510f2bc98_0.log\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/entrypoint/logs/containers/k8s_elasticsearch_logging-es-ops-data-master-1x1nb16q-1-7d7vm_logging_0dc588ea-3ca5-11e8-8016-0e3510f2bc98_0.log in artifact directory. Uploading as artifacts/scripts/entrypoint/logs/containers/k8s_elasticsearch_logging-es-ops-data-master-1x1nb16q-1-7d7vm_logging_0dc588ea-3ca5-11e8-8016-0e3510f2bc98_0.log\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/entrypoint/logs/containers/k8s_fluentd-elasticsearch_logging-fluentd-zbz24_logging_a70a64f5-3ca9-11e8-8016-0e3510f2bc98_0.log in artifact directory. Uploading as artifacts/scripts/entrypoint/logs/containers/k8s_fluentd-elasticsearch_logging-fluentd-zbz24_logging_a70a64f5-3ca9-11e8-8016-0e3510f2bc98_0.log\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/entrypoint/logs/containers/k8s_fluentd-elasticsearch_logging-fluentd-zc8lj_logging_b1eeceb4-3ca9-11e8-8016-0e3510f2bc98_0.log in artifact directory. Uploading as artifacts/scripts/entrypoint/logs/containers/k8s_fluentd-elasticsearch_logging-fluentd-zc8lj_logging_b1eeceb4-3ca9-11e8-8016-0e3510f2bc98_0.log\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/entrypoint/logs/containers/k8s_kibana-proxy_logging-kibana-1-5n7b3_logging_19adf8e3-3ca5-11e8-8016-0e3510f2bc98_0.log in artifact directory. Uploading as artifacts/scripts/entrypoint/logs/containers/k8s_kibana-proxy_logging-kibana-1-5n7b3_logging_19adf8e3-3ca5-11e8-8016-0e3510f2bc98_0.log\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/entrypoint/logs/containers/k8s_kibana-proxy_logging-kibana-ops-1-xvn9m_logging_2216adf9-3ca5-11e8-8016-0e3510f2bc98_0.log in artifact directory. Uploading as artifacts/scripts/entrypoint/logs/containers/k8s_kibana-proxy_logging-kibana-ops-1-xvn9m_logging_2216adf9-3ca5-11e8-8016-0e3510f2bc98_0.log\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/entrypoint/logs/containers/k8s_kibana_logging-kibana-1-5n7b3_logging_19adf8e3-3ca5-11e8-8016-0e3510f2bc98_0.log in artifact directory. Uploading as artifacts/scripts/entrypoint/logs/containers/k8s_kibana_logging-kibana-1-5n7b3_logging_19adf8e3-3ca5-11e8-8016-0e3510f2bc98_0.log\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/entrypoint/logs/containers/k8s_kibana_logging-kibana-ops-1-xvn9m_logging_2216adf9-3ca5-11e8-8016-0e3510f2bc98_0.log in artifact directory. Uploading as artifacts/scripts/entrypoint/logs/containers/k8s_kibana_logging-kibana-ops-1-xvn9m_logging_2216adf9-3ca5-11e8-8016-0e3510f2bc98_0.log\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/entrypoint/logs/containers/k8s_mux_logging-mux-1-xldt4_logging_3f1bbde1-3ca5-11e8-8016-0e3510f2bc98_0.log in artifact directory. Uploading as artifacts/scripts/entrypoint/logs/containers/k8s_mux_logging-mux-1-xldt4_logging_3f1bbde1-3ca5-11e8-8016-0e3510f2bc98_0.log\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/entrypoint/logs/containers/k8s_registry-console_registry-console-1-t9g5j_default_1dd94162-3ca4-11e8-8016-0e3510f2bc98_0.log in artifact directory. Uploading as artifacts/scripts/entrypoint/logs/containers/k8s_registry-console_registry-console-1-t9g5j_default_1dd94162-3ca4-11e8-8016-0e3510f2bc98_0.log\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/entrypoint/logs/containers/k8s_registry_docker-registry-1-07rs8_default_10ac5dd2-3ca4-11e8-8016-0e3510f2bc98_0.log in artifact directory. Uploading as artifacts/scripts/entrypoint/logs/containers/k8s_registry_docker-registry-1-07rs8_default_10ac5dd2-3ca4-11e8-8016-0e3510f2bc98_0.log\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/entrypoint/logs/containers/k8s_router_router-1-7ztjv_default_f5682b41-3ca3-11e8-8016-0e3510f2bc98_0.log in artifact directory. Uploading as artifacts/scripts/entrypoint/logs/containers/k8s_router_router-1-7ztjv_default_f5682b41-3ca3-11e8-8016-0e3510f2bc98_0.log\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/entrypoint/logs/raw_test_output.log in artifact directory. Uploading as artifacts/scripts/entrypoint/logs/raw_test_output.log\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/entrypoint/logs/scripts.log in artifact directory. Uploading as artifacts/scripts/entrypoint/logs/scripts.log\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","level":"info","msg":"Found /data/gcs/artifacts/scripts/env/logs/scripts.log in artifact directory. Uploading as artifacts/scripts/env/logs/scripts.log\n","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_router_router-1-7ztjv_default_f5682b41-3ca3-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/artifacts/mux.logging-fluentd-djdgk.log","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_POD_docker-registry-1-07rs8_default_10ac5dd2-3ca4-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/ansible_junit/xeRxYUejNl.xml","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/artifacts/logging-fluentd-3bqts.log","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/generated/pid1.journal","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/artifacts/es.indices.before","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/journals/openvswitch.service","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/artifacts/logging-curator-ops-3-xfp6n.log","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_POD_logging-es-data-master-fj82lvf3-1-pk244_logging_01e3ba8d-3ca5-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_POD_logging-fluentd-zc8lj_logging_b1eeceb4-3ca9-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_POD_logging-mux-1-xldt4_logging_3f1bbde1-3ca5-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_fluentd-elasticsearch_logging-fluentd-zbz24_logging_a70a64f5-3ca9-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/directory/test_pull_request_openshift_ansible_logging_36/latest-build.txt","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/generated/etcd.conf","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_fluentd-elasticsearch_logging-fluentd-zc8lj_logging_b1eeceb4-3ca9-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/artifacts/logging-fluentd-zbz24.log","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_elasticsearch_logging-es-ops-data-master-1x1nb16q-1-7d7vm_logging_0dc588ea-3ca5-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/finished.json","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/generated/master-metrics.log","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/ansible_junit/xVFSFPMxVQ.xml","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/journals/ovsdb-server.service","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_POD_logging-kibana-ops-1-xvn9m_logging_2216adf9-3ca5-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_curator_logging-curator-7-p4hd0_logging_d03ae60a-3ca7-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/raw_test_output.log","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/env/logs/scripts.log","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/generated/dmesg.log","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/generated/installed_packages.log","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/artifacts/logging-fluentd-gx38f.log","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_elasticsearch_logging-es-data-master-fj82lvf3-1-pk244_logging_01e3ba8d-3ca5-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/build-log.txt","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/ansible_junit/wFfEzUecXX.xml","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/artifacts/mux.sh-artifacts.txt","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/journals/origin-node.service","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/ansible_junit/FRJdhrdvnw.xml","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/artifacts/es-ops.indices.after","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_POD_logging-fluentd-zbz24_logging_a70a64f5-3ca9-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_kibana_logging-kibana-1-5n7b3_logging_19adf8e3-3ca5-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/generated/containers.log","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/journals/docker.service","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_POD_logging-curator-7-p4hd0_logging_d03ae60a-3ca7-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/generated/docker.info","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/generated/filesystem.info","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/journals/origin-master-controllers.service","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/journals/origin-master.service","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/artifacts/fluentd-forward.sh-artifacts.txt","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/artifacts/monitor_fluentd_pos.log","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/directory/test_pull_request_openshift_ansible_logging_36/29.txt","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/generated/avc_denials.log","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_mux_logging-mux-1-xldt4_logging_3f1bbde1-3ca5-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_registry-console_registry-console-1-t9g5j_default_1dd94162-3ca4-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/scripts.log","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_POD_logging-curator-ops-5-x38f0_logging_e85c5676-3ca7-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_POD_logging-es-ops-data-master-1x1nb16q-1-7d7vm_logging_0dc588ea-3ca5-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/journals/systemd-journald.service","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/artifacts/access_control.sh-artifacts.txt","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/artifacts/events.txt","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/latest-build.txt","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/journals/ovs-vswitchd.service","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/artifacts/monitor_journal_lograte.log","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/artifacts/multi_tenancy.sh-artifacts.txt","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_kibana_logging-kibana-ops-1-xvn9m_logging_2216adf9-3ca5-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_registry_docker-registry-1-07rs8_default_10ac5dd2-3ca4-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/generated/node-metrics.log","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/artifacts/monitor_fluentd_top.log","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/artifacts/logging-fluentd-orig.yaml","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_POD_registry-console-1-t9g5j_default_1dd94162-3ca4-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_kibana-proxy_logging-kibana-ops-1-xvn9m_logging_2216adf9-3ca5-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/generated/docker.config","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/artifacts/es.indices.after","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_POD_logging-kibana-1-5n7b3_logging_19adf8e3-3ca5-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/journals/etcd.service","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/journals/origin-master-api.service","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/artifacts/es-ops.indices.before","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/artifacts/logging-mux-1-xldt4.log","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/artifacts/monitor_fluentd_top.kubeconfig","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_POD_router-1-7ztjv_default_f5682b41-3ca3-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_curator_logging-curator-ops-5-x38f0_logging_e85c5676-3ca7-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_kibana-proxy_logging-kibana-1-5n7b3_logging_19adf8e3-3ca5-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/journals/dnsmasq.service","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/build-images/logs/scripts.log","level":"info","msg":"Queued for upload","time":"2018-04-10T10:27:46Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/artifacts/es.indices.before","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_POD_logging-curator-7-p4hd0_logging_d03ae60a-3ca7-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/artifacts/mux.logging-fluentd-djdgk.log","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/ansible_junit/xeRxYUejNl.xml","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/artifacts/es.indices.after","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/journals/openvswitch.service","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/artifacts/logging-fluentd-3bqts.log","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/artifacts/monitor_fluentd_top.kubeconfig","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_mux_logging-mux-1-xldt4_logging_3f1bbde1-3ca5-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/generated/docker.config","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/ansible_junit/wFfEzUecXX.xml","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/journals/dnsmasq.service","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_POD_logging-es-ops-data-master-1x1nb16q-1-7d7vm_logging_0dc588ea-3ca5-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/artifacts/logging-curator-ops-3-xfp6n.log","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/artifacts/mux.sh-artifacts.txt","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/ansible_junit/FRJdhrdvnw.xml","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_POD_logging-kibana-1-5n7b3_logging_19adf8e3-3ca5-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/artifacts/logging-mux-1-xldt4.log","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_registry_docker-registry-1-07rs8_default_10ac5dd2-3ca4-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/generated/pid1.journal","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/artifacts/es-ops.indices.before","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/artifacts/events.txt","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/artifacts/monitor_fluentd_top.log","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_POD_logging-fluentd-zc8lj_logging_b1eeceb4-3ca9-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/journals/ovs-vswitchd.service","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_router_router-1-7ztjv_default_f5682b41-3ca3-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/journals/etcd.service","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/generated/filesystem.info","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_elasticsearch_logging-es-ops-data-master-1x1nb16q-1-7d7vm_logging_0dc588ea-3ca5-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_POD_logging-curator-ops-5-x38f0_logging_e85c5676-3ca7-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/artifacts/monitor_fluentd_pos.log","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/artifacts/access_control.sh-artifacts.txt","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/raw_test_output.log","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/build-images/logs/scripts.log","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/artifacts/monitor_journal_lograte.log","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/ansible_junit/xVFSFPMxVQ.xml","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_fluentd-elasticsearch_logging-fluentd-zc8lj_logging_b1eeceb4-3ca9-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/artifacts/logging-fluentd-zbz24.log","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_kibana-proxy_logging-kibana-1-5n7b3_logging_19adf8e3-3ca5-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_curator_logging-curator-7-p4hd0_logging_d03ae60a-3ca7-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_POD_logging-es-data-master-fj82lvf3-1-pk244_logging_01e3ba8d-3ca5-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_kibana-proxy_logging-kibana-ops-1-xvn9m_logging_2216adf9-3ca5-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/generated/docker.info","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/scripts.log","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_POD_logging-fluentd-zbz24_logging_a70a64f5-3ca9-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/journals/systemd-journald.service","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/generated/etcd.conf","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/directory/test_pull_request_openshift_ansible_logging_36/29.txt","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/journals/origin-master-controllers.service","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/journals/origin-master-api.service","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/generated/containers.log","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/generated/master-metrics.log","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/artifacts/logging-fluentd-gx38f.log","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/artifacts/fluentd-forward.sh-artifacts.txt","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/artifacts/logging-fluentd-orig.yaml","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/artifacts/multi_tenancy.sh-artifacts.txt","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_POD_logging-kibana-ops-1-xvn9m_logging_2216adf9-3ca5-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/finished.json","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_registry-console_registry-console-1-t9g5j_default_1dd94162-3ca4-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/directory/test_pull_request_openshift_ansible_logging_36/latest-build.txt","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_elasticsearch_logging-es-data-master-fj82lvf3-1-pk244_logging_01e3ba8d-3ca5-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/generated/dmesg.log","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/generated/avc_denials.log","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_POD_router-1-7ztjv_default_f5682b41-3ca3-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_kibana_logging-kibana-ops-1-xvn9m_logging_2216adf9-3ca5-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/env/logs/scripts.log","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_POD_docker-registry-1-07rs8_default_10ac5dd2-3ca4-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/generated/installed_packages.log","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/generated/node-metrics.log","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/journals/ovsdb-server.service","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/latest-build.txt","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_fluentd-elasticsearch_logging-fluentd-zbz24_logging_a70a64f5-3ca9-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_curator_logging-curator-ops-5-x38f0_logging_e85c5676-3ca7-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_POD_logging-mux-1-xldt4_logging_3f1bbde1-3ca5-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/artifacts/es-ops.indices.after","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_POD_registry-console-1-t9g5j_default_1dd94162-3ca4-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/journals/docker.service","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/scripts/entrypoint/logs/containers/k8s_kibana_logging-kibana-1-5n7b3_logging_19adf8e3-3ca5-11e8-8016-0e3510f2bc98_0.log","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/build-log.txt","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:47Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/journals/origin-master.service","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:48Z"}
{"component":"gcsupload","dest":"pr-logs/pull/openshift_openshift-ansible/7767/test_pull_request_openshift_ansible_logging_36/29/artifacts/journals/origin-node.service","level":"info","msg":"Finished upload","time":"2018-04-10T10:27:49Z"}
{"component":"gcsupload","level":"info","msg":"Finished upload to GCS","time":"2018-04-10T10:27:49Z"}
+ exit 0
+ set +o xtrace
########## FINISHED STAGE: SUCCESS: PUSH THE ARTIFACTS AND METADATA [00h 00m 11s] ##########
[workspace] $ /bin/bash /tmp/jenkins5192807829070111439.sh
########## STARTING STAGE: DEPROVISION CLOUD RESOURCES ##########
+ [[ -s /var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_36/workspace/activate ]]
+ source /var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_36/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66
++ export PATH=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_36/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_36/workspace/.config
+ oct deprovision

PLAYBOOK: main.yml *************************************************************
4 plays in /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml

PLAY [ensure we have the parameters necessary to deprovision virtual hosts] ****

TASK [ensure all required variables are set] ***********************************
task path: /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml:9
skipping: [localhost] => (item=origin_ci_inventory_dir)  => {
    "changed": false, 
    "generated_timestamp": "2018-04-10 06:27:50.632579", 
    "item": "origin_ci_inventory_dir", 
    "skip_reason": "Conditional check failed", 
    "skipped": true
}
skipping: [localhost] => (item=origin_ci_aws_region)  => {
    "changed": false, 
    "generated_timestamp": "2018-04-10 06:27:50.635455", 
    "item": "origin_ci_aws_region", 
    "skip_reason": "Conditional check failed", 
    "skipped": true
}

PLAY [deprovision virtual hosts in EC2] ****************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [deprovision a virtual EC2 host] ******************************************
task path: /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml:28
included: /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml for localhost

TASK [update the SSH configuration to remove AWS EC2 specifics] ****************
task path: /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:2
ok: [localhost] => {
    "changed": false, 
    "generated_timestamp": "2018-04-10 06:27:51.415868", 
    "msg": ""
}

TASK [rename EC2 instance for termination reaper] ******************************
task path: /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:8
changed: [localhost] => {
    "changed": true, 
    "generated_timestamp": "2018-04-10 06:27:51.979063", 
    "msg": "Tags {'Name': 'oct-terminate'} created for resource i-0953b352b42f5e1e9."
}

TASK [tear down the EC2 instance] **********************************************
task path: /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:15
changed: [localhost] => {
    "changed": true, 
    "generated_timestamp": "2018-04-10 06:27:52.872184", 
    "instance_ids": [
        "i-0953b352b42f5e1e9"
    ], 
    "instances": [
        {
            "ami_launch_index": "0", 
            "architecture": "x86_64", 
            "block_device_mapping": {
                "/dev/sda1": {
                    "delete_on_termination": true, 
                    "status": "attached", 
                    "volume_id": "vol-00af55b01cfc285d0"
                }, 
                "/dev/sdb": {
                    "delete_on_termination": true, 
                    "status": "attached", 
                    "volume_id": "vol-0cab5166b14051b14"
                }
            }, 
            "dns_name": "ec2-35-174-8-68.compute-1.amazonaws.com", 
            "ebs_optimized": false, 
            "groups": {
                "sg-7e73221a": "default"
            }, 
            "hypervisor": "xen", 
            "id": "i-0953b352b42f5e1e9", 
            "image_id": "ami-069c0ca6cc091e8fa", 
            "instance_type": "m4.xlarge", 
            "kernel": null, 
            "key_name": "libra", 
            "launch_time": "2018-04-10T08:29:11.000Z", 
            "placement": "us-east-1d", 
            "private_dns_name": "ip-172-18-6-56.ec2.internal", 
            "private_ip": "172.18.6.56", 
            "public_dns_name": "ec2-35-174-8-68.compute-1.amazonaws.com", 
            "public_ip": "35.174.8.68", 
            "ramdisk": null, 
            "region": "us-east-1", 
            "root_device_name": "/dev/sda1", 
            "root_device_type": "ebs", 
            "state": "running", 
            "state_code": 16, 
            "tags": {
                "Name": "oct-terminate", 
                "openshift_etcd": "", 
                "openshift_master": "", 
                "openshift_node": ""
            }, 
            "tenancy": "default", 
            "virtualization_type": "hvm"
        }
    ], 
    "tagged_instances": []
}

TASK [remove the serialized host variables] ************************************
task path: /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:22
changed: [localhost] => {
    "changed": true, 
    "generated_timestamp": "2018-04-10 06:27:53.112651", 
    "path": "/var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_36/workspace/.config/origin-ci-tool/inventory/host_vars/172.18.6.56.yml", 
    "state": "absent"
}

PLAY [deprovision virtual hosts locally managed by Vagrant] *********************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

PLAY [clean up local configuration for deprovisioned instances] ****************

TASK [remove inventory configuration directory] ********************************
task path: /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml:61
changed: [localhost] => {
    "changed": true, 
    "generated_timestamp": "2018-04-10 06:27:53.552401", 
    "path": "/var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_36/workspace/.config/origin-ci-tool/inventory", 
    "state": "absent"
}

PLAY RECAP *********************************************************************
localhost                  : ok=8    changed=4    unreachable=0    failed=0   

+ set +o xtrace
########## FINISHED STAGE: SUCCESS: DEPROVISION CLOUD RESOURCES [00h 00m 04s] ##########
Archiving artifacts
Recording test results
[WS-CLEANUP] Deleting project workspace...[WS-CLEANUP] done
Finished: SUCCESS