Console Output

[2018-04-06 22:40:50.641608006+00:00] umount: /var/lib/docker/containers/4cedf07628c6d907331bb4d80c0b046427d36c513d3c7d55bcb8d56a2e6b2e2a/shm: not mounted
[2018-04-06 22:40:50.641608006+00:00] umount: /var/lib/docker/containers/4d268392cc53d05dc2d925888d0c7b55bdb316704a779e2402e325a2e9d81c75/shm: not mounted
[2018-04-06 22:40:50.641608006+00:00] umount: /var/lib/docker/containers/5d9ee55361fd162765b88351869bd6378dd8b51721a4794090e852374f68e506/shm: not mounted
[2018-04-06 22:40:50.641608006+00:00] umount: /var/lib/docker/containers/823f044923420e0af3c24dbd330f3e3aa48138d27e048e63f50e12b1503a857f/shm: not mounted
[2018-04-06 22:40:50.641608006+00:00] umount: /var/lib/docker/containers/855a635eef2eb8758b90f5eb63560907c17955779d055b702952d973c49e5360/shm: not mounted
[2018-04-06 22:40:50.641608006+00:00] umount: /var/lib/docker/containers/8ce7d7a2f53db163759f2a7e50abf04346845a587f91ee241257fcc1b086327b/shm: not mounted
[2018-04-06 22:40:50.641608006+00:00] umount: /var/lib/docker/containers/902c80652ab42b0ee4628088b0c43d196a2343778a396b7ef2d915d59a96de2b/shm: not mounted
[2018-04-06 22:40:50.641608006+00:00] umount: /var/lib/docker/containers/aece228fbf0b545fd6253cf5b2f35037b2fcc0427061931862f2f135b3f57070/shm: not mounted
[2018-04-06 22:40:50.641608006+00:00] umount: /var/lib/docker/containers/d2cdf1ea99c14a7d82dadc7e1ce71442dc0746d1f64e9c518752c12fd6c15745/shm: not mounted
[2018-04-06 22:40:50.641608006+00:00] umount: /var/lib/docker/containers/d7a9f52041d0ca47cff3096305000723a424ac574b2e930d2b68e0052308d760/shm: not mounted
[2018-04-06 22:40:50.641608006+00:00] umount: /var/lib/docker/containers/db751bdcb1597d5baeff6ded05d71146d6a567f14a287a1bb4ade470dffbf923/shm: not mounted
[2018-04-06 22:40:50.641608006+00:00] umount: /var/lib/docker/containers/ec11aac5d3eb19010a7a5a4235174e2ca363e1d7aee6165f354cfb5309eac33c/shm: not mounted
[2018-04-06 22:40:50.641608006+00:00] umount: /var/lib/docker/containers/eff59745c5191e3a6969d279aefeb07112dfefd05fb5b93f203d1ff2cd457347/shm: not mounted
[2018-04-06 22:40:50.641608006+00:00] umount: /var/lib/docker/containers/f0198dc399a2d42b678d8f6d6b61fe45ef2020e470a1af30e38d8a252ff7cc12/shm: not mounted
[2018-04-06 22:40:50.641608006+00:00] umount: /var/lib/docker/containers/f15e71b9ad6f6a62dccb7ff2bab28a326e5463d749adba2306fa93796442dc70/shm: not mounted
[2018-04-06 22:40:50.641608006+00:00] umount: /var/lib/docker/containers/f8cf1ce05a639164c6a109368bbe0b2986d9a9a854b430183490cfd866b94c54/shm: not mounted
[2018-04-06 22:40:50.641608006+00:00] 2018-04-06 22:39:26 +0000 [info]: reading config file path="/etc/fluent/fluent.conf"
[2018-04-06 22:40:50.641608006+00:00] 2018-04-06 22:39:27 +0000 [warn]: 'block' action stops input process until the buffer full is resolved. Check your pipeline this action is fit or not
[2018-04-06 22:40:50.641608006+00:00] 2018-04-06 22:39:27 +0000 [warn]: 'block' action stops input process until the buffer full is resolved. Check your pipeline this action is fit or not
[2018-04-06 22:40:50.641608006+00:00] 2018-04-06 22:39:27 +0000 [warn]: 'insecure' mode has vulnerability for man-in-the-middle attacks for clients (output plugins).
[INFO] Logging test suite test-fluentd-forward succeeded at Fri Apr  6 22:40:58 UTC 2018
Running hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success...
SUCCESS after 0.302s: hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success
Running hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success...
SUCCESS after 0.347s: hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success
[INFO] Logging test suite test-json-parsing started at Fri Apr  6 22:40:58 UTC 2018
[INFO] Starting json-parsing test at Fri Apr 6 22:40:59 UTC 2018
No resources found.
No resources found.
Running hack/testing/util.sh:280: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 3.341s: hack/testing/util.sh:280: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s
  WARNING: You're not using the default seccomp profile
Running hack/testing/util.sh:211: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.logging.*/_count -X POST -d '{"query":{"match_phrase":{"message":"GET /967edaaa419c4d7887f8ab0ee644805f 404 "}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 3.270s: hack/testing/util.sh:211: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.logging.*/_count -X POST -d '{"query":{"match_phrase":{"message":"GET /967edaaa419c4d7887f8ab0ee644805f 404 "}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s
Running hack/testing/util.sh:231: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"26e7400cbaf7470c86c6456032e3e7b9"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 0.508s: hack/testing/util.sh:231: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"26e7400cbaf7470c86c6456032e3e7b9"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s
No resources found.
[INFO] Testing if record is in correct format . . .
Running test/json-parsing.sh:48: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.logging.*/_search?q=message:967edaaa419c4d7887f8ab0ee644805f |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-json-parsing.py 967edaaa419c4d7887f8ab0ee644805f' expecting success...
SUCCESS after 0.499s: test/json-parsing.sh:48: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.logging.*/_search?q=message:967edaaa419c4d7887f8ab0ee644805f |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-json-parsing.py 967edaaa419c4d7887f8ab0ee644805f' expecting success
[INFO] json-parsing test finished at Fri Apr 6 22:41:09 UTC 2018
[INFO] Logging test suite test-json-parsing succeeded at Fri Apr  6 22:41:09 UTC 2018
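Each of the count checks above pipes curl_es output through get_count_from_json to pull the hit count out of an Elasticsearch _count response. A minimal sketch of such a helper, assuming the standard {"count":N,...} response shape (the helper actually defined in hack/testing/util.sh may differ), with <test-uuid> standing in for the random marker the harness generates:

    get_count_from_json() {
        # Read an Elasticsearch _count response such as {"count":1,"_shards":{...}}
        # from stdin and print just the numeric count.
        python -c 'import json, sys; print(json.load(sys.stdin).get("count", 0))'
    }

    # Example: count records whose message contains the test marker, as the checks above do.
    curl_es logging-es-data-master-decexn9b-1-f7sxf /project.logging.*/_count \
        -X POST -d '{"query":{"match_phrase":{"message":"GET /<test-uuid> 404 "}}}' \
        | get_count_from_json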
Running hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success...
SUCCESS after 0.302s: hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success
Running hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success...
SUCCESS after 0.265s: hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success
[INFO] Logging test suite test-kibana-dashboards started at Fri Apr  6 22:41:10 UTC 2018
Running test/kibana_dashboards.sh:50: executing 'oc exec -c elasticsearch logging-es-data-master-decexn9b-1-f7sxf -- es_load_kibana_ui_objects' expecting failure and text 'Usage:'...
SUCCESS after 0.444s: test/kibana_dashboards.sh:50: executing 'oc exec -c elasticsearch logging-es-data-master-decexn9b-1-f7sxf -- es_load_kibana_ui_objects' expecting failure and text 'Usage:'
Running test/kibana_dashboards.sh:51: executing 'oc exec -c elasticsearch logging-es-data-master-decexn9b-1-f7sxf -- es_load_kibana_ui_objects no-such-user' expecting failure and text 'Could not find kibana index'...
SUCCESS after 0.496s: test/kibana_dashboards.sh:51: executing 'oc exec -c elasticsearch logging-es-data-master-decexn9b-1-f7sxf -- es_load_kibana_ui_objects no-such-user' expecting failure and text 'Could not find kibana index'
Running test/kibana_dashboards.sh:50: executing 'oc exec -c elasticsearch logging-es-ops-data-master-36f1k6jr-1-5hxrn -- es_load_kibana_ui_objects' expecting failure and text 'Usage:'...
SUCCESS after 0.350s: test/kibana_dashboards.sh:50: executing 'oc exec -c elasticsearch logging-es-ops-data-master-36f1k6jr-1-5hxrn -- es_load_kibana_ui_objects' expecting failure and text 'Usage:'
Running test/kibana_dashboards.sh:51: executing 'oc exec -c elasticsearch logging-es-ops-data-master-36f1k6jr-1-5hxrn -- es_load_kibana_ui_objects no-such-user' expecting failure and text 'Could not find kibana index'...
SUCCESS after 0.606s: test/kibana_dashboards.sh:51: executing 'oc exec -c elasticsearch logging-es-ops-data-master-36f1k6jr-1-5hxrn -- es_load_kibana_ui_objects no-such-user' expecting failure and text 'Could not find kibana index'
Running test/kibana_dashboards.sh:60: executing 'oc exec -c elasticsearch logging-es-data-master-decexn9b-1-f7sxf -- es_load_kibana_ui_objects admin' expecting success and text 'Success'...
SUCCESS after 0.888s: test/kibana_dashboards.sh:60: executing 'oc exec -c elasticsearch logging-es-data-master-decexn9b-1-f7sxf -- es_load_kibana_ui_objects admin' expecting success and text 'Success'
Running test/kibana_dashboards.sh:60: executing 'oc exec -c elasticsearch logging-es-ops-data-master-36f1k6jr-1-5hxrn -- es_load_kibana_ui_objects admin' expecting success and text 'Success'...
SUCCESS after 0.934s: test/kibana_dashboards.sh:60: executing 'oc exec -c elasticsearch logging-es-ops-data-master-36f1k6jr-1-5hxrn -- es_load_kibana_ui_objects admin' expecting success and text 'Success'
[INFO] Finished with test - login to kibana and kibana-ops to verify the admin user can load and view the dashboards with no errors
[INFO] Logging test suite test-kibana-dashboards succeeded at Fri Apr  6 22:41:18 UTC 2018
Running hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success...
SUCCESS after 0.328s: hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success
Running hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success...
SUCCESS after 0.272s: hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success
[INFO] Logging test suite test-multi-tenancy started at Fri Apr  6 22:41:19 UTC 2018
No resources found.
No resources found.
[INFO] Creating project multi-tenancy-1
[INFO] Creating test index and entry for multi-tenancy-1
[INFO] Creating project multi-tenancy-2
[INFO] Creating test index and entry for multi-tenancy-2
[INFO] Creating project multi-tenancy-3
[INFO] Creating test index and entry for multi-tenancy-3
[INFO] Creating user loguser with password loguser
[INFO] Assigning user to projects multi-tenancy-1 multi-tenancy-2
[INFO] Creating user loguser2 with password loguser2
[INFO] Assigning user to projects multi-tenancy-2 multi-tenancy-3
Running test/multi_tenancy.sh:167: executing 'hack_msearch_access' expecting failure and text 'Usage:'...
SUCCESS after 0.124s: test/multi_tenancy.sh:167: executing 'hack_msearch_access' expecting failure and text 'Usage:'
Running test/multi_tenancy.sh:168: executing 'hack_msearch_access no-such-user no-such-project' expecting failure and text 'user no-such-user not found'...
SUCCESS after 0.379s: test/multi_tenancy.sh:168: executing 'hack_msearch_access no-such-user no-such-project' expecting failure and text 'user no-such-user not found'
Running test/multi_tenancy.sh:169: executing 'hack_msearch_access loguser no-such-project' expecting failure and text 'project no-such-project not found'...
SUCCESS after 0.510s: test/multi_tenancy.sh:169: executing 'hack_msearch_access loguser no-such-project' expecting failure and text 'project no-such-project not found'
Running test/multi_tenancy.sh:170: executing 'hack_msearch_access loguser default' expecting failure and text 'loguser does not have access to view logs in project default'...
SUCCESS after 0.743s: test/multi_tenancy.sh:170: executing 'hack_msearch_access loguser default' expecting failure and text 'loguser does not have access to view logs in project default'
Running test/multi_tenancy.sh:172: executing 'hack_msearch_access loguser multi-tenancy-1 multi-tenancy-2' expecting success...
SUCCESS after 9.073s: test/multi_tenancy.sh:172: executing 'hack_msearch_access loguser multi-tenancy-1 multi-tenancy-2' expecting success
Running test/multi_tenancy.sh:174: executing 'hack_msearch_access loguser2 --all' expecting success...
SUCCESS after 4.621s: test/multi_tenancy.sh:174: executing 'hack_msearch_access loguser2 --all' expecting success
[INFO] See if user loguser can read /project.multi-tenancy-1.*
Running test/multi_tenancy.sh:94: executing 'test 1 = 1' expecting success...
SUCCESS after 0.007s: test/multi_tenancy.sh:94: executing 'test 1 = 1' expecting success
[INFO] See if user loguser can read /project.multi-tenancy-2.*
Running test/multi_tenancy.sh:94: executing 'test 1 = 1' expecting success...
SUCCESS after 0.006s: test/multi_tenancy.sh:94: executing 'test 1 = 1' expecting success
[INFO] See if user loguser can _msearch ["project.multi-tenancy-1.*","project.multi-tenancy-2.*"]
Running test/multi_tenancy.sh:110: executing 'test 2 = 2' expecting success...
SUCCESS after 0.007s: test/multi_tenancy.sh:110: executing 'test 2 = 2' expecting success
[INFO] See if user loguser is denied /project.default.*
Running test/multi_tenancy.sh:125: executing 'test 0 = 0' expecting success...
SUCCESS after 0.007s: test/multi_tenancy.sh:125: executing 'test 0 = 0' expecting success
[INFO] See if user loguser is denied /.operations.*
Running test/multi_tenancy.sh:136: executing 'test 0 = 0' expecting success...
SUCCESS after 0.029s: test/multi_tenancy.sh:136: executing 'test 0 = 0' expecting success
[INFO] See if user loguser2 can read /project.multi-tenancy-2.*
Running test/multi_tenancy.sh:94: executing 'test 1 = 1' expecting success...
SUCCESS after 0.008s: test/multi_tenancy.sh:94: executing 'test 1 = 1' expecting success
[INFO] See if user loguser2 can read /project.multi-tenancy-3.*
Running test/multi_tenancy.sh:94: executing 'test 1 = 1' expecting success...
SUCCESS after 0.008s: test/multi_tenancy.sh:94: executing 'test 1 = 1' expecting success
[INFO] See if user loguser2 can _msearch ["project.multi-tenancy-2.*","project.multi-tenancy-3.*"]
Running test/multi_tenancy.sh:110: executing 'test 2 = 2' expecting success...
SUCCESS after 0.010s: test/multi_tenancy.sh:110: executing 'test 2 = 2' expecting success
[INFO] See if user loguser2 is denied /project.default.*
Running test/multi_tenancy.sh:125: executing 'test 0 = 0' expecting success...
SUCCESS after 0.008s: test/multi_tenancy.sh:125: executing 'test 0 = 0' expecting success
[INFO] See if user loguser2 is denied /.operations.*
Running test/multi_tenancy.sh:136: executing 'test 0 = 0' expecting success...
SUCCESS after 0.007s: test/multi_tenancy.sh:136: executing 'test 0 = 0' expecting success
[INFO] Logging test suite test-multi-tenancy succeeded at Fri Apr  6 22:42:43 UTC 2018
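The hack_msearch_access and per-user checks above exercise Elasticsearch's _msearch endpoint restricted to the indices of the projects each user was granted. A rough manual equivalent, assuming the logging Elasticsearch instance is reachable through an external route (called $es_route here, an assumption not shown in this log):

    oc login -u loguser -p loguser          # user created earlier in this suite
    TOKEN=$(oc whoami -t)                   # OAuth token for the logged-in user
    # One _msearch request over the two project indices the user can read:
    curl -sk -H "Authorization: Bearer ${TOKEN}" \
         -H "Content-Type: application/x-ndjson" \
         "https://${es_route}/_msearch" --data-binary \
         $'{"index":["project.multi-tenancy-1.*","project.multi-tenancy-2.*"]}\n{"query":{"match_all":{}},"size":0}\n'

The same request against project.default.* or .operations.* should be denied, which is what the "is denied" checks above assert.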
Running hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success...
SUCCESS after 0.517s: hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success
Running hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success...
SUCCESS after 0.573s: hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success
[INFO] Logging test suite test-mux-client-mode started at Fri Apr  6 22:42:44 UTC 2018
[INFO] configure fluentd to use MUX_CLIENT_MODE=minimal - verify logs get through
Running test/mux-client-mode.sh:65: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s...
SUCCESS after 0.278s: test/mux-client-mode.sh:65: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s
Running test/mux-client-mode.sh:68: executing 'flush_fluentd_pos_files' expecting success...
SUCCESS after 0.061s: test/mux-client-mode.sh:68: executing 'flush_fluentd_pos_files' expecting success
Running test/mux-client-mode.sh:70: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
command terminated with exit code 137
SUCCESS after 11.218s: test/mux-client-mode.sh:70: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
No resources found.
No resources found.
Running hack/testing/util.sh:280: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.884s: hack/testing/util.sh:280: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s
  WARNING: You're not using the default seccomp profile
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
Running hack/testing/util.sh:211: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.logging.*/_count -X POST -d '{"query":{"match_phrase":{"message":"GET /77f3dd3d1d1f4daab96a1707042e9f81 404 "}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 9.099s: hack/testing/util.sh:211: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.logging.*/_count -X POST -d '{"query":{"match_phrase":{"message":"GET /77f3dd3d1d1f4daab96a1707042e9f81 404 "}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s
Running hack/testing/util.sh:231: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"0cd9c87f5d90499dbc7752ded4df4f26"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 2.963s: hack/testing/util.sh:231: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"0cd9c87f5d90499dbc7752ded4df4f26"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s
[INFO] configure fluentd to use MUX_CLIENT_MODE=maximal - verify logs get through
Running test/mux-client-mode.sh:77: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s...
SUCCESS after 0.226s: test/mux-client-mode.sh:77: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s
Running test/mux-client-mode.sh:80: executing 'flush_fluentd_pos_files' expecting success...
SUCCESS after 0.057s: test/mux-client-mode.sh:80: executing 'flush_fluentd_pos_files' expecting success
Running test/mux-client-mode.sh:82: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
command terminated with exit code 137
SUCCESS after 8.575s: test/mux-client-mode.sh:82: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
No resources found.
No resources found.
Running hack/testing/util.sh:280: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.021s: hack/testing/util.sh:280: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s
  WARNING: You're not using the default seccomp profile
Running hack/testing/util.sh:211: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.logging.*/_count -X POST -d '{"query":{"match_phrase":{"message":"GET /5c3ba3ba51334593af5a6378d3251436 404 "}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 14.035s: hack/testing/util.sh:211: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.logging.*/_count -X POST -d '{"query":{"match_phrase":{"message":"GET /5c3ba3ba51334593af5a6378d3251436 404 "}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s
Running hack/testing/util.sh:231: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"d60cf47a283b46f99367eeeda20c3d30"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 1.274s: hack/testing/util.sh:231: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"d60cf47a283b46f99367eeeda20c3d30"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s
Running test/mux-client-mode.sh:42: executing 'flush_fluentd_pos_files' expecting success...
SUCCESS after 0.121s: test/mux-client-mode.sh:42: executing 'flush_fluentd_pos_files' expecting success
error: 'logging-infra-fluentd' already has a value (true), and --overwrite is false
Running test/mux-client-mode.sh:44: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
command terminated with exit code 137
SUCCESS after 6.892s: test/mux-client-mode.sh:44: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
[INFO] Logging test suite test-mux-client-mode succeeded at Fri Apr  6 22:43:52 UTC 2018
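The suite above switches fluentd between MUX_CLIENT_MODE=minimal and MUX_CLIENT_MODE=maximal and re-verifies that logs still reach both Elasticsearch instances. A hedged sketch of the kind of steps involved (the actual helpers live in test/mux-client-mode.sh and hack/testing/util.sh and may differ):

    # Set the mode on the fluentd daemonset, then let the pods restart.
    oc set env daemonset/logging-fluentd MUX_CLIENT_MODE=maximal   # or: minimal

    # The harness relabels nodes to bounce fluentd; the "'logging-infra-fluentd'
    # already has a value (true)" error above comes from re-applying the label
    # without --overwrite.
    oc label node --all logging-infra-fluentd=true --overwrite

    # Wait for the new fluentd pod to reach Running, as the SUCCESS lines above do.
    oc get pods -l component=fluentd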
Running hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success...
SUCCESS after 0.438s: hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success
Running hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success...
SUCCESS after 0.319s: hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success
[INFO] Logging test suite test-mux started at Fri Apr  6 22:43:53 UTC 2018
No resources found.
No resources found.
[INFO] Starting mux test at Fri Apr 6 22:43:55 UTC 2018
No resources found.
No resources found.
Running hack/testing/util.sh:280: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.020s: hack/testing/util.sh:280: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s
  WARNING: You're not using the default seccomp profile
Running hack/testing/util.sh:211: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.logging.*/_count -X POST -d '{"query":{"match_phrase":{"message":"GET /c736e0fb35d24240b45d98e0d96c50cb 404 "}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 2.020s: hack/testing/util.sh:211: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.logging.*/_count -X POST -d '{"query":{"match_phrase":{"message":"GET /c736e0fb35d24240b45d98e0d96c50cb 404 "}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s
Running hack/testing/util.sh:231: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"5df119a1ddca40c7a55b276e4d961484"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 1.294s: hack/testing/util.sh:231: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"5df119a1ddca40c7a55b276e4d961484"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s
[INFO] ------- Test case 1 -------
[INFO] fluentd forwards kibana and system logs with tag project.testproj.external and CONTAINER values.
Running test/mux.sh:52: executing 'oc get pod logging-fluentd-6wbq2' expecting failure; re-trying every 0.2s until completion or 120.000s...
command terminated with exit code 137
No resources found.
SUCCESS after 4.149s: test/mux.sh:52: executing 'oc get pod logging-fluentd-6wbq2' expecting failure; re-trying every 0.2s until completion or 120.000s
No resources found.
Running test/mux.sh:190: executing 'flush_fluentd_pos_files' expecting success...
SUCCESS after 0.057s: test/mux.sh:190: executing 'flush_fluentd_pos_files' expecting success
Running test/mux.sh:193: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 3.075s: test/mux.sh:193: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
Running hack/testing/util.sh:280: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 1.753s: hack/testing/util.sh:280: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s
  WARNING: You're not using the default seccomp profile
Running test/mux.sh:263: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.testproj.*/_count -XPOST -d '{"query":{"bool":{"filter":{"match_phrase":{"message":"GET /c51ebe6761bd4aa8be2cda3492bd058e 404 "}},"must_not":[{"exists":{"field":"docker"}},{"exists":{"field":"kubernetes"}},{"exists":{"field":"CONTAINER_NAME"}},{"exists":{"field":"CONTAINER_ID_FULL"}},{"exists":{"field":"mux_namespace_name"}},{"exists":{"field":"mux_need_k8s_meta"}},{"exists":{"field":"namespace_name"}},{"exists":{"field":"namespace_uuid"}}]}}}' | get_count_from_json' expecting any result and text '^1$'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 16.164s: test/mux.sh:263: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.testproj.*/_count -XPOST -d '{"query":{"bool":{"filter":{"match_phrase":{"message":"GET /c51ebe6761bd4aa8be2cda3492bd058e 404 "}},"must_not":[{"exists":{"field":"docker"}},{"exists":{"field":"kubernetes"}},{"exists":{"field":"CONTAINER_NAME"}},{"exists":{"field":"CONTAINER_ID_FULL"}},{"exists":{"field":"mux_namespace_name"}},{"exists":{"field":"mux_need_k8s_meta"}},{"exists":{"field":"namespace_name"}},{"exists":{"field":"namespace_uuid"}}]}}}' | get_count_from_json' expecting any result and text '^1$'; re-trying every 0.2s until completion or 600.000s
Running test/mux.sh:304: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.testproj.*/_count?q=SYSLOG_IDENTIFIER:3b56c41aba0345569ccaa3c4270ef61c | get_count_from_json' expecting any result and text '^1$'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 4.644s: test/mux.sh:304: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.testproj.*/_count?q=SYSLOG_IDENTIFIER:3b56c41aba0345569ccaa3c4270ef61c | get_count_from_json' expecting any result and text '^1$'; re-trying every 0.2s until completion or 600.000s
Running test/mux.sh:314: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /_cat/indices' expecting success and not text 'project\.default'...
SUCCESS after 0.889s: test/mux.sh:314: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /_cat/indices' expecting success and not text 'project\.default'
Running test/mux.sh:315: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /_cat/indices' expecting success and not text 'project\.default'...
SUCCESS after 0.790s: test/mux.sh:315: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /_cat/indices' expecting success and not text 'project\.default'
[INFO] ------- Test case 2 -------
[INFO] fluentd forwards kibana and system logs with tag project.testproj.external without CONTAINER values.
Running test/mux.sh:52: executing 'oc get pod logging-fluentd-lhb59' expecting failure; re-trying every 0.2s until completion or 120.000s...
command terminated with exit code 137
SUCCESS after 13.798s: test/mux.sh:52: executing 'oc get pod logging-fluentd-lhb59' expecting failure; re-trying every 0.2s until completion or 120.000s
No resources found.
Running test/mux.sh:190: executing 'flush_fluentd_pos_files' expecting success...
SUCCESS after 0.351s: test/mux.sh:190: executing 'flush_fluentd_pos_files' expecting success
No resources found.
Running test/mux.sh:193: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 4.689s: test/mux.sh:193: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
Running hack/testing/util.sh:280: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 1.184s: hack/testing/util.sh:280: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s
  WARNING: You're not using the default seccomp profile
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
Running test/mux.sh:263: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.logging.*/_count -XPOST -d '{"query":{"bool":{"filter":{"match_phrase":{"message":"GET /1bf3b46e714841c5b1e16596296e5d4d 404 "}},"must_not":[{"exists":{"field":"docker"}},{"exists":{"field":"kubernetes"}},{"exists":{"field":"CONTAINER_NAME"}},{"exists":{"field":"CONTAINER_ID_FULL"}},{"exists":{"field":"mux_namespace_name"}},{"exists":{"field":"mux_need_k8s_meta"}},{"exists":{"field":"namespace_name"}},{"exists":{"field":"namespace_uuid"}}]}}}' | get_count_from_json' expecting any result and text '^1$'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 143.632s: test/mux.sh:263: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.logging.*/_count -XPOST -d '{"query":{"bool":{"filter":{"match_phrase":{"message":"GET /1bf3b46e714841c5b1e16596296e5d4d 404 "}},"must_not":[{"exists":{"field":"docker"}},{"exists":{"field":"kubernetes"}},{"exists":{"field":"CONTAINER_NAME"}},{"exists":{"field":"CONTAINER_ID_FULL"}},{"exists":{"field":"mux_namespace_name"}},{"exists":{"field":"mux_need_k8s_meta"}},{"exists":{"field":"namespace_name"}},{"exists":{"field":"namespace_uuid"}}]}}}' | get_count_from_json' expecting any result and text '^1$'; re-trying every 0.2s until completion or 600.000s
Running test/mux.sh:304: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.testproj.*/_count?q=SYSLOG_IDENTIFIER:5a09152826fd452797f5821d6b41a331 | get_count_from_json' expecting any result and text '^1$'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 12.223s: test/mux.sh:304: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.testproj.*/_count?q=SYSLOG_IDENTIFIER:5a09152826fd452797f5821d6b41a331 | get_count_from_json' expecting any result and text '^1$'; re-trying every 0.2s until completion or 600.000s
Running test/mux.sh:314: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /_cat/indices' expecting success and not text 'project\.default'...
SUCCESS after 0.657s: test/mux.sh:314: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /_cat/indices' expecting success and not text 'project\.default'
Running test/mux.sh:315: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /_cat/indices' expecting success and not text 'project\.default'...
SUCCESS after 0.831s: test/mux.sh:315: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /_cat/indices' expecting success and not text 'project\.default'
[INFO] ------- Test case 3 -------
[INFO] fluentd forwards kibana and system logs with tag project.testproj.external and CONTAINER values whose namespace names do not match.
Running test/mux.sh:52: executing 'oc get pod logging-fluentd-5wtm2' expecting failure; re-trying every 0.2s until completion or 120.000s...
command terminated with exit code 137
SUCCESS after 29.211s: test/mux.sh:52: executing 'oc get pod logging-fluentd-5wtm2' expecting failure; re-trying every 0.2s until completion or 120.000s
No resources found.
No resources found.
Running test/mux.sh:190: executing 'flush_fluentd_pos_files' expecting success...
SUCCESS after 0.094s: test/mux.sh:190: executing 'flush_fluentd_pos_files' expecting success
Running test/mux.sh:193: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 4.247s: test/mux.sh:193: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
Running hack/testing/util.sh:280: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 2.247s: hack/testing/util.sh:280: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s
  WARNING: You're not using the default seccomp profile
Running test/mux.sh:263: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.testproj.*/_count -XPOST -d '{"query":{"bool":{"filter":{"match_phrase":{"message":"GET /e755eaadc4704101892f9c13af50633f 404 "}},"must_not":[{"exists":{"field":"docker"}},{"exists":{"field":"kubernetes"}},{"exists":{"field":"CONTAINER_NAME"}},{"exists":{"field":"CONTAINER_ID_FULL"}},{"exists":{"field":"mux_namespace_name"}},{"exists":{"field":"mux_need_k8s_meta"}},{"exists":{"field":"namespace_name"}},{"exists":{"field":"namespace_uuid"}}]}}}' | get_count_from_json' expecting any result and text '^1$'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 206.517s: test/mux.sh:263: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.testproj.*/_count -XPOST -d '{"query":{"bool":{"filter":{"match_phrase":{"message":"GET /e755eaadc4704101892f9c13af50633f 404 "}},"must_not":[{"exists":{"field":"docker"}},{"exists":{"field":"kubernetes"}},{"exists":{"field":"CONTAINER_NAME"}},{"exists":{"field":"CONTAINER_ID_FULL"}},{"exists":{"field":"mux_namespace_name"}},{"exists":{"field":"mux_need_k8s_meta"}},{"exists":{"field":"namespace_name"}},{"exists":{"field":"namespace_uuid"}}]}}}' | get_count_from_json' expecting any result and text '^1$'; re-trying every 0.2s until completion or 600.000s
Running test/mux.sh:304: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.testproj.*/_count?q=SYSLOG_IDENTIFIER:7db22f18ec05470aa8ab3412bf98a10a | get_count_from_json' expecting any result and text '^1$'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 9.284s: test/mux.sh:304: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.testproj.*/_count?q=SYSLOG_IDENTIFIER:7db22f18ec05470aa8ab3412bf98a10a | get_count_from_json' expecting any result and text '^1$'; re-trying every 0.2s until completion or 600.000s
Running test/mux.sh:314: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /_cat/indices' expecting success and not text 'project\.default'...
SUCCESS after 0.673s: test/mux.sh:314: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /_cat/indices' expecting success and not text 'project\.default'
Running test/mux.sh:315: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /_cat/indices' expecting success and not text 'project\.default'...
SUCCESS after 0.954s: test/mux.sh:315: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /_cat/indices' expecting success and not text 'project\.default'
[INFO] ------- Test case 4 -------
[INFO] fluentd forwards kibana and system logs with tag test.bogus.external and no CONTAINER values, which will use a namespace of mux-undefined.
[INFO] using existing project mux-undefined
Running test/mux.sh:52: executing 'oc get pod logging-fluentd-6jq5p' expecting failure; re-trying every 0.2s until completion or 120.000s...
command terminated with exit code 137
SUCCESS after 36.958s: test/mux.sh:52: executing 'oc get pod logging-fluentd-6jq5p' expecting failure; re-trying every 0.2s until completion or 120.000s
No resources found.
Running test/mux.sh:190: executing 'flush_fluentd_pos_files' expecting success...
SUCCESS after 0.160s: test/mux.sh:190: executing 'flush_fluentd_pos_files' expecting success
No resources found.
Running test/mux.sh:193: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 4.861s: test/mux.sh:193: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
Running hack/testing/util.sh:280: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 1.643s: hack/testing/util.sh:280: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s
  WARNING: You're not using the default seccomp profile
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
Running test/mux.sh:263: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.logging.*/_count -XPOST -d '{"query":{"bool":{"filter":{"match_phrase":{"message":"GET /4b8a64504a214fdfaa352265b3c77682 404 "}},"must_not":[{"exists":{"field":"docker"}},{"exists":{"field":"kubernetes"}},{"exists":{"field":"CONTAINER_NAME"}},{"exists":{"field":"CONTAINER_ID_FULL"}},{"exists":{"field":"mux_namespace_name"}},{"exists":{"field":"mux_need_k8s_meta"}},{"exists":{"field":"namespace_name"}},{"exists":{"field":"namespace_uuid"}}]}}}' | get_count_from_json' expecting any result and text '^1$'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 156.938s: test/mux.sh:263: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.logging.*/_count -XPOST -d '{"query":{"bool":{"filter":{"match_phrase":{"message":"GET /4b8a64504a214fdfaa352265b3c77682 404 "}},"must_not":[{"exists":{"field":"docker"}},{"exists":{"field":"kubernetes"}},{"exists":{"field":"CONTAINER_NAME"}},{"exists":{"field":"CONTAINER_ID_FULL"}},{"exists":{"field":"mux_namespace_name"}},{"exists":{"field":"mux_need_k8s_meta"}},{"exists":{"field":"namespace_name"}},{"exists":{"field":"namespace_uuid"}}]}}}' | get_count_from_json' expecting any result and text '^1$'; re-trying every 0.2s until completion or 600.000s
Running test/mux.sh:304: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.mux-undefined.*/_count?q=SYSLOG_IDENTIFIER:cea47159ca52488a9f1efa192b74f800 | get_count_from_json' expecting any result and text '^1$'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 12.487s: test/mux.sh:304: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.mux-undefined.*/_count?q=SYSLOG_IDENTIFIER:cea47159ca52488a9f1efa192b74f800 | get_count_from_json' expecting any result and text '^1$'; re-trying every 0.2s until completion or 600.000s
Running test/mux.sh:314: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /_cat/indices' expecting success and not text 'project\.default'...
SUCCESS after 0.742s: test/mux.sh:314: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /_cat/indices' expecting success and not text 'project\.default'
Running test/mux.sh:315: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /_cat/indices' expecting success and not text 'project\.default'...
SUCCESS after 0.614s: test/mux.sh:315: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /_cat/indices' expecting success and not text 'project\.default'
Running test/mux.sh:319: executing 'oc set env dc logging-mux ES_HOST=logging-es OPS_HOST=logging-es-ops' expecting success...
SUCCESS after 0.518s: test/mux.sh:319: executing 'oc set env dc logging-mux ES_HOST=logging-es OPS_HOST=logging-es-ops' expecting success
Running test/mux.sh:322: executing 'oc get pods -l component=mux' expecting any result and text '^logging-mux-.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.479s: test/mux.sh:322: executing 'oc get pods -l component=mux' expecting any result and text '^logging-mux-.* Running '; re-trying every 0.2s until completion or 60.000s
[INFO] mux test finished at Fri Apr 6 22:55:53 UTC 2018
Running test/mux.sh:362: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s...
SUCCESS after 0.346s: test/mux.sh:362: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s
command terminated with exit code 137
{"acknowledged":true}{"acknowledged":true}{"acknowledged":true}
Running test/mux.sh:373: executing 'flush_fluentd_pos_files' expecting success...
SUCCESS after 0.062s: test/mux.sh:373: executing 'flush_fluentd_pos_files' expecting success
Running test/mux.sh:376: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 4.318s: test/mux.sh:376: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
[INFO] Logging test suite test-mux succeeded at Fri Apr  6 22:56:13 UTC 2018
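The per-case checks in the mux suite all issue the same _count bool query, which is hard to read inline above. Reformatted for readability (same body as in the log, with <test-uuid> standing in for the per-case marker), it counts records that contain the test message but carry none of the container or mux metadata fields:

    curl_es logging-es-data-master-decexn9b-1-f7sxf /project.testproj.*/_count -XPOST -d '
    {
      "query": {
        "bool": {
          "filter": { "match_phrase": { "message": "GET /<test-uuid> 404 " } },
          "must_not": [
            { "exists": { "field": "docker" } },
            { "exists": { "field": "kubernetes" } },
            { "exists": { "field": "CONTAINER_NAME" } },
            { "exists": { "field": "CONTAINER_ID_FULL" } },
            { "exists": { "field": "mux_namespace_name" } },
            { "exists": { "field": "mux_need_k8s_meta" } },
            { "exists": { "field": "namespace_name" } },
            { "exists": { "field": "namespace_uuid" } }
          ]
        }
      }
    }' | get_count_from_json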
Running hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success...
SUCCESS after 0.689s: hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success
Running hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success...
SUCCESS after 0.649s: hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success
[INFO] Logging test suite test-read-throttling started at Fri Apr  6 22:56:14 UTC 2018
[INFO] This test only works with the json-file docker log driver
[INFO] Logging test suite test-read-throttling succeeded at Fri Apr  6 22:56:15 UTC 2018
Running hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success...
SUCCESS after 0.708s: hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success
Running hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success...
SUCCESS after 0.425s: hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success
[INFO] Logging test suite test-remote-syslog started at Fri Apr  6 22:56:16 UTC 2018
[INFO] Starting fluentd-plugin-remote-syslog tests at Fri Apr 6 22:56:18 UTC 2018
[INFO] Test 1, expecting generate_syslog_config.rb to have created configuration file
Running test/remote-syslog.sh:57: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.535s: test/remote-syslog.sh:57: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
Running test/remote-syslog.sh:60: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s...
SUCCESS after 0.291s: test/remote-syslog.sh:60: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s
Running test/remote-syslog.sh:64: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
command terminated with exit code 137
SUCCESS after 7.248s: test/remote-syslog.sh:64: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
Running test/remote-syslog.sh:67: executing 'oc exec logging-fluentd-5w8st find /etc/fluent/configs.d/dynamic/output-remote-syslog.conf' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.871s: test/remote-syslog.sh:67: executing 'oc exec logging-fluentd-5w8st find /etc/fluent/configs.d/dynamic/output-remote-syslog.conf' expecting success; re-trying every 0.2s until completion or 60.000s
[INFO] Test 2, expecting generate_syslog_config.rb to not create a configuration file
Running test/remote-syslog.sh:73: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s...
SUCCESS after 0.367s: test/remote-syslog.sh:73: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s
Running test/remote-syslog.sh:77: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
command terminated with exit code 137
SUCCESS after 9.323s: test/remote-syslog.sh:77: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
Running test/remote-syslog.sh:80: executing 'oc exec logging-fluentd-5sxzz find /etc/fluent/configs.d/dynamic/output-remote-syslog.conf' expecting failure; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.786s: test/remote-syslog.sh:80: executing 'oc exec logging-fluentd-5sxzz find /etc/fluent/configs.d/dynamic/output-remote-syslog.conf' expecting failure; re-trying every 0.2s until completion or 60.000s
[INFO] Test 3, expecting generate_syslog_config.rb to generate multiple stores
Running test/remote-syslog.sh:86: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s...
SUCCESS after 0.325s: test/remote-syslog.sh:86: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s
Running test/remote-syslog.sh:90: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
command terminated with exit code 137
SUCCESS after 8.347s: test/remote-syslog.sh:90: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
Running test/remote-syslog.sh:93: executing 'oc exec logging-fluentd-4694p grep '<store>' /etc/fluent/configs.d/dynamic/output-remote-syslog.conf | wc -l' expecting any result and text '^2$'; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 1.143s: test/remote-syslog.sh:93: executing 'oc exec logging-fluentd-4694p grep '<store>' /etc/fluent/configs.d/dynamic/output-remote-syslog.conf | wc -l' expecting any result and text '^2$'; re-trying every 0.2s until completion or 60.000s
[INFO] Test 4, making sure tag_key=message does not cause remote-syslog plugin crash
Running test/remote-syslog.sh:99: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s...
SUCCESS after 0.400s: test/remote-syslog.sh:99: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s
Running test/remote-syslog.sh:103: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
command terminated with exit code 137
SUCCESS after 12.710s: test/remote-syslog.sh:103: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
Running test/remote-syslog.sh:106: executing 'oc exec logging-fluentd-k97fm find /etc/fluent/configs.d/dynamic/output-remote-syslog.conf' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.747s: test/remote-syslog.sh:106: executing 'oc exec logging-fluentd-k97fm find /etc/fluent/configs.d/dynamic/output-remote-syslog.conf' expecting success; re-trying every 0.2s until completion or 60.000s
Running test/remote-syslog.sh:107: executing 'oc exec logging-fluentd-k97fm grep 'tag_key message' /etc/fluent/configs.d/dynamic/output-remote-syslog.conf' expecting success...
SUCCESS after 0.563s: test/remote-syslog.sh:107: executing 'oc exec logging-fluentd-k97fm grep 'tag_key message' /etc/fluent/configs.d/dynamic/output-remote-syslog.conf' expecting success
Running test/remote-syslog.sh:108: executing 'oc logs logging-fluentd-k97fm' expecting success and not text 'nil:NilClass'...
SUCCESS after 0.626s: test/remote-syslog.sh:108: executing 'oc logs logging-fluentd-k97fm' expecting success and not text 'nil:NilClass'
[INFO] Test 5, making sure tag_key=bogus does not cause remote-syslog plugin crash
Running test/remote-syslog.sh:125: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s...
SUCCESS after 0.699s: test/remote-syslog.sh:125: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s
command terminated with exit code 137
Running test/remote-syslog.sh:129: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 15.405s: test/remote-syslog.sh:129: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
Running test/remote-syslog.sh:132: executing 'oc exec logging-fluentd-8lfnh find /etc/fluent/configs.d/dynamic/output-remote-syslog.conf' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.657s: test/remote-syslog.sh:132: executing 'oc exec logging-fluentd-8lfnh find /etc/fluent/configs.d/dynamic/output-remote-syslog.conf' expecting success; re-trying every 0.2s until completion or 60.000s
Running test/remote-syslog.sh:133: executing 'oc exec logging-fluentd-8lfnh grep 'tag_key bogus' /etc/fluent/configs.d/dynamic/output-remote-syslog.conf' expecting success...
SUCCESS after 0.867s: test/remote-syslog.sh:133: executing 'oc exec logging-fluentd-8lfnh grep 'tag_key bogus' /etc/fluent/configs.d/dynamic/output-remote-syslog.conf' expecting success
Running test/remote-syslog.sh:134: executing 'oc logs logging-fluentd-8lfnh' expecting success and not text 'nil:NilClass'...
SUCCESS after 0.665s: test/remote-syslog.sh:134: executing 'oc logs logging-fluentd-8lfnh' expecting success and not text 'nil:NilClass'
[INFO] Restoring original fluentd daemonset environment variable
error: unable to decode "/tmp/tmp.uUia7d5QTN": Object 'Kind' is missing in '{"Spec":{"MinReadySeconds":0,"RevisionHistoryLimit":10,"Selector":{"matchLabels":{"component":"fluentd","provider":"openshift"}},"Template":{"Spec":{"ActiveDeadlineSeconds":null,"Affinity":null,"AutomountServiceAccountToken":null,"Containers":[{"Args":null,"Command":null,"Env":[{"Name":"K8S_HOST_URL","Value":"https://kubernetes.default.svc.cluster.local","ValueFrom":null},{"Name":"ES_HOST","Value":"logging-es","ValueFrom":null},{"Name":"ES_PORT","Value":"9200","ValueFrom":null},{"Name":"ES_CLIENT_CERT","Value":"/etc/fluent/keys/cert","ValueFrom":null},{"Name":"ES_CLIENT_KEY","Value":"/etc/fluent/keys/key","ValueFrom":null},{"Name":"ES_CA","Value":"/etc/fluent/keys/ca","ValueFrom":null},{"Name":"OPS_HOST","Value":"logging-es-ops","ValueFrom":null},{"Name":"OPS_PORT","Value":"9200","ValueFrom":null},{"Name":"OPS_CLIENT_CERT","Value":"/etc/fluent/keys/cert","ValueFrom":null},{"Name":"OPS_CLIENT_KEY","Value":"/etc/fluent/keys/key","ValueFrom":null},{"Name":"OPS_CA","Value":"/etc/fluent/keys/ca","ValueFrom":null},{"Name":"JOURNAL_SOURCE","Value":"","ValueFrom":null},{"Name":"JOURNAL_READ_FROM_HEAD","Value":"false","ValueFrom":null},{"Name":"BUFFER_QUEUE_LIMIT","Value":"32","ValueFrom":null},{"Name":"BUFFER_SIZE_LIMIT","Value":"8m","ValueFrom":null},{"Name":"FLUENTD_CPU_LIMIT","Value":"","ValueFrom":{"ConfigMapKeyRef":null,"FieldRef":null,"ResourceFieldRef":{"ContainerName":"fluentd-elasticsearch","Divisor":"0","Resource":"limits.cpu"},"SecretKeyRef":null}},{"Name":"FLUENTD_MEMORY_LIMIT","Value":"","ValueFrom":{"ConfigMapKeyRef":null,"FieldRef":null,"ResourceFieldRef":{"ContainerName":"fluentd-elasticsearch","Divisor":"0","Resource":"limits.memory"},"SecretKeyRef":null}},{"Name":"FILE_BUFFER_LIMIT","Value":"256Mi","ValueFrom":null},{"Name":"AUDIT_CONTAINER_ENGINE","Value":"true","ValueFrom":null},{"Name":"NODE_NAME","Value":"","ValueFrom":{"ConfigMapKeyRef":null,"FieldRef":{"APIVersion":"v1","FieldPath":"spec.nodeName"},"ResourceFieldRef":null,"SecretKeyRef":null}}],"EnvFrom":null,"Image":"openshift/origin-logging-fluentd:latest","ImagePullPolicy":"IfNotPresent","Lifecycle":null,"LivenessProbe":null,"Name":"fluentd-elasticsearch","Ports":null,"ReadinessProbe":null,"Resources":{"Limits":{"memory":"256Mi"},"Requests":{"cpu":"100m","memory":"256Mi"}},"SecurityContext":{"Capabilities":null,"Privileged":true,"ReadOnlyRootFilesystem":null,"RunAsNonRoot":null,"RunAsUser":null,"SELinuxOptions":null},"Stdin":false,"StdinOnce":false,"TTY":false,"TerminationMessagePath":"/dev/termination-log","TerminationMessagePolicy":"File","VolumeMounts":[{"MountPath":"/run/log/journal","Name":"runlogjournal","ReadOnly":false,"SubPath":""},{"MountPath":"/var/log","Name":"varlog","ReadOnly":false,"SubPath":""},{"MountPath":"/var/lib/docker/containers","Name":"varlibdockercontainers","ReadOnly":true,"SubPath":""},{"MountPath":"/etc/fluent/configs.d/user","Name":"config","ReadOnly":true,"SubPath":""},{"MountPath":"/etc/fluent/keys","Name":"certs","ReadOnly":true,"SubPath":""},{"MountPath":"/etc/docker-hostname","Name":"dockerhostname","ReadOnly":true,"SubPath":""},{"MountPath":"/etc/localtime","Name":"localtime","ReadOnly":true,"SubPath":""},{"MountPath":"/etc/sysconfig/docker","Name":"dockercfg","ReadOnly":true,"SubPath":""},{"MountPath":"/etc/docker","Name":"dockerdaemoncfg","ReadOnly":true,"SubPath":""},{"MountPath":"/var/lib/fluentd","Name":"filebufferstorage","ReadOnly":false,"SubPath":""}],"WorkingDir":""}],"DNSPolicy":"ClusterFirst","HostAlia
ses":null,"Hostname":"","ImagePullSecrets":null,"InitContainers":null,"NodeName":"","NodeSelector":{"logging-infra-fluentd":"true"},"RestartPolicy":"Always","SchedulerName":"default-scheduler","SecurityContext":{"FSGroup":null,"HostIPC":false,"HostNetwork":false,"HostPID":false,"RunAsNonRoot":null,"RunAsUser":null,"SELinuxOptions":null,"SupplementalGroups":null},"ServiceAccountName":"aggregated-logging-fluentd","Subdomain":"","TerminationGracePeriodSeconds":30,"Tolerations":null,"Volumes":[{"AWSElasticBlockStore":null,"AzureDisk":null,"AzureFile":null,"CephFS":null,"Cinder":null,"ConfigMap":null,"DownwardAPI":null,"EmptyDir":null,"FC":null,"FlexVolume":null,"Flocker":null,"GCEPersistentDisk":null,"GitRepo":null,"Glusterfs":null,"HostPath":{"Path":"/run/log/journal"},"ISCSI":null,"NFS":null,"Name":"runlogjournal","PersistentVolumeClaim":null,"PhotonPersistentDisk":null,"PortworxVolume":null,"Projected":null,"Quobyte":null,"RBD":null,"ScaleIO":null,"Secret":null,"StorageOS":null,"VsphereVolume":null},{"AWSElasticBlockStore":null,"AzureDisk":null,"AzureFile":null,"CephFS":null,"Cinder":null,"ConfigMap":null,"DownwardAPI":null,"EmptyDir":null,"FC":null,"FlexVolume":null,"Flocker":null,"GCEPersistentDisk":null,"GitRepo":null,"Glusterfs":null,"HostPath":{"Path":"/var/log"},"ISCSI":null,"NFS":null,"Name":"varlog","PersistentVolumeClaim":null,"PhotonPersistentDisk":null,"PortworxVolume":null,"Projected":null,"Quobyte":null,"RBD":null,"ScaleIO":null,"Secret":null,"StorageOS":null,"VsphereVolume":null},{"AWSElasticBlockStore":null,"AzureDisk":null,"AzureFile":null,"CephFS":null,"Cinder":null,"ConfigMap":null,"DownwardAPI":null,"EmptyDir":null,"FC":null,"FlexVolume":null,"Flocker":null,"GCEPersistentDisk":null,"GitRepo":null,"Glusterfs":null,"HostPath":{"Path":"/var/lib/docker/containers"},"ISCSI":null,"NFS":null,"Name":"varlibdockercontainers","PersistentVolumeClaim":null,"PhotonPersistentDisk":null,"PortworxVolume":null,"Projected":null,"Quobyte":null,"RBD":null,"ScaleIO":null,"Secret":null,"StorageOS":null,"VsphereVolume":null},{"AWSElasticBlockStore":null,"AzureDisk":null,"AzureFile":null,"CephFS":null,"Cinder":null,"ConfigMap":{"DefaultMode":420,"Items":null,"Name":"logging-fluentd","Optional":null},"DownwardAPI":null,"EmptyDir":null,"FC":null,"FlexVolume":null,"Flocker":null,"GCEPersistentDisk":null,"GitRepo":null,"Glusterfs":null,"HostPath":null,"ISCSI":null,"NFS":null,"Name":"config","PersistentVolumeClaim":null,"PhotonPersistentDisk":null,"PortworxVolume":null,"Projected":null,"Quobyte":null,"RBD":null,"ScaleIO":null,"Secret":null,"StorageOS":null,"VsphereVolume":null},{"AWSElasticBlockStore":null,"AzureDisk":null,"AzureFile":null,"CephFS":null,"Cinder":null,"ConfigMap":null,"DownwardAPI":null,"EmptyDir":null,"FC":null,"FlexVolume":null,"Flocker":null,"GCEPersistentDisk":null,"GitRepo":null,"Glusterfs":null,"HostPath":null,"ISCSI":null,"NFS":null,"Name":"certs","PersistentVolumeClaim":null,"PhotonPersistentDisk":null,"PortworxVolume":null,"Projected":null,"Quobyte":null,"RBD":null,"ScaleIO":null,"Secret":{"DefaultMode":420,"Items":null,"Optional":null,"SecretName":"logging-fluentd"},"StorageOS":null,"VsphereVolume":null},{"AWSElasticBlockStore":null,"AzureDisk":null,"AzureFile":null,"CephFS":null,"Cinder":null,"ConfigMap":null,"DownwardAPI":null,"EmptyDir":null,"FC":null,"FlexVolume":null,"Flocker":null,"GCEPersistentDisk":null,"GitRepo":null,"Glusterfs":null,"HostPath":{"Path":"/etc/hostname"},"ISCSI":null,"NFS":null,"Name":"dockerhostname","PersistentVolumeClaim":null,"PhotonPersistentDisk"
:null,"PortworxVolume":null,"Projected":null,"Quobyte":null,"RBD":null,"ScaleIO":null,"Secret":null,"StorageOS":null,"VsphereVolume":null},{"AWSElasticBlockStore":null,"AzureDisk":null,"AzureFile":null,"CephFS":null,"Cinder":null,"ConfigMap":null,"DownwardAPI":null,"EmptyDir":null,"FC":null,"FlexVolume":null,"Flocker":null,"GCEPersistentDisk":null,"GitRepo":null,"Glusterfs":null,"HostPath":{"Path":"/etc/localtime"},"ISCSI":null,"NFS":null,"Name":"localtime","PersistentVolumeClaim":null,"PhotonPersistentDisk":null,"PortworxVolume":null,"Projected":null,"Quobyte":null,"RBD":null,"ScaleIO":null,"Secret":null,"StorageOS":null,"VsphereVolume":null},{"AWSElasticBlockStore":null,"AzureDisk":null,"AzureFile":null,"CephFS":null,"Cinder":null,"ConfigMap":null,"DownwardAPI":null,"EmptyDir":null,"FC":null,"FlexVolume":null,"Flocker":null,"GCEPersistentDisk":null,"GitRepo":null,"Glusterfs":null,"HostPath":{"Path":"/etc/sysconfig/docker"},"ISCSI":null,"NFS":null,"Name":"dockercfg","PersistentVolumeClaim":null,"PhotonPersistentDisk":null,"PortworxVolume":null,"Projected":null,"Quobyte":null,"RBD":null,"ScaleIO":null,"Secret":null,"StorageOS":null,"VsphereVolume":null},{"AWSElasticBlockStore":null,"AzureDisk":null,"AzureFile":null,"CephFS":null,"Cinder":null,"ConfigMap":null,"DownwardAPI":null,"EmptyDir":null,"FC":null,"FlexVolume":null,"Flocker":null,"GCEPersistentDisk":null,"GitRepo":null,"Glusterfs":null,"HostPath":{"Path":"/etc/docker"},"ISCSI":null,"NFS":null,"Name":"dockerdaemoncfg","PersistentVolumeClaim":null,"PhotonPersistentDisk":null,"PortworxVolume":null,"Projected":null,"Quobyte":null,"RBD":null,"ScaleIO":null,"Secret":null,"StorageOS":null,"VsphereVolume":null},{"AWSElasticBlockStore":null,"AzureDisk":null,"AzureFile":null,"CephFS":null,"Cinder":null,"ConfigMap":null,"DownwardAPI":null,"EmptyDir":null,"FC":null,"FlexVolume":null,"Flocker":null,"GCEPersistentDisk":null,"GitRepo":null,"Glusterfs":null,"HostPath":{"Path":"/var/lib/fluentd"},"ISCSI":null,"NFS":null,"Name":"filebufferstorage","PersistentVolumeClaim":null,"PhotonPersistentDisk":null,"PortworxVolume":null,"Projected":null,"Quobyte":null,"RBD":null,"ScaleIO":null,"Secret":null,"StorageOS":null,"VsphereVolume":null}]},"creationTimestamp":null,"labels":{"component":"fluentd","logging-infra":"fluentd","provider":"openshift"},"name":"fluentd-elasticsearch"},"TemplateGeneration":1,"UpdateStrategy":{"RollingUpdate":{"MaxUnavailable":1},"Type":"RollingUpdate"}},"Status":{"CollisionCount":null,"CurrentNumberScheduled":1,"DesiredNumberScheduled":1,"NumberAvailable":1,"NumberMisscheduled":0,"NumberReady":1,"NumberUnavailable":0,"ObservedGeneration":1,"UpdatedNumberScheduled":1},"creationTimestamp":null,"generation":1,"labels":{"component":"fluentd","logging-infra":"fluentd","provider":"openshift"},"name":"logging-fluentd"}'
[INFO] Test 6, verify openshift_logging_mux_remote_syslog_host is respected in the mux pod
Running test/remote-syslog.sh:158: executing 'oc get pods -l component=mux' expecting any result and text '^logging-mux.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.986s: test/remote-syslog.sh:158: executing 'oc get pods -l component=mux' expecting any result and text '^logging-mux.* Running '; re-trying every 0.2s until completion or 60.000s
Running test/remote-syslog.sh:161: executing 'oc get dc logging-mux -o jsonpath='{ .status.replicas }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s...
SUCCESS after 0.900s: test/remote-syslog.sh:161: executing 'oc get dc logging-mux -o jsonpath='{ .status.replicas }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s
Running test/remote-syslog.sh:183: executing 'oc get pods -l component=mux' expecting any result and text '^logging-mux-.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 14.116s: test/remote-syslog.sh:183: executing 'oc get pods -l component=mux' expecting any result and text '^logging-mux-.* Running '; re-trying every 0.2s until completion or 60.000s
Running test/remote-syslog.sh:186: executing 'oc exec logging-mux-2-g57lr find /etc/fluent/configs.d/dynamic/output-remote-syslog.conf' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.381s: test/remote-syslog.sh:186: executing 'oc exec logging-mux-2-g57lr find /etc/fluent/configs.d/dynamic/output-remote-syslog.conf' expecting success; re-trying every 0.2s until completion or 60.000s
Running test/remote-syslog.sh:187: executing 'oc exec logging-mux-2-g57lr grep 'remote_syslog' /etc/fluent/configs.d/dynamic/output-remote-syslog.conf' expecting success and text 'remote_syslog 127.0.0.1'...
SUCCESS after 0.481s: test/remote-syslog.sh:187: executing 'oc exec logging-mux-2-g57lr grep 'remote_syslog' /etc/fluent/configs.d/dynamic/output-remote-syslog.conf' expecting success and text 'remote_syslog 127.0.0.1'
[INFO] Restoring original mux deployconfig environment variable
[INFO] Logging test suite test-remote-syslog succeeded at Fri Apr  6 22:57:58 UTC 2018
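The Running/SUCCESS pairs above all go through the harness's retry wrapper ("re-trying every 0.2s until completion or 60.000s"). A minimal sketch of that polling pattern, assuming the interval and timeout shown in the log (wait_for_text is a hypothetical name, not the harness function):

    # Hypothetical helper: poll a command every 0.2s for up to 60s until its
    # output matches a pattern, mirroring the retries logged above.
    wait_for_text() {
        local cmd="$1" pattern="$2" timeout="${3:-60}"
        local deadline=$(( $(date +%s) + timeout ))
        while [ "$(date +%s)" -lt "$deadline" ]; do
            eval "$cmd" 2>/dev/null | grep -Eq "$pattern" && return 0
            sleep 0.2
        done
        return 1
    }

    # Example, mirroring test/remote-syslog.sh:158 above:
    wait_for_text 'oc get pods -l component=mux' '^logging-mux.* Running '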
Running hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success...
SUCCESS after 1.129s: hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success
Running hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success...
SUCCESS after 0.897s: hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success
[INFO] Logging test suite test-utf8-characters started at Fri Apr  6 22:58:00 UTC 2018
[INFO] Starting utf8-characters test at Fri Apr  6 22:58:01 UTC 2018
Running hack/testing/util.sh:280: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.043s: hack/testing/util.sh:280: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s
  WARNING: You're not using the default seccomp profile
No resources found.
Running test/utf8-characters.sh:45: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"047ef3e2e7ff424ea903d5d3999816fe"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 300.000s...
SUCCESS after 74.462s: test/utf8-characters.sh:45: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"047ef3e2e7ff424ea903d5d3999816fe"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 300.000s
[INFO] Checking that message was successfully processed...
Running test/utf8-characters.sh:48: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /.operations.*/_search -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"047ef3e2e7ff424ea903d5d3999816fe"}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-utf8-characters.py '047ef3e2e7ff424ea903d5d3999816fe-µ' 047ef3e2e7ff424ea903d5d3999816fe' expecting success...
SUCCESS after 0.611s: test/utf8-characters.sh:48: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /.operations.*/_search -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"047ef3e2e7ff424ea903d5d3999816fe"}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-utf8-characters.py '047ef3e2e7ff424ea903d5d3999816fe-µ' 047ef3e2e7ff424ea903d5d3999816fe' expecting success
[INFO] utf8-characters test finished at Fri Apr  6 22:59:17 UTC 2018
[INFO] Logging test suite test-utf8-characters succeeded at Fri Apr  6 22:59:17 UTC 2018
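The utf8-characters checks above pipe an Elasticsearch _count response through get_count_from_json, a helper defined in hack/testing/util.sh that is not shown in this log. A sketch of an equivalent extraction, under the assumption that the helper simply reads the _count JSON body from stdin:

    # Approximation of the helper used above: print the "count" field of an
    # Elasticsearch _count response read from stdin.
    get_count_from_json() {
        python -c 'import json,sys; print(json.load(sys.stdin).get("count", 0))'
    }

    # Query shape taken from the log (pod name and SYSLOG_IDENTIFIER are specific to this run):
    # curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /.operations.*/_count -X POST \
    #     -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"047ef3e2e7ff424ea903d5d3999816fe"}}}' \
    #   | get_count_from_json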
Running hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success...
SUCCESS after 0.534s: hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success
Running hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success...
SUCCESS after 0.593s: hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success
[INFO] Logging test suite test-viaq-data-model started at Fri Apr  6 22:59:18 UTC 2018
Running test/viaq-data-model.sh:50: executing 'oc get pods -l component=es' expecting any result and text '^logging-es.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.398s: test/viaq-data-model.sh:50: executing 'oc get pods -l component=es' expecting any result and text '^logging-es.* Running '; re-trying every 0.2s until completion or 60.000s
Running test/viaq-data-model.sh:51: executing 'oc get pods -l component=kibana' expecting any result and text '^logging-kibana-.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.414s: test/viaq-data-model.sh:51: executing 'oc get pods -l component=kibana' expecting any result and text '^logging-kibana-.* Running '; re-trying every 0.2s until completion or 60.000s
Running test/viaq-data-model.sh:52: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.372s: test/viaq-data-model.sh:52: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
[INFO] Starting viaq-data-model test at Fri Apr 6 22:59:21 UTC 2018
No resources found.
No resources found.
Running test/viaq-data-model.sh:74: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s...
SUCCESS after 0.289s: test/viaq-data-model.sh:74: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s
command terminated with exit code 137
Running test/viaq-data-model.sh:113: executing 'flush_fluentd_pos_files' expecting success...
SUCCESS after 0.140s: test/viaq-data-model.sh:113: executing 'flush_fluentd_pos_files' expecting success
Running test/viaq-data-model.sh:115: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 14.067s: test/viaq-data-model.sh:115: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
No resources found.
No resources found.
Running hack/testing/util.sh:280: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.022s: hack/testing/util.sh:280: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s
  WARNING: You're not using the default seccomp profile
Running hack/testing/util.sh:211: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.logging.*/_count -X POST -d '{"query":{"match_phrase":{"message":"GET /b584c3731d8f40bfb7b6cf580ce23ed6 404 "}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 3.579s: hack/testing/util.sh:211: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.logging.*/_count -X POST -d '{"query":{"match_phrase":{"message":"GET /b584c3731d8f40bfb7b6cf580ce23ed6 404 "}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s
Running hack/testing/util.sh:231: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"6df01530be6a46829e3aac7c70a64805"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 0.487s: hack/testing/util.sh:231: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"6df01530be6a46829e3aac7c70a64805"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s
Running test/viaq-data-model.sh:131: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.logging.*/_search -X POST -d '{"query":{"match_phrase":{"message":"GET /b584c3731d8f40bfb7b6cf580ce23ed6 404 "}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test1' expecting success...
SUCCESS after 0.532s: test/viaq-data-model.sh:131: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.logging.*/_search -X POST -d '{"query":{"match_phrase":{"message":"GET /b584c3731d8f40bfb7b6cf580ce23ed6 404 "}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test1' expecting success
Running test/viaq-data-model.sh:134: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /.operations.*/_search -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"6df01530be6a46829e3aac7c70a64805"}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test1' expecting success...
SUCCESS after 0.574s: test/viaq-data-model.sh:134: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /.operations.*/_search -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"6df01530be6a46829e3aac7c70a64805"}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test1' expecting success
Running test/viaq-data-model.sh:145: executing 'oc get pod logging-fluentd-btdg4' expecting failure; re-trying every 0.2s until completion or 60.000s...
command terminated with exit code 137
SUCCESS after 7.951s: test/viaq-data-model.sh:145: executing 'oc get pod logging-fluentd-btdg4' expecting failure; re-trying every 0.2s until completion or 60.000s
Running test/viaq-data-model.sh:146: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 3.616s: test/viaq-data-model.sh:146: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
No resources found.
No resources found.
Running hack/testing/util.sh:280: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.022s: hack/testing/util.sh:280: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s
  WARNING: You're not using the default seccomp profile
Running hack/testing/util.sh:211: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.logging.*/_count -X POST -d '{"query":{"match_phrase":{"message":"GET /171f5990b5984c96ab8f9727744f3b5e 404 "}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 9.223s: hack/testing/util.sh:211: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.logging.*/_count -X POST -d '{"query":{"match_phrase":{"message":"GET /171f5990b5984c96ab8f9727744f3b5e 404 "}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s
Running hack/testing/util.sh:231: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"22943544a2784251b95df48de463e560"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 0.687s: hack/testing/util.sh:231: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"22943544a2784251b95df48de463e560"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s
Running test/viaq-data-model.sh:152: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.logging.*/_search -X POST -d '{"query":{"match_phrase":{"message":"GET /171f5990b5984c96ab8f9727744f3b5e 404 "}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test2' expecting success...
SUCCESS after 0.636s: test/viaq-data-model.sh:152: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.logging.*/_search -X POST -d '{"query":{"match_phrase":{"message":"GET /171f5990b5984c96ab8f9727744f3b5e 404 "}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test2' expecting success
Running test/viaq-data-model.sh:155: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /.operations.*/_search -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"22943544a2784251b95df48de463e560"}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test2' expecting success...
SUCCESS after 0.596s: test/viaq-data-model.sh:155: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /.operations.*/_search -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"22943544a2784251b95df48de463e560"}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test2' expecting success
Running test/viaq-data-model.sh:160: executing 'oc get pod logging-fluentd-bhl2j' expecting failure; re-trying every 0.2s until completion or 60.000s...
command terminated with exit code 137
SUCCESS after 4.152s: test/viaq-data-model.sh:160: executing 'oc get pod logging-fluentd-bhl2j' expecting failure; re-trying every 0.2s until completion or 60.000s
Running test/viaq-data-model.sh:161: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 3.501s: test/viaq-data-model.sh:161: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
No resources found.
No resources found.
Running hack/testing/util.sh:280: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.022s: hack/testing/util.sh:280: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s
  WARNING: You're not using the default seccomp profile
Running hack/testing/util.sh:211: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.logging.*/_count -X POST -d '{"query":{"match_phrase":{"message":"GET /712ede43787d4d57a550953e5004f2e0 404 "}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 7.712s: hack/testing/util.sh:211: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.logging.*/_count -X POST -d '{"query":{"match_phrase":{"message":"GET /712ede43787d4d57a550953e5004f2e0 404 "}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s
Running hack/testing/util.sh:231: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"6c0a1835c6014af7a6361520ef7db457"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 0.567s: hack/testing/util.sh:231: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"6c0a1835c6014af7a6361520ef7db457"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s
Running test/viaq-data-model.sh:167: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.logging.*/_search -X POST -d '{"query":{"match_phrase":{"message":"GET /712ede43787d4d57a550953e5004f2e0 404 "}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test3' expecting success...
SUCCESS after 0.628s: test/viaq-data-model.sh:167: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.logging.*/_search -X POST -d '{"query":{"match_phrase":{"message":"GET /712ede43787d4d57a550953e5004f2e0 404 "}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test3' expecting success
Running test/viaq-data-model.sh:171: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /.operations.*/_search -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"6c0a1835c6014af7a6361520ef7db457"}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test3' expecting success...
SUCCESS after 0.626s: test/viaq-data-model.sh:171: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /.operations.*/_search -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"6c0a1835c6014af7a6361520ef7db457"}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test3' expecting success
Running test/viaq-data-model.sh:176: executing 'oc get pod logging-fluentd-m75n8' expecting failure; re-trying every 0.2s until completion or 60.000s...
command terminated with exit code 137
SUCCESS after 11.813s: test/viaq-data-model.sh:176: executing 'oc get pod logging-fluentd-m75n8' expecting failure; re-trying every 0.2s until completion or 60.000s
Running test/viaq-data-model.sh:177: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 3.578s: test/viaq-data-model.sh:177: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
No resources found.
No resources found.
Running hack/testing/util.sh:280: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.027s: hack/testing/util.sh:280: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s
  WARNING: You're not using the default seccomp profile
Running hack/testing/util.sh:211: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.logging.*/_count -X POST -d '{"query":{"match_phrase":{"message":"GET /613b1d4bbdb64013853c6f493be15cfe 404 "}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 10.995s: hack/testing/util.sh:211: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.logging.*/_count -X POST -d '{"query":{"match_phrase":{"message":"GET /613b1d4bbdb64013853c6f493be15cfe 404 "}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s
Running hack/testing/util.sh:231: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"b0a41d9c17f4464597ad5d9016445af4"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 3.916s: hack/testing/util.sh:231: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"b0a41d9c17f4464597ad5d9016445af4"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s
Running test/viaq-data-model.sh:183: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.logging.*/_search -X POST -d '{"query":{"match_phrase":{"message":"GET /613b1d4bbdb64013853c6f493be15cfe 404 "}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test4' expecting success...
SUCCESS after 0.678s: test/viaq-data-model.sh:183: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.logging.*/_search -X POST -d '{"query":{"match_phrase":{"message":"GET /613b1d4bbdb64013853c6f493be15cfe 404 "}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test4' expecting success
Running test/viaq-data-model.sh:187: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /.operations.*/_search -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"b0a41d9c17f4464597ad5d9016445af4"}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test4' expecting success...
SUCCESS after 0.524s: test/viaq-data-model.sh:187: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /.operations.*/_search -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"b0a41d9c17f4464597ad5d9016445af4"}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test4' expecting success
Running test/viaq-data-model.sh:192: executing 'oc get pod logging-fluentd-cfkfx' expecting failure; re-trying every 0.2s until completion or 60.000s...
command terminated with exit code 137
SUCCESS after 3.451s: test/viaq-data-model.sh:192: executing 'oc get pod logging-fluentd-cfkfx' expecting failure; re-trying every 0.2s until completion or 60.000s
Running test/viaq-data-model.sh:193: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 3.282s: test/viaq-data-model.sh:193: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
No resources found.
No resources found.
Running hack/testing/util.sh:280: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.024s: hack/testing/util.sh:280: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s
  WARNING: You're not using the default seccomp profile
Running hack/testing/util.sh:211: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.logging.*/_count -X POST -d '{"query":{"match_phrase":{"message":"GET /2088fa5cab344ab9b653c2e58cab050b 404 "}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 8.288s: hack/testing/util.sh:211: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.logging.*/_count -X POST -d '{"query":{"match_phrase":{"message":"GET /2088fa5cab344ab9b653c2e58cab050b 404 "}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s
Running hack/testing/util.sh:231: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"9ada16fa903948da8fae9f5dc10b98d6"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 0.589s: hack/testing/util.sh:231: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"9ada16fa903948da8fae9f5dc10b98d6"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s
Running test/viaq-data-model.sh:207: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.logging.*/_search -X POST -d '{"query":{"match_phrase":{"message":"GET /2088fa5cab344ab9b653c2e58cab050b 404 "}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test5 allow_empty' expecting success...
SUCCESS after 0.619s: test/viaq-data-model.sh:207: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.logging.*/_search -X POST -d '{"query":{"match_phrase":{"message":"GET /2088fa5cab344ab9b653c2e58cab050b 404 "}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test5 allow_empty' expecting success
Running test/viaq-data-model.sh:211: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /.operations.*/_search -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"9ada16fa903948da8fae9f5dc10b98d6"}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test5 allow_empty' expecting success...
SUCCESS after 0.569s: test/viaq-data-model.sh:211: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /.operations.*/_search -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"9ada16fa903948da8fae9f5dc10b98d6"}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test5 allow_empty' expecting success
[INFO] viaq-data-model test finished at Fri Apr 6 23:01:28 UTC 2018
Running test/viaq-data-model.sh:33: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s...
SUCCESS after 0.255s: test/viaq-data-model.sh:33: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s
command terminated with exit code 137
Running test/viaq-data-model.sh:40: executing 'flush_fluentd_pos_files' expecting success...
SUCCESS after 0.099s: test/viaq-data-model.sh:40: executing 'flush_fluentd_pos_files' expecting success
Running test/viaq-data-model.sh:42: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 2.898s: test/viaq-data-model.sh:42: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
[INFO] Logging test suite test-viaq-data-model succeeded at Fri Apr  6 23:01:35 UTC 2018
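Each viaq-data-model pass above follows the same restart sequence: wait for the logging-fluentd daemonset to report zero ready pods, flush position files, then wait for a replacement pod to come up. A sketch of the two waits, using the same oc queries the log records (the plain until-loops stand in for the harness's retry wrapper):

    # Wait until the daemonset reports no ready fluentd pods...
    until [ "$(oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }')" = "0" ]; do
        sleep 0.2
    done
    # ...then wait for a replacement pod to reach Running.
    until oc get pods -l component=fluentd 2>/dev/null | grep -Eq '^logging-fluentd-.* Running '; do
        sleep 0.2
    done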
Running hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success...
SUCCESS after 0.610s: hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success
Running hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success...
SUCCESS after 0.457s: hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success
[INFO] Logging test suite test-zzz-correct-index-names started at Fri Apr  6 23:01:36 UTC 2018
No resources found.
No resources found.
Running test/zzz-correct-index-names.sh:49: executing 'oc get -n default pods test-pod' expecting any result and text '^test-pod.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 3.288s: test/zzz-correct-index-names.sh:49: executing 'oc get -n default pods test-pod' expecting any result and text '^test-pod.* Running '; re-trying every 0.2s until completion or 60.000s
Running test/zzz-correct-index-names.sh:54: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /.operations.*/_search?q=message:4d53e68babe84c8b8eb510bd3bf47b7a |         jq '.hits.hits | map(select(._source.kubernetes.namespace_name == "default")) | length | . > 0'' expecting any result and text '^true$'; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 3.494s: test/zzz-correct-index-names.sh:54: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /.operations.*/_search?q=message:4d53e68babe84c8b8eb510bd3bf47b7a |         jq '.hits.hits | map(select(._source.kubernetes.namespace_name == "default")) | length | . > 0'' expecting any result and text '^true$'; re-trying every 0.2s until completion or 60.000s
Running test/zzz-correct-index-names.sh:56: executing 'oc get -n default pod test-pod' expecting failure; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.292s: test/zzz-correct-index-names.sh:56: executing 'oc get -n default pod test-pod' expecting failure; re-trying every 0.2s until completion or 60.000s
Running test/zzz-correct-index-names.sh:49: executing 'oc get -n openshift pods test-pod' expecting any result and text '^test-pod.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 2.900s: test/zzz-correct-index-names.sh:49: executing 'oc get -n openshift pods test-pod' expecting any result and text '^test-pod.* Running '; re-trying every 0.2s until completion or 60.000s
Running test/zzz-correct-index-names.sh:54: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /.operations.*/_search?q=message:887135c7151c4c5ba4f3d1f20afb1065 |         jq '.hits.hits | map(select(._source.kubernetes.namespace_name == "openshift")) | length | . > 0'' expecting any result and text '^true$'; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 2.624s: test/zzz-correct-index-names.sh:54: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /.operations.*/_search?q=message:887135c7151c4c5ba4f3d1f20afb1065 |         jq '.hits.hits | map(select(._source.kubernetes.namespace_name == "openshift")) | length | . > 0'' expecting any result and text '^true$'; re-trying every 0.2s until completion or 60.000s
Running test/zzz-correct-index-names.sh:56: executing 'oc get -n openshift pod test-pod' expecting failure; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.248s: test/zzz-correct-index-names.sh:56: executing 'oc get -n openshift pod test-pod' expecting failure; re-trying every 0.2s until completion or 60.000s
Running test/zzz-correct-index-names.sh:49: executing 'oc get -n openshift-infra pods test-pod' expecting any result and text '^test-pod.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 2.842s: test/zzz-correct-index-names.sh:49: executing 'oc get -n openshift-infra pods test-pod' expecting any result and text '^test-pod.* Running '; re-trying every 0.2s until completion or 60.000s
Running test/zzz-correct-index-names.sh:54: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /.operations.*/_search?q=message:f886b6c6969c432487d94258c4ddf8c2 |         jq '.hits.hits | map(select(._source.kubernetes.namespace_name == "openshift-infra")) | length | . > 0'' expecting any result and text '^true$'; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 4.838s: test/zzz-correct-index-names.sh:54: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /.operations.*/_search?q=message:f886b6c6969c432487d94258c4ddf8c2 |         jq '.hits.hits | map(select(._source.kubernetes.namespace_name == "openshift-infra")) | length | . > 0'' expecting any result and text '^true$'; re-trying every 0.2s until completion or 60.000s
Running test/zzz-correct-index-names.sh:56: executing 'oc get -n openshift-infra pod test-pod' expecting failure; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.262s: test/zzz-correct-index-names.sh:56: executing 'oc get -n openshift-infra pod test-pod' expecting failure; re-trying every 0.2s until completion or 60.000s
Running test/zzz-correct-index-names.sh:65: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /_cat/indices' expecting success and not text 'project\.default\.'...
SUCCESS after 0.537s: test/zzz-correct-index-names.sh:65: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /_cat/indices' expecting success and not text 'project\.default\.'
Running test/zzz-correct-index-names.sh:66: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.default.*/_count | get_count_from_json' expecting success and text '^0$'...
SUCCESS after 0.556s: test/zzz-correct-index-names.sh:66: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.default.*/_count | get_count_from_json' expecting success and text '^0$'
Running test/zzz-correct-index-names.sh:72: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.*/_search?q=kubernetes.namespace_name:default\&size=9999 |         jq '.hits.hits | map(select(._source.kubernetes.namespace_name == "default")) | length | . > 0'' expecting success and text '^false$'...
SUCCESS after 0.491s: test/zzz-correct-index-names.sh:72: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.*/_search?q=kubernetes.namespace_name:default\&size=9999 |         jq '.hits.hits | map(select(._source.kubernetes.namespace_name == "default")) | length | . > 0'' expecting success and text '^false$'
Running test/zzz-correct-index-names.sh:74: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /_cat/indices' expecting success and not text 'project\.default\.'...
SUCCESS after 0.804s: test/zzz-correct-index-names.sh:74: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /_cat/indices' expecting success and not text 'project\.default\.'
Running test/zzz-correct-index-names.sh:75: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /project.default.*/_count | get_count_from_json' expecting success and text '^0$'...
SUCCESS after 0.497s: test/zzz-correct-index-names.sh:75: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /project.default.*/_count | get_count_from_json' expecting success and text '^0$'
Running test/zzz-correct-index-names.sh:78: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /project.*/_search?q=kubernetes.namespace_name:default\&size=9999 |             jq '.hits.hits | map(select(._source.kubernetes.namespace_name == "default")) | length | . > 0'' expecting success and text '^false$'...
SUCCESS after 0.446s: test/zzz-correct-index-names.sh:78: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /project.*/_search?q=kubernetes.namespace_name:default\&size=9999 |             jq '.hits.hits | map(select(._source.kubernetes.namespace_name == "default")) | length | . > 0'' expecting success and text '^false$'
Running test/zzz-correct-index-names.sh:81: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /.operations.*/_search?q=kubernetes.namespace_name:default\&size=9999 |         jq '.hits.hits | map(select(._source.kubernetes.namespace_name == "default")) | length | . > 0'' expecting success and text '^true$'...
SUCCESS after 0.765s: test/zzz-correct-index-names.sh:81: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /.operations.*/_search?q=kubernetes.namespace_name:default\&size=9999 |         jq '.hits.hits | map(select(._source.kubernetes.namespace_name == "default")) | length | . > 0'' expecting success and text '^true$'
Running test/zzz-correct-index-names.sh:65: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /_cat/indices' expecting success and not text 'project\.openshift\.'...
SUCCESS after 0.535s: test/zzz-correct-index-names.sh:65: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /_cat/indices' expecting success and not text 'project\.openshift\.'
Running test/zzz-correct-index-names.sh:66: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.openshift.*/_count | get_count_from_json' expecting success and text '^0$'...
SUCCESS after 0.764s: test/zzz-correct-index-names.sh:66: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.openshift.*/_count | get_count_from_json' expecting success and text '^0$'
Running test/zzz-correct-index-names.sh:72: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.*/_search?q=kubernetes.namespace_name:openshift\&size=9999 |         jq '.hits.hits | map(select(._source.kubernetes.namespace_name == "openshift")) | length | . > 0'' expecting success and text '^false$'...
SUCCESS after 0.611s: test/zzz-correct-index-names.sh:72: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.*/_search?q=kubernetes.namespace_name:openshift\&size=9999 |         jq '.hits.hits | map(select(._source.kubernetes.namespace_name == "openshift")) | length | . > 0'' expecting success and text '^false$'
Running test/zzz-correct-index-names.sh:74: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /_cat/indices' expecting success and not text 'project\.openshift\.'...
SUCCESS after 0.664s: test/zzz-correct-index-names.sh:74: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /_cat/indices' expecting success and not text 'project\.openshift\.'
Running test/zzz-correct-index-names.sh:75: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /project.openshift.*/_count | get_count_from_json' expecting success and text '^0$'...
SUCCESS after 0.556s: test/zzz-correct-index-names.sh:75: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /project.openshift.*/_count | get_count_from_json' expecting success and text '^0$'
Running test/zzz-correct-index-names.sh:78: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /project.*/_search?q=kubernetes.namespace_name:openshift\&size=9999 |             jq '.hits.hits | map(select(._source.kubernetes.namespace_name == "openshift")) | length | . > 0'' expecting success and text '^false$'...
SUCCESS after 0.659s: test/zzz-correct-index-names.sh:78: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /project.*/_search?q=kubernetes.namespace_name:openshift\&size=9999 |             jq '.hits.hits | map(select(._source.kubernetes.namespace_name == "openshift")) | length | . > 0'' expecting success and text '^false$'
Running test/zzz-correct-index-names.sh:81: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /.operations.*/_search?q=kubernetes.namespace_name:openshift\&size=9999 |         jq '.hits.hits | map(select(._source.kubernetes.namespace_name == "openshift")) | length | . > 0'' expecting success and text '^true$'...
SUCCESS after 0.575s: test/zzz-correct-index-names.sh:81: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /.operations.*/_search?q=kubernetes.namespace_name:openshift\&size=9999 |         jq '.hits.hits | map(select(._source.kubernetes.namespace_name == "openshift")) | length | . > 0'' expecting success and text '^true$'
Running test/zzz-correct-index-names.sh:65: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /_cat/indices' expecting success and not text 'project\.openshift-infra\.'...
SUCCESS after 0.475s: test/zzz-correct-index-names.sh:65: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /_cat/indices' expecting success and not text 'project\.openshift-infra\.'
Running test/zzz-correct-index-names.sh:66: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.openshift-infra.*/_count | get_count_from_json' expecting success and text '^0$'...
SUCCESS after 0.556s: test/zzz-correct-index-names.sh:66: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.openshift-infra.*/_count | get_count_from_json' expecting success and text '^0$'
Running test/zzz-correct-index-names.sh:72: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.*/_search?q=kubernetes.namespace_name:openshift-infra\&size=9999 |         jq '.hits.hits | map(select(._source.kubernetes.namespace_name == "openshift-infra")) | length | . > 0'' expecting success and text '^false$'...
SUCCESS after 0.498s: test/zzz-correct-index-names.sh:72: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.*/_search?q=kubernetes.namespace_name:openshift-infra\&size=9999 |         jq '.hits.hits | map(select(._source.kubernetes.namespace_name == "openshift-infra")) | length | . > 0'' expecting success and text '^false$'
Running test/zzz-correct-index-names.sh:74: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /_cat/indices' expecting success and not text 'project\.openshift-infra\.'...
SUCCESS after 0.540s: test/zzz-correct-index-names.sh:74: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /_cat/indices' expecting success and not text 'project\.openshift-infra\.'
Running test/zzz-correct-index-names.sh:75: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /project.openshift-infra.*/_count | get_count_from_json' expecting success and text '^0$'...
SUCCESS after 0.479s: test/zzz-correct-index-names.sh:75: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /project.openshift-infra.*/_count | get_count_from_json' expecting success and text '^0$'
Running test/zzz-correct-index-names.sh:78: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /project.*/_search?q=kubernetes.namespace_name:openshift-infra\&size=9999 |             jq '.hits.hits | map(select(._source.kubernetes.namespace_name == "openshift-infra")) | length | . > 0'' expecting success and text '^false$'...
SUCCESS after 0.505s: test/zzz-correct-index-names.sh:78: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /project.*/_search?q=kubernetes.namespace_name:openshift-infra\&size=9999 |             jq '.hits.hits | map(select(._source.kubernetes.namespace_name == "openshift-infra")) | length | . > 0'' expecting success and text '^false$'
Running test/zzz-correct-index-names.sh:81: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /.operations.*/_search?q=kubernetes.namespace_name:openshift-infra\&size=9999 |         jq '.hits.hits | map(select(._source.kubernetes.namespace_name == "openshift-infra")) | length | . > 0'' expecting success and text '^true$'...
SUCCESS after 0.451s: test/zzz-correct-index-names.sh:81: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /.operations.*/_search?q=kubernetes.namespace_name:openshift-infra\&size=9999 |         jq '.hits.hits | map(select(._source.kubernetes.namespace_name == "openshift-infra")) | length | . > 0'' expecting success and text '^true$'
[INFO] Logging test suite test-zzz-correct-index-names succeeded at Fri Apr  6 23:02:14 UTC 2018
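The index-name checks above rely on a jq filter that keeps only hits belonging to a given namespace and then tests whether any remain. Reformatted for readability, with the pod, index pattern, and namespace taken from this run:

    curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn \
        '/.operations.*/_search?q=kubernetes.namespace_name:default&size=9999' \
      | jq '.hits.hits
            | map(select(._source.kubernetes.namespace_name == "default"))
            | length | . > 0'
    # Prints true when at least one matching document exists, false otherwise;
    # the suite expects true for .operations.* and false for project.* indices.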
Running hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success...
SUCCESS after 0.354s: hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success
Running hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success...
SUCCESS after 0.304s: hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success
[INFO] Logging test suite test-zzz-rsyslog started at Fri Apr  6 23:02:15 UTC 2018
No resources found.
No resources found.
Running test/zzz-rsyslog.sh:41: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s...
SUCCESS after 0.233s: test/zzz-rsyslog.sh:41: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s
Running test/zzz-rsyslog.sh:56: executing 'ansible-playbook -vvv --become --become-user root --connection local     -e use_mmk8s=True -i /tmp/tmp.b2t1dUpKcm playbook.yaml > /data/src/github.com/openshift/origin-aggregated-logging/_output/scripts/entrypoint/artifacts/zzz-rsyslog-ansible.log 2>&1' expecting success...
command terminated with exit code 137
No resources found.
SUCCESS after 165.535s: test/zzz-rsyslog.sh:56: executing 'ansible-playbook -vvv --become --become-user root --connection local     -e use_mmk8s=True -i /tmp/tmp.b2t1dUpKcm playbook.yaml > /data/src/github.com/openshift/origin-aggregated-logging/_output/scripts/entrypoint/artifacts/zzz-rsyslog-ansible.log 2>&1' expecting success
No resources found.
Running hack/testing/util.sh:280: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.021s: hack/testing/util.sh:280: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s
  WARNING: You're not using the default seccomp profile
Running hack/testing/util.sh:211: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.logging.*/_count -X POST -d '{"query":{"match_phrase":{"message":"GET /f7c56045d77f45da9639864bd0345e15 404 "}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 0.682s: hack/testing/util.sh:211: executing 'curl_es logging-es-data-master-decexn9b-1-f7sxf /project.logging.*/_count -X POST -d '{"query":{"match_phrase":{"message":"GET /f7c56045d77f45da9639864bd0345e15 404 "}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s
No resources found.
Running hack/testing/util.sh:231: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"8d7ae3988be94242bbdfcd89984a189d"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s...
No resources found.
SUCCESS after 1.419s: hack/testing/util.sh:231: executing 'curl_es logging-es-ops-data-master-36f1k6jr-1-5hxrn /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"8d7ae3988be94242bbdfcd89984a189d"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s
No resources found.
Running test/zzz-rsyslog.sh:87: executing 'test 6828be24-39e6-11e8-909c-0e25a66d045a = 6828be24-39e6-11e8-909c-0e25a66d045a' expecting success...
SUCCESS after 0.014s: test/zzz-rsyslog.sh:87: executing 'test 6828be24-39e6-11e8-909c-0e25a66d045a = 6828be24-39e6-11e8-909c-0e25a66d045a' expecting success
Running test/zzz-rsyslog.sh:88: executing 'test ip-172-18-1-211.ec2.internal = ip-172-18-1-211.ec2.internal' expecting success...
SUCCESS after 0.013s: test/zzz-rsyslog.sh:88: executing 'test ip-172-18-1-211.ec2.internal = ip-172-18-1-211.ec2.internal' expecting success
Running test/zzz-rsyslog.sh:89: executing 'test c326cf7e-39e3-11e8-909c-0e25a66d045a = c326cf7e-39e3-11e8-909c-0e25a66d045a' expecting success...
SUCCESS after 0.013s: test/zzz-rsyslog.sh:89: executing 'test c326cf7e-39e3-11e8-909c-0e25a66d045a = c326cf7e-39e3-11e8-909c-0e25a66d045a' expecting success
Running test/zzz-rsyslog.sh:93: executing 'diff /data/src/github.com/openshift/origin-aggregated-logging/_output/scripts/entrypoint/artifacts/zzz-rsyslog-expected-labels.json /data/src/github.com/openshift/origin-aggregated-logging/_output/scripts/entrypoint/artifacts/zzz-rsyslog-actual-labels.json' expecting success...
SUCCESS after 0.015s: test/zzz-rsyslog.sh:93: executing 'diff /data/src/github.com/openshift/origin-aggregated-logging/_output/scripts/entrypoint/artifacts/zzz-rsyslog-expected-labels.json /data/src/github.com/openshift/origin-aggregated-logging/_output/scripts/entrypoint/artifacts/zzz-rsyslog-actual-labels.json' expecting success
No resources found.
Running test/zzz-rsyslog.sh:98: executing 'diff /data/src/github.com/openshift/origin-aggregated-logging/_output/scripts/entrypoint/artifacts/zzz-rsyslog-expected-annotations.json /data/src/github.com/openshift/origin-aggregated-logging/_output/scripts/entrypoint/artifacts/zzz-rsyslog-actual-annotations.json' expecting success...
SUCCESS after 0.019s: test/zzz-rsyslog.sh:98: executing 'diff /data/src/github.com/openshift/origin-aggregated-logging/_output/scripts/entrypoint/artifacts/zzz-rsyslog-expected-annotations.json /data/src/github.com/openshift/origin-aggregated-logging/_output/scripts/entrypoint/artifacts/zzz-rsyslog-actual-annotations.json' expecting success
Running test/zzz-rsyslog.sh:106: executing 'cat /data/src/github.com/openshift/origin-aggregated-logging/_output/scripts/entrypoint/artifacts/rsyslog-proj.json | jq ._source.metadata.namespace_labels' expecting success and text '^null$'...
SUCCESS after 0.023s: test/zzz-rsyslog.sh:106: executing 'cat /data/src/github.com/openshift/origin-aggregated-logging/_output/scripts/entrypoint/artifacts/rsyslog-proj.json | jq ._source.metadata.namespace_labels' expecting success and text '^null$'
Running test/zzz-rsyslog.sh:113: executing 'diff /data/src/github.com/openshift/origin-aggregated-logging/_output/scripts/entrypoint/artifacts/zzz-rsyslog-expected-nsannotations.json /data/src/github.com/openshift/origin-aggregated-logging/_output/scripts/entrypoint/artifacts/zzz-rsyslog-actual-nsannotations.json' expecting success...
SUCCESS after 0.017s: test/zzz-rsyslog.sh:113: executing 'diff /data/src/github.com/openshift/origin-aggregated-logging/_output/scripts/entrypoint/artifacts/zzz-rsyslog-expected-nsannotations.json /data/src/github.com/openshift/origin-aggregated-logging/_output/scripts/entrypoint/artifacts/zzz-rsyslog-actual-nsannotations.json' expecting success
No resources found.
Running test/zzz-rsyslog.sh:119: executing 'cat /data/src/github.com/openshift/origin-aggregated-logging/_output/scripts/entrypoint/artifacts/rsyslog-ops.json | jq -r ._source.systemd.t.TRANSPORT' expecting success and not text '^null$'...
SUCCESS after 0.022s: test/zzz-rsyslog.sh:119: executing 'cat /data/src/github.com/openshift/origin-aggregated-logging/_output/scripts/entrypoint/artifacts/rsyslog-ops.json | jq -r ._source.systemd.t.TRANSPORT' expecting success and not text '^null$'
Running test/zzz-rsyslog.sh:120: executing 'cat /data/src/github.com/openshift/origin-aggregated-logging/_output/scripts/entrypoint/artifacts/rsyslog-ops.json | jq -r ._source.systemd.t.SELINUX_CONTEXT' expecting success and not text '^null$'...
SUCCESS after 0.022s: test/zzz-rsyslog.sh:120: executing 'cat /data/src/github.com/openshift/origin-aggregated-logging/_output/scripts/entrypoint/artifacts/rsyslog-ops.json | jq -r ._source.systemd.t.SELINUX_CONTEXT' expecting success and not text '^null$'
Running test/zzz-rsyslog.sh:121: executing 'cat /data/src/github.com/openshift/origin-aggregated-logging/_output/scripts/entrypoint/artifacts/rsyslog-ops.json | jq -r ._source.systemd.u.SYSLOG_FACILITY' expecting success and not text '^null$'...
SUCCESS after 0.022s: test/zzz-rsyslog.sh:121: executing 'cat /data/src/github.com/openshift/origin-aggregated-logging/_output/scripts/entrypoint/artifacts/rsyslog-ops.json | jq -r ._source.systemd.u.SYSLOG_FACILITY' expecting success and not text '^null$'
Running test/zzz-rsyslog.sh:122: executing 'cat /data/src/github.com/openshift/origin-aggregated-logging/_output/scripts/entrypoint/artifacts/rsyslog-ops.json | jq -r ._source.systemd.u.SYSLOG_PID' expecting success and not text '^null$'...
SUCCESS after 0.022s: test/zzz-rsyslog.sh:122: executing 'cat /data/src/github.com/openshift/origin-aggregated-logging/_output/scripts/entrypoint/artifacts/rsyslog-ops.json | jq -r ._source.systemd.u.SYSLOG_PID' expecting success and not text '^null$'
Running test/zzz-rsyslog.sh:123: executing 'cat /data/src/github.com/openshift/origin-aggregated-logging/_output/scripts/entrypoint/artifacts/rsyslog-ops.json | jq -r ._source.message' expecting success and text '^8d7ae3988be94242bbdfcd89984a189d$'...
SUCCESS after 0.019s: test/zzz-rsyslog.sh:123: executing 'cat /data/src/github.com/openshift/origin-aggregated-logging/_output/scripts/entrypoint/artifacts/rsyslog-ops.json | jq -r ._source.message' expecting success and text '^8d7ae3988be94242bbdfcd89984a189d$'
Running test/zzz-rsyslog.sh:124: executing 'cat /data/src/github.com/openshift/origin-aggregated-logging/_output/scripts/entrypoint/artifacts/rsyslog-ops.json | jq -r ._source.level' expecting success and not text '^null$'...
SUCCESS after 0.021s: test/zzz-rsyslog.sh:124: executing 'cat /data/src/github.com/openshift/origin-aggregated-logging/_output/scripts/entrypoint/artifacts/rsyslog-ops.json | jq -r ._source.level' expecting success and not text '^null$'
Running test/zzz-rsyslog.sh:125: executing 'cat /data/src/github.com/openshift/origin-aggregated-logging/_output/scripts/entrypoint/artifacts/rsyslog-ops.json | jq -r ._source.hostname' expecting success and not text '^null$'...
SUCCESS after 0.021s: test/zzz-rsyslog.sh:125: executing 'cat /data/src/github.com/openshift/origin-aggregated-logging/_output/scripts/entrypoint/artifacts/rsyslog-ops.json | jq -r ._source.hostname' expecting success and not text '^null$'
Running test/zzz-rsyslog.sh:127: executing 'test 2018-04-06T23:05:15.540797+00:00 != null' expecting success...
SUCCESS after 0.014s: test/zzz-rsyslog.sh:127: executing 'test 2018-04-06T23:05:15.540797+00:00 != null' expecting success
Running hack/testing/util.sh:182: executing 'sudo rm -f /var/log/journal.pos' expecting success...
SUCCESS after 0.021s: hack/testing/util.sh:182: executing 'sudo rm -f /var/log/journal.pos' expecting success
Running test/zzz-rsyslog.sh:31: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 3.778s: test/zzz-rsyslog.sh:31: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
[INFO] Logging test suite test-zzz-rsyslog succeeded at Fri Apr  6 23:05:26 UTC 2018
[INFO] [CLEANUP] Beginning cleanup routines...
/data/src/github.com/openshift/origin-aggregated-logging/hack/lib/cleanup.sh: line 258: 72748 Terminated              monitor_fluentd_top
/data/src/github.com/openshift/origin-aggregated-logging/hack/lib/cleanup.sh: line 258: 72749 Terminated              monitor_fluentd_pos
/data/src/github.com/openshift/origin-aggregated-logging/hack/lib/cleanup.sh: line 258: 72750 Terminated              monitor_journal_lograte
[INFO] [CLEANUP] Dumping cluster events to _output/scripts/entrypoint/artifacts/events.txt
Logged into "https://ip-172-18-1-211.ec2.internal:8443" as "system:admin" using existing credentials.

You have access to the following projects and can switch between them with 'oc project <projectname>':

    default
    kube-public
    kube-service-catalog
    kube-system
  * logging
    management-infra
    mux-undefined
    openshift
    openshift-ansible-service-broker
    openshift-infra
    openshift-node
    openshift-template-service-broker

Using project "logging".
[INFO] [CLEANUP] Dumping container logs to _output/scripts/entrypoint/logs/containers
[INFO] [CLEANUP] Truncating log files over 200M
+ set +o xtrace
########## FINISHED STAGE: SUCCESS: RUN LOGGING TESTS [00h 59m 47s] ##########
[PostBuildScript] - Executing post build scripts.
[workspace@2] $ /bin/bash /tmp/jenkins167331140647674950.sh
########## STARTING STAGE: DOWNLOAD ARTIFACTS FROM THE REMOTE HOST ##########
+ [[ -s /var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_37/workspace@2/activate ]]
+ source /var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_37/workspace@2/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66
++ export PATH=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_37/workspace@2/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_37/workspace@2/.config
+ trap 'exit 0' EXIT
++ pwd
+ ARTIFACT_DIR=/var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_37/workspace@2/artifacts/gathered
+ rm -rf /var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_37/workspace@2/artifacts/gathered
+ mkdir -p /var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_37/workspace@2/artifacts/gathered
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo stat /data/src/github.com/openshift/origin/_output/scripts
  File: ‘/data/src/github.com/openshift/origin/_output/scripts’
  Size: 107       	Blocks: 0          IO Block: 4096   directory
Device: ca02h/51714d	Inode: 163789481   Links: 7
Access: (2755/drwxr-sr-x)  Uid: ( 1001/  origin)   Gid: ( 1003/origin-git)
Context: unconfined_u:object_r:svirt_sandbox_file_t:s0
Access: 2018-04-06 21:03:28.381946469 +0000
Modify: 2018-04-06 21:50:41.636942958 +0000
Change: 2018-04-06 21:50:41.636942958 +0000
 Birth: -
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo chmod -R o+rX /data/src/github.com/openshift/origin/_output/scripts
+ scp -r -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel:/data/src/github.com/openshift/origin/_output/scripts /var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_37/workspace@2/artifacts/gathered
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo stat /data/src/github.com/openshift/origin-aggregated-logging/_output/scripts
  File: ‘/data/src/github.com/openshift/origin-aggregated-logging/_output/scripts’
  Size: 44        	Blocks: 0          IO Block: 4096   directory
Device: ca02h/51714d	Inode: 48383431    Links: 4
Access: (2755/drwxr-sr-x)  Uid: ( 1001/  origin)   Gid: ( 1003/origin-git)
Context: unconfined_u:object_r:svirt_sandbox_file_t:s0
Access: 2018-04-06 21:50:44.316844077 +0000
Modify: 2018-04-06 22:05:48.287503987 +0000
Change: 2018-04-06 22:05:48.287503987 +0000
 Birth: -
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo chmod -R o+rX /data/src/github.com/openshift/origin-aggregated-logging/_output/scripts
+ scp -r -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel:/data/src/github.com/openshift/origin-aggregated-logging/_output/scripts /var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_37/workspace@2/artifacts/gathered
+ tree /var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_37/workspace@2/artifacts/gathered
/var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_37/workspace@2/artifacts/gathered
└── scripts
    ├── ansible_junit
    │   ├── EgifpdHGuz.xml
    │   ├── JFrolTRpAO.xml
    │   ├── kQrnMTyDWm.xml
    │   ├── ToidRSAtlv.xml
    │   └── wgtMKAQvto.xml
    ├── build-base-images
    │   ├── artifacts
    │   ├── logs
    │   └── openshift.local.home
    ├── build-images
    │   ├── artifacts
    │   ├── logs
    │   │   └── scripts.log
    │   └── openshift.local.home
    ├── entrypoint
    │   ├── artifacts
    │   │   ├── access_control.sh-artifacts.txt
    │   │   ├── es.indices.after
    │   │   ├── es.indices.before
    │   │   ├── es-ops.indices.after
    │   │   ├── es-ops.indices.before
    │   │   ├── events.txt
    │   │   ├── fluentd-forward.sh-artifacts.txt
    │   │   ├── logging-curator-ops-4-2qkr4.log
    │   │   ├── logging-fluentd-4qsjg.log
    │   │   ├── logging-fluentd-orig.yaml
    │   │   ├── logging-fluentd-rfqgf.log
    │   │   ├── logging-mux-1-cz49j.log
    │   │   ├── monitor_fluentd_pos.log
    │   │   ├── monitor_fluentd_top.kubeconfig
    │   │   ├── monitor_fluentd_top.log
    │   │   ├── monitor_journal_lograte.log
    │   │   ├── multi_tenancy.sh-artifacts.txt
    │   │   ├── mux.logging-fluentd-n484g.log
    │   │   ├── mux.sh-artifacts.txt
    │   │   ├── rsyslog-ops.json
    │   │   ├── rsyslog-proj.json
    │   │   ├── zzz-correct-index-names.sh-artifacts.txt
    │   │   ├── zzz-rsyslog-actual-annotations.json
    │   │   ├── zzz-rsyslog-actual-labels.json
    │   │   ├── zzz-rsyslog-actual-nsannotations.json
    │   │   ├── zzz-rsyslog-ansible.log
    │   │   ├── zzz-rsyslog-expected-annotations.json
    │   │   ├── zzz-rsyslog-expected-labels.json
    │   │   ├── zzz-rsyslog-expected-nsannotations.json
    │   │   └── zzz-rsyslog.sh-artifacts.txt
    │   ├── logs
    │   │   ├── containers
    │   │   │   ├── k8s_apiserver_apiserver-kxxbl_kube-service-catalog_2318b333-39e4-11e8-909c-0e25a66d045a_0.log
    │   │   │   ├── k8s_c_apiserver-gr5l6_openshift-template-service-broker_3af43cd4-39e4-11e8-909c-0e25a66d045a_0.log
    │   │   │   ├── k8s_controller-manager_controller-manager-r7mb6_kube-service-catalog_2318c6b9-39e4-11e8-909c-0e25a66d045a_19.log
    │   │   │   ├── k8s_curator_logging-curator-8-ltv2n_logging_e4d1b930-39ea-11e8-909c-0e25a66d045a_0.log
    │   │   │   ├── k8s_curator_logging-curator-ops-6-6lts2_logging_088fa466-39eb-11e8-909c-0e25a66d045a_0.log
    │   │   │   ├── k8s_deployment_asb-1-deploy_openshift-ansible-service-broker_34084259-39e4-11e8-909c-0e25a66d045a_0.log
    │   │   │   ├── k8s_deployment_asb-etcd-1-deploy_openshift-ansible-service-broker_34e9e8f2-39e4-11e8-909c-0e25a66d045a_0.log
    │   │   │   ├── k8s_elasticsearch_logging-es-data-master-decexn9b-1-f7sxf_logging_a3fd3da7-39e6-11e8-909c-0e25a66d045a_0.log
    │   │   │   ├── k8s_elasticsearch_logging-es-ops-data-master-36f1k6jr-1-5hxrn_logging_a7f6cda8-39e6-11e8-909c-0e25a66d045a_0.log
    │   │   │   ├── k8s_fluentd-elasticsearch_logging-fluentd-lz676_logging_fbeb737b-39ee-11e8-909c-0e25a66d045a_0.log
    │   │   │   ├── k8s_kibana_logging-kibana-1-ndnp6_logging_6828be24-39e6-11e8-909c-0e25a66d045a_0.log
    │   │   │   ├── k8s_kibana_logging-kibana-ops-1-4z4bb_logging_7398d86c-39e6-11e8-909c-0e25a66d045a_0.log
    │   │   │   ├── k8s_kibana-proxy_logging-kibana-1-ndnp6_logging_6828be24-39e6-11e8-909c-0e25a66d045a_0.log
    │   │   │   ├── k8s_kibana-proxy_logging-kibana-ops-1-4z4bb_logging_7398d86c-39e6-11e8-909c-0e25a66d045a_0.log
    │   │   │   ├── k8s_mux_logging-mux-3-8lttv_logging_f710f07f-39ed-11e8-909c-0e25a66d045a_0.log
    │   │   │   ├── k8s_POD_apiserver-gr5l6_openshift-template-service-broker_3af43cd4-39e4-11e8-909c-0e25a66d045a_0.log
    │   │   │   ├── k8s_POD_apiserver-kxxbl_kube-service-catalog_2318b333-39e4-11e8-909c-0e25a66d045a_0.log
    │   │   │   ├── k8s_POD_asb-1-deploy_openshift-ansible-service-broker_34084259-39e4-11e8-909c-0e25a66d045a_0.log
    │   │   │   ├── k8s_POD_asb-etcd-1-deploy_openshift-ansible-service-broker_34e9e8f2-39e4-11e8-909c-0e25a66d045a_0.log
    │   │   │   ├── k8s_POD_controller-manager-r7mb6_kube-service-catalog_2318c6b9-39e4-11e8-909c-0e25a66d045a_0.log
    │   │   │   ├── k8s_POD_docker-registry-1-wdcqp_default_f7d93e7d-39e3-11e8-909c-0e25a66d045a_0.log
    │   │   │   ├── k8s_POD_logging-curator-8-ltv2n_logging_e4d1b930-39ea-11e8-909c-0e25a66d045a_0.log
    │   │   │   ├── k8s_POD_logging-curator-ops-6-6lts2_logging_088fa466-39eb-11e8-909c-0e25a66d045a_0.log
    │   │   │   ├── k8s_POD_logging-es-data-master-decexn9b-1-f7sxf_logging_a3fd3da7-39e6-11e8-909c-0e25a66d045a_0.log
    │   │   │   ├── k8s_POD_logging-es-ops-data-master-36f1k6jr-1-5hxrn_logging_a7f6cda8-39e6-11e8-909c-0e25a66d045a_0.log
    │   │   │   ├── k8s_POD_logging-fluentd-lz676_logging_fbeb737b-39ee-11e8-909c-0e25a66d045a_0.log
    │   │   │   ├── k8s_POD_logging-kibana-1-ndnp6_logging_6828be24-39e6-11e8-909c-0e25a66d045a_0.log
    │   │   │   ├── k8s_POD_logging-kibana-ops-1-4z4bb_logging_7398d86c-39e6-11e8-909c-0e25a66d045a_0.log
    │   │   │   ├── k8s_POD_logging-mux-3-8lttv_logging_f710f07f-39ed-11e8-909c-0e25a66d045a_0.log
    │   │   │   ├── k8s_POD_registry-console-1-999t4_default_055daae6-39e4-11e8-909c-0e25a66d045a_0.log
    │   │   │   ├── k8s_POD_router-1-hgwss_default_d43b000c-39e3-11e8-909c-0e25a66d045a_0.log
    │   │   │   ├── k8s_proxy_logging-es-data-master-decexn9b-1-f7sxf_logging_a3fd3da7-39e6-11e8-909c-0e25a66d045a_0.log
    │   │   │   ├── k8s_proxy_logging-es-ops-data-master-36f1k6jr-1-5hxrn_logging_a7f6cda8-39e6-11e8-909c-0e25a66d045a_0.log
    │   │   │   ├── k8s_registry-console_registry-console-1-999t4_default_055daae6-39e4-11e8-909c-0e25a66d045a_0.log
    │   │   │   ├── k8s_registry_docker-registry-1-wdcqp_default_f7d93e7d-39e3-11e8-909c-0e25a66d045a_0.log
    │   │   │   └── k8s_router_router-1-hgwss_default_d43b000c-39e3-11e8-909c-0e25a66d045a_0.log
    │   │   ├── raw_test_output.log
    │   │   └── scripts.log
    │   └── openshift.local.home
    ├── env
    │   ├── artifacts
    │   ├── logs
    │   │   └── scripts.log
    │   └── openshift.local.home
    ├── tmp.1kUdaYoF3s
    │   ├── artifacts
    │   ├── logs
    │   └── openshift.local.home
    └── tmp.WU40zflhO4
        ├── artifacts
        ├── logs
        └── openshift.local.home

27 directories, 75 files
+ exit 0
[workspace@2] $ /bin/bash /tmp/jenkins4870037748932871443.sh
########## STARTING STAGE: GENERATE ARTIFACTS FROM THE REMOTE HOST ##########
+ [[ -s /var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_37/workspace@2/activate ]]
+ source /var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_37/workspace@2/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66
++ export PATH=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_37/workspace@2/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_37/workspace@2/.config
+ trap 'exit 0' EXIT
++ pwd
+ ARTIFACT_DIR=/var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_37/workspace@2/artifacts/generated
+ rm -rf /var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_37/workspace@2/artifacts/generated
+ mkdir /var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_37/workspace@2/artifacts/generated
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo docker version && sudo docker info && sudo docker images && sudo docker ps -a 2>&1'
  WARNING: You're not using the default seccomp profile
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo cat /etc/etcd/etcd.conf 2>&1'
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo cat /etc/sysconfig/docker /etc/sysconfig/docker-network /etc/sysconfig/docker-storage /etc/sysconfig/docker-storage-setup /etc/systemd/system/docker.service 2>&1'
+ true
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo find /var/lib/docker/containers -name *.log | sudo xargs tail -vn +1 2>&1'
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'oc get --raw /metrics --server=https://$( uname --nodename ):10250 --config=/etc/origin/master/admin.kubeconfig 2>&1'
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo ausearch -m AVC -m SELINUX_ERR -m USER_AVC 2>&1'
+ true
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'oc get --raw /metrics --config=/etc/origin/master/admin.kubeconfig 2>&1'
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo df -T -h && sudo pvs && sudo vgs && sudo lvs && sudo findmnt --all 2>&1'
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo yum list installed 2>&1'
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo journalctl --dmesg --no-pager --all --lines=all 2>&1'
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo journalctl _PID=1 --no-pager --all --lines=all 2>&1'
+ tree /var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_37/workspace@2/artifacts/generated
/var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_37/workspace@2/artifacts/generated
├── avc_denials.log
├── containers.log
├── dmesg.log
├── docker.config
├── docker.info
├── etcd.conf
├── filesystem.info
├── installed_packages.log
├── master-metrics.log
├── node-metrics.log
└── pid1.journal

0 directories, 11 files
+ exit 0
[workspace@2] $ /bin/bash /tmp/jenkins3494472787360790786.sh
########## STARTING STAGE: FETCH SYSTEMD JOURNALS FROM THE REMOTE HOST ##########
+ [[ -s /var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_37/workspace@2/activate ]]
+ source /var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_37/workspace@2/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66
++ export PATH=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_37/workspace@2/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_37/workspace@2/.config
+ trap 'exit 0' EXIT
++ pwd
+ ARTIFACT_DIR=/var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_37/workspace@2/artifacts/journals
+ rm -rf /var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_37/workspace@2/artifacts/journals
+ mkdir /var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_37/workspace@2/artifacts/journals
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit docker.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit dnsmasq.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit systemd-journald.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit origin-master.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit origin-master-api.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit origin-master-controllers.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit origin-node.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit openvswitch.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit ovs-vswitchd.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit ovsdb-server.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit etcd.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit systemd-journald.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit origin-master.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit origin-master-api.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit origin-master-controllers.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit origin-node.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit openvswitch.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit ovs-vswitchd.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit ovsdb-server.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit etcd.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit systemd-journald.service --no-pager --all --lines=all
+ tree /var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_37/workspace@2/artifacts/journals
/var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_37/workspace@2/artifacts/journals
├── dnsmasq.service
├── docker.service
├── etcd.service
├── openvswitch.service
├── origin-master-api.service
├── origin-master-controllers.service
├── origin-master.service
├── origin-node.service
├── ovsdb-server.service
├── ovs-vswitchd.service
└── systemd-journald.service

0 directories, 11 files
+ exit 0
[workspace@2] $ /bin/bash /tmp/jenkins7571508600981949415.sh
########## STARTING STAGE: ASSEMBLE GCS OUTPUT ##########
+ [[ -s /var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_37/workspace@2/activate ]]
+ source /var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_37/workspace@2/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66
++ export PATH=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_37/workspace@2/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_37/workspace@2/.config
+ trap 'exit 0' EXIT
+ mkdir -p gcs/artifacts gcs/artifacts/generated gcs/artifacts/journals gcs/artifacts/gathered
++ python -c 'import json; import urllib; print json.load(urllib.urlopen('\''https://ci.openshift.redhat.com/jenkins/job/test_pull_request_openshift_ansible_logging_37/97/api/json'\''))['\''result'\'']'
+ result=SUCCESS
+ cat
++ date +%s
+ cat /var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_37/builds/97/log
+ cp artifacts/generated/avc_denials.log artifacts/generated/containers.log artifacts/generated/dmesg.log artifacts/generated/docker.config artifacts/generated/docker.info artifacts/generated/etcd.conf artifacts/generated/filesystem.info artifacts/generated/installed_packages.log artifacts/generated/master-metrics.log artifacts/generated/node-metrics.log artifacts/generated/pid1.journal gcs/artifacts/generated/
+ cp artifacts/journals/dnsmasq.service artifacts/journals/docker.service artifacts/journals/etcd.service artifacts/journals/openvswitch.service artifacts/journals/origin-master-api.service artifacts/journals/origin-master-controllers.service artifacts/journals/origin-master.service artifacts/journals/origin-node.service artifacts/journals/ovsdb-server.service artifacts/journals/ovs-vswitchd.service artifacts/journals/systemd-journald.service gcs/artifacts/journals/
+ cp -r artifacts/gathered/scripts gcs/artifacts/
++ pwd
+ scp -F ./.config/origin-ci-tool/inventory/.ssh_config -r /var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_37/workspace@2/gcs openshiftdevel:/data
+ scp -F ./.config/origin-ci-tool/inventory/.ssh_config /var/lib/jenkins/.config/gcloud/gcs-publisher-credentials.json openshiftdevel:/data/credentials.json
+ exit 0
[workspace@2] $ /bin/bash /tmp/jenkins3010741526980376371.sh
########## STARTING STAGE: PUSH THE ARTIFACTS AND METADATA ##########
+ [[ -s /var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_37/workspace@2/activate ]]
+ source /var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_37/workspace@2/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66
++ export PATH=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_37/workspace@2/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_37/workspace@2/.config
++ mktemp
+ script=/tmp/tmp.6ZKQeCqZJL
+ cat
+ chmod +x /tmp/tmp.6ZKQeCqZJL
+ scp -F ./.config/origin-ci-tool/inventory/.ssh_config /tmp/tmp.6ZKQeCqZJL openshiftdevel:/tmp/tmp.6ZKQeCqZJL
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config -t openshiftdevel 'bash -l -c "timeout 300 /tmp/tmp.6ZKQeCqZJL"'
+ cd /home/origin
+ trap 'exit 0' EXIT
+ [[ -n {"type":"presubmit","job":"test_pull_request_openshift_ansible_logging_37","buildid":"dd7a388b-39dc-11e8-a837-0a58ac100475","refs":{"org":"openshift","repo":"openshift-ansible","base_ref":"release-3.7","base_sha":"e5443134210b0d49d376c183ca3472f66d71b7e9","pulls":[{"number":7844,"author":"mtnbikenc","sha":"8fe25956544e69773f179429374bfb7b527dd1f6"}]}} ]]
+ gcloud auth activate-service-account --key-file /data/credentials.json
/tmp/tmp.6ZKQeCqZJL: line 8: gcloud: command not found
+ exit 0
+ set +o xtrace
########## FINISHED STAGE: SUCCESS: PUSH THE ARTIFACTS AND METADATA [00h 00m 01s] ##########
[workspace@2] $ /bin/bash /tmp/jenkins7486889115660858206.sh
########## STARTING STAGE: DEPROVISION CLOUD RESOURCES ##########
+ [[ -s /var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_37/workspace@2/activate ]]
+ source /var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_37/workspace@2/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66
++ export PATH=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_37/workspace@2/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_37/workspace@2/.config
+ oct deprovision

PLAYBOOK: main.yml *************************************************************
4 plays in /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml

PLAY [ensure we have the parameters necessary to deprovision virtual hosts] ****

TASK [ensure all required variables are set] ***********************************
task path: /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml:9
skipping: [localhost] => (item=origin_ci_inventory_dir)  => {
    "changed": false, 
    "generated_timestamp": "2018-04-06 19:07:48.345560", 
    "item": "origin_ci_inventory_dir", 
    "skip_reason": "Conditional check failed", 
    "skipped": true
}
skipping: [localhost] => (item=origin_ci_aws_region)  => {
    "changed": false, 
    "generated_timestamp": "2018-04-06 19:07:48.349552", 
    "item": "origin_ci_aws_region", 
    "skip_reason": "Conditional check failed", 
    "skipped": true
}

PLAY [deprovision virtual hosts in EC2] ****************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [deprovision a virtual EC2 host] ******************************************
task path: /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml:28
included: /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml for localhost

TASK [update the SSH configuration to remove AWS EC2 specifics] ****************
task path: /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:2
ok: [localhost] => {
    "changed": false, 
    "generated_timestamp": "2018-04-06 19:07:49.123585", 
    "msg": ""
}

TASK [rename EC2 instance for termination reaper] ******************************
task path: /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:8
changed: [localhost] => {
    "changed": true, 
    "generated_timestamp": "2018-04-06 19:07:49.688049", 
    "msg": "Tags {'Name': 'oct-terminate'} created for resource i-0cd57e9e471604c34."
}

TASK [tear down the EC2 instance] **********************************************
task path: /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:15
changed: [localhost] => {
    "changed": true, 
    "generated_timestamp": "2018-04-06 19:07:50.512462", 
    "instance_ids": [
        "i-0cd57e9e471604c34"
    ], 
    "instances": [
        {
            "ami_launch_index": "0", 
            "architecture": "x86_64", 
            "block_device_mapping": {
                "/dev/sda1": {
                    "delete_on_termination": true, 
                    "status": "attached", 
                    "volume_id": "vol-079cf222217bb2e11"
                }, 
                "/dev/sdb": {
                    "delete_on_termination": true, 
                    "status": "attached", 
                    "volume_id": "vol-03213c54be8a41316"
                }
            }, 
            "dns_name": "ec2-54-152-227-161.compute-1.amazonaws.com", 
            "ebs_optimized": false, 
            "groups": {
                "sg-7e73221a": "default"
            }, 
            "hypervisor": "xen", 
            "id": "i-0cd57e9e471604c34", 
            "image_id": "ami-069c0ca6cc091e8fa", 
            "instance_type": "m4.xlarge", 
            "kernel": null, 
            "key_name": "libra", 
            "launch_time": "2018-04-06T20:56:24.000Z", 
            "placement": "us-east-1d", 
            "private_dns_name": "ip-172-18-1-211.ec2.internal", 
            "private_ip": "172.18.1.211", 
            "public_dns_name": "ec2-54-152-227-161.compute-1.amazonaws.com", 
            "public_ip": "54.152.227.161", 
            "ramdisk": null, 
            "region": "us-east-1", 
            "root_device_name": "/dev/sda1", 
            "root_device_type": "ebs", 
            "state": "running", 
            "state_code": 16, 
            "tags": {
                "Name": "oct-terminate", 
                "openshift_etcd": "", 
                "openshift_master": "", 
                "openshift_node": ""
            }, 
            "tenancy": "default", 
            "virtualization_type": "hvm"
        }
    ], 
    "tagged_instances": []
}

TASK [remove the serialized host variables] ************************************
task path: /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:22
changed: [localhost] => {
    "changed": true, 
    "generated_timestamp": "2018-04-06 19:07:50.747938", 
    "path": "/var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_37/workspace@2/.config/origin-ci-tool/inventory/host_vars/172.18.1.211.yml", 
    "state": "absent"
}

PLAY [deprovision virtual hosts locally managed by Vagrant] ********************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

PLAY [clean up local configuration for deprovisioned instances] ****************

TASK [remove inventory configuration directory] ********************************
task path: /var/lib/jenkins/origin-ci-tool/437e1037dfc38a9b27d44a96a21bde8b638ccf66/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml:61
changed: [localhost] => {
    "changed": true, 
    "generated_timestamp": "2018-04-06 19:07:51.180109", 
    "path": "/var/lib/jenkins/jobs/test_pull_request_openshift_ansible_logging_37/workspace@2/.config/origin-ci-tool/inventory", 
    "state": "absent"
}

PLAY RECAP *********************************************************************
localhost                  : ok=8    changed=4    unreachable=0    failed=0   

+ set +o xtrace
########## FINISHED STAGE: SUCCESS: DEPROVISION CLOUD RESOURCES [00h 00m 04s] ##########
Archiving artifacts
Recording test results
[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] done
Finished: SUCCESS