[INFO] Checking for index .operations with Kibana pod logging-kibana-ops-1-qq7bz...
Running test/cluster/functionality.sh:104: executing 'sudo -E VERBOSE=true go run '/data/src/github.com/openshift/origin-aggregated-logging/hack/testing/check-logs.go' 'logging-kibana-ops-1-qq7bz' 'logging-es-ops:9200' '.operations' 'journal' '500' 'admin' 'MDlUu3gNuF4RS1egDNIR3BYXfnn3j7pdKi2ura7jub4' '127.0.0.1'' expecting success...
SUCCESS after 1.443s: test/cluster/functionality.sh:104: executing 'sudo -E VERBOSE=true go run '/data/src/github.com/openshift/origin-aggregated-logging/hack/testing/check-logs.go' 'logging-kibana-ops-1-qq7bz' 'logging-es-ops:9200' '.operations' 'journal' '500' 'admin' 'MDlUu3gNuF4RS1egDNIR3BYXfnn3j7pdKi2ura7jub4' '127.0.0.1'' expecting success
[INFO] Checking that Elasticsearch pod logging-es-ops-data-master-wmi6lbzv-1-8g9z9 contains common data model index templates...
Running test/cluster/functionality.sh:109: executing 'oc exec -c elasticsearch logging-es-ops-data-master-wmi6lbzv-1-8g9z9 -- ls -1 /usr/share/java/elasticsearch/index_templates' expecting success...
SUCCESS after 0.396s: test/cluster/functionality.sh:109: executing 'oc exec -c elasticsearch logging-es-ops-data-master-wmi6lbzv-1-8g9z9 -- ls -1 /usr/share/java/elasticsearch/index_templates' expecting success
Running test/cluster/functionality.sh:111: executing 'curl_es 'logging-es-ops-data-master-wmi6lbzv-1-8g9z9' '/_template/com.redhat.viaq-openshift-operations.template.json' --request HEAD --head --output /dev/null --write-out '%{response_code}'' expecting success and text '200'...
SUCCESS after 0.547s: test/cluster/functionality.sh:111: executing 'curl_es 'logging-es-ops-data-master-wmi6lbzv-1-8g9z9' '/_template/com.redhat.viaq-openshift-operations.template.json' --request HEAD --head --output /dev/null --write-out '%{response_code}'' expecting success and text '200'
Running test/cluster/functionality.sh:111: executing 'curl_es 'logging-es-ops-data-master-wmi6lbzv-1-8g9z9' '/_template/com.redhat.viaq-openshift-project.template.json' --request HEAD --head --output /dev/null --write-out '%{response_code}'' expecting success and text '200'...
SUCCESS after 0.535s: test/cluster/functionality.sh:111: executing 'curl_es 'logging-es-ops-data-master-wmi6lbzv-1-8g9z9' '/_template/com.redhat.viaq-openshift-project.template.json' --request HEAD --head --output /dev/null --write-out '%{response_code}'' expecting success and text '200'
Running test/cluster/functionality.sh:111: executing 'curl_es 'logging-es-ops-data-master-wmi6lbzv-1-8g9z9' '/_template/org.ovirt.viaq-collectd.template.json' --request HEAD --head --output /dev/null --write-out '%{response_code}'' expecting success and text '200'...
SUCCESS after 0.433s: test/cluster/functionality.sh:111: executing 'curl_es 'logging-es-ops-data-master-wmi6lbzv-1-8g9z9' '/_template/org.ovirt.viaq-collectd.template.json' --request HEAD --head --output /dev/null --write-out '%{response_code}'' expecting success and text '200'
name                                component               version  type url 
logging-es-ops-data-master-wmi6lbzv cloud-kubernetes        2.4.4_01 j        
logging-es-ops-data-master-wmi6lbzv openshift-elasticsearch 2.4.4.19 j        
logging-es-ops-data-master-wmi6lbzv prometheus-exporter     2.4.4.0  j        
[INFO] Installed plugin: cloud-kubernetes
[INFO] Installed plugin: openshift-elasticsearch
[INFO] Installed plugin: prometheus-exporter
[INFO] Installed plugin: [@]
[INFO] Elasticsearch pod logging-es-ops-data-master-wmi6lbzv-1-8g9z9 contains expected plugin(s)
[INFO] Logging test suite check-logs succeeded at Fri Dec  8 20:25:22 UTC 2017
Running hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success...
SUCCESS after 0.368s: hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success
Running hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success...
SUCCESS after 0.261s: hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success
[INFO] Logging test suite test-curator started at Fri Dec  8 20:25:22 UTC 2017
[INFO] Starting curator test at Fri Dec 8 20:25:23 UTC 2017
No resources found.
No resources found.
[INFO] Enabled debug for dc/logging-curator - rolling out . . .
Error was encountered while opening journal files: Input/output error
[INFO] Rolled out dc/logging-curator
[INFO] Enabled debug for dc/logging-curator-ops - rolling out . . .
[INFO] Rolled out dc/logging-curator-ops
[INFO] Testing curator for incorrect project name length error - updating config and rolling out . . .
Running test/curator.sh:120: executing 'oc get pods -l component=curator -o jsonpath='{.items[0].status.containerStatuses[?(@.name=="curator")].state.waiting.reason}'' expecting any result and text 'Error|CrashLoopBackOff'; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 17.507s: test/curator.sh:120: executing 'oc get pods -l component=curator -o jsonpath='{.items[0].status.containerStatuses[?(@.name=="curator")].state.waiting.reason}'' expecting any result and text 'Error|CrashLoopBackOff'; re-trying every 0.2s until completion or 60.000s
Running test/curator.sh:265: executing 'oc logs logging-curator-3-k72zv 2>&1' expecting success and text 'The project name length must be less than or equal to'...
SUCCESS after 0.247s: test/curator.sh:265: executing 'oc logs logging-curator-3-k72zv 2>&1' expecting success and text 'The project name length must be less than or equal to'
[INFO] Testing curator for improper project name error - updating config and rolling out . . .
Running test/curator.sh:120: executing 'oc get pods -l component=curator -o jsonpath='{.items[0].status.containerStatuses[?(@.name=="curator")].state.waiting.reason}'' expecting any result and text 'Error|CrashLoopBackOff'; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 19.793s: test/curator.sh:120: executing 'oc get pods -l component=curator -o jsonpath='{.items[0].status.containerStatuses[?(@.name=="curator")].state.waiting.reason}'' expecting any result and text 'Error|CrashLoopBackOff'; re-trying every 0.2s until completion or 60.000s
Running test/curator.sh:275: executing 'oc logs logging-curator-4-h4tpq 2>&1' expecting success and text 'The project name must match this regex'...
SUCCESS after 0.574s: test/curator.sh:275: executing 'oc logs logging-curator-4-h4tpq 2>&1' expecting success and text 'The project name must match this regex'
Running test/curator.sh:122: executing 'oc get pods -l component=curator -o jsonpath='{.items[0].status.containerStatuses[?(@.name=="curator")].ready}'' expecting any result and text 'true'; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.283s: test/curator.sh:122: executing 'oc get pods -l component=curator -o jsonpath='{.items[0].status.containerStatuses[?(@.name=="curator")].ready}'' expecting any result and text 'true'; re-trying every 0.2s until completion or 60.000s
Running test/curator.sh:122: executing 'oc get pods -l component=curator -o jsonpath='{.items[0].status.containerStatuses[?(@.name=="curator")].ready}'' expecting any result and text 'true'; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.471s: test/curator.sh:122: executing 'oc get pods -l component=curator -o jsonpath='{.items[0].status.containerStatuses[?(@.name=="curator")].ready}'' expecting any result and text 'true'; re-trying every 0.2s until completion or 60.000s
Running test/curator.sh:329: executing 'oc logs logging-curator-6-645bb 2>&1 | grep -c 'curator run finish'' expecting any result and text '1'; re-trying every 0.2s until completion or 120.000s...
SUCCESS after 0.247s: test/curator.sh:329: executing 'oc logs logging-curator-6-645bb 2>&1 | grep -c 'curator run finish'' expecting any result and text '1'; re-trying every 0.2s until completion or 120.000s
Running test/curator.sh:335: executing 'verify_indices logging-curator-6-645bb ' expecting success...
SUCCESS after 0.603s: test/curator.sh:335: executing 'verify_indices logging-curator-6-645bb ' expecting success
[INFO] sleeping 248 seconds to see if runhour and runminute are working . . .
Error was encountered while opening journal files: No such file or directory
Running test/curator.sh:350: executing 'oc logs logging-curator-6-645bb 2>&1 | grep -c 'curator run finish'' expecting any result and text '2'; re-trying every 0.2s until completion or 120.000s...
SUCCESS after 0.321s: test/curator.sh:350: executing 'oc logs logging-curator-6-645bb 2>&1 | grep -c 'curator run finish'' expecting any result and text '2'; re-trying every 0.2s until completion or 120.000s
[INFO] verify indices deletion after curator run time
Running test/curator.sh:357: executing 'verify_indices logging-curator-6-645bb ' expecting success...
SUCCESS after 0.661s: test/curator.sh:357: executing 'verify_indices logging-curator-6-645bb ' expecting success
Running test/curator.sh:122: executing 'oc get pods -l component=curator-ops -o jsonpath='{.items[0].status.containerStatuses[?(@.name=="curator")].ready}'' expecting any result and text 'true'; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.423s: test/curator.sh:122: executing 'oc get pods -l component=curator-ops -o jsonpath='{.items[0].status.containerStatuses[?(@.name=="curator")].ready}'' expecting any result and text 'true'; re-trying every 0.2s until completion or 60.000s
Running test/curator.sh:329: executing 'oc logs logging-curator-ops-3-7jfl2 2>&1 | grep -c 'curator run finish'' expecting any result and text '1'; re-trying every 0.2s until completion or 120.000s...
SUCCESS after 0.242s: test/curator.sh:329: executing 'oc logs logging-curator-ops-3-7jfl2 2>&1 | grep -c 'curator run finish'' expecting any result and text '1'; re-trying every 0.2s until completion or 120.000s
Running test/curator.sh:335: executing 'verify_indices logging-curator-ops-3-7jfl2 -ops' expecting success...
SUCCESS after 0.539s: test/curator.sh:335: executing 'verify_indices logging-curator-ops-3-7jfl2 -ops' expecting success
[INFO] sleeping 249 seconds to see if runhour and runminute are working . . .
Running test/curator.sh:350: executing 'oc logs logging-curator-ops-3-7jfl2 2>&1 | grep -c 'curator run finish'' expecting any result and text '2'; re-trying every 0.2s until completion or 120.000s...
SUCCESS after 0.257s: test/curator.sh:350: executing 'oc logs logging-curator-ops-3-7jfl2 2>&1 | grep -c 'curator run finish'' expecting any result and text '2'; re-trying every 0.2s until completion or 120.000s
[INFO] verify indices deletion after curator run time
Running test/curator.sh:357: executing 'verify_indices logging-curator-ops-3-7jfl2 -ops' expecting success...
SUCCESS after 0.545s: test/curator.sh:357: executing 'verify_indices logging-curator-ops-3-7jfl2 -ops' expecting success
[INFO] curator test finished at Fri Dec 8 20:40:20 UTC 2017
Running test/curator.sh:122: executing 'oc get pods -l component=curator-ops -o jsonpath='{.items[0].status.containerStatuses[?(@.name=="curator")].ready}'' expecting any result and text 'true'; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.301s: test/curator.sh:122: executing 'oc get pods -l component=curator-ops -o jsonpath='{.items[0].status.containerStatuses[?(@.name=="curator")].ready}'' expecting any result and text 'true'; re-trying every 0.2s until completion or 60.000s
Running test/curator.sh:122: executing 'oc get pods -l component=curator-ops -o jsonpath='{.items[0].status.containerStatuses[?(@.name=="curator")].ready}'' expecting any result and text 'true'; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.318s: test/curator.sh:122: executing 'oc get pods -l component=curator-ops -o jsonpath='{.items[0].status.containerStatuses[?(@.name=="curator")].ready}'' expecting any result and text 'true'; re-trying every 0.2s until completion or 60.000s
[INFO] Logging test suite test-curator succeeded at Fri Dec  8 20:42:03 UTC 2017
Running hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success...
SUCCESS after 0.261s: hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success
Running hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success...
SUCCESS after 0.224s: hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success
[INFO] Logging test suite test-datetime-future started at Fri Dec  8 20:42:03 UTC 2017
[INFO] Ensuring Fluentd pod logging-fluentd-4txhm timezone matches node timezone...
Running test/future_dated_log.sh:18: executing 'oc exec logging-fluentd-4txhm -- date +%Z' expecting success and text 'UTC'...
SUCCESS after 0.315s: test/future_dated_log.sh:18: executing 'oc exec logging-fluentd-4txhm -- date +%Z' expecting success and text 'UTC'
[INFO] Logging test suite test-datetime-future succeeded at Fri Dec  8 20:42:04 UTC 2017
Running hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success...
SUCCESS after 0.225s: hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success
Running hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success...
SUCCESS after 0.246s: hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success
[INFO] Logging test suite test-docker-audit started at Fri Dec  8 20:42:04 UTC 2017
No resources found.
No resources found.

Running test/docker_audit.sh:43: executing 'logs_count_is_ge logging-es-ops-data-master-wmi6lbzv-1-8g9z9 '/.operations.*/' 5' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 3.722s: test/docker_audit.sh:43: executing 'logs_count_is_ge logging-es-ops-data-master-wmi6lbzv-1-8g9z9 '/.operations.*/' 5' expecting success; re-trying every 0.2s until completion or 60.000s
[INFO] ops diff:  5
[INFO] proj diff: 0
Running test/docker_audit.sh:58: executing 'test 0 -eq 0' expecting success...
SUCCESS after 0.012s: test/docker_audit.sh:58: executing 'test 0 -eq 0' expecting success
Running test/docker_audit.sh:65: executing 'test 5 -ge 5' expecting success...
SUCCESS after 0.011s: test/docker_audit.sh:65: executing 'test 5 -ge 5' expecting success
[INFO] Logging test suite test-docker-audit succeeded at Fri Dec  8 20:42:13 UTC 2017
Running hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success...
SUCCESS after 0.274s: hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success
Running hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success...
SUCCESS after 0.231s: hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success
[INFO] Logging test suite test-eventrouter started at Fri Dec  8 20:42:13 UTC 2017
No resources found.
[INFO] Logging test suite test-eventrouter succeeded at Fri Dec  8 20:42:14 UTC 2017
[WARNING] Eventrouter not deployed
Running hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success...
SUCCESS after 0.237s: hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success
Running hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success...
SUCCESS after 0.256s: hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success
[INFO] Logging test suite test-fluentd-forward started at Fri Dec  8 20:42:14 UTC 2017
[INFO] Starting fluentd-forward test at Fri Dec 8 20:42:15 UTC 2017
Running test/fluentd-forward.sh:131: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.240s: test/fluentd-forward.sh:131: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
Running test/fluentd-forward.sh:133: executing 'wait_for_fluentd_to_catch_up' expecting success...
SUCCESS after 6.001s: test/fluentd-forward.sh:133: executing 'wait_for_fluentd_to_catch_up' expecting success
configmap "logging-forward-fluentd" created
daemonset "logging-forward-fluentd" created
daemonset "logging-forward-fluentd" updated
Running test/fluentd-forward.sh:78: executing 'flush_fluentd_pos_files' expecting success...
SUCCESS after 0.049s: test/fluentd-forward.sh:78: executing 'flush_fluentd_pos_files' expecting success
Running test/fluentd-forward.sh:82: executing 'oc get pods -l component=forward-fluentd' expecting any result and text '^logging-forward-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 3.017s: test/fluentd-forward.sh:82: executing 'oc get pods -l component=forward-fluentd' expecting any result and text '^logging-forward-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
Running test/fluentd-forward.sh:19: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s...
SUCCESS after 0.326s: test/fluentd-forward.sh:19: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s
configmap "logging-fluentd" replaced
command terminated with exit code 137
configmap "logging-fluentd" patched
Running test/fluentd-forward.sh:47: executing 'flush_fluentd_pos_files' expecting success...
SUCCESS after 0.099s: test/fluentd-forward.sh:47: executing 'flush_fluentd_pos_files' expecting success
Running test/fluentd-forward.sh:49: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 13.171s: test/fluentd-forward.sh:49: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
Running test/fluentd-forward.sh:138: executing 'wait_for_fluentd_to_catch_up' expecting success...
SUCCESS after 68.463s: test/fluentd-forward.sh:138: executing 'wait_for_fluentd_to_catch_up' expecting success
[INFO] fluentd-forward test finished at Fri Dec 8 20:43:50 UTC 2017
Running test/fluentd-forward.sh:113: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s...
SUCCESS after 0.256s: test/fluentd-forward.sh:113: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s
Running test/fluentd-forward.sh:120: executing 'flush_fluentd_pos_files' expecting success...
SUCCESS after 0.048s: test/fluentd-forward.sh:120: executing 'flush_fluentd_pos_files' expecting success
[INFO] Logging test suite test-fluentd-forward succeeded at Fri Dec  8 20:44:02 UTC 2017
Running hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success...
SUCCESS after 0.274s: hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success
Running hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success...
SUCCESS after 0.252s: hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success
[INFO] Logging test suite test-json-parsing started at Fri Dec  8 20:44:03 UTC 2017
[INFO] Starting json-parsing test at Fri Dec 8 20:44:03 UTC 2017
No resources found.
No resources found.
Running hack/testing/util.sh:267: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 2.816s: hack/testing/util.sh:267: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
Running hack/testing/util.sh:271: executing 'sudo test -f /var/log/es-containers.log.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.019s: hack/testing/util.sh:271: executing 'sudo test -f /var/log/es-containers.log.pos' expecting success; re-trying every 0.2s until completion or 60.000s
Running hack/testing/util.sh:198: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.logging.*/_count -X POST -d '{"query":{"match_phrase":{"message":"GET /9e78d26c-1a0a-4a14-a325-71b2ec875b71 404 "}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 1.690s: hack/testing/util.sh:198: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.logging.*/_count -X POST -d '{"query":{"match_phrase":{"message":"GET /9e78d26c-1a0a-4a14-a325-71b2ec875b71 404 "}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s
Running hack/testing/util.sh:218: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"74aaa651-e9fb-49da-9d69-ddb61658ffa5"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 1.749s: hack/testing/util.sh:218: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"74aaa651-e9fb-49da-9d69-ddb61658ffa5"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s
No resources found.
[INFO] Testing if record is in correct format . . .
Running test/json-parsing.sh:48: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.logging.*/_search?q=message:9e78d26c-1a0a-4a14-a325-71b2ec875b71 |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-json-parsing.py 9e78d26c-1a0a-4a14-a325-71b2ec875b71' expecting success...
SUCCESS after 0.466s: test/json-parsing.sh:48: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.logging.*/_search?q=message:9e78d26c-1a0a-4a14-a325-71b2ec875b71 |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-json-parsing.py 9e78d26c-1a0a-4a14-a325-71b2ec875b71' expecting success
[INFO] json-parsing test finished at Fri Dec 8 20:44:12 UTC 2017
[INFO] Logging test suite test-json-parsing succeeded at Fri Dec  8 20:44:12 UTC 2017
Running hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success...
SUCCESS after 0.271s: hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success
Running hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success...
SUCCESS after 0.249s: hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success
[INFO] Logging test suite test-kibana-dashboards started at Fri Dec  8 20:44:13 UTC 2017
Running test/kibana_dashboards.sh:50: executing 'oc exec -c elasticsearch logging-es-data-master-ztl0kza1-1-6rr84 -- es_load_kibana_ui_objects' expecting failure and text 'Usage:'...
SUCCESS after 0.335s: test/kibana_dashboards.sh:50: executing 'oc exec -c elasticsearch logging-es-data-master-ztl0kza1-1-6rr84 -- es_load_kibana_ui_objects' expecting failure and text 'Usage:'
Running test/kibana_dashboards.sh:51: executing 'oc exec -c elasticsearch logging-es-data-master-ztl0kza1-1-6rr84 -- es_load_kibana_ui_objects no-such-user' expecting failure and text 'Could not find kibana index'...
SUCCESS after 0.424s: test/kibana_dashboards.sh:51: executing 'oc exec -c elasticsearch logging-es-data-master-ztl0kza1-1-6rr84 -- es_load_kibana_ui_objects no-such-user' expecting failure and text 'Could not find kibana index'
Running test/kibana_dashboards.sh:50: executing 'oc exec -c elasticsearch logging-es-ops-data-master-wmi6lbzv-1-8g9z9 -- es_load_kibana_ui_objects' expecting failure and text 'Usage:'...
SUCCESS after 0.331s: test/kibana_dashboards.sh:50: executing 'oc exec -c elasticsearch logging-es-ops-data-master-wmi6lbzv-1-8g9z9 -- es_load_kibana_ui_objects' expecting failure and text 'Usage:'
Running test/kibana_dashboards.sh:51: executing 'oc exec -c elasticsearch logging-es-ops-data-master-wmi6lbzv-1-8g9z9 -- es_load_kibana_ui_objects no-such-user' expecting failure and text 'Could not find kibana index'...
SUCCESS after 0.452s: test/kibana_dashboards.sh:51: executing 'oc exec -c elasticsearch logging-es-ops-data-master-wmi6lbzv-1-8g9z9 -- es_load_kibana_ui_objects no-such-user' expecting failure and text 'Could not find kibana index'
Running test/kibana_dashboards.sh:60: executing 'oc exec -c elasticsearch logging-es-data-master-ztl0kza1-1-6rr84 -- es_load_kibana_ui_objects admin' expecting success and text 'Success'...
SUCCESS after 0.814s: test/kibana_dashboards.sh:60: executing 'oc exec -c elasticsearch logging-es-data-master-ztl0kza1-1-6rr84 -- es_load_kibana_ui_objects admin' expecting success and text 'Success'
Running test/kibana_dashboards.sh:60: executing 'oc exec -c elasticsearch logging-es-ops-data-master-wmi6lbzv-1-8g9z9 -- es_load_kibana_ui_objects admin' expecting success and text 'Success'...
SUCCESS after 0.801s: test/kibana_dashboards.sh:60: executing 'oc exec -c elasticsearch logging-es-ops-data-master-wmi6lbzv-1-8g9z9 -- es_load_kibana_ui_objects admin' expecting success and text 'Success'
[INFO] Finished with test - login to kibana and kibana-ops to verify the admin user can load and view the dashboards with no errors
[INFO] Logging test suite test-kibana-dashboards succeeded at Fri Dec  8 20:44:20 UTC 2017
Running hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success...
SUCCESS after 0.258s: hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success
Running hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success...
SUCCESS after 0.228s: hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success
[INFO] Logging test suite test-multi-tenancy started at Fri Dec  8 20:44:20 UTC 2017
No resources found.
No resources found.
[INFO] Creating project multi-tenancy-1
[INFO] Creating test index and entry for multi-tenancy-1
[INFO] Creating project multi-tenancy-2
[INFO] Creating test index and entry for multi-tenancy-2
[INFO] Creating project multi-tenancy-3
[INFO] Creating test index and entry for multi-tenancy-3
[INFO] Creating user loguser with password loguser
[INFO] Assigning user to projects multi-tenancy-1 multi-tenancy-2
[INFO] Creating user loguser2 with password loguser2
[INFO] Assigning user to projects multi-tenancy-2 multi-tenancy-3
command terminated with exit code 137
Running test/multi_tenancy.sh:182: executing 'hack_msearch_access' expecting failure and text 'Usage:'...
SUCCESS after 0.044s: test/multi_tenancy.sh:182: executing 'hack_msearch_access' expecting failure and text 'Usage:'
Running test/multi_tenancy.sh:183: executing 'hack_msearch_access no-such-user no-such-project' expecting failure and text 'user no-such-user not found'...
SUCCESS after 0.448s: test/multi_tenancy.sh:183: executing 'hack_msearch_access no-such-user no-such-project' expecting failure and text 'user no-such-user not found'
Running test/multi_tenancy.sh:184: executing 'hack_msearch_access loguser no-such-project' expecting failure and text 'project no-such-project not found'...
SUCCESS after 0.645s: test/multi_tenancy.sh:184: executing 'hack_msearch_access loguser no-such-project' expecting failure and text 'project no-such-project not found'
Running test/multi_tenancy.sh:185: executing 'hack_msearch_access loguser default' expecting failure and text 'loguser does not have access to view logs in project default'...
SUCCESS after 0.805s: test/multi_tenancy.sh:185: executing 'hack_msearch_access loguser default' expecting failure and text 'loguser does not have access to view logs in project default'
Running test/multi_tenancy.sh:187: executing 'hack_msearch_access loguser multi-tenancy-1 multi-tenancy-2' expecting success...
SUCCESS after 5.666s: test/multi_tenancy.sh:187: executing 'hack_msearch_access loguser multi-tenancy-1 multi-tenancy-2' expecting success
Running test/multi_tenancy.sh:189: executing 'hack_msearch_access loguser2 --all' expecting success...
SUCCESS after 4.086s: test/multi_tenancy.sh:189: executing 'hack_msearch_access loguser2 --all' expecting success
[INFO] See if user loguser can read /project.multi-tenancy-1.*
Running test/multi_tenancy.sh:109: executing 'test 1 = 1' expecting success...
SUCCESS after 0.006s: test/multi_tenancy.sh:109: executing 'test 1 = 1' expecting success
[INFO] See if user loguser can read /project.multi-tenancy-2.*
Running test/multi_tenancy.sh:109: executing 'test 1 = 1' expecting success...
SUCCESS after 0.009s: test/multi_tenancy.sh:109: executing 'test 1 = 1' expecting success
[INFO] See if user loguser can _msearch ["project.multi-tenancy-1.*","project.multi-tenancy-2.*"]
Running test/multi_tenancy.sh:125: executing 'test 2 = 2' expecting success...
SUCCESS after 0.006s: test/multi_tenancy.sh:125: executing 'test 2 = 2' expecting success
[INFO] See if user loguser is denied /project.default.*
Running test/multi_tenancy.sh:140: executing 'test 0 = 0' expecting success...
SUCCESS after 0.005s: test/multi_tenancy.sh:140: executing 'test 0 = 0' expecting success
[INFO] See if user loguser is denied /.operations.*
Running test/multi_tenancy.sh:151: executing 'test 0 = 0' expecting success...
SUCCESS after 0.006s: test/multi_tenancy.sh:151: executing 'test 0 = 0' expecting success
[INFO] See if user loguser2 can read /project.multi-tenancy-2.*
Running test/multi_tenancy.sh:109: executing 'test 1 = 1' expecting success...
SUCCESS after 0.006s: test/multi_tenancy.sh:109: executing 'test 1 = 1' expecting success
[INFO] See if user loguser2 can read /project.multi-tenancy-3.*
Running test/multi_tenancy.sh:109: executing 'test 1 = 1' expecting success...
SUCCESS after 0.007s: test/multi_tenancy.sh:109: executing 'test 1 = 1' expecting success
[INFO] See if user loguser2 can _msearch ["project.multi-tenancy-2.*","project.multi-tenancy-3.*"]
Running test/multi_tenancy.sh:125: executing 'test 2 = 2' expecting success...
SUCCESS after 0.006s: test/multi_tenancy.sh:125: executing 'test 2 = 2' expecting success
[INFO] See if user loguser2 is denied /project.default.*
Running test/multi_tenancy.sh:140: executing 'test 0 = 0' expecting success...
SUCCESS after 0.008s: test/multi_tenancy.sh:140: executing 'test 0 = 0' expecting success
[INFO] See if user loguser2 is denied /.operations.*
Running test/multi_tenancy.sh:151: executing 'test 0 = 0' expecting success...
SUCCESS after 0.008s: test/multi_tenancy.sh:151: executing 'test 0 = 0' expecting success
[INFO] Logging test suite test-multi-tenancy succeeded at Fri Dec  8 20:45:28 UTC 2017
Running hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success...
SUCCESS after 0.399s: hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success
Running hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success...
SUCCESS after 0.284s: hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success
[INFO] Logging test suite test-mux-client-mode started at Fri Dec  8 20:45:29 UTC 2017
[INFO] configure fluentd to use MUX_CLIENT_MODE=minimal - verify logs get through
Running test/mux-client-mode.sh:65: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s...
SUCCESS after 0.241s: test/mux-client-mode.sh:65: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s
Running test/mux-client-mode.sh:68: executing 'flush_fluentd_pos_files' expecting success...
SUCCESS after 0.048s: test/mux-client-mode.sh:68: executing 'flush_fluentd_pos_files' expecting success
Running test/mux-client-mode.sh:70: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
command terminated with exit code 137
SUCCESS after 9.295s: test/mux-client-mode.sh:70: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
No resources found.
No resources found.
Running hack/testing/util.sh:267: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.449s: hack/testing/util.sh:267: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
Running hack/testing/util.sh:271: executing 'sudo test -f /var/log/es-containers.log.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.017s: hack/testing/util.sh:271: executing 'sudo test -f /var/log/es-containers.log.pos' expecting success; re-trying every 0.2s until completion or 60.000s
Running hack/testing/util.sh:198: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.logging.*/_count -X POST -d '{"query":{"match_phrase":{"message":"GET /1d5f3333-76c3-4083-84b0-ed1464473f22 404 "}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 10.813s: hack/testing/util.sh:198: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.logging.*/_count -X POST -d '{"query":{"match_phrase":{"message":"GET /1d5f3333-76c3-4083-84b0-ed1464473f22 404 "}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s
Running hack/testing/util.sh:218: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"27b3a234-0ba3-4e6e-b906-eb497624b685"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 0.498s: hack/testing/util.sh:218: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"27b3a234-0ba3-4e6e-b906-eb497624b685"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s
[INFO] configure fluentd to use MUX_CLIENT_MODE=maximal - verify logs get through
Running test/mux-client-mode.sh:77: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s...
SUCCESS after 0.223s: test/mux-client-mode.sh:77: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s
Running test/mux-client-mode.sh:80: executing 'flush_fluentd_pos_files' expecting success...
SUCCESS after 0.054s: test/mux-client-mode.sh:80: executing 'flush_fluentd_pos_files' expecting success
Running test/mux-client-mode.sh:82: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
command terminated with exit code 137
SUCCESS after 5.100s: test/mux-client-mode.sh:82: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
No resources found.
No resources found.
Running hack/testing/util.sh:267: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.451s: hack/testing/util.sh:267: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
Running hack/testing/util.sh:271: executing 'sudo test -f /var/log/es-containers.log.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.018s: hack/testing/util.sh:271: executing 'sudo test -f /var/log/es-containers.log.pos' expecting success; re-trying every 0.2s until completion or 60.000s
Running hack/testing/util.sh:198: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.logging.*/_count -X POST -d '{"query":{"match_phrase":{"message":"GET /564d63b8-467e-477d-8805-241dab21e758 404 "}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 15.268s: hack/testing/util.sh:198: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.logging.*/_count -X POST -d '{"query":{"match_phrase":{"message":"GET /564d63b8-467e-477d-8805-241dab21e758 404 "}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s
Running hack/testing/util.sh:218: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"04a117fe-be4a-415d-8cde-442c986169b1"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 0.411s: hack/testing/util.sh:218: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"04a117fe-be4a-415d-8cde-442c986169b1"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s
Running test/mux-client-mode.sh:42: executing 'flush_fluentd_pos_files' expecting success...
SUCCESS after 0.062s: test/mux-client-mode.sh:42: executing 'flush_fluentd_pos_files' expecting success
error: 'logging-infra-fluentd' already has a value (true), and --overwrite is false
Running test/mux-client-mode.sh:44: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
command terminated with exit code 137
SUCCESS after 9.300s: test/mux-client-mode.sh:44: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
[INFO] Logging test suite test-mux-client-mode succeeded at Fri Dec  8 20:46:32 UTC 2017
Running hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success...
SUCCESS after 0.428s: hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success
Running hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success...
SUCCESS after 0.358s: hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success
[INFO] Logging test suite test-mux started at Fri Dec  8 20:46:32 UTC 2017
No resources found.
No resources found.
[INFO] Starting mux test at Fri Dec 8 20:46:35 UTC 2017
No resources found.
No resources found.
Running hack/testing/util.sh:267: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.027s: hack/testing/util.sh:267: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
Running hack/testing/util.sh:271: executing 'sudo test -f /var/log/es-containers.log.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.017s: hack/testing/util.sh:271: executing 'sudo test -f /var/log/es-containers.log.pos' expecting success; re-trying every 0.2s until completion or 60.000s
Running hack/testing/util.sh:198: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.logging.*/_count -X POST -d '{"query":{"match_phrase":{"message":"GET /5944965f-d0f7-437a-9f51-af510ae87272 404 "}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 5.704s: hack/testing/util.sh:198: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.logging.*/_count -X POST -d '{"query":{"match_phrase":{"message":"GET /5944965f-d0f7-437a-9f51-af510ae87272 404 "}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s
Running hack/testing/util.sh:218: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"ce9c1a56-b98e-485a-88ef-69ed38def865"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 0.493s: hack/testing/util.sh:218: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"ce9c1a56-b98e-485a-88ef-69ed38def865"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s
[INFO] ------- Test case 1 -------
[INFO] fluentd forwards kibana and system logs with tag project.testproj.external and CONTAINER values.
Running test/mux.sh:67: executing 'oc get pod logging-fluentd-8rbdr' expecting failure; re-trying every 0.2s until completion or 120.000s...
command terminated with exit code 137
SUCCESS after 12.956s: test/mux.sh:67: executing 'oc get pod logging-fluentd-8rbdr' expecting failure; re-trying every 0.2s until completion or 120.000s
No resources found.
No resources found.
Running test/mux.sh:201: executing 'flush_fluentd_pos_files' expecting success...
SUCCESS after 0.051s: test/mux.sh:201: executing 'flush_fluentd_pos_files' expecting success
Running test/mux.sh:204: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 3.159s: test/mux.sh:204: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
Running hack/testing/util.sh:267: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 1.095s: hack/testing/util.sh:267: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
Running hack/testing/util.sh:271: executing 'sudo test -f /var/log/es-containers.log.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.024s: hack/testing/util.sh:271: executing 'sudo test -f /var/log/es-containers.log.pos' expecting success; re-trying every 0.2s until completion or 60.000s
Running test/mux.sh:274: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.testproj.*/_count -XPOST -d '{"query":{"bool":{"filter":{"match_phrase":{"_all":"GET /31edc699-f542-49be-993c-eb387c36dc09 404 "}},"must_not":[{"exists":{"field":"docker"}},{"exists":{"field":"kubernetes"}},{"exists":{"field":"CONTAINER_NAME"}},{"exists":{"field":"CONTAINER_ID_FULL"}},{"exists":{"field":"mux_namespace_name"}},{"exists":{"field":"mux_need_k8s_meta"}},{"exists":{"field":"namespace_name"}},{"exists":{"field":"namespace_uuid"}}]}}}' | get_count_from_json' expecting any result and text '^1$'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 9.562s: test/mux.sh:274: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.testproj.*/_count -XPOST -d '{"query":{"bool":{"filter":{"match_phrase":{"_all":"GET /31edc699-f542-49be-993c-eb387c36dc09 404 "}},"must_not":[{"exists":{"field":"docker"}},{"exists":{"field":"kubernetes"}},{"exists":{"field":"CONTAINER_NAME"}},{"exists":{"field":"CONTAINER_ID_FULL"}},{"exists":{"field":"mux_namespace_name"}},{"exists":{"field":"mux_need_k8s_meta"}},{"exists":{"field":"namespace_name"}},{"exists":{"field":"namespace_uuid"}}]}}}' | get_count_from_json' expecting any result and text '^1$'; re-trying every 0.2s until completion or 600.000s
Running test/mux.sh:315: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.testproj.*/_count?q=SYSLOG_IDENTIFIER:c648fe4c-f31a-4e2b-96b0-e27457222acd | get_count_from_json' expecting any result and text '^1$'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 0.589s: test/mux.sh:315: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.testproj.*/_count?q=SYSLOG_IDENTIFIER:c648fe4c-f31a-4e2b-96b0-e27457222acd | get_count_from_json' expecting any result and text '^1$'; re-trying every 0.2s until completion or 600.000s
Running test/mux.sh:325: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /_cat/indices' expecting success and not text 'project\.default'...
SUCCESS after 0.688s: test/mux.sh:325: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /_cat/indices' expecting success and not text 'project\.default'
Running test/mux.sh:326: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /_cat/indices' expecting success and not text 'project\.default'...
SUCCESS after 0.529s: test/mux.sh:326: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /_cat/indices' expecting success and not text 'project\.default'
[INFO] ------- Test case 2 -------
[INFO] fluentd forwards kibana and system logs with tag project.testproj.external without CONTAINER values.
Running test/mux.sh:67: executing 'oc get pod logging-fluentd-sxcql' expecting failure; re-trying every 0.2s until completion or 120.000s...
command terminated with exit code 137
SUCCESS after 8.286s: test/mux.sh:67: executing 'oc get pod logging-fluentd-sxcql' expecting failure; re-trying every 0.2s until completion or 120.000s
No resources found.
Running test/mux.sh:201: executing 'flush_fluentd_pos_files' expecting success...
SUCCESS after 0.051s: test/mux.sh:201: executing 'flush_fluentd_pos_files' expecting success
No resources found.
Running test/mux.sh:204: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 3.289s: test/mux.sh:204: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
Running hack/testing/util.sh:267: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 1.602s: hack/testing/util.sh:267: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
Running hack/testing/util.sh:271: executing 'sudo test -f /var/log/es-containers.log.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.024s: hack/testing/util.sh:271: executing 'sudo test -f /var/log/es-containers.log.pos' expecting success; re-trying every 0.2s until completion or 60.000s
Running test/mux.sh:274: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.logging.*/_count -XPOST -d '{"query":{"bool":{"filter":{"match_phrase":{"_all":"GET /1737b002-582f-4d10-a506-daf5bd90bad0 404 "}},"must_not":[{"exists":{"field":"docker"}},{"exists":{"field":"kubernetes"}},{"exists":{"field":"CONTAINER_NAME"}},{"exists":{"field":"CONTAINER_ID_FULL"}},{"exists":{"field":"mux_namespace_name"}},{"exists":{"field":"mux_need_k8s_meta"}},{"exists":{"field":"namespace_name"}},{"exists":{"field":"namespace_uuid"}}]}}}' | get_count_from_json' expecting any result and text '^1$'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 41.374s: test/mux.sh:274: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.logging.*/_count -XPOST -d '{"query":{"bool":{"filter":{"match_phrase":{"_all":"GET /1737b002-582f-4d10-a506-daf5bd90bad0 404 "}},"must_not":[{"exists":{"field":"docker"}},{"exists":{"field":"kubernetes"}},{"exists":{"field":"CONTAINER_NAME"}},{"exists":{"field":"CONTAINER_ID_FULL"}},{"exists":{"field":"mux_namespace_name"}},{"exists":{"field":"mux_need_k8s_meta"}},{"exists":{"field":"namespace_name"}},{"exists":{"field":"namespace_uuid"}}]}}}' | get_count_from_json' expecting any result and text '^1$'; re-trying every 0.2s until completion or 600.000s
Running test/mux.sh:315: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.testproj.*/_count?q=SYSLOG_IDENTIFIER:d1bcd5fb-0f7a-4154-8e4d-b079152993c5 | get_count_from_json' expecting any result and text '^1$'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 2.115s: test/mux.sh:315: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.testproj.*/_count?q=SYSLOG_IDENTIFIER:d1bcd5fb-0f7a-4154-8e4d-b079152993c5 | get_count_from_json' expecting any result and text '^1$'; re-trying every 0.2s until completion or 600.000s
Running test/mux.sh:325: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /_cat/indices' expecting success and not text 'project\.default'...
SUCCESS after 0.576s: test/mux.sh:325: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /_cat/indices' expecting success and not text 'project\.default'
Running test/mux.sh:326: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /_cat/indices' expecting success and not text 'project\.default'...
SUCCESS after 0.537s: test/mux.sh:326: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /_cat/indices' expecting success and not text 'project\.default'
[INFO] ------- Test case 3 -------
[INFO] fluentd forwards kibana and system logs with tag project.testproj.external and CONTAINER values, which namespace names do not match.
Running test/mux.sh:67: executing 'oc get pod logging-fluentd-phdqv' expecting failure; re-trying every 0.2s until completion or 120.000s...
command terminated with exit code 137
SUCCESS after 35.211s: test/mux.sh:67: executing 'oc get pod logging-fluentd-phdqv' expecting failure; re-trying every 0.2s until completion or 120.000s
No resources found.
No resources found.
Running test/mux.sh:201: executing 'flush_fluentd_pos_files' expecting success...
SUCCESS after 0.065s: test/mux.sh:201: executing 'flush_fluentd_pos_files' expecting success
Running test/mux.sh:204: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 2.723s: test/mux.sh:204: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
Running hack/testing/util.sh:267: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 2.238s: hack/testing/util.sh:267: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
Running hack/testing/util.sh:271: executing 'sudo test -f /var/log/es-containers.log.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.025s: hack/testing/util.sh:271: executing 'sudo test -f /var/log/es-containers.log.pos' expecting success; re-trying every 0.2s until completion or 60.000s
Running test/mux.sh:274: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.testproj.*/_count -XPOST -d '{"query":{"bool":{"filter":{"match_phrase":{"_all":"GET /5d5b8b75-2ef8-49a6-a889-380b09277d92 404 "}},"must_not":[{"exists":{"field":"docker"}},{"exists":{"field":"kubernetes"}},{"exists":{"field":"CONTAINER_NAME"}},{"exists":{"field":"CONTAINER_ID_FULL"}},{"exists":{"field":"mux_namespace_name"}},{"exists":{"field":"mux_need_k8s_meta"}},{"exists":{"field":"namespace_name"}},{"exists":{"field":"namespace_uuid"}}]}}}' | get_count_from_json' expecting any result and text '^1$'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 38.143s: test/mux.sh:274: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.testproj.*/_count -XPOST -d '{"query":{"bool":{"filter":{"match_phrase":{"_all":"GET /5d5b8b75-2ef8-49a6-a889-380b09277d92 404 "}},"must_not":[{"exists":{"field":"docker"}},{"exists":{"field":"kubernetes"}},{"exists":{"field":"CONTAINER_NAME"}},{"exists":{"field":"CONTAINER_ID_FULL"}},{"exists":{"field":"mux_namespace_name"}},{"exists":{"field":"mux_need_k8s_meta"}},{"exists":{"field":"namespace_name"}},{"exists":{"field":"namespace_uuid"}}]}}}' | get_count_from_json' expecting any result and text '^1$'; re-trying every 0.2s until completion or 600.000s
Running test/mux.sh:315: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.testproj.*/_count?q=SYSLOG_IDENTIFIER:e64f9f2b-df40-4d81-9905-c2b9f62bd2b6 | get_count_from_json' expecting any result and text '^1$'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 0.757s: test/mux.sh:315: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.testproj.*/_count?q=SYSLOG_IDENTIFIER:e64f9f2b-df40-4d81-9905-c2b9f62bd2b6 | get_count_from_json' expecting any result and text '^1$'; re-trying every 0.2s until completion or 600.000s
Running test/mux.sh:325: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /_cat/indices' expecting success and not text 'project\.default'...
SUCCESS after 0.631s: test/mux.sh:325: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /_cat/indices' expecting success and not text 'project\.default'
Running test/mux.sh:326: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /_cat/indices' expecting success and not text 'project\.default'...
SUCCESS after 0.566s: test/mux.sh:326: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /_cat/indices' expecting success and not text 'project\.default'
[INFO] ------- Test case 4 -------
[INFO] fluentd forwards kibana and system logs with tag test.bogus.external and no CONTAINER values, which will use a namespace of mux-undefined.
[INFO] using existing project mux-undefined
Running test/mux.sh:67: executing 'oc get pod logging-fluentd-sxcsk' expecting failure; re-trying every 0.2s until completion or 120.000s...
command terminated with exit code 137
SUCCESS after 32.713s: test/mux.sh:67: executing 'oc get pod logging-fluentd-sxcsk' expecting failure; re-trying every 0.2s until completion or 120.000s
No resources found.
No resources found.
Running test/mux.sh:201: executing 'flush_fluentd_pos_files' expecting success...
SUCCESS after 0.052s: test/mux.sh:201: executing 'flush_fluentd_pos_files' expecting success
Running test/mux.sh:204: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 2.284s: test/mux.sh:204: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
Running hack/testing/util.sh:267: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 1.728s: hack/testing/util.sh:267: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
Running hack/testing/util.sh:271: executing 'sudo test -f /var/log/es-containers.log.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.018s: hack/testing/util.sh:271: executing 'sudo test -f /var/log/es-containers.log.pos' expecting success; re-trying every 0.2s until completion or 60.000s
Running test/mux.sh:274: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.logging.*/_count -XPOST -d '{"query":{"bool":{"filter":{"match_phrase":{"_all":"GET /0ea2d938-892f-4ee9-a114-6fee39d651c0 404 "}},"must_not":[{"exists":{"field":"docker"}},{"exists":{"field":"kubernetes"}},{"exists":{"field":"CONTAINER_NAME"}},{"exists":{"field":"CONTAINER_ID_FULL"}},{"exists":{"field":"mux_namespace_name"}},{"exists":{"field":"mux_need_k8s_meta"}},{"exists":{"field":"namespace_name"}},{"exists":{"field":"namespace_uuid"}}]}}}' | get_count_from_json' expecting any result and text '^1$'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 8.133s: test/mux.sh:274: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.logging.*/_count -XPOST -d '{"query":{"bool":{"filter":{"match_phrase":{"_all":"GET /0ea2d938-892f-4ee9-a114-6fee39d651c0 404 "}},"must_not":[{"exists":{"field":"docker"}},{"exists":{"field":"kubernetes"}},{"exists":{"field":"CONTAINER_NAME"}},{"exists":{"field":"CONTAINER_ID_FULL"}},{"exists":{"field":"mux_namespace_name"}},{"exists":{"field":"mux_need_k8s_meta"}},{"exists":{"field":"namespace_name"}},{"exists":{"field":"namespace_uuid"}}]}}}' | get_count_from_json' expecting any result and text '^1$'; re-trying every 0.2s until completion or 600.000s
Running test/mux.sh:315: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.mux-undefined.*/_count?q=SYSLOG_IDENTIFIER:3bb1be0f-a983-4510-bf27-b9137845afae | get_count_from_json' expecting any result and text '^1$'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 2.165s: test/mux.sh:315: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.mux-undefined.*/_count?q=SYSLOG_IDENTIFIER:3bb1be0f-a983-4510-bf27-b9137845afae | get_count_from_json' expecting any result and text '^1$'; re-trying every 0.2s until completion or 600.000s
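The check above is the heart of test case 4: a record that reaches mux with no CONTAINER_* metadata and a tag outside the project scheme is filed under the fallback mux-undefined namespace. A minimal sketch of running the same verification by hand, assuming the curl_es and get_count_from_json helpers from hack/testing/util.sh are sourced; the pod name and SYSLOG_IDENTIFIER value are placeholders for this run's values:

es_pod=logging-es-data-master-ztl0kza1-1-6rr84    # placeholder: the current non-ops ES pod
uuid=3bb1be0f-a983-4510-bf27-b9137845afae         # placeholder: the SYSLOG_IDENTIFIER injected by the test
count=$(curl_es "$es_pod" "/project.mux-undefined.*/_count?q=SYSLOG_IDENTIFIER:$uuid" | get_count_from_json)
[ "$count" = 1 ] && echo "record was routed to the mux-undefined namespace"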
Running test/mux.sh:325: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /_cat/indices' expecting success and not text 'project\.default'...
SUCCESS after 0.609s: test/mux.sh:325: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /_cat/indices' expecting success and not text 'project\.default'
Running test/mux.sh:326: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /_cat/indices' expecting success and not text 'project\.default'...
SUCCESS after 0.599s: test/mux.sh:326: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /_cat/indices' expecting success and not text 'project\.default'
Running test/mux.sh:330: executing 'oc set env dc logging-mux ES_HOST=logging-es OPS_HOST=logging-es-ops' expecting success...
SUCCESS after 0.353s: test/mux.sh:330: executing 'oc set env dc logging-mux ES_HOST=logging-es OPS_HOST=logging-es-ops' expecting success
Running test/mux.sh:333: executing 'oc get pods -l component=mux' expecting any result and text '^logging-mux-.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.358s: test/mux.sh:333: executing 'oc get pods -l component=mux' expecting any result and text '^logging-mux-.* Running '; re-trying every 0.2s until completion or 60.000s
[INFO] mux test finished at Fri Dec 8 20:50:45 UTC 2017
Running test/mux.sh:374: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s...
SUCCESS after 0.307s: test/mux.sh:374: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s
{"acknowledged":true}{"acknowledged":true}{"acknowledged":true}Running test/mux.sh:385: executing 'flush_fluentd_pos_files' expecting success...
SUCCESS after 0.067s: test/mux.sh:385: executing 'flush_fluentd_pos_files' expecting success
Running test/mux.sh:388: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 3.216s: test/mux.sh:388: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
[INFO] Logging test suite test-mux succeeded at Fri Dec  8 20:51:02 UTC 2017
Running hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success...
SUCCESS after 0.476s: hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success
Running hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success...
SUCCESS after 0.381s: hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success
[INFO] Logging test suite test-read-throttling started at Fri Dec  8 20:51:03 UTC 2017
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
Running test/read-throttling.sh:55: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s...
SUCCESS after 0.366s: test/read-throttling.sh:55: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s
Running test/read-throttling.sh:60: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 5.792s: test/read-throttling.sh:60: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
Running test/read-throttling.sh:63: executing 'oc logs logging-fluentd-d6l9w' expecting success and text 'Could not parse YAML file'...
SUCCESS after 0.388s: test/read-throttling.sh:63: executing 'oc logs logging-fluentd-d6l9w' expecting success and text 'Could not parse YAML file'
Running test/read-throttling.sh:67: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s...
SUCCESS after 0.465s: test/read-throttling.sh:67: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s
Running test/read-throttling.sh:72: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
command terminated with exit code 137
SUCCESS after 16.885s: test/read-throttling.sh:72: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
Running test/read-throttling.sh:75: executing 'oc logs logging-fluentd-qbh85' expecting success and text 'Unknown option "bogus-key"'...
SUCCESS after 0.497s: test/read-throttling.sh:75: executing 'oc logs logging-fluentd-qbh85' expecting success and text 'Unknown option "bogus-key"'
Running test/read-throttling.sh:76: executing 'oc logs logging-fluentd-qbh85' expecting success and text 'Invalid key/value pair {"bogus-key":"bogus-value"} provided -- ignoring...'...
SUCCESS after 0.377s: test/read-throttling.sh:76: executing 'oc logs logging-fluentd-qbh85' expecting success and text 'Invalid key/value pair {"bogus-key":"bogus-value"} provided -- ignoring...'
Running test/read-throttling.sh:77: executing 'oc logs logging-fluentd-qbh85' expecting success and text 'Invalid value type matched for "bogus-value"'...
SUCCESS after 0.358s: test/read-throttling.sh:77: executing 'oc logs logging-fluentd-qbh85' expecting success and text 'Invalid value type matched for "bogus-value"'
Running test/read-throttling.sh:78: executing 'oc logs logging-fluentd-qbh85' expecting success and text 'Invalid key/value pair {"read_lines_limit":"bogus-value"} provided -- ignoring...'...
SUCCESS after 0.349s: test/read-throttling.sh:78: executing 'oc logs logging-fluentd-qbh85' expecting success and text 'Invalid key/value pair {"read_lines_limit":"bogus-value"} provided -- ignoring...'
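The four log checks above distinguish the one accepted throttle key (read_lines_limit) from unknown keys and malformed values. A rough sketch of the per-project config shape they exercise; the file layout is an assumption, while the accepted key and the warning strings come from the checks themselves, and the pod name is a placeholder:

# Shape of a throttle entry (illustrative; layout assumed, keys/warnings taken from the checks above):
#   testproj:
#     read_lines_limit: 100      <- accepted: caps how many lines in_tail reads per cycle
#   otherproj:
#     bogus-key: bogus-value     <- rejected: logged as 'Unknown option "bogus-key"' and ignored
oc logs logging-fluentd-qbh85 | grep -E 'Unknown option|Invalid key/value pair'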
Running test/read-throttling.sh:37: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s...
SUCCESS after 0.225s: test/read-throttling.sh:37: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s
command terminated with exit code 137
[INFO] Logging test suite test-read-throttling succeeded at Fri Dec  8 20:51:37 UTC 2017
Running hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success...
SUCCESS after 0.627s: hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success
Running hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success...
SUCCESS after 0.267s: hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success
[INFO] Logging test suite test-remote-syslog started at Fri Dec  8 20:51:38 UTC 2017
[INFO] Starting fluentd-plugin-remote-syslog tests at Fri Dec 8 20:51:39 UTC 2017
[INFO] Test 1, expecting generate_syslog_config.rb to have created configuration file
Running test/remote-syslog.sh:32: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 1.382s: test/remote-syslog.sh:32: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
Running test/remote-syslog.sh:35: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s...
SUCCESS after 0.413s: test/remote-syslog.sh:35: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s
Running test/remote-syslog.sh:39: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
command terminated with exit code 137
SUCCESS after 7.761s: test/remote-syslog.sh:39: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
Running test/remote-syslog.sh:42: executing 'oc exec logging-fluentd-9qmw5 find /etc/fluent/configs.d/dynamic/output-remote-syslog.conf' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.486s: test/remote-syslog.sh:42: executing 'oc exec logging-fluentd-9qmw5 find /etc/fluent/configs.d/dynamic/output-remote-syslog.conf' expecting success; re-trying every 0.2s until completion or 60.000s
[INFO] Test 2, expecting generate_syslog_config.rb to not create a configuration file
Running test/remote-syslog.sh:48: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s...
SUCCESS after 0.383s: test/remote-syslog.sh:48: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s
Running test/remote-syslog.sh:52: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
command terminated with exit code 137
SUCCESS after 9.125s: test/remote-syslog.sh:52: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
Running test/remote-syslog.sh:55: executing 'oc exec logging-fluentd-t7ggg find /etc/fluent/configs.d/dynamic/output-remote-syslog.conf' expecting failure; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.408s: test/remote-syslog.sh:55: executing 'oc exec logging-fluentd-t7ggg find /etc/fluent/configs.d/dynamic/output-remote-syslog.conf' expecting failure; re-trying every 0.2s until completion or 60.000s
[INFO] Test 3, expecting generate_syslog_config.rb to generate multiple stores
Running test/remote-syslog.sh:61: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s...
SUCCESS after 0.334s: test/remote-syslog.sh:61: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s
Running test/remote-syslog.sh:65: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
command terminated with exit code 137
SUCCESS after 6.356s: test/remote-syslog.sh:65: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
Running test/remote-syslog.sh:68: executing 'oc exec logging-fluentd-96b5c grep '<store>' /etc/fluent/configs.d/dynamic/output-remote-syslog.conf | wc -l' expecting any result and text '^2$'; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.618s: test/remote-syslog.sh:68: executing 'oc exec logging-fluentd-96b5c grep '<store>' /etc/fluent/configs.d/dynamic/output-remote-syslog.conf | wc -l' expecting any result and text '^2$'; re-trying every 0.2s until completion or 60.000s
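Test 3 asserts that generate_syslog_config.rb emits one <store> block per configured remote host, which is why the grep count above must be 2. A sketch of the shape being counted; only the <store> wrappers and the tag_key line are confirmed by greps in this suite, everything else is an assumption about the generated file, and the pod name is a placeholder:

oc exec logging-fluentd-96b5c -- cat /etc/fluent/configs.d/dynamic/output-remote-syslog.conf
# expected shape (illustrative, not the literal generated file):
#   <store>
#     @type syslog_buffered          # assumed plugin type
#     remote_syslog host1.example.com
#     port 514
#     tag_key message
#   </store>
#   <store>
#     ...second remote host...
#   </store>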
[INFO] Test 4, making sure tag_key=message does not cause remote-syslog plugin crash
Running test/remote-syslog.sh:74: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s...
SUCCESS after 0.319s: test/remote-syslog.sh:74: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s
Running test/remote-syslog.sh:78: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
command terminated with exit code 137
SUCCESS after 7.283s: test/remote-syslog.sh:78: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
Running test/remote-syslog.sh:81: executing 'oc exec logging-fluentd-lll9s find /etc/fluent/configs.d/dynamic/output-remote-syslog.conf' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.555s: test/remote-syslog.sh:81: executing 'oc exec logging-fluentd-lll9s find /etc/fluent/configs.d/dynamic/output-remote-syslog.conf' expecting success; re-trying every 0.2s until completion or 60.000s
Running test/remote-syslog.sh:82: executing 'oc exec logging-fluentd-lll9s grep 'tag_key message' /etc/fluent/configs.d/dynamic/output-remote-syslog.conf' expecting success...
SUCCESS after 0.427s: test/remote-syslog.sh:82: executing 'oc exec logging-fluentd-lll9s grep 'tag_key message' /etc/fluent/configs.d/dynamic/output-remote-syslog.conf' expecting success
Running test/remote-syslog.sh:83: executing 'oc logs logging-fluentd-lll9s' expecting success and not text 'nil:NilClass'...
SUCCESS after 0.363s: test/remote-syslog.sh:83: executing 'oc logs logging-fluentd-lll9s' expecting success and not text 'nil:NilClass'
[INFO] Test 5, making sure tag_key=bogus does not cause remote-syslog plugin crash
Running test/remote-syslog.sh:100: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s...
SUCCESS after 0.238s: test/remote-syslog.sh:100: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s
Running test/remote-syslog.sh:104: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
command terminated with exit code 137
SUCCESS after 6.661s: test/remote-syslog.sh:104: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
Running test/remote-syslog.sh:107: executing 'oc exec logging-fluentd-2cjwv find /etc/fluent/configs.d/dynamic/output-remote-syslog.conf' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.503s: test/remote-syslog.sh:107: executing 'oc exec logging-fluentd-2cjwv find /etc/fluent/configs.d/dynamic/output-remote-syslog.conf' expecting success; re-trying every 0.2s until completion or 60.000s
Running test/remote-syslog.sh:108: executing 'oc exec logging-fluentd-2cjwv grep 'tag_key bogus' /etc/fluent/configs.d/dynamic/output-remote-syslog.conf' expecting success...
SUCCESS after 0.355s: test/remote-syslog.sh:108: executing 'oc exec logging-fluentd-2cjwv grep 'tag_key bogus' /etc/fluent/configs.d/dynamic/output-remote-syslog.conf' expecting success
Running test/remote-syslog.sh:109: executing 'oc logs logging-fluentd-2cjwv' expecting success and not text 'nil:NilClass'...
SUCCESS after 0.269s: test/remote-syslog.sh:109: executing 'oc logs logging-fluentd-2cjwv' expecting success and not text 'nil:NilClass'
[INFO] Restoring original fluentd daemonset environment variable
error: unable to decode "/tmp/tmp.wariTyDSAG": Object 'Kind' is missing in '{"Spec":{"MinReadySeconds":0,"RevisionHistoryLimit":10,"Selector":{"matchLabels":{"component":"fluentd","provider":"openshift"}},"Template":{"Spec":{"ActiveDeadlineSeconds":null,"Affinity":null,"AutomountServiceAccountToken":null,"Containers":[{"Args":null,"Command":null,"Env":[{"Name":"K8S_HOST_URL","Value":"https://kubernetes.default.svc.cluster.local","ValueFrom":null},{"Name":"ES_HOST","Value":"logging-es","ValueFrom":null},{"Name":"ES_PORT","Value":"9200","ValueFrom":null},{"Name":"ES_CLIENT_CERT","Value":"/etc/fluent/keys/cert","ValueFrom":null},{"Name":"ES_CLIENT_KEY","Value":"/etc/fluent/keys/key","ValueFrom":null},{"Name":"ES_CA","Value":"/etc/fluent/keys/ca","ValueFrom":null},{"Name":"OPS_HOST","Value":"logging-es-ops","ValueFrom":null},{"Name":"OPS_PORT","Value":"9200","ValueFrom":null},{"Name":"OPS_CLIENT_CERT","Value":"/etc/fluent/keys/cert","ValueFrom":null},{"Name":"OPS_CLIENT_KEY","Value":"/etc/fluent/keys/key","ValueFrom":null},{"Name":"OPS_CA","Value":"/etc/fluent/keys/ca","ValueFrom":null},{"Name":"JOURNAL_SOURCE","Value":"","ValueFrom":null},{"Name":"JOURNAL_READ_FROM_HEAD","Value":"false","ValueFrom":null},{"Name":"BUFFER_QUEUE_LIMIT","Value":"32","ValueFrom":null},{"Name":"BUFFER_SIZE_LIMIT","Value":"8m","ValueFrom":null},{"Name":"FLUENTD_CPU_LIMIT","Value":"","ValueFrom":{"ConfigMapKeyRef":null,"FieldRef":null,"ResourceFieldRef":{"ContainerName":"fluentd-elasticsearch","Divisor":"0","Resource":"limits.cpu"},"SecretKeyRef":null}},{"Name":"FLUENTD_MEMORY_LIMIT","Value":"","ValueFrom":{"ConfigMapKeyRef":null,"FieldRef":null,"ResourceFieldRef":{"ContainerName":"fluentd-elasticsearch","Divisor":"0","Resource":"limits.memory"},"SecretKeyRef":null}},{"Name":"FILE_BUFFER_LIMIT","Value":"256Mi","ValueFrom":null},{"Name":"AUDIT_CONTAINER_ENGINE","Value":"true","ValueFrom":null},{"Name":"NODE_NAME","Value":"","ValueFrom":{"ConfigMapKeyRef":null,"FieldRef":{"APIVersion":"v1","FieldPath":"spec.nodeName"},"ResourceFieldRef":null,"SecretKeyRef":null}}],"EnvFrom":null,"Image":"openshift/origin-logging-fluentd:latest","ImagePullPolicy":"IfNotPresent","Lifecycle":null,"LivenessProbe":null,"Name":"fluentd-elasticsearch","Ports":null,"ReadinessProbe":null,"Resources":{"Limits":{"memory":"256Mi"},"Requests":{"cpu":"100m","memory":"256Mi"}},"SecurityContext":{"AllowPrivilegeEscalation":null,"Capabilities":null,"Privileged":true,"ReadOnlyRootFilesystem":null,"RunAsNonRoot":null,"RunAsUser":null,"SELinuxOptions":null},"Stdin":false,"StdinOnce":false,"TTY":false,"TerminationMessagePath":"/dev/termination-log","TerminationMessagePolicy":"File","VolumeMounts":[{"MountPath":"/run/log/journal","MountPropagation":null,"Name":"runlogjournal","ReadOnly":false,"SubPath":""},{"MountPath":"/var/log","MountPropagation":null,"Name":"varlog","ReadOnly":false,"SubPath":""},{"MountPath":"/var/lib/docker/containers","MountPropagation":null,"Name":"varlibdockercontainers","ReadOnly":true,"SubPath":""},{"MountPath":"/etc/fluent/configs.d/user","MountPropagation":null,"Name":"config","ReadOnly":true,"SubPath":""},{"MountPath":"/etc/fluent/keys","MountPropagation":null,"Name":"certs","ReadOnly":true,"SubPath":""},{"MountPath":"/etc/docker-hostname","MountPropagation":null,"Name":"dockerhostname","ReadOnly":true,"SubPath":""},{"MountPath":"/etc/localtime","MountPropagation":null,"Name":"localtime","ReadOnly":true,"SubPath":""},{"MountPath":"/etc/sysconfig/docker","MountPropagation":null,"Name":"dockercfg","ReadOnly":true,"SubPath":""},{"M
ountPath":"/etc/docker","MountPropagation":null,"Name":"dockerdaemoncfg","ReadOnly":true,"SubPath":""},{"MountPath":"/var/lib/fluentd","MountPropagation":null,"Name":"filebufferstorage","ReadOnly":false,"SubPath":""}],"WorkingDir":""}],"DNSPolicy":"ClusterFirst","HostAliases":null,"Hostname":"","ImagePullSecrets":null,"InitContainers":null,"NodeName":"","NodeSelector":{"logging-infra-fluentd":"true"},"Priority":null,"PriorityClassName":"","RestartPolicy":"Always","SchedulerName":"default-scheduler","SecurityContext":{"FSGroup":null,"HostIPC":false,"HostNetwork":false,"HostPID":false,"RunAsNonRoot":null,"RunAsUser":null,"SELinuxOptions":null,"SupplementalGroups":null},"ServiceAccountName":"aggregated-logging-fluentd","Subdomain":"","TerminationGracePeriodSeconds":30,"Tolerations":null,"Volumes":[{"AWSElasticBlockStore":null,"AzureDisk":null,"AzureFile":null,"CephFS":null,"Cinder":null,"ConfigMap":null,"DownwardAPI":null,"EmptyDir":null,"FC":null,"FlexVolume":null,"Flocker":null,"GCEPersistentDisk":null,"GitRepo":null,"Glusterfs":null,"HostPath":{"Path":"/run/log/journal","Type":""},"ISCSI":null,"NFS":null,"Name":"runlogjournal","PersistentVolumeClaim":null,"PhotonPersistentDisk":null,"PortworxVolume":null,"Projected":null,"Quobyte":null,"RBD":null,"ScaleIO":null,"Secret":null,"StorageOS":null,"VsphereVolume":null},{"AWSElasticBlockStore":null,"AzureDisk":null,"AzureFile":null,"CephFS":null,"Cinder":null,"ConfigMap":null,"DownwardAPI":null,"EmptyDir":null,"FC":null,"FlexVolume":null,"Flocker":null,"GCEPersistentDisk":null,"GitRepo":null,"Glusterfs":null,"HostPath":{"Path":"/var/log","Type":""},"ISCSI":null,"NFS":null,"Name":"varlog","PersistentVolumeClaim":null,"PhotonPersistentDisk":null,"PortworxVolume":null,"Projected":null,"Quobyte":null,"RBD":null,"ScaleIO":null,"Secret":null,"StorageOS":null,"VsphereVolume":null},{"AWSElasticBlockStore":null,"AzureDisk":null,"AzureFile":null,"CephFS":null,"Cinder":null,"ConfigMap":null,"DownwardAPI":null,"EmptyDir":null,"FC":null,"FlexVolume":null,"Flocker":null,"GCEPersistentDisk":null,"GitRepo":null,"Glusterfs":null,"HostPath":{"Path":"/var/lib/docker/containers","Type":""},"ISCSI":null,"NFS":null,"Name":"varlibdockercontainers","PersistentVolumeClaim":null,"PhotonPersistentDisk":null,"PortworxVolume":null,"Projected":null,"Quobyte":null,"RBD":null,"ScaleIO":null,"Secret":null,"StorageOS":null,"VsphereVolume":null},{"AWSElasticBlockStore":null,"AzureDisk":null,"AzureFile":null,"CephFS":null,"Cinder":null,"ConfigMap":{"DefaultMode":420,"Items":null,"Name":"logging-fluentd","Optional":null},"DownwardAPI":null,"EmptyDir":null,"FC":null,"FlexVolume":null,"Flocker":null,"GCEPersistentDisk":null,"GitRepo":null,"Glusterfs":null,"HostPath":null,"ISCSI":null,"NFS":null,"Name":"config","PersistentVolumeClaim":null,"PhotonPersistentDisk":null,"PortworxVolume":null,"Projected":null,"Quobyte":null,"RBD":null,"ScaleIO":null,"Secret":null,"StorageOS":null,"VsphereVolume":null},{"AWSElasticBlockStore":null,"AzureDisk":null,"AzureFile":null,"CephFS":null,"Cinder":null,"ConfigMap":null,"DownwardAPI":null,"EmptyDir":null,"FC":null,"FlexVolume":null,"Flocker":null,"GCEPersistentDisk":null,"GitRepo":null,"Glusterfs":null,"HostPath":null,"ISCSI":null,"NFS":null,"Name":"certs","PersistentVolumeClaim":null,"PhotonPersistentDisk":null,"PortworxVolume":null,"Projected":null,"Quobyte":null,"RBD":null,"ScaleIO":null,"Secret":{"DefaultMode":420,"Items":null,"Optional":null,"SecretName":"logging-fluentd"},"StorageOS":null,"VsphereVolume":null},{"AWSElasticBlockStore":null,"AzureDi
sk":null,"AzureFile":null,"CephFS":null,"Cinder":null,"ConfigMap":null,"DownwardAPI":null,"EmptyDir":null,"FC":null,"FlexVolume":null,"Flocker":null,"GCEPersistentDisk":null,"GitRepo":null,"Glusterfs":null,"HostPath":{"Path":"/etc/hostname","Type":""},"ISCSI":null,"NFS":null,"Name":"dockerhostname","PersistentVolumeClaim":null,"PhotonPersistentDisk":null,"PortworxVolume":null,"Projected":null,"Quobyte":null,"RBD":null,"ScaleIO":null,"Secret":null,"StorageOS":null,"VsphereVolume":null},{"AWSElasticBlockStore":null,"AzureDisk":null,"AzureFile":null,"CephFS":null,"Cinder":null,"ConfigMap":null,"DownwardAPI":null,"EmptyDir":null,"FC":null,"FlexVolume":null,"Flocker":null,"GCEPersistentDisk":null,"GitRepo":null,"Glusterfs":null,"HostPath":{"Path":"/etc/localtime","Type":""},"ISCSI":null,"NFS":null,"Name":"localtime","PersistentVolumeClaim":null,"PhotonPersistentDisk":null,"PortworxVolume":null,"Projected":null,"Quobyte":null,"RBD":null,"ScaleIO":null,"Secret":null,"StorageOS":null,"VsphereVolume":null},{"AWSElasticBlockStore":null,"AzureDisk":null,"AzureFile":null,"CephFS":null,"Cinder":null,"ConfigMap":null,"DownwardAPI":null,"EmptyDir":null,"FC":null,"FlexVolume":null,"Flocker":null,"GCEPersistentDisk":null,"GitRepo":null,"Glusterfs":null,"HostPath":{"Path":"/etc/sysconfig/docker","Type":""},"ISCSI":null,"NFS":null,"Name":"dockercfg","PersistentVolumeClaim":null,"PhotonPersistentDisk":null,"PortworxVolume":null,"Projected":null,"Quobyte":null,"RBD":null,"ScaleIO":null,"Secret":null,"StorageOS":null,"VsphereVolume":null},{"AWSElasticBlockStore":null,"AzureDisk":null,"AzureFile":null,"CephFS":null,"Cinder":null,"ConfigMap":null,"DownwardAPI":null,"EmptyDir":null,"FC":null,"FlexVolume":null,"Flocker":null,"GCEPersistentDisk":null,"GitRepo":null,"Glusterfs":null,"HostPath":{"Path":"/etc/docker","Type":""},"ISCSI":null,"NFS":null,"Name":"dockerdaemoncfg","PersistentVolumeClaim":null,"PhotonPersistentDisk":null,"PortworxVolume":null,"Projected":null,"Quobyte":null,"RBD":null,"ScaleIO":null,"Secret":null,"StorageOS":null,"VsphereVolume":null},{"AWSElasticBlockStore":null,"AzureDisk":null,"AzureFile":null,"CephFS":null,"Cinder":null,"ConfigMap":null,"DownwardAPI":null,"EmptyDir":null,"FC":null,"FlexVolume":null,"Flocker":null,"GCEPersistentDisk":null,"GitRepo":null,"Glusterfs":null,"HostPath":{"Path":"/var/lib/fluentd","Type":""},"ISCSI":null,"NFS":null,"Name":"filebufferstorage","PersistentVolumeClaim":null,"PhotonPersistentDisk":null,"PortworxVolume":null,"Projected":null,"Quobyte":null,"RBD":null,"ScaleIO":null,"Secret":null,"StorageOS":null,"VsphereVolume":null}]},"creationTimestamp":null,"labels":{"component":"fluentd","logging-infra":"fluentd","provider":"openshift"},"name":"fluentd-elasticsearch"},"TemplateGeneration":1,"UpdateStrategy":{"RollingUpdate":{"MaxUnavailable":1},"Type":"RollingUpdate"}},"Status":{"CollisionCount":null,"CurrentNumberScheduled":1,"DesiredNumberScheduled":1,"NumberAvailable":0,"NumberMisscheduled":0,"NumberReady":0,"NumberUnavailable":1,"ObservedGeneration":1,"UpdatedNumberScheduled":1},"creationTimestamp":null,"generation":1,"labels":{"component":"fluentd","logging-infra":"fluentd","provider":"openshift"},"name":"logging-fluentd"}'
[INFO] Logging test suite test-remote-syslog succeeded at Fri Dec  8 20:52:31 UTC 2017
Running hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success...
SUCCESS after 0.330s: hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success
Running hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success...
SUCCESS after 0.345s: hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success
[INFO] Logging test suite test-utf8-characters started at Fri Dec  8 20:52:32 UTC 2017
[INFO] Starting utf8-characters test at Fri Dec  8 20:52:32 UTC 2017
Running hack/testing/util.sh:267: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.018s: hack/testing/util.sh:267: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
Running hack/testing/util.sh:271: executing 'sudo test -f /var/log/es-containers.log.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.018s: hack/testing/util.sh:271: executing 'sudo test -f /var/log/es-containers.log.pos' expecting success; re-trying every 0.2s until completion or 60.000s
No resources found.
Running test/utf8-characters.sh:45: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"07eb662a-4dd8-4933-948d-13d792892ccb"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 300.000s...
SUCCESS after 5.520s: test/utf8-characters.sh:45: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"07eb662a-4dd8-4933-948d-13d792892ccb"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 300.000s
[INFO] Checking that message was successfully processed...
Running test/utf8-characters.sh:48: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /.operations.*/_search -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"07eb662a-4dd8-4933-948d-13d792892ccb"}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-utf8-characters.py '07eb662a-4dd8-4933-948d-13d792892ccb-µ' 07eb662a-4dd8-4933-948d-13d792892ccb' expecting success...
SUCCESS after 0.481s: test/utf8-characters.sh:48: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /.operations.*/_search -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"07eb662a-4dd8-4933-948d-13d792892ccb"}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-utf8-characters.py '07eb662a-4dd8-4933-948d-13d792892ccb-µ' 07eb662a-4dd8-4933-948d-13d792892ccb' expecting success
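The two checks above verify that a journal record containing a non-ASCII character (µ) survives the fluentd-to-Elasticsearch pipeline intact. A sketch of the same round trip by hand; seeding the record with logger is an assumption about how the test injects it, and the pod name and UUID are this run's placeholders:

uuid=07eb662a-4dd8-4933-948d-13d792892ccb
logger -t "$uuid" "${uuid}-µ"            # write a journal entry whose message contains µ
curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 "/.operations.*/_search" -X POST \
  -d "{\"query\":{\"term\":{\"systemd.u.SYSLOG_IDENTIFIER\":\"$uuid\"}}}" \
  | python -c 'import json,sys; print(json.load(sys.stdin)["hits"]["hits"][0]["_source"]["message"])'
# expected output: 07eb662a-4dd8-4933-948d-13d792892ccb-µ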
[INFO] utf8-characters test finished at Fri Dec  8 20:52:39 UTC 2017
[INFO] Logging test suite test-utf8-characters succeeded at Fri Dec  8 20:52:39 UTC 2017
Running hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success...
SUCCESS after 0.283s: hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success
Running hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success...
SUCCESS after 0.253s: hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success
[INFO] Logging test suite test-viaq-data-model started at Fri Dec  8 20:52:39 UTC 2017
Running test/viaq-data-model.sh:50: executing 'oc get pods -l component=es' expecting any result and text '^logging-es.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.268s: test/viaq-data-model.sh:50: executing 'oc get pods -l component=es' expecting any result and text '^logging-es.* Running '; re-trying every 0.2s until completion or 60.000s
Running test/viaq-data-model.sh:51: executing 'oc get pods -l component=kibana' expecting any result and text '^logging-kibana-.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.240s: test/viaq-data-model.sh:51: executing 'oc get pods -l component=kibana' expecting any result and text '^logging-kibana-.* Running '; re-trying every 0.2s until completion or 60.000s
Running test/viaq-data-model.sh:52: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.261s: test/viaq-data-model.sh:52: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
[INFO] Starting viaq-data-model test at Fri Dec 8 20:52:41 UTC 2017
No resources found.
No resources found.
Running test/viaq-data-model.sh:74: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s...
SUCCESS after 0.239s: test/viaq-data-model.sh:74: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s
command terminated with exit code 137
Running test/viaq-data-model.sh:112: executing 'flush_fluentd_pos_files' expecting success...
SUCCESS after 0.084s: test/viaq-data-model.sh:112: executing 'flush_fluentd_pos_files' expecting success
Running test/viaq-data-model.sh:114: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 4.093s: test/viaq-data-model.sh:114: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
No resources found.
No resources found.
Running hack/testing/util.sh:267: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.446s: hack/testing/util.sh:267: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
Running hack/testing/util.sh:271: executing 'sudo test -f /var/log/es-containers.log.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.017s: hack/testing/util.sh:271: executing 'sudo test -f /var/log/es-containers.log.pos' expecting success; re-trying every 0.2s until completion or 60.000s
Running hack/testing/util.sh:198: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.logging.*/_count -X POST -d '{"query":{"match_phrase":{"message":"GET /0ebd875c-bb9f-40a7-81a0-c1d3e4437e0b 404 "}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 3.687s: hack/testing/util.sh:198: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.logging.*/_count -X POST -d '{"query":{"match_phrase":{"message":"GET /0ebd875c-bb9f-40a7-81a0-c1d3e4437e0b 404 "}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s
Running hack/testing/util.sh:218: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"30979956-fb0a-4195-ba75-702409f604ce"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 0.445s: hack/testing/util.sh:218: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"30979956-fb0a-4195-ba75-702409f604ce"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s
Running test/viaq-data-model.sh:130: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.logging.*/_search -X POST -d '{"query":{"match_phrase":{"message":"GET /0ebd875c-bb9f-40a7-81a0-c1d3e4437e0b 404 "}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test1' expecting success...
SUCCESS after 0.470s: test/viaq-data-model.sh:130: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.logging.*/_search -X POST -d '{"query":{"match_phrase":{"message":"GET /0ebd875c-bb9f-40a7-81a0-c1d3e4437e0b 404 "}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test1' expecting success
Running test/viaq-data-model.sh:133: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /.operations.*/_search -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"30979956-fb0a-4195-ba75-702409f604ce"}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test1' expecting success...
SUCCESS after 0.428s: test/viaq-data-model.sh:133: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /.operations.*/_search -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"30979956-fb0a-4195-ba75-702409f604ce"}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test1' expecting success
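test-viaq-data-model.py is what actually inspects each hit; the curl_es calls above only fetch the documents. A minimal sketch of the kind of assertion it makes, using an abbreviated set of ViaQ top-level fields (the authoritative per-mode field lists for test1 through test5 live in the script itself); the pod name and message UUID are placeholders:

curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.logging.*/_search -X POST \
  -d '{"query":{"match_phrase":{"message":"GET /0ebd875c-bb9f-40a7-81a0-c1d3e4437e0b 404 "}}}' \
  | python -c '
import json, sys
src = json.load(sys.stdin)["hits"]["hits"][0]["_source"]
for field in ("@timestamp", "message", "hostname", "pipeline_metadata"):
    assert field in src, "missing ViaQ field: " + field
assert "CONTAINER_NAME" not in src, "raw collector field leaked into the record"
print("hit passes the (abbreviated) ViaQ field checks")
'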
Running test/viaq-data-model.sh:144: executing 'oc get pod logging-fluentd-z9j6r' expecting failure; re-trying every 0.2s until completion or 60.000s...
command terminated with exit code 137
SUCCESS after 11.984s: test/viaq-data-model.sh:144: executing 'oc get pod logging-fluentd-z9j6r' expecting failure; re-trying every 0.2s until completion or 60.000s
Running test/viaq-data-model.sh:145: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 3.357s: test/viaq-data-model.sh:145: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
No resources found.
No resources found.
Running hack/testing/util.sh:267: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.018s: hack/testing/util.sh:267: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
Running hack/testing/util.sh:271: executing 'sudo test -f /var/log/es-containers.log.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.017s: hack/testing/util.sh:271: executing 'sudo test -f /var/log/es-containers.log.pos' expecting success; re-trying every 0.2s until completion or 60.000s
Running hack/testing/util.sh:198: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.logging.*/_count -X POST -d '{"query":{"match_phrase":{"message":"GET /bf7a70be-bfe3-44e2-b1d4-91d32b9943dd 404 "}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 5.760s: hack/testing/util.sh:198: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.logging.*/_count -X POST -d '{"query":{"match_phrase":{"message":"GET /bf7a70be-bfe3-44e2-b1d4-91d32b9943dd 404 "}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s
Running hack/testing/util.sh:218: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"678c18f9-eeb0-4a3f-b4e3-231ff6b8f713"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 0.553s: hack/testing/util.sh:218: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"678c18f9-eeb0-4a3f-b4e3-231ff6b8f713"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s
Running test/viaq-data-model.sh:151: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.logging.*/_search -X POST -d '{"query":{"match_phrase":{"message":"GET /bf7a70be-bfe3-44e2-b1d4-91d32b9943dd 404 "}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test2' expecting success...
SUCCESS after 0.427s: test/viaq-data-model.sh:151: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.logging.*/_search -X POST -d '{"query":{"match_phrase":{"message":"GET /bf7a70be-bfe3-44e2-b1d4-91d32b9943dd 404 "}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test2' expecting success
Running test/viaq-data-model.sh:154: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /.operations.*/_search -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"678c18f9-eeb0-4a3f-b4e3-231ff6b8f713"}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test2' expecting success...
SUCCESS after 0.458s: test/viaq-data-model.sh:154: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /.operations.*/_search -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"678c18f9-eeb0-4a3f-b4e3-231ff6b8f713"}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test2' expecting success
Running test/viaq-data-model.sh:159: executing 'oc get pod logging-fluentd-8g77t' expecting failure; re-trying every 0.2s until completion or 60.000s...
command terminated with exit code 137
SUCCESS after 7.115s: test/viaq-data-model.sh:159: executing 'oc get pod logging-fluentd-8g77t' expecting failure; re-trying every 0.2s until completion or 60.000s
Running test/viaq-data-model.sh:160: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 2.268s: test/viaq-data-model.sh:160: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
No resources found.
No resources found.
Running hack/testing/util.sh:267: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.032s: hack/testing/util.sh:267: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
Running hack/testing/util.sh:271: executing 'sudo test -f /var/log/es-containers.log.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.019s: hack/testing/util.sh:271: executing 'sudo test -f /var/log/es-containers.log.pos' expecting success; re-trying every 0.2s until completion or 60.000s
Running hack/testing/util.sh:198: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.logging.*/_count -X POST -d '{"query":{"match_phrase":{"message":"GET /33fb5579-47f3-49a2-8905-9f16c18ba582 404 "}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 6.335s: hack/testing/util.sh:198: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.logging.*/_count -X POST -d '{"query":{"match_phrase":{"message":"GET /33fb5579-47f3-49a2-8905-9f16c18ba582 404 "}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s
Running hack/testing/util.sh:218: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"d6570840-6a9b-4a79-b762-dba617e99fb9"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 0.523s: hack/testing/util.sh:218: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"d6570840-6a9b-4a79-b762-dba617e99fb9"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s
Running test/viaq-data-model.sh:166: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.logging.*/_search -X POST -d '{"query":{"match_phrase":{"message":"GET /33fb5579-47f3-49a2-8905-9f16c18ba582 404 "}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test3' expecting success...
SUCCESS after 0.471s: test/viaq-data-model.sh:166: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.logging.*/_search -X POST -d '{"query":{"match_phrase":{"message":"GET /33fb5579-47f3-49a2-8905-9f16c18ba582 404 "}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test3' expecting success
Running test/viaq-data-model.sh:170: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /.operations.*/_search -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"d6570840-6a9b-4a79-b762-dba617e99fb9"}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test3' expecting success...
SUCCESS after 0.437s: test/viaq-data-model.sh:170: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /.operations.*/_search -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"d6570840-6a9b-4a79-b762-dba617e99fb9"}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test3' expecting success
Running test/viaq-data-model.sh:175: executing 'oc get pod logging-fluentd-vkqdr' expecting failure; re-trying every 0.2s until completion or 60.000s...
command terminated with exit code 137
SUCCESS after 6.634s: test/viaq-data-model.sh:175: executing 'oc get pod logging-fluentd-vkqdr' expecting failure; re-trying every 0.2s until completion or 60.000s
Running test/viaq-data-model.sh:176: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 2.228s: test/viaq-data-model.sh:176: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
No resources found.
No resources found.
Running hack/testing/util.sh:267: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.030s: hack/testing/util.sh:267: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
Running hack/testing/util.sh:271: executing 'sudo test -f /var/log/es-containers.log.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.026s: hack/testing/util.sh:271: executing 'sudo test -f /var/log/es-containers.log.pos' expecting success; re-trying every 0.2s until completion or 60.000s
Running hack/testing/util.sh:198: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.logging.*/_count -X POST -d '{"query":{"match_phrase":{"message":"GET /a260bdc1-204c-4500-a7ab-83a8879f5750 404 "}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 5.786s: hack/testing/util.sh:198: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.logging.*/_count -X POST -d '{"query":{"match_phrase":{"message":"GET /a260bdc1-204c-4500-a7ab-83a8879f5750 404 "}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s
Running hack/testing/util.sh:218: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"c1a57edb-c9d3-4b1d-858f-648618e47ee9"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 0.445s: hack/testing/util.sh:218: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"c1a57edb-c9d3-4b1d-858f-648618e47ee9"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s
Running test/viaq-data-model.sh:182: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.logging.*/_search -X POST -d '{"query":{"match_phrase":{"message":"GET /a260bdc1-204c-4500-a7ab-83a8879f5750 404 "}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test4' expecting success...
SUCCESS after 0.449s: test/viaq-data-model.sh:182: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.logging.*/_search -X POST -d '{"query":{"match_phrase":{"message":"GET /a260bdc1-204c-4500-a7ab-83a8879f5750 404 "}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test4' expecting success
Running test/viaq-data-model.sh:186: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /.operations.*/_search -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"c1a57edb-c9d3-4b1d-858f-648618e47ee9"}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test4' expecting success...
SUCCESS after 0.449s: test/viaq-data-model.sh:186: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /.operations.*/_search -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"c1a57edb-c9d3-4b1d-858f-648618e47ee9"}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test4' expecting success
Running test/viaq-data-model.sh:191: executing 'oc get pod logging-fluentd-84jlz' expecting failure; re-trying every 0.2s until completion or 60.000s...
command terminated with exit code 137
SUCCESS after 4.075s: test/viaq-data-model.sh:191: executing 'oc get pod logging-fluentd-84jlz' expecting failure; re-trying every 0.2s until completion or 60.000s
Running test/viaq-data-model.sh:192: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 1.801s: test/viaq-data-model.sh:192: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
No resources found.
No resources found.
Running hack/testing/util.sh:267: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.034s: hack/testing/util.sh:267: executing 'sudo test -f /var/log/journal.pos' expecting success; re-trying every 0.2s until completion or 60.000s
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
Running hack/testing/util.sh:271: executing 'sudo test -f /var/log/es-containers.log.pos' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.022s: hack/testing/util.sh:271: executing 'sudo test -f /var/log/es-containers.log.pos' expecting success; re-trying every 0.2s until completion or 60.000s
Running hack/testing/util.sh:198: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.logging.*/_count -X POST -d '{"query":{"match_phrase":{"message":"GET /1f51037c-2543-4e12-bbea-ca310124fce3 404 "}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 4.698s: hack/testing/util.sh:198: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.logging.*/_count -X POST -d '{"query":{"match_phrase":{"message":"GET /1f51037c-2543-4e12-bbea-ca310124fce3 404 "}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s
Running hack/testing/util.sh:218: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"d6008dc4-7f88-43cb-9d71-82f218f057c3"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 0.533s: hack/testing/util.sh:218: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /.operations.*/_count -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"d6008dc4-7f88-43cb-9d71-82f218f057c3"}}}' | get_count_from_json' expecting any result and text '1'; re-trying every 0.2s until completion or 600.000s
Running test/viaq-data-model.sh:206: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.logging.*/_search -X POST -d '{"query":{"match_phrase":{"message":"GET /1f51037c-2543-4e12-bbea-ca310124fce3 404 "}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test5 allow_empty' expecting success...
SUCCESS after 0.501s: test/viaq-data-model.sh:206: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.logging.*/_search -X POST -d '{"query":{"match_phrase":{"message":"GET /1f51037c-2543-4e12-bbea-ca310124fce3 404 "}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test5 allow_empty' expecting success
Running test/viaq-data-model.sh:210: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /.operations.*/_search -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"d6008dc4-7f88-43cb-9d71-82f218f057c3"}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test5 allow_empty' expecting success...
SUCCESS after 0.449s: test/viaq-data-model.sh:210: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /.operations.*/_search -X POST -d '{"query":{"term":{"systemd.u.SYSLOG_IDENTIFIER":"d6008dc4-7f88-43cb-9d71-82f218f057c3"}}}' |                          python /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/test-viaq-data-model.py test5 allow_empty' expecting success
[INFO] viaq-data-model test finished at Fri Dec 8 20:54:17 UTC 2017
Running test/viaq-data-model.sh:33: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s...
SUCCESS after 0.239s: test/viaq-data-model.sh:33: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '0'; re-trying every 0.2s until completion or 120.000s
command terminated with exit code 137
Running test/viaq-data-model.sh:40: executing 'flush_fluentd_pos_files' expecting success...
SUCCESS after 0.068s: test/viaq-data-model.sh:40: executing 'flush_fluentd_pos_files' expecting success
Running test/viaq-data-model.sh:42: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 2.476s: test/viaq-data-model.sh:42: executing 'oc get pods -l component=fluentd' expecting any result and text '^logging-fluentd-.* Running '; re-trying every 0.2s until completion or 60.000s
[INFO] Logging test suite test-viaq-data-model succeeded at Fri Dec  8 20:54:22 UTC 2017
Running hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success...
SUCCESS after 0.416s: hack/testing/entrypoint.sh:110: executing 'oc login -u system:admin' expecting success
Running hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success...
SUCCESS after 0.354s: hack/testing/entrypoint.sh:111: executing 'oc project logging' expecting success
[INFO] Logging test suite test-zzz-correct-index-names started at Fri Dec  8 20:54:23 UTC 2017
No resources found.
No resources found.
Running test/zzz-correct-index-names.sh:29: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /_cat/indices' expecting success and not text 'project\.default\.'...
SUCCESS after 0.466s: test/zzz-correct-index-names.sh:29: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /_cat/indices' expecting success and not text 'project\.default\.'
Running test/zzz-correct-index-names.sh:30: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.default.*/_count | get_count_from_json' expecting success and text '^0$'...
SUCCESS after 0.400s: test/zzz-correct-index-names.sh:30: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.default.*/_count | get_count_from_json' expecting success and text '^0$'
Running test/zzz-correct-index-names.sh:31: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.*/_count -X POST -d '{"query":{"term":{"kubernetes.namespace_name":"default"}}}' | get_count_from_json' expecting success and text '^0$'...
SUCCESS after 0.494s: test/zzz-correct-index-names.sh:31: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.*/_count -X POST -d '{"query":{"term":{"kubernetes.namespace_name":"default"}}}' | get_count_from_json' expecting success and text '^0$'
Running test/zzz-correct-index-names.sh:33: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /_cat/indices' expecting success and not text 'project\.default\.'...
SUCCESS after 0.495s: test/zzz-correct-index-names.sh:33: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /_cat/indices' expecting success and not text 'project\.default\.'
Running test/zzz-correct-index-names.sh:34: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /project.default.*/_count | get_count_from_json' expecting success and text '^0$'...
SUCCESS after 0.414s: test/zzz-correct-index-names.sh:34: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /project.default.*/_count | get_count_from_json' expecting success and text '^0$'
Running test/zzz-correct-index-names.sh:35: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /project.*/_count -X POST -d '{"query":{"term":{"kubernetes.namespace_name":"default"}}}' | get_count_from_json' expecting success and text '^0$'...
SUCCESS after 0.423s: test/zzz-correct-index-names.sh:35: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /project.*/_count -X POST -d '{"query":{"term":{"kubernetes.namespace_name":"default"}}}' | get_count_from_json' expecting success and text '^0$'
Running test/zzz-correct-index-names.sh:40: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /.operations.*/_count -X POST -d '{"query":{"term":{"kubernetes.namespace_name":"default"}}}' | get_count_from_json' expecting success and not text '^0$'...
SUCCESS after 0.445s: test/zzz-correct-index-names.sh:40: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /.operations.*/_count -X POST -d '{"query":{"term":{"kubernetes.namespace_name":"default"}}}' | get_count_from_json' expecting success and not text '^0$'
Running test/zzz-correct-index-names.sh:29: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /_cat/indices' expecting success and not text 'project\.openshift\.'...
SUCCESS after 0.441s: test/zzz-correct-index-names.sh:29: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /_cat/indices' expecting success and not text 'project\.openshift\.'
Running test/zzz-correct-index-names.sh:30: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.openshift.*/_count | get_count_from_json' expecting success and text '^0$'...
SUCCESS after 0.463s: test/zzz-correct-index-names.sh:30: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.openshift.*/_count | get_count_from_json' expecting success and text '^0$'
Running test/zzz-correct-index-names.sh:31: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.*/_count -X POST -d '{"query":{"term":{"kubernetes.namespace_name":"openshift"}}}' | get_count_from_json' expecting success and text '^0$'...
SUCCESS after 0.417s: test/zzz-correct-index-names.sh:31: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.*/_count -X POST -d '{"query":{"term":{"kubernetes.namespace_name":"openshift"}}}' | get_count_from_json' expecting success and text '^0$'
Running test/zzz-correct-index-names.sh:33: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /_cat/indices' expecting success and not text 'project\.openshift\.'...
SUCCESS after 0.428s: test/zzz-correct-index-names.sh:33: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /_cat/indices' expecting success and not text 'project\.openshift\.'
Running test/zzz-correct-index-names.sh:34: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /project.openshift.*/_count | get_count_from_json' expecting success and text '^0$'...
SUCCESS after 0.495s: test/zzz-correct-index-names.sh:34: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /project.openshift.*/_count | get_count_from_json' expecting success and text '^0$'
Running test/zzz-correct-index-names.sh:35: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /project.*/_count -X POST -d '{"query":{"term":{"kubernetes.namespace_name":"openshift"}}}' | get_count_from_json' expecting success and text '^0$'...
SUCCESS after 0.433s: test/zzz-correct-index-names.sh:35: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /project.*/_count -X POST -d '{"query":{"term":{"kubernetes.namespace_name":"openshift"}}}' | get_count_from_json' expecting success and text '^0$'
Running test/zzz-correct-index-names.sh:29: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /_cat/indices' expecting success and not text 'project\.openshift-infra\.'...
SUCCESS after 0.475s: test/zzz-correct-index-names.sh:29: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /_cat/indices' expecting success and not text 'project\.openshift-infra\.'
Running test/zzz-correct-index-names.sh:30: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.openshift-infra.*/_count | get_count_from_json' expecting success and text '^0$'...
SUCCESS after 0.478s: test/zzz-correct-index-names.sh:30: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.openshift-infra.*/_count | get_count_from_json' expecting success and text '^0$'
Running test/zzz-correct-index-names.sh:31: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.*/_count -X POST -d '{"query":{"term":{"kubernetes.namespace_name":"openshift-infra"}}}' | get_count_from_json' expecting success and text '^0$'...
SUCCESS after 0.417s: test/zzz-correct-index-names.sh:31: executing 'curl_es logging-es-data-master-ztl0kza1-1-6rr84 /project.*/_count -X POST -d '{"query":{"term":{"kubernetes.namespace_name":"openshift-infra"}}}' | get_count_from_json' expecting success and text '^0$'
Running test/zzz-correct-index-names.sh:33: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /_cat/indices' expecting success and not text 'project\.openshift-infra\.'...
SUCCESS after 0.453s: test/zzz-correct-index-names.sh:33: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /_cat/indices' expecting success and not text 'project\.openshift-infra\.'
Running test/zzz-correct-index-names.sh:34: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /project.openshift-infra.*/_count | get_count_from_json' expecting success and text '^0$'...
SUCCESS after 0.492s: test/zzz-correct-index-names.sh:34: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /project.openshift-infra.*/_count | get_count_from_json' expecting success and text '^0$'
Running test/zzz-correct-index-names.sh:35: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /project.*/_count -X POST -d '{"query":{"term":{"kubernetes.namespace_name":"openshift-infra"}}}' | get_count_from_json' expecting success and text '^0$'...
SUCCESS after 0.451s: test/zzz-correct-index-names.sh:35: executing 'curl_es logging-es-ops-data-master-wmi6lbzv-1-8g9z9 /project.*/_count -X POST -d '{"query":{"term":{"kubernetes.namespace_name":"openshift-infra"}}}' | get_count_from_json' expecting success and text '^0$'
[INFO] Logging test suite test-zzz-correct-index-names succeeded at Fri Dec  8 20:54:33 UTC 2017
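[NOTE] test/zzz-correct-index-names.sh repeats the same three assertions for each operations namespace (default, openshift, openshift-infra) against both the main and the ops cluster: no project.<ns>.* index appears in _cat/indices, a count against project.<ns>.* is zero, and no project.* document carries kubernetes.namespace_name=<ns>. A hedged sketch of that loop, assuming the curl_es/get_count_from_json helpers sketched earlier (or the real ones from hack/testing/util.sh) are sourced; variable names here are illustrative:
    es_pod=logging-es-data-master-ztl0kza1-1-6rr84
    es_ops_pod=logging-es-ops-data-master-wmi6lbzv-1-8g9z9
    for ns in default openshift openshift-infra; do
        for pod in "$es_pod" "$es_ops_pod"; do
            # 1. Operations namespaces must not get a dedicated project.<ns>.* index.
            if curl_es "$pod" /_cat/indices | grep -q "project\.${ns}\."; then
                echo "FAIL: found project.${ns}.* index on ${pod}"; exit 1
            fi
            # 2. A count against project.<ns>.* must come back zero.
            [ "$(curl_es "$pod" "/project.${ns}.*/_count" | get_count_from_json)" = 0 ] || exit 1
            # 3. No project.* record may be tagged kubernetes.namespace_name=<ns>.
            [ "$(curl_es "$pod" /project.*/_count -X POST \
                  -d "{\"query\":{\"term\":{\"kubernetes.namespace_name\":\"${ns}\"}}}" \
                  | get_count_from_json)" = 0 ] || exit 1
        done
    done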
[INFO] [CLEANUP] Beginning cleanup routines...
/data/src/github.com/openshift/origin-aggregated-logging/hack/lib/cleanup.sh: line 258: 49389 Terminated              monitor_fluentd_top
/data/src/github.com/openshift/origin-aggregated-logging/hack/lib/cleanup.sh: line 258: 49390 Terminated              monitor_fluentd_pos
/data/src/github.com/openshift/origin-aggregated-logging/hack/lib/cleanup.sh: line 258: 49391 Terminated              monitor_journal_lograte
[INFO] [CLEANUP] Dumping cluster events to _output/scripts/entrypoint/artifacts/events.txt
Logged into "https://ip-172-18-9-251.ec2.internal:8443" as "system:admin" using existing credentials.

You have access to the following projects and can switch between them with 'oc project <projectname>':

    default
    kube-public
    kube-service-catalog
    kube-system
  * logging
    management-infra
    mux-undefined
    openshift
    openshift-ansible-service-broker
    openshift-infra
    openshift-node
    openshift-template-service-broker

Using project "logging".
[INFO] [CLEANUP] Dumping container logs to _output/scripts/entrypoint/logs/containers
[INFO] [CLEANUP] Truncating log files over 200M
+ set +o xtrace
########## FINISHED STAGE: SUCCESS: RUN LOGGING TESTS [00h 30m 03s] ##########
[PostBuildScript] - Executing post build scripts.
[workspace] $ /bin/bash /tmp/jenkins7992122969960685509.sh
########## STARTING STAGE: DOWNLOAD ARTIFACTS FROM THE REMOTE HOST ##########
+ [[ -s /var/lib/jenkins/jobs/test_branch_openshift_ansible_logging2/workspace/activate ]]
+ source /var/lib/jenkins/jobs/test_branch_openshift_ansible_logging2/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/8024b3d1997e58365d7a3b9d62fe2ed7b6bded7f
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/8024b3d1997e58365d7a3b9d62fe2ed7b6bded7f
++ export PATH=/var/lib/jenkins/origin-ci-tool/8024b3d1997e58365d7a3b9d62fe2ed7b6bded7f/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/8024b3d1997e58365d7a3b9d62fe2ed7b6bded7f/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_openshift_ansible_logging2/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_openshift_ansible_logging2/workspace/.config
+ trap 'exit 0' EXIT
++ pwd
+ ARTIFACT_DIR=/var/lib/jenkins/jobs/test_branch_openshift_ansible_logging2/workspace/artifacts/gathered
+ rm -rf /var/lib/jenkins/jobs/test_branch_openshift_ansible_logging2/workspace/artifacts/gathered
+ mkdir -p /var/lib/jenkins/jobs/test_branch_openshift_ansible_logging2/workspace/artifacts/gathered
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo stat /data/src/github.com/openshift/origin/_output/scripts
  File: ‘/data/src/github.com/openshift/origin/_output/scripts’
  Size: 107       	Blocks: 0          IO Block: 4096   directory
Device: ca02h/51714d	Inode: 21303121    Links: 7
Access: (2755/drwxr-sr-x)  Uid: ( 1001/  origin)   Gid: ( 1003/origin-git)
Context: unconfined_u:object_r:default_t:s0
Access: 2017-12-08 20:20:42.123874247 +0000
Modify: 2017-12-08 20:10:53.864773298 +0000
Change: 2017-12-08 20:10:53.864773298 +0000
 Birth: -
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo chmod -R o+rX /data/src/github.com/openshift/origin/_output/scripts
+ scp -r -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel:/data/src/github.com/openshift/origin/_output/scripts /var/lib/jenkins/jobs/test_branch_openshift_ansible_logging2/workspace/artifacts/gathered
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo stat /data/src/github.com/openshift/origin-aggregated-logging/_output/scripts
  File: ‘/data/src/github.com/openshift/origin-aggregated-logging/_output/scripts’
  Size: 44        	Blocks: 0          IO Block: 4096   directory
Device: ca02h/51714d	Inode: 42415201    Links: 4
Access: (2755/drwxr-sr-x)  Uid: ( 1001/  origin)   Gid: ( 1003/origin-git)
Context: unconfined_u:object_r:default_t:s0
Access: 2017-12-08 19:27:02.975342166 +0000
Modify: 2017-12-08 20:24:37.438303873 +0000
Change: 2017-12-08 20:24:37.438303873 +0000
 Birth: -
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo chmod -R o+rX /data/src/github.com/openshift/origin-aggregated-logging/_output/scripts
+ scp -r -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel:/data/src/github.com/openshift/origin-aggregated-logging/_output/scripts /var/lib/jenkins/jobs/test_branch_openshift_ansible_logging2/workspace/artifacts/gathered
+ tree /var/lib/jenkins/jobs/test_branch_openshift_ansible_logging2/workspace/artifacts/gathered
/var/lib/jenkins/jobs/test_branch_openshift_ansible_logging2/workspace/artifacts/gathered
└── scripts
    ├── ansible_junit
    │   ├── IjEtsMUvlR.xml
    │   ├── IuaXUmEOfq.xml
    │   ├── nzeDpWKwhs.xml
    │   └── rCZtkCXDBh.xml
    ├── build-base-images
    │   ├── artifacts
    │   ├── logs
    │   └── openshift.local.home
    ├── build-images
    │   ├── artifacts
    │   ├── logs
    │   │   └── scripts.log
    │   └── openshift.local.home
    ├── entrypoint
    │   ├── artifacts
    │   │   ├── es.indices.after
    │   │   ├── es.indices.before
    │   │   ├── es-ops.indices.after
    │   │   ├── es-ops.indices.before
    │   │   ├── events.txt
    │   │   ├── logging-curator-ops-3-7jfl2.log
    │   │   ├── logging-fluentd-9tsxn.log
    │   │   ├── logging-fluentd-orig.yaml
    │   │   ├── logging-fluentd-pq4b9.log
    │   │   ├── logging-fluentd-qbh85.log
    │   │   ├── logging-fluentd-zjskq.log
    │   │   ├── logging-mux-1-zhdxm.log
    │   │   ├── monitor_fluentd_pos.log
    │   │   ├── monitor_fluentd_top.kubeconfig
    │   │   ├── monitor_fluentd_top.log
    │   │   ├── monitor_journal_lograte.log
    │   │   ├── multi_tenancy-artifacts.txt
    │   │   ├── mux-artifacts.txt
    │   │   └── mux.logging-fluentd-gph56.log
    │   ├── logs
    │   │   ├── containers
    │   │   │   ├── k8s_apiserver_apiserver-m56j7_kube-service-catalog_18efe1b2-dc55-11e7-a97d-0e7464d43d0e_0.log
    │   │   │   ├── k8s_c_apiserver-p6bnb_openshift-template-service-broker_2ec73eb2-dc55-11e7-a97d-0e7464d43d0e_0.log
    │   │   │   ├── k8s_controller-manager_controller-manager-mbplv_kube-service-catalog_18f308a0-dc55-11e7-a97d-0e7464d43d0e_0.log
    │   │   │   ├── k8s_curator_logging-curator-7-2h2sg_logging_1936277d-dc58-11e7-a97d-0e7464d43d0e_0.log
    │   │   │   ├── k8s_curator_logging-curator-ops-5-9dqk2_logging_3c96860a-dc58-11e7-a97d-0e7464d43d0e_0.log
    │   │   │   ├── k8s_deployment_asb-1-deploy_openshift-ansible-service-broker_287623f3-dc55-11e7-a97d-0e7464d43d0e_0.log
    │   │   │   ├── k8s_deployment_asb-etcd-1-deploy_openshift-ansible-service-broker_2956c67e-dc55-11e7-a97d-0e7464d43d0e_0.log
    │   │   │   ├── k8s_elasticsearch_logging-es-data-master-ztl0kza1-1-6rr84_logging_86812148-dc55-11e7-a97d-0e7464d43d0e_0.log
    │   │   │   ├── k8s_elasticsearch_logging-es-ops-data-master-wmi6lbzv-1-8g9z9_logging_95096952-dc55-11e7-a97d-0e7464d43d0e_0.log
    │   │   │   ├── k8s_fluentd-elasticsearch_logging-fluentd-g6j4f_logging_f6546d0d-dc59-11e7-a97d-0e7464d43d0e_0.log
    │   │   │   ├── k8s_kibana_logging-kibana-1-7rfcb_logging_a14b8c49-dc55-11e7-a97d-0e7464d43d0e_0.log
    │   │   │   ├── k8s_kibana_logging-kibana-ops-1-qq7bz_logging_a9ce5bc8-dc55-11e7-a97d-0e7464d43d0e_0.log
    │   │   │   ├── k8s_kibana-proxy_logging-kibana-1-7rfcb_logging_a14b8c49-dc55-11e7-a97d-0e7464d43d0e_0.log
    │   │   │   ├── k8s_kibana-proxy_logging-kibana-ops-1-qq7bz_logging_a9ce5bc8-dc55-11e7-a97d-0e7464d43d0e_0.log
    │   │   │   ├── k8s_mux_logging-mux-1-zhdxm_logging_c27bad7b-dc55-11e7-a97d-0e7464d43d0e_0.log
    │   │   │   ├── k8s_POD_apiserver-m56j7_kube-service-catalog_18efe1b2-dc55-11e7-a97d-0e7464d43d0e_0.log
    │   │   │   ├── k8s_POD_apiserver-p6bnb_openshift-template-service-broker_2ec73eb2-dc55-11e7-a97d-0e7464d43d0e_0.log
    │   │   │   ├── k8s_POD_asb-1-deploy_openshift-ansible-service-broker_287623f3-dc55-11e7-a97d-0e7464d43d0e_0.log
    │   │   │   ├── k8s_POD_asb-etcd-1-deploy_openshift-ansible-service-broker_2956c67e-dc55-11e7-a97d-0e7464d43d0e_0.log
    │   │   │   ├── k8s_POD_controller-manager-mbplv_kube-service-catalog_18f308a0-dc55-11e7-a97d-0e7464d43d0e_0.log
    │   │   │   ├── k8s_POD_docker-registry-1-x8qqz_default_ec42d41a-dc54-11e7-a97d-0e7464d43d0e_0.log
    │   │   │   ├── k8s_POD_logging-curator-7-2h2sg_logging_1936277d-dc58-11e7-a97d-0e7464d43d0e_0.log
    │   │   │   ├── k8s_POD_logging-curator-ops-5-9dqk2_logging_3c96860a-dc58-11e7-a97d-0e7464d43d0e_0.log
    │   │   │   ├── k8s_POD_logging-es-data-master-ztl0kza1-1-6rr84_logging_86812148-dc55-11e7-a97d-0e7464d43d0e_0.log
    │   │   │   ├── k8s_POD_logging-es-ops-data-master-wmi6lbzv-1-8g9z9_logging_95096952-dc55-11e7-a97d-0e7464d43d0e_0.log
    │   │   │   ├── k8s_POD_logging-fluentd-84jlz_logging_e3c92544-dc59-11e7-a97d-0e7464d43d0e_0.log
    │   │   │   ├── k8s_POD_logging-fluentd-g6j4f_logging_f6546d0d-dc59-11e7-a97d-0e7464d43d0e_0.log
    │   │   │   ├── k8s_POD_logging-fluentd-pq4b9_logging_ed9eed17-dc59-11e7-a97d-0e7464d43d0e_0.log
    │   │   │   ├── k8s_POD_logging-fluentd-vkqdr_logging_d7d8204b-dc59-11e7-a97d-0e7464d43d0e_0.log
    │   │   │   ├── k8s_POD_logging-kibana-1-7rfcb_logging_a14b8c49-dc55-11e7-a97d-0e7464d43d0e_0.log
    │   │   │   ├── k8s_POD_logging-kibana-ops-1-qq7bz_logging_a9ce5bc8-dc55-11e7-a97d-0e7464d43d0e_0.log
    │   │   │   ├── k8s_POD_logging-mux-1-zhdxm_logging_c27bad7b-dc55-11e7-a97d-0e7464d43d0e_0.log
    │   │   │   ├── k8s_POD_registry-console-1-tk7bt_default_02334a77-dc55-11e7-a97d-0e7464d43d0e_0.log
    │   │   │   ├── k8s_POD_router-1-xllcl_default_cefa84b8-dc54-11e7-a97d-0e7464d43d0e_0.log
    │   │   │   ├── k8s_proxy_logging-es-data-master-ztl0kza1-1-6rr84_logging_86812148-dc55-11e7-a97d-0e7464d43d0e_0.log
    │   │   │   ├── k8s_proxy_logging-es-ops-data-master-wmi6lbzv-1-8g9z9_logging_95096952-dc55-11e7-a97d-0e7464d43d0e_0.log
    │   │   │   ├── k8s_registry-console_registry-console-1-tk7bt_default_02334a77-dc55-11e7-a97d-0e7464d43d0e_0.log
    │   │   │   ├── k8s_registry_docker-registry-1-x8qqz_default_ec42d41a-dc54-11e7-a97d-0e7464d43d0e_0.log
    │   │   │   └── k8s_router_router-1-xllcl_default_cefa84b8-dc54-11e7-a97d-0e7464d43d0e_0.log
    │   │   ├── raw_test_output.log
    │   │   └── scripts.log
    │   └── openshift.local.home
    ├── origin_version
    │   ├── artifacts
    │   ├── logs
    │   └── openshift.local.home
    └── shell
        ├── artifacts
        ├── logs
        │   └── scripts.log
        └── openshift.local.home

23 directories, 66 files
+ exit 0
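[NOTE] The gather stage above follows one pattern per remote directory: stat it, loosen permissions so the jenkins user can read it, then scp it into the local artifacts tree. A minimal sketch, with the SSH config path, host alias, and directories taken from the log and the loop structure an assumption:
    ssh_cfg=./.config/origin-ci-tool/inventory/.ssh_config
    dest=/var/lib/jenkins/jobs/test_branch_openshift_ansible_logging2/workspace/artifacts/gathered
    rm -rf "$dest" && mkdir -p "$dest"
    for remote_dir in \
        /data/src/github.com/openshift/origin/_output/scripts \
        /data/src/github.com/openshift/origin-aggregated-logging/_output/scripts; do
        # Loosen permissions so the non-root jenkins user can read everything,
        # then copy the whole directory back into the local artifacts tree.
        ssh -F "$ssh_cfg" openshiftdevel sudo chmod -R o+rX "$remote_dir"
        scp -r -F "$ssh_cfg" "openshiftdevel:${remote_dir}" "$dest"
    done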
[workspace] $ /bin/bash /tmp/jenkins505950612067223280.sh
########## STARTING STAGE: GENERATE ARTIFACTS FROM THE REMOTE HOST ##########
+ [[ -s /var/lib/jenkins/jobs/test_branch_openshift_ansible_logging2/workspace/activate ]]
+ source /var/lib/jenkins/jobs/test_branch_openshift_ansible_logging2/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/8024b3d1997e58365d7a3b9d62fe2ed7b6bded7f
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/8024b3d1997e58365d7a3b9d62fe2ed7b6bded7f
++ export PATH=/var/lib/jenkins/origin-ci-tool/8024b3d1997e58365d7a3b9d62fe2ed7b6bded7f/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/8024b3d1997e58365d7a3b9d62fe2ed7b6bded7f/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_openshift_ansible_logging2/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_openshift_ansible_logging2/workspace/.config
+ trap 'exit 0' EXIT
++ pwd
+ ARTIFACT_DIR=/var/lib/jenkins/jobs/test_branch_openshift_ansible_logging2/workspace/artifacts/generated
+ rm -rf /var/lib/jenkins/jobs/test_branch_openshift_ansible_logging2/workspace/artifacts/generated
+ mkdir /var/lib/jenkins/jobs/test_branch_openshift_ansible_logging2/workspace/artifacts/generated
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo docker version && sudo docker info && sudo docker images && sudo docker ps -a 2>&1'
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo cat /etc/sysconfig/docker /etc/sysconfig/docker-network /etc/sysconfig/docker-storage /etc/sysconfig/docker-storage-setup /etc/systemd/system/docker.service 2>&1'
+ true
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'oc get --raw /metrics --server=https://$( uname --nodename ):10250 --config=/etc/origin/master/admin.kubeconfig 2>&1'
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo ausearch -m AVC -m SELINUX_ERR -m USER_AVC 2>&1'
+ true
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'oc get --raw /metrics --config=/etc/origin/master/admin.kubeconfig 2>&1'
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo df -h && sudo pvs && sudo vgs && sudo lvs 2>&1'
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo yum list installed 2>&1'
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'sudo journalctl _PID=1 --no-pager --all --lines=all 2>&1'
+ tree /var/lib/jenkins/jobs/test_branch_openshift_ansible_logging2/workspace/artifacts/generated
/var/lib/jenkins/jobs/test_branch_openshift_ansible_logging2/workspace/artifacts/generated
├── avc_denials.log
├── docker.config
├── docker.info
├── filesystem.info
├── installed_packages.log
├── master-metrics.log
├── node-metrics.log
└── pid1.journal

0 directories, 8 files
+ exit 0
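[NOTE] The generated artifacts in the tree above plausibly come from redirecting each remote diagnostic command into a file named after it; the log shows the commands but not the redirections, so the mapping below is an inference, not the script's verbatim contents:
    ssh_cfg=./.config/origin-ci-tool/inventory/.ssh_config
    gen=/var/lib/jenkins/jobs/test_branch_openshift_ansible_logging2/workspace/artifacts/generated
    mkdir -p "$gen"
    ssh -F "$ssh_cfg" openshiftdevel \
        'sudo docker version && sudo docker info && sudo docker images && sudo docker ps -a 2>&1' \
        > "$gen/docker.info"
    # ausearch exits non-zero when no AVC denials match, hence the '|| true'
    # (which is what the bare '+ true' lines above suggest the script does).
    ssh -F "$ssh_cfg" openshiftdevel \
        'sudo ausearch -m AVC -m SELINUX_ERR -m USER_AVC 2>&1' \
        > "$gen/avc_denials.log" || true
    ssh -F "$ssh_cfg" openshiftdevel \
        'sudo journalctl _PID=1 --no-pager --all --lines=all 2>&1' \
        > "$gen/pid1.journal"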
[workspace] $ /bin/bash /tmp/jenkins3221519337525355350.sh
########## STARTING STAGE: FETCH SYSTEMD JOURNALS FROM THE REMOTE HOST ##########
+ [[ -s /var/lib/jenkins/jobs/test_branch_openshift_ansible_logging2/workspace/activate ]]
+ source /var/lib/jenkins/jobs/test_branch_openshift_ansible_logging2/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/8024b3d1997e58365d7a3b9d62fe2ed7b6bded7f
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/8024b3d1997e58365d7a3b9d62fe2ed7b6bded7f
++ export PATH=/var/lib/jenkins/origin-ci-tool/8024b3d1997e58365d7a3b9d62fe2ed7b6bded7f/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/8024b3d1997e58365d7a3b9d62fe2ed7b6bded7f/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_openshift_ansible_logging2/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_openshift_ansible_logging2/workspace/.config
+ trap 'exit 0' EXIT
++ pwd
+ ARTIFACT_DIR=/var/lib/jenkins/jobs/test_branch_openshift_ansible_logging2/workspace/artifacts/journals
+ rm -rf /var/lib/jenkins/jobs/test_branch_openshift_ansible_logging2/workspace/artifacts/journals
+ mkdir /var/lib/jenkins/jobs/test_branch_openshift_ansible_logging2/workspace/artifacts/journals
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit docker.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit dnsmasq.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit systemd-journald.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit origin-master.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit origin-master-api.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit origin-master-controllers.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit origin-node.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit openvswitch.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit ovs-vswitchd.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit ovsdb-server.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit etcd.service --no-pager --all --lines=all
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo journalctl --unit systemd-journald.service --no-pager --all --lines=all
+ tree /var/lib/jenkins/jobs/test_branch_openshift_ansible_logging2/workspace/artifacts/journals
/var/lib/jenkins/jobs/test_branch_openshift_ansible_logging2/workspace/artifacts/journals
├── dnsmasq.service
├── docker.service
├── etcd.service
├── openvswitch.service
├── origin-master-api.service
├── origin-master-controllers.service
├── origin-master.service
├── origin-node.service
├── ovsdb-server.service
├── ovs-vswitchd.service
└── systemd-journald.service

0 directories, 11 files
+ exit 0
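[NOTE] The journal-fetch stage runs one journalctl invocation per systemd unit and, judging by the tree output, saves each result as <unit>.service. The loop below is an illustrative reconstruction of that pattern, with the unit list copied from the commands above (systemd-journald, which the stage fetched twice, appears once here):
    ssh_cfg=./.config/origin-ci-tool/inventory/.ssh_config
    journals=/var/lib/jenkins/jobs/test_branch_openshift_ansible_logging2/workspace/artifacts/journals
    mkdir -p "$journals"
    for unit in docker dnsmasq systemd-journald origin-master origin-master-api \
                origin-master-controllers origin-node openvswitch ovs-vswitchd \
                ovsdb-server etcd; do
        ssh -F "$ssh_cfg" openshiftdevel \
            sudo journalctl --unit "${unit}.service" --no-pager --all --lines=all \
            > "${journals}/${unit}.service"
    done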
[workspace] $ /bin/bash /tmp/jenkins844613167927861583.sh
########## STARTING STAGE: FORWARD PARAMETERS TO THE REMOTE HOST ##########
+ [[ -s /var/lib/jenkins/jobs/test_branch_openshift_ansible_logging2/workspace/activate ]]
+ source /var/lib/jenkins/jobs/test_branch_openshift_ansible_logging2/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/8024b3d1997e58365d7a3b9d62fe2ed7b6bded7f
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/8024b3d1997e58365d7a3b9d62fe2ed7b6bded7f
++ export PATH=/var/lib/jenkins/origin-ci-tool/8024b3d1997e58365d7a3b9d62fe2ed7b6bded7f/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/8024b3d1997e58365d7a3b9d62fe2ed7b6bded7f/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_openshift_ansible_logging2/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_openshift_ansible_logging2/workspace/.config
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel sudo chmod o+rw /etc/environment
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel 'echo '\''BUILD_URL=https://ci.openshift.redhat.com/jenkins/job/test_branch_openshift_ansible_logging2/1/'\'' >> /etc/environment'
+ set +o xtrace
########## FINISHED STAGE: SUCCESS: FORWARD PARAMETERS TO THE REMOTE HOST [00h 00m 00s] ##########
[workspace] $ /bin/bash /tmp/jenkins8628963998417513477.sh
########## STARTING STAGE: RECORD THE ENDING METADATA ##########
+ [[ -s /var/lib/jenkins/jobs/test_branch_openshift_ansible_logging2/workspace/activate ]]
+ source /var/lib/jenkins/jobs/test_branch_openshift_ansible_logging2/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/8024b3d1997e58365d7a3b9d62fe2ed7b6bded7f
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/8024b3d1997e58365d7a3b9d62fe2ed7b6bded7f
++ export PATH=/var/lib/jenkins/origin-ci-tool/8024b3d1997e58365d7a3b9d62fe2ed7b6bded7f/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/8024b3d1997e58365d7a3b9d62fe2ed7b6bded7f/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_openshift_ansible_logging2/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_openshift_ansible_logging2/workspace/.config
++ mktemp
+ script=/tmp/tmp.5ePtLNiP0F
+ cat
+ chmod +x /tmp/tmp.5ePtLNiP0F
+ scp -F ./.config/origin-ci-tool/inventory/.ssh_config /tmp/tmp.5ePtLNiP0F openshiftdevel:/tmp/tmp.5ePtLNiP0F
+ ssh -F ./.config/origin-ci-tool/inventory/.ssh_config -t openshiftdevel 'bash -l -c "timeout 300 /tmp/tmp.5ePtLNiP0F"'
+ cd /data/src/github.com/openshift/aos-cd-jobs
+ trap 'exit 0' EXIT
+ sjb/gcs/finished.py
+ exit 0
+ set +o xtrace
########## FINISHED STAGE: SUCCESS: RECORD THE ENDING METADATA [00h 00m 01s] ##########
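[NOTE] The ending-metadata stage ships a locally generated script to the remote host and runs it there under a timeout; the xtrace lines above reveal the script's body (cd into aos-cd-jobs, trap 'exit 0', run sjb/gcs/finished.py). An illustrative reconstruction of that ship-and-run pattern:
    # Sketch only: the real stage writes the script via 'cat' from a heredoc.
    script=$(mktemp)
    printf '%s\n' \
        'cd /data/src/github.com/openshift/aos-cd-jobs' \
        "trap 'exit 0' EXIT" \
        'sjb/gcs/finished.py' > "$script"
    chmod +x "$script"
    scp -F ./.config/origin-ci-tool/inventory/.ssh_config "$script" "openshiftdevel:$script"
    ssh -F ./.config/origin-ci-tool/inventory/.ssh_config -t openshiftdevel \
        "bash -l -c \"timeout 300 $script\""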
[workspace] $ /bin/bash /tmp/jenkins3984206742160560279.sh
########## STARTING STAGE: ASSEMBLE GCS OUTPUT ##########
+ [[ -s /var/lib/jenkins/jobs/test_branch_openshift_ansible_logging2/workspace/activate ]]
+ source /var/lib/jenkins/jobs/test_branch_openshift_ansible_logging2/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/8024b3d1997e58365d7a3b9d62fe2ed7b6bded7f
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/8024b3d1997e58365d7a3b9d62fe2ed7b6bded7f
++ export PATH=/var/lib/jenkins/origin-ci-tool/8024b3d1997e58365d7a3b9d62fe2ed7b6bded7f/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/8024b3d1997e58365d7a3b9d62fe2ed7b6bded7f/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_openshift_ansible_logging2/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_openshift_ansible_logging2/workspace/.config
+ trap 'exit 0' EXIT
+ mkdir -p gcs/artifacts gcs/artifacts/generated gcs/artifacts/journals gcs/artifacts/gathered
+ scp -F ./.config/origin-ci-tool/inventory/.ssh_config openshiftdevel:/data/finished.json gcs/
+ cat /var/lib/jenkins/jobs/test_branch_openshift_ansible_logging2/builds/1/log
+ cp artifacts/generated/avc_denials.log artifacts/generated/docker.config artifacts/generated/docker.info artifacts/generated/filesystem.info artifacts/generated/installed_packages.log artifacts/generated/master-metrics.log artifacts/generated/node-metrics.log artifacts/generated/pid1.journal gcs/artifacts/generated/
+ cp artifacts/journals/dnsmasq.service artifacts/journals/docker.service artifacts/journals/etcd.service artifacts/journals/openvswitch.service artifacts/journals/origin-master-api.service artifacts/journals/origin-master-controllers.service artifacts/journals/origin-master.service artifacts/journals/origin-node.service artifacts/journals/ovsdb-server.service artifacts/journals/ovs-vswitchd.service artifacts/journals/systemd-journald.service gcs/artifacts/journals/
+ cp -r artifacts/gathered/scripts gcs/artifacts/
++ gcs_path
++ bucket=gs://origin-ci-test/
/tmp/jenkins3984206742160560279.sh: line 14: buildId: unbound variable
+ path=
+ exit 0
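[NOTE] The "buildId: unbound variable" message above comes from gcs_path expanding a variable that was never set while nounset is in effect; path ends up empty and the stage's 'exit 0' EXIT trap masks the failure, so no GCS destination is computed even though the artifacts were copied. One way to make that expansion safe is to give it a fallback; the sketch below keeps the bucket name from the log, but the path layout and the fallback variables are purely illustrative assumptions:
    # Illustrative guard against the unbound-variable failure seen above.
    gcs_path() {
        local bucket=gs://origin-ci-test/
        # Fall back to Jenkins' BUILD_NUMBER when buildId was never forwarded,
        # rather than letting `set -u` abort the whole computation.
        local build_id=${buildId:-${BUILD_NUMBER:-unknown}}
        echo "${bucket}logs/${JOB_NAME:-unknown-job}/${build_id}/"
    }
    path=$(gcs_path)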
[workspace] $ /bin/bash /tmp/jenkins6933604699832185586.sh
########## STARTING STAGE: DEPROVISION CLOUD RESOURCES ##########
+ [[ -s /var/lib/jenkins/jobs/test_branch_openshift_ansible_logging2/workspace/activate ]]
+ source /var/lib/jenkins/jobs/test_branch_openshift_ansible_logging2/workspace/activate
++ export VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/8024b3d1997e58365d7a3b9d62fe2ed7b6bded7f
++ VIRTUAL_ENV=/var/lib/jenkins/origin-ci-tool/8024b3d1997e58365d7a3b9d62fe2ed7b6bded7f
++ export PATH=/var/lib/jenkins/origin-ci-tool/8024b3d1997e58365d7a3b9d62fe2ed7b6bded7f/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ PATH=/var/lib/jenkins/origin-ci-tool/8024b3d1997e58365d7a3b9d62fe2ed7b6bded7f/bin:/sbin:/usr/sbin:/bin:/usr/bin
++ unset PYTHON_HOME
++ export OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_openshift_ansible_logging2/workspace/.config
++ OCT_CONFIG_HOME=/var/lib/jenkins/jobs/test_branch_openshift_ansible_logging2/workspace/.config
+ oct deprovision

PLAYBOOK: main.yml *************************************************************
4 plays in /var/lib/jenkins/origin-ci-tool/8024b3d1997e58365d7a3b9d62fe2ed7b6bded7f/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml

PLAY [ensure we have the parameters necessary to deprovision virtual hosts] ****

TASK [ensure all required variables are set] ***********************************
task path: /var/lib/jenkins/origin-ci-tool/8024b3d1997e58365d7a3b9d62fe2ed7b6bded7f/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml:9
skipping: [localhost] => (item=origin_ci_inventory_dir)  => {
    "changed": false, 
    "generated_timestamp": "2017-12-08 15:54:57.659966", 
    "item": "origin_ci_inventory_dir", 
    "skip_reason": "Conditional check failed", 
    "skipped": true
}
skipping: [localhost] => (item=origin_ci_aws_region)  => {
    "changed": false, 
    "generated_timestamp": "2017-12-08 15:54:57.666122", 
    "item": "origin_ci_aws_region", 
    "skip_reason": "Conditional check failed", 
    "skipped": true
}

PLAY [deprovision virtual hosts in EC2] ****************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [deprovision a virtual EC2 host] ******************************************
task path: /var/lib/jenkins/origin-ci-tool/8024b3d1997e58365d7a3b9d62fe2ed7b6bded7f/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml:28
included: /var/lib/jenkins/origin-ci-tool/8024b3d1997e58365d7a3b9d62fe2ed7b6bded7f/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml for localhost

TASK [update the SSH configuration to remove AWS EC2 specifics] ****************
task path: /var/lib/jenkins/origin-ci-tool/8024b3d1997e58365d7a3b9d62fe2ed7b6bded7f/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:2
ok: [localhost] => {
    "changed": false, 
    "generated_timestamp": "2017-12-08 15:54:58.593157", 
    "msg": ""
}

TASK [rename EC2 instance for termination reaper] ******************************
task path: /var/lib/jenkins/origin-ci-tool/8024b3d1997e58365d7a3b9d62fe2ed7b6bded7f/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:8
changed: [localhost] => {
    "changed": true, 
    "generated_timestamp": "2017-12-08 15:54:59.276558", 
    "msg": "Tags {'Name': 'oct-terminate'} created for resource i-0c514ef32596852cb."
}

TASK [tear down the EC2 instance] **********************************************
task path: /var/lib/jenkins/origin-ci-tool/8024b3d1997e58365d7a3b9d62fe2ed7b6bded7f/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:15
changed: [localhost] => {
    "changed": true, 
    "generated_timestamp": "2017-12-08 15:55:00.362138", 
    "instance_ids": [
        "i-0c514ef32596852cb"
    ], 
    "instances": [
        {
            "ami_launch_index": "0", 
            "architecture": "x86_64", 
            "block_device_mapping": {
                "/dev/sda1": {
                    "delete_on_termination": true, 
                    "status": "attached", 
                    "volume_id": "vol-08b32b2ad20760cdc"
                }, 
                "/dev/sdb": {
                    "delete_on_termination": true, 
                    "status": "attached", 
                    "volume_id": "vol-07af1bb5441e53b6a"
                }
            }, 
            "dns_name": "ec2-184-73-48-134.compute-1.amazonaws.com", 
            "ebs_optimized": false, 
            "groups": {
                "sg-7e73221a": "default"
            }, 
            "hypervisor": "xen", 
            "id": "i-0c514ef32596852cb", 
            "image_id": "ami-27670b5d", 
            "instance_type": "m4.xlarge", 
            "kernel": null, 
            "key_name": "libra", 
            "launch_time": "2017-12-08T19:52:53.000Z", 
            "placement": "us-east-1d", 
            "private_dns_name": "ip-172-18-9-251.ec2.internal", 
            "private_ip": "172.18.9.251", 
            "public_dns_name": "ec2-184-73-48-134.compute-1.amazonaws.com", 
            "public_ip": "184.73.48.134", 
            "ramdisk": null, 
            "region": "us-east-1", 
            "root_device_name": "/dev/sda1", 
            "root_device_type": "ebs", 
            "state": "running", 
            "state_code": 16, 
            "tags": {
                "Name": "oct-terminate", 
                "openshift_etcd": "", 
                "openshift_master": "", 
                "openshift_node": ""
            }, 
            "tenancy": "default", 
            "virtualization_type": "hvm"
        }
    ], 
    "tagged_instances": []
}

TASK [remove the serialized host variables] ************************************
task path: /var/lib/jenkins/origin-ci-tool/8024b3d1997e58365d7a3b9d62fe2ed7b6bded7f/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/roles/aws-down/tasks/main.yml:22
changed: [localhost] => {
    "changed": true, 
    "generated_timestamp": "2017-12-08 15:55:00.631962", 
    "path": "/var/lib/jenkins/jobs/test_branch_openshift_ansible_logging2/workspace/.config/origin-ci-tool/inventory/host_vars/172.18.9.251.yml", 
    "state": "absent"
}

PLAY [deprovision virtual hosts locally managed by Vagrant] *********************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

PLAY [clean up local configuration for deprovisioned instances] ****************

TASK [remove inventory configuration directory] ********************************
task path: /var/lib/jenkins/origin-ci-tool/8024b3d1997e58365d7a3b9d62fe2ed7b6bded7f/lib/python2.7/site-packages/oct/ansible/oct/playbooks/deprovision/main.yml:61
changed: [localhost] => {
    "changed": true, 
    "generated_timestamp": "2017-12-08 15:55:01.155028", 
    "path": "/var/lib/jenkins/jobs/test_branch_openshift_ansible_logging2/workspace/.config/origin-ci-tool/inventory", 
    "state": "absent"
}

PLAY RECAP *********************************************************************
localhost                  : ok=8    changed=4    unreachable=0    failed=0   

+ set +o xtrace
########## FINISHED STAGE: SUCCESS: DEPROVISION CLOUD RESOURCES [00h 00m 05s] ##########
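[NOTE] The deprovision play first retags the EC2 instance as oct-terminate (for the termination reaper) and then tears it down via Ansible's EC2 modules. For reference, an equivalent manual sequence with the AWS CLI, using the instance ID and region reported above, might look like this; it is an illustration, not what oct actually runs:
    # Illustrative AWS CLI equivalent of the tag-and-terminate tasks above.
    instance_id=i-0c514ef32596852cb
    region=us-east-1
    aws ec2 create-tags --region "$region" \
        --resources "$instance_id" --tags Key=Name,Value=oct-terminate
    aws ec2 terminate-instances --region "$region" --instance-ids "$instance_id"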
Archiving artifacts
Recording test results
[WS-CLEANUP] Deleting project workspace...[WS-CLEANUP] done
Finished: SUCCESS