Console Output (build result: Success)

Started by upstream project "merge_pull_request_origin_aggregated_logging" build number 22
originally caused by:
 Started by remote host 50.17.198.52
[EnvInject] - Loading node environment variables.
Building in workspace /var/lib/jenkins/jobs/test-origin-aggregated-logging/workspace
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content 
OS_ROOT=/data/src/github.com/openshift/origin
INSTANCE_TYPE=c4.xlarge
GITHUB_REPO=openshift
OS=rhel7
TESTNAME=logging

[EnvInject] - Variables injected successfully.
[workspace] $ /bin/sh -xe /tmp/hudson8847964684854045290.sh
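Note: the '+'-prefixed lines that follow are shell xtrace output. The job step runs with /bin/sh -xe, which echoes each command before executing it and aborts on the first unguarded failure. For reference, the same behavior can be enabled inside a script with:

    set -x   # print each command (the '+ ...' lines below) before running it
    set -e   # stop the script as soon as an unguarded command returns non-zero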
+ false
+ unset GOPATH
+ REPO_NAME=origin-aggregated-logging
+ rm -rf origin-aggregated-logging
+ vagrant origin-local-checkout --replace --repo origin-aggregated-logging -b master
You don't seem to have the GOPATH environment variable set on your system.
See: 'go help gopath' for more details about GOPATH.
Waiting for the cloning process to finish
Cloning origin-aggregated-logging ...
Submodule 'deployer/common' (https://github.com/openshift/origin-integration-common) registered for path 'deployer/common'
Submodule 'kibana-proxy' (https://github.com/fabric8io/openshift-auth-proxy.git) registered for path 'kibana-proxy'
Cloning into 'deployer/common'...
Submodule path 'deployer/common': checked out '45bf993212cdcbab5cbce3b3fab74a72b851402e'
Cloning into 'kibana-proxy'...
Submodule path 'kibana-proxy': checked out '118dfb40f7a8082d370ba7f4805255c9ec7c8178'
Origin repositories cloned into /var/lib/jenkins/jobs/test-origin-aggregated-logging/workspace
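For reference, the checkout above (remove any stale copy, clone origin-aggregated-logging on master, and initialize its two submodules) is roughly equivalent to the following plain-git commands. This is a sketch using the GIT_URL seen later in the job, not the exact implementation of 'vagrant origin-local-checkout':

    rm -rf origin-aggregated-logging
    git clone -b master --recurse-submodules \
        https://github.com/openshift/origin-aggregated-logging origin-aggregated-logging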
+ pushd origin-aggregated-logging
~/jobs/test-origin-aggregated-logging/workspace/origin-aggregated-logging ~/jobs/test-origin-aggregated-logging/workspace
+ git checkout master
Already on 'master'
+ popd
~/jobs/test-origin-aggregated-logging/workspace
+ '[' -n '' ']'
+ vagrant origin-local-checkout --replace
You don't seem to have the GOPATH environment variable set on your system.
See: 'go help gopath' for more details about GOPATH.
Waiting for the cloning process to finish
Checking repo integrity for /var/lib/jenkins/jobs/test-origin-aggregated-logging/workspace/origin
~/jobs/test-origin-aggregated-logging/workspace/origin ~/jobs/test-origin-aggregated-logging/workspace
# On branch master
# Untracked files:
#   (use "git add <file>..." to include in what will be committed)
#
#	artifacts/
nothing added to commit but untracked files present (use "git add" to track)
~/jobs/test-origin-aggregated-logging/workspace
Replacing: /var/lib/jenkins/jobs/test-origin-aggregated-logging/workspace/origin
~/jobs/test-origin-aggregated-logging/workspace/origin ~/jobs/test-origin-aggregated-logging/workspace
From https://github.com/openshift/origin
   71efe29..1257438  master     -> origin/master
Already on 'master'
Your branch is behind 'origin/master' by 3 commits, and can be fast-forwarded.
  (use "git pull" to update your local branch)
HEAD is now at 1257438 Merge pull request #14169 from php-coder/oc_cluster_up_and_ports_in_userns_env
Removing .vagrant-openshift.json
Removing .vagrant/
Removing artifacts/
fatal: branch name required
~/jobs/test-origin-aggregated-logging/workspace
Origin repositories cloned into /var/lib/jenkins/jobs/test-origin-aggregated-logging/workspace
+ pushd origin
~/jobs/test-origin-aggregated-logging/workspace/origin ~/jobs/test-origin-aggregated-logging/workspace
+ INSTANCE_NAME=origin_logging-rhel7-1648
+ GIT_URL=https://github.com/openshift/origin-aggregated-logging
++ echo https://github.com/openshift/origin-aggregated-logging
++ sed s,https://,,
+ OAL_LOCAL_PATH=github.com/openshift/origin-aggregated-logging
+ OS_O_A_L_DIR=/data/src/github.com/openshift/origin-aggregated-logging
+ env
+ sort
_=/bin/env
BRANCH=master
BUILD_CAUSE=UPSTREAMTRIGGER
BUILD_CAUSE_UPSTREAMTRIGGER=true
BUILD_DISPLAY_NAME=#1648
BUILD_ID=1648
BUILD_NUMBER=1648
BUILD_TAG=jenkins-test-origin-aggregated-logging-1648
BUILD_URL=https://ci.openshift.redhat.com/jenkins/job/test-origin-aggregated-logging/1648/
EXECUTOR_NUMBER=65
GITHUB_REPO=openshift
HOME=/var/lib/jenkins
HUDSON_COOKIE=c7d4e9b9-1283-4b8d-ba05-7b1be92bd0c9
HUDSON_HOME=/var/lib/jenkins
HUDSON_SERVER_COOKIE=ec11f8b2841c966f
HUDSON_URL=https://ci.openshift.redhat.com/jenkins/
INSTANCE_TYPE=c4.xlarge
JENKINS_HOME=/var/lib/jenkins
JENKINS_SERVER_COOKIE=ec11f8b2841c966f
JENKINS_URL=https://ci.openshift.redhat.com/jenkins/
JOB_BASE_NAME=test-origin-aggregated-logging
JOB_DISPLAY_URL=https://ci.openshift.redhat.com/jenkins/job/test-origin-aggregated-logging/display/redirect
JOB_NAME=test-origin-aggregated-logging
JOB_URL=https://ci.openshift.redhat.com/jenkins/job/test-origin-aggregated-logging/
LANG=en_US.UTF-8
LOGNAME=jenkins
MERGE=false
MERGE_SEVERITY=none
NLSPATH=/usr/dt/lib/nls/msg/%L/%N.cat
NODE_LABELS=master
NODE_NAME=master
OLDPWD=/var/lib/jenkins/jobs/test-origin-aggregated-logging/workspace
OPENSHIFT_ANSIBLE_TARGET_BRANCH=master
ORIGIN_AGGREGATED_LOGGING_PULL_ID=466
ORIGIN_AGGREGATED_LOGGING_TARGET_BRANCH=master
OS_ANSIBLE_BRANCH=master
OS_ANSIBLE_REPO=https://github.com/openshift/openshift-ansible 
OS=rhel7
OS_ROOT=/data/src/github.com/openshift/origin
PATH=/sbin:/usr/sbin:/bin:/usr/bin
PWD=/var/lib/jenkins/jobs/test-origin-aggregated-logging/workspace/origin
ROOT_BUILD_CAUSE=REMOTECAUSE
ROOT_BUILD_CAUSE_REMOTECAUSE=true
RUN_CHANGES_DISPLAY_URL=https://ci.openshift.redhat.com/jenkins/job/test-origin-aggregated-logging/1648/display/redirect?page=changes
RUN_DISPLAY_URL=https://ci.openshift.redhat.com/jenkins/job/test-origin-aggregated-logging/1648/display/redirect
SHELL=/bin/bash
SHLVL=3
TESTNAME=logging
TEST_PERF=false
USER=jenkins
WORKSPACE=/var/lib/jenkins/jobs/test-origin-aggregated-logging/workspace
XFILESEARCHPATH=/usr/dt/app-defaults/%L/Dt
+ vagrant origin-init --stage inst --os rhel7 --instance-type c4.xlarge origin_logging-rhel7-1648
Reading AWS credentials from /var/lib/jenkins/.awscred
Searching devenv-rhel7_* for latest base AMI (required_name_tag=)
Found: ami-83a1fc95 (devenv-rhel7_6323)
++ seq 0 2
+ for i in '$(seq 0 2)'
+ vagrant up --provider aws
Bringing machine 'openshiftdev' up with 'aws' provider...
==> openshiftdev: Warning! The AWS provider doesn't support any of the Vagrant
==> openshiftdev: high-level network configurations (`config.vm.network`). They
==> openshiftdev: will be silently ignored.
==> openshiftdev: Warning! You're launching this instance into a VPC without an
==> openshiftdev: elastic IP. Please verify you're properly connected to a VPN so
==> openshiftdev: you can access this machine, otherwise Vagrant will not be able
==> openshiftdev: to SSH into it.
==> openshiftdev: Launching an instance with the following settings...
==> openshiftdev:  -- Type: c4.xlarge
==> openshiftdev:  -- AMI: ami-83a1fc95
==> openshiftdev:  -- Region: us-east-1
==> openshiftdev:  -- Keypair: libra
==> openshiftdev:  -- Subnet ID: subnet-cf57c596
==> openshiftdev:  -- User Data: yes
==> openshiftdev:  -- User Data: 
==> openshiftdev: # cloud-config
==> openshiftdev: 
==> openshiftdev: growpart:
==> openshiftdev:   mode: auto
==> openshiftdev:   devices: ['/']
==> openshiftdev: runcmd:
==> openshiftdev: - [ sh, -xc, "sed -i s/^Defaults.*requiretty/#Defaults requiretty/g /etc/sudoers"]
==> openshiftdev:         
==> openshiftdev:  -- Block Device Mapping: [{"DeviceName"=>"/dev/sda1", "Ebs.VolumeSize"=>25, "Ebs.VolumeType"=>"gp2"}, {"DeviceName"=>"/dev/sdb", "Ebs.VolumeSize"=>35, "Ebs.VolumeType"=>"gp2"}]
==> openshiftdev:  -- Terminate On Shutdown: false
==> openshiftdev:  -- Monitoring: false
==> openshiftdev:  -- EBS optimized: false
==> openshiftdev:  -- Assigning a public IP address in a VPC: false
==> openshiftdev: Waiting for instance to become "ready"...
==> openshiftdev: Waiting for SSH to become available...
==> openshiftdev: Machine is booted and ready for use!
==> openshiftdev: Running provisioner: setup (shell)...
    openshiftdev: Running: /tmp/vagrant-shell20170608-20425-1rh62az.sh
==> openshiftdev: Host: ec2-34-207-175-137.compute-1.amazonaws.com
+ break
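The trace above shows the instance boot wrapped in a small retry loop: 'seq 0 2' allows up to three attempts at 'vagrant up --provider aws', and the 'break' ends the loop after the first successful boot. Reconstructed from the trace (a sketch; the real job script's error handling may differ):

    for i in $(seq 0 2); do
        vagrant up --provider aws && break   # stop retrying once the machine is up
    done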
+ vagrant sync-origin-aggregated-logging -c -s
Running ssh/sudo command 'rm -rf /data/src/github.com/openshift/origin-aggregated-logging-bare; 
' with timeout 14400. Attempt #0
Running ssh/sudo command 'mkdir -p /ec2-user/.ssh;
mv /tmp/file20170608-21093-1fjtusa /ec2-user/.ssh/config &&
chown ec2-user:ec2-user /ec2-user/.ssh/config &&
chmod 0600 /ec2-user/.ssh/config' with timeout 14400. Attempt #0
Running ssh/sudo command 'mkdir -p /data/src/github.com/openshift/' with timeout 14400. Attempt #0
Running ssh/sudo command 'mkdir -p /data/src/github.com/openshift/builder && chown -R ec2-user:ec2-user /data/src/github.com/openshift/' with timeout 14400. Attempt #0
Running ssh/sudo command 'set -e
rm -fr /data/src/github.com/openshift/origin-aggregated-logging-bare;

if [ ! -d /data/src/github.com/openshift/origin-aggregated-logging-bare ]; then
git clone --quiet --bare https://github.com/openshift/origin-aggregated-logging.git /data/src/github.com/openshift/origin-aggregated-logging-bare >/dev/null
fi
' with timeout 14400. Attempt #0
Synchronizing local sources
Synchronizing [origin-aggregated-logging@master] from origin-aggregated-logging...
Warning: Permanently added '34.207.175.137' (ECDSA) to the list of known hosts.
Running ssh/sudo command 'set -e

if [ -d /data/src/github.com/openshift/origin-aggregated-logging-bare ]; then
rm -rf /data/src/github.com/openshift/origin-aggregated-logging
echo 'Cloning origin-aggregated-logging ...'
git clone --quiet --recurse-submodules /data/src/github.com/openshift/origin-aggregated-logging-bare /data/src/github.com/openshift/origin-aggregated-logging

else
MISSING_REPO+='origin-aggregated-logging-bare'
fi

if [ -n "$MISSING_REPO" ]; then
echo 'Missing required upstream repositories:'
echo $MISSING_REPO
echo 'To fix, execute command: vagrant clone-upstream-repos'
fi
' with timeout 14400. Attempt #0
Cloning origin-aggregated-logging ...
Submodule 'deployer/common' (https://github.com/openshift/origin-integration-common) registered for path 'deployer/common'
Submodule 'kibana-proxy' (https://github.com/fabric8io/openshift-auth-proxy.git) registered for path 'kibana-proxy'
Cloning into 'deployer/common'...
Submodule path 'deployer/common': checked out '45bf993212cdcbab5cbce3b3fab74a72b851402e'
Cloning into 'kibana-proxy'...
Submodule path 'kibana-proxy': checked out '118dfb40f7a8082d370ba7f4805255c9ec7c8178'
+ vagrant ssh -c 'if [ ! -d /tmp/openshift ] ; then mkdir /tmp/openshift ; fi ; sudo chmod 777 /tmp/openshift'
+ for image in openshift/base-centos7 centos:centos7 openshift/origin-logging-elasticsearch openshift/origin-logging-fluentd openshift/origin-logging-curator openshift/origin-logging-kibana
+ echo pulling image openshift/base-centos7 ...
pulling image openshift/base-centos7 ...
+ vagrant ssh -c 'docker pull openshift/base-centos7' -- -n
Using default tag: latest
Trying to pull repository docker.io/openshift/base-centos7 ... 
latest: Pulling from docker.io/openshift/base-centos7
45a2e645736c: Pulling fs layer
734fb161cf89: Pulling fs layer
78efc9e155c4: Pulling fs layer
8a3400b7e31a: Pulling fs layer
8a3400b7e31a: Waiting
734fb161cf89: Verifying Checksum
734fb161cf89: Download complete
8a3400b7e31a: Verifying Checksum
8a3400b7e31a: Download complete
45a2e645736c: Verifying Checksum
45a2e645736c: Download complete
78efc9e155c4: Verifying Checksum
78efc9e155c4: Download complete
45a2e645736c: Pull complete
734fb161cf89: Pull complete
78efc9e155c4: Pull complete
8a3400b7e31a: Pull complete
Digest: sha256:aea292a3bddba020cde0ee83e6a45807931eb607c164ec6a3674f67039d8cd7c
+ echo done with openshift/base-centos7
done with openshift/base-centos7
+ for image in openshift/base-centos7 centos:centos7 openshift/origin-logging-elasticsearch openshift/origin-logging-fluentd openshift/origin-logging-curator openshift/origin-logging-kibana
+ echo pulling image centos:centos7 ...
pulling image centos:centos7 ...
+ vagrant ssh -c 'docker pull centos:centos7' -- -n
Trying to pull repository docker.io/library/centos ... 
centos7: Pulling from docker.io/library/centos
Digest: sha256:aebf12af704307dfa0079b3babdca8d7e8ff6564696882bcb5d11f1d461f9ee9
+ echo done with centos:centos7
done with centos:centos7
+ for image in openshift/base-centos7 centos:centos7 openshift/origin-logging-elasticsearch openshift/origin-logging-fluentd openshift/origin-logging-curator openshift/origin-logging-kibana
+ echo pulling image openshift/origin-logging-elasticsearch ...
pulling image openshift/origin-logging-elasticsearch ...
+ vagrant ssh -c 'docker pull openshift/origin-logging-elasticsearch' -- -n
Using default tag: latest
Trying to pull repository docker.io/openshift/origin-logging-elasticsearch ... 
latest: Pulling from docker.io/openshift/origin-logging-elasticsearch
d5e46245fe40: Already exists
ab4780386529: Pulling fs layer
80503ae3b0fe: Pulling fs layer
110d90898f8a: Pulling fs layer
b110708dfac6: Pulling fs layer
8f9ecbfd25ab: Pulling fs layer
29d7ed0baa52: Pulling fs layer
17ebbcb3d605: Pulling fs layer
d37a5fc9cbde: Pulling fs layer
060ad1853242: Pulling fs layer
eee851304b3a: Pulling fs layer
29d7ed0baa52: Waiting
060ad1853242: Waiting
eee851304b3a: Waiting
17ebbcb3d605: Waiting
d37a5fc9cbde: Waiting
b110708dfac6: Waiting
8f9ecbfd25ab: Waiting
ab4780386529: Verifying Checksum
ab4780386529: Download complete
110d90898f8a: Verifying Checksum
110d90898f8a: Download complete
b110708dfac6: Verifying Checksum
b110708dfac6: Download complete
29d7ed0baa52: Verifying Checksum
29d7ed0baa52: Download complete
8f9ecbfd25ab: Verifying Checksum
8f9ecbfd25ab: Download complete
17ebbcb3d605: Verifying Checksum
17ebbcb3d605: Download complete
060ad1853242: Verifying Checksum
060ad1853242: Download complete
eee851304b3a: Verifying Checksum
eee851304b3a: Download complete
d37a5fc9cbde: Verifying Checksum
d37a5fc9cbde: Download complete
80503ae3b0fe: Verifying Checksum
80503ae3b0fe: Download complete
ab4780386529: Pull complete
80503ae3b0fe: Pull complete
110d90898f8a: Pull complete
b110708dfac6: Pull complete
8f9ecbfd25ab: Pull complete
29d7ed0baa52: Pull complete
17ebbcb3d605: Pull complete
d37a5fc9cbde: Pull complete
060ad1853242: Pull complete
eee851304b3a: Pull complete
Digest: sha256:3a4d359a10d7655cdca2cfa3a89771d6825ffe1d50de4ac7bb570e79f862ccfb
+ echo done with openshift/origin-logging-elasticsearch
done with openshift/origin-logging-elasticsearch
+ for image in openshift/base-centos7 centos:centos7 openshift/origin-logging-elasticsearch openshift/origin-logging-fluentd openshift/origin-logging-curator openshift/origin-logging-kibana
+ echo pulling image openshift/origin-logging-fluentd ...
pulling image openshift/origin-logging-fluentd ...
+ vagrant ssh -c 'docker pull openshift/origin-logging-fluentd' -- -n
Using default tag: latest
Trying to pull repository docker.io/openshift/origin-logging-fluentd ... 
latest: Pulling from docker.io/openshift/origin-logging-fluentd
d5e46245fe40: Already exists
e0f9da45960a: Pulling fs layer
b7564a1b49c3: Pulling fs layer
1f0ac0ad59f6: Pulling fs layer
a036466e4202: Pulling fs layer
954e91cd4a3c: Pulling fs layer
a036466e4202: Waiting
954e91cd4a3c: Waiting
1f0ac0ad59f6: Download complete
a036466e4202: Verifying Checksum
a036466e4202: Download complete
954e91cd4a3c: Download complete
b7564a1b49c3: Verifying Checksum
b7564a1b49c3: Download complete
e0f9da45960a: Verifying Checksum
e0f9da45960a: Download complete
e0f9da45960a: Pull complete
b7564a1b49c3: Pull complete
1f0ac0ad59f6: Pull complete
a036466e4202: Pull complete
954e91cd4a3c: Pull complete
Digest: sha256:8e382dfb002d4f0788d8c5d30ec1baff8005c548bc49fa061fc24d9a0302d9e9
+ echo done with openshift/origin-logging-fluentd
done with openshift/origin-logging-fluentd
+ for image in openshift/base-centos7 centos:centos7 openshift/origin-logging-elasticsearch openshift/origin-logging-fluentd openshift/origin-logging-curator openshift/origin-logging-kibana
+ echo pulling image openshift/origin-logging-curator ...
pulling image openshift/origin-logging-curator ...
+ vagrant ssh -c 'docker pull openshift/origin-logging-curator' -- -n
Using default tag: latest
Trying to pull repository docker.io/openshift/origin-logging-curator ... 
latest: Pulling from docker.io/openshift/origin-logging-curator
d5e46245fe40: Already exists
45b57d2b5ea1: Pulling fs layer
a2722b2a33b6: Pulling fs layer
45b57d2b5ea1: Download complete
a2722b2a33b6: Verifying Checksum
a2722b2a33b6: Download complete
45b57d2b5ea1: Pull complete
a2722b2a33b6: Pull complete
Digest: sha256:ee6d3de66a3dac118b6c961786fc075276bda8c688f9bf8e24f6559b38f0fbeb
+ echo done with openshift/origin-logging-curator
done with openshift/origin-logging-curator
+ for image in openshift/base-centos7 centos:centos7 openshift/origin-logging-elasticsearch openshift/origin-logging-fluentd openshift/origin-logging-curator openshift/origin-logging-kibana
+ echo pulling image openshift/origin-logging-kibana ...
pulling image openshift/origin-logging-kibana ...
+ vagrant ssh -c 'docker pull openshift/origin-logging-kibana' -- -n
Using default tag: latest
Trying to pull repository docker.io/openshift/origin-logging-kibana ... 
latest: Pulling from docker.io/openshift/origin-logging-kibana
45a2e645736c: Already exists
734fb161cf89: Already exists
78efc9e155c4: Already exists
8a3400b7e31a: Already exists
6e4f505c5772: Pulling fs layer
a746d34fe6c3: Pulling fs layer
2e2d74c80385: Pulling fs layer
8f4b9444f21e: Pulling fs layer
9a5f7882bf53: Pulling fs layer
1a2586e469f9: Pulling fs layer
8f4b9444f21e: Waiting
9a5f7882bf53: Waiting
1a2586e469f9: Waiting
6e4f505c5772: Verifying Checksum
6e4f505c5772: Download complete
2e2d74c80385: Verifying Checksum
2e2d74c80385: Download complete
9a5f7882bf53: Verifying Checksum
9a5f7882bf53: Download complete
8f4b9444f21e: Download complete
6e4f505c5772: Pull complete
a746d34fe6c3: Verifying Checksum
a746d34fe6c3: Download complete
1a2586e469f9: Verifying Checksum
1a2586e469f9: Download complete
a746d34fe6c3: Pull complete
2e2d74c80385: Pull complete
8f4b9444f21e: Pull complete
9a5f7882bf53: Pull complete
1a2586e469f9: Pull complete
Digest: sha256:9e3e11edb1f14c744ecf9587a3212e7648934a8bb302513ba84a8c6b058a1229
+ echo done with openshift/origin-logging-kibana
done with openshift/origin-logging-kibana
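The image pre-pull above repeats the same pattern for all six images; reconstructed from the '+ for image in ...' trace lines (a sketch, not the job script verbatim):

    for image in openshift/base-centos7 centos:centos7 \
                 openshift/origin-logging-elasticsearch openshift/origin-logging-fluentd \
                 openshift/origin-logging-curator openshift/origin-logging-kibana; do
        echo "pulling image ${image} ..."
        vagrant ssh -c "docker pull ${image}" -- -n   # '--' passes -n through to ssh
        echo "done with ${image}"
    done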
+ vagrant test-origin-aggregated-logging -d --env GIT_URL=https://github.com/openshift/origin-aggregated-logging --env GIT_BRANCH=master --env O_A_L_DIR=/data/src/github.com/openshift/origin-aggregated-logging --env OS_ROOT=/data/src/github.com/openshift/origin --env ENABLE_OPS_CLUSTER=true --env USE_LOCAL_SOURCE=true --env TEST_PERF=false --env VERBOSE=1 --env OS_ANSIBLE_REPO=https://github.com/openshift/openshift-ansible --env OS_ANSIBLE_BRANCH=master
***************************************************
Running GIT_URL=https://github.com/openshift/origin-aggregated-logging GIT_BRANCH=master O_A_L_DIR=/data/src/github.com/openshift/origin-aggregated-logging OS_ROOT=/data/src/github.com/openshift/origin ENABLE_OPS_CLUSTER=true USE_LOCAL_SOURCE=true TEST_PERF=false VERBOSE=1 OS_ANSIBLE_REPO=https://github.com/openshift/openshift-ansible OS_ANSIBLE_BRANCH=master ./logging.sh...
/data/src/github.com/openshift/origin /data/src/github.com/openshift/origin-aggregated-logging/hack/testing
/data/src/github.com/openshift/origin-aggregated-logging/hack/testing
/data/src/github.com/openshift/origin-aggregated-logging /data/src/github.com/openshift/origin-aggregated-logging/hack/testing
/data/src/github.com/openshift/origin-aggregated-logging/hack/testing
Loaded plugins: amazon-id, rhui-lb, search-disabled-repos
Metadata Cache Created
Loaded plugins: amazon-id, rhui-lb, search-disabled-repos
Resolving Dependencies
--> Running transaction check
---> Package ansible.noarch 0:2.3.0.0-3.el7 will be installed
--> Processing Dependency: sshpass for package: ansible-2.3.0.0-3.el7.noarch
--> Processing Dependency: python-paramiko for package: ansible-2.3.0.0-3.el7.noarch
--> Processing Dependency: python-keyczar for package: ansible-2.3.0.0-3.el7.noarch
--> Processing Dependency: python-httplib2 for package: ansible-2.3.0.0-3.el7.noarch
--> Processing Dependency: python-crypto for package: ansible-2.3.0.0-3.el7.noarch
---> Package python2-pip.noarch 0:8.1.2-5.el7 will be installed
---> Package python2-ruamel-yaml.x86_64 0:0.12.14-9.el7 will be installed
--> Processing Dependency: python2-typing for package: python2-ruamel-yaml-0.12.14-9.el7.x86_64
--> Processing Dependency: python2-ruamel-ordereddict for package: python2-ruamel-yaml-0.12.14-9.el7.x86_64
--> Running transaction check
---> Package python-httplib2.noarch 0:0.9.1-2.el7aos will be installed
---> Package python-keyczar.noarch 0:0.71c-2.el7aos will be installed
--> Processing Dependency: python-pyasn1 for package: python-keyczar-0.71c-2.el7aos.noarch
---> Package python-paramiko.noarch 0:2.1.1-1.el7 will be installed
--> Processing Dependency: python-cryptography for package: python-paramiko-2.1.1-1.el7.noarch
---> Package python2-crypto.x86_64 0:2.6.1-13.el7 will be installed
--> Processing Dependency: libtomcrypt.so.0()(64bit) for package: python2-crypto-2.6.1-13.el7.x86_64
---> Package python2-ruamel-ordereddict.x86_64 0:0.4.9-3.el7 will be installed
---> Package python2-typing.noarch 0:3.5.2.2-3.el7 will be installed
---> Package sshpass.x86_64 0:1.06-1.el7 will be installed
--> Running transaction check
---> Package libtomcrypt.x86_64 0:1.17-23.el7 will be installed
--> Processing Dependency: libtommath >= 0.42.0 for package: libtomcrypt-1.17-23.el7.x86_64
--> Processing Dependency: libtommath.so.0()(64bit) for package: libtomcrypt-1.17-23.el7.x86_64
---> Package python2-cryptography.x86_64 0:1.3.1-3.el7 will be installed
--> Processing Dependency: python-idna >= 2.0 for package: python2-cryptography-1.3.1-3.el7.x86_64
--> Processing Dependency: python-cffi >= 1.4.1 for package: python2-cryptography-1.3.1-3.el7.x86_64
--> Processing Dependency: python-ipaddress for package: python2-cryptography-1.3.1-3.el7.x86_64
--> Processing Dependency: python-enum34 for package: python2-cryptography-1.3.1-3.el7.x86_64
---> Package python2-pyasn1.noarch 0:0.1.9-7.el7 will be installed
--> Running transaction check
---> Package libtommath.x86_64 0:0.42.0-4.el7 will be installed
---> Package python-cffi.x86_64 0:1.6.0-5.el7 will be installed
--> Processing Dependency: python-pycparser for package: python-cffi-1.6.0-5.el7.x86_64
---> Package python-enum34.noarch 0:1.0.4-1.el7 will be installed
---> Package python-idna.noarch 0:2.0-1.el7 will be installed
---> Package python-ipaddress.noarch 0:1.0.16-2.el7 will be installed
--> Running transaction check
---> Package python-pycparser.noarch 0:2.14-1.el7 will be installed
--> Processing Dependency: python-ply for package: python-pycparser-2.14-1.el7.noarch
--> Running transaction check
---> Package python-ply.noarch 0:3.4-10.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package              Arch   Version        Repository                     Size
================================================================================
Installing:
 ansible              noarch 2.3.0.0-3.el7  epel                          5.7 M
 python2-pip          noarch 8.1.2-5.el7    epel                          1.7 M
 python2-ruamel-yaml  x86_64 0.12.14-9.el7  li                            245 k
Installing for dependencies:
 libtomcrypt          x86_64 1.17-23.el7    epel                          224 k
 libtommath           x86_64 0.42.0-4.el7   epel                           35 k
 python-cffi          x86_64 1.6.0-5.el7    oso-rhui-rhel-server-releases 218 k
 python-enum34        noarch 1.0.4-1.el7    oso-rhui-rhel-server-releases  52 k
 python-httplib2      noarch 0.9.1-2.el7aos li                            115 k
 python-idna          noarch 2.0-1.el7      oso-rhui-rhel-server-releases  92 k
 python-ipaddress     noarch 1.0.16-2.el7   oso-rhui-rhel-server-releases  34 k
 python-keyczar       noarch 0.71c-2.el7aos rhel-7-server-ose-3.1-rpms    217 k
 python-paramiko      noarch 2.1.1-1.el7    rhel-7-server-ose-3.4-rpms    266 k
 python-ply           noarch 3.4-10.el7     oso-rhui-rhel-server-releases 123 k
 python-pycparser     noarch 2.14-1.el7     oso-rhui-rhel-server-releases 105 k
 python2-crypto       x86_64 2.6.1-13.el7   epel                          476 k
 python2-cryptography x86_64 1.3.1-3.el7    oso-rhui-rhel-server-releases 471 k
 python2-pyasn1       noarch 0.1.9-7.el7    oso-rhui-rhel-server-releases 100 k
 python2-ruamel-ordereddict
                      x86_64 0.4.9-3.el7    li                             38 k
 python2-typing       noarch 3.5.2.2-3.el7  epel                           39 k
 sshpass              x86_64 1.06-1.el7     epel                           21 k

Transaction Summary
================================================================================
Install  3 Packages (+17 Dependent packages)

Total download size: 10 M
Installed size: 47 M
Downloading packages:
--------------------------------------------------------------------------------
Total                                              4.8 MB/s |  10 MB  00:02     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : python2-pyasn1-0.1.9-7.el7.noarch                           1/20 
  Installing : sshpass-1.06-1.el7.x86_64                                   2/20 
  Installing : libtommath-0.42.0-4.el7.x86_64                              3/20 
  Installing : libtomcrypt-1.17-23.el7.x86_64                              4/20 
  Installing : python2-crypto-2.6.1-13.el7.x86_64                          5/20 
  Installing : python-keyczar-0.71c-2.el7aos.noarch                        6/20 
  Installing : python-enum34-1.0.4-1.el7.noarch                            7/20 
  Installing : python-ply-3.4-10.el7.noarch                                8/20 
  Installing : python-pycparser-2.14-1.el7.noarch                          9/20 
  Installing : python-cffi-1.6.0-5.el7.x86_64                             10/20 
  Installing : python-httplib2-0.9.1-2.el7aos.noarch                      11/20 
  Installing : python-idna-2.0-1.el7.noarch                               12/20 
  Installing : python2-ruamel-ordereddict-0.4.9-3.el7.x86_64              13/20 
  Installing : python2-typing-3.5.2.2-3.el7.noarch                        14/20 
  Installing : python-ipaddress-1.0.16-2.el7.noarch                       15/20 
  Installing : python2-cryptography-1.3.1-3.el7.x86_64                    16/20 
  Installing : python-paramiko-2.1.1-1.el7.noarch                         17/20 
  Installing : ansible-2.3.0.0-3.el7.noarch                               18/20 
  Installing : python2-ruamel-yaml-0.12.14-9.el7.x86_64                   19/20 
  Installing : python2-pip-8.1.2-5.el7.noarch                             20/20 
  Verifying  : python-pycparser-2.14-1.el7.noarch                          1/20 
  Verifying  : python-ipaddress-1.0.16-2.el7.noarch                        2/20 
  Verifying  : ansible-2.3.0.0-3.el7.noarch                                3/20 
  Verifying  : python2-typing-3.5.2.2-3.el7.noarch                         4/20 
  Verifying  : python2-pip-8.1.2-5.el7.noarch                              5/20 
  Verifying  : python2-pyasn1-0.1.9-7.el7.noarch                           6/20 
  Verifying  : libtomcrypt-1.17-23.el7.x86_64                              7/20 
  Verifying  : python-cffi-1.6.0-5.el7.x86_64                              8/20 
  Verifying  : python2-ruamel-yaml-0.12.14-9.el7.x86_64                    9/20 
  Verifying  : python2-ruamel-ordereddict-0.4.9-3.el7.x86_64              10/20 
  Verifying  : python-idna-2.0-1.el7.noarch                               11/20 
  Verifying  : python-httplib2-0.9.1-2.el7aos.noarch                      12/20 
  Verifying  : python-ply-3.4-10.el7.noarch                               13/20 
  Verifying  : python-enum34-1.0.4-1.el7.noarch                           14/20 
  Verifying  : python-keyczar-0.71c-2.el7aos.noarch                       15/20 
  Verifying  : libtommath-0.42.0-4.el7.x86_64                             16/20 
  Verifying  : sshpass-1.06-1.el7.x86_64                                  17/20 
  Verifying  : python2-cryptography-1.3.1-3.el7.x86_64                    18/20 
  Verifying  : python-paramiko-2.1.1-1.el7.noarch                         19/20 
  Verifying  : python2-crypto-2.6.1-13.el7.x86_64                         20/20 

Installed:
  ansible.noarch 0:2.3.0.0-3.el7              python2-pip.noarch 0:8.1.2-5.el7 
  python2-ruamel-yaml.x86_64 0:0.12.14-9.el7 

Dependency Installed:
  libtomcrypt.x86_64 0:1.17-23.el7                                              
  libtommath.x86_64 0:0.42.0-4.el7                                              
  python-cffi.x86_64 0:1.6.0-5.el7                                              
  python-enum34.noarch 0:1.0.4-1.el7                                            
  python-httplib2.noarch 0:0.9.1-2.el7aos                                       
  python-idna.noarch 0:2.0-1.el7                                                
  python-ipaddress.noarch 0:1.0.16-2.el7                                        
  python-keyczar.noarch 0:0.71c-2.el7aos                                        
  python-paramiko.noarch 0:2.1.1-1.el7                                          
  python-ply.noarch 0:3.4-10.el7                                                
  python-pycparser.noarch 0:2.14-1.el7                                          
  python2-crypto.x86_64 0:2.6.1-13.el7                                          
  python2-cryptography.x86_64 0:1.3.1-3.el7                                     
  python2-pyasn1.noarch 0:0.1.9-7.el7                                           
  python2-ruamel-ordereddict.x86_64 0:0.4.9-3.el7                               
  python2-typing.noarch 0:3.5.2.2-3.el7                                         
  sshpass.x86_64 0:1.06-1.el7                                                   

Complete!
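The transaction above installs the three requested packages plus 17 dependencies; the requesting command was presumably along the lines of the following (a sketch assuming a non-interactive yum invocation):

    yum install -y ansible python2-pip python2-ruamel-yaml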
Cloning into '/tmp/tmp.na8PBpGpOF/openhift-ansible'...
Copying oc from path to /usr/local/bin for use by openshift-ansible
Copying oc from path to /usr/bin for use by openshift-ansible
Copying oadm from path to /usr/local/bin for use by openshift-ansible
Copying oadm from path to /usr/bin for use by openshift-ansible
[INFO] Starting logging tests at Thu Jun  8 21:36:03 EDT 2017
Generated new key pair as /tmp/openshift/origin-aggregated-logging/openshift.local.config/master/serviceaccounts.public.key and /tmp/openshift/origin-aggregated-logging/openshift.local.config/master/serviceaccounts.private.key
Generating node credentials ...
Created node config for 172.18.11.188 in /tmp/openshift/origin-aggregated-logging/openshift.local.config/node-172.18.11.188
Wrote master config to: /tmp/openshift/origin-aggregated-logging/openshift.local.config/master/master-config.yaml
Running hack/lib/start.sh:352: executing 'oc get --raw /healthz --config='/tmp/openshift/origin-aggregated-logging/openshift.local.config/master/admin.kubeconfig'' expecting any result and text 'ok'; re-trying every 0.25s until completion or 80.000s...
SUCCESS after 38.951s: hack/lib/start.sh:352: executing 'oc get --raw /healthz --config='/tmp/openshift/origin-aggregated-logging/openshift.local.config/master/admin.kubeconfig'' expecting any result and text 'ok'; re-trying every 0.25s until completion or 80.000s
Standard output from the command:
ok
Standard error from the command:
The connection to the server 172.18.11.188:8443 was refused - did you specify the right host or port?
... repeated 85 times
Error from server (Forbidden): User "system:admin" cannot "get" on "/healthz"
... repeated 5 times
Running hack/lib/start.sh:353: executing 'oc get --raw https://172.18.11.188:10250/healthz --config='/tmp/openshift/origin-aggregated-logging/openshift.local.config/master/admin.kubeconfig'' expecting any result and text 'ok'; re-trying every 0.5s until completion or 120.000s...
SUCCESS after 0.246s: hack/lib/start.sh:353: executing 'oc get --raw https://172.18.11.188:10250/healthz --config='/tmp/openshift/origin-aggregated-logging/openshift.local.config/master/admin.kubeconfig'' expecting any result and text 'ok'; re-trying every 0.5s until completion or 120.000s
Standard output from the command:
ok
There was no error output from the command.
Running hack/lib/start.sh:354: executing 'oc get --raw /healthz/ready --config='/tmp/openshift/origin-aggregated-logging/openshift.local.config/master/admin.kubeconfig'' expecting any result and text 'ok'; re-trying every 0.25s until completion or 80.000s...
SUCCESS after 0.781s: hack/lib/start.sh:354: executing 'oc get --raw /healthz/ready --config='/tmp/openshift/origin-aggregated-logging/openshift.local.config/master/admin.kubeconfig'' expecting any result and text 'ok'; re-trying every 0.25s until completion or 80.000s
Standard output from the command:
ok
Standard error from the command:
Error from server (InternalError): an error on the server ("") has prevented the request from succeeding
Running hack/lib/start.sh:355: executing 'oc get service kubernetes --namespace default --config='/tmp/openshift/origin-aggregated-logging/openshift.local.config/master/admin.kubeconfig'' expecting success; re-trying every 0.25s until completion or 160.000s...
SUCCESS after 0.445s: hack/lib/start.sh:355: executing 'oc get service kubernetes --namespace default --config='/tmp/openshift/origin-aggregated-logging/openshift.local.config/master/admin.kubeconfig'' expecting success; re-trying every 0.25s until completion or 160.000s
Standard output from the command:
NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)                 AGE
kubernetes   172.30.0.1   <none>        443/TCP,53/UDP,53/TCP   4s

There was no error output from the command.
Running hack/lib/start.sh:356: executing 'oc get --raw /api/v1/nodes/172.18.11.188 --config='/tmp/openshift/origin-aggregated-logging/openshift.local.config/master/admin.kubeconfig'' expecting success; re-trying every 0.25s until completion or 80.000s...
SUCCESS after 0.322s: hack/lib/start.sh:356: executing 'oc get --raw /api/v1/nodes/172.18.11.188 --config='/tmp/openshift/origin-aggregated-logging/openshift.local.config/master/admin.kubeconfig'' expecting success; re-trying every 0.25s until completion or 80.000s
Standard output from the command:
{"kind":"Node","apiVersion":"v1","metadata":{"name":"172.18.11.188","selfLink":"/api/v1/nodes/172.18.11.188","uid":"21fc4fa9-4cb4-11e7-9445-0ecf874efb82","resourceVersion":"290","creationTimestamp":"2017-06-09T01:37:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/hostname":"172.18.11.188"},"annotations":{"volumes.kubernetes.io/controller-managed-attach-detach":"true"}},"spec":{"externalID":"172.18.11.188","providerID":"aws:////i-08ff6aa2c118c7c4f"},"status":{"capacity":{"cpu":"4","memory":"7231688Ki","pods":"40"},"allocatable":{"cpu":"4","memory":"7129288Ki","pods":"40"},"conditions":[{"type":"OutOfDisk","status":"False","lastHeartbeatTime":"2017-06-09T01:37:00Z","lastTransitionTime":"2017-06-09T01:37:00Z","reason":"KubeletHasSufficientDisk","message":"kubelet has sufficient disk space available"},{"type":"MemoryPressure","status":"False","lastHeartbeatTime":"2017-06-09T01:37:00Z","lastTransitionTime":"2017-06-09T01:37:00Z","reason":"KubeletHasSufficientMemory","message":"kubelet has sufficient memory available"},{"type":"DiskPressure","status":"False","lastHeartbeatTime":"2017-06-09T01:37:00Z","lastTransitionTime":"2017-06-09T01:37:00Z","reason":"KubeletHasNoDiskPressure","message":"kubelet has no disk pressure"},{"type":"Ready","status":"True","lastHeartbeatTime":"2017-06-09T01:37:00Z","lastTransitionTime":"2017-06-09T01:37:00Z","reason":"KubeletReady","message":"kubelet is posting ready status"}],"addresses":[{"type":"LegacyHostIP","address":"172.18.11.188"},{"type":"InternalIP","address":"172.18.11.188"},{"type":"Hostname","address":"172.18.11.188"}],"daemonEndpoints":{"kubeletEndpoint":{"Port":10250}},"nodeInfo":{"machineID":"f9370ed252a14f73b014c1301a9b6d1b","systemUUID":"EC2C94CB-D989-55B5-BF3C-986A8B251863","bootID":"adb5af75-a764-42ee-b935-43316d27f23d","kernelVersion":"3.10.0-327.22.2.el7.x86_64","osImage":"Red Hat Enterprise Linux Server 7.3 
(Maipo)","containerRuntimeVersion":"docker://1.12.6","kubeletVersion":"v1.6.1+5115d708d7","kubeProxyVersion":"v1.6.1+5115d708d7","operatingSystem":"linux","architecture":"amd64"},"images":[{"names":["openshift/origin-federation:6acabdc","openshift/origin-federation:latest"],"sizeBytes":1205885664},{"names":["openshift/origin-docker-registry:6acabdc","openshift/origin-docker-registry:latest"],"sizeBytes":1100164272},{"names":["openshift/origin-gitserver:6acabdc","openshift/origin-gitserver:latest"],"sizeBytes":1086520226},{"names":["openshift/openvswitch:6acabdc","openshift/openvswitch:latest"],"sizeBytes":1053403667},{"names":["openshift/node:6acabdc","openshift/node:latest"],"sizeBytes":1051721928},{"names":["openshift/origin-keepalived-ipfailover:6acabdc","openshift/origin-keepalived-ipfailover:latest"],"sizeBytes":1028529711},{"names":["openshift/origin-haproxy-router:6acabdc","openshift/origin-haproxy-router:latest"],"sizeBytes":1022758742},{"names":["openshift/origin:6acabdc","openshift/origin:latest"],"sizeBytes":1001728427},{"names":["openshift/origin-f5-router:6acabdc","openshift/origin-f5-router:latest"],"sizeBytes":1001728427},{"names":["openshift/origin-sti-builder:6acabdc","openshift/origin-sti-builder:latest"],"sizeBytes":1001728427},{"names":["openshift/origin-recycler:6acabdc","openshift/origin-recycler:latest"],"sizeBytes":1001728427},{"names":["openshift/origin-deployer:6acabdc","openshift/origin-deployer:latest"],"sizeBytes":1001728427},{"names":["openshift/origin-docker-builder:6acabdc","openshift/origin-docker-builder:latest"],"sizeBytes":1001728427},{"names":["openshift/origin-cluster-capacity:6acabdc","openshift/origin-cluster-capacity:latest"],"sizeBytes":962455026},{"names":["rhel7.1:latest"],"sizeBytes":765301508},{"names":["openshift/dind-master:latest"],"sizeBytes":731456758},{"names":["openshift/dind-node:latest"],"sizeBytes":731453034},{"names":["\u003cnone\u003e@\u003cnone\u003e","\u003cnone\u003e:\u003cnone\u003e"],"sizeBytes":709532011},{"names":["docker.io/openshift/origin-logging-kibana@sha256:9e3e11edb1f14c744ecf9587a3212e7648934a8bb302513ba84a8c6b058a1229","docker.io/openshift/origin-logging-kibana:latest"],"sizeBytes":682851463},{"names":["openshift/dind:latest"],"sizeBytes":640650210},{"names":["docker.io/openshift/origin-logging-elasticsearch@sha256:3a4d359a10d7655cdca2cfa3a89771d6825ffe1d50de4ac7bb570e79f862ccfb","docker.io/openshift/origin-logging-elasticsearch:latest"],"sizeBytes":425433788},{"names":["docker.io/openshift/base-centos7@sha256:aea292a3bddba020cde0ee83e6a45807931eb607c164ec6a3674f67039d8cd7c","docker.io/openshift/base-centos7:latest"],"sizeBytes":383049978},{"names":["rhel7.2:latest"],"sizeBytes":377493597},{"names":["openshift/origin-egress-router:6acabdc","openshift/origin-egress-router:latest"],"sizeBytes":364745713},{"names":["openshift/origin-base:latest"],"sizeBytes":363070172},{"names":["\u003cnone\u003e@\u003cnone\u003e","\u003cnone\u003e:\u003cnone\u003e"],"sizeBytes":363024702},{"names":["docker.io/openshift/origin-logging-fluentd@sha256:8e382dfb002d4f0788d8c5d30ec1baff8005c548bc49fa061fc24d9a0302d9e9","docker.io/openshift/origin-logging-fluentd:latest"],"sizeBytes":359223094},{"names":["docker.io/fedora@sha256:69281ddd7b2600e5f2b17f1e12d7fba25207f459204fb2d15884f8432c479136","docker.io/fedora:25"],"sizeBytes":230864375},{"names":["docker.io/openshift/origin-logging-curator@sha256:ee6d3de66a3dac118b6c961786fc075276bda8c688f9bf8e24f6559b38f0fbeb","docker.io/openshift/origin-logging-curator:latest"],"sizeBytes":224977536},{"nam
es":["rhel7.3:latest","rhel7:latest"],"sizeBytes":219121266},{"names":["openshift/origin-pod:6acabdc","openshift/origin-pod:latest"],"sizeBytes":213199843},{"names":["registry.access.redhat.com/rhel7.2@sha256:98e6ca5d226c26e31a95cd67716afe22833c943e1926a21daf1a030906a02249","registry.access.redhat.com/rhel7.2:latest"],"sizeBytes":201376319},{"names":["registry.access.redhat.com/rhel7.3@sha256:1e232401d8e0ba53b36b757b4712fbcbd1dab9c21db039c45a84871a74e89e68","registry.access.redhat.com/rhel7.3:latest"],"sizeBytes":192693772},{"names":["docker.io/centos@sha256:bba1de7c9d900a898e3cadbae040dfe8a633c06bc104a0df76ae24483e03c077"],"sizeBytes":192548999},{"names":["openshift/origin-source:latest"],"sizeBytes":192548894},{"names":["docker.io/centos@sha256:aebf12af704307dfa0079b3babdca8d7e8ff6564696882bcb5d11f1d461f9ee9","docker.io/centos:7","docker.io/centos:centos7"],"sizeBytes":192548537},{"names":["registry.access.redhat.com/rhel7.1@sha256:1bc5a4c43bbb29a5a96a61896ff696933be3502e2f5fdc4cde02d9e101731fdd","registry.access.redhat.com/rhel7.1:latest"],"sizeBytes":158229901},{"names":["openshift/hello-openshift:6acabdc","openshift/hello-openshift:latest"],"sizeBytes":5643318}]}}

There was no error output from the command.
serviceaccount "registry" created
clusterrolebinding "registry-registry-role" created
deploymentconfig "docker-registry" created
service "docker-registry" created
--> Creating router router ...
info: password for stats user admin has been set to fAvKJDxPjA
    serviceaccount "router" created
    clusterrolebinding "router-router-role" created
    deploymentconfig "router" created
    service "router" created
--> Success
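The object creations above (the 'docker-registry' and 'router' deploymentconfigs, services, service accounts, and role bindings) match what the oadm registry and router helpers print. A sketch, assuming those were the commands used by the harness together with the admin kubeconfig shown earlier:

    oadm registry --config=/tmp/openshift/origin-aggregated-logging/openshift.local.config/master/admin.kubeconfig
    oadm router   --config=/tmp/openshift/origin-aggregated-logging/openshift.local.config/master/admin.kubeconfig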
Running /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:162: executing 'oadm new-project logging --node-selector=''' expecting success...
SUCCESS after 0.465s: /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:162: executing 'oadm new-project logging --node-selector=''' expecting success
Standard output from the command:
Created project logging

There was no error output from the command.
Running /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:163: executing 'oc project logging > /dev/null' expecting success...
SUCCESS after 0.490s: /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:163: executing 'oc project logging > /dev/null' expecting success
There was no output from the command.
There was no error output from the command.
apiVersion: v1
items:
- apiVersion: v1
  kind: ImageStream
  metadata:
    labels:
      build: logging-elasticsearch
      component: development
      logging-infra: development
      provider: openshift
    name: logging-elasticsearch
  spec: {}
- apiVersion: v1
  kind: ImageStream
  metadata:
    labels:
      build: logging-fluentd
      component: development
      logging-infra: development
      provider: openshift
    name: logging-fluentd
  spec: {}
- apiVersion: v1
  kind: ImageStream
  metadata:
    labels:
      build: logging-kibana
      component: development
      logging-infra: development
      provider: openshift
    name: logging-kibana
  spec: {}
- apiVersion: v1
  kind: ImageStream
  metadata:
    labels:
      build: logging-curator
      component: development
      logging-infra: development
      provider: openshift
    name: logging-curator
  spec: {}
- apiVersion: v1
  kind: ImageStream
  metadata:
    labels:
      build: logging-auth-proxy
      component: development
      logging-infra: development
      provider: openshift
    name: logging-auth-proxy
  spec: {}
- apiVersion: v1
  kind: ImageStream
  metadata:
    labels:
      build: logging-deployment
      component: development
      logging-infra: development
      provider: openshift
    name: origin
  spec:
    dockerImageRepository: openshift/origin
    tags:
    - from:
        kind: DockerImage
        name: openshift/origin:v1.5.0-alpha.2
      name: v1.5.0-alpha.2
- apiVersion: v1
  kind: BuildConfig
  metadata:
    labels:
      app: logging-elasticsearch
      component: development
      logging-infra: development
      provider: openshift
    name: logging-elasticsearch
  spec:
    output:
      to:
        kind: ImageStreamTag
        name: logging-elasticsearch:latest
    resources: {}
    source:
      contextDir: elasticsearch
      git:
        ref: master
        uri: https://github.com/openshift/origin-aggregated-logging
      type: Git
    strategy:
      dockerStrategy:
        from:
          kind: DockerImage
          name: openshift/base-centos7
      type: Docker
- apiVersion: v1
  kind: BuildConfig
  metadata:
    labels:
      build: logging-fluentd
      component: development
      logging-infra: development
      provider: openshift
    name: logging-fluentd
  spec:
    output:
      to:
        kind: ImageStreamTag
        name: logging-fluentd:latest
    resources: {}
    source:
      contextDir: fluentd
      git:
        ref: master
        uri: https://github.com/openshift/origin-aggregated-logging
      type: Git
    strategy:
      dockerStrategy:
        from:
          kind: DockerImage
          name: openshift/base-centos7
      type: Docker
- apiVersion: v1
  kind: BuildConfig
  metadata:
    labels:
      build: logging-kibana
      component: development
      logging-infra: development
      provider: openshift
    name: logging-kibana
  spec:
    output:
      to:
        kind: ImageStreamTag
        name: logging-kibana:latest
    resources: {}
    source:
      contextDir: kibana
      git:
        ref: master
        uri: https://github.com/openshift/origin-aggregated-logging
      type: Git
    strategy:
      dockerStrategy:
        from:
          kind: DockerImage
          name: openshift/base-centos7
      type: Docker
- apiVersion: v1
  kind: BuildConfig
  metadata:
    labels:
      build: logging-curator
      component: development
      logging-infra: development
      provider: openshift
    name: logging-curator
  spec:
    output:
      to:
        kind: ImageStreamTag
        name: logging-curator:latest
    resources: {}
    source:
      contextDir: curator
      git:
        ref: master
        uri: https://github.com/openshift/origin-aggregated-logging
      type: Git
    strategy:
      dockerStrategy:
        from:
          kind: DockerImage
          name: openshift/base-centos7
      type: Docker
- apiVersion: v1
  kind: BuildConfig
  metadata:
    labels:
      build: logging-auth-proxy
      component: development
      logging-infra: development
      provider: openshift
    name: logging-auth-proxy
  spec:
    output:
      to:
        kind: ImageStreamTag
        name: logging-auth-proxy:latest
    resources: {}
    source:
      contextDir: kibana-proxy
      git:
        ref: master
        uri: https://github.com/openshift/origin-aggregated-logging
      type: Git
    strategy:
      dockerStrategy:
        from:
          kind: DockerImage
          name: library/node:0.10.36
      type: Docker
kind: List
metadata: {}
Running hack/testing/build-images:31: executing 'oc process -o yaml    -f /data/src/github.com/openshift/origin-aggregated-logging/hack/templates/dev-builds-wo-deployer.yaml    -p LOGGING_FORK_URL=https://github.com/openshift/origin-aggregated-logging -p LOGGING_FORK_BRANCH=master    | build_filter | oc create -f -' expecting success...
SUCCESS after 0.354s: hack/testing/build-images:31: executing 'oc process -o yaml    -f /data/src/github.com/openshift/origin-aggregated-logging/hack/templates/dev-builds-wo-deployer.yaml    -p LOGGING_FORK_URL=https://github.com/openshift/origin-aggregated-logging -p LOGGING_FORK_BRANCH=master    | build_filter | oc create -f -' expecting success
Standard output from the command:
imagestream "logging-elasticsearch" created
imagestream "logging-fluentd" created
imagestream "logging-kibana" created
imagestream "logging-curator" created
imagestream "logging-auth-proxy" created
imagestream "origin" created
buildconfig "logging-elasticsearch" created
buildconfig "logging-fluentd" created
buildconfig "logging-kibana" created
buildconfig "logging-curator" created
buildconfig "logging-auth-proxy" created

There was no error output from the command.
Running hack/testing/build-images:9: executing 'oc get imagestreamtag origin:latest' expecting success; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 1.034s: hack/testing/build-images:9: executing 'oc get imagestreamtag origin:latest' expecting success; re-trying every 0.2s until completion or 60.000s
Standard output from the command:
NAME            DOCKER REF                                                                                 UPDATED        IMAGENAME
origin:latest   openshift/origin@sha256:5510c6f48d2c08c00a0a2bd287a0c679b8a3894c529f43400cdabfb229600283   1 second ago   sha256:5510c6f48d2c08c00a0a2bd287a0c679b8a3894c529f43400cdabfb229600283
Standard error from the command:
Error from server (NotFound): imagestreamtags.image.openshift.io "origin:latest" not found
... repeated 2 times
Uploading directory "/data/src/github.com/openshift/origin-aggregated-logging" as binary input for the build ...
build "logging-auth-proxy-1" started
Uploading directory "/data/src/github.com/openshift/origin-aggregated-logging" as binary input for the build ...
build "logging-curator-1" started
Uploading directory "/data/src/github.com/openshift/origin-aggregated-logging" as binary input for the build ...
build "logging-elasticsearch-1" started
Uploading directory "/data/src/github.com/openshift/origin-aggregated-logging" as binary input for the build ...
build "logging-fluentd-1" started
Uploading directory "/data/src/github.com/openshift/origin-aggregated-logging" as binary input for the build ...
build "logging-kibana-1" started
Running hack/testing/build-images:33: executing 'wait_for_builds_complete' expecting success...
SUCCESS after 368.201s: hack/testing/build-images:33: executing 'wait_for_builds_complete' expecting success
Standard output from the command:
build "logging-kibana-2" started
build in progress for logging-kibana - delete failed build logging-kibana-1 status complete
build "logging-kibana-1" deleted
Builds are complete

Standard error from the command:
Uploading directory "/data/src/github.com/openshift/origin-aggregated-logging" as binary input for the build ...

/tmp/tmp.na8PBpGpOF/openhift-ansible /data/src/github.com/openshift/origin-aggregated-logging
### Created host inventory file ###
[oo_first_master]
openshift

[oo_first_master:vars]
ansible_become=true
ansible_connection=local
containerized=true
docker_protect_installed_version=true
openshift_deployment_type=origin
deployment_type=origin
required_packages=[]


openshift_hosted_logging_hostname=kibana.127.0.0.1.xip.io
openshift_master_logging_public_url=https://kibana.127.0.0.1.xip.io
openshift_logging_master_public_url=https://172.18.11.188:8443

openshift_logging_image_prefix=172.30.155.104:5000/logging/
openshift_logging_use_ops=true

openshift_logging_fluentd_journal_read_from_head=False
openshift_logging_es_log_appenders=['console']
openshift_logging_use_mux=false
openshift_logging_mux_allow_external=false
openshift_logging_use_mux_client=false





###################################
Running hack/testing/init-log-stack:58: executing 'oc login -u system:admin' expecting success...
SUCCESS after 0.216s: hack/testing/init-log-stack:58: executing 'oc login -u system:admin' expecting success
Standard output from the command:
Logged into "https://172.18.11.188:8443" as "system:admin" using existing credentials.

You have access to the following projects and can switch between them with 'oc project <projectname>':

    default
    kube-public
    kube-system
  * logging
    openshift
    openshift-infra

Using project "logging".

There was no error output from the command.
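With the project selected, the generated inventory shown above is handed to the openshift-logging playbook from the checked-out openshift-ansible copy (see the PLAYBOOK line below). A sketch of the invocation; the inventory file name is a placeholder, and the verbosity flag is an assumption based on the detailed task output that follows:

    cd /tmp/tmp.na8PBpGpOF/openhift-ansible
    ansible-playbook -vv -i <generated-inventory-file> \
        playbooks/byo/openshift-cluster/openshift-logging.yml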
Using /tmp/tmp.na8PBpGpOF/openhift-ansible/ansible.cfg as config file

PLAYBOOK: openshift-logging.yml ************************************************
4 plays in /tmp/tmp.na8PBpGpOF/openhift-ansible/playbooks/byo/openshift-cluster/openshift-logging.yml

PLAY [Create initial host groups for localhost] ********************************
META: ran handlers

TASK [include_vars] ************************************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/playbooks/byo/openshift-cluster/initialize_groups.yml:10
ok: [localhost] => {
    "ansible_facts": {
        "g_all_hosts": "{{ g_master_hosts | union(g_node_hosts) | union(g_etcd_hosts) | union(g_lb_hosts) | union(g_nfs_hosts) | union(g_new_node_hosts)| union(g_new_master_hosts) | default([]) }}", 
        "g_etcd_hosts": "{{ groups.etcd | default([]) }}", 
        "g_glusterfs_hosts": "{{ groups.glusterfs | default([]) }}", 
        "g_glusterfs_registry_hosts": "{{ groups.glusterfs_registry | default(g_glusterfs_hosts) }}", 
        "g_lb_hosts": "{{ groups.lb | default([]) }}", 
        "g_master_hosts": "{{ groups.masters | default([]) }}", 
        "g_new_master_hosts": "{{ groups.new_masters | default([]) }}", 
        "g_new_node_hosts": "{{ groups.new_nodes | default([]) }}", 
        "g_nfs_hosts": "{{ groups.nfs | default([]) }}", 
        "g_node_hosts": "{{ groups.nodes | default([]) }}"
    }, 
    "changed": false
}
META: ran handlers
META: ran handlers

PLAY [Populate config host groups] *********************************************
META: ran handlers

TASK [Evaluate groups - g_etcd_hosts required] *********************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:8
skipping: [localhost] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [Evaluate groups - g_master_hosts or g_new_master_hosts required] *********
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:13
skipping: [localhost] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [Evaluate groups - g_node_hosts or g_new_node_hosts required] *************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:18
skipping: [localhost] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [Evaluate groups - g_lb_hosts required] ***********************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:23
skipping: [localhost] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [Evaluate groups - g_nfs_hosts required] **********************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:28
skipping: [localhost] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [Evaluate groups - g_nfs_hosts is single host] ****************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:33
skipping: [localhost] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [Evaluate groups - g_glusterfs_hosts required] ****************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:38
skipping: [localhost] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [Evaluate oo_all_hosts] ***************************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:43

TASK [Evaluate oo_masters] *****************************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:52

TASK [Evaluate oo_first_master] ************************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:61
skipping: [localhost] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [Evaluate oo_masters_to_config] *******************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:70

TASK [Evaluate oo_etcd_to_config] **********************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:79

TASK [Evaluate oo_first_etcd] **************************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:88
skipping: [localhost] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [Evaluate oo_etcd_hosts_to_upgrade] ***************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:100

TASK [Evaluate oo_etcd_hosts_to_backup] ****************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:107
creating host via 'add_host': hostname=openshift
ok: [localhost] => (item=openshift) => {
    "add_host": {
        "groups": [
            "oo_etcd_hosts_to_backup"
        ], 
        "host_name": "openshift", 
        "host_vars": {}
    }, 
    "changed": false, 
    "item": "openshift"
}
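
Each of these "Evaluate ..." tasks uses Ansible's add_host module to build an in-memory inventory group that later plays can target; the JSON above shows the single host "openshift" being placed into oo_etcd_hosts_to_backup. A minimal sketch of such a task follows; the with_items expression is an assumption, not the actual source in evaluate_groups.yml.

    - name: Evaluate oo_etcd_hosts_to_backup
      add_host:
        name: "{{ item }}"
        groups: oo_etcd_hosts_to_backup
      with_items: "{{ groups.oo_etcd_to_config | default(groups.oo_first_master, true) }}"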

TASK [Evaluate oo_nodes_to_config] *********************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:114

TASK [Add master to oo_nodes_to_config] ****************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:124

TASK [Evaluate oo_lb_to_config] ************************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:134

TASK [Evaluate oo_nfs_to_config] ***********************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:143

TASK [Evaluate oo_glusterfs_to_config] *****************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:152
META: ran handlers
META: ran handlers

PLAY [OpenShift Aggregated Logging] ********************************************

TASK [Gathering Facts] *********************************************************
ok: [openshift]
META: ran handlers

TASK [openshift_sanitize_inventory : Abort when conflicting deployment type variables are set] ***
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:2
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_sanitize_inventory : Standardize on latest variable names] *****
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:15
ok: [openshift] => {
    "ansible_facts": {
        "deployment_type": "origin", 
        "openshift_deployment_type": "origin"
    }, 
    "changed": false
}
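
The sanitize role normalizes the two deployment-type variables so the rest of the run can rely on either name; here both resolve to "origin". A rough sketch of the set_fact involved (the exact precedence between the two inventory variables is an assumption):

    - name: Standardize on latest variable names
      set_fact:
        deployment_type: "{{ openshift_deployment_type | default(deployment_type) }}"
        openshift_deployment_type: "{{ openshift_deployment_type | default(deployment_type) }}"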

TASK [openshift_sanitize_inventory : Abort when deployment type is invalid] ****
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:23
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_sanitize_inventory : Normalize openshift_release] **************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:31
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_sanitize_inventory : Abort when openshift_release is invalid] ***
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:41
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_facts : Detecting Operating System] ****************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_facts/tasks/main.yml:2
ok: [openshift] => {
    "changed": false, 
    "stat": {
        "exists": false
    }
}

TASK [openshift_facts : set_fact] **********************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_facts/tasks/main.yml:8
ok: [openshift] => {
    "ansible_facts": {
        "l_is_atomic": false
    }, 
    "changed": false
}

TASK [openshift_facts : set_fact] **********************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_facts/tasks/main.yml:10
ok: [openshift] => {
    "ansible_facts": {
        "l_is_containerized": true, 
        "l_is_etcd_system_container": false, 
        "l_is_master_system_container": false, 
        "l_is_node_system_container": false, 
        "l_is_openvswitch_system_container": false
    }, 
    "changed": false
}

TASK [openshift_facts : set_fact] **********************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_facts/tasks/main.yml:16
ok: [openshift] => {
    "ansible_facts": {
        "l_any_system_container": false
    }, 
    "changed": false
}

TASK [openshift_facts : set_fact] **********************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_facts/tasks/main.yml:18
ok: [openshift] => {
    "ansible_facts": {
        "l_etcd_runtime": "docker"
    }, 
    "changed": false
}
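
The set_fact chain above derives the containerization flags: the stat in "Detecting Operating System" returned exists: false, so l_is_atomic is false, while l_is_containerized still ends up true because a containerized install is requested elsewhere (presumably via the containerized inventory variable). A sketch of the pattern, assuming the stat target is /run/ostree-booted (the log does not print the path):

    - name: Detecting Operating System
      stat:
        path: /run/ostree-booted
      register: ostree_booted

    - set_fact:
        l_is_atomic: "{{ ostree_booted.stat.exists }}"

    - set_fact:
        l_is_containerized: "{{ (l_is_atomic | bool) or (containerized | default(false) | bool) }}"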

TASK [openshift_facts : Validate python version] *******************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_facts/tasks/main.yml:22
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_facts : Validate python version] *******************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_facts/tasks/main.yml:29
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_facts : Determine Atomic Host Docker Version] ******************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_facts/tasks/main.yml:42
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_facts : assert] ************************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_facts/tasks/main.yml:46
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_facts : Load variables] ****************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_facts/tasks/main.yml:53
ok: [openshift] => (item=/tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_facts/vars/default.yml) => {
    "ansible_facts": {
        "required_packages": [
            "iproute", 
            "python-dbus", 
            "PyYAML", 
            "yum-utils"
        ]
    }, 
    "item": "/tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_facts/vars/default.yml"
}

TASK [openshift_facts : Ensure various deps are installed] *********************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_facts/tasks/main.yml:59
ok: [openshift] => (item=iproute) => {
    "changed": false, 
    "item": "iproute", 
    "rc": 0, 
    "results": [
        "iproute-3.10.0-74.el7.x86_64 providing iproute is already installed"
    ]
}
ok: [openshift] => (item=python-dbus) => {
    "changed": false, 
    "item": "python-dbus", 
    "rc": 0, 
    "results": [
        "dbus-python-1.1.1-9.el7.x86_64 providing python-dbus is already installed"
    ]
}
ok: [openshift] => (item=PyYAML) => {
    "changed": false, 
    "item": "PyYAML", 
    "rc": 0, 
    "results": [
        "PyYAML-3.10-11.el7.x86_64 providing PyYAML is already installed"
    ]
}
ok: [openshift] => (item=yum-utils) => {
    "changed": false, 
    "item": "yum-utils", 
    "rc": 0, 
    "results": [
        "yum-utils-1.1.31-40.el7.noarch providing yum-utils is already installed"
    ]
}
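
The four packages come from the required_packages list loaded from vars/default.yml just above, and all of them are already present, so the task reports ok rather than changed. The loop is essentially the following (whether the role calls yum directly or via the package module is not visible here):

    - name: Ensure various deps are installed
      yum:
        name: "{{ item }}"
        state: present
      with_items: "{{ required_packages }}"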

TASK [openshift_facts : Ensure various deps for running system containers are installed] ***
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_facts/tasks/main.yml:64
skipping: [openshift] => (item=atomic)  => {
    "changed": false, 
    "item": "atomic", 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}
skipping: [openshift] => (item=ostree)  => {
    "changed": false, 
    "item": "ostree", 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}
skipping: [openshift] => (item=runc)  => {
    "changed": false, 
    "item": "runc", 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_facts : Gather Cluster facts and set is_containerized if needed] ***
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_facts/tasks/main.yml:71
changed: [openshift] => {
    "ansible_facts": {
        "openshift": {
            "common": {
                "admin_binary": "/usr/local/bin/oadm", 
                "all_hostnames": [
                    "172.18.11.188", 
                    "ec2-34-207-175-137.compute-1.amazonaws.com", 
                    "34.207.175.137", 
                    "ip-172-18-11-188.ec2.internal"
                ], 
                "cli_image": "openshift/origin", 
                "client_binary": "/usr/local/bin/oc", 
                "cluster_id": "default", 
                "config_base": "/etc/origin", 
                "data_dir": "/var/lib/origin", 
                "debug_level": "2", 
                "deployer_image": "openshift/origin-deployer", 
                "deployment_subtype": "basic", 
                "deployment_type": "origin", 
                "dns_domain": "cluster.local", 
                "etcd_runtime": "docker", 
                "examples_content_version": "v3.6", 
                "generate_no_proxy_hosts": true, 
                "hostname": "ip-172-18-11-188.ec2.internal", 
                "install_examples": true, 
                "internal_hostnames": [
                    "172.18.11.188", 
                    "ip-172-18-11-188.ec2.internal"
                ], 
                "ip": "172.18.11.188", 
                "is_atomic": false, 
                "is_containerized": true, 
                "is_etcd_system_container": false, 
                "is_master_system_container": false, 
                "is_node_system_container": false, 
                "is_openvswitch_system_container": false, 
                "kube_svc_ip": "172.30.0.1", 
                "pod_image": "openshift/origin-pod", 
                "portal_net": "172.30.0.0/16", 
                "public_hostname": "ec2-34-207-175-137.compute-1.amazonaws.com", 
                "public_ip": "34.207.175.137", 
                "registry_image": "openshift/origin-docker-registry", 
                "router_image": "openshift/origin-haproxy-router", 
                "sdn_network_plugin_name": "redhat/openshift-ovs-subnet", 
                "service_type": "origin", 
                "use_calico": false, 
                "use_contiv": false, 
                "use_dnsmasq": true, 
                "use_flannel": false, 
                "use_manageiq": true, 
                "use_nuage": false, 
                "use_openshift_sdn": true, 
                "version_gte_3_1_1_or_1_1_1": true, 
                "version_gte_3_1_or_1_1": true, 
                "version_gte_3_2_or_1_2": true, 
                "version_gte_3_3_or_1_3": true, 
                "version_gte_3_4_or_1_4": true, 
                "version_gte_3_5_or_1_5": true, 
                "version_gte_3_6": true
            }, 
            "current_config": {
                "roles": [
                    "node", 
                    "docker"
                ]
            }, 
            "docker": {
                "api_version": 1.24, 
                "disable_push_dockerhub": false, 
                "gte_1_10": true, 
                "options": "--log-driver=journald", 
                "service_name": "docker", 
                "version": "1.12.6"
            }, 
            "hosted": {
                "logging": {
                    "selector": null
                }, 
                "metrics": {
                    "selector": null
                }, 
                "registry": {
                    "selector": "region=infra"
                }, 
                "router": {
                    "selector": "region=infra"
                }
            }, 
            "node": {
                "annotations": {}, 
                "iptables_sync_period": "30s", 
                "kubelet_args": {
                    "node-labels": []
                }, 
                "labels": {}, 
                "local_quota_per_fsgroup": "", 
                "node_image": "openshift/node", 
                "node_system_image": "openshift/node", 
                "nodename": "ip-172-18-11-188.ec2.internal", 
                "ovs_image": "openshift/openvswitch", 
                "ovs_system_image": "openshift/openvswitch", 
                "registry_url": "openshift/origin-${component}:${version}", 
                "schedulable": true, 
                "sdn_mtu": "8951", 
                "set_node_ip": false, 
                "storage_plugin_deps": [
                    "ceph", 
                    "glusterfs", 
                    "iscsi"
                ]
            }, 
            "provider": {
                "metadata": {
                    "ami-id": "ami-83a1fc95", 
                    "ami-launch-index": "0", 
                    "ami-manifest-path": "(unknown)", 
                    "block-device-mapping": {
                        "ami": "/dev/sda1", 
                        "ebs17": "sdb", 
                        "root": "/dev/sda1"
                    }, 
                    "hostname": "ip-172-18-11-188.ec2.internal", 
                    "instance-action": "none", 
                    "instance-id": "i-08ff6aa2c118c7c4f", 
                    "instance-type": "c4.xlarge", 
                    "local-hostname": "ip-172-18-11-188.ec2.internal", 
                    "local-ipv4": "172.18.11.188", 
                    "mac": "0e:cf:87:4e:fb:82", 
                    "metrics": {
                        "vhostmd": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"
                    }, 
                    "network": {
                        "interfaces": {
                            "macs": {
                                "0e:cf:87:4e:fb:82": {
                                    "device-number": "0", 
                                    "interface-id": "eni-82233058", 
                                    "ipv4-associations": {
                                        "34.207.175.137": "172.18.11.188"
                                    }, 
                                    "local-hostname": "ip-172-18-11-188.ec2.internal", 
                                    "local-ipv4s": "172.18.11.188", 
                                    "mac": "0e:cf:87:4e:fb:82", 
                                    "owner-id": "531415883065", 
                                    "public-hostname": "ec2-34-207-175-137.compute-1.amazonaws.com", 
                                    "public-ipv4s": "34.207.175.137", 
                                    "security-group-ids": "sg-7e73221a", 
                                    "security-groups": "default", 
                                    "subnet-id": "subnet-cf57c596", 
                                    "subnet-ipv4-cidr-block": "172.18.0.0/20", 
                                    "vpc-id": "vpc-69705d0c", 
                                    "vpc-ipv4-cidr-block": "172.18.0.0/16", 
                                    "vpc-ipv4-cidr-blocks": "172.18.0.0/16"
                                }
                            }
                        }
                    }, 
                    "placement": {
                        "availability-zone": "us-east-1d"
                    }, 
                    "profile": "default-hvm", 
                    "public-hostname": "ec2-34-207-175-137.compute-1.amazonaws.com", 
                    "public-ipv4": "34.207.175.137", 
                    "public-keys/": "0=libra", 
                    "reservation-id": "r-00005f32f03259c6a", 
                    "security-groups": "default", 
                    "services": {
                        "domain": "amazonaws.com", 
                        "partition": "aws"
                    }
                }, 
                "name": "aws", 
                "network": {
                    "hostname": "ip-172-18-11-188.ec2.internal", 
                    "interfaces": [
                        {
                            "ips": [
                                "172.18.11.188"
                            ], 
                            "network_id": "subnet-cf57c596", 
                            "network_type": "vpc", 
                            "public_ips": [
                                "34.207.175.137"
                            ]
                        }
                    ], 
                    "ip": "172.18.11.188", 
                    "ipv6_enabled": false, 
                    "public_hostname": "ec2-34-207-175-137.compute-1.amazonaws.com", 
                    "public_ip": "34.207.175.137"
                }, 
                "zone": "us-east-1d"
            }
        }
    }, 
    "changed": true
}
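
This single task populates the nested openshift fact tree shown above; everything later in the play (image names, hostnames, SDN and docker settings) is read from it with ordinary Jinja2 lookups. A hypothetical example of consuming those facts:

    - debug:
        msg: "Deploying logging on {{ openshift.common.public_hostname }} ({{ openshift.common.ip }}), containerized={{ openshift.common.is_containerized }}"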

TASK [openshift_facts : Set repoquery command] *********************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_facts/tasks/main.yml:99
ok: [openshift] => {
    "ansible_facts": {
        "repoquery_cmd": "repoquery --plugins"
    }, 
    "changed": false
}

TASK [openshift_logging : fail] ************************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/main.yaml:2
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging : Set default image variables based on deployment_type] ***
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/main.yaml:6
ok: [openshift] => (item=/tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/vars/default_images.yml) => {
    "ansible_facts": {
        "__openshift_logging_image_prefix": "{{ openshift_hosted_logging_deployer_prefix | default('docker.io/openshift/origin-') }}", 
        "__openshift_logging_image_version": "{{ openshift_hosted_logging_deployer_version | default('latest') }}"
    }, 
    "item": "/tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/vars/default_images.yml"
}

TASK [openshift_logging : Set logging image facts] *****************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/main.yaml:12
ok: [openshift] => {
    "ansible_facts": {
        "openshift_logging_image_prefix": "172.30.155.104:5000/logging/", 
        "openshift_logging_image_version": "latest"
    }, 
    "changed": false
}
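
Two steps resolve the image coordinates: default_images.yml (previous task) supplies templated fallbacks, and this set_fact picks the inventory overrides when present. In this run the prefix points at the cluster's integrated registry, 172.30.155.104:5000/logging/, so the test images are pulled from there rather than docker.io. A sketch of the resolution (the variable precedence is an assumption):

    - name: Set logging image facts
      set_fact:
        openshift_logging_image_prefix: "{{ openshift_logging_image_prefix | default(__openshift_logging_image_prefix) }}"
        openshift_logging_image_version: "{{ openshift_logging_image_version | default(__openshift_logging_image_version) }}"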

TASK [openshift_logging : Create temp directory for doing work in] *************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/main.yaml:17
ok: [openshift] => {
    "changed": false, 
    "cmd": [
        "mktemp", 
        "-d", 
        "/tmp/openshift-logging-ansible-XXXXXX"
    ], 
    "delta": "0:00:00.001977", 
    "end": "2017-06-08 21:52:45.675066", 
    "rc": 0, 
    "start": "2017-06-08 21:52:45.673089"
}

STDOUT:

/tmp/openshift-logging-ansible-WCS5kj
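
The directory printed above is registered and reused for the rest of the play; for example, the admin.kubeconfig referenced by the oc adm certificate commands further down lives inside it. A sketch, with the register name mktemp_output being an assumption:

    - name: Create temp directory for doing work in
      command: mktemp -d /tmp/openshift-logging-ansible-XXXXXX
      register: mktemp_output
      changed_when: false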

TASK [openshift_logging : debug] ***********************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/main.yaml:24
ok: [openshift] => {
    "changed": false
}

MSG:

Created temp dir /tmp/openshift-logging-ansible-WCS5kj

TASK [openshift_logging : Create local temp directory for doing work in] *******
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/main.yaml:26
ok: [openshift -> 127.0.0.1] => {
    "changed": false, 
    "cmd": [
        "mktemp", 
        "-d", 
        "/tmp/openshift-logging-ansible-XXXXXX"
    ], 
    "delta": "0:00:00.001892", 
    "end": "2017-06-08 21:52:45.831793", 
    "rc": 0, 
    "start": "2017-06-08 21:52:45.829901"
}

STDOUT:

/tmp/openshift-logging-ansible-kL1YIP

TASK [openshift_logging : include] *********************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/main.yaml:33
included: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml for openshift

TASK [openshift_logging : Gather OpenShift Logging Facts] **********************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:2
ok: [openshift] => {
    "ansible_facts": {
        "openshift_logging_facts": {
            "curator": {
                "clusterrolebindings": {}, 
                "configmaps": {}, 
                "daemonsets": {}, 
                "deploymentconfigs": {}, 
                "oauthclients": {}, 
                "pvcs": {}, 
                "rolebindings": {}, 
                "routes": {}, 
                "sccs": {}, 
                "secrets": {}, 
                "services": {}
            }, 
            "curator_ops": {
                "clusterrolebindings": {}, 
                "configmaps": {}, 
                "daemonsets": {}, 
                "deploymentconfigs": {}, 
                "oauthclients": {}, 
                "pvcs": {}, 
                "rolebindings": {}, 
                "routes": {}, 
                "sccs": {}, 
                "secrets": {}, 
                "services": {}
            }, 
            "elasticsearch": {
                "clusterrolebindings": {}, 
                "configmaps": {}, 
                "daemonsets": {}, 
                "deploymentconfigs": {}, 
                "oauthclients": {}, 
                "pvcs": {}, 
                "rolebindings": {}, 
                "routes": {}, 
                "sccs": {}, 
                "secrets": {}, 
                "services": {}
            }, 
            "elasticsearch_ops": {
                "clusterrolebindings": {}, 
                "configmaps": {}, 
                "daemonsets": {}, 
                "deploymentconfigs": {}, 
                "oauthclients": {}, 
                "pvcs": {}, 
                "rolebindings": {}, 
                "routes": {}, 
                "sccs": {}, 
                "secrets": {}, 
                "services": {}
            }, 
            "fluentd": {
                "clusterrolebindings": {}, 
                "configmaps": {}, 
                "daemonsets": {}, 
                "deploymentconfigs": {}, 
                "oauthclients": {}, 
                "pvcs": {}, 
                "rolebindings": {}, 
                "routes": {}, 
                "sccs": {}, 
                "secrets": {}, 
                "services": {}
            }, 
            "kibana": {
                "clusterrolebindings": {}, 
                "configmaps": {}, 
                "daemonsets": {}, 
                "deploymentconfigs": {}, 
                "oauthclients": {}, 
                "pvcs": {}, 
                "rolebindings": {}, 
                "routes": {}, 
                "sccs": {}, 
                "secrets": {}, 
                "services": {}
            }, 
            "kibana_ops": {
                "clusterrolebindings": {}, 
                "configmaps": {}, 
                "daemonsets": {}, 
                "deploymentconfigs": {}, 
                "oauthclients": {}, 
                "pvcs": {}, 
                "rolebindings": {}, 
                "routes": {}, 
                "sccs": {}, 
                "secrets": {}, 
                "services": {}
            }
        }
    }, 
    "changed": false
}

TASK [openshift_logging : Set logging project] *********************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:7
ok: [openshift] => {
    "changed": false, 
    "results": {
        "cmd": "/bin/oc get namespace logging -o json", 
        "results": {
            "apiVersion": "v1", 
            "kind": "Namespace", 
            "metadata": {
                "annotations": {
                    "openshift.io/description": "", 
                    "openshift.io/display-name": "", 
                    "openshift.io/node-selector": "", 
                    "openshift.io/sa.scc.mcs": "s0:c7,c4", 
                    "openshift.io/sa.scc.supplemental-groups": "1000050000/10000", 
                    "openshift.io/sa.scc.uid-range": "1000050000/10000"
                }, 
                "creationTimestamp": "2017-06-09T01:37:02Z", 
                "name": "logging", 
                "resourceVersion": "734", 
                "selfLink": "/api/v1/namespaces/logging", 
                "uid": "23305e21-4cb4-11e7-9445-0ecf874efb82"
            }, 
            "spec": {
                "finalizers": [
                    "openshift.io/origin", 
                    "kubernetes"
                ]
            }, 
            "status": {
                "phase": "Active"
            }
        }, 
        "returncode": 0
    }, 
    "state": "present"
}
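
The role confirms the logging namespace exists (and is Active) before installing anything into it; the JSON above is simply the oc get output echoed back by the wrapper it uses. The equivalent check as a plain task (the wrapper module itself is not shown in this log):

    - name: Check that the logging project exists
      command: /bin/oc get namespace logging -o json
      register: logging_ns
      changed_when: false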

TASK [openshift_logging : Labeling logging project] ****************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:13

TASK [openshift_logging : Labeling logging project] ****************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:26
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging : Create logging cert directory] ***********************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:39
ok: [openshift] => {
    "changed": false, 
    "gid": 0, 
    "group": "root", 
    "mode": "0755", 
    "owner": "root", 
    "path": "/etc/origin/logging", 
    "secontext": "unconfined_u:object_r:etc_t:s0", 
    "size": 6, 
    "state": "directory", 
    "uid": 0
}

TASK [openshift_logging : include] *********************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:47
included: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml for openshift

TASK [openshift_logging : Checking for ca.key] *********************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:3
ok: [openshift] => {
    "changed": false, 
    "stat": {
        "exists": false
    }
}

TASK [openshift_logging : Checking for ca.crt] *********************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:8
ok: [openshift] => {
    "changed": false, 
    "stat": {
        "exists": false
    }
}

TASK [openshift_logging : Checking for ca.serial.txt] **************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:13
ok: [openshift] => {
    "changed": false, 
    "stat": {
        "exists": false
    }
}

TASK [openshift_logging : Generate certificates] *******************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:18
changed: [openshift] => {
    "changed": true, 
    "cmd": [
        "/usr/local/bin/oc", 
        "adm", 
        "--config=/tmp/openshift-logging-ansible-WCS5kj/admin.kubeconfig", 
        "ca", 
        "create-signer-cert", 
        "--key=/etc/origin/logging/ca.key", 
        "--cert=/etc/origin/logging/ca.crt", 
        "--serial=/etc/origin/logging/ca.serial.txt", 
        "--name=logging-signer-test"
    ], 
    "delta": "0:00:00.316412", 
    "end": "2017-06-08 21:52:50.077268", 
    "rc": 0, 
    "start": "2017-06-08 21:52:49.760856"
}
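
Flattened from the cmd array above, the signing CA is created with a single oc adm invocation. As a task-shaped sketch (reusing the hypothetical mktemp_output register from earlier):

    - name: Generate certificates
      command: >
        /usr/local/bin/oc adm
        --config={{ mktemp_output.stdout }}/admin.kubeconfig
        ca create-signer-cert
        --key=/etc/origin/logging/ca.key
        --cert=/etc/origin/logging/ca.crt
        --serial=/etc/origin/logging/ca.serial.txt
        --name=logging-signer-test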

TASK [openshift_logging : Checking for signing.conf] ***************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:29
ok: [openshift] => {
    "changed": false, 
    "stat": {
        "exists": false
    }
}

TASK [openshift_logging : template] ********************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:34
changed: [openshift] => {
    "changed": true, 
    "checksum": "a5a1bda430be44f982fa9097778b7d35d2e42780", 
    "dest": "/etc/origin/logging/signing.conf", 
    "gid": 0, 
    "group": "root", 
    "md5sum": "449087446670073f2899aac33113350c", 
    "mode": "0644", 
    "owner": "root", 
    "secontext": "system_u:object_r:etc_t:s0", 
    "size": 4263, 
    "src": "/root/.ansible/tmp/ansible-tmp-1496973170.24-263822218473935/source", 
    "state": "file", 
    "uid": 0
}
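
signing.conf is the openssl CA configuration used by the per-component signing tasks later in the run; it is rendered from a role template. A minimal sketch (the template file name is an assumption):

    - name: Write the openssl signing configuration
      template:
        src: signing.conf.j2
        dest: /etc/origin/logging/signing.conf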

TASK [openshift_logging : include] *********************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:39
included: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml for openshift
included: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml for openshift
included: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml for openshift

TASK [openshift_logging : Checking for kibana.crt] *****************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:2
ok: [openshift] => {
    "changed": false, 
    "stat": {
        "exists": false
    }
}

TASK [openshift_logging : Checking for kibana.key] *****************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:7
ok: [openshift] => {
    "changed": false, 
    "stat": {
        "exists": false
    }
}

TASK [openshift_logging : Trying to discover server cert variable name for kibana] ***
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:12
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging : Trying to discover the server key variable name for kibana] ***
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:20
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging : Creating signed server cert and key for kibana] ******
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:28
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging : Copying server key for kibana to generated certs directory] ***
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:40
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging : Copying Server cert for kibana to generated certs directory] ***
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:50
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging : Checking for kibana-ops.crt] *************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:2
ok: [openshift] => {
    "changed": false, 
    "stat": {
        "exists": false
    }
}

TASK [openshift_logging : Checking for kibana-ops.key] *************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:7
ok: [openshift] => {
    "changed": false, 
    "stat": {
        "exists": false
    }
}

TASK [openshift_logging : Trying to discover server cert variable name for kibana-ops] ***
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:12
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging : Trying to discover the server key variable name for kibana-ops] ***
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:20
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging : Creating signed server cert and key for kibana-ops] ***
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:28
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging : Copying server key for kibana-ops to generated certs directory] ***
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:40
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging : Copying Server cert for kibana-ops to generated certs directory] ***
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:50
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging : Checking for kibana-internal.crt] ********************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:2
ok: [openshift] => {
    "changed": false, 
    "stat": {
        "exists": false
    }
}

TASK [openshift_logging : Checking for kibana-internal.key] ********************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:7
ok: [openshift] => {
    "changed": false, 
    "stat": {
        "exists": false
    }
}

TASK [openshift_logging : Trying to discover server cert variable name for kibana-internal] ***
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:12
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging : Trying to discover the server key variable name for kibana-internal] ***
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:20
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging : Creating signed server cert and key for kibana-internal] ***
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:28
changed: [openshift] => {
    "changed": true, 
    "cmd": [
        "/usr/local/bin/oc", 
        "adm", 
        "--config=/tmp/openshift-logging-ansible-WCS5kj/admin.kubeconfig", 
        "ca", 
        "create-server-cert", 
        "--key=/etc/origin/logging/kibana-internal.key", 
        "--cert=/etc/origin/logging/kibana-internal.crt", 
        "--hostnames=kibana, kibana-ops, kibana.127.0.0.1.xip.io, kibana-ops.router.default.svc.cluster.local", 
        "--signer-cert=/etc/origin/logging/ca.crt", 
        "--signer-key=/etc/origin/logging/ca.key", 
        "--signer-serial=/etc/origin/logging/ca.serial.txt"
    ], 
    "delta": "0:00:00.567348", 
    "end": "2017-06-08 21:52:52.431872", 
    "rc": 0, 
    "start": "2017-06-08 21:52:51.864524"
}
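
The kibana-internal serving certificate is signed by the CA generated above, and the --hostnames argument determines its subject alternative names; note that in the actual run the list is passed as a single argument with spaces after the commas. A compacted sketch of the same call:

    - name: Creating signed server cert and key for kibana-internal
      command: >
        /usr/local/bin/oc adm --config={{ mktemp_output.stdout }}/admin.kubeconfig
        ca create-server-cert
        --key=/etc/origin/logging/kibana-internal.key
        --cert=/etc/origin/logging/kibana-internal.crt
        --hostnames=kibana,kibana-ops,kibana.127.0.0.1.xip.io,kibana-ops.router.default.svc.cluster.local
        --signer-cert=/etc/origin/logging/ca.crt
        --signer-key=/etc/origin/logging/ca.key
        --signer-serial=/etc/origin/logging/ca.serial.txt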

TASK [openshift_logging : Copying server key for kibana-internal to generated certs directory] ***
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:40
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging : Copying Server cert for kibana-internal to generated certs directory] ***
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:50
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging : include] *********************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:48
skipping: [openshift] => (item={u'procure_component': u'mux', u'hostnames': u'logging-mux, mux.router.default.svc.cluster.local'})  => {
    "cert_info": {
        "hostnames": "logging-mux, mux.router.default.svc.cluster.local", 
        "procure_component": "mux"
    }, 
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging : include] *********************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:56
skipping: [openshift] => (item={u'procure_component': u'mux'})  => {
    "changed": false, 
    "shared_key_info": {
        "procure_component": "mux"
    }, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging : include] *********************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:63
skipping: [openshift] => (item={u'procure_component': u'es', u'hostnames': u'es, es.router.default.svc.cluster.local'})  => {
    "cert_info": {
        "hostnames": "es, es.router.default.svc.cluster.local", 
        "procure_component": "es"
    }, 
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging : include] *********************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:71
skipping: [openshift] => (item={u'procure_component': u'es-ops', u'hostnames': u'es-ops, es-ops.router.default.svc.cluster.local'})  => {
    "cert_info": {
        "hostnames": "es-ops, es-ops.router.default.svc.cluster.local", 
        "procure_component": "es-ops"
    }, 
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging : Copy proxy TLS configuration file] *******************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:81
changed: [openshift] => {
    "changed": true, 
    "checksum": "36991681e03970736a99be9f084773521c44db06", 
    "dest": "/etc/origin/logging/server-tls.json", 
    "gid": 0, 
    "group": "root", 
    "md5sum": "2a954195add2b2fdde4ed09ff5c8e1c5", 
    "mode": "0644", 
    "owner": "root", 
    "secontext": "system_u:object_r:etc_t:s0", 
    "size": 321, 
    "src": "/root/.ansible/tmp/ansible-tmp-1496973172.88-64712961156966/source", 
    "state": "file", 
    "uid": 0
}

TASK [openshift_logging : Copy proxy TLS configuration file] *******************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:86
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging : Checking for ca.db] **********************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:91
ok: [openshift] => {
    "changed": false, 
    "stat": {
        "exists": false
    }
}

TASK [openshift_logging : copy] ************************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:96
changed: [openshift] => {
    "changed": true, 
    "checksum": "da39a3ee5e6b4b0d3255bfef95601890afd80709", 
    "dest": "/etc/origin/logging/ca.db", 
    "gid": 0, 
    "group": "root", 
    "md5sum": "d41d8cd98f00b204e9800998ecf8427e", 
    "mode": "0644", 
    "owner": "root", 
    "secontext": "system_u:object_r:etc_t:s0", 
    "size": 0, 
    "src": "/root/.ansible/tmp/ansible-tmp-1496973173.24-78065445081849/source", 
    "state": "file", 
    "uid": 0
}

TASK [openshift_logging : Checking for ca.crt.srl] *****************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:101
ok: [openshift] => {
    "changed": false, 
    "stat": {
        "exists": false
    }
}

TASK [openshift_logging : copy] ************************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:106
changed: [openshift] => {
    "changed": true, 
    "checksum": "da39a3ee5e6b4b0d3255bfef95601890afd80709", 
    "dest": "/etc/origin/logging/ca.crt.srl", 
    "gid": 0, 
    "group": "root", 
    "md5sum": "d41d8cd98f00b204e9800998ecf8427e", 
    "mode": "0644", 
    "owner": "root", 
    "secontext": "system_u:object_r:etc_t:s0", 
    "size": 0, 
    "src": "/root/.ansible/tmp/ansible-tmp-1496973173.57-87715783731612/source", 
    "state": "file", 
    "uid": 0
}
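
Both ca.db and ca.crt.srl are seeded as zero-byte files (the SHA1 da39a3ee... and MD5 d41d8cd9... values above are the well-known hashes of empty content); they are bookkeeping files for the openssl-based signing below, though exactly which of them signing.conf points at is not visible in this log. A sketch of the seeding, assuming the role copies empty content:

    - name: Seed the CA index database
      copy:
        content: ""
        dest: /etc/origin/logging/ca.db

    - name: Seed the CA serial tracking file
      copy:
        content: ""
        dest: /etc/origin/logging/ca.crt.srl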

TASK [openshift_logging : Generate PEM certs] **********************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:111
included: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml for openshift
included: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml for openshift
included: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml for openshift
included: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml for openshift

TASK [openshift_logging : Checking for system.logging.fluentd.key] *************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:2
ok: [openshift] => {
    "changed": false, 
    "stat": {
        "exists": false
    }
}

TASK [openshift_logging : Checking for system.logging.fluentd.crt] *************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:7
ok: [openshift] => {
    "changed": false, 
    "stat": {
        "exists": false
    }
}

TASK [openshift_logging : Creating cert req for system.logging.fluentd] ********
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:12
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging : Creating cert req for system.logging.fluentd] ********
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:22
changed: [openshift] => {
    "changed": true, 
    "cmd": [
        "openssl", 
        "req", 
        "-out", 
        "/etc/origin/logging/system.logging.fluentd.csr", 
        "-new", 
        "-newkey", 
        "rsa:2048", 
        "-keyout", 
        "/etc/origin/logging/system.logging.fluentd.key", 
        "-subj", 
        "/CN=system.logging.fluentd/OU=OpenShift/O=Logging", 
        "-days", 
        "712", 
        "-nodes"
    ], 
    "delta": "0:00:00.251379", 
    "end": "2017-06-08 21:52:54.598822", 
    "rc": 0, 
    "start": "2017-06-08 21:52:54.347443"
}

STDERR:

Generating a 2048 bit RSA private key
...........................................................................+++
...............+++
writing new private key to '/etc/origin/logging/system.logging.fluentd.key'
-----

TASK [openshift_logging : Sign cert request with CA for system.logging.fluentd] ***
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:31
changed: [openshift] => {
    "changed": true, 
    "cmd": [
        "openssl", 
        "ca", 
        "-in", 
        "/etc/origin/logging/system.logging.fluentd.csr", 
        "-notext", 
        "-out", 
        "/etc/origin/logging/system.logging.fluentd.crt", 
        "-config", 
        "/etc/origin/logging/signing.conf", 
        "-extensions", 
        "v3_req", 
        "-batch", 
        "-extensions", 
        "server_ext"
    ], 
    "delta": "0:00:00.007337", 
    "end": "2017-06-08 21:52:54.727001", 
    "rc": 0, 
    "start": "2017-06-08 21:52:54.719664"
}

STDERR:

Using configuration from /etc/origin/logging/signing.conf
Check that the request matches the signature
Signature ok
Certificate Details:
        Serial Number: 2 (0x2)
        Validity
            Not Before: Jun  9 01:52:54 2017 GMT
            Not After : Jun  9 01:52:54 2019 GMT
        Subject:
            organizationName          = Logging
            organizationalUnitName    = OpenShift
            commonName                = system.logging.fluentd
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Basic Constraints: 
                CA:FALSE
            X509v3 Extended Key Usage: 
                TLS Web Server Authentication, TLS Web Client Authentication
            X509v3 Subject Key Identifier: 
                B2:AE:D4:79:AA:66:B4:06:26:D2:A8:B6:FE:FB:74:DE:60:7A:C0:23
            X509v3 Authority Key Identifier: 
                0.
Certificate is to be certified until Jun  9 01:52:54 2019 GMT (730 days)

Write out database with 1 new entries
Data Base Updated
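
For each client identity the role generates a CSR and then signs it against the CA configured in signing.conf; the same pair of commands repeats below for kibana, curator and system.admin, with the serial number incrementing from 2 to 5. Flattened from the two cmd arrays above:

    - name: Creating cert req for system.logging.fluentd
      command: >
        openssl req -new -newkey rsa:2048 -nodes -days 712
        -keyout /etc/origin/logging/system.logging.fluentd.key
        -out /etc/origin/logging/system.logging.fluentd.csr
        -subj /CN=system.logging.fluentd/OU=OpenShift/O=Logging

    - name: Sign cert request with CA for system.logging.fluentd
      command: >
        openssl ca -batch -notext
        -config /etc/origin/logging/signing.conf
        -extensions v3_req -extensions server_ext
        -in /etc/origin/logging/system.logging.fluentd.csr
        -out /etc/origin/logging/system.logging.fluentd.crt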

TASK [openshift_logging : Checking for system.logging.kibana.key] **************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:2
ok: [openshift] => {
    "changed": false, 
    "stat": {
        "exists": false
    }
}

TASK [openshift_logging : Checking for system.logging.kibana.crt] **************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:7
ok: [openshift] => {
    "changed": false, 
    "stat": {
        "exists": false
    }
}

TASK [openshift_logging : Creating cert req for system.logging.kibana] *********
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:12
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging : Creating cert req for system.logging.kibana] *********
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:22
changed: [openshift] => {
    "changed": true, 
    "cmd": [
        "openssl", 
        "req", 
        "-out", 
        "/etc/origin/logging/system.logging.kibana.csr", 
        "-new", 
        "-newkey", 
        "rsa:2048", 
        "-keyout", 
        "/etc/origin/logging/system.logging.kibana.key", 
        "-subj", 
        "/CN=system.logging.kibana/OU=OpenShift/O=Logging", 
        "-days", 
        "712", 
        "-nodes"
    ], 
    "delta": "0:00:00.032651", 
    "end": "2017-06-08 21:52:55.136921", 
    "rc": 0, 
    "start": "2017-06-08 21:52:55.104270"
}

STDERR:

Generating a 2048 bit RSA private key
................+++
..+++
writing new private key to '/etc/origin/logging/system.logging.kibana.key'
-----

TASK [openshift_logging : Sign cert request with CA for system.logging.kibana] ***
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:31
changed: [openshift] => {
    "changed": true, 
    "cmd": [
        "openssl", 
        "ca", 
        "-in", 
        "/etc/origin/logging/system.logging.kibana.csr", 
        "-notext", 
        "-out", 
        "/etc/origin/logging/system.logging.kibana.crt", 
        "-config", 
        "/etc/origin/logging/signing.conf", 
        "-extensions", 
        "v3_req", 
        "-batch", 
        "-extensions", 
        "server_ext"
    ], 
    "delta": "0:00:00.007564", 
    "end": "2017-06-08 21:52:55.268532", 
    "rc": 0, 
    "start": "2017-06-08 21:52:55.260968"
}

STDERR:

Using configuration from /etc/origin/logging/signing.conf
Check that the request matches the signature
Signature ok
Certificate Details:
        Serial Number: 3 (0x3)
        Validity
            Not Before: Jun  9 01:52:55 2017 GMT
            Not After : Jun  9 01:52:55 2019 GMT
        Subject:
            organizationName          = Logging
            organizationalUnitName    = OpenShift
            commonName                = system.logging.kibana
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Basic Constraints: 
                CA:FALSE
            X509v3 Extended Key Usage: 
                TLS Web Server Authentication, TLS Web Client Authentication
            X509v3 Subject Key Identifier: 
                34:FA:CE:2F:72:F9:1C:22:98:50:D3:F9:E5:C7:54:E2:ED:B0:F9:87
            X509v3 Authority Key Identifier: 
                0.
Certificate is to be certified until Jun  9 01:52:55 2019 GMT (730 days)

Write out database with 1 new entries
Data Base Updated

TASK [openshift_logging : Checking for system.logging.curator.key] *************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:2
ok: [openshift] => {
    "changed": false, 
    "stat": {
        "exists": false
    }
}

TASK [openshift_logging : Checking for system.logging.curator.crt] *************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:7
ok: [openshift] => {
    "changed": false, 
    "stat": {
        "exists": false
    }
}

TASK [openshift_logging : Creating cert req for system.logging.curator] ********
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:12
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging : Creating cert req for system.logging.curator] ********
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:22
changed: [openshift] => {
    "changed": true, 
    "cmd": [
        "openssl", 
        "req", 
        "-out", 
        "/etc/origin/logging/system.logging.curator.csr", 
        "-new", 
        "-newkey", 
        "rsa:2048", 
        "-keyout", 
        "/etc/origin/logging/system.logging.curator.key", 
        "-subj", 
        "/CN=system.logging.curator/OU=OpenShift/O=Logging", 
        "-days", 
        "712", 
        "-nodes"
    ], 
    "delta": "0:00:00.040978", 
    "end": "2017-06-08 21:52:55.683596", 
    "rc": 0, 
    "start": "2017-06-08 21:52:55.642618"
}

STDERR:

Generating a 2048 bit RSA private key
..............+++
..........+++
writing new private key to '/etc/origin/logging/system.logging.curator.key'
-----

TASK [openshift_logging : Sign cert request with CA for system.logging.curator] ***
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:31
changed: [openshift] => {
    "changed": true, 
    "cmd": [
        "openssl", 
        "ca", 
        "-in", 
        "/etc/origin/logging/system.logging.curator.csr", 
        "-notext", 
        "-out", 
        "/etc/origin/logging/system.logging.curator.crt", 
        "-config", 
        "/etc/origin/logging/signing.conf", 
        "-extensions", 
        "v3_req", 
        "-batch", 
        "-extensions", 
        "server_ext"
    ], 
    "delta": "0:00:00.007186", 
    "end": "2017-06-08 21:52:55.809626", 
    "rc": 0, 
    "start": "2017-06-08 21:52:55.802440"
}

STDERR:

Using configuration from /etc/origin/logging/signing.conf
Check that the request matches the signature
Signature ok
Certificate Details:
        Serial Number: 4 (0x4)
        Validity
            Not Before: Jun  9 01:52:55 2017 GMT
            Not After : Jun  9 01:52:55 2019 GMT
        Subject:
            organizationName          = Logging
            organizationalUnitName    = OpenShift
            commonName                = system.logging.curator
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Basic Constraints: 
                CA:FALSE
            X509v3 Extended Key Usage: 
                TLS Web Server Authentication, TLS Web Client Authentication
            X509v3 Subject Key Identifier: 
                75:80:41:EE:E7:F0:E7:88:44:B3:21:DE:07:EB:63:89:CC:20:2E:E8
            X509v3 Authority Key Identifier: 
                0.
Certificate is to be certified until Jun  9 01:52:55 2019 GMT (730 days)

Write out database with 1 new entries
Data Base Updated

TASK [openshift_logging : Checking for system.admin.key] ***********************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:2
ok: [openshift] => {
    "changed": false, 
    "stat": {
        "exists": false
    }
}

TASK [openshift_logging : Checking for system.admin.crt] ***********************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:7
ok: [openshift] => {
    "changed": false, 
    "stat": {
        "exists": false
    }
}

TASK [openshift_logging : Creating cert req for system.admin] ******************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:12
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging : Creating cert req for system.admin] ******************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:22
changed: [openshift] => {
    "changed": true, 
    "cmd": [
        "openssl", 
        "req", 
        "-out", 
        "/etc/origin/logging/system.admin.csr", 
        "-new", 
        "-newkey", 
        "rsa:2048", 
        "-keyout", 
        "/etc/origin/logging/system.admin.key", 
        "-subj", 
        "/CN=system.admin/OU=OpenShift/O=Logging", 
        "-days", 
        "712", 
        "-nodes"
    ], 
    "delta": "0:00:00.138122", 
    "end": "2017-06-08 21:52:56.325513", 
    "rc": 0, 
    "start": "2017-06-08 21:52:56.187391"
}

STDERR:

Generating a 2048 bit RSA private key
..............................................................................................+++
..+++
writing new private key to '/etc/origin/logging/system.admin.key'
-----

TASK [openshift_logging : Sign cert request with CA for system.admin] **********
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:31
changed: [openshift] => {
    "changed": true, 
    "cmd": [
        "openssl", 
        "ca", 
        "-in", 
        "/etc/origin/logging/system.admin.csr", 
        "-notext", 
        "-out", 
        "/etc/origin/logging/system.admin.crt", 
        "-config", 
        "/etc/origin/logging/signing.conf", 
        "-extensions", 
        "v3_req", 
        "-batch", 
        "-extensions", 
        "server_ext"
    ], 
    "delta": "0:00:00.007378", 
    "end": "2017-06-08 21:52:56.451991", 
    "rc": 0, 
    "start": "2017-06-08 21:52:56.444613"
}

STDERR:

Using configuration from /etc/origin/logging/signing.conf
Check that the request matches the signature
Signature ok
Certificate Details:
        Serial Number: 5 (0x5)
        Validity
            Not Before: Jun  9 01:52:56 2017 GMT
            Not After : Jun  9 01:52:56 2019 GMT
        Subject:
            organizationName          = Logging
            organizationalUnitName    = OpenShift
            commonName                = system.admin
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Basic Constraints: 
                CA:FALSE
            X509v3 Extended Key Usage: 
                TLS Web Server Authentication, TLS Web Client Authentication
            X509v3 Subject Key Identifier: 
                04:B2:03:F6:4A:A7:4F:7F:9E:43:52:16:14:5F:E3:42:D1:83:2F:FC
            X509v3 Authority Key Identifier: 
                0.
Certificate is to be certified until Jun  9 01:52:56 2019 GMT (730 days)

Write out database with 1 new entries
Data Base Updated

TASK [openshift_logging : Generate PEM cert for mux] ***************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:121
skipping: [openshift] => (item=system.logging.mux)  => {
    "changed": false, 
    "node_name": "system.logging.mux", 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging : Generate PEM cert for Elasticsearch external route] ***
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:129
skipping: [openshift] => (item=system.logging.es)  => {
    "changed": false, 
    "node_name": "system.logging.es", 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging : Creating necessary JKS certs] ************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:137
included: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_jks.yaml for openshift

TASK [openshift_logging : Checking for elasticsearch.jks] **********************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_jks.yaml:3
ok: [openshift] => {
    "changed": false, 
    "stat": {
        "exists": false
    }
}

TASK [openshift_logging : Checking for logging-es.jks] *************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_jks.yaml:8
ok: [openshift] => {
    "changed": false, 
    "stat": {
        "exists": false
    }
}

TASK [openshift_logging : Checking for system.admin.jks] ***********************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_jks.yaml:13
ok: [openshift] => {
    "changed": false, 
    "stat": {
        "exists": false
    }
}

TASK [openshift_logging : Checking for truststore.jks] *************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_jks.yaml:18
ok: [openshift] => {
    "changed": false, 
    "stat": {
        "exists": false
    }
}

TASK [openshift_logging : Create placeholder for previously created JKS certs to prevent recreating...] ***
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_jks.yaml:23
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging : Create placeholder for previously created JKS certs to prevent recreating...] ***
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_jks.yaml:28
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging : Create placeholder for previously created JKS certs to prevent recreating...] ***
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_jks.yaml:33
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging : Create placeholder for previously created JKS certs to prevent recreating...] ***
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_jks.yaml:38
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging : pulling down signing items from host] ****************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_jks.yaml:43
changed: [openshift] => (item=ca.crt) => {
    "changed": true, 
    "checksum": "b16c4fca8219559f16a0181ac49b84b649d51737", 
    "dest": "/tmp/openshift-logging-ansible-kL1YIP/ca.crt", 
    "item": "ca.crt", 
    "md5sum": "c63ecabbe1481ce972b34d5e83877f69", 
    "remote_checksum": "b16c4fca8219559f16a0181ac49b84b649d51737", 
    "remote_md5sum": null
}
changed: [openshift] => (item=ca.key) => {
    "changed": true, 
    "checksum": "f782911b68f8fda2ef87a28b3d5c3244142904c0", 
    "dest": "/tmp/openshift-logging-ansible-kL1YIP/ca.key", 
    "item": "ca.key", 
    "md5sum": "eb3503b08f1ad41f023ca724c1458364", 
    "remote_checksum": "f782911b68f8fda2ef87a28b3d5c3244142904c0", 
    "remote_md5sum": null
}
changed: [openshift] => (item=ca.serial.txt) => {
    "changed": true, 
    "checksum": "b649682b92a811746098e5c91e891e5142a41950", 
    "dest": "/tmp/openshift-logging-ansible-kL1YIP/ca.serial.txt", 
    "item": "ca.serial.txt", 
    "md5sum": "76b01ce73ac53fdac1c67d27ac040473", 
    "remote_checksum": "b649682b92a811746098e5c91e891e5142a41950", 
    "remote_md5sum": null
}
ok: [openshift] => (item=ca.crl.srl) => {
    "changed": false, 
    "file": "/etc/origin/logging/ca.crl.srl", 
    "item": "ca.crl.srl"
}

MSG:

the remote file does not exist, not transferring, ignored
changed: [openshift] => (item=ca.db) => {
    "changed": true, 
    "checksum": "7eb22827d056c995ff6eca876a571ce63945d09f", 
    "dest": "/tmp/openshift-logging-ansible-kL1YIP/ca.db", 
    "item": "ca.db", 
    "md5sum": "19e18539fbffdfaf9b82a3f47e9be975", 
    "remote_checksum": "7eb22827d056c995ff6eca876a571ce63945d09f", 
    "remote_md5sum": null
}

TASK [openshift_logging : template] ********************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_jks.yaml:56
changed: [openshift -> 127.0.0.1] => {
    "changed": true, 
    "checksum": "f1d8f39620e2953602dc02626c97c3a5bdb84e3e", 
    "dest": "/tmp/openshift-logging-ansible-kL1YIP/signing.conf", 
    "gid": 0, 
    "group": "root", 
    "md5sum": "1028add1ad11ac0a07778f4366ae921f", 
    "mode": "0644", 
    "owner": "root", 
    "secontext": "unconfined_u:object_r:admin_home_t:s0", 
    "size": 4281, 
    "src": "/root/.ansible/tmp/ansible-tmp-1496973177.88-254402024756251/source", 
    "state": "file", 
    "uid": 0
}

TASK [openshift_logging : Run JKS generation script] ***************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_jks.yaml:61
changed: [openshift -> 127.0.0.1] => {
    "changed": true, 
    "rc": 0
}

STDOUT:

Generating keystore and certificate for node system.admin
Generating certificate signing request for node system.admin
Sign certificate request with CA
Import back to keystore (including CA chain)
All done for system.admin
Generating keystore and certificate for node elasticsearch
Generating certificate signing request for node elasticsearch
Sign certificate request with CA
Import back to keystore (including CA chain)
All done for elasticsearch
Generating keystore and certificate for node logging-es
Generating certificate signing request for node logging-es
Sign certificate request with CA
Import back to keystore (including CA chain)
All done for logging-es
Import CA to truststore for validating client certs



STDERR:

+ '[' 2 -lt 1 ']'
+ dir=/tmp/openshift-logging-ansible-kL1YIP
+ SCRATCH_DIR=/tmp/openshift-logging-ansible-kL1YIP
+ PROJECT=logging
+ [[ ! -f /tmp/openshift-logging-ansible-kL1YIP/system.admin.jks ]]
+ generate_JKS_client_cert system.admin
+ NODE_NAME=system.admin
+ ks_pass=kspass
+ ts_pass=tspass
+ dir=/tmp/openshift-logging-ansible-kL1YIP
+ echo Generating keystore and certificate for node system.admin
+ keytool -genkey -alias system.admin -keystore /tmp/openshift-logging-ansible-kL1YIP/system.admin.jks -keyalg RSA -keysize 2048 -validity 712 -keypass kspass -storepass kspass -dname 'CN=system.admin, OU=OpenShift, O=Logging'
+ echo Generating certificate signing request for node system.admin
+ keytool -certreq -alias system.admin -keystore /tmp/openshift-logging-ansible-kL1YIP/system.admin.jks -file /tmp/openshift-logging-ansible-kL1YIP/system.admin.jks.csr -keyalg rsa -keypass kspass -storepass kspass -dname 'CN=system.admin, OU=OpenShift, O=Logging'
+ echo Sign certificate request with CA
+ openssl ca -in /tmp/openshift-logging-ansible-kL1YIP/system.admin.jks.csr -notext -out /tmp/openshift-logging-ansible-kL1YIP/system.admin.jks.crt -config /tmp/openshift-logging-ansible-kL1YIP/signing.conf -extensions v3_req -batch -extensions server_ext
Using configuration from /tmp/openshift-logging-ansible-kL1YIP/signing.conf
Check that the request matches the signature
Signature ok
Certificate Details:
        Serial Number: 6 (0x6)
        Validity
            Not Before: Jun  9 01:53:10 2017 GMT
            Not After : Jun  9 01:53:10 2019 GMT
        Subject:
            organizationName          = Logging
            organizationalUnitName    = OpenShift
            commonName                = system.admin
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Basic Constraints: 
                CA:FALSE
            X509v3 Extended Key Usage: 
                TLS Web Server Authentication, TLS Web Client Authentication
            X509v3 Subject Key Identifier: 
                46:4C:7B:05:7C:7A:56:70:01:7B:E3:FB:1A:5A:F1:D3:5D:40:B8:C6
            X509v3 Authority Key Identifier: 
                0.
Certificate is to be certified until Jun  9 01:53:10 2019 GMT (730 days)

Write out database with 1 new entries
Data Base Updated
+ echo 'Import back to keystore (including CA chain)'
+ keytool -import -file /tmp/openshift-logging-ansible-kL1YIP/ca.crt -keystore /tmp/openshift-logging-ansible-kL1YIP/system.admin.jks -storepass kspass -noprompt -alias sig-ca
Certificate was added to keystore
+ keytool -import -file /tmp/openshift-logging-ansible-kL1YIP/system.admin.jks.crt -keystore /tmp/openshift-logging-ansible-kL1YIP/system.admin.jks -storepass kspass -noprompt -alias system.admin
Certificate reply was installed in keystore
+ echo All done for system.admin
+ [[ ! -f /tmp/openshift-logging-ansible-kL1YIP/elasticsearch.jks ]]
++ join , logging-es logging-es-ops
++ local IFS=,
++ shift
++ echo logging-es,logging-es-ops
+ generate_JKS_chain true elasticsearch logging-es,logging-es-ops
+ dir=/tmp/openshift-logging-ansible-kL1YIP
+ ADD_OID=true
+ NODE_NAME=elasticsearch
+ CERT_NAMES=logging-es,logging-es-ops
+ ks_pass=kspass
+ ts_pass=tspass
+ rm -rf elasticsearch
+ extension_names=
+ for name in '${CERT_NAMES//,/ }'
+ extension_names=,dns:logging-es
+ for name in '${CERT_NAMES//,/ }'
+ extension_names=,dns:logging-es,dns:logging-es-ops
+ '[' true = true ']'
+ extension_names=,dns:logging-es,dns:logging-es-ops,oid:1.2.3.4.5.5
+ echo Generating keystore and certificate for node elasticsearch
+ keytool -genkey -alias elasticsearch -keystore /tmp/openshift-logging-ansible-kL1YIP/elasticsearch.jks -keypass kspass -storepass kspass -keyalg RSA -keysize 2048 -validity 712 -dname 'CN=elasticsearch, OU=OpenShift, O=Logging' -ext san=dns:localhost,ip:127.0.0.1,dns:logging-es,dns:logging-es-ops,oid:1.2.3.4.5.5
+ echo Generating certificate signing request for node elasticsearch
+ keytool -certreq -alias elasticsearch -keystore /tmp/openshift-logging-ansible-kL1YIP/elasticsearch.jks -storepass kspass -file /tmp/openshift-logging-ansible-kL1YIP/elasticsearch.csr -keyalg rsa -dname 'CN=elasticsearch, OU=OpenShift, O=Logging' -ext san=dns:localhost,ip:127.0.0.1,dns:logging-es,dns:logging-es-ops,oid:1.2.3.4.5.5
+ echo Sign certificate request with CA
+ openssl ca -in /tmp/openshift-logging-ansible-kL1YIP/elasticsearch.csr -notext -out /tmp/openshift-logging-ansible-kL1YIP/elasticsearch.crt -config /tmp/openshift-logging-ansible-kL1YIP/signing.conf -extensions v3_req -batch -extensions server_ext
Using configuration from /tmp/openshift-logging-ansible-kL1YIP/signing.conf
Check that the request matches the signature
Signature ok
Certificate Details:
        Serial Number: 7 (0x7)
        Validity
            Not Before: Jun  9 01:53:11 2017 GMT
            Not After : Jun  9 01:53:11 2019 GMT
        Subject:
            organizationName          = Logging
            organizationalUnitName    = OpenShift
            commonName                = elasticsearch
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Basic Constraints: 
                CA:FALSE
            X509v3 Extended Key Usage: 
                TLS Web Server Authentication, TLS Web Client Authentication
            X509v3 Subject Key Identifier: 
                7D:82:B1:87:C7:7D:5B:FB:13:D3:CE:35:19:64:4D:F8:AB:55:DF:4F
            X509v3 Authority Key Identifier: 
                0.
            X509v3 Subject Alternative Name: 
                DNS:localhost, IP Address:127.0.0.1, DNS:logging-es, DNS:logging-es-ops, Registered ID:1.2.3.4.5.5
Certificate is to be certified until Jun  9 01:53:11 2019 GMT (730 days)

Write out database with 1 new entries
Data Base Updated
+ echo 'Import back to keystore (including CA chain)'
+ keytool -import -file /tmp/openshift-logging-ansible-kL1YIP/ca.crt -keystore /tmp/openshift-logging-ansible-kL1YIP/elasticsearch.jks -storepass kspass -noprompt -alias sig-ca
Certificate was added to keystore
+ keytool -import -file /tmp/openshift-logging-ansible-kL1YIP/elasticsearch.crt -keystore /tmp/openshift-logging-ansible-kL1YIP/elasticsearch.jks -storepass kspass -noprompt -alias elasticsearch
Certificate reply was installed in keystore
+ echo All done for elasticsearch
+ [[ ! -f /tmp/openshift-logging-ansible-kL1YIP/logging-es.jks ]]
++ join , logging-es logging-es.logging.svc.cluster.local logging-es-cluster logging-es-cluster.logging.svc.cluster.local logging-es-ops logging-es-ops.logging.svc.cluster.local logging-es-ops-cluster logging-es-ops-cluster.logging.svc.cluster.local
++ local IFS=,
++ shift
++ echo logging-es,logging-es.logging.svc.cluster.local,logging-es-cluster,logging-es-cluster.logging.svc.cluster.local,logging-es-ops,logging-es-ops.logging.svc.cluster.local,logging-es-ops-cluster,logging-es-ops-cluster.logging.svc.cluster.local
+ generate_JKS_chain false logging-es logging-es,logging-es.logging.svc.cluster.local,logging-es-cluster,logging-es-cluster.logging.svc.cluster.local,logging-es-ops,logging-es-ops.logging.svc.cluster.local,logging-es-ops-cluster,logging-es-ops-cluster.logging.svc.cluster.local
+ dir=/tmp/openshift-logging-ansible-kL1YIP
+ ADD_OID=false
+ NODE_NAME=logging-es
+ CERT_NAMES=logging-es,logging-es.logging.svc.cluster.local,logging-es-cluster,logging-es-cluster.logging.svc.cluster.local,logging-es-ops,logging-es-ops.logging.svc.cluster.local,logging-es-ops-cluster,logging-es-ops-cluster.logging.svc.cluster.local
+ ks_pass=kspass
+ ts_pass=tspass
+ rm -rf logging-es
+ extension_names=
+ for name in '${CERT_NAMES//,/ }'
+ extension_names=,dns:logging-es
+ for name in '${CERT_NAMES//,/ }'
+ extension_names=,dns:logging-es,dns:logging-es.logging.svc.cluster.local
+ for name in '${CERT_NAMES//,/ }'
+ extension_names=,dns:logging-es,dns:logging-es.logging.svc.cluster.local,dns:logging-es-cluster
+ for name in '${CERT_NAMES//,/ }'
+ extension_names=,dns:logging-es,dns:logging-es.logging.svc.cluster.local,dns:logging-es-cluster,dns:logging-es-cluster.logging.svc.cluster.local
+ for name in '${CERT_NAMES//,/ }'
+ extension_names=,dns:logging-es,dns:logging-es.logging.svc.cluster.local,dns:logging-es-cluster,dns:logging-es-cluster.logging.svc.cluster.local,dns:logging-es-ops
+ for name in '${CERT_NAMES//,/ }'
+ extension_names=,dns:logging-es,dns:logging-es.logging.svc.cluster.local,dns:logging-es-cluster,dns:logging-es-cluster.logging.svc.cluster.local,dns:logging-es-ops,dns:logging-es-ops.logging.svc.cluster.local
+ for name in '${CERT_NAMES//,/ }'
+ extension_names=,dns:logging-es,dns:logging-es.logging.svc.cluster.local,dns:logging-es-cluster,dns:logging-es-cluster.logging.svc.cluster.local,dns:logging-es-ops,dns:logging-es-ops.logging.svc.cluster.local,dns:logging-es-ops-cluster
+ for name in '${CERT_NAMES//,/ }'
+ extension_names=,dns:logging-es,dns:logging-es.logging.svc.cluster.local,dns:logging-es-cluster,dns:logging-es-cluster.logging.svc.cluster.local,dns:logging-es-ops,dns:logging-es-ops.logging.svc.cluster.local,dns:logging-es-ops-cluster,dns:logging-es-ops-cluster.logging.svc.cluster.local
+ '[' false = true ']'
+ echo Generating keystore and certificate for node logging-es
+ keytool -genkey -alias logging-es -keystore /tmp/openshift-logging-ansible-kL1YIP/logging-es.jks -keypass kspass -storepass kspass -keyalg RSA -keysize 2048 -validity 712 -dname 'CN=logging-es, OU=OpenShift, O=Logging' -ext san=dns:localhost,ip:127.0.0.1,dns:logging-es,dns:logging-es.logging.svc.cluster.local,dns:logging-es-cluster,dns:logging-es-cluster.logging.svc.cluster.local,dns:logging-es-ops,dns:logging-es-ops.logging.svc.cluster.local,dns:logging-es-ops-cluster,dns:logging-es-ops-cluster.logging.svc.cluster.local
+ echo Generating certificate signing request for node logging-es
+ keytool -certreq -alias logging-es -keystore /tmp/openshift-logging-ansible-kL1YIP/logging-es.jks -storepass kspass -file /tmp/openshift-logging-ansible-kL1YIP/logging-es.csr -keyalg rsa -dname 'CN=logging-es, OU=OpenShift, O=Logging' -ext san=dns:localhost,ip:127.0.0.1,dns:logging-es,dns:logging-es.logging.svc.cluster.local,dns:logging-es-cluster,dns:logging-es-cluster.logging.svc.cluster.local,dns:logging-es-ops,dns:logging-es-ops.logging.svc.cluster.local,dns:logging-es-ops-cluster,dns:logging-es-ops-cluster.logging.svc.cluster.local
+ echo Sign certificate request with CA
+ openssl ca -in /tmp/openshift-logging-ansible-kL1YIP/logging-es.csr -notext -out /tmp/openshift-logging-ansible-kL1YIP/logging-es.crt -config /tmp/openshift-logging-ansible-kL1YIP/signing.conf -extensions v3_req -batch -extensions server_ext
Using configuration from /tmp/openshift-logging-ansible-kL1YIP/signing.conf
Check that the request matches the signature
Signature ok
Certificate Details:
        Serial Number: 8 (0x8)
        Validity
            Not Before: Jun  9 01:53:12 2017 GMT
            Not After : Jun  9 01:53:12 2019 GMT
        Subject:
            organizationName          = Logging
            organizationalUnitName    = OpenShift
            commonName                = logging-es
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Basic Constraints: 
                CA:FALSE
            X509v3 Extended Key Usage: 
                TLS Web Server Authentication, TLS Web Client Authentication
            X509v3 Subject Key Identifier: 
                DF:77:F0:31:73:B5:D7:30:26:87:42:9C:40:DF:68:CF:9E:8B:3D:FB
            X509v3 Authority Key Identifier: 
                0.
            X509v3 Subject Alternative Name: 
                DNS:localhost, IP Address:127.0.0.1, DNS:logging-es, DNS:logging-es.logging.svc.cluster.local, DNS:logging-es-cluster, DNS:logging-es-cluster.logging.svc.cluster.local, DNS:logging-es-ops, DNS:logging-es-ops.logging.svc.cluster.local, DNS:logging-es-ops-cluster, DNS:logging-es-ops-cluster.logging.svc.cluster.local
Certificate is to be certified until Jun  9 01:53:12 2019 GMT (730 days)

Write out database with 1 new entries
Data Base Updated
+ echo 'Import back to keystore (including CA chain)'
+ keytool -import -file /tmp/openshift-logging-ansible-kL1YIP/ca.crt -keystore /tmp/openshift-logging-ansible-kL1YIP/logging-es.jks -storepass kspass -noprompt -alias sig-ca
Certificate was added to keystore
+ keytool -import -file /tmp/openshift-logging-ansible-kL1YIP/logging-es.crt -keystore /tmp/openshift-logging-ansible-kL1YIP/logging-es.jks -storepass kspass -noprompt -alias logging-es
Certificate reply was installed in keystore
+ echo All done for logging-es
+ '[' '!' -f /tmp/openshift-logging-ansible-kL1YIP/truststore.jks ']'
+ createTruststore
+ echo 'Import CA to truststore for validating client certs'
+ keytool -import -file /tmp/openshift-logging-ansible-kL1YIP/ca.crt -keystore /tmp/openshift-logging-ansible-kL1YIP/truststore.jks -storepass tspass -noprompt -alias sig-ca
Certificate was added to keystore
+ exit 0
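
The keystores produced by the script above can be inspected with the same keytool client; a minimal sketch, using the kspass/tspass store passwords visible in the trace:

    # system.admin.jks should contain the signing CA plus the signed client cert
    keytool -list -keystore /tmp/openshift-logging-ansible-kL1YIP/system.admin.jks -storepass kspass
    # the truststore should contain only the signing CA
    keytool -list -keystore /tmp/openshift-logging-ansible-kL1YIP/truststore.jks -storepass tspass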


TASK [openshift_logging : Pushing locally generated JKS certs to remote host...] ***
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_jks.yaml:66
changed: [openshift] => {
    "changed": true, 
    "checksum": "779a89ec2a87d038ad062320b8a6c7ecaf35acf9", 
    "dest": "/etc/origin/logging/elasticsearch.jks", 
    "gid": 0, 
    "group": "root", 
    "md5sum": "d60745e267f19ae29ad8915010608851", 
    "mode": "0644", 
    "owner": "root", 
    "secontext": "system_u:object_r:etc_t:s0", 
    "size": 3767, 
    "src": "/root/.ansible/tmp/ansible-tmp-1496973193.32-210533462936770/source", 
    "state": "file", 
    "uid": 0
}

TASK [openshift_logging : Pushing locally generated JKS certs to remote host...] ***
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_jks.yaml:72
changed: [openshift] => {
    "changed": true, 
    "checksum": "de5dbd62d022104e4acc04fc3d8d4ef0b7aee9fc", 
    "dest": "/etc/origin/logging/logging-es.jks", 
    "gid": 0, 
    "group": "root", 
    "md5sum": "9ab4fac70562cb44ddc2c90d3556ef14", 
    "mode": "0644", 
    "owner": "root", 
    "secontext": "system_u:object_r:etc_t:s0", 
    "size": 3981, 
    "src": "/root/.ansible/tmp/ansible-tmp-1496973193.54-236350480559273/source", 
    "state": "file", 
    "uid": 0
}

TASK [openshift_logging : Pushing locally generated JKS certs to remote host...] ***
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_jks.yaml:78
changed: [openshift] => {
    "changed": true, 
    "checksum": "b728045efe2f701226b7840f8a44c105eda1a04a", 
    "dest": "/etc/origin/logging/system.admin.jks", 
    "gid": 0, 
    "group": "root", 
    "md5sum": "6328fe38c50402b69b89344006c71c59", 
    "mode": "0644", 
    "owner": "root", 
    "secontext": "system_u:object_r:etc_t:s0", 
    "size": 3702, 
    "src": "/root/.ansible/tmp/ansible-tmp-1496973193.77-97357299419588/source", 
    "state": "file", 
    "uid": 0
}

TASK [openshift_logging : Pushing locally generated JKS certs to remote host...] ***
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_jks.yaml:84
changed: [openshift] => {
    "changed": true, 
    "checksum": "4bb27a9e1b97458cc28ed7f658f79ae483d2e8c2", 
    "dest": "/etc/origin/logging/truststore.jks", 
    "gid": 0, 
    "group": "root", 
    "md5sum": "a124ec9b5aa425e629e304f8673b0610", 
    "mode": "0644", 
    "owner": "root", 
    "secontext": "system_u:object_r:etc_t:s0", 
    "size": 797, 
    "src": "/root/.ansible/tmp/ansible-tmp-1496973193.99-106617989791908/source", 
    "state": "file", 
    "uid": 0
}
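
Each of the four copy results above reports a SHA-1 checksum; a sketch for cross-checking the files that actually landed on the remote host:

    sha1sum /etc/origin/logging/elasticsearch.jks /etc/origin/logging/logging-es.jks \
            /etc/origin/logging/system.admin.jks /etc/origin/logging/truststore.jks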

TASK [openshift_logging : Generate proxy session] ******************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:141
ok: [openshift] => {
    "ansible_facts": {
        "session_secret": "GcHdI3aubBWfhJj31NYaZIpFnPz1o6WnAKwV86sXIQztumA2cTY18GVBuU7ZWhZ4QufkxVVkxUODoDEJlPXYVP1M96e8P4OjF83n7EFHkX3Oot2cTdEe09vVZUuOdIomWDcpDgVYCVv8kWbqPcJBNHQ5k8DatSJvxQBo0kStfIegZt9Bv8qKGnWEiKqMBNN1zPhezCST"
    }, 
    "changed": false
}

TASK [openshift_logging : Generate oauth client secret] ************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:146
ok: [openshift] => {
    "ansible_facts": {
        "oauth_secret": "2OkerlWP2bkIo76PHFKjlvfilVSQTF58WHtSxxPGVr7m0hHduwJyZbXvJ52NW1jp"
    }, 
    "changed": false
}
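
The session_secret and oauth_secret facts above are random strings generated by the playbook; purely as an illustration (not the playbook's actual mechanism), a comparable value could be produced from the shell with:

    # roughly 64 characters of alphanumeric randomness, similar in shape to oauth_secret above
    openssl rand -base64 64 | tr -dc 'A-Za-z0-9' | head -c 64; echo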

TASK [openshift_logging : set_fact] ********************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:53

TASK [openshift_logging : set_fact] ********************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:57
ok: [openshift] => {
    "ansible_facts": {
        "es_indices": "[]"
    }, 
    "changed": false
}

TASK [openshift_logging : set_fact] ********************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:60
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging : include_role] ****************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:64

TASK [openshift_logging : include_role] ****************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:85
statically included: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/determine_version.yaml

TASK [openshift_logging_elasticsearch : Validate Elasticsearch cluster size] ***
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:2
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_elasticsearch : Validate Elasticsearch Ops cluster size] ***
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:6
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_elasticsearch : fail] **********************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:10
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_elasticsearch : set_fact] ******************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:14
ok: [openshift] => {
    "ansible_facts": {
        "elasticsearch_name": "logging-elasticsearch", 
        "es_component": "es"
    }, 
    "changed": false
}

TASK [openshift_logging_elasticsearch : fail] **********************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/determine_version.yaml:3
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_elasticsearch : set_fact] ******************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/determine_version.yaml:7
ok: [openshift] => {
    "ansible_facts": {
        "es_version": "3_5"
    }, 
    "changed": false
}

TASK [openshift_logging_elasticsearch : debug] *********************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/determine_version.yaml:11
ok: [openshift] => {
    "changed": false, 
    "openshift_logging_image_version": "latest"
}

TASK [openshift_logging_elasticsearch : set_fact] ******************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/determine_version.yaml:14
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_elasticsearch : fail] **********************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/determine_version.yaml:17
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_elasticsearch : Create temp directory for doing work in] ***
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:21
ok: [openshift] => {
    "changed": false, 
    "cmd": [
        "mktemp", 
        "-d", 
        "/tmp/openshift-logging-ansible-XXXXXX"
    ], 
    "delta": "0:00:00.002210", 
    "end": "2017-06-08 21:53:14.826135", 
    "rc": 0, 
    "start": "2017-06-08 21:53:14.823925"
}

STDOUT:

/tmp/openshift-logging-ansible-IzUGtO

TASK [openshift_logging_elasticsearch : set_fact] ******************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:26
ok: [openshift] => {
    "ansible_facts": {
        "tempdir": "/tmp/openshift-logging-ansible-IzUGtO"
    }, 
    "changed": false
}

TASK [openshift_logging_elasticsearch : Create templates subdirectory] *********
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:30
ok: [openshift] => {
    "changed": false, 
    "gid": 0, 
    "group": "root", 
    "mode": "0755", 
    "owner": "root", 
    "path": "/tmp/openshift-logging-ansible-IzUGtO/templates", 
    "secontext": "unconfined_u:object_r:user_tmp_t:s0", 
    "size": 6, 
    "state": "directory", 
    "uid": 0
}

TASK [openshift_logging_elasticsearch : Create ES service account] *************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:40
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_elasticsearch : Create ES service account] *************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:48
changed: [openshift] => {
    "changed": true, 
    "results": {
        "cmd": "/bin/oc get sa aggregated-logging-elasticsearch -o json -n logging", 
        "results": [
            {
                "apiVersion": "v1", 
                "imagePullSecrets": [
                    {
                        "name": "aggregated-logging-elasticsearch-dockercfg-nfqlv"
                    }
                ], 
                "kind": "ServiceAccount", 
                "metadata": {
                    "creationTimestamp": "2017-06-09T01:53:15Z", 
                    "name": "aggregated-logging-elasticsearch", 
                    "namespace": "logging", 
                    "resourceVersion": "1333", 
                    "selfLink": "/api/v1/namespaces/logging/serviceaccounts/aggregated-logging-elasticsearch", 
                    "uid": "672b62a1-4cb6-11e7-9445-0ecf874efb82"
                }, 
                "secrets": [
                    {
                        "name": "aggregated-logging-elasticsearch-token-07c18"
                    }, 
                    {
                        "name": "aggregated-logging-elasticsearch-dockercfg-nfqlv"
                    }
                ]
            }
        ], 
        "returncode": 0
    }, 
    "state": "present"
}
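
A sketch of how the newly created service account and its generated secrets (the token and dockercfg entries listed above) could be inspected by hand:

    oc get sa aggregated-logging-elasticsearch -n logging -o yaml
    oc get secrets -n logging | grep aggregated-logging-elasticsearch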

TASK [openshift_logging_elasticsearch : copy] **********************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:57
changed: [openshift] => {
    "changed": true, 
    "checksum": "e5015364391ac609da8655a9a1224131599a5cea", 
    "dest": "/tmp/openshift-logging-ansible-IzUGtO/rolebinding-reader.yml", 
    "gid": 0, 
    "group": "root", 
    "md5sum": "446fb96447527f48f97e69bb41bad7be", 
    "mode": "0644", 
    "owner": "root", 
    "secontext": "unconfined_u:object_r:admin_home_t:s0", 
    "size": 135, 
    "src": "/root/.ansible/tmp/ansible-tmp-1496973196.02-261033723729933/source", 
    "state": "file", 
    "uid": 0
}

TASK [openshift_logging_elasticsearch : Create rolebinding-reader role] ********
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:61
changed: [openshift] => {
    "changed": true, 
    "results": {
        "cmd": "/bin/oc get clusterrole rolebinding-reader -o json -n logging", 
        "results": [
            {
                "apiVersion": "v1", 
                "kind": "ClusterRole", 
                "metadata": {
                    "creationTimestamp": "2017-06-09T01:53:16Z", 
                    "name": "rolebinding-reader", 
                    "resourceVersion": "122", 
                    "selfLink": "/oapi/v1/clusterroles/rolebinding-reader", 
                    "uid": "67d1f5ca-4cb6-11e7-9445-0ecf874efb82"
                }, 
                "rules": [
                    {
                        "apiGroups": [
                            ""
                        ], 
                        "attributeRestrictions": null, 
                        "resources": [
                            "clusterrolebindings"
                        ], 
                        "verbs": [
                            "get"
                        ]
                    }
                ]
            }
        ], 
        "returncode": 0
    }, 
    "state": "present"
}

TASK [openshift_logging_elasticsearch : Set rolebinding-reader permissions for ES] ***
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:72
changed: [openshift] => {
    "changed": true, 
    "present": "present", 
    "results": {
        "cmd": "/bin/oc adm policy add-cluster-role-to-user rolebinding-reader system:serviceaccount:logging:aggregated-logging-elasticsearch -n logging", 
        "results": "", 
        "returncode": 0
    }
}
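
The two tasks above define a minimal rolebinding-reader ClusterRole (get on clusterrolebindings only) and grant it to the Elasticsearch service account; a sketch for confirming the grant took effect:

    oc describe clusterrole rolebinding-reader
    oc adm policy who-can get clusterrolebindings | grep aggregated-logging-elasticsearch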

TASK [openshift_logging_elasticsearch : Generate logging-elasticsearch-view-role] ***
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:81
ok: [openshift] => {
    "changed": false, 
    "checksum": "d752c09323565f80ed14fa806d42284f0c5aef2a", 
    "dest": "/tmp/openshift-logging-ansible-IzUGtO/logging-elasticsearch-view-role.yaml", 
    "gid": 0, 
    "group": "root", 
    "md5sum": "8299dca2fb036c06ba7c4f620680e0f6", 
    "mode": "0644", 
    "owner": "root", 
    "secontext": "unconfined_u:object_r:admin_home_t:s0", 
    "size": 183, 
    "src": "/root/.ansible/tmp/ansible-tmp-1496973197.73-7609355762022/source", 
    "state": "file", 
    "uid": 0
}

TASK [openshift_logging_elasticsearch : Set logging-elasticsearch-view-role role] ***
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:94
changed: [openshift] => {
    "changed": true, 
    "results": {
        "cmd": "/bin/oc get rolebinding logging-elasticsearch-view-role -o json -n logging", 
        "results": [
            {
                "apiVersion": "v1", 
                "groupNames": null, 
                "kind": "RoleBinding", 
                "metadata": {
                    "creationTimestamp": "2017-06-09T01:53:18Z", 
                    "name": "logging-elasticsearch-view-role", 
                    "namespace": "logging", 
                    "resourceVersion": "767", 
                    "selfLink": "/oapi/v1/namespaces/logging/rolebindings/logging-elasticsearch-view-role", 
                    "uid": "68d5922e-4cb6-11e7-9445-0ecf874efb82"
                }, 
                "roleRef": {
                    "name": "view"
                }, 
                "subjects": [
                    {
                        "kind": "ServiceAccount", 
                        "name": "aggregated-logging-elasticsearch", 
                        "namespace": "logging"
                    }
                ], 
                "userNames": [
                    "system:serviceaccount:logging:aggregated-logging-elasticsearch"
                ]
            }
        ], 
        "returncode": 0
    }, 
    "state": "present"
}

TASK [openshift_logging_elasticsearch : template] ******************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:105
ok: [openshift] => {
    "changed": false, 
    "checksum": "f91458d5dad42c496e2081ef872777a6f6eb9ff9", 
    "dest": "/tmp/openshift-logging-ansible-IzUGtO/elasticsearch-logging.yml", 
    "gid": 0, 
    "group": "root", 
    "md5sum": "e4be7c33c1927bbdd8c909bfbe3d9f0b", 
    "mode": "0644", 
    "owner": "root", 
    "secontext": "unconfined_u:object_r:admin_home_t:s0", 
    "size": 2171, 
    "src": "/root/.ansible/tmp/ansible-tmp-1496973198.73-77040435565441/source", 
    "state": "file", 
    "uid": 0
}

TASK [openshift_logging_elasticsearch : template] ******************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:111
ok: [openshift] => {
    "changed": false, 
    "checksum": "6d4f976f6e77a6e0c8dca7e01fb5bedb68678b1d", 
    "dest": "/tmp/openshift-logging-ansible-IzUGtO/elasticsearch.yml", 
    "gid": 0, 
    "group": "root", 
    "md5sum": "75abfd3a190832e593a8e5e7c5695e8e", 
    "mode": "0644", 
    "owner": "root", 
    "secontext": "unconfined_u:object_r:admin_home_t:s0", 
    "size": 2454, 
    "src": "/root/.ansible/tmp/ansible-tmp-1496973198.97-129276701107760/source", 
    "state": "file", 
    "uid": 0
}

TASK [openshift_logging_elasticsearch : copy] **********************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:121
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_elasticsearch : copy] **********************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:127
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_elasticsearch : Set ES configmap] **********************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:133
changed: [openshift] => {
    "changed": true, 
    "results": {
        "cmd": "/bin/oc get configmap logging-elasticsearch -o json -n logging", 
        "results": [
            {
                "apiVersion": "v1", 
                "data": {
                    "elasticsearch.yml": "cluster:\n  name: ${CLUSTER_NAME}\n\nscript:\n  inline: on\n  indexed: on\n\nindex:\n  number_of_shards: 1\n  number_of_replicas: 0\n  unassigned.node_left.delayed_timeout: 2m\n  translog:\n    flush_threshold_size: 256mb\n    flush_threshold_period: 5m\n\nnode:\n  master: ${IS_MASTER}\n  data: ${HAS_DATA}\n\nnetwork:\n  host: 0.0.0.0\n\ncloud:\n  kubernetes:\n    service: ${SERVICE_DNS}\n    namespace: ${NAMESPACE}\n\ndiscovery:\n  type: kubernetes\n  zen.ping.multicast.enabled: false\n  zen.minimum_master_nodes: ${NODE_QUORUM}\n\ngateway:\n  recover_after_nodes: ${NODE_QUORUM}\n  expected_nodes: ${RECOVER_EXPECTED_NODES}\n  recover_after_time: ${RECOVER_AFTER_TIME}\n\nio.fabric8.elasticsearch.authentication.users: [\"system.logging.kibana\", \"system.logging.fluentd\", \"system.logging.curator\", \"system.admin\"]\nio.fabric8.elasticsearch.kibana.mapping.app: /usr/share/elasticsearch/index_patterns/com.redhat.viaq-openshift.index-pattern.json\nio.fabric8.elasticsearch.kibana.mapping.ops: /usr/share/elasticsearch/index_patterns/com.redhat.viaq-openshift.index-pattern.json\nio.fabric8.elasticsearch.kibana.mapping.empty: /usr/share/elasticsearch/index_patterns/com.redhat.viaq-openshift.index-pattern.json\n\nopenshift.config:\n  use_common_data_model: true\n  project_index_prefix: \"project\"\n  time_field_name: \"@timestamp\"\n\nopenshift.searchguard:\n  keystore.path: /etc/elasticsearch/secret/admin.jks\n  truststore.path: /etc/elasticsearch/secret/searchguard.truststore\n\nopenshift.operations.allow_cluster_reader: false\n\npath:\n  data: /elasticsearch/persistent/${CLUSTER_NAME}/data\n  logs: /elasticsearch/${CLUSTER_NAME}/logs\n  work: /elasticsearch/${CLUSTER_NAME}/work\n  scripts: /elasticsearch/${CLUSTER_NAME}/scripts\n\nsearchguard:\n  authcz.admin_dn:\n  - CN=system.admin,OU=OpenShift,O=Logging\n  config_index_name: \".searchguard.${HOSTNAME}\"\n  ssl:\n    transport:\n      enabled: true\n      enforce_hostname_verification: false\n      keystore_type: JKS\n      keystore_filepath: /etc/elasticsearch/secret/searchguard.key\n      keystore_password: kspass\n      truststore_type: JKS\n      truststore_filepath: /etc/elasticsearch/secret/searchguard.truststore\n      truststore_password: tspass\n    http:\n      enabled: true\n      keystore_type: JKS\n      keystore_filepath: /etc/elasticsearch/secret/key\n      keystore_password: kspass\n      clientauth_mode: OPTIONAL\n      truststore_type: JKS\n      truststore_filepath: /etc/elasticsearch/secret/truststore\n      truststore_password: tspass\n", 
                    "logging.yml": "# you can override this using by setting a system property, for example -Des.logger.level=DEBUG\nes.logger.level: INFO\nrootLogger: ${es.logger.level}, console, file\nlogger:\n  # log action execution errors for easier debugging\n  action: WARN\n  # reduce the logging for aws, too much is logged under the default INFO\n  com.amazonaws: WARN\n  io.fabric8.elasticsearch: ${PLUGIN_LOGLEVEL}\n  io.fabric8.kubernetes: ${PLUGIN_LOGLEVEL}\n\n  # gateway\n  #gateway: DEBUG\n  #index.gateway: DEBUG\n\n  # peer shard recovery\n  #indices.recovery: DEBUG\n\n  # discovery\n  #discovery: TRACE\n\n  index.search.slowlog: TRACE, index_search_slow_log_file\n  index.indexing.slowlog: TRACE, index_indexing_slow_log_file\n\n  # search-guard\n  com.floragunn.searchguard: WARN\n\nadditivity:\n  index.search.slowlog: false\n  index.indexing.slowlog: false\n\nappender:\n  console:\n    type: console\n    layout:\n      type: consolePattern\n      conversionPattern: \"[%d{ISO8601}][%-5p][%-25c] %m%n\"\n\n  file:\n    type: dailyRollingFile\n    file: ${path.logs}/${cluster.name}.log\n    datePattern: \"'.'yyyy-MM-dd\"\n    layout:\n      type: pattern\n      conversionPattern: \"[%d{ISO8601}][%-5p][%-25c] %m%n\"\n\n  # Use the following log4j-extras RollingFileAppender to enable gzip compression of log files.\n  # For more information see https://logging.apache.org/log4j/extras/apidocs/org/apache/log4j/rolling/RollingFileAppender.html\n  #file:\n    #type: extrasRollingFile\n    #file: ${path.logs}/${cluster.name}.log\n    #rollingPolicy: timeBased\n    #rollingPolicy.FileNamePattern: ${path.logs}/${cluster.name}.log.%d{yyyy-MM-dd}.gz\n    #layout:\n      #type: pattern\n      #conversionPattern: \"[%d{ISO8601}][%-5p][%-25c] %m%n\"\n\n  index_search_slow_log_file:\n    type: dailyRollingFile\n    file: ${path.logs}/${cluster.name}_index_search_slowlog.log\n    datePattern: \"'.'yyyy-MM-dd\"\n    layout:\n      type: pattern\n      conversionPattern: \"[%d{ISO8601}][%-5p][%-25c] %m%n\"\n\n  index_indexing_slow_log_file:\n    type: dailyRollingFile\n    file: ${path.logs}/${cluster.name}_index_indexing_slowlog.log\n    datePattern: \"'.'yyyy-MM-dd\"\n    layout:\n      type: pattern\n      conversionPattern: \"[%d{ISO8601}][%-5p][%-25c] %m%n\"\n"
                }, 
                "kind": "ConfigMap", 
                "metadata": {
                    "creationTimestamp": "2017-06-09T01:53:19Z", 
                    "name": "logging-elasticsearch", 
                    "namespace": "logging", 
                    "resourceVersion": "1340", 
                    "selfLink": "/api/v1/namespaces/logging/configmaps/logging-elasticsearch", 
                    "uid": "69a87cf0-4cb6-11e7-9445-0ecf874efb82"
                }
            }
        ], 
        "returncode": 0
    }, 
    "state": "present"
}
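
The configmap above carries elasticsearch.yml and logging.yml as single escaped strings; a sketch for pulling the rendered elasticsearch.yml back out in readable form (using the usual jsonpath dot-escaping for keys that contain a period):

    oc get configmap logging-elasticsearch -n logging -o jsonpath='{.data.elasticsearch\.yml}'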

TASK [openshift_logging_elasticsearch : Set ES secret] *************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:144
changed: [openshift] => {
    "changed": true, 
    "results": {
        "cmd": "/bin/oc secrets new logging-elasticsearch key=/etc/origin/logging/logging-es.jks truststore=/etc/origin/logging/truststore.jks searchguard.key=/etc/origin/logging/elasticsearch.jks searchguard.truststore=/etc/origin/logging/truststore.jks admin-key=/etc/origin/logging/system.admin.key admin-cert=/etc/origin/logging/system.admin.crt admin-ca=/etc/origin/logging/ca.crt admin.jks=/etc/origin/logging/system.admin.jks -n logging", 
        "results": "", 
        "returncode": 0
    }, 
    "state": "present"
}
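
The logging-elasticsearch secret is assembled here with the older 'oc secrets new' key=path syntax; on clients where that subcommand has been retired, an equivalent sketch using the same files and key names would be:

    oc create secret generic logging-elasticsearch -n logging \
        --from-file=key=/etc/origin/logging/logging-es.jks \
        --from-file=truststore=/etc/origin/logging/truststore.jks \
        --from-file=searchguard.key=/etc/origin/logging/elasticsearch.jks \
        --from-file=searchguard.truststore=/etc/origin/logging/truststore.jks \
        --from-file=admin-key=/etc/origin/logging/system.admin.key \
        --from-file=admin-cert=/etc/origin/logging/system.admin.crt \
        --from-file=admin-ca=/etc/origin/logging/ca.crt \
        --from-file=admin.jks=/etc/origin/logging/system.admin.jks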

TASK [openshift_logging_elasticsearch : Set logging-es-cluster service] ********
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:168
changed: [openshift] => {
    "changed": true, 
    "results": {
        "clusterip": "172.30.127.128", 
        "cmd": "/bin/oc get service logging-es-cluster -o json -n logging", 
        "results": [
            {
                "apiVersion": "v1", 
                "kind": "Service", 
                "metadata": {
                    "creationTimestamp": "2017-06-09T01:53:21Z", 
                    "name": "logging-es-cluster", 
                    "namespace": "logging", 
                    "resourceVersion": "1344", 
                    "selfLink": "/api/v1/namespaces/logging/services/logging-es-cluster", 
                    "uid": "6aafc4b3-4cb6-11e7-9445-0ecf874efb82"
                }, 
                "spec": {
                    "clusterIP": "172.30.127.128", 
                    "ports": [
                        {
                            "port": 9300, 
                            "protocol": "TCP", 
                            "targetPort": 9300
                        }
                    ], 
                    "selector": {
                        "component": "es", 
                        "provider": "openshift"
                    }, 
                    "sessionAffinity": "None", 
                    "type": "ClusterIP"
                }, 
                "status": {
                    "loadBalancer": {}
                }
            }
        ], 
        "returncode": 0
    }, 
    "state": "present"
}

TASK [openshift_logging_elasticsearch : Set logging-es service] ****************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:182
changed: [openshift] => {
    "changed": true, 
    "results": {
        "clusterip": "172.30.153.244", 
        "cmd": "/bin/oc get service logging-es -o json -n logging", 
        "results": [
            {
                "apiVersion": "v1", 
                "kind": "Service", 
                "metadata": {
                    "creationTimestamp": "2017-06-09T01:53:22Z", 
                    "name": "logging-es", 
                    "namespace": "logging", 
                    "resourceVersion": "1347", 
                    "selfLink": "/api/v1/namespaces/logging/services/logging-es", 
                    "uid": "6b4aee24-4cb6-11e7-9445-0ecf874efb82"
                }, 
                "spec": {
                    "clusterIP": "172.30.153.244", 
                    "ports": [
                        {
                            "port": 9200, 
                            "protocol": "TCP", 
                            "targetPort": "restapi"
                        }
                    ], 
                    "selector": {
                        "component": "es", 
                        "provider": "openshift"
                    }, 
                    "sessionAffinity": "None", 
                    "type": "ClusterIP"
                }, 
                "status": {
                    "loadBalancer": {}
                }
            }
        ], 
        "returncode": 0
    }, 
    "state": "present"
}
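
Both services select pods labelled component=es, provider=openshift; a sketch for checking that they exist and, once the deployment below has rolled out, that they have endpoints:

    oc get svc logging-es logging-es-cluster -n logging
    oc get endpoints logging-es logging-es-cluster -n logging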

TASK [openshift_logging_elasticsearch : Creating ES storage template] **********
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:197
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_elasticsearch : Creating ES storage template] **********
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:210
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_elasticsearch : Set ES storage] ************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:225
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_elasticsearch : set_fact] ******************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:237
ok: [openshift] => {
    "ansible_facts": {
        "es_deploy_name": "logging-es-data-master-8nzz83ik"
    }, 
    "changed": false
}

TASK [openshift_logging_elasticsearch : set_fact] ******************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:241
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_elasticsearch : Set ES dc templates] *******************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:246
changed: [openshift] => {
    "changed": true, 
    "checksum": "d1cc675be7b77470e353203a4e3f7931f02541fd", 
    "dest": "/tmp/openshift-logging-ansible-IzUGtO/templates/logging-es-dc.yml", 
    "gid": 0, 
    "group": "root", 
    "md5sum": "5e8fc3631f5fbdeda15d39a4b6b02315", 
    "mode": "0644", 
    "owner": "root", 
    "secontext": "unconfined_u:object_r:admin_home_t:s0", 
    "size": 3139, 
    "src": "/root/.ansible/tmp/ansible-tmp-1496973203.07-19242550493512/source", 
    "state": "file", 
    "uid": 0
}
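
The rendered DeploymentConfig template is staged in the scratch directory before being pushed to the cluster; on clients that support client-side --dry-run, a sketch for validating it without creating anything would be:

    oc create -f /tmp/openshift-logging-ansible-IzUGtO/templates/logging-es-dc.yml -n logging --dry-run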

TASK [openshift_logging_elasticsearch : Set ES dc] *****************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:262
changed: [openshift] => {
    "changed": true, 
    "results": {
        "cmd": "/bin/oc get dc logging-es-data-master-8nzz83ik -o json -n logging", 
        "results": [
            {
                "apiVersion": "v1", 
                "kind": "DeploymentConfig", 
                "metadata": {
                    "creationTimestamp": "2017-06-09T01:53:23Z", 
                    "generation": 2, 
                    "labels": {
                        "component": "es", 
                        "deployment": "logging-es-data-master-8nzz83ik", 
                        "logging-infra": "elasticsearch", 
                        "provider": "openshift"
                    }, 
                    "name": "logging-es-data-master-8nzz83ik", 
                    "namespace": "logging", 
                    "resourceVersion": "1361", 
                    "selfLink": "/oapi/v1/namespaces/logging/deploymentconfigs/logging-es-data-master-8nzz83ik", 
                    "uid": "6c02a3f6-4cb6-11e7-9445-0ecf874efb82"
                }, 
                "spec": {
                    "replicas": 1, 
                    "selector": {
                        "component": "es", 
                        "deployment": "logging-es-data-master-8nzz83ik", 
                        "logging-infra": "elasticsearch", 
                        "provider": "openshift"
                    }, 
                    "strategy": {
                        "activeDeadlineSeconds": 21600, 
                        "recreateParams": {
                            "timeoutSeconds": 600
                        }, 
                        "resources": {}, 
                        "type": "Recreate"
                    }, 
                    "template": {
                        "metadata": {
                            "creationTimestamp": null, 
                            "labels": {
                                "component": "es", 
                                "deployment": "logging-es-data-master-8nzz83ik", 
                                "logging-infra": "elasticsearch", 
                                "provider": "openshift"
                            }, 
                            "name": "logging-es-data-master-8nzz83ik"
                        }, 
                        "spec": {
                            "containers": [
                                {
                                    "env": [
                                        {
                                            "name": "NAMESPACE", 
                                            "valueFrom": {
                                                "fieldRef": {
                                                    "apiVersion": "v1", 
                                                    "fieldPath": "metadata.namespace"
                                                }
                                            }
                                        }, 
                                        {
                                            "name": "KUBERNETES_TRUST_CERT", 
                                            "value": "true"
                                        }, 
                                        {
                                            "name": "SERVICE_DNS", 
                                            "value": "logging-es-cluster"
                                        }, 
                                        {
                                            "name": "CLUSTER_NAME", 
                                            "value": "logging-es"
                                        }, 
                                        {
                                            "name": "INSTANCE_RAM", 
                                            "value": "8Gi"
                                        }, 
                                        {
                                            "name": "NODE_QUORUM", 
                                            "value": "1"
                                        }, 
                                        {
                                            "name": "RECOVER_EXPECTED_NODES", 
                                            "value": "1"
                                        }, 
                                        {
                                            "name": "RECOVER_AFTER_TIME", 
                                            "value": "5m"
                                        }, 
                                        {
                                            "name": "READINESS_PROBE_TIMEOUT", 
                                            "value": "30"
                                        }, 
                                        {
                                            "name": "IS_MASTER", 
                                            "value": "true"
                                        }, 
                                        {
                                            "name": "HAS_DATA", 
                                            "value": "true"
                                        }
                                    ], 
                                    "image": "172.30.155.104:5000/logging/logging-elasticsearch:latest", 
                                    "imagePullPolicy": "Always", 
                                    "name": "elasticsearch", 
                                    "ports": [
                                        {
                                            "containerPort": 9200, 
                                            "name": "restapi", 
                                            "protocol": "TCP"
                                        }, 
                                        {
                                            "containerPort": 9300, 
                                            "name": "cluster", 
                                            "protocol": "TCP"
                                        }
                                    ], 
                                    "readinessProbe": {
                                        "exec": {
                                            "command": [
                                                "/usr/share/elasticsearch/probe/readiness.sh"
                                            ]
                                        }, 
                                        "failureThreshold": 3, 
                                        "initialDelaySeconds": 10, 
                                        "periodSeconds": 5, 
                                        "successThreshold": 1, 
                                        "timeoutSeconds": 30
                                    }, 
                                    "resources": {
                                        "limits": {
                                            "cpu": "1", 
                                            "memory": "8Gi"
                                        }, 
                                        "requests": {
                                            "memory": "512Mi"
                                        }
                                    }, 
                                    "terminationMessagePath": "/dev/termination-log", 
                                    "terminationMessagePolicy": "File", 
                                    "volumeMounts": [
                                        {
                                            "mountPath": "/etc/elasticsearch/secret", 
                                            "name": "elasticsearch", 
                                            "readOnly": true
                                        }, 
                                        {
                                            "mountPath": "/usr/share/java/elasticsearch/config", 
                                            "name": "elasticsearch-config", 
                                            "readOnly": true
                                        }, 
                                        {
                                            "mountPath": "/elasticsearch/persistent", 
                                            "name": "elasticsearch-storage"
                                        }
                                    ]
                                }
                            ], 
                            "dnsPolicy": "ClusterFirst", 
                            "restartPolicy": "Always", 
                            "schedulerName": "default-scheduler", 
                            "securityContext": {
                                "supplementalGroups": [
                                    65534
                                ]
                            }, 
                            "serviceAccount": "aggregated-logging-elasticsearch", 
                            "serviceAccountName": "aggregated-logging-elasticsearch", 
                            "terminationGracePeriodSeconds": 30, 
                            "volumes": [
                                {
                                    "name": "elasticsearch", 
                                    "secret": {
                                        "defaultMode": 420, 
                                        "secretName": "logging-elasticsearch"
                                    }
                                }, 
                                {
                                    "configMap": {
                                        "defaultMode": 420, 
                                        "name": "logging-elasticsearch"
                                    }, 
                                    "name": "elasticsearch-config"
                                }, 
                                {
                                    "emptyDir": {}, 
                                    "name": "elasticsearch-storage"
                                }
                            ]
                        }
                    }, 
                    "test": false, 
                    "triggers": [
                        {
                            "type": "ConfigChange"
                        }
                    ]
                }, 
                "status": {
                    "availableReplicas": 0, 
                    "conditions": [
                        {
                            "lastTransitionTime": "2017-06-09T01:53:23Z", 
                            "lastUpdateTime": "2017-06-09T01:53:23Z", 
                            "message": "Deployment config does not have minimum availability.", 
                            "status": "False", 
                            "type": "Available"
                        }, 
                        {
                            "lastTransitionTime": "2017-06-09T01:53:23Z", 
                            "lastUpdateTime": "2017-06-09T01:53:23Z", 
                            "message": "replication controller \"logging-es-data-master-8nzz83ik-1\" is waiting for pod \"logging-es-data-master-8nzz83ik-1-deploy\" to run", 
                            "status": "Unknown", 
                            "type": "Progressing"
                        }
                    ], 
                    "details": {
                        "causes": [
                            {
                                "type": "ConfigChange"
                            }
                        ], 
                        "message": "config change"
                    }, 
                    "latestVersion": 1, 
                    "observedGeneration": 2, 
                    "replicas": 0, 
                    "unavailableReplicas": 0, 
                    "updatedReplicas": 0
                }
            }
        ], 
        "returncode": 0
    }, 
    "state": "present"
}

TASK [openshift_logging_elasticsearch : Delete temp directory] *****************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:274
ok: [openshift] => {
    "changed": false, 
    "path": "/tmp/openshift-logging-ansible-IzUGtO", 
    "state": "absent"
}

TASK [openshift_logging : set_fact] ********************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:99

TASK [openshift_logging : set_fact] ********************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:105
ok: [openshift] => {
    "ansible_facts": {
        "es_ops_indices": "[]"
    }, 
    "changed": false
}

TASK [openshift_logging : include_role] ****************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:109

TASK [openshift_logging : include_role] ****************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:132
statically included: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/determine_version.yaml

TASK [openshift_logging_elasticsearch : Validate Elasticsearch cluster size] ***
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:2
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_elasticsearch : Validate Elasticsearch Ops cluster size] ***
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:6
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_elasticsearch : fail] **********************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:10
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_elasticsearch : set_fact] ******************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:14
ok: [openshift] => {
    "ansible_facts": {
        "elasticsearch_name": "logging-elasticsearch-ops", 
        "es_component": "es-ops"
    }, 
    "changed": false
}

TASK [openshift_logging_elasticsearch : fail] **********************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/determine_version.yaml:3
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_elasticsearch : set_fact] ******************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/determine_version.yaml:7
ok: [openshift] => {
    "ansible_facts": {
        "es_version": "3_5"
    }, 
    "changed": false
}

TASK [openshift_logging_elasticsearch : debug] *********************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/determine_version.yaml:11
ok: [openshift] => {
    "changed": false, 
    "openshift_logging_image_version": "latest"
}

TASK [openshift_logging_elasticsearch : set_fact] ******************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/determine_version.yaml:14
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_elasticsearch : fail] **********************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/determine_version.yaml:17
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_elasticsearch : Create temp directory for doing work in] ***
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:21
ok: [openshift] => {
    "changed": false, 
    "cmd": [
        "mktemp", 
        "-d", 
        "/tmp/openshift-logging-ansible-XXXXXX"
    ], 
    "delta": "0:00:00.001944", 
    "end": "2017-06-08 21:53:25.015013", 
    "rc": 0, 
    "start": "2017-06-08 21:53:25.013069"
}

STDOUT:

/tmp/openshift-logging-ansible-03bw2f

TASK [openshift_logging_elasticsearch : set_fact] ******************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:26
ok: [openshift] => {
    "ansible_facts": {
        "tempdir": "/tmp/openshift-logging-ansible-03bw2f"
    }, 
    "changed": false
}

TASK [openshift_logging_elasticsearch : Create templates subdirectory] *********
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:30
ok: [openshift] => {
    "changed": false, 
    "gid": 0, 
    "group": "root", 
    "mode": "0755", 
    "owner": "root", 
    "path": "/tmp/openshift-logging-ansible-03bw2f/templates", 
    "secontext": "unconfined_u:object_r:user_tmp_t:s0", 
    "size": 6, 
    "state": "directory", 
    "uid": 0
}

TASK [openshift_logging_elasticsearch : Create ES service account] *************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:40
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_elasticsearch : Create ES service account] *************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:48
ok: [openshift] => {
    "changed": false, 
    "results": {
        "cmd": "/bin/oc get sa aggregated-logging-elasticsearch -o json -n logging", 
        "results": [
            {
                "apiVersion": "v1", 
                "imagePullSecrets": [
                    {
                        "name": "aggregated-logging-elasticsearch-dockercfg-nfqlv"
                    }
                ], 
                "kind": "ServiceAccount", 
                "metadata": {
                    "creationTimestamp": "2017-06-09T01:53:15Z", 
                    "name": "aggregated-logging-elasticsearch", 
                    "namespace": "logging", 
                    "resourceVersion": "1333", 
                    "selfLink": "/api/v1/namespaces/logging/serviceaccounts/aggregated-logging-elasticsearch", 
                    "uid": "672b62a1-4cb6-11e7-9445-0ecf874efb82"
                }, 
                "secrets": [
                    {
                        "name": "aggregated-logging-elasticsearch-token-07c18"
                    }, 
                    {
                        "name": "aggregated-logging-elasticsearch-dockercfg-nfqlv"
                    }
                ]
            }
        ], 
        "returncode": 0
    }, 
    "state": "present"
}

TASK [openshift_logging_elasticsearch : copy] **********************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:57
changed: [openshift] => {
    "changed": true, 
    "checksum": "e5015364391ac609da8655a9a1224131599a5cea", 
    "dest": "/tmp/openshift-logging-ansible-03bw2f/rolebinding-reader.yml", 
    "gid": 0, 
    "group": "root", 
    "md5sum": "446fb96447527f48f97e69bb41bad7be", 
    "mode": "0644", 
    "owner": "root", 
    "secontext": "unconfined_u:object_r:admin_home_t:s0", 
    "size": 135, 
    "src": "/root/.ansible/tmp/ansible-tmp-1496973205.75-141314664558876/source", 
    "state": "file", 
    "uid": 0
}

TASK [openshift_logging_elasticsearch : Create rolebinding-reader role] ********
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:61
changed: [openshift] => {
    "changed": true, 
    "results": {
        "cmd": "/bin/oc get clusterrole rolebinding-reader -o json -n logging", 
        "results": [
            {
                "apiVersion": "v1", 
                "kind": "ClusterRole", 
                "metadata": {
                    "creationTimestamp": "2017-06-09T01:53:16Z", 
                    "name": "rolebinding-reader", 
                    "resourceVersion": "122", 
                    "selfLink": "/oapi/v1/clusterroles/rolebinding-reader", 
                    "uid": "67d1f5ca-4cb6-11e7-9445-0ecf874efb82"
                }, 
                "rules": [
                    {
                        "apiGroups": [
                            ""
                        ], 
                        "attributeRestrictions": null, 
                        "resources": [
                            "clusterrolebindings"
                        ], 
                        "verbs": [
                            "get"
                        ]
                    }
                ]
            }
        ], 
        "returncode": 0
    }, 
    "state": "present"
}

TASK [openshift_logging_elasticsearch : Set rolebinding-reader permissions for ES] ***
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:72
ok: [openshift] => {
    "changed": false, 
    "present": "present"
}
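
For reference (these commands are not part of the captured run): a roughly equivalent manual check and grant of the rolebinding-reader role reported as "present" above, assuming the same admin kubeconfig the playbook uses, would be:

    # Illustrative only: confirm the ES service account holds the rolebinding-reader cluster role
    oc get clusterrolebinding -o wide | grep rolebinding-reader
    # Grant it by hand if it were missing (roughly what this task ensures)
    oc adm policy add-cluster-role-to-user rolebinding-reader \
        system:serviceaccount:logging:aggregated-logging-elasticsearch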

TASK [openshift_logging_elasticsearch : Generate logging-elasticsearch-view-role] ***
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:81
ok: [openshift] => {
    "changed": false, 
    "checksum": "d752c09323565f80ed14fa806d42284f0c5aef2a", 
    "dest": "/tmp/openshift-logging-ansible-03bw2f/logging-elasticsearch-view-role.yaml", 
    "gid": 0, 
    "group": "root", 
    "md5sum": "8299dca2fb036c06ba7c4f620680e0f6", 
    "mode": "0644", 
    "owner": "root", 
    "secontext": "unconfined_u:object_r:admin_home_t:s0", 
    "size": 183, 
    "src": "/root/.ansible/tmp/ansible-tmp-1496973207.34-221338947752752/source", 
    "state": "file", 
    "uid": 0
}

TASK [openshift_logging_elasticsearch : Set logging-elasticsearch-view-role role] ***
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:94
changed: [openshift] => {
    "changed": true, 
    "results": {
        "cmd": "/bin/oc get rolebinding logging-elasticsearch-view-role -o json -n logging", 
        "results": [
            {
                "apiVersion": "v1", 
                "groupNames": null, 
                "kind": "RoleBinding", 
                "metadata": {
                    "creationTimestamp": "2017-06-09T01:53:18Z", 
                    "name": "logging-elasticsearch-view-role", 
                    "namespace": "logging", 
                    "resourceVersion": "1338", 
                    "selfLink": "/oapi/v1/namespaces/logging/rolebindings/logging-elasticsearch-view-role", 
                    "uid": "68d5922e-4cb6-11e7-9445-0ecf874efb82"
                }, 
                "roleRef": {
                    "name": "view"
                }, 
                "subjects": [
                    {
                        "kind": "ServiceAccount", 
                        "name": "aggregated-logging-elasticsearch", 
                        "namespace": "logging"
                    }
                ], 
                "userNames": [
                    "system:serviceaccount:logging:aggregated-logging-elasticsearch"
                ]
            }
        ], 
        "returncode": 0
    }, 
    "state": "present"
}

TASK [openshift_logging_elasticsearch : template] ******************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:105
ok: [openshift] => {
    "changed": false, 
    "checksum": "f91458d5dad42c496e2081ef872777a6f6eb9ff9", 
    "dest": "/tmp/openshift-logging-ansible-03bw2f/elasticsearch-logging.yml", 
    "gid": 0, 
    "group": "root", 
    "md5sum": "e4be7c33c1927bbdd8c909bfbe3d9f0b", 
    "mode": "0644", 
    "owner": "root", 
    "secontext": "unconfined_u:object_r:admin_home_t:s0", 
    "size": 2171, 
    "src": "/root/.ansible/tmp/ansible-tmp-1496973208.6-77492887579670/source", 
    "state": "file", 
    "uid": 0
}

TASK [openshift_logging_elasticsearch : template] ******************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:111
ok: [openshift] => {
    "changed": false, 
    "checksum": "6d4f976f6e77a6e0c8dca7e01fb5bedb68678b1d", 
    "dest": "/tmp/openshift-logging-ansible-03bw2f/elasticsearch.yml", 
    "gid": 0, 
    "group": "root", 
    "md5sum": "75abfd3a190832e593a8e5e7c5695e8e", 
    "mode": "0644", 
    "owner": "root", 
    "secontext": "unconfined_u:object_r:admin_home_t:s0", 
    "size": 2454, 
    "src": "/root/.ansible/tmp/ansible-tmp-1496973208.88-13517142628380/source", 
    "state": "file", 
    "uid": 0
}

TASK [openshift_logging_elasticsearch : copy] **********************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:121
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_elasticsearch : copy] **********************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:127
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_elasticsearch : Set ES configmap] **********************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:133
changed: [openshift] => {
    "changed": true, 
    "results": {
        "cmd": "/bin/oc get configmap logging-elasticsearch-ops -o json -n logging", 
        "results": [
            {
                "apiVersion": "v1", 
                "data": {
                    "elasticsearch.yml": "cluster:\n  name: ${CLUSTER_NAME}\n\nscript:\n  inline: on\n  indexed: on\n\nindex:\n  number_of_shards: 1\n  number_of_replicas: 0\n  unassigned.node_left.delayed_timeout: 2m\n  translog:\n    flush_threshold_size: 256mb\n    flush_threshold_period: 5m\n\nnode:\n  master: ${IS_MASTER}\n  data: ${HAS_DATA}\n\nnetwork:\n  host: 0.0.0.0\n\ncloud:\n  kubernetes:\n    service: ${SERVICE_DNS}\n    namespace: ${NAMESPACE}\n\ndiscovery:\n  type: kubernetes\n  zen.ping.multicast.enabled: false\n  zen.minimum_master_nodes: ${NODE_QUORUM}\n\ngateway:\n  recover_after_nodes: ${NODE_QUORUM}\n  expected_nodes: ${RECOVER_EXPECTED_NODES}\n  recover_after_time: ${RECOVER_AFTER_TIME}\n\nio.fabric8.elasticsearch.authentication.users: [\"system.logging.kibana\", \"system.logging.fluentd\", \"system.logging.curator\", \"system.admin\"]\nio.fabric8.elasticsearch.kibana.mapping.app: /usr/share/elasticsearch/index_patterns/com.redhat.viaq-openshift.index-pattern.json\nio.fabric8.elasticsearch.kibana.mapping.ops: /usr/share/elasticsearch/index_patterns/com.redhat.viaq-openshift.index-pattern.json\nio.fabric8.elasticsearch.kibana.mapping.empty: /usr/share/elasticsearch/index_patterns/com.redhat.viaq-openshift.index-pattern.json\n\nopenshift.config:\n  use_common_data_model: true\n  project_index_prefix: \"project\"\n  time_field_name: \"@timestamp\"\n\nopenshift.searchguard:\n  keystore.path: /etc/elasticsearch/secret/admin.jks\n  truststore.path: /etc/elasticsearch/secret/searchguard.truststore\n\nopenshift.operations.allow_cluster_reader: false\n\npath:\n  data: /elasticsearch/persistent/${CLUSTER_NAME}/data\n  logs: /elasticsearch/${CLUSTER_NAME}/logs\n  work: /elasticsearch/${CLUSTER_NAME}/work\n  scripts: /elasticsearch/${CLUSTER_NAME}/scripts\n\nsearchguard:\n  authcz.admin_dn:\n  - CN=system.admin,OU=OpenShift,O=Logging\n  config_index_name: \".searchguard.${HOSTNAME}\"\n  ssl:\n    transport:\n      enabled: true\n      enforce_hostname_verification: false\n      keystore_type: JKS\n      keystore_filepath: /etc/elasticsearch/secret/searchguard.key\n      keystore_password: kspass\n      truststore_type: JKS\n      truststore_filepath: /etc/elasticsearch/secret/searchguard.truststore\n      truststore_password: tspass\n    http:\n      enabled: true\n      keystore_type: JKS\n      keystore_filepath: /etc/elasticsearch/secret/key\n      keystore_password: kspass\n      clientauth_mode: OPTIONAL\n      truststore_type: JKS\n      truststore_filepath: /etc/elasticsearch/secret/truststore\n      truststore_password: tspass\n", 
                    "logging.yml": "# you can override this using by setting a system property, for example -Des.logger.level=DEBUG\nes.logger.level: INFO\nrootLogger: ${es.logger.level}, console, file\nlogger:\n  # log action execution errors for easier debugging\n  action: WARN\n  # reduce the logging for aws, too much is logged under the default INFO\n  com.amazonaws: WARN\n  io.fabric8.elasticsearch: ${PLUGIN_LOGLEVEL}\n  io.fabric8.kubernetes: ${PLUGIN_LOGLEVEL}\n\n  # gateway\n  #gateway: DEBUG\n  #index.gateway: DEBUG\n\n  # peer shard recovery\n  #indices.recovery: DEBUG\n\n  # discovery\n  #discovery: TRACE\n\n  index.search.slowlog: TRACE, index_search_slow_log_file\n  index.indexing.slowlog: TRACE, index_indexing_slow_log_file\n\n  # search-guard\n  com.floragunn.searchguard: WARN\n\nadditivity:\n  index.search.slowlog: false\n  index.indexing.slowlog: false\n\nappender:\n  console:\n    type: console\n    layout:\n      type: consolePattern\n      conversionPattern: \"[%d{ISO8601}][%-5p][%-25c] %m%n\"\n\n  file:\n    type: dailyRollingFile\n    file: ${path.logs}/${cluster.name}.log\n    datePattern: \"'.'yyyy-MM-dd\"\n    layout:\n      type: pattern\n      conversionPattern: \"[%d{ISO8601}][%-5p][%-25c] %m%n\"\n\n  # Use the following log4j-extras RollingFileAppender to enable gzip compression of log files.\n  # For more information see https://logging.apache.org/log4j/extras/apidocs/org/apache/log4j/rolling/RollingFileAppender.html\n  #file:\n    #type: extrasRollingFile\n    #file: ${path.logs}/${cluster.name}.log\n    #rollingPolicy: timeBased\n    #rollingPolicy.FileNamePattern: ${path.logs}/${cluster.name}.log.%d{yyyy-MM-dd}.gz\n    #layout:\n      #type: pattern\n      #conversionPattern: \"[%d{ISO8601}][%-5p][%-25c] %m%n\"\n\n  index_search_slow_log_file:\n    type: dailyRollingFile\n    file: ${path.logs}/${cluster.name}_index_search_slowlog.log\n    datePattern: \"'.'yyyy-MM-dd\"\n    layout:\n      type: pattern\n      conversionPattern: \"[%d{ISO8601}][%-5p][%-25c] %m%n\"\n\n  index_indexing_slow_log_file:\n    type: dailyRollingFile\n    file: ${path.logs}/${cluster.name}_index_indexing_slowlog.log\n    datePattern: \"'.'yyyy-MM-dd\"\n    layout:\n      type: pattern\n      conversionPattern: \"[%d{ISO8601}][%-5p][%-25c] %m%n\"\n"
                }, 
                "kind": "ConfigMap", 
                "metadata": {
                    "creationTimestamp": "2017-06-09T01:53:29Z", 
                    "name": "logging-elasticsearch-ops", 
                    "namespace": "logging", 
                    "resourceVersion": "1385", 
                    "selfLink": "/api/v1/namespaces/logging/configmaps/logging-elasticsearch-ops", 
                    "uid": "6f8ae837-4cb6-11e7-9445-0ecf874efb82"
                }
            }
        ], 
        "returncode": 0
    }, 
    "state": "present"
}
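
For reference (not executed in this run): the elasticsearch.yml and logging.yml embedded in the logging-elasticsearch-ops configmap above can be pulled back out for inspection, for example:

    # Illustrative only: dump the rendered elasticsearch.yml from the configmap
    oc get configmap logging-elasticsearch-ops -n logging \
        -o jsonpath='{.data.elasticsearch\.yml}'
    # Or write both files to a local directory
    oc extract configmap/logging-elasticsearch-ops -n logging --to=./es-ops-config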

TASK [openshift_logging_elasticsearch : Set ES secret] *************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:144
ok: [openshift] => {
    "changed": false, 
    "results": {
        "apiVersion": "v1", 
        "data": {
            "admin-ca": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMyakNDQWNLZ0F3SUJBZ0lCQVRBTkJna3Foa2lHOXcwQkFRc0ZBREFlTVJ3d0dnWURWUVFERXhOc2IyZG4KYVc1bkxYTnBaMjVsY2kxMFpYTjBNQjRYRFRFM01EWXdPVEF4TlRJME9Wb1hEVEl5TURZd09EQXhOVEkxTUZvdwpIakVjTUJvR0ExVUVBeE1UYkc5bloybHVaeTF6YVdkdVpYSXRkR1Z6ZERDQ0FTSXdEUVlKS29aSWh2Y05BUUVCCkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUt5OFlydXA4SXc1dDhFVit1bWFkQWkrYmNQSzd2d00xbUhqWC8wdndOU3oKWXNSRDNvWGRqeGl2b2dlbTBhbmM4aWJvRmRVZmQwcFJaUG1QRHZkaHgvVmtDMzBvSktLVWZFN3IrZ2ppT1VGVQp5YkVFbTBPQjNxTEZ1TVZEcHg0QjhSRzBlUTFscU1qdk5wM0hGWVgzKzJjbEkrOVp6ZHMycUNDbmJRb3VVc0xpCnNYZUlmTHVGYVcvWmlRdkVsMnlYbVgvKzR6ZUd0eHBDTXhROXFneDVUU3Y3cmpPclV4dzVhWDJucFNwcnlBeFEKS2xvcU9jSUdNNXdPbDNPbXRTcEJSbFU1N284NHlXaTdwUzF0ZlBDL09wMlVYTVlsN1RBS2dkRkFmU2w2YmtVYQpFcStaU1Q2Q1Nsd0VtVVB1R2JmUzhOeFF5dUFVQmdsN3hOdEJhRHR0clJNQ0F3RUFBYU1qTUNFd0RnWURWUjBQCkFRSC9CQVFEQWdLa01BOEdBMVVkRXdFQi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFHbFYKQTQwQUNzY3cwR1hYTEIwOVZmZWNoaU03bmxiTDViM0xvNU1abWlIaFpYcXBVY2FITUtxRFBmdktRc3k5UHRXYwpYRS9acis2b3RlMExiWlBBa0h2TThhak56OW1QMmpsMUNhODVBcmNMeWhaOTUxV0U1eW02Y3dYSWo4SjlWdEtmCk95SDdHNmlWSmR2VndLRjdrWC9ZNkFvMmEyVDRHa2xCODZKcVJGRFRHYUxhdVlKQ05rN1lFSU9BbjZEdXRiemwKRTl3ZXRmNWd6M2h2UE5vd25yb0p2aTdWL01tQjY5Zzk4YnV4c0NCS2hDSWZrZ0tudWRjUmlwZ3JHcVJNeHhKeQpnNi9nK21nOFNzcXN4emxURUN4TzhLOUlZekpOcnlXZ3F0MTY4RTFobURHNS93RUxMUENzUldEd2pTelBsOEpXCms0c2ZaRGNzRXB1RUFwQktxWEk9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K", 
            "admin-cert": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURQRENDQWlTZ0F3SUJBZ0lCQlRBTkJna3Foa2lHOXcwQkFRVUZBREFlTVJ3d0dnWURWUVFERXhOc2IyZG4KYVc1bkxYTnBaMjVsY2kxMFpYTjBNQjRYRFRFM01EWXdPVEF4TlRJMU5sb1hEVEU1TURZd09UQXhOVEkxTmxvdwpQVEVRTUE0R0ExVUVDZ3dIVEc5bloybHVaekVTTUJBR0ExVUVDd3dKVDNCbGJsTm9hV1owTVJVd0V3WURWUVFECkRBeHplWE4wWlcwdVlXUnRhVzR3Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLQW9JQkFRQzgKbmZSM25qT0dKV042NDFOZFpGT0gvMlNNWFh2bUFvejV0NS85ckUyZ0xJTmVuNE1HaDQxcWVtZGdNRDFyZXpudgpNVTFUYzVjbnYvVWIzSHRjbXU2dXQrSitVNjdXYlNCT09yZVNlYURPLzQ5YXFKZVFJK3J4a1BjMHJ5YVVVcEZUCjV3MUxpR3RxQXhlVDJibXNvSTJTVjk0VFVXSTJaOW4zRnM0T2FIS3lzd2FacG1CK29SMUNLZWw0MG9sZWZOdGgKeEZrSE1UYmZsSnNvV0x1K3FvSHdpRVJLVC9nT0RHdDU0TmYyU2lUaVdkT09kZkJhK1F6bDBEckVJMWRZNkN5SgppcktCUGpRQ2FDM2EvdEJ1U21TdWczd0l3elduUU1xR0tWd0FhUm1UUnlWdW15L3M2V0Q0SE5tWDdLR3V4YjVsCkRablZ3Y2xvS2dFWVZpbittZjNkQWdNQkFBR2paakJrTUE0R0ExVWREd0VCL3dRRUF3SUZvREFKQmdOVkhSTUUKQWpBQU1CMEdBMVVkSlFRV01CUUdDQ3NHQVFVRkJ3TUJCZ2dyQmdFRkJRY0RBakFkQmdOVkhRNEVGZ1FVQkxJRAo5a3FuVDMrZVExSVdGRi9qUXRHREwvd3dDUVlEVlIwakJBSXdBREFOQmdrcWhraUc5dzBCQVFVRkFBT0NBUUVBCkpzOXdiM0g5b2ZyMFJVZ2RzMkNtQXA2ZnRTKzVmNjBkS1dBVHc4bVhrZERuaUJzdkdpdXhEc2ZVQjF3QTRVWisKVW50b0t2ZWQ0WDlnNHFFaXUva2NpTUdwRGJlRVRyL0IzNFdINmJmaHZpOTRJVFlMT0t2WENSaXROY0tHdGRxcAoyMnhLMS9iZ25uc0lTYWYzQ1dqTk0vcnp0ZFVXZHppdGtDTVVGemZEZkVzQkhJcVZVb2pyUUxVYUNRT0lrS013CkpVVUc2VW5RbXdTKzQ4cnFQdXZyaDhyUnFyMTBhQmxBY29mTUZkSklwYVdSYk92emlmVGovNEF5aGQ2K0JpVEMKYko4dzk5aFFjQzEyUEFXN0c1Z3FTRm9wQTdzbWtabWlsczNteTNGZ20yY1I5R2NZVkwxcEo5YzdYczFid3JmKwpHNUxrTlBVTWJaN3I5aHk2WThDVTV3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=", 
            "admin-key": "LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktjd2dnU2pBZ0VBQW9JQkFRQzhuZlIzbmpPR0pXTjYKNDFOZFpGT0gvMlNNWFh2bUFvejV0NS85ckUyZ0xJTmVuNE1HaDQxcWVtZGdNRDFyZXpudk1VMVRjNWNudi9VYgozSHRjbXU2dXQrSitVNjdXYlNCT09yZVNlYURPLzQ5YXFKZVFJK3J4a1BjMHJ5YVVVcEZUNXcxTGlHdHFBeGVUCjJibXNvSTJTVjk0VFVXSTJaOW4zRnM0T2FIS3lzd2FacG1CK29SMUNLZWw0MG9sZWZOdGh4RmtITVRiZmxKc28KV0x1K3FvSHdpRVJLVC9nT0RHdDU0TmYyU2lUaVdkT09kZkJhK1F6bDBEckVJMWRZNkN5SmlyS0JQalFDYUMzYQovdEJ1U21TdWczd0l3elduUU1xR0tWd0FhUm1UUnlWdW15L3M2V0Q0SE5tWDdLR3V4YjVsRFpuVndjbG9LZ0VZClZpbittZjNkQWdNQkFBRUNnZ0VCQUs1MUJ1NHcxSVZLUmNZZlJ6ZEZtWUZidHR1aGgveko5U3p3SzdwTlNZdFMKUUx3Zm0raEpMb01DN21Ub21aYTFabk9YeldiWHJrS2s2UWc1R1owZzdJMk1KYUVrczcwL09EZERWaEhVRCtvRwpOTWpzMFNzUUhib0xsS3NWS2dEY2tmRGg3OGtpUi8vSkZtQzViR1NBS0JIbzFjNVdZeG5oV3BpUmJrdWpUaHQ3CnBwWllUVTlSY1EwdFBaMUpGSUdUTUdKQTdEQ0hRYmZsanlIRkJ2RFYrdTR3bGFiS2NDQk45ZTI2RnVRVXNJRisKRCtjdm83Y3VoYU9mT0t1VGtudkhKOUp5a2FnM0VFcXdTTi9KTitLSFRyd1VMMkFxZEdVeGZIVGkxYlliWFFMQgpjQlNXb1ZkazFwSXBmRi9Ya2R4WnlKbmhjSHhoZTJYWWhzbmx0alNjNXVFQ2dZRUE0RGdrU1F1aEFORHBBYWJ5CkJYVDZ5U0xHNnRYeFVCalEyQjJLWlZsMVpYODNkd3NnTXZqbnlzeTkzVmRUbXlFU0IxYVArUC9TbmtJUFYxWjYKdGlJSFBXSzlDalZUck1XOXVhbEFJQlllaDVNdnhQbGRmdDBLVmcrbGlxRzZ1MXhnazNUZk5mTTNkc2R3WGlBMAplNmdDalMrNlJwYmRwdUN3QjBqb0JlaXBpZ1VDZ1lFQTExbjVjUmJMcUN5T24xakxPaERwaFFnakkwcVdUK1Y1Cms0enVoaVRURlJGeUtuNk1Kamp0RzZRMFZHVENtU2J6MTZINXJTOGdSNWwxTXNOQ0xIcDI4eEtKNGVnWjRJZXAKOUFLamRGOE0yTjF2a0w3VjV1Y0tnVk5LdUt3dmgvR2FtRWp4UEd0aVAyK3B3WXZ5NlhQdU1ZU2N4UnZ3bE9vSgo2TmJxZWhQdzgva0NnWUFMblh6cnQ4bUFaRklkdnNzODB5R0d0K0Y4R3Rja1loUzNqVmcxQmR4YUJLd1g0NkNvClkxS0dvL0tWKzhjZCt5bVc3Ym9KbVI4TkNia0h1amdqSlVJZ3dQT3dDckVwK3hobi9NZVFvZlMwNjBBSFFTL0IKdWF1bVo2c1lzbVljL0owWUptN0Z1YksrMlhnTnVEZGZ6SVZOVVJLaVE0QjUrNXZDMU5rSUxWUlREUUtCZ0JpKwpROFRVbzYyOUFONGFLNitPUmVaOUd0eHhNM2dXbTdOeVcrMlp5WThBSkNmeHhsU1Y4ZGhkTTQ0R3piMGZGcUZPCkFRdi9BQ3g1MjFkcnkreWtYWXBzTk45NTlZOHd4enc4R1YvRGxBeE8xUVRDaUgweFNxbTFVajZKdWlSYWhESy8KSHNpY1pmdDM3djlIL3k4SG5QU2ZrZ3VydkNiQUJVZDFlaGR3dHh5eEFvR0FaSFdoYWV6OHFLTnFGSUoxNVJMMQpaanVQRTE5UDZXNFI5RUZydWtwVWVNOUJMcXc3SlYvWEZkMC84L3RqVHlCR0dyNkRlS2YrOGszaTlGRmNlSEhCCkVKMkRINTl3QllpQUJnK0FXaTRrYUhDVElNWmtxT2NmZlRKMW50Uk9PK1pjZEFVOWVKTW9Nb09QM1BZZ3VMREoKTVJiZGc1YVQxQUhMZVBvNUptbEwreXM9Ci0tLS0tRU5EIFBSSVZBVEUgS0VZLS0tLS0K", 
            "admin.jks": "/u3+7QAAAAIAAAACAAAAAQAMc3lzdGVtLmFkbWluAAABXIqOJdgAAAUDMIIE/zAOBgorBgEEASoCEQEBBQAEggTrVyvXd2yyNAByelosa6AejPcUe9Z1Y89dc9oLS06GK+DjUQIhNBtJocezSgwXThaBo9UrEX0pjpDfitLeWq5GU9xSkFfWELKLX0hAd33dU+/atdvkdLLFJnSPmr7tKMqmPbwtdnfKOfPukn05V5J2D0HCbwDXrJl8M7+l1X3OIR7dmV5NLobfcHkE1OgLxPGCdgKGTBJPpBmBxw8fPysxpxIlnKR0TG4yb+dmgV8drTND2PBH8OP+KsR6vI6YkKuIBpEyt3Dj590c7yGq3VQtfUkwglmTRBDU7vhUY+JdNmJg8eVum972njmajBIbGAg/hy84znoM8lzKcMJfX6la8cVm+FSSpazs75QoQRPPGNDoFiwaKq8fhM8uOYKkx4nFEyg4OC/q9YHS5sRhIzEtPa2to6WYZolEN2LaGaLXfNbIu6vwUSx7prCcyZHYoz3HvHZKSf9fd2/jNYgSoCVxgd0cRhyBh6l/4Foc9W4nP7pjmz8ZMmUm6wRzsIpadTZwWp136nSniuk9Y0x3TtqubOJnn30jLoqqe5OGco+dm3yrb5wsnsX8CmImrb7PsJRusnPgsknmeGsu+3NB9T7Ab9PSvR/jbojr3fxOjWuZm09GktsEIYMbbxNbCYCA4BCstyfe4xbVzR1xJtSQq84YJyph372Nie3Eut6ejSsqatjV+LvyklFVzdeKObSuugCDgFk4ZREpWbkkzkuL7GphT6Ju7j6q/YzV0JwtHVah/Mg/wqg6446OhcsW7VmaP8UZlNBNVg3V5+LR3JRMMvUvrrsrJh/m9rSdTI0zPEaz0YbVExgDVH2ivLZqycz5uZWu5X0C5GdA2v+guwj5l8D00GfVWKRNYM/Wcg1UG/FQTS8YTHBru9mzOCYoXZb7AlbnYpARDroXZ3/FTCPV7eWM4oghy7z1pDD6r203NRkkroh4hjcaNcb2HcH9YJvbT58g/93WDoqq5kpDPdiVv8cArUq/9XIbMWVtXS686TInSV/sKziYP22DifTmq36NKROg1z7+nMus7zBSW1NfTdQ9Zy77tnUDLVYZJu6s72n9eemWyjBjfhnsae8/JKr7sZOAJh1K9QLH+cDZ7cYhFPZ+d/Nm2ShfcHLr7Z8DFouWjWrDkSlQKnVrr4rPTXo3AEKnIbgOQpqgS8c3IMQ0Pvpj5AfuRW/0+Eqa/hmMIE6PB9cZeI3f/xjX28ZJg+0czJFqHjnIzTriue4YXvR3PkD5oz1TXmS+EooTB20VCxVw4RvT7EWOoEc61MCL03QNZvDVsbTvOLsN5xXHkYoQtASrUkQl0JrjWQZKXFxI3DI6oSHMI4QAX9ZmMAg73qyE9axAtv+Yww7C0r6j3saF4BW1ThV/BW/lh02CoHW7GRJb1YkJCIexKf7Dn2ISF1drIvdZRG1GvwA7KejSr37arGM+Qe+PMlemJ9j0n7IGSH7+Vmed7zkQULa+izK96AdG0wZVevwP4bmG/0Zc/pcIopn0lIOv06kxXwsbOa55PPxJvGD6pRqHPJm3xFFeLP5BV9EGtmzmkPKN7sLUUpBrLbw3dHyfvQ9lzVBecC5Ftg/FmRGA6AnYED9B9HStJUsDjqWJMNweUA3vUc8qwVxhDKAv90/IbHO2BKMF6WWQmjovgKm7rppf1652qmFrdsNlg/wxnq0EcyIQH+Gi+B8AAAACAAVYLjUwOQAAA0AwggM8MIICJKADAgECAgEGMA0GCSqGSIb3DQEBBQUAMB4xHDAaBgNVBAMTE2xvZ2dpbmctc2lnbmVyLXRlc3QwHhcNMTcwNjA5MDE1MzEwWhcNMTkwNjA5MDE1MzEwWjA9MRAwDgYDVQQKEwdMb2dnaW5nMRIwEAYDVQQLEwlPcGVuU2hpZnQxFTATBgNVBAMTDHN5c3RlbS5hZG1pbjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAJFkFPspkQwkEglgH/21BX1uTjUT0nDVPFJa0uShBAGxnGqEUGxNc0MhtfloK1PGJiNxPqTTKu/vjHGwlMxa4L3CzlqEZ4uWzp+eCVAZ0OiITgLX570BVMl1qBHQ/x5p6Tz0E5EVXe58lRI7ez0mMz58WvFPYymt0bSdBxtzPNloj5ETmzx57O6CYWVZg1v2ONKPEb5+Igb5C4P6ZG9p1vq5pH3b0hkJJEb07kPHnGfJudH8QAHWSZ9JARIisaUsKaGEcThCexaEboMCjVkjTgRZXgJjW/ioO+ZqPoQvVx8NxZQzMMwBXY4UeFXFD5XCUU3Cd6jUDluu7Dx79Inr+fMCAwEAAaNmMGQwDgYDVR0PAQH/BAQDAgWgMAkGA1UdEwQCMAAwHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsGAQUFBwMCMB0GA1UdDgQWBBRGTHsFfHpWcAF74/saWvHTXUC4xjAJBgNVHSMEAjAAMA0GCSqGSIb3DQEBBQUAA4IBAQBELLvq+Jlql6x8awgCKiPr8Rqeqsr9o1AGHg0PDTUxrCPApWMmdJsMH7VyyDnRn/8bkb9EfDuPRnyTai6AL40jELuSOhdFZlaCqjOU6pMUlYM/8rn0XDYUlGTLn8MrqdCb7JLogKrn6HIh1rKPd2eZMnOmdSdE34gIyt9Skexxw+90+Lubsunr4N9jNfbBKf5dd9zp/naNc9KQjrdpz4hPzSQY+5TrwwbdxlL4L7cS3xlgZiOvxp4TBkWKjraEVCg5Ajbae14ua/PW/LTGtwU6ajxewiWQWa5QDxrXP9e4JGwFGl5Oq3iOuIWjSakibrM8nFZ1Om1TqNS8pX8e+mj1AAVYLjUwOQAAAt4wggLaMIIBwqADAgECAgEBMA0GCSqGSIb3DQEBCwUAMB4xHDAaBgNVBAMTE2xvZ2dpbmctc2lnbmVyLXRlc3QwHhcNMTcwNjA5MDE1MjQ5WhcNMjIwNjA4MDE1MjUwWjAeMRwwGgYDVQQDExNsb2dnaW5nLXNpZ25lci10ZXN0MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEArLxiu6nwjDm3wRX66Zp0CL5tw8ru/AzWYeNf/S/A1LNixEPehd2PGK+iB6bRqdzyJugV1R93SlFk+Y8O92HH9WQLfSgkopR8Tuv6COI5QVTJsQSbQ4HeosW4xUOnHgHxEbR5DWWoyO82nccVhff7ZyUj71nN2zaoIKdtCi5SwuKxd4h8u4Vpb9mJC8SXbJeZf/7jN4a3GkIzFD2qDHlNK/uuM6tTHDlpfaelKmvIDFAqWio5wgYznA6Xc6a1KkFGVTnujzjJaLulLW188L86nZRcxiXtMAqB0UB9KXpuRRoSr5lJPoJKXASZQ+4Zt9Lw3FDK4BQGCXvE20FoO22tEwIDAQABoyMwITAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zANBgkqh
kiG9w0BAQsFAAOCAQEAaVUDjQAKxzDQZdcsHT1V95yGIzueVsvlvcujkxmaIeFleqlRxocwqoM9+8pCzL0+1ZxcT9mv7qi17Qttk8CQe8zxqM3P2Y/aOXUJrzkCtwvKFn3nVYTnKbpzBciPwn1W0p87IfsbqJUl29XAoXuRf9joCjZrZPgaSUHzompEUNMZotq5gkI2TtgQg4CfoO61vOUT3B61/mDPeG882jCeugm+LtX8yYHr2D3xu7GwIEqEIh+SAqe51xGKmCsapEzHEnKDr+D6aDxKyqzHOVMQLE7wr0hjMk2vJaCq3XrwTWGYMbn/AQss8KxFYPCNLM+XwlaTix9kNywSm4QCkEqpcgAAAAIABnNpZy1jYQAAAVyKjiVNAAVYLjUwOQAAAt4wggLaMIIBwqADAgECAgEBMA0GCSqGSIb3DQEBCwUAMB4xHDAaBgNVBAMTE2xvZ2dpbmctc2lnbmVyLXRlc3QwHhcNMTcwNjA5MDE1MjQ5WhcNMjIwNjA4MDE1MjUwWjAeMRwwGgYDVQQDExNsb2dnaW5nLXNpZ25lci10ZXN0MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEArLxiu6nwjDm3wRX66Zp0CL5tw8ru/AzWYeNf/S/A1LNixEPehd2PGK+iB6bRqdzyJugV1R93SlFk+Y8O92HH9WQLfSgkopR8Tuv6COI5QVTJsQSbQ4HeosW4xUOnHgHxEbR5DWWoyO82nccVhff7ZyUj71nN2zaoIKdtCi5SwuKxd4h8u4Vpb9mJC8SXbJeZf/7jN4a3GkIzFD2qDHlNK/uuM6tTHDlpfaelKmvIDFAqWio5wgYznA6Xc6a1KkFGVTnujzjJaLulLW188L86nZRcxiXtMAqB0UB9KXpuRRoSr5lJPoJKXASZQ+4Zt9Lw3FDK4BQGCXvE20FoO22tEwIDAQABoyMwITAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEAaVUDjQAKxzDQZdcsHT1V95yGIzueVsvlvcujkxmaIeFleqlRxocwqoM9+8pCzL0+1ZxcT9mv7qi17Qttk8CQe8zxqM3P2Y/aOXUJrzkCtwvKFn3nVYTnKbpzBciPwn1W0p87IfsbqJUl29XAoXuRf9joCjZrZPgaSUHzompEUNMZotq5gkI2TtgQg4CfoO61vOUT3B61/mDPeG882jCeugm+LtX8yYHr2D3xu7GwIEqEIh+SAqe51xGKmCsapEzHEnKDr+D6aDxKyqzHOVMQLE7wr0hjMk2vJaCq3XrwTWGYMbn/AQss8KxFYPCNLM+XwlaTix9kNywSm4QCkEqpcjF2KW8VNA1oit6iLeTa7JTptPOk", 
            "key": "/u3+7QAAAAIAAAACAAAAAQAKbG9nZ2luZy1lcwAAAVyKji9jAAAFADCCBPwwDgYKKwYBBAEqAhEBAQUABIIE6HTMe2K3Kw0YohoY1yYhQNuFAgthASv/JMkhbqeTmXm5PM14fFd0r6zT1jHWJjLqNsvK1VGqCFv0x3ciS34C/GZd4WZKDmPCgAPOkVTeLwilYAPTtIrpQuBvpyRAVtqR3eWWMrPsGCSW9Fab7gYAtkAO0dbJrPbgpF/NTwmOPpN89axgvnr3fjjg/JcOwreYcyozmQe7ytSxAxsnHo9D84MI0APeEow1DMzu5rpJsUk4m4M2XAoxp1CH2n61vSDqKMqwiDWowUpDnrer29M9chWCoT+IMXAK+zt2b17kIhU77mdqi4hSoCF4VFxR2ietrkwh/+ESVYxkZx3wZhzvOBws/VLjH05+Ku4fSMeClmZ4otkrUoP38qqahLUlHjyl1LXsb8vrQZbuCkzDxK3tYpAP6Ma+Ys1L5B9+ILop8LBaSRMzfRFp9BckIm3vIvp5M1jGrzrYfM4TYpNAift2DyKCc4BIBIlfXKRhx5BxgDgf5eSPyBAIUNbG/7H9F+F/kkEtlHzSXZKmUvGwbnli+fndlLWpN3m5SmqVt0l04QBF/920siRk/MWffXW4Aoh7eLFVojut4dDLqpz/F+2yfaPVWTIW31p7CHHDybGE/Ws+0k+EoPXE+HqfZuIwQQSjY2S3UR8yInM5gF/D4bRXct9OSvLOXKUey4VVyPlIrzO+DpUlOG9Xnzwbtc1JMjQD+Hac5grTy2BgPYskcpepLgdP19Eg9xKcBz9xOa6AWpIolUr4ELVZfN1wWKUdRGM+mX1xMgv7CC8HTi6/M5S+iCdaba6DvRMH0+DgZadeNleWmSH1Yg9wNpX0evVH68zwdMqvTgokkak+tPeDcZHSer1oaA40MX4HVqDRKXh+OmGuJUQXJVF0pZuBEVckQIXCZe0PqcwWyVt9KFog+Ci26Z5cOiWot/u0/63STJHkVdETT9hHh42PcGIR8Io6VBISmhuRClxOrUsi6RLdlt9YHSyqJFMbN0fhHQT0diOHEosCOSa2RFNkFEmL9fgCGhVPahHZIPw7SuzAJ/HMlHPEdofb5yubfgGE5jDwnDP7pcb5yrQJsD/ySGgWZiMttR4W9OVNSmD0bJLSn4HungCi6TCJPdVtIkg77EX3CmeMIz46m7GmAaIyId0bDb33TnM5bFJ0vugc/hTMaGaia8P1i4cnUfBhEu3SJ/zrAADZUvk8GIRrfGqpNircnBU2fas+/4xjnczrzj4pMZEAY2Ar9ilSkDcEck16GLxwCVAsGFmnBD46c8akni/JWxFLMJG2adFqm5Zoqn7ND3DfnDkTteBkEm8J9JlYR5I+tg6SbsxP9vTJM6n1SIkgC46sS1Daxu7tpF2DhrAyP6f+e6XvzSiBkxZ6zZQ/2yd1IXc/MoBx8STid6/eZgx3NTzsbB9sKceLhrXQmcxnfo1ergmju9oEjAvAcrHUQ6Dbiv3mlojq+J9ew79bawMhIpvRaMkY7i+NDPaVl3NCfGHvQK3zuX6LgaI7vR2cQDgFd3arwFY3U6AovhyWnmuCKY8+7WluBdJcFB2eYSAa12+yxynshls4XUGfMtewemjYXusWrs9JhOhMNQfF5qI6SWLNhOK40PWIKjW5qAEo2PTEHFXIO5jVvdGJ8Guor2zYMwdp9vCl1ALVm/Ch2oyKAF6WxoeVLnM10rLQSRQWAAAAAgAFWC41MDkAAARcMIIEWDCCA0CgAwIBAgIBCDANBgkqhkiG9w0BAQUFADAeMRwwGgYDVQQDExNsb2dnaW5nLXNpZ25lci10ZXN0MB4XDTE3MDYwOTAxNTMxMloXDTE5MDYwOTAxNTMxMlowOzEQMA4GA1UEChMHTG9nZ2luZzESMBAGA1UECxMJT3BlblNoaWZ0MRMwEQYDVQQDEwpsb2dnaW5nLWVzMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAirnITecObt4k2WmrRqkcdr/3/Cx2UYmloTpV9G2Eu0ZFvQuvpFBnyGB7j1NzbCWMg32l8uA4SwIHUO6t8KVizFv4LMY0ssys9hvJTvAkHTyptabEECwaGbQzVCmFdCEMXCZ7LM04kqHi3QBY+BNpotgJT55xLUCFXmucOjwBQtCNUHInDlV9qVNyKG7r+F+Bt7IDckWkDVPd9+zChcyJMdoa/x+qJIy5rZFBCi3mzRturxvRGV3D9s5SezVt1OfyQUW1P1vdIPq9xBaUx0wf2dctc9fEjq48qqVHFh7Vpq92t6EtIQyBfpWoU/9vjsleJWhdHbJ9rgLzLOWAJs2ZPQIDAQABo4IBgjCCAX4wDgYDVR0PAQH/BAQDAgWgMAkGA1UdEwQCMAAwHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsGAQUFBwMCMB0GA1UdDgQWBBTfd/Axc7XXMCaHQpxA32jPnos9+zAJBgNVHSMEAjAAMIIBFgYDVR0RBIIBDTCCAQmCCWxvY2FsaG9zdIcEfwAAAYIKbG9nZ2luZy1lc4IkbG9nZ2luZy1lcy5sb2dnaW5nLnN2Yy5jbHVzdGVyLmxvY2FsghJsb2dnaW5nLWVzLWNsdXN0ZXKCLGxvZ2dpbmctZXMtY2x1c3Rlci5sb2dnaW5nLnN2Yy5jbHVzdGVyLmxvY2Fsgg5sb2dnaW5nLWVzLW9wc4IobG9nZ2luZy1lcy1vcHMubG9nZ2luZy5zdmMuY2x1c3Rlci5sb2NhbIIWbG9nZ2luZy1lcy1vcHMtY2x1c3RlcoIwbG9nZ2luZy1lcy1vcHMtY2x1c3Rlci5sb2dnaW5nLnN2Yy5jbHVzdGVyLmxvY2FsMA0GCSqGSIb3DQEBBQUAA4IBAQAWlBM8FSE7R8OEn1V1MS1C4F3wqPTqK2XlBAKzhGhffMo0I9nVq2lhIzmR3MQco+gHzfbHyZhd4g0SlwH5uguH2YXkW1dZsvkIGtFakcS8bzaAOgnEuVqBroRLPkWpbfxammrDtRIqH7kTs/z34sHLJr1mFA1D7IdxOSLtyh+KAirXSr/BvXWNM+HKTNfQ1M4j8Cce3aveXBLV0oOtX4C8d+zqwYcESv9UcQnbA6StY6aDM9/HLNlqYi2po7WvRy7PXOSA4SIld7ykg4cl9hkj0ILYbuNW3J5aSfJl6guSWXnvd/VOZ6bNbGBfBrpaNXYub78M+mXvYNKx/IVqb4CBAAVYLjUwOQAAAt4wggLaMIIBwqADAgECAgEBMA0GCSqGSIb3DQEBCwUAMB4xHDAaBgNVBAMTE2xvZ2dpbmctc2lnbmVyLXRlc3QwHhcNMTcwNjA5MDE1MjQ5WhcNMjIwNjA4MDE1MjUwWjAeMRwwGgYDVQQDExNsb2dnaW5nLXNpZ25lci10ZXN0MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEArLxiu6nwjDm3wRX66Zp0CL5tw8ru/AzWYeNf/S/
A1LNixEPehd2PGK+iB6bRqdzyJugV1R93SlFk+Y8O92HH9WQLfSgkopR8Tuv6COI5QVTJsQSbQ4HeosW4xUOnHgHxEbR5DWWoyO82nccVhff7ZyUj71nN2zaoIKdtCi5SwuKxd4h8u4Vpb9mJC8SXbJeZf/7jN4a3GkIzFD2qDHlNK/uuM6tTHDlpfaelKmvIDFAqWio5wgYznA6Xc6a1KkFGVTnujzjJaLulLW188L86nZRcxiXtMAqB0UB9KXpuRRoSr5lJPoJKXASZQ+4Zt9Lw3FDK4BQGCXvE20FoO22tEwIDAQABoyMwITAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEAaVUDjQAKxzDQZdcsHT1V95yGIzueVsvlvcujkxmaIeFleqlRxocwqoM9+8pCzL0+1ZxcT9mv7qi17Qttk8CQe8zxqM3P2Y/aOXUJrzkCtwvKFn3nVYTnKbpzBciPwn1W0p87IfsbqJUl29XAoXuRf9joCjZrZPgaSUHzompEUNMZotq5gkI2TtgQg4CfoO61vOUT3B61/mDPeG882jCeugm+LtX8yYHr2D3xu7GwIEqEIh+SAqe51xGKmCsapEzHEnKDr+D6aDxKyqzHOVMQLE7wr0hjMk2vJaCq3XrwTWGYMbn/AQss8KxFYPCNLM+XwlaTix9kNywSm4QCkEqpcgAAAAIABnNpZy1jYQAAAVyKji7cAAVYLjUwOQAAAt4wggLaMIIBwqADAgECAgEBMA0GCSqGSIb3DQEBCwUAMB4xHDAaBgNVBAMTE2xvZ2dpbmctc2lnbmVyLXRlc3QwHhcNMTcwNjA5MDE1MjQ5WhcNMjIwNjA4MDE1MjUwWjAeMRwwGgYDVQQDExNsb2dnaW5nLXNpZ25lci10ZXN0MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEArLxiu6nwjDm3wRX66Zp0CL5tw8ru/AzWYeNf/S/A1LNixEPehd2PGK+iB6bRqdzyJugV1R93SlFk+Y8O92HH9WQLfSgkopR8Tuv6COI5QVTJsQSbQ4HeosW4xUOnHgHxEbR5DWWoyO82nccVhff7ZyUj71nN2zaoIKdtCi5SwuKxd4h8u4Vpb9mJC8SXbJeZf/7jN4a3GkIzFD2qDHlNK/uuM6tTHDlpfaelKmvIDFAqWio5wgYznA6Xc6a1KkFGVTnujzjJaLulLW188L86nZRcxiXtMAqB0UB9KXpuRRoSr5lJPoJKXASZQ+4Zt9Lw3FDK4BQGCXvE20FoO22tEwIDAQABoyMwITAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEAaVUDjQAKxzDQZdcsHT1V95yGIzueVsvlvcujkxmaIeFleqlRxocwqoM9+8pCzL0+1ZxcT9mv7qi17Qttk8CQe8zxqM3P2Y/aOXUJrzkCtwvKFn3nVYTnKbpzBciPwn1W0p87IfsbqJUl29XAoXuRf9joCjZrZPgaSUHzompEUNMZotq5gkI2TtgQg4CfoO61vOUT3B61/mDPeG882jCeugm+LtX8yYHr2D3xu7GwIEqEIh+SAqe51xGKmCsapEzHEnKDr+D6aDxKyqzHOVMQLE7wr0hjMk2vJaCq3XrwTWGYMbn/AQss8KxFYPCNLM+XwlaTix9kNywSm4QCkEqpcgDJQHp5EwfsL3kByNqm+UYV4UiC", 
            "searchguard.key": "/u3+7QAAAAIAAAACAAAAAQANZWxhc3RpY3NlYXJjaAAAAVyKjim4AAAFATCCBP0wDgYKKwYBBAEqAhEBAQUABIIE6TtJ86SIeEPGhas027/H2NgVNJ6U65m2y+EdMPCJTJF5xQI7FvHK2FvkEY5XS6+Kx5OFLi62y1oTLM8eJ4jwIWxZta0xGfTPdvRQ/nqaEnEfA1uHIGqmIjcOQGUAZdKge6IgVFN6v/jOQi3T3vZbmhKvUyJEMAyJV1oew5Gg6oMzvUnTbKf0e5239SY0e/j3yG4gNksSHk8yoino7eQFmtz4YYmmVc0KOphggyn/UV2ZoFUK1yRwZ/l9BFZeYI9QCrB1MPnvnf5DCG+DL/P5SpVYDXPuNKNq1isf+JgoWqGyLRAihdywo2UEzZvwXloskQepWqt1JbUCmmp3y1nytIvQMSL2GZ4QwFWp//xTrf4xWoX1pV8inJ+KUm69UJ/3aoEqqicoepEWm9bvTD/HqNaQsNMOHEtTjSSgJVqNRM0JD4MluNXkOjCznnkEwYJbTjvVmNx+YzN72P/2S+orJ8StAQfjnkHIlvKoQSAV2/uRbOM8KoTRMQWmPIYJgwE5ekCG+Kt9xSyA24I+LVh47mVIEqx9Tl2AdBH/LpgSBKF1oc1ZQaCFID9HDWwPJXN4EE4XqpO1hOOQcAHfbi7C9g0fUE5l648/vtMGVDuDOK3jYximXKDhidHO+0N3WTsmVOyZmONqdP9/lHfvlvg/ICUcePeeZG+zna7Twm6/9wasQviOTiJt5CjmrLe5LrhwkwQ9PUOsJVEVC4OULxAp0T18+2AsoMZf6ZmjD0xVwvMSjq71UG/FfGnBwL2qldVLfRKDK65Cl7OU3VjoP6JSvyMk0ImjX9XRYTlmK4c79Qy0meIzb+oKD0OtAu2a0S96GhijXJnVJrE6HPvw55361inwmHPTWbFJkN4PfCztcyfH9p2+emvqEdqkpZBhlsNgs4+qGclhiTZjebNZ6Gie2cd0lZxtxjSSbMbRL/MQvVo3aS9qIKixjQi3IqdSfssAtFnZPFmj2SbZnSHeCG0PJn8zwNDwCpjlp/QV8IeWTGiCV+a+jN/Zu0RZFVef2j5wFTo7q7roAxaAci2DwU42VVj7TsIfCHReeV0sJBGM/6RefcCXVPXqD34Ko9rxiH5/mzUGyWdLz8m4vLP+ql6YAW3d41Mt9V/Fv0+SYa5cBYJ6RLY84upkD7BL8lxbdapBkI13Uf2zIKDWE2t+/PSbejyA05YHqEKkyQFqv4UvqSwAAhzZ3xoIWaUJs1fKwuR5HzTNIRGrnVWmt08GIixSoE2OEzMNurbPtO9QWWR7ovU0/ldQXVz6shTADVNmybQO7gC8AO9OWc2HIDhIsDJTJhFrtGLCeqyUR5NlI9+D2R1a/FL5nCD7RIsg/MeXkhkrpQtxxHnb+Gb32LHYdyA7yu+KxCeOcS1lbSPJDb/y572qzw0Tb4/lo8JebEqae3zYj0kHTh2nhXG4ioMPw+MfOx5psz5YOVEHIDubA0kWcM8+8px76F6sVz/p2ak00Rxp75ZV6TzvfI4NxBxD6WRd3gzgPeMCpYj47sGP2eCNKdV3Ra9cYIsnqkm1o80mC5a4RF1LsJF8uAGDh0oaSIAxv+S9k66Kb7iKwIIQoUjPWHiUgchbbRqvek94Vu9i5oKnMGVMPxzEyeL1Rq8BFhHrwGmifuWBdpxgas6eSN90/yhhv/KWT952WMqpeCWwsKzdR0ogiqYaBvf0FAAAAAIABVguNTA5AAADgjCCA34wggJmoAMCAQICAQcwDQYJKoZIhvcNAQEFBQAwHjEcMBoGA1UEAxMTbG9nZ2luZy1zaWduZXItdGVzdDAeFw0xNzA2MDkwMTUzMTFaFw0xOTA2MDkwMTUzMTFaMD4xEDAOBgNVBAoTB0xvZ2dpbmcxEjAQBgNVBAsTCU9wZW5TaGlmdDEWMBQGA1UEAxMNZWxhc3RpY3NlYXJjaDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAJie2T2EVuA34FipIzmzjF9jQNz7J5t2s4ByERox44EG2kuBeTNmUsbQ4XVE4oWpUTY3wI0cCCJHIiVdsnTRq6iMxgmOfNlssWaifeONc8xWZISgu+VlPSDS1SOLdxKDTA4OGxJAtRnRvfYTTJFdcoPfVHE+6rY4rJ+4W9YZysrSnMhWLTESGpBtGtNBB1/3adOGRgHCDIzBP7t124QGb3Yp3zGHHwlr7RbJnT2YE97hjIDfDwSsZsypjViNDE+Nyydbhjc2oRDRfWR+evYlXS3euEXRYutK0wcv9CRwhljGpR9pYpamu+ZmjLd0zttfH8P0DqrODksrmeMU+lPbW68CAwEAAaOBpjCBozAOBgNVHQ8BAf8EBAMCBaAwCQYDVR0TBAIwADAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwHQYDVR0OBBYEFH2CsYfHfVv7E9PONRlkTfirVd9PMAkGA1UdIwQCMAAwPQYDVR0RBDYwNIIJbG9jYWxob3N0hwR/AAABggpsb2dnaW5nLWVzgg5sb2dnaW5nLWVzLW9wc4gFKgMEBQUwDQYJKoZIhvcNAQEFBQADggEBAD9xOOsseNZ2D1i5WgXSMFl5p5gXQ1FjbLHZqMDBoaKgJKVAgS3JnytKvavuZAXVKIGGhR5vG/pJ1Q6ZjR4CxQCgcwNQDM846FcaxCAX+clHLYx+eGIDgh606YOrBLcp2F5z1t4dv1vH0XMwRon/qYcIKGrmt7K9RJQgYp5VsqojtfkzdFkurrLNJvYNagztmVNKQu6n10Gr+vAq3v80CLzvkzrCUKSKMNFjfKBxXjIW+bhu1+2YvScxLOJmcwSpCyu2wwBK3TyG0hIA13WTqZnPwAHj0ZudlV8Lwi5rxwNumO4Rxd7+HmUClsQ46LnA6cSQQsIsPBYdgDAhuscuqzkABVguNTA5AAAC3jCCAtowggHCoAMCAQICAQEwDQYJKoZIhvcNAQELBQAwHjEcMBoGA1UEAxMTbG9nZ2luZy1zaWduZXItdGVzdDAeFw0xNzA2MDkwMTUyNDlaFw0yMjA2MDgwMTUyNTBaMB4xHDAaBgNVBAMTE2xvZ2dpbmctc2lnbmVyLXRlc3QwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCsvGK7qfCMObfBFfrpmnQIvm3Dyu78DNZh41/9L8DUs2LEQ96F3Y8Yr6IHptGp3PIm6BXVH3dKUWT5jw73Ycf1ZAt9KCSilHxO6/oI4jlBVMmxBJtDgd6ixbjFQ6ceAfERtHkNZajI7zadxxWF9/tnJSPvWc3bNqggp20KLlLC4rF3iHy7hWlv2YkLxJdsl5l//uM3hrcaQjMUPaoMeU0r+64zq1McOWl9p6Uqa8gMUCpaKjnCBjOcDpdzprUqQUZVOe6POMlou6UtbXzwvzqdlFzGJe0wCoHRQH0pem5FGhKvmUk+gkpcBJl
D7hm30vDcUMrgFAYJe8TbQWg7ba0TAgMBAAGjIzAhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3DQEBCwUAA4IBAQBpVQONAArHMNBl1ywdPVX3nIYjO55Wy+W9y6OTGZoh4WV6qVHGhzCqgz37ykLMvT7VnFxP2a/uqLXtC22TwJB7zPGozc/Zj9o5dQmvOQK3C8oWfedVhOcpunMFyI/CfVbSnzsh+xuolSXb1cChe5F/2OgKNmtk+BpJQfOiakRQ0xmi2rmCQjZO2BCDgJ+g7rW85RPcHrX+YM94bzzaMJ66Cb4u1fzJgevYPfG7sbAgSoQiH5ICp7nXEYqYKxqkTMcScoOv4PpoPErKrMc5UxAsTvCvSGMyTa8loKrdevBNYZgxuf8BCyzwrEVg8I0sz5fCVpOLH2Q3LBKbhAKQSqlyAAAAAgAGc2lnLWNhAAABXIqOKTAABVguNTA5AAAC3jCCAtowggHCoAMCAQICAQEwDQYJKoZIhvcNAQELBQAwHjEcMBoGA1UEAxMTbG9nZ2luZy1zaWduZXItdGVzdDAeFw0xNzA2MDkwMTUyNDlaFw0yMjA2MDgwMTUyNTBaMB4xHDAaBgNVBAMTE2xvZ2dpbmctc2lnbmVyLXRlc3QwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCsvGK7qfCMObfBFfrpmnQIvm3Dyu78DNZh41/9L8DUs2LEQ96F3Y8Yr6IHptGp3PIm6BXVH3dKUWT5jw73Ycf1ZAt9KCSilHxO6/oI4jlBVMmxBJtDgd6ixbjFQ6ceAfERtHkNZajI7zadxxWF9/tnJSPvWc3bNqggp20KLlLC4rF3iHy7hWlv2YkLxJdsl5l//uM3hrcaQjMUPaoMeU0r+64zq1McOWl9p6Uqa8gMUCpaKjnCBjOcDpdzprUqQUZVOe6POMlou6UtbXzwvzqdlFzGJe0wCoHRQH0pem5FGhKvmUk+gkpcBJlD7hm30vDcUMrgFAYJe8TbQWg7ba0TAgMBAAGjIzAhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3DQEBCwUAA4IBAQBpVQONAArHMNBl1ywdPVX3nIYjO55Wy+W9y6OTGZoh4WV6qVHGhzCqgz37ykLMvT7VnFxP2a/uqLXtC22TwJB7zPGozc/Zj9o5dQmvOQK3C8oWfedVhOcpunMFyI/CfVbSnzsh+xuolSXb1cChe5F/2OgKNmtk+BpJQfOiakRQ0xmi2rmCQjZO2BCDgJ+g7rW85RPcHrX+YM94bzzaMJ66Cb4u1fzJgevYPfG7sbAgSoQiH5ICp7nXEYqYKxqkTMcScoOv4PpoPErKrMc5UxAsTvCvSGMyTa8loKrdevBNYZgxuf8BCyzwrEVg8I0sz5fCVpOLH2Q3LBKbhAKQSqly4MQAsuCgndmgs30T0rZZraW3kV8=", 
            "searchguard.truststore": "/u3+7QAAAAIAAAABAAAAAgAGc2lnLWNhAAABXIqOL+QABVguNTA5AAAC3jCCAtowggHCoAMCAQICAQEwDQYJKoZIhvcNAQELBQAwHjEcMBoGA1UEAxMTbG9nZ2luZy1zaWduZXItdGVzdDAeFw0xNzA2MDkwMTUyNDlaFw0yMjA2MDgwMTUyNTBaMB4xHDAaBgNVBAMTE2xvZ2dpbmctc2lnbmVyLXRlc3QwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCsvGK7qfCMObfBFfrpmnQIvm3Dyu78DNZh41/9L8DUs2LEQ96F3Y8Yr6IHptGp3PIm6BXVH3dKUWT5jw73Ycf1ZAt9KCSilHxO6/oI4jlBVMmxBJtDgd6ixbjFQ6ceAfERtHkNZajI7zadxxWF9/tnJSPvWc3bNqggp20KLlLC4rF3iHy7hWlv2YkLxJdsl5l//uM3hrcaQjMUPaoMeU0r+64zq1McOWl9p6Uqa8gMUCpaKjnCBjOcDpdzprUqQUZVOe6POMlou6UtbXzwvzqdlFzGJe0wCoHRQH0pem5FGhKvmUk+gkpcBJlD7hm30vDcUMrgFAYJe8TbQWg7ba0TAgMBAAGjIzAhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3DQEBCwUAA4IBAQBpVQONAArHMNBl1ywdPVX3nIYjO55Wy+W9y6OTGZoh4WV6qVHGhzCqgz37ykLMvT7VnFxP2a/uqLXtC22TwJB7zPGozc/Zj9o5dQmvOQK3C8oWfedVhOcpunMFyI/CfVbSnzsh+xuolSXb1cChe5F/2OgKNmtk+BpJQfOiakRQ0xmi2rmCQjZO2BCDgJ+g7rW85RPcHrX+YM94bzzaMJ66Cb4u1fzJgevYPfG7sbAgSoQiH5ICp7nXEYqYKxqkTMcScoOv4PpoPErKrMc5UxAsTvCvSGMyTa8loKrdevBNYZgxuf8BCyzwrEVg8I0sz5fCVpOLH2Q3LBKbhAKQSqlyE6uGtaME+/3Xg2WBBvCkIgyXN4A=", 
            "truststore": "/u3+7QAAAAIAAAABAAAAAgAGc2lnLWNhAAABXIqOL+QABVguNTA5AAAC3jCCAtowggHCoAMCAQICAQEwDQYJKoZIhvcNAQELBQAwHjEcMBoGA1UEAxMTbG9nZ2luZy1zaWduZXItdGVzdDAeFw0xNzA2MDkwMTUyNDlaFw0yMjA2MDgwMTUyNTBaMB4xHDAaBgNVBAMTE2xvZ2dpbmctc2lnbmVyLXRlc3QwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCsvGK7qfCMObfBFfrpmnQIvm3Dyu78DNZh41/9L8DUs2LEQ96F3Y8Yr6IHptGp3PIm6BXVH3dKUWT5jw73Ycf1ZAt9KCSilHxO6/oI4jlBVMmxBJtDgd6ixbjFQ6ceAfERtHkNZajI7zadxxWF9/tnJSPvWc3bNqggp20KLlLC4rF3iHy7hWlv2YkLxJdsl5l//uM3hrcaQjMUPaoMeU0r+64zq1McOWl9p6Uqa8gMUCpaKjnCBjOcDpdzprUqQUZVOe6POMlou6UtbXzwvzqdlFzGJe0wCoHRQH0pem5FGhKvmUk+gkpcBJlD7hm30vDcUMrgFAYJe8TbQWg7ba0TAgMBAAGjIzAhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3DQEBCwUAA4IBAQBpVQONAArHMNBl1ywdPVX3nIYjO55Wy+W9y6OTGZoh4WV6qVHGhzCqgz37ykLMvT7VnFxP2a/uqLXtC22TwJB7zPGozc/Zj9o5dQmvOQK3C8oWfedVhOcpunMFyI/CfVbSnzsh+xuolSXb1cChe5F/2OgKNmtk+BpJQfOiakRQ0xmi2rmCQjZO2BCDgJ+g7rW85RPcHrX+YM94bzzaMJ66Cb4u1fzJgevYPfG7sbAgSoQiH5ICp7nXEYqYKxqkTMcScoOv4PpoPErKrMc5UxAsTvCvSGMyTa8loKrdevBNYZgxuf8BCyzwrEVg8I0sz5fCVpOLH2Q3LBKbhAKQSqlyE6uGtaME+/3Xg2WBBvCkIgyXN4A="
        }, 
        "kind": "Secret", 
        "metadata": {
            "creationTimestamp": null, 
            "name": "logging-elasticsearch"
        }, 
        "type": "Opaque"
    }, 
    "state": "present"
}
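
For reference (not part of the captured output): the base64 fields of the logging-elasticsearch secret above decode to the generated PEM/JKS material; a quick local sanity check on the CA, for example:

    # Illustrative only: decode the admin CA and print its subject and validity dates
    oc get secret logging-elasticsearch -n logging \
        -o jsonpath='{.data.admin-ca}' | base64 -d | openssl x509 -noout -subject -dates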

TASK [openshift_logging_elasticsearch : Set logging-es-ops-cluster service] ****
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:168
changed: [openshift] => {
    "changed": true, 
    "results": {
        "clusterip": "172.30.195.113", 
        "cmd": "/bin/oc get service logging-es-ops-cluster -o json -n logging", 
        "results": [
            {
                "apiVersion": "v1", 
                "kind": "Service", 
                "metadata": {
                    "creationTimestamp": "2017-06-09T01:53:31Z", 
                    "name": "logging-es-ops-cluster", 
                    "namespace": "logging", 
                    "resourceVersion": "1392", 
                    "selfLink": "/api/v1/namespaces/logging/services/logging-es-ops-cluster", 
                    "uid": "70b98796-4cb6-11e7-9445-0ecf874efb82"
                }, 
                "spec": {
                    "clusterIP": "172.30.195.113", 
                    "ports": [
                        {
                            "port": 9300, 
                            "protocol": "TCP", 
                            "targetPort": 9300
                        }
                    ], 
                    "selector": {
                        "component": "es-ops", 
                        "provider": "openshift"
                    }, 
                    "sessionAffinity": "None", 
                    "type": "ClusterIP"
                }, 
                "status": {
                    "loadBalancer": {}
                }
            }
        ], 
        "returncode": 0
    }, 
    "state": "present"
}

TASK [openshift_logging_elasticsearch : Set logging-es-ops service] ************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:182
changed: [openshift] => {
    "changed": true, 
    "results": {
        "clusterip": "172.30.186.99", 
        "cmd": "/bin/oc get service logging-es-ops -o json -n logging", 
        "results": [
            {
                "apiVersion": "v1", 
                "kind": "Service", 
                "metadata": {
                    "creationTimestamp": "2017-06-09T01:53:32Z", 
                    "name": "logging-es-ops", 
                    "namespace": "logging", 
                    "resourceVersion": "1396", 
                    "selfLink": "/api/v1/namespaces/logging/services/logging-es-ops", 
                    "uid": "717ee67b-4cb6-11e7-9445-0ecf874efb82"
                }, 
                "spec": {
                    "clusterIP": "172.30.186.99", 
                    "ports": [
                        {
                            "port": 9200, 
                            "protocol": "TCP", 
                            "targetPort": "restapi"
                        }
                    ], 
                    "selector": {
                        "component": "es-ops", 
                        "provider": "openshift"
                    }, 
                    "sessionAffinity": "None", 
                    "type": "ClusterIP"
                }, 
                "status": {
                    "loadBalancer": {}
                }
            }
        ], 
        "returncode": 0
    }, 
    "state": "present"
}
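
For reference (commands not run here): once the logging-es-ops-cluster and logging-es-ops services above exist, their selectors and any endpoints they have picked up can be checked with:

    # Illustrative only: list the ops ES services and their endpoints
    oc get svc logging-es-ops logging-es-ops-cluster -n logging
    oc get endpoints logging-es-ops logging-es-ops-cluster -n logging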

TASK [openshift_logging_elasticsearch : Creating ES storage template] **********
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:197
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_elasticsearch : Creating ES storage template] **********
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:210
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_elasticsearch : Set ES storage] ************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:225
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_elasticsearch : set_fact] ******************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:237
ok: [openshift] => {
    "ansible_facts": {
        "es_deploy_name": "logging-es-ops-data-master-xc2h70yx"
    }, 
    "changed": false
}

TASK [openshift_logging_elasticsearch : set_fact] ******************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:241
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_elasticsearch : Set ES dc templates] *******************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:246
changed: [openshift] => {
    "changed": true, 
    "checksum": "5f5ff5613349029d8c7483d8b65951db311f7156", 
    "dest": "/tmp/openshift-logging-ansible-03bw2f/templates/logging-es-dc.yml", 
    "gid": 0, 
    "group": "root", 
    "md5sum": "c0cd20a67139171a81d3fdb0a167291a", 
    "mode": "0644", 
    "owner": "root", 
    "secontext": "unconfined_u:object_r:admin_home_t:s0", 
    "size": 3179, 
    "src": "/root/.ansible/tmp/ansible-tmp-1496973213.56-161174887137990/source", 
    "state": "file", 
    "uid": 0
}

TASK [openshift_logging_elasticsearch : Set ES dc] *****************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:262
changed: [openshift] => {
    "changed": true, 
    "results": {
        "cmd": "/bin/oc get dc logging-es-ops-data-master-xc2h70yx -o json -n logging", 
        "results": [
            {
                "apiVersion": "v1", 
                "kind": "DeploymentConfig", 
                "metadata": {
                    "creationTimestamp": "2017-06-09T01:53:34Z", 
                    "generation": 2, 
                    "labels": {
                        "component": "es-ops", 
                        "deployment": "logging-es-ops-data-master-xc2h70yx", 
                        "logging-infra": "elasticsearch", 
                        "provider": "openshift"
                    }, 
                    "name": "logging-es-ops-data-master-xc2h70yx", 
                    "namespace": "logging", 
                    "resourceVersion": "1410", 
                    "selfLink": "/oapi/v1/namespaces/logging/deploymentconfigs/logging-es-ops-data-master-xc2h70yx", 
                    "uid": "7257b003-4cb6-11e7-9445-0ecf874efb82"
                }, 
                "spec": {
                    "replicas": 1, 
                    "selector": {
                        "component": "es-ops", 
                        "deployment": "logging-es-ops-data-master-xc2h70yx", 
                        "logging-infra": "elasticsearch", 
                        "provider": "openshift"
                    }, 
                    "strategy": {
                        "activeDeadlineSeconds": 21600, 
                        "recreateParams": {
                            "timeoutSeconds": 600
                        }, 
                        "resources": {}, 
                        "type": "Recreate"
                    }, 
                    "template": {
                        "metadata": {
                            "creationTimestamp": null, 
                            "labels": {
                                "component": "es-ops", 
                                "deployment": "logging-es-ops-data-master-xc2h70yx", 
                                "logging-infra": "elasticsearch", 
                                "provider": "openshift"
                            }, 
                            "name": "logging-es-ops-data-master-xc2h70yx"
                        }, 
                        "spec": {
                            "containers": [
                                {
                                    "env": [
                                        {
                                            "name": "NAMESPACE", 
                                            "valueFrom": {
                                                "fieldRef": {
                                                    "apiVersion": "v1", 
                                                    "fieldPath": "metadata.namespace"
                                                }
                                            }
                                        }, 
                                        {
                                            "name": "KUBERNETES_TRUST_CERT", 
                                            "value": "true"
                                        }, 
                                        {
                                            "name": "SERVICE_DNS", 
                                            "value": "logging-es-ops-cluster"
                                        }, 
                                        {
                                            "name": "CLUSTER_NAME", 
                                            "value": "logging-es-ops"
                                        }, 
                                        {
                                            "name": "INSTANCE_RAM", 
                                            "value": "8Gi"
                                        }, 
                                        {
                                            "name": "NODE_QUORUM", 
                                            "value": "1"
                                        }, 
                                        {
                                            "name": "RECOVER_EXPECTED_NODES", 
                                            "value": "1"
                                        }, 
                                        {
                                            "name": "RECOVER_AFTER_TIME", 
                                            "value": "5m"
                                        }, 
                                        {
                                            "name": "READINESS_PROBE_TIMEOUT", 
                                            "value": "30"
                                        }, 
                                        {
                                            "name": "IS_MASTER", 
                                            "value": "true"
                                        }, 
                                        {
                                            "name": "HAS_DATA", 
                                            "value": "true"
                                        }
                                    ], 
                                    "image": "172.30.155.104:5000/logging/logging-elasticsearch:latest", 
                                    "imagePullPolicy": "Always", 
                                    "name": "elasticsearch", 
                                    "ports": [
                                        {
                                            "containerPort": 9200, 
                                            "name": "restapi", 
                                            "protocol": "TCP"
                                        }, 
                                        {
                                            "containerPort": 9300, 
                                            "name": "cluster", 
                                            "protocol": "TCP"
                                        }
                                    ], 
                                    "readinessProbe": {
                                        "exec": {
                                            "command": [
                                                "/usr/share/elasticsearch/probe/readiness.sh"
                                            ]
                                        }, 
                                        "failureThreshold": 3, 
                                        "initialDelaySeconds": 10, 
                                        "periodSeconds": 5, 
                                        "successThreshold": 1, 
                                        "timeoutSeconds": 30
                                    }, 
                                    "resources": {
                                        "limits": {
                                            "cpu": "1", 
                                            "memory": "8Gi"
                                        }, 
                                        "requests": {
                                            "memory": "512Mi"
                                        }
                                    }, 
                                    "terminationMessagePath": "/dev/termination-log", 
                                    "terminationMessagePolicy": "File", 
                                    "volumeMounts": [
                                        {
                                            "mountPath": "/etc/elasticsearch/secret", 
                                            "name": "elasticsearch", 
                                            "readOnly": true
                                        }, 
                                        {
                                            "mountPath": "/usr/share/java/elasticsearch/config", 
                                            "name": "elasticsearch-config", 
                                            "readOnly": true
                                        }, 
                                        {
                                            "mountPath": "/elasticsearch/persistent", 
                                            "name": "elasticsearch-storage"
                                        }
                                    ]
                                }
                            ], 
                            "dnsPolicy": "ClusterFirst", 
                            "restartPolicy": "Always", 
                            "schedulerName": "default-scheduler", 
                            "securityContext": {
                                "supplementalGroups": [
                                    65534
                                ]
                            }, 
                            "serviceAccount": "aggregated-logging-elasticsearch", 
                            "serviceAccountName": "aggregated-logging-elasticsearch", 
                            "terminationGracePeriodSeconds": 30, 
                            "volumes": [
                                {
                                    "name": "elasticsearch", 
                                    "secret": {
                                        "defaultMode": 420, 
                                        "secretName": "logging-elasticsearch"
                                    }
                                }, 
                                {
                                    "configMap": {
                                        "defaultMode": 420, 
                                        "name": "logging-elasticsearch"
                                    }, 
                                    "name": "elasticsearch-config"
                                }, 
                                {
                                    "emptyDir": {}, 
                                    "name": "elasticsearch-storage"
                                }
                            ]
                        }
                    }, 
                    "test": false, 
                    "triggers": [
                        {
                            "type": "ConfigChange"
                        }
                    ]
                }, 
                "status": {
                    "availableReplicas": 0, 
                    "conditions": [
                        {
                            "lastTransitionTime": "2017-06-09T01:53:34Z", 
                            "lastUpdateTime": "2017-06-09T01:53:34Z", 
                            "message": "Deployment config does not have minimum availability.", 
                            "status": "False", 
                            "type": "Available"
                        }, 
                        {
                            "lastTransitionTime": "2017-06-09T01:53:34Z", 
                            "lastUpdateTime": "2017-06-09T01:53:34Z", 
                            "message": "replication controller \"logging-es-ops-data-master-xc2h70yx-1\" is waiting for pod \"logging-es-ops-data-master-xc2h70yx-1-deploy\" to run", 
                            "status": "Unknown", 
                            "type": "Progressing"
                        }
                    ], 
                    "details": {
                        "causes": [
                            {
                                "type": "ConfigChange"
                            }
                        ], 
                        "message": "config change"
                    }, 
                    "latestVersion": 1, 
                    "observedGeneration": 2, 
                    "replicas": 0, 
                    "unavailableReplicas": 0, 
                    "updatedReplicas": 0
                }
            }
        ], 
        "returncode": 0
    }, 
    "state": "present"
}
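
The task above creates the es-ops DeploymentConfig (INSTANCE_RAM=8Gi, NODE_QUORUM=1, emptyDir storage) and reads it back with the `oc get dc ... -o json` call shown in the output. A minimal sketch for checking the resulting rollout by hand, using the namespace and generated name from the output above:

    # List the es-ops deployment config and the pods it produces.
    oc get dc logging-es-ops-data-master-xc2h70yx -n logging
    oc get pods -n logging -l component=es-ops
    # Inspect events and container status if the deployer pod stays pending.
    oc describe dc logging-es-ops-data-master-xc2h70yx -n logging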

TASK [openshift_logging_elasticsearch : Delete temp directory] *****************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:274
ok: [openshift] => {
    "changed": false, 
    "path": "/tmp/openshift-logging-ansible-03bw2f", 
    "state": "absent"
}

TASK [openshift_logging : include_role] ****************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:151
statically included: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/determine_version.yaml

TASK [openshift_logging_kibana : fail] *****************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/determine_version.yaml:3
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_kibana : set_fact] *************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/determine_version.yaml:7
ok: [openshift] => {
    "ansible_facts": {
        "kibana_version": "3_5"
    }, 
    "changed": false
}

TASK [openshift_logging_kibana : set_fact] *************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/determine_version.yaml:12
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_kibana : fail] *****************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/determine_version.yaml:15
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_kibana : Create temp directory for doing work in] ******
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:7
ok: [openshift] => {
    "changed": false, 
    "cmd": [
        "mktemp", 
        "-d", 
        "/tmp/openshift-logging-ansible-XXXXXX"
    ], 
    "delta": "0:00:00.002077", 
    "end": "2017-06-08 21:53:35.672079", 
    "rc": 0, 
    "start": "2017-06-08 21:53:35.670002"
}

STDOUT:

/tmp/openshift-logging-ansible-SFfLHV
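
The role stages its generated templates in a throw-away directory created with mktemp and removes it at the end of the run (see the later "Delete temp directory" task). The same pattern in plain shell, as an illustration only:

    # Create a private scratch directory with the prefix used by the role.
    WORKDIR=$(mktemp -d /tmp/openshift-logging-ansible-XXXXXX)
    mkdir -p "$WORKDIR/templates"
    # ... render templates into "$WORKDIR/templates", apply them with oc ...
    # Clean up when finished.
    rm -rf "$WORKDIR"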

TASK [openshift_logging_kibana : set_fact] *************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:12
ok: [openshift] => {
    "ansible_facts": {
        "tempdir": "/tmp/openshift-logging-ansible-SFfLHV"
    }, 
    "changed": false
}

TASK [openshift_logging_kibana : Create templates subdirectory] ****************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:16
ok: [openshift] => {
    "changed": false, 
    "gid": 0, 
    "group": "root", 
    "mode": "0755", 
    "owner": "root", 
    "path": "/tmp/openshift-logging-ansible-SFfLHV/templates", 
    "secontext": "unconfined_u:object_r:user_tmp_t:s0", 
    "size": 6, 
    "state": "directory", 
    "uid": 0
}

TASK [openshift_logging_kibana : Create Kibana service account] ****************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:26
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_kibana : Create Kibana service account] ****************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:34
changed: [openshift] => {
    "changed": true, 
    "results": {
        "cmd": "/bin/oc get sa aggregated-logging-kibana -o json -n logging", 
        "results": [
            {
                "apiVersion": "v1", 
                "imagePullSecrets": [
                    {
                        "name": "aggregated-logging-kibana-dockercfg-d935h"
                    }
                ], 
                "kind": "ServiceAccount", 
                "metadata": {
                    "creationTimestamp": "2017-06-09T01:53:36Z", 
                    "name": "aggregated-logging-kibana", 
                    "namespace": "logging", 
                    "resourceVersion": "1420", 
                    "selfLink": "/api/v1/namespaces/logging/serviceaccounts/aggregated-logging-kibana", 
                    "uid": "73a8514d-4cb6-11e7-9445-0ecf874efb82"
                }, 
                "secrets": [
                    {
                        "name": "aggregated-logging-kibana-token-jhgn1"
                    }, 
                    {
                        "name": "aggregated-logging-kibana-dockercfg-d935h"
                    }
                ]
            }
        ], 
        "returncode": 0
    }, 
    "state": "present"
}
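
The service account is applied and then read back with the `oc get sa ... -o json` call shown above; OpenShift automatically attaches a token secret and a dockercfg pull secret to it. To confirm this by hand in the same project:

    # Show the service account and the secrets generated for it.
    oc get sa aggregated-logging-kibana -n logging -o yaml
    # The token and dockercfg secrets referenced above live in the same namespace.
    oc get secrets -n logging | grep aggregated-logging-kibana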

TASK [openshift_logging_kibana : set_fact] *************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:42
ok: [openshift] => {
    "ansible_facts": {
        "kibana_component": "kibana", 
        "kibana_name": "logging-kibana"
    }, 
    "changed": false
}

TASK [openshift_logging_kibana : Checking for session_secret] ******************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:47
ok: [openshift] => {
    "changed": false, 
    "stat": {
        "exists": false
    }
}

TASK [openshift_logging_kibana : Checking for oauth_secret] ********************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:51
ok: [openshift] => {
    "changed": false, 
    "stat": {
        "exists": false
    }
}

TASK [openshift_logging_kibana : Generate session secret] **********************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:56
changed: [openshift] => {
    "changed": true, 
    "checksum": "cc13f16d20aca64a8a3ac3c1bfc4b6639e14f052", 
    "dest": "/etc/origin/logging/session_secret", 
    "gid": 0, 
    "group": "root", 
    "md5sum": "7ddd392546ca141554c1ed73d3915d43", 
    "mode": "0644", 
    "owner": "root", 
    "secontext": "system_u:object_r:etc_t:s0", 
    "size": 200, 
    "src": "/root/.ansible/tmp/ansible-tmp-1496973217.47-16380946938015/source", 
    "state": "file", 
    "uid": 0
}

TASK [openshift_logging_kibana : Generate oauth secret] ************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:64
changed: [openshift] => {
    "changed": true, 
    "checksum": "11b71104e237b50ed34e74a30f519f58b3ea5cb7", 
    "dest": "/etc/origin/logging/oauth_secret", 
    "gid": 0, 
    "group": "root", 
    "md5sum": "622d149e6a5977f476cb244e0c339c04", 
    "mode": "0644", 
    "owner": "root", 
    "secontext": "system_u:object_r:etc_t:s0", 
    "size": 64, 
    "src": "/root/.ansible/tmp/ansible-tmp-1496973217.78-230880744814874/source", 
    "state": "file", 
    "uid": 0
}
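
Both files are random material written under /etc/origin/logging: a 200-byte session secret for the auth proxy and a 64-byte OAuth client secret. The playbook generates these itself; a hand-rolled equivalent (illustrative only, not the role's exact method, and written to the current directory) could look like:

    # ~200 alphanumeric characters of session-cookie keying material for kibana-proxy.
    tr -dc 'a-zA-Z0-9' < /dev/urandom | head -c 200 > session_secret
    # 64-character shared secret for the kibana-proxy OAuth client.
    tr -dc 'a-zA-Z0-9' < /dev/urandom | head -c 64 > oauth_secret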

TASK [openshift_logging_kibana : Retrieving the cert to use when generating secrets for the logging components] ***
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:71
ok: [openshift] => (item={u'name': u'ca_file', u'file': u'ca.crt'}) => {
    "changed": false, 
    "content": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMyakNDQWNLZ0F3SUJBZ0lCQVRBTkJna3Foa2lHOXcwQkFRc0ZBREFlTVJ3d0dnWURWUVFERXhOc2IyZG4KYVc1bkxYTnBaMjVsY2kxMFpYTjBNQjRYRFRFM01EWXdPVEF4TlRJME9Wb1hEVEl5TURZd09EQXhOVEkxTUZvdwpIakVjTUJvR0ExVUVBeE1UYkc5bloybHVaeTF6YVdkdVpYSXRkR1Z6ZERDQ0FTSXdEUVlKS29aSWh2Y05BUUVCCkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUt5OFlydXA4SXc1dDhFVit1bWFkQWkrYmNQSzd2d00xbUhqWC8wdndOU3oKWXNSRDNvWGRqeGl2b2dlbTBhbmM4aWJvRmRVZmQwcFJaUG1QRHZkaHgvVmtDMzBvSktLVWZFN3IrZ2ppT1VGVQp5YkVFbTBPQjNxTEZ1TVZEcHg0QjhSRzBlUTFscU1qdk5wM0hGWVgzKzJjbEkrOVp6ZHMycUNDbmJRb3VVc0xpCnNYZUlmTHVGYVcvWmlRdkVsMnlYbVgvKzR6ZUd0eHBDTXhROXFneDVUU3Y3cmpPclV4dzVhWDJucFNwcnlBeFEKS2xvcU9jSUdNNXdPbDNPbXRTcEJSbFU1N284NHlXaTdwUzF0ZlBDL09wMlVYTVlsN1RBS2dkRkFmU2w2YmtVYQpFcStaU1Q2Q1Nsd0VtVVB1R2JmUzhOeFF5dUFVQmdsN3hOdEJhRHR0clJNQ0F3RUFBYU1qTUNFd0RnWURWUjBQCkFRSC9CQVFEQWdLa01BOEdBMVVkRXdFQi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFHbFYKQTQwQUNzY3cwR1hYTEIwOVZmZWNoaU03bmxiTDViM0xvNU1abWlIaFpYcXBVY2FITUtxRFBmdktRc3k5UHRXYwpYRS9acis2b3RlMExiWlBBa0h2TThhak56OW1QMmpsMUNhODVBcmNMeWhaOTUxV0U1eW02Y3dYSWo4SjlWdEtmCk95SDdHNmlWSmR2VndLRjdrWC9ZNkFvMmEyVDRHa2xCODZKcVJGRFRHYUxhdVlKQ05rN1lFSU9BbjZEdXRiemwKRTl3ZXRmNWd6M2h2UE5vd25yb0p2aTdWL01tQjY5Zzk4YnV4c0NCS2hDSWZrZ0tudWRjUmlwZ3JHcVJNeHhKeQpnNi9nK21nOFNzcXN4emxURUN4TzhLOUlZekpOcnlXZ3F0MTY4RTFobURHNS93RUxMUENzUldEd2pTelBsOEpXCms0c2ZaRGNzRXB1RUFwQktxWEk9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K", 
    "encoding": "base64", 
    "item": {
        "file": "ca.crt", 
        "name": "ca_file"
    }, 
    "source": "/etc/origin/logging/ca.crt"
}
ok: [openshift] => (item={u'name': u'kibana_internal_key', u'file': u'kibana-internal.key'}) => {
    "changed": false, 
    "content": "LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBdGNsSVZmcHpnNXZNdUVmVEFrT2daOU5HU1h2NXFvRGtSRWYzU2hnOWJxOXg4YXIrClQya1JYMEJjRHptTytOL1N3MDVpYzgzdGtKUStaN1hRdzBzUUdvVlZFWm9zN2hJaUhkWk9PS0FPUzkyRUhYSEcKRzVoSjQ5c2p4bnRJUDgzYmNySE1KSmZEZVNRaGdWZy9mcDhEM0llZjgyd0QxbGdKeGsySG5ZVlU4TFBVdlZUawo0Uk43ZkFRYTM2Wi9OUjNYWDRPUjFsa3RWQlI2dWJTSXkwekpqbEtNL2JKYlllWHpuZUtTM0VISWYxWmhGMWpaCmZiWUhKeWorNStwZGd3OFVuOUVzeFNUZ052amQrQXJnYTJ5ZEp4enFBUWw0R29aZEdxRXVya08vT0R2UWlkVDMKR1IxKzlLVnVSdG5SampHZjdSWEdXbUR2UlZpYnM2K1JxT1d0YlFJREFRQUJBb0lCQUJaakpvUm9KcWV6blQrbwpvTVRyblNxTUsyRExZdERydExEd0IvVlpETisvdlpHY2xGc2xQbDF6cUtLN1hPOHJhV0ppR2QvWElZV25yQlBMCm9WMGJ0bXo5dEo5SlZIVXhTSUJTTHluc0ZEYWxuaXFlSTE2c241VHZIUFhKb3Zrd21mRURFbmdETkxDTGtaREQKVkhaOGtOWXM0YmJ4dTNzL05sejBtVm45M0pzVDVWQ0xqY1hKZE93cjd0aHpmbVJDTjAxZGk1ei95L2dubWQwdgpRZEhsaS9xclc1d2hqMHo5U1VUTXpoaVo4bDBRZFhXWndhaHlvZ2U1OWFkUGlKZmd3RTRQRFVFYm4xNWlYTVQvCm1qdEZXcXZXb3dwcmhzcU5scXJrZGdZR3B6K0htS0JlTm9qbE5kN2N0cDdWQlBhOHYxZVRmVGNBeVQxVUljT3UKSVZGaHQrRUNnWUVBNHRsOEowS1lHcmczN1QzR2xPaTg3Qzk0VEZScmVTZHFOdFpsQnByREZmVkhRWUdydGRhNQo5a0RIMEZHUEdjVjNNVVZvYjZ3enRrZy9nekZtUDdQY0ZVR0JrODQ4ejI0c21EcnFNN2hKZHd0cGxGMnJUbjFxCm1tZGRhYmcybDRVcngrTm1PU2FaejUySE81RUQ5bVJMUzlnbXFmYVR5blR5MWk2T2hNMU9oQVVDZ1lFQXpTVmgKbUkraFlzNTl4UU0xUGpPZWIwNzVja2JiRGVoYmFaT0k0eDNQOWtOa3UyOXRwNWFDS1NIcVhGMFYzZ3k1NXFURAozMVEyTlVPbWUxTGg0UjVyekxBeVgvWVY4bllWcWdWdkcyN2NTcVJOWlNXTjVHYkRPaXNvRlBDVEI3aXVock5ZCjIvczJuY3cwUWtSTGFVMU9oV3p0THNWZUR0Ulp2MDhseHI1Z2FFa0NnWUVBbnFaUHAvMXc5eTdqSGk1WUZZaDMKcUE3QzZVOFpJdEFuL2xZT3JZSEs4aTVxT1N3QTlObEprU2xaRlI0VklJYnpoeWZ0bER3d3Brajg4am00TXRFTgpHR2lKd044NXREQnZTNy9ZVDNlUkdZcUh1bFdRR3dLbmJYamc0YkVOclFaYnloNEZQZTc3SHpJaWc4dzFvem9kClZ0dkNucGR1WU9kTmRmRjFodmMyOUNrQ2dZRUF2WmVHa3hCcS9uNElEa1BndVJQTG9PTkQ5akUxMGF5a2p2WWkKMUlPQTV2OXg0U2dpRjNncDR3bk5KbitBN2k2a3dGd1dDaGd4NFJnY2pHMFZCSkN3NEFNWEMwakxEOEhDVTllaAp6NkN0UnU2QitMQzBhaG51NDV0dTk2cyt0eXdmWDYzd3VaMTU1R3dOQUJGT0FJdkp2ZFhsZmd3NTJVcTNodThHCjRwNmZTc0VDZ1lBMFAxT3NsUkZDYWg1UFpEYlE0c2ViQ1djZUdVK3d6Ymg0eldBTksyQ2h4OXJqaGpteVEyeTQKMElEQkJSRitranhqcnI4UmlJb1hoeWRFZjRsd0hzaWh6Q3Q1KzFMblZmeVdMbEZCNkVWRXQ3Tk0rV2gzMFdreApxUEliZ3hGRnZNU0lETHRPNDJTR21SUXF1SkZmUTQxT05aN2MxNXpIQk5FTkovenNreVQzdWc9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=", 
    "encoding": "base64", 
    "item": {
        "file": "kibana-internal.key", 
        "name": "kibana_internal_key"
    }, 
    "source": "/etc/origin/logging/kibana-internal.key"
}
ok: [openshift] => (item={u'name': u'kibana_internal_cert', u'file': u'kibana-internal.crt'}) => {
    "changed": false, 
    "content": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURUakNDQWphZ0F3SUJBZ0lCQWpBTkJna3Foa2lHOXcwQkFRc0ZBREFlTVJ3d0dnWURWUVFERXhOc2IyZG4KYVc1bkxYTnBaMjVsY2kxMFpYTjBNQjRYRFRFM01EWXdPVEF4TlRJMU1Wb1hEVEU1TURZd09UQXhOVEkxTWxvdwpGakVVTUJJR0ExVUVBeE1MSUd0cFltRnVZUzF2Y0hNd2dnRWlNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0SUJEd0F3CmdnRUtBb0lCQVFDMXlVaFYrbk9EbTh5NFI5TUNRNkJuMDBaSmUvbXFnT1JFUi9kS0dEMXVyM0h4cXY1UGFSRmYKUUZ3UE9ZNzQzOUxEVG1KenplMlFsRDVudGRERFN4QWFoVlVSbWl6dUVpSWQxazQ0b0E1TDNZUWRjY1libUVuagoyeVBHZTBnL3pkdHlzY3drbDhONUpDR0JXRDkrbndQY2g1L3piQVBXV0FuR1RZZWRoVlR3czlTOVZPVGhFM3Q4CkJCcmZwbjgxSGRkZmc1SFdXUzFVRkhxNXRJakxUTW1PVW96OXNsdGg1Zk9kNHBMY1FjaC9WbUVYV05sOXRnY24KS1A3bjZsMkREeFNmMFN6RkpPQTIrTjM0Q3VCcmJKMG5IT29CQ1hnYWhsMGFvUzZ1UTc4NE85Q0oxUGNaSFg3MApwVzVHMmRHT01aL3RGY1phWU85RldKdXpyNUdvNWExdEFnTUJBQUdqZ1o0d2dac3dEZ1lEVlIwUEFRSC9CQVFECkFnV2dNQk1HQTFVZEpRUU1NQW9HQ0NzR0FRVUZCd01CTUF3R0ExVWRFd0VCL3dRQ01BQXdaZ1lEVlIwUkJGOHcKWFlJTElHdHBZbUZ1WVMxdmNIT0NMQ0JyYVdKaGJtRXRiM0J6TG5KdmRYUmxjaTVrWldaaGRXeDBMbk4yWXk1agpiSFZ6ZEdWeUxteHZZMkZzZ2hnZ2EybGlZVzVoTGpFeU55NHdMakF1TVM1NGFYQXVhVytDQm10cFltRnVZVEFOCkJna3Foa2lHOXcwQkFRc0ZBQU9DQVFFQXB4bEJhWmxRNGxZRm94Sm5lbXJQWEFkWWxXQkY1WXVJSDgrOUJoQ0MKcHRzNTJ3YnpFQkM2cXptVVA0Y0MzdkY1UjhuYTZiUE1VeHBzVzZ1aGV1Q1pkZ0g5Y1pYRjJYdHdBVnFJdlRjSwpoMGZNaWNlcVRSQXNta0VHNHV3STRMcjhWdUYxVFo4UUprcSs4bktkL1ZodUxVeUFMaHdnNkkwMWs4YTFlSUd6ClJic2hJeEZRRW0yNk85bytGRS9SNXkzVk1RZWhTaVkvL0RJbm5XZDN3OFJaQVlzMHFWaDJkcXhSSjBKOUdjbWIKazMxOCs0N0x4WmdzTVZjaVc1Mm5FbllXUU9qZ0VIdGFXbTFxM2N4RExUUnMvT1VkU0R3N1RwRnFXenhLOGE3VwpYRzRJUzdwNXc0N0EvUXRBN3llL1R4RDRlSEtrcFFZQXZISFhkUTY4cmVyNE13PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQotLS0tLUJFR0lOIENFUlRJRklDQVRFLS0tLS0KTUlJQzJqQ0NBY0tnQXdJQkFnSUJBVEFOQmdrcWhraUc5dzBCQVFzRkFEQWVNUnd3R2dZRFZRUURFeE5zYjJkbgphVzVuTFhOcFoyNWxjaTEwWlhOME1CNFhEVEUzTURZd09UQXhOVEkwT1ZvWERUSXlNRFl3T0RBeE5USTFNRm93CkhqRWNNQm9HQTFVRUF4TVRiRzluWjJsdVp5MXphV2R1WlhJdGRHVnpkRENDQVNJd0RRWUpLb1pJaHZjTkFRRUIKQlFBRGdnRVBBRENDQVFvQ2dnRUJBS3k4WXJ1cDhJdzV0OEVWK3VtYWRBaStiY1BLN3Z3TTFtSGpYLzB2d05TegpZc1JEM29YZGp4aXZvZ2VtMGFuYzhpYm9GZFVmZDBwUlpQbVBEdmRoeC9Wa0MzMG9KS0tVZkU3citnamlPVUZVCnliRUVtME9CM3FMRnVNVkRweDRCOFJHMGVRMWxxTWp2TnAzSEZZWDMrMmNsSSs5WnpkczJxQ0NuYlFvdVVzTGkKc1hlSWZMdUZhVy9aaVF2RWwyeVhtWC8rNHplR3R4cENNeFE5cWd4NVRTdjdyak9yVXh3NWFYMm5wU3ByeUF4UQpLbG9xT2NJR001d09sM09tdFNwQlJsVTU3bzg0eVdpN3BTMXRmUEMvT3AyVVhNWWw3VEFLZ2RGQWZTbDZia1VhCkVxK1pTVDZDU2x3RW1VUHVHYmZTOE54UXl1QVVCZ2w3eE50QmFEdHRyUk1DQXdFQUFhTWpNQ0V3RGdZRFZSMFAKQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCL3dRRk1BTUJBZjh3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUdsVgpBNDBBQ3NjdzBHWFhMQjA5VmZlY2hpTTdubGJMNWIzTG81TVptaUhoWlhxcFVjYUhNS3FEUGZ2S1FzeTlQdFdjClhFL1pyKzZvdGUwTGJaUEFrSHZNOGFqTno5bVAyamwxQ2E4NUFyY0x5aFo5NTFXRTV5bTZjd1hJajhKOVZ0S2YKT3lIN0c2aVZKZHZWd0tGN2tYL1k2QW8yYTJUNEdrbEI4NkpxUkZEVEdhTGF1WUpDTms3WUVJT0FuNkR1dGJ6bApFOXdldGY1Z3ozaHZQTm93bnJvSnZpN1YvTW1CNjlnOThidXhzQ0JLaENJZmtnS251ZGNSaXBnckdxUk14eEp5Cmc2L2crbWc4U3Nxc3h6bFRFQ3hPOEs5SVl6Sk5yeVdncXQxNjhFMWhtREc1L3dFTExQQ3NSV0R3alN6UGw4SlcKazRzZlpEY3NFcHVFQXBCS3FYST0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=", 
    "encoding": "base64", 
    "item": {
        "file": "kibana-internal.crt", 
        "name": "kibana_internal_cert"
    }, 
    "source": "/etc/origin/logging/kibana-internal.crt"
}
ok: [openshift] => (item={u'name': u'server_tls', u'file': u'server-tls.json'}) => {
    "changed": false, 
    "content": "Ly8gU2VlIGZvciBhdmFpbGFibGUgb3B0aW9uczogaHR0cHM6Ly9ub2RlanMub3JnL2FwaS90bHMuaHRtbCN0bHNfdGxzX2NyZWF0ZXNlcnZlcl9vcHRpb25zX3NlY3VyZWNvbm5lY3Rpb25saXN0ZW5lcgp0bHNfb3B0aW9ucyA9IHsKCWNpcGhlcnM6ICdrRUVDREg6K2tFRUNESCtTSEE6a0VESDora0VESCtTSEE6K2tFREgrQ0FNRUxMSUE6a0VDREg6K2tFQ0RIK1NIQTprUlNBOitrUlNBK1NIQTora1JTQStDQU1FTExJQTohYU5VTEw6IWVOVUxMOiFTU0x2MjohUkM0OiFERVM6IUVYUDohU0VFRDohSURFQTorM0RFUycsCglob25vckNpcGhlck9yZGVyOiB0cnVlCn0K", 
    "encoding": "base64", 
    "item": {
        "file": "server-tls.json", 
        "name": "server_tls"
    }, 
    "source": "/etc/origin/logging/server-tls.json"
}
ok: [openshift] => (item={u'name': u'session_secret', u'file': u'session_secret'}) => {
    "changed": false, 
    "content": "OEdTRjdYbWxxenpHb3ZKd0tRRVJUYzMxZjhMV3BESWtOdW5oTHVnczhZV3VSUExrdFlSV0lDa3RTWHlzYXFDaEJkc2FSY3pHU2NYUWxXUHpxbVFkYVBsRktvY2UxRGVGSXN2d3VCa1pzdHhMeXR6cXFIc1pwakx2ZTdJb2RhaWNjWUJ6QWtmV2lZNDB2Y0ozaFNSSmRya1BlUVN3a0ZudmRsV1F4QUo5TWIwT1Z3SUM2VDdvelNKbURKYVlMck02dTZjelpRNnM=", 
    "encoding": "base64", 
    "item": {
        "file": "session_secret", 
        "name": "session_secret"
    }, 
    "source": "/etc/origin/logging/session_secret"
}
ok: [openshift] => (item={u'name': u'oauth_secret', u'file': u'oauth_secret'}) => {
    "changed": false, 
    "content": "ZmZYVkNRVVdyb2xqVnQ5Sjk5U0hZS2N3dUpqakU3UXRRRWNDd29yeFRnSEVWWDBXWTg0Z0U2bEdza3d6TFVLZA==", 
    "encoding": "base64", 
    "item": {
        "file": "oauth_secret", 
        "name": "oauth_secret"
    }, 
    "source": "/etc/origin/logging/oauth_secret"
}
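
The task slurps each file and returns it base64-encoded, so the long "content" strings above are simply the files under /etc/origin/logging. To read one back, or to inspect the signer CA the deployment is built around:

    # Decode any slurped "content" value back to plain text (placeholder shown, paste the real string).
    echo '<content-string>' | base64 -d
    # Inspect the logging signer CA directly on disk: subject and validity window.
    openssl x509 -in /etc/origin/logging/ca.crt -noout -subject -dates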

TASK [openshift_logging_kibana : Set logging-kibana service] *******************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:84
changed: [openshift] => {
    "changed": true, 
    "results": {
        "clusterip": "172.30.175.25", 
        "cmd": "/bin/oc get service logging-kibana -o json -n logging", 
        "results": [
            {
                "apiVersion": "v1", 
                "kind": "Service", 
                "metadata": {
                    "creationTimestamp": "2017-06-09T01:53:39Z", 
                    "name": "logging-kibana", 
                    "namespace": "logging", 
                    "resourceVersion": "1440", 
                    "selfLink": "/api/v1/namespaces/logging/services/logging-kibana", 
                    "uid": "7586a7a7-4cb6-11e7-9445-0ecf874efb82"
                }, 
                "spec": {
                    "clusterIP": "172.30.175.25", 
                    "ports": [
                        {
                            "port": 443, 
                            "protocol": "TCP", 
                            "targetPort": "oaproxy"
                        }
                    ], 
                    "selector": {
                        "component": "kibana", 
                        "provider": "openshift"
                    }, 
                    "sessionAffinity": "None", 
                    "type": "ClusterIP"
                }, 
                "status": {
                    "loadBalancer": {}
                }
            }
        ], 
        "returncode": 0
    }, 
    "state": "present"
}
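
The logging-kibana service exposes port 443 and forwards to the container port named "oaproxy" (port 3000 on the kibana-proxy container defined later in the Kibana DC). To confirm the service and its endpoints once a pod is running:

    # Cluster-internal virtual IP and port mapping for the Kibana service.
    oc get svc logging-kibana -n logging
    # Endpoints appear only after a pod matching the selector becomes ready.
    oc get endpoints logging-kibana -n logging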

TASK [openshift_logging_kibana : set_fact] *************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:101
 [WARNING]: when statements should not include jinja2 templating delimiters
such as {{ }} or {% %}. Found: {{ openshift_logging_kibana_key | trim | length
> 0 }}
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_kibana : set_fact] *************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:106
 [WARNING]: when statements should not include jinja2 templating delimiters
such as {{ }} or {% %}. Found: {{ openshift_logging_kibana_cert | trim | length
> 0 }}
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_kibana : set_fact] *************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:111
 [WARNING]: when statements should not include jinja2 templating delimiters
such as {{ }} or {% %}. Found: {{ openshift_logging_kibana_ca | trim | length >
0 }}
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_kibana : set_fact] *************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:116
ok: [openshift] => {
    "ansible_facts": {
        "kibana_ca": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMyakNDQWNLZ0F3SUJBZ0lCQVRBTkJna3Foa2lHOXcwQkFRc0ZBREFlTVJ3d0dnWURWUVFERXhOc2IyZG4KYVc1bkxYTnBaMjVsY2kxMFpYTjBNQjRYRFRFM01EWXdPVEF4TlRJME9Wb1hEVEl5TURZd09EQXhOVEkxTUZvdwpIakVjTUJvR0ExVUVBeE1UYkc5bloybHVaeTF6YVdkdVpYSXRkR1Z6ZERDQ0FTSXdEUVlKS29aSWh2Y05BUUVCCkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUt5OFlydXA4SXc1dDhFVit1bWFkQWkrYmNQSzd2d00xbUhqWC8wdndOU3oKWXNSRDNvWGRqeGl2b2dlbTBhbmM4aWJvRmRVZmQwcFJaUG1QRHZkaHgvVmtDMzBvSktLVWZFN3IrZ2ppT1VGVQp5YkVFbTBPQjNxTEZ1TVZEcHg0QjhSRzBlUTFscU1qdk5wM0hGWVgzKzJjbEkrOVp6ZHMycUNDbmJRb3VVc0xpCnNYZUlmTHVGYVcvWmlRdkVsMnlYbVgvKzR6ZUd0eHBDTXhROXFneDVUU3Y3cmpPclV4dzVhWDJucFNwcnlBeFEKS2xvcU9jSUdNNXdPbDNPbXRTcEJSbFU1N284NHlXaTdwUzF0ZlBDL09wMlVYTVlsN1RBS2dkRkFmU2w2YmtVYQpFcStaU1Q2Q1Nsd0VtVVB1R2JmUzhOeFF5dUFVQmdsN3hOdEJhRHR0clJNQ0F3RUFBYU1qTUNFd0RnWURWUjBQCkFRSC9CQVFEQWdLa01BOEdBMVVkRXdFQi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFHbFYKQTQwQUNzY3cwR1hYTEIwOVZmZWNoaU03bmxiTDViM0xvNU1abWlIaFpYcXBVY2FITUtxRFBmdktRc3k5UHRXYwpYRS9acis2b3RlMExiWlBBa0h2TThhak56OW1QMmpsMUNhODVBcmNMeWhaOTUxV0U1eW02Y3dYSWo4SjlWdEtmCk95SDdHNmlWSmR2VndLRjdrWC9ZNkFvMmEyVDRHa2xCODZKcVJGRFRHYUxhdVlKQ05rN1lFSU9BbjZEdXRiemwKRTl3ZXRmNWd6M2h2UE5vd25yb0p2aTdWL01tQjY5Zzk4YnV4c0NCS2hDSWZrZ0tudWRjUmlwZ3JHcVJNeHhKeQpnNi9nK21nOFNzcXN4emxURUN4TzhLOUlZekpOcnlXZ3F0MTY4RTFobURHNS93RUxMUENzUldEd2pTelBsOEpXCms0c2ZaRGNzRXB1RUFwQktxWEk9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K"
    }, 
    "changed": false
}

TASK [openshift_logging_kibana : Generating Kibana route template] *************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:121
ok: [openshift] => {
    "changed": false, 
    "checksum": "7f619116f35b55bed76c71d873aec0eb0729d659", 
    "dest": "/tmp/openshift-logging-ansible-SFfLHV/templates/kibana-route.yaml", 
    "gid": 0, 
    "group": "root", 
    "md5sum": "ac75c4db690a16a8291702c2c7e1f5ac", 
    "mode": "0644", 
    "owner": "root", 
    "secontext": "unconfined_u:object_r:admin_home_t:s0", 
    "size": 2714, 
    "src": "/root/.ansible/tmp/ansible-tmp-1496973220.34-196454588117528/source", 
    "state": "file", 
    "uid": 0
}

TASK [openshift_logging_kibana : Setting Kibana route] *************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:141
changed: [openshift] => {
    "changed": true, 
    "results": {
        "cmd": "/bin/oc get route logging-kibana -o json -n logging", 
        "results": [
            {
                "apiVersion": "v1", 
                "kind": "Route", 
                "metadata": {
                    "creationTimestamp": "2017-06-09T01:53:41Z", 
                    "labels": {
                        "component": "support", 
                        "logging-infra": "support", 
                        "provider": "openshift"
                    }, 
                    "name": "logging-kibana", 
                    "namespace": "logging", 
                    "resourceVersion": "1446", 
                    "selfLink": "/oapi/v1/namespaces/logging/routes/logging-kibana", 
                    "uid": "767823c2-4cb6-11e7-9445-0ecf874efb82"
                }, 
                "spec": {
                    "host": "kibana.router.default.svc.cluster.local", 
                    "tls": {
                        "caCertificate": "-----BEGIN CERTIFICATE-----\nMIIC2jCCAcKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAeMRwwGgYDVQQDExNsb2dn\naW5nLXNpZ25lci10ZXN0MB4XDTE3MDYwOTAxNTI0OVoXDTIyMDYwODAxNTI1MFow\nHjEcMBoGA1UEAxMTbG9nZ2luZy1zaWduZXItdGVzdDCCASIwDQYJKoZIhvcNAQEB\nBQADggEPADCCAQoCggEBAKy8Yrup8Iw5t8EV+umadAi+bcPK7vwM1mHjX/0vwNSz\nYsRD3oXdjxivogem0anc8iboFdUfd0pRZPmPDvdhx/VkC30oJKKUfE7r+gjiOUFU\nybEEm0OB3qLFuMVDpx4B8RG0eQ1lqMjvNp3HFYX3+2clI+9Zzds2qCCnbQouUsLi\nsXeIfLuFaW/ZiQvEl2yXmX/+4zeGtxpCMxQ9qgx5TSv7rjOrUxw5aX2npSpryAxQ\nKloqOcIGM5wOl3OmtSpBRlU57o84yWi7pS1tfPC/Op2UXMYl7TAKgdFAfSl6bkUa\nEq+ZST6CSlwEmUPuGbfS8NxQyuAUBgl7xNtBaDttrRMCAwEAAaMjMCEwDgYDVR0P\nAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAGlV\nA40ACscw0GXXLB09VfechiM7nlbL5b3Lo5MZmiHhZXqpUcaHMKqDPfvKQsy9PtWc\nXE/Zr+6ote0LbZPAkHvM8ajNz9mP2jl1Ca85ArcLyhZ951WE5ym6cwXIj8J9VtKf\nOyH7G6iVJdvVwKF7kX/Y6Ao2a2T4GklB86JqRFDTGaLauYJCNk7YEIOAn6Dutbzl\nE9wetf5gz3hvPNownroJvi7V/MmB69g98buxsCBKhCIfkgKnudcRipgrGqRMxxJy\ng6/g+mg8SsqsxzlTECxO8K9IYzJNryWgqt168E1hmDG5/wELLPCsRWDwjSzPl8JW\nk4sfZDcsEpuEApBKqXI=\n-----END CERTIFICATE-----\n", 
                        "destinationCACertificate": "-----BEGIN CERTIFICATE-----\nMIIC2jCCAcKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAeMRwwGgYDVQQDExNsb2dn\naW5nLXNpZ25lci10ZXN0MB4XDTE3MDYwOTAxNTI0OVoXDTIyMDYwODAxNTI1MFow\nHjEcMBoGA1UEAxMTbG9nZ2luZy1zaWduZXItdGVzdDCCASIwDQYJKoZIhvcNAQEB\nBQADggEPADCCAQoCggEBAKy8Yrup8Iw5t8EV+umadAi+bcPK7vwM1mHjX/0vwNSz\nYsRD3oXdjxivogem0anc8iboFdUfd0pRZPmPDvdhx/VkC30oJKKUfE7r+gjiOUFU\nybEEm0OB3qLFuMVDpx4B8RG0eQ1lqMjvNp3HFYX3+2clI+9Zzds2qCCnbQouUsLi\nsXeIfLuFaW/ZiQvEl2yXmX/+4zeGtxpCMxQ9qgx5TSv7rjOrUxw5aX2npSpryAxQ\nKloqOcIGM5wOl3OmtSpBRlU57o84yWi7pS1tfPC/Op2UXMYl7TAKgdFAfSl6bkUa\nEq+ZST6CSlwEmUPuGbfS8NxQyuAUBgl7xNtBaDttrRMCAwEAAaMjMCEwDgYDVR0P\nAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAGlV\nA40ACscw0GXXLB09VfechiM7nlbL5b3Lo5MZmiHhZXqpUcaHMKqDPfvKQsy9PtWc\nXE/Zr+6ote0LbZPAkHvM8ajNz9mP2jl1Ca85ArcLyhZ951WE5ym6cwXIj8J9VtKf\nOyH7G6iVJdvVwKF7kX/Y6Ao2a2T4GklB86JqRFDTGaLauYJCNk7YEIOAn6Dutbzl\nE9wetf5gz3hvPNownroJvi7V/MmB69g98buxsCBKhCIfkgKnudcRipgrGqRMxxJy\ng6/g+mg8SsqsxzlTECxO8K9IYzJNryWgqt168E1hmDG5/wELLPCsRWDwjSzPl8JW\nk4sfZDcsEpuEApBKqXI=\n-----END CERTIFICATE-----\n", 
                        "insecureEdgeTerminationPolicy": "Redirect", 
                        "termination": "reencrypt"
                    }, 
                    "to": {
                        "kind": "Service", 
                        "name": "logging-kibana", 
                        "weight": 100
                    }, 
                    "wildcardPolicy": "None"
                }, 
                "status": {
                    "ingress": [
                        {
                            "conditions": [
                                {
                                    "lastTransitionTime": "2017-06-09T01:53:41Z", 
                                    "status": "True", 
                                    "type": "Admitted"
                                }
                            ], 
                            "host": "kibana.router.default.svc.cluster.local", 
                            "routerName": "router", 
                            "wildcardPolicy": "None"
                        }
                    ]
                }
            }
        ], 
        "returncode": 0
    }, 
    "state": "present"
}
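
The route terminates TLS with "reencrypt" (the router re-encrypts to the service, trusting the logging signer CA as destination CA) and redirects plain HTTP. The admitted host is kibana.router.default.svc.cluster.local, which resolves only where DNS or /etc/hosts points at the router; a quick check, with that caveat:

    # Show the route, its host and TLS termination.
    oc get route logging-kibana -n logging
    # From a host that can resolve the route hostname to the router, fetch headers and follow the redirect.
    curl -kIL https://kibana.router.default.svc.cluster.local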

TASK [openshift_logging_kibana : Get current oauthclient hostnames] ************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:151
ok: [openshift] => {
    "changed": false, 
    "results": {
        "cmd": "/bin/oc get oauthclient kibana-proxy -o json -n logging", 
        "results": [
            {}
        ], 
        "returncode": 0, 
        "stderr": "Error from server (NotFound): oauthclients.oauth.openshift.io \"kibana-proxy\" not found\n", 
        "stdout": ""
    }, 
    "state": "list"
}

TASK [openshift_logging_kibana : set_fact] *************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:159
ok: [openshift] => {
    "ansible_facts": {
        "proxy_hostnames": [
            "https://kibana.router.default.svc.cluster.local"
        ]
    }, 
    "changed": false
}

TASK [openshift_logging_kibana : Create oauth-client template] *****************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:162
changed: [openshift] => {
    "changed": true, 
    "checksum": "7714492593c8bf9f4c9ec0b643e44e74e2792ff7", 
    "dest": "/tmp/openshift-logging-ansible-SFfLHV/templates/oauth-client.yml", 
    "gid": 0, 
    "group": "root", 
    "md5sum": "b4dba044ca3db8e2891848ece313e974", 
    "mode": "0644", 
    "owner": "root", 
    "secontext": "unconfined_u:object_r:admin_home_t:s0", 
    "size": 328, 
    "src": "/root/.ansible/tmp/ansible-tmp-1496973222.31-46544618786303/source", 
    "state": "file", 
    "uid": 0
}

TASK [openshift_logging_kibana : Set kibana-proxy oauth-client] ****************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:170
changed: [openshift] => {
    "changed": true, 
    "results": {
        "cmd": "/bin/oc get oauthclient kibana-proxy -o json -n logging", 
        "results": [
            {
                "apiVersion": "v1", 
                "kind": "OAuthClient", 
                "metadata": {
                    "creationTimestamp": "2017-06-09T01:53:43Z", 
                    "labels": {
                        "logging-infra": "support"
                    }, 
                    "name": "kibana-proxy", 
                    "resourceVersion": "1451", 
                    "selfLink": "/oapi/v1/oauthclients/kibana-proxy", 
                    "uid": "77a45226-4cb6-11e7-9445-0ecf874efb82"
                }, 
                "redirectURIs": [
                    "https://kibana.router.default.svc.cluster.local"
                ], 
                "scopeRestrictions": [
                    {
                        "literals": [
                            "user:info", 
                            "user:check-access", 
                            "user:list-projects"
                        ]
                    }
                ], 
                "secret": "ffXVCQUWroljVt9J99SHYKcwuJjjE7QtQEcCworxTgHEVX0WY84gE6lGskwzLUKd"
            }
        ], 
        "returncode": 0
    }, 
    "state": "present"
}
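
The kibana-proxy OAuthClient's redirectURIs come from the route host gathered above, and its "secret" must match the contents of /etc/origin/logging/oauth_secret that ends up mounted into the proxy container. A sketch of cross-checking the two values:

    # Secret registered on the OAuth client (cluster side).
    oc get oauthclient kibana-proxy -o jsonpath='{.secret}'; echo
    # Secret the playbook generated on disk (packed into the logging-kibana-proxy secret below).
    cat /etc/origin/logging/oauth_secret; echo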

TASK [openshift_logging_kibana : Set Kibana secret] ****************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:181
changed: [openshift] => {
    "changed": true, 
    "results": {
        "cmd": "/bin/oc secrets new logging-kibana ca=/etc/origin/logging/ca.crt key=/etc/origin/logging/system.logging.kibana.key cert=/etc/origin/logging/system.logging.kibana.crt -n logging", 
        "results": "", 
        "returncode": 0
    }, 
    "state": "present"
}

TASK [openshift_logging_kibana : Set Kibana Proxy secret] **********************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:195
changed: [openshift] => {
    "changed": true, 
    "results": {
        "cmd": "/bin/oc secrets new logging-kibana-proxy oauth-secret=/tmp/oauth-secret-lUBR1q session-secret=/tmp/session-secret-GzWL3N server-key=/tmp/server-key-HKrSvu server-cert=/tmp/server-cert-XLQ2TQ server-tls.json=/tmp/server-tls.json-7LV045 -n logging", 
        "results": "", 
        "returncode": 0
    }, 
    "state": "present"
}
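
Both secrets are created with the old "oc secrets new" form used by this playbook (key=path pairs). On newer clients the same result comes from "oc create secret generic --from-file=..."; either way the stored keys can be checked without printing the key material. Sketch, not what this run executed:

    # Equivalent creation on clients where "oc secrets new" is no longer available.
    oc create secret generic logging-kibana \
        --from-file=ca=/etc/origin/logging/ca.crt \
        --from-file=key=/etc/origin/logging/system.logging.kibana.key \
        --from-file=cert=/etc/origin/logging/system.logging.kibana.crt \
        -n logging
    # List the keys stored in each secret.
    oc describe secret logging-kibana -n logging
    oc describe secret logging-kibana-proxy -n logging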

TASK [openshift_logging_kibana : Generate Kibana DC template] ******************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:221
changed: [openshift] => {
    "changed": true, 
    "checksum": "010b475e8c1489789296ce5fca1bb37a0e4aee67", 
    "dest": "/tmp/openshift-logging-ansible-SFfLHV/templates/kibana-dc.yaml", 
    "gid": 0, 
    "group": "root", 
    "md5sum": "f637148d2a570ba618895b9b36054501", 
    "mode": "0644", 
    "owner": "root", 
    "secontext": "unconfined_u:object_r:admin_home_t:s0", 
    "size": 3741, 
    "src": "/root/.ansible/tmp/ansible-tmp-1496973225.06-172931210377559/source", 
    "state": "file", 
    "uid": 0
}

TASK [openshift_logging_kibana : Set Kibana DC] ********************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:240
changed: [openshift] => {
    "changed": true, 
    "results": {
        "cmd": "/bin/oc get dc logging-kibana -o json -n logging", 
        "results": [
            {
                "apiVersion": "v1", 
                "kind": "DeploymentConfig", 
                "metadata": {
                    "creationTimestamp": "2017-06-09T01:53:46Z", 
                    "generation": 2, 
                    "labels": {
                        "component": "kibana", 
                        "logging-infra": "kibana", 
                        "provider": "openshift"
                    }, 
                    "name": "logging-kibana", 
                    "namespace": "logging", 
                    "resourceVersion": "1467", 
                    "selfLink": "/oapi/v1/namespaces/logging/deploymentconfigs/logging-kibana", 
                    "uid": "795ce2e9-4cb6-11e7-9445-0ecf874efb82"
                }, 
                "spec": {
                    "replicas": 1, 
                    "selector": {
                        "component": "kibana", 
                        "logging-infra": "kibana", 
                        "provider": "openshift"
                    }, 
                    "strategy": {
                        "activeDeadlineSeconds": 21600, 
                        "resources": {}, 
                        "rollingParams": {
                            "intervalSeconds": 1, 
                            "maxSurge": "25%", 
                            "maxUnavailable": "25%", 
                            "timeoutSeconds": 600, 
                            "updatePeriodSeconds": 1
                        }, 
                        "type": "Rolling"
                    }, 
                    "template": {
                        "metadata": {
                            "creationTimestamp": null, 
                            "labels": {
                                "component": "kibana", 
                                "logging-infra": "kibana", 
                                "provider": "openshift"
                            }, 
                            "name": "logging-kibana"
                        }, 
                        "spec": {
                            "containers": [
                                {
                                    "env": [
                                        {
                                            "name": "ES_HOST", 
                                            "value": "logging-es"
                                        }, 
                                        {
                                            "name": "ES_PORT", 
                                            "value": "9200"
                                        }, 
                                        {
                                            "name": "KIBANA_MEMORY_LIMIT", 
                                            "valueFrom": {
                                                "resourceFieldRef": {
                                                    "containerName": "kibana", 
                                                    "divisor": "0", 
                                                    "resource": "limits.memory"
                                                }
                                            }
                                        }
                                    ], 
                                    "image": "172.30.155.104:5000/logging/logging-kibana:latest", 
                                    "imagePullPolicy": "Always", 
                                    "name": "kibana", 
                                    "readinessProbe": {
                                        "exec": {
                                            "command": [
                                                "/usr/share/kibana/probe/readiness.sh"
                                            ]
                                        }, 
                                        "failureThreshold": 3, 
                                        "initialDelaySeconds": 5, 
                                        "periodSeconds": 5, 
                                        "successThreshold": 1, 
                                        "timeoutSeconds": 4
                                    }, 
                                    "resources": {
                                        "limits": {
                                            "memory": "736Mi"
                                        }
                                    }, 
                                    "terminationMessagePath": "/dev/termination-log", 
                                    "terminationMessagePolicy": "File", 
                                    "volumeMounts": [
                                        {
                                            "mountPath": "/etc/kibana/keys", 
                                            "name": "kibana", 
                                            "readOnly": true
                                        }
                                    ]
                                }, 
                                {
                                    "env": [
                                        {
                                            "name": "OAP_BACKEND_URL", 
                                            "value": "http://localhost:5601"
                                        }, 
                                        {
                                            "name": "OAP_AUTH_MODE", 
                                            "value": "oauth2"
                                        }, 
                                        {
                                            "name": "OAP_TRANSFORM", 
                                            "value": "user_header,token_header"
                                        }, 
                                        {
                                            "name": "OAP_OAUTH_ID", 
                                            "value": "kibana-proxy"
                                        }, 
                                        {
                                            "name": "OAP_MASTER_URL", 
                                            "value": "https://kubernetes.default.svc.cluster.local"
                                        }, 
                                        {
                                            "name": "OAP_PUBLIC_MASTER_URL", 
                                            "value": "https://172.18.11.188:8443"
                                        }, 
                                        {
                                            "name": "OAP_LOGOUT_REDIRECT", 
                                            "value": "https://172.18.11.188:8443/console/logout"
                                        }, 
                                        {
                                            "name": "OAP_MASTER_CA_FILE", 
                                            "value": "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
                                        }, 
                                        {
                                            "name": "OAP_DEBUG", 
                                            "value": "False"
                                        }, 
                                        {
                                            "name": "OAP_OAUTH_SECRET_FILE", 
                                            "value": "/secret/oauth-secret"
                                        }, 
                                        {
                                            "name": "OAP_SERVER_CERT_FILE", 
                                            "value": "/secret/server-cert"
                                        }, 
                                        {
                                            "name": "OAP_SERVER_KEY_FILE", 
                                            "value": "/secret/server-key"
                                        }, 
                                        {
                                            "name": "OAP_SERVER_TLS_FILE", 
                                            "value": "/secret/server-tls.json"
                                        }, 
                                        {
                                            "name": "OAP_SESSION_SECRET_FILE", 
                                            "value": "/secret/session-secret"
                                        }, 
                                        {
                                            "name": "OCP_AUTH_PROXY_MEMORY_LIMIT", 
                                            "valueFrom": {
                                                "resourceFieldRef": {
                                                    "containerName": "kibana-proxy", 
                                                    "divisor": "0", 
                                                    "resource": "limits.memory"
                                                }
                                            }
                                        }
                                    ], 
                                    "image": "172.30.155.104:5000/logging/logging-auth-proxy:latest", 
                                    "imagePullPolicy": "Always", 
                                    "name": "kibana-proxy", 
                                    "ports": [
                                        {
                                            "containerPort": 3000, 
                                            "name": "oaproxy", 
                                            "protocol": "TCP"
                                        }
                                    ], 
                                    "resources": {
                                        "limits": {
                                            "memory": "96Mi"
                                        }
                                    }, 
                                    "terminationMessagePath": "/dev/termination-log", 
                                    "terminationMessagePolicy": "File", 
                                    "volumeMounts": [
                                        {
                                            "mountPath": "/secret", 
                                            "name": "kibana-proxy", 
                                            "readOnly": true
                                        }
                                    ]
                                }
                            ], 
                            "dnsPolicy": "ClusterFirst", 
                            "restartPolicy": "Always", 
                            "schedulerName": "default-scheduler", 
                            "securityContext": {}, 
                            "serviceAccount": "aggregated-logging-kibana", 
                            "serviceAccountName": "aggregated-logging-kibana", 
                            "terminationGracePeriodSeconds": 30, 
                            "volumes": [
                                {
                                    "name": "kibana", 
                                    "secret": {
                                        "defaultMode": 420, 
                                        "secretName": "logging-kibana"
                                    }
                                }, 
                                {
                                    "name": "kibana-proxy", 
                                    "secret": {
                                        "defaultMode": 420, 
                                        "secretName": "logging-kibana-proxy"
                                    }
                                }
                            ]
                        }
                    }, 
                    "test": false, 
                    "triggers": [
                        {
                            "type": "ConfigChange"
                        }
                    ]
                }, 
                "status": {
                    "availableReplicas": 0, 
                    "conditions": [
                        {
                            "lastTransitionTime": "2017-06-09T01:53:46Z", 
                            "lastUpdateTime": "2017-06-09T01:53:46Z", 
                            "message": "Deployment config does not have minimum availability.", 
                            "status": "False", 
                            "type": "Available"
                        }, 
                        {
                            "lastTransitionTime": "2017-06-09T01:53:46Z", 
                            "lastUpdateTime": "2017-06-09T01:53:46Z", 
                            "message": "replication controller \"logging-kibana-1\" is waiting for pod \"logging-kibana-1-deploy\" to run", 
                            "status": "Unknown", 
                            "type": "Progressing"
                        }
                    ], 
                    "details": {
                        "causes": [
                            {
                                "type": "ConfigChange"
                            }
                        ], 
                        "message": "config change"
                    }, 
                    "latestVersion": 1, 
                    "observedGeneration": 2, 
                    "replicas": 0, 
                    "unavailableReplicas": 0, 
                    "updatedReplicas": 0
                }
            }
        ], 
        "returncode": 0
    }, 
    "state": "present"
}
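
The logging-kibana DeploymentConfig runs two containers in one pod: "kibana" (ES_HOST=logging-es, 736Mi memory limit) and the "kibana-proxy" OAuth proxy listening on port 3000/"oaproxy" (96Mi limit), with the logging-kibana and logging-kibana-proxy secrets mounted. Once the deployer pod finishes, the usual checks are:

    # Watch the deployment come up and find the kibana pod.
    oc get pods -n logging -l component=kibana
    # Logs for each container in the latest deployment.
    oc logs dc/logging-kibana -c kibana -n logging
    oc logs dc/logging-kibana -c kibana-proxy -n logging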

TASK [openshift_logging_kibana : Delete temp directory] ************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:252
ok: [openshift] => {
    "changed": false, 
    "path": "/tmp/openshift-logging-ansible-SFfLHV", 
    "state": "absent"
}

TASK [openshift_logging : include_role] ****************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:166
statically included: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/determine_version.yaml

TASK [openshift_logging_kibana : fail] *****************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/determine_version.yaml:3
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_kibana : set_fact] *************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/determine_version.yaml:7
ok: [openshift] => {
    "ansible_facts": {
        "kibana_version": "3_5"
    }, 
    "changed": false
}

TASK [openshift_logging_kibana : set_fact] *************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/determine_version.yaml:12
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_kibana : fail] *****************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/determine_version.yaml:15
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_kibana : Create temp directory for doing work in] ******
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:7
ok: [openshift] => {
    "changed": false, 
    "cmd": [
        "mktemp", 
        "-d", 
        "/tmp/openshift-logging-ansible-XXXXXX"
    ], 
    "delta": "0:00:00.003575", 
    "end": "2017-06-08 21:53:48.139648", 
    "rc": 0, 
    "start": "2017-06-08 21:53:48.136073"
}

STDOUT:

/tmp/openshift-logging-ansible-SfsjMA

TASK [openshift_logging_kibana : set_fact] *************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:12
ok: [openshift] => {
    "ansible_facts": {
        "tempdir": "/tmp/openshift-logging-ansible-SfsjMA"
    }, 
    "changed": false
}

TASK [openshift_logging_kibana : Create templates subdirectory] ****************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:16
ok: [openshift] => {
    "changed": false, 
    "gid": 0, 
    "group": "root", 
    "mode": "0755", 
    "owner": "root", 
    "path": "/tmp/openshift-logging-ansible-SfsjMA/templates", 
    "secontext": "unconfined_u:object_r:user_tmp_t:s0", 
    "size": 6, 
    "state": "directory", 
    "uid": 0
}

TASK [openshift_logging_kibana : Create Kibana service account] ****************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:26
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_kibana : Create Kibana service account] ****************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:34
ok: [openshift] => {
    "changed": false, 
    "results": {
        "cmd": "/bin/oc get sa aggregated-logging-kibana -o json -n logging", 
        "results": [
            {
                "apiVersion": "v1", 
                "imagePullSecrets": [
                    {
                        "name": "aggregated-logging-kibana-dockercfg-d935h"
                    }
                ], 
                "kind": "ServiceAccount", 
                "metadata": {
                    "creationTimestamp": "2017-06-09T01:53:36Z", 
                    "name": "aggregated-logging-kibana", 
                    "namespace": "logging", 
                    "resourceVersion": "1420", 
                    "selfLink": "/api/v1/namespaces/logging/serviceaccounts/aggregated-logging-kibana", 
                    "uid": "73a8514d-4cb6-11e7-9445-0ecf874efb82"
                }, 
                "secrets": [
                    {
                        "name": "aggregated-logging-kibana-token-jhgn1"
                    }, 
                    {
                        "name": "aggregated-logging-kibana-dockercfg-d935h"
                    }
                ]
            }
        ], 
        "returncode": 0
    }, 
    "state": "present"
}
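The service account check above is idempotent: the role re-reads the existing aggregated-logging-kibana service account ("changed": false) rather than recreating it. The same lookup can be repeated by hand with the oc client and namespace from this run (a sketch; the describe form is an assumption, not part of the playbook):

    /bin/oc get sa aggregated-logging-kibana -o json -n logging
    /bin/oc describe sa aggregated-logging-kibana -n logging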

TASK [openshift_logging_kibana : set_fact] *************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:42
ok: [openshift] => {
    "ansible_facts": {
        "kibana_component": "kibana-ops", 
        "kibana_name": "logging-kibana-ops"
    }, 
    "changed": false
}

TASK [openshift_logging_kibana : Checking for session_secret] ******************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:47
ok: [openshift] => {
    "changed": false, 
    "stat": {
        "atime": 1496973218.589521, 
        "attr_flags": "", 
        "attributes": [], 
        "block_size": 4096, 
        "blocks": 8, 
        "charset": "us-ascii", 
        "checksum": "cc13f16d20aca64a8a3ac3c1bfc4b6639e14f052", 
        "ctime": 1496973217.5895336, 
        "dev": 51714, 
        "device_type": 0, 
        "executable": false, 
        "exists": true, 
        "gid": 0, 
        "gr_name": "root", 
        "inode": 38055315, 
        "isblk": false, 
        "ischr": false, 
        "isdir": false, 
        "isfifo": false, 
        "isgid": false, 
        "islnk": false, 
        "isreg": true, 
        "issock": false, 
        "isuid": false, 
        "md5": "7ddd392546ca141554c1ed73d3915d43", 
        "mimetype": "text/plain", 
        "mode": "0644", 
        "mtime": 1496973217.484535, 
        "nlink": 1, 
        "path": "/etc/origin/logging/session_secret", 
        "pw_name": "root", 
        "readable": true, 
        "rgrp": true, 
        "roth": true, 
        "rusr": true, 
        "size": 200, 
        "uid": 0, 
        "version": "260998254", 
        "wgrp": false, 
        "woth": false, 
        "writeable": true, 
        "wusr": true, 
        "xgrp": false, 
        "xoth": false, 
        "xusr": false
    }
}

TASK [openshift_logging_kibana : Checking for oauth_secret] ********************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:51
ok: [openshift] => {
    "changed": false, 
    "stat": {
        "atime": 1496973218.7005196, 
        "attr_flags": "", 
        "attributes": [], 
        "block_size": 4096, 
        "blocks": 8, 
        "charset": "us-ascii", 
        "checksum": "11b71104e237b50ed34e74a30f519f58b3ea5cb7", 
        "ctime": 1496973217.9115295, 
        "dev": 51714, 
        "device_type": 0, 
        "executable": false, 
        "exists": true, 
        "gid": 0, 
        "gr_name": "root", 
        "inode": 59971699, 
        "isblk": false, 
        "ischr": false, 
        "isdir": false, 
        "isfifo": false, 
        "isgid": false, 
        "islnk": false, 
        "isreg": true, 
        "issock": false, 
        "isuid": false, 
        "md5": "622d149e6a5977f476cb244e0c339c04", 
        "mimetype": "text/plain", 
        "mode": "0644", 
        "mtime": 1496973217.798531, 
        "nlink": 1, 
        "path": "/etc/origin/logging/oauth_secret", 
        "pw_name": "root", 
        "readable": true, 
        "rgrp": true, 
        "roth": true, 
        "rusr": true, 
        "size": 64, 
        "uid": 0, 
        "version": "687440741", 
        "wgrp": false, 
        "woth": false, 
        "writeable": true, 
        "wusr": true, 
        "xgrp": false, 
        "xoth": false, 
        "xusr": false
    }
}
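Both stat checks above find existing files under /etc/origin/logging (session_secret, 200 bytes; oauth_secret, 64 bytes), which is why the "Generate session secret" and "Generate oauth secret" tasks that follow are skipped. To inspect the same files directly on the host, something like this would do (an illustrative command, not from the playbook):

    stat -c '%U:%G %a %s bytes %y' /etc/origin/logging/session_secret /etc/origin/logging/oauth_secret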

TASK [openshift_logging_kibana : Generate session secret] **********************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:56
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_kibana : Generate oauth secret] ************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:64
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_kibana : Retrieving the cert to use when generating secrets for the logging components] ***
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:71
ok: [openshift] => (item={u'name': u'ca_file', u'file': u'ca.crt'}) => {
    "changed": false, 
    "content": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMyakNDQWNLZ0F3SUJBZ0lCQVRBTkJna3Foa2lHOXcwQkFRc0ZBREFlTVJ3d0dnWURWUVFERXhOc2IyZG4KYVc1bkxYTnBaMjVsY2kxMFpYTjBNQjRYRFRFM01EWXdPVEF4TlRJME9Wb1hEVEl5TURZd09EQXhOVEkxTUZvdwpIakVjTUJvR0ExVUVBeE1UYkc5bloybHVaeTF6YVdkdVpYSXRkR1Z6ZERDQ0FTSXdEUVlKS29aSWh2Y05BUUVCCkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUt5OFlydXA4SXc1dDhFVit1bWFkQWkrYmNQSzd2d00xbUhqWC8wdndOU3oKWXNSRDNvWGRqeGl2b2dlbTBhbmM4aWJvRmRVZmQwcFJaUG1QRHZkaHgvVmtDMzBvSktLVWZFN3IrZ2ppT1VGVQp5YkVFbTBPQjNxTEZ1TVZEcHg0QjhSRzBlUTFscU1qdk5wM0hGWVgzKzJjbEkrOVp6ZHMycUNDbmJRb3VVc0xpCnNYZUlmTHVGYVcvWmlRdkVsMnlYbVgvKzR6ZUd0eHBDTXhROXFneDVUU3Y3cmpPclV4dzVhWDJucFNwcnlBeFEKS2xvcU9jSUdNNXdPbDNPbXRTcEJSbFU1N284NHlXaTdwUzF0ZlBDL09wMlVYTVlsN1RBS2dkRkFmU2w2YmtVYQpFcStaU1Q2Q1Nsd0VtVVB1R2JmUzhOeFF5dUFVQmdsN3hOdEJhRHR0clJNQ0F3RUFBYU1qTUNFd0RnWURWUjBQCkFRSC9CQVFEQWdLa01BOEdBMVVkRXdFQi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFHbFYKQTQwQUNzY3cwR1hYTEIwOVZmZWNoaU03bmxiTDViM0xvNU1abWlIaFpYcXBVY2FITUtxRFBmdktRc3k5UHRXYwpYRS9acis2b3RlMExiWlBBa0h2TThhak56OW1QMmpsMUNhODVBcmNMeWhaOTUxV0U1eW02Y3dYSWo4SjlWdEtmCk95SDdHNmlWSmR2VndLRjdrWC9ZNkFvMmEyVDRHa2xCODZKcVJGRFRHYUxhdVlKQ05rN1lFSU9BbjZEdXRiemwKRTl3ZXRmNWd6M2h2UE5vd25yb0p2aTdWL01tQjY5Zzk4YnV4c0NCS2hDSWZrZ0tudWRjUmlwZ3JHcVJNeHhKeQpnNi9nK21nOFNzcXN4emxURUN4TzhLOUlZekpOcnlXZ3F0MTY4RTFobURHNS93RUxMUENzUldEd2pTelBsOEpXCms0c2ZaRGNzRXB1RUFwQktxWEk9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K", 
    "encoding": "base64", 
    "item": {
        "file": "ca.crt", 
        "name": "ca_file"
    }, 
    "source": "/etc/origin/logging/ca.crt"
}
ok: [openshift] => (item={u'name': u'kibana_internal_key', u'file': u'kibana-internal.key'}) => {
    "changed": false, 
    "content": "LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBdGNsSVZmcHpnNXZNdUVmVEFrT2daOU5HU1h2NXFvRGtSRWYzU2hnOWJxOXg4YXIrClQya1JYMEJjRHptTytOL1N3MDVpYzgzdGtKUStaN1hRdzBzUUdvVlZFWm9zN2hJaUhkWk9PS0FPUzkyRUhYSEcKRzVoSjQ5c2p4bnRJUDgzYmNySE1KSmZEZVNRaGdWZy9mcDhEM0llZjgyd0QxbGdKeGsySG5ZVlU4TFBVdlZUawo0Uk43ZkFRYTM2Wi9OUjNYWDRPUjFsa3RWQlI2dWJTSXkwekpqbEtNL2JKYlllWHpuZUtTM0VISWYxWmhGMWpaCmZiWUhKeWorNStwZGd3OFVuOUVzeFNUZ052amQrQXJnYTJ5ZEp4enFBUWw0R29aZEdxRXVya08vT0R2UWlkVDMKR1IxKzlLVnVSdG5SampHZjdSWEdXbUR2UlZpYnM2K1JxT1d0YlFJREFRQUJBb0lCQUJaakpvUm9KcWV6blQrbwpvTVRyblNxTUsyRExZdERydExEd0IvVlpETisvdlpHY2xGc2xQbDF6cUtLN1hPOHJhV0ppR2QvWElZV25yQlBMCm9WMGJ0bXo5dEo5SlZIVXhTSUJTTHluc0ZEYWxuaXFlSTE2c241VHZIUFhKb3Zrd21mRURFbmdETkxDTGtaREQKVkhaOGtOWXM0YmJ4dTNzL05sejBtVm45M0pzVDVWQ0xqY1hKZE93cjd0aHpmbVJDTjAxZGk1ei95L2dubWQwdgpRZEhsaS9xclc1d2hqMHo5U1VUTXpoaVo4bDBRZFhXWndhaHlvZ2U1OWFkUGlKZmd3RTRQRFVFYm4xNWlYTVQvCm1qdEZXcXZXb3dwcmhzcU5scXJrZGdZR3B6K0htS0JlTm9qbE5kN2N0cDdWQlBhOHYxZVRmVGNBeVQxVUljT3UKSVZGaHQrRUNnWUVBNHRsOEowS1lHcmczN1QzR2xPaTg3Qzk0VEZScmVTZHFOdFpsQnByREZmVkhRWUdydGRhNQo5a0RIMEZHUEdjVjNNVVZvYjZ3enRrZy9nekZtUDdQY0ZVR0JrODQ4ejI0c21EcnFNN2hKZHd0cGxGMnJUbjFxCm1tZGRhYmcybDRVcngrTm1PU2FaejUySE81RUQ5bVJMUzlnbXFmYVR5blR5MWk2T2hNMU9oQVVDZ1lFQXpTVmgKbUkraFlzNTl4UU0xUGpPZWIwNzVja2JiRGVoYmFaT0k0eDNQOWtOa3UyOXRwNWFDS1NIcVhGMFYzZ3k1NXFURAozMVEyTlVPbWUxTGg0UjVyekxBeVgvWVY4bllWcWdWdkcyN2NTcVJOWlNXTjVHYkRPaXNvRlBDVEI3aXVock5ZCjIvczJuY3cwUWtSTGFVMU9oV3p0THNWZUR0Ulp2MDhseHI1Z2FFa0NnWUVBbnFaUHAvMXc5eTdqSGk1WUZZaDMKcUE3QzZVOFpJdEFuL2xZT3JZSEs4aTVxT1N3QTlObEprU2xaRlI0VklJYnpoeWZ0bER3d3Brajg4am00TXRFTgpHR2lKd044NXREQnZTNy9ZVDNlUkdZcUh1bFdRR3dLbmJYamc0YkVOclFaYnloNEZQZTc3SHpJaWc4dzFvem9kClZ0dkNucGR1WU9kTmRmRjFodmMyOUNrQ2dZRUF2WmVHa3hCcS9uNElEa1BndVJQTG9PTkQ5akUxMGF5a2p2WWkKMUlPQTV2OXg0U2dpRjNncDR3bk5KbitBN2k2a3dGd1dDaGd4NFJnY2pHMFZCSkN3NEFNWEMwakxEOEhDVTllaAp6NkN0UnU2QitMQzBhaG51NDV0dTk2cyt0eXdmWDYzd3VaMTU1R3dOQUJGT0FJdkp2ZFhsZmd3NTJVcTNodThHCjRwNmZTc0VDZ1lBMFAxT3NsUkZDYWg1UFpEYlE0c2ViQ1djZUdVK3d6Ymg0eldBTksyQ2h4OXJqaGpteVEyeTQKMElEQkJSRitranhqcnI4UmlJb1hoeWRFZjRsd0hzaWh6Q3Q1KzFMblZmeVdMbEZCNkVWRXQ3Tk0rV2gzMFdreApxUEliZ3hGRnZNU0lETHRPNDJTR21SUXF1SkZmUTQxT05aN2MxNXpIQk5FTkovenNreVQzdWc9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=", 
    "encoding": "base64", 
    "item": {
        "file": "kibana-internal.key", 
        "name": "kibana_internal_key"
    }, 
    "source": "/etc/origin/logging/kibana-internal.key"
}
ok: [openshift] => (item={u'name': u'kibana_internal_cert', u'file': u'kibana-internal.crt'}) => {
    "changed": false, 
    "content": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURUakNDQWphZ0F3SUJBZ0lCQWpBTkJna3Foa2lHOXcwQkFRc0ZBREFlTVJ3d0dnWURWUVFERXhOc2IyZG4KYVc1bkxYTnBaMjVsY2kxMFpYTjBNQjRYRFRFM01EWXdPVEF4TlRJMU1Wb1hEVEU1TURZd09UQXhOVEkxTWxvdwpGakVVTUJJR0ExVUVBeE1MSUd0cFltRnVZUzF2Y0hNd2dnRWlNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0SUJEd0F3CmdnRUtBb0lCQVFDMXlVaFYrbk9EbTh5NFI5TUNRNkJuMDBaSmUvbXFnT1JFUi9kS0dEMXVyM0h4cXY1UGFSRmYKUUZ3UE9ZNzQzOUxEVG1KenplMlFsRDVudGRERFN4QWFoVlVSbWl6dUVpSWQxazQ0b0E1TDNZUWRjY1libUVuagoyeVBHZTBnL3pkdHlzY3drbDhONUpDR0JXRDkrbndQY2g1L3piQVBXV0FuR1RZZWRoVlR3czlTOVZPVGhFM3Q4CkJCcmZwbjgxSGRkZmc1SFdXUzFVRkhxNXRJakxUTW1PVW96OXNsdGg1Zk9kNHBMY1FjaC9WbUVYV05sOXRnY24KS1A3bjZsMkREeFNmMFN6RkpPQTIrTjM0Q3VCcmJKMG5IT29CQ1hnYWhsMGFvUzZ1UTc4NE85Q0oxUGNaSFg3MApwVzVHMmRHT01aL3RGY1phWU85RldKdXpyNUdvNWExdEFnTUJBQUdqZ1o0d2dac3dEZ1lEVlIwUEFRSC9CQVFECkFnV2dNQk1HQTFVZEpRUU1NQW9HQ0NzR0FRVUZCd01CTUF3R0ExVWRFd0VCL3dRQ01BQXdaZ1lEVlIwUkJGOHcKWFlJTElHdHBZbUZ1WVMxdmNIT0NMQ0JyYVdKaGJtRXRiM0J6TG5KdmRYUmxjaTVrWldaaGRXeDBMbk4yWXk1agpiSFZ6ZEdWeUxteHZZMkZzZ2hnZ2EybGlZVzVoTGpFeU55NHdMakF1TVM1NGFYQXVhVytDQm10cFltRnVZVEFOCkJna3Foa2lHOXcwQkFRc0ZBQU9DQVFFQXB4bEJhWmxRNGxZRm94Sm5lbXJQWEFkWWxXQkY1WXVJSDgrOUJoQ0MKcHRzNTJ3YnpFQkM2cXptVVA0Y0MzdkY1UjhuYTZiUE1VeHBzVzZ1aGV1Q1pkZ0g5Y1pYRjJYdHdBVnFJdlRjSwpoMGZNaWNlcVRSQXNta0VHNHV3STRMcjhWdUYxVFo4UUprcSs4bktkL1ZodUxVeUFMaHdnNkkwMWs4YTFlSUd6ClJic2hJeEZRRW0yNk85bytGRS9SNXkzVk1RZWhTaVkvL0RJbm5XZDN3OFJaQVlzMHFWaDJkcXhSSjBKOUdjbWIKazMxOCs0N0x4WmdzTVZjaVc1Mm5FbllXUU9qZ0VIdGFXbTFxM2N4RExUUnMvT1VkU0R3N1RwRnFXenhLOGE3VwpYRzRJUzdwNXc0N0EvUXRBN3llL1R4RDRlSEtrcFFZQXZISFhkUTY4cmVyNE13PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQotLS0tLUJFR0lOIENFUlRJRklDQVRFLS0tLS0KTUlJQzJqQ0NBY0tnQXdJQkFnSUJBVEFOQmdrcWhraUc5dzBCQVFzRkFEQWVNUnd3R2dZRFZRUURFeE5zYjJkbgphVzVuTFhOcFoyNWxjaTEwWlhOME1CNFhEVEUzTURZd09UQXhOVEkwT1ZvWERUSXlNRFl3T0RBeE5USTFNRm93CkhqRWNNQm9HQTFVRUF4TVRiRzluWjJsdVp5MXphV2R1WlhJdGRHVnpkRENDQVNJd0RRWUpLb1pJaHZjTkFRRUIKQlFBRGdnRVBBRENDQVFvQ2dnRUJBS3k4WXJ1cDhJdzV0OEVWK3VtYWRBaStiY1BLN3Z3TTFtSGpYLzB2d05TegpZc1JEM29YZGp4aXZvZ2VtMGFuYzhpYm9GZFVmZDBwUlpQbVBEdmRoeC9Wa0MzMG9KS0tVZkU3citnamlPVUZVCnliRUVtME9CM3FMRnVNVkRweDRCOFJHMGVRMWxxTWp2TnAzSEZZWDMrMmNsSSs5WnpkczJxQ0NuYlFvdVVzTGkKc1hlSWZMdUZhVy9aaVF2RWwyeVhtWC8rNHplR3R4cENNeFE5cWd4NVRTdjdyak9yVXh3NWFYMm5wU3ByeUF4UQpLbG9xT2NJR001d09sM09tdFNwQlJsVTU3bzg0eVdpN3BTMXRmUEMvT3AyVVhNWWw3VEFLZ2RGQWZTbDZia1VhCkVxK1pTVDZDU2x3RW1VUHVHYmZTOE54UXl1QVVCZ2w3eE50QmFEdHRyUk1DQXdFQUFhTWpNQ0V3RGdZRFZSMFAKQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCL3dRRk1BTUJBZjh3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUdsVgpBNDBBQ3NjdzBHWFhMQjA5VmZlY2hpTTdubGJMNWIzTG81TVptaUhoWlhxcFVjYUhNS3FEUGZ2S1FzeTlQdFdjClhFL1pyKzZvdGUwTGJaUEFrSHZNOGFqTno5bVAyamwxQ2E4NUFyY0x5aFo5NTFXRTV5bTZjd1hJajhKOVZ0S2YKT3lIN0c2aVZKZHZWd0tGN2tYL1k2QW8yYTJUNEdrbEI4NkpxUkZEVEdhTGF1WUpDTms3WUVJT0FuNkR1dGJ6bApFOXdldGY1Z3ozaHZQTm93bnJvSnZpN1YvTW1CNjlnOThidXhzQ0JLaENJZmtnS251ZGNSaXBnckdxUk14eEp5Cmc2L2crbWc4U3Nxc3h6bFRFQ3hPOEs5SVl6Sk5yeVdncXQxNjhFMWhtREc1L3dFTExQQ3NSV0R3alN6UGw4SlcKazRzZlpEY3NFcHVFQXBCS3FYST0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=", 
    "encoding": "base64", 
    "item": {
        "file": "kibana-internal.crt", 
        "name": "kibana_internal_cert"
    }, 
    "source": "/etc/origin/logging/kibana-internal.crt"
}
ok: [openshift] => (item={u'name': u'server_tls', u'file': u'server-tls.json'}) => {
    "changed": false, 
    "content": "Ly8gU2VlIGZvciBhdmFpbGFibGUgb3B0aW9uczogaHR0cHM6Ly9ub2RlanMub3JnL2FwaS90bHMuaHRtbCN0bHNfdGxzX2NyZWF0ZXNlcnZlcl9vcHRpb25zX3NlY3VyZWNvbm5lY3Rpb25saXN0ZW5lcgp0bHNfb3B0aW9ucyA9IHsKCWNpcGhlcnM6ICdrRUVDREg6K2tFRUNESCtTSEE6a0VESDora0VESCtTSEE6K2tFREgrQ0FNRUxMSUE6a0VDREg6K2tFQ0RIK1NIQTprUlNBOitrUlNBK1NIQTora1JTQStDQU1FTExJQTohYU5VTEw6IWVOVUxMOiFTU0x2MjohUkM0OiFERVM6IUVYUDohU0VFRDohSURFQTorM0RFUycsCglob25vckNpcGhlck9yZGVyOiB0cnVlCn0K", 
    "encoding": "base64", 
    "item": {
        "file": "server-tls.json", 
        "name": "server_tls"
    }, 
    "source": "/etc/origin/logging/server-tls.json"
}
ok: [openshift] => (item={u'name': u'session_secret', u'file': u'session_secret'}) => {
    "changed": false, 
    "content": "OEdTRjdYbWxxenpHb3ZKd0tRRVJUYzMxZjhMV3BESWtOdW5oTHVnczhZV3VSUExrdFlSV0lDa3RTWHlzYXFDaEJkc2FSY3pHU2NYUWxXUHpxbVFkYVBsRktvY2UxRGVGSXN2d3VCa1pzdHhMeXR6cXFIc1pwakx2ZTdJb2RhaWNjWUJ6QWtmV2lZNDB2Y0ozaFNSSmRya1BlUVN3a0ZudmRsV1F4QUo5TWIwT1Z3SUM2VDdvelNKbURKYVlMck02dTZjelpRNnM=", 
    "encoding": "base64", 
    "item": {
        "file": "session_secret", 
        "name": "session_secret"
    }, 
    "source": "/etc/origin/logging/session_secret"
}
ok: [openshift] => (item={u'name': u'oauth_secret', u'file': u'oauth_secret'}) => {
    "changed": false, 
    "content": "ZmZYVkNRVVdyb2xqVnQ5Sjk5U0hZS2N3dUpqakU3UXRRRWNDd29yeFRnSEVWWDBXWTg0Z0U2bEdza3d6TFVLZA==", 
    "encoding": "base64", 
    "item": {
        "file": "oauth_secret", 
        "name": "oauth_secret"
    }, 
    "source": "/etc/origin/logging/oauth_secret"
}
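Each item above is the base64-encoded content slurped from /etc/origin/logging; the role reuses these files when building the Kibana secrets further down. To confirm what the signer certificate actually contains, the source file can be read with openssl (an illustrative check, not part of the role):

    openssl x509 -in /etc/origin/logging/ca.crt -noout -subject -dates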

TASK [openshift_logging_kibana : Set logging-kibana-ops service] ***************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:84
changed: [openshift] => {
    "changed": true, 
    "results": {
        "clusterip": "172.30.33.94", 
        "cmd": "/bin/oc get service logging-kibana-ops -o json -n logging", 
        "results": [
            {
                "apiVersion": "v1", 
                "kind": "Service", 
                "metadata": {
                    "creationTimestamp": "2017-06-09T01:53:52Z", 
                    "name": "logging-kibana-ops", 
                    "namespace": "logging", 
                    "resourceVersion": "1476", 
                    "selfLink": "/api/v1/namespaces/logging/services/logging-kibana-ops", 
                    "uid": "7ce970d2-4cb6-11e7-9445-0ecf874efb82"
                }, 
                "spec": {
                    "clusterIP": "172.30.33.94", 
                    "ports": [
                        {
                            "port": 443, 
                            "protocol": "TCP", 
                            "targetPort": "oaproxy"
                        }
                    ], 
                    "selector": {
                        "component": "kibana-ops", 
                        "provider": "openshift"
                    }, 
                    "sessionAffinity": "None", 
                    "type": "ClusterIP"
                }, 
                "status": {
                    "loadBalancer": {}
                }
            }
        ], 
        "returncode": 0
    }, 
    "state": "present"
}
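Stripped of status and server-populated metadata, the logging-kibana-ops service ensured above reduces to roughly the following definition (a sketch reconstructed from the JSON output, not the role's actual template):

    apiVersion: v1
    kind: Service
    metadata:
      name: logging-kibana-ops
      namespace: logging
    spec:
      type: ClusterIP
      ports:
      - port: 443
        protocol: TCP
        targetPort: oaproxy
      selector:
        component: kibana-ops
        provider: openshift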

TASK [openshift_logging_kibana : set_fact] *************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:101
 [WARNING]: when statements should not include jinja2 templating delimiters such as {{ }} or {% %}. Found: {{ openshift_logging_kibana_key | trim | length > 0 }}
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_kibana : set_fact] *************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:106
 [WARNING]: when statements should not include jinja2 templating delimiters such as {{ }} or {% %}. Found: {{ openshift_logging_kibana_cert | trim | length > 0 }}
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_kibana : set_fact] *************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:111
 [WARNING]: when statements should not include jinja2 templating delimiters such as {{ }} or {% %}. Found: {{ openshift_logging_kibana_ca | trim | length > 0 }}
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_kibana : set_fact] *************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:116
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_kibana : Generating Kibana route template] *************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:121
ok: [openshift] => {
    "changed": false, 
    "checksum": "ad9c765b1c3c1d798502e9b1aa0ceb2385812907", 
    "dest": "/tmp/openshift-logging-ansible-SfsjMA/templates/kibana-route.yaml", 
    "gid": 0, 
    "group": "root", 
    "md5sum": "cfafda4a1d31e739a4d6fa19d2a8906c", 
    "mode": "0644", 
    "owner": "root", 
    "secontext": "unconfined_u:object_r:admin_home_t:s0", 
    "size": 2726, 
    "src": "/root/.ansible/tmp/ansible-tmp-1496973232.82-269669929977861/source", 
    "state": "file", 
    "uid": 0
}

TASK [openshift_logging_kibana : Setting Kibana route] *************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:141
changed: [openshift] => {
    "changed": true, 
    "results": {
        "cmd": "/bin/oc get route logging-kibana-ops -o json -n logging", 
        "results": [
            {
                "apiVersion": "v1", 
                "kind": "Route", 
                "metadata": {
                    "creationTimestamp": "2017-06-09T01:53:53Z", 
                    "labels": {
                        "component": "support", 
                        "logging-infra": "support", 
                        "provider": "openshift"
                    }, 
                    "name": "logging-kibana-ops", 
                    "namespace": "logging", 
                    "resourceVersion": "1487", 
                    "selfLink": "/oapi/v1/namespaces/logging/routes/logging-kibana-ops", 
                    "uid": "7dfe137f-4cb6-11e7-9445-0ecf874efb82"
                }, 
                "spec": {
                    "host": "kibana-ops.router.default.svc.cluster.local", 
                    "tls": {
                        "caCertificate": "-----BEGIN CERTIFICATE-----\nMIIC2jCCAcKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAeMRwwGgYDVQQDExNsb2dn\naW5nLXNpZ25lci10ZXN0MB4XDTE3MDYwOTAxNTI0OVoXDTIyMDYwODAxNTI1MFow\nHjEcMBoGA1UEAxMTbG9nZ2luZy1zaWduZXItdGVzdDCCASIwDQYJKoZIhvcNAQEB\nBQADggEPADCCAQoCggEBAKy8Yrup8Iw5t8EV+umadAi+bcPK7vwM1mHjX/0vwNSz\nYsRD3oXdjxivogem0anc8iboFdUfd0pRZPmPDvdhx/VkC30oJKKUfE7r+gjiOUFU\nybEEm0OB3qLFuMVDpx4B8RG0eQ1lqMjvNp3HFYX3+2clI+9Zzds2qCCnbQouUsLi\nsXeIfLuFaW/ZiQvEl2yXmX/+4zeGtxpCMxQ9qgx5TSv7rjOrUxw5aX2npSpryAxQ\nKloqOcIGM5wOl3OmtSpBRlU57o84yWi7pS1tfPC/Op2UXMYl7TAKgdFAfSl6bkUa\nEq+ZST6CSlwEmUPuGbfS8NxQyuAUBgl7xNtBaDttrRMCAwEAAaMjMCEwDgYDVR0P\nAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAGlV\nA40ACscw0GXXLB09VfechiM7nlbL5b3Lo5MZmiHhZXqpUcaHMKqDPfvKQsy9PtWc\nXE/Zr+6ote0LbZPAkHvM8ajNz9mP2jl1Ca85ArcLyhZ951WE5ym6cwXIj8J9VtKf\nOyH7G6iVJdvVwKF7kX/Y6Ao2a2T4GklB86JqRFDTGaLauYJCNk7YEIOAn6Dutbzl\nE9wetf5gz3hvPNownroJvi7V/MmB69g98buxsCBKhCIfkgKnudcRipgrGqRMxxJy\ng6/g+mg8SsqsxzlTECxO8K9IYzJNryWgqt168E1hmDG5/wELLPCsRWDwjSzPl8JW\nk4sfZDcsEpuEApBKqXI=\n-----END CERTIFICATE-----\n", 
                        "destinationCACertificate": "-----BEGIN CERTIFICATE-----\nMIIC2jCCAcKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAeMRwwGgYDVQQDExNsb2dn\naW5nLXNpZ25lci10ZXN0MB4XDTE3MDYwOTAxNTI0OVoXDTIyMDYwODAxNTI1MFow\nHjEcMBoGA1UEAxMTbG9nZ2luZy1zaWduZXItdGVzdDCCASIwDQYJKoZIhvcNAQEB\nBQADggEPADCCAQoCggEBAKy8Yrup8Iw5t8EV+umadAi+bcPK7vwM1mHjX/0vwNSz\nYsRD3oXdjxivogem0anc8iboFdUfd0pRZPmPDvdhx/VkC30oJKKUfE7r+gjiOUFU\nybEEm0OB3qLFuMVDpx4B8RG0eQ1lqMjvNp3HFYX3+2clI+9Zzds2qCCnbQouUsLi\nsXeIfLuFaW/ZiQvEl2yXmX/+4zeGtxpCMxQ9qgx5TSv7rjOrUxw5aX2npSpryAxQ\nKloqOcIGM5wOl3OmtSpBRlU57o84yWi7pS1tfPC/Op2UXMYl7TAKgdFAfSl6bkUa\nEq+ZST6CSlwEmUPuGbfS8NxQyuAUBgl7xNtBaDttrRMCAwEAAaMjMCEwDgYDVR0P\nAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAGlV\nA40ACscw0GXXLB09VfechiM7nlbL5b3Lo5MZmiHhZXqpUcaHMKqDPfvKQsy9PtWc\nXE/Zr+6ote0LbZPAkHvM8ajNz9mP2jl1Ca85ArcLyhZ951WE5ym6cwXIj8J9VtKf\nOyH7G6iVJdvVwKF7kX/Y6Ao2a2T4GklB86JqRFDTGaLauYJCNk7YEIOAn6Dutbzl\nE9wetf5gz3hvPNownroJvi7V/MmB69g98buxsCBKhCIfkgKnudcRipgrGqRMxxJy\ng6/g+mg8SsqsxzlTECxO8K9IYzJNryWgqt168E1hmDG5/wELLPCsRWDwjSzPl8JW\nk4sfZDcsEpuEApBKqXI=\n-----END CERTIFICATE-----\n", 
                        "insecureEdgeTerminationPolicy": "Redirect", 
                        "termination": "reencrypt"
                    }, 
                    "to": {
                        "kind": "Service", 
                        "name": "logging-kibana-ops", 
                        "weight": 100
                    }, 
                    "wildcardPolicy": "None"
                }, 
                "status": {
                    "ingress": [
                        {
                            "conditions": [
                                {
                                    "lastTransitionTime": "2017-06-09T01:53:53Z", 
                                    "status": "True", 
                                    "type": "Admitted"
                                }
                            ], 
                            "host": "kibana-ops.router.default.svc.cluster.local", 
                            "routerName": "router", 
                            "wildcardPolicy": "None"
                        }
                    ]
                }
            }
        ], 
        "returncode": 0
    }, 
    "state": "present"
}
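The route above exposes the kibana-ops service at kibana-ops.router.default.svc.cluster.local with re-encrypt TLS termination and a redirect for insecure traffic. The host and TLS mode can be pulled back out with jsonpath (illustrative commands, not from the playbook):

    /bin/oc get route logging-kibana-ops -n logging -o jsonpath='{.spec.host}'
    /bin/oc get route logging-kibana-ops -n logging -o jsonpath='{.spec.tls.termination}'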

TASK [openshift_logging_kibana : Get current oauthclient hostnames] ************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:151
ok: [openshift] => {
    "changed": false, 
    "results": {
        "cmd": "/bin/oc get oauthclient kibana-proxy -o json -n logging", 
        "results": [
            {
                "apiVersion": "v1", 
                "kind": "OAuthClient", 
                "metadata": {
                    "creationTimestamp": "2017-06-09T01:53:43Z", 
                    "labels": {
                        "logging-infra": "support"
                    }, 
                    "name": "kibana-proxy", 
                    "resourceVersion": "1451", 
                    "selfLink": "/oapi/v1/oauthclients/kibana-proxy", 
                    "uid": "77a45226-4cb6-11e7-9445-0ecf874efb82"
                }, 
                "redirectURIs": [
                    "https://kibana.router.default.svc.cluster.local"
                ], 
                "scopeRestrictions": [
                    {
                        "literals": [
                            "user:info", 
                            "user:check-access", 
                            "user:list-projects"
                        ]
                    }
                ], 
                "secret": "ffXVCQUWroljVt9J99SHYKcwuJjjE7QtQEcCworxTgHEVX0WY84gE6lGskwzLUKd"
            }
        ], 
        "returncode": 0
    }, 
    "state": "list"
}

TASK [openshift_logging_kibana : set_fact] *************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:159
ok: [openshift] => {
    "ansible_facts": {
        "proxy_hostnames": [
            "https://kibana.router.default.svc.cluster.local", 
            "https://kibana-ops.router.default.svc.cluster.local"
        ]
    }, 
    "changed": false
}

TASK [openshift_logging_kibana : Create oauth-client template] *****************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:162
changed: [openshift] => {
    "changed": true, 
    "checksum": "01dceb9dbbc4cf4353c1313623f8cf844d893762", 
    "dest": "/tmp/openshift-logging-ansible-SfsjMA/templates/oauth-client.yml", 
    "gid": 0, 
    "group": "root", 
    "md5sum": "046b4c7e28ef85e074a740a759473eaa", 
    "mode": "0644", 
    "owner": "root", 
    "secontext": "unconfined_u:object_r:admin_home_t:s0", 
    "size": 382, 
    "src": "/root/.ansible/tmp/ansible-tmp-1496973234.93-259369976917028/source", 
    "state": "file", 
    "uid": 0
}

TASK [openshift_logging_kibana : Set kibana-proxy oauth-client] ****************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:170
changed: [openshift] => {
    "changed": true, 
    "results": {
        "cmd": "/bin/oc get oauthclient kibana-proxy -o json -n logging", 
        "results": [
            {
                "apiVersion": "v1", 
                "kind": "OAuthClient", 
                "metadata": {
                    "creationTimestamp": "2017-06-09T01:53:43Z", 
                    "labels": {
                        "logging-infra": "support"
                    }, 
                    "name": "kibana-proxy", 
                    "resourceVersion": "1503", 
                    "selfLink": "/oapi/v1/oauthclients/kibana-proxy", 
                    "uid": "77a45226-4cb6-11e7-9445-0ecf874efb82"
                }, 
                "redirectURIs": [
                    "https://kibana.router.default.svc.cluster.local", 
                    "https://kibana-ops.router.default.svc.cluster.local"
                ], 
                "scopeRestrictions": [
                    {
                        "literals": [
                            "user:info", 
                            "user:check-access", 
                            "user:list-projects"
                        ]
                    }
                ], 
                "secret": "ffXVCQUWroljVt9J99SHYKcwuJjjE7QtQEcCworxTgHEVX0WY84gE6lGskwzLUKd"
            }
        ], 
        "returncode": 0
    }, 
    "state": "present"
}
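After this update the kibana-proxy OAuthClient lists both redirect URIs gathered into proxy_hostnames earlier (kibana and kibana-ops). A quick way to confirm just that field (an illustrative command; OAuthClients are cluster-scoped, so the namespace flag from the log is not strictly required):

    /bin/oc get oauthclient kibana-proxy -o jsonpath='{.redirectURIs}'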

TASK [openshift_logging_kibana : Set Kibana secret] ****************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:181
ok: [openshift] => {
    "changed": false, 
    "results": {
        "apiVersion": "v1", 
        "data": {
            "ca": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMyakNDQWNLZ0F3SUJBZ0lCQVRBTkJna3Foa2lHOXcwQkFRc0ZBREFlTVJ3d0dnWURWUVFERXhOc2IyZG4KYVc1bkxYTnBaMjVsY2kxMFpYTjBNQjRYRFRFM01EWXdPVEF4TlRJME9Wb1hEVEl5TURZd09EQXhOVEkxTUZvdwpIakVjTUJvR0ExVUVBeE1UYkc5bloybHVaeTF6YVdkdVpYSXRkR1Z6ZERDQ0FTSXdEUVlKS29aSWh2Y05BUUVCCkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUt5OFlydXA4SXc1dDhFVit1bWFkQWkrYmNQSzd2d00xbUhqWC8wdndOU3oKWXNSRDNvWGRqeGl2b2dlbTBhbmM4aWJvRmRVZmQwcFJaUG1QRHZkaHgvVmtDMzBvSktLVWZFN3IrZ2ppT1VGVQp5YkVFbTBPQjNxTEZ1TVZEcHg0QjhSRzBlUTFscU1qdk5wM0hGWVgzKzJjbEkrOVp6ZHMycUNDbmJRb3VVc0xpCnNYZUlmTHVGYVcvWmlRdkVsMnlYbVgvKzR6ZUd0eHBDTXhROXFneDVUU3Y3cmpPclV4dzVhWDJucFNwcnlBeFEKS2xvcU9jSUdNNXdPbDNPbXRTcEJSbFU1N284NHlXaTdwUzF0ZlBDL09wMlVYTVlsN1RBS2dkRkFmU2w2YmtVYQpFcStaU1Q2Q1Nsd0VtVVB1R2JmUzhOeFF5dUFVQmdsN3hOdEJhRHR0clJNQ0F3RUFBYU1qTUNFd0RnWURWUjBQCkFRSC9CQVFEQWdLa01BOEdBMVVkRXdFQi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFHbFYKQTQwQUNzY3cwR1hYTEIwOVZmZWNoaU03bmxiTDViM0xvNU1abWlIaFpYcXBVY2FITUtxRFBmdktRc3k5UHRXYwpYRS9acis2b3RlMExiWlBBa0h2TThhak56OW1QMmpsMUNhODVBcmNMeWhaOTUxV0U1eW02Y3dYSWo4SjlWdEtmCk95SDdHNmlWSmR2VndLRjdrWC9ZNkFvMmEyVDRHa2xCODZKcVJGRFRHYUxhdVlKQ05rN1lFSU9BbjZEdXRiemwKRTl3ZXRmNWd6M2h2UE5vd25yb0p2aTdWL01tQjY5Zzk4YnV4c0NCS2hDSWZrZ0tudWRjUmlwZ3JHcVJNeHhKeQpnNi9nK21nOFNzcXN4emxURUN4TzhLOUlZekpOcnlXZ3F0MTY4RTFobURHNS93RUxMUENzUldEd2pTelBsOEpXCms0c2ZaRGNzRXB1RUFwQktxWEk9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K", 
            "cert": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURSVENDQWkyZ0F3SUJBZ0lCQXpBTkJna3Foa2lHOXcwQkFRVUZBREFlTVJ3d0dnWURWUVFERXhOc2IyZG4KYVc1bkxYTnBaMjVsY2kxMFpYTjBNQjRYRFRFM01EWXdPVEF4TlRJMU5Wb1hEVEU1TURZd09UQXhOVEkxTlZvdwpSakVRTUE0R0ExVUVDZ3dIVEc5bloybHVaekVTTUJBR0ExVUVDd3dKVDNCbGJsTm9hV1owTVI0d0hBWURWUVFECkRCVnplWE4wWlcwdWJHOW5aMmx1Wnk1cmFXSmhibUV3Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXcKZ2dFS0FvSUJBUURKUDNzU1B2U2hPRHUyQ0liNjU1RjlFNHl3WUgwWDlGdTVLdUwrc25kK3VHdCs3ZEZHbGdpYQpzYWxlWmQ3UzJmdmJvTTN3ZEJtT0FGMFJIMndIUUR6UzFKQ0VPT2tHOEk3QlFCUjZ4d3pqRkR4dE50ZmF5N0xMCm1yYjF2M1ZwMnR3ejQ5Q1NoUlowVmhlWWMvQURnVEcrcnZyNC8xZHRKaXA0bTRuVWl6b3h4Nmppc2lpZnRLYWgKRkZTWDVBYU93cXJGZldrRXR1T0hZenZWSmdmV0JyVDF0VmpFem9idkpGMkh0V3JYOG1CUHNoL2FGNngrUEdaUQpmeTEwMGhwRzF1UzBleW9ZdCtCSmk5RVhtVXZzdFMxd3c5VnNDSmdnWXNDaUZOV0JKZk91aGtRczdkLzZOOGxkClZodGw1UHdQMFMwbjd1enJmcHY2bVhlK3ZWZTZscVlmQWdNQkFBR2paakJrTUE0R0ExVWREd0VCL3dRRUF3SUYKb0RBSkJnTlZIUk1FQWpBQU1CMEdBMVVkSlFRV01CUUdDQ3NHQVFVRkJ3TUJCZ2dyQmdFRkJRY0RBakFkQmdOVgpIUTRFRmdRVU5Qck9MM0w1SENLWVVOUDU1Y2RVNHUydytZY3dDUVlEVlIwakJBSXdBREFOQmdrcWhraUc5dzBCCkFRVUZBQU9DQVFFQVY0VDBOL1dGTkdKZUphbVZ4MlcwL2JBa0ZKQW5CaHY4aEx0SzlueGVGVHJOQzJXaUx1SnYKRnFLc0hQV09FOGhZQncyVEgwVGJWMUtncmxubjlNdDFLQXYzcTRLZkVOdXlOVVJjLzJpOGlyTW1KU3NXa3dvMgorNVoxOVlEZ2NOQ3JBemxoMG9DdlBCWnUxRHB0WHByQVRWbldhUEJ4ek5jVkk3cmR0cTlXMXhHUVpnMlNtYUU0CmxiaWtod1ZnK1lBTTBaaUFqRC8vbW9SZXFYNkV0cGx2cGxnRDVxR2lxS0RBUTRBNUk1WnFJam9STTR6K1JQN1MKenQyKzJBN3ZvUmpRd05OOVZySXdJU01pcHcwYkNnRG1SdkJhajBiVTlKbzNtalZHblpiZUNsQk1RdS9qbFBINgpabElGcmVXWjAyblhSeGdnWHo2QmpMNWkyYitKa2xtMDRRPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=", 
            "key": "LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2QUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktZd2dnU2lBZ0VBQW9JQkFRREpQM3NTUHZTaE9EdTIKQ0liNjU1RjlFNHl3WUgwWDlGdTVLdUwrc25kK3VHdCs3ZEZHbGdpYXNhbGVaZDdTMmZ2Ym9NM3dkQm1PQUYwUgpIMndIUUR6UzFKQ0VPT2tHOEk3QlFCUjZ4d3pqRkR4dE50ZmF5N0xMbXJiMXYzVnAydHd6NDlDU2hSWjBWaGVZCmMvQURnVEcrcnZyNC8xZHRKaXA0bTRuVWl6b3h4Nmppc2lpZnRLYWhGRlNYNUFhT3dxckZmV2tFdHVPSFl6dlYKSmdmV0JyVDF0VmpFem9idkpGMkh0V3JYOG1CUHNoL2FGNngrUEdaUWZ5MTAwaHBHMXVTMGV5b1l0K0JKaTlFWAptVXZzdFMxd3c5VnNDSmdnWXNDaUZOV0JKZk91aGtRczdkLzZOOGxkVmh0bDVQd1AwUzBuN3V6cmZwdjZtWGUrCnZWZTZscVlmQWdNQkFBRUNnZ0VBQWxKTkpTUzh4YTlVWVFFd0xXekdiTjd3M0lnQWFXcFVOSXFlRzdvTFR0YnAKeG9rUHhQU3VITHEzN1hMWFl5OUlqSHdLWkIreXU4U2RUamxDa2NMWDhNYXE5QnVEOUtTSndRandNNHBnUjY1UQpGQ0p4MHdCT2k3SzVNWlNIMGpUSUhZRWZRdEZ1Tk9GWlhGVGFDL0JObHBtR0k2RXViUC9udFlSMXpwSGNsZjVtClk3RUgya2trV3RRNFdaR1dUaC9KYkxUQ1BsODRIeGk3eWtwbHpYdmRoWWFhWjl1UjBTUGNzZVJzSGlhT3M4S1QKYXRPNE5TdkZ3TG1tSURCdHN0MEFwbFlOYTRwYTFIeEtlUFVkTVgwQXB3ckZZUityRHBSTURjellmekZFQ1l4YQo1QTVuZ1J2eEh3cWZYbEpXeUtqNjU1c0NRVkRza0tPWHdDdlg0Z21OWVFLQmdRRGxCUWJLUDVVL3lEVHE1WUE4CmNoSThNSDZ4d1llWGVkS3NpTnJ0em5MSjQ1KytLaFYvL1h2Y3VGQy80bUJlNU9HMVBNSGpPR1lQbm1pMTFXVVoKakdySmI5KzU1SlBYQ0t3N2MydzN6T2N4Wnk3T0p5dzFVZ1ZWWldGQkNqaXh6dEM4U3l6TkVtNmlwcjhyWWl3TwpHeGhySFBxalJoNVVzRW5abjZyUmUrckdEUUtCZ1FEZzlPUnBjV1R4QkFXZDZwOXVHcUJVcmYyM2pvcU8vbUFzCndBb040UlpKYWhXZ2l5ZUhZWU5QZEVONnJqc0haNndmcEdaZDkwaDdHVHR0aE5OT0xLUE9TUnd5dGordGlsQmMKVkdhWCs5Z1lkMzNiOWZIL0tyQ0JqYXFzMVNVTmN6ZHA4WkhRc3lYYk5LVUZYZVZBNjFIMnEzM1EvRmVEdHA0TAo2bUpyMzE3ZDJ3S0JnRm5HSUtWRFMyUVhQUGNmUTZkdUo4dkVUc1dyVVZXRmdabjBnNjFZa2hLbDBjYWZoSklKCmNYWlNJZ1UxM2dVVXY0MWw1Yk1HTnF2RXN0TWtkVjhRZGdQRWdQVERyMWhKcEFvaDhyZms4SE9qT092QzIwZUQKZ1dlNk4rZGc5Rnh1NzgvL3YrNGJYWmNRdWp0dFhrdWhQMjh0aXVwWjRDWGVmUFI3N0YvMXJWQTVBb0dBQlNZVgp4RVFRSjJRTUxOMGQ0UXRDK0MwelRXdzV4NlFTMTNOZHg0dUxVd3JXaStJamVYbkY0NStwbTdrNUtLWTZ6azZZCitUV2J0eFdRd3FUem9TcHNaV0JQQU9vaTh2bmpkUG1KajVqNERUZE83aVhtOEF3dUZna0VDd2lsM0hUeW83NGYKdEVNbGJxcjV5L0dtT2FJcE1oZ2l2UkhKZnY1REI4ckpqZWFDNlZrQ2dZQmttYWxtVXN3cG5URVRvbXV0VUVyNApTVlNJS2MzOEdxSUJGcGxJakppbU94WCtaQnhTeGJScm9uemZ5SW1IamtUN3FidndtK25ZUFYxdFBxbWhsbXR0ClB2Nk1vNkVrMGhSUWdBb0E5VEhuUDBieWV4dWZzNDRrV3hlckpzZlFOMkxuRkdaM2lRWkVWaUZmUWorcnZVQTUKYjAwQUhDZXhITEluN1o5SnRMbXVKUT09Ci0tLS0tRU5EIFBSSVZBVEUgS0VZLS0tLS0K"
        }, 
        "kind": "Secret", 
        "metadata": {
            "creationTimestamp": null, 
            "name": "logging-kibana"
        }, 
        "type": "Opaque"
    }, 
    "state": "present"
}

TASK [openshift_logging_kibana : Set Kibana Proxy secret] **********************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:195
ok: [openshift] => {
    "changed": false, 
    "results": {
        "apiVersion": "v1", 
        "data": {
            "oauth-secret": "ZmZYVkNRVVdyb2xqVnQ5Sjk5U0hZS2N3dUpqakU3UXRRRWNDd29yeFRnSEVWWDBXWTg0Z0U2bEdza3d6TFVLZA==", 
            "server-cert": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURUakNDQWphZ0F3SUJBZ0lCQWpBTkJna3Foa2lHOXcwQkFRc0ZBREFlTVJ3d0dnWURWUVFERXhOc2IyZG4KYVc1bkxYTnBaMjVsY2kxMFpYTjBNQjRYRFRFM01EWXdPVEF4TlRJMU1Wb1hEVEU1TURZd09UQXhOVEkxTWxvdwpGakVVTUJJR0ExVUVBeE1MSUd0cFltRnVZUzF2Y0hNd2dnRWlNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0SUJEd0F3CmdnRUtBb0lCQVFDMXlVaFYrbk9EbTh5NFI5TUNRNkJuMDBaSmUvbXFnT1JFUi9kS0dEMXVyM0h4cXY1UGFSRmYKUUZ3UE9ZNzQzOUxEVG1KenplMlFsRDVudGRERFN4QWFoVlVSbWl6dUVpSWQxazQ0b0E1TDNZUWRjY1libUVuagoyeVBHZTBnL3pkdHlzY3drbDhONUpDR0JXRDkrbndQY2g1L3piQVBXV0FuR1RZZWRoVlR3czlTOVZPVGhFM3Q4CkJCcmZwbjgxSGRkZmc1SFdXUzFVRkhxNXRJakxUTW1PVW96OXNsdGg1Zk9kNHBMY1FjaC9WbUVYV05sOXRnY24KS1A3bjZsMkREeFNmMFN6RkpPQTIrTjM0Q3VCcmJKMG5IT29CQ1hnYWhsMGFvUzZ1UTc4NE85Q0oxUGNaSFg3MApwVzVHMmRHT01aL3RGY1phWU85RldKdXpyNUdvNWExdEFnTUJBQUdqZ1o0d2dac3dEZ1lEVlIwUEFRSC9CQVFECkFnV2dNQk1HQTFVZEpRUU1NQW9HQ0NzR0FRVUZCd01CTUF3R0ExVWRFd0VCL3dRQ01BQXdaZ1lEVlIwUkJGOHcKWFlJTElHdHBZbUZ1WVMxdmNIT0NMQ0JyYVdKaGJtRXRiM0J6TG5KdmRYUmxjaTVrWldaaGRXeDBMbk4yWXk1agpiSFZ6ZEdWeUxteHZZMkZzZ2hnZ2EybGlZVzVoTGpFeU55NHdMakF1TVM1NGFYQXVhVytDQm10cFltRnVZVEFOCkJna3Foa2lHOXcwQkFRc0ZBQU9DQVFFQXB4bEJhWmxRNGxZRm94Sm5lbXJQWEFkWWxXQkY1WXVJSDgrOUJoQ0MKcHRzNTJ3YnpFQkM2cXptVVA0Y0MzdkY1UjhuYTZiUE1VeHBzVzZ1aGV1Q1pkZ0g5Y1pYRjJYdHdBVnFJdlRjSwpoMGZNaWNlcVRSQXNta0VHNHV3STRMcjhWdUYxVFo4UUprcSs4bktkL1ZodUxVeUFMaHdnNkkwMWs4YTFlSUd6ClJic2hJeEZRRW0yNk85bytGRS9SNXkzVk1RZWhTaVkvL0RJbm5XZDN3OFJaQVlzMHFWaDJkcXhSSjBKOUdjbWIKazMxOCs0N0x4WmdzTVZjaVc1Mm5FbllXUU9qZ0VIdGFXbTFxM2N4RExUUnMvT1VkU0R3N1RwRnFXenhLOGE3VwpYRzRJUzdwNXc0N0EvUXRBN3llL1R4RDRlSEtrcFFZQXZISFhkUTY4cmVyNE13PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQotLS0tLUJFR0lOIENFUlRJRklDQVRFLS0tLS0KTUlJQzJqQ0NBY0tnQXdJQkFnSUJBVEFOQmdrcWhraUc5dzBCQVFzRkFEQWVNUnd3R2dZRFZRUURFeE5zYjJkbgphVzVuTFhOcFoyNWxjaTEwWlhOME1CNFhEVEUzTURZd09UQXhOVEkwT1ZvWERUSXlNRFl3T0RBeE5USTFNRm93CkhqRWNNQm9HQTFVRUF4TVRiRzluWjJsdVp5MXphV2R1WlhJdGRHVnpkRENDQVNJd0RRWUpLb1pJaHZjTkFRRUIKQlFBRGdnRVBBRENDQVFvQ2dnRUJBS3k4WXJ1cDhJdzV0OEVWK3VtYWRBaStiY1BLN3Z3TTFtSGpYLzB2d05TegpZc1JEM29YZGp4aXZvZ2VtMGFuYzhpYm9GZFVmZDBwUlpQbVBEdmRoeC9Wa0MzMG9KS0tVZkU3citnamlPVUZVCnliRUVtME9CM3FMRnVNVkRweDRCOFJHMGVRMWxxTWp2TnAzSEZZWDMrMmNsSSs5WnpkczJxQ0NuYlFvdVVzTGkKc1hlSWZMdUZhVy9aaVF2RWwyeVhtWC8rNHplR3R4cENNeFE5cWd4NVRTdjdyak9yVXh3NWFYMm5wU3ByeUF4UQpLbG9xT2NJR001d09sM09tdFNwQlJsVTU3bzg0eVdpN3BTMXRmUEMvT3AyVVhNWWw3VEFLZ2RGQWZTbDZia1VhCkVxK1pTVDZDU2x3RW1VUHVHYmZTOE54UXl1QVVCZ2w3eE50QmFEdHRyUk1DQXdFQUFhTWpNQ0V3RGdZRFZSMFAKQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCL3dRRk1BTUJBZjh3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUdsVgpBNDBBQ3NjdzBHWFhMQjA5VmZlY2hpTTdubGJMNWIzTG81TVptaUhoWlhxcFVjYUhNS3FEUGZ2S1FzeTlQdFdjClhFL1pyKzZvdGUwTGJaUEFrSHZNOGFqTno5bVAyamwxQ2E4NUFyY0x5aFo5NTFXRTV5bTZjd1hJajhKOVZ0S2YKT3lIN0c2aVZKZHZWd0tGN2tYL1k2QW8yYTJUNEdrbEI4NkpxUkZEVEdhTGF1WUpDTms3WUVJT0FuNkR1dGJ6bApFOXdldGY1Z3ozaHZQTm93bnJvSnZpN1YvTW1CNjlnOThidXhzQ0JLaENJZmtnS251ZGNSaXBnckdxUk14eEp5Cmc2L2crbWc4U3Nxc3h6bFRFQ3hPOEs5SVl6Sk5yeVdncXQxNjhFMWhtREc1L3dFTExQQ3NSV0R3alN6UGw4SlcKazRzZlpEY3NFcHVFQXBCS3FYST0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=", 
            "server-key": "LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBdGNsSVZmcHpnNXZNdUVmVEFrT2daOU5HU1h2NXFvRGtSRWYzU2hnOWJxOXg4YXIrClQya1JYMEJjRHptTytOL1N3MDVpYzgzdGtKUStaN1hRdzBzUUdvVlZFWm9zN2hJaUhkWk9PS0FPUzkyRUhYSEcKRzVoSjQ5c2p4bnRJUDgzYmNySE1KSmZEZVNRaGdWZy9mcDhEM0llZjgyd0QxbGdKeGsySG5ZVlU4TFBVdlZUawo0Uk43ZkFRYTM2Wi9OUjNYWDRPUjFsa3RWQlI2dWJTSXkwekpqbEtNL2JKYlllWHpuZUtTM0VISWYxWmhGMWpaCmZiWUhKeWorNStwZGd3OFVuOUVzeFNUZ052amQrQXJnYTJ5ZEp4enFBUWw0R29aZEdxRXVya08vT0R2UWlkVDMKR1IxKzlLVnVSdG5SampHZjdSWEdXbUR2UlZpYnM2K1JxT1d0YlFJREFRQUJBb0lCQUJaakpvUm9KcWV6blQrbwpvTVRyblNxTUsyRExZdERydExEd0IvVlpETisvdlpHY2xGc2xQbDF6cUtLN1hPOHJhV0ppR2QvWElZV25yQlBMCm9WMGJ0bXo5dEo5SlZIVXhTSUJTTHluc0ZEYWxuaXFlSTE2c241VHZIUFhKb3Zrd21mRURFbmdETkxDTGtaREQKVkhaOGtOWXM0YmJ4dTNzL05sejBtVm45M0pzVDVWQ0xqY1hKZE93cjd0aHpmbVJDTjAxZGk1ei95L2dubWQwdgpRZEhsaS9xclc1d2hqMHo5U1VUTXpoaVo4bDBRZFhXWndhaHlvZ2U1OWFkUGlKZmd3RTRQRFVFYm4xNWlYTVQvCm1qdEZXcXZXb3dwcmhzcU5scXJrZGdZR3B6K0htS0JlTm9qbE5kN2N0cDdWQlBhOHYxZVRmVGNBeVQxVUljT3UKSVZGaHQrRUNnWUVBNHRsOEowS1lHcmczN1QzR2xPaTg3Qzk0VEZScmVTZHFOdFpsQnByREZmVkhRWUdydGRhNQo5a0RIMEZHUEdjVjNNVVZvYjZ3enRrZy9nekZtUDdQY0ZVR0JrODQ4ejI0c21EcnFNN2hKZHd0cGxGMnJUbjFxCm1tZGRhYmcybDRVcngrTm1PU2FaejUySE81RUQ5bVJMUzlnbXFmYVR5blR5MWk2T2hNMU9oQVVDZ1lFQXpTVmgKbUkraFlzNTl4UU0xUGpPZWIwNzVja2JiRGVoYmFaT0k0eDNQOWtOa3UyOXRwNWFDS1NIcVhGMFYzZ3k1NXFURAozMVEyTlVPbWUxTGg0UjVyekxBeVgvWVY4bllWcWdWdkcyN2NTcVJOWlNXTjVHYkRPaXNvRlBDVEI3aXVock5ZCjIvczJuY3cwUWtSTGFVMU9oV3p0THNWZUR0Ulp2MDhseHI1Z2FFa0NnWUVBbnFaUHAvMXc5eTdqSGk1WUZZaDMKcUE3QzZVOFpJdEFuL2xZT3JZSEs4aTVxT1N3QTlObEprU2xaRlI0VklJYnpoeWZ0bER3d3Brajg4am00TXRFTgpHR2lKd044NXREQnZTNy9ZVDNlUkdZcUh1bFdRR3dLbmJYamc0YkVOclFaYnloNEZQZTc3SHpJaWc4dzFvem9kClZ0dkNucGR1WU9kTmRmRjFodmMyOUNrQ2dZRUF2WmVHa3hCcS9uNElEa1BndVJQTG9PTkQ5akUxMGF5a2p2WWkKMUlPQTV2OXg0U2dpRjNncDR3bk5KbitBN2k2a3dGd1dDaGd4NFJnY2pHMFZCSkN3NEFNWEMwakxEOEhDVTllaAp6NkN0UnU2QitMQzBhaG51NDV0dTk2cyt0eXdmWDYzd3VaMTU1R3dOQUJGT0FJdkp2ZFhsZmd3NTJVcTNodThHCjRwNmZTc0VDZ1lBMFAxT3NsUkZDYWg1UFpEYlE0c2ViQ1djZUdVK3d6Ymg0eldBTksyQ2h4OXJqaGpteVEyeTQKMElEQkJSRitranhqcnI4UmlJb1hoeWRFZjRsd0hzaWh6Q3Q1KzFMblZmeVdMbEZCNkVWRXQ3Tk0rV2gzMFdreApxUEliZ3hGRnZNU0lETHRPNDJTR21SUXF1SkZmUTQxT05aN2MxNXpIQk5FTkovenNreVQzdWc9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=", 
            "server-tls.json": "Ly8gU2VlIGZvciBhdmFpbGFibGUgb3B0aW9uczogaHR0cHM6Ly9ub2RlanMub3JnL2FwaS90bHMuaHRtbCN0bHNfdGxzX2NyZWF0ZXNlcnZlcl9vcHRpb25zX3NlY3VyZWNvbm5lY3Rpb25saXN0ZW5lcgp0bHNfb3B0aW9ucyA9IHsKCWNpcGhlcnM6ICdrRUVDREg6K2tFRUNESCtTSEE6a0VESDora0VESCtTSEE6K2tFREgrQ0FNRUxMSUE6a0VDREg6K2tFQ0RIK1NIQTprUlNBOitrUlNBK1NIQTora1JTQStDQU1FTExJQTohYU5VTEw6IWVOVUxMOiFTU0x2MjohUkM0OiFERVM6IUVYUDohU0VFRDohSURFQTorM0RFUycsCglob25vckNpcGhlck9yZGVyOiB0cnVlCn0K", 
            "session-secret": "OEdTRjdYbWxxenpHb3ZKd0tRRVJUYzMxZjhMV3BESWtOdW5oTHVnczhZV3VSUExrdFlSV0lDa3RTWHlzYXFDaEJkc2FSY3pHU2NYUWxXUHpxbVFkYVBsRktvY2UxRGVGSXN2d3VCa1pzdHhMeXR6cXFIc1pwakx2ZTdJb2RhaWNjWUJ6QWtmV2lZNDB2Y0ozaFNSSmRya1BlUVN3a0ZudmRsV1F4QUo5TWIwT1Z3SUM2VDdvelNKbURKYVlMck02dTZjelpRNnM="
        }, 
        "kind": "Secret", 
        "metadata": {
            "creationTimestamp": null, 
            "name": "logging-kibana-proxy"
        }, 
        "type": "Opaque"
    }, 
    "state": "present"
}
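The two secrets ensured above, logging-kibana and logging-kibana-proxy, carry the CA, server certificate and key, TLS options, session secret, and OAuth secret that the containers mount in the deployment config below. Individual keys can be decoded for inspection (illustrative commands, not part of the role):

    /bin/oc get secret logging-kibana-proxy -n logging -o jsonpath='{.data.oauth-secret}' | base64 -d
    /bin/oc get secret logging-kibana -n logging -o jsonpath='{.data.ca}' | base64 -d | openssl x509 -noout -subject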

TASK [openshift_logging_kibana : Generate Kibana DC template] ******************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:221
changed: [openshift] => {
    "changed": true, 
    "checksum": "64934c107931f3990fb8e92e3225584429e5ceb3", 
    "dest": "/tmp/openshift-logging-ansible-SfsjMA/templates/kibana-dc.yaml", 
    "gid": 0, 
    "group": "root", 
    "md5sum": "eb63fefe7153ccc93aeaaf0cc7afd316", 
    "mode": "0644", 
    "owner": "root", 
    "secontext": "unconfined_u:object_r:admin_home_t:s0", 
    "size": 3765, 
    "src": "/root/.ansible/tmp/ansible-tmp-1496973237.52-53472568339162/source", 
    "state": "file", 
    "uid": 0
}

TASK [openshift_logging_kibana : Set Kibana DC] ********************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:240
changed: [openshift] => {
    "changed": true, 
    "results": {
        "cmd": "/bin/oc get dc logging-kibana-ops -o json -n logging", 
        "results": [
            {
                "apiVersion": "v1", 
                "kind": "DeploymentConfig", 
                "metadata": {
                    "creationTimestamp": "2017-06-09T01:53:58Z", 
                    "generation": 2, 
                    "labels": {
                        "component": "kibana-ops", 
                        "logging-infra": "kibana", 
                        "provider": "openshift"
                    }, 
                    "name": "logging-kibana-ops", 
                    "namespace": "logging", 
                    "resourceVersion": "1522", 
                    "selfLink": "/oapi/v1/namespaces/logging/deploymentconfigs/logging-kibana-ops", 
                    "uid": "80ae21a1-4cb6-11e7-9445-0ecf874efb82"
                }, 
                "spec": {
                    "replicas": 1, 
                    "selector": {
                        "component": "kibana-ops", 
                        "logging-infra": "kibana", 
                        "provider": "openshift"
                    }, 
                    "strategy": {
                        "activeDeadlineSeconds": 21600, 
                        "resources": {}, 
                        "rollingParams": {
                            "intervalSeconds": 1, 
                            "maxSurge": "25%", 
                            "maxUnavailable": "25%", 
                            "timeoutSeconds": 600, 
                            "updatePeriodSeconds": 1
                        }, 
                        "type": "Rolling"
                    }, 
                    "template": {
                        "metadata": {
                            "creationTimestamp": null, 
                            "labels": {
                                "component": "kibana-ops", 
                                "logging-infra": "kibana", 
                                "provider": "openshift"
                            }, 
                            "name": "logging-kibana-ops"
                        }, 
                        "spec": {
                            "containers": [
                                {
                                    "env": [
                                        {
                                            "name": "ES_HOST", 
                                            "value": "logging-es-ops"
                                        }, 
                                        {
                                            "name": "ES_PORT", 
                                            "value": "9200"
                                        }, 
                                        {
                                            "name": "KIBANA_MEMORY_LIMIT", 
                                            "valueFrom": {
                                                "resourceFieldRef": {
                                                    "containerName": "kibana", 
                                                    "divisor": "0", 
                                                    "resource": "limits.memory"
                                                }
                                            }
                                        }
                                    ], 
                                    "image": "172.30.155.104:5000/logging/logging-kibana:latest", 
                                    "imagePullPolicy": "Always", 
                                    "name": "kibana", 
                                    "readinessProbe": {
                                        "exec": {
                                            "command": [
                                                "/usr/share/kibana/probe/readiness.sh"
                                            ]
                                        }, 
                                        "failureThreshold": 3, 
                                        "initialDelaySeconds": 5, 
                                        "periodSeconds": 5, 
                                        "successThreshold": 1, 
                                        "timeoutSeconds": 4
                                    }, 
                                    "resources": {
                                        "limits": {
                                            "memory": "736Mi"
                                        }
                                    }, 
                                    "terminationMessagePath": "/dev/termination-log", 
                                    "terminationMessagePolicy": "File", 
                                    "volumeMounts": [
                                        {
                                            "mountPath": "/etc/kibana/keys", 
                                            "name": "kibana", 
                                            "readOnly": true
                                        }
                                    ]
                                }, 
                                {
                                    "env": [
                                        {
                                            "name": "OAP_BACKEND_URL", 
                                            "value": "http://localhost:5601"
                                        }, 
                                        {
                                            "name": "OAP_AUTH_MODE", 
                                            "value": "oauth2"
                                        }, 
                                        {
                                            "name": "OAP_TRANSFORM", 
                                            "value": "user_header,token_header"
                                        }, 
                                        {
                                            "name": "OAP_OAUTH_ID", 
                                            "value": "kibana-proxy"
                                        }, 
                                        {
                                            "name": "OAP_MASTER_URL", 
                                            "value": "https://kubernetes.default.svc.cluster.local"
                                        }, 
                                        {
                                            "name": "OAP_PUBLIC_MASTER_URL", 
                                            "value": "https://172.18.11.188:8443"
                                        }, 
                                        {
                                            "name": "OAP_LOGOUT_REDIRECT", 
                                            "value": "https://172.18.11.188:8443/console/logout"
                                        }, 
                                        {
                                            "name": "OAP_MASTER_CA_FILE", 
                                            "value": "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
                                        }, 
                                        {
                                            "name": "OAP_DEBUG", 
                                            "value": "False"
                                        }, 
                                        {
                                            "name": "OAP_OAUTH_SECRET_FILE", 
                                            "value": "/secret/oauth-secret"
                                        }, 
                                        {
                                            "name": "OAP_SERVER_CERT_FILE", 
                                            "value": "/secret/server-cert"
                                        }, 
                                        {
                                            "name": "OAP_SERVER_KEY_FILE", 
                                            "value": "/secret/server-key"
                                        }, 
                                        {
                                            "name": "OAP_SERVER_TLS_FILE", 
                                            "value": "/secret/server-tls.json"
                                        }, 
                                        {
                                            "name": "OAP_SESSION_SECRET_FILE", 
                                            "value": "/secret/session-secret"
                                        }, 
                                        {
                                            "name": "OCP_AUTH_PROXY_MEMORY_LIMIT", 
                                            "valueFrom": {
                                                "resourceFieldRef": {
                                                    "containerName": "kibana-proxy", 
                                                    "divisor": "0", 
                                                    "resource": "limits.memory"
                                                }
                                            }
                                        }
                                    ], 
                                    "image": "172.30.155.104:5000/logging/logging-auth-proxy:latest", 
                                    "imagePullPolicy": "Always", 
                                    "name": "kibana-proxy", 
                                    "ports": [
                                        {
                                            "containerPort": 3000, 
                                            "name": "oaproxy", 
                                            "protocol": "TCP"
                                        }
                                    ], 
                                    "resources": {
                                        "limits": {
                                            "memory": "96Mi"
                                        }
                                    }, 
                                    "terminationMessagePath": "/dev/termination-log", 
                                    "terminationMessagePolicy": "File", 
                                    "volumeMounts": [
                                        {
                                            "mountPath": "/secret", 
                                            "name": "kibana-proxy", 
                                            "readOnly": true
                                        }
                                    ]
                                }
                            ], 
                            "dnsPolicy": "ClusterFirst", 
                            "restartPolicy": "Always", 
                            "schedulerName": "default-scheduler", 
                            "securityContext": {}, 
                            "serviceAccount": "aggregated-logging-kibana", 
                            "serviceAccountName": "aggregated-logging-kibana", 
                            "terminationGracePeriodSeconds": 30, 
                            "volumes": [
                                {
                                    "name": "kibana", 
                                    "secret": {
                                        "defaultMode": 420, 
                                        "secretName": "logging-kibana"
                                    }
                                }, 
                                {
                                    "name": "kibana-proxy", 
                                    "secret": {
                                        "defaultMode": 420, 
                                        "secretName": "logging-kibana-proxy"
                                    }
                                }
                            ]
                        }
                    }, 
                    "test": false, 
                    "triggers": [
                        {
                            "type": "ConfigChange"
                        }
                    ]
                }, 
                "status": {
                    "availableReplicas": 0, 
                    "conditions": [
                        {
                            "lastTransitionTime": "2017-06-09T01:53:58Z", 
                            "lastUpdateTime": "2017-06-09T01:53:58Z", 
                            "message": "Deployment config does not have minimum availability.", 
                            "status": "False", 
                            "type": "Available"
                        }, 
                        {
                            "lastTransitionTime": "2017-06-09T01:53:58Z", 
                            "lastUpdateTime": "2017-06-09T01:53:58Z", 
                            "message": "replication controller \"logging-kibana-ops-1\" is waiting for pod \"logging-kibana-ops-1-deploy\" to run", 
                            "status": "Unknown", 
                            "type": "Progressing"
                        }
                    ], 
                    "details": {
                        "causes": [
                            {
                                "type": "ConfigChange"
                            }
                        ], 
                        "message": "config change"
                    }, 
                    "latestVersion": 1, 
                    "observedGeneration": 2, 
                    "replicas": 0, 
                    "unavailableReplicas": 0, 
                    "updatedReplicas": 0
                }
            }
        ], 
        "returncode": 0
    }, 
    "state": "present"
}
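At this point the logging-kibana-ops deployment config exists but reports no available replicas: the replication controller logging-kibana-ops-1 is still waiting for its deployer pod to run. Rollout progress can be followed with (illustrative commands, not from the playbook):

    /bin/oc rollout status dc/logging-kibana-ops -n logging
    /bin/oc get pods -n logging -l component=kibana-ops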

TASK [openshift_logging_kibana : Delete temp directory] ************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:252
ok: [openshift] => {
    "changed": false, 
    "path": "/tmp/openshift-logging-ansible-SfsjMA", 
    "state": "absent"
}

TASK [openshift_logging : include_role] ****************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:195
statically included: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_curator/tasks/determine_version.yaml

TASK [openshift_logging_curator : fail] ****************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_curator/tasks/determine_version.yaml:3
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_curator : set_fact] ************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_curator/tasks/determine_version.yaml:7
ok: [openshift] => {
    "ansible_facts": {
        "curator_version": "3_5"
    }, 
    "changed": false
}

TASK [openshift_logging_curator : set_fact] ************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_curator/tasks/determine_version.yaml:12
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_curator : fail] ****************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_curator/tasks/determine_version.yaml:15
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_curator : Create temp directory for doing work in] *****
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:5
ok: [openshift] => {
    "changed": false, 
    "cmd": [
        "mktemp", 
        "-d", 
        "/tmp/openshift-logging-ansible-XXXXXX"
    ], 
    "delta": "0:00:00.003637", 
    "end": "2017-06-08 21:54:00.381429", 
    "rc": 0, 
    "start": "2017-06-08 21:54:00.377792"
}

STDOUT:

/tmp/openshift-logging-ansible-GOC46V

TASK [openshift_logging_curator : set_fact] ************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:10
ok: [openshift] => {
    "ansible_facts": {
        "tempdir": "/tmp/openshift-logging-ansible-GOC46V"
    }, 
    "changed": false
}

TASK [openshift_logging_curator : Create templates subdirectory] ***************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:14
ok: [openshift] => {
    "changed": false, 
    "gid": 0, 
    "group": "root", 
    "mode": "0755", 
    "owner": "root", 
    "path": "/tmp/openshift-logging-ansible-GOC46V/templates", 
    "secontext": "unconfined_u:object_r:user_tmp_t:s0", 
    "size": 6, 
    "state": "directory", 
    "uid": 0
}

TASK [openshift_logging_curator : Create Curator service account] **************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:24
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_curator : Create Curator service account] **************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:32
changed: [openshift] => {
    "changed": true, 
    "results": {
        "cmd": "/bin/oc get sa aggregated-logging-curator -o json -n logging", 
        "results": [
            {
                "apiVersion": "v1", 
                "imagePullSecrets": [
                    {
                        "name": "aggregated-logging-curator-dockercfg-mgqjt"
                    }
                ], 
                "kind": "ServiceAccount", 
                "metadata": {
                    "creationTimestamp": "2017-06-09T01:54:01Z", 
                    "name": "aggregated-logging-curator", 
                    "namespace": "logging", 
                    "resourceVersion": "1532", 
                    "selfLink": "/api/v1/namespaces/logging/serviceaccounts/aggregated-logging-curator", 
                    "uid": "828a5a0e-4cb6-11e7-9445-0ecf874efb82"
                }, 
                "secrets": [
                    {
                        "name": "aggregated-logging-curator-dockercfg-mgqjt"
                    }, 
                    {
                        "name": "aggregated-logging-curator-token-c8s8g"
                    }
                ]
            }
        ], 
        "returncode": 0
    }, 
    "state": "present"
}
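
The task reports the created service account back as JSON, including its generated dockercfg and token secrets. For a manual look at the same object (standard oc commands, assuming admin access on the master), something like:

    oc get sa aggregated-logging-curator -n logging -o yaml
    oc describe sa aggregated-logging-curator -n logging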

TASK [openshift_logging_curator : copy] ****************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:41
ok: [openshift] => {
    "changed": false, 
    "checksum": "9008efd9a8892dcc42c28c6dfb6708527880a6d8", 
    "dest": "/tmp/openshift-logging-ansible-GOC46V/curator.yml", 
    "gid": 0, 
    "group": "root", 
    "md5sum": "5498c5fd98f3dd06e34b20eb1f55dc12", 
    "mode": "0644", 
    "owner": "root", 
    "secontext": "unconfined_u:object_r:admin_home_t:s0", 
    "size": 320, 
    "src": "/root/.ansible/tmp/ansible-tmp-1496973242.06-67384922782219/source", 
    "state": "file", 
    "uid": 0
}

TASK [openshift_logging_curator : copy] ****************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:47
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_curator : Set Curator configmap] ***********************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:53
changed: [openshift] => {
    "changed": true, 
    "results": {
        "cmd": "/bin/oc get configmap logging-curator -o json -n logging", 
        "results": [
            {
                "apiVersion": "v1", 
                "data": {
                    "config.yaml": "# Logging example curator config file\n\n# uncomment and use this to override the defaults from env vars\n#.defaults:\n#  delete:\n#    days: 30\n#  runhour: 0\n#  runminute: 0\n\n# to keep ops logs for a different duration:\n#.operations:\n#  delete:\n#    weeks: 8\n\n# example for a normal project\n#myapp:\n#  delete:\n#    weeks: 1\n"
                }, 
                "kind": "ConfigMap", 
                "metadata": {
                    "creationTimestamp": "2017-06-09T01:54:02Z", 
                    "name": "logging-curator", 
                    "namespace": "logging", 
                    "resourceVersion": "1535", 
                    "selfLink": "/api/v1/namespaces/logging/configmaps/logging-curator", 
                    "uid": "8354574c-4cb6-11e7-9445-0ecf874efb82"
                }
            }
        ], 
        "returncode": 0
    }, 
    "state": "present"
}
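
The config.yaml value above is stored as an escaped JSON string; unescaped, it is the stock example curator configuration shipped by the role:

    # Logging example curator config file

    # uncomment and use this to override the defaults from env vars
    #.defaults:
    #  delete:
    #    days: 30
    #  runhour: 0
    #  runminute: 0

    # to keep ops logs for a different duration:
    #.operations:
    #  delete:
    #    weeks: 8

    # example for a normal project
    #myapp:
    #  delete:
    #    weeks: 1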

TASK [openshift_logging_curator : Set Curator secret] **************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:62
changed: [openshift] => {
    "changed": true, 
    "results": {
        "cmd": "/bin/oc secrets new logging-curator ca=/etc/origin/logging/ca.crt key=/etc/origin/logging/system.logging.curator.key cert=/etc/origin/logging/system.logging.curator.crt -n logging", 
        "results": "", 
        "returncode": 0
    }, 
    "state": "present"
}
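
oc secrets new (the 3.x syntax used by the role) builds the secret from the three PEM files on the master, keyed as ca, key, and cert. A quick sanity check of the result, using a standard oc command that lists the keys and payload sizes without printing the PEM data:

    oc describe secret logging-curator -n logging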

TASK [openshift_logging_curator : set_fact] ************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:75
ok: [openshift] => {
    "ansible_facts": {
        "curator_component": "curator", 
        "curator_name": "logging-curator"
    }, 
    "changed": false
}

TASK [openshift_logging_curator : Generate Curator deploymentconfig] ***********
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:81
ok: [openshift] => {
    "changed": false, 
    "checksum": "99be068df5ba7cbf43034f9978a626d07a43f7ec", 
    "dest": "/tmp/openshift-logging-ansible-GOC46V/templates/curator-dc.yaml", 
    "gid": 0, 
    "group": "root", 
    "md5sum": "9c854c2feab28978ff6563ebd8561fe4", 
    "mode": "0644", 
    "owner": "root", 
    "secontext": "unconfined_u:object_r:admin_home_t:s0", 
    "size": 2341, 
    "src": "/root/.ansible/tmp/ansible-tmp-1496973243.98-258344935502576/source", 
    "state": "file", 
    "uid": 0
}

TASK [openshift_logging_curator : Set Curator DC] ******************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:99
changed: [openshift] => {
    "changed": true, 
    "results": {
        "cmd": "/bin/oc get dc logging-curator -o json -n logging", 
        "results": [
            {
                "apiVersion": "v1", 
                "kind": "DeploymentConfig", 
                "metadata": {
                    "creationTimestamp": "2017-06-09T01:54:05Z", 
                    "generation": 2, 
                    "labels": {
                        "component": "curator", 
                        "logging-infra": "curator", 
                        "provider": "openshift"
                    }, 
                    "name": "logging-curator", 
                    "namespace": "logging", 
                    "resourceVersion": "1551", 
                    "selfLink": "/oapi/v1/namespaces/logging/deploymentconfigs/logging-curator", 
                    "uid": "849cdfa6-4cb6-11e7-9445-0ecf874efb82"
                }, 
                "spec": {
                    "replicas": 1, 
                    "selector": {
                        "component": "curator", 
                        "logging-infra": "curator", 
                        "provider": "openshift"
                    }, 
                    "strategy": {
                        "activeDeadlineSeconds": 21600, 
                        "recreateParams": {
                            "timeoutSeconds": 600
                        }, 
                        "resources": {}, 
                        "rollingParams": {
                            "intervalSeconds": 1, 
                            "maxSurge": "25%", 
                            "maxUnavailable": "25%", 
                            "timeoutSeconds": 600, 
                            "updatePeriodSeconds": 1
                        }, 
                        "type": "Recreate"
                    }, 
                    "template": {
                        "metadata": {
                            "creationTimestamp": null, 
                            "labels": {
                                "component": "curator", 
                                "logging-infra": "curator", 
                                "provider": "openshift"
                            }, 
                            "name": "logging-curator"
                        }, 
                        "spec": {
                            "containers": [
                                {
                                    "env": [
                                        {
                                            "name": "K8S_HOST_URL", 
                                            "value": "https://kubernetes.default.svc.cluster.local"
                                        }, 
                                        {
                                            "name": "ES_HOST", 
                                            "value": "logging-es"
                                        }, 
                                        {
                                            "name": "ES_PORT", 
                                            "value": "9200"
                                        }, 
                                        {
                                            "name": "ES_CLIENT_CERT", 
                                            "value": "/etc/curator/keys/cert"
                                        }, 
                                        {
                                            "name": "ES_CLIENT_KEY", 
                                            "value": "/etc/curator/keys/key"
                                        }, 
                                        {
                                            "name": "ES_CA", 
                                            "value": "/etc/curator/keys/ca"
                                        }, 
                                        {
                                            "name": "CURATOR_DEFAULT_DAYS", 
                                            "value": "30"
                                        }, 
                                        {
                                            "name": "CURATOR_RUN_HOUR", 
                                            "value": "0"
                                        }, 
                                        {
                                            "name": "CURATOR_RUN_MINUTE", 
                                            "value": "0"
                                        }, 
                                        {
                                            "name": "CURATOR_RUN_TIMEZONE", 
                                            "value": "UTC"
                                        }, 
                                        {
                                            "name": "CURATOR_SCRIPT_LOG_LEVEL", 
                                            "value": "INFO"
                                        }, 
                                        {
                                            "name": "CURATOR_LOG_LEVEL", 
                                            "value": "ERROR"
                                        }
                                    ], 
                                    "image": "172.30.155.104:5000/logging/logging-curator:latest", 
                                    "imagePullPolicy": "Always", 
                                    "name": "curator", 
                                    "resources": {
                                        "limits": {
                                            "cpu": "100m"
                                        }
                                    }, 
                                    "terminationMessagePath": "/dev/termination-log", 
                                    "terminationMessagePolicy": "File", 
                                    "volumeMounts": [
                                        {
                                            "mountPath": "/etc/curator/keys", 
                                            "name": "certs", 
                                            "readOnly": true
                                        }, 
                                        {
                                            "mountPath": "/etc/curator/settings", 
                                            "name": "config", 
                                            "readOnly": true
                                        }
                                    ]
                                }
                            ], 
                            "dnsPolicy": "ClusterFirst", 
                            "restartPolicy": "Always", 
                            "schedulerName": "default-scheduler", 
                            "securityContext": {}, 
                            "serviceAccount": "aggregated-logging-curator", 
                            "serviceAccountName": "aggregated-logging-curator", 
                            "terminationGracePeriodSeconds": 30, 
                            "volumes": [
                                {
                                    "name": "certs", 
                                    "secret": {
                                        "defaultMode": 420, 
                                        "secretName": "logging-curator"
                                    }
                                }, 
                                {
                                    "configMap": {
                                        "defaultMode": 420, 
                                        "name": "logging-curator"
                                    }, 
                                    "name": "config"
                                }
                            ]
                        }
                    }, 
                    "test": false, 
                    "triggers": [
                        {
                            "type": "ConfigChange"
                        }
                    ]
                }, 
                "status": {
                    "availableReplicas": 0, 
                    "conditions": [
                        {
                            "lastTransitionTime": "2017-06-09T01:54:05Z", 
                            "lastUpdateTime": "2017-06-09T01:54:05Z", 
                            "message": "Deployment config does not have minimum availability.", 
                            "status": "False", 
                            "type": "Available"
                        }, 
                        {
                            "lastTransitionTime": "2017-06-09T01:54:05Z", 
                            "lastUpdateTime": "2017-06-09T01:54:05Z", 
                            "message": "replication controller \"logging-curator-1\" is waiting for pod \"logging-curator-1-deploy\" to run", 
                            "status": "Unknown", 
                            "type": "Progressing"
                        }
                    ], 
                    "details": {
                        "causes": [
                            {
                                "type": "ConfigChange"
                            }
                        ], 
                        "message": "config change"
                    }, 
                    "latestVersion": 1, 
                    "observedGeneration": 2, 
                    "replicas": 0, 
                    "unavailableReplicas": 0, 
                    "updatedReplicas": 0
                }
            }
        ], 
        "returncode": 0
    }, 
    "state": "present"
}
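
As with Kibana, the DC exists but has not rolled out yet (availableReplicas is 0 and the deployer pod is still pending). Since the spec's selector is component=curator, a hand check could look like:

    oc rollout status dc/logging-curator -n logging
    oc get pods -l component=curator -n logging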

TASK [openshift_logging_curator : Delete temp directory] ***********************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:109
ok: [openshift] => {
    "changed": false, 
    "path": "/tmp/openshift-logging-ansible-GOC46V", 
    "state": "absent"
}

TASK [openshift_logging : include_role] ****************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:207
statically included: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_curator/tasks/determine_version.yaml

TASK [openshift_logging_curator : fail] ****************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_curator/tasks/determine_version.yaml:3
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_curator : set_fact] ************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_curator/tasks/determine_version.yaml:7
ok: [openshift] => {
    "ansible_facts": {
        "curator_version": "3_5"
    }, 
    "changed": false
}

TASK [openshift_logging_curator : set_fact] ************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_curator/tasks/determine_version.yaml:12
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_curator : fail] ****************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_curator/tasks/determine_version.yaml:15
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_curator : Create temp directory for doing work in] *****
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:5
ok: [openshift] => {
    "changed": false, 
    "cmd": [
        "mktemp", 
        "-d", 
        "/tmp/openshift-logging-ansible-XXXXXX"
    ], 
    "delta": "0:00:00.002236", 
    "end": "2017-06-08 21:54:08.123023", 
    "rc": 0, 
    "start": "2017-06-08 21:54:08.120787"
}

STDOUT:

/tmp/openshift-logging-ansible-5r2qPj

TASK [openshift_logging_curator : set_fact] ************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:10
ok: [openshift] => {
    "ansible_facts": {
        "tempdir": "/tmp/openshift-logging-ansible-5r2qPj"
    }, 
    "changed": false
}

TASK [openshift_logging_curator : Create templates subdirectory] ***************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:14
ok: [openshift] => {
    "changed": false, 
    "gid": 0, 
    "group": "root", 
    "mode": "0755", 
    "owner": "root", 
    "path": "/tmp/openshift-logging-ansible-5r2qPj/templates", 
    "secontext": "unconfined_u:object_r:user_tmp_t:s0", 
    "size": 6, 
    "state": "directory", 
    "uid": 0
}

TASK [openshift_logging_curator : Create Curator service account] **************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:24
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_curator : Create Curator service account] **************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:32
ok: [openshift] => {
    "changed": false, 
    "results": {
        "cmd": "/bin/oc get sa aggregated-logging-curator -o json -n logging", 
        "results": [
            {
                "apiVersion": "v1", 
                "imagePullSecrets": [
                    {
                        "name": "aggregated-logging-curator-dockercfg-mgqjt"
                    }
                ], 
                "kind": "ServiceAccount", 
                "metadata": {
                    "creationTimestamp": "2017-06-09T01:54:01Z", 
                    "name": "aggregated-logging-curator", 
                    "namespace": "logging", 
                    "resourceVersion": "1532", 
                    "selfLink": "/api/v1/namespaces/logging/serviceaccounts/aggregated-logging-curator", 
                    "uid": "828a5a0e-4cb6-11e7-9445-0ecf874efb82"
                }, 
                "secrets": [
                    {
                        "name": "aggregated-logging-curator-dockercfg-mgqjt"
                    }, 
                    {
                        "name": "aggregated-logging-curator-token-c8s8g"
                    }
                ]
            }
        ], 
        "returncode": 0
    }, 
    "state": "present"
}

TASK [openshift_logging_curator : copy] ****************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:41
ok: [openshift] => {
    "changed": false, 
    "checksum": "9008efd9a8892dcc42c28c6dfb6708527880a6d8", 
    "dest": "/tmp/openshift-logging-ansible-5r2qPj/curator.yml", 
    "gid": 0, 
    "group": "root", 
    "md5sum": "5498c5fd98f3dd06e34b20eb1f55dc12", 
    "mode": "0644", 
    "owner": "root", 
    "secontext": "unconfined_u:object_r:admin_home_t:s0", 
    "size": 320, 
    "src": "/root/.ansible/tmp/ansible-tmp-1496973249.0-236423441146323/source", 
    "state": "file", 
    "uid": 0
}

TASK [openshift_logging_curator : copy] ****************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:47
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_curator : Set Curator configmap] ***********************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:53
ok: [openshift] => {
    "changed": false, 
    "results": {
        "cmd": "/bin/oc get configmap logging-curator -o json -n logging", 
        "results": [
            {
                "apiVersion": "v1", 
                "data": {
                    "config.yaml": "# Logging example curator config file\n\n# uncomment and use this to override the defaults from env vars\n#.defaults:\n#  delete:\n#    days: 30\n#  runhour: 0\n#  runminute: 0\n\n# to keep ops logs for a different duration:\n#.operations:\n#  delete:\n#    weeks: 8\n\n# example for a normal project\n#myapp:\n#  delete:\n#    weeks: 1\n"
                }, 
                "kind": "ConfigMap", 
                "metadata": {
                    "creationTimestamp": "2017-06-09T01:54:02Z", 
                    "name": "logging-curator", 
                    "namespace": "logging", 
                    "resourceVersion": "1535", 
                    "selfLink": "/api/v1/namespaces/logging/configmaps/logging-curator", 
                    "uid": "8354574c-4cb6-11e7-9445-0ecf874efb82"
                }
            }
        ], 
        "returncode": 0
    }, 
    "state": "present"
}

TASK [openshift_logging_curator : Set Curator secret] **************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:62
ok: [openshift] => {
    "changed": false, 
    "results": {
        "apiVersion": "v1", 
        "data": {
            "ca": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMyakNDQWNLZ0F3SUJBZ0lCQVRBTkJna3Foa2lHOXcwQkFRc0ZBREFlTVJ3d0dnWURWUVFERXhOc2IyZG4KYVc1bkxYTnBaMjVsY2kxMFpYTjBNQjRYRFRFM01EWXdPVEF4TlRJME9Wb1hEVEl5TURZd09EQXhOVEkxTUZvdwpIakVjTUJvR0ExVUVBeE1UYkc5bloybHVaeTF6YVdkdVpYSXRkR1Z6ZERDQ0FTSXdEUVlKS29aSWh2Y05BUUVCCkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUt5OFlydXA4SXc1dDhFVit1bWFkQWkrYmNQSzd2d00xbUhqWC8wdndOU3oKWXNSRDNvWGRqeGl2b2dlbTBhbmM4aWJvRmRVZmQwcFJaUG1QRHZkaHgvVmtDMzBvSktLVWZFN3IrZ2ppT1VGVQp5YkVFbTBPQjNxTEZ1TVZEcHg0QjhSRzBlUTFscU1qdk5wM0hGWVgzKzJjbEkrOVp6ZHMycUNDbmJRb3VVc0xpCnNYZUlmTHVGYVcvWmlRdkVsMnlYbVgvKzR6ZUd0eHBDTXhROXFneDVUU3Y3cmpPclV4dzVhWDJucFNwcnlBeFEKS2xvcU9jSUdNNXdPbDNPbXRTcEJSbFU1N284NHlXaTdwUzF0ZlBDL09wMlVYTVlsN1RBS2dkRkFmU2w2YmtVYQpFcStaU1Q2Q1Nsd0VtVVB1R2JmUzhOeFF5dUFVQmdsN3hOdEJhRHR0clJNQ0F3RUFBYU1qTUNFd0RnWURWUjBQCkFRSC9CQVFEQWdLa01BOEdBMVVkRXdFQi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFHbFYKQTQwQUNzY3cwR1hYTEIwOVZmZWNoaU03bmxiTDViM0xvNU1abWlIaFpYcXBVY2FITUtxRFBmdktRc3k5UHRXYwpYRS9acis2b3RlMExiWlBBa0h2TThhak56OW1QMmpsMUNhODVBcmNMeWhaOTUxV0U1eW02Y3dYSWo4SjlWdEtmCk95SDdHNmlWSmR2VndLRjdrWC9ZNkFvMmEyVDRHa2xCODZKcVJGRFRHYUxhdVlKQ05rN1lFSU9BbjZEdXRiemwKRTl3ZXRmNWd6M2h2UE5vd25yb0p2aTdWL01tQjY5Zzk4YnV4c0NCS2hDSWZrZ0tudWRjUmlwZ3JHcVJNeHhKeQpnNi9nK21nOFNzcXN4emxURUN4TzhLOUlZekpOcnlXZ3F0MTY4RTFobURHNS93RUxMUENzUldEd2pTelBsOEpXCms0c2ZaRGNzRXB1RUFwQktxWEk9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K", 
            "cert": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURSakNDQWk2Z0F3SUJBZ0lCQkRBTkJna3Foa2lHOXcwQkFRVUZBREFlTVJ3d0dnWURWUVFERXhOc2IyZG4KYVc1bkxYTnBaMjVsY2kxMFpYTjBNQjRYRFRFM01EWXdPVEF4TlRJMU5Wb1hEVEU1TURZd09UQXhOVEkxTlZvdwpSekVRTUE0R0ExVUVDZ3dIVEc5bloybHVaekVTTUJBR0ExVUVDd3dKVDNCbGJsTm9hV1owTVI4d0hRWURWUVFECkRCWnplWE4wWlcwdWJHOW5aMmx1Wnk1amRYSmhkRzl5TUlJQklqQU5CZ2txaGtpRzl3MEJBUUVGQUFPQ0FROEEKTUlJQkNnS0NBUUVBNUNkYlNvaWZTYmtEbVluSjJGNmc5WXh5TnRpdnE1ZnVZRHZrSHVURVhSRzFqY1ZleHBlWApHL2ExdW40cFhSVGFkcjFyYnc5enhJNVVCTGRCaUM0SzB1YlVzM2VqeTIyYURlRVFGVUdaRUJ3U21iTTgzZW5wCjROaW9LYzZTU29iZ3hrcE91cjJLNjEzOGJBd0NkbXh1VkR0Mm1hczRJaUloNG9HZjBIL3RHMExEUFZQZnFDQ1IKbE50RitqNUZsS3lkUUNNT1J4MmlqNDhKVVZKemorN0lwTG5NU0lqWHhlemhycG9abmVmSFdYazZ6UHJ0T2pTcgpobTM1emJSNXpKdFlxcytOM1Nkc2pPUnUrOG9VcXB3WWIwM29abSthc0JlUXhGRWdZQi93ZGpmK0I0NDY5RlgxCnNkaXNPMnBEeWxaQmxKSW1iTWNYRndCVG05YTkwODVXcHdJREFRQUJvMll3WkRBT0JnTlZIUThCQWY4RUJBTUMKQmFBd0NRWURWUjBUQkFJd0FEQWRCZ05WSFNVRUZqQVVCZ2dyQmdFRkJRY0RBUVlJS3dZQkJRVUhBd0l3SFFZRApWUjBPQkJZRUZIV0FRZTduOE9lSVJMTWgzZ2ZyWTRuTUlDN29NQWtHQTFVZEl3UUNNQUF3RFFZSktvWklodmNOCkFRRUZCUUFEZ2dFQkFCenhRQlRVNEFFY2VGNko0dW9kYUVCc1Y5NTVjUlp6RWx5dVhTTS9CQjJtRUZZc0d4V0QKNENGYTNVLzNGeHRDYllIWGZROHdnelRkaEhScGp0M1AwK29kQVV1ako5bUF3TjUzWVRkLzBBQnpmNHJmQXpIbwpvd3h5cUozUVJ1MXRKYWNjdXljcDlYMkFYenpZL25ENU5RUHRMRHR4OHdKWHNZU2ZYbjJ3cW9JbWxKQjlmRmQzCkJtcEJ0Z1BEK3RqNWdBaktocjNRZExvVTMrOENPS3ZIaTVVYVhRTEY2S1h1dWlwSGhTcFdYNmhFUisveW81S3UKSTNSODNUSmE2VVlISGI3cGFTWXRnTE5ocUxlK3BaOHBEdW5CUlRoM01vbEcwVGo0dGNJckxTcXRBMlN2UTlUNQp3S2xNeEtIcE5hSnJwUUNkUlBWWm9TVTNtZ3R2M0xvWEg4bz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=", 
            "key": "LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktjd2dnU2pBZ0VBQW9JQkFRRGtKMXRLaUo5SnVRT1oKaWNuWVhxRDFqSEkyMksrcmwrNWdPK1FlNU1SZEViV054VjdHbDVjYjlyVzZmaWxkRk5wMnZXdHZEM1BFamxRRQp0MEdJTGdyUzV0U3pkNlBMYlpvTjRSQVZRWmtRSEJLWnN6emQ2ZW5nMktncHpwSktodURHU2s2NnZZcnJYZnhzCkRBSjJiRzVVTzNhWnF6Z2lJaUhpZ1ovUWYrMGJRc005VTkrb0lKR1UyMFg2UGtXVXJKMUFJdzVISGFLUGp3bFIKVW5PUDdzaWt1Y3hJaU5mRjdPR3VtaG1kNThkWmVUck0rdTA2Tkt1R2Jmbk50SG5NbTFpcXo0M2RKMnlNNUc3Nwp5aFNxbkJodlRlaG1iNXF3RjVERVVTQmdIL0IyTi80SGpqcjBWZld4Mkt3N2FrUEtWa0dVa2lac3h4Y1hBRk9iCjFyM1R6bGFuQWdNQkFBRUNnZ0VBREtoeVZDeElTaHJOckZNTXM4aHNQYk5SRXVIcXZTWnN2MElUWWZOblFaS2QKOUFPalFubGVsTnFYTW9XVlhlaXVSakEwS2JFOXh3WHVlMlIvYWtMRHJ2ZkhqVDF5QlBOTHZNRmoxd29RcCtnbQowQWcxdEVvcUE2T0JrUEE1QlpGK0h3STRZL3ZvSFM3VnRsamtPaFhCK1VKalRodEZ2ZjhPeWpaTzI0NTlaU213ClFqVURTQkNJVjE5MW1ESG50OEdMbEcxNXZaR1FBUnRrM1MvcWllWU13cUF4TDRZdnRYV01GaDZpUldCMDYyQkkKNkptNDFaK1g3L0QwcWZXUnpnWWx4TkFVUUc0YS9MbTRvRS9Xb2RRNGR6akFGNzVLOXp4STMweHJUMGJZSjh1awpTVWFlTXNVLzd3RGZqSEVWMHlGYjlSZUZlQWM5UjluL1FVNE5ORDlhUVFLQmdRRHg2eVVnMDM2U3ZBbFpHOHFxClgrR24veFJxUzJTNmloRnhkRFBpcDdJcHBSR29pcnB2L09pVEl6VkhRUE1wWEtDeEJ5dHprMU9DNitzaysrMzgKbG05cEkvZFF1T2RsWWdzaGQwL1ZlNmptU29UN0sxb3BQbW1mVUtVRWUrZ1A4amtuSUVYVzlEQ29tMzdPVFUycgpGQWNmWGlMK24xTzhsSkt3M0tWK29WYkZQd0tCZ1FEeGJ4bk0xZzNVbkFDYmdMUmpDNVlQQm1kN2Z3S1ZDT21aCmd6bWZKSGE3SFNramV0S3ZEK1lRVkFEUTlFZUpLTEh2WU0xbkhFblFmSmg1WGQ1eHNZY3lvSi9XbDlWVStxUDYKMjNmSldHNGJGdVpEcCtmcEIwYmNFOVBxb01IcUw4Mk1mUytrOVh1U0VZR3RyZHNaWnFweUJiL05ibjMybUhJMApaZE1Cbyt5TW1RS0JnQmhncGFFbExzQUNpcjZiK2xRb3pVaHNmOVltT3NSQlhYaWRTUTB4OE5ZWmVDb1BzTEhRClBtOTFRTTBwVWxkOHFnU3N3RWdwTkdVZytOVUZQZm9SL3JBTm04SmFuNWFyeG90Y3hvS3dyMWhsY2ZrTmFVeDIKcVpZUVBsQ3hXN1VmcDNxMTJkUExUNHZ0LzEweWxQMEVTNk54alAwemVQQ3IyQXhTYjZyTy96dHBBb0dCQUtlNgp3YmZXdGdFUTZETWdOVEhpS0x3RGZQMEUvZXhBSnRucG1xeC9EcVZyMnRxMVI0MHJoRyt2akdtZWE5eFVFMW4wCmJIN0gzbGdqVjJKcDNsSXFQWHprcm1iTlVQNGFxclZxcDB1UVRkNHdDSVRVTDM5cStNV0lXTjlXRTZINDE5cFUKVmpkSi9ERThURlUyeFZKZVN1ZXdLdEl6Z3Z0QWFZY1Jmb2hUTTlGeEFvR0FZOTVvMTh0V1R5eTJ5b3p5Mi9hNQpjMHd3bmdGd2g1alBlTStqVEpRVm82VzlvZWZlSTV2ckxQd0F4TTZLMEJjREkxeWVXL04wdlZRVFo0ZUIxZURxClJDYzRrZCtwV1FONk1RU2tWcEUyQTFuL0QveHBCNElOTkdBa0o2eitJenJRbTZTVTJ1cGVGQXFoSVhKejRYWHQKRGRscklyV0RiY2N1ZktlNzUvV3VoR009Ci0tLS0tRU5EIFBSSVZBVEUgS0VZLS0tLS0K"
        }, 
        "kind": "Secret", 
        "metadata": {
            "creationTimestamp": null, 
            "name": "logging-curator"
        }, 
        "type": "Opaque"
    }, 
    "state": "present"
}
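
On this second pass the secret already exists, so the task is idempotent and simply echoes the current object with ca, cert, and key base64-encoded. To confirm the client certificate is the expected system.logging.curator one, the data can be decoded locally (assuming openssl is available on the master, as it normally is on RHEL 7):

    oc get secret logging-curator -n logging -o jsonpath='{.data.cert}' \
      | base64 -d | openssl x509 -noout -subject -enddate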

TASK [openshift_logging_curator : set_fact] ************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:75
ok: [openshift] => {
    "ansible_facts": {
        "curator_component": "curator-ops", 
        "curator_name": "logging-curator-ops"
    }, 
    "changed": false
}

TASK [openshift_logging_curator : Generate Curator deploymentconfig] ***********
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:81
ok: [openshift] => {
    "changed": false, 
    "checksum": "64dd49b0a1d93d2661d01e2a2219d8e2976f416f", 
    "dest": "/tmp/openshift-logging-ansible-5r2qPj/templates/curator-dc.yaml", 
    "gid": 0, 
    "group": "root", 
    "md5sum": "e1ef02721582334967ad3757a0ba6111", 
    "mode": "0644", 
    "owner": "root", 
    "secontext": "unconfined_u:object_r:admin_home_t:s0", 
    "size": 2365, 
    "src": "/root/.ansible/tmp/ansible-tmp-1496973250.74-51372893636195/source", 
    "state": "file", 
    "uid": 0
}

TASK [openshift_logging_curator : Set Curator DC] ******************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:99
changed: [openshift] => {
    "changed": true, 
    "results": {
        "cmd": "/bin/oc get dc logging-curator-ops -o json -n logging", 
        "results": [
            {
                "apiVersion": "v1", 
                "kind": "DeploymentConfig", 
                "metadata": {
                    "creationTimestamp": "2017-06-09T01:54:11Z", 
                    "generation": 2, 
                    "labels": {
                        "component": "curator-ops", 
                        "logging-infra": "curator", 
                        "provider": "openshift"
                    }, 
                    "name": "logging-curator-ops", 
                    "namespace": "logging", 
                    "resourceVersion": "1591", 
                    "selfLink": "/oapi/v1/namespaces/logging/deploymentconfigs/logging-curator-ops", 
                    "uid": "88837d72-4cb6-11e7-9445-0ecf874efb82"
                }, 
                "spec": {
                    "replicas": 1, 
                    "selector": {
                        "component": "curator-ops", 
                        "logging-infra": "curator", 
                        "provider": "openshift"
                    }, 
                    "strategy": {
                        "activeDeadlineSeconds": 21600, 
                        "recreateParams": {
                            "timeoutSeconds": 600
                        }, 
                        "resources": {}, 
                        "rollingParams": {
                            "intervalSeconds": 1, 
                            "maxSurge": "25%", 
                            "maxUnavailable": "25%", 
                            "timeoutSeconds": 600, 
                            "updatePeriodSeconds": 1
                        }, 
                        "type": "Recreate"
                    }, 
                    "template": {
                        "metadata": {
                            "creationTimestamp": null, 
                            "labels": {
                                "component": "curator-ops", 
                                "logging-infra": "curator", 
                                "provider": "openshift"
                            }, 
                            "name": "logging-curator-ops"
                        }, 
                        "spec": {
                            "containers": [
                                {
                                    "env": [
                                        {
                                            "name": "K8S_HOST_URL", 
                                            "value": "https://kubernetes.default.svc.cluster.local"
                                        }, 
                                        {
                                            "name": "ES_HOST", 
                                            "value": "logging-es-ops"
                                        }, 
                                        {
                                            "name": "ES_PORT", 
                                            "value": "9200"
                                        }, 
                                        {
                                            "name": "ES_CLIENT_CERT", 
                                            "value": "/etc/curator/keys/cert"
                                        }, 
                                        {
                                            "name": "ES_CLIENT_KEY", 
                                            "value": "/etc/curator/keys/key"
                                        }, 
                                        {
                                            "name": "ES_CA", 
                                            "value": "/etc/curator/keys/ca"
                                        }, 
                                        {
                                            "name": "CURATOR_DEFAULT_DAYS", 
                                            "value": "30"
                                        }, 
                                        {
                                            "name": "CURATOR_RUN_HOUR", 
                                            "value": "0"
                                        }, 
                                        {
                                            "name": "CURATOR_RUN_MINUTE", 
                                            "value": "0"
                                        }, 
                                        {
                                            "name": "CURATOR_RUN_TIMEZONE", 
                                            "value": "UTC"
                                        }, 
                                        {
                                            "name": "CURATOR_SCRIPT_LOG_LEVEL", 
                                            "value": "INFO"
                                        }, 
                                        {
                                            "name": "CURATOR_LOG_LEVEL", 
                                            "value": "ERROR"
                                        }
                                    ], 
                                    "image": "172.30.155.104:5000/logging/logging-curator:latest", 
                                    "imagePullPolicy": "Always", 
                                    "name": "curator", 
                                    "resources": {
                                        "limits": {
                                            "cpu": "100m"
                                        }
                                    }, 
                                    "terminationMessagePath": "/dev/termination-log", 
                                    "terminationMessagePolicy": "File", 
                                    "volumeMounts": [
                                        {
                                            "mountPath": "/etc/curator/keys", 
                                            "name": "certs", 
                                            "readOnly": true
                                        }, 
                                        {
                                            "mountPath": "/etc/curator/settings", 
                                            "name": "config", 
                                            "readOnly": true
                                        }
                                    ]
                                }
                            ], 
                            "dnsPolicy": "ClusterFirst", 
                            "restartPolicy": "Always", 
                            "schedulerName": "default-scheduler", 
                            "securityContext": {}, 
                            "serviceAccount": "aggregated-logging-curator", 
                            "serviceAccountName": "aggregated-logging-curator", 
                            "terminationGracePeriodSeconds": 30, 
                            "volumes": [
                                {
                                    "name": "certs", 
                                    "secret": {
                                        "defaultMode": 420, 
                                        "secretName": "logging-curator"
                                    }
                                }, 
                                {
                                    "configMap": {
                                        "defaultMode": 420, 
                                        "name": "logging-curator"
                                    }, 
                                    "name": "config"
                                }
                            ]
                        }
                    }, 
                    "test": false, 
                    "triggers": [
                        {
                            "type": "ConfigChange"
                        }
                    ]
                }, 
                "status": {
                    "availableReplicas": 0, 
                    "conditions": [
                        {
                            "lastTransitionTime": "2017-06-09T01:54:11Z", 
                            "lastUpdateTime": "2017-06-09T01:54:11Z", 
                            "message": "Deployment config does not have minimum availability.", 
                            "status": "False", 
                            "type": "Available"
                        }, 
                        {
                            "lastTransitionTime": "2017-06-09T01:54:11Z", 
                            "lastUpdateTime": "2017-06-09T01:54:11Z", 
                            "message": "replication controller \"logging-curator-ops-1\" is waiting for pod \"logging-curator-ops-1-deploy\" to run", 
                            "status": "Unknown", 
                            "type": "Progressing"
                        }
                    ], 
                    "details": {
                        "causes": [
                            {
                                "type": "ConfigChange"
                            }
                        ], 
                        "message": "config change"
                    }, 
                    "latestVersion": 1, 
                    "observedGeneration": 2, 
                    "replicas": 0, 
                    "unavailableReplicas": 0, 
                    "updatedReplicas": 0
                }
            }
        ], 
        "returncode": 0
    }, 
    "state": "present"
}
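
The ops DC differs from logging-curator only in its name, its component=curator-ops labels/selector, and ES_HOST pointing at logging-es-ops instead of logging-es. One way to diff the injected environment of the two DCs (oc set env --list prints the container env variables):

    oc set env dc/logging-curator --list -n logging
    oc set env dc/logging-curator-ops --list -n logging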

TASK [openshift_logging_curator : Delete temp directory] ***********************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:109
ok: [openshift] => {
    "changed": false, 
    "path": "/tmp/openshift-logging-ansible-5r2qPj", 
    "state": "absent"
}

TASK [openshift_logging : include_role] ****************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:226
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging : include_role] ****************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:241
statically included: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_fluentd/tasks/determine_version.yaml

TASK [openshift_logging_fluentd : fail] ****************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:2
 [WARNING]: when statements should not include jinja2 templating delimiters
such as {{ }} or {% %}. Found: {{ openshift_logging_fluentd_nodeselector.keys()
| count }} > 1
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_fluentd : fail] ****************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:6
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_fluentd : fail] ****************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:10
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_fluentd : fail] ****************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:14
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_fluentd : fail] ****************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_fluentd/tasks/determine_version.yaml:3
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_fluentd : set_fact] ************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_fluentd/tasks/determine_version.yaml:7
ok: [openshift] => {
    "ansible_facts": {
        "fluentd_version": "3_5"
    }, 
    "changed": false
}

TASK [openshift_logging_fluentd : set_fact] ************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_fluentd/tasks/determine_version.yaml:12
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_fluentd : fail] ****************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_fluentd/tasks/determine_version.yaml:15
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_fluentd : set_fact] ************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:20
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_fluentd : set_fact] ************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:26
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_fluentd : Create temp directory for doing work in] *****
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:33
ok: [openshift] => {
    "changed": false, 
    "cmd": [
        "mktemp", 
        "-d", 
        "/tmp/openshift-logging-ansible-XXXXXX"
    ], 
    "delta": "0:00:00.002061", 
    "end": "2017-06-08 21:54:15.328281", 
    "rc": 0, 
    "start": "2017-06-08 21:54:15.326220"
}

STDOUT:

/tmp/openshift-logging-ansible-bQuZHl

TASK [openshift_logging_fluentd : set_fact] ************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:38
ok: [openshift] => {
    "ansible_facts": {
        "tempdir": "/tmp/openshift-logging-ansible-bQuZHl"
    }, 
    "changed": false
}

TASK [openshift_logging_fluentd : Create templates subdirectory] ***************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:41
ok: [openshift] => {
    "changed": false, 
    "gid": 0, 
    "group": "root", 
    "mode": "0755", 
    "owner": "root", 
    "path": "/tmp/openshift-logging-ansible-bQuZHl/templates", 
    "secontext": "unconfined_u:object_r:user_tmp_t:s0", 
    "size": 6, 
    "state": "directory", 
    "uid": 0
}

TASK [openshift_logging_fluentd : Create Fluentd service account] **************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:51
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_fluentd : Create Fluentd service account] **************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:59
changed: [openshift] => {
    "changed": true, 
    "results": {
        "cmd": "/bin/oc get sa aggregated-logging-fluentd -o json -n logging", 
        "results": [
            {
                "apiVersion": "v1", 
                "imagePullSecrets": [
                    {
                        "name": "aggregated-logging-fluentd-dockercfg-s1vvl"
                    }
                ], 
                "kind": "ServiceAccount", 
                "metadata": {
                    "creationTimestamp": "2017-06-09T01:54:16Z", 
                    "name": "aggregated-logging-fluentd", 
                    "namespace": "logging", 
                    "resourceVersion": "1613", 
                    "selfLink": "/api/v1/namespaces/logging/serviceaccounts/aggregated-logging-fluentd", 
                    "uid": "8b456c7a-4cb6-11e7-9445-0ecf874efb82"
                }, 
                "secrets": [
                    {
                        "name": "aggregated-logging-fluentd-token-t90bl"
                    }, 
                    {
                        "name": "aggregated-logging-fluentd-dockercfg-s1vvl"
                    }
                ]
            }
        ], 
        "returncode": 0
    }, 
    "state": "present"
}

TASK [openshift_logging_fluentd : Set privileged permissions for Fluentd] ******
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:68
changed: [openshift] => {
    "changed": true, 
    "present": "present", 
    "results": {
        "cmd": "/bin/oc adm policy add-scc-to-user privileged system:serviceaccount:logging:aggregated-logging-fluentd -n logging", 
        "results": "", 
        "returncode": 0
    }
}

TASK [openshift_logging_fluentd : Set cluster-reader permissions for Fluentd] ***
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:77
changed: [openshift] => {
    "changed": true, 
    "present": "present", 
    "results": {
        "cmd": "/bin/oc adm policy add-cluster-role-to-user cluster-reader system:serviceaccount:logging:aggregated-logging-fluentd -n logging", 
        "results": "", 
        "returncode": 0
    }
}
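
These two grants give the fluentd service account the privileged SCC (so it can read host log files from the node) and the cluster-reader role (so it can query the API for pod metadata when enriching records). A rough verification, with the caveat that the exact cluster role binding name can vary between releases:

    # the SCC object lists granted users directly
    oc get scc privileged -o jsonpath='{.users}'
    # search the cluster role bindings for the fluentd service account
    oc get clusterrolebindings -o yaml | grep -B5 aggregated-logging-fluentd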

TASK [openshift_logging_fluentd : template] ************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:86
ok: [openshift] => {
    "changed": false, 
    "checksum": "a8c8596f5fc2c5dd7c8d33d244af17a2555be086", 
    "dest": "/tmp/openshift-logging-ansible-bQuZHl/fluent.conf", 
    "gid": 0, 
    "group": "root", 
    "md5sum": "579698b48ffce6276ee0e8d5ac71a338", 
    "mode": "0644", 
    "owner": "root", 
    "secontext": "unconfined_u:object_r:admin_home_t:s0", 
    "size": 1301, 
    "src": "/root/.ansible/tmp/ansible-tmp-1496973257.85-100480781999745/source", 
    "state": "file", 
    "uid": 0
}

TASK [openshift_logging_fluentd : copy] ****************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:94
ok: [openshift] => {
    "changed": false, 
    "checksum": "b3e75eddc4a0765edc77da092384c0c6f95440e1", 
    "dest": "/tmp/openshift-logging-ansible-bQuZHl/fluentd-throttle-config.yaml", 
    "gid": 0, 
    "group": "root", 
    "md5sum": "25871b8e0a9bedc166a6029872a6c336", 
    "mode": "0644", 
    "owner": "root", 
    "secontext": "unconfined_u:object_r:admin_home_t:s0", 
    "size": 133, 
    "src": "/root/.ansible/tmp/ansible-tmp-1496973258.26-52706993635066/source", 
    "state": "file", 
    "uid": 0
}

TASK [openshift_logging_fluentd : copy] ****************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:100
ok: [openshift] => {
    "changed": false, 
    "checksum": "a3aa36da13f3108aa4ad5b98d4866007b44e9798", 
    "dest": "/tmp/openshift-logging-ansible-bQuZHl/secure-forward.conf", 
    "gid": 0, 
    "group": "root", 
    "md5sum": "1084b00c427f4fa48dfc66d6ad6555d4", 
    "mode": "0644", 
    "owner": "root", 
    "secontext": "unconfined_u:object_r:admin_home_t:s0", 
    "size": 563, 
    "src": "/root/.ansible/tmp/ansible-tmp-1496973258.69-51292540553615/source", 
    "state": "file", 
    "uid": 0
}

TASK [openshift_logging_fluentd : copy] ****************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:107
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_fluentd : copy] ****************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:113
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_fluentd : copy] ****************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:119
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging_fluentd : Set Fluentd configmap] ***********************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:125
changed: [openshift] => {
    "changed": true, 
    "results": {
        "cmd": "/bin/oc get configmap logging-fluentd -o json -n logging", 
        "results": [
            {
                "apiVersion": "v1", 
                "data": {
                    "fluent.conf": "# This file is the fluentd configuration entrypoint. Edit with care.\n\n@include configs.d/openshift/system.conf\n\n# In each section below, pre- and post- includes don't include anything initially;\n# they exist to enable future additions to openshift conf as needed.\n\n## sources\n## ordered so that syslog always runs last...\n@include configs.d/openshift/input-pre-*.conf\n@include configs.d/dynamic/input-docker-*.conf\n@include configs.d/dynamic/input-syslog-*.conf\n@include configs.d/openshift/input-post-*.conf\n##\n\n<label @INGRESS>\n## filters\n  @include configs.d/openshift/filter-pre-*.conf\n  @include configs.d/openshift/filter-retag-journal.conf\n  @include configs.d/openshift/filter-k8s-meta.conf\n  @include configs.d/openshift/filter-kibana-transform.conf\n  @include configs.d/openshift/filter-k8s-flatten-hash.conf\n  @include configs.d/openshift/filter-k8s-record-transform.conf\n  @include configs.d/openshift/filter-syslog-record-transform.conf\n  @include configs.d/openshift/filter-viaq-data-model.conf\n  @include configs.d/openshift/filter-post-*.conf\n##\n\n## matches\n  @include configs.d/openshift/output-pre-*.conf\n  @include configs.d/openshift/output-operations.conf\n  @include configs.d/openshift/output-applications.conf\n  # no post - applications.conf matches everything left\n##\n</label>\n", 
                    "secure-forward.conf": "# @type secure_forward\n\n# self_hostname ${HOSTNAME}\n# shared_key <SECRET_STRING>\n\n# secure yes\n# enable_strict_verification yes\n\n# ca_cert_path /etc/fluent/keys/your_ca_cert\n# ca_private_key_path /etc/fluent/keys/your_private_key\n  # for private CA secret key\n# ca_private_key_passphrase passphrase\n\n# <server>\n  # or IP\n#   host server.fqdn.example.com\n#   port 24284\n# </server>\n# <server>\n  # ip address to connect\n#   host 203.0.113.8\n  # specify hostlabel for FQDN verification if ipaddress is used for host\n#   hostlabel server.fqdn.example.com\n# </server>\n", 
                    "throttle-config.yaml": "# Logging example fluentd throttling config file\n\n#example-project:\n#  read_lines_limit: 10\n#\n#.operations:\n#  read_lines_limit: 100\n"
                }, 
                "kind": "ConfigMap", 
                "metadata": {
                    "creationTimestamp": "2017-06-09T01:54:19Z", 
                    "name": "logging-fluentd", 
                    "namespace": "logging", 
                    "resourceVersion": "1624", 
                    "selfLink": "/api/v1/namespaces/logging/configmaps/logging-fluentd", 
                    "uid": "8d438e9a-4cb6-11e7-9445-0ecf874efb82"
                }
            }
        ], 
        "returncode": 0
    }, 
    "state": "present"
}
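
The configmap bundles three files: fluent.conf (the include-driven pipeline entrypoint), secure-forward.conf (a fully commented-out forwarding example), and throttle-config.yaml (a commented-out per-project read_lines_limit example). To read them without the JSON escaping, one option (assuming this oc build has the extract subcommand, as origin 3.x builds do) is:

    mkdir -p /tmp/logging-fluentd-cm && \
      oc extract configmap/logging-fluentd -n logging --to=/tmp/logging-fluentd-cm
    # or simply dump the whole object as YAML
    oc get configmap logging-fluentd -n logging -o yaml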

TASK [openshift_logging_fluentd : Set logging-fluentd secret] ******************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:137
changed: [openshift] => {
    "changed": true, 
    "results": {
        "cmd": "/bin/oc secrets new logging-fluentd ca=/etc/origin/logging/ca.crt key=/etc/origin/logging/system.logging.fluentd.key cert=/etc/origin/logging/system.logging.fluentd.crt -n logging", 
        "results": "", 
        "returncode": 0
    }, 
    "state": "present"
}

TASK [openshift_logging_fluentd : Generate logging-fluentd daemonset definition] ***
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:154
ok: [openshift] => {
    "changed": false, 
    "checksum": "3a5268349752387b84b5fb0a3a56e308ae0f6be3", 
    "dest": "/tmp/openshift-logging-ansible-bQuZHl/templates/logging-fluentd.yaml", 
    "gid": 0, 
    "group": "root", 
    "md5sum": "68d1ad8879e300a918e480be92fa1787", 
    "mode": "0644", 
    "owner": "root", 
    "secontext": "unconfined_u:object_r:admin_home_t:s0", 
    "size": 3415, 
    "src": "/root/.ansible/tmp/ansible-tmp-1496973260.48-196518049889376/source", 
    "state": "file", 
    "uid": 0
}
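
The rendered daemonset template is written to the temp directory and applied by the next task. Once created, fluentd pods are scheduled only on nodes matching the daemonset's node selector (controlled by openshift_logging_fluentd_nodeselector, typically logging-infra-fluentd=true), so a plain status check would be:

    oc get daemonset logging-fluentd -n logging
    oc get pods -l component=fluentd -n logging -o wide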

TASK [openshift_logging_fluentd : Set logging-fluentd daemonset] ***************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:172
changed: [openshift] => {
    "changed": true, 
    "results": {
        "cmd": "/bin/oc get daemonset logging-fluentd -o json -n logging", 
        "results": [
            {
                "apiVersion": "extensions/v1beta1", 
                "kind": "DaemonSet", 
                "metadata": {
                    "creationTimestamp": "2017-06-09T01:54:21Z", 
                    "generation": 1, 
                    "labels": {
                        "component": "fluentd", 
                        "logging-infra": "fluentd", 
                        "provider": "openshift"
                    }, 
                    "name": "logging-fluentd", 
                    "namespace": "logging", 
                    "resourceVersion": "1632", 
                    "selfLink": "/apis/extensions/v1beta1/namespaces/logging/daemonsets/logging-fluentd", 
                    "uid": "8e568b8c-4cb6-11e7-9445-0ecf874efb82"
                }, 
                "spec": {
                    "selector": {
                        "matchLabels": {
                            "component": "fluentd", 
                            "provider": "openshift"
                        }
                    }, 
                    "template": {
                        "metadata": {
                            "creationTimestamp": null, 
                            "labels": {
                                "component": "fluentd", 
                                "logging-infra": "fluentd", 
                                "provider": "openshift"
                            }, 
                            "name": "fluentd-elasticsearch"
                        }, 
                        "spec": {
                            "containers": [
                                {
                                    "env": [
                                        {
                                            "name": "K8S_HOST_URL", 
                                            "value": "https://kubernetes.default.svc.cluster.local"
                                        }, 
                                        {
                                            "name": "ES_HOST", 
                                            "value": "logging-es"
                                        }, 
                                        {
                                            "name": "ES_PORT", 
                                            "value": "9200"
                                        }, 
                                        {
                                            "name": "ES_CLIENT_CERT", 
                                            "value": "/etc/fluent/keys/cert"
                                        }, 
                                        {
                                            "name": "ES_CLIENT_KEY", 
                                            "value": "/etc/fluent/keys/key"
                                        }, 
                                        {
                                            "name": "ES_CA", 
                                            "value": "/etc/fluent/keys/ca"
                                        }, 
                                        {
                                            "name": "OPS_HOST", 
                                            "value": "logging-es-ops"
                                        }, 
                                        {
                                            "name": "OPS_PORT", 
                                            "value": "9200"
                                        }, 
                                        {
                                            "name": "OPS_CLIENT_CERT", 
                                            "value": "/etc/fluent/keys/cert"
                                        }, 
                                        {
                                            "name": "OPS_CLIENT_KEY", 
                                            "value": "/etc/fluent/keys/key"
                                        }, 
                                        {
                                            "name": "OPS_CA", 
                                            "value": "/etc/fluent/keys/ca"
                                        }, 
                                        {
                                            "name": "ES_COPY", 
                                            "value": "false"
                                        }, 
                                        {
                                            "name": "USE_JOURNAL", 
                                            "value": "true"
                                        }, 
                                        {
                                            "name": "JOURNAL_SOURCE"
                                        }, 
                                        {
                                            "name": "JOURNAL_READ_FROM_HEAD", 
                                            "value": "false"
                                        }
                                    ], 
                                    "image": "172.30.155.104:5000/logging/logging-fluentd:latest", 
                                    "imagePullPolicy": "Always", 
                                    "name": "fluentd-elasticsearch", 
                                    "resources": {
                                        "limits": {
                                            "cpu": "100m", 
                                            "memory": "512Mi"
                                        }
                                    }, 
                                    "securityContext": {
                                        "privileged": true
                                    }, 
                                    "terminationMessagePath": "/dev/termination-log", 
                                    "terminationMessagePolicy": "File", 
                                    "volumeMounts": [
                                        {
                                            "mountPath": "/run/log/journal", 
                                            "name": "runlogjournal"
                                        }, 
                                        {
                                            "mountPath": "/var/log", 
                                            "name": "varlog"
                                        }, 
                                        {
                                            "mountPath": "/var/lib/docker/containers", 
                                            "name": "varlibdockercontainers", 
                                            "readOnly": true
                                        }, 
                                        {
                                            "mountPath": "/etc/fluent/configs.d/user", 
                                            "name": "config", 
                                            "readOnly": true
                                        }, 
                                        {
                                            "mountPath": "/etc/fluent/keys", 
                                            "name": "certs", 
                                            "readOnly": true
                                        }, 
                                        {
                                            "mountPath": "/etc/docker-hostname", 
                                            "name": "dockerhostname", 
                                            "readOnly": true
                                        }, 
                                        {
                                            "mountPath": "/etc/localtime", 
                                            "name": "localtime", 
                                            "readOnly": true
                                        }, 
                                        {
                                            "mountPath": "/etc/sysconfig/docker", 
                                            "name": "dockercfg", 
                                            "readOnly": true
                                        }, 
                                        {
                                            "mountPath": "/etc/docker", 
                                            "name": "dockerdaemoncfg", 
                                            "readOnly": true
                                        }
                                    ]
                                }
                            ], 
                            "dnsPolicy": "ClusterFirst", 
                            "nodeSelector": {
                                "logging-infra-fluentd": "true"
                            }, 
                            "restartPolicy": "Always", 
                            "schedulerName": "default-scheduler", 
                            "securityContext": {}, 
                            "serviceAccount": "aggregated-logging-fluentd", 
                            "serviceAccountName": "aggregated-logging-fluentd", 
                            "terminationGracePeriodSeconds": 30, 
                            "volumes": [
                                {
                                    "hostPath": {
                                        "path": "/run/log/journal"
                                    }, 
                                    "name": "runlogjournal"
                                }, 
                                {
                                    "hostPath": {
                                        "path": "/var/log"
                                    }, 
                                    "name": "varlog"
                                }, 
                                {
                                    "hostPath": {
                                        "path": "/var/lib/docker/containers"
                                    }, 
                                    "name": "varlibdockercontainers"
                                }, 
                                {
                                    "configMap": {
                                        "defaultMode": 420, 
                                        "name": "logging-fluentd"
                                    }, 
                                    "name": "config"
                                }, 
                                {
                                    "name": "certs", 
                                    "secret": {
                                        "defaultMode": 420, 
                                        "secretName": "logging-fluentd"
                                    }
                                }, 
                                {
                                    "hostPath": {
                                        "path": "/etc/hostname"
                                    }, 
                                    "name": "dockerhostname"
                                }, 
                                {
                                    "hostPath": {
                                        "path": "/etc/localtime"
                                    }, 
                                    "name": "localtime"
                                }, 
                                {
                                    "hostPath": {
                                        "path": "/etc/sysconfig/docker"
                                    }, 
                                    "name": "dockercfg"
                                }, 
                                {
                                    "hostPath": {
                                        "path": "/etc/docker"
                                    }, 
                                    "name": "dockerdaemoncfg"
                                }
                            ]
                        }
                    }, 
                    "templateGeneration": 1, 
                    "updateStrategy": {
                        "rollingUpdate": {
                            "maxUnavailable": 1
                        }, 
                        "type": "RollingUpdate"
                    }
                }, 
                "status": {
                    "currentNumberScheduled": 0, 
                    "desiredNumberScheduled": 0, 
                    "numberMisscheduled": 0, 
                    "numberReady": 0, 
                    "observedGeneration": 1
                }
            }
        ], 
        "returncode": 0
    }, 
    "state": "present"
}
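
Note that the daemonset status above still reports desiredNumberScheduled: 0; its pod
template carries the nodeSelector logging-infra-fluentd=true, so no fluentd pod is
scheduled until a node is labeled, which a later task does. A small sketch for checking
this by hand with a logged-in oc client:

    # desired/current counts stay at 0 until some node carries the selector label
    oc get daemonset logging-fluentd -n logging

    # once nodes are labeled, the fluentd pods land on them
    oc get pods -l component=fluentd -n logging -o wide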

TASK [openshift_logging_fluentd : Retrieve list of Fluentd hosts] **************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:183
ok: [openshift] => {
    "changed": false, 
    "results": {
        "cmd": "/bin/oc get node -o json -n default", 
        "results": [
            {
                "apiVersion": "v1", 
                "items": [
                    {
                        "apiVersion": "v1", 
                        "kind": "Node", 
                        "metadata": {
                            "annotations": {
                                "volumes.kubernetes.io/controller-managed-attach-detach": "true"
                            }, 
                            "creationTimestamp": "2017-06-09T01:37:00Z", 
                            "labels": {
                                "beta.kubernetes.io/arch": "amd64", 
                                "beta.kubernetes.io/os": "linux", 
                                "kubernetes.io/hostname": "172.18.11.188"
                            }, 
                            "name": "172.18.11.188", 
                            "namespace": "", 
                            "resourceVersion": "1595", 
                            "selfLink": "/api/v1/nodes/172.18.11.188", 
                            "uid": "21fc4fa9-4cb4-11e7-9445-0ecf874efb82"
                        }, 
                        "spec": {
                            "externalID": "172.18.11.188", 
                            "providerID": "aws:////i-08ff6aa2c118c7c4f"
                        }, 
                        "status": {
                            "addresses": [
                                {
                                    "address": "172.18.11.188", 
                                    "type": "LegacyHostIP"
                                }, 
                                {
                                    "address": "172.18.11.188", 
                                    "type": "InternalIP"
                                }, 
                                {
                                    "address": "172.18.11.188", 
                                    "type": "Hostname"
                                }
                            ], 
                            "allocatable": {
                                "cpu": "4", 
                                "memory": "7129288Ki", 
                                "pods": "40"
                            }, 
                            "capacity": {
                                "cpu": "4", 
                                "memory": "7231688Ki", 
                                "pods": "40"
                            }, 
                            "conditions": [
                                {
                                    "lastHeartbeatTime": "2017-06-09T01:54:14Z", 
                                    "lastTransitionTime": "2017-06-09T01:37:00Z", 
                                    "message": "kubelet has sufficient disk space available", 
                                    "reason": "KubeletHasSufficientDisk", 
                                    "status": "False", 
                                    "type": "OutOfDisk"
                                }, 
                                {
                                    "lastHeartbeatTime": "2017-06-09T01:54:14Z", 
                                    "lastTransitionTime": "2017-06-09T01:37:00Z", 
                                    "message": "kubelet has sufficient memory available", 
                                    "reason": "KubeletHasSufficientMemory", 
                                    "status": "False", 
                                    "type": "MemoryPressure"
                                }, 
                                {
                                    "lastHeartbeatTime": "2017-06-09T01:54:14Z", 
                                    "lastTransitionTime": "2017-06-09T01:37:00Z", 
                                    "message": "kubelet has no disk pressure", 
                                    "reason": "KubeletHasNoDiskPressure", 
                                    "status": "False", 
                                    "type": "DiskPressure"
                                }, 
                                {
                                    "lastHeartbeatTime": "2017-06-09T01:54:14Z", 
                                    "lastTransitionTime": "2017-06-09T01:37:00Z", 
                                    "message": "kubelet is posting ready status", 
                                    "reason": "KubeletReady", 
                                    "status": "True", 
                                    "type": "Ready"
                                }
                            ], 
                            "daemonEndpoints": {
                                "kubeletEndpoint": {
                                    "Port": 10250
                                }
                            }, 
                            "images": [
                                {
                                    "names": [
                                        "openshift/origin-federation:6acabdc", 
                                        "openshift/origin-federation:latest"
                                    ], 
                                    "sizeBytes": 1205885664
                                }, 
                                {
                                    "names": [
                                        "docker.io/openshift/origin-docker-registry@sha256:8b7fd3b0c284d3647bfce4420c44436eeaccf11896aeea29e87e111971abf885", 
                                        "docker.io/openshift/origin-docker-registry:latest"
                                    ], 
                                    "sizeBytes": 1100552584
                                }, 
                                {
                                    "names": [
                                        "openshift/origin-docker-registry:6acabdc", 
                                        "openshift/origin-docker-registry:latest"
                                    ], 
                                    "sizeBytes": 1100164272
                                }, 
                                {
                                    "names": [
                                        "openshift/origin-gitserver:6acabdc", 
                                        "openshift/origin-gitserver:latest"
                                    ], 
                                    "sizeBytes": 1086520226
                                }, 
                                {
                                    "names": [
                                        "openshift/openvswitch:6acabdc", 
                                        "openshift/openvswitch:latest"
                                    ], 
                                    "sizeBytes": 1053403667
                                }, 
                                {
                                    "names": [
                                        "openshift/origin-keepalived-ipfailover:6acabdc", 
                                        "openshift/origin-keepalived-ipfailover:latest"
                                    ], 
                                    "sizeBytes": 1028529711
                                }, 
                                {
                                    "names": [
                                        "openshift/origin-haproxy-router:6acabdc", 
                                        "openshift/origin-haproxy-router:latest"
                                    ], 
                                    "sizeBytes": 1022758742
                                }, 
                                {
                                    "names": [
                                        "openshift/origin-sti-builder:6acabdc", 
                                        "openshift/origin-sti-builder:latest"
                                    ], 
                                    "sizeBytes": 1001728427
                                }, 
                                {
                                    "names": [
                                        "openshift/origin-deployer:6acabdc", 
                                        "openshift/origin-deployer:latest"
                                    ], 
                                    "sizeBytes": 1001728427
                                }, 
                                {
                                    "names": [
                                        "openshift/origin:6acabdc", 
                                        "openshift/origin:latest"
                                    ], 
                                    "sizeBytes": 1001728427
                                }, 
                                {
                                    "names": [
                                        "openshift/origin-docker-builder:6acabdc", 
                                        "openshift/origin-docker-builder:latest"
                                    ], 
                                    "sizeBytes": 1001728427
                                }, 
                                {
                                    "names": [
                                        "openshift/origin-f5-router:6acabdc", 
                                        "openshift/origin-f5-router:latest"
                                    ], 
                                    "sizeBytes": 1001728427
                                }, 
                                {
                                    "names": [
                                        "openshift/origin-cluster-capacity:6acabdc", 
                                        "openshift/origin-cluster-capacity:latest"
                                    ], 
                                    "sizeBytes": 962455026
                                }, 
                                {
                                    "names": [
                                        "rhel7.1:latest"
                                    ], 
                                    "sizeBytes": 765301508
                                }, 
                                {
                                    "names": [
                                        "openshift/dind-master:latest"
                                    ], 
                                    "sizeBytes": 731456758
                                }, 
                                {
                                    "names": [
                                        "openshift/dind-node:latest"
                                    ], 
                                    "sizeBytes": 731453034
                                }, 
                                {
                                    "names": [
                                        "172.30.155.104:5000/logging/logging-auth-proxy@sha256:7c30338543c23de847e9a2b3f346ca9f0a378b0ef31bb2278a8644e9491840f2", 
                                        "172.30.155.104:5000/logging/logging-auth-proxy:latest"
                                    ], 
                                    "sizeBytes": 715536028
                                }, 
                                {
                                    "names": [
                                        "docker.io/node@sha256:46db0dd19955beb87b841c30a6b9812ba626473283e84117d1c016deee5949a9", 
                                        "docker.io/node:0.10.36"
                                    ], 
                                    "sizeBytes": 697128386
                                }, 
                                {
                                    "names": [
                                        "docker.io/openshift/origin-logging-kibana@sha256:9e3e11edb1f14c744ecf9587a3212e7648934a8bb302513ba84a8c6b058a1229", 
                                        "docker.io/openshift/origin-logging-kibana:latest"
                                    ], 
                                    "sizeBytes": 682851463
                                }, 
                                {
                                    "names": [
                                        "172.30.155.104:5000/logging/logging-kibana@sha256:c75d93ffb987105eb64a04379c8e8171044c942d0c2406e5f71e68146d57bb66", 
                                        "172.30.155.104:5000/logging/logging-kibana:latest"
                                    ], 
                                    "sizeBytes": 682851463
                                }, 
                                {
                                    "names": [
                                        "openshift/dind:latest"
                                    ], 
                                    "sizeBytes": 640650210
                                }, 
                                {
                                    "names": [
                                        "172.30.155.104:5000/logging/logging-elasticsearch@sha256:85f80a11215b71e3d1d4545268d28942c5abe4bb405cb4e62324f2f1c184094d", 
                                        "172.30.155.104:5000/logging/logging-elasticsearch:latest"
                                    ], 
                                    "sizeBytes": 623379764
                                }, 
                                {
                                    "names": [
                                        "172.30.155.104:5000/logging/logging-fluentd@sha256:b5bcf7cb0813c89c2679948c78eb0ea908126526dd01bd08cb553e4c195afec7", 
                                        "172.30.155.104:5000/logging/logging-fluentd:latest"
                                    ], 
                                    "sizeBytes": 472184916
                                }, 
                                {
                                    "names": [
                                        "172.30.155.104:5000/logging/logging-curator@sha256:ffb9fa5f9e53754a51e68900b092e5187a03f18154293fc7a55153e2b78f0532", 
                                        "172.30.155.104:5000/logging/logging-curator:latest"
                                    ], 
                                    "sizeBytes": 418287859
                                }, 
                                {
                                    "names": [
                                        "docker.io/openshift/base-centos7@sha256:aea292a3bddba020cde0ee83e6a45807931eb607c164ec6a3674f67039d8cd7c", 
                                        "docker.io/openshift/base-centos7:latest"
                                    ], 
                                    "sizeBytes": 383049978
                                }, 
                                {
                                    "names": [
                                        "rhel7.2:latest"
                                    ], 
                                    "sizeBytes": 377493597
                                }, 
                                {
                                    "names": [
                                        "openshift/origin-base:latest"
                                    ], 
                                    "sizeBytes": 363070172
                                }, 
                                {
                                    "names": [
                                        "<none>@<none>", 
                                        "<none>:<none>"
                                    ], 
                                    "sizeBytes": 363024702
                                }, 
                                {
                                    "names": [
                                        "docker.io/fedora@sha256:69281ddd7b2600e5f2b17f1e12d7fba25207f459204fb2d15884f8432c479136", 
                                        "docker.io/fedora:25"
                                    ], 
                                    "sizeBytes": 230864375
                                }, 
                                {
                                    "names": [
                                        "rhel7.3:latest", 
                                        "rhel7:latest"
                                    ], 
                                    "sizeBytes": 219121266
                                }, 
                                {
                                    "names": [
                                        "openshift/origin-pod:6acabdc", 
                                        "openshift/origin-pod:latest"
                                    ], 
                                    "sizeBytes": 213199843
                                }, 
                                {
                                    "names": [
                                        "registry.access.redhat.com/rhel7.2@sha256:98e6ca5d226c26e31a95cd67716afe22833c943e1926a21daf1a030906a02249", 
                                        "registry.access.redhat.com/rhel7.2:latest"
                                    ], 
                                    "sizeBytes": 201376319
                                }, 
                                {
                                    "names": [
                                        "registry.access.redhat.com/rhel7.3@sha256:1e232401d8e0ba53b36b757b4712fbcbd1dab9c21db039c45a84871a74e89e68", 
                                        "registry.access.redhat.com/rhel7.3:latest"
                                    ], 
                                    "sizeBytes": 192693772
                                }, 
                                {
                                    "names": [
                                        "docker.io/centos@sha256:bba1de7c9d900a898e3cadbae040dfe8a633c06bc104a0df76ae24483e03c077"
                                    ], 
                                    "sizeBytes": 192548999
                                }, 
                                {
                                    "names": [
                                        "openshift/origin-source:latest"
                                    ], 
                                    "sizeBytes": 192548894
                                }, 
                                {
                                    "names": [
                                        "registry.access.redhat.com/rhel7.1@sha256:1bc5a4c43bbb29a5a96a61896ff696933be3502e2f5fdc4cde02d9e101731fdd", 
                                        "registry.access.redhat.com/rhel7.1:latest"
                                    ], 
                                    "sizeBytes": 158229901
                                }, 
                                {
                                    "names": [
                                        "openshift/hello-openshift:6acabdc", 
                                        "openshift/hello-openshift:latest"
                                    ], 
                                    "sizeBytes": 5643318
                                }
                            ], 
                            "nodeInfo": {
                                "architecture": "amd64", 
                                "bootID": "adb5af75-a764-42ee-b935-43316d27f23d", 
                                "containerRuntimeVersion": "docker://1.12.6", 
                                "kernelVersion": "3.10.0-327.22.2.el7.x86_64", 
                                "kubeProxyVersion": "v1.6.1+5115d708d7", 
                                "kubeletVersion": "v1.6.1+5115d708d7", 
                                "machineID": "f9370ed252a14f73b014c1301a9b6d1b", 
                                "operatingSystem": "linux", 
                                "osImage": "Red Hat Enterprise Linux Server 7.3 (Maipo)", 
                                "systemUUID": "EC2C94CB-D989-55B5-BF3C-986A8B251863"
                            }
                        }
                    }
                ], 
                "kind": "List", 
                "metadata": {}, 
                "resourceVersion": "", 
                "selfLink": ""
            }
        ], 
        "returncode": 0
    }, 
    "state": "list"
}

TASK [openshift_logging_fluentd : Set openshift_logging_fluentd_hosts] *********
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:190
ok: [openshift] => {
    "ansible_facts": {
        "openshift_logging_fluentd_hosts": [
            "172.18.11.188"
        ]
    }, 
    "changed": false
}
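
The openshift_logging_fluentd_hosts fact is derived from the node list returned by the
previous task. A compact equivalent of that extraction, as a sketch (jsonpath output is
assumed to be available in this oc release):

    # print only the node names instead of the full JSON dump
    oc get nodes -o jsonpath='{.items[*].metadata.name}'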

TASK [openshift_logging_fluentd : include] *************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:195
included: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_fluentd/tasks/label_and_wait.yaml for openshift

TASK [openshift_logging_fluentd : Label 172.18.11.188 for Fluentd deployment] ***
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_fluentd/tasks/label_and_wait.yaml:2
changed: [openshift] => {
    "changed": true, 
    "results": {
        "cmd": "/bin/oc label node 172.18.11.188 logging-infra-fluentd=true --overwrite", 
        "results": "", 
        "returncode": 0
    }, 
    "state": "add"
}
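
Labeling is what actually triggers the fluentd rollout, since the daemonset's
nodeSelector matches logging-infra-fluentd=true. A sketch of the corresponding manual
checks, including how to undo the label (removing it unschedules fluentd from that node):

    # list the nodes fluentd will run on
    oc get nodes -l logging-infra-fluentd=true

    # remove the label again; the trailing dash deletes a label
    oc label node 172.18.11.188 logging-infra-fluentd-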

TASK [openshift_logging_fluentd : command] *************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_fluentd/tasks/label_and_wait.yaml:10
changed: [openshift -> 127.0.0.1] => {
    "changed": true, 
    "cmd": [
        "sleep", 
        "0.5"
    ], 
    "delta": "0:00:00.502436", 
    "end": "2017-06-08 21:54:23.678784", 
    "rc": 0, 
    "start": "2017-06-08 21:54:23.176348"
}

TASK [openshift_logging_fluentd : Delete temp directory] ***********************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:202
ok: [openshift] => {
    "changed": false, 
    "path": "/tmp/openshift-logging-ansible-bQuZHl", 
    "state": "absent"
}

TASK [openshift_logging : include] *********************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:253
included: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/update_master_config.yaml for openshift

TASK [openshift_logging : include] *********************************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/main.yaml:36
skipping: [openshift] => {
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}

TASK [openshift_logging : Cleaning up local temp dir] **************************
task path: /tmp/tmp.na8PBpGpOF/openhift-ansible/roles/openshift_logging/tasks/main.yaml:40
ok: [openshift -> 127.0.0.1] => {
    "changed": false, 
    "path": "/tmp/openshift-logging-ansible-kL1YIP", 
    "state": "absent"
}
META: ran handlers
META: ran handlers

PLAY [Update Master configs] ***************************************************
skipping: no hosts matched

PLAY RECAP *********************************************************************
localhost                  : ok=2    changed=0    unreachable=0    failed=0   
openshift                  : ok=213  changed=71   unreachable=0    failed=0   

/data/src/github.com/openshift/origin-aggregated-logging
Running /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:170: executing 'oc get pods -l component=es' expecting any result and text 'Running'; re-trying every 0.2s until completion or 180.000s...
SUCCESS after 0.309s: /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:170: executing 'oc get pods -l component=es' expecting any result and text 'Running'; re-trying every 0.2s until completion or 180.000s
Standard output from the command:
NAME                                      READY     STATUS    RESTARTS   AGE
logging-es-data-master-8nzz83ik-1-cqnxl   1/1       Running   0          58s

There was no error output from the command.
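
The "re-trying every 0.2s until completion or 180.000s" lines come from the test
harness's polling helper. A rough bash equivalent of that wait loop, offered only as a
sketch (the 900-iteration bound approximates 180s at 0.2s per attempt; the helper's real
implementation is not shown in this log):

    # poll until a pod with the given label reports Running, or give up after ~180s
    for i in $(seq 1 900); do
        oc get pods -l component=es | grep -q Running && break
        sleep 0.2
    done
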
Running /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:171: executing 'oc get pods -l component=kibana' expecting any result and text 'Running'; re-trying every 0.2s until completion or 180.000s...
SUCCESS after 0.268s: /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:171: executing 'oc get pods -l component=kibana' expecting any result and text 'Running'; re-trying every 0.2s until completion or 180.000s
Standard output from the command:
NAME                     READY     STATUS    RESTARTS   AGE
logging-kibana-1-fz6pt   2/2       Running   0          32s

There was no error output from the command.
Running /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:172: executing 'oc get pods -l component=curator' expecting any result and text 'Running'; re-trying every 0.2s until completion or 180.000s...
SUCCESS after 15.025s: /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:172: executing 'oc get pods -l component=curator' expecting any result and text 'Running'; re-trying every 0.2s until completion or 180.000s
Standard output from the command:
NAME                      READY     STATUS              RESTARTS   AGE
logging-curator-1-68xqs   0/1       ContainerCreating   0          11s
NAME                      READY     STATUS              RESTARTS   AGE
logging-curator-1-68xqs   0/1       ContainerCreating   0          12s
... repeated 2 times
NAME                      READY     STATUS              RESTARTS   AGE
logging-curator-1-68xqs   0/1       ContainerCreating   0          13s
... repeated 2 times
NAME                      READY     STATUS              RESTARTS   AGE
logging-curator-1-68xqs   0/1       ContainerCreating   0          14s
... repeated 3 times
NAME                      READY     STATUS              RESTARTS   AGE
logging-curator-1-68xqs   0/1       ContainerCreating   0          15s
... repeated 2 times
NAME                      READY     STATUS              RESTARTS   AGE
logging-curator-1-68xqs   0/1       ContainerCreating   0          16s
... repeated 2 times
NAME                      READY     STATUS              RESTARTS   AGE
logging-curator-1-68xqs   0/1       ContainerCreating   0          17s
... repeated 3 times
NAME                      READY     STATUS              RESTARTS   AGE
logging-curator-1-68xqs   0/1       ContainerCreating   0          18s
... repeated 2 times
NAME                      READY     STATUS              RESTARTS   AGE
logging-curator-1-68xqs   0/1       ContainerCreating   0          19s
... repeated 2 times
NAME                      READY     STATUS              RESTARTS   AGE
logging-curator-1-68xqs   0/1       ContainerCreating   0          20s
... repeated 3 times
NAME                      READY     STATUS              RESTARTS   AGE
logging-curator-1-68xqs   0/1       ContainerCreating   0          21s
... repeated 2 times
NAME                      READY     STATUS              RESTARTS   AGE
logging-curator-1-68xqs   0/1       ContainerCreating   0          22s
... repeated 2 times
NAME                      READY     STATUS              RESTARTS   AGE
logging-curator-1-68xqs   0/1       ContainerCreating   0          23s
... repeated 2 times
NAME                      READY     STATUS              RESTARTS   AGE
logging-curator-1-68xqs   0/1       ContainerCreating   0          24s
... repeated 3 times
NAME                      READY     STATUS              RESTARTS   AGE
logging-curator-1-68xqs   0/1       ContainerCreating   0          25s
... repeated 2 times
NAME                      READY     STATUS    RESTARTS   AGE
logging-curator-1-68xqs   1/1       Running   0          26s
Standard error from the command:
Running /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:175: executing 'oc get pods -l component=es-ops' expecting any result and text 'Running'; re-trying every 0.2s until completion or 180.000s...
SUCCESS after 0.281s: /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:175: executing 'oc get pods -l component=es-ops' expecting any result and text 'Running'; re-trying every 0.2s until completion or 180.000s
Standard output from the command:
NAME                                          READY     STATUS    RESTARTS   AGE
logging-es-ops-data-master-xc2h70yx-1-08w7b   1/1       Running   0          1m

There was no error output from the command.
Running /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:176: executing 'oc get pods -l component=kibana-ops' expecting any result and text 'Running'; re-trying every 0.2s until completion or 180.000s...
SUCCESS after 0.274s: /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:176: executing 'oc get pods -l component=kibana-ops' expecting any result and text 'Running'; re-trying every 0.2s until completion or 180.000s
Standard output from the command:
NAME                         READY     STATUS    RESTARTS   AGE
logging-kibana-ops-1-tgf4p   1/2       Running   0          31s

There was no error output from the command.
Running /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:177: executing 'oc get pods -l component=curator-ops' expecting any result and text 'Running'; re-trying every 0.2s until completion or 180.000s...
SUCCESS after 1.217s: /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:177: executing 'oc get pods -l component=curator-ops' expecting any result and text 'Running'; re-trying every 0.2s until completion or 180.000s
Standard output from the command:
NAME                          READY     STATUS              RESTARTS   AGE
logging-curator-ops-1-64gxn   0/1       ContainerCreating   0          15s
... repeated 2 times
NAME                          READY     STATUS    RESTARTS   AGE
logging-curator-ops-1-64gxn   1/1       Running   0          16s
Standard error from the command:
Running /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:185: executing 'oc project logging > /dev/null' expecting success...
SUCCESS after 0.245s: /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:185: executing 'oc project logging > /dev/null' expecting success
There was no output from the command.
There was no error output from the command.
/data/src/github.com/openshift/origin-aggregated-logging/hack/testing /data/src/github.com/openshift/origin-aggregated-logging
--> Deploying template "logging/logging-fluentd-template-maker" for "-" to project logging

     logging-fluentd-template-maker
     ---------
     Template to create template for fluentd

     * With parameters:
        * MASTER_URL=https://kubernetes.default.svc.cluster.local
        * ES_HOST=logging-es
        * ES_PORT=9200
        * ES_CLIENT_CERT=/etc/fluent/keys/cert
        * ES_CLIENT_KEY=/etc/fluent/keys/key
        * ES_CA=/etc/fluent/keys/ca
        * OPS_HOST=logging-es-ops
        * OPS_PORT=9200
        * OPS_CLIENT_CERT=/etc/fluent/keys/cert
        * OPS_CLIENT_KEY=/etc/fluent/keys/key
        * OPS_CA=/etc/fluent/keys/ca
        * ES_COPY=false
        * ES_COPY_HOST=
        * ES_COPY_PORT=
        * ES_COPY_SCHEME=https
        * ES_COPY_CLIENT_CERT=
        * ES_COPY_CLIENT_KEY=
        * ES_COPY_CA=
        * ES_COPY_USERNAME=
        * ES_COPY_PASSWORD=
        * OPS_COPY_HOST=
        * OPS_COPY_PORT=
        * OPS_COPY_SCHEME=https
        * OPS_COPY_CLIENT_CERT=
        * OPS_COPY_CLIENT_KEY=
        * OPS_COPY_CA=
        * OPS_COPY_USERNAME=
        * OPS_COPY_PASSWORD=
        * IMAGE_PREFIX_DEFAULT=172.30.155.104:5000/logging/
        * IMAGE_VERSION_DEFAULT=latest
        * USE_JOURNAL=
        * JOURNAL_SOURCE=
        * JOURNAL_READ_FROM_HEAD=false
        * USE_MUX=false
        * USE_MUX_CLIENT=false
        * MUX_ALLOW_EXTERNAL=false
        * BUFFER_QUEUE_LIMIT=1024
        * BUFFER_SIZE_LIMIT=16777216

--> Creating resources ...
    template "logging-fluentd-template" created
--> Success
    Run 'oc status' to view your app.
START wait_for_fluentd_to_catch_up at 2017-06-09 01:54:55.275560172+00:00
added es message 764559c4-ab1d-4856-a5a3-dd16304cc970
added es-ops message 22d88ba9-e874-4354-9aaf-756d7cadacd7
good - wait_for_fluentd_to_catch_up: found 1 record project logging for 764559c4-ab1d-4856-a5a3-dd16304cc970
good - wait_for_fluentd_to_catch_up: found 1 record project .operations for 22d88ba9-e874-4354-9aaf-756d7cadacd7
END wait_for_fluentd_to_catch_up took 11 seconds at 2017-06-09 01:55:06.215467536+00:00
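
wait_for_fluentd_to_catch_up injects one uniquely tagged message for the app cluster and
one for the ops cluster, then polls Elasticsearch until each UUID shows up exactly once.
A hedged sketch of the ops-side half of that flow ($es_pod is a placeholder for one of
the logging-es-ops pods, and the admin-cert paths inside the pod are assumptions; the
real helper lives in the test scripts):

    uuid=$(uuidgen)
    logger "test-message $uuid"        # lands in the journal, picked up by fluentd

    # count hits for the UUID from inside the ES pod; repeat until the count reaches 1
    oc exec $es_pod -n logging -- curl -s -k \
        --cert /etc/elasticsearch/secret/admin-cert \
        --key  /etc/elasticsearch/secret/admin-key \
        "https://localhost:9200/.operations.*/_count?q=message:$uuid"
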
Running /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:223: executing 'oc login --username=admin --password=admin' expecting success...
SUCCESS after 0.273s: /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:223: executing 'oc login --username=admin --password=admin' expecting success
Standard output from the command:
Login successful.

You don't have any projects. You can try to create a new project, by running

    oc new-project <projectname>


There was no error output from the command.
Running /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:224: executing 'oc login --username=system:admin' expecting success...
SUCCESS after 0.221s: /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:224: executing 'oc login --username=system:admin' expecting success
Standard output from the command:
Logged into "https://172.18.11.188:8443" as "system:admin" using existing credentials.

You have access to the following projects and can switch between them with 'oc project <projectname>':

  * default
    kube-public
    kube-system
    logging
    openshift
    openshift-infra

Using project "default".

There was no error output from the command.
Running /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:225: executing 'oadm policy add-cluster-role-to-user cluster-admin admin' expecting success...
SUCCESS after 0.236s: /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:225: executing 'oadm policy add-cluster-role-to-user cluster-admin admin' expecting success
Standard output from the command:
cluster role "cluster-admin" added: "admin"

There was no error output from the command.
Running /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:226: executing 'oc login --username=loguser --password=loguser' expecting success...
SUCCESS after 0.246s: /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:226: executing 'oc login --username=loguser --password=loguser' expecting success
Standard output from the command:
Login successful.

You don't have any projects. You can try to create a new project, by running

    oc new-project <projectname>


There was no error output from the command.
Running /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:227: executing 'oc login --username=system:admin' expecting success...
SUCCESS after 0.258s: /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:227: executing 'oc login --username=system:admin' expecting success
Standard output from the command:
Logged into "https://172.18.11.188:8443" as "system:admin" using existing credentials.

You have access to the following projects and can switch between them with 'oc project <projectname>':

  * default
    kube-public
    kube-system
    logging
    openshift
    openshift-infra

Using project "default".

There was no error output from the command.
Running /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:228: executing 'oc project logging > /dev/null' expecting success...
SUCCESS after 0.244s: /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:228: executing 'oc project logging > /dev/null' expecting success
There was no output from the command.
There was no error output from the command.
Running /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:229: executing 'oadm policy add-role-to-user view loguser' expecting success...
SUCCESS after 0.224s: /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:229: executing 'oadm policy add-role-to-user view loguser' expecting success
Standard output from the command:
role "view" added: "loguser"

There was no error output from the command.
Checking if Elasticsearch logging-es-data-master-8nzz83ik-1-cqnxl is ready
{
    "_id": "0",
    "_index": ".searchguard.logging-es-data-master-8nzz83ik-1-cqnxl",
    "_shards": {
        "failed": 0,
        "successful": 1,
        "total": 1
    },
    "_type": "rolesmapping",
    "_version": 2,
    "created": false
}
Checking if Elasticsearch logging-es-ops-data-master-xc2h70yx-1-08w7b is ready
{
    "_id": "0",
    "_index": ".searchguard.logging-es-ops-data-master-xc2h70yx-1-08w7b",
    "_shards": {
        "failed": 0,
        "successful": 1,
        "total": 1
    },
    "_type": "rolesmapping",
    "_version": 2,
    "created": false
}
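
The two JSON blocks above come from a readiness probe that writes a rolesmapping document
into each pod's .searchguard.<pod-name> index; "created": false with "_version": 2 means
the document already existed and was updated, which is taken as proof the node is serving
writes. A simpler manual probe, sketched with assumed admin-cert paths inside the ES pod:

    # any of the logging-es-* pods works here
    oc exec logging-es-data-master-8nzz83ik-1-cqnxl -n logging -- curl -s -k \
        --cert /etc/elasticsearch/secret/admin-cert \
        --key  /etc/elasticsearch/secret/admin-key \
        "https://localhost:9200/_cluster/health?pretty"
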
------------------------------------------
     Test 'admin' user can access cluster stats
------------------------------------------
Running /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:265: executing 'test 200 = 200' expecting success...
SUCCESS after 0.013s: /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:265: executing 'test 200 = 200' expecting success
There was no output from the command.
There was no error output from the command.
------------------------------------------
     Test 'admin' user can access cluster stats for OPS cluster
------------------------------------------
Running /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:274: executing 'test 200 = 200' expecting success...
SUCCESS after 0.009s: /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:274: executing 'test 200 = 200' expecting success
There was no output from the command.
There was no error output from the command.
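
The 'test 200 = 200' assertions only compare an HTTP status code captured by an earlier
request that is not shown in this excerpt. A sketch of the kind of request being
validated, assuming the admin user's OAuth token is accepted by the Elasticsearch
endpoint and that the exact URL used by logging.sh may differ:

    token=$(oc whoami -t)
    oc exec logging-es-data-master-8nzz83ik-1-cqnxl -n logging -- \
        curl -s -k -H "Authorization: Bearer $token" \
        -o /dev/null -w '%{http_code}' "https://localhost:9200/_cluster/health"
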
Running e2e tests
Checking installation of the EFK stack...
Running test/cluster/rollout.sh:20: executing 'oc project logging' expecting success...
SUCCESS after 0.256s: test/cluster/rollout.sh:20: executing 'oc project logging' expecting success
Standard output from the command:
Already on project "logging" on server "https://172.18.11.188:8443".

There was no error output from the command.
[INFO] Checking for DeploymentConfigurations...
Running test/cluster/rollout.sh:24: executing 'oc get deploymentconfig logging-kibana' expecting success...
SUCCESS after 0.243s: test/cluster/rollout.sh:24: executing 'oc get deploymentconfig logging-kibana' expecting success
Standard output from the command:
NAME             REVISION   DESIRED   CURRENT   TRIGGERED BY
logging-kibana   1          1         1         config

There was no error output from the command.
Running test/cluster/rollout.sh:25: executing 'oc rollout status deploymentconfig/logging-kibana' expecting success...
SUCCESS after 0.212s: test/cluster/rollout.sh:25: executing 'oc rollout status deploymentconfig/logging-kibana' expecting success
Standard output from the command:
replication controller "logging-kibana-1" successfully rolled out

There was no error output from the command.
Running test/cluster/rollout.sh:24: executing 'oc get deploymentconfig logging-curator' expecting success...
SUCCESS after 0.209s: test/cluster/rollout.sh:24: executing 'oc get deploymentconfig logging-curator' expecting success
Standard output from the command:
NAME              REVISION   DESIRED   CURRENT   TRIGGERED BY
logging-curator   1          1         1         config

There was no error output from the command.
Running test/cluster/rollout.sh:25: executing 'oc rollout status deploymentconfig/logging-curator' expecting success...
SUCCESS after 0.247s: test/cluster/rollout.sh:25: executing 'oc rollout status deploymentconfig/logging-curator' expecting success
Standard output from the command:
replication controller "logging-curator-1" successfully rolled out

There was no error output from the command.
Running test/cluster/rollout.sh:24: executing 'oc get deploymentconfig logging-kibana-ops' expecting success...
SUCCESS after 0.252s: test/cluster/rollout.sh:24: executing 'oc get deploymentconfig logging-kibana-ops' expecting success
Standard output from the command:
NAME                 REVISION   DESIRED   CURRENT   TRIGGERED BY
logging-kibana-ops   1          1         1         config

There was no error output from the command.
Running test/cluster/rollout.sh:25: executing 'oc rollout status deploymentconfig/logging-kibana-ops' expecting success...
SUCCESS after 0.208s: test/cluster/rollout.sh:25: executing 'oc rollout status deploymentconfig/logging-kibana-ops' expecting success
Standard output from the command:
replication controller "logging-kibana-ops-1" successfully rolled out

There was no error output from the command.
Running test/cluster/rollout.sh:24: executing 'oc get deploymentconfig logging-curator-ops' expecting success...
SUCCESS after 0.209s: test/cluster/rollout.sh:24: executing 'oc get deploymentconfig logging-curator-ops' expecting success
Standard output from the command:
NAME                  REVISION   DESIRED   CURRENT   TRIGGERED BY
logging-curator-ops   1          1         1         config

There was no error output from the command.
Running test/cluster/rollout.sh:25: executing 'oc rollout status deploymentconfig/logging-curator-ops' expecting success...
SUCCESS after 0.207s: test/cluster/rollout.sh:25: executing 'oc rollout status deploymentconfig/logging-curator-ops' expecting success
Standard output from the command:
replication controller "logging-curator-ops-1" successfully rolled out

There was no error output from the command.
Running test/cluster/rollout.sh:24: executing 'oc get deploymentconfig logging-es-data-master-8nzz83ik' expecting success...
SUCCESS after 0.319s: test/cluster/rollout.sh:24: executing 'oc get deploymentconfig logging-es-data-master-8nzz83ik' expecting success
Standard output from the command:
NAME                              REVISION   DESIRED   CURRENT   TRIGGERED BY
logging-es-data-master-8nzz83ik   1          1         1         config

There was no error output from the command.
Running test/cluster/rollout.sh:25: executing 'oc rollout status deploymentconfig/logging-es-data-master-8nzz83ik' expecting success...
SUCCESS after 0.212s: test/cluster/rollout.sh:25: executing 'oc rollout status deploymentconfig/logging-es-data-master-8nzz83ik' expecting success
Standard output from the command:
replication controller "logging-es-data-master-8nzz83ik-1" successfully rolled out

There was no error output from the command.
Running test/cluster/rollout.sh:24: executing 'oc get deploymentconfig logging-es-ops-data-master-xc2h70yx' expecting success...
SUCCESS after 0.262s: test/cluster/rollout.sh:24: executing 'oc get deploymentconfig logging-es-ops-data-master-xc2h70yx' expecting success
Standard output from the command:
NAME                                  REVISION   DESIRED   CURRENT   TRIGGERED BY
logging-es-ops-data-master-xc2h70yx   1          1         1         config

There was no error output from the command.
Running test/cluster/rollout.sh:25: executing 'oc rollout status deploymentconfig/logging-es-ops-data-master-xc2h70yx' expecting success...
SUCCESS after 0.216s: test/cluster/rollout.sh:25: executing 'oc rollout status deploymentconfig/logging-es-ops-data-master-xc2h70yx' expecting success
Standard output from the command:
replication controller "logging-es-ops-data-master-xc2h70yx-1" successfully rolled out

There was no error output from the command.
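
The rollout checks above repeat the same two commands for every DeploymentConfig in the
project. A compact equivalent, as a sketch:

    # verify every logging DC rolled out, in one loop
    for dc in $(oc get dc -n logging -o name); do
        oc rollout status "$dc" -n logging
    done
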
[INFO] Checking for Routes...
Running test/cluster/rollout.sh:30: executing 'oc get route logging-kibana' expecting success...
SUCCESS after 0.243s: test/cluster/rollout.sh:30: executing 'oc get route logging-kibana' expecting success
Standard output from the command:
NAME             HOST/PORT                                 PATH      SERVICES         PORT      TERMINATION          WILDCARD
logging-kibana   kibana.router.default.svc.cluster.local             logging-kibana   <all>     reencrypt/Redirect   None

There was no error output from the command.
Running test/cluster/rollout.sh:30: executing 'oc get route logging-kibana-ops' expecting success...
SUCCESS after 0.272s: test/cluster/rollout.sh:30: executing 'oc get route logging-kibana-ops' expecting success
Standard output from the command:
NAME                 HOST/PORT                                     PATH      SERVICES             PORT      TERMINATION          WILDCARD
logging-kibana-ops   kibana-ops.router.default.svc.cluster.local             logging-kibana-ops   <all>     reencrypt/Redirect   None

There was no error output from the command.
[INFO] Checking for Services...
Running test/cluster/rollout.sh:35: executing 'oc get service logging-es' expecting success...
SUCCESS after 0.236s: test/cluster/rollout.sh:35: executing 'oc get service logging-es' expecting success
Standard output from the command:
NAME         CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
logging-es   172.30.153.244   <none>        9200/TCP   2m

There was no error output from the command.
Running test/cluster/rollout.sh:35: executing 'oc get service logging-es-cluster' expecting success...
SUCCESS after 0.220s: test/cluster/rollout.sh:35: executing 'oc get service logging-es-cluster' expecting success
Standard output from the command:
NAME                 CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
logging-es-cluster   172.30.127.128   <none>        9300/TCP   2m

There was no error output from the command.
Running test/cluster/rollout.sh:35: executing 'oc get service logging-kibana' expecting success...
SUCCESS after 0.218s: test/cluster/rollout.sh:35: executing 'oc get service logging-kibana' expecting success
Standard output from the command:
NAME             CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
logging-kibana   172.30.175.25   <none>        443/TCP   1m

There was no error output from the command.
Running test/cluster/rollout.sh:35: executing 'oc get service logging-es-ops' expecting success...
SUCCESS after 0.211s: test/cluster/rollout.sh:35: executing 'oc get service logging-es-ops' expecting success
Standard output from the command:
NAME             CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
logging-es-ops   172.30.186.99   <none>        9200/TCP   1m

There was no error output from the command.
Running test/cluster/rollout.sh:35: executing 'oc get service logging-es-ops-cluster' expecting success...
SUCCESS after 0.227s: test/cluster/rollout.sh:35: executing 'oc get service logging-es-ops-cluster' expecting success
Standard output from the command:
NAME                     CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
logging-es-ops-cluster   172.30.195.113   <none>        9300/TCP   2m

There was no error output from the command.
Running test/cluster/rollout.sh:35: executing 'oc get service logging-kibana-ops' expecting success...
SUCCESS after 0.294s: test/cluster/rollout.sh:35: executing 'oc get service logging-kibana-ops' expecting success
Standard output from the command:
NAME                 CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
logging-kibana-ops   172.30.33.94   <none>        443/TCP   1m

There was no error output from the command.
[INFO] Checking for OAuthClients...
Running test/cluster/rollout.sh:40: executing 'oc get oauthclient kibana-proxy' expecting success...
SUCCESS after 0.264s: test/cluster/rollout.sh:40: executing 'oc get oauthclient kibana-proxy' expecting success
Standard output from the command:
NAME           SECRET                                                             WWW-CHALLENGE   REDIRECT URIS
kibana-proxy   ffXVCQUWroljVt9J99SHYKcwuJjjE7QtQEcCworxTgHEVX0WY84gE6lGskwzLUKd   FALSE           https://kibana.router.default.svc.cluster.local,https://kibana-ops.router.default.svc.cluster.local

There was no error output from the command.
[INFO] Checking for DaemonSets...
Running test/cluster/rollout.sh:45: executing 'oc get daemonset logging-fluentd' expecting success...
SUCCESS after 0.231s: test/cluster/rollout.sh:45: executing 'oc get daemonset logging-fluentd' expecting success
Standard output from the command:
NAME              DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE-SELECTOR                AGE
logging-fluentd   1         1         1         1            1           logging-infra-fluentd=true   1m

There was no error output from the command.
Running test/cluster/rollout.sh:47: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '1'; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.240s: test/cluster/rollout.sh:47: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '1'; re-trying every 0.2s until completion or 60.000s
Standard output from the command:
1
There was no error output from the command.
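[NOTE] The DaemonSet check first confirms logging-fluentd exists, then polls .status.numberReady with a jsonpath query until it reaches the expected count. A rough equivalent of the retry loop above, assuming a single schedulable node (so the expected count is 1):

  oc get daemonset logging-fluentd
  while [ "$(oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }')" != "1" ]; do
    sleep 0.2
  done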
Checking for log entry matches between ES and their sources...
WARNING: bridge-nf-call-ip6tables is disabled
Running test/cluster/functionality.sh:40: executing 'oc login --username=admin --password=admin' expecting success...
SUCCESS after 0.238s: test/cluster/functionality.sh:40: executing 'oc login --username=admin --password=admin' expecting success
Standard output from the command:
Login successful.

You have access to the following projects and can switch between them with 'oc project <projectname>':

    default
    kube-public
    kube-system
  * logging
    openshift
    openshift-infra

Using project "logging".

There was no error output from the command.
Running test/cluster/functionality.sh:44: executing 'oc login --username=system:admin' expecting success...
SUCCESS after 0.233s: test/cluster/functionality.sh:44: executing 'oc login --username=system:admin' expecting success
Standard output from the command:
Logged into "https://172.18.11.188:8443" as "system:admin" using existing credentials.

You have access to the following projects and can switch between them with 'oc project <projectname>':

    default
    kube-public
    kube-system
  * logging
    openshift
    openshift-infra

Using project "logging".

There was no error output from the command.
Running test/cluster/functionality.sh:45: executing 'oc project logging' expecting success...
SUCCESS after 0.220s: test/cluster/functionality.sh:45: executing 'oc project logging' expecting success
Standard output from the command:
Already on project "logging" on server "https://172.18.11.188:8443".

There was no error output from the command.
[INFO] Testing Kibana pod logging-kibana-1-fz6pt for a successful start...
Running test/cluster/functionality.sh:52: executing 'oc exec logging-kibana-1-fz6pt -c kibana -- curl -s --request HEAD --write-out '%{response_code}' http://localhost:5601/' expecting any result and text '200'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 120.324s: test/cluster/functionality.sh:52: executing 'oc exec logging-kibana-1-fz6pt -c kibana -- curl -s --request HEAD --write-out '%{response_code}' http://localhost:5601/' expecting any result and text '200'; re-trying every 0.2s until completion or 600.000s
Standard output from the command:
200
There was no error output from the command.
Running test/cluster/functionality.sh:53: executing 'oc get pod logging-kibana-1-fz6pt -o jsonpath='{ .status.containerStatuses[?(@.name=="kibana")].ready }'' expecting any result and text 'true'; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.218s: test/cluster/functionality.sh:53: executing 'oc get pod logging-kibana-1-fz6pt -o jsonpath='{ .status.containerStatuses[?(@.name=="kibana")].ready }'' expecting any result and text 'true'; re-trying every 0.2s until completion or 60.000s
Standard output from the command:
true
There was no error output from the command.
Running test/cluster/functionality.sh:54: executing 'oc get pod logging-kibana-1-fz6pt -o jsonpath='{ .status.containerStatuses[?(@.name=="kibana-proxy")].ready }'' expecting any result and text 'true'; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.227s: test/cluster/functionality.sh:54: executing 'oc get pod logging-kibana-1-fz6pt -o jsonpath='{ .status.containerStatuses[?(@.name=="kibana-proxy")].ready }'' expecting any result and text 'true'; re-trying every 0.2s until completion or 60.000s
Standard output from the command:
true
There was no error output from the command.
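[NOTE] The Kibana start check combines an in-container HTTP probe with the pod's reported container readiness. Roughly, for the pod used in this run:

  oc exec logging-kibana-1-fz6pt -c kibana -- \
    curl -s --request HEAD --write-out '%{response_code}' http://localhost:5601/
  oc get pod logging-kibana-1-fz6pt \
    -o jsonpath='{ .status.containerStatuses[?(@.name=="kibana")].ready }'
  oc get pod logging-kibana-1-fz6pt \
    -o jsonpath='{ .status.containerStatuses[?(@.name=="kibana-proxy")].ready }'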
[INFO] Testing Elasticsearch pod logging-es-data-master-8nzz83ik-1-cqnxl for a successful start...
Running test/cluster/functionality.sh:59: executing 'curl_es 'logging-es-data-master-8nzz83ik-1-cqnxl' '/' -X HEAD -w '%{response_code}'' expecting any result and text '200'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 0.408s: test/cluster/functionality.sh:59: executing 'curl_es 'logging-es-data-master-8nzz83ik-1-cqnxl' '/' -X HEAD -w '%{response_code}'' expecting any result and text '200'; re-trying every 0.2s until completion or 600.000s
Standard output from the command:
200
There was no error output from the command.
Running test/cluster/functionality.sh:60: executing 'oc get pod logging-es-data-master-8nzz83ik-1-cqnxl -o jsonpath='{ .status.containerStatuses[?(@.name=="elasticsearch")].ready }'' expecting any result and text 'true'; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.213s: test/cluster/functionality.sh:60: executing 'oc get pod logging-es-data-master-8nzz83ik-1-cqnxl -o jsonpath='{ .status.containerStatuses[?(@.name=="elasticsearch")].ready }'' expecting any result and text 'true'; re-trying every 0.2s until completion or 60.000s
Standard output from the command:
true
There was no error output from the command.
[INFO] Checking that Elasticsearch pod logging-es-data-master-8nzz83ik-1-cqnxl recovered its indices after starting...
Running test/cluster/functionality.sh:63: executing 'curl_es 'logging-es-data-master-8nzz83ik-1-cqnxl' '/_cluster/state/master_node' -w '%{response_code}'' expecting any result and text '}200$'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 0.359s: test/cluster/functionality.sh:63: executing 'curl_es 'logging-es-data-master-8nzz83ik-1-cqnxl' '/_cluster/state/master_node' -w '%{response_code}'' expecting any result and text '}200$'; re-trying every 0.2s until completion or 600.000s
Standard output from the command:
{"cluster_name":"logging-es","master_node":"eVBr3enUQ4O97NI2dC80vg"}200
There was no error output from the command.
[INFO] Elasticsearch pod logging-es-data-master-8nzz83ik-1-cqnxl is the master
[INFO] Checking that Elasticsearch pod logging-es-data-master-8nzz83ik-1-cqnxl has persisted indices created by Fluentd...
Running test/cluster/functionality.sh:76: executing 'curl_es 'logging-es-data-master-8nzz83ik-1-cqnxl' '/_cat/indices?h=index'' expecting any result and text '^(project|\.operations)\.'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 0.383s: test/cluster/functionality.sh:76: executing 'curl_es 'logging-es-data-master-8nzz83ik-1-cqnxl' '/_cat/indices?h=index'' expecting any result and text '^(project|\.operations)\.'; re-trying every 0.2s until completion or 600.000s
Standard output from the command:
.kibana.d033e22ae348aeb5660fc2140aec35850c4da997                
.kibana                                                         
.searchguard.logging-es-data-master-8nzz83ik-1-cqnxl            
project.logging.23305e21-4cb4-11e7-9445-0ecf874efb82.2017.06.09 
project.default.1f2e57ff-4cb4-11e7-9445-0ecf874efb82.2017.06.09 

There was no error output from the command.
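[NOTE] curl_es is a test helper that runs curl inside the Elasticsearch pod with the admin client certificate (its expansion is visible later in the test-es-copy trace). The master-node and index-recovery probes above amount to roughly:

  oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- \
    curl --silent --insecure \
         --key /etc/elasticsearch/secret/admin-key \
         --cert /etc/elasticsearch/secret/admin-cert \
         'https://localhost:9200/_cat/indices?h=index'

Fluentd-created project indices follow the project.<project-name>.<project-uid>.<YYYY.MM.DD> naming convention, while node and operations logs land in .operations.<YYYY.MM.DD> on the ops cluster.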
[INFO] Checking for index project.logging.23305e21-4cb4-11e7-9445-0ecf874efb82 with Kibana pod logging-kibana-1-fz6pt...
Running test/cluster/functionality.sh:100: executing 'sudo -E VERBOSE=true go run '/data/src/github.com/openshift/origin-aggregated-logging/hack/testing/check-logs.go' 'logging-kibana-1-fz6pt' 'logging-es:9200' 'project.logging.23305e21-4cb4-11e7-9445-0ecf874efb82' '/var/log/containers/*_23305e21-4cb4-11e7-9445-0ecf874efb82_*.log' '500' 'admin' 'Crd4jiyIpdNl9R2U0vU3SoAly1k7CJymRyts_zT7N0U' '127.0.0.1'' expecting success...
SUCCESS after 8.818s: test/cluster/functionality.sh:100: executing 'sudo -E VERBOSE=true go run '/data/src/github.com/openshift/origin-aggregated-logging/hack/testing/check-logs.go' 'logging-kibana-1-fz6pt' 'logging-es:9200' 'project.logging.23305e21-4cb4-11e7-9445-0ecf874efb82' '/var/log/containers/*_23305e21-4cb4-11e7-9445-0ecf874efb82_*.log' '500' 'admin' 'Crd4jiyIpdNl9R2U0vU3SoAly1k7CJymRyts_zT7N0U' '127.0.0.1'' expecting success
Standard output from the command:
Executing command [oc exec logging-kibana-1-fz6pt -- curl -s --key /etc/kibana/keys/key --cert /etc/kibana/keys/cert --cacert /etc/kibana/keys/ca -H 'X-Proxy-Remote-User: admin' -H 'Authorization: Bearer Crd4jiyIpdNl9R2U0vU3SoAly1k7CJymRyts_zT7N0U' -H 'X-Forwarded-For: 127.0.0.1' -XGET "https://logging-es:9200/project.logging.23305e21-4cb4-11e7-9445-0ecf874efb82.*/_search?q=hostname:ip-172-18-11-188&fields=message&size=500"]
Failure - no log entries found in Elasticsearch logging-es:9200 for index project.logging.23305e21-4cb4-11e7-9445-0ecf874efb82

There was no error output from the command.
[INFO] Checking for index project.default.1f2e57ff-4cb4-11e7-9445-0ecf874efb82 with Kibana pod logging-kibana-1-fz6pt...
Running test/cluster/functionality.sh:100: executing 'sudo -E VERBOSE=true go run '/data/src/github.com/openshift/origin-aggregated-logging/hack/testing/check-logs.go' 'logging-kibana-1-fz6pt' 'logging-es:9200' 'project.default.1f2e57ff-4cb4-11e7-9445-0ecf874efb82' '/var/log/containers/*_1f2e57ff-4cb4-11e7-9445-0ecf874efb82_*.log' '500' 'admin' 'Crd4jiyIpdNl9R2U0vU3SoAly1k7CJymRyts_zT7N0U' '127.0.0.1'' expecting success...
SUCCESS after 0.568s: test/cluster/functionality.sh:100: executing 'sudo -E VERBOSE=true go run '/data/src/github.com/openshift/origin-aggregated-logging/hack/testing/check-logs.go' 'logging-kibana-1-fz6pt' 'logging-es:9200' 'project.default.1f2e57ff-4cb4-11e7-9445-0ecf874efb82' '/var/log/containers/*_1f2e57ff-4cb4-11e7-9445-0ecf874efb82_*.log' '500' 'admin' 'Crd4jiyIpdNl9R2U0vU3SoAly1k7CJymRyts_zT7N0U' '127.0.0.1'' expecting success
Standard output from the command:
Executing command [oc exec logging-kibana-1-fz6pt -- curl -s --key /etc/kibana/keys/key --cert /etc/kibana/keys/cert --cacert /etc/kibana/keys/ca -H 'X-Proxy-Remote-User: admin' -H 'Authorization: Bearer Crd4jiyIpdNl9R2U0vU3SoAly1k7CJymRyts_zT7N0U' -H 'X-Forwarded-For: 127.0.0.1' -XGET "https://logging-es:9200/project.default.1f2e57ff-4cb4-11e7-9445-0ecf874efb82.*/_search?q=hostname:ip-172-18-11-188&fields=message&size=500"]
Failure - no log entries found in Elasticsearch logging-es:9200 for index project.default.1f2e57ff-4cb4-11e7-9445-0ecf874efb82

There was no error output from the command.
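[NOTE] check-logs.go compares what the collector wrote to Elasticsearch against the source files under /var/log/containers for the same project UID. Its query is issued from the Kibana pod with the proxy client certificate and the admin bearer token; a condensed form of the command echoed above (token and UID shown as placeholders here):

  oc exec logging-kibana-1-fz6pt -- curl -s \
    --key /etc/kibana/keys/key --cert /etc/kibana/keys/cert --cacert /etc/kibana/keys/ca \
    -H 'X-Proxy-Remote-User: admin' \
    -H "Authorization: Bearer <token>" \
    -H 'X-Forwarded-For: 127.0.0.1' \
    -XGET "https://logging-es:9200/project.logging.<uid>.*/_search?q=hostname:ip-172-18-11-188&fields=message&size=500"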
[INFO] Checking that Elasticsearch pod logging-es-data-master-8nzz83ik-1-cqnxl contains common data model index templates...
Running test/cluster/functionality.sh:105: executing 'oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- ls -1 /usr/share/elasticsearch/index_templates' expecting success...
SUCCESS after 0.304s: test/cluster/functionality.sh:105: executing 'oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- ls -1 /usr/share/elasticsearch/index_templates' expecting success
Standard output from the command:
com.redhat.viaq-openshift-operations.template.json
com.redhat.viaq-openshift-project.template.json

There was no error output from the command.
Running test/cluster/functionality.sh:107: executing 'curl_es 'logging-es-data-master-8nzz83ik-1-cqnxl' '/_template/com.redhat.viaq-openshift-operations.template.json' -X HEAD -w '%{response_code}'' expecting success and text '200'...
SUCCESS after 0.370s: test/cluster/functionality.sh:107: executing 'curl_es 'logging-es-data-master-8nzz83ik-1-cqnxl' '/_template/com.redhat.viaq-openshift-operations.template.json' -X HEAD -w '%{response_code}'' expecting success and text '200'
Standard output from the command:
200
There was no error output from the command.
Running test/cluster/functionality.sh:107: executing 'curl_es 'logging-es-data-master-8nzz83ik-1-cqnxl' '/_template/com.redhat.viaq-openshift-project.template.json' -X HEAD -w '%{response_code}'' expecting success and text '200'...
SUCCESS after 0.372s: test/cluster/functionality.sh:107: executing 'curl_es 'logging-es-data-master-8nzz83ik-1-cqnxl' '/_template/com.redhat.viaq-openshift-project.template.json' -X HEAD -w '%{response_code}'' expecting success and text '200'
Standard output from the command:
200
There was no error output from the command.
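[NOTE] The common data model check has two halves: list the template files shipped in the image, then confirm each one is actually registered in Elasticsearch by issuing a HEAD request against /_template/<name> and expecting a 200. Using the same helpers as above:

  oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- ls -1 /usr/share/elasticsearch/index_templates
  curl_es 'logging-es-data-master-8nzz83ik-1-cqnxl' \
    '/_template/com.redhat.viaq-openshift-operations.template.json' -X HEAD -w '%{response_code}'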
Running test/cluster/functionality.sh:40: executing 'oc login --username=admin --password=admin' expecting success...
SUCCESS after 0.238s: test/cluster/functionality.sh:40: executing 'oc login --username=admin --password=admin' expecting success
Standard output from the command:
Login successful.

You have access to the following projects and can switch between them with 'oc project <projectname>':

    default
    kube-public
    kube-system
  * logging
    openshift
    openshift-infra

Using project "logging".

There was no error output from the command.
Running test/cluster/functionality.sh:44: executing 'oc login --username=system:admin' expecting success...
SUCCESS after 0.268s: test/cluster/functionality.sh:44: executing 'oc login --username=system:admin' expecting success
Standard output from the command:
Logged into "https://172.18.11.188:8443" as "system:admin" using existing credentials.

You have access to the following projects and can switch between them with 'oc project <projectname>':

    default
    kube-public
    kube-system
  * logging
    openshift
    openshift-infra

Using project "logging".

There was no error output from the command.
Running test/cluster/functionality.sh:45: executing 'oc project logging' expecting success...
SUCCESS after 0.230s: test/cluster/functionality.sh:45: executing 'oc project logging' expecting success
Standard output from the command:
Already on project "logging" on server "https://172.18.11.188:8443".

There was no error output from the command.
[INFO] Testing Kibana pod logging-kibana-ops-1-tgf4p for a successful start...
Running test/cluster/functionality.sh:52: executing 'oc exec logging-kibana-ops-1-tgf4p -c kibana -- curl -s --request HEAD --write-out '%{response_code}' http://localhost:5601/' expecting any result and text '200'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 120.293s: test/cluster/functionality.sh:52: executing 'oc exec logging-kibana-ops-1-tgf4p -c kibana -- curl -s --request HEAD --write-out '%{response_code}' http://localhost:5601/' expecting any result and text '200'; re-trying every 0.2s until completion or 600.000s
Standard output from the command:
200
There was no error output from the command.
Running test/cluster/functionality.sh:53: executing 'oc get pod logging-kibana-ops-1-tgf4p -o jsonpath='{ .status.containerStatuses[?(@.name=="kibana")].ready }'' expecting any result and text 'true'; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.214s: test/cluster/functionality.sh:53: executing 'oc get pod logging-kibana-ops-1-tgf4p -o jsonpath='{ .status.containerStatuses[?(@.name=="kibana")].ready }'' expecting any result and text 'true'; re-trying every 0.2s until completion or 60.000s
Standard output from the command:
true
There was no error output from the command.
Running test/cluster/functionality.sh:54: executing 'oc get pod logging-kibana-ops-1-tgf4p -o jsonpath='{ .status.containerStatuses[?(@.name=="kibana-proxy")].ready }'' expecting any result and text 'true'; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.227s: test/cluster/functionality.sh:54: executing 'oc get pod logging-kibana-ops-1-tgf4p -o jsonpath='{ .status.containerStatuses[?(@.name=="kibana-proxy")].ready }'' expecting any result and text 'true'; re-trying every 0.2s until completion or 60.000s
Standard output from the command:
true
There was no error output from the command.
[INFO] Testing Elasticsearch pod logging-es-ops-data-master-xc2h70yx-1-08w7b for a successful start...
Running test/cluster/functionality.sh:59: executing 'curl_es 'logging-es-ops-data-master-xc2h70yx-1-08w7b' '/' -X HEAD -w '%{response_code}'' expecting any result and text '200'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 0.383s: test/cluster/functionality.sh:59: executing 'curl_es 'logging-es-ops-data-master-xc2h70yx-1-08w7b' '/' -X HEAD -w '%{response_code}'' expecting any result and text '200'; re-trying every 0.2s until completion or 600.000s
Standard output from the command:
200
There was no error output from the command.
Running test/cluster/functionality.sh:60: executing 'oc get pod logging-es-ops-data-master-xc2h70yx-1-08w7b -o jsonpath='{ .status.containerStatuses[?(@.name=="elasticsearch")].ready }'' expecting any result and text 'true'; re-trying every 0.2s until completion or 60.000s...
SUCCESS after 0.212s: test/cluster/functionality.sh:60: executing 'oc get pod logging-es-ops-data-master-xc2h70yx-1-08w7b -o jsonpath='{ .status.containerStatuses[?(@.name=="elasticsearch")].ready }'' expecting any result and text 'true'; re-trying every 0.2s until completion or 60.000s
Standard output from the command:
true
There was no error output from the command.
[INFO] Checking that Elasticsearch pod logging-es-ops-data-master-xc2h70yx-1-08w7b recovered its indices after starting...
Running test/cluster/functionality.sh:63: executing 'curl_es 'logging-es-ops-data-master-xc2h70yx-1-08w7b' '/_cluster/state/master_node' -w '%{response_code}'' expecting any result and text '}200$'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 0.362s: test/cluster/functionality.sh:63: executing 'curl_es 'logging-es-ops-data-master-xc2h70yx-1-08w7b' '/_cluster/state/master_node' -w '%{response_code}'' expecting any result and text '}200$'; re-trying every 0.2s until completion or 600.000s
Standard output from the command:
{"cluster_name":"logging-es-ops","master_node":"TzAYajVlR-6HC3MYnA6H-A"}200
There was no error output from the command.
[INFO] Elasticsearch pod logging-es-ops-data-master-xc2h70yx-1-08w7b is the master
[INFO] Checking that Elasticsearch pod logging-es-ops-data-master-xc2h70yx-1-08w7b has persisted indices created by Fluentd...
Running test/cluster/functionality.sh:76: executing 'curl_es 'logging-es-ops-data-master-xc2h70yx-1-08w7b' '/_cat/indices?h=index'' expecting any result and text '^(project|\.operations)\.'; re-trying every 0.2s until completion or 600.000s...
SUCCESS after 0.376s: test/cluster/functionality.sh:76: executing 'curl_es 'logging-es-ops-data-master-xc2h70yx-1-08w7b' '/_cat/indices?h=index'' expecting any result and text '^(project|\.operations)\.'; re-trying every 0.2s until completion or 600.000s
Standard output from the command:
.kibana.d033e22ae348aeb5660fc2140aec35850c4da997         
.operations.2017.06.09                                   
.searchguard.logging-es-ops-data-master-xc2h70yx-1-08w7b 
.kibana                                                  

There was no error output from the command.
[INFO] Checking for index .operations with Kibana pod logging-kibana-ops-1-tgf4p...
Running test/cluster/functionality.sh:100: executing 'sudo -E VERBOSE=true go run '/data/src/github.com/openshift/origin-aggregated-logging/hack/testing/check-logs.go' 'logging-kibana-ops-1-tgf4p' 'logging-es-ops:9200' '.operations' '/var/log/messages' '500' 'admin' 'iLeDHiDs3YqpHXN3xzMf6v-0mHpUmY5SMTS0urpr9PI' '127.0.0.1'' expecting success...
SUCCESS after 0.866s: test/cluster/functionality.sh:100: executing 'sudo -E VERBOSE=true go run '/data/src/github.com/openshift/origin-aggregated-logging/hack/testing/check-logs.go' 'logging-kibana-ops-1-tgf4p' 'logging-es-ops:9200' '.operations' '/var/log/messages' '500' 'admin' 'iLeDHiDs3YqpHXN3xzMf6v-0mHpUmY5SMTS0urpr9PI' '127.0.0.1'' expecting success
Standard output from the command:
Executing command [oc exec logging-kibana-ops-1-tgf4p -- curl -s --key /etc/kibana/keys/key --cert /etc/kibana/keys/cert --cacert /etc/kibana/keys/ca -H 'X-Proxy-Remote-User: admin' -H 'Authorization: Bearer iLeDHiDs3YqpHXN3xzMf6v-0mHpUmY5SMTS0urpr9PI' -H 'X-Forwarded-For: 127.0.0.1' -XGET "https://logging-es-ops:9200/.operations.*/_search?q=hostname:ip-172-18-11-188&fields=message&size=500"]
Failure - no log entries found in Elasticsearch logging-es-ops:9200 for index .operations

There was no error output from the command.
[INFO] Checking that Elasticsearch pod logging-es-ops-data-master-xc2h70yx-1-08w7b contains common data model index templates...
Running test/cluster/functionality.sh:105: executing 'oc exec logging-es-ops-data-master-xc2h70yx-1-08w7b -- ls -1 /usr/share/elasticsearch/index_templates' expecting success...
SUCCESS after 0.367s: test/cluster/functionality.sh:105: executing 'oc exec logging-es-ops-data-master-xc2h70yx-1-08w7b -- ls -1 /usr/share/elasticsearch/index_templates' expecting success
Standard output from the command:
com.redhat.viaq-openshift-operations.template.json
com.redhat.viaq-openshift-project.template.json

There was no error output from the command.
Running test/cluster/functionality.sh:107: executing 'curl_es 'logging-es-ops-data-master-xc2h70yx-1-08w7b' '/_template/com.redhat.viaq-openshift-operations.template.json' -X HEAD -w '%{response_code}'' expecting success and text '200'...
SUCCESS after 0.463s: test/cluster/functionality.sh:107: executing 'curl_es 'logging-es-ops-data-master-xc2h70yx-1-08w7b' '/_template/com.redhat.viaq-openshift-operations.template.json' -X HEAD -w '%{response_code}'' expecting success and text '200'
Standard output from the command:
200
There was no error output from the command.
Running test/cluster/functionality.sh:107: executing 'curl_es 'logging-es-ops-data-master-xc2h70yx-1-08w7b' '/_template/com.redhat.viaq-openshift-project.template.json' -X HEAD -w '%{response_code}'' expecting success and text '200'...
SUCCESS after 0.360s: test/cluster/functionality.sh:107: executing 'curl_es 'logging-es-ops-data-master-xc2h70yx-1-08w7b' '/_template/com.redhat.viaq-openshift-project.template.json' -X HEAD -w '%{response_code}'' expecting success and text '200'
Standard output from the command:
200
There was no error output from the command.
running test test-curator.sh
configmap "logging-curator" deleted
configmap "logging-curator" created
deploymentconfig "logging-curator" scaled
deploymentconfig "logging-curator" scaled
configmap "logging-curator" deleted
configmap "logging-curator" created
deploymentconfig "logging-curator" scaled
deploymentconfig "logging-curator" scaled
configmap "logging-curator" deleted
configmap "logging-curator" created
deploymentconfig "logging-curator" scaled
deploymentconfig "logging-curator" scaled
current indices before 1st deletion are:
.kibana
.kibana.d033e22ae348aeb5660fc2140aec35850c4da997
.operations.curatortest.2017.03.31
.operations.curatortest.2017.06.09
.searchguard.logging-es-data-master-8nzz83ik-1-cqnxl
default-index.curatortest.2017.05.09
default-index.curatortest.2017.06.09
project.default.1f2e57ff-4cb4-11e7-9445-0ecf874efb82.2017.06.09
project.logging.23305e21-4cb4-11e7-9445-0ecf874efb82.2017.06.09
project.project-dev.curatortest.2017.06.08
project.project-dev.curatortest.2017.06.09
project.project-prod.curatortest.2017.05.12
project.project-prod.curatortest.2017.06.09
project.project-qe.curatortest.2017.06.02
project.project-qe.curatortest.2017.06.09
project.project2-qe.curatortest.2017.06.02
project.project2-qe.curatortest.2017.06.09
project.project3-qe.curatortest.2017.06.02
project.project3-qe.curatortest.2017.06.09
Fri Jun  9 02:01:53 UTC 2017
configmap "logging-curator" deleted
configmap "logging-curator" created
deploymentconfig "logging-curator" scaled
deploymentconfig "logging-curator" scaled
current indices after 1st deletion are:
.kibana
.kibana.d033e22ae348aeb5660fc2140aec35850c4da997
.operations.curatortest.2017.06.09
.searchguard.logging-es-data-master-8nzz83ik-1-cqnxl
default-index.curatortest.2017.06.09
project.default.1f2e57ff-4cb4-11e7-9445-0ecf874efb82.2017.06.09
project.logging.23305e21-4cb4-11e7-9445-0ecf874efb82.2017.06.09
project.project-dev.curatortest.2017.06.09
project.project-prod.curatortest.2017.06.09
project.project-qe.curatortest.2017.06.09
project.project2-qe.curatortest.2017.06.09
project.project3-qe.curatortest.2017.06.09
good - index project.project-dev.curatortest.2017.06.09 is present
good - index project.project-dev.curatortest.2017.06.08 is missing
good - index project.project-qe.curatortest.2017.06.09 is present
good - index project.project-qe.curatortest.2017.06.02 is missing
good - index project.project-prod.curatortest.2017.06.09 is present
good - index project.project-prod.curatortest.2017.05.12 is missing
good - index .operations.curatortest.2017.06.09 is present
good - index .operations.curatortest.2017.03.31 is missing
good - index default-index.curatortest.2017.06.09 is present
good - index default-index.curatortest.2017.05.09 is missing
good - index project.project2-qe.curatortest.2017.06.09 is present
good - index project.project2-qe.curatortest.2017.06.02 is missing
good - index project.project3-qe.curatortest.2017.06.09 is present
good - index project.project3-qe.curatortest.2017.06.02 is missing
current indices before 2nd deletion are:
.kibana
.kibana.d033e22ae348aeb5660fc2140aec35850c4da997
.operations.curatortest.2017.03.31
.operations.curatortest.2017.06.09
.searchguard.logging-es-data-master-8nzz83ik-1-cqnxl
default-index.curatortest.2017.05.09
default-index.curatortest.2017.06.09
project.default.1f2e57ff-4cb4-11e7-9445-0ecf874efb82.2017.06.09
project.logging.23305e21-4cb4-11e7-9445-0ecf874efb82.2017.06.09
project.project-dev.curatortest.2017.06.08
project.project-dev.curatortest.2017.06.09
project.project-prod.curatortest.2017.05.12
project.project-prod.curatortest.2017.06.09
project.project-qe.curatortest.2017.06.02
project.project-qe.curatortest.2017.06.09
project.project2-qe.curatortest.2017.06.02
project.project2-qe.curatortest.2017.06.09
project.project3-qe.curatortest.2017.06.02
project.project3-qe.curatortest.2017.06.09
sleeping 219 seconds to see if runhour and runminute are working . . .
verify indices deletion again
current indices after 2nd deletion are:
.kibana
.kibana.d033e22ae348aeb5660fc2140aec35850c4da997
.operations.curatortest.2017.06.09
.searchguard.logging-es-data-master-8nzz83ik-1-cqnxl
default-index.curatortest.2017.06.09
project.default.1f2e57ff-4cb4-11e7-9445-0ecf874efb82.2017.06.09
project.logging.23305e21-4cb4-11e7-9445-0ecf874efb82.2017.06.09
project.project-dev.curatortest.2017.06.09
project.project-prod.curatortest.2017.06.09
project.project-qe.curatortest.2017.06.09
project.project2-qe.curatortest.2017.06.09
project.project3-qe.curatortest.2017.06.09
good - index project.project-dev.curatortest.2017.06.09 is present
good - index project.project-dev.curatortest.2017.06.08 is missing
good - index project.project-qe.curatortest.2017.06.09 is present
good - index project.project-qe.curatortest.2017.06.02 is missing
good - index project.project-prod.curatortest.2017.06.09 is present
good - index project.project-prod.curatortest.2017.05.12 is missing
good - index .operations.curatortest.2017.06.09 is present
good - index .operations.curatortest.2017.03.31 is missing
good - index default-index.curatortest.2017.06.09 is present
good - index default-index.curatortest.2017.05.09 is missing
good - index project.project2-qe.curatortest.2017.06.09 is present
good - index project.project2-qe.curatortest.2017.06.02 is missing
good - index project.project3-qe.curatortest.2017.06.09 is present
good - index project.project3-qe.curatortest.2017.06.02 is missing
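[NOTE] The curator verification above reduces to membership tests against the index list: after each curator run, the current curatortest index for every project must still be present and the back-dated one must be gone. A rough sketch of that check, assuming ${indices} holds the output of the _cat/indices?h=index query (not the exact test script, just the shape of it):

  check_index() {                       # $1 = index name, $2 = present|missing
    local state=missing
    if echo "${indices}" | sed 's/ *$//' | grep -qxF "${1}"; then
      state=present
    fi
    if [ "${state}" = "${2}" ]; then
      echo "good - index ${1} is ${state}"
    else
      echo "fail - index ${1} is ${state}, expected ${2}"
      return 1
    fi
  }
  check_index project.project-dev.curatortest.2017.06.09 present
  check_index project.project-dev.curatortest.2017.06.08 missing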
current indices before 1st deletion are:
.kibana
.kibana.d033e22ae348aeb5660fc2140aec35850c4da997
.operations.2017.06.09
.operations.curatortest.2017.03.31
.operations.curatortest.2017.06.09
.searchguard.logging-es-ops-data-master-xc2h70yx-1-08w7b
default-index.curatortest.2017.05.09
default-index.curatortest.2017.06.09
project.project-dev.curatortest.2017.06.08
project.project-dev.curatortest.2017.06.09
project.project-prod.curatortest.2017.05.12
project.project-prod.curatortest.2017.06.09
project.project-qe.curatortest.2017.06.02
project.project-qe.curatortest.2017.06.09
project.project2-qe.curatortest.2017.06.02
project.project2-qe.curatortest.2017.06.09
project.project3-qe.curatortest.2017.06.02
project.project3-qe.curatortest.2017.06.09
Fri Jun  9 02:07:18 UTC 2017
configmap "logging-curator" deleted
configmap "logging-curator" created
deploymentconfig "logging-curator-ops" scaled
deploymentconfig "logging-curator-ops" scaled
current indices after 1st deletion are:
.kibana
.kibana.d033e22ae348aeb5660fc2140aec35850c4da997
.operations.2017.06.09
.operations.curatortest.2017.06.09
.searchguard.logging-es-ops-data-master-xc2h70yx-1-08w7b
default-index.curatortest.2017.06.09
project.project-dev.curatortest.2017.06.09
project.project-prod.curatortest.2017.06.09
project.project-qe.curatortest.2017.06.09
project.project2-qe.curatortest.2017.06.09
project.project3-qe.curatortest.2017.06.09
good - index project.project-dev.curatortest.2017.06.09 is present
good - index project.project-dev.curatortest.2017.06.08 is missing
good - index project.project-qe.curatortest.2017.06.09 is present
good - index project.project-qe.curatortest.2017.06.02 is missing
good - index project.project-prod.curatortest.2017.06.09 is present
good - index project.project-prod.curatortest.2017.05.12 is missing
good - index .operations.curatortest.2017.06.09 is present
good - index .operations.curatortest.2017.03.31 is missing
good - index default-index.curatortest.2017.06.09 is present
good - index default-index.curatortest.2017.05.09 is missing
good - index project.project2-qe.curatortest.2017.06.09 is present
good - index project.project2-qe.curatortest.2017.06.02 is missing
good - index project.project3-qe.curatortest.2017.06.09 is present
good - index project.project3-qe.curatortest.2017.06.02 is missing
current indices before 2nd deletion are:
.kibana
.kibana.d033e22ae348aeb5660fc2140aec35850c4da997
.operations.2017.06.09
.operations.curatortest.2017.03.31
.operations.curatortest.2017.06.09
.searchguard.logging-es-ops-data-master-xc2h70yx-1-08w7b
default-index.curatortest.2017.05.09
default-index.curatortest.2017.06.09
project.project-dev.curatortest.2017.06.08
project.project-dev.curatortest.2017.06.09
project.project-prod.curatortest.2017.05.12
project.project-prod.curatortest.2017.06.09
project.project-qe.curatortest.2017.06.02
project.project-qe.curatortest.2017.06.09
project.project2-qe.curatortest.2017.06.02
project.project2-qe.curatortest.2017.06.09
project.project3-qe.curatortest.2017.06.02
project.project3-qe.curatortest.2017.06.09
sleeping 220 seconds to see if runhour and runminute are working . . .
verify indices deletion again
current indices after 2nd deletion are:
.kibana
.kibana.d033e22ae348aeb5660fc2140aec35850c4da997
.operations.2017.06.09
.operations.curatortest.2017.06.09
.searchguard.logging-es-ops-data-master-xc2h70yx-1-08w7b
default-index.curatortest.2017.06.09
project.project-dev.curatortest.2017.06.09
project.project-prod.curatortest.2017.06.09
project.project-qe.curatortest.2017.06.09
project.project2-qe.curatortest.2017.06.09
project.project3-qe.curatortest.2017.06.09
good - index project.project-dev.curatortest.2017.06.09 is present
good - index project.project-dev.curatortest.2017.06.08 is missing
good - index project.project-qe.curatortest.2017.06.09 is present
good - index project.project-qe.curatortest.2017.06.02 is missing
good - index project.project-prod.curatortest.2017.06.09 is present
good - index project.project-prod.curatortest.2017.05.12 is missing
good - index .operations.curatortest.2017.06.09 is present
good - index .operations.curatortest.2017.03.31 is missing
good - index default-index.curatortest.2017.06.09 is present
good - index default-index.curatortest.2017.05.09 is missing
good - index project.project2-qe.curatortest.2017.06.09 is present
good - index project.project2-qe.curatortest.2017.06.02 is missing
good - index project.project3-qe.curatortest.2017.06.09 is present
good - index project.project3-qe.curatortest.2017.06.02 is missing
curator running [5] jobs
curator run finish
curator running [5] jobs
curator run finish
configmap "logging-curator" deleted
configmap "logging-curator" created
deploymentconfig "logging-curator-ops" scaled
deploymentconfig "logging-curator-ops" scaled
running test test-datetime-future.sh
++ set -o nounset
++ set -o pipefail
++ type get_running_pod
++ [[ 1 -ne 1 ]]
++ [[ true = \f\a\l\s\e ]]
++ CLUSTER=true
++ ops=-ops
++ INDEX_PREFIX=
++ ARTIFACT_DIR=/tmp/origin-aggregated-logging/artifacts
++ '[' '!' -d /tmp/origin-aggregated-logging/artifacts ']'
++ get_test_user_token
++ local current_project
+++ oc project -q
++ current_project=logging
++ oc login --username=admin --password=admin
+++ oc whoami -t
++ test_token=iSQxKTNfto0eNfQxbJrA2uE19np-tQxLAq1WGv2N_Xw
+++ oc whoami
++ test_name=admin
++ test_ip=127.0.0.1
++ oc login --username=system:admin
++ oc project logging
++ TEST_DIVIDER=------------------------------------------
+++ get_running_pod fluentd
+++ oc get pods -l component=fluentd
+++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}'
++ fpod=logging-fluentd-cx4bm
+++ date +%z
++ nodetz=-0400
+++ oc exec logging-fluentd-cx4bm -- date +%z
++ podtz=-0400
++ '[' x-0400 = x-0400 ']'
+++ date +%Z
Good - node timezone -0400 EDT is equal to the fluentd pod timezone
++ echo Good - node timezone -0400 EDT is equal to the fluentd pod timezone
++ docker_uses_journal
++ type -p docker
++ sudo docker info
++ grep -q 'Logging Driver: journald'
WARNING: bridge-nf-call-ip6tables is disabled
++ return 0
++ echo The rest of the test is not applicable when using the journal - skipping
The rest of the test is not applicable when using the journal - skipping
++ exit 0
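[NOTE] test-datetime-future.sh only exercises file-based container logs. It first confirms the node and the fluentd pod agree on the timezone, then skips entirely when Docker is using the journald logging driver, as traced above. The skip logic is roughly:

  nodetz=$(date +%z)
  podtz=$(oc exec logging-fluentd-cx4bm -- date +%z)
  [ "x${nodetz}" = "x${podtz}" ] && echo "Good - node timezone ${nodetz} matches the fluentd pod timezone"
  if sudo docker info | grep -q 'Logging Driver: journald'; then
    echo "The rest of the test is not applicable when using the journal - skipping"
    exit 0
  fi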
running test test-es-copy.sh
++ set -o nounset
++ set -o pipefail
++ type get_running_pod
++ [[ 1 -ne 1 ]]
++ [[ true = \f\a\l\s\e ]]
++ CLUSTER=true
++ ops=-ops
++ INDEX_PREFIX=
++ PROJ_PREFIX=project.
++ ARTIFACT_DIR=/tmp/origin-aggregated-logging/artifacts
++ '[' '!' -d /tmp/origin-aggregated-logging/artifacts ']'
++ get_test_user_token
++ local current_project
+++ oc project -q
++ current_project=logging
++ oc login --username=admin --password=admin
+++ oc whoami -t
++ test_token=ZfAtSsUNb8VqzcuHvkuovh4HUv_u9A8r8-rx86x1JzQ
+++ oc whoami
++ test_name=admin
++ test_ip=127.0.0.1
++ oc login --username=system:admin
++ oc project logging
++ TEST_DIVIDER=------------------------------------------
+++ get_running_pod fluentd
+++ oc get pods -l component=fluentd
+++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}'
++ fpod=logging-fluentd-cx4bm
+++ mktemp
++ cfg=/tmp/tmp.5y3lSWqQDy
++ oc get template logging-fluentd-template -o yaml
++ sed '/- name: ES_COPY/,/value:/ s/value: .*$/value: "false"/'
++ oc replace -f -
template "logging-fluentd-template" replaced
++ restart_fluentd
++ oc delete daemonset logging-fluentd
daemonset "logging-fluentd" deleted
++ wait_for_pod_ACTION stop logging-fluentd-cx4bm
++ local ii=120
++ local incr=10
++ '[' stop = start ']'
++ curpod=logging-fluentd-cx4bm
++ '[' -z logging-fluentd-cx4bm -a -n '' ']'
++ '[' 120 -gt 0 ']'
++ '[' stop = stop ']'
++ oc describe pod/logging-fluentd-cx4bm
++ '[' stop = start ']'
++ break
++ '[' 120 -le 0 ']'
++ return 0
++ oc process logging-fluentd-template
++ oc create -f -
daemonset "logging-fluentd" created
++ wait_for_pod_ACTION start fluentd
++ local ii=120
++ local incr=10
++ '[' start = start ']'
+++ get_running_pod fluentd
+++ oc get pods -l component=fluentd
+++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}'
No resources found.
++ curpod=
++ '[' 120 -gt 0 ']'
++ '[' start = stop ']'
++ '[' start = start ']'
++ '[' -z '' ']'
++ '[' -n '' ']'
++ '[' -n 1 ']'
++ echo pod for component=fluentd not running yet
pod for component=fluentd not running yet
++ sleep 10
+++ expr 120 - 10
++ ii=110
++ '[' start = start ']'
+++ get_running_pod fluentd
+++ oc get pods -l component=fluentd
+++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}'
++ curpod=logging-fluentd-z72q1
++ '[' 110 -gt 0 ']'
++ '[' start = stop ']'
++ '[' start = start ']'
++ '[' -z logging-fluentd-z72q1 ']'
++ break
++ '[' 110 -le 0 ']'
++ return 0
+++ get_running_pod fluentd
+++ oc get pods -l component=fluentd
+++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}'
++ fpod=logging-fluentd-z72q1
+++ mktemp
++ origconfig=/tmp/tmp.iuQBaNy4QN
++ oc get template logging-fluentd-template -o yaml
++ write_and_verify_logs 1
++ rc=0
++ wait_for_fluentd_to_catch_up '' '' 1
+++ date +%s
++ local starttime=1496974674
+++ date -u --rfc-3339=ns
START wait_for_fluentd_to_catch_up at 2017-06-09 02:17:54.111875251+00:00
++ echo START wait_for_fluentd_to_catch_up at 2017-06-09 02:17:54.111875251+00:00
+++ get_running_pod es
+++ oc get pods -l component=es
+++ awk -v sel=es '$1 ~ sel && $3 == "Running" {print $1}'
++ local es_pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ get_running_pod es-ops
+++ oc get pods -l component=es-ops
+++ awk -v sel=es-ops '$1 ~ sel && $3 == "Running" {print $1}'
++ local es_ops_pod=logging-es-ops-data-master-xc2h70yx-1-08w7b
++ '[' -z logging-es-ops-data-master-xc2h70yx-1-08w7b ']'
+++ uuidgen
++ local uuid_es=649cd48e-3fcb-41c0-86a4-490f1eadcc4c
+++ uuidgen
++ local uuid_es_ops=8cea7dc3-6d35-4679-8774-4486223e8bc4
++ local expected=1
++ local timeout=300
++ add_test_message 649cd48e-3fcb-41c0-86a4-490f1eadcc4c
+++ get_running_pod kibana
+++ oc get pods -l component=kibana
+++ awk -v sel=kibana '$1 ~ sel && $3 == "Running" {print $1}'
++ local kib_pod=logging-kibana-1-fz6pt
++ oc exec logging-kibana-1-fz6pt -c kibana -- curl --connect-timeout 1 -s http://localhost:5601/649cd48e-3fcb-41c0-86a4-490f1eadcc4c
added es message 649cd48e-3fcb-41c0-86a4-490f1eadcc4c
++ echo added es message 649cd48e-3fcb-41c0-86a4-490f1eadcc4c
++ logger -i -p local6.info -t 8cea7dc3-6d35-4679-8774-4486223e8bc4 8cea7dc3-6d35-4679-8774-4486223e8bc4
added es-ops message 8cea7dc3-6d35-4679-8774-4486223e8bc4
++ echo added es-ops message 8cea7dc3-6d35-4679-8774-4486223e8bc4
++ local rc=0
++ espod=logging-es-data-master-8nzz83ik-1-cqnxl
++ myproject=project.logging
++ mymessage=649cd48e-3fcb-41c0-86a4-490f1eadcc4c
++ expected=1
++ wait_until_cmd_or_err test_count_expected test_count_err 300
++ let ii=300
++ local interval=1
++ '[' 300 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ get_count_from_json
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 649cd48e-3fcb-41c0-86a4-490f1eadcc4c
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:649cd48e-3fcb-41c0-86a4-490f1eadcc4c' --connect-timeout 1
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:649cd48e-3fcb-41c0-86a4-490f1eadcc4c'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:649cd48e-3fcb-41c0-86a4-490f1eadcc4c'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 299 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 649cd48e-3fcb-41c0-86a4-490f1eadcc4c
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:649cd48e-3fcb-41c0-86a4-490f1eadcc4c' --connect-timeout 1
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local 'endpoint=/project.logging*/_count?q=message:649cd48e-3fcb-41c0-86a4-490f1eadcc4c'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:649cd48e-3fcb-41c0-86a4-490f1eadcc4c'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 298 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 649cd48e-3fcb-41c0-86a4-490f1eadcc4c
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:649cd48e-3fcb-41c0-86a4-490f1eadcc4c' --connect-timeout 1
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local 'endpoint=/project.logging*/_count?q=message:649cd48e-3fcb-41c0-86a4-490f1eadcc4c'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:649cd48e-3fcb-41c0-86a4-490f1eadcc4c'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 297 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 649cd48e-3fcb-41c0-86a4-490f1eadcc4c
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:649cd48e-3fcb-41c0-86a4-490f1eadcc4c' --connect-timeout 1
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:649cd48e-3fcb-41c0-86a4-490f1eadcc4c'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:649cd48e-3fcb-41c0-86a4-490f1eadcc4c'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 296 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 649cd48e-3fcb-41c0-86a4-490f1eadcc4c
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:649cd48e-3fcb-41c0-86a4-490f1eadcc4c' --connect-timeout 1
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:649cd48e-3fcb-41c0-86a4-490f1eadcc4c'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:649cd48e-3fcb-41c0-86a4-490f1eadcc4c'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 295 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 649cd48e-3fcb-41c0-86a4-490f1eadcc4c
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:649cd48e-3fcb-41c0-86a4-490f1eadcc4c' --connect-timeout 1
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local 'endpoint=/project.logging*/_count?q=message:649cd48e-3fcb-41c0-86a4-490f1eadcc4c'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:649cd48e-3fcb-41c0-86a4-490f1eadcc4c'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 294 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 649cd48e-3fcb-41c0-86a4-490f1eadcc4c
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:649cd48e-3fcb-41c0-86a4-490f1eadcc4c' --connect-timeout 1
+++ get_count_from_json
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:649cd48e-3fcb-41c0-86a4-490f1eadcc4c'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:649cd48e-3fcb-41c0-86a4-490f1eadcc4c'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 293 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 649cd48e-3fcb-41c0-86a4-490f1eadcc4c
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:649cd48e-3fcb-41c0-86a4-490f1eadcc4c' --connect-timeout 1
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:649cd48e-3fcb-41c0-86a4-490f1eadcc4c'
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:649cd48e-3fcb-41c0-86a4-490f1eadcc4c'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 292 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 649cd48e-3fcb-41c0-86a4-490f1eadcc4c
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:649cd48e-3fcb-41c0-86a4-490f1eadcc4c' --connect-timeout 1
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:649cd48e-3fcb-41c0-86a4-490f1eadcc4c'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:649cd48e-3fcb-41c0-86a4-490f1eadcc4c'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 291 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 649cd48e-3fcb-41c0-86a4-490f1eadcc4c
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:649cd48e-3fcb-41c0-86a4-490f1eadcc4c' --connect-timeout 1
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local 'endpoint=/project.logging*/_count?q=message:649cd48e-3fcb-41c0-86a4-490f1eadcc4c'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:649cd48e-3fcb-41c0-86a4-490f1eadcc4c'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 290 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 649cd48e-3fcb-41c0-86a4-490f1eadcc4c
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:649cd48e-3fcb-41c0-86a4-490f1eadcc4c' --connect-timeout 1
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:649cd48e-3fcb-41c0-86a4-490f1eadcc4c'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:649cd48e-3fcb-41c0-86a4-490f1eadcc4c'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 289 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 649cd48e-3fcb-41c0-86a4-490f1eadcc4c
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:649cd48e-3fcb-41c0-86a4-490f1eadcc4c' --connect-timeout 1
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:649cd48e-3fcb-41c0-86a4-490f1eadcc4c'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:649cd48e-3fcb-41c0-86a4-490f1eadcc4c'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 288 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 649cd48e-3fcb-41c0-86a4-490f1eadcc4c
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:649cd48e-3fcb-41c0-86a4-490f1eadcc4c' --connect-timeout 1
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:649cd48e-3fcb-41c0-86a4-490f1eadcc4c'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:649cd48e-3fcb-41c0-86a4-490f1eadcc4c'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 287 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 649cd48e-3fcb-41c0-86a4-490f1eadcc4c
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:649cd48e-3fcb-41c0-86a4-490f1eadcc4c' --connect-timeout 1
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:649cd48e-3fcb-41c0-86a4-490f1eadcc4c'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:649cd48e-3fcb-41c0-86a4-490f1eadcc4c'
++ local nrecs=1
++ test 1 = 1
++ break
++ '[' 287 -le 0 ']'
++ return 0
++ echo good - wait_for_fluentd_to_catch_up: found 1 record project logging for 649cd48e-3fcb-41c0-86a4-490f1eadcc4c
good - wait_for_fluentd_to_catch_up: found 1 record project logging for 649cd48e-3fcb-41c0-86a4-490f1eadcc4c
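[Editor's note] The loop traced above is the test's generic "wait for the record to show up" check: wait_until_cmd_or_err retries test_count_expected once per second (the ii counter counts down from 300) until the Elasticsearch _count query returns the expected number of hits. A minimal sketch of the helpers involved, reconstructed from the xtrace output and simplified (argument handling in the real script is more elaborate):
curl_es() {
    # run curl inside the ES pod, authenticating with the admin client cert
    local pod=$1 endpoint=$2; shift; shift
    oc exec "$pod" -- curl --silent --insecure "$@" \
        --key /etc/elasticsearch/secret/admin-key \
        --cert /etc/elasticsearch/secret/admin-cert \
        "https://localhost:9200${endpoint}"
}
get_count_from_json() {
    # pull the "count" field out of an Elasticsearch _count response (python2 in this image)
    python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
}
test_count_expected() {
    # one poll: does the index hold the expected number of records carrying the test uuid?
    local nrecs=$(curl_es "$espod" "/${myproject}*/_count?q=${myfield}:${mymessage}" --connect-timeout 1 \
                  | get_count_from_json)
    test "$nrecs" = "$expected"
}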
++ espod=logging-es-ops-data-master-xc2h70yx-1-08w7b
++ myproject=.operations
++ mymessage=8cea7dc3-6d35-4679-8774-4486223e8bc4
++ expected=1
++ myfield=systemd.u.SYSLOG_IDENTIFIER
++ wait_until_cmd_or_err test_count_expected test_count_err 300
++ let ii=300
++ local interval=1
++ '[' 300 -gt 0 ']'
++ test_count_expected
++ myfield=systemd.u.SYSLOG_IDENTIFIER
+++ query_es_from_es logging-es-ops-data-master-xc2h70yx-1-08w7b .operations _count systemd.u.SYSLOG_IDENTIFIER 8cea7dc3-6d35-4679-8774-4486223e8bc4
+++ get_count_from_json
+++ curl_es logging-es-ops-data-master-xc2h70yx-1-08w7b '/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:8cea7dc3-6d35-4679-8774-4486223e8bc4' --connect-timeout 1
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local pod=logging-es-ops-data-master-xc2h70yx-1-08w7b
+++ local 'endpoint=/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:8cea7dc3-6d35-4679-8774-4486223e8bc4'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-ops-data-master-xc2h70yx-1-08w7b -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:8cea7dc3-6d35-4679-8774-4486223e8bc4'
++ local nrecs=1
++ test 1 = 1
++ break
++ '[' 300 -le 0 ']'
++ return 0
good - wait_for_fluentd_to_catch_up: found 1 record project .operations for 8cea7dc3-6d35-4679-8774-4486223e8bc4
++ echo good - wait_for_fluentd_to_catch_up: found 1 record project .operations for 8cea7dc3-6d35-4679-8774-4486223e8bc4
++ '[' -n '' ']'
++ '[' -n '' ']'
+++ date +%s
++ local endtime=1496974694
+++ expr 1496974694 - 1496974674
+++ date -u --rfc-3339=ns
END wait_for_fluentd_to_catch_up took 20 seconds at 2017-06-09 02:18:14.067195058+00:00
++ echo END wait_for_fluentd_to_catch_up took 20 seconds at 2017-06-09 02:18:14.067195058+00:00
++ return 0
++ '[' 0 -ne 0 ']'
++ return 0
++ trap cleanup INT TERM EXIT
+++ mktemp
++ nocopy=/tmp/tmp.znjJs39bCv
++ sed /_COPY/,/value/d /tmp/tmp.iuQBaNy4QN
+++ mktemp
++ envpatch=/tmp/tmp.cg43gsr7Mq
++ sed -n '/^        - env:/,/^          image:/ {
/^          image:/d
/^        - env:/d
/name: K8S_HOST_URL/,/value/d
s/ES_/ES_COPY_/
s/OPS_/OPS_COPY_/
p
}' /tmp/tmp.znjJs39bCv
++ cat
++ cat /tmp/tmp.znjJs39bCv
++ oc replace -f -
++ sed '/^        - env:/r /tmp/tmp.cg43gsr7Mq'
template "logging-fluentd-template" replaced
++ rm -f /tmp/tmp.cg43gsr7Mq /tmp/tmp.znjJs39bCv
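[Editor's note] The sed/cat pipeline above rewrites the saved logging-fluentd-template so that every ES_*/OPS_* environment variable gets an *_COPY_* twin, which is presumably what makes fluentd index each record a second time in the next pass (expected=2 below). A hedged reconstruction; here $saved stands for /tmp/tmp.iuQBaNy4QN, the template copy saved earlier and restored at the end of this test, and since xtrace hides redirections and the here-document behind the bare cat, those parts are assumptions:
nocopy=$(mktemp); envpatch=$(mktemp)
# drop any *_COPY entries already present in the saved template
sed '/_COPY/,/value/d' "$saved" > "$nocopy"
# pull out the container env: block, minus image: and K8S_HOST_URL, renaming ES_*/OPS_* to *_COPY_*
sed -n '/^        - env:/,/^          image:/ {
/^          image:/d
/^        - env:/d
/name: K8S_HOST_URL/,/value/d
s/ES_/ES_COPY_/
s/OPS_/OPS_COPY_/
p
}' "$nocopy" > "$envpatch"
# assumed: the bare cat appends the switch that turns the copy path on, roughly
cat >> "$envpatch" <<'EOF'
          - name: ES_COPY
            value: "true"
EOF
# splice the duplicated variables back in right after the env: line and replace the template
sed "/^        - env:/r $envpatch" "$nocopy" | oc replace -f -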
++ restart_fluentd
++ oc delete daemonset logging-fluentd
daemonset "logging-fluentd" deleted
++ wait_for_pod_ACTION stop logging-fluentd-z72q1
++ local ii=120
++ local incr=10
++ '[' stop = start ']'
++ curpod=logging-fluentd-z72q1
++ '[' -z logging-fluentd-z72q1 -a -n '' ']'
++ '[' 120 -gt 0 ']'
++ '[' stop = stop ']'
++ oc describe pod/logging-fluentd-z72q1
++ '[' stop = start ']'
++ break
++ '[' 120 -le 0 ']'
++ return 0
++ oc process logging-fluentd-template
++ oc create -f -
daemonset "logging-fluentd" created
++ wait_for_pod_ACTION start fluentd
++ local ii=120
++ local incr=10
++ '[' start = start ']'
+++ get_running_pod fluentd
+++ oc get pods -l component=fluentd
+++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}'
++ curpod=
++ '[' 120 -gt 0 ']'
++ '[' start = stop ']'
++ '[' start = start ']'
++ '[' -z '' ']'
++ '[' -n '' ']'
++ '[' -n 1 ']'
pod for component=fluentd not running yet
++ echo pod for component=fluentd not running yet
++ sleep 10
+++ expr 120 - 10
++ ii=110
++ '[' start = start ']'
+++ get_running_pod fluentd
+++ oc get pods -l component=fluentd
+++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}'
++ curpod=logging-fluentd-2jz19
++ '[' 110 -gt 0 ']'
++ '[' start = stop ']'
++ '[' start = start ']'
++ '[' -z logging-fluentd-2jz19 ']'
++ break
++ '[' 110 -le 0 ']'
++ return 0
+++ get_running_pod fluentd
+++ oc get pods -l component=fluentd
+++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}'
++ fpod=logging-fluentd-2jz19
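[Editor's note] The restart just traced (delete the daemonset, wait for the old pod to terminate, re-create from the template, wait for a new Running pod) condenses to roughly the following. get_running_pod is the exact awk filter from the trace; wait_for_pod_ACTION (not reproduced) polls every 10 seconds, up to about 120 seconds, for the named pod to disappear ("stop") or for a Running pod with the given component label to appear ("start"):
get_running_pod() {
    # print the name of the Running pod carrying the component=<name> label
    oc get pods -l component=$1 | awk -v sel=$1 '$1 ~ sel && $3 == "Running" {print $1}'
}
restart_fluentd() {
    oc delete daemonset logging-fluentd
    wait_for_pod_ACTION stop "$fpod"                       # old pod must be gone first
    oc process logging-fluentd-template | oc create -f -   # recreate the daemonset
    wait_for_pod_ACTION start fluentd
}
fpod=$(get_running_pod fluentd)                             # the caller re-reads the new pod name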
++ write_and_verify_logs 2
++ rc=0
++ wait_for_fluentd_to_catch_up '' '' 2
+++ date +%s
++ local starttime=1496974708
+++ date -u --rfc-3339=ns
START wait_for_fluentd_to_catch_up at 2017-06-09 02:18:28.764642667+00:00
++ echo START wait_for_fluentd_to_catch_up at 2017-06-09 02:18:28.764642667+00:00
+++ get_running_pod es
+++ oc get pods -l component=es
+++ awk -v sel=es '$1 ~ sel && $3 == "Running" {print $1}'
++ local es_pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ get_running_pod es-ops
+++ oc get pods -l component=es-ops
+++ awk -v sel=es-ops '$1 ~ sel && $3 == "Running" {print $1}'
++ local es_ops_pod=logging-es-ops-data-master-xc2h70yx-1-08w7b
++ '[' -z logging-es-ops-data-master-xc2h70yx-1-08w7b ']'
+++ uuidgen
++ local uuid_es=90189e37-ce6c-4ae6-bb14-27cace715e54
+++ uuidgen
++ local uuid_es_ops=cd75c4bc-a724-40fd-b83a-3c28613f14d8
++ local expected=2
++ local timeout=300
++ add_test_message 90189e37-ce6c-4ae6-bb14-27cace715e54
+++ get_running_pod kibana
+++ oc get pods -l component=kibana
+++ awk -v sel=kibana '$1 ~ sel && $3 == "Running" {print $1}'
++ local kib_pod=logging-kibana-1-fz6pt
++ oc exec logging-kibana-1-fz6pt -c kibana -- curl --connect-timeout 1 -s http://localhost:5601/90189e37-ce6c-4ae6-bb14-27cace715e54
added es message 90189e37-ce6c-4ae6-bb14-27cace715e54
++ echo added es message 90189e37-ce6c-4ae6-bb14-27cace715e54
++ logger -i -p local6.info -t cd75c4bc-a724-40fd-b83a-3c28613f14d8 cd75c4bc-a724-40fd-b83a-3c28613f14d8
added es-ops message cd75c4bc-a724-40fd-b83a-3c28613f14d8
++ echo added es-ops message cd75c4bc-a724-40fd-b83a-3c28613f14d8
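[Editor's note] add_test_message and the logger call are how each wait_for_fluentd_to_catch_up round plants one uniquely-tagged record per index: the curl against kibana presumably ends up in the kibana container log (collected into project.logging and matched later on the message field), while logger writes to the journal with the uuid as the syslog tag (collected into .operations and matched on systemd.u.SYSLOG_IDENTIFIER). Condensed, reusing get_running_pod from the sketch above:
add_test_message() {
    # hit kibana with the uuid so it shows up in the container log for project.logging
    local kib_pod=$(get_running_pod kibana)
    oc exec "$kib_pod" -c kibana -- curl --connect-timeout 1 -s "http://localhost:5601/$1"
}
add_test_message "$uuid_es"
logger -i -p local6.info -t "$uuid_es_ops" "$uuid_es_ops"   # -t tag becomes SYSLOG_IDENTIFIER in the journal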
++ local rc=0
++ espod=logging-es-data-master-8nzz83ik-1-cqnxl
++ myproject=project.logging
++ mymessage=90189e37-ce6c-4ae6-bb14-27cace715e54
++ expected=2
++ wait_until_cmd_or_err test_count_expected test_count_err 300
++ let ii=300
++ local interval=1
++ '[' 300 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 90189e37-ce6c-4ae6-bb14-27cace715e54
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:90189e37-ce6c-4ae6-bb14-27cace715e54' --connect-timeout 1
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:90189e37-ce6c-4ae6-bb14-27cace715e54'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:90189e37-ce6c-4ae6-bb14-27cace715e54'
++ local nrecs=0
++ test 0 = 2
++ sleep 1
++ let ii=ii-1
++ '[' 299 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 90189e37-ce6c-4ae6-bb14-27cace715e54
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:90189e37-ce6c-4ae6-bb14-27cace715e54' --connect-timeout 1
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local 'endpoint=/project.logging*/_count?q=message:90189e37-ce6c-4ae6-bb14-27cace715e54'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:90189e37-ce6c-4ae6-bb14-27cace715e54'
++ local nrecs=0
++ test 0 = 2
++ sleep 1
++ let ii=ii-1
++ '[' 298 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 90189e37-ce6c-4ae6-bb14-27cace715e54
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:90189e37-ce6c-4ae6-bb14-27cace715e54' --connect-timeout 1
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:90189e37-ce6c-4ae6-bb14-27cace715e54'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:90189e37-ce6c-4ae6-bb14-27cace715e54'
++ local nrecs=0
++ test 0 = 2
++ sleep 1
++ let ii=ii-1
++ '[' 297 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 90189e37-ce6c-4ae6-bb14-27cace715e54
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:90189e37-ce6c-4ae6-bb14-27cace715e54' --connect-timeout 1
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:90189e37-ce6c-4ae6-bb14-27cace715e54'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:90189e37-ce6c-4ae6-bb14-27cace715e54'
++ local nrecs=0
++ test 0 = 2
++ sleep 1
++ let ii=ii-1
++ '[' 296 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 90189e37-ce6c-4ae6-bb14-27cace715e54
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:90189e37-ce6c-4ae6-bb14-27cace715e54' --connect-timeout 1
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:90189e37-ce6c-4ae6-bb14-27cace715e54'
+++ shift
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:90189e37-ce6c-4ae6-bb14-27cace715e54'
++ local nrecs=0
++ test 0 = 2
++ sleep 1
++ let ii=ii-1
++ '[' 295 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 90189e37-ce6c-4ae6-bb14-27cace715e54
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:90189e37-ce6c-4ae6-bb14-27cace715e54' --connect-timeout 1
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:90189e37-ce6c-4ae6-bb14-27cace715e54'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:90189e37-ce6c-4ae6-bb14-27cace715e54'
++ local nrecs=0
++ test 0 = 2
++ sleep 1
++ let ii=ii-1
++ '[' 294 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 90189e37-ce6c-4ae6-bb14-27cace715e54
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:90189e37-ce6c-4ae6-bb14-27cace715e54' --connect-timeout 1
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:90189e37-ce6c-4ae6-bb14-27cace715e54'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:90189e37-ce6c-4ae6-bb14-27cace715e54'
++ local nrecs=0
++ test 0 = 2
++ sleep 1
++ let ii=ii-1
++ '[' 293 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 90189e37-ce6c-4ae6-bb14-27cace715e54
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:90189e37-ce6c-4ae6-bb14-27cace715e54' --connect-timeout 1
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:90189e37-ce6c-4ae6-bb14-27cace715e54'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:90189e37-ce6c-4ae6-bb14-27cace715e54'
++ local nrecs=0
++ test 0 = 2
++ sleep 1
++ let ii=ii-1
++ '[' 292 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 90189e37-ce6c-4ae6-bb14-27cace715e54
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:90189e37-ce6c-4ae6-bb14-27cace715e54' --connect-timeout 1
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local 'endpoint=/project.logging*/_count?q=message:90189e37-ce6c-4ae6-bb14-27cace715e54'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:90189e37-ce6c-4ae6-bb14-27cace715e54'
++ local nrecs=0
++ test 0 = 2
++ sleep 1
++ let ii=ii-1
++ '[' 291 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 90189e37-ce6c-4ae6-bb14-27cace715e54
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:90189e37-ce6c-4ae6-bb14-27cace715e54' --connect-timeout 1
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local 'endpoint=/project.logging*/_count?q=message:90189e37-ce6c-4ae6-bb14-27cace715e54'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:90189e37-ce6c-4ae6-bb14-27cace715e54'
++ local nrecs=0
++ test 0 = 2
++ sleep 1
++ let ii=ii-1
++ '[' 290 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 90189e37-ce6c-4ae6-bb14-27cace715e54
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:90189e37-ce6c-4ae6-bb14-27cace715e54' --connect-timeout 1
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:90189e37-ce6c-4ae6-bb14-27cace715e54'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:90189e37-ce6c-4ae6-bb14-27cace715e54'
++ local nrecs=0
++ test 0 = 2
++ sleep 1
++ let ii=ii-1
++ '[' 289 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 90189e37-ce6c-4ae6-bb14-27cace715e54
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:90189e37-ce6c-4ae6-bb14-27cace715e54' --connect-timeout 1
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:90189e37-ce6c-4ae6-bb14-27cace715e54'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:90189e37-ce6c-4ae6-bb14-27cace715e54'
++ local nrecs=0
++ test 0 = 2
++ sleep 1
++ let ii=ii-1
++ '[' 288 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 90189e37-ce6c-4ae6-bb14-27cace715e54
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:90189e37-ce6c-4ae6-bb14-27cace715e54' --connect-timeout 1
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local 'endpoint=/project.logging*/_count?q=message:90189e37-ce6c-4ae6-bb14-27cace715e54'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:90189e37-ce6c-4ae6-bb14-27cace715e54'
++ local nrecs=0
++ test 0 = 2
++ sleep 1
++ let ii=ii-1
++ '[' 287 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 90189e37-ce6c-4ae6-bb14-27cace715e54
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:90189e37-ce6c-4ae6-bb14-27cace715e54' --connect-timeout 1
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local 'endpoint=/project.logging*/_count?q=message:90189e37-ce6c-4ae6-bb14-27cace715e54'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:90189e37-ce6c-4ae6-bb14-27cace715e54'
++ local nrecs=2
++ test 2 = 2
++ break
++ '[' 287 -le 0 ']'
++ return 0
good - wait_for_fluentd_to_catch_up: found 2 record project logging for 90189e37-ce6c-4ae6-bb14-27cace715e54
++ echo good - wait_for_fluentd_to_catch_up: found 2 record project logging for 90189e37-ce6c-4ae6-bb14-27cace715e54
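[Editor's note] expected=2 in this pass because the *_COPY_* variables added above are meant to make fluentd index every record twice; the same _count query the loop runs can be issued directly to confirm the doubled total (pod name and uuid are the ones from this run):
oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure \
    --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert \
    'https://localhost:9200/project.logging*/_count?q=message:90189e37-ce6c-4ae6-bb14-27cace715e54'
# expected output once both copies are indexed: {"count":2,...}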
++ espod=logging-es-ops-data-master-xc2h70yx-1-08w7b
++ myproject=.operations
++ mymessage=cd75c4bc-a724-40fd-b83a-3c28613f14d8
++ expected=2
++ myfield=systemd.u.SYSLOG_IDENTIFIER
++ wait_until_cmd_or_err test_count_expected test_count_err 300
++ let ii=300
++ local interval=1
++ '[' 300 -gt 0 ']'
++ test_count_expected
++ myfield=systemd.u.SYSLOG_IDENTIFIER
+++ query_es_from_es logging-es-ops-data-master-xc2h70yx-1-08w7b .operations _count systemd.u.SYSLOG_IDENTIFIER cd75c4bc-a724-40fd-b83a-3c28613f14d8
+++ get_count_from_json
+++ curl_es logging-es-ops-data-master-xc2h70yx-1-08w7b '/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:cd75c4bc-a724-40fd-b83a-3c28613f14d8' --connect-timeout 1
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local pod=logging-es-ops-data-master-xc2h70yx-1-08w7b
+++ local 'endpoint=/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:cd75c4bc-a724-40fd-b83a-3c28613f14d8'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-ops-data-master-xc2h70yx-1-08w7b -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:cd75c4bc-a724-40fd-b83a-3c28613f14d8'
++ local nrecs=2
++ test 2 = 2
++ break
++ '[' 300 -le 0 ']'
++ return 0
good - wait_for_fluentd_to_catch_up: found 2 record project .operations for cd75c4bc-a724-40fd-b83a-3c28613f14d8
++ echo good - wait_for_fluentd_to_catch_up: found 2 record project .operations for cd75c4bc-a724-40fd-b83a-3c28613f14d8
++ '[' -n '' ']'
++ '[' -n '' ']'
+++ date +%s
++ local endtime=1496974728
+++ expr 1496974728 - 1496974708
+++ date -u --rfc-3339=ns
END wait_for_fluentd_to_catch_up took 20 seconds at 2017-06-09 02:18:48.884152196+00:00
++ echo END wait_for_fluentd_to_catch_up took 20 seconds at 2017-06-09 02:18:48.884152196+00:00
++ return 0
++ '[' 0 -ne 0 ']'
++ return 0
++ oc replace --force -f /tmp/tmp.iuQBaNy4QN
template "logging-fluentd-template" deleted
template "logging-fluentd-template" replaced
++ rm -f /tmp/tmp.iuQBaNy4QN
++ restart_fluentd
++ oc delete daemonset logging-fluentd
daemonset "logging-fluentd" deleted
++ wait_for_pod_ACTION stop logging-fluentd-2jz19
++ local ii=120
++ local incr=10
++ '[' stop = start ']'
++ curpod=logging-fluentd-2jz19
++ '[' -z logging-fluentd-2jz19 -a -n '' ']'
++ '[' 120 -gt 0 ']'
++ '[' stop = stop ']'
++ oc describe pod/logging-fluentd-2jz19
++ '[' stop = start ']'
++ break
++ '[' 120 -le 0 ']'
++ return 0
++ oc process logging-fluentd-template
++ oc create -f -
daemonset "logging-fluentd" created
++ wait_for_pod_ACTION start fluentd
++ local ii=120
++ local incr=10
++ '[' start = start ']'
+++ get_running_pod fluentd
+++ oc get pods -l component=fluentd
+++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}'
++ curpod=
++ '[' 120 -gt 0 ']'
++ '[' start = stop ']'
++ '[' start = start ']'
++ '[' -z '' ']'
++ '[' -n '' ']'
++ '[' -n 1 ']'
pod for component=fluentd not running yet
++ echo pod for component=fluentd not running yet
++ sleep 10
+++ expr 120 - 10
++ ii=110
++ '[' start = start ']'
+++ get_running_pod fluentd
+++ oc get pods -l component=fluentd
+++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}'
++ curpod=logging-fluentd-vx5q2
++ '[' 110 -gt 0 ']'
++ '[' start = stop ']'
++ '[' start = start ']'
++ '[' -z logging-fluentd-vx5q2 ']'
++ break
++ '[' 110 -le 0 ']'
++ return 0
+++ get_running_pod fluentd
+++ oc get pods -l component=fluentd
+++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}'
++ fpod=logging-fluentd-vx5q2
++ write_and_verify_logs 1
++ rc=0
++ wait_for_fluentd_to_catch_up '' '' 1
+++ date +%s
++ local starttime=1496974744
+++ date -u --rfc-3339=ns
START wait_for_fluentd_to_catch_up at 2017-06-09 02:19:04.523983122+00:00
++ echo START wait_for_fluentd_to_catch_up at 2017-06-09 02:19:04.523983122+00:00
+++ get_running_pod es
+++ oc get pods -l component=es
+++ awk -v sel=es '$1 ~ sel && $3 == "Running" {print $1}'
++ local es_pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ get_running_pod es-ops
+++ oc get pods -l component=es-ops
+++ awk -v sel=es-ops '$1 ~ sel && $3 == "Running" {print $1}'
++ local es_ops_pod=logging-es-ops-data-master-xc2h70yx-1-08w7b
++ '[' -z logging-es-ops-data-master-xc2h70yx-1-08w7b ']'
+++ uuidgen
++ local uuid_es=1abb416f-4f5f-48f7-b42c-44285f1acef4
+++ uuidgen
++ local uuid_es_ops=816face9-c165-48d5-a249-0ab7f06fadbd
++ local expected=1
++ local timeout=300
++ add_test_message 1abb416f-4f5f-48f7-b42c-44285f1acef4
+++ get_running_pod kibana
+++ oc get pods -l component=kibana
+++ awk -v sel=kibana '$1 ~ sel && $3 == "Running" {print $1}'
++ local kib_pod=logging-kibana-1-fz6pt
++ oc exec logging-kibana-1-fz6pt -c kibana -- curl --connect-timeout 1 -s http://localhost:5601/1abb416f-4f5f-48f7-b42c-44285f1acef4
added es message 1abb416f-4f5f-48f7-b42c-44285f1acef4
++ echo added es message 1abb416f-4f5f-48f7-b42c-44285f1acef4
++ logger -i -p local6.info -t 816face9-c165-48d5-a249-0ab7f06fadbd 816face9-c165-48d5-a249-0ab7f06fadbd
added es-ops message 816face9-c165-48d5-a249-0ab7f06fadbd
++ echo added es-ops message 816face9-c165-48d5-a249-0ab7f06fadbd
++ local rc=0
++ espod=logging-es-data-master-8nzz83ik-1-cqnxl
++ myproject=project.logging
++ mymessage=1abb416f-4f5f-48f7-b42c-44285f1acef4
++ expected=1
++ wait_until_cmd_or_err test_count_expected test_count_err 300
++ let ii=300
++ local interval=1
++ '[' 300 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 1abb416f-4f5f-48f7-b42c-44285f1acef4
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:1abb416f-4f5f-48f7-b42c-44285f1acef4' --connect-timeout 1
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:1abb416f-4f5f-48f7-b42c-44285f1acef4'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:1abb416f-4f5f-48f7-b42c-44285f1acef4'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 299 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 1abb416f-4f5f-48f7-b42c-44285f1acef4
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:1abb416f-4f5f-48f7-b42c-44285f1acef4' --connect-timeout 1
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:1abb416f-4f5f-48f7-b42c-44285f1acef4'
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:1abb416f-4f5f-48f7-b42c-44285f1acef4'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 298 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 1abb416f-4f5f-48f7-b42c-44285f1acef4
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:1abb416f-4f5f-48f7-b42c-44285f1acef4' --connect-timeout 1
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local 'endpoint=/project.logging*/_count?q=message:1abb416f-4f5f-48f7-b42c-44285f1acef4'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:1abb416f-4f5f-48f7-b42c-44285f1acef4'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 297 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 1abb416f-4f5f-48f7-b42c-44285f1acef4
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:1abb416f-4f5f-48f7-b42c-44285f1acef4' --connect-timeout 1
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local 'endpoint=/project.logging*/_count?q=message:1abb416f-4f5f-48f7-b42c-44285f1acef4'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:1abb416f-4f5f-48f7-b42c-44285f1acef4'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 296 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 1abb416f-4f5f-48f7-b42c-44285f1acef4
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:1abb416f-4f5f-48f7-b42c-44285f1acef4' --connect-timeout 1
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local 'endpoint=/project.logging*/_count?q=message:1abb416f-4f5f-48f7-b42c-44285f1acef4'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:1abb416f-4f5f-48f7-b42c-44285f1acef4'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 295 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 1abb416f-4f5f-48f7-b42c-44285f1acef4
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:1abb416f-4f5f-48f7-b42c-44285f1acef4' --connect-timeout 1
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:1abb416f-4f5f-48f7-b42c-44285f1acef4'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:1abb416f-4f5f-48f7-b42c-44285f1acef4'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 294 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 1abb416f-4f5f-48f7-b42c-44285f1acef4
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:1abb416f-4f5f-48f7-b42c-44285f1acef4' --connect-timeout 1
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:1abb416f-4f5f-48f7-b42c-44285f1acef4'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:1abb416f-4f5f-48f7-b42c-44285f1acef4'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 293 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 1abb416f-4f5f-48f7-b42c-44285f1acef4
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:1abb416f-4f5f-48f7-b42c-44285f1acef4' --connect-timeout 1
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:1abb416f-4f5f-48f7-b42c-44285f1acef4'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:1abb416f-4f5f-48f7-b42c-44285f1acef4'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 292 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 1abb416f-4f5f-48f7-b42c-44285f1acef4
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:1abb416f-4f5f-48f7-b42c-44285f1acef4' --connect-timeout 1
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:1abb416f-4f5f-48f7-b42c-44285f1acef4'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:1abb416f-4f5f-48f7-b42c-44285f1acef4'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 291 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 1abb416f-4f5f-48f7-b42c-44285f1acef4
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:1abb416f-4f5f-48f7-b42c-44285f1acef4' --connect-timeout 1
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local 'endpoint=/project.logging*/_count?q=message:1abb416f-4f5f-48f7-b42c-44285f1acef4'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:1abb416f-4f5f-48f7-b42c-44285f1acef4'
++ local nrecs=1
++ test 1 = 1
++ break
++ '[' 291 -le 0 ']'
++ return 0
++ echo good - wait_for_fluentd_to_catch_up: found 1 record project logging for 1abb416f-4f5f-48f7-b42c-44285f1acef4
good - wait_for_fluentd_to_catch_up: found 1 record project logging for 1abb416f-4f5f-48f7-b42c-44285f1acef4
++ espod=logging-es-ops-data-master-xc2h70yx-1-08w7b
++ myproject=.operations
++ mymessage=816face9-c165-48d5-a249-0ab7f06fadbd
++ expected=1
++ myfield=systemd.u.SYSLOG_IDENTIFIER
++ wait_until_cmd_or_err test_count_expected test_count_err 300
++ let ii=300
++ local interval=1
++ '[' 300 -gt 0 ']'
++ test_count_expected
++ myfield=systemd.u.SYSLOG_IDENTIFIER
+++ query_es_from_es logging-es-ops-data-master-xc2h70yx-1-08w7b .operations _count systemd.u.SYSLOG_IDENTIFIER 816face9-c165-48d5-a249-0ab7f06fadbd
+++ get_count_from_json
+++ curl_es logging-es-ops-data-master-xc2h70yx-1-08w7b '/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:816face9-c165-48d5-a249-0ab7f06fadbd' --connect-timeout 1
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local pod=logging-es-ops-data-master-xc2h70yx-1-08w7b
+++ local 'endpoint=/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:816face9-c165-48d5-a249-0ab7f06fadbd'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-ops-data-master-xc2h70yx-1-08w7b -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:816face9-c165-48d5-a249-0ab7f06fadbd'
++ local nrecs=1
++ test 1 = 1
++ break
++ '[' 300 -le 0 ']'
++ return 0
++ echo good - wait_for_fluentd_to_catch_up: found 1 record project .operations for 816face9-c165-48d5-a249-0ab7f06fadbd
good - wait_for_fluentd_to_catch_up: found 1 record project .operations for 816face9-c165-48d5-a249-0ab7f06fadbd
++ '[' -n '' ']'
++ '[' -n '' ']'
+++ date +%s
++ local endtime=1496974758
+++ expr 1496974758 - 1496974744
+++ date -u --rfc-3339=ns
END wait_for_fluentd_to_catch_up took 14 seconds at 2017-06-09 02:19:18.824540473+00:00
++ echo END wait_for_fluentd_to_catch_up took 14 seconds at 2017-06-09 02:19:18.824540473+00:00
++ return 0
++ '[' 0 -ne 0 ']'
++ return 0
++ cleanup
++ '[' '!' -f /tmp/tmp.iuQBaNy4QN ']'
++ return 0
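[Editor's note] cleanup here is the handler installed earlier with trap cleanup INT TERM EXIT; because the saved template /tmp/tmp.iuQBaNy4QN was already restored and removed above, its guard fails and it returns immediately. A hedged sketch of the pattern (what cleanup does when the file still exists is not visible in this trace, so the body is an assumption):
cleanup() {
    # only act if the saved template copy is still around (assumed behaviour)
    if [ -f "$saved" ]; then
        oc replace --force -f "$saved"    # put the original logging-fluentd-template back
        rm -f "$saved"
    fi
    return 0
}
trap cleanup INT TERM EXIT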
running test test-fluentd-forward.sh
++ set -o nounset
++ set -o pipefail
++ type get_running_pod
++ [[ 1 -ne 1 ]]
++ [[ true = \f\a\l\s\e ]]
++ CLUSTER=true
++ ops=-ops
++ ARTIFACT_DIR=/tmp/origin-aggregated-logging/artifacts
++ '[' '!' -d /tmp/origin-aggregated-logging/artifacts ']'
++ PROJ_PREFIX=project.
++ get_test_user_token
++ local current_project
+++ oc project -q
++ current_project=logging
++ oc login --username=admin --password=admin
+++ oc whoami -t
++ test_token=Gch_58-Xpr9OWI1juVOQ9UBI5pGY8iTpUjiYqk7LFGM
+++ oc whoami
++ test_name=admin
++ test_ip=127.0.0.1
++ oc login --username=system:admin
++ oc project logging
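[Editor's note] get_test_user_token, condensed from the trace above: it logs in as the CI test user to capture a bearer token for later API calls, then switches back to system:admin and the original project (the admin/admin credentials and 127.0.0.1 are the values used by this CI job):
get_test_user_token() {
    local current_project=$(oc project -q)
    oc login --username=admin --password=admin    # become the test user
    test_token=$(oc whoami -t)                    # bearer token for the test user
    test_name=$(oc whoami)
    test_ip=127.0.0.1
    oc login --username=system:admin              # back to the admin context
    oc project "$current_project"
}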
++ TEST_DIVIDER=------------------------------------------
+++ get_running_pod fluentd
+++ oc get pods -l component=fluentd
+++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}'
++ fpod=logging-fluentd-vx5q2
++ write_and_verify_logs 1
++ expected=1
++ rc=0
++ wait_for_fluentd_to_catch_up '' ''
+++ date +%s
++ local starttime=1496974760
+++ date -u --rfc-3339=ns
START wait_for_fluentd_to_catch_up at 2017-06-09 02:19:20.279764624+00:00
++ echo START wait_for_fluentd_to_catch_up at 2017-06-09 02:19:20.279764624+00:00
+++ get_running_pod es
+++ oc get pods -l component=es
+++ awk -v sel=es '$1 ~ sel && $3 == "Running" {print $1}'
++ local es_pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ get_running_pod es-ops
+++ oc get pods -l component=es-ops
+++ awk -v sel=es-ops '$1 ~ sel && $3 == "Running" {print $1}'
++ local es_ops_pod=logging-es-ops-data-master-xc2h70yx-1-08w7b
++ '[' -z logging-es-ops-data-master-xc2h70yx-1-08w7b ']'
+++ uuidgen
++ local uuid_es=87468bde-e8d4-4a1d-a3d8-8fb7d608c2f6
+++ uuidgen
++ local uuid_es_ops=a66fcb12-ac0d-40c6-a216-ce69ff39b7f5
++ local expected=1
++ local timeout=300
++ add_test_message 87468bde-e8d4-4a1d-a3d8-8fb7d608c2f6
+++ get_running_pod kibana
+++ oc get pods -l component=kibana
+++ awk -v sel=kibana '$1 ~ sel && $3 == "Running" {print $1}'
++ local kib_pod=logging-kibana-1-fz6pt
++ oc exec logging-kibana-1-fz6pt -c kibana -- curl --connect-timeout 1 -s http://localhost:5601/87468bde-e8d4-4a1d-a3d8-8fb7d608c2f6
added es message 87468bde-e8d4-4a1d-a3d8-8fb7d608c2f6
++ echo added es message 87468bde-e8d4-4a1d-a3d8-8fb7d608c2f6
++ logger -i -p local6.info -t a66fcb12-ac0d-40c6-a216-ce69ff39b7f5 a66fcb12-ac0d-40c6-a216-ce69ff39b7f5
added es-ops message a66fcb12-ac0d-40c6-a216-ce69ff39b7f5
++ echo added es-ops message a66fcb12-ac0d-40c6-a216-ce69ff39b7f5
++ local rc=0
++ espod=logging-es-data-master-8nzz83ik-1-cqnxl
++ myproject=project.logging
++ mymessage=87468bde-e8d4-4a1d-a3d8-8fb7d608c2f6
++ expected=1
++ wait_until_cmd_or_err test_count_expected test_count_err 300
++ let ii=300
++ local interval=1
++ '[' 300 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 87468bde-e8d4-4a1d-a3d8-8fb7d608c2f6
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:87468bde-e8d4-4a1d-a3d8-8fb7d608c2f6' --connect-timeout 1
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:87468bde-e8d4-4a1d-a3d8-8fb7d608c2f6'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:87468bde-e8d4-4a1d-a3d8-8fb7d608c2f6'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 299 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 87468bde-e8d4-4a1d-a3d8-8fb7d608c2f6
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:87468bde-e8d4-4a1d-a3d8-8fb7d608c2f6' --connect-timeout 1
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:87468bde-e8d4-4a1d-a3d8-8fb7d608c2f6'
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:87468bde-e8d4-4a1d-a3d8-8fb7d608c2f6'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 298 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 87468bde-e8d4-4a1d-a3d8-8fb7d608c2f6
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:87468bde-e8d4-4a1d-a3d8-8fb7d608c2f6' --connect-timeout 1
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:87468bde-e8d4-4a1d-a3d8-8fb7d608c2f6'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:87468bde-e8d4-4a1d-a3d8-8fb7d608c2f6'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 297 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 87468bde-e8d4-4a1d-a3d8-8fb7d608c2f6
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:87468bde-e8d4-4a1d-a3d8-8fb7d608c2f6' --connect-timeout 1
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:87468bde-e8d4-4a1d-a3d8-8fb7d608c2f6'
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:87468bde-e8d4-4a1d-a3d8-8fb7d608c2f6'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 296 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 87468bde-e8d4-4a1d-a3d8-8fb7d608c2f6
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:87468bde-e8d4-4a1d-a3d8-8fb7d608c2f6' --connect-timeout 1
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local 'endpoint=/project.logging*/_count?q=message:87468bde-e8d4-4a1d-a3d8-8fb7d608c2f6'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:87468bde-e8d4-4a1d-a3d8-8fb7d608c2f6'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 295 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 87468bde-e8d4-4a1d-a3d8-8fb7d608c2f6
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:87468bde-e8d4-4a1d-a3d8-8fb7d608c2f6' --connect-timeout 1
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:87468bde-e8d4-4a1d-a3d8-8fb7d608c2f6'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:87468bde-e8d4-4a1d-a3d8-8fb7d608c2f6'
++ local nrecs=1
++ test 1 = 1
++ break
++ '[' 295 -le 0 ']'
++ return 0
++ echo good - wait_for_fluentd_to_catch_up: found 1 record project logging for 87468bde-e8d4-4a1d-a3d8-8fb7d608c2f6
good - wait_for_fluentd_to_catch_up: found 1 record project logging for 87468bde-e8d4-4a1d-a3d8-8fb7d608c2f6
++ espod=logging-es-ops-data-master-xc2h70yx-1-08w7b
++ myproject=.operations
++ mymessage=a66fcb12-ac0d-40c6-a216-ce69ff39b7f5
++ expected=1
++ myfield=systemd.u.SYSLOG_IDENTIFIER
++ wait_until_cmd_or_err test_count_expected test_count_err 300
++ let ii=300
++ local interval=1
++ '[' 300 -gt 0 ']'
++ test_count_expected
++ myfield=systemd.u.SYSLOG_IDENTIFIER
+++ query_es_from_es logging-es-ops-data-master-xc2h70yx-1-08w7b .operations _count systemd.u.SYSLOG_IDENTIFIER a66fcb12-ac0d-40c6-a216-ce69ff39b7f5
+++ get_count_from_json
+++ curl_es logging-es-ops-data-master-xc2h70yx-1-08w7b '/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:a66fcb12-ac0d-40c6-a216-ce69ff39b7f5' --connect-timeout 1
+++ local pod=logging-es-ops-data-master-xc2h70yx-1-08w7b
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local 'endpoint=/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:a66fcb12-ac0d-40c6-a216-ce69ff39b7f5'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-ops-data-master-xc2h70yx-1-08w7b -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:a66fcb12-ac0d-40c6-a216-ce69ff39b7f5'
++ local nrecs=1
++ test 1 = 1
++ break
++ '[' 300 -le 0 ']'
++ return 0
good - wait_for_fluentd_to_catch_up: found 1 record project .operations for a66fcb12-ac0d-40c6-a216-ce69ff39b7f5
++ echo good - wait_for_fluentd_to_catch_up: found 1 record project .operations for a66fcb12-ac0d-40c6-a216-ce69ff39b7f5
++ '[' -n '' ']'
++ '[' -n '' ']'
+++ date +%s
++ local endtime=1496974768
+++ expr 1496974768 - 1496974760
+++ date -u --rfc-3339=ns
END wait_for_fluentd_to_catch_up took 8 seconds at 2017-06-09 02:19:28.898545366+00:00
++ echo END wait_for_fluentd_to_catch_up took 8 seconds at 2017-06-09 02:19:28.898545366+00:00
++ return 0
++ return 0
++ trap cleanup INT TERM EXIT
++ create_forwarding_fluentd
++ oc create configmap logging-forward-fluentd --from-file=fluent.conf=../templates/forward-fluent.conf
configmap "logging-forward-fluentd" created
++ oc get template/logging-fluentd-template -o yaml
++ sed -e 's/logging-infra-fluentd: "true"/logging-infra-forward-fluentd: "true"/' -e 's/name: logging-fluentd/name: logging-forward-fluentd/' -e 's/ fluentd/ forward-fluentd/' -e '/image:/ a \
          ports: \
            - containerPort: 24284'
++ oc new-app -f -
--> Deploying template "logging/logging-forward-fluentd-template" for "-" to project logging

     logging-forward-fluentd-template
     ---------
     Template for logging forward-fluentd deployment.

     * With parameters:
        * IMAGE_PREFIX=172.30.155.104:5000/logging/
        * IMAGE_VERSION=latest

--> Creating resources ...
    daemonset "logging-forward-fluentd" created
--> Success
    Run 'oc status' to view your app.
++ oc label node --all logging-infra-forward-fluentd=true
node "172.18.11.188" labeled
++ wait_for_pod_ACTION start forward-fluentd
++ local ii=120
++ local incr=10
++ '[' start = start ']'
+++ get_running_pod forward-fluentd
+++ oc get pods -l component=forward-fluentd
+++ awk -v sel=forward-fluentd '$1 ~ sel && $3 == "Running" {print $1}'
++ curpod=
++ '[' 120 -gt 0 ']'
++ '[' start = stop ']'
++ '[' start = start ']'
++ '[' -z '' ']'
++ '[' -n '' ']'
++ '[' -n 1 ']'
pod for component=forward-fluentd not running yet
++ echo pod for component=forward-fluentd not running yet
++ sleep 10
+++ expr 120 - 10
++ ii=110
++ '[' start = start ']'
+++ get_running_pod forward-fluentd
+++ oc get pods -l component=forward-fluentd
+++ awk -v sel=forward-fluentd '$1 ~ sel && $3 == "Running" {print $1}'
++ curpod=logging-forward-fluentd-njsn0
++ '[' 110 -gt 0 ']'
++ '[' start = stop ']'
++ '[' start = start ']'
++ '[' -z logging-forward-fluentd-njsn0 ']'
++ break
++ '[' 110 -le 0 ']'
++ return 0
++ update_current_fluentd
++ oc label node --all logging-infra-fluentd-
node "172.18.11.188" labeled
++ wait_for_pod_ACTION stop logging-fluentd-vx5q2
++ local ii=120
++ local incr=10
++ '[' stop = start ']'
++ curpod=logging-fluentd-vx5q2
++ '[' -z logging-fluentd-vx5q2 -a -n '' ']'
++ '[' 120 -gt 0 ']'
++ '[' stop = stop ']'
++ oc describe pod/logging-fluentd-vx5q2
++ '[' -n 1 ']'
pod logging-fluentd-vx5q2 still running
++ echo pod logging-fluentd-vx5q2 still running
++ sleep 10
+++ expr 120 - 10
++ ii=110
++ '[' stop = start ']'
++ '[' 110 -gt 0 ']'
++ '[' stop = stop ']'
++ oc describe pod/logging-fluentd-vx5q2
++ '[' stop = start ']'
++ break
++ '[' 110 -le 0 ']'
++ return 0
++ oc get configmap/logging-fluentd -o yaml
++ sed '/## matches/ a\
      <match **>\
        @include configs.d/user/secure-forward.conf\
      </match>'
++ oc replace -f -
configmap "logging-fluentd" replaced
+++ oc get pods -l component=forward-fluentd -o name
++ POD=pods/logging-forward-fluentd-njsn0
+++ oc get pods/logging-forward-fluentd-njsn0 '--template={{.status.podIP}}'
++ FLUENTD_FORWARD=172.17.0.10
++ oc patch configmap/logging-fluentd --type=json --patch '[{ "op": "replace", "path": "/data/secure-forward.conf", "value": "\
  @type secure_forward\n\
  self_hostname forwarding-${HOSTNAME}\n\
  shared_key aggregated_logging_ci_testing\n\
  secure no\n\
  buffer_queue_limit \"#{ENV['\''BUFFER_QUEUE_LIMIT'\'']}\"\n\
  buffer_chunk_limit \"#{ENV['\''BUFFER_SIZE_LIMIT'\'']}\"\n\
  <server>\n\
   host 172.17.0.10\n\
   port 24284\n\
  </server>"}]'
configmap "logging-fluentd" patched
++ oc label node --all logging-infra-fluentd=true
node "172.18.11.188" labeled
++ wait_for_pod_ACTION start fluentd
++ local ii=120
++ local incr=10
++ '[' start = start ']'
+++ get_running_pod fluentd
+++ oc get pods -l component=fluentd
+++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}'
++ curpod=
++ '[' 120 -gt 0 ']'
++ '[' start = stop ']'
++ '[' start = start ']'
++ '[' -z '' ']'
++ '[' -n '' ']'
++ '[' -n 1 ']'
pod for component=fluentd not running yet
++ echo pod for component=fluentd not running yet
++ sleep 10
+++ expr 120 - 10
++ ii=110
++ '[' start = start ']'
+++ get_running_pod fluentd
+++ oc get pods -l component=fluentd
+++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}'
++ curpod=logging-fluentd-39t0d
++ '[' 110 -gt 0 ']'
++ '[' start = stop ']'
++ '[' start = start ']'
++ '[' -z logging-fluentd-39t0d ']'
++ break
++ '[' 110 -le 0 ']'
++ return 0
++ write_and_verify_logs 1
++ expected=1
++ rc=0
++ wait_for_fluentd_to_catch_up '' ''
+++ date +%s
++ local starttime=1496974802
+++ date -u --rfc-3339=ns
START wait_for_fluentd_to_catch_up at 2017-06-09 02:20:02.354855639+00:00
++ echo START wait_for_fluentd_to_catch_up at 2017-06-09 02:20:02.354855639+00:00
+++ get_running_pod es
+++ oc get pods -l component=es
+++ awk -v sel=es '$1 ~ sel && $3 == "Running" {print $1}'
++ local es_pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ get_running_pod es-ops
+++ oc get pods -l component=es-ops
+++ awk -v sel=es-ops '$1 ~ sel && $3 == "Running" {print $1}'
++ local es_ops_pod=logging-es-ops-data-master-xc2h70yx-1-08w7b
++ '[' -z logging-es-ops-data-master-xc2h70yx-1-08w7b ']'
+++ uuidgen
++ local uuid_es=d44cc3d5-d8c6-426a-a0d8-63c9029f6b22
+++ uuidgen
++ local uuid_es_ops=f5b9e922-e33a-4f29-b31b-e3426f420836
++ local expected=1
++ local timeout=300
++ add_test_message d44cc3d5-d8c6-426a-a0d8-63c9029f6b22
+++ get_running_pod kibana
+++ oc get pods -l component=kibana
+++ awk -v sel=kibana '$1 ~ sel && $3 == "Running" {print $1}'
++ local kib_pod=logging-kibana-1-fz6pt
++ oc exec logging-kibana-1-fz6pt -c kibana -- curl --connect-timeout 1 -s http://localhost:5601/d44cc3d5-d8c6-426a-a0d8-63c9029f6b22
++ echo added es message d44cc3d5-d8c6-426a-a0d8-63c9029f6b22
added es message d44cc3d5-d8c6-426a-a0d8-63c9029f6b22
++ logger -i -p local6.info -t f5b9e922-e33a-4f29-b31b-e3426f420836 f5b9e922-e33a-4f29-b31b-e3426f420836
added es-ops message f5b9e922-e33a-4f29-b31b-e3426f420836
++ echo added es-ops message f5b9e922-e33a-4f29-b31b-e3426f420836
++ local rc=0
++ espod=logging-es-data-master-8nzz83ik-1-cqnxl
++ myproject=project.logging
++ mymessage=d44cc3d5-d8c6-426a-a0d8-63c9029f6b22
++ expected=1
++ wait_until_cmd_or_err test_count_expected test_count_err 300
++ let ii=300
++ local interval=1
++ '[' 300 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message d44cc3d5-d8c6-426a-a0d8-63c9029f6b22
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:d44cc3d5-d8c6-426a-a0d8-63c9029f6b22' --connect-timeout 1
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local 'endpoint=/project.logging*/_count?q=message:d44cc3d5-d8c6-426a-a0d8-63c9029f6b22'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:d44cc3d5-d8c6-426a-a0d8-63c9029f6b22'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
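Each block like the one above is a single pass of a generic retry loop: query the Elasticsearch _count API for records whose message field matches the marker UUID, stop once the count reaches the expected value, otherwise sleep one second and spend one second of a 300-second budget. The helpers below are reconstructions based only on this trace (argument handling is simplified, and query_es_from_es is folded into the check function):

    # Run curl inside the ES pod, authenticating with the admin cert/key mounted in it.
    curl_es() {
        local pod=$1; shift
        local endpoint=$1; shift
        local args=("$@")    # e.g. --connect-timeout 1
        local secret_dir=/etc/elasticsearch/secret/
        oc exec "$pod" -- curl --silent --insecure "${args[@]}" \
           --key ${secret_dir}admin-key --cert ${secret_dir}admin-cert \
           "https://localhost:9200${endpoint}"
    }

    # Extract the "count" field from an Elasticsearch _count response.
    get_count_from_json() {
        python -c 'import json, sys; print(json.loads(sys.stdin.read()).get("count", 0))'
    }

    # One probe: does the index hold the expected number of records with the marker yet?
    test_count_expected() {
        local nrecs
        nrecs=$(curl_es "$espod" "/${myproject}*/_count?q=${myfield:-message}:${mymessage}" --connect-timeout 1 \
                | get_count_from_json)
        test "$nrecs" = "$expected"
    }

    # Re-run a check once per second until it succeeds or the time budget is exhausted.
    wait_until_cmd_or_err() {
        local check=$1 on_timeout=$2 ii=$3
        local interval=1
        while [ $ii -gt 0 ]; do
            $check && return 0
            sleep $interval
            ii=$((ii - interval))
        done
        $on_timeout
        return 1
    }

With espod, myproject, mymessage and expected set as above, wait_until_cmd_or_err test_count_expected test_count_err 300 reproduces the loop shown in this trace: the one-second interval is why the bracketed counter steps down by one on every failed probe, and the 300 is why the search would give up after roughly five minutes.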
[... 56 near-identical polling iterations elided: the counter steps down from 299 to 244, and on every pass the query /project.logging*/_count?q=message:d44cc3d5-d8c6-426a-a0d8-63c9029f6b22 against logging-es-data-master-8nzz83ik-1-cqnxl returned count 0, followed by "sleep 1" ...]
++ '[' 243 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message d44cc3d5-d8c6-426a-a0d8-63c9029f6b22
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:d44cc3d5-d8c6-426a-a0d8-63c9029f6b22' --connect-timeout 1
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:d44cc3d5-d8c6-426a-a0d8-63c9029f6b22'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:d44cc3d5-d8c6-426a-a0d8-63c9029f6b22'
++ local nrecs=1
++ test 1 = 1
++ break
++ '[' 243 -le 0 ']'
++ return 0
good - wait_for_fluentd_to_catch_up: found 1 record project logging for d44cc3d5-d8c6-426a-a0d8-63c9029f6b22
++ echo good - wait_for_fluentd_to_catch_up: found 1 record project logging for d44cc3d5-d8c6-426a-a0d8-63c9029f6b22
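The polling above is driven by a small chain of helpers. Below is a minimal sketch reconstructed from this trace (the real definitions live in the test suite's utility library and may differ in detail): curl_es runs curl inside the Elasticsearch pod with the admin client certificate, query_es_from_es builds a field:value _count query against an index pattern, and get_count_from_json pulls the "count" value out of the JSON reply.

curl_es() {
    # Run curl inside the ES pod, authenticating with the admin cert/key from the secret mount.
    local pod=$1
    local endpoint=$2
    shift; shift
    local args=("${@:-}")
    local secret_dir=/etc/elasticsearch/secret/
    oc exec "$pod" -- curl --silent --insecure "${args[@]}" \
        --key "${secret_dir}admin-key" --cert "${secret_dir}admin-cert" \
        "https://localhost:9200${endpoint}"
}

query_es_from_es() {
    # Query an index pattern for records whose $field matches $value.
    local pod=$1 index=$2 op=$3 field=$4 value=$5
    curl_es "$pod" "/${index}*/${op}?q=${field}:${value}" --connect-timeout 1
}

get_count_from_json() {
    # Extract the "count" field from an Elasticsearch _count response.
    python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
}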
++ espod=logging-es-ops-data-master-xc2h70yx-1-08w7b
++ myproject=.operations
++ mymessage=f5b9e922-e33a-4f29-b31b-e3426f420836
++ expected=1
++ myfield=systemd.u.SYSLOG_IDENTIFIER
++ wait_until_cmd_or_err test_count_expected test_count_err 300
++ let ii=300
++ local interval=1
++ '[' 300 -gt 0 ']'
++ test_count_expected
++ myfield=systemd.u.SYSLOG_IDENTIFIER
+++ query_es_from_es logging-es-ops-data-master-xc2h70yx-1-08w7b .operations _count systemd.u.SYSLOG_IDENTIFIER f5b9e922-e33a-4f29-b31b-e3426f420836
+++ get_count_from_json
+++ curl_es logging-es-ops-data-master-xc2h70yx-1-08w7b '/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:f5b9e922-e33a-4f29-b31b-e3426f420836' --connect-timeout 1
+++ local pod=logging-es-ops-data-master-xc2h70yx-1-08w7b
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local 'endpoint=/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:f5b9e922-e33a-4f29-b31b-e3426f420836'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-ops-data-master-xc2h70yx-1-08w7b -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:f5b9e922-e33a-4f29-b31b-e3426f420836'
good - wait_for_fluentd_to_catch_up: found 1 record project .operations for f5b9e922-e33a-4f29-b31b-e3426f420836
++ local nrecs=1
++ test 1 = 1
++ break
++ '[' 300 -le 0 ']'
++ return 0
++ echo good - wait_for_fluentd_to_catch_up: found 1 record project .operations for f5b9e922-e33a-4f29-b31b-e3426f420836
++ '[' -n '' ']'
++ '[' -n '' ']'
+++ date +%s
++ local endtime=1496974883
+++ expr 1496974883 - 1496974802
+++ date -u --rfc-3339=ns
END wait_for_fluentd_to_catch_up took 81 seconds at 2017-06-09 02:21:23.806417141+00:00
++ echo END wait_for_fluentd_to_catch_up took 81 seconds at 2017-06-09 02:21:23.806417141+00:00
++ return 0
++ return 0
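For reference, the elapsed-time report above amounts to nothing more than capturing epoch seconds before and after the wait; a trivial sketch, assuming only what the date/expr lines show:

starttime=$(date +%s)
# ... add the test messages and wait for both records to be indexed ...
endtime=$(date +%s)
echo END wait_for_fluentd_to_catch_up took $(expr $endtime - $starttime) seconds at $(date -u --rfc-3339=ns)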
++ cleanup
++ cleanup_forward
++ oc label node --all logging-infra-fluentd-
node "172.18.11.188" labeled
++ wait_for_pod_ACTION stop logging-fluentd-vx5q2
++ local ii=120
++ local incr=10
++ '[' stop = start ']'
++ curpod=logging-fluentd-vx5q2
++ '[' -z logging-fluentd-vx5q2 -a -n '' ']'
++ '[' 120 -gt 0 ']'
++ '[' stop = stop ']'
++ oc describe pod/logging-fluentd-vx5q2
++ '[' stop = start ']'
++ break
++ '[' 120 -le 0 ']'
++ return 0
++ oc delete daemonset/logging-forward-fluentd
daemonset "logging-forward-fluentd" deleted
+++ oc get configmap/logging-fluentd -o yaml
+++ grep '<match \*\*>'
++ '[' -n '      <match **>' ']'
++ oc get configmap/logging-fluentd -o yaml
++ oc replace -f -
++ sed -e '/<match \*\*>/ d' -e '/@include configs\.d\/user\/secure-forward\.conf/ d' -e '/<\/match>/ d'
configmap "logging-fluentd" replaced
++ oc patch configmap/logging-fluentd --type=json --patch '[{ "op": "replace", "path": "/data/secure-forward.conf", "value": "\
# @type secure_forward\n\
# self_hostname forwarding-${HOSTNAME}\n\
# shared_key aggregated_logging_ci_testing\n\
#  secure no\n\
#  <server>\n\
#   host ${FLUENTD_FORWARD}\n\
#   port 24284\n\
#  </server>"}]'
configmap "logging-fluentd" patched
++ oc label node --all logging-infra-fluentd=true
node "172.18.11.188" labeled
++ wait_for_pod_ACTION start fluentd
++ local ii=120
++ local incr=10
++ '[' start = start ']'
+++ get_running_pod fluentd
+++ oc get pods -l component=fluentd
+++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}'
++ curpod=
++ '[' 120 -gt 0 ']'
++ '[' start = stop ']'
++ '[' start = start ']'
++ '[' -z '' ']'
++ '[' -n '' ']'
++ '[' -n 1 ']'
pod for component=fluentd not running yet
++ echo pod for component=fluentd not running yet
++ sleep 10
+++ expr 120 - 10
++ ii=110
++ '[' start = start ']'
+++ get_running_pod fluentd
+++ oc get pods -l component=fluentd
+++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}'
++ curpod=logging-fluentd-g06k8
++ '[' 110 -gt 0 ']'
++ '[' start = stop ']'
++ '[' start = start ']'
++ '[' -z logging-fluentd-g06k8 ']'
++ break
++ '[' 110 -le 0 ']'
++ return 0
+++ get_running_pod fluentd
+++ oc get pods -l component=fluentd
+++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}'
++ fpod=logging-fluentd-g06k8
++ oc get events -o yaml
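Pod discovery and the start/stop waits seen above come from two helpers. A sketch consistent with the trace follows; the 120-second budget, 10-second increments, and the awk selector match the values printed above, while the exact control flow of the stop branch is an assumption.

get_running_pod() {
    # Print the name of the Running pod for the given component label.
    oc get pods -l component=$1 | awk -v sel=$1 '$1 ~ sel && $3 == "Running" {print $1}'
}

wait_for_pod_ACTION() {
    # Wait for a pod to start (action=start, $2=component) or stop (action=stop, $2=pod name).
    local action=$1 pod=$2
    local ii=120
    local incr=10
    while [ $ii -gt 0 ]; do
        if [ $action = start ]; then
            pod=$(get_running_pod $2)
            [ -n "$pod" ] && break
            echo pod for component=$2 not running yet
        else
            # stop: done once `oc describe` no longer finds the pod
            oc describe pod/$pod > /dev/null 2>&1 || break
            echo pod $pod still running
        fi
        sleep $incr
        ii=$(expr $ii - $incr)
    done
    [ $ii -le 0 ] && return 1
    return 0
}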
++ write_and_verify_logs 1
++ expected=1
++ rc=0
++ wait_for_fluentd_to_catch_up '' ''
+++ date +%s
++ local starttime=1496974899
+++ date -u --rfc-3339=ns
START wait_for_fluentd_to_catch_up at 2017-06-09 02:21:39.495805317+00:00
++ echo START wait_for_fluentd_to_catch_up at 2017-06-09 02:21:39.495805317+00:00
+++ get_running_pod es
+++ oc get pods -l component=es
+++ awk -v sel=es '$1 ~ sel && $3 == "Running" {print $1}'
++ local es_pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ get_running_pod es-ops
+++ oc get pods -l component=es-ops
+++ awk -v sel=es-ops '$1 ~ sel && $3 == "Running" {print $1}'
++ local es_ops_pod=logging-es-ops-data-master-xc2h70yx-1-08w7b
++ '[' -z logging-es-ops-data-master-xc2h70yx-1-08w7b ']'
+++ uuidgen
++ local uuid_es=6bad3ab3-2cc8-4300-96bc-e41101378090
+++ uuidgen
++ local uuid_es_ops=5ea1ffb4-6463-4adf-a458-500fd38b8ff7
++ local expected=1
++ local timeout=300
++ add_test_message 6bad3ab3-2cc8-4300-96bc-e41101378090
+++ get_running_pod kibana
+++ oc get pods -l component=kibana
+++ awk -v sel=kibana '$1 ~ sel && $3 == "Running" {print $1}'
++ local kib_pod=logging-kibana-1-fz6pt
++ oc exec logging-kibana-1-fz6pt -c kibana -- curl --connect-timeout 1 -s http://localhost:5601/6bad3ab3-2cc8-4300-96bc-e41101378090
added es message 6bad3ab3-2cc8-4300-96bc-e41101378090
++ echo added es message 6bad3ab3-2cc8-4300-96bc-e41101378090
++ logger -i -p local6.info -t 5ea1ffb4-6463-4adf-a458-500fd38b8ff7 5ea1ffb4-6463-4adf-a458-500fd38b8ff7
added es-ops message 5ea1ffb4-6463-4adf-a458-500fd38b8ff7
++ echo added es-ops message 5ea1ffb4-6463-4adf-a458-500fd38b8ff7
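Seeding the two indices is done with one helper plus logger; a sketch based on the commands above. The request from inside the Kibana container produces an application log line carrying the uuid, which fluentd routes to the project.logging index, while logger writes a journal record that lands in the .operations index.

add_test_message() {
    # Hit a bogus URL containing the uuid from inside the Kibana container;
    # the resulting access-log line carries the uuid into project.logging.
    local kib_pod=$(get_running_pod kibana)
    oc exec $kib_pod -c kibana -- curl --connect-timeout 1 -s http://localhost:5601/$1 > /dev/null
}

# The .operations record is written straight to the journal:
# logger -i -p local6.info -t $uuid_es_ops $uuid_es_ops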
++ local rc=0
++ espod=logging-es-data-master-8nzz83ik-1-cqnxl
++ myproject=project.logging
++ mymessage=6bad3ab3-2cc8-4300-96bc-e41101378090
++ expected=1
++ wait_until_cmd_or_err test_count_expected test_count_err 300
++ let ii=300
++ local interval=1
++ '[' 300 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 6bad3ab3-2cc8-4300-96bc-e41101378090
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:6bad3ab3-2cc8-4300-96bc-e41101378090' --connect-timeout 1
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local 'endpoint=/project.logging*/_count?q=message:6bad3ab3-2cc8-4300-96bc-e41101378090'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:6bad3ab3-2cc8-4300-96bc-e41101378090'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 299 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 6bad3ab3-2cc8-4300-96bc-e41101378090
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:6bad3ab3-2cc8-4300-96bc-e41101378090' --connect-timeout 1
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:6bad3ab3-2cc8-4300-96bc-e41101378090'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:6bad3ab3-2cc8-4300-96bc-e41101378090'
++ local nrecs=1
++ test 1 = 1
++ break
++ '[' 284 -le 0 ']'
++ return 0
++ echo good - wait_for_fluentd_to_catch_up: found 1 record project logging for 6bad3ab3-2cc8-4300-96bc-e41101378090
good - wait_for_fluentd_to_catch_up: found 1 record project logging for 6bad3ab3-2cc8-4300-96bc-e41101378090
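The retry machinery that produces the long counter countdowns is generic. The sketch below is consistent with the trace (one-second interval, counter decremented until the predicate succeeds or the budget runs out), together with the count predicate it is given.

wait_until_cmd_or_err() {
    # Run $1 every $interval seconds until it succeeds; after $3 failed attempts run the error handler $2.
    local cmd=$1
    local errcmd=$2
    let ii=$3
    local interval=${4:-1}
    while [ $ii -gt 0 ]; do
        $cmd && break
        sleep $interval
        let ii=ii-1
    done
    if [ $ii -le 0 ]; then
        $errcmd
        return 1
    fi
    return 0
}

test_count_expected() {
    # Predicate used above: does the index hold exactly $expected records with $mymessage in $myfield?
    myfield=${myfield:-message}
    local nrecs=$(query_es_from_es $espod $myproject _count $myfield $mymessage | get_count_from_json)
    test $nrecs = $expected
}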
++ espod=logging-es-ops-data-master-xc2h70yx-1-08w7b
++ myproject=.operations
++ mymessage=5ea1ffb4-6463-4adf-a458-500fd38b8ff7
++ expected=1
++ myfield=systemd.u.SYSLOG_IDENTIFIER
++ wait_until_cmd_or_err test_count_expected test_count_err 300
++ let ii=300
++ local interval=1
++ '[' 300 -gt 0 ']'
++ test_count_expected
++ myfield=systemd.u.SYSLOG_IDENTIFIER
+++ query_es_from_es logging-es-ops-data-master-xc2h70yx-1-08w7b .operations _count systemd.u.SYSLOG_IDENTIFIER 5ea1ffb4-6463-4adf-a458-500fd38b8ff7
+++ get_count_from_json
+++ curl_es logging-es-ops-data-master-xc2h70yx-1-08w7b '/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:5ea1ffb4-6463-4adf-a458-500fd38b8ff7' --connect-timeout 1
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local pod=logging-es-ops-data-master-xc2h70yx-1-08w7b
+++ local 'endpoint=/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:5ea1ffb4-6463-4adf-a458-500fd38b8ff7'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-ops-data-master-xc2h70yx-1-08w7b -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:5ea1ffb4-6463-4adf-a458-500fd38b8ff7'
++ local nrecs=1
++ test 1 = 1
++ break
++ '[' 300 -le 0 ']'
++ return 0
good - wait_for_fluentd_to_catch_up: found 1 record project .operations for 5ea1ffb4-6463-4adf-a458-500fd38b8ff7
++ echo good - wait_for_fluentd_to_catch_up: found 1 record project .operations for 5ea1ffb4-6463-4adf-a458-500fd38b8ff7
++ '[' -n '' ']'
++ '[' -n '' ']'
+++ date +%s
++ local endtime=1496974923
+++ expr 1496974923 - 1496974899
+++ date -u --rfc-3339=ns
END wait_for_fluentd_to_catch_up took 24 seconds at 2017-06-09 02:22:03.676196682+00:00
++ echo END wait_for_fluentd_to_catch_up took 24 seconds at 2017-06-09 02:22:03.676196682+00:00
++ return 0
++ return 0
++ cleanup
++ cleanup_forward
++ oc label node --all logging-infra-fluentd-
node "172.18.11.188" labeled
++ wait_for_pod_ACTION stop logging-fluentd-g06k8
++ local ii=120
++ local incr=10
++ '[' stop = start ']'
++ curpod=logging-fluentd-g06k8
++ '[' -z logging-fluentd-g06k8 -a -n '' ']'
++ '[' 120 -gt 0 ']'
++ '[' stop = stop ']'
++ oc describe pod/logging-fluentd-g06k8
++ '[' -n 1 ']'
pod logging-fluentd-g06k8 still running
++ echo pod logging-fluentd-g06k8 still running
++ sleep 10
+++ expr 120 - 10
++ ii=110
++ '[' stop = start ']'
++ '[' 110 -gt 0 ']'
++ '[' stop = stop ']'
++ oc describe pod/logging-fluentd-g06k8
++ '[' stop = start ']'
++ break
++ '[' 110 -le 0 ']'
++ return 0
++ oc delete daemonset/logging-forward-fluentd
Error from server (NotFound): daemonsets.extensions "logging-forward-fluentd" not found
++ :
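The NotFound error above is expected on this pass: the forwarding daemonset was already removed by the previous cleanup, and the lone "++ :" line is the no-op half of a guard along the lines of:

oc delete daemonset/logging-forward-fluentd || :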
+++ oc get configmap/logging-fluentd -o yaml
+++ grep '<match \*\*>'
++ '[' -n '' ']'
++ oc patch configmap/logging-fluentd --type=json --patch '[{ "op": "replace", "path": "/data/secure-forward.conf", "value": "\
# @type secure_forward\n\
# self_hostname forwarding-${HOSTNAME}\n\
# shared_key aggregated_logging_ci_testing\n\
#  secure no\n\
#  <server>\n\
#   host ${FLUENTD_FORWARD}\n\
#   port 24284\n\
#  </server>"}]'
configmap "logging-fluentd" not patched
++ oc label node --all logging-infra-fluentd=true
node "172.18.11.188" labeled
++ wait_for_pod_ACTION start fluentd
++ local ii=120
++ local incr=10
++ '[' start = start ']'
+++ get_running_pod fluentd
+++ oc get pods -l component=fluentd
+++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}'
++ curpod=
++ '[' 120 -gt 0 ']'
++ '[' start = stop ']'
++ '[' start = start ']'
++ '[' -z '' ']'
++ '[' -n '' ']'
++ '[' -n 1 ']'
pod for component=fluentd not running yet
++ echo pod for component=fluentd not running yet
++ sleep 10
+++ expr 120 - 10
++ ii=110
++ '[' start = start ']'
+++ get_running_pod fluentd
+++ oc get pods -l component=fluentd
+++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}'
++ curpod=logging-fluentd-kgk6c
++ '[' 110 -gt 0 ']'
++ '[' start = stop ']'
++ '[' start = start ']'
++ '[' -z logging-fluentd-kgk6c ']'
++ break
++ '[' 110 -le 0 ']'
++ return 0
+++ get_running_pod fluentd
+++ oc get pods -l component=fluentd
+++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}'
++ fpod=logging-fluentd-kgk6c
++ oc get events -o yaml
running test test-json-parsing.sh
++ set -o nounset
++ set -o pipefail
++ type get_running_pod
++ ARTIFACT_DIR=/tmp/origin-aggregated-logging/artifacts
++ '[' '!' -d /tmp/origin-aggregated-logging/artifacts ']'
+++ get_running_pod es
+++ oc get pods -l component=es
+++ awk -v sel=es '$1 ~ sel && $3 == "Running" {print $1}'
++ es_pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ uuidgen
++ uuid_es=24405998-374a-4a32-8d2c-2d151d0bd85f
Adding test message 24405998-374a-4a32-8d2c-2d151d0bd85f to Kibana . . .
++ echo Adding test message 24405998-374a-4a32-8d2c-2d151d0bd85f to Kibana . . .
++ add_test_message 24405998-374a-4a32-8d2c-2d151d0bd85f
+++ get_running_pod kibana
+++ oc get pods -l component=kibana
+++ awk -v sel=kibana '$1 ~ sel && $3 == "Running" {print $1}'
++ local kib_pod=logging-kibana-1-fz6pt
++ oc exec logging-kibana-1-fz6pt -c kibana -- curl --connect-timeout 1 -s http://localhost:5601/24405998-374a-4a32-8d2c-2d151d0bd85f
++ rc=0
++ timeout=600
++ echo Waiting 600 seconds for 24405998-374a-4a32-8d2c-2d151d0bd85f to show up in Elasticsearch . . .
Waiting 600 seconds for 24405998-374a-4a32-8d2c-2d151d0bd85f to show up in Elasticsearch . . .
++ espod=logging-es-data-master-8nzz83ik-1-cqnxl
++ myproject=project.logging.
++ mymessage=24405998-374a-4a32-8d2c-2d151d0bd85f
++ expected=1
++ wait_until_cmd_or_err test_count_expected test_count_err 600
++ let ii=600
++ local interval=1
++ '[' 600 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging. _count message 24405998-374a-4a32-8d2c-2d151d0bd85f
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging.*/_count?q=message:24405998-374a-4a32-8d2c-2d151d0bd85f' --connect-timeout 1
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local 'endpoint=/project.logging.*/_count?q=message:24405998-374a-4a32-8d2c-2d151d0bd85f'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging.*/_count?q=message:24405998-374a-4a32-8d2c-2d151d0bd85f'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 599 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging. _count message 24405998-374a-4a32-8d2c-2d151d0bd85f
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging.*/_count?q=message:24405998-374a-4a32-8d2c-2d151d0bd85f' --connect-timeout 1
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging.*/_count?q=message:24405998-374a-4a32-8d2c-2d151d0bd85f'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging.*/_count?q=message:24405998-374a-4a32-8d2c-2d151d0bd85f'
++ local nrecs=1
++ test 1 = 1
++ break
++ '[' 585 -le 0 ']'
++ return 0
++ echo good - ./logging.sh: found 1 record project logging for 24405998-374a-4a32-8d2c-2d151d0bd85f
good - ./logging.sh: found 1 record project logging for 24405998-374a-4a32-8d2c-2d151d0bd85f
Testing if record is in correct format . . .
++ echo Testing if record is in correct format . . .
++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging. _search message 24405998-374a-4a32-8d2c-2d151d0bd85f
++ python test-json-parsing.py 24405998-374a-4a32-8d2c-2d151d0bd85f
++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging.*/_search?q=message:24405998-374a-4a32-8d2c-2d151d0bd85f' --connect-timeout 1
++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
++ local 'endpoint=/project.logging.*/_search?q=message:24405998-374a-4a32-8d2c-2d151d0bd85f'
++ shift
++ shift
++ args=("${@:-}")
++ local args
++ local secret_dir=/etc/elasticsearch/secret/
++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging.*/_search?q=message:24405998-374a-4a32-8d2c-2d151d0bd85f'
Success: record contains all of the expected fields/values
Success: ./logging.sh passed
++ echo Success: ./logging.sh passed
++ exit 0
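Putting the trace together, test-json-parsing.sh boils down to the outline below. This is a paraphrase of the steps shown above, not the verbatim script; test-json-parsing.py is the checker that reported the expected fields/values were present.

# Seed a record with a fresh uuid and wait (up to 600 attempts, one per second) for it to be indexed.
es_pod=$(get_running_pod es)
uuid_es=$(uuidgen)
add_test_message $uuid_es
espod=$es_pod
myproject=project.logging.
mymessage=$uuid_es
expected=1
wait_until_cmd_or_err test_count_expected test_count_err 600

# Fetch the full document and hand it to the Python checker, which asserts the
# record carries the expected fields and values.
query_es_from_es $es_pod project.logging. _search message $uuid_es | \
    python test-json-parsing.py $uuid_es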
running test test-mux.sh
++ set -o nounset
++ set -o pipefail
++ type get_running_pod
++ '[' false == false -o false == false ']'
Skipping -- This test requires both USE_MUX_CLIENT and MUX_ALLOW_EXTERNAL are true.
++ echo 'Skipping -- This test requires both USE_MUX_CLIENT and MUX_ALLOW_EXTERNAL are true.'
++ exit 0
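The skip comes from a guard equivalent to the "false == false -o false == false" test traced above; roughly (variable names inferred from the message, defaults assumed):

if [ "${USE_MUX_CLIENT:-false}" == false -o "${MUX_ALLOW_EXTERNAL:-false}" == false ]; then
    echo "Skipping -- This test requires both USE_MUX_CLIENT and MUX_ALLOW_EXTERNAL are true."
    exit 0
fi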
SKIPPING upgrade test for now
running test test-viaq-data-model.sh
++ set -o nounset
++ set -o pipefail
++ type get_running_pod
++ [[ 1 -ne 1 ]]
++ [[ true = \f\a\l\s\e ]]
++ CLUSTER=true
++ ops=-ops
++ INDEX_PREFIX=
++ PROJ_PREFIX=project.
++ ARTIFACT_DIR=/tmp/origin-aggregated-logging/artifacts
++ '[' '!' -d /tmp/origin-aggregated-logging/artifacts ']'
++ get_test_user_token
++ local current_project
+++ oc project -q
++ current_project=logging
++ oc login --username=admin --password=admin
+++ oc whoami -t
++ test_token=b9YXT3gcexKSesTOvxPU0vSYJF_8FU6TsX8JT5BEmyo
+++ oc whoami
++ test_name=admin
++ test_ip=127.0.0.1
++ oc login --username=system:admin
++ oc project logging
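The login dance above (switch to the admin user to grab a token, then drop back to system:admin and the original project) is what get_test_user_token does; a sketch reconstructed from the trace:

get_test_user_token() {
    local current_project
    current_project=$(oc project -q)
    oc login --username=admin --password=admin > /dev/null
    test_token=$(oc whoami -t)
    test_name=$(oc whoami)
    test_ip=127.0.0.1
    oc login --username=system:admin > /dev/null
    oc project "$current_project" > /dev/null
}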
++ TEST_DIVIDER=------------------------------------------
+++ get_running_pod fluentd
+++ oc get pods -l component=fluentd
+++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}'
++ fpod=logging-fluentd-kgk6c
++ remove_test_volume
++ oc get template logging-fluentd-template -o json
++ python -c 'import json, sys; obj = json.loads(sys.stdin.read()); vm = obj["objects"][0]["spec"]["template"]["spec"]["containers"][0]["volumeMounts"]; obj["objects"][0]["spec"]["template"]["spec"]["containers"][0]["volumeMounts"] = [xx for xx in vm if xx["name"] != "cdmtest"]; vs = obj["objects"][0]["spec"]["template"]["spec"]["volumes"]; obj["objects"][0]["spec"]["template"]["spec"]["volumes"] = [xx for xx in vs if xx["name"] != "cdmtest"]; print json.dumps(obj, indent=2)'
++ oc replace -f -
template "logging-fluentd-template" replaced
+++ mktemp
++ cfg=/tmp/tmp.HEU3elyHps
++ cat
++ add_test_volume /tmp/tmp.HEU3elyHps
++ oc get template logging-fluentd-template -o json
++ python -c 'import json, sys; obj = json.loads(sys.stdin.read()); obj["objects"][0]["spec"]["template"]["spec"]["containers"][0]["volumeMounts"].append({"name": "cdmtest", "mountPath": "/etc/fluent/configs.d/openshift/filter-pre-cdm-test.conf", "readOnly": True}); obj["objects"][0]["spec"]["template"]["spec"]["volumes"].append({"name": "cdmtest", "hostPath": {"path": "/tmp/tmp.HEU3elyHps"}}); print json.dumps(obj, indent=2)'
++ oc replace -f -
template "logging-fluentd-template" replaced
++ trap cleanup INT TERM EXIT
++ restart_fluentd
++ oc delete daemonset logging-fluentd
daemonset "logging-fluentd" deleted
++ wait_for_pod_ACTION stop logging-fluentd-kgk6c
++ local ii=120
++ local incr=10
++ '[' stop = start ']'
++ curpod=logging-fluentd-kgk6c
++ '[' -z logging-fluentd-kgk6c -a -n '' ']'
++ '[' 120 -gt 0 ']'
++ '[' stop = stop ']'
++ oc describe pod/logging-fluentd-kgk6c
++ '[' stop = start ']'
++ break
++ '[' 120 -le 0 ']'
++ return 0
++ oc process logging-fluentd-template
++ oc create -f -
daemonset "logging-fluentd" created
++ wait_for_pod_ACTION start fluentd
++ local ii=120
++ local incr=10
++ '[' start = start ']'
+++ get_running_pod fluentd
+++ oc get pods -l component=fluentd
+++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}'
pod for component=fluentd not running yet
++ curpod=
++ '[' 120 -gt 0 ']'
++ '[' start = stop ']'
++ '[' start = start ']'
++ '[' -z '' ']'
++ '[' -n '' ']'
++ '[' -n 1 ']'
++ echo pod for component=fluentd not running yet
++ sleep 10
+++ expr 120 - 10
++ ii=110
++ '[' start = start ']'
+++ get_running_pod fluentd
+++ oc get pods -l component=fluentd
+++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}'
++ curpod=logging-fluentd-0v2kd
++ '[' 110 -gt 0 ']'
++ '[' start = stop ']'
++ '[' start = start ']'
++ '[' -z logging-fluentd-0v2kd ']'
++ break
++ '[' 110 -le 0 ']'
++ return 0
+++ get_running_pod fluentd
+++ oc get pods -l component=fluentd
+++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}'
++ fpod=logging-fluentd-0v2kd
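restart_fluentd, as exercised above, recreates the daemonset from the (now modified) template and waits for the replacement pod; a sketch reconstructed from the trace:

restart_fluentd() {
    # Tear down the current daemonset and wait for its pod to go away.
    oc delete daemonset logging-fluentd
    wait_for_pod_ACTION stop "$fpod"
    # Recreate it from the template (which now mounts the test filter) and wait for the new pod.
    oc process logging-fluentd-template | oc create -f -
    wait_for_pod_ACTION start fluentd
}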
++ keep_fields=method,statusCode,type,@timestamp,req,res
++ write_and_verify_logs test1
++ expected=1
++ rc=0
++ wait_for_fluentd_to_catch_up get_logmessage get_logmessage2
+++ date +%s
++ local starttime=1496974986
+++ date -u --rfc-3339=ns
START wait_for_fluentd_to_catch_up at 2017-06-09 02:23:06.029844143+00:00
++ echo START wait_for_fluentd_to_catch_up at 2017-06-09 02:23:06.029844143+00:00
+++ get_running_pod es
+++ oc get pods -l component=es
+++ awk -v sel=es '$1 ~ sel && $3 == "Running" {print $1}'
++ local es_pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ get_running_pod es-ops
+++ oc get pods -l component=es-ops
+++ awk -v sel=es-ops '$1 ~ sel && $3 == "Running" {print $1}'
++ local es_ops_pod=logging-es-ops-data-master-xc2h70yx-1-08w7b
++ '[' -z logging-es-ops-data-master-xc2h70yx-1-08w7b ']'
+++ uuidgen
++ local uuid_es=a7abe340-b73c-4ec2-aad9-49d606659644
+++ uuidgen
++ local uuid_es_ops=613467a4-5c7e-4052-8448-798b82d56d70
++ local expected=1
++ local timeout=300
++ add_test_message a7abe340-b73c-4ec2-aad9-49d606659644
+++ get_running_pod kibana
+++ oc get pods -l component=kibana
+++ awk -v sel=kibana '$1 ~ sel && $3 == "Running" {print $1}'
++ local kib_pod=logging-kibana-1-fz6pt
++ oc exec logging-kibana-1-fz6pt -c kibana -- curl --connect-timeout 1 -s http://localhost:5601/a7abe340-b73c-4ec2-aad9-49d606659644
added es message a7abe340-b73c-4ec2-aad9-49d606659644
++ echo added es message a7abe340-b73c-4ec2-aad9-49d606659644
++ logger -i -p local6.info -t 613467a4-5c7e-4052-8448-798b82d56d70 613467a4-5c7e-4052-8448-798b82d56d70
added es-ops message 613467a4-5c7e-4052-8448-798b82d56d70
++ echo added es-ops message 613467a4-5c7e-4052-8448-798b82d56d70
++ local rc=0
++ espod=logging-es-data-master-8nzz83ik-1-cqnxl
++ myproject=project.logging
++ mymessage=a7abe340-b73c-4ec2-aad9-49d606659644
++ expected=1
++ wait_until_cmd_or_err test_count_expected test_count_err 300
++ let ii=300
++ local interval=1
++ '[' 300 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message a7abe340-b73c-4ec2-aad9-49d606659644
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:a7abe340-b73c-4ec2-aad9-49d606659644' --connect-timeout 1
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local 'endpoint=/project.logging*/_count?q=message:a7abe340-b73c-4ec2-aad9-49d606659644'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:a7abe340-b73c-4ec2-aad9-49d606659644'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 299 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message a7abe340-b73c-4ec2-aad9-49d606659644
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:a7abe340-b73c-4ec2-aad9-49d606659644' --connect-timeout 1
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:a7abe340-b73c-4ec2-aad9-49d606659644'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:a7abe340-b73c-4ec2-aad9-49d606659644'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 298 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message a7abe340-b73c-4ec2-aad9-49d606659644
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:a7abe340-b73c-4ec2-aad9-49d606659644' --connect-timeout 1
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:a7abe340-b73c-4ec2-aad9-49d606659644'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:a7abe340-b73c-4ec2-aad9-49d606659644'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 297 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message a7abe340-b73c-4ec2-aad9-49d606659644
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:a7abe340-b73c-4ec2-aad9-49d606659644' --connect-timeout 1
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:a7abe340-b73c-4ec2-aad9-49d606659644'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:a7abe340-b73c-4ec2-aad9-49d606659644'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 296 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message a7abe340-b73c-4ec2-aad9-49d606659644
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:a7abe340-b73c-4ec2-aad9-49d606659644' --connect-timeout 1
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local 'endpoint=/project.logging*/_count?q=message:a7abe340-b73c-4ec2-aad9-49d606659644'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:a7abe340-b73c-4ec2-aad9-49d606659644'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 295 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message a7abe340-b73c-4ec2-aad9-49d606659644
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:a7abe340-b73c-4ec2-aad9-49d606659644' --connect-timeout 1
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local 'endpoint=/project.logging*/_count?q=message:a7abe340-b73c-4ec2-aad9-49d606659644'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:a7abe340-b73c-4ec2-aad9-49d606659644'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 294 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message a7abe340-b73c-4ec2-aad9-49d606659644
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:a7abe340-b73c-4ec2-aad9-49d606659644' --connect-timeout 1
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local 'endpoint=/project.logging*/_count?q=message:a7abe340-b73c-4ec2-aad9-49d606659644'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:a7abe340-b73c-4ec2-aad9-49d606659644'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 293 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message a7abe340-b73c-4ec2-aad9-49d606659644
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:a7abe340-b73c-4ec2-aad9-49d606659644' --connect-timeout 1
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:a7abe340-b73c-4ec2-aad9-49d606659644'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:a7abe340-b73c-4ec2-aad9-49d606659644'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 292 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message a7abe340-b73c-4ec2-aad9-49d606659644
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:a7abe340-b73c-4ec2-aad9-49d606659644' --connect-timeout 1
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ get_count_from_json
+++ local 'endpoint=/project.logging*/_count?q=message:a7abe340-b73c-4ec2-aad9-49d606659644'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:a7abe340-b73c-4ec2-aad9-49d606659644'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 291 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message a7abe340-b73c-4ec2-aad9-49d606659644
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:a7abe340-b73c-4ec2-aad9-49d606659644' --connect-timeout 1
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local 'endpoint=/project.logging*/_count?q=message:a7abe340-b73c-4ec2-aad9-49d606659644'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:a7abe340-b73c-4ec2-aad9-49d606659644'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 290 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message a7abe340-b73c-4ec2-aad9-49d606659644
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:a7abe340-b73c-4ec2-aad9-49d606659644' --connect-timeout 1
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:a7abe340-b73c-4ec2-aad9-49d606659644'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:a7abe340-b73c-4ec2-aad9-49d606659644'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 289 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message a7abe340-b73c-4ec2-aad9-49d606659644
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:a7abe340-b73c-4ec2-aad9-49d606659644' --connect-timeout 1
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:a7abe340-b73c-4ec2-aad9-49d606659644'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:a7abe340-b73c-4ec2-aad9-49d606659644'
++ local nrecs=1
++ test 1 = 1
good - wait_for_fluentd_to_catch_up: found 1 record project logging for a7abe340-b73c-4ec2-aad9-49d606659644
++ break
++ '[' 289 -le 0 ']'
++ return 0
++ echo good - wait_for_fluentd_to_catch_up: found 1 record project logging for a7abe340-b73c-4ec2-aad9-49d606659644
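The loop above is the project-side wait: once per second the script asks Elasticsearch for a _count of documents whose message field matches the UUID, parses the count out of the JSON, and stops when it reaches the expected value — after a number of one-second retries here. A condensed sketch of the helpers involved, pieced together from the traced commands (names and argument order follow the trace; the exact bodies are assumptions):

    # curl from inside the ES pod, authenticating with the admin client cert
    curl_es() {
        local pod=$1 endpoint=$2
        shift 2
        local secret_dir=/etc/elasticsearch/secret/
        oc exec $pod -- curl --silent --insecure "$@" \
            --key ${secret_dir}admin-key --cert ${secret_dir}admin-cert \
            "https://localhost:9200${endpoint}"
    }

    # pull the "count" field out of a _count response (python2 syntax, as in the trace)
    get_count_from_json() {
        python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
    }

    # e.g. query_es_from_es <pod> project.logging _count message <uuid>
    query_es_from_es() {
        curl_es $1 "/$2*/$3?q=$4:$5" --connect-timeout 1
    }

    # the check that gets retried: does the count match what we expect?
    test_count_expected() {
        local nrecs=$(query_es_from_es $espod $myproject _count $myfield $mymessage \
                      | get_count_from_json)
        test $nrecs = $expected
    }

    # retry the check once per second until it passes or the budget runs out
    wait_until_cmd_or_err() {
        let ii=$3
        local interval=1
        while [ $ii -gt 0 ]; do
            if $1; then return 0; fi
            sleep $interval
            let ii=ii-1
        done
        $2      # error reporter (test_count_err) only runs on timeout
        return 1
    }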
++ espod=logging-es-ops-data-master-xc2h70yx-1-08w7b
++ myproject=.operations
++ mymessage=613467a4-5c7e-4052-8448-798b82d56d70
++ expected=1
++ myfield=systemd.u.SYSLOG_IDENTIFIER
++ wait_until_cmd_or_err test_count_expected test_count_err 300
++ let ii=300
++ local interval=1
++ '[' 300 -gt 0 ']'
++ test_count_expected
++ myfield=systemd.u.SYSLOG_IDENTIFIER
+++ query_es_from_es logging-es-ops-data-master-xc2h70yx-1-08w7b .operations _count systemd.u.SYSLOG_IDENTIFIER 613467a4-5c7e-4052-8448-798b82d56d70
+++ get_count_from_json
+++ curl_es logging-es-ops-data-master-xc2h70yx-1-08w7b '/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:613467a4-5c7e-4052-8448-798b82d56d70' --connect-timeout 1
+++ local pod=logging-es-ops-data-master-xc2h70yx-1-08w7b
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local 'endpoint=/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:613467a4-5c7e-4052-8448-798b82d56d70'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-ops-data-master-xc2h70yx-1-08w7b -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:613467a4-5c7e-4052-8448-798b82d56d70'
++ local nrecs=1
++ test 1 = 1
++ break
++ '[' 300 -le 0 ']'
++ return 0
good - wait_for_fluentd_to_catch_up: found 1 record project .operations for 613467a4-5c7e-4052-8448-798b82d56d70
++ echo good - wait_for_fluentd_to_catch_up: found 1 record project .operations for 613467a4-5c7e-4052-8448-798b82d56d70
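The operations-side check is the same polling loop pointed at the es-ops pod, but because the second UUID was written with logger -t <uuid>, the document is found by its syslog tag rather than its message body: the query field is systemd.u.SYSLOG_IDENTIFIER and the index pattern is .operations*. It succeeded on the first attempt here. The equivalent one-off query, using the same assumed curl_es helper sketched above:

    # count .operations documents whose syslog identifier equals the ops uuid
    # (the field name comes from the journald metadata fluentd stores under systemd.u.*)
    curl_es $es_ops_pod \
        "/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:$uuid_es_ops" \
        --connect-timeout 1 | get_count_from_json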
++ '[' -n get_logmessage ']'
++ get_logmessage a7abe340-b73c-4ec2-aad9-49d606659644
++ logmessage=a7abe340-b73c-4ec2-aad9-49d606659644
++ '[' -n get_logmessage2 ']'
++ get_logmessage2 613467a4-5c7e-4052-8448-798b82d56d70
++ logmessage2=613467a4-5c7e-4052-8448-798b82d56d70
+++ date +%s
++ local endtime=1496975003
+++ expr 1496975003 - 1496974986
+++ date -u --rfc-3339=ns
END wait_for_fluentd_to_catch_up took 17 seconds at 2017-06-09 02:23:23.366573646+00:00
++ echo END wait_for_fluentd_to_catch_up took 17 seconds at 2017-06-09 02:23:23.366573646+00:00
++ return 0
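wait_for_fluentd_to_catch_up brackets the whole seed-and-poll cycle with epoch timestamps so the log reports how long fluentd took to deliver both records — 17 seconds for this first pass. The arithmetic behind the START/END lines is just:

    starttime=$(date +%s)          # taken before the two messages are written
    # ... seed messages, poll project.logging* and .operations* ...
    endtime=$(date +%s)
    echo END wait_for_fluentd_to_catch_up took $(expr $endtime - $starttime) \
         seconds at $(date -u --rfc-3339=ns)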
+++ get_running_pod kibana
+++ oc get pods -l component=kibana
+++ awk -v sel=kibana '$1 ~ sel && $3 == "Running" {print $1}'
++ kpod=logging-kibana-1-fz6pt
++ '[' 0 = 0 ']'
++ curl_es_from_kibana logging-kibana-1-fz6pt logging-es project.logging _search message a7abe340-b73c-4ec2-aad9-49d606659644
++ python test-viaq-data-model.py test1
++ oc exec logging-kibana-1-fz6pt -c kibana -- curl --connect-timeout 1 -s -k --cert /etc/kibana/keys/cert --key /etc/kibana/keys/key -H 'X-Proxy-Remote-User: admin' -H 'Authorization: Bearer b9YXT3gcexKSesTOvxPU0vSYJF_8FU6TsX8JT5BEmyo' -H 'X-Forwarded-For: 127.0.0.1' 'https://logging-es:9200/project.logging*/_search?q=message:a7abe340-b73c-4ec2-aad9-49d606659644'
++ :
++ '[' 0 = 0 ']'
++ curl_es_from_kibana logging-kibana-1-fz6pt logging-es-ops .operations _search message 613467a4-5c7e-4052-8448-798b82d56d70
++ python test-viaq-data-model.py test1
++ oc exec logging-kibana-1-fz6pt -c kibana -- curl --connect-timeout 1 -s -k --cert /etc/kibana/keys/cert --key /etc/kibana/keys/key -H 'X-Proxy-Remote-User: admin' -H 'Authorization: Bearer b9YXT3gcexKSesTOvxPU0vSYJF_8FU6TsX8JT5BEmyo' -H 'X-Forwarded-For: 127.0.0.1' 'https://logging-es-ops:9200/.operations*/_search?q=message:613467a4-5c7e-4052-8448-798b82d56d70'
++ :
++ '[' 0 '!=' 0 ']'
++ return 0
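With the counts confirmed, the verification proper runs: a _search for each UUID is issued from the Kibana pod through the logging-es and logging-es-ops services, authenticating with Kibana's client cert plus the proxy headers and bearer token shown in the trace, and the hits are piped into test-viaq-data-model.py, which checks the stored record against the ViaQ data model for the current test case (test1 here). A sketch of that helper as the trace implies it; the header values are copied from the log, the function body and the $token placeholder are assumptions:

    curl_es_from_kibana() {
        local kib_pod=$1 es_svc=$2 project=$3 action=$4 field=$5 value=$6
        oc exec $kib_pod -c kibana -- curl --connect-timeout 1 -s -k \
            --cert /etc/kibana/keys/cert --key /etc/kibana/keys/key \
            -H "X-Proxy-Remote-User: admin" \
            -H "Authorization: Bearer $token" \
            -H "X-Forwarded-For: 127.0.0.1" \
            "https://${es_svc}:9200/${project}*/${action}?q=${field}:${value}"
    }

    # the hits are then checked field-by-field for the named test case
    curl_es_from_kibana $kpod logging-es project.logging _search message $logmessage \
        | python test-viaq-data-model.py test1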
++ add_cdm_env_var_val CDM_USE_UNDEFINED '"true"'
+++ mktemp
++ junk=/tmp/tmp.kG2LLs13n7
++ cat
++ oc get template logging-fluentd-template -o yaml
++ sed '/env:/r /tmp/tmp.kG2LLs13n7'
++ oc replace -f -
template "logging-fluentd-template" replaced
++ rm -f /tmp/tmp.kG2LLs13n7
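add_cdm_env_var_val is how each test case reconfigures fluentd: a temp file holding the new name/value pair is spliced into the template right after its env: line with sed's r command, and the edited template replaces the stored one. Here it turns on CDM_USE_UNDEFINED. A sketch of what the trace implies; the YAML fragment and its indentation are assumptions that have to match the template's container env list:

    add_cdm_env_var_val() {
        local junk=$(mktemp)
        # fragment appended under the container's env: list
        printf '        - name: %s\n          value: %s\n' "$1" "$2" > $junk
        oc get template logging-fluentd-template -o yaml \
            | sed "/env:/r $junk" | oc replace -f -
        rm -f $junk
    }

    add_cdm_env_var_val CDM_USE_UNDEFINED '"true"'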
++ add_cdm_env_var_val CDM_EXTRA_KEEP_FIELDS method,statusCode,type,@timestamp,req,res
+++ mktemp
++ junk=/tmp/tmp.XiVJAnAKTo
++ cat
++ oc get template logging-fluentd-template -o yaml
++ sed '/env:/r /tmp/tmp.XiVJAnAKTo'
++ oc replace -f -
template "logging-fluentd-template" replaced
++ rm -f /tmp/tmp.XiVJAnAKTo
++ restart_fluentd
++ oc delete daemonset logging-fluentd
daemonset "logging-fluentd" deleted
++ wait_for_pod_ACTION stop logging-fluentd-0v2kd
++ local ii=120
++ local incr=10
++ '[' stop = start ']'
++ curpod=logging-fluentd-0v2kd
++ '[' -z logging-fluentd-0v2kd -a -n '' ']'
++ '[' 120 -gt 0 ']'
++ '[' stop = stop ']'
++ oc describe pod/logging-fluentd-0v2kd
++ '[' stop = start ']'
++ break
++ '[' 120 -le 0 ']'
++ return 0
++ oc process logging-fluentd-template
++ oc create -f -
daemonset "logging-fluentd" created
++ wait_for_pod_ACTION start fluentd
++ local ii=120
++ local incr=10
++ '[' start = start ']'
+++ get_running_pod fluentd
+++ oc get pods -l component=fluentd
+++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}'
++ curpod=
++ '[' 120 -gt 0 ']'
++ '[' start = stop ']'
++ '[' start = start ']'
++ '[' -z '' ']'
++ '[' -n '' ']'
++ '[' -n 1 ']'
++ echo pod for component=fluentd not running yet
pod for component=fluentd not running yet
++ sleep 10
+++ expr 120 - 10
++ ii=110
++ '[' start = start ']'
+++ get_running_pod fluentd
+++ oc get pods -l component=fluentd
+++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}'
++ curpod=logging-fluentd-lbfsw
++ '[' 110 -gt 0 ']'
++ '[' start = stop ']'
++ '[' start = start ']'
++ '[' -z logging-fluentd-lbfsw ']'
++ break
++ '[' 110 -le 0 ']'
++ return 0
+++ get_running_pod fluentd
+++ oc get pods -l component=fluentd
+++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}'
++ fpod=logging-fluentd-lbfsw
++ write_and_verify_logs test2
++ expected=1
++ rc=0
++ wait_for_fluentd_to_catch_up get_logmessage get_logmessage2
+++ date +%s
++ local starttime=1496975020
+++ date -u --rfc-3339=ns
START wait_for_fluentd_to_catch_up at 2017-06-09 02:23:40.627196783+00:00
++ echo START wait_for_fluentd_to_catch_up at 2017-06-09 02:23:40.627196783+00:00
+++ get_running_pod es
+++ oc get pods -l component=es
+++ awk -v sel=es '$1 ~ sel && $3 == "Running" {print $1}'
++ local es_pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ get_running_pod es-ops
+++ oc get pods -l component=es-ops
+++ awk -v sel=es-ops '$1 ~ sel && $3 == "Running" {print $1}'
++ local es_ops_pod=logging-es-ops-data-master-xc2h70yx-1-08w7b
++ '[' -z logging-es-ops-data-master-xc2h70yx-1-08w7b ']'
+++ uuidgen
++ local uuid_es=a057b0f5-4090-45a0-9e40-18a2056a304a
+++ uuidgen
++ local uuid_es_ops=3fe1750d-3ed3-4e5b-89a9-32ca7ca29e09
++ local expected=1
++ local timeout=300
++ add_test_message a057b0f5-4090-45a0-9e40-18a2056a304a
+++ get_running_pod kibana
+++ oc get pods -l component=kibana
+++ awk -v sel=kibana '$1 ~ sel && $3 == "Running" {print $1}'
++ local kib_pod=logging-kibana-1-fz6pt
++ oc exec logging-kibana-1-fz6pt -c kibana -- curl --connect-timeout 1 -s http://localhost:5601/a057b0f5-4090-45a0-9e40-18a2056a304a
added es message a057b0f5-4090-45a0-9e40-18a2056a304a
++ echo added es message a057b0f5-4090-45a0-9e40-18a2056a304a
++ logger -i -p local6.info -t 3fe1750d-3ed3-4e5b-89a9-32ca7ca29e09 3fe1750d-3ed3-4e5b-89a9-32ca7ca29e09
added es-ops message 3fe1750d-3ed3-4e5b-89a9-32ca7ca29e09
++ echo added es-ops message 3fe1750d-3ed3-4e5b-89a9-32ca7ca29e09
++ local rc=0
++ espod=logging-es-data-master-8nzz83ik-1-cqnxl
++ myproject=project.logging
++ mymessage=a057b0f5-4090-45a0-9e40-18a2056a304a
++ expected=1
++ wait_until_cmd_or_err test_count_expected test_count_err 300
++ let ii=300
++ local interval=1
++ '[' 300 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message a057b0f5-4090-45a0-9e40-18a2056a304a
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:a057b0f5-4090-45a0-9e40-18a2056a304a' --connect-timeout 1
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:a057b0f5-4090-45a0-9e40-18a2056a304a'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:a057b0f5-4090-45a0-9e40-18a2056a304a'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 299 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message a057b0f5-4090-45a0-9e40-18a2056a304a
+++ get_count_from_json
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:a057b0f5-4090-45a0-9e40-18a2056a304a' --connect-timeout 1
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:a057b0f5-4090-45a0-9e40-18a2056a304a'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:a057b0f5-4090-45a0-9e40-18a2056a304a'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 298 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message a057b0f5-4090-45a0-9e40-18a2056a304a
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:a057b0f5-4090-45a0-9e40-18a2056a304a' --connect-timeout 1
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:a057b0f5-4090-45a0-9e40-18a2056a304a'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:a057b0f5-4090-45a0-9e40-18a2056a304a'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 297 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message a057b0f5-4090-45a0-9e40-18a2056a304a
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:a057b0f5-4090-45a0-9e40-18a2056a304a' --connect-timeout 1
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local 'endpoint=/project.logging*/_count?q=message:a057b0f5-4090-45a0-9e40-18a2056a304a'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:a057b0f5-4090-45a0-9e40-18a2056a304a'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 296 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message a057b0f5-4090-45a0-9e40-18a2056a304a
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:a057b0f5-4090-45a0-9e40-18a2056a304a' --connect-timeout 1
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local 'endpoint=/project.logging*/_count?q=message:a057b0f5-4090-45a0-9e40-18a2056a304a'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:a057b0f5-4090-45a0-9e40-18a2056a304a'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 295 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message a057b0f5-4090-45a0-9e40-18a2056a304a
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:a057b0f5-4090-45a0-9e40-18a2056a304a' --connect-timeout 1
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local 'endpoint=/project.logging*/_count?q=message:a057b0f5-4090-45a0-9e40-18a2056a304a'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:a057b0f5-4090-45a0-9e40-18a2056a304a'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 294 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message a057b0f5-4090-45a0-9e40-18a2056a304a
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:a057b0f5-4090-45a0-9e40-18a2056a304a' --connect-timeout 1
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local 'endpoint=/project.logging*/_count?q=message:a057b0f5-4090-45a0-9e40-18a2056a304a'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:a057b0f5-4090-45a0-9e40-18a2056a304a'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 293 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message a057b0f5-4090-45a0-9e40-18a2056a304a
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:a057b0f5-4090-45a0-9e40-18a2056a304a' --connect-timeout 1
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local 'endpoint=/project.logging*/_count?q=message:a057b0f5-4090-45a0-9e40-18a2056a304a'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:a057b0f5-4090-45a0-9e40-18a2056a304a'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 292 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message a057b0f5-4090-45a0-9e40-18a2056a304a
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:a057b0f5-4090-45a0-9e40-18a2056a304a' --connect-timeout 1
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local 'endpoint=/project.logging*/_count?q=message:a057b0f5-4090-45a0-9e40-18a2056a304a'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:a057b0f5-4090-45a0-9e40-18a2056a304a'
++ local nrecs=1
++ test 1 = 1
++ break
++ '[' 292 -le 0 ']'
++ return 0
good - wait_for_fluentd_to_catch_up: found 1 record project logging for a057b0f5-4090-45a0-9e40-18a2056a304a
++ echo good - wait_for_fluentd_to_catch_up: found 1 record project logging for a057b0f5-4090-45a0-9e40-18a2056a304a
++ espod=logging-es-ops-data-master-xc2h70yx-1-08w7b
++ myproject=.operations
++ mymessage=3fe1750d-3ed3-4e5b-89a9-32ca7ca29e09
++ expected=1
++ myfield=systemd.u.SYSLOG_IDENTIFIER
++ wait_until_cmd_or_err test_count_expected test_count_err 300
++ let ii=300
++ local interval=1
++ '[' 300 -gt 0 ']'
++ test_count_expected
++ myfield=systemd.u.SYSLOG_IDENTIFIER
+++ query_es_from_es logging-es-ops-data-master-xc2h70yx-1-08w7b .operations _count systemd.u.SYSLOG_IDENTIFIER 3fe1750d-3ed3-4e5b-89a9-32ca7ca29e09
+++ get_count_from_json
+++ curl_es logging-es-ops-data-master-xc2h70yx-1-08w7b '/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:3fe1750d-3ed3-4e5b-89a9-32ca7ca29e09' --connect-timeout 1
+++ local pod=logging-es-ops-data-master-xc2h70yx-1-08w7b
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local 'endpoint=/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:3fe1750d-3ed3-4e5b-89a9-32ca7ca29e09'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-ops-data-master-xc2h70yx-1-08w7b -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:3fe1750d-3ed3-4e5b-89a9-32ca7ca29e09'
++ local nrecs=1
++ test 1 = 1
++ break
++ '[' 300 -le 0 ']'
++ return 0
good - wait_for_fluentd_to_catch_up: found 1 record project .operations for 3fe1750d-3ed3-4e5b-89a9-32ca7ca29e09
++ echo good - wait_for_fluentd_to_catch_up: found 1 record project .operations for 3fe1750d-3ed3-4e5b-89a9-32ca7ca29e09
++ '[' -n get_logmessage ']'
++ get_logmessage a057b0f5-4090-45a0-9e40-18a2056a304a
++ logmessage=a057b0f5-4090-45a0-9e40-18a2056a304a
++ '[' -n get_logmessage2 ']'
++ get_logmessage2 3fe1750d-3ed3-4e5b-89a9-32ca7ca29e09
++ logmessage2=3fe1750d-3ed3-4e5b-89a9-32ca7ca29e09
+++ date +%s
++ local endtime=1496975033
+++ expr 1496975033 - 1496975020
+++ date -u --rfc-3339=ns
END wait_for_fluentd_to_catch_up took 13 seconds at 2017-06-09 02:23:53.459437173+00:00
++ echo END wait_for_fluentd_to_catch_up took 13 seconds at 2017-06-09 02:23:53.459437173+00:00
++ return 0
+++ get_running_pod kibana
+++ oc get pods -l component=kibana
+++ awk -v sel=kibana '$1 ~ sel && $3 == "Running" {print $1}'
++ kpod=logging-kibana-1-fz6pt
++ '[' 0 = 0 ']'
++ curl_es_from_kibana logging-kibana-1-fz6pt logging-es project.logging _search message a057b0f5-4090-45a0-9e40-18a2056a304a
++ python test-viaq-data-model.py test2
++ oc exec logging-kibana-1-fz6pt -c kibana -- curl --connect-timeout 1 -s -k --cert /etc/kibana/keys/cert --key /etc/kibana/keys/key -H 'X-Proxy-Remote-User: admin' -H 'Authorization: Bearer b9YXT3gcexKSesTOvxPU0vSYJF_8FU6TsX8JT5BEmyo' -H 'X-Forwarded-For: 127.0.0.1' 'https://logging-es:9200/project.logging*/_search?q=message:a057b0f5-4090-45a0-9e40-18a2056a304a'
++ :
++ '[' 0 = 0 ']'
++ curl_es_from_kibana logging-kibana-1-fz6pt logging-es-ops .operations _search message 3fe1750d-3ed3-4e5b-89a9-32ca7ca29e09
++ python test-viaq-data-model.py test2
++ oc exec logging-kibana-1-fz6pt -c kibana -- curl --connect-timeout 1 -s -k --cert /etc/kibana/keys/cert --key /etc/kibana/keys/key -H 'X-Proxy-Remote-User: admin' -H 'Authorization: Bearer b9YXT3gcexKSesTOvxPU0vSYJF_8FU6TsX8JT5BEmyo' -H 'X-Forwarded-For: 127.0.0.1' 'https://logging-es-ops:9200/.operations*/_search?q=message:3fe1750d-3ed3-4e5b-89a9-32ca7ca29e09'
++ :
++ '[' 0 '!=' 0 ']'
++ return 0
++ del_cdm_env_var CDM_EXTRA_KEEP_FIELDS
++ oc get template logging-fluentd-template -o yaml
++ sed '/- name: CDM_EXTRA_KEEP_FIELDS$/,/value:/d'
++ oc replace -f -
template "logging-fluentd-template" replaced
++ add_cdm_env_var_val CDM_EXTRA_KEEP_FIELDS undefined4,undefined5,method,statusCode,type,@timestamp,req,res
+++ mktemp
++ junk=/tmp/tmp.rj9kTzb5q9
++ cat
++ oc get template logging-fluentd-template -o yaml
++ sed '/env:/r /tmp/tmp.rj9kTzb5q9'
++ oc replace -f -
template "logging-fluentd-template" replaced
++ rm -f /tmp/tmp.rj9kTzb5q9
++ restart_fluentd
++ oc delete daemonset logging-fluentd
daemonset "logging-fluentd" deleted
++ wait_for_pod_ACTION stop logging-fluentd-lbfsw
++ local ii=120
++ local incr=10
++ '[' stop = start ']'
++ curpod=logging-fluentd-lbfsw
++ '[' -z logging-fluentd-lbfsw -a -n '' ']'
++ '[' 120 -gt 0 ']'
++ '[' stop = stop ']'
++ oc describe pod/logging-fluentd-lbfsw
++ '[' stop = start ']'
++ break
++ '[' 120 -le 0 ']'
++ return 0
++ oc process logging-fluentd-template
++ oc create -f -
daemonset "logging-fluentd" created
++ wait_for_pod_ACTION start fluentd
++ local ii=120
++ local incr=10
++ '[' start = start ']'
+++ get_running_pod fluentd
+++ oc get pods -l component=fluentd
+++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}'
++ curpod=
++ '[' 120 -gt 0 ']'
++ '[' start = stop ']'
++ '[' start = start ']'
++ '[' -z '' ']'
++ '[' -n '' ']'
++ '[' -n 1 ']'
pod for component=fluentd not running yet
++ echo pod for component=fluentd not running yet
++ sleep 10
+++ expr 120 - 10
++ ii=110
++ '[' start = start ']'
+++ get_running_pod fluentd
+++ oc get pods -l component=fluentd
+++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}'
++ curpod=logging-fluentd-glmjj
++ '[' 110 -gt 0 ']'
++ '[' start = stop ']'
++ '[' start = start ']'
++ '[' -z logging-fluentd-glmjj ']'
++ break
++ '[' 110 -le 0 ']'
++ return 0
+++ get_running_pod fluentd
+++ oc get pods -l component=fluentd
+++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}'
++ fpod=logging-fluentd-glmjj
++ write_and_verify_logs test3
++ expected=1
++ rc=0
++ wait_for_fluentd_to_catch_up get_logmessage get_logmessage2
+++ date +%s
++ local starttime=1496975050
+++ date -u --rfc-3339=ns
START wait_for_fluentd_to_catch_up at 2017-06-09 02:24:10.414177003+00:00
++ echo START wait_for_fluentd_to_catch_up at 2017-06-09 02:24:10.414177003+00:00
+++ get_running_pod es
+++ oc get pods -l component=es
+++ awk -v sel=es '$1 ~ sel && $3 == "Running" {print $1}'
++ local es_pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ get_running_pod es-ops
+++ oc get pods -l component=es-ops
+++ awk -v sel=es-ops '$1 ~ sel && $3 == "Running" {print $1}'
++ local es_ops_pod=logging-es-ops-data-master-xc2h70yx-1-08w7b
++ '[' -z logging-es-ops-data-master-xc2h70yx-1-08w7b ']'
+++ uuidgen
++ local uuid_es=6e69cb38-df43-4143-a9ba-93a709257e49
+++ uuidgen
++ local uuid_es_ops=bdb35d8f-1ada-4b3b-887b-444d32a06c4b
++ local expected=1
++ local timeout=300
++ add_test_message 6e69cb38-df43-4143-a9ba-93a709257e49
+++ get_running_pod kibana
+++ oc get pods -l component=kibana
+++ awk -v sel=kibana '$1 ~ sel && $3 == "Running" {print $1}'
++ local kib_pod=logging-kibana-1-fz6pt
++ oc exec logging-kibana-1-fz6pt -c kibana -- curl --connect-timeout 1 -s http://localhost:5601/6e69cb38-df43-4143-a9ba-93a709257e49
added es message 6e69cb38-df43-4143-a9ba-93a709257e49
++ echo added es message 6e69cb38-df43-4143-a9ba-93a709257e49
++ logger -i -p local6.info -t bdb35d8f-1ada-4b3b-887b-444d32a06c4b bdb35d8f-1ada-4b3b-887b-444d32a06c4b
added es-ops message bdb35d8f-1ada-4b3b-887b-444d32a06c4b
++ echo added es-ops message bdb35d8f-1ada-4b3b-887b-444d32a06c4b
++ local rc=0
++ espod=logging-es-data-master-8nzz83ik-1-cqnxl
++ myproject=project.logging
++ mymessage=6e69cb38-df43-4143-a9ba-93a709257e49
++ expected=1
++ wait_until_cmd_or_err test_count_expected test_count_err 300
++ let ii=300
++ local interval=1
++ '[' 300 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 6e69cb38-df43-4143-a9ba-93a709257e49
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:6e69cb38-df43-4143-a9ba-93a709257e49' --connect-timeout 1
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local 'endpoint=/project.logging*/_count?q=message:6e69cb38-df43-4143-a9ba-93a709257e49'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:6e69cb38-df43-4143-a9ba-93a709257e49'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 299 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 6e69cb38-df43-4143-a9ba-93a709257e49
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:6e69cb38-df43-4143-a9ba-93a709257e49' --connect-timeout 1
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:6e69cb38-df43-4143-a9ba-93a709257e49'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:6e69cb38-df43-4143-a9ba-93a709257e49'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 298 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 6e69cb38-df43-4143-a9ba-93a709257e49
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:6e69cb38-df43-4143-a9ba-93a709257e49' --connect-timeout 1
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:6e69cb38-df43-4143-a9ba-93a709257e49'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:6e69cb38-df43-4143-a9ba-93a709257e49'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 297 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 6e69cb38-df43-4143-a9ba-93a709257e49
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:6e69cb38-df43-4143-a9ba-93a709257e49' --connect-timeout 1
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local 'endpoint=/project.logging*/_count?q=message:6e69cb38-df43-4143-a9ba-93a709257e49'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:6e69cb38-df43-4143-a9ba-93a709257e49'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 296 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 6e69cb38-df43-4143-a9ba-93a709257e49
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:6e69cb38-df43-4143-a9ba-93a709257e49' --connect-timeout 1
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:6e69cb38-df43-4143-a9ba-93a709257e49'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:6e69cb38-df43-4143-a9ba-93a709257e49'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 295 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 6e69cb38-df43-4143-a9ba-93a709257e49
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:6e69cb38-df43-4143-a9ba-93a709257e49' --connect-timeout 1
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:6e69cb38-df43-4143-a9ba-93a709257e49'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:6e69cb38-df43-4143-a9ba-93a709257e49'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 294 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 6e69cb38-df43-4143-a9ba-93a709257e49
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:6e69cb38-df43-4143-a9ba-93a709257e49' --connect-timeout 1
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:6e69cb38-df43-4143-a9ba-93a709257e49'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:6e69cb38-df43-4143-a9ba-93a709257e49'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 293 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 6e69cb38-df43-4143-a9ba-93a709257e49
+++ get_count_from_json
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:6e69cb38-df43-4143-a9ba-93a709257e49' --connect-timeout 1
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:6e69cb38-df43-4143-a9ba-93a709257e49'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:6e69cb38-df43-4143-a9ba-93a709257e49'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 292 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 6e69cb38-df43-4143-a9ba-93a709257e49
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:6e69cb38-df43-4143-a9ba-93a709257e49' --connect-timeout 1
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:6e69cb38-df43-4143-a9ba-93a709257e49'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:6e69cb38-df43-4143-a9ba-93a709257e49'
+++ get_count_from_json
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 291 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 6e69cb38-df43-4143-a9ba-93a709257e49
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:6e69cb38-df43-4143-a9ba-93a709257e49' --connect-timeout 1
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local 'endpoint=/project.logging*/_count?q=message:6e69cb38-df43-4143-a9ba-93a709257e49'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:6e69cb38-df43-4143-a9ba-93a709257e49'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 290 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 6e69cb38-df43-4143-a9ba-93a709257e49
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:6e69cb38-df43-4143-a9ba-93a709257e49' --connect-timeout 1
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:6e69cb38-df43-4143-a9ba-93a709257e49'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:6e69cb38-df43-4143-a9ba-93a709257e49'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 289 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 6e69cb38-df43-4143-a9ba-93a709257e49
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:6e69cb38-df43-4143-a9ba-93a709257e49' --connect-timeout 1
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local 'endpoint=/project.logging*/_count?q=message:6e69cb38-df43-4143-a9ba-93a709257e49'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:6e69cb38-df43-4143-a9ba-93a709257e49'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 288 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ get_count_from_json
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message 6e69cb38-df43-4143-a9ba-93a709257e49
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:6e69cb38-df43-4143-a9ba-93a709257e49' --connect-timeout 1
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:6e69cb38-df43-4143-a9ba-93a709257e49'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:6e69cb38-df43-4143-a9ba-93a709257e49'
++ local nrecs=1
++ test 1 = 1
++ break
++ '[' 288 -le 0 ']'
++ return 0
good - wait_for_fluentd_to_catch_up: found 1 record project logging for 6e69cb38-df43-4143-a9ba-93a709257e49
++ echo good - wait_for_fluentd_to_catch_up: found 1 record project logging for 6e69cb38-df43-4143-a9ba-93a709257e49
++ espod=logging-es-ops-data-master-xc2h70yx-1-08w7b
++ myproject=.operations
++ mymessage=bdb35d8f-1ada-4b3b-887b-444d32a06c4b
++ expected=1
++ myfield=systemd.u.SYSLOG_IDENTIFIER
++ wait_until_cmd_or_err test_count_expected test_count_err 300
++ let ii=300
++ local interval=1
++ '[' 300 -gt 0 ']'
++ test_count_expected
++ myfield=systemd.u.SYSLOG_IDENTIFIER
+++ query_es_from_es logging-es-ops-data-master-xc2h70yx-1-08w7b .operations _count systemd.u.SYSLOG_IDENTIFIER bdb35d8f-1ada-4b3b-887b-444d32a06c4b
+++ get_count_from_json
+++ curl_es logging-es-ops-data-master-xc2h70yx-1-08w7b '/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:bdb35d8f-1ada-4b3b-887b-444d32a06c4b' --connect-timeout 1
+++ local pod=logging-es-ops-data-master-xc2h70yx-1-08w7b
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local 'endpoint=/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:bdb35d8f-1ada-4b3b-887b-444d32a06c4b'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-ops-data-master-xc2h70yx-1-08w7b -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:bdb35d8f-1ada-4b3b-887b-444d32a06c4b'
++ local nrecs=1
++ test 1 = 1
++ break
++ '[' 300 -le 0 ']'
++ return 0
good - wait_for_fluentd_to_catch_up: found 1 record project .operations for bdb35d8f-1ada-4b3b-887b-444d32a06c4b
++ echo good - wait_for_fluentd_to_catch_up: found 1 record project .operations for bdb35d8f-1ada-4b3b-887b-444d32a06c4b
++ '[' -n get_logmessage ']'
++ get_logmessage 6e69cb38-df43-4143-a9ba-93a709257e49
++ logmessage=6e69cb38-df43-4143-a9ba-93a709257e49
++ '[' -n get_logmessage2 ']'
++ get_logmessage2 bdb35d8f-1ada-4b3b-887b-444d32a06c4b
++ logmessage2=bdb35d8f-1ada-4b3b-887b-444d32a06c4b
+++ date +%s
++ local endtime=1496975068
+++ expr 1496975068 - 1496975050
+++ date -u --rfc-3339=ns
END wait_for_fluentd_to_catch_up took 18 seconds at 2017-06-09 02:24:28.849563900+00:00
++ echo END wait_for_fluentd_to_catch_up took 18 seconds at 2017-06-09 02:24:28.849563900+00:00
++ return 0
+++ get_running_pod kibana
+++ oc get pods -l component=kibana
+++ awk -v sel=kibana '$1 ~ sel && $3 == "Running" {print $1}'
++ kpod=logging-kibana-1-fz6pt
++ '[' 0 = 0 ']'
++ curl_es_from_kibana logging-kibana-1-fz6pt logging-es project.logging _search message 6e69cb38-df43-4143-a9ba-93a709257e49
++ python test-viaq-data-model.py test3
++ oc exec logging-kibana-1-fz6pt -c kibana -- curl --connect-timeout 1 -s -k --cert /etc/kibana/keys/cert --key /etc/kibana/keys/key -H 'X-Proxy-Remote-User: admin' -H 'Authorization: Bearer b9YXT3gcexKSesTOvxPU0vSYJF_8FU6TsX8JT5BEmyo' -H 'X-Forwarded-For: 127.0.0.1' 'https://logging-es:9200/project.logging*/_search?q=message:6e69cb38-df43-4143-a9ba-93a709257e49'
++ :
++ '[' 0 = 0 ']'
++ curl_es_from_kibana logging-kibana-1-fz6pt logging-es-ops .operations _search message bdb35d8f-1ada-4b3b-887b-444d32a06c4b
++ python test-viaq-data-model.py test3
++ oc exec logging-kibana-1-fz6pt -c kibana -- curl --connect-timeout 1 -s -k --cert /etc/kibana/keys/cert --key /etc/kibana/keys/key -H 'X-Proxy-Remote-User: admin' -H 'Authorization: Bearer b9YXT3gcexKSesTOvxPU0vSYJF_8FU6TsX8JT5BEmyo' -H 'X-Forwarded-For: 127.0.0.1' 'https://logging-es-ops:9200/.operations*/_search?q=message:bdb35d8f-1ada-4b3b-887b-444d32a06c4b'
++ :
++ '[' 0 '!=' 0 ']'
++ return 0
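Each test case in this suite follows the same cycle the trace keeps repeating: adjust one or more CDM_* variables on the fluentd template, restart the daemonset, seed fresh UUID messages, wait for them to land, then check the stored documents with test-viaq-data-model.py. test2 ran with CDM_USE_UNDEFINED plus the keep_fields list, test3 swapped in a keep list that also names undefined4 and undefined5, and test4 (set up just below) adds CDM_UNDEFINED_NAME=myname. A compressed, illustrative view of that sequence — not verbatim script code — using the helpers sketched earlier and keep_fields=method,statusCode,type,@timestamp,req,res:

    add_cdm_env_var_val CDM_USE_UNDEFINED '"true"'
    add_cdm_env_var_val CDM_EXTRA_KEEP_FIELDS "$keep_fields"
    restart_fluentd
    write_and_verify_logs test2

    del_cdm_env_var CDM_EXTRA_KEEP_FIELDS
    add_cdm_env_var_val CDM_EXTRA_KEEP_FIELDS "undefined4,undefined5,$keep_fields"
    restart_fluentd
    write_and_verify_logs test3

    add_cdm_env_var_val CDM_UNDEFINED_NAME myname
    restart_fluentd
    write_and_verify_logs test4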
++ add_cdm_env_var_val CDM_UNDEFINED_NAME myname
+++ mktemp
++ junk=/tmp/tmp.aqzHKf7Tel
++ cat
++ oc get template logging-fluentd-template -o yaml
++ sed '/env:/r /tmp/tmp.aqzHKf7Tel'
++ oc replace -f -
template "logging-fluentd-template" replaced
++ rm -f /tmp/tmp.aqzHKf7Tel
++ restart_fluentd
++ oc delete daemonset logging-fluentd
daemonset "logging-fluentd" deleted
++ wait_for_pod_ACTION stop logging-fluentd-glmjj
++ local ii=120
++ local incr=10
++ '[' stop = start ']'
++ curpod=logging-fluentd-glmjj
++ '[' -z logging-fluentd-glmjj -a -n '' ']'
++ '[' 120 -gt 0 ']'
++ '[' stop = stop ']'
++ oc describe pod/logging-fluentd-glmjj
++ '[' stop = start ']'
++ break
++ '[' 120 -le 0 ']'
++ return 0
++ oc process logging-fluentd-template
++ oc create -f -
daemonset "logging-fluentd" created
++ wait_for_pod_ACTION start fluentd
++ local ii=120
++ local incr=10
++ '[' start = start ']'
+++ get_running_pod fluentd
+++ oc get pods -l component=fluentd
+++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}'
++ curpod=
++ '[' 120 -gt 0 ']'
++ '[' start = stop ']'
++ '[' start = start ']'
++ '[' -z '' ']'
++ '[' -n '' ']'
++ '[' -n 1 ']'
pod for component=fluentd not running yet
++ echo pod for component=fluentd not running yet
++ sleep 10
+++ expr 120 - 10
++ ii=110
++ '[' start = start ']'
+++ get_running_pod fluentd
+++ oc get pods -l component=fluentd
+++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}'
++ curpod=logging-fluentd-hlc4r
++ '[' 110 -gt 0 ']'
++ '[' start = stop ']'
++ '[' start = start ']'
++ '[' -z logging-fluentd-hlc4r ']'
++ break
++ '[' 110 -le 0 ']'
++ return 0
+++ get_running_pod fluentd
+++ oc get pods -l component=fluentd
+++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}'
++ fpod=logging-fluentd-hlc4r
++ write_and_verify_logs test4
++ expected=1
++ rc=0
++ wait_for_fluentd_to_catch_up get_logmessage get_logmessage2
+++ date +%s
++ local starttime=1496975092
+++ date -u --rfc-3339=ns
START wait_for_fluentd_to_catch_up at 2017-06-09 02:24:52.568953465+00:00
++ echo START wait_for_fluentd_to_catch_up at 2017-06-09 02:24:52.568953465+00:00
+++ get_running_pod es
+++ oc get pods -l component=es
+++ awk -v sel=es '$1 ~ sel && $3 == "Running" {print $1}'
++ local es_pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ get_running_pod es-ops
+++ oc get pods -l component=es-ops
+++ awk -v sel=es-ops '$1 ~ sel && $3 == "Running" {print $1}'
++ local es_ops_pod=logging-es-ops-data-master-xc2h70yx-1-08w7b
++ '[' -z logging-es-ops-data-master-xc2h70yx-1-08w7b ']'
+++ uuidgen
++ local uuid_es=bb55c0ed-170c-4147-9334-5cfe0960003b
+++ uuidgen
++ local uuid_es_ops=5270be67-d874-4aab-b9a9-9bd5d3644d2e
++ local expected=1
++ local timeout=300
++ add_test_message bb55c0ed-170c-4147-9334-5cfe0960003b
+++ get_running_pod kibana
+++ oc get pods -l component=kibana
+++ awk -v sel=kibana '$1 ~ sel && $3 == "Running" {print $1}'
++ local kib_pod=logging-kibana-1-fz6pt
++ oc exec logging-kibana-1-fz6pt -c kibana -- curl --connect-timeout 1 -s http://localhost:5601/bb55c0ed-170c-4147-9334-5cfe0960003b
added es message bb55c0ed-170c-4147-9334-5cfe0960003b
++ echo added es message bb55c0ed-170c-4147-9334-5cfe0960003b
++ logger -i -p local6.info -t 5270be67-d874-4aab-b9a9-9bd5d3644d2e 5270be67-d874-4aab-b9a9-9bd5d3644d2e
added es-ops message 5270be67-d874-4aab-b9a9-9bd5d3644d2e
++ echo added es-ops message 5270be67-d874-4aab-b9a9-9bd5d3644d2e
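wait_for_fluentd_to_catch_up seeds one marker per cluster: an HTTP request for a uuid path against the kibana container (the request line lands in the kibana container log, which fluentd indexes into project.logging), and a journald record written with logger (routed by fluentd into .operations). A sketch of the two writes, using the get_running_pod helper sketched earlier:

uuid_es=$(uuidgen)
uuid_es_ops=$(uuidgen)
kib_pod=$(get_running_pod kibana)
# container-log path: kibana logs the request for the uuid URL
oc exec "$kib_pod" -c kibana -- curl --connect-timeout 1 -s "http://localhost:5601/$uuid_es"
# journald path: tag and body are both the uuid so either field can be queried
logger -i -p local6.info -t "$uuid_es_ops" "$uuid_es_ops"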
++ local rc=0
++ espod=logging-es-data-master-8nzz83ik-1-cqnxl
++ myproject=project.logging
++ mymessage=bb55c0ed-170c-4147-9334-5cfe0960003b
++ expected=1
++ wait_until_cmd_or_err test_count_expected test_count_err 300
++ let ii=300
++ local interval=1
++ '[' 300 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message bb55c0ed-170c-4147-9334-5cfe0960003b
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:bb55c0ed-170c-4147-9334-5cfe0960003b' --connect-timeout 1
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:bb55c0ed-170c-4147-9334-5cfe0960003b'
+++ shift
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:bb55c0ed-170c-4147-9334-5cfe0960003b'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 299 -gt 0 ']'
[ test_count_expected repeated the same _count query for bb55c0ed-170c-4147-9334-5cfe0960003b once per second for 13 more attempts, each returning nrecs=0, while ii counted down from 299 to 287 ]
++ '[' 286 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message bb55c0ed-170c-4147-9334-5cfe0960003b
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:bb55c0ed-170c-4147-9334-5cfe0960003b' --connect-timeout 1
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local 'endpoint=/project.logging*/_count?q=message:bb55c0ed-170c-4147-9334-5cfe0960003b'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:bb55c0ed-170c-4147-9334-5cfe0960003b'
good - wait_for_fluentd_to_catch_up: found 1 record project logging for bb55c0ed-170c-4147-9334-5cfe0960003b
++ local nrecs=1
++ test 1 = 1
++ break
++ '[' 286 -le 0 ']'
++ return 0
++ echo good - wait_for_fluentd_to_catch_up: found 1 record project logging for bb55c0ed-170c-4147-9334-5cfe0960003b
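The wait loop that just completed is the generic poll-until-count pattern used throughout this run: curl the _count API from inside the ES pod with the admin cert, pull the count out of the JSON, and retry once a second until it reaches the expected value or the 300-second budget runs out. A condensed sketch of the helpers involved (espod, myproject, myfield, mymessage, and expected are set by the caller as in the trace; the test_count_err failure path is not shown):

curl_es() {
    local pod=$1 endpoint=$2; shift; shift
    oc exec "$pod" -- curl --silent --insecure "$@" \
      --key /etc/elasticsearch/secret/admin-key \
      --cert /etc/elasticsearch/secret/admin-cert \
      "https://localhost:9200${endpoint}"
}

get_count_from_json() {
    python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
}

test_count_expected() {
    local nrecs=$(curl_es "$espod" "/${myproject}*/_count?q=${myfield}:${mymessage}" --connect-timeout 1 | get_count_from_json)
    test "$nrecs" = "$expected"
}

wait_until_cmd_or_err() {
    local cmd=$1 errcmd=$2 ii=$3 interval=1
    while [ $ii -gt 0 ]; do
        $cmd && return 0
        sleep $interval
        let ii=ii-$interval
    done
    $errcmd
    return 1
}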
++ espod=logging-es-ops-data-master-xc2h70yx-1-08w7b
++ myproject=.operations
++ mymessage=5270be67-d874-4aab-b9a9-9bd5d3644d2e
++ expected=1
++ myfield=systemd.u.SYSLOG_IDENTIFIER
++ wait_until_cmd_or_err test_count_expected test_count_err 300
++ let ii=300
++ local interval=1
++ '[' 300 -gt 0 ']'
++ test_count_expected
++ myfield=systemd.u.SYSLOG_IDENTIFIER
+++ query_es_from_es logging-es-ops-data-master-xc2h70yx-1-08w7b .operations _count systemd.u.SYSLOG_IDENTIFIER 5270be67-d874-4aab-b9a9-9bd5d3644d2e
+++ get_count_from_json
+++ curl_es logging-es-ops-data-master-xc2h70yx-1-08w7b '/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:5270be67-d874-4aab-b9a9-9bd5d3644d2e' --connect-timeout 1
+++ local pod=logging-es-ops-data-master-xc2h70yx-1-08w7b
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local 'endpoint=/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:5270be67-d874-4aab-b9a9-9bd5d3644d2e'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-ops-data-master-xc2h70yx-1-08w7b -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:5270be67-d874-4aab-b9a9-9bd5d3644d2e'
++ local nrecs=1
++ test 1 = 1
++ break
++ '[' 300 -le 0 ']'
++ return 0
good - wait_for_fluentd_to_catch_up: found 1 record project .operations for 5270be67-d874-4aab-b9a9-9bd5d3644d2e
++ echo good - wait_for_fluentd_to_catch_up: found 1 record project .operations for 5270be67-d874-4aab-b9a9-9bd5d3644d2e
++ '[' -n get_logmessage ']'
++ get_logmessage bb55c0ed-170c-4147-9334-5cfe0960003b
++ logmessage=bb55c0ed-170c-4147-9334-5cfe0960003b
++ '[' -n get_logmessage2 ']'
++ get_logmessage2 5270be67-d874-4aab-b9a9-9bd5d3644d2e
++ logmessage2=5270be67-d874-4aab-b9a9-9bd5d3644d2e
+++ date +%s
++ local endtime=1496975113
+++ expr 1496975113 - 1496975092
+++ date -u --rfc-3339=ns
END wait_for_fluentd_to_catch_up took 21 seconds at 2017-06-09 02:25:13.915658620+00:00
++ echo END wait_for_fluentd_to_catch_up took 21 seconds at 2017-06-09 02:25:13.915658620+00:00
++ return 0
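The elapsed-time report above is plain epoch arithmetic wrapped around the two waits; a sketch:

starttime=$(date +%s)
# ... seed the two markers and wait for both _count queries to reach 1 ...
endtime=$(date +%s)
echo END wait_for_fluentd_to_catch_up took $(expr $endtime - $starttime) seconds at $(date -u --rfc-3339=ns)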
+++ get_running_pod kibana
+++ oc get pods -l component=kibana
+++ awk -v sel=kibana '$1 ~ sel && $3 == "Running" {print $1}'
++ kpod=logging-kibana-1-fz6pt
++ '[' 0 = 0 ']'
++ curl_es_from_kibana logging-kibana-1-fz6pt logging-es project.logging _search message bb55c0ed-170c-4147-9334-5cfe0960003b
++ python test-viaq-data-model.py test4
++ oc exec logging-kibana-1-fz6pt -c kibana -- curl --connect-timeout 1 -s -k --cert /etc/kibana/keys/cert --key /etc/kibana/keys/key -H 'X-Proxy-Remote-User: admin' -H 'Authorization: Bearer b9YXT3gcexKSesTOvxPU0vSYJF_8FU6TsX8JT5BEmyo' -H 'X-Forwarded-For: 127.0.0.1' 'https://logging-es:9200/project.logging*/_search?q=message:bb55c0ed-170c-4147-9334-5cfe0960003b'
++ :
++ '[' 0 = 0 ']'
++ curl_es_from_kibana logging-kibana-1-fz6pt logging-es-ops .operations _search message 5270be67-d874-4aab-b9a9-9bd5d3644d2e
++ python test-viaq-data-model.py test4
++ oc exec logging-kibana-1-fz6pt -c kibana -- curl --connect-timeout 1 -s -k --cert /etc/kibana/keys/cert --key /etc/kibana/keys/key -H 'X-Proxy-Remote-User: admin' -H 'Authorization: Bearer b9YXT3gcexKSesTOvxPU0vSYJF_8FU6TsX8JT5BEmyo' -H 'X-Forwarded-For: 127.0.0.1' 'https://logging-es-ops:9200/.operations*/_search?q=message:5270be67-d874-4aab-b9a9-9bd5d3644d2e'
++ :
++ '[' 0 '!=' 0 ']'
++ return 0
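Each write_and_verify_logs pass ends the way test4 does above: the kibana-mediated _search output is fed to the test-viaq-data-model.py checker for the named test case, and a failure flips rc. The exact wiring inside the test script is not echoed by set -x, so the pipeline below is an assumption consistent with the trace, reusing the curl_es_from_kibana and get_running_pod helpers sketched earlier:

kpod=$(get_running_pod kibana)
if [ $rc = 0 ]; then
    curl_es_from_kibana "$kpod" logging-es project.logging _search message "$logmessage" | \
      python test-viaq-data-model.py test4 || rc=1
fi
if [ $rc = 0 ]; then
    curl_es_from_kibana "$kpod" logging-es-ops .operations _search message "$logmessage2" | \
      python test-viaq-data-model.py test4 || rc=1
fi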
++ del_cdm_env_var CDM_EXTRA_KEEP_FIELDS
++ oc get template logging-fluentd-template -o yaml
++ sed '/- name: CDM_EXTRA_KEEP_FIELDS$/,/value:/d'
++ oc replace -f -
template "logging-fluentd-template" replaced
++ add_cdm_env_var_val CDM_EXTRA_KEEP_FIELDS undefined4,undefined5,empty1,undefined3,method,statusCode,type,@timestamp,req,res
+++ mktemp
++ junk=/tmp/tmp.pRo2brmMc0
++ cat
++ oc get template logging-fluentd-template -o yaml
++ sed '/env:/r /tmp/tmp.pRo2brmMc0'
++ oc replace -f -
template "logging-fluentd-template" replaced
++ rm -f /tmp/tmp.pRo2brmMc0
++ add_cdm_env_var_val CDM_KEEP_EMPTY_FIELDS undefined4,undefined5,empty1,undefined3
+++ mktemp
++ junk=/tmp/tmp.5Ub9HiDpfi
++ cat
++ oc get template logging-fluentd-template -o yaml
++ sed '/env:/r /tmp/tmp.5Ub9HiDpfi'
++ oc replace -f -
template "logging-fluentd-template" replaced
++ rm -f /tmp/tmp.5Ub9HiDpfi
++ restart_fluentd
++ oc delete daemonset logging-fluentd
daemonset "logging-fluentd" deleted
++ wait_for_pod_ACTION stop logging-fluentd-hlc4r
++ local ii=120
++ local incr=10
++ '[' stop = start ']'
++ curpod=logging-fluentd-hlc4r
++ '[' -z logging-fluentd-hlc4r -a -n '' ']'
++ '[' 120 -gt 0 ']'
++ '[' stop = stop ']'
++ oc describe pod/logging-fluentd-hlc4r
++ '[' stop = start ']'
++ break
++ '[' 120 -le 0 ']'
++ return 0
++ oc process logging-fluentd-template
++ oc create -f -
daemonset "logging-fluentd" created
++ wait_for_pod_ACTION start fluentd
++ local ii=120
++ local incr=10
++ '[' start = start ']'
+++ get_running_pod fluentd
+++ oc get pods -l component=fluentd
+++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}'
++ curpod=
++ '[' 120 -gt 0 ']'
++ '[' start = stop ']'
++ '[' start = start ']'
++ '[' -z '' ']'
++ '[' -n '' ']'
++ '[' -n 1 ']'
++ echo pod for component=fluentd not running yet
pod for component=fluentd not running yet
++ sleep 10
+++ expr 120 - 10
++ ii=110
++ '[' start = start ']'
+++ get_running_pod fluentd
+++ oc get pods -l component=fluentd
+++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}'
++ curpod=logging-fluentd-pv704
++ '[' 110 -gt 0 ']'
++ '[' start = stop ']'
++ '[' start = start ']'
++ '[' -z logging-fluentd-pv704 ']'
++ break
++ '[' 110 -le 0 ']'
++ return 0
+++ get_running_pod fluentd
+++ oc get pods -l component=fluentd
+++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}'
++ fpod=logging-fluentd-pv704
++ write_and_verify_logs test5 allow_empty
++ expected=1
++ rc=0
++ wait_for_fluentd_to_catch_up get_logmessage get_logmessage2
+++ date +%s
++ local starttime=1496975142
+++ date -u --rfc-3339=ns
START wait_for_fluentd_to_catch_up at 2017-06-09 02:25:42.440149479+00:00
++ echo START wait_for_fluentd_to_catch_up at 2017-06-09 02:25:42.440149479+00:00
+++ get_running_pod es
+++ oc get pods -l component=es
+++ awk -v sel=es '$1 ~ sel && $3 == "Running" {print $1}'
++ local es_pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ get_running_pod es-ops
+++ oc get pods -l component=es-ops
+++ awk -v sel=es-ops '$1 ~ sel && $3 == "Running" {print $1}'
++ local es_ops_pod=logging-es-ops-data-master-xc2h70yx-1-08w7b
++ '[' -z logging-es-ops-data-master-xc2h70yx-1-08w7b ']'
+++ uuidgen
++ local uuid_es=d37c6bc9-d55c-438a-ac57-52a2b4c40f3a
+++ uuidgen
++ local uuid_es_ops=50d50fad-2e5e-465b-ab08-dde7225f9ed6
++ local expected=1
++ local timeout=300
++ add_test_message d37c6bc9-d55c-438a-ac57-52a2b4c40f3a
+++ get_running_pod kibana
+++ oc get pods -l component=kibana
+++ awk -v sel=kibana '$1 ~ sel && $3 == "Running" {print $1}'
++ local kib_pod=logging-kibana-1-fz6pt
++ oc exec logging-kibana-1-fz6pt -c kibana -- curl --connect-timeout 1 -s http://localhost:5601/d37c6bc9-d55c-438a-ac57-52a2b4c40f3a
added es message d37c6bc9-d55c-438a-ac57-52a2b4c40f3a
++ echo added es message d37c6bc9-d55c-438a-ac57-52a2b4c40f3a
++ logger -i -p local6.info -t 50d50fad-2e5e-465b-ab08-dde7225f9ed6 50d50fad-2e5e-465b-ab08-dde7225f9ed6
added es-ops message 50d50fad-2e5e-465b-ab08-dde7225f9ed6
++ echo added es-ops message 50d50fad-2e5e-465b-ab08-dde7225f9ed6
++ local rc=0
++ espod=logging-es-data-master-8nzz83ik-1-cqnxl
++ myproject=project.logging
++ mymessage=d37c6bc9-d55c-438a-ac57-52a2b4c40f3a
++ expected=1
++ wait_until_cmd_or_err test_count_expected test_count_err 300
++ let ii=300
++ local interval=1
++ '[' 300 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message d37c6bc9-d55c-438a-ac57-52a2b4c40f3a
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:d37c6bc9-d55c-438a-ac57-52a2b4c40f3a' --connect-timeout 1
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:d37c6bc9-d55c-438a-ac57-52a2b4c40f3a'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:d37c6bc9-d55c-438a-ac57-52a2b4c40f3a'
++ local nrecs=0
++ test 0 = 1
++ sleep 1
++ let ii=ii-1
++ '[' 299 -gt 0 ']'
[ test_count_expected repeated the same _count query for d37c6bc9-d55c-438a-ac57-52a2b4c40f3a once per second for 13 more attempts, each returning nrecs=0, while ii counted down from 299 to 287 ]
++ '[' 286 -gt 0 ']'
++ test_count_expected
++ myfield=message
+++ query_es_from_es logging-es-data-master-8nzz83ik-1-cqnxl project.logging _count message d37c6bc9-d55c-438a-ac57-52a2b4c40f3a
+++ get_count_from_json
+++ curl_es logging-es-data-master-8nzz83ik-1-cqnxl '/project.logging*/_count?q=message:d37c6bc9-d55c-438a-ac57-52a2b4c40f3a' --connect-timeout 1
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local pod=logging-es-data-master-8nzz83ik-1-cqnxl
+++ local 'endpoint=/project.logging*/_count?q=message:d37c6bc9-d55c-438a-ac57-52a2b4c40f3a'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-data-master-8nzz83ik-1-cqnxl -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:d37c6bc9-d55c-438a-ac57-52a2b4c40f3a'
good - wait_for_fluentd_to_catch_up: found 1 record project logging for d37c6bc9-d55c-438a-ac57-52a2b4c40f3a
++ local nrecs=1
++ test 1 = 1
++ break
++ '[' 286 -le 0 ']'
++ return 0
++ echo good - wait_for_fluentd_to_catch_up: found 1 record project logging for d37c6bc9-d55c-438a-ac57-52a2b4c40f3a
++ espod=logging-es-ops-data-master-xc2h70yx-1-08w7b
++ myproject=.operations
++ mymessage=50d50fad-2e5e-465b-ab08-dde7225f9ed6
++ expected=1
++ myfield=systemd.u.SYSLOG_IDENTIFIER
++ wait_until_cmd_or_err test_count_expected test_count_err 300
++ let ii=300
++ local interval=1
++ '[' 300 -gt 0 ']'
++ test_count_expected
++ myfield=systemd.u.SYSLOG_IDENTIFIER
+++ query_es_from_es logging-es-ops-data-master-xc2h70yx-1-08w7b .operations _count systemd.u.SYSLOG_IDENTIFIER 50d50fad-2e5e-465b-ab08-dde7225f9ed6
+++ get_count_from_json
+++ curl_es logging-es-ops-data-master-xc2h70yx-1-08w7b '/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:50d50fad-2e5e-465b-ab08-dde7225f9ed6' --connect-timeout 1
+++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
+++ local pod=logging-es-ops-data-master-xc2h70yx-1-08w7b
+++ local 'endpoint=/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:50d50fad-2e5e-465b-ab08-dde7225f9ed6'
+++ shift
+++ shift
+++ args=("${@:-}")
+++ local args
+++ local secret_dir=/etc/elasticsearch/secret/
+++ oc exec logging-es-ops-data-master-xc2h70yx-1-08w7b -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:50d50fad-2e5e-465b-ab08-dde7225f9ed6'
++ local nrecs=1
++ test 1 = 1
++ break
++ '[' 300 -le 0 ']'
++ return 0
good - wait_for_fluentd_to_catch_up: found 1 record project .operations for 50d50fad-2e5e-465b-ab08-dde7225f9ed6
++ echo good - wait_for_fluentd_to_catch_up: found 1 record project .operations for 50d50fad-2e5e-465b-ab08-dde7225f9ed6
++ '[' -n get_logmessage ']'
++ get_logmessage d37c6bc9-d55c-438a-ac57-52a2b4c40f3a
++ logmessage=d37c6bc9-d55c-438a-ac57-52a2b4c40f3a
++ '[' -n get_logmessage2 ']'
++ get_logmessage2 50d50fad-2e5e-465b-ab08-dde7225f9ed6
++ logmessage2=50d50fad-2e5e-465b-ab08-dde7225f9ed6
+++ date +%s
++ local endtime=1496975163
+++ expr 1496975163 - 1496975142
+++ date -u --rfc-3339=ns
END wait_for_fluentd_to_catch_up took 21 seconds at 2017-06-09 02:26:03.934954787+00:00
++ echo END wait_for_fluentd_to_catch_up took 21 seconds at 2017-06-09 02:26:03.934954787+00:00
++ return 0
+++ get_running_pod kibana
+++ oc get pods -l component=kibana
+++ awk -v sel=kibana '$1 ~ sel && $3 == "Running" {print $1}'
++ kpod=logging-kibana-1-fz6pt
++ '[' 0 = 0 ']'
++ curl_es_from_kibana logging-kibana-1-fz6pt logging-es project.logging _search message d37c6bc9-d55c-438a-ac57-52a2b4c40f3a
++ python test-viaq-data-model.py test5 allow_empty
++ oc exec logging-kibana-1-fz6pt -c kibana -- curl --connect-timeout 1 -s -k --cert /etc/kibana/keys/cert --key /etc/kibana/keys/key -H 'X-Proxy-Remote-User: admin' -H 'Authorization: Bearer b9YXT3gcexKSesTOvxPU0vSYJF_8FU6TsX8JT5BEmyo' -H 'X-Forwarded-For: 127.0.0.1' 'https://logging-es:9200/project.logging*/_search?q=message:d37c6bc9-d55c-438a-ac57-52a2b4c40f3a'
++ :
++ '[' 0 = 0 ']'
++ curl_es_from_kibana logging-kibana-1-fz6pt logging-es-ops .operations _search message 50d50fad-2e5e-465b-ab08-dde7225f9ed6
++ python test-viaq-data-model.py test5 allow_empty
++ oc exec logging-kibana-1-fz6pt -c kibana -- curl --connect-timeout 1 -s -k --cert /etc/kibana/keys/cert --key /etc/kibana/keys/key -H 'X-Proxy-Remote-User: admin' -H 'Authorization: Bearer b9YXT3gcexKSesTOvxPU0vSYJF_8FU6TsX8JT5BEmyo' -H 'X-Forwarded-For: 127.0.0.1' 'https://logging-es-ops:9200/.operations*/_search?q=message:50d50fad-2e5e-465b-ab08-dde7225f9ed6'
++ :
++ '[' 0 '!=' 0 ']'
++ return 0
++ cleanup
++ remove_test_volume
++ oc get template logging-fluentd-template -o json
++ oc replace -f -
++ python -c 'import json, sys; obj = json.loads(sys.stdin.read()); vm = obj["objects"][0]["spec"]["template"]["spec"]["containers"][0]["volumeMounts"]; obj["objects"][0]["spec"]["template"]["spec"]["containers"][0]["volumeMounts"] = [xx for xx in vm if xx["name"] != "cdmtest"]; vs = obj["objects"][0]["spec"]["template"]["spec"]["volumes"]; obj["objects"][0]["spec"]["template"]["spec"]["volumes"] = [xx for xx in vs if xx["name"] != "cdmtest"]; print json.dumps(obj, indent=2)'
template "logging-fluentd-template" replaced
++ remove_cdm_env
++ oc get template logging-fluentd-template -o yaml
++ sed '/- name: CDM_/,/value:/d'
++ oc replace -f -
template "logging-fluentd-template" replaced
++ rm -f /tmp/tmp.HEU3elyHps
++ restart_fluentd
++ oc delete daemonset logging-fluentd
daemonset "logging-fluentd" deleted
++ wait_for_pod_ACTION stop logging-fluentd-pv704
++ local ii=120
++ local incr=10
++ '[' stop = start ']'
++ curpod=logging-fluentd-pv704
++ '[' -z logging-fluentd-pv704 -a -n '' ']'
++ '[' 120 -gt 0 ']'
++ '[' stop = stop ']'
++ oc describe pod/logging-fluentd-pv704
++ '[' stop = start ']'
++ break
++ '[' 120 -le 0 ']'
++ return 0
++ oc process logging-fluentd-template
++ oc create -f -
daemonset "logging-fluentd" created
++ wait_for_pod_ACTION start fluentd
++ local ii=120
++ local incr=10
++ '[' start = start ']'
+++ get_running_pod fluentd
+++ oc get pods -l component=fluentd
+++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}'
++ curpod=
++ '[' 120 -gt 0 ']'
++ '[' start = stop ']'
++ '[' start = start ']'
++ '[' -z '' ']'
++ '[' -n '' ']'
++ '[' -n 1 ']'
++ echo pod for component=fluentd not running yet
pod for component=fluentd not running yet
++ sleep 10
+++ expr 120 - 10
++ ii=110
++ '[' start = start ']'
+++ get_running_pod fluentd
+++ oc get pods -l component=fluentd
+++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}'
++ curpod=logging-fluentd-pwxwh
++ '[' 110 -gt 0 ']'
++ '[' start = stop ']'
++ '[' start = start ']'
++ '[' -z logging-fluentd-pwxwh ']'
++ break
++ '[' 110 -le 0 ']'
++ return 0
SKIPPING reinstall test for now
/data/src/github.com/openshift/origin-aggregated-logging/hack/lib/log/system.sh: line 31:  4612 Terminated              sar -A -o "${binary_logfile}" 1 86400 > /dev/null 2> "${stderr_logfile}"  (wd: /data/src/github.com/openshift/origin-aggregated-logging)
[INFO] [CLEANUP] Beginning cleanup routines...
[INFO] [CLEANUP] Dumping cluster events to /tmp/origin-aggregated-logging/artifacts/events.txt
[INFO] [CLEANUP] Dumping etcd contents to /tmp/origin-aggregated-logging/artifacts/etcd
[WARNING] No compiled `etcdhelper` binary was found. Attempting to build one using:
[WARNING]   $ hack/build-go.sh tools/etcdhelper
++ Building go targets for linux/amd64: tools/etcdhelper
/data/src/github.com/openshift/origin-aggregated-logging/../origin/hack/build-go.sh took 129 seconds
2017-06-08 22:28:45.031539 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
[INFO] [CLEANUP] Dumping container logs to /tmp/origin-aggregated-logging/logs/containers
[INFO] [CLEANUP] Truncating log files over 200M
[INFO] [CLEANUP] Stopping docker containers
[INFO] [CLEANUP] Removing docker containers
Error response from daemon: You cannot remove a running container f0631b367c5332163cdf3e73daa1cf76eeffa31d95859c57ed4062bb60b6e40e. Stop the container before attempting removal or use -f
Error response from daemon: You cannot remove a running container b2aa234ddccf8547a90479ed6e520c746c0c96386925d67f375da6f3fff6ee3e. Stop the container before attempting removal or use -f
Error response from daemon: You cannot remove a running container edd62c4bf2a472750ec4e209bec8ff2b7ce4b65d3307f63be0d76b5d8d163c41. Stop the container before attempting removal or use -f
Error response from daemon: You cannot remove a running container 1ee3978fdbda0756c77708273ea59f732da42e46d48413581f5a9e347eb96977. Stop the container before attempting removal or use -f
Error response from daemon: You cannot remove a running container b8224c8a6c08dc0a7a5cf8bbaadd117af5090cc0ba2c0e3cbb377519b4b1741c. Stop the container before attempting removal or use -f
Error response from daemon: You cannot remove a running container 2cf1a2c9eeda28221bd2174b8297fb26fb599fd380d6d37f6dda16a31d6db729. Stop the container before attempting removal or use -f
Error response from daemon: You cannot remove a running container b5dfd700823a300d1bac1c6446990661c1df1f981b417154f678d6b39b931847. Stop the container before attempting removal or use -f
Error response from daemon: You cannot remove a running container 06a30ac01cb8fc54bd4a6b15a9908e5ae30ccfefa266a97c7ca867ade1256417. Stop the container before attempting removal or use -f
Error response from daemon: You cannot remove a running container cdfd396b9afb270155a20f29f6e27b4015ec2c7242aa713da768df235473d906. Stop the container before attempting removal or use -f
Error response from daemon: You cannot remove a running container 17c584219c01665563559ddba2a4e3013079a7427961544850e9c01fb69fa335. Stop the container before attempting removal or use -f
Error response from daemon: You cannot remove a running container 47f29e470a80373ce04cfb55f41073033dafa2b002d4cc257a3697a532d1b971. Stop the container before attempting removal or use -f
Error response from daemon: You cannot remove a running container b0bb5976debdb2790a39fb70a1b4957488776c7d5bcbf3721aa7d258690e9c03. Stop the container before attempting removal or use -f
Error response from daemon: You cannot remove a running container 9f6154d1ebac4e13f9ebd2d9e135fd6669a2393f40203e61903a79e6750f2648. Stop the container before attempting removal or use -f
[INFO] [CLEANUP] Killing child processes
[INFO] [CLEANUP] Pruning etcd data directory
[INFO] /data/src/github.com/openshift/origin-aggregated-logging/logging.sh exited with code 0 after 00h 58m 59s

real	58m59.075s
user	5m39.639s
sys	1m5.121s
Finished GIT_URL=https://github.com/openshift/origin-aggregated-logging GIT_BRANCH=master O_A_L_DIR=/data/src/github.com/openshift/origin-aggregated-logging OS_ROOT=/data/src/github.com/openshift/origin ENABLE_OPS_CLUSTER=true USE_LOCAL_SOURCE=true TEST_PERF=false VERBOSE=1 OS_ANSIBLE_REPO=https://github.com/openshift/openshift-ansible OS_ANSIBLE_BRANCH=master ./logging.sh
***************************************************
==> openshiftdev: Downloading logs
==> openshiftdev: Downloading artifacts from '/var/log/yum.log' to '/var/lib/jenkins/jobs/test-origin-aggregated-logging/workspace/origin/artifacts/yum.log'
==> openshiftdev: Downloading artifacts from '/var/log/secure' to '/var/lib/jenkins/jobs/test-origin-aggregated-logging/workspace/origin/artifacts/secure'
==> openshiftdev: Downloading artifacts from '/var/log/audit/audit.log' to '/var/lib/jenkins/jobs/test-origin-aggregated-logging/workspace/origin/artifacts/audit.log'
==> openshiftdev: Downloading artifacts from '/tmp/origin-aggregated-logging/' to '/var/lib/jenkins/jobs/test-origin-aggregated-logging/workspace/origin/artifacts'
+ false
+ false
+ test_pull_requests --mark_test_success --repo origin-aggregated-logging --config /var/lib/jenkins/.test_pull_requests_logging.json
Rate limit remaining: 2349
  Marking SUCCESS for pull request #--repo in repo ''
/usr/share/ruby/net/http/response.rb:119:in `error!': 404 "Not Found" (Net::HTTPServerException)
	from /bin/test_pull_requests:543:in `block in get_comments'
	from /bin/test_pull_requests:535:in `each'
	from /bin/test_pull_requests:535:in `get_comments'
	from /bin/test_pull_requests:1074:in `get_comment_matching_regex'
	from /bin/test_pull_requests:1090:in `get_comment_with_prefix'
	from /bin/test_pull_requests:779:in `mark_test_success'
	from /bin/test_pull_requests:2361:in `<main>'
+ true
[description-setter] Could not determine description.
[PostBuildScript] - Execution post build scripts.
[workspace] $ /bin/sh -xe /tmp/hudson1064897459589279789.sh
+ INSTANCE_NAME=origin_logging-rhel7-1648
+ pushd origin
~/jobs/test-origin-aggregated-logging/workspace/origin ~/jobs/test-origin-aggregated-logging/workspace
+ rc=0
+ '[' -f .vagrant-openshift.json ']'
++ /usr/bin/vagrant ssh -c 'sudo ausearch -m avc'
+ ausearchresult='<no matches>'
+ rc=1
+ '[' '<no matches>' = '<no matches>' ']'
+ rc=0
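The post-build step above gates on SELinux AVC denials: it captures the ausearch output from inside the VM, presets rc to a failure value, and clears it only when the output is the literal '<no matches>' string. A sketch of that gate (the || rc=1 wiring is an assumption consistent with the trace):

ausearchresult=$(/usr/bin/vagrant ssh -c 'sudo ausearch -m avc') || rc=1
if [ "$ausearchresult" = '<no matches>' ]; then
    rc=0
fi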
+ /usr/bin/vagrant destroy -f
==> openshiftdev: Terminating the instance...
==> openshiftdev: Running cleanup tasks for 'shell' provisioner...
+ popd
~/jobs/test-origin-aggregated-logging/workspace
+ exit 0
Finished: SUCCESS