Started by remote host 50.17.198.52
[EnvInject] - Loading node environment variables.
Building in workspace /var/lib/jenkins/jobs/test-origin-aggregated-logging/workspace@2
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
OS_ROOT=/data/src/github.com/openshift/origin
INSTANCE_TYPE=c4.xlarge
GITHUB_REPO=openshift
OS=rhel7
TESTNAME=logging
[EnvInject] - Variables injected successfully.
[workspace@2] $ /bin/sh -xe /tmp/hudson1493504654616898295.sh
+ false
+ unset GOPATH
+ REPO_NAME=origin-aggregated-logging
+ rm -rf origin-aggregated-logging
+ vagrant origin-local-checkout --replace --repo origin-aggregated-logging -b master
You don't seem to have the GOPATH environment variable set on your system.
See: 'go help gopath' for more details about GOPATH.
Waiting for the cloning process to finish
Cloning origin-aggregated-logging ...
Submodule 'deployer/common' (https://github.com/openshift/origin-integration-common) registered for path 'deployer/common'
Submodule 'kibana-proxy' (https://github.com/fabric8io/openshift-auth-proxy.git) registered for path 'kibana-proxy'
Cloning into 'deployer/common'...
Submodule path 'deployer/common': checked out '45bf993212cdcbab5cbce3b3fab74a72b851402e'
Cloning into 'kibana-proxy'...
Submodule path 'kibana-proxy': checked out '6c40d1a5e8f79fba353f4df3950010f7f6b773eb'
Origin repositories cloned into /var/lib/jenkins/jobs/test-origin-aggregated-logging/workspace@2
+ pushd origin-aggregated-logging
~/jobs/test-origin-aggregated-logging/workspace@2/origin-aggregated-logging ~/jobs/test-origin-aggregated-logging/workspace@2
+ git checkout master
Already on 'master'
+ popd
~/jobs/test-origin-aggregated-logging/workspace@2
+ '[' -n 462 ']'
+ set +x
*****Locally Merging Pull Request: https://github.com/openshift/origin-aggregated-logging/pull/462
+ test_pull_requests --local_merge_pull_request 462 --repo origin-aggregated-logging --config /var/lib/jenkins/.test_pull_requests_logging.json
Rate limit remaining: 1791
Checking if current base repo commit ID matches what we expect
Local merging pull request #462 for repo 'origin-aggregated-logging' against base repo commit id 0986045cc4f5bb216c462883f4f9ae77025f3e62
+ pushd origin-aggregated-logging
+ git checkout master
Already on 'master'
+ git checkout -b tpr_use-dc-name-for-es_portante
Switched to a new branch 'tpr_use-dc-name-for-es_portante'
+ git pull git@github.com:portante/origin-aggregated-logging.git use-dc-name-for-es
From github.com:portante/origin-aggregated-logging
 * branch use-dc-name-for-es -> FETCH_HEAD
+ git pull git@github.com:portante/origin-aggregated-logging.git use-dc-name-for-es --tags
From github.com:portante/origin-aggregated-logging
 * branch use-dc-name-for-es -> FETCH_HEAD
+ git checkout master
Switched to branch 'master'
+ git merge tpr_use-dc-name-for-es_portante
+ git submodule update --recursive
+ popd
Rate limit remaining: 1787; delta: 4
~/jobs/test-origin-aggregated-logging/workspace@2/origin-aggregated-logging ~/jobs/test-origin-aggregated-logging/workspace@2
Updating 0986045..f1648e6
Fast-forward
 elasticsearch/run.sh | 52 +++++++++++++++++++++++++---------------------------
 1 file changed, 25 insertions(+), 27 deletions(-)
Already up-to-date.
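For reference, the local merge that test_pull_requests performs above reduces to the following git sequence. This is a minimal sketch reconstructed from the trace; the fork remote and branch names are the ones used in this run, not general defaults.

cd origin-aggregated-logging
git checkout master
git checkout -b tpr_use-dc-name-for-es_portante        # temporary topic branch for the PR head
git pull git@github.com:portante/origin-aggregated-logging.git use-dc-name-for-es
git pull git@github.com:portante/origin-aggregated-logging.git use-dc-name-for-es --tags
git checkout master
git merge tpr_use-dc-name-for-es_portante              # in this run the merge fast-forwards master to the PR head
git submodule update --recursive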
Updating 0986045..f1648e6
Fast-forward
 elasticsearch/run.sh | 52 +++++++++++++++++++++++++---------------------------
 1 file changed, 25 insertions(+), 27 deletions(-)
~/jobs/test-origin-aggregated-logging/workspace@2
+ vagrant origin-local-checkout --replace
You don't seem to have the GOPATH environment variable set on your system.
See: 'go help gopath' for more details about GOPATH.
Waiting for the cloning process to finish
Checking repo integrity for /var/lib/jenkins/jobs/test-origin-aggregated-logging/workspace@2/origin
~/jobs/test-origin-aggregated-logging/workspace@2/origin ~/jobs/test-origin-aggregated-logging/workspace@2
# On branch master
# Untracked files:
#   (use "git add <file>..." to include in what will be committed)
#
#	artifacts/
nothing added to commit but untracked files present (use "git add" to track)
~/jobs/test-origin-aggregated-logging/workspace@2
Replacing: /var/lib/jenkins/jobs/test-origin-aggregated-logging/workspace@2/origin
~/jobs/test-origin-aggregated-logging/workspace@2/origin ~/jobs/test-origin-aggregated-logging/workspace@2
Already on 'master'
HEAD is now at 5f2f3f4 Merge pull request #14441 from sdodson/bz1455472
Removing .vagrant-openshift.json
Removing .vagrant/
Removing artifacts/
fatal: branch name required
~/jobs/test-origin-aggregated-logging/workspace@2
Origin repositories cloned into /var/lib/jenkins/jobs/test-origin-aggregated-logging/workspace@2
+ pushd origin
~/jobs/test-origin-aggregated-logging/workspace@2/origin ~/jobs/test-origin-aggregated-logging/workspace@2
+ INSTANCE_NAME=origin_logging-rhel7-1550
+ GIT_URL=https://github.com/openshift/origin-aggregated-logging
++ echo https://github.com/openshift/origin-aggregated-logging
++ sed s,https://,,
+ OAL_LOCAL_PATH=github.com/openshift/origin-aggregated-logging
+ OS_O_A_L_DIR=/data/src/github.com/openshift/origin-aggregated-logging
+ env
+ sort
AGGREGATED_LOGGING_PULL_ID=462
_=/bin/env
BRANCH=master
BUILD_CAUSE=REMOTECAUSE
BUILD_CAUSE_REMOTECAUSE=true
BUILD_DISPLAY_NAME=#1550
BUILD_ID=1550
BUILD_NUMBER=1550
BUILD_TAG=jenkins-test-origin-aggregated-logging-1550
BUILD_URL=https://ci.openshift.redhat.com/jenkins/job/test-origin-aggregated-logging/1550/
EXECUTOR_NUMBER=41
GITHUB_REPO=openshift
HOME=/var/lib/jenkins
HUDSON_COOKIE=c9a2bc2a-3464-411b-b996-6251cc93c1a3
HUDSON_HOME=/var/lib/jenkins
HUDSON_SERVER_COOKIE=ec11f8b2841c966f
HUDSON_URL=https://ci.openshift.redhat.com/jenkins/
INSTANCE_TYPE=c4.xlarge
JENKINS_HOME=/var/lib/jenkins
JENKINS_SERVER_COOKIE=ec11f8b2841c966f
JENKINS_URL=https://ci.openshift.redhat.com/jenkins/
JOB_BASE_NAME=test-origin-aggregated-logging
JOB_DISPLAY_URL=https://ci.openshift.redhat.com/jenkins/job/test-origin-aggregated-logging/display/redirect
JOB_NAME=test-origin-aggregated-logging
JOB_URL=https://ci.openshift.redhat.com/jenkins/job/test-origin-aggregated-logging/
LANG=en_US.UTF-8
LOGNAME=jenkins
MERGE=false
MERGE_SEVERITY=none
NLSPATH=/usr/dt/lib/nls/msg/%L/%N.cat
NODE_LABELS=master
NODE_NAME=master
OLDPWD=/var/lib/jenkins/jobs/test-origin-aggregated-logging/workspace@2
OS_ANSIBLE_BRANCH=master
OS_ANSIBLE_REPO=https://github.com/openshift/openshift-ansible
OS=rhel7
OS_ROOT=/data/src/github.com/openshift/origin
PATH=/sbin:/usr/sbin:/bin:/usr/bin
PWD=/var/lib/jenkins/jobs/test-origin-aggregated-logging/workspace@2/origin
ROOT_BUILD_CAUSE=REMOTECAUSE
ROOT_BUILD_CAUSE_REMOTECAUSE=true
RUN_CHANGES_DISPLAY_URL=https://ci.openshift.redhat.com/jenkins/job/test-origin-aggregated-logging/1550/display/redirect?page=changes
RUN_DISPLAY_URL=https://ci.openshift.redhat.com/jenkins/job/test-origin-aggregated-logging/1550/display/redirect SHELL=/bin/bash SHLVL=3 TESTNAME=logging TEST_PERF=false USER=jenkins WORKSPACE=/var/lib/jenkins/jobs/test-origin-aggregated-logging/workspace@2 XFILESEARCHPATH=/usr/dt/app-defaults/%L/Dt + vagrant origin-init --stage inst --os rhel7 --instance-type c4.xlarge origin_logging-rhel7-1550 Reading AWS credentials from /var/lib/jenkins/.awscred Searching devenv-rhel7_* for latest base AMI (required_name_tag=) Found: ami-40e7a056 (devenv-rhel7_6247) ++ seq 0 2 + for i in '$(seq 0 2)' + vagrant up --provider aws Bringing machine 'openshiftdev' up with 'aws' provider... ==> openshiftdev: Warning! The AWS provider doesn't support any of the Vagrant ==> openshiftdev: high-level network configurations (`config.vm.network`). They ==> openshiftdev: will be silently ignored. ==> openshiftdev: Warning! You're launching this instance into a VPC without an ==> openshiftdev: elastic IP. Please verify you're properly connected to a VPN so ==> openshiftdev: you can access this machine, otherwise Vagrant will not be able ==> openshiftdev: to SSH into it. ==> openshiftdev: Launching an instance with the following settings... ==> openshiftdev: -- Type: c4.xlarge ==> openshiftdev: -- AMI: ami-40e7a056 ==> openshiftdev: -- Region: us-east-1 ==> openshiftdev: -- Keypair: libra ==> openshiftdev: -- Subnet ID: subnet-cf57c596 ==> openshiftdev: -- User Data: yes ==> openshiftdev: -- User Data: ==> openshiftdev: # cloud-config ==> openshiftdev: ==> openshiftdev: growpart: ==> openshiftdev: mode: auto ==> openshiftdev: devices: ['/'] ==> openshiftdev: runcmd: ==> openshiftdev: - [ sh, -xc, "sed -i s/^Defaults.*requiretty/#Defaults requiretty/g /etc/sudoers"] ==> openshiftdev: ==> openshiftdev: -- Block Device Mapping: [{"DeviceName"=>"/dev/sda1", "Ebs.VolumeSize"=>25, "Ebs.VolumeType"=>"gp2"}, {"DeviceName"=>"/dev/sdb", "Ebs.VolumeSize"=>35, "Ebs.VolumeType"=>"gp2"}] ==> openshiftdev: -- Terminate On Shutdown: false ==> openshiftdev: -- Monitoring: false ==> openshiftdev: -- EBS optimized: false ==> openshiftdev: -- Assigning a public IP address in a VPC: false ==> openshiftdev: Waiting for instance to become "ready"... ==> openshiftdev: Waiting for SSH to become available... ==> openshiftdev: Machine is booted and ready for use! ==> openshiftdev: Running provisioner: setup (shell)... openshiftdev: Running: /tmp/vagrant-shell20170602-17735-xwfazo.sh ==> openshiftdev: Host: ec2-54-173-112-164.compute-1.amazonaws.com + break + vagrant sync-origin-aggregated-logging -c -s Running ssh/sudo command 'rm -rf /data/src/github.com/openshift/origin-aggregated-logging-bare; ' with timeout 14400. Attempt #0 Running ssh/sudo command 'mkdir -p /ec2-user/.ssh; mv /tmp/file20170602-20560-1ka59u1 /ec2-user/.ssh/config && chown ec2-user:ec2-user /ec2-user/.ssh/config && chmod 0600 /ec2-user/.ssh/config' with timeout 14400. Attempt #0 Running ssh/sudo command 'mkdir -p /data/src/github.com/openshift/' with timeout 14400. Attempt #0 Running ssh/sudo command 'mkdir -p /data/src/github.com/openshift/builder && chown -R ec2-user:ec2-user /data/src/github.com/openshift/' with timeout 14400. Attempt #0 Running ssh/sudo command 'set -e rm -fr /data/src/github.com/openshift/origin-aggregated-logging-bare; if [ ! 
-d /data/src/github.com/openshift/origin-aggregated-logging-bare ]; then git clone --quiet --bare https://github.com/openshift/origin-aggregated-logging.git /data/src/github.com/openshift/origin-aggregated-logging-bare >/dev/null fi ' with timeout 14400. Attempt #0 Synchronizing local sources Synchronizing [origin-aggregated-logging@master] from origin-aggregated-logging... Warning: Permanently added '54.173.112.164' (ECDSA) to the list of known hosts. Running ssh/sudo command 'set -e if [ -d /data/src/github.com/openshift/origin-aggregated-logging-bare ]; then rm -rf /data/src/github.com/openshift/origin-aggregated-logging echo 'Cloning origin-aggregated-logging ...' git clone --quiet --recurse-submodules /data/src/github.com/openshift/origin-aggregated-logging-bare /data/src/github.com/openshift/origin-aggregated-logging else MISSING_REPO+='origin-aggregated-logging-bare' fi if [ -n "$MISSING_REPO" ]; then echo 'Missing required upstream repositories:' echo $MISSING_REPO echo 'To fix, execute command: vagrant clone-upstream-repos' fi ' with timeout 14400. Attempt #0 Cloning origin-aggregated-logging ... Submodule 'deployer/common' (https://github.com/openshift/origin-integration-common) registered for path 'deployer/common' Submodule 'kibana-proxy' (https://github.com/fabric8io/openshift-auth-proxy.git) registered for path 'kibana-proxy' Cloning into 'deployer/common'... Submodule path 'deployer/common': checked out '45bf993212cdcbab5cbce3b3fab74a72b851402e' Cloning into 'kibana-proxy'... Submodule path 'kibana-proxy': checked out '6c40d1a5e8f79fba353f4df3950010f7f6b773eb' + vagrant ssh -c 'if [ ! -d /tmp/openshift ] ; then mkdir /tmp/openshift ; fi ; sudo chmod 777 /tmp/openshift' + for image in openshift/base-centos7 centos:centos7 openshift/origin-logging-elasticsearch openshift/origin-logging-fluentd openshift/origin-logging-curator openshift/origin-logging-kibana + echo pulling image openshift/base-centos7 ... pulling image openshift/base-centos7 ... + vagrant ssh -c 'docker pull openshift/base-centos7' -- -n Using default tag: latest Trying to pull repository docker.io/openshift/base-centos7 ... latest: Pulling from docker.io/openshift/base-centos7 45a2e645736c: Pulling fs layer 734fb161cf89: Pulling fs layer 78efc9e155c4: Pulling fs layer 8a3400b7e31a: Pulling fs layer 8a3400b7e31a: Waiting 734fb161cf89: Verifying Checksum 734fb161cf89: Download complete 8a3400b7e31a: Verifying Checksum 8a3400b7e31a: Download complete 45a2e645736c: Verifying Checksum 45a2e645736c: Download complete 78efc9e155c4: Verifying Checksum 78efc9e155c4: Download complete 45a2e645736c: Pull complete 734fb161cf89: Pull complete 78efc9e155c4: Pull complete 8a3400b7e31a: Pull complete Digest: sha256:aea292a3bddba020cde0ee83e6a45807931eb607c164ec6a3674f67039d8cd7c + echo done with openshift/base-centos7 done with openshift/base-centos7 + for image in openshift/base-centos7 centos:centos7 openshift/origin-logging-elasticsearch openshift/origin-logging-fluentd openshift/origin-logging-curator openshift/origin-logging-kibana + echo pulling image centos:centos7 ... pulling image centos:centos7 ... + vagrant ssh -c 'docker pull centos:centos7' -- -n Trying to pull repository docker.io/library/centos ... 
centos7: Pulling from docker.io/library/centos Digest: sha256:bba1de7c9d900a898e3cadbae040dfe8a633c06bc104a0df76ae24483e03c077 + echo done with centos:centos7 done with centos:centos7 + for image in openshift/base-centos7 centos:centos7 openshift/origin-logging-elasticsearch openshift/origin-logging-fluentd openshift/origin-logging-curator openshift/origin-logging-kibana + echo pulling image openshift/origin-logging-elasticsearch ... pulling image openshift/origin-logging-elasticsearch ... + vagrant ssh -c 'docker pull openshift/origin-logging-elasticsearch' -- -n Using default tag: latest Trying to pull repository docker.io/openshift/origin-logging-elasticsearch ... latest: Pulling from docker.io/openshift/origin-logging-elasticsearch 343b09361036: Already exists 602cb92853de: Pulling fs layer e1f44cfeaf8a: Pulling fs layer 0a3cac8893f5: Pulling fs layer 231e705a374a: Pulling fs layer 93da2c997eba: Pulling fs layer 0255b9686718: Pulling fs layer 81b6e38fb6c6: Pulling fs layer 517dc11d313d: Pulling fs layer 44b14bcf52fb: Pulling fs layer 29af879969c7: Pulling fs layer 29af879969c7: Waiting 0255b9686718: Waiting 517dc11d313d: Waiting 81b6e38fb6c6: Waiting 44b14bcf52fb: Waiting 231e705a374a: Waiting 93da2c997eba: Waiting 0a3cac8893f5: Verifying Checksum 0a3cac8893f5: Download complete 602cb92853de: Verifying Checksum 602cb92853de: Download complete 93da2c997eba: Verifying Checksum 93da2c997eba: Download complete 0255b9686718: Verifying Checksum 231e705a374a: Verifying Checksum 81b6e38fb6c6: Verifying Checksum 81b6e38fb6c6: Download complete 44b14bcf52fb: Verifying Checksum 44b14bcf52fb: Download complete 29af879969c7: Verifying Checksum 29af879969c7: Download complete 517dc11d313d: Verifying Checksum 517dc11d313d: Download complete e1f44cfeaf8a: Verifying Checksum e1f44cfeaf8a: Download complete 602cb92853de: Pull complete e1f44cfeaf8a: Pull complete 0a3cac8893f5: Pull complete 231e705a374a: Pull complete 93da2c997eba: Pull complete 0255b9686718: Pull complete 81b6e38fb6c6: Pull complete 517dc11d313d: Pull complete 44b14bcf52fb: Pull complete 29af879969c7: Pull complete Digest: sha256:b019d3224117d0da040262d89dda70b900b03f376ada0ffdfc3b2f5d72ca6209 + echo done with openshift/origin-logging-elasticsearch done with openshift/origin-logging-elasticsearch + for image in openshift/base-centos7 centos:centos7 openshift/origin-logging-elasticsearch openshift/origin-logging-fluentd openshift/origin-logging-curator openshift/origin-logging-kibana + echo pulling image openshift/origin-logging-fluentd ... pulling image openshift/origin-logging-fluentd ... + vagrant ssh -c 'docker pull openshift/origin-logging-fluentd' -- -n Using default tag: latest Trying to pull repository docker.io/openshift/origin-logging-fluentd ... 
latest: Pulling from docker.io/openshift/origin-logging-fluentd 343b09361036: Already exists 0619052dfb71: Pulling fs layer 40fab143b377: Pulling fs layer 7c7725ebc8b8: Pulling fs layer d6295a66fc55: Pulling fs layer ff90c6ea676d: Pulling fs layer d6295a66fc55: Waiting ff90c6ea676d: Waiting 7c7725ebc8b8: Download complete d6295a66fc55: Verifying Checksum d6295a66fc55: Download complete ff90c6ea676d: Download complete 40fab143b377: Verifying Checksum 40fab143b377: Download complete 0619052dfb71: Verifying Checksum 0619052dfb71: Download complete 0619052dfb71: Pull complete 40fab143b377: Pull complete 7c7725ebc8b8: Pull complete d6295a66fc55: Pull complete ff90c6ea676d: Pull complete Digest: sha256:82ab5554786b6880995b5b0d1fecfb49d52f26afcc1478558e82e70396502e11 + echo done with openshift/origin-logging-fluentd done with openshift/origin-logging-fluentd + for image in openshift/base-centos7 centos:centos7 openshift/origin-logging-elasticsearch openshift/origin-logging-fluentd openshift/origin-logging-curator openshift/origin-logging-kibana + echo pulling image openshift/origin-logging-curator ... pulling image openshift/origin-logging-curator ... + vagrant ssh -c 'docker pull openshift/origin-logging-curator' -- -n Using default tag: latest Trying to pull repository docker.io/openshift/origin-logging-curator ... latest: Pulling from docker.io/openshift/origin-logging-curator 343b09361036: Already exists 4cb24f5de95d: Pulling fs layer 99fd6cdac796: Pulling fs layer 4cb24f5de95d: Verifying Checksum 4cb24f5de95d: Download complete 99fd6cdac796: Verifying Checksum 99fd6cdac796: Download complete 4cb24f5de95d: Pull complete 99fd6cdac796: Pull complete Digest: sha256:b7a90ccb4806591205705e4e71ad6518a8d21979b5cf2f516fc82c789dd24bbf + echo done with openshift/origin-logging-curator done with openshift/origin-logging-curator + for image in openshift/base-centos7 centos:centos7 openshift/origin-logging-elasticsearch openshift/origin-logging-fluentd openshift/origin-logging-curator openshift/origin-logging-kibana + echo pulling image openshift/origin-logging-kibana ... pulling image openshift/origin-logging-kibana ... + vagrant ssh -c 'docker pull openshift/origin-logging-kibana' -- -n Using default tag: latest Trying to pull repository docker.io/openshift/origin-logging-kibana ... 
latest: Pulling from docker.io/openshift/origin-logging-kibana 45a2e645736c: Already exists 734fb161cf89: Already exists 78efc9e155c4: Already exists 8a3400b7e31a: Already exists 7678d0bebdbe: Pulling fs layer 71b19f556799: Pulling fs layer 35f99f8b06e1: Pulling fs layer eac55ce17ab4: Pulling fs layer 9f8164f95c18: Pulling fs layer 05bcf6fe7c0d: Pulling fs layer eac55ce17ab4: Waiting 9f8164f95c18: Waiting 05bcf6fe7c0d: Waiting 35f99f8b06e1: Verifying Checksum 35f99f8b06e1: Download complete 7678d0bebdbe: Verifying Checksum 7678d0bebdbe: Download complete eac55ce17ab4: Verifying Checksum eac55ce17ab4: Download complete 9f8164f95c18: Verifying Checksum 9f8164f95c18: Download complete 7678d0bebdbe: Pull complete 71b19f556799: Verifying Checksum 71b19f556799: Download complete 05bcf6fe7c0d: Download complete 71b19f556799: Pull complete 35f99f8b06e1: Pull complete eac55ce17ab4: Pull complete 9f8164f95c18: Pull complete 05bcf6fe7c0d: Pull complete Digest: sha256:ce0197985a74ba53c5f931ef4e086f4233f7da967732fe6cd09ac87eb8ef3b57 + echo done with openshift/origin-logging-kibana done with openshift/origin-logging-kibana + vagrant test-origin-aggregated-logging -d --env GIT_URL=https://github.com/openshift/origin-aggregated-logging --env GIT_BRANCH=master --env O_A_L_DIR=/data/src/github.com/openshift/origin-aggregated-logging --env OS_ROOT=/data/src/github.com/openshift/origin --env ENABLE_OPS_CLUSTER=true --env USE_LOCAL_SOURCE=true --env TEST_PERF=false --env VERBOSE=1 --env OS_ANSIBLE_REPO=https://github.com/openshift/openshift-ansible --env OS_ANSIBLE_BRANCH=master *************************************************** Running GIT_URL=https://github.com/openshift/origin-aggregated-logging GIT_BRANCH=master O_A_L_DIR=/data/src/github.com/openshift/origin-aggregated-logging OS_ROOT=/data/src/github.com/openshift/origin ENABLE_OPS_CLUSTER=true USE_LOCAL_SOURCE=true TEST_PERF=false VERBOSE=1 OS_ANSIBLE_REPO=https://github.com/openshift/openshift-ansible OS_ANSIBLE_BRANCH=master ./logging.sh... 
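Everything from the vagrant sync up to this point is plain image pre-warming: the driver loops over the logging images and pulls each one inside the test VM so the later builds and deployments do not block on docker.io. A minimal sketch of that loop, with the image list taken from this run:

for image in openshift/base-centos7 centos:centos7 \
             openshift/origin-logging-elasticsearch openshift/origin-logging-fluentd \
             openshift/origin-logging-curator openshift/origin-logging-kibana; do
  echo "pulling image ${image} ..."
  vagrant ssh -c "docker pull ${image}" -- -n     # run the pull inside the VM over non-interactive ssh
  echo "done with ${image}"
done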
/data/src/github.com/openshift/origin /data/src/github.com/openshift/origin-aggregated-logging/hack/testing /data/src/github.com/openshift/origin-aggregated-logging/hack/testing /data/src/github.com/openshift/origin-aggregated-logging /data/src/github.com/openshift/origin-aggregated-logging/hack/testing /data/src/github.com/openshift/origin-aggregated-logging/hack/testing Loaded plugins: amazon-id, rhui-lb, search-disabled-repos Metadata Cache Created Loaded plugins: amazon-id, rhui-lb, search-disabled-repos Resolving Dependencies --> Running transaction check ---> Package ansible.noarch 0:2.3.0.0-3.el7 will be installed --> Processing Dependency: sshpass for package: ansible-2.3.0.0-3.el7.noarch --> Processing Dependency: python-paramiko for package: ansible-2.3.0.0-3.el7.noarch --> Processing Dependency: python-keyczar for package: ansible-2.3.0.0-3.el7.noarch --> Processing Dependency: python-httplib2 for package: ansible-2.3.0.0-3.el7.noarch --> Processing Dependency: python-crypto for package: ansible-2.3.0.0-3.el7.noarch ---> Package python2-pip.noarch 0:8.1.2-5.el7 will be installed ---> Package python2-ruamel-yaml.x86_64 0:0.12.14-9.el7 will be installed --> Processing Dependency: python2-typing for package: python2-ruamel-yaml-0.12.14-9.el7.x86_64 --> Processing Dependency: python2-ruamel-ordereddict for package: python2-ruamel-yaml-0.12.14-9.el7.x86_64 --> Running transaction check ---> Package python-httplib2.noarch 0:0.9.1-2.el7aos will be installed ---> Package python-keyczar.noarch 0:0.71c-2.el7aos will be installed --> Processing Dependency: python-pyasn1 for package: python-keyczar-0.71c-2.el7aos.noarch ---> Package python-paramiko.noarch 0:2.1.1-1.el7 will be installed --> Processing Dependency: python-cryptography for package: python-paramiko-2.1.1-1.el7.noarch ---> Package python2-crypto.x86_64 0:2.6.1-13.el7 will be installed --> Processing Dependency: libtomcrypt.so.0()(64bit) for package: python2-crypto-2.6.1-13.el7.x86_64 ---> Package python2-ruamel-ordereddict.x86_64 0:0.4.9-3.el7 will be installed ---> Package python2-typing.noarch 0:3.5.2.2-3.el7 will be installed ---> Package sshpass.x86_64 0:1.06-1.el7 will be installed --> Running transaction check ---> Package libtomcrypt.x86_64 0:1.17-23.el7 will be installed --> Processing Dependency: libtommath >= 0.42.0 for package: libtomcrypt-1.17-23.el7.x86_64 --> Processing Dependency: libtommath.so.0()(64bit) for package: libtomcrypt-1.17-23.el7.x86_64 ---> Package python2-cryptography.x86_64 0:1.3.1-3.el7 will be installed --> Processing Dependency: python-idna >= 2.0 for package: python2-cryptography-1.3.1-3.el7.x86_64 --> Processing Dependency: python-cffi >= 1.4.1 for package: python2-cryptography-1.3.1-3.el7.x86_64 --> Processing Dependency: python-ipaddress for package: python2-cryptography-1.3.1-3.el7.x86_64 --> Processing Dependency: python-enum34 for package: python2-cryptography-1.3.1-3.el7.x86_64 ---> Package python2-pyasn1.noarch 0:0.1.9-7.el7 will be installed --> Running transaction check ---> Package libtommath.x86_64 0:0.42.0-4.el7 will be installed ---> Package python-cffi.x86_64 0:1.6.0-5.el7 will be installed --> Processing Dependency: python-pycparser for package: python-cffi-1.6.0-5.el7.x86_64 ---> Package python-enum34.noarch 0:1.0.4-1.el7 will be installed ---> Package python-idna.noarch 0:2.0-1.el7 will be installed ---> Package python-ipaddress.noarch 0:1.0.16-2.el7 will be installed --> Running transaction check ---> Package python-pycparser.noarch 0:2.14-1.el7 will be installed --> 
Processing Dependency: python-ply for package: python-pycparser-2.14-1.el7.noarch
--> Running transaction check
---> Package python-ply.noarch 0:3.4-10.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package                     Arch     Version          Repository          Size
================================================================================
Installing:
 ansible                     noarch   2.3.0.0-3.el7    epel               5.7 M
 python2-pip                 noarch   8.1.2-5.el7      epel               1.7 M
 python2-ruamel-yaml         x86_64   0.12.14-9.el7    li                 245 k
Installing for dependencies:
 libtomcrypt                 x86_64   1.17-23.el7      epel               224 k
 libtommath                  x86_64   0.42.0-4.el7     epel                35 k
 python-cffi                 x86_64   1.6.0-5.el7      oso-rhui-rhel-server-releases   218 k
 python-enum34               noarch   1.0.4-1.el7      oso-rhui-rhel-server-releases    52 k
 python-httplib2             noarch   0.9.1-2.el7aos   li                 115 k
 python-idna                 noarch   2.0-1.el7        oso-rhui-rhel-server-releases    92 k
 python-ipaddress            noarch   1.0.16-2.el7     oso-rhui-rhel-server-releases    34 k
 python-keyczar              noarch   0.71c-2.el7aos   rhel-7-server-ose-3.1-rpms      217 k
 python-paramiko             noarch   2.1.1-1.el7      rhel-7-server-ose-3.4-rpms      266 k
 python-ply                  noarch   3.4-10.el7       oso-rhui-rhel-server-releases   123 k
 python-pycparser            noarch   2.14-1.el7       oso-rhui-rhel-server-releases   105 k
 python2-crypto              x86_64   2.6.1-13.el7     epel               476 k
 python2-cryptography        x86_64   1.3.1-3.el7      oso-rhui-rhel-server-releases   471 k
 python2-pyasn1              noarch   0.1.9-7.el7      oso-rhui-rhel-server-releases   100 k
 python2-ruamel-ordereddict  x86_64   0.4.9-3.el7      li                  38 k
 python2-typing              noarch   3.5.2.2-3.el7    epel                39 k
 sshpass                     x86_64   1.06-1.el7       epel                21 k

Transaction Summary
================================================================================
Install  3 Packages (+17 Dependent packages)

Total download size: 10 M
Installed size: 47 M
Downloading packages:
--------------------------------------------------------------------------------
Total                                              5.6 MB/s |  10 MB  00:01
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : python2-pyasn1-0.1.9-7.el7.noarch              1/20
  Installing : sshpass-1.06-1.el7.x86_64                      2/20
  Installing : libtommath-0.42.0-4.el7.x86_64                 3/20
  Installing : libtomcrypt-1.17-23.el7.x86_64                 4/20
  Installing : python2-crypto-2.6.1-13.el7.x86_64             5/20
  Installing : python-keyczar-0.71c-2.el7aos.noarch           6/20
  Installing : python-enum34-1.0.4-1.el7.noarch               7/20
  Installing : python-ply-3.4-10.el7.noarch                   8/20
  Installing : python-pycparser-2.14-1.el7.noarch             9/20
  Installing : python-cffi-1.6.0-5.el7.x86_64                10/20
  Installing : python-httplib2-0.9.1-2.el7aos.noarch         11/20
  Installing : python-idna-2.0-1.el7.noarch                  12/20
  Installing : python2-ruamel-ordereddict-0.4.9-3.el7.x86_64 13/20
  Installing : python2-typing-3.5.2.2-3.el7.noarch           14/20
  Installing : python-ipaddress-1.0.16-2.el7.noarch          15/20
  Installing : python2-cryptography-1.3.1-3.el7.x86_64       16/20
  Installing : python-paramiko-2.1.1-1.el7.noarch            17/20
  Installing : ansible-2.3.0.0-3.el7.noarch                  18/20
  Installing : python2-ruamel-yaml-0.12.14-9.el7.x86_64      19/20
  Installing : python2-pip-8.1.2-5.el7.noarch                20/20
  Verifying  : python-pycparser-2.14-1.el7.noarch             1/20
  Verifying  : python-ipaddress-1.0.16-2.el7.noarch           2/20
  Verifying  : ansible-2.3.0.0-3.el7.noarch                   3/20
  Verifying  : python2-typing-3.5.2.2-3.el7.noarch            4/20
  Verifying  : python2-pip-8.1.2-5.el7.noarch                 5/20
  Verifying  : python2-pyasn1-0.1.9-7.el7.noarch              6/20
  Verifying  : libtomcrypt-1.17-23.el7.x86_64                 7/20
  Verifying  : python-cffi-1.6.0-5.el7.x86_64                 8/20
  Verifying  : python2-ruamel-yaml-0.12.14-9.el7.x86_64       9/20
  Verifying  : python2-ruamel-ordereddict-0.4.9-3.el7.x86_64 10/20
  Verifying  : python-idna-2.0-1.el7.noarch                  11/20
  Verifying  : python-httplib2-0.9.1-2.el7aos.noarch         12/20
  Verifying  : python-ply-3.4-10.el7.noarch                  13/20
  Verifying  : python-enum34-1.0.4-1.el7.noarch              14/20
  Verifying  : python-keyczar-0.71c-2.el7aos.noarch          15/20
  Verifying  : libtommath-0.42.0-4.el7.x86_64                16/20
  Verifying  : sshpass-1.06-1.el7.x86_64                     17/20
  Verifying  : python2-cryptography-1.3.1-3.el7.x86_64       18/20
  Verifying  : python-paramiko-2.1.1-1.el7.noarch            19/20
  Verifying  : python2-crypto-2.6.1-13.el7.x86_64            20/20

Installed:
  ansible.noarch 0:2.3.0.0-3.el7
  python2-pip.noarch 0:8.1.2-5.el7
  python2-ruamel-yaml.x86_64 0:0.12.14-9.el7

Dependency Installed:
  libtomcrypt.x86_64 0:1.17-23.el7
  libtommath.x86_64 0:0.42.0-4.el7
  python-cffi.x86_64 0:1.6.0-5.el7
  python-enum34.noarch 0:1.0.4-1.el7
  python-httplib2.noarch 0:0.9.1-2.el7aos
  python-idna.noarch 0:2.0-1.el7
  python-ipaddress.noarch 0:1.0.16-2.el7
  python-keyczar.noarch 0:0.71c-2.el7aos
  python-paramiko.noarch 0:2.1.1-1.el7
  python-ply.noarch 0:3.4-10.el7
  python-pycparser.noarch 0:2.14-1.el7
  python2-crypto.x86_64 0:2.6.1-13.el7
  python2-cryptography.x86_64 0:1.3.1-3.el7
  python2-pyasn1.noarch 0:0.1.9-7.el7
  python2-ruamel-ordereddict.x86_64 0:0.4.9-3.el7
  python2-typing.noarch 0:3.5.2.2-3.el7
  sshpass.x86_64 0:1.06-1.el7

Complete!
Cloning into '/tmp/tmp.mpOgoWqQmj/openhift-ansible'...
Copying oc from path to /usr/local/bin for use by openshift-ansible
Copying oc from path to /usr/bin for use by openshift-ansible
Copying oadm from path to /usr/local/bin for use by openshift-ansible
Copying oadm from path to /usr/bin for use by openshift-ansible
[INFO] Starting logging tests at Fri Jun 2 16:43:55 EDT 2017
Generated new key pair as /tmp/openshift/origin-aggregated-logging/openshift.local.config/master/serviceaccounts.public.key and /tmp/openshift/origin-aggregated-logging/openshift.local.config/master/serviceaccounts.private.key
Generating node credentials ...
Created node config for 172.18.8.225 in /tmp/openshift/origin-aggregated-logging/openshift.local.config/node-172.18.8.225
Wrote master config to: /tmp/openshift/origin-aggregated-logging/openshift.local.config/master/master-config.yaml
Running hack/lib/start.sh:352: executing 'oc get --raw /healthz --config='/tmp/openshift/origin-aggregated-logging/openshift.local.config/master/admin.kubeconfig'' expecting any result and text 'ok'; re-trying every 0.25s until completion or 80.000s...
SUCCESS after 9.936s: hack/lib/start.sh:352: executing 'oc get --raw /healthz --config='/tmp/openshift/origin-aggregated-logging/openshift.local.config/master/admin.kubeconfig'' expecting any result and text 'ok'; re-trying every 0.25s until completion or 80.000s
Standard output from the command:
ok
Standard error from the command:
The connection to the server 172.18.8.225:8443 was refused - did you specify the right host or port?
... repeated 16 times
Error from server (Forbidden): User "system:admin" cannot "get" on "/healthz"
... repeated 4 times
Running hack/lib/start.sh:353: executing 'oc get --raw https://172.18.8.225:10250/healthz --config='/tmp/openshift/origin-aggregated-logging/openshift.local.config/master/admin.kubeconfig'' expecting any result and text 'ok'; re-trying every 0.5s until completion or 120.000s...
SUCCESS after 0.246s: hack/lib/start.sh:353: executing 'oc get --raw https://172.18.8.225:10250/healthz --config='/tmp/openshift/origin-aggregated-logging/openshift.local.config/master/admin.kubeconfig'' expecting any result and text 'ok'; re-trying every 0.5s until completion or 120.000s Standard output from the command: ok There was no error output from the command. Running hack/lib/start.sh:354: executing 'oc get --raw /healthz/ready --config='/tmp/openshift/origin-aggregated-logging/openshift.local.config/master/admin.kubeconfig'' expecting any result and text 'ok'; re-trying every 0.25s until completion or 80.000s... SUCCESS after 0.352s: hack/lib/start.sh:354: executing 'oc get --raw /healthz/ready --config='/tmp/openshift/origin-aggregated-logging/openshift.local.config/master/admin.kubeconfig'' expecting any result and text 'ok'; re-trying every 0.25s until completion or 80.000s Standard output from the command: ok There was no error output from the command. Running hack/lib/start.sh:355: executing 'oc get service kubernetes --namespace default --config='/tmp/openshift/origin-aggregated-logging/openshift.local.config/master/admin.kubeconfig'' expecting success; re-trying every 0.25s until completion or 160.000s... SUCCESS after 0.318s: hack/lib/start.sh:355: executing 'oc get service kubernetes --namespace default --config='/tmp/openshift/origin-aggregated-logging/openshift.local.config/master/admin.kubeconfig'' expecting success; re-trying every 0.25s until completion or 160.000s Standard output from the command: NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes 172.30.0.1 <none> 443/TCP,53/UDP,53/TCP 3s There was no error output from the command. Running hack/lib/start.sh:356: executing 'oc get --raw /api/v1/nodes/172.18.8.225 --config='/tmp/openshift/origin-aggregated-logging/openshift.local.config/master/admin.kubeconfig'' expecting success; re-trying every 0.25s until completion or 80.000s... 
SUCCESS after 0.351s: hack/lib/start.sh:356: executing 'oc get --raw /api/v1/nodes/172.18.8.225 --config='/tmp/openshift/origin-aggregated-logging/openshift.local.config/master/admin.kubeconfig'' expecting success; re-trying every 0.25s until completion or 80.000s Standard output from the command: {"kind":"Node","apiVersion":"v1","metadata":{"name":"172.18.8.225","selfLink":"/api/v1/nodes/172.18.8.225","uid":"4336ec61-47d4-11e7-ab86-0e1196655f96","resourceVersion":"381","creationTimestamp":"2017-06-02T20:44:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/hostname":"172.18.8.225"},"annotations":{"volumes.kubernetes.io/controller-managed-attach-detach":"true"}},"spec":{"externalID":"172.18.8.225","providerID":"aws:////i-0433d36afa0b59316"},"status":{"capacity":{"cpu":"4","memory":"7231688Ki","pods":"40"},"allocatable":{"cpu":"4","memory":"7129288Ki","pods":"40"},"conditions":[{"type":"OutOfDisk","status":"False","lastHeartbeatTime":"2017-06-02T20:44:24Z","lastTransitionTime":"2017-06-02T20:44:24Z","reason":"KubeletHasSufficientDisk","message":"kubelet has sufficient disk space available"},{"type":"MemoryPressure","status":"False","lastHeartbeatTime":"2017-06-02T20:44:24Z","lastTransitionTime":"2017-06-02T20:44:24Z","reason":"KubeletHasSufficientMemory","message":"kubelet has sufficient memory available"},{"type":"DiskPressure","status":"False","lastHeartbeatTime":"2017-06-02T20:44:24Z","lastTransitionTime":"2017-06-02T20:44:24Z","reason":"KubeletHasNoDiskPressure","message":"kubelet has no disk pressure"},{"type":"Ready","status":"True","lastHeartbeatTime":"2017-06-02T20:44:24Z","lastTransitionTime":"2017-06-02T20:44:24Z","reason":"KubeletReady","message":"kubelet is posting ready status"}],"addresses":[{"type":"LegacyHostIP","address":"172.18.8.225"},{"type":"InternalIP","address":"172.18.8.225"},{"type":"Hostname","address":"172.18.8.225"}],"daemonEndpoints":{"kubeletEndpoint":{"Port":10250}},"nodeInfo":{"machineID":"f9370ed252a14f73b014c1301a9b6d1b","systemUUID":"EC286797-52DF-2D23-968A-42C055B97B29","bootID":"15cd60ac-cea3-42bb-b148-4ee994ca1032","kernelVersion":"3.10.0-327.22.2.el7.x86_64","osImage":"Red Hat Enterprise Linux Server 7.3 
(Maipo)","containerRuntimeVersion":"docker://1.12.6","kubeletVersion":"v1.6.1+5115d708d7","kubeProxyVersion":"v1.6.1+5115d708d7","operatingSystem":"linux","architecture":"amd64"},"images":[{"names":["openshift/origin-gitserver:b8a4a2c","openshift/origin-gitserver:latest"],"sizeBytes":1078084889},{"names":["openshift/openvswitch:b8a4a2c","openshift/openvswitch:latest"],"sizeBytes":1056771547},{"names":["openshift/node:b8a4a2c","openshift/node:latest"],"sizeBytes":1055090088},{"names":["openshift/origin-keepalived-ipfailover:b8a4a2c","openshift/origin-keepalived-ipfailover:latest"],"sizeBytes":1020242247},{"names":["openshift/origin-haproxy-router:b8a4a2c","openshift/origin-haproxy-router:latest"],"sizeBytes":1014481963},{"names":["openshift/origin-deployer:b8a4a2c","openshift/origin-deployer:latest"],"sizeBytes":993535536},{"names":["openshift/origin:b8a4a2c","openshift/origin:latest"],"sizeBytes":993535536},{"names":["openshift/origin-f5-router:b8a4a2c","openshift/origin-f5-router:latest"],"sizeBytes":993535536},{"names":["openshift/origin-sti-builder:b8a4a2c","openshift/origin-sti-builder:latest"],"sizeBytes":993535536},{"names":["openshift/origin-recycler:b8a4a2c","openshift/origin-recycler:latest"],"sizeBytes":993535536},{"names":["openshift/origin-docker-builder:b8a4a2c","openshift/origin-docker-builder:latest"],"sizeBytes":993535536},{"names":["rhel7.1:latest"],"sizeBytes":986201487},{"names":["docker.io/openshift/origin-release@sha256:e29efb9b91708975ea538d80a66f2da584e1476de81104a5cfefe1f4138a4fd2","docker.io/openshift/origin-release:golang-1.7"],"sizeBytes":825325850},{"names":["openshift/dind-master:latest"],"sizeBytes":709532011},{"names":["openshift/dind-node:latest"],"sizeBytes":709528287},{"names":["docker.io/openshift/origin-logging-kibana@sha256:ce0197985a74ba53c5f931ef4e086f4233f7da967732fe6cd09ac87eb8ef3b57","docker.io/openshift/origin-logging-kibana:latest"],"sizeBytes":682851494},{"names":["openshift/dind:latest"],"sizeBytes":619374911},{"names":["openshift/origin-docker-registry:b8a4a2c","openshift/origin-docker-registry:latest"],"sizeBytes":461161898},{"names":["docker.io/openshift/origin-logging-elasticsearch@sha256:b019d3224117d0da040262d89dda70b900b03f376ada0ffdfc3b2f5d72ca6209","docker.io/openshift/origin-logging-elasticsearch:latest"],"sizeBytes":425428290},{"names":["docker.io/openshift/origin-logging-fluentd@sha256:82ab5554786b6880995b5b0d1fecfb49d52f26afcc1478558e82e70396502e11","docker.io/openshift/origin-logging-fluentd:latest"],"sizeBytes":385027863},{"names":["docker.io/openshift/base-centos7@sha256:aea292a3bddba020cde0ee83e6a45807931eb607c164ec6a3674f67039d8cd7c","docker.io/openshift/base-centos7:latest"],"sizeBytes":383049978},{"names":["rhel7.2:latest"],"sizeBytes":377493440},{"names":["openshift/origin-egress-router:b8a4a2c","openshift/origin-egress-router:latest"],"sizeBytes":364720710},{"names":["openshift/origin-base:latest"],"sizeBytes":363024702},{"names":["docker.io/fedora@sha256:69281ddd7b2600e5f2b17f1e12d7fba25207f459204fb2d15884f8432c479136","docker.io/fedora:25"],"sizeBytes":230864375},{"names":["docker.io/openshift/origin-logging-curator@sha256:b7a90ccb4806591205705e4e71ad6518a8d21979b5cf2f516fc82c789dd24bbf","docker.io/openshift/origin-logging-curator:latest"],"sizeBytes":224972122},{"names":["rhel7.3:latest","rhel7:latest"],"sizeBytes":215403650},{"names":["registry.access.redhat.com/rhel7.2@sha256:98e6ca5d226c26e31a95cd67716afe22833c943e1926a21daf1a030906a02249","registry.access.redhat.com/rhel7.2:latest"],"sizeBytes":201376319},{"names":["
registry.access.redhat.com/rhel7.3@sha256:5cbb9eecfc1cfeb385012ad1962f469bf25c6bcc2999e89c74817030d12286fd","registry.access.redhat.com/rhel7.3:latest"],"sizeBytes":192682716},{"names":["docker.io/centos@sha256:bba1de7c9d900a898e3cadbae040dfe8a633c06bc104a0df76ae24483e03c077","docker.io/centos:centos7"],"sizeBytes":192548999},{"names":["registry.access.redhat.com/rhel7.1@sha256:1bc5a4c43bbb29a5a96a61896ff696933be3502e2f5fdc4cde02d9e101731fdd","registry.access.redhat.com/rhel7.1:latest"],"sizeBytes":158229901},{"names":["openshift/hello-openshift:b8a4a2c","openshift/hello-openshift:latest"],"sizeBytes":5635113},{"names":["openshift/origin-pod:b8a4a2c","openshift/origin-pod:latest"],"sizeBytes":1143145}]}} There was no error output from the command. serviceaccount "registry" created clusterrolebinding "registry-registry-role" created deploymentconfig "docker-registry" created service "docker-registry" created info: password for stats user admin has been set to cNZiErE9Vp --> Creating router router ... serviceaccount "router" created clusterrolebinding "router-router-role" created deploymentconfig "router" created service "router" created --> Success Running /data/src/github.com/openshift/origin/logging.sh:162: executing 'oadm new-project logging --node-selector=''' expecting success... SUCCESS after 0.632s: /data/src/github.com/openshift/origin/logging.sh:162: executing 'oadm new-project logging --node-selector=''' expecting success Standard output from the command: Created project logging There was no error output from the command. Running /data/src/github.com/openshift/origin/logging.sh:163: executing 'oc project logging > /dev/null' expecting success... SUCCESS after 0.376s: /data/src/github.com/openshift/origin/logging.sh:163: executing 'oc project logging > /dev/null' expecting success There was no output from the command. There was no error output from the command. 
apiVersion: v1 items: - apiVersion: v1 kind: ImageStream metadata: labels: build: logging-elasticsearch component: development logging-infra: development provider: openshift name: logging-elasticsearch spec: {} - apiVersion: v1 kind: ImageStream metadata: labels: build: logging-fluentd component: development logging-infra: development provider: openshift name: logging-fluentd spec: {} - apiVersion: v1 kind: ImageStream metadata: labels: build: logging-kibana component: development logging-infra: development provider: openshift name: logging-kibana spec: {} - apiVersion: v1 kind: ImageStream metadata: labels: build: logging-curator component: development logging-infra: development provider: openshift name: logging-curator spec: {} - apiVersion: v1 kind: ImageStream metadata: labels: build: logging-auth-proxy component: development logging-infra: development provider: openshift name: logging-auth-proxy spec: {} - apiVersion: v1 kind: ImageStream metadata: labels: build: logging-deployment component: development logging-infra: development provider: openshift name: origin spec: dockerImageRepository: openshift/origin tags: - from: kind: DockerImage name: openshift/origin:v1.5.0-alpha.2 name: v1.5.0-alpha.2 - apiVersion: v1 kind: BuildConfig metadata: labels: app: logging-elasticsearch component: development logging-infra: development provider: openshift name: logging-elasticsearch spec: output: to: kind: ImageStreamTag name: logging-elasticsearch:latest resources: {} source: contextDir: elasticsearch git: ref: master uri: https://github.com/openshift/origin-aggregated-logging type: Git strategy: dockerStrategy: from: kind: DockerImage name: openshift/base-centos7 type: Docker - apiVersion: v1 kind: BuildConfig metadata: labels: build: logging-fluentd component: development logging-infra: development provider: openshift name: logging-fluentd spec: output: to: kind: ImageStreamTag name: logging-fluentd:latest resources: {} source: contextDir: fluentd git: ref: master uri: https://github.com/openshift/origin-aggregated-logging type: Git strategy: dockerStrategy: from: kind: DockerImage name: openshift/base-centos7 type: Docker - apiVersion: v1 kind: BuildConfig metadata: labels: build: logging-kibana component: development logging-infra: development provider: openshift name: logging-kibana spec: output: to: kind: ImageStreamTag name: logging-kibana:latest resources: {} source: contextDir: kibana git: ref: master uri: https://github.com/openshift/origin-aggregated-logging type: Git strategy: dockerStrategy: from: kind: DockerImage name: openshift/base-centos7 type: Docker - apiVersion: v1 kind: BuildConfig metadata: labels: build: logging-curator component: development logging-infra: development provider: openshift name: logging-curator spec: output: to: kind: ImageStreamTag name: logging-curator:latest resources: {} source: contextDir: curator git: ref: master uri: https://github.com/openshift/origin-aggregated-logging type: Git strategy: dockerStrategy: from: kind: DockerImage name: openshift/base-centos7 type: Docker - apiVersion: v1 kind: BuildConfig metadata: labels: build: logging-auth-proxy component: development logging-infra: development provider: openshift name: logging-auth-proxy spec: output: to: kind: ImageStreamTag name: logging-auth-proxy:latest resources: {} source: contextDir: kibana-proxy git: ref: master uri: https://github.com/openshift/origin-aggregated-logging type: Git strategy: dockerStrategy: from: kind: DockerImage name: library/node:0.10.36 type: Docker kind: List 
metadata: {} Running /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/build-images:31: executing 'oc process -o yaml -f /data/src/github.com/openshift/origin-aggregated-logging/hack/templates/dev-builds-wo-deployer.yaml -v LOGGING_FORK_URL=https://github.com/openshift/origin-aggregated-logging -v LOGGING_FORK_BRANCH=master | build_filter | oc create -f -' expecting success... SUCCESS after 0.377s: /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/build-images:31: executing 'oc process -o yaml -f /data/src/github.com/openshift/origin-aggregated-logging/hack/templates/dev-builds-wo-deployer.yaml -v LOGGING_FORK_URL=https://github.com/openshift/origin-aggregated-logging -v LOGGING_FORK_BRANCH=master | build_filter | oc create -f -' expecting success Standard output from the command: imagestream "logging-elasticsearch" created imagestream "logging-fluentd" created imagestream "logging-kibana" created imagestream "logging-curator" created imagestream "logging-auth-proxy" created imagestream "origin" created buildconfig "logging-elasticsearch" created buildconfig "logging-fluentd" created buildconfig "logging-kibana" created buildconfig "logging-curator" created buildconfig "logging-auth-proxy" created Standard error from the command: Flag --value has been deprecated, Use -p, --param instead. Flag --value has been deprecated, Use -p, --param instead. Running /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/build-images:9: executing 'oc get imagestreamtag origin:latest' expecting success; re-trying every 0.2s until completion or 60.000s... SUCCESS after 0.631s: /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/build-images:9: executing 'oc get imagestreamtag origin:latest' expecting success; re-trying every 0.2s until completion or 60.000s Standard output from the command: NAME DOCKER REF UPDATED IMAGENAME origin:latest openshift/origin@sha256:f9621d6c810aac8d344a45be9d4d2401968d55843f7a2a5854ced6064357f160 Less than a second ago sha256:f9621d6c810aac8d344a45be9d4d2401968d55843f7a2a5854ced6064357f160 Standard error from the command: Error from server (NotFound): imagestreamtags.image.openshift.io "origin:latest" not found Uploading directory "/data/src/github.com/openshift/origin-aggregated-logging" as binary input for the build ... build "logging-auth-proxy-1" started Uploading directory "/data/src/github.com/openshift/origin-aggregated-logging" as binary input for the build ... build "logging-curator-1" started Uploading directory "/data/src/github.com/openshift/origin-aggregated-logging" as binary input for the build ... build "logging-elasticsearch-1" started Uploading directory "/data/src/github.com/openshift/origin-aggregated-logging" as binary input for the build ... build "logging-fluentd-1" started Uploading directory "/data/src/github.com/openshift/origin-aggregated-logging" as binary input for the build ... build "logging-kibana-1" started Running /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/build-images:33: executing 'wait_for_builds_complete' expecting success... SUCCESS after 60.668s: /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/build-images:33: executing 'wait_for_builds_complete' expecting success Standard output from the command: Builds are complete There was no error output from the command. 
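The five builds above are binary builds fed from the local source tree (USE_LOCAL_SOURCE=true): the "Uploading directory ... as binary input for the build" lines are what oc start-build prints when given --from-dir. The helper that starts them is not shown in this log; an equivalent invocation for one of the build configs would look roughly like the sketch below (an assumption for illustration, not the script's literal code).

oc start-build logging-elasticsearch \
   --from-dir=/data/src/github.com/openshift/origin-aggregated-logging \
   --follow --wait     # stream the build log and exit non-zero if the build fails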
/tmp/tmp.mpOgoWqQmj/openhift-ansible /data/src/github.com/openshift/origin
### Created host inventory file ###
[oo_first_master]
openshift

[oo_first_master:vars]
ansible_become=true
ansible_connection=local
containerized=true
docker_protect_installed_version=true
openshift_deployment_type=origin
deployment_type=origin
required_packages=[]
openshift_hosted_logging_hostname=kibana.127.0.0.1.xip.io
openshift_master_logging_public_url=https://kibana.127.0.0.1.xip.io
openshift_logging_master_public_url=https://172.18.8.225:8443
openshift_logging_image_prefix=172.30.101.10:5000/logging/
openshift_logging_use_ops=true
openshift_logging_fluentd_journal_read_from_head=False
openshift_logging_es_log_appenders=['console']
openshift_logging_use_mux=false
openshift_logging_mux_allow_external=false
openshift_logging_use_mux_client=false
###################################
Running /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/init-log-stack:58: executing 'oc login -u system:admin' expecting success...
SUCCESS after 0.217s: /data/src/github.com/openshift/origin-aggregated-logging/hack/testing/init-log-stack:58: executing 'oc login -u system:admin' expecting success
Standard output from the command:
Logged into "https://172.18.8.225:8443" as "system:admin" using existing credentials.
You have access to the following projects and can switch between them with 'oc project <projectname>':
    default
    kube-public
    kube-system
  * logging
    openshift
    openshift-infra
Using project "logging".
There was no error output from the command.
Using /tmp/tmp.mpOgoWqQmj/openhift-ansible/ansible.cfg as config file

PLAYBOOK: openshift-logging.yml ************************************************
4 plays in /tmp/tmp.mpOgoWqQmj/openhift-ansible/playbooks/byo/openshift-cluster/openshift-logging.yml

PLAY [Create initial host groups for localhost] ********************************
META: ran handlers

TASK [include_vars] ************************************************************
task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/playbooks/byo/openshift-cluster/initialize_groups.yml:10
ok: [localhost] => {
    "ansible_facts": {
        "g_all_hosts": "{{ g_master_hosts | union(g_node_hosts) | union(g_etcd_hosts) | union(g_lb_hosts) | union(g_nfs_hosts) | union(g_new_node_hosts)| union(g_new_master_hosts) | default([]) }}",
        "g_etcd_hosts": "{{ groups.etcd | default([]) }}",
        "g_glusterfs_hosts": "{{ groups.glusterfs | default([]) }}",
        "g_glusterfs_registry_hosts": "{{ groups.glusterfs_registry | default(g_glusterfs_hosts) }}",
        "g_lb_hosts": "{{ groups.lb | default([]) }}",
        "g_master_hosts": "{{ groups.masters | default([]) }}",
        "g_new_master_hosts": "{{ groups.new_masters | default([]) }}",
        "g_new_node_hosts": "{{ groups.new_nodes | default([]) }}",
        "g_nfs_hosts": "{{ groups.nfs | default([]) }}",
        "g_node_hosts": "{{ groups.nodes | default([]) }}"
    },
    "changed": false
}
META: ran handlers
META: ran handlers

PLAY [Populate config host groups] *********************************************
META: ran handlers

TASK [Evaluate groups - g_etcd_hosts required] *********************************
task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:8
skipping: [localhost] => {
    "changed": false,
    "skip_reason": "Conditional result was False",
    "skipped": true
}

TASK [Evaluate groups - g_master_hosts or g_new_master_hosts required] *********
task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:13
skipping: [localhost] => { "changed": false,
"skip_reason": "Conditional result was False", "skipped": true } TASK [Evaluate groups - g_node_hosts or g_new_node_hosts required] ************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:18 skipping: [localhost] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [Evaluate groups - g_lb_hosts required] *********************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:23 skipping: [localhost] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [Evaluate groups - g_nfs_hosts required] ********************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:28 skipping: [localhost] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [Evaluate groups - g_nfs_hosts is single host] **************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:33 skipping: [localhost] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [Evaluate groups - g_glusterfs_hosts required] **************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:38 skipping: [localhost] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [Evaluate oo_all_hosts] *************************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:43 TASK [Evaluate oo_masters] ***************************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:52 TASK [Evaluate oo_first_master] ************************************************ task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:61 skipping: [localhost] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [Evaluate oo_masters_to_config] ******************************************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:70 TASK [Evaluate oo_etcd_to_config] ********************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:79 TASK [Evaluate oo_first_etcd] ************************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:88 skipping: [localhost] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [Evaluate oo_etcd_hosts_to_upgrade] *************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:100 TASK [Evaluate oo_etcd_hosts_to_backup] **************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:107 creating host via 'add_host': hostname=openshift ok: [localhost] => (item=openshift) => { "add_host": { "groups": [ "oo_etcd_hosts_to_backup" ], "host_name": "openshift", "host_vars": {} }, "changed": false, "item": "openshift" } TASK [Evaluate oo_nodes_to_config] 
********************************************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:114 TASK [Add master to oo_nodes_to_config] **************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:124 TASK [Evaluate oo_lb_to_config] ************************************************ task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:134 TASK [Evaluate oo_nfs_to_config] *********************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:143 TASK [Evaluate oo_glusterfs_to_config] ***************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:152 META: ran handlers META: ran handlers PLAY [OpenShift Aggregated Logging] ******************************************** TASK [Gathering Facts] ********************************************************* ok: [openshift] META: ran handlers TASK [openshift_sanitize_inventory : Abort when conflicting deployment type variables are set] *** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:2 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_sanitize_inventory : Standardize on latest variable names] ***** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:15 ok: [openshift] => { "ansible_facts": { "deployment_type": "origin", "openshift_deployment_type": "origin" }, "changed": false } TASK [openshift_sanitize_inventory : Abort when deployment type is invalid] **** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:23 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_sanitize_inventory : Normalize openshift_release] ************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:31 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_sanitize_inventory : Abort when openshift_release is invalid] *** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:41 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_facts : Detecting Operating System] **************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_facts/tasks/main.yml:2 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_facts : set_fact] ********************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_facts/tasks/main.yml:8 ok: [openshift] => { "ansible_facts": { "l_is_atomic": false }, "changed": false } TASK [openshift_facts : set_fact] ********************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_facts/tasks/main.yml:10 ok: [openshift] => { "ansible_facts": { "l_is_containerized": true, "l_is_etcd_system_container": false, "l_is_master_system_container": false, "l_is_node_system_container": false, "l_is_openvswitch_system_container": false }, "changed": false } TASK 
[openshift_facts : set_fact] ********************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_facts/tasks/main.yml:16 ok: [openshift] => { "ansible_facts": { "l_any_system_container": false }, "changed": false } TASK [openshift_facts : set_fact] ********************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_facts/tasks/main.yml:18 ok: [openshift] => { "ansible_facts": { "l_etcd_runtime": "docker" }, "changed": false } TASK [openshift_facts : Validate python version] ******************************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_facts/tasks/main.yml:22 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_facts : Validate python version] ******************************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_facts/tasks/main.yml:29 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_facts : Determine Atomic Host Docker Version] ****************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_facts/tasks/main.yml:42 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_facts : assert] ************************************************ task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_facts/tasks/main.yml:46 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_facts : Load variables] **************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_facts/tasks/main.yml:53 ok: [openshift] => (item=/tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_facts/vars/default.yml) => { "ansible_facts": { "required_packages": [ "iproute", "python-dbus", "PyYAML", "yum-utils" ] }, "item": "/tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_facts/vars/default.yml" } TASK [openshift_facts : Ensure various deps are installed] ********************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_facts/tasks/main.yml:59 ok: [openshift] => (item=iproute) => { "changed": false, "item": "iproute", "rc": 0, "results": [ "iproute-3.10.0-74.el7.x86_64 providing iproute is already installed" ] } ok: [openshift] => (item=python-dbus) => { "changed": false, "item": "python-dbus", "rc": 0, "results": [ "dbus-python-1.1.1-9.el7.x86_64 providing python-dbus is already installed" ] } ok: [openshift] => (item=PyYAML) => { "changed": false, "item": "PyYAML", "rc": 0, "results": [ "PyYAML-3.10-11.el7.x86_64 providing PyYAML is already installed" ] } ok: [openshift] => (item=yum-utils) => { "changed": false, "item": "yum-utils", "rc": 0, "results": [ "yum-utils-1.1.31-40.el7.noarch providing yum-utils is already installed" ] } TASK [openshift_facts : Ensure various deps for running system containers are installed] *** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_facts/tasks/main.yml:64 skipping: [openshift] => (item=atomic) => { "changed": false, "item": "atomic", "skip_reason": "Conditional result was False", "skipped": true } skipping: [openshift] => (item=ostree) => { "changed": false, "item": "ostree", "skip_reason": "Conditional result was False", "skipped": true } skipping: [openshift] => (item=runc) => { "changed": false, "item": "runc", "skip_reason": "Conditional result was 
False", "skipped": true } TASK [openshift_facts : Gather Cluster facts and set is_containerized if needed] *** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_facts/tasks/main.yml:71 changed: [openshift] => { "ansible_facts": { "openshift": { "common": { "admin_binary": "/usr/local/bin/oadm", "all_hostnames": [ "ip-172-18-8-225.ec2.internal", "54.173.112.164", "172.18.8.225", "ec2-54-173-112-164.compute-1.amazonaws.com" ], "cli_image": "openshift/origin", "client_binary": "/usr/local/bin/oc", "cluster_id": "default", "config_base": "/etc/origin", "data_dir": "/var/lib/origin", "debug_level": "2", "deployer_image": "openshift/origin-deployer", "deployment_subtype": "basic", "deployment_type": "origin", "dns_domain": "cluster.local", "etcd_runtime": "docker", "examples_content_version": "v3.6", "generate_no_proxy_hosts": true, "hostname": "ip-172-18-8-225.ec2.internal", "install_examples": true, "internal_hostnames": [ "ip-172-18-8-225.ec2.internal", "172.18.8.225" ], "ip": "172.18.8.225", "is_atomic": false, "is_containerized": true, "is_etcd_system_container": false, "is_master_system_container": false, "is_node_system_container": false, "is_openvswitch_system_container": false, "kube_svc_ip": "172.30.0.1", "pod_image": "openshift/origin-pod", "portal_net": "172.30.0.0/16", "public_hostname": "ec2-54-173-112-164.compute-1.amazonaws.com", "public_ip": "54.173.112.164", "registry_image": "openshift/origin-docker-registry", "router_image": "openshift/origin-haproxy-router", "sdn_network_plugin_name": "redhat/openshift-ovs-subnet", "service_type": "origin", "use_calico": false, "use_contiv": false, "use_dnsmasq": true, "use_flannel": false, "use_manageiq": true, "use_nuage": false, "use_openshift_sdn": true, "version_gte_3_1_1_or_1_1_1": true, "version_gte_3_1_or_1_1": true, "version_gte_3_2_or_1_2": true, "version_gte_3_3_or_1_3": true, "version_gte_3_4_or_1_4": true, "version_gte_3_5_or_1_5": true, "version_gte_3_6": true }, "current_config": { "roles": [ "node", "docker" ] }, "docker": { "api_version": 1.24, "disable_push_dockerhub": false, "gte_1_10": true, "options": "--log-driver=journald", "service_name": "docker", "version": "1.12.6" }, "hosted": { "logging": { "selector": null }, "metrics": { "selector": null }, "registry": { "selector": "region=infra" }, "router": { "selector": "region=infra" } }, "node": { "annotations": {}, "iptables_sync_period": "30s", "kubelet_args": { "node-labels": [] }, "labels": {}, "local_quota_per_fsgroup": "", "node_image": "openshift/node", "node_system_image": "openshift/node", "nodename": "ip-172-18-8-225.ec2.internal", "ovs_image": "openshift/openvswitch", "ovs_system_image": "openshift/openvswitch", "registry_url": "openshift/origin-${component}:${version}", "schedulable": true, "sdn_mtu": "8951", "set_node_ip": false, "storage_plugin_deps": [ "ceph", "glusterfs", "iscsi" ] }, "provider": { "metadata": { "ami-id": "ami-40e7a056", "ami-launch-index": "0", "ami-manifest-path": "(unknown)", "block-device-mapping": { "ami": "/dev/sda1", "ebs17": "sdb", "root": "/dev/sda1" }, "hostname": "ip-172-18-8-225.ec2.internal", "instance-action": "none", "instance-id": "i-0433d36afa0b59316", "instance-type": "c4.xlarge", "local-hostname": "ip-172-18-8-225.ec2.internal", "local-ipv4": "172.18.8.225", "mac": "0e:11:96:65:5f:96", "metrics": { "vhostmd": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>" }, "network": { "interfaces": { "macs": { "0e:11:96:65:5f:96": { "device-number": "0", "interface-id": "eni-74e194ae", "ipv4-associations": { 
"54.173.112.164": "172.18.8.225" }, "local-hostname": "ip-172-18-8-225.ec2.internal", "local-ipv4s": "172.18.8.225", "mac": "0e:11:96:65:5f:96", "owner-id": "531415883065", "public-hostname": "ec2-54-173-112-164.compute-1.amazonaws.com", "public-ipv4s": "54.173.112.164", "security-group-ids": "sg-7e73221a", "security-groups": "default", "subnet-id": "subnet-cf57c596", "subnet-ipv4-cidr-block": "172.18.0.0/20", "vpc-id": "vpc-69705d0c", "vpc-ipv4-cidr-block": "172.18.0.0/16", "vpc-ipv4-cidr-blocks": "172.18.0.0/16" } } } }, "placement": { "availability-zone": "us-east-1d" }, "profile": "default-hvm", "public-hostname": "ec2-54-173-112-164.compute-1.amazonaws.com", "public-ipv4": "54.173.112.164", "public-keys/": "0=libra", "reservation-id": "r-0fb8aaab72be39d60", "security-groups": "default", "services": { "domain": "amazonaws.com", "partition": "aws" } }, "name": "aws", "network": { "hostname": "ip-172-18-8-225.ec2.internal", "interfaces": [ { "ips": [ "172.18.8.225" ], "network_id": "subnet-cf57c596", "network_type": "vpc", "public_ips": [ "54.173.112.164" ] } ], "ip": "172.18.8.225", "ipv6_enabled": false, "public_hostname": "ec2-54-173-112-164.compute-1.amazonaws.com", "public_ip": "54.173.112.164" }, "zone": "us-east-1d" } } }, "changed": true } TASK [openshift_facts : Set repoquery command] ********************************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_facts/tasks/main.yml:99 ok: [openshift] => { "ansible_facts": { "repoquery_cmd": "repoquery --plugins" }, "changed": false } TASK [openshift_logging : fail] ************************************************ task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/main.yaml:2 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Set default image variables based on deployment_type] *** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/main.yaml:6 ok: [openshift] => (item=/tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/vars/default_images.yml) => { "ansible_facts": { "__openshift_logging_image_prefix": "{{ openshift_hosted_logging_deployer_prefix | default('docker.io/openshift/origin-') }}", "__openshift_logging_image_version": "{{ openshift_hosted_logging_deployer_version | default('latest') }}" }, "item": "/tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/vars/default_images.yml" } TASK [openshift_logging : Set logging image facts] ***************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/main.yaml:12 ok: [openshift] => { "ansible_facts": { "openshift_logging_image_prefix": "172.30.101.10:5000/logging/", "openshift_logging_image_version": "latest" }, "changed": false } TASK [openshift_logging : Create temp directory for doing work in] ************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/main.yaml:17 ok: [openshift] => { "changed": false, "cmd": [ "mktemp", "-d", "/tmp/openshift-logging-ansible-XXXXXX" ], "delta": "0:00:01.003228", "end": "2017-06-02 16:56:42.077986", "rc": 0, "start": "2017-06-02 16:56:41.074758" } STDOUT: /tmp/openshift-logging-ansible-HWJ2Gz TASK [openshift_logging : debug] *********************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/main.yaml:24 ok: [openshift] => { "changed": false } MSG: Created temp dir /tmp/openshift-logging-ansible-HWJ2Gz TASK [openshift_logging : Create 
local temp directory for doing work in] ******* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/main.yaml:26 ok: [openshift -> 127.0.0.1] => { "changed": false, "cmd": [ "mktemp", "-d", "/tmp/openshift-logging-ansible-XXXXXX" ], "delta": "0:00:00.002029", "end": "2017-06-02 16:56:42.238398", "rc": 0, "start": "2017-06-02 16:56:42.236369" } STDOUT: /tmp/openshift-logging-ansible-RfkC5G TASK [openshift_logging : include] ********************************************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/main.yaml:33 included: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml for openshift TASK [openshift_logging : Gather OpenShift Logging Facts] ********************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:2 ok: [openshift] => { "ansible_facts": { "openshift_logging_facts": { "curator": { "clusterrolebindings": {}, "configmaps": {}, "daemonsets": {}, "deploymentconfigs": {}, "oauthclients": {}, "pvcs": {}, "rolebindings": {}, "routes": {}, "sccs": {}, "secrets": {}, "services": {} }, "curator_ops": { "clusterrolebindings": {}, "configmaps": {}, "daemonsets": {}, "deploymentconfigs": {}, "oauthclients": {}, "pvcs": {}, "rolebindings": {}, "routes": {}, "sccs": {}, "secrets": {}, "services": {} }, "elasticsearch": { "clusterrolebindings": {}, "configmaps": {}, "daemonsets": {}, "deploymentconfigs": {}, "oauthclients": {}, "pvcs": {}, "rolebindings": {}, "routes": {}, "sccs": {}, "secrets": {}, "services": {} }, "elasticsearch_ops": { "clusterrolebindings": {}, "configmaps": {}, "daemonsets": {}, "deploymentconfigs": {}, "oauthclients": {}, "pvcs": {}, "rolebindings": {}, "routes": {}, "sccs": {}, "secrets": {}, "services": {} }, "fluentd": { "clusterrolebindings": {}, "configmaps": {}, "daemonsets": {}, "deploymentconfigs": {}, "oauthclients": {}, "pvcs": {}, "rolebindings": {}, "routes": {}, "sccs": {}, "secrets": {}, "services": {} }, "kibana": { "clusterrolebindings": {}, "configmaps": {}, "daemonsets": {}, "deploymentconfigs": {}, "oauthclients": {}, "pvcs": {}, "rolebindings": {}, "routes": {}, "sccs": {}, "secrets": {}, "services": {} }, "kibana_ops": { "clusterrolebindings": {}, "configmaps": {}, "daemonsets": {}, "deploymentconfigs": {}, "oauthclients": {}, "pvcs": {}, "rolebindings": {}, "routes": {}, "sccs": {}, "secrets": {}, "services": {} } } }, "changed": false } TASK [openshift_logging : Set logging project] ********************************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:7 ok: [openshift] => { "changed": false, "results": { "cmd": "/bin/oc get namespace logging -o json", "results": { "apiVersion": "v1", "kind": "Namespace", "metadata": { "annotations": { "openshift.io/description": "", "openshift.io/display-name": "", "openshift.io/node-selector": "", "openshift.io/sa.scc.mcs": "s0:c7,c4", "openshift.io/sa.scc.supplemental-groups": "1000050000/10000", "openshift.io/sa.scc.uid-range": "1000050000/10000" }, "creationTimestamp": "2017-06-02T20:44:26Z", "name": "logging", "resourceVersion": "700", "selfLink": "/api/v1/namespaces/logging", "uid": "447251fb-47d4-11e7-ab86-0e1196655f96" }, "spec": { "finalizers": [ "openshift.io/origin", "kubernetes" ] }, "status": { "phase": "Active" } }, "returncode": 0 }, "state": "present" } TASK [openshift_logging : Labelling logging project] *************************** task path: 
/tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:13 TASK [openshift_logging : Labelling logging project] *************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:26 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Create logging cert directory] *********************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:39 ok: [openshift] => { "changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/origin/logging", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 6, "state": "directory", "uid": 0 } TASK [openshift_logging : include] ********************************************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:47 included: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml for openshift TASK [openshift_logging : Checking for ca.key] ********************************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:3 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_logging : Checking for ca.crt] ********************************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:8 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_logging : Checking for ca.serial.txt] ************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:13 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_logging : Generate certificates] ******************************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:18 changed: [openshift] => { "changed": true, "cmd": [ "/usr/local/bin/oadm", "--config=/tmp/openshift-logging-ansible-HWJ2Gz/admin.kubeconfig", "ca", "create-signer-cert", "--key=/etc/origin/logging/ca.key", "--cert=/etc/origin/logging/ca.crt", "--serial=/etc/origin/logging/ca.serial.txt", "--name=logging-signer-test" ], "delta": "0:00:00.272820", "end": "2017-06-02 16:56:46.500672", "rc": 0, "start": "2017-06-02 16:56:46.227852" } TASK [openshift_logging : Checking for signing.conf] *************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:29 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_logging : template] ******************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:34 changed: [openshift] => { "changed": true, "checksum": "a5a1bda430be44f982fa9097778b7d35d2e42780", "dest": "/etc/origin/logging/signing.conf", "gid": 0, "group": "root", "md5sum": "449087446670073f2899aac33113350c", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 4263, "src": "/root/.ansible/tmp/ansible-tmp-1496437006.66-215522129902570/source", "state": "file", "uid": 0 } TASK [openshift_logging : include] ********************************************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:39 included: 
/tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml for openshift included: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml for openshift included: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml for openshift TASK [openshift_logging : Checking for kibana.crt] ***************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:2 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_logging : Checking for kibana.key] ***************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:7 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_logging : Trying to discover server cert variable name for kibana] *** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:12 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Trying to discover the server key variable name for kibana] *** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:20 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Creating signed server cert and key for kibana] ****** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:28 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Copying server key for kibana to generated certs directory] *** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:40 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Copying Server cert for kibana to generated certs directory] *** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:50 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Checking for kibana-ops.crt] ************************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:2 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_logging : Checking for kibana-ops.key] ************************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:7 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_logging : Trying to discover server cert variable name for kibana-ops] *** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:12 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Trying to discover the server key variable name for kibana-ops] *** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:20 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Creating signed 
server cert and key for kibana-ops] *** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:28 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Copying server key for kibana-ops to generated certs directory] *** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:40 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Copying Server cert for kibana-ops to generated certs directory] *** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:50 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Checking for kibana-internal.crt] ******************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:2 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_logging : Checking for kibana-internal.key] ******************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:7 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_logging : Trying to discover server cert variable name for kibana-internal] *** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:12 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Trying to discover the server key variable name for kibana-internal] *** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:20 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Creating signed server cert and key for kibana-internal] *** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:28 changed: [openshift] => { "changed": true, "cmd": [ "/usr/local/bin/oadm", "--config=/tmp/openshift-logging-ansible-HWJ2Gz/admin.kubeconfig", "ca", "create-server-cert", "--key=/etc/origin/logging/kibana-internal.key", "--cert=/etc/origin/logging/kibana-internal.crt", "--hostnames=kibana, kibana-ops, kibana.127.0.0.1.xip.io, kibana-ops.router.default.svc.cluster.local", "--signer-cert=/etc/origin/logging/ca.crt", "--signer-key=/etc/origin/logging/ca.key", "--signer-serial=/etc/origin/logging/ca.serial.txt" ], "delta": "0:00:00.417038", "end": "2017-06-02 16:56:48.732149", "rc": 0, "start": "2017-06-02 16:56:48.315111" } TASK [openshift_logging : Copying server key for kibana-internal to generated certs directory] *** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:40 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Copying Server cert for kibana-internal to generated certs directory] *** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:50 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : include] 
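For reference, the kibana-internal server certificate created in the task above comes from a single "oadm ca create-server-cert" call against the logging signer CA generated earlier. A minimal standalone sketch, assuming the same /etc/origin/logging layout and with illustrative hostnames (oadm is the legacy alias for "oc adm"):

    CERT_DIR=/etc/origin/logging
    # Issue a server cert whose subject alternative names cover every name Kibana is reached by.
    oadm ca create-server-cert \
      --key="$CERT_DIR/kibana-internal.key" \
      --cert="$CERT_DIR/kibana-internal.crt" \
      --hostnames="kibana,kibana-ops,kibana.example.com" \
      --signer-cert="$CERT_DIR/ca.crt" \
      --signer-key="$CERT_DIR/ca.key" \
      --signer-serial="$CERT_DIR/ca.serial.txt"

Every name passed via --hostnames ends up as a subject alternative name on the issued certificate, which is why a single cert can back both the kibana and kibana-ops routes.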
********************************************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:48 skipping: [openshift] => (item={u'procure_component': u'mux', u'hostnames': u'logging-mux, mux.router.default.svc.cluster.local'}) => { "cert_info": { "hostnames": "logging-mux, mux.router.default.svc.cluster.local", "procure_component": "mux" }, "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : include] ********************************************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:56 skipping: [openshift] => (item={u'procure_component': u'mux'}) => { "changed": false, "shared_key_info": { "procure_component": "mux" }, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : include] ********************************************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:63 skipping: [openshift] => (item={u'procure_component': u'es', u'hostnames': u'es, es.router.default.svc.cluster.local'}) => { "cert_info": { "hostnames": "es, es.router.default.svc.cluster.local", "procure_component": "es" }, "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : include] ********************************************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:71 skipping: [openshift] => (item={u'procure_component': u'es-ops', u'hostnames': u'es-ops, es-ops.router.default.svc.cluster.local'}) => { "cert_info": { "hostnames": "es-ops, es-ops.router.default.svc.cluster.local", "procure_component": "es-ops" }, "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Copy proxy TLS configuration file] ******************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:81 changed: [openshift] => { "changed": true, "checksum": "36991681e03970736a99be9f084773521c44db06", "dest": "/etc/origin/logging/server-tls.json", "gid": 0, "group": "root", "md5sum": "2a954195add2b2fdde4ed09ff5c8e1c5", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 321, "src": "/root/.ansible/tmp/ansible-tmp-1496437009.2-139101931794913/source", "state": "file", "uid": 0 } TASK [openshift_logging : Copy proxy TLS configuration file] ******************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:86 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Checking for ca.db] ********************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:91 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_logging : copy] ************************************************ task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:96 changed: [openshift] => { "changed": true, "checksum": "da39a3ee5e6b4b0d3255bfef95601890afd80709", "dest": "/etc/origin/logging/ca.db", "gid": 0, "group": "root", "md5sum": "d41d8cd98f00b204e9800998ecf8427e", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 0, "src": 
"/root/.ansible/tmp/ansible-tmp-1496437009.57-231497239541384/source", "state": "file", "uid": 0 } TASK [openshift_logging : Checking for ca.crt.srl] ***************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:101 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_logging : copy] ************************************************ task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:106 changed: [openshift] => { "changed": true, "checksum": "da39a3ee5e6b4b0d3255bfef95601890afd80709", "dest": "/etc/origin/logging/ca.crt.srl", "gid": 0, "group": "root", "md5sum": "d41d8cd98f00b204e9800998ecf8427e", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 0, "src": "/root/.ansible/tmp/ansible-tmp-1496437009.92-187651722497001/source", "state": "file", "uid": 0 } TASK [openshift_logging : Generate PEM certs] ********************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:111 included: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml for openshift included: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml for openshift included: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml for openshift included: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml for openshift TASK [openshift_logging : Checking for system.logging.fluentd.key] ************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:2 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_logging : Checking for system.logging.fluentd.crt] ************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:7 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_logging : Creating cert req for system.logging.fluentd] ******** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:12 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Creating cert req for system.logging.fluentd] ******** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:22 changed: [openshift] => { "changed": true, "cmd": [ "openssl", "req", "-out", "/etc/origin/logging/system.logging.fluentd.csr", "-new", "-newkey", "rsa:2048", "-keyout", "/etc/origin/logging/system.logging.fluentd.key", "-subj", "/CN=system.logging.fluentd/OU=OpenShift/O=Logging", "-days", "712", "-nodes" ], "delta": "0:00:00.141040", "end": "2017-06-02 16:56:50.871971", "rc": 0, "start": "2017-06-02 16:56:50.730931" } STDERR: Generating a 2048 bit RSA private key .......+++ ....+++ writing new private key to '/etc/origin/logging/system.logging.fluentd.key' ----- TASK [openshift_logging : Sign cert request with CA for system.logging.fluentd] *** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:31 changed: [openshift] => { "changed": true, "cmd": [ "openssl", "ca", "-in", "/etc/origin/logging/system.logging.fluentd.csr", "-notext", "-out", "/etc/origin/logging/system.logging.fluentd.crt", "-config", "/etc/origin/logging/signing.conf", "-extensions", "v3_req", 
"-batch", "-extensions", "server_ext" ], "delta": "0:00:00.007715", "end": "2017-06-02 16:56:51.001387", "rc": 0, "start": "2017-06-02 16:56:50.993672" } STDERR: Using configuration from /etc/origin/logging/signing.conf Check that the request matches the signature Signature ok Certificate Details: Serial Number: 2 (0x2) Validity Not Before: Jun 2 20:56:50 2017 GMT Not After : Jun 2 20:56:50 2019 GMT Subject: organizationName = Logging organizationalUnitName = OpenShift commonName = system.logging.fluentd X509v3 extensions: X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Basic Constraints: CA:FALSE X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Subject Key Identifier: C0:09:25:4D:CF:77:DB:EB:A0:4E:41:09:39:BC:85:6C:2F:E5:25:41 X509v3 Authority Key Identifier: 0. Certificate is to be certified until Jun 2 20:56:50 2019 GMT (730 days) Write out database with 1 new entries Data Base Updated TASK [openshift_logging : Checking for system.logging.kibana.key] ************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:2 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_logging : Checking for system.logging.kibana.crt] ************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:7 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_logging : Creating cert req for system.logging.kibana] ********* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:12 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Creating cert req for system.logging.kibana] ********* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:22 changed: [openshift] => { "changed": true, "cmd": [ "openssl", "req", "-out", "/etc/origin/logging/system.logging.kibana.csr", "-new", "-newkey", "rsa:2048", "-keyout", "/etc/origin/logging/system.logging.kibana.key", "-subj", "/CN=system.logging.kibana/OU=OpenShift/O=Logging", "-days", "712", "-nodes" ], "delta": "0:00:00.055531", "end": "2017-06-02 16:56:51.447362", "rc": 0, "start": "2017-06-02 16:56:51.391831" } STDERR: Generating a 2048 bit RSA private key ...........+++ ........................+++ writing new private key to '/etc/origin/logging/system.logging.kibana.key' ----- TASK [openshift_logging : Sign cert request with CA for system.logging.kibana] *** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:31 changed: [openshift] => { "changed": true, "cmd": [ "openssl", "ca", "-in", "/etc/origin/logging/system.logging.kibana.csr", "-notext", "-out", "/etc/origin/logging/system.logging.kibana.crt", "-config", "/etc/origin/logging/signing.conf", "-extensions", "v3_req", "-batch", "-extensions", "server_ext" ], "delta": "0:00:00.007657", "end": "2017-06-02 16:56:51.582795", "rc": 0, "start": "2017-06-02 16:56:51.575138" } STDERR: Using configuration from /etc/origin/logging/signing.conf Check that the request matches the signature Signature ok Certificate Details: Serial Number: 3 (0x3) Validity Not Before: Jun 2 20:56:51 2017 GMT Not After : Jun 2 20:56:51 2019 GMT Subject: organizationName = Logging organizationalUnitName = OpenShift commonName = system.logging.kibana X509v3 extensions: X509v3 Key Usage: critical Digital Signature, 
Key Encipherment X509v3 Basic Constraints: CA:FALSE X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Subject Key Identifier: 68:EB:93:DC:ED:5D:C3:25:90:86:58:89:9A:2B:8A:5B:26:60:58:29 X509v3 Authority Key Identifier: 0. Certificate is to be certified until Jun 2 20:56:51 2019 GMT (730 days) Write out database with 1 new entries Data Base Updated TASK [openshift_logging : Checking for system.logging.curator.key] ************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:2 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_logging : Checking for system.logging.curator.crt] ************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:7 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_logging : Creating cert req for system.logging.curator] ******** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:12 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Creating cert req for system.logging.curator] ******** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:22 changed: [openshift] => { "changed": true, "cmd": [ "openssl", "req", "-out", "/etc/origin/logging/system.logging.curator.csr", "-new", "-newkey", "rsa:2048", "-keyout", "/etc/origin/logging/system.logging.curator.key", "-subj", "/CN=system.logging.curator/OU=OpenShift/O=Logging", "-days", "712", "-nodes" ], "delta": "0:00:00.046492", "end": "2017-06-02 16:56:52.058768", "rc": 0, "start": "2017-06-02 16:56:52.012276" } STDERR: Generating a 2048 bit RSA private key ....................+++ ........+++ writing new private key to '/etc/origin/logging/system.logging.curator.key' ----- TASK [openshift_logging : Sign cert request with CA for system.logging.curator] *** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:31 changed: [openshift] => { "changed": true, "cmd": [ "openssl", "ca", "-in", "/etc/origin/logging/system.logging.curator.csr", "-notext", "-out", "/etc/origin/logging/system.logging.curator.crt", "-config", "/etc/origin/logging/signing.conf", "-extensions", "v3_req", "-batch", "-extensions", "server_ext" ], "delta": "0:00:00.007613", "end": "2017-06-02 16:56:52.189438", "rc": 0, "start": "2017-06-02 16:56:52.181825" } STDERR: Using configuration from /etc/origin/logging/signing.conf Check that the request matches the signature Signature ok Certificate Details: Serial Number: 4 (0x4) Validity Not Before: Jun 2 20:56:52 2017 GMT Not After : Jun 2 20:56:52 2019 GMT Subject: organizationName = Logging organizationalUnitName = OpenShift commonName = system.logging.curator X509v3 extensions: X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Basic Constraints: CA:FALSE X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Subject Key Identifier: 08:08:32:1F:D9:1E:6C:58:B9:09:A1:A1:3E:9B:32:10:CE:1F:61:F7 X509v3 Authority Key Identifier: 0. 
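Each of the system.logging.* and system.admin client certificates in this run is produced by the same two openssl steps visible in the task output: an RSA key plus CSR from "openssl req", then a signature from the logging CA via "openssl ca" and the signing.conf written earlier. A condensed sketch with an illustrative component name:

    DIR=/etc/origin/logging
    NAME=system.logging.example   # illustrative; the run uses fluentd, kibana, curator and system.admin

    # 1. Create a 2048-bit key and a CSR carrying the component identity.
    openssl req -new -newkey rsa:2048 -nodes -days 712 \
      -keyout "$DIR/$NAME.key" \
      -out "$DIR/$NAME.csr" \
      -subj "/CN=$NAME/OU=OpenShift/O=Logging"

    # 2. Sign the CSR with the logging signer CA.
    openssl ca -batch -notext \
      -config "$DIR/signing.conf" \
      -extensions v3_req -extensions server_ext \
      -in "$DIR/$NAME.csr" \
      -out "$DIR/$NAME.crt"

The -days 712 option only affects self-signed output from "openssl req"; the 730-day validity visible in the CA output here comes from the defaults in signing.conf.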
Certificate is to be certified until Jun 2 20:56:52 2019 GMT (730 days) Write out database with 1 new entries Data Base Updated TASK [openshift_logging : Checking for system.admin.key] *********************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:2 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_logging : Checking for system.admin.crt] *********************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:7 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_logging : Creating cert req for system.admin] ****************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:12 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Creating cert req for system.admin] ****************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:22 changed: [openshift] => { "changed": true, "cmd": [ "openssl", "req", "-out", "/etc/origin/logging/system.admin.csr", "-new", "-newkey", "rsa:2048", "-keyout", "/etc/origin/logging/system.admin.key", "-subj", "/CN=system.admin/OU=OpenShift/O=Logging", "-days", "712", "-nodes" ], "delta": "0:00:00.241679", "end": "2017-06-02 16:56:52.825148", "rc": 0, "start": "2017-06-02 16:56:52.583469" } STDERR: Generating a 2048 bit RSA private key .......................................................................+++ ......................................................................................................+++ writing new private key to '/etc/origin/logging/system.admin.key' ----- TASK [openshift_logging : Sign cert request with CA for system.admin] ********** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:31 changed: [openshift] => { "changed": true, "cmd": [ "openssl", "ca", "-in", "/etc/origin/logging/system.admin.csr", "-notext", "-out", "/etc/origin/logging/system.admin.crt", "-config", "/etc/origin/logging/signing.conf", "-extensions", "v3_req", "-batch", "-extensions", "server_ext" ], "delta": "0:00:00.007812", "end": "2017-06-02 16:56:52.956337", "rc": 0, "start": "2017-06-02 16:56:52.948525" } STDERR: Using configuration from /etc/origin/logging/signing.conf Check that the request matches the signature Signature ok Certificate Details: Serial Number: 5 (0x5) Validity Not Before: Jun 2 20:56:52 2017 GMT Not After : Jun 2 20:56:52 2019 GMT Subject: organizationName = Logging organizationalUnitName = OpenShift commonName = system.admin X509v3 extensions: X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Basic Constraints: CA:FALSE X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Subject Key Identifier: 1A:44:7B:60:8D:A8:CB:8E:B5:90:5D:AC:01:33:2A:C5:D3:C5:70:FD X509v3 Authority Key Identifier: 0. 
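Any of the certificates issued above can be sanity-checked against the signer CA with standard openssl inspection commands (file names illustrative):

    DIR=/etc/origin/logging
    # Print subject and validity window of an issued certificate.
    openssl x509 -noout -subject -dates -in "$DIR/system.admin.crt"
    # Confirm it chains back to the logging CA.
    openssl verify -CAfile "$DIR/ca.crt" "$DIR/system.admin.crt"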
Certificate is to be certified until Jun 2 20:56:52 2019 GMT (730 days) Write out database with 1 new entries Data Base Updated TASK [openshift_logging : Generate PEM cert for mux] *************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:121 skipping: [openshift] => (item=system.logging.mux) => { "changed": false, "node_name": "system.logging.mux", "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Generate PEM cert for Elasticsearch external route] *** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:129 skipping: [openshift] => (item=system.logging.es) => { "changed": false, "node_name": "system.logging.es", "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Creating necessary JKS certs] ************************ task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:137 included: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_jks.yaml for openshift TASK [openshift_logging : Checking for elasticsearch.jks] ********************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_jks.yaml:3 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_logging : Checking for logging-es.jks] ************************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_jks.yaml:8 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_logging : Checking for system.admin.jks] *********************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_jks.yaml:13 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_logging : Checking for truststore.jks] ************************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_jks.yaml:18 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_logging : Create placeholder for previously created JKS certs to prevent recreating...] *** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_jks.yaml:23 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Create placeholder for previously created JKS certs to prevent recreating...] *** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_jks.yaml:28 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Create placeholder for previously created JKS certs to prevent recreating...] *** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_jks.yaml:33 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Create placeholder for previously created JKS certs to prevent recreating...] 
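The Java keystores themselves are built on the control host by a helper script invoked a few tasks further down; condensed from the set -x trace that follows, the per-identity flow is roughly the following (directory, alias and throwaway passwords are illustrative, mirroring the kspass/tspass values in the trace):

    DIR=/tmp/jks-workdir      # illustrative; the run uses a mktemp scratch directory
    NODE=system.admin
    PASS=kspass

    # 1. Generate a keypair directly inside a JKS keystore.
    keytool -genkey -alias "$NODE" -keystore "$DIR/$NODE.jks" \
      -keyalg RSA -keysize 2048 -validity 712 \
      -keypass "$PASS" -storepass "$PASS" \
      -dname "CN=$NODE, OU=OpenShift, O=Logging"

    # 2. Export a CSR from the keystore and sign it with the logging CA.
    keytool -certreq -alias "$NODE" -keystore "$DIR/$NODE.jks" \
      -file "$DIR/$NODE.csr" -keyalg rsa -keypass "$PASS" -storepass "$PASS"
    openssl ca -batch -notext -config "$DIR/signing.conf" \
      -extensions v3_req -extensions server_ext \
      -in "$DIR/$NODE.csr" -out "$DIR/$NODE.crt"

    # 3. Import the CA and the signed certificate back into the keystore.
    keytool -import -noprompt -alias sig-ca -file "$DIR/ca.crt" \
      -keystore "$DIR/$NODE.jks" -storepass "$PASS"
    keytool -import -noprompt -alias "$NODE" -file "$DIR/$NODE.crt" \
      -keystore "$DIR/$NODE.jks" -storepass "$PASS"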
*** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_jks.yaml:38 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : pulling down signing items from host] **************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_jks.yaml:43 changed: [openshift] => (item=ca.crt) => { "changed": true, "checksum": "8176b332bcd9070176021bc40d972a2704a98e96", "dest": "/tmp/openshift-logging-ansible-RfkC5G/ca.crt", "item": "ca.crt", "md5sum": "7b1366d030eea123c69200f9dfc8481d", "remote_checksum": "8176b332bcd9070176021bc40d972a2704a98e96", "remote_md5sum": null } changed: [openshift] => (item=ca.key) => { "changed": true, "checksum": "d1fecc11520a0646d47ad1340584559a103c185c", "dest": "/tmp/openshift-logging-ansible-RfkC5G/ca.key", "item": "ca.key", "md5sum": "f75148f988742f096f075d769737763d", "remote_checksum": "d1fecc11520a0646d47ad1340584559a103c185c", "remote_md5sum": null } changed: [openshift] => (item=ca.serial.txt) => { "changed": true, "checksum": "b649682b92a811746098e5c91e891e5142a41950", "dest": "/tmp/openshift-logging-ansible-RfkC5G/ca.serial.txt", "item": "ca.serial.txt", "md5sum": "76b01ce73ac53fdac1c67d27ac040473", "remote_checksum": "b649682b92a811746098e5c91e891e5142a41950", "remote_md5sum": null } ok: [openshift] => (item=ca.crl.srl) => { "changed": false, "file": "/etc/origin/logging/ca.crl.srl", "item": "ca.crl.srl" } MSG: the remote file does not exist, not transferring, ignored changed: [openshift] => (item=ca.db) => { "changed": true, "checksum": "222548e8d6b17788196c763c8ed5997fe7832214", "dest": "/tmp/openshift-logging-ansible-RfkC5G/ca.db", "item": "ca.db", "md5sum": "b889b51a0e93f00ecc5bf6b0755213c7", "remote_checksum": "222548e8d6b17788196c763c8ed5997fe7832214", "remote_md5sum": null } TASK [openshift_logging : template] ******************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_jks.yaml:56 changed: [openshift -> 127.0.0.1] => { "changed": true, "checksum": "4deec6657ff125e2f968e36c33be0f4251f4e71a", "dest": "/tmp/openshift-logging-ansible-RfkC5G/signing.conf", "gid": 0, "group": "root", "md5sum": "8771919e1624f5496d511345d7eb5eba", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 4281, "src": "/root/.ansible/tmp/ansible-tmp-1496437014.56-35174431195948/source", "state": "file", "uid": 0 } TASK [openshift_logging : Run JKS generation script] *************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_jks.yaml:61 changed: [openshift -> 127.0.0.1] => { "changed": true, "rc": 0 } STDOUT: Generating keystore and certificate for node system.admin Generating certificate signing request for node system.admin Sign certificate request with CA Import back to keystore (including CA chain) All done for system.admin Generating keystore and certificate for node elasticsearch Generating certificate signing request for node elasticsearch Sign certificate request with CA Import back to keystore (including CA chain) All done for elasticsearch Generating keystore and certificate for node logging-es Generating certificate signing request for node logging-es Sign certificate request with CA Import back to keystore (including CA chain) All done for logging-es Import CA to truststore for validating client certs STDERR: + '[' 2 -lt 1 ']' + 
dir=/tmp/openshift-logging-ansible-RfkC5G + SCRATCH_DIR=/tmp/openshift-logging-ansible-RfkC5G + PROJECT=logging + [[ ! -f /tmp/openshift-logging-ansible-RfkC5G/system.admin.jks ]] + generate_JKS_client_cert system.admin + NODE_NAME=system.admin + ks_pass=kspass + ts_pass=tspass + dir=/tmp/openshift-logging-ansible-RfkC5G + echo Generating keystore and certificate for node system.admin + keytool -genkey -alias system.admin -keystore /tmp/openshift-logging-ansible-RfkC5G/system.admin.jks -keyalg RSA -keysize 2048 -validity 712 -keypass kspass -storepass kspass -dname 'CN=system.admin, OU=OpenShift, O=Logging' + echo Generating certificate signing request for node system.admin + keytool -certreq -alias system.admin -keystore /tmp/openshift-logging-ansible-RfkC5G/system.admin.jks -file /tmp/openshift-logging-ansible-RfkC5G/system.admin.jks.csr -keyalg rsa -keypass kspass -storepass kspass -dname 'CN=system.admin, OU=OpenShift, O=Logging' + echo Sign certificate request with CA + openssl ca -in /tmp/openshift-logging-ansible-RfkC5G/system.admin.jks.csr -notext -out /tmp/openshift-logging-ansible-RfkC5G/system.admin.jks.crt -config /tmp/openshift-logging-ansible-RfkC5G/signing.conf -extensions v3_req -batch -extensions server_ext Using configuration from /tmp/openshift-logging-ansible-RfkC5G/signing.conf Check that the request matches the signature Signature ok Certificate Details: Serial Number: 6 (0x6) Validity Not Before: Jun 2 20:57:03 2017 GMT Not After : Jun 2 20:57:03 2019 GMT Subject: organizationName = Logging organizationalUnitName = OpenShift commonName = system.admin X509v3 extensions: X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Basic Constraints: CA:FALSE X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Subject Key Identifier: 8F:8E:06:D6:10:DA:06:38:B2:12:D1:37:9B:80:2D:1E:CA:14:B0:78 X509v3 Authority Key Identifier: 0. Certificate is to be certified until Jun 2 20:57:03 2019 GMT (730 days) Write out database with 1 new entries Data Base Updated + echo 'Import back to keystore (including CA chain)' + keytool -import -file /tmp/openshift-logging-ansible-RfkC5G/ca.crt -keystore /tmp/openshift-logging-ansible-RfkC5G/system.admin.jks -storepass kspass -noprompt -alias sig-ca Certificate was added to keystore + keytool -import -file /tmp/openshift-logging-ansible-RfkC5G/system.admin.jks.crt -keystore /tmp/openshift-logging-ansible-RfkC5G/system.admin.jks -storepass kspass -noprompt -alias system.admin Certificate reply was installed in keystore + echo All done for system.admin + [[ ! 
-f /tmp/openshift-logging-ansible-RfkC5G/elasticsearch.jks ]] ++ join , logging-es logging-es-ops ++ local IFS=, ++ shift ++ echo logging-es,logging-es-ops + generate_JKS_chain true elasticsearch logging-es,logging-es-ops + dir=/tmp/openshift-logging-ansible-RfkC5G + ADD_OID=true + NODE_NAME=elasticsearch + CERT_NAMES=logging-es,logging-es-ops + ks_pass=kspass + ts_pass=tspass + rm -rf elasticsearch + extension_names= + for name in '${CERT_NAMES//,/ }' + extension_names=,dns:logging-es + for name in '${CERT_NAMES//,/ }' + extension_names=,dns:logging-es,dns:logging-es-ops + '[' true = true ']' + extension_names=,dns:logging-es,dns:logging-es-ops,oid:1.2.3.4.5.5 + echo Generating keystore and certificate for node elasticsearch + keytool -genkey -alias elasticsearch -keystore /tmp/openshift-logging-ansible-RfkC5G/elasticsearch.jks -keypass kspass -storepass kspass -keyalg RSA -keysize 2048 -validity 712 -dname 'CN=elasticsearch, OU=OpenShift, O=Logging' -ext san=dns:localhost,ip:127.0.0.1,dns:logging-es,dns:logging-es-ops,oid:1.2.3.4.5.5 + echo Generating certificate signing request for node elasticsearch + keytool -certreq -alias elasticsearch -keystore /tmp/openshift-logging-ansible-RfkC5G/elasticsearch.jks -storepass kspass -file /tmp/openshift-logging-ansible-RfkC5G/elasticsearch.csr -keyalg rsa -dname 'CN=elasticsearch, OU=OpenShift, O=Logging' -ext san=dns:localhost,ip:127.0.0.1,dns:logging-es,dns:logging-es-ops,oid:1.2.3.4.5.5 + echo Sign certificate request with CA + openssl ca -in /tmp/openshift-logging-ansible-RfkC5G/elasticsearch.csr -notext -out /tmp/openshift-logging-ansible-RfkC5G/elasticsearch.crt -config /tmp/openshift-logging-ansible-RfkC5G/signing.conf -extensions v3_req -batch -extensions server_ext Using configuration from /tmp/openshift-logging-ansible-RfkC5G/signing.conf Check that the request matches the signature Signature ok Certificate Details: Serial Number: 7 (0x7) Validity Not Before: Jun 2 20:57:04 2017 GMT Not After : Jun 2 20:57:04 2019 GMT Subject: organizationName = Logging organizationalUnitName = OpenShift commonName = elasticsearch X509v3 extensions: X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Basic Constraints: CA:FALSE X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Subject Key Identifier: 0C:52:29:4E:54:AD:5C:BF:B9:D3:82:07:BB:E0:17:9A:49:F4:35:99 X509v3 Authority Key Identifier: 0. X509v3 Subject Alternative Name: DNS:localhost, IP Address:127.0.0.1, DNS:logging-es, DNS:logging-es-ops, Registered ID:1.2.3.4.5.5 Certificate is to be certified until Jun 2 20:57:04 2019 GMT (730 days) Write out database with 1 new entries Data Base Updated + echo 'Import back to keystore (including CA chain)' + keytool -import -file /tmp/openshift-logging-ansible-RfkC5G/ca.crt -keystore /tmp/openshift-logging-ansible-RfkC5G/elasticsearch.jks -storepass kspass -noprompt -alias sig-ca Certificate was added to keystore + keytool -import -file /tmp/openshift-logging-ansible-RfkC5G/elasticsearch.crt -keystore /tmp/openshift-logging-ansible-RfkC5G/elasticsearch.jks -storepass kspass -noprompt -alias elasticsearch Certificate reply was installed in keystore + echo All done for elasticsearch + [[ ! 
-f /tmp/openshift-logging-ansible-RfkC5G/logging-es.jks ]] ++ join , logging-es logging-es.logging.svc.cluster.local logging-es-cluster logging-es-cluster.logging.svc.cluster.local logging-es-ops logging-es-ops.logging.svc.cluster.local logging-es-ops-cluster logging-es-ops-cluster.logging.svc.cluster.local ++ local IFS=, ++ shift ++ echo logging-es,logging-es.logging.svc.cluster.local,logging-es-cluster,logging-es-cluster.logging.svc.cluster.local,logging-es-ops,logging-es-ops.logging.svc.cluster.local,logging-es-ops-cluster,logging-es-ops-cluster.logging.svc.cluster.local + generate_JKS_chain false logging-es logging-es,logging-es.logging.svc.cluster.local,logging-es-cluster,logging-es-cluster.logging.svc.cluster.local,logging-es-ops,logging-es-ops.logging.svc.cluster.local,logging-es-ops-cluster,logging-es-ops-cluster.logging.svc.cluster.local + dir=/tmp/openshift-logging-ansible-RfkC5G + ADD_OID=false + NODE_NAME=logging-es + CERT_NAMES=logging-es,logging-es.logging.svc.cluster.local,logging-es-cluster,logging-es-cluster.logging.svc.cluster.local,logging-es-ops,logging-es-ops.logging.svc.cluster.local,logging-es-ops-cluster,logging-es-ops-cluster.logging.svc.cluster.local + ks_pass=kspass + ts_pass=tspass + rm -rf logging-es + extension_names= + for name in '${CERT_NAMES//,/ }' + extension_names=,dns:logging-es + for name in '${CERT_NAMES//,/ }' + extension_names=,dns:logging-es,dns:logging-es.logging.svc.cluster.local + for name in '${CERT_NAMES//,/ }' + extension_names=,dns:logging-es,dns:logging-es.logging.svc.cluster.local,dns:logging-es-cluster + for name in '${CERT_NAMES//,/ }' + extension_names=,dns:logging-es,dns:logging-es.logging.svc.cluster.local,dns:logging-es-cluster,dns:logging-es-cluster.logging.svc.cluster.local + for name in '${CERT_NAMES//,/ }' + extension_names=,dns:logging-es,dns:logging-es.logging.svc.cluster.local,dns:logging-es-cluster,dns:logging-es-cluster.logging.svc.cluster.local,dns:logging-es-ops + for name in '${CERT_NAMES//,/ }' + extension_names=,dns:logging-es,dns:logging-es.logging.svc.cluster.local,dns:logging-es-cluster,dns:logging-es-cluster.logging.svc.cluster.local,dns:logging-es-ops,dns:logging-es-ops.logging.svc.cluster.local + for name in '${CERT_NAMES//,/ }' + extension_names=,dns:logging-es,dns:logging-es.logging.svc.cluster.local,dns:logging-es-cluster,dns:logging-es-cluster.logging.svc.cluster.local,dns:logging-es-ops,dns:logging-es-ops.logging.svc.cluster.local,dns:logging-es-ops-cluster + for name in '${CERT_NAMES//,/ }' + extension_names=,dns:logging-es,dns:logging-es.logging.svc.cluster.local,dns:logging-es-cluster,dns:logging-es-cluster.logging.svc.cluster.local,dns:logging-es-ops,dns:logging-es-ops.logging.svc.cluster.local,dns:logging-es-ops-cluster,dns:logging-es-ops-cluster.logging.svc.cluster.local + '[' false = true ']' + echo Generating keystore and certificate for node logging-es + keytool -genkey -alias logging-es -keystore /tmp/openshift-logging-ansible-RfkC5G/logging-es.jks -keypass kspass -storepass kspass -keyalg RSA -keysize 2048 -validity 712 -dname 'CN=logging-es, OU=OpenShift, O=Logging' -ext san=dns:localhost,ip:127.0.0.1,dns:logging-es,dns:logging-es.logging.svc.cluster.local,dns:logging-es-cluster,dns:logging-es-cluster.logging.svc.cluster.local,dns:logging-es-ops,dns:logging-es-ops.logging.svc.cluster.local,dns:logging-es-ops-cluster,dns:logging-es-ops-cluster.logging.svc.cluster.local + echo Generating certificate signing request for node logging-es + keytool -certreq -alias logging-es -keystore 
/tmp/openshift-logging-ansible-RfkC5G/logging-es.jks -storepass kspass -file /tmp/openshift-logging-ansible-RfkC5G/logging-es.csr -keyalg rsa -dname 'CN=logging-es, OU=OpenShift, O=Logging' -ext san=dns:localhost,ip:127.0.0.1,dns:logging-es,dns:logging-es.logging.svc.cluster.local,dns:logging-es-cluster,dns:logging-es-cluster.logging.svc.cluster.local,dns:logging-es-ops,dns:logging-es-ops.logging.svc.cluster.local,dns:logging-es-ops-cluster,dns:logging-es-ops-cluster.logging.svc.cluster.local + echo Sign certificate request with CA + openssl ca -in /tmp/openshift-logging-ansible-RfkC5G/logging-es.csr -notext -out /tmp/openshift-logging-ansible-RfkC5G/logging-es.crt -config /tmp/openshift-logging-ansible-RfkC5G/signing.conf -extensions v3_req -batch -extensions server_ext Using configuration from /tmp/openshift-logging-ansible-RfkC5G/signing.conf Check that the request matches the signature Signature ok Certificate Details: Serial Number: 8 (0x8) Validity Not Before: Jun 2 20:57:06 2017 GMT Not After : Jun 2 20:57:06 2019 GMT Subject: organizationName = Logging organizationalUnitName = OpenShift commonName = logging-es X509v3 extensions: X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Basic Constraints: CA:FALSE X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Subject Key Identifier: 90:B4:4C:F1:80:CD:68:2B:38:75:50:F0:70:BA:0A:7F:3F:AA:0A:E8 X509v3 Authority Key Identifier: 0. X509v3 Subject Alternative Name: DNS:localhost, IP Address:127.0.0.1, DNS:logging-es, DNS:logging-es.logging.svc.cluster.local, DNS:logging-es-cluster, DNS:logging-es-cluster.logging.svc.cluster.local, DNS:logging-es-ops, DNS:logging-es-ops.logging.svc.cluster.local, DNS:logging-es-ops-cluster, DNS:logging-es-ops-cluster.logging.svc.cluster.local Certificate is to be certified until Jun 2 20:57:06 2019 GMT (730 days) Write out database with 1 new entries Data Base Updated + echo 'Import back to keystore (including CA chain)' + keytool -import -file /tmp/openshift-logging-ansible-RfkC5G/ca.crt -keystore /tmp/openshift-logging-ansible-RfkC5G/logging-es.jks -storepass kspass -noprompt -alias sig-ca Certificate was added to keystore + keytool -import -file /tmp/openshift-logging-ansible-RfkC5G/logging-es.crt -keystore /tmp/openshift-logging-ansible-RfkC5G/logging-es.jks -storepass kspass -noprompt -alias logging-es Certificate reply was installed in keystore + echo All done for logging-es + '[' '!' -f /tmp/openshift-logging-ansible-RfkC5G/truststore.jks ']' + createTruststore + echo 'Import CA to truststore for validating client certs' + keytool -import -file /tmp/openshift-logging-ansible-RfkC5G/ca.crt -keystore /tmp/openshift-logging-ansible-RfkC5G/truststore.jks -storepass tspass -noprompt -alias sig-ca Certificate was added to keystore + exit 0 TASK [openshift_logging : Pushing locally generated JKS certs to remote host...] *** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_jks.yaml:66 changed: [openshift] => { "changed": true, "checksum": "f972628977242e59fb782a46960dfd536f8e7a86", "dest": "/etc/origin/logging/elasticsearch.jks", "gid": 0, "group": "root", "md5sum": "50e69f2a3626f6130dee3328e1a0a567", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 3768, "src": "/root/.ansible/tmp/ansible-tmp-1496437026.71-38881897735201/source", "state": "file", "uid": 0 } TASK [openshift_logging : Pushing locally generated JKS certs to remote host...] 
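The generate_JKS_chain trace above reduces to the same five-step sequence for each node: build the SAN list from CERT_NAMES (the elasticsearch node additionally gets the oid:1.2.3.4.5.5 entry when ADD_OID=true), create the key pair in a JKS keystore, export a CSR, sign it with the throw-away openssl CA, and import both the CA certificate and the signed reply back into the keystore. A minimal bash sketch of that sequence, assuming a scratch directory that already holds the CA material (ca.crt plus a signing.conf providing the server_ext section) and reusing the illustrative kspass/tspass passwords seen in the trace:

    dir=/tmp/certs                               # assumption: scratch dir containing ca.crt and signing.conf
    node=elasticsearch
    cert_names="logging-es,logging-es-ops"
    sans="dns:localhost,ip:127.0.0.1"
    for name in ${cert_names//,/ }; do sans="$sans,dns:$name"; done
    keytool -genkey -alias "$node" -keystore "$dir/$node.jks" -keypass kspass -storepass kspass \
        -keyalg RSA -keysize 2048 -validity 712 -dname "CN=$node, OU=OpenShift, O=Logging" -ext "san=$sans"
    keytool -certreq -alias "$node" -keystore "$dir/$node.jks" -storepass kspass \
        -file "$dir/$node.csr" -keyalg rsa -dname "CN=$node, OU=OpenShift, O=Logging" -ext "san=$sans"
    openssl ca -in "$dir/$node.csr" -notext -out "$dir/$node.crt" \
        -config "$dir/signing.conf" -extensions v3_req -batch -extensions server_ext
    keytool -import -file "$dir/ca.crt"    -keystore "$dir/$node.jks" -storepass kspass -noprompt -alias sig-ca
    keytool -import -file "$dir/$node.crt" -keystore "$dir/$node.jks" -storepass kspass -noprompt -alias "$node"

The truststore created just before the push task is simply the CA certificate imported on its own (keytool -import -file ca.crt -keystore truststore.jks -storepass tspass -noprompt -alias sig-ca), which is why every component that trusts the signer can validate every node certificate.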
*** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_jks.yaml:72 changed: [openshift] => { "changed": true, "checksum": "8f355a56b7e42be12eb6a01f12c116443742c5ff", "dest": "/etc/origin/logging/logging-es.jks", "gid": 0, "group": "root", "md5sum": "88e5b2c7a8b3e9a5d9dd1558d60c2f83", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 3982, "src": "/root/.ansible/tmp/ansible-tmp-1496437026.94-198883712168909/source", "state": "file", "uid": 0 } TASK [openshift_logging : Pushing locally generated JKS certs to remote host...] *** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_jks.yaml:78 changed: [openshift] => { "changed": true, "checksum": "2ddf03c6f128348e92bc566eaccd812803d5b116", "dest": "/etc/origin/logging/system.admin.jks", "gid": 0, "group": "root", "md5sum": "a5f10e749bc94181aff99a3ca2b83d5e", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 3702, "src": "/root/.ansible/tmp/ansible-tmp-1496437027.16-128014430374331/source", "state": "file", "uid": 0 } TASK [openshift_logging : Pushing locally generated JKS certs to remote host...] *** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_jks.yaml:84 changed: [openshift] => { "changed": true, "checksum": "58c12be4c8086ce6f14c1ee1f9cae7ac4730fefc", "dest": "/etc/origin/logging/truststore.jks", "gid": 0, "group": "root", "md5sum": "6cc4b62bc33ec48a6a048f4f802d94a8", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 797, "src": "/root/.ansible/tmp/ansible-tmp-1496437027.39-280193500555719/source", "state": "file", "uid": 0 } TASK [openshift_logging : Generate proxy session] ****************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:141 ok: [openshift] => { "ansible_facts": { "session_secret": "6PRtgDTCnKxz6URkwVVW9tQzlTWT7IyLmkEI6YmbsLfQlpthtPykCmPSmFXsPSn0fAz3FsWoshV7qalG8rCHQt7q09JTNaVsWNOBsSrztwLddpQOIalJBcUanKu6DFBOIg5g8PC6OwvxjcUzJatfypLILXAdQG1HNtik9WpqDIzCojekyxxDHBekdN6J6nxFZMcZSCM0" }, "changed": false } TASK [openshift_logging : Generate oauth client secret] ************************ task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:146 ok: [openshift] => { "ansible_facts": { "oauth_secret": "a6Wzc3KHdVTDgYMlQsQXOkXZ0KqHFrM2ifwi6HFJn2GU68hfSnJbAY4eLRj5Zmii" }, "changed": false } TASK [openshift_logging : set_fact] ******************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:53 TASK [openshift_logging : set_fact] ******************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:57 ok: [openshift] => { "ansible_facts": { "es_indices": "[]" }, "changed": false } TASK [openshift_logging : set_fact] ******************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:60 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : include_role] **************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:64 TASK [openshift_logging : include_role] **************************************** task path: 
/tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:83 statically included: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/determine_version.yaml TASK [openshift_logging_elasticsearch : Validate Elasticsearch cluster size] *** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:2 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_elasticsearch : Validate Elasticsearch Ops cluster size] *** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:6 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_elasticsearch : fail] ********************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:10 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_elasticsearch : set_fact] ****************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:14 ok: [openshift] => { "ansible_facts": { "elasticsearch_name": "logging-elasticsearch", "es_component": "es" }, "changed": false } TASK [openshift_logging_elasticsearch : fail] ********************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/determine_version.yaml:3 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_elasticsearch : set_fact] ****************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/determine_version.yaml:7 ok: [openshift] => { "ansible_facts": { "es_version": "3_5" }, "changed": false } TASK [openshift_logging_elasticsearch : debug] ********************************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/determine_version.yaml:11 ok: [openshift] => { "changed": false, "openshift_logging_image_version": "latest" } TASK [openshift_logging_elasticsearch : set_fact] ****************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/determine_version.yaml:14 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_elasticsearch : fail] ********************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/determine_version.yaml:17 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_elasticsearch : Create temp directory for doing work in] *** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:21 ok: [openshift] => { "changed": false, "cmd": [ "mktemp", "-d", "/tmp/openshift-logging-ansible-XXXXXX" ], "delta": "0:00:01.002951", "end": "2017-06-02 16:57:09.228217", "rc": 0, "start": "2017-06-02 16:57:08.225266" } STDOUT: /tmp/openshift-logging-ansible-iKPiyU TASK [openshift_logging_elasticsearch : set_fact] ****************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:26 
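Each pass through the openshift_logging_elasticsearch role works out of a throw-away directory like the /tmp/openshift-logging-ansible-iKPiyU path just created: support files and templates are rendered into a templates/ subdirectory, fed to oc by the tasks that follow, and the whole tree is removed by the role's final "Delete temp directory" task. A rough shell equivalent of that workspace lifecycle, with the render/create steps left as placeholders for the role's copy, template, and oc tasks:

    workdir=$(mktemp -d /tmp/openshift-logging-ansible-XXXXXX)
    mkdir -p "$workdir/templates"
    # ... render role templates and copy support files into $workdir ...
    # ... create the OpenShift objects from those files with oc ...
    rm -rf "$workdir"    # mirrors the "Delete temp directory" task at the end of the role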
ok: [openshift] => { "ansible_facts": { "tempdir": "/tmp/openshift-logging-ansible-iKPiyU" }, "changed": false } TASK [openshift_logging_elasticsearch : Create templates subdirectory] ********* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:30 ok: [openshift] => { "changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/tmp/openshift-logging-ansible-iKPiyU/templates", "secontext": "unconfined_u:object_r:user_tmp_t:s0", "size": 6, "state": "directory", "uid": 0 } TASK [openshift_logging_elasticsearch : Create ES service account] ************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:40 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_elasticsearch : Create ES service account] ************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:48 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc get sa aggregated-logging-elasticsearch -o json -n logging", "results": [ { "apiVersion": "v1", "imagePullSecrets": [ { "name": "aggregated-logging-elasticsearch-dockercfg-w6bhq" } ], "kind": "ServiceAccount", "metadata": { "creationTimestamp": "2017-06-02T20:57:10Z", "name": "aggregated-logging-elasticsearch", "namespace": "logging", "resourceVersion": "1224", "selfLink": "/api/v1/namespaces/logging/serviceaccounts/aggregated-logging-elasticsearch", "uid": "0b917537-47d6-11e7-ab86-0e1196655f96" }, "secrets": [ { "name": "aggregated-logging-elasticsearch-dockercfg-w6bhq" }, { "name": "aggregated-logging-elasticsearch-token-b6l4q" } ] } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_elasticsearch : copy] ********************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:57 changed: [openshift] => { "changed": true, "checksum": "e5015364391ac609da8655a9a1224131599a5cea", "dest": "/tmp/openshift-logging-ansible-iKPiyU/rolebinding-reader.yml", "gid": 0, "group": "root", "md5sum": "446fb96447527f48f97e69bb41bad7be", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 135, "src": "/root/.ansible/tmp/ansible-tmp-1496437030.42-190523301523558/source", "state": "file", "uid": 0 } TASK [openshift_logging_elasticsearch : Create rolebinding-reader role] ******** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:61 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc get clusterrole rolebinding-reader -o json -n logging", "results": [ { "apiVersion": "v1", "kind": "ClusterRole", "metadata": { "creationTimestamp": "2017-06-02T20:57:11Z", "name": "rolebinding-reader", "resourceVersion": "191", "selfLink": "/oapi/v1/clusterroles/rolebinding-reader", "uid": "0c3a63cd-47d6-11e7-ab86-0e1196655f96" }, "rules": [ { "apiGroups": [ "" ], "attributeRestrictions": null, "resources": [ "clusterrolebindings" ], "verbs": [ "get" ] } ] } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_elasticsearch : Set rolebinding-reader permissions for ES] *** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:72 changed: [openshift] => { "changed": true, "present": "present", "results": { "cmd": "/bin/oc adm policy add-cluster-role-to-user rolebinding-reader 
system:serviceaccount:logging:aggregated-logging-elasticsearch -n logging", "results": "", "returncode": 0 } } TASK [openshift_logging_elasticsearch : Generate logging-elasticsearch-view-role] *** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:81 ok: [openshift] => { "changed": false, "checksum": "d752c09323565f80ed14fa806d42284f0c5aef2a", "dest": "/tmp/openshift-logging-ansible-iKPiyU/logging-elasticsearch-view-role.yaml", "gid": 0, "group": "root", "md5sum": "8299dca2fb036c06ba7c4f620680e0f6", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 183, "src": "/root/.ansible/tmp/ansible-tmp-1496437032.16-90434118343627/source", "state": "file", "uid": 0 } TASK [openshift_logging_elasticsearch : Set logging-elasticsearch-view-role role] *** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:94 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc get rolebinding logging-elasticsearch-view-role -o json -n logging", "results": [ { "apiVersion": "v1", "groupNames": null, "kind": "RoleBinding", "metadata": { "creationTimestamp": "2017-06-02T20:57:12Z", "name": "logging-elasticsearch-view-role", "namespace": "logging", "resourceVersion": "778", "selfLink": "/oapi/v1/namespaces/logging/rolebindings/logging-elasticsearch-view-role", "uid": "0d431713-47d6-11e7-ab86-0e1196655f96" }, "roleRef": { "name": "view" }, "subjects": [ { "kind": "ServiceAccount", "name": "aggregated-logging-elasticsearch", "namespace": "logging" } ], "userNames": [ "system:serviceaccount:logging:aggregated-logging-elasticsearch" ] } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_elasticsearch : template] ****************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:105 ok: [openshift] => { "changed": false, "checksum": "f91458d5dad42c496e2081ef872777a6f6eb9ff9", "dest": "/tmp/openshift-logging-ansible-iKPiyU/elasticsearch-logging.yml", "gid": 0, "group": "root", "md5sum": "e4be7c33c1927bbdd8c909bfbe3d9f0b", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 2171, "src": "/root/.ansible/tmp/ansible-tmp-1496437033.18-108367351587323/source", "state": "file", "uid": 0 } TASK [openshift_logging_elasticsearch : template] ****************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:111 ok: [openshift] => { "changed": false, "checksum": "28a55474455a795904c1523b1af8350195a44af2", "dest": "/tmp/openshift-logging-ansible-iKPiyU/elasticsearch.yml", "gid": 0, "group": "root", "md5sum": "d6c444dffc4a364a53304e55c8cc4645", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 2322, "src": "/root/.ansible/tmp/ansible-tmp-1496437033.42-88665473045595/source", "state": "file", "uid": 0 } TASK [openshift_logging_elasticsearch : copy] ********************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:121 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_elasticsearch : copy] ********************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:127 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional 
result was False", "skipped": true } TASK [openshift_logging_elasticsearch : Set ES configmap] ********************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:133 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc get configmap logging-elasticsearch -o json -n logging", "results": [ { "apiVersion": "v1", "data": { "elasticsearch.yml": "cluster:\n name: ${CLUSTER_NAME}\n\nscript:\n inline: on\n indexed: on\n\nindex:\n number_of_shards: 1\n number_of_replicas: 0\n unassigned.node_left.delayed_timeout: 2m\n translog:\n flush_threshold_size: 256mb\n flush_threshold_period: 5m\n\nnode:\n master: ${IS_MASTER}\n data: ${HAS_DATA}\n\nnetwork:\n host: 0.0.0.0\n\ncloud:\n kubernetes:\n service: ${SERVICE_DNS}\n namespace: ${NAMESPACE}\n\ndiscovery:\n type: kubernetes\n zen.ping.multicast.enabled: false\n zen.minimum_master_nodes: ${NODE_QUORUM}\n\ngateway:\n recover_after_nodes: ${NODE_QUORUM}\n expected_nodes: ${RECOVER_EXPECTED_NODES}\n recover_after_time: ${RECOVER_AFTER_TIME}\n\nio.fabric8.elasticsearch.authentication.users: [\"system.logging.kibana\", \"system.logging.fluentd\", \"system.logging.curator\", \"system.admin\"]\nio.fabric8.elasticsearch.kibana.mapping.app: /usr/share/elasticsearch/index_patterns/com.redhat.viaq-openshift.index-pattern.json\nio.fabric8.elasticsearch.kibana.mapping.ops: /usr/share/elasticsearch/index_patterns/com.redhat.viaq-openshift.index-pattern.json\n\nopenshift.config:\n use_common_data_model: true\n project_index_prefix: \"project\"\n time_field_name: \"@timestamp\"\n\nopenshift.searchguard:\n keystore.path: /etc/elasticsearch/secret/admin.jks\n truststore.path: /etc/elasticsearch/secret/searchguard.truststore\n\nopenshift.operations.allow_cluster_reader: false\n\npath:\n data: /elasticsearch/persistent/${CLUSTER_NAME}/data\n logs: /elasticsearch/${CLUSTER_NAME}/logs\n work: /elasticsearch/${CLUSTER_NAME}/work\n scripts: /elasticsearch/${CLUSTER_NAME}/scripts\n\nsearchguard:\n authcz.admin_dn:\n - CN=system.admin,OU=OpenShift,O=Logging\n config_index_name: \".searchguard.${HOSTNAME}\"\n ssl:\n transport:\n enabled: true\n enforce_hostname_verification: false\n keystore_type: JKS\n keystore_filepath: /etc/elasticsearch/secret/searchguard.key\n keystore_password: kspass\n truststore_type: JKS\n truststore_filepath: /etc/elasticsearch/secret/searchguard.truststore\n truststore_password: tspass\n http:\n enabled: true\n keystore_type: JKS\n keystore_filepath: /etc/elasticsearch/secret/key\n keystore_password: kspass\n clientauth_mode: OPTIONAL\n truststore_type: JKS\n truststore_filepath: /etc/elasticsearch/secret/truststore\n truststore_password: tspass\n", "logging.yml": "# you can override this using by setting a system property, for example -Des.logger.level=DEBUG\nes.logger.level: INFO\nrootLogger: ${es.logger.level}, console, file\nlogger:\n # log action execution errors for easier debugging\n action: WARN\n # reduce the logging for aws, too much is logged under the default INFO\n com.amazonaws: WARN\n io.fabric8.elasticsearch: ${PLUGIN_LOGLEVEL}\n io.fabric8.kubernetes: ${PLUGIN_LOGLEVEL}\n\n # gateway\n #gateway: DEBUG\n #index.gateway: DEBUG\n\n # peer shard recovery\n #indices.recovery: DEBUG\n\n # discovery\n #discovery: TRACE\n\n index.search.slowlog: TRACE, index_search_slow_log_file\n index.indexing.slowlog: TRACE, index_indexing_slow_log_file\n\n # search-guard\n com.floragunn.searchguard: WARN\n\nadditivity:\n index.search.slowlog: false\n index.indexing.slowlog: 
false\n\nappender:\n console:\n type: console\n layout:\n type: consolePattern\n conversionPattern: \"[%d{ISO8601}][%-5p][%-25c] %m%n\"\n\n file:\n type: dailyRollingFile\n file: ${path.logs}/${cluster.name}.log\n datePattern: \"'.'yyyy-MM-dd\"\n layout:\n type: pattern\n conversionPattern: \"[%d{ISO8601}][%-5p][%-25c] %m%n\"\n\n # Use the following log4j-extras RollingFileAppender to enable gzip compression of log files.\n # For more information see https://logging.apache.org/log4j/extras/apidocs/org/apache/log4j/rolling/RollingFileAppender.html\n #file:\n #type: extrasRollingFile\n #file: ${path.logs}/${cluster.name}.log\n #rollingPolicy: timeBased\n #rollingPolicy.FileNamePattern: ${path.logs}/${cluster.name}.log.%d{yyyy-MM-dd}.gz\n #layout:\n #type: pattern\n #conversionPattern: \"[%d{ISO8601}][%-5p][%-25c] %m%n\"\n\n index_search_slow_log_file:\n type: dailyRollingFile\n file: ${path.logs}/${cluster.name}_index_search_slowlog.log\n datePattern: \"'.'yyyy-MM-dd\"\n layout:\n type: pattern\n conversionPattern: \"[%d{ISO8601}][%-5p][%-25c] %m%n\"\n\n index_indexing_slow_log_file:\n type: dailyRollingFile\n file: ${path.logs}/${cluster.name}_index_indexing_slowlog.log\n datePattern: \"'.'yyyy-MM-dd\"\n layout:\n type: pattern\n conversionPattern: \"[%d{ISO8601}][%-5p][%-25c] %m%n\"\n" }, "kind": "ConfigMap", "metadata": { "creationTimestamp": "2017-06-02T20:57:14Z", "name": "logging-elasticsearch", "namespace": "logging", "resourceVersion": "1231", "selfLink": "/api/v1/namespaces/logging/configmaps/logging-elasticsearch", "uid": "0e18bf45-47d6-11e7-ab86-0e1196655f96" } } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_elasticsearch : Set ES secret] ************************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:144 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc secrets new logging-elasticsearch key=/etc/origin/logging/logging-es.jks truststore=/etc/origin/logging/truststore.jks searchguard.key=/etc/origin/logging/elasticsearch.jks searchguard.truststore=/etc/origin/logging/truststore.jks admin-key=/etc/origin/logging/system.admin.key admin-cert=/etc/origin/logging/system.admin.crt admin-ca=/etc/origin/logging/ca.crt admin.jks=/etc/origin/logging/system.admin.jks -n logging", "results": "", "returncode": 0 }, "state": "present" } TASK [openshift_logging_elasticsearch : Set logging-es-cluster service] ******** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:168 changed: [openshift] => { "changed": true, "results": { "clusterip": "172.30.112.81", "cmd": "/bin/oc get service logging-es-cluster -o json -n logging", "results": [ { "apiVersion": "v1", "kind": "Service", "metadata": { "creationTimestamp": "2017-06-02T20:57:16Z", "name": "logging-es-cluster", "namespace": "logging", "resourceVersion": "1234", "selfLink": "/api/v1/namespaces/logging/services/logging-es-cluster", "uid": "0f23d5a8-47d6-11e7-ab86-0e1196655f96" }, "spec": { "clusterIP": "172.30.112.81", "ports": [ { "port": 9300, "protocol": "TCP", "targetPort": 9300 } ], "selector": { "component": "es", "provider": "openshift" }, "sessionAffinity": "None", "type": "ClusterIP" }, "status": { "loadBalancer": {} } } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_elasticsearch : Set logging-es service] **************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:182 changed: [openshift] => { "changed": 
true, "results": { "clusterip": "172.30.152.168", "cmd": "/bin/oc get service logging-es -o json -n logging", "results": [ { "apiVersion": "v1", "kind": "Service", "metadata": { "creationTimestamp": "2017-06-02T20:57:17Z", "name": "logging-es", "namespace": "logging", "resourceVersion": "1238", "selfLink": "/api/v1/namespaces/logging/services/logging-es", "uid": "0fc44158-47d6-11e7-ab86-0e1196655f96" }, "spec": { "clusterIP": "172.30.152.168", "ports": [ { "port": 9200, "protocol": "TCP", "targetPort": "restapi" } ], "selector": { "component": "es", "provider": "openshift" }, "sessionAffinity": "None", "type": "ClusterIP" }, "status": { "loadBalancer": {} } } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_elasticsearch : Creating ES storage template] ********** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:197 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_elasticsearch : Creating ES storage template] ********** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:210 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_elasticsearch : Set ES storage] ************************ task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:225 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_elasticsearch : set_fact] ****************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:237 ok: [openshift] => { "ansible_facts": { "es_deploy_name": "logging-es-data-master-i5jtydma" }, "changed": false } TASK [openshift_logging_elasticsearch : set_fact] ****************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:241 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_elasticsearch : Set ES dc templates] ******************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:246 changed: [openshift] => { "changed": true, "checksum": "e32d537dfc5919742dcb8afbf9d48daf95d5ba68", "dest": "/tmp/openshift-logging-ansible-iKPiyU/templates/logging-es-dc.yml", "gid": 0, "group": "root", "md5sum": "4aa6cddfdf17b4a09e2267e4ee41a116", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 3050, "src": "/root/.ansible/tmp/ansible-tmp-1496437037.59-36919262999866/source", "state": "file", "uid": 0 } TASK [openshift_logging_elasticsearch : Set ES dc] ***************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:262 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc get dc logging-es-data-master-i5jtydma -o json -n logging", "results": [ { "apiVersion": "v1", "kind": "DeploymentConfig", "metadata": { "creationTimestamp": "2017-06-02T20:57:18Z", "generation": 2, "labels": { "component": "es", "deployment": "logging-es-data-master-i5jtydma", "logging-infra": "elasticsearch", "provider": "openshift" }, "name": "logging-es-data-master-i5jtydma", "namespace": "logging", "resourceVersion": "1253", "selfLink": 
"/oapi/v1/namespaces/logging/deploymentconfigs/logging-es-data-master-i5jtydma", "uid": "107fa50b-47d6-11e7-ab86-0e1196655f96" }, "spec": { "replicas": 1, "selector": { "component": "es", "deployment": "logging-es-data-master-i5jtydma", "logging-infra": "elasticsearch", "provider": "openshift" }, "strategy": { "activeDeadlineSeconds": 21600, "recreateParams": { "timeoutSeconds": 600 }, "resources": {}, "type": "Recreate" }, "template": { "metadata": { "creationTimestamp": null, "labels": { "component": "es", "deployment": "logging-es-data-master-i5jtydma", "logging-infra": "elasticsearch", "provider": "openshift" }, "name": "logging-es-data-master-i5jtydma" }, "spec": { "containers": [ { "env": [ { "name": "NAMESPACE", "valueFrom": { "fieldRef": { "apiVersion": "v1", "fieldPath": "metadata.namespace" } } }, { "name": "KUBERNETES_TRUST_CERT", "value": "true" }, { "name": "SERVICE_DNS", "value": "logging-es-cluster" }, { "name": "CLUSTER_NAME", "value": "logging-es" }, { "name": "INSTANCE_RAM", "value": "8Gi" }, { "name": "NODE_QUORUM", "value": "1" }, { "name": "RECOVER_EXPECTED_NODES", "value": "1" }, { "name": "RECOVER_AFTER_TIME", "value": "5m" }, { "name": "IS_MASTER", "value": "true" }, { "name": "HAS_DATA", "value": "true" } ], "image": "172.30.101.10:5000/logging/logging-elasticsearch:latest", "imagePullPolicy": "Always", "name": "elasticsearch", "ports": [ { "containerPort": 9200, "name": "restapi", "protocol": "TCP" }, { "containerPort": 9300, "name": "cluster", "protocol": "TCP" } ], "readinessProbe": { "exec": { "command": [ "/usr/share/elasticsearch/probe/readiness.sh" ] }, "failureThreshold": 3, "initialDelaySeconds": 5, "periodSeconds": 5, "successThreshold": 1, "timeoutSeconds": 4 }, "resources": { "limits": { "cpu": "1", "memory": "8Gi" }, "requests": { "memory": "512Mi" } }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/etc/elasticsearch/secret", "name": "elasticsearch", "readOnly": true }, { "mountPath": "/usr/share/java/elasticsearch/config", "name": "elasticsearch-config", "readOnly": true }, { "mountPath": "/elasticsearch/persistent", "name": "elasticsearch-storage" } ] } ], "dnsPolicy": "ClusterFirst", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": { "supplementalGroups": [ 65534 ] }, "serviceAccount": "aggregated-logging-elasticsearch", "serviceAccountName": "aggregated-logging-elasticsearch", "terminationGracePeriodSeconds": 30, "volumes": [ { "name": "elasticsearch", "secret": { "defaultMode": 420, "secretName": "logging-elasticsearch" } }, { "configMap": { "defaultMode": 420, "name": "logging-elasticsearch" }, "name": "elasticsearch-config" }, { "emptyDir": {}, "name": "elasticsearch-storage" } ] } }, "test": false, "triggers": [ { "type": "ConfigChange" } ] }, "status": { "availableReplicas": 0, "conditions": [ { "lastTransitionTime": "2017-06-02T20:57:18Z", "lastUpdateTime": "2017-06-02T20:57:18Z", "message": "Deployment config does not have minimum availability.", "status": "False", "type": "Available" }, { "lastTransitionTime": "2017-06-02T20:57:18Z", "lastUpdateTime": "2017-06-02T20:57:18Z", "message": "replication controller \"logging-es-data-master-i5jtydma-1\" is waiting for pod \"logging-es-data-master-i5jtydma-1-deploy\" to run", "status": "Unknown", "type": "Progressing" } ], "details": { "causes": [ { "type": "ConfigChange" } ], "message": "config change" }, "latestVersion": 1, "observedGeneration": 2, "replicas": 0, "unavailableReplicas": 0, 
"updatedReplicas": 0 } } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_elasticsearch : Delete temp directory] ***************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:274 ok: [openshift] => { "changed": false, "path": "/tmp/openshift-logging-ansible-iKPiyU", "state": "absent" } TASK [openshift_logging : set_fact] ******************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:97 TASK [openshift_logging : set_fact] ******************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:103 ok: [openshift] => { "ansible_facts": { "es_ops_indices": "[]" }, "changed": false } TASK [openshift_logging : include_role] **************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:107 TASK [openshift_logging : include_role] **************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:129 statically included: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/determine_version.yaml TASK [openshift_logging_elasticsearch : Validate Elasticsearch cluster size] *** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:2 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_elasticsearch : Validate Elasticsearch Ops cluster size] *** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:6 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_elasticsearch : fail] ********************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:10 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_elasticsearch : set_fact] ****************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:14 ok: [openshift] => { "ansible_facts": { "elasticsearch_name": "logging-elasticsearch-ops", "es_component": "es-ops" }, "changed": false } TASK [openshift_logging_elasticsearch : fail] ********************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/determine_version.yaml:3 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_elasticsearch : set_fact] ****************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/determine_version.yaml:7 ok: [openshift] => { "ansible_facts": { "es_version": "3_5" }, "changed": false } TASK [openshift_logging_elasticsearch : debug] ********************************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/determine_version.yaml:11 ok: [openshift] => { "changed": false, "openshift_logging_image_version": "latest" } TASK [openshift_logging_elasticsearch : set_fact] ****************************** task path: 
/tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/determine_version.yaml:14 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_elasticsearch : fail] ********************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/determine_version.yaml:17 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_elasticsearch : Create temp directory for doing work in] *** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:21 ok: [openshift] => { "changed": false, "cmd": [ "mktemp", "-d", "/tmp/openshift-logging-ansible-XXXXXX" ], "delta": "0:00:00.002117", "end": "2017-06-02 16:57:19.694446", "rc": 0, "start": "2017-06-02 16:57:19.692329" } STDOUT: /tmp/openshift-logging-ansible-aKv7rU TASK [openshift_logging_elasticsearch : set_fact] ****************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:26 ok: [openshift] => { "ansible_facts": { "tempdir": "/tmp/openshift-logging-ansible-aKv7rU" }, "changed": false } TASK [openshift_logging_elasticsearch : Create templates subdirectory] ********* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:30 ok: [openshift] => { "changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/tmp/openshift-logging-ansible-aKv7rU/templates", "secontext": "unconfined_u:object_r:user_tmp_t:s0", "size": 6, "state": "directory", "uid": 0 } TASK [openshift_logging_elasticsearch : Create ES service account] ************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:40 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_elasticsearch : Create ES service account] ************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:48 ok: [openshift] => { "changed": false, "results": { "cmd": "/bin/oc get sa aggregated-logging-elasticsearch -o json -n logging", "results": [ { "apiVersion": "v1", "imagePullSecrets": [ { "name": "aggregated-logging-elasticsearch-dockercfg-w6bhq" } ], "kind": "ServiceAccount", "metadata": { "creationTimestamp": "2017-06-02T20:57:10Z", "name": "aggregated-logging-elasticsearch", "namespace": "logging", "resourceVersion": "1224", "selfLink": "/api/v1/namespaces/logging/serviceaccounts/aggregated-logging-elasticsearch", "uid": "0b917537-47d6-11e7-ab86-0e1196655f96" }, "secrets": [ { "name": "aggregated-logging-elasticsearch-dockercfg-w6bhq" }, { "name": "aggregated-logging-elasticsearch-token-b6l4q" } ] } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_elasticsearch : copy] ********************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:57 changed: [openshift] => { "changed": true, "checksum": "e5015364391ac609da8655a9a1224131599a5cea", "dest": "/tmp/openshift-logging-ansible-aKv7rU/rolebinding-reader.yml", "gid": 0, "group": "root", "md5sum": "446fb96447527f48f97e69bb41bad7be", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 135, "src": 
"/root/.ansible/tmp/ansible-tmp-1496437040.47-51732254502897/source", "state": "file", "uid": 0 } TASK [openshift_logging_elasticsearch : Create rolebinding-reader role] ******** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:61 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc get clusterrole rolebinding-reader -o json -n logging", "results": [ { "apiVersion": "v1", "kind": "ClusterRole", "metadata": { "creationTimestamp": "2017-06-02T20:57:11Z", "name": "rolebinding-reader", "resourceVersion": "191", "selfLink": "/oapi/v1/clusterroles/rolebinding-reader", "uid": "0c3a63cd-47d6-11e7-ab86-0e1196655f96" }, "rules": [ { "apiGroups": [ "" ], "attributeRestrictions": null, "resources": [ "clusterrolebindings" ], "verbs": [ "get" ] } ] } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_elasticsearch : Set rolebinding-reader permissions for ES] *** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:72 ok: [openshift] => { "changed": false, "present": "present" } TASK [openshift_logging_elasticsearch : Generate logging-elasticsearch-view-role] *** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:81 ok: [openshift] => { "changed": false, "checksum": "d752c09323565f80ed14fa806d42284f0c5aef2a", "dest": "/tmp/openshift-logging-ansible-aKv7rU/logging-elasticsearch-view-role.yaml", "gid": 0, "group": "root", "md5sum": "8299dca2fb036c06ba7c4f620680e0f6", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 183, "src": "/root/.ansible/tmp/ansible-tmp-1496437042.15-140206754787967/source", "state": "file", "uid": 0 } TASK [openshift_logging_elasticsearch : Set logging-elasticsearch-view-role role] *** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:94 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc get rolebinding logging-elasticsearch-view-role -o json -n logging", "results": [ { "apiVersion": "v1", "groupNames": null, "kind": "RoleBinding", "metadata": { "creationTimestamp": "2017-06-02T20:57:12Z", "name": "logging-elasticsearch-view-role", "namespace": "logging", "resourceVersion": "1229", "selfLink": "/oapi/v1/namespaces/logging/rolebindings/logging-elasticsearch-view-role", "uid": "0d431713-47d6-11e7-ab86-0e1196655f96" }, "roleRef": { "name": "view" }, "subjects": [ { "kind": "ServiceAccount", "name": "aggregated-logging-elasticsearch", "namespace": "logging" } ], "userNames": [ "system:serviceaccount:logging:aggregated-logging-elasticsearch" ] } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_elasticsearch : template] ****************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:105 ok: [openshift] => { "changed": false, "checksum": "f91458d5dad42c496e2081ef872777a6f6eb9ff9", "dest": "/tmp/openshift-logging-ansible-aKv7rU/elasticsearch-logging.yml", "gid": 0, "group": "root", "md5sum": "e4be7c33c1927bbdd8c909bfbe3d9f0b", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 2171, "src": "/root/.ansible/tmp/ansible-tmp-1496437043.56-97016031754259/source", "state": "file", "uid": 0 } TASK [openshift_logging_elasticsearch : template] ****************************** task path: 
/tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:111 ok: [openshift] => { "changed": false, "checksum": "28a55474455a795904c1523b1af8350195a44af2", "dest": "/tmp/openshift-logging-ansible-aKv7rU/elasticsearch.yml", "gid": 0, "group": "root", "md5sum": "d6c444dffc4a364a53304e55c8cc4645", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 2322, "src": "/root/.ansible/tmp/ansible-tmp-1496437043.84-271947345204001/source", "state": "file", "uid": 0 } TASK [openshift_logging_elasticsearch : copy] ********************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:121 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_elasticsearch : copy] ********************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:127 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_elasticsearch : Set ES configmap] ********************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:133 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc get configmap logging-elasticsearch-ops -o json -n logging", "results": [ { "apiVersion": "v1", "data": { "elasticsearch.yml": "cluster:\n name: ${CLUSTER_NAME}\n\nscript:\n inline: on\n indexed: on\n\nindex:\n number_of_shards: 1\n number_of_replicas: 0\n unassigned.node_left.delayed_timeout: 2m\n translog:\n flush_threshold_size: 256mb\n flush_threshold_period: 5m\n\nnode:\n master: ${IS_MASTER}\n data: ${HAS_DATA}\n\nnetwork:\n host: 0.0.0.0\n\ncloud:\n kubernetes:\n service: ${SERVICE_DNS}\n namespace: ${NAMESPACE}\n\ndiscovery:\n type: kubernetes\n zen.ping.multicast.enabled: false\n zen.minimum_master_nodes: ${NODE_QUORUM}\n\ngateway:\n recover_after_nodes: ${NODE_QUORUM}\n expected_nodes: ${RECOVER_EXPECTED_NODES}\n recover_after_time: ${RECOVER_AFTER_TIME}\n\nio.fabric8.elasticsearch.authentication.users: [\"system.logging.kibana\", \"system.logging.fluentd\", \"system.logging.curator\", \"system.admin\"]\nio.fabric8.elasticsearch.kibana.mapping.app: /usr/share/elasticsearch/index_patterns/com.redhat.viaq-openshift.index-pattern.json\nio.fabric8.elasticsearch.kibana.mapping.ops: /usr/share/elasticsearch/index_patterns/com.redhat.viaq-openshift.index-pattern.json\n\nopenshift.config:\n use_common_data_model: true\n project_index_prefix: \"project\"\n time_field_name: \"@timestamp\"\n\nopenshift.searchguard:\n keystore.path: /etc/elasticsearch/secret/admin.jks\n truststore.path: /etc/elasticsearch/secret/searchguard.truststore\n\nopenshift.operations.allow_cluster_reader: false\n\npath:\n data: /elasticsearch/persistent/${CLUSTER_NAME}/data\n logs: /elasticsearch/${CLUSTER_NAME}/logs\n work: /elasticsearch/${CLUSTER_NAME}/work\n scripts: /elasticsearch/${CLUSTER_NAME}/scripts\n\nsearchguard:\n authcz.admin_dn:\n - CN=system.admin,OU=OpenShift,O=Logging\n config_index_name: \".searchguard.${HOSTNAME}\"\n ssl:\n transport:\n enabled: true\n enforce_hostname_verification: false\n keystore_type: JKS\n keystore_filepath: /etc/elasticsearch/secret/searchguard.key\n keystore_password: kspass\n truststore_type: JKS\n truststore_filepath: /etc/elasticsearch/secret/searchguard.truststore\n truststore_password: tspass\n http:\n 
enabled: true\n keystore_type: JKS\n keystore_filepath: /etc/elasticsearch/secret/key\n keystore_password: kspass\n clientauth_mode: OPTIONAL\n truststore_type: JKS\n truststore_filepath: /etc/elasticsearch/secret/truststore\n truststore_password: tspass\n", "logging.yml": "# you can override this using by setting a system property, for example -Des.logger.level=DEBUG\nes.logger.level: INFO\nrootLogger: ${es.logger.level}, console, file\nlogger:\n # log action execution errors for easier debugging\n action: WARN\n # reduce the logging for aws, too much is logged under the default INFO\n com.amazonaws: WARN\n io.fabric8.elasticsearch: ${PLUGIN_LOGLEVEL}\n io.fabric8.kubernetes: ${PLUGIN_LOGLEVEL}\n\n # gateway\n #gateway: DEBUG\n #index.gateway: DEBUG\n\n # peer shard recovery\n #indices.recovery: DEBUG\n\n # discovery\n #discovery: TRACE\n\n index.search.slowlog: TRACE, index_search_slow_log_file\n index.indexing.slowlog: TRACE, index_indexing_slow_log_file\n\n # search-guard\n com.floragunn.searchguard: WARN\n\nadditivity:\n index.search.slowlog: false\n index.indexing.slowlog: false\n\nappender:\n console:\n type: console\n layout:\n type: consolePattern\n conversionPattern: \"[%d{ISO8601}][%-5p][%-25c] %m%n\"\n\n file:\n type: dailyRollingFile\n file: ${path.logs}/${cluster.name}.log\n datePattern: \"'.'yyyy-MM-dd\"\n layout:\n type: pattern\n conversionPattern: \"[%d{ISO8601}][%-5p][%-25c] %m%n\"\n\n # Use the following log4j-extras RollingFileAppender to enable gzip compression of log files.\n # For more information see https://logging.apache.org/log4j/extras/apidocs/org/apache/log4j/rolling/RollingFileAppender.html\n #file:\n #type: extrasRollingFile\n #file: ${path.logs}/${cluster.name}.log\n #rollingPolicy: timeBased\n #rollingPolicy.FileNamePattern: ${path.logs}/${cluster.name}.log.%d{yyyy-MM-dd}.gz\n #layout:\n #type: pattern\n #conversionPattern: \"[%d{ISO8601}][%-5p][%-25c] %m%n\"\n\n index_search_slow_log_file:\n type: dailyRollingFile\n file: ${path.logs}/${cluster.name}_index_search_slowlog.log\n datePattern: \"'.'yyyy-MM-dd\"\n layout:\n type: pattern\n conversionPattern: \"[%d{ISO8601}][%-5p][%-25c] %m%n\"\n\n index_indexing_slow_log_file:\n type: dailyRollingFile\n file: ${path.logs}/${cluster.name}_index_indexing_slowlog.log\n datePattern: \"'.'yyyy-MM-dd\"\n layout:\n type: pattern\n conversionPattern: \"[%d{ISO8601}][%-5p][%-25c] %m%n\"\n" }, "kind": "ConfigMap", "metadata": { "creationTimestamp": "2017-06-02T20:57:24Z", "name": "logging-elasticsearch-ops", "namespace": "logging", "resourceVersion": "1279", "selfLink": "/api/v1/namespaces/logging/configmaps/logging-elasticsearch-ops", "uid": "146d3492-47d6-11e7-ab86-0e1196655f96" } } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_elasticsearch : Set ES secret] ************************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:144 ok: [openshift] => { "changed": false, "results": { "apiVersion": "v1", "data": { "admin-ca": 
"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMyakNDQWNLZ0F3SUJBZ0lCQVRBTkJna3Foa2lHOXcwQkFRc0ZBREFlTVJ3d0dnWURWUVFERXhOc2IyZG4KYVc1bkxYTnBaMjVsY2kxMFpYTjBNQjRYRFRFM01EWXdNakl3TlRZME5Wb1hEVEl5TURZd01USXdOVFkwTmxvdwpIakVjTUJvR0ExVUVBeE1UYkc5bloybHVaeTF6YVdkdVpYSXRkR1Z6ZERDQ0FTSXdEUVlKS29aSWh2Y05BUUVCCkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU5DN0dHV0dTRVlKdDZRTXVrQVJnUDhMVkJJMnp3RktDN0M4aFUyOThLb1MKMWMybHdqTldXZXdWK016QnVTbVB0RDJ3aXplcDJucHdzbDBoMjNtTGVOVTJINFJLMU8ySnpVbjJWcUtnZmxuTwovVVhidDZRQVU3ZGFmSFo5UnNPUXdEU1ZBMkEwV0ExbTYrRWtjTUc4UWFNbGhiZ01TV3N2R0JWQW41YlFaeEtzCis2QVF4KzJBbkJQOTJJc0V6OXdrRWU0NTFDbHhWU3ArRG1IbEQ5OXM2OTFxZ1RveGRsaVZCcWhFTzNzV2tuMzMKQVUySEl5Uy9iT3pHbHgwTU5NU0NQcUZIOG1nTWgvUEJUZHF4WFZsTG1rVHFpUVFQN2pmeWtxTHVDbTBMeVhTTQpVd2crUGFHVTFkSTZMNFlsZ2VqeVFnS2taeno5SFpuT1pSSUh0L2xVYnBVQ0F3RUFBYU1qTUNFd0RnWURWUjBQCkFRSC9CQVFEQWdLa01BOEdBMVVkRXdFQi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFDcysKRUpCclI1RzNGWDRPL2FvOTlzMlk2YmNSZzZKcGZpbWdSZGsybXM3UHFlajFUNDUra1Q0anFOaG5rbjhnUkhzeApMdTlGVlBmdkNzWHp4ZHI0UCs3a2VUWDdqbkFXTzY1VmR6UGxxOW9iblpsTGZCZENBdG9OdEJ1NjZ2cmo1S1VKClRVVDk1bzBvUlBFUTQ4YkZyd3g5djVHZHJjUkFMM1ZrbjVKVzZ2YnZRakRaZFZadEZtckVWRTF2SXN6ZjAzNFAKSm0vNkg1UzExZDBJVlR5YWdXSXdjeGYxWHREQkVtSnczVWN0cHBySTZTYm84YjZqR1JoWnpGMFN2NUFvc1F3egpNWmZXNHlLeURSOWxUemN6dTU3SENxVlF4dzNVVnhyb1A0Nm9MSldFdUlBYW1HOC80d2tDU05TZ3RXVEp0eDYzCnRzY214ZmtMWHA5QTZFRlZNb0E9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K", "admin-cert": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURQRENDQWlTZ0F3SUJBZ0lCQlRBTkJna3Foa2lHOXcwQkFRVUZBREFlTVJ3d0dnWURWUVFERXhOc2IyZG4KYVc1bkxYTnBaMjVsY2kxMFpYTjBNQjRYRFRFM01EWXdNakl3TlRZMU1sb1hEVEU1TURZd01qSXdOVFkxTWxvdwpQVEVRTUE0R0ExVUVDZ3dIVEc5bloybHVaekVTTUJBR0ExVUVDd3dKVDNCbGJsTm9hV1owTVJVd0V3WURWUVFECkRBeHplWE4wWlcwdVlXUnRhVzR3Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLQW9JQkFRREkKem13U0pGUCtCbUZwQnlyNC85VzMvWWhkTHNJTzc0ZjNTWGU3Vi9zVGV6QTh0ZGF1QjhRaWtkU2x4TzIraUt2WQpFUHNoQmNoWC9LN01SalBHSmlFeTNjV0JYNzdkR0RWUVd0RnliUzFNTjFUR1pkTENvb0RFY2twV3pVUzArT0ZOCmF3Z3RrMStiaTRKUlJsd0h4MEIxTHA3MnFDZlo4OWhLMFFXWnJOWnBSS1dyeFlOeTEra0o0TWJObzM3TWxUekQKSDFmQUtPODJvVGF2NXAyVnpHdG1OVm9CZU1Yd0pob1ViVzZqRWY4UncvQUZLRlNpSWdlVnUrQ3N5TSs3ZmdWVQpSOXdhamFYV3pSVHhVOHREU3M3L0ZXeDQvRkhUTnFvMDIwUVpPa1dCQVVDVTErdmpEam9GL2pWNE96eFZUUGxWCi9yVlUrK3c1cWNBU29BTkVmby85QWdNQkFBR2paakJrTUE0R0ExVWREd0VCL3dRRUF3SUZvREFKQmdOVkhSTUUKQWpBQU1CMEdBMVVkSlFRV01CUUdDQ3NHQVFVRkJ3TUJCZ2dyQmdFRkJRY0RBakFkQmdOVkhRNEVGZ1FVR2tSNwpZSTJveTQ2MWtGMnNBVE1xeGRQRmNQMHdDUVlEVlIwakJBSXdBREFOQmdrcWhraUc5dzBCQVFVRkFBT0NBUUVBClRnK2Y1K3MrMWZTYUc5dFFSd0RCSE1HaVB2dlpGblorNk9Zd2wyWjlkQkcwcVh0YmQyZHJlNmVpNXVMdkRHcU0KcUE1V1NhQTNiM2pYa3JpMDFzOFFVdGVKVGVUSVJGTmFSV21Jd1pzZmJZSUdORCtPZmlqMEFPSk9VaWlJeGgregpSRm8zblRnZnpZWVE1NlExZXZ1RFpBRWZyVStWcWZDSUo0N1ZlNFdvUU9aRGpuVDhKUmR5VllLMk04L0FJaVFEClNEUlBKMGZPbFFLbXZjUmZVSFp5Mml4WEJRUEs4dmVzSUpIWCtiWFVBcHJDYWdsVmF4Z1VuYzFjMnY5K0hKTGoKUWlpV2ZmQmRISGhhMzVYOFRQcTNNcDZTbU50NXo4SlpCVDNEcnBvQXc3OWl3MmZGSzNiTURvMTBNcU9rVy9lYQpsWXpmYnBsY0dzUmtnRkdESzVvMS9BPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=", "admin-key": 
"LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2QUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktZd2dnU2lBZ0VBQW9JQkFRREl6bXdTSkZQK0JtRnAKQnlyNC85VzMvWWhkTHNJTzc0ZjNTWGU3Vi9zVGV6QTh0ZGF1QjhRaWtkU2x4TzIraUt2WUVQc2hCY2hYL0s3TQpSalBHSmlFeTNjV0JYNzdkR0RWUVd0RnliUzFNTjFUR1pkTENvb0RFY2twV3pVUzArT0ZOYXdndGsxK2JpNEpSClJsd0h4MEIxTHA3MnFDZlo4OWhLMFFXWnJOWnBSS1dyeFlOeTEra0o0TWJObzM3TWxUekRIMWZBS084Mm9UYXYKNXAyVnpHdG1OVm9CZU1Yd0pob1ViVzZqRWY4UncvQUZLRlNpSWdlVnUrQ3N5TSs3ZmdWVVI5d2FqYVhXelJUeApVOHREU3M3L0ZXeDQvRkhUTnFvMDIwUVpPa1dCQVVDVTErdmpEam9GL2pWNE96eFZUUGxWL3JWVSsrdzVxY0FTCm9BTkVmby85QWdNQkFBRUNnZ0VBUGk1ZG5NaVBFY3hjQWEvc2lLcUFQYmRPc0x3MzczUVJBR3hKblVQRFJlY1IKcXRzTUhWdmVTbTRxRVNNSUU4WXlvSGV4ckNva1BjckxQZ3BISWdiUXBQV3pvVHBLMmlBUzhrME5Lb2ZRVFJlZApNc3A1RnpoRzg0NElveFJ4UURFSlkzWFBWSDJjVDRoRjFIRWJNblZxNmw2RGJ4SG5OVUNqSzVmS1Npb1JRd3NuCmR2cnJpNS8xcXVqalhlVUxWQUtXN1lLOHFNckorM25UQ3RORDhJcUJHd0dvSlFLQVB1R1R5dHROZlJsdkZiYkYKMkpBVmQyU0hHQUgyaVNxeGFuU2Q4V09BTk5uRW8zWGJ4WHI5dVh0R3lHWTQyQ2FxTHM5Q0tRTGJUUlY3ekN0ZAp0WmJ6d2xpaGFTcGF2aGpNUWlTOXBSdEVpZzhEYkhmY1NFRDZXQTRhUVFLQmdRRCs5dkZsdzQ3WGpaRGcxa0dGCnViVzZiTVdQZWFWellBWkV3YW5aK09XVVNtK1FkTXVFMXNMb1VVSExMVFJ0T3E4T0RCbDNKbFFTc3pPL210VWgKU3E2V1RTaGd0clVrRzByK2pqUlNNVGlvOFBDWVJ3SVQ3V2VHaTRjVzBtQ0RSdmlLRW1VNElkMlNyR0ZzM1pGVwpGODFZRmJLVXQwa2xaWjhsSy9EOEE1WmFoUUtCZ1FESm55MVlRb2RrcDdMK3JsYlUyMHRhRWdKY2VqQ1lwWWw2CjNidk84WGJUejBTczRYMHRUVllrZGc1eHJhS3V2UDhYSFNMNDhJTzU4V0RkcE1VWjRJWkRvamFsdnN3TXpoNzQKelVDVlpaMzlpcFlrZURpNjZpZlZLNUx4YUtYVGRtcjdYMzlUTDdiR1cvWXlZbHVveTZCN0cwekpyMWJTc1VqaQo1MUxpVGhpbEdRS0JnQ1hEbDgrMTNuTm80WHViNElxWkRpUzF0YkZobURMMWx4Z2FBemxvMTBCV29oMm9YdmluCkFxbDhWNTFyYmFkOEdLK2c5U2lqd2JJZlh0dlRhQndOUHJ5K1l1dW9SRDQ3MktqSmtWQlhRQWd0MzhUK1IzMkMKSFdKZFNqNEVIUTEwdHAxa3loODlUTjlMcndaNzd1bnNqcHFzWkE0STg4bVpPckE3eU83YTdTc3RBb0dBY2dFWQpnYngwbER5aTRKRXh0Z0FkdGx0U2pIbm0rcGszaUlyU1JDeVN0U2VRdkhSdjlHcXpWOENOWUVmL0lmRHFDR2JJClBKeTZ6eXdtU28xOWlhbEVJZ0FhQ0ZRL1NzcE9CdjhBRXJtM3dRSlk3Vnd4TDdkeE9IOEFBcExhbVJ4dlY1M2kKLytXTjR0RmkvNUJRSmJ5bURKWWVNRGg5em5yQ0xOTUNNY1pZOXhrQ2dZQktNbWI3cm15MlBmamFudS9tWTNnRwptVUpHZFJwS09nYW5ENmNDUnFYeXI4OW94aHJRRURKRnhRMnlhSEE0OVdFajNTOE5uckY1Yjk5bDZTdENzZUQ3CmpVV0FwR2RnODV1aENBMmhGMS9CenU2UCs0bXgrWWdiYXZJVVhNOFhrTHo2a3N0cDNnb0xaQW03elVMTVdCTVQKNWxzK0Q4aktBR2NJKytYaVFqb05QZz09Ci0tLS0tRU5EIFBSSVZBVEUgS0VZLS0tLS0K", "admin.jks": 
"/u3+7QAAAAIAAAACAAAAAQAMc3lzdGVtLmFkbWluAAABXGqY41gAAAUDMIIE/zAOBgorBgEEASoCEQEBBQAEggTrfcI7i0rR/2YXpxmNt7oStk4MdEKDCh8Ajh8qteq20F23zz5dhkgn/9QIAiTV0euRtq/RIYEKXrUc3Y1hI7LIHTYHHttnyoT+42MU1L1raMPMaoIiLYkEUWNq+XfrsG7YXvC04A5kGRcgTimixMsOMGULTZkFkVQ7rF60X4LKIGyy45SHTd0Ikc/H879csfxgGrGQTdoIa/q7zgVXu0Wbm0KvTWW1E6/uFMELvncsRK6cQBkbhiLj5K8+bmmCjNjqiNMCChkwFlmv77SCh/DBr9pwtEgha3UukFZ9jJOnhtnftM8rUUKH/F14sI/3q6wMfKAsDAY98E5/2ohwJZqZJ47yexM+DFZbeql3HgwJ7m5qagj2whXiks2tLjV3J1+KduIDcttCZ9zECCjccey//x5YvUOdA/yoiY/hkQcLxwu09yrjyouDTBYpCPq09+wSlqK3rajNfMMaE/k0CZpXR12pyCN5V5Ouq7f4QrGAmKXMXFp98yo7Gbe6wQiPQx66L+jARD1h02Hyk72BfTewRpYPrXR+05dBhdtYUwVSGHPnQN/xaotHW8wMZff4Z8gVTgQf7p0U8N+vRZ4Z0RyMzHV40aofW1vcKFqW1nxHb+V+Xbz+2V/92kLbkW5XmPNLt1beVmfT4QFAZMDu+Ely3jlNt0YzrqHp8dsft/JizqkqDUJeyzZ7vbEHzC9t14Hg8NyUkK2FvZ811JdIE1OFqg6WGcbFTtpGConqKsvh97DIoI1I5JPEtS2K5A/icdLP82Hyja7Tct/nw3vJ0tFXdkvvu9OH5bDHjTUUfJ3jbcobEGa1Y8bPqJhn+vMPuiaa8kkS2VhyQpmLSAE3aOaSM5hDjiBjk7LbMbpM7+pvl1LaPaP9c18cawXoXNw/T3vG3Rqd6dsWqQYI/8NEt79aJHDctjXSZ9P0YlEJPzvpLT3MEyS9x3qpwkr6DopsONrRtdJkUX7WgoumFWc8ESqUORFJiQv2IjsapugoETexBSeEJqsxdvnUI7hVnLkepDl0rSxT7sCqOIcTD05IYnZuLoGgrBaMMjF82WUu3yN52h7vDanN31Qw0oBD2qPllE0hAzW8WKqt8dDPflbAp4B6iq8msw8pclS2tWZlWSY6dtleYNx3q6dSOBV8Pvysw0j+a/3NNL++XUf4GXZblVhU40TS3aCIsk5DreGzPrSwXc5qGyqak+vyfpSIGaMNJ1d29pzDv5EUYQ5R7bpzI6WWOOS4tj2juD+sRYq8lOc/A5dwGdx1wL3wr6ywap71bngt2ZAAXlVBQ6vgAAHMMXPMb/dURj66A59dEDs+SwuRs22IxKfjDwn+XoqaLL1dmLCSSf75220mZmj3Q3BUq+f5qIkeHkU3Z+XuxSLjcNhaEP+zO2lbOnBF8NKLXV5Lo1uWufVuukvelrxNc2Qf+y+tolSTs3QxFvPnlf6VVmGxuHLDTtSUQpMatmUsGovQqoOuVZYSVnxo/u93pG578KTCH+zbSn6gkyuj/hb/eKDn+oCS0LZBMui79mXJbxDX+MYntuNftXJgfYiVLlf4UhCrL8rtfPcfDS38XMWjStoKP5uACpGe0oexqlYBYNChljVakV2e61msCaO+86rbmW7rDe8asKYCKfQyfAPBZFiLasuMdtlj9cFDVc6VfeWLMZZKcJP4og/I7RL5qQkAAAACAAVYLjUwOQAAA0AwggM8MIICJKADAgECAgEGMA0GCSqGSIb3DQEBBQUAMB4xHDAaBgNVBAMTE2xvZ2dpbmctc2lnbmVyLXRlc3QwHhcNMTcwNjAyMjA1NzAzWhcNMTkwNjAyMjA1NzAzWjA9MRAwDgYDVQQKEwdMb2dnaW5nMRIwEAYDVQQLEwlPcGVuU2hpZnQxFTATBgNVBAMTDHN5c3RlbS5hZG1pbjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAN+/f14rh1ZnYZORg4Y4Pi2xPRRo+p2zSrnlRqnbXVF7nT49CUmgE4+KoUsc53414n8Tv6zS7Fw+Kz0hQOsFBqmK2tFymHuIK+JsD0Cs+7/XpoIx0efdH2Gc4vK5sMvZJnb1GthebzgwNgkbrp1MEcsFUAJOXu5d5QwV3SP52HEiVsOpDDqz8D0cPDvKKyl7tvEsptjLgsa1hiR63uy1KNJQ/xjqgDLgAZCmy3+jjZ3RFpFM5jOJb/DnJZmD7pYNYnxi3ilueivZdqC1ADrJb7JdaHiqwtv6l8N05PoxFdj5GtoLmDDRcfoVwPl2skMTFD6HU3kQH497L9iakn7IcaECAwEAAaNmMGQwDgYDVR0PAQH/BAQDAgWgMAkGA1UdEwQCMAAwHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsGAQUFBwMCMB0GA1UdDgQWBBSPjgbWENoGOLIS0TebgC0eyhSweDAJBgNVHSMEAjAAMA0GCSqGSIb3DQEBBQUAA4IBAQC6EykIYhvV6XQQbQwR+NxH+IuSygpVXbVEsmwSBHta3hUCHfl/+qWTqTEgwFGR8GWAe6r1yhqwXIiIy/ATB4PLGoyx6HxHLMEw88EVQnsjc/B9IVxr4nb/4Aj8HgR4o1jxc4oeCWVxAhB9IyLlte4aaeAwN7WcGvhHV+TmqdHJ1kJds2yBaF9Dvczw9FFAuoIZMQxMUYP16ITP6d3gyc9xIfWFJVPvFRL1bssXexU18Z0zeQByWPX1/cE59QfEb09s2kv2FUZvHR4Ja7hlABpTIJzTmHn7rmUm2GWZCvGI4dEbU7K1/8m0VezG8lhf5v7wYfue2fj5l+7lblc2A8+tAAVYLjUwOQAAAt4wggLaMIIBwqADAgECAgEBMA0GCSqGSIb3DQEBCwUAMB4xHDAaBgNVBAMTE2xvZ2dpbmctc2lnbmVyLXRlc3QwHhcNMTcwNjAyMjA1NjQ1WhcNMjIwNjAxMjA1NjQ2WjAeMRwwGgYDVQQDExNsb2dnaW5nLXNpZ25lci10ZXN0MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA0LsYZYZIRgm3pAy6QBGA/wtUEjbPAUoLsLyFTb3wqhLVzaXCM1ZZ7BX4zMG5KY+0PbCLN6naenCyXSHbeYt41TYfhErU7YnNSfZWoqB+Wc79Rdu3pABTt1p8dn1Gw5DANJUDYDRYDWbr4SRwwbxBoyWFuAxJay8YFUCfltBnEqz7oBDH7YCcE/3YiwTP3CQR7jnUKXFVKn4OYeUP32zr3WqBOjF2WJUGqEQ7exaSffcBTYcjJL9s7MaXHQw0xII+oUfyaAyH88FN2rFdWUuaROqJBA/uN/KSou4KbQvJdIxTCD49oZTV0jovhiWB6PJCAqRnPP0dmc5lEge3+VRulQIDAQABoyMwITAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEAKz4QkG
tHkbcVfg79qj32zZjptxGDoml+KaBF2Taazs+p6PVPjn6RPiOo2GeSfyBEezEu70VU9+8KxfPF2vg/7uR5NfuOcBY7rlV3M+Wr2hudmUt8F0IC2g20G7rq+uPkpQlNRP3mjShE8RDjxsWvDH2/kZ2txEAvdWSfklbq9u9CMNl1Vm0WasRUTW8izN/Tfg8mb/oflLXV3QhVPJqBYjBzF/Ve0MESYnDdRy2mmsjpJujxvqMZGFnMXRK/kCixDDMxl9bjIrINH2VPNzO7nscKpVDHDdRXGug/jqgslYS4gBqYbz/jCQJI1KC1ZMm3Hre2xybF+Qten0DoQVUygAAAAAIABnNpZy1jYQAAAVxqmOLLAAVYLjUwOQAAAt4wggLaMIIBwqADAgECAgEBMA0GCSqGSIb3DQEBCwUAMB4xHDAaBgNVBAMTE2xvZ2dpbmctc2lnbmVyLXRlc3QwHhcNMTcwNjAyMjA1NjQ1WhcNMjIwNjAxMjA1NjQ2WjAeMRwwGgYDVQQDExNsb2dnaW5nLXNpZ25lci10ZXN0MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA0LsYZYZIRgm3pAy6QBGA/wtUEjbPAUoLsLyFTb3wqhLVzaXCM1ZZ7BX4zMG5KY+0PbCLN6naenCyXSHbeYt41TYfhErU7YnNSfZWoqB+Wc79Rdu3pABTt1p8dn1Gw5DANJUDYDRYDWbr4SRwwbxBoyWFuAxJay8YFUCfltBnEqz7oBDH7YCcE/3YiwTP3CQR7jnUKXFVKn4OYeUP32zr3WqBOjF2WJUGqEQ7exaSffcBTYcjJL9s7MaXHQw0xII+oUfyaAyH88FN2rFdWUuaROqJBA/uN/KSou4KbQvJdIxTCD49oZTV0jovhiWB6PJCAqRnPP0dmc5lEge3+VRulQIDAQABoyMwITAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEAKz4QkGtHkbcVfg79qj32zZjptxGDoml+KaBF2Taazs+p6PVPjn6RPiOo2GeSfyBEezEu70VU9+8KxfPF2vg/7uR5NfuOcBY7rlV3M+Wr2hudmUt8F0IC2g20G7rq+uPkpQlNRP3mjShE8RDjxsWvDH2/kZ2txEAvdWSfklbq9u9CMNl1Vm0WasRUTW8izN/Tfg8mb/oflLXV3QhVPJqBYjBzF/Ve0MESYnDdRy2mmsjpJujxvqMZGFnMXRK/kCixDDMxl9bjIrINH2VPNzO7nscKpVDHDdRXGug/jqgslYS4gBqYbz/jCQJI1KC1ZMm3Hre2xybF+Qten0DoQVUygDsPVooVpFUfqblislo6QEEkwR+W", "key": "/u3+7QAAAAIAAAACAAAAAQAKbG9nZ2luZy1lcwAAAVxqmO6TAAAFATCCBP0wDgYKKwYBBAEqAhEBAQUABIIE6TkvFxEwvCG4ryUD3CnUSWM31JED4AnGBoPrrMZoaIUX5+ESaIWNVw/x21sny1AnePaXatPuBJhct57q7TPhX96H7TpSdLOXwpDFHmfj0H2JGXbvKeFgN3cnAQ0YX02zG2llIdpxTxtgaiJaWK4cEBwS+Z2QvKNcI69fBH05vpQqNmiptg8IxFd9JkoUYYa8z5Q2mHS8Vm55nSG6Cb8c98UOjfqY7aKaiUQHbqmUfTXgs3lLHLux5K+kMrTtSEmwV1s8fFko8MwxXMLqSg5wx6MbtwgSw25lV7LhxHkdCINYspkxlF4LxsYoJHjvqfiZgVGxlacKeSsFNds9pb1Z/xgoO85LaT9BsHNerbhq5SOiDZbkqYtwgPz2HSTzRJ4lelmDtkyZlMKcYde8gS86P8rSYSQL4WW+jCL8eU+XomoQwF/+Fr+VVoojF4ZdiDE6rtSDoA3OaovUObH2X+UudH5bOSl90AYsK/jyA9vwcW8a7JMpUPEv/6ywHD5Xl/uj8chSnEYTftt8K0sZByYBsmWbNajLptGJym/TC9tXwEAfmHsqk7qRl6Gvk07saFcZXJhC2zd64fyKdb4HTlVB0wBDvwMylIGdBoZj7YooNLgLkjBYGghgbaIJQ1HApjJ40Pw/y7ng0ZBfH8TbU18N6WDczAagtgJm0vscWYBm+oEfydg8ohY/SkzX6G7PARWBOrlddSR7LSVIvi1QqlnJL+9HO2b3n1yfz3TS+9nprKSguDj0ga40q4Qv7feDx3MBoKFmpKVgwLAYe2VswMwaNPdCXCm3/YTUi4H33qwB+goj5v7P3Fp3MQ/09a8VKMAlsGmCZ48brOoOXkLxkRPs8uCRIJ3uIYgC//aJj4fnUKi0cD7g5U6G9ItKC/b/UTeCoFgXdriVGiZeXCaEobVViYgQTv8/S4hHOWr5NL8yW6o6N6mYKC8EQ/bL7Rep895kn5URQEYJpL+lDotaHXq1345JoAn3s1AKlADU9ohxXpl8oX3ceEQl+AzuEAIK3OwPVc1p4llmA6/5hTd68+wmhoBI/zofcV6ovt4jNddErZoY6AgHETjbBniztUk9vDi34Zc9wujoGKwD+pFUoiv2iKe9JDbx2iZKuO/wAekctAdqE22X2nva3wZK2JZ3wVqcoNHnqwcNDV4hlgvkYmHeTxMWIQHaownhBWnFSijUTOfZlvngHzfslxIuuQFORDYQ2bQNxLIH5bIHPZcFzW8zg/BeR9/qL5O6F+WwLzBT6pC1W72UZMyicpBLU9bug2zXoCvjyCBX3z1xB0GAWLDUtZEflZuhfpesna6rWe4OsgDbIwNKPSIW8nq73mtOx0K5vY5qhrIYLHcPoU0NDmJrTcBfdI58W7iHkhrdewOE/g5vRYn86oCTcMO69mBOGOR+rDKL7OHq+qP8ykDdWgRRSH2Edzr3eHMCLHabdCZVa/w95Qbs8mgFVO/tjp5m+yRwXO0sUM8OzbeQL9vRrpHD8ZhxSc9vx/rLQZNdfyw9WWXfNURIqah3fplnMsVuykkooFX66NglX0kZY0oLiG8VZakoUEdoE8bV34XRHFPO2lA84WMWMZJoOSx9BINAWlw17lrzeNlCffXnMdYN263ErOcqbbqYmHcSlmP+t2z+X646XVdl/812EzS0mwBOdgmPtY4iiUGteoIBVQAAAAIABVguNTA5AAAEXDCCBFgwggNAoAMCAQICAQgwDQYJKoZIhvcNAQEFBQAwHjEcMBoGA1UEAxMTbG9nZ2luZy1zaWduZXItdGVzdDAeFw0xNzA2MDIyMDU3MDZaFw0xOTA2MDIyMDU3MDZaMDsxEDAOBgNVBAoTB0xvZ2dpbmcxEjAQBgNVBAsTCU9wZW5TaGlmdDETMBEGA1UEAxMKbG9nZ2luZy1lczCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAIj9jfghgbnXmeJW3jvw0SCrmSSonh/2FBhxRryOJQ9w7+hjw7DawqRDFbjteK/OvtaQXxWzJrvcJDYKgzFqTLK0EK+WpcRTppLfQFLdA+c+UeMymOmwvyvGrfG46N07y+
565GIHtLylunjlk9sdN5x+A5Yje4OScxRxfcYhU+6MUsU9ShByR4qq3bRh+Q4dfvyowNqw8vHXeQZBMaE90ctp4SD4gYmU1/aTbr+p7HXhDvXzsA0Usy2cqQl+ZFoBP63okOQRQuk2uNEVh6eWm9OfGqRgIFVVqWveX+RF8Ar/zsDZZ6jmRMNdkNfgC4PyJ/+hQToYE4acK6B67zExDfcCAwEAAaOCAYIwggF+MA4GA1UdDwEB/wQEAwIFoDAJBgNVHRMEAjAAMB0GA1UdJQQWMBQGCCsGAQUFBwMBBggrBgEFBQcDAjAdBgNVHQ4EFgQUkLRM8YDNaCs4dVDwcLoKfz+qCugwCQYDVR0jBAIwADCCARYGA1UdEQSCAQ0wggEJgglsb2NhbGhvc3SHBH8AAAGCCmxvZ2dpbmctZXOCJGxvZ2dpbmctZXMubG9nZ2luZy5zdmMuY2x1c3Rlci5sb2NhbIISbG9nZ2luZy1lcy1jbHVzdGVygixsb2dnaW5nLWVzLWNsdXN0ZXIubG9nZ2luZy5zdmMuY2x1c3Rlci5sb2NhbIIObG9nZ2luZy1lcy1vcHOCKGxvZ2dpbmctZXMtb3BzLmxvZ2dpbmcuc3ZjLmNsdXN0ZXIubG9jYWyCFmxvZ2dpbmctZXMtb3BzLWNsdXN0ZXKCMGxvZ2dpbmctZXMtb3BzLWNsdXN0ZXIubG9nZ2luZy5zdmMuY2x1c3Rlci5sb2NhbDANBgkqhkiG9w0BAQUFAAOCAQEAmH+yWSHyi3iqyGFCn9BJmgHynkvBVrHOdcu6z3QXbq+69Ba3cx1dJ1OH3B1paVdjUvr57z9eqmAY0O++UGFgRHfuA2aptI0wGVeJu4ICI4bISKjBDTi91Yg+bmhbT4rSgFYeyUqVIWqKa8mlYrcn8Tf70EyxPOxw7sORgJBOCVD9I9+vjgw1jFncKupEwc39Zr0o0IDhJKwzadh1rHiMfYk72/F0GzNbTIYWnEOG3m1h9pmF2lle7z6p+UvBQ1HaYBPZKhBnhY1VmtmHk/CzMQkv9yXZTndBCA5EFDs6FmJE9BIoDCT3DO9/PgdNrwBwsHeW1TZ+QM+sHkJg9uNkmAAFWC41MDkAAALeMIIC2jCCAcKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAeMRwwGgYDVQQDExNsb2dnaW5nLXNpZ25lci10ZXN0MB4XDTE3MDYwMjIwNTY0NVoXDTIyMDYwMTIwNTY0NlowHjEcMBoGA1UEAxMTbG9nZ2luZy1zaWduZXItdGVzdDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBANC7GGWGSEYJt6QMukARgP8LVBI2zwFKC7C8hU298KoS1c2lwjNWWewV+MzBuSmPtD2wizep2npwsl0h23mLeNU2H4RK1O2JzUn2VqKgflnO/UXbt6QAU7dafHZ9RsOQwDSVA2A0WA1m6+EkcMG8QaMlhbgMSWsvGBVAn5bQZxKs+6AQx+2AnBP92IsEz9wkEe451ClxVSp+DmHlD99s691qgToxdliVBqhEO3sWkn33AU2HIyS/bOzGlx0MNMSCPqFH8mgMh/PBTdqxXVlLmkTqiQQP7jfykqLuCm0LyXSMUwg+PaGU1dI6L4YlgejyQgKkZzz9HZnOZRIHt/lUbpUCAwEAAaMjMCEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBACs+EJBrR5G3FX4O/ao99s2Y6bcRg6JpfimgRdk2ms7Pqej1T45+kT4jqNhnkn8gRHsxLu9FVPfvCsXzxdr4P+7keTX7jnAWO65VdzPlq9obnZlLfBdCAtoNtBu66vrj5KUJTUT95o0oRPEQ48bFrwx9v5GdrcRAL3Vkn5JW6vbvQjDZdVZtFmrEVE1vIszf034PJm/6H5S11d0IVTyagWIwcxf1XtDBEmJw3UctpprI6Sbo8b6jGRhZzF0Sv5AosQwzMZfW4yKyDR9lTzczu57HCqVQxw3UVxroP46oLJWEuIAamG8/4wkCSNSgtWTJtx63tscmxfkLXp9A6EFVMoAAAAACAAZzaWctY2EAAAFcapjuCQAFWC41MDkAAALeMIIC2jCCAcKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAeMRwwGgYDVQQDExNsb2dnaW5nLXNpZ25lci10ZXN0MB4XDTE3MDYwMjIwNTY0NVoXDTIyMDYwMTIwNTY0NlowHjEcMBoGA1UEAxMTbG9nZ2luZy1zaWduZXItdGVzdDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBANC7GGWGSEYJt6QMukARgP8LVBI2zwFKC7C8hU298KoS1c2lwjNWWewV+MzBuSmPtD2wizep2npwsl0h23mLeNU2H4RK1O2JzUn2VqKgflnO/UXbt6QAU7dafHZ9RsOQwDSVA2A0WA1m6+EkcMG8QaMlhbgMSWsvGBVAn5bQZxKs+6AQx+2AnBP92IsEz9wkEe451ClxVSp+DmHlD99s691qgToxdliVBqhEO3sWkn33AU2HIyS/bOzGlx0MNMSCPqFH8mgMh/PBTdqxXVlLmkTqiQQP7jfykqLuCm0LyXSMUwg+PaGU1dI6L4YlgejyQgKkZzz9HZnOZRIHt/lUbpUCAwEAAaMjMCEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBACs+EJBrR5G3FX4O/ao99s2Y6bcRg6JpfimgRdk2ms7Pqej1T45+kT4jqNhnkn8gRHsxLu9FVPfvCsXzxdr4P+7keTX7jnAWO65VdzPlq9obnZlLfBdCAtoNtBu66vrj5KUJTUT95o0oRPEQ48bFrwx9v5GdrcRAL3Vkn5JW6vbvQjDZdVZtFmrEVE1vIszf034PJm/6H5S11d0IVTyagWIwcxf1XtDBEmJw3UctpprI6Sbo8b6jGRhZzF0Sv5AosQwzMZfW4yKyDR9lTzczu57HCqVQxw3UVxroP46oLJWEuIAamG8/4wkCSNSgtWTJtx63tscmxfkLXp9A6EFVMoDc1E1G6n3qvhGp5dK9yDT2ysGPkQ==", "searchguard.key": 
"/u3+7QAAAAIAAAACAAAAAQANZWxhc3RpY3NlYXJjaAAAAVxqmOn/AAAFAjCCBP4wDgYKKwYBBAEqAhEBAQUABIIE6oujromxYQn+gTDjHx2SpW/QHczp7bQ8gE2tOZpA6ETL3LPG+ZdlyI1RXUDjfOdH/IS43HlUy/Ea15mQASvc70Q8LbJ5PevhOarEtM0kQ3Bkfp5IeI8BwG4b/JX4Rl5DheRzaBJUWyBObMufrmII1aqROzzo95V1rP6gTlDuHAHvBAVgntJruE2p5mqV3d3ASKfdtWVR45qSwqcR+z8K4Bpu5KUr9fWZIodyaiZC24uXGyHXK29iDfsNjSjE/9y1OGwsx0JfEibcpJ/Kft1GdJ2i7pTFYbL+Vo1E54yyGyayrlCwVDfqUrUnuRYr4guKfFSAMrBiv2iCC/fkMMW0HMmnjJCsgqkbxOvlZlE4shSGU/IZzB2GwpJ+w6JrJcttIRTQLg7peLYdpw8rUirI420OmZpkGBL5eDAqbSbJw6uSGsQcz+P4JVRs8EHpbHFc6h585umbOyffwnsUHQf9XAMS1GvoE1BD7Nl1FUmKF/XZVbL/0WImuJbOw3jmSNG3Sif3A4MxlFgPBSRJ0JHKhzWgfh7SqsIeBe3/glPVrDTEJ533UaQ6lyRKD8zC3xtbArRKz6oLsaFs14ee7nwV21SsldCMpzEWEXnDJ01DG0opb34lriAPdGVuZ7DPnRcdR18aPx+V7RGi49D+NHLJ6qvU4Hedw0DStG4yobs52J4N6YMpRJHAI/ymJlGiGTTXdzOugZ+vfrCE7peDQ56x223NSX9W7G8NbCC6Pxt7b3T8IUAtSqmJeqthus052ymm1KGVwa8PFjGO/pp+upBqlXnCla5GBJL8PRfEm6p0RrggimeIDOAmlSFZzQXoBUZTWejQPLOItWRtzi/V5fBJvifzXR0aDmpUcLG0RLoTTiiY5tnhENrb6A7Z6JpDyi9GIOMOjcZRSvxpsGe79Z+tvuwJA7432tTzAjP78DYk5zzLt8wFo+qLhk99+UVl1sCRlqN+4Syug3jxgXcPRKqHYZwGyCcbA7aBz/GKXRRO6nH15RUodN8l4Z4ZRtiDJI5re2oc7P7Cog0SJ69dQFUwdAAgUtmIHKPVNh4jic2nTlOsPAmR6sRRQtqX6nd+ddUbDBsBVHek4VjS5/XhnvGljvoX8RWWcuv3ICVc7jVmpjETeJ8oPYgikAUOV62vckyWyq5Cq5jf2vuEuyiVIO4Sog3my4hH2CSDt7UGvSlyl6FO4+6+fb9y45m8tj6jsQmmnAp4c4a+rzgP6aUdasIfm9KLvxi6L2yagOxPLFWXowohRy7F3OBfo2vd9YqcEaq5y4x70Hc4vfqtYqsqi/Ma17ZXibuHwyudlehXfqudMe5KsVzgQjTWxGhGjLdWzWqpITfE7mMNVy6z2+NZ8csR4VTaxjMr5j5B8JGOFGuzozD6dzd4qEwF/bQNMVaxbr7RQreTICz2PIfGiYCVwxVSqWe8LZkbeOA1VwTcZqMKhkjWnzwrEr3sHLgrzw1kkBOto5QkGbiHHTWqpkGC57cBOECx3qdC1r05MFNKci/Kz6tFfI+qlQ80pvYmugrG6+AkZnhWaLr3zHqtQuvcAjnY7x0sd2wa6DTXsyA8/5ZGU5o3qVW9f73RTDVl92r4Nafmbz8CE57iWlBPLDpQmIeBSjZFiBBgjiuJBqdLrQcmtyOLBpaduixpv927dWNBx3Wht8gujeSvd7sLGBQAAAACAAVYLjUwOQAAA4IwggN+MIICZqADAgECAgEHMA0GCSqGSIb3DQEBBQUAMB4xHDAaBgNVBAMTE2xvZ2dpbmctc2lnbmVyLXRlc3QwHhcNMTcwNjAyMjA1NzA0WhcNMTkwNjAyMjA1NzA0WjA+MRAwDgYDVQQKEwdMb2dnaW5nMRIwEAYDVQQLEwlPcGVuU2hpZnQxFjAUBgNVBAMTDWVsYXN0aWNzZWFyY2gwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCEXKnwTovm3+6Nw7Z7111n/STbTHYdi5hj5k6zMbf8IWiVGqTesOlIIuESeEr88bSr3vDWEH5JcM9+1VaXcH61j2N4bq6WmOW8pHtYmRUwJZCGOl7T9bmai8DvLqIe06XArwLVPXHsCyPx3n9Q+X9A8ivqQZkuA42O+OlBchZ/PQ3U3orKFjh02yLVV9bmR5qQFUwEssSGWDhmMNc0C4FTGpEHlI9Dnc4BLVawc7vbP/QP4chDuQIuW/ZZs526aEAvatIl7VZUU7zncTpoHQiVnVbD/KGVMWK6K0MrXy5QmVL6DYi4PrFjHRQzxTudLfQ1Ejqa1IXtIiAEh8XqTTcDAgMBAAGjgaYwgaMwDgYDVR0PAQH/BAQDAgWgMAkGA1UdEwQCMAAwHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsGAQUFBwMCMB0GA1UdDgQWBBQMUilOVK1cv7nTgge74BeaSfQ1mTAJBgNVHSMEAjAAMD0GA1UdEQQ2MDSCCWxvY2FsaG9zdIcEfwAAAYIKbG9nZ2luZy1lc4IObG9nZ2luZy1lcy1vcHOIBSoDBAUFMA0GCSqGSIb3DQEBBQUAA4IBAQBORjiI7dwTdo7iJ9JdoDQldjDpCw3NAlzpH9uTCTHREWyt6CNvJ5QzRsw5Eh2LW/BWnAxOVHdZuSM6ZfyY1oh7SMfxg0sljoZ/MWJuGUeMt/F4rOVJMA1Oyf6Oj3M7kaL4pH1rsBkHEIbPSPGEnShfCGMLaki3frzlHQgpxISv2Mz7O8UjyIQj1j8rU4bA2w6PrreGzYotbjqP96yPcuC6yOVVF5Bo4oInpBCliu9ItPONLr6Kt/1nZtJbnk4MLzUYKUt5ptgcUhtnz44tLfO4FqHuDI1yozKI10s6NTwjIsTquQxHLjbHf9nZZ0A1hnVJLd6qaOBMSsZabcLVTAHSAAVYLjUwOQAAAt4wggLaMIIBwqADAgECAgEBMA0GCSqGSIb3DQEBCwUAMB4xHDAaBgNVBAMTE2xvZ2dpbmctc2lnbmVyLXRlc3QwHhcNMTcwNjAyMjA1NjQ1WhcNMjIwNjAxMjA1NjQ2WjAeMRwwGgYDVQQDExNsb2dnaW5nLXNpZ25lci10ZXN0MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA0LsYZYZIRgm3pAy6QBGA/wtUEjbPAUoLsLyFTb3wqhLVzaXCM1ZZ7BX4zMG5KY+0PbCLN6naenCyXSHbeYt41TYfhErU7YnNSfZWoqB+Wc79Rdu3pABTt1p8dn1Gw5DANJUDYDRYDWbr4SRwwbxBoyWFuAxJay8YFUCfltBnEqz7oBDH7YCcE/3YiwTP3CQR7jnUKXFVKn4OYeUP32zr3WqBOjF2WJUGqEQ7exaSffcBTYcjJL9s7MaXHQw0xII+oUfyaAyH88FN2rFdWUuaROqJBA/uN/KSou4KbQvJdIxTCD49oZTV0jovhiWB6PJCAqRnPP0dmc5lEge3+VRulQ
IDAQABoyMwITAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEAKz4QkGtHkbcVfg79qj32zZjptxGDoml+KaBF2Taazs+p6PVPjn6RPiOo2GeSfyBEezEu70VU9+8KxfPF2vg/7uR5NfuOcBY7rlV3M+Wr2hudmUt8F0IC2g20G7rq+uPkpQlNRP3mjShE8RDjxsWvDH2/kZ2txEAvdWSfklbq9u9CMNl1Vm0WasRUTW8izN/Tfg8mb/oflLXV3QhVPJqBYjBzF/Ve0MESYnDdRy2mmsjpJujxvqMZGFnMXRK/kCixDDMxl9bjIrINH2VPNzO7nscKpVDHDdRXGug/jqgslYS4gBqYbz/jCQJI1KC1ZMm3Hre2xybF+Qten0DoQVUygAAAAAIABnNpZy1jYQAAAVxqmOltAAVYLjUwOQAAAt4wggLaMIIBwqADAgECAgEBMA0GCSqGSIb3DQEBCwUAMB4xHDAaBgNVBAMTE2xvZ2dpbmctc2lnbmVyLXRlc3QwHhcNMTcwNjAyMjA1NjQ1WhcNMjIwNjAxMjA1NjQ2WjAeMRwwGgYDVQQDExNsb2dnaW5nLXNpZ25lci10ZXN0MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA0LsYZYZIRgm3pAy6QBGA/wtUEjbPAUoLsLyFTb3wqhLVzaXCM1ZZ7BX4zMG5KY+0PbCLN6naenCyXSHbeYt41TYfhErU7YnNSfZWoqB+Wc79Rdu3pABTt1p8dn1Gw5DANJUDYDRYDWbr4SRwwbxBoyWFuAxJay8YFUCfltBnEqz7oBDH7YCcE/3YiwTP3CQR7jnUKXFVKn4OYeUP32zr3WqBOjF2WJUGqEQ7exaSffcBTYcjJL9s7MaXHQw0xII+oUfyaAyH88FN2rFdWUuaROqJBA/uN/KSou4KbQvJdIxTCD49oZTV0jovhiWB6PJCAqRnPP0dmc5lEge3+VRulQIDAQABoyMwITAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEAKz4QkGtHkbcVfg79qj32zZjptxGDoml+KaBF2Taazs+p6PVPjn6RPiOo2GeSfyBEezEu70VU9+8KxfPF2vg/7uR5NfuOcBY7rlV3M+Wr2hudmUt8F0IC2g20G7rq+uPkpQlNRP3mjShE8RDjxsWvDH2/kZ2txEAvdWSfklbq9u9CMNl1Vm0WasRUTW8izN/Tfg8mb/oflLXV3QhVPJqBYjBzF/Ve0MESYnDdRy2mmsjpJujxvqMZGFnMXRK/kCixDDMxl9bjIrINH2VPNzO7nscKpVDHDdRXGug/jqgslYS4gBqYbz/jCQJI1KC1ZMm3Hre2xybF+Qten0DoQVUygMxPBYOgAvCIL5PSNzz9D1v+R7BV", "searchguard.truststore": "/u3+7QAAAAIAAAABAAAAAgAGc2lnLWNhAAABXGqY7xIABVguNTA5AAAC3jCCAtowggHCoAMCAQICAQEwDQYJKoZIhvcNAQELBQAwHjEcMBoGA1UEAxMTbG9nZ2luZy1zaWduZXItdGVzdDAeFw0xNzA2MDIyMDU2NDVaFw0yMjA2MDEyMDU2NDZaMB4xHDAaBgNVBAMTE2xvZ2dpbmctc2lnbmVyLXRlc3QwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDQuxhlhkhGCbekDLpAEYD/C1QSNs8BSguwvIVNvfCqEtXNpcIzVlnsFfjMwbkpj7Q9sIs3qdp6cLJdIdt5i3jVNh+EStTtic1J9laioH5Zzv1F27ekAFO3Wnx2fUbDkMA0lQNgNFgNZuvhJHDBvEGjJYW4DElrLxgVQJ+W0GcSrPugEMftgJwT/diLBM/cJBHuOdQpcVUqfg5h5Q/fbOvdaoE6MXZYlQaoRDt7FpJ99wFNhyMkv2zsxpcdDDTEgj6hR/JoDIfzwU3asV1ZS5pE6okED+438pKi7gptC8l0jFMIPj2hlNXSOi+GJYHo8kICpGc8/R2ZzmUSB7f5VG6VAgMBAAGjIzAhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3DQEBCwUAA4IBAQArPhCQa0eRtxV+Dv2qPfbNmOm3EYOiaX4poEXZNprOz6no9U+OfpE+I6jYZ5J/IER7MS7vRVT37wrF88Xa+D/u5Hk1+45wFjuuVXcz5avaG52ZS3wXQgLaDbQbuur64+SlCU1E/eaNKETxEOPGxa8Mfb+Rna3EQC91ZJ+SVur270Iw2XVWbRZqxFRNbyLM39N+DyZv+h+UtdXdCFU8moFiMHMX9V7QwRJicN1HLaaayOkm6PG+oxkYWcxdEr+QKLEMMzGX1uMisg0fZU83M7uexwqlUMcN1Fca6D+OqCyVhLiAGphvP+MJAkjUoLVkybcet7bHJsX5C16fQOhBVTKAlY2J4Lmtdn6Xljd35pXd8nN/fww=", "truststore": 
"/u3+7QAAAAIAAAABAAAAAgAGc2lnLWNhAAABXGqY7xIABVguNTA5AAAC3jCCAtowggHCoAMCAQICAQEwDQYJKoZIhvcNAQELBQAwHjEcMBoGA1UEAxMTbG9nZ2luZy1zaWduZXItdGVzdDAeFw0xNzA2MDIyMDU2NDVaFw0yMjA2MDEyMDU2NDZaMB4xHDAaBgNVBAMTE2xvZ2dpbmctc2lnbmVyLXRlc3QwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDQuxhlhkhGCbekDLpAEYD/C1QSNs8BSguwvIVNvfCqEtXNpcIzVlnsFfjMwbkpj7Q9sIs3qdp6cLJdIdt5i3jVNh+EStTtic1J9laioH5Zzv1F27ekAFO3Wnx2fUbDkMA0lQNgNFgNZuvhJHDBvEGjJYW4DElrLxgVQJ+W0GcSrPugEMftgJwT/diLBM/cJBHuOdQpcVUqfg5h5Q/fbOvdaoE6MXZYlQaoRDt7FpJ99wFNhyMkv2zsxpcdDDTEgj6hR/JoDIfzwU3asV1ZS5pE6okED+438pKi7gptC8l0jFMIPj2hlNXSOi+GJYHo8kICpGc8/R2ZzmUSB7f5VG6VAgMBAAGjIzAhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3DQEBCwUAA4IBAQArPhCQa0eRtxV+Dv2qPfbNmOm3EYOiaX4poEXZNprOz6no9U+OfpE+I6jYZ5J/IER7MS7vRVT37wrF88Xa+D/u5Hk1+45wFjuuVXcz5avaG52ZS3wXQgLaDbQbuur64+SlCU1E/eaNKETxEOPGxa8Mfb+Rna3EQC91ZJ+SVur270Iw2XVWbRZqxFRNbyLM39N+DyZv+h+UtdXdCFU8moFiMHMX9V7QwRJicN1HLaaayOkm6PG+oxkYWcxdEr+QKLEMMzGX1uMisg0fZU83M7uexwqlUMcN1Fca6D+OqCyVhLiAGphvP+MJAkjUoLVkybcet7bHJsX5C16fQOhBVTKAlY2J4Lmtdn6Xljd35pXd8nN/fww=" }, "kind": "Secret", "metadata": { "creationTimestamp": null, "name": "logging-elasticsearch" }, "type": "Opaque" }, "state": "present" } TASK [openshift_logging_elasticsearch : Set logging-es-ops-cluster service] **** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:168 changed: [openshift] => { "changed": true, "results": { "clusterip": "172.30.163.59", "cmd": "/bin/oc get service logging-es-ops-cluster -o json -n logging", "results": [ { "apiVersion": "v1", "kind": "Service", "metadata": { "creationTimestamp": "2017-06-02T20:57:26Z", "name": "logging-es-ops-cluster", "namespace": "logging", "resourceVersion": "1284", "selfLink": "/api/v1/namespaces/logging/services/logging-es-ops-cluster", "uid": "15928bf9-47d6-11e7-ab86-0e1196655f96" }, "spec": { "clusterIP": "172.30.163.59", "ports": [ { "port": 9300, "protocol": "TCP", "targetPort": 9300 } ], "selector": { "component": "es-ops", "provider": "openshift" }, "sessionAffinity": "None", "type": "ClusterIP" }, "status": { "loadBalancer": {} } } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_elasticsearch : Set logging-es-ops service] ************ task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:182 changed: [openshift] => { "changed": true, "results": { "clusterip": "172.30.184.176", "cmd": "/bin/oc get service logging-es-ops -o json -n logging", "results": [ { "apiVersion": "v1", "kind": "Service", "metadata": { "creationTimestamp": "2017-06-02T20:57:28Z", "name": "logging-es-ops", "namespace": "logging", "resourceVersion": "1288", "selfLink": "/api/v1/namespaces/logging/services/logging-es-ops", "uid": "165fdebb-47d6-11e7-ab86-0e1196655f96" }, "spec": { "clusterIP": "172.30.184.176", "ports": [ { "port": 9200, "protocol": "TCP", "targetPort": "restapi" } ], "selector": { "component": "es-ops", "provider": "openshift" }, "sessionAffinity": "None", "type": "ClusterIP" }, "status": { "loadBalancer": {} } } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_elasticsearch : Creating ES storage template] ********** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:197 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_elasticsearch : Creating ES storage template] ********** task path: 
/tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:210 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_elasticsearch : Set ES storage] ************************ task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:225 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_elasticsearch : set_fact] ****************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:237 ok: [openshift] => { "ansible_facts": { "es_deploy_name": "logging-es-ops-data-master-tycs4wrj" }, "changed": false } TASK [openshift_logging_elasticsearch : set_fact] ****************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:241 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_elasticsearch : Set ES dc templates] ******************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:246 changed: [openshift] => { "changed": true, "checksum": "9ce49c74fbf6b724c0186744df9913fd47618541", "dest": "/tmp/openshift-logging-ansible-aKv7rU/templates/logging-es-dc.yml", "gid": 0, "group": "root", "md5sum": "ebfa321c0507824ada20f2eb0dff2ff4", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 3090, "src": "/root/.ansible/tmp/ansible-tmp-1496437048.83-205342174057483/source", "state": "file", "uid": 0 } TASK [openshift_logging_elasticsearch : Set ES dc] ***************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:262 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc get dc logging-es-ops-data-master-tycs4wrj -o json -n logging", "results": [ { "apiVersion": "v1", "kind": "DeploymentConfig", "metadata": { "creationTimestamp": "2017-06-02T20:57:29Z", "generation": 2, "labels": { "component": "es-ops", "deployment": "logging-es-ops-data-master-tycs4wrj", "logging-infra": "elasticsearch", "provider": "openshift" }, "name": "logging-es-ops-data-master-tycs4wrj", "namespace": "logging", "resourceVersion": "1302", "selfLink": "/oapi/v1/namespaces/logging/deploymentconfigs/logging-es-ops-data-master-tycs4wrj", "uid": "174e96d6-47d6-11e7-ab86-0e1196655f96" }, "spec": { "replicas": 1, "selector": { "component": "es-ops", "deployment": "logging-es-ops-data-master-tycs4wrj", "logging-infra": "elasticsearch", "provider": "openshift" }, "strategy": { "activeDeadlineSeconds": 21600, "recreateParams": { "timeoutSeconds": 600 }, "resources": {}, "type": "Recreate" }, "template": { "metadata": { "creationTimestamp": null, "labels": { "component": "es-ops", "deployment": "logging-es-ops-data-master-tycs4wrj", "logging-infra": "elasticsearch", "provider": "openshift" }, "name": "logging-es-ops-data-master-tycs4wrj" }, "spec": { "containers": [ { "env": [ { "name": "NAMESPACE", "valueFrom": { "fieldRef": { "apiVersion": "v1", "fieldPath": "metadata.namespace" } } }, { "name": "KUBERNETES_TRUST_CERT", "value": "true" }, { "name": "SERVICE_DNS", "value": "logging-es-ops-cluster" }, { "name": "CLUSTER_NAME", "value": "logging-es-ops" }, { "name": "INSTANCE_RAM", "value": "8Gi" }, { "name": "NODE_QUORUM", 
"value": "1" }, { "name": "RECOVER_EXPECTED_NODES", "value": "1" }, { "name": "RECOVER_AFTER_TIME", "value": "5m" }, { "name": "IS_MASTER", "value": "true" }, { "name": "HAS_DATA", "value": "true" } ], "image": "172.30.101.10:5000/logging/logging-elasticsearch:latest", "imagePullPolicy": "Always", "name": "elasticsearch", "ports": [ { "containerPort": 9200, "name": "restapi", "protocol": "TCP" }, { "containerPort": 9300, "name": "cluster", "protocol": "TCP" } ], "readinessProbe": { "exec": { "command": [ "/usr/share/elasticsearch/probe/readiness.sh" ] }, "failureThreshold": 3, "initialDelaySeconds": 5, "periodSeconds": 5, "successThreshold": 1, "timeoutSeconds": 4 }, "resources": { "limits": { "cpu": "1", "memory": "8Gi" }, "requests": { "memory": "512Mi" } }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/etc/elasticsearch/secret", "name": "elasticsearch", "readOnly": true }, { "mountPath": "/usr/share/java/elasticsearch/config", "name": "elasticsearch-config", "readOnly": true }, { "mountPath": "/elasticsearch/persistent", "name": "elasticsearch-storage" } ] } ], "dnsPolicy": "ClusterFirst", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": { "supplementalGroups": [ 65534 ] }, "serviceAccount": "aggregated-logging-elasticsearch", "serviceAccountName": "aggregated-logging-elasticsearch", "terminationGracePeriodSeconds": 30, "volumes": [ { "name": "elasticsearch", "secret": { "defaultMode": 420, "secretName": "logging-elasticsearch" } }, { "configMap": { "defaultMode": 420, "name": "logging-elasticsearch" }, "name": "elasticsearch-config" }, { "emptyDir": {}, "name": "elasticsearch-storage" } ] } }, "test": false, "triggers": [ { "type": "ConfigChange" } ] }, "status": { "availableReplicas": 0, "conditions": [ { "lastTransitionTime": "2017-06-02T20:57:29Z", "lastUpdateTime": "2017-06-02T20:57:29Z", "message": "Deployment config does not have minimum availability.", "status": "False", "type": "Available" }, { "lastTransitionTime": "2017-06-02T20:57:29Z", "lastUpdateTime": "2017-06-02T20:57:29Z", "message": "replication controller \"logging-es-ops-data-master-tycs4wrj-1\" is waiting for pod \"logging-es-ops-data-master-tycs4wrj-1-deploy\" to run", "status": "Unknown", "type": "Progressing" } ], "details": { "causes": [ { "type": "ConfigChange" } ], "message": "config change" }, "latestVersion": 1, "observedGeneration": 2, "replicas": 0, "unavailableReplicas": 0, "updatedReplicas": 0 } } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_elasticsearch : Delete temp directory] ***************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:274 ok: [openshift] => { "changed": false, "path": "/tmp/openshift-logging-ansible-aKv7rU", "state": "absent" } TASK [openshift_logging : include_role] **************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:148 statically included: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/determine_version.yaml TASK [openshift_logging_kibana : fail] ***************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/determine_version.yaml:3 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_kibana : set_fact] 
************************************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/determine_version.yaml:7 ok: [openshift] => { "ansible_facts": { "kibana_version": "3_5" }, "changed": false } TASK [openshift_logging_kibana : set_fact] ************************************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/determine_version.yaml:12 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_kibana : fail] ***************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/determine_version.yaml:15 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_kibana : Create temp directory for doing work in] ****** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:7 ok: [openshift] => { "changed": false, "cmd": [ "mktemp", "-d", "/tmp/openshift-logging-ansible-XXXXXX" ], "delta": "0:00:00.002832", "end": "2017-06-02 16:57:31.295774", "rc": 0, "start": "2017-06-02 16:57:31.292942" } STDOUT: /tmp/openshift-logging-ansible-6K35Ic TASK [openshift_logging_kibana : set_fact] ************************************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:12 ok: [openshift] => { "ansible_facts": { "tempdir": "/tmp/openshift-logging-ansible-6K35Ic" }, "changed": false } TASK [openshift_logging_kibana : Create templates subdirectory] **************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:16 ok: [openshift] => { "changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/tmp/openshift-logging-ansible-6K35Ic/templates", "secontext": "unconfined_u:object_r:user_tmp_t:s0", "size": 6, "state": "directory", "uid": 0 } TASK [openshift_logging_kibana : Create Kibana service account] **************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:26 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_kibana : Create Kibana service account] **************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:34 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc get sa aggregated-logging-kibana -o json -n logging", "results": [ { "apiVersion": "v1", "imagePullSecrets": [ { "name": "aggregated-logging-kibana-dockercfg-0x0c7" } ], "kind": "ServiceAccount", "metadata": { "creationTimestamp": "2017-06-02T20:57:32Z", "name": "aggregated-logging-kibana", "namespace": "logging", "resourceVersion": "1312", "selfLink": "/api/v1/namespaces/logging/serviceaccounts/aggregated-logging-kibana", "uid": "18d5f2f1-47d6-11e7-ab86-0e1196655f96" }, "secrets": [ { "name": "aggregated-logging-kibana-dockercfg-0x0c7" }, { "name": "aggregated-logging-kibana-token-lx2hj" } ] } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_kibana : set_fact] ************************************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:42 ok: [openshift] => { "ansible_facts": { "kibana_component": "kibana", "kibana_name": "logging-kibana" }, "changed": false } TASK [openshift_logging_kibana : Retrieving the cert to use 
when generating secrets for the logging components] *** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:46 ok: [openshift] => (item={u'name': u'ca_file', u'file': u'ca.crt'}) => { "changed": false, "content": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMyakNDQWNLZ0F3SUJBZ0lCQVRBTkJna3Foa2lHOXcwQkFRc0ZBREFlTVJ3d0dnWURWUVFERXhOc2IyZG4KYVc1bkxYTnBaMjVsY2kxMFpYTjBNQjRYRFRFM01EWXdNakl3TlRZME5Wb1hEVEl5TURZd01USXdOVFkwTmxvdwpIakVjTUJvR0ExVUVBeE1UYkc5bloybHVaeTF6YVdkdVpYSXRkR1Z6ZERDQ0FTSXdEUVlKS29aSWh2Y05BUUVCCkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU5DN0dHV0dTRVlKdDZRTXVrQVJnUDhMVkJJMnp3RktDN0M4aFUyOThLb1MKMWMybHdqTldXZXdWK016QnVTbVB0RDJ3aXplcDJucHdzbDBoMjNtTGVOVTJINFJLMU8ySnpVbjJWcUtnZmxuTwovVVhidDZRQVU3ZGFmSFo5UnNPUXdEU1ZBMkEwV0ExbTYrRWtjTUc4UWFNbGhiZ01TV3N2R0JWQW41YlFaeEtzCis2QVF4KzJBbkJQOTJJc0V6OXdrRWU0NTFDbHhWU3ArRG1IbEQ5OXM2OTFxZ1RveGRsaVZCcWhFTzNzV2tuMzMKQVUySEl5Uy9iT3pHbHgwTU5NU0NQcUZIOG1nTWgvUEJUZHF4WFZsTG1rVHFpUVFQN2pmeWtxTHVDbTBMeVhTTQpVd2crUGFHVTFkSTZMNFlsZ2VqeVFnS2taeno5SFpuT1pSSUh0L2xVYnBVQ0F3RUFBYU1qTUNFd0RnWURWUjBQCkFRSC9CQVFEQWdLa01BOEdBMVVkRXdFQi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFDcysKRUpCclI1RzNGWDRPL2FvOTlzMlk2YmNSZzZKcGZpbWdSZGsybXM3UHFlajFUNDUra1Q0anFOaG5rbjhnUkhzeApMdTlGVlBmdkNzWHp4ZHI0UCs3a2VUWDdqbkFXTzY1VmR6UGxxOW9iblpsTGZCZENBdG9OdEJ1NjZ2cmo1S1VKClRVVDk1bzBvUlBFUTQ4YkZyd3g5djVHZHJjUkFMM1ZrbjVKVzZ2YnZRakRaZFZadEZtckVWRTF2SXN6ZjAzNFAKSm0vNkg1UzExZDBJVlR5YWdXSXdjeGYxWHREQkVtSnczVWN0cHBySTZTYm84YjZqR1JoWnpGMFN2NUFvc1F3egpNWmZXNHlLeURSOWxUemN6dTU3SENxVlF4dzNVVnhyb1A0Nm9MSldFdUlBYW1HOC80d2tDU05TZ3RXVEp0eDYzCnRzY214ZmtMWHA5QTZFRlZNb0E9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K", "encoding": "base64", "item": { "file": "ca.crt", "name": "ca_file" }, "source": "/etc/origin/logging/ca.crt" } ok: [openshift] => (item={u'name': u'kibana_internal_key', u'file': u'kibana-internal.key'}) => { "changed": false, "content": 
"LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBM2xXZWM0S2pqOHhIaCt0akpNUVJrWUVDdDcxSGRoV1VzYTlxVHFaNVNqUnhHNVlRClZzNk1ad2FObktLQkJ3QWwzRjBmRXJRR2tmdUp5NGdZQS9lSCt0bDdlSjhiU2hPNnNJai9pemdUdWxmVTBBN2wKSE5Oa1Eya2NRZWNyaEkxekZPZ05vbXJRMzVFRVpsVms4TTdCNDR2d2xuZnl6ZDMxQmp2TzFCbDRVajVMZFZJTApBMG1vY3ZBenF6N0c3cVZEd2VUZXRJU3hia1hxWGkrQk5jeEdJS1lXbDJ5Y3NjUXB0Qi9nSXdqZ3NyVHZlelIvCnoyUngxVzBRSlBkTHI5ZTI0RHhsbDd5aXlpbnMvNUhsWDZQY2ZsMjg4dGFPT3RpN0YwaUhxSUpxRzBVeWtXN04KNmwzaGM5cW5IcWU2WTR3VjhTQnVuSkRKeG1SWjIvVXVqYWcwU3dJREFRQUJBb0lCQUFiL3JQZzM0WXd5UXdJdApUN2Fsa1dRQ0txSzhDNWJWQVJSQzBGYmZlS3YwVUtjc3B5RUVhWGtJeE1ac2V5Rk1TT1RSN1p0NkhVYlZJelpMCjkyMlFpakJFVGxXeXRIbzFlc2Y1MkFsNjMyd2JQYkM2OTAxYi9pajlFdzJrQ0VPbzdEbDVRSXlmVGlucmQ3YjgKOHl0OVpxOFNCYVhHNnRhK0tPdGtVSk51cGRINDR3UFFvc2Y5S3VCc2YvRjRxcTVQbWk2TUkyWlZpME9nZmhqdQpVZlFJTktGNVBLZHdYNHNIOUUyMUkzaENKMm1maDlwU1RYczJqeUo2R1FmaWxTUy82R0diWElyZmtybjZtTUJECm1jZ3kvWGcycXNRY1hpbjJydTBQN3pQQjJvUDE2bVpFYUhWdWd6VUIrQXB0VTVHSjhKR2Y5azEyaFBIQ1FmcU0KeE5LaFF1a0NnWUVBOFVPL0Y0QzdJd2RRNmI1bGlNdUYySk9IWnJ2Q21tVm1PWnFxWEpnSkFVd0htMjdmZlFZeApCYjZhZy9jSC9NVHRWbWJvM2dFK1RPRnFWNzZoYXU4RVBoRjBoc05GNmZyVmlucnBFUkJQRkEyWG5vcXVNR0IvCmxjdnVFQUt1eGRiTmF5aHhRTzEzcGhZU3JBWVRSMmFlNWp6SXFreGl1WFlrSExkVzhoeDZFdjhDZ1lFQTYrbmsKaDNraDFZS0lTVTd0MHExVE9rVklRVy9rZzFoU2drMFYyUUJGZTNodmp3MmV2ek9FVlFSYlpTNW9iRlVubGQ4SwpnampjdzJtK2lBR2ZGcEZLOUpZNkk4dWprWWRSRVdFLzkzUFhuYnBYemVZM3VFWXp6dFdidE56cWtJOWRqUkNKCmp6M05kV0JkZ1RCNzRUS1lTMjVLSURaRVdYNGp1djdOT0d2MU9yVUNnWUVBbDVWUkFwdEcrSU1vT3pQODV5MjQKTXBLK2g3V0FWekZPUVBNRUJwa2ZUMGxObmtMUzkrSmorby8rMU5yb2tjL0lybmlKNXJJeFNteDJQQnJ4b0JYOApQR01MSzRDVTlLVThkWDB6NGh5MUVveFhycXpETkhIc3Qxa2hnYjJ0d1c5c01OK0FDS01xZ1pkc3M5ZzlWS2NOClB1c0J5TDJsYVpEb3I0SWhob3lOeGxFQ2dZQjU5aVF3S1Y1bGZDTXJDd1FHUzViZ1pCcnp3WDM0clR1U28zbHQKQXlmb3FoMjZiZ2NvditCazkyaXNpVzV3dXlGSTZOTU0rWXFmOTlZSmlCVVAzTE5NZVRHN2ViYXBNTFNuY0loYQpUR2dtNGNRczdSelhSbXZZUFRSUEwzcVFtNTE0cFJrSWxhSFhVYWRsZDRSRHF4MXl1YVRXdkZkZmtNZTJENjVXCndmRTRsUUtCZ1FDWUdVZllYR3B0Wk5iWHppNjB1RE1SUGpDV1hFMU84dmRKOUZCR1NvVGxrT09Kd3lCQW5BZWIKUGdhS0xXK25LelZaZmdJWWUvV211VWlnNU15WUNoUm0vWFYvS2c3R0JGNENmcGxLeHVWUWtaSW5aZHpKb1prWgovNWUwT1pUUmhkWEJxUEIxcEFLTFNNTHJvQjBZVkszY0ljanozTHlJa2IrMWNWbCtHbTAzNlE9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=", "encoding": "base64", "item": { "file": "kibana-internal.key", "name": "kibana_internal_key" }, "source": "/etc/origin/logging/kibana-internal.key" } ok: [openshift] => (item={u'name': u'kibana_internal_cert', u'file': u'kibana-internal.crt'}) => { "changed": false, "content": 
"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURUakNDQWphZ0F3SUJBZ0lCQWpBTkJna3Foa2lHOXcwQkFRc0ZBREFlTVJ3d0dnWURWUVFERXhOc2IyZG4KYVc1bkxYTnBaMjVsY2kxMFpYTjBNQjRYRFRFM01EWXdNakl3TlRZME4xb1hEVEU1TURZd01qSXdOVFkwT0ZvdwpGakVVTUJJR0ExVUVBeE1MSUd0cFltRnVZUzF2Y0hNd2dnRWlNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0SUJEd0F3CmdnRUtBb0lCQVFEZVZaNXpncU9QekVlSDYyTWt4QkdSZ1FLM3ZVZDJGWlN4cjJwT3BubEtOSEVibGhCV3pveG4KQm8yY29vRUhBQ1hjWFI4U3RBYVIrNG5MaUJnRDk0ZjYyWHQ0bnh0S0U3cXdpUCtMT0JPNlY5VFFEdVVjMDJSRAphUnhCNXl1RWpYTVU2QTJpYXREZmtRUm1WV1R3enNIamkvQ1dkL0xOM2ZVR084N1VHWGhTUGt0MVVnc0RTYWh5CjhET3JQc2J1cFVQQjVONjBoTEZ1UmVwZUw0RTF6RVlncGhhWGJKeXh4Q20wSCtBakNPQ3l0Tzk3TkgvUFpISFYKYlJBazkwdXYxN2JnUEdXWHZLTEtLZXova2VWZm85eCtYYnp5MW80NjJMc1hTSWVvZ21vYlJUS1JiczNxWGVGegoycWNlcDdwampCWHhJRzZja01uR1pGbmI5UzZOcURSTEFnTUJBQUdqZ1o0d2dac3dEZ1lEVlIwUEFRSC9CQVFECkFnV2dNQk1HQTFVZEpRUU1NQW9HQ0NzR0FRVUZCd01CTUF3R0ExVWRFd0VCL3dRQ01BQXdaZ1lEVlIwUkJGOHcKWFlJTElHdHBZbUZ1WVMxdmNIT0NMQ0JyYVdKaGJtRXRiM0J6TG5KdmRYUmxjaTVrWldaaGRXeDBMbk4yWXk1agpiSFZ6ZEdWeUxteHZZMkZzZ2hnZ2EybGlZVzVoTGpFeU55NHdMakF1TVM1NGFYQXVhVytDQm10cFltRnVZVEFOCkJna3Foa2lHOXcwQkFRc0ZBQU9DQVFFQWxNWHJxMkMxc2U2cFZNTC9DQUF6UmJ4dDRRWWo2eXpKdzVjcW5WN0MKZkoyeGJveFIyS1gvZ21CeGNrcXZvbitqVi8xMldZWk5zRTNIUnhTcFBEblgxL0E4Y1hlS2FJZkxMUVcvNjgwYQpwVURheStaL2Nxdm9yYXIxYmFkc05kMS9kOWF2c2hIckNndWo0dUNuUDU0aEJCd2VUYk1vOU90c0dHVEhlTUZzCms4UXN4ZGZvZm5ZeW9Jdm00SEhlQVIxeDlEL2dIMGV0b0NDckxlRituWVJOdzF5N01waG1IOGRhUlZvTTkxR2MKUE1ZOXZWZGkxRUNUWDhOT0dGRkFBM1FGTndXRkY1YmlPV0ZxMnE2M0IyL1dLdHRxODRHdHkzSkxPbVUyYUZNVgozV2V0a3VWYUtZTEdmZitDd29DcG1PT3ZvRVhIMWZEdVppbW1VblI3ODNIL3dBPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQotLS0tLUJFR0lOIENFUlRJRklDQVRFLS0tLS0KTUlJQzJqQ0NBY0tnQXdJQkFnSUJBVEFOQmdrcWhraUc5dzBCQVFzRkFEQWVNUnd3R2dZRFZRUURFeE5zYjJkbgphVzVuTFhOcFoyNWxjaTEwWlhOME1CNFhEVEUzTURZd01qSXdOVFkwTlZvWERUSXlNRFl3TVRJd05UWTBObG93CkhqRWNNQm9HQTFVRUF4TVRiRzluWjJsdVp5MXphV2R1WlhJdGRHVnpkRENDQVNJd0RRWUpLb1pJaHZjTkFRRUIKQlFBRGdnRVBBRENDQVFvQ2dnRUJBTkM3R0dXR1NFWUp0NlFNdWtBUmdQOExWQkkyendGS0M3QzhoVTI5OEtvUwoxYzJsd2pOV1dld1YrTXpCdVNtUHREMndpemVwMm5wd3NsMGgyM21MZU5VMkg0UksxTzJKelVuMlZxS2dmbG5PCi9VWGJ0NlFBVTdkYWZIWjlSc09Rd0RTVkEyQTBXQTFtNitFa2NNRzhRYU1saGJnTVNXc3ZHQlZBbjViUVp4S3MKKzZBUXgrMkFuQlA5MklzRXo5d2tFZTQ1MUNseFZTcCtEbUhsRDk5czY5MXFnVG94ZGxpVkJxaEVPM3NXa24zMwpBVTJISXlTL2JPekdseDBNTk1TQ1BxRkg4bWdNaC9QQlRkcXhYVmxMbWtUcWlRUVA3amZ5a3FMdUNtMEx5WFNNClV3ZytQYUdVMWRJNkw0WWxnZWp5UWdLa1p6ejlIWm5PWlJJSHQvbFVicFVDQXdFQUFhTWpNQ0V3RGdZRFZSMFAKQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCL3dRRk1BTUJBZjh3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUNzKwpFSkJyUjVHM0ZYNE8vYW85OXMyWTZiY1JnNkpwZmltZ1JkazJtczdQcWVqMVQ0NStrVDRqcU5obmtuOGdSSHN4Ckx1OUZWUGZ2Q3NYenhkcjRQKzdrZVRYN2puQVdPNjVWZHpQbHE5b2JuWmxMZkJkQ0F0b050QnU2NnZyajVLVUoKVFVUOTVvMG9SUEVRNDhiRnJ3eDl2NUdkcmNSQUwzVmtuNUpXNnZidlFqRFpkVlp0Rm1yRVZFMXZJc3pmMDM0UApKbS82SDVTMTFkMElWVHlhZ1dJd2N4ZjFYdERCRW1KdzNVY3RwcHJJNlNibzhiNmpHUmhaekYwU3Y1QW9zUXd6Ck1aZlc0eUt5RFI5bFR6Y3p1NTdIQ3FWUXh3M1VWeHJvUDQ2b0xKV0V1SUFhbUc4LzR3a0NTTlNndFdUSnR4NjMKdHNjbXhma0xYcDlBNkVGVk1vQT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=", "encoding": "base64", "item": { "file": "kibana-internal.crt", "name": "kibana_internal_cert" }, "source": "/etc/origin/logging/kibana-internal.crt" } ok: [openshift] => (item={u'name': u'server_tls', u'file': u'server-tls.json'}) => { "changed": false, "content": 
"Ly8gU2VlIGZvciBhdmFpbGFibGUgb3B0aW9uczogaHR0cHM6Ly9ub2RlanMub3JnL2FwaS90bHMuaHRtbCN0bHNfdGxzX2NyZWF0ZXNlcnZlcl9vcHRpb25zX3NlY3VyZWNvbm5lY3Rpb25saXN0ZW5lcgp0bHNfb3B0aW9ucyA9IHsKCWNpcGhlcnM6ICdrRUVDREg6K2tFRUNESCtTSEE6a0VESDora0VESCtTSEE6K2tFREgrQ0FNRUxMSUE6a0VDREg6K2tFQ0RIK1NIQTprUlNBOitrUlNBK1NIQTora1JTQStDQU1FTExJQTohYU5VTEw6IWVOVUxMOiFTU0x2MjohUkM0OiFERVM6IUVYUDohU0VFRDohSURFQTorM0RFUycsCglob25vckNpcGhlck9yZGVyOiB0cnVlCn0K", "encoding": "base64", "item": { "file": "server-tls.json", "name": "server_tls" }, "source": "/etc/origin/logging/server-tls.json" } TASK [openshift_logging_kibana : Set logging-kibana service] ******************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:57 changed: [openshift] => { "changed": true, "results": { "clusterip": "172.30.140.3", "cmd": "/bin/oc get service logging-kibana -o json -n logging", "results": [ { "apiVersion": "v1", "kind": "Service", "metadata": { "creationTimestamp": "2017-06-02T20:57:34Z", "name": "logging-kibana", "namespace": "logging", "resourceVersion": "1329", "selfLink": "/api/v1/namespaces/logging/services/logging-kibana", "uid": "19feb502-47d6-11e7-ab86-0e1196655f96" }, "spec": { "clusterIP": "172.30.140.3", "ports": [ { "port": 443, "protocol": "TCP", "targetPort": "oaproxy" } ], "selector": { "component": "kibana", "provider": "openshift" }, "sessionAffinity": "None", "type": "ClusterIP" }, "status": { "loadBalancer": {} } } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_kibana : set_fact] ************************************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:74 [WARNING]: when statements should not include jinja2 templating delimiters such as {{ }} or {% %}. Found: {{ openshift_logging_kibana_key | trim | length > 0 }} skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_kibana : set_fact] ************************************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:79 [WARNING]: when statements should not include jinja2 templating delimiters such as {{ }} or {% %}. Found: {{ openshift_logging_kibana_cert | trim | length > 0 }} skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_kibana : set_fact] ************************************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:84 [WARNING]: when statements should not include jinja2 templating delimiters such as {{ }} or {% %}. 
Found: {{ openshift_logging_kibana_ca | trim | length > 0 }} skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_kibana : set_fact] ************************************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:89 ok: [openshift] => { "ansible_facts": { "kibana_ca": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMyakNDQWNLZ0F3SUJBZ0lCQVRBTkJna3Foa2lHOXcwQkFRc0ZBREFlTVJ3d0dnWURWUVFERXhOc2IyZG4KYVc1bkxYTnBaMjVsY2kxMFpYTjBNQjRYRFRFM01EWXdNakl3TlRZME5Wb1hEVEl5TURZd01USXdOVFkwTmxvdwpIakVjTUJvR0ExVUVBeE1UYkc5bloybHVaeTF6YVdkdVpYSXRkR1Z6ZERDQ0FTSXdEUVlKS29aSWh2Y05BUUVCCkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU5DN0dHV0dTRVlKdDZRTXVrQVJnUDhMVkJJMnp3RktDN0M4aFUyOThLb1MKMWMybHdqTldXZXdWK016QnVTbVB0RDJ3aXplcDJucHdzbDBoMjNtTGVOVTJINFJLMU8ySnpVbjJWcUtnZmxuTwovVVhidDZRQVU3ZGFmSFo5UnNPUXdEU1ZBMkEwV0ExbTYrRWtjTUc4UWFNbGhiZ01TV3N2R0JWQW41YlFaeEtzCis2QVF4KzJBbkJQOTJJc0V6OXdrRWU0NTFDbHhWU3ArRG1IbEQ5OXM2OTFxZ1RveGRsaVZCcWhFTzNzV2tuMzMKQVUySEl5Uy9iT3pHbHgwTU5NU0NQcUZIOG1nTWgvUEJUZHF4WFZsTG1rVHFpUVFQN2pmeWtxTHVDbTBMeVhTTQpVd2crUGFHVTFkSTZMNFlsZ2VqeVFnS2taeno5SFpuT1pSSUh0L2xVYnBVQ0F3RUFBYU1qTUNFd0RnWURWUjBQCkFRSC9CQVFEQWdLa01BOEdBMVVkRXdFQi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFDcysKRUpCclI1RzNGWDRPL2FvOTlzMlk2YmNSZzZKcGZpbWdSZGsybXM3UHFlajFUNDUra1Q0anFOaG5rbjhnUkhzeApMdTlGVlBmdkNzWHp4ZHI0UCs3a2VUWDdqbkFXTzY1VmR6UGxxOW9iblpsTGZCZENBdG9OdEJ1NjZ2cmo1S1VKClRVVDk1bzBvUlBFUTQ4YkZyd3g5djVHZHJjUkFMM1ZrbjVKVzZ2YnZRakRaZFZadEZtckVWRTF2SXN6ZjAzNFAKSm0vNkg1UzExZDBJVlR5YWdXSXdjeGYxWHREQkVtSnczVWN0cHBySTZTYm84YjZqR1JoWnpGMFN2NUFvc1F3egpNWmZXNHlLeURSOWxUemN6dTU3SENxVlF4dzNVVnhyb1A0Nm9MSldFdUlBYW1HOC80d2tDU05TZ3RXVEp0eDYzCnRzY214ZmtMWHA5QTZFRlZNb0E9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K" }, "changed": false } TASK [openshift_logging_kibana : Generating Kibana route template] ************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:94 ok: [openshift] => { "changed": false, "checksum": "527a6a06a4457b8da51c929c214ee4606b6bbff8", "dest": "/tmp/openshift-logging-ansible-6K35Ic/templates/kibana-route.yaml", "gid": 0, "group": "root", "md5sum": "b5a7c08933a420c17da3ed0e4649f9d3", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 2714, "src": "/root/.ansible/tmp/ansible-tmp-1496437054.88-106716517433177/source", "state": "file", "uid": 0 } TASK [openshift_logging_kibana : Setting Kibana route] ************************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:114 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc get route logging-kibana -o json -n logging", "results": [ { "apiVersion": "v1", "kind": "Route", "metadata": { "creationTimestamp": "2017-06-02T20:57:35Z", "labels": { "component": "support", "logging-infra": "support", "provider": "openshift" }, "name": "logging-kibana", "namespace": "logging", "resourceVersion": "1335", "selfLink": "/oapi/v1/namespaces/logging/routes/logging-kibana", "uid": "1af88a6f-47d6-11e7-ab86-0e1196655f96" }, "spec": { "host": "kibana.router.default.svc.cluster.local", "tls": { "caCertificate": "-----BEGIN 
CERTIFICATE-----\nMIIC2jCCAcKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAeMRwwGgYDVQQDExNsb2dn\naW5nLXNpZ25lci10ZXN0MB4XDTE3MDYwMjIwNTY0NVoXDTIyMDYwMTIwNTY0Nlow\nHjEcMBoGA1UEAxMTbG9nZ2luZy1zaWduZXItdGVzdDCCASIwDQYJKoZIhvcNAQEB\nBQADggEPADCCAQoCggEBANC7GGWGSEYJt6QMukARgP8LVBI2zwFKC7C8hU298KoS\n1c2lwjNWWewV+MzBuSmPtD2wizep2npwsl0h23mLeNU2H4RK1O2JzUn2VqKgflnO\n/UXbt6QAU7dafHZ9RsOQwDSVA2A0WA1m6+EkcMG8QaMlhbgMSWsvGBVAn5bQZxKs\n+6AQx+2AnBP92IsEz9wkEe451ClxVSp+DmHlD99s691qgToxdliVBqhEO3sWkn33\nAU2HIyS/bOzGlx0MNMSCPqFH8mgMh/PBTdqxXVlLmkTqiQQP7jfykqLuCm0LyXSM\nUwg+PaGU1dI6L4YlgejyQgKkZzz9HZnOZRIHt/lUbpUCAwEAAaMjMCEwDgYDVR0P\nAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBACs+\nEJBrR5G3FX4O/ao99s2Y6bcRg6JpfimgRdk2ms7Pqej1T45+kT4jqNhnkn8gRHsx\nLu9FVPfvCsXzxdr4P+7keTX7jnAWO65VdzPlq9obnZlLfBdCAtoNtBu66vrj5KUJ\nTUT95o0oRPEQ48bFrwx9v5GdrcRAL3Vkn5JW6vbvQjDZdVZtFmrEVE1vIszf034P\nJm/6H5S11d0IVTyagWIwcxf1XtDBEmJw3UctpprI6Sbo8b6jGRhZzF0Sv5AosQwz\nMZfW4yKyDR9lTzczu57HCqVQxw3UVxroP46oLJWEuIAamG8/4wkCSNSgtWTJtx63\ntscmxfkLXp9A6EFVMoA=\n-----END CERTIFICATE-----\n", "destinationCACertificate": "-----BEGIN CERTIFICATE-----\nMIIC2jCCAcKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAeMRwwGgYDVQQDExNsb2dn\naW5nLXNpZ25lci10ZXN0MB4XDTE3MDYwMjIwNTY0NVoXDTIyMDYwMTIwNTY0Nlow\nHjEcMBoGA1UEAxMTbG9nZ2luZy1zaWduZXItdGVzdDCCASIwDQYJKoZIhvcNAQEB\nBQADggEPADCCAQoCggEBANC7GGWGSEYJt6QMukARgP8LVBI2zwFKC7C8hU298KoS\n1c2lwjNWWewV+MzBuSmPtD2wizep2npwsl0h23mLeNU2H4RK1O2JzUn2VqKgflnO\n/UXbt6QAU7dafHZ9RsOQwDSVA2A0WA1m6+EkcMG8QaMlhbgMSWsvGBVAn5bQZxKs\n+6AQx+2AnBP92IsEz9wkEe451ClxVSp+DmHlD99s691qgToxdliVBqhEO3sWkn33\nAU2HIyS/bOzGlx0MNMSCPqFH8mgMh/PBTdqxXVlLmkTqiQQP7jfykqLuCm0LyXSM\nUwg+PaGU1dI6L4YlgejyQgKkZzz9HZnOZRIHt/lUbpUCAwEAAaMjMCEwDgYDVR0P\nAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBACs+\nEJBrR5G3FX4O/ao99s2Y6bcRg6JpfimgRdk2ms7Pqej1T45+kT4jqNhnkn8gRHsx\nLu9FVPfvCsXzxdr4P+7keTX7jnAWO65VdzPlq9obnZlLfBdCAtoNtBu66vrj5KUJ\nTUT95o0oRPEQ48bFrwx9v5GdrcRAL3Vkn5JW6vbvQjDZdVZtFmrEVE1vIszf034P\nJm/6H5S11d0IVTyagWIwcxf1XtDBEmJw3UctpprI6Sbo8b6jGRhZzF0Sv5AosQwz\nMZfW4yKyDR9lTzczu57HCqVQxw3UVxroP46oLJWEuIAamG8/4wkCSNSgtWTJtx63\ntscmxfkLXp9A6EFVMoA=\n-----END CERTIFICATE-----\n", "insecureEdgeTerminationPolicy": "Redirect", "termination": "reencrypt" }, "to": { "kind": "Service", "name": "logging-kibana", "weight": 100 }, "wildcardPolicy": "None" }, "status": { "ingress": [ { "conditions": [ { "lastTransitionTime": "2017-06-02T20:57:35Z", "status": "True", "type": "Admitted" } ], "host": "kibana.router.default.svc.cluster.local", "routerName": "router", "wildcardPolicy": "None" } ] } } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_kibana : Generate proxy session] *********************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:125 ok: [openshift] => { "ansible_facts": { "session_secret": "bNSoinBvxclQMXkeWmt62N3W1ljK1l3Ric5DYCzEArnSBcRPAr5Z1qy8OZiYVLL3Ow6vvvbIWUa2uupYoSdLkeuzpecvy7dvqAG1w2dnpe2Y5sluZKYOdvOjg9QsCu6csKhhQPH7cYA0earvkeOy2hyZc4KvSqqpQYFa1lQH2qpCnnBtl0xOSKJFstEKwGklu6sGnu1h" }, "changed": false } TASK [openshift_logging_kibana : Generate oauth client secret] ***************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:132 ok: [openshift] => { "ansible_facts": { "oauth_secret": "rdmbJTHbKtvDXWtK7qWayqG1jFmjehGlzEQkXBleKSbmK0Xj9sWa1whCeukSIjJo" }, "changed": false } TASK [openshift_logging_kibana : Create oauth-client template] ***************** task path: 
/tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:138 changed: [openshift] => { "changed": true, "checksum": "0696714e0b8f576e8320ed85bd4ea0466b3f8051", "dest": "/tmp/openshift-logging-ansible-6K35Ic/templates/oauth-client.yml", "gid": 0, "group": "root", "md5sum": "54d65cdd819f6fe032f43cadfba28d1a", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 328, "src": "/root/.ansible/tmp/ansible-tmp-1496437056.36-241737245721888/source", "state": "file", "uid": 0 } TASK [openshift_logging_kibana : Set kibana-proxy oauth-client] **************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:146 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc get oauthclient kibana-proxy -o json -n logging", "results": [ { "apiVersion": "v1", "kind": "OAuthClient", "metadata": { "creationTimestamp": "2017-06-02T20:57:37Z", "labels": { "logging-infra": "support" }, "name": "kibana-proxy", "resourceVersion": "1339", "selfLink": "/oapi/v1/oauthclients/kibana-proxy", "uid": "1bcdbc02-47d6-11e7-ab86-0e1196655f96" }, "redirectURIs": [ "https://kibana.router.default.svc.cluster.local" ], "scopeRestrictions": [ { "literals": [ "user:info", "user:check-access", "user:list-projects" ] } ], "secret": "rdmbJTHbKtvDXWtK7qWayqG1jFmjehGlzEQkXBleKSbmK0Xj9sWa1whCeukSIjJo" } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_kibana : Set Kibana secret] **************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:157 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc secrets new logging-kibana ca=/etc/origin/logging/ca.crt key=/etc/origin/logging/system.logging.kibana.key cert=/etc/origin/logging/system.logging.kibana.crt -n logging", "results": "", "returncode": 0 }, "state": "present" } TASK [openshift_logging_kibana : Set Kibana Proxy secret] ********************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:171 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc secrets new logging-kibana-proxy oauth-secret=/tmp/oauth-secret-cuo4ZL session-secret=/tmp/session-secret-FgjpVF server-key=/tmp/server-key-LWVezn server-cert=/tmp/server-cert-SiaDMb server-tls=/tmp/server-tls-OHwghI -n logging", "results": "", "returncode": 0 }, "state": "present" } TASK [openshift_logging_kibana : Generate Kibana DC template] ****************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:197 changed: [openshift] => { "changed": true, "checksum": "76fe194a4e6b55ba31d2bd4de30ba81741a415ab", "dest": "/tmp/openshift-logging-ansible-6K35Ic/templates/kibana-dc.yaml", "gid": 0, "group": "root", "md5sum": "2e7d7e8f44dec1b9c80881c7791042f7", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 3737, "src": "/root/.ansible/tmp/ansible-tmp-1496437059.16-233595246449717/source", "state": "file", "uid": 0 } TASK [openshift_logging_kibana : Set Kibana DC] ******************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:216 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc get dc logging-kibana -o json -n logging", "results": [ { "apiVersion": "v1", "kind": "DeploymentConfig", "metadata": { "creationTimestamp": "2017-06-02T20:57:40Z", "generation": 2, "labels": { "component": 
"kibana", "logging-infra": "kibana", "provider": "openshift" }, "name": "logging-kibana", "namespace": "logging", "resourceVersion": "1357", "selfLink": "/oapi/v1/namespaces/logging/deploymentconfigs/logging-kibana", "uid": "1d981976-47d6-11e7-ab86-0e1196655f96" }, "spec": { "replicas": 1, "selector": { "component": "kibana", "logging-infra": "kibana", "provider": "openshift" }, "strategy": { "activeDeadlineSeconds": 21600, "resources": {}, "rollingParams": { "intervalSeconds": 1, "maxSurge": "25%", "maxUnavailable": "25%", "timeoutSeconds": 600, "updatePeriodSeconds": 1 }, "type": "Rolling" }, "template": { "metadata": { "creationTimestamp": null, "labels": { "component": "kibana", "logging-infra": "kibana", "provider": "openshift" }, "name": "logging-kibana" }, "spec": { "containers": [ { "env": [ { "name": "ES_HOST", "value": "logging-es" }, { "name": "ES_PORT", "value": "9200" }, { "name": "KIBANA_MEMORY_LIMIT", "valueFrom": { "resourceFieldRef": { "containerName": "kibana", "divisor": "0", "resource": "limits.memory" } } } ], "image": "172.30.101.10:5000/logging/logging-kibana:latest", "imagePullPolicy": "Always", "name": "kibana", "readinessProbe": { "exec": { "command": [ "/usr/share/kibana/probe/readiness.sh" ] }, "failureThreshold": 3, "initialDelaySeconds": 5, "periodSeconds": 5, "successThreshold": 1, "timeoutSeconds": 4 }, "resources": { "limits": { "memory": "736Mi" } }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/etc/kibana/keys", "name": "kibana", "readOnly": true } ] }, { "env": [ { "name": "OAP_BACKEND_URL", "value": "http://localhost:5601" }, { "name": "OAP_AUTH_MODE", "value": "oauth2" }, { "name": "OAP_TRANSFORM", "value": "user_header,token_header" }, { "name": "OAP_OAUTH_ID", "value": "kibana-proxy" }, { "name": "OAP_MASTER_URL", "value": "https://kubernetes.default.svc.cluster.local" }, { "name": "OAP_PUBLIC_MASTER_URL", "value": "https://172.18.8.225:8443" }, { "name": "OAP_LOGOUT_REDIRECT", "value": "https://172.18.8.225:8443/console/logout" }, { "name": "OAP_MASTER_CA_FILE", "value": "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt" }, { "name": "OAP_DEBUG", "value": "False" }, { "name": "OAP_OAUTH_SECRET_FILE", "value": "/secret/oauth-secret" }, { "name": "OAP_SERVER_CERT_FILE", "value": "/secret/server-cert" }, { "name": "OAP_SERVER_KEY_FILE", "value": "/secret/server-key" }, { "name": "OAP_SERVER_TLS_FILE", "value": "/secret/server-tls.json" }, { "name": "OAP_SESSION_SECRET_FILE", "value": "/secret/session-secret" }, { "name": "OCP_AUTH_PROXY_MEMORY_LIMIT", "valueFrom": { "resourceFieldRef": { "containerName": "kibana-proxy", "divisor": "0", "resource": "limits.memory" } } } ], "image": "172.30.101.10:5000/logging/logging-auth-proxy:latest", "imagePullPolicy": "Always", "name": "kibana-proxy", "ports": [ { "containerPort": 3000, "name": "oaproxy", "protocol": "TCP" } ], "resources": { "limits": { "memory": "96Mi" } }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/secret", "name": "kibana-proxy", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "aggregated-logging-kibana", "serviceAccountName": "aggregated-logging-kibana", "terminationGracePeriodSeconds": 30, "volumes": [ { "name": "kibana", "secret": { "defaultMode": 420, "secretName": "logging-kibana" } }, { "name": "kibana-proxy", "secret": { 
"defaultMode": 420, "secretName": "logging-kibana-proxy" } } ] } }, "test": false, "triggers": [ { "type": "ConfigChange" } ] }, "status": { "availableReplicas": 0, "conditions": [ { "lastTransitionTime": "2017-06-02T20:57:40Z", "lastUpdateTime": "2017-06-02T20:57:40Z", "message": "Deployment config does not have minimum availability.", "status": "False", "type": "Available" }, { "lastTransitionTime": "2017-06-02T20:57:40Z", "lastUpdateTime": "2017-06-02T20:57:40Z", "message": "replication controller \"logging-kibana-1\" is waiting for pod \"logging-kibana-1-deploy\" to run", "status": "Unknown", "type": "Progressing" } ], "details": { "causes": [ { "type": "ConfigChange" } ], "message": "config change" }, "latestVersion": 1, "observedGeneration": 2, "replicas": 0, "unavailableReplicas": 0, "updatedReplicas": 0 } } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_kibana : Delete temp directory] ************************ task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:228 ok: [openshift] => { "changed": false, "path": "/tmp/openshift-logging-ansible-6K35Ic", "state": "absent" } TASK [openshift_logging : include_role] **************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:163 statically included: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/determine_version.yaml TASK [openshift_logging_kibana : fail] ***************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/determine_version.yaml:3 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_kibana : set_fact] ************************************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/determine_version.yaml:7 ok: [openshift] => { "ansible_facts": { "kibana_version": "3_5" }, "changed": false } TASK [openshift_logging_kibana : set_fact] ************************************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/determine_version.yaml:12 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_kibana : fail] ***************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/determine_version.yaml:15 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_kibana : Create temp directory for doing work in] ****** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:7 ok: [openshift] => { "changed": false, "cmd": [ "mktemp", "-d", "/tmp/openshift-logging-ansible-XXXXXX" ], "delta": "0:00:00.009877", "end": "2017-06-02 16:57:42.452316", "rc": 0, "start": "2017-06-02 16:57:42.442439" } STDOUT: /tmp/openshift-logging-ansible-O09ZQa TASK [openshift_logging_kibana : set_fact] ************************************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:12 ok: [openshift] => { "ansible_facts": { "tempdir": "/tmp/openshift-logging-ansible-O09ZQa" }, "changed": false } TASK [openshift_logging_kibana : Create templates subdirectory] **************** task path: 
/tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:16 ok: [openshift] => { "changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/tmp/openshift-logging-ansible-O09ZQa/templates", "secontext": "unconfined_u:object_r:user_tmp_t:s0", "size": 6, "state": "directory", "uid": 0 } TASK [openshift_logging_kibana : Create Kibana service account] **************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:26 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_kibana : Create Kibana service account] **************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:34 ok: [openshift] => { "changed": false, "results": { "cmd": "/bin/oc get sa aggregated-logging-kibana -o json -n logging", "results": [ { "apiVersion": "v1", "imagePullSecrets": [ { "name": "aggregated-logging-kibana-dockercfg-0x0c7" } ], "kind": "ServiceAccount", "metadata": { "creationTimestamp": "2017-06-02T20:57:32Z", "name": "aggregated-logging-kibana", "namespace": "logging", "resourceVersion": "1312", "selfLink": "/api/v1/namespaces/logging/serviceaccounts/aggregated-logging-kibana", "uid": "18d5f2f1-47d6-11e7-ab86-0e1196655f96" }, "secrets": [ { "name": "aggregated-logging-kibana-dockercfg-0x0c7" }, { "name": "aggregated-logging-kibana-token-lx2hj" } ] } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_kibana : set_fact] ************************************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:42 ok: [openshift] => { "ansible_facts": { "kibana_component": "kibana-ops", "kibana_name": "logging-kibana-ops" }, "changed": false } TASK [openshift_logging_kibana : Retrieving the cert to use when generating secrets for the logging components] *** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:46 ok: [openshift] => (item={u'name': u'ca_file', u'file': u'ca.crt'}) => { "changed": false, "content": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMyakNDQWNLZ0F3SUJBZ0lCQVRBTkJna3Foa2lHOXcwQkFRc0ZBREFlTVJ3d0dnWURWUVFERXhOc2IyZG4KYVc1bkxYTnBaMjVsY2kxMFpYTjBNQjRYRFRFM01EWXdNakl3TlRZME5Wb1hEVEl5TURZd01USXdOVFkwTmxvdwpIakVjTUJvR0ExVUVBeE1UYkc5bloybHVaeTF6YVdkdVpYSXRkR1Z6ZERDQ0FTSXdEUVlKS29aSWh2Y05BUUVCCkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU5DN0dHV0dTRVlKdDZRTXVrQVJnUDhMVkJJMnp3RktDN0M4aFUyOThLb1MKMWMybHdqTldXZXdWK016QnVTbVB0RDJ3aXplcDJucHdzbDBoMjNtTGVOVTJINFJLMU8ySnpVbjJWcUtnZmxuTwovVVhidDZRQVU3ZGFmSFo5UnNPUXdEU1ZBMkEwV0ExbTYrRWtjTUc4UWFNbGhiZ01TV3N2R0JWQW41YlFaeEtzCis2QVF4KzJBbkJQOTJJc0V6OXdrRWU0NTFDbHhWU3ArRG1IbEQ5OXM2OTFxZ1RveGRsaVZCcWhFTzNzV2tuMzMKQVUySEl5Uy9iT3pHbHgwTU5NU0NQcUZIOG1nTWgvUEJUZHF4WFZsTG1rVHFpUVFQN2pmeWtxTHVDbTBMeVhTTQpVd2crUGFHVTFkSTZMNFlsZ2VqeVFnS2taeno5SFpuT1pSSUh0L2xVYnBVQ0F3RUFBYU1qTUNFd0RnWURWUjBQCkFRSC9CQVFEQWdLa01BOEdBMVVkRXdFQi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFDcysKRUpCclI1RzNGWDRPL2FvOTlzMlk2YmNSZzZKcGZpbWdSZGsybXM3UHFlajFUNDUra1Q0anFOaG5rbjhnUkhzeApMdTlGVlBmdkNzWHp4ZHI0UCs3a2VUWDdqbkFXTzY1VmR6UGxxOW9iblpsTGZCZENBdG9OdEJ1NjZ2cmo1S1VKClRVVDk1bzBvUlBFUTQ4YkZyd3g5djVHZHJjUkFMM1ZrbjVKVzZ2YnZRakRaZFZadEZtckVWRTF2SXN6ZjAzNFAKSm0vNkg1UzExZDBJVlR5YWdXSXdjeGYxWHREQkVtSnczVWN0cHBySTZTYm84YjZqR1JoWnpGMFN2NUFvc1F3egpNWmZXNHlLeURSOWxUemN6dTU3SENxVlF4dzNVVnhyb1A0Nm9MSldFdUlBYW1HOC80d2tDU05TZ3RXVEp0eDYzCnRzY214ZmtMWHA5QTZFRlZNb0E9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K", 
"encoding": "base64", "item": { "file": "ca.crt", "name": "ca_file" }, "source": "/etc/origin/logging/ca.crt" } ok: [openshift] => (item={u'name': u'kibana_internal_key', u'file': u'kibana-internal.key'}) => { "changed": false, "content": "LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBM2xXZWM0S2pqOHhIaCt0akpNUVJrWUVDdDcxSGRoV1VzYTlxVHFaNVNqUnhHNVlRClZzNk1ad2FObktLQkJ3QWwzRjBmRXJRR2tmdUp5NGdZQS9lSCt0bDdlSjhiU2hPNnNJai9pemdUdWxmVTBBN2wKSE5Oa1Eya2NRZWNyaEkxekZPZ05vbXJRMzVFRVpsVms4TTdCNDR2d2xuZnl6ZDMxQmp2TzFCbDRVajVMZFZJTApBMG1vY3ZBenF6N0c3cVZEd2VUZXRJU3hia1hxWGkrQk5jeEdJS1lXbDJ5Y3NjUXB0Qi9nSXdqZ3NyVHZlelIvCnoyUngxVzBRSlBkTHI5ZTI0RHhsbDd5aXlpbnMvNUhsWDZQY2ZsMjg4dGFPT3RpN0YwaUhxSUpxRzBVeWtXN04KNmwzaGM5cW5IcWU2WTR3VjhTQnVuSkRKeG1SWjIvVXVqYWcwU3dJREFRQUJBb0lCQUFiL3JQZzM0WXd5UXdJdApUN2Fsa1dRQ0txSzhDNWJWQVJSQzBGYmZlS3YwVUtjc3B5RUVhWGtJeE1ac2V5Rk1TT1RSN1p0NkhVYlZJelpMCjkyMlFpakJFVGxXeXRIbzFlc2Y1MkFsNjMyd2JQYkM2OTAxYi9pajlFdzJrQ0VPbzdEbDVRSXlmVGlucmQ3YjgKOHl0OVpxOFNCYVhHNnRhK0tPdGtVSk51cGRINDR3UFFvc2Y5S3VCc2YvRjRxcTVQbWk2TUkyWlZpME9nZmhqdQpVZlFJTktGNVBLZHdYNHNIOUUyMUkzaENKMm1maDlwU1RYczJqeUo2R1FmaWxTUy82R0diWElyZmtybjZtTUJECm1jZ3kvWGcycXNRY1hpbjJydTBQN3pQQjJvUDE2bVpFYUhWdWd6VUIrQXB0VTVHSjhKR2Y5azEyaFBIQ1FmcU0KeE5LaFF1a0NnWUVBOFVPL0Y0QzdJd2RRNmI1bGlNdUYySk9IWnJ2Q21tVm1PWnFxWEpnSkFVd0htMjdmZlFZeApCYjZhZy9jSC9NVHRWbWJvM2dFK1RPRnFWNzZoYXU4RVBoRjBoc05GNmZyVmlucnBFUkJQRkEyWG5vcXVNR0IvCmxjdnVFQUt1eGRiTmF5aHhRTzEzcGhZU3JBWVRSMmFlNWp6SXFreGl1WFlrSExkVzhoeDZFdjhDZ1lFQTYrbmsKaDNraDFZS0lTVTd0MHExVE9rVklRVy9rZzFoU2drMFYyUUJGZTNodmp3MmV2ek9FVlFSYlpTNW9iRlVubGQ4SwpnampjdzJtK2lBR2ZGcEZLOUpZNkk4dWprWWRSRVdFLzkzUFhuYnBYemVZM3VFWXp6dFdidE56cWtJOWRqUkNKCmp6M05kV0JkZ1RCNzRUS1lTMjVLSURaRVdYNGp1djdOT0d2MU9yVUNnWUVBbDVWUkFwdEcrSU1vT3pQODV5MjQKTXBLK2g3V0FWekZPUVBNRUJwa2ZUMGxObmtMUzkrSmorby8rMU5yb2tjL0lybmlKNXJJeFNteDJQQnJ4b0JYOApQR01MSzRDVTlLVThkWDB6NGh5MUVveFhycXpETkhIc3Qxa2hnYjJ0d1c5c01OK0FDS01xZ1pkc3M5ZzlWS2NOClB1c0J5TDJsYVpEb3I0SWhob3lOeGxFQ2dZQjU5aVF3S1Y1bGZDTXJDd1FHUzViZ1pCcnp3WDM0clR1U28zbHQKQXlmb3FoMjZiZ2NvditCazkyaXNpVzV3dXlGSTZOTU0rWXFmOTlZSmlCVVAzTE5NZVRHN2ViYXBNTFNuY0loYQpUR2dtNGNRczdSelhSbXZZUFRSUEwzcVFtNTE0cFJrSWxhSFhVYWRsZDRSRHF4MXl1YVRXdkZkZmtNZTJENjVXCndmRTRsUUtCZ1FDWUdVZllYR3B0Wk5iWHppNjB1RE1SUGpDV1hFMU84dmRKOUZCR1NvVGxrT09Kd3lCQW5BZWIKUGdhS0xXK25LelZaZmdJWWUvV211VWlnNU15WUNoUm0vWFYvS2c3R0JGNENmcGxLeHVWUWtaSW5aZHpKb1prWgovNWUwT1pUUmhkWEJxUEIxcEFLTFNNTHJvQjBZVkszY0ljanozTHlJa2IrMWNWbCtHbTAzNlE9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=", "encoding": "base64", "item": { "file": "kibana-internal.key", "name": "kibana_internal_key" }, "source": "/etc/origin/logging/kibana-internal.key" } ok: [openshift] => (item={u'name': u'kibana_internal_cert', u'file': u'kibana-internal.crt'}) => { "changed": false, "content": 
"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURUakNDQWphZ0F3SUJBZ0lCQWpBTkJna3Foa2lHOXcwQkFRc0ZBREFlTVJ3d0dnWURWUVFERXhOc2IyZG4KYVc1bkxYTnBaMjVsY2kxMFpYTjBNQjRYRFRFM01EWXdNakl3TlRZME4xb1hEVEU1TURZd01qSXdOVFkwT0ZvdwpGakVVTUJJR0ExVUVBeE1MSUd0cFltRnVZUzF2Y0hNd2dnRWlNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0SUJEd0F3CmdnRUtBb0lCQVFEZVZaNXpncU9QekVlSDYyTWt4QkdSZ1FLM3ZVZDJGWlN4cjJwT3BubEtOSEVibGhCV3pveG4KQm8yY29vRUhBQ1hjWFI4U3RBYVIrNG5MaUJnRDk0ZjYyWHQ0bnh0S0U3cXdpUCtMT0JPNlY5VFFEdVVjMDJSRAphUnhCNXl1RWpYTVU2QTJpYXREZmtRUm1WV1R3enNIamkvQ1dkL0xOM2ZVR084N1VHWGhTUGt0MVVnc0RTYWh5CjhET3JQc2J1cFVQQjVONjBoTEZ1UmVwZUw0RTF6RVlncGhhWGJKeXh4Q20wSCtBakNPQ3l0Tzk3TkgvUFpISFYKYlJBazkwdXYxN2JnUEdXWHZLTEtLZXova2VWZm85eCtYYnp5MW80NjJMc1hTSWVvZ21vYlJUS1JiczNxWGVGegoycWNlcDdwampCWHhJRzZja01uR1pGbmI5UzZOcURSTEFnTUJBQUdqZ1o0d2dac3dEZ1lEVlIwUEFRSC9CQVFECkFnV2dNQk1HQTFVZEpRUU1NQW9HQ0NzR0FRVUZCd01CTUF3R0ExVWRFd0VCL3dRQ01BQXdaZ1lEVlIwUkJGOHcKWFlJTElHdHBZbUZ1WVMxdmNIT0NMQ0JyYVdKaGJtRXRiM0J6TG5KdmRYUmxjaTVrWldaaGRXeDBMbk4yWXk1agpiSFZ6ZEdWeUxteHZZMkZzZ2hnZ2EybGlZVzVoTGpFeU55NHdMakF1TVM1NGFYQXVhVytDQm10cFltRnVZVEFOCkJna3Foa2lHOXcwQkFRc0ZBQU9DQVFFQWxNWHJxMkMxc2U2cFZNTC9DQUF6UmJ4dDRRWWo2eXpKdzVjcW5WN0MKZkoyeGJveFIyS1gvZ21CeGNrcXZvbitqVi8xMldZWk5zRTNIUnhTcFBEblgxL0E4Y1hlS2FJZkxMUVcvNjgwYQpwVURheStaL2Nxdm9yYXIxYmFkc05kMS9kOWF2c2hIckNndWo0dUNuUDU0aEJCd2VUYk1vOU90c0dHVEhlTUZzCms4UXN4ZGZvZm5ZeW9Jdm00SEhlQVIxeDlEL2dIMGV0b0NDckxlRituWVJOdzF5N01waG1IOGRhUlZvTTkxR2MKUE1ZOXZWZGkxRUNUWDhOT0dGRkFBM1FGTndXRkY1YmlPV0ZxMnE2M0IyL1dLdHRxODRHdHkzSkxPbVUyYUZNVgozV2V0a3VWYUtZTEdmZitDd29DcG1PT3ZvRVhIMWZEdVppbW1VblI3ODNIL3dBPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQotLS0tLUJFR0lOIENFUlRJRklDQVRFLS0tLS0KTUlJQzJqQ0NBY0tnQXdJQkFnSUJBVEFOQmdrcWhraUc5dzBCQVFzRkFEQWVNUnd3R2dZRFZRUURFeE5zYjJkbgphVzVuTFhOcFoyNWxjaTEwWlhOME1CNFhEVEUzTURZd01qSXdOVFkwTlZvWERUSXlNRFl3TVRJd05UWTBObG93CkhqRWNNQm9HQTFVRUF4TVRiRzluWjJsdVp5MXphV2R1WlhJdGRHVnpkRENDQVNJd0RRWUpLb1pJaHZjTkFRRUIKQlFBRGdnRVBBRENDQVFvQ2dnRUJBTkM3R0dXR1NFWUp0NlFNdWtBUmdQOExWQkkyendGS0M3QzhoVTI5OEtvUwoxYzJsd2pOV1dld1YrTXpCdVNtUHREMndpemVwMm5wd3NsMGgyM21MZU5VMkg0UksxTzJKelVuMlZxS2dmbG5PCi9VWGJ0NlFBVTdkYWZIWjlSc09Rd0RTVkEyQTBXQTFtNitFa2NNRzhRYU1saGJnTVNXc3ZHQlZBbjViUVp4S3MKKzZBUXgrMkFuQlA5MklzRXo5d2tFZTQ1MUNseFZTcCtEbUhsRDk5czY5MXFnVG94ZGxpVkJxaEVPM3NXa24zMwpBVTJISXlTL2JPekdseDBNTk1TQ1BxRkg4bWdNaC9QQlRkcXhYVmxMbWtUcWlRUVA3amZ5a3FMdUNtMEx5WFNNClV3ZytQYUdVMWRJNkw0WWxnZWp5UWdLa1p6ejlIWm5PWlJJSHQvbFVicFVDQXdFQUFhTWpNQ0V3RGdZRFZSMFAKQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCL3dRRk1BTUJBZjh3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUNzKwpFSkJyUjVHM0ZYNE8vYW85OXMyWTZiY1JnNkpwZmltZ1JkazJtczdQcWVqMVQ0NStrVDRqcU5obmtuOGdSSHN4Ckx1OUZWUGZ2Q3NYenhkcjRQKzdrZVRYN2puQVdPNjVWZHpQbHE5b2JuWmxMZkJkQ0F0b050QnU2NnZyajVLVUoKVFVUOTVvMG9SUEVRNDhiRnJ3eDl2NUdkcmNSQUwzVmtuNUpXNnZidlFqRFpkVlp0Rm1yRVZFMXZJc3pmMDM0UApKbS82SDVTMTFkMElWVHlhZ1dJd2N4ZjFYdERCRW1KdzNVY3RwcHJJNlNibzhiNmpHUmhaekYwU3Y1QW9zUXd6Ck1aZlc0eUt5RFI5bFR6Y3p1NTdIQ3FWUXh3M1VWeHJvUDQ2b0xKV0V1SUFhbUc4LzR3a0NTTlNndFdUSnR4NjMKdHNjbXhma0xYcDlBNkVGVk1vQT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=", "encoding": "base64", "item": { "file": "kibana-internal.crt", "name": "kibana_internal_cert" }, "source": "/etc/origin/logging/kibana-internal.crt" } ok: [openshift] => (item={u'name': u'server_tls', u'file': u'server-tls.json'}) => { "changed": false, "content": 
"Ly8gU2VlIGZvciBhdmFpbGFibGUgb3B0aW9uczogaHR0cHM6Ly9ub2RlanMub3JnL2FwaS90bHMuaHRtbCN0bHNfdGxzX2NyZWF0ZXNlcnZlcl9vcHRpb25zX3NlY3VyZWNvbm5lY3Rpb25saXN0ZW5lcgp0bHNfb3B0aW9ucyA9IHsKCWNpcGhlcnM6ICdrRUVDREg6K2tFRUNESCtTSEE6a0VESDora0VESCtTSEE6K2tFREgrQ0FNRUxMSUE6a0VDREg6K2tFQ0RIK1NIQTprUlNBOitrUlNBK1NIQTora1JTQStDQU1FTExJQTohYU5VTEw6IWVOVUxMOiFTU0x2MjohUkM0OiFERVM6IUVYUDohU0VFRDohSURFQTorM0RFUycsCglob25vckNpcGhlck9yZGVyOiB0cnVlCn0K", "encoding": "base64", "item": { "file": "server-tls.json", "name": "server_tls" }, "source": "/etc/origin/logging/server-tls.json" } TASK [openshift_logging_kibana : Set logging-kibana-ops service] *************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:57 changed: [openshift] => { "changed": true, "results": { "clusterip": "172.30.5.53", "cmd": "/bin/oc get service logging-kibana-ops -o json -n logging", "results": [ { "apiVersion": "v1", "kind": "Service", "metadata": { "creationTimestamp": "2017-06-02T20:57:45Z", "name": "logging-kibana-ops", "namespace": "logging", "resourceVersion": "1382", "selfLink": "/api/v1/namespaces/logging/services/logging-kibana-ops", "uid": "20e46a04-47d6-11e7-ab86-0e1196655f96" }, "spec": { "clusterIP": "172.30.5.53", "ports": [ { "port": 443, "protocol": "TCP", "targetPort": "oaproxy" } ], "selector": { "component": "kibana-ops", "provider": "openshift" }, "sessionAffinity": "None", "type": "ClusterIP" }, "status": { "loadBalancer": {} } } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_kibana : set_fact] ************************************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:74 [WARNING]: when statements should not include jinja2 templating delimiters such as {{ }} or {% %}. Found: {{ openshift_logging_kibana_key | trim | length > 0 }} skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_kibana : set_fact] ************************************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:79 [WARNING]: when statements should not include jinja2 templating delimiters such as {{ }} or {% %}. Found: {{ openshift_logging_kibana_cert | trim | length > 0 }} skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_kibana : set_fact] ************************************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:84 [WARNING]: when statements should not include jinja2 templating delimiters such as {{ }} or {% %}. 
Found: {{ openshift_logging_kibana_ca | trim | length > 0 }} skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_kibana : set_fact] ************************************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:89 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_kibana : Generating Kibana route template] ************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:94 ok: [openshift] => { "changed": false, "checksum": "39be8d6bed47e51cbf4368863707012ada0f7f37", "dest": "/tmp/openshift-logging-ansible-O09ZQa/templates/kibana-route.yaml", "gid": 0, "group": "root", "md5sum": "d5993d291def575e3a0f42e60eec2b3e", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 2726, "src": "/root/.ansible/tmp/ansible-tmp-1496437066.55-146718957263151/source", "state": "file", "uid": 0 } TASK [openshift_logging_kibana : Setting Kibana route] ************************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:114 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc get route logging-kibana-ops -o json -n logging", "results": [ { "apiVersion": "v1", "kind": "Route", "metadata": { "creationTimestamp": "2017-06-02T20:57:47Z", "labels": { "component": "support", "logging-infra": "support", "provider": "openshift" }, "name": "logging-kibana-ops", "namespace": "logging", "resourceVersion": "1391", "selfLink": "/oapi/v1/namespaces/logging/routes/logging-kibana-ops", "uid": "2208ab6c-47d6-11e7-ab86-0e1196655f96" }, "spec": { "host": "kibana-ops.router.default.svc.cluster.local", "tls": { "caCertificate": "-----BEGIN CERTIFICATE-----\nMIIC2jCCAcKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAeMRwwGgYDVQQDExNsb2dn\naW5nLXNpZ25lci10ZXN0MB4XDTE3MDYwMjIwNTY0NVoXDTIyMDYwMTIwNTY0Nlow\nHjEcMBoGA1UEAxMTbG9nZ2luZy1zaWduZXItdGVzdDCCASIwDQYJKoZIhvcNAQEB\nBQADggEPADCCAQoCggEBANC7GGWGSEYJt6QMukARgP8LVBI2zwFKC7C8hU298KoS\n1c2lwjNWWewV+MzBuSmPtD2wizep2npwsl0h23mLeNU2H4RK1O2JzUn2VqKgflnO\n/UXbt6QAU7dafHZ9RsOQwDSVA2A0WA1m6+EkcMG8QaMlhbgMSWsvGBVAn5bQZxKs\n+6AQx+2AnBP92IsEz9wkEe451ClxVSp+DmHlD99s691qgToxdliVBqhEO3sWkn33\nAU2HIyS/bOzGlx0MNMSCPqFH8mgMh/PBTdqxXVlLmkTqiQQP7jfykqLuCm0LyXSM\nUwg+PaGU1dI6L4YlgejyQgKkZzz9HZnOZRIHt/lUbpUCAwEAAaMjMCEwDgYDVR0P\nAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBACs+\nEJBrR5G3FX4O/ao99s2Y6bcRg6JpfimgRdk2ms7Pqej1T45+kT4jqNhnkn8gRHsx\nLu9FVPfvCsXzxdr4P+7keTX7jnAWO65VdzPlq9obnZlLfBdCAtoNtBu66vrj5KUJ\nTUT95o0oRPEQ48bFrwx9v5GdrcRAL3Vkn5JW6vbvQjDZdVZtFmrEVE1vIszf034P\nJm/6H5S11d0IVTyagWIwcxf1XtDBEmJw3UctpprI6Sbo8b6jGRhZzF0Sv5AosQwz\nMZfW4yKyDR9lTzczu57HCqVQxw3UVxroP46oLJWEuIAamG8/4wkCSNSgtWTJtx63\ntscmxfkLXp9A6EFVMoA=\n-----END CERTIFICATE-----\n", "destinationCACertificate": "-----BEGIN 
CERTIFICATE-----\nMIIC2jCCAcKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAeMRwwGgYDVQQDExNsb2dn\naW5nLXNpZ25lci10ZXN0MB4XDTE3MDYwMjIwNTY0NVoXDTIyMDYwMTIwNTY0Nlow\nHjEcMBoGA1UEAxMTbG9nZ2luZy1zaWduZXItdGVzdDCCASIwDQYJKoZIhvcNAQEB\nBQADggEPADCCAQoCggEBANC7GGWGSEYJt6QMukARgP8LVBI2zwFKC7C8hU298KoS\n1c2lwjNWWewV+MzBuSmPtD2wizep2npwsl0h23mLeNU2H4RK1O2JzUn2VqKgflnO\n/UXbt6QAU7dafHZ9RsOQwDSVA2A0WA1m6+EkcMG8QaMlhbgMSWsvGBVAn5bQZxKs\n+6AQx+2AnBP92IsEz9wkEe451ClxVSp+DmHlD99s691qgToxdliVBqhEO3sWkn33\nAU2HIyS/bOzGlx0MNMSCPqFH8mgMh/PBTdqxXVlLmkTqiQQP7jfykqLuCm0LyXSM\nUwg+PaGU1dI6L4YlgejyQgKkZzz9HZnOZRIHt/lUbpUCAwEAAaMjMCEwDgYDVR0P\nAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBACs+\nEJBrR5G3FX4O/ao99s2Y6bcRg6JpfimgRdk2ms7Pqej1T45+kT4jqNhnkn8gRHsx\nLu9FVPfvCsXzxdr4P+7keTX7jnAWO65VdzPlq9obnZlLfBdCAtoNtBu66vrj5KUJ\nTUT95o0oRPEQ48bFrwx9v5GdrcRAL3Vkn5JW6vbvQjDZdVZtFmrEVE1vIszf034P\nJm/6H5S11d0IVTyagWIwcxf1XtDBEmJw3UctpprI6Sbo8b6jGRhZzF0Sv5AosQwz\nMZfW4yKyDR9lTzczu57HCqVQxw3UVxroP46oLJWEuIAamG8/4wkCSNSgtWTJtx63\ntscmxfkLXp9A6EFVMoA=\n-----END CERTIFICATE-----\n", "insecureEdgeTerminationPolicy": "Redirect", "termination": "reencrypt" }, "to": { "kind": "Service", "name": "logging-kibana-ops", "weight": 100 }, "wildcardPolicy": "None" }, "status": { "ingress": [ { "conditions": [ { "lastTransitionTime": "2017-06-02T20:57:47Z", "status": "True", "type": "Admitted" } ], "host": "kibana-ops.router.default.svc.cluster.local", "routerName": "router", "wildcardPolicy": "None" } ] } } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_kibana : Generate proxy session] *********************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:125 ok: [openshift] => { "ansible_facts": { "session_secret": "Bx5w94XZhCWS2DOaGVPGNY5UktNeKo7NdYU7NiSJZVSGloV3l1JLFCIAPGAV2YawzebLmsdRDMmr90TdCmWjC8Q1AUO7hN6Gg2jIPym7u4NShQxWWIfvYicJmxeBAeynz5O9oLF3Jl952CLFEpC8igOiCZHrS7Bf8v8XH1ydrIEKbQYQUMVbjVNxbfIzsDObpws1RxSf" }, "changed": false } TASK [openshift_logging_kibana : Generate oauth client secret] ***************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:132 ok: [openshift] => { "ansible_facts": { "oauth_secret": "Vg595zvOOeIz3OagFdCUDooKCuwTVeesSVSqnAEJoBJJsAj1yYEnC7gY2gALrZkK" }, "changed": false } TASK [openshift_logging_kibana : Create oauth-client template] ***************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:138 changed: [openshift] => { "changed": true, "checksum": "5eae3867f48b174378db848330d848418ca029fd", "dest": "/tmp/openshift-logging-ansible-O09ZQa/templates/oauth-client.yml", "gid": 0, "group": "root", "md5sum": "8e5b904d0ac261f6e66b4b393a2f372f", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 332, "src": "/root/.ansible/tmp/ansible-tmp-1496437068.76-130586645430283/source", "state": "file", "uid": 0 } TASK [openshift_logging_kibana : Set kibana-proxy oauth-client] **************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:146 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc get oauthclient kibana-proxy -o json -n logging", "results": [ { "apiVersion": "v1", "kind": "OAuthClient", "metadata": { "creationTimestamp": "2017-06-02T20:57:37Z", "labels": { "logging-infra": "support" }, "name": "kibana-proxy", "resourceVersion": "1398", "selfLink": "/oapi/v1/oauthclients/kibana-proxy", "uid": "1bcdbc02-47d6-11e7-ab86-0e1196655f96" 
}, "redirectURIs": [ "https://kibana-ops.router.default.svc.cluster.local" ], "scopeRestrictions": [ { "literals": [ "user:info", "user:check-access", "user:list-projects" ] } ], "secret": "Vg595zvOOeIz3OagFdCUDooKCuwTVeesSVSqnAEJoBJJsAj1yYEnC7gY2gALrZkK" } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_kibana : Set Kibana secret] **************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:157 ok: [openshift] => { "changed": false, "results": { "apiVersion": "v1", "data": { "ca": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMyakNDQWNLZ0F3SUJBZ0lCQVRBTkJna3Foa2lHOXcwQkFRc0ZBREFlTVJ3d0dnWURWUVFERXhOc2IyZG4KYVc1bkxYTnBaMjVsY2kxMFpYTjBNQjRYRFRFM01EWXdNakl3TlRZME5Wb1hEVEl5TURZd01USXdOVFkwTmxvdwpIakVjTUJvR0ExVUVBeE1UYkc5bloybHVaeTF6YVdkdVpYSXRkR1Z6ZERDQ0FTSXdEUVlKS29aSWh2Y05BUUVCCkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU5DN0dHV0dTRVlKdDZRTXVrQVJnUDhMVkJJMnp3RktDN0M4aFUyOThLb1MKMWMybHdqTldXZXdWK016QnVTbVB0RDJ3aXplcDJucHdzbDBoMjNtTGVOVTJINFJLMU8ySnpVbjJWcUtnZmxuTwovVVhidDZRQVU3ZGFmSFo5UnNPUXdEU1ZBMkEwV0ExbTYrRWtjTUc4UWFNbGhiZ01TV3N2R0JWQW41YlFaeEtzCis2QVF4KzJBbkJQOTJJc0V6OXdrRWU0NTFDbHhWU3ArRG1IbEQ5OXM2OTFxZ1RveGRsaVZCcWhFTzNzV2tuMzMKQVUySEl5Uy9iT3pHbHgwTU5NU0NQcUZIOG1nTWgvUEJUZHF4WFZsTG1rVHFpUVFQN2pmeWtxTHVDbTBMeVhTTQpVd2crUGFHVTFkSTZMNFlsZ2VqeVFnS2taeno5SFpuT1pSSUh0L2xVYnBVQ0F3RUFBYU1qTUNFd0RnWURWUjBQCkFRSC9CQVFEQWdLa01BOEdBMVVkRXdFQi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFDcysKRUpCclI1RzNGWDRPL2FvOTlzMlk2YmNSZzZKcGZpbWdSZGsybXM3UHFlajFUNDUra1Q0anFOaG5rbjhnUkhzeApMdTlGVlBmdkNzWHp4ZHI0UCs3a2VUWDdqbkFXTzY1VmR6UGxxOW9iblpsTGZCZENBdG9OdEJ1NjZ2cmo1S1VKClRVVDk1bzBvUlBFUTQ4YkZyd3g5djVHZHJjUkFMM1ZrbjVKVzZ2YnZRakRaZFZadEZtckVWRTF2SXN6ZjAzNFAKSm0vNkg1UzExZDBJVlR5YWdXSXdjeGYxWHREQkVtSnczVWN0cHBySTZTYm84YjZqR1JoWnpGMFN2NUFvc1F3egpNWmZXNHlLeURSOWxUemN6dTU3SENxVlF4dzNVVnhyb1A0Nm9MSldFdUlBYW1HOC80d2tDU05TZ3RXVEp0eDYzCnRzY214ZmtMWHA5QTZFRlZNb0E9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K", "cert": 
"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURSVENDQWkyZ0F3SUJBZ0lCQXpBTkJna3Foa2lHOXcwQkFRVUZBREFlTVJ3d0dnWURWUVFERXhOc2IyZG4KYVc1bkxYTnBaMjVsY2kxMFpYTjBNQjRYRFRFM01EWXdNakl3TlRZMU1Wb1hEVEU1TURZd01qSXdOVFkxTVZvdwpSakVRTUE0R0ExVUVDZ3dIVEc5bloybHVaekVTTUJBR0ExVUVDd3dKVDNCbGJsTm9hV1owTVI0d0hBWURWUVFECkRCVnplWE4wWlcwdWJHOW5aMmx1Wnk1cmFXSmhibUV3Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXcKZ2dFS0FvSUJBUURiek9GWDJBZGZEc2ozeHZmTjlQRk5GVlZINWdFUWUvS2Z6cTZwdlVNNEtEQ2ZkbWhIcmRuRwppNTdjandDWXlUS3NHSTNGZGQ2R0NkODZBUEdSVXhvY3FESXNzZUVUT3JsKzFCeGFlVVBZbmlrQitCS05ZajhNCkEwZ2hVbUJTZHRXZ0k3TkU4SlFYamtKTWhnNmZxcXJKeG5OU2JPdWRvUWZiRk5zam1WZm1TU1hSTmsvcmRXbHcKRkdEQTJOcUxUUnBuN0ptOXNteFd6SjRYTEExYU5GWDB3VVd6Wmo0YTZyd1R1NUFqcHNNTHNBZ3V0eHJSLzI5OAorNXdqV3lMR3p2dTNlV2Yra1dTcnJkMjlMd1BVYktLMEFPeUw0ZzdTTG9TZWNlUXNiR0htTlBKM3g1bTU0TENFCjU0VFFhS2xIUFZoWS84MWNNOXdJK2xlWFR3ODRWWDVaQWdNQkFBR2paakJrTUE0R0ExVWREd0VCL3dRRUF3SUYKb0RBSkJnTlZIUk1FQWpBQU1CMEdBMVVkSlFRV01CUUdDQ3NHQVFVRkJ3TUJCZ2dyQmdFRkJRY0RBakFkQmdOVgpIUTRFRmdRVWFPdVQzTzFkd3lXUWhsaUptaXVLV3laZ1dDa3dDUVlEVlIwakJBSXdBREFOQmdrcWhraUc5dzBCCkFRVUZBQU9DQVFFQW5LelFvb1dvV09LWTU3dk9Qb1RPVXdkcU1SL2R4L0o5Z1YzeHB0OWxoOThZUTFWSlRXdnUKRmVBaDNCVGlORTNyTmR0OE8zNUhGQnRKRXBvMEhzWWZYanlWbldCSzYvaWxuMldPaDlwT1RkQnljSEtKWkNiZQo4bFg4RlFPMXkzb3oyM0dScmhuZG1vLzJ1ZnhmcFRSWGNhTHkrVTdBZWJtNnM5K1JydUhFSGNwYjJtVjRjQnhwCjNhY2RURGgrMzE3MFRFZVpaazljZS85SUxYY09QWU1xUWF6L3JDSERVWXpUc1Y0cjduMnlQc0djMFQzaldHWjAKWTkvS0JNY2hQbnJ4T05tV1d2S1NKUkxGNDMwVFo4OWxoVVRDWkxEaHhvUFlrNWc0aU9raERmaTAyZlVRSHJYcwpXa0VDTXIrN2JiK3hROVdDS2FTdE1zOVgya09UNVVLZWVRPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=", "key": "LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2d0lCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktrd2dnU2xBZ0VBQW9JQkFRRGJ6T0ZYMkFkZkRzajMKeHZmTjlQRk5GVlZINWdFUWUvS2Z6cTZwdlVNNEtEQ2ZkbWhIcmRuR2k1N2Nqd0NZeVRLc0dJM0ZkZDZHQ2Q4NgpBUEdSVXhvY3FESXNzZUVUT3JsKzFCeGFlVVBZbmlrQitCS05ZajhNQTBnaFVtQlNkdFdnSTdORThKUVhqa0pNCmhnNmZxcXJKeG5OU2JPdWRvUWZiRk5zam1WZm1TU1hSTmsvcmRXbHdGR0RBMk5xTFRScG43Sm05c214V3pKNFgKTEExYU5GWDB3VVd6Wmo0YTZyd1R1NUFqcHNNTHNBZ3V0eHJSLzI5OCs1d2pXeUxHenZ1M2VXZitrV1NycmQyOQpMd1BVYktLMEFPeUw0ZzdTTG9TZWNlUXNiR0htTlBKM3g1bTU0TENFNTRUUWFLbEhQVmhZLzgxY005d0krbGVYClR3ODRWWDVaQWdNQkFBRUNnZ0VCQUlkbHp3NmcyZkdabHZ6alNUVkxCUFg2QlQyMEZzWER4TExpeTc5dUFpRnUKeUgwQ29Mdy9BTjhJbFFUQzVwZzNvSXBZMmNSZ2xvSTFSSmhqaW10K0tLQ3NqN1B1bzNxSCswcUFlVExXYm8vYQo1ZUg3b2RvTVFsQXhHVmJGZXVaeG82anhOUFpyeUo0MkdPc3d6WU5YeTd0ZUR4NGdVSWdhY1U5b3FwRmtYYnhTCkFEeCs3d3ZtUFJxOUtLbFBScjUxR2lEMmh5WkUwSnZQT0JqdDlWZzNUS24wUjdhdENmdnA5Nm0wdXdjQ29tbk4KbXFBSlBYSnRHVXN2dUg1RHYzNDdXWjRLcXd1NnhPVkp5UkhTZ2dzY3VKZkFESlBFZldpZ0R0cHRMWEUrWXBtRApxeW9XMUZETTBUK1FyeG41STBlMythOWRnZjNiU3JjZEhyb3hTMllzZmdFQ2dZRUErL2tWWGNLNTAzS29VL0NOCkpxUVNhNEJVUnFJeThoSS9WSkx0M2ZnMDJmRXlkdzh1cll6YUxYVEsvVWsyYmxUcklyR3ZKT0FVWm8yTFpaVGIKakgwVFIyVHFTNFhBL1cvWEhpUkhYT3FQY29iekxNaExiWnArYTVKR1lXSnRYd0ZTR2tMbEFkbnF4RjhJVElvNQpROU5xaUFqUmJYeVcvSWxKalpLdXQxTkJrcGtDZ1lFQTMxQXFqK1FNc0l3Rlh2eHhoVy9NNGNLMFJQL2pmM2F1CkdOalhIcnh3TCtTSDROM0hEaCtXdnVFZEpjUjdQY3pyTlFqV0NKUFgyU2NWQjJXM2VUVWMySkQwV1FJNmpweTEKZGhZZTROUjZDRzdMaUVlaFp2MzJDMTkwZEZKM1JEdjdDQ2tBZFNwaFJBY3JHYjVYdm1BenhqUUV5d2ZuSXl0YQpsb3R3bHlocVljRUNnWUVBMDRlMkFqSjVJaVA5WUFwdjFPS2tmQThOc1FaMTBuYXpKK0w1UWdFZkRWL0pSOTQ5CkI0RlpvQk9PWGJoYXM2RWlqTXV5QnpqK3AyRm9odXpDcTF4TkZRQ0pHTUcrMUlSUmlZSlhUbyt6d1NlOWVmamsKS2Ewck9FOWlPbHNSQ2xMbmhCaG9mSGRlK1YvMmJac1VtL1llVnZsZ0o1UVNoUXNUN29BWG9OdUtEdkVDZ1lBNQpXcDJUMXo1ckdZdEhtZzZOOXVqb0V0bTUzdjdPL2V3NDlYaEtySnNqc2M0ME1zR3RIdS9ZbG5pbCtwQ3NqclRhCktpck9pU29tMjZMTEE1VGJ6SWhjRnQ2cS9hZU1lVE1oNFF5Tk1nVWxwVThnOFVUQzd2Y0NkTUcwSG5vRFRHUnMKOUJ
ycC9MaCtnRmpSZzlHRlU2LzRkK1BEUVlSYnhBYkFJNUFIUXBvUUFRS0JnUURPd0p6cHZNR1Fza3RuTmQ5MgpNcWdnQzdybVBsZlYvRWVOQlZ3NzVlQVkvdEdPLzZla0o2SGI0M2grRGpTdE9pZ3R3QVdkakpuUWgrbmVIellCClJkK0dFR1FqOWIySHNVeDFSQ2dXRkJaeFZtYnBMK2lzRFg0NVhBN051a3FWRUc1cW5SbTRlVzhIUUVDUklBVWIKUE5zZ2c2Vm90Q04wUTcxb29nUG9pSy9TUlE9PQotLS0tLUVORCBQUklWQVRFIEtFWS0tLS0tCg==" }, "kind": "Secret", "metadata": { "creationTimestamp": null, "name": "logging-kibana" }, "type": "Opaque" }, "state": "present" } TASK [openshift_logging_kibana : Set Kibana Proxy secret] ********************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:171 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc replace -f /tmp/logging-kibana-proxy -n logging", "results": "", "returncode": 0 }, "state": "present" } TASK [openshift_logging_kibana : Generate Kibana DC template] ****************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:197 changed: [openshift] => { "changed": true, "checksum": "97bfc8c404a0d33020462ef28906037294e6a8c1", "dest": "/tmp/openshift-logging-ansible-O09ZQa/templates/kibana-dc.yaml", "gid": 0, "group": "root", "md5sum": "c3a1f25b26f801b1db001dbe941b00f2", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 3761, "src": "/root/.ansible/tmp/ansible-tmp-1496437072.33-248009638161972/source", "state": "file", "uid": 0 } TASK [openshift_logging_kibana : Set Kibana DC] ******************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:216 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc get dc logging-kibana-ops -o json -n logging", "results": [ { "apiVersion": "v1", "kind": "DeploymentConfig", "metadata": { "creationTimestamp": "2017-06-02T20:57:53Z", "generation": 2, "labels": { "component": "kibana-ops", "logging-infra": "kibana", "provider": "openshift" }, "name": "logging-kibana-ops", "namespace": "logging", "resourceVersion": "1416", "selfLink": "/oapi/v1/namespaces/logging/deploymentconfigs/logging-kibana-ops", "uid": "2552a4db-47d6-11e7-ab86-0e1196655f96" }, "spec": { "replicas": 1, "selector": { "component": "kibana-ops", "logging-infra": "kibana", "provider": "openshift" }, "strategy": { "activeDeadlineSeconds": 21600, "resources": {}, "rollingParams": { "intervalSeconds": 1, "maxSurge": "25%", "maxUnavailable": "25%", "timeoutSeconds": 600, "updatePeriodSeconds": 1 }, "type": "Rolling" }, "template": { "metadata": { "creationTimestamp": null, "labels": { "component": "kibana-ops", "logging-infra": "kibana", "provider": "openshift" }, "name": "logging-kibana-ops" }, "spec": { "containers": [ { "env": [ { "name": "ES_HOST", "value": "logging-es-ops" }, { "name": "ES_PORT", "value": "9200" }, { "name": "KIBANA_MEMORY_LIMIT", "valueFrom": { "resourceFieldRef": { "containerName": "kibana", "divisor": "0", "resource": "limits.memory" } } } ], "image": "172.30.101.10:5000/logging/logging-kibana:latest", "imagePullPolicy": "Always", "name": "kibana", "readinessProbe": { "exec": { "command": [ "/usr/share/kibana/probe/readiness.sh" ] }, "failureThreshold": 3, "initialDelaySeconds": 5, "periodSeconds": 5, "successThreshold": 1, "timeoutSeconds": 4 }, "resources": { "limits": { "memory": "736Mi" } }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/etc/kibana/keys", "name": "kibana", "readOnly": true } ] }, { "env": [ { 
"name": "OAP_BACKEND_URL", "value": "http://localhost:5601" }, { "name": "OAP_AUTH_MODE", "value": "oauth2" }, { "name": "OAP_TRANSFORM", "value": "user_header,token_header" }, { "name": "OAP_OAUTH_ID", "value": "kibana-proxy" }, { "name": "OAP_MASTER_URL", "value": "https://kubernetes.default.svc.cluster.local" }, { "name": "OAP_PUBLIC_MASTER_URL", "value": "https://172.18.8.225:8443" }, { "name": "OAP_LOGOUT_REDIRECT", "value": "https://172.18.8.225:8443/console/logout" }, { "name": "OAP_MASTER_CA_FILE", "value": "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt" }, { "name": "OAP_DEBUG", "value": "False" }, { "name": "OAP_OAUTH_SECRET_FILE", "value": "/secret/oauth-secret" }, { "name": "OAP_SERVER_CERT_FILE", "value": "/secret/server-cert" }, { "name": "OAP_SERVER_KEY_FILE", "value": "/secret/server-key" }, { "name": "OAP_SERVER_TLS_FILE", "value": "/secret/server-tls.json" }, { "name": "OAP_SESSION_SECRET_FILE", "value": "/secret/session-secret" }, { "name": "OCP_AUTH_PROXY_MEMORY_LIMIT", "valueFrom": { "resourceFieldRef": { "containerName": "kibana-proxy", "divisor": "0", "resource": "limits.memory" } } } ], "image": "172.30.101.10:5000/logging/logging-auth-proxy:latest", "imagePullPolicy": "Always", "name": "kibana-proxy", "ports": [ { "containerPort": 3000, "name": "oaproxy", "protocol": "TCP" } ], "resources": { "limits": { "memory": "96Mi" } }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/secret", "name": "kibana-proxy", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "aggregated-logging-kibana", "serviceAccountName": "aggregated-logging-kibana", "terminationGracePeriodSeconds": 30, "volumes": [ { "name": "kibana", "secret": { "defaultMode": 420, "secretName": "logging-kibana" } }, { "name": "kibana-proxy", "secret": { "defaultMode": 420, "secretName": "logging-kibana-proxy" } } ] } }, "test": false, "triggers": [ { "type": "ConfigChange" } ] }, "status": { "availableReplicas": 0, "conditions": [ { "lastTransitionTime": "2017-06-02T20:57:53Z", "lastUpdateTime": "2017-06-02T20:57:53Z", "message": "Deployment config does not have minimum availability.", "status": "False", "type": "Available" }, { "lastTransitionTime": "2017-06-02T20:57:53Z", "lastUpdateTime": "2017-06-02T20:57:53Z", "message": "replication controller \"logging-kibana-ops-1\" is waiting for pod \"logging-kibana-ops-1-deploy\" to run", "status": "Unknown", "type": "Progressing" } ], "details": { "causes": [ { "type": "ConfigChange" } ], "message": "config change" }, "latestVersion": 1, "observedGeneration": 2, "replicas": 0, "unavailableReplicas": 0, "updatedReplicas": 0 } } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_kibana : Delete temp directory] ************************ task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:228 ok: [openshift] => { "changed": false, "path": "/tmp/openshift-logging-ansible-O09ZQa", "state": "absent" } TASK [openshift_logging : include_role] **************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:192 statically included: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_curator/tasks/determine_version.yaml TASK [openshift_logging_curator : fail] **************************************** task path: 
/tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_curator/tasks/determine_version.yaml:3 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_curator : set_fact] ************************************ task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_curator/tasks/determine_version.yaml:7 ok: [openshift] => { "ansible_facts": { "curator_version": "3_5" }, "changed": false } TASK [openshift_logging_curator : set_fact] ************************************ task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_curator/tasks/determine_version.yaml:12 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_curator : fail] **************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_curator/tasks/determine_version.yaml:15 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_curator : Create temp directory for doing work in] ***** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:5 ok: [openshift] => { "changed": false, "cmd": [ "mktemp", "-d", "/tmp/openshift-logging-ansible-XXXXXX" ], "delta": "0:00:00.003758", "end": "2017-06-02 16:57:55.419167", "rc": 0, "start": "2017-06-02 16:57:55.415409" } STDOUT: /tmp/openshift-logging-ansible-WQwtIJ TASK [openshift_logging_curator : set_fact] ************************************ task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:10 ok: [openshift] => { "ansible_facts": { "tempdir": "/tmp/openshift-logging-ansible-WQwtIJ" }, "changed": false } TASK [openshift_logging_curator : Create templates subdirectory] *************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:14 ok: [openshift] => { "changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/tmp/openshift-logging-ansible-WQwtIJ/templates", "secontext": "unconfined_u:object_r:user_tmp_t:s0", "size": 6, "state": "directory", "uid": 0 } TASK [openshift_logging_curator : Create Curator service account] ************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:24 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_curator : Create Curator service account] ************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:32 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc get sa aggregated-logging-curator -o json -n logging", "results": [ { "apiVersion": "v1", "imagePullSecrets": [ { "name": "aggregated-logging-curator-dockercfg-43c1z" } ], "kind": "ServiceAccount", "metadata": { "creationTimestamp": "2017-06-02T20:57:56Z", "name": "aggregated-logging-curator", "namespace": "logging", "resourceVersion": "1438", "selfLink": "/api/v1/namespaces/logging/serviceaccounts/aggregated-logging-curator", "uid": "275cfb10-47d6-11e7-ab86-0e1196655f96" }, "secrets": [ { "name": "aggregated-logging-curator-token-1hm3x" }, { "name": "aggregated-logging-curator-dockercfg-43c1z" } ] } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_curator : copy] **************************************** task path: 
/tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:41 ok: [openshift] => { "changed": false, "checksum": "9008efd9a8892dcc42c28c6dfb6708527880a6d8", "dest": "/tmp/openshift-logging-ansible-WQwtIJ/curator.yml", "gid": 0, "group": "root", "md5sum": "5498c5fd98f3dd06e34b20eb1f55dc12", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 320, "src": "/root/.ansible/tmp/ansible-tmp-1496437077.21-53689560021629/source", "state": "file", "uid": 0 } TASK [openshift_logging_curator : copy] **************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:47 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_curator : Set Curator configmap] *********************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:53 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc get configmap logging-curator -o json -n logging", "results": [ { "apiVersion": "v1", "data": { "config.yaml": "# Logging example curator config file\n\n# uncomment and use this to override the defaults from env vars\n#.defaults:\n# delete:\n# days: 30\n# runhour: 0\n# runminute: 0\n\n# to keep ops logs for a different duration:\n#.operations:\n# delete:\n# weeks: 8\n\n# example for a normal project\n#myapp:\n# delete:\n# weeks: 1\n" }, "kind": "ConfigMap", "metadata": { "creationTimestamp": "2017-06-02T20:57:58Z", "name": "logging-curator", "namespace": "logging", "resourceVersion": "1453", "selfLink": "/api/v1/namespaces/logging/configmaps/logging-curator", "uid": "284bd050-47d6-11e7-ab86-0e1196655f96" } } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_curator : Set Curator secret] ************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:62 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc secrets new logging-curator ca=/etc/origin/logging/ca.crt key=/etc/origin/logging/system.logging.curator.key cert=/etc/origin/logging/system.logging.curator.crt -n logging", "results": "", "returncode": 0 }, "state": "present" } TASK [openshift_logging_curator : set_fact] ************************************ task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:75 ok: [openshift] => { "ansible_facts": { "curator_component": "curator", "curator_name": "logging-curator" }, "changed": false } TASK [openshift_logging_curator : Generate Curator deploymentconfig] *********** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:81 ok: [openshift] => { "changed": false, "checksum": "334c90c24ae536c11d386bad7ae4d0db89ac04d4", "dest": "/tmp/openshift-logging-ansible-WQwtIJ/templates/curator-dc.yaml", "gid": 0, "group": "root", "md5sum": "4f479345c597506483e49393be64a15f", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 2340, "src": "/root/.ansible/tmp/ansible-tmp-1496437079.85-113582078502884/source", "state": "file", "uid": 0 } TASK [openshift_logging_curator : Set Curator DC] ****************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:99 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc get dc logging-curator -o json -n logging", "results": [ { 
"apiVersion": "v1", "kind": "DeploymentConfig", "metadata": { "creationTimestamp": "2017-06-02T20:58:01Z", "generation": 2, "labels": { "component": "curator", "logging-infra": "curator", "provider": "openshift" }, "name": "logging-curator", "namespace": "logging", "resourceVersion": "1476", "selfLink": "/oapi/v1/namespaces/logging/deploymentconfigs/logging-curator", "uid": "29fdb07b-47d6-11e7-ab86-0e1196655f96" }, "spec": { "replicas": 1, "selector": { "component": "curator", "logging-infra": "curator", "provider": "openshift" }, "strategy": { "activeDeadlineSeconds": 21600, "recreateParams": { "timeoutSeconds": 600 }, "resources": {}, "rollingParams": { "intervalSeconds": 1, "maxSurge": "25%", "maxUnavailable": "25%", "timeoutSeconds": 600, "updatePeriodSeconds": 1 }, "type": "Recreate" }, "template": { "metadata": { "creationTimestamp": null, "labels": { "component": "curator", "logging-infra": "curator", "provider": "openshift" }, "name": "logging-curator" }, "spec": { "containers": [ { "env": [ { "name": "K8S_HOST_URL", "value": "https://kubernetes.default.svc.cluster.local" }, { "name": "ES_HOST", "value": "logging-es" }, { "name": "ES_PORT", "value": "9200" }, { "name": "ES_CLIENT_CERT", "value": "/etc/curator/keys/cert" }, { "name": "ES_CLIENT_KEY", "value": "/etc/curator/keys/key" }, { "name": "ES_CA", "value": "/etc/curator/keys/ca" }, { "name": "CURATOR_DEFAULT_DAYS", "value": "30" }, { "name": "CURATOR_RUN_HOUR", "value": "0" }, { "name": "CURATOR_RUN_MINUTE", "value": "0" }, { "name": "CURATOR_RUN_TIMEZONE", "value": "UTC" }, { "name": "CURATOR_SCRIPT_LOG_LEVEL", "value": "INFO" }, { "name": "CURATOR_LOG_LEVEL", "value": "ERROR" } ], "image": "172.30.101.10:5000/logging/logging-curator:latest", "imagePullPolicy": "Always", "name": "curator", "resources": { "limits": { "cpu": "100m" } }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/etc/curator/keys", "name": "certs", "readOnly": true }, { "mountPath": "/etc/curator/settings", "name": "config", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "aggregated-logging-curator", "serviceAccountName": "aggregated-logging-curator", "terminationGracePeriodSeconds": 30, "volumes": [ { "name": "certs", "secret": { "defaultMode": 420, "secretName": "logging-curator" } }, { "configMap": { "defaultMode": 420, "name": "logging-curator" }, "name": "config" } ] } }, "test": false, "triggers": [ { "type": "ConfigChange" } ] }, "status": { "availableReplicas": 0, "conditions": [ { "lastTransitionTime": "2017-06-02T20:58:01Z", "lastUpdateTime": "2017-06-02T20:58:01Z", "message": "Deployment config does not have minimum availability.", "status": "False", "type": "Available" }, { "lastTransitionTime": "2017-06-02T20:58:01Z", "lastUpdateTime": "2017-06-02T20:58:01Z", "message": "replication controller \"logging-curator-1\" is waiting for pod \"logging-curator-1-deploy\" to run", "status": "Unknown", "type": "Progressing" } ], "details": { "causes": [ { "type": "ConfigChange" } ], "message": "config change" }, "latestVersion": 1, "observedGeneration": 2, "replicas": 0, "unavailableReplicas": 0, "updatedReplicas": 0 } } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_curator : Delete temp directory] *********************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:109 ok: [openshift] => { 
"changed": false, "path": "/tmp/openshift-logging-ansible-WQwtIJ", "state": "absent" } TASK [openshift_logging : include_role] **************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:204 statically included: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_curator/tasks/determine_version.yaml TASK [openshift_logging_curator : fail] **************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_curator/tasks/determine_version.yaml:3 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_curator : set_fact] ************************************ task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_curator/tasks/determine_version.yaml:7 ok: [openshift] => { "ansible_facts": { "curator_version": "3_5" }, "changed": false } TASK [openshift_logging_curator : set_fact] ************************************ task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_curator/tasks/determine_version.yaml:12 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_curator : fail] **************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_curator/tasks/determine_version.yaml:15 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_curator : Create temp directory for doing work in] ***** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:5 ok: [openshift] => { "changed": false, "cmd": [ "mktemp", "-d", "/tmp/openshift-logging-ansible-XXXXXX" ], "delta": "0:00:00.002769", "end": "2017-06-02 16:58:04.254474", "rc": 0, "start": "2017-06-02 16:58:04.251705" } STDOUT: /tmp/openshift-logging-ansible-XjPEbW TASK [openshift_logging_curator : set_fact] ************************************ task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:10 ok: [openshift] => { "ansible_facts": { "tempdir": "/tmp/openshift-logging-ansible-XjPEbW" }, "changed": false } TASK [openshift_logging_curator : Create templates subdirectory] *************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:14 ok: [openshift] => { "changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/tmp/openshift-logging-ansible-XjPEbW/templates", "secontext": "unconfined_u:object_r:user_tmp_t:s0", "size": 6, "state": "directory", "uid": 0 } TASK [openshift_logging_curator : Create Curator service account] ************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:24 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_curator : Create Curator service account] ************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:32 ok: [openshift] => { "changed": false, "results": { "cmd": "/bin/oc get sa aggregated-logging-curator -o json -n logging", "results": [ { "apiVersion": "v1", "imagePullSecrets": [ { "name": "aggregated-logging-curator-dockercfg-43c1z" } ], "kind": "ServiceAccount", "metadata": { "creationTimestamp": "2017-06-02T20:57:56Z", "name": 
"aggregated-logging-curator", "namespace": "logging", "resourceVersion": "1438", "selfLink": "/api/v1/namespaces/logging/serviceaccounts/aggregated-logging-curator", "uid": "275cfb10-47d6-11e7-ab86-0e1196655f96" }, "secrets": [ { "name": "aggregated-logging-curator-token-1hm3x" }, { "name": "aggregated-logging-curator-dockercfg-43c1z" } ] } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_curator : copy] **************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:41 ok: [openshift] => { "changed": false, "checksum": "9008efd9a8892dcc42c28c6dfb6708527880a6d8", "dest": "/tmp/openshift-logging-ansible-XjPEbW/curator.yml", "gid": 0, "group": "root", "md5sum": "5498c5fd98f3dd06e34b20eb1f55dc12", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 320, "src": "/root/.ansible/tmp/ansible-tmp-1496437085.17-5781074052560/source", "state": "file", "uid": 0 } TASK [openshift_logging_curator : copy] **************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:47 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_curator : Set Curator configmap] *********************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:53 ok: [openshift] => { "changed": false, "results": { "cmd": "/bin/oc get configmap logging-curator -o json -n logging", "results": [ { "apiVersion": "v1", "data": { "config.yaml": "# Logging example curator config file\n\n# uncomment and use this to override the defaults from env vars\n#.defaults:\n# delete:\n# days: 30\n# runhour: 0\n# runminute: 0\n\n# to keep ops logs for a different duration:\n#.operations:\n# delete:\n# weeks: 8\n\n# example for a normal project\n#myapp:\n# delete:\n# weeks: 1\n" }, "kind": "ConfigMap", "metadata": { "creationTimestamp": "2017-06-02T20:57:58Z", "name": "logging-curator", "namespace": "logging", "resourceVersion": "1453", "selfLink": "/api/v1/namespaces/logging/configmaps/logging-curator", "uid": "284bd050-47d6-11e7-ab86-0e1196655f96" } } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_curator : Set Curator secret] ************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:62 ok: [openshift] => { "changed": false, "results": { "apiVersion": "v1", "data": { "ca": 
"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMyakNDQWNLZ0F3SUJBZ0lCQVRBTkJna3Foa2lHOXcwQkFRc0ZBREFlTVJ3d0dnWURWUVFERXhOc2IyZG4KYVc1bkxYTnBaMjVsY2kxMFpYTjBNQjRYRFRFM01EWXdNakl3TlRZME5Wb1hEVEl5TURZd01USXdOVFkwTmxvdwpIakVjTUJvR0ExVUVBeE1UYkc5bloybHVaeTF6YVdkdVpYSXRkR1Z6ZERDQ0FTSXdEUVlKS29aSWh2Y05BUUVCCkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU5DN0dHV0dTRVlKdDZRTXVrQVJnUDhMVkJJMnp3RktDN0M4aFUyOThLb1MKMWMybHdqTldXZXdWK016QnVTbVB0RDJ3aXplcDJucHdzbDBoMjNtTGVOVTJINFJLMU8ySnpVbjJWcUtnZmxuTwovVVhidDZRQVU3ZGFmSFo5UnNPUXdEU1ZBMkEwV0ExbTYrRWtjTUc4UWFNbGhiZ01TV3N2R0JWQW41YlFaeEtzCis2QVF4KzJBbkJQOTJJc0V6OXdrRWU0NTFDbHhWU3ArRG1IbEQ5OXM2OTFxZ1RveGRsaVZCcWhFTzNzV2tuMzMKQVUySEl5Uy9iT3pHbHgwTU5NU0NQcUZIOG1nTWgvUEJUZHF4WFZsTG1rVHFpUVFQN2pmeWtxTHVDbTBMeVhTTQpVd2crUGFHVTFkSTZMNFlsZ2VqeVFnS2taeno5SFpuT1pSSUh0L2xVYnBVQ0F3RUFBYU1qTUNFd0RnWURWUjBQCkFRSC9CQVFEQWdLa01BOEdBMVVkRXdFQi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFDcysKRUpCclI1RzNGWDRPL2FvOTlzMlk2YmNSZzZKcGZpbWdSZGsybXM3UHFlajFUNDUra1Q0anFOaG5rbjhnUkhzeApMdTlGVlBmdkNzWHp4ZHI0UCs3a2VUWDdqbkFXTzY1VmR6UGxxOW9iblpsTGZCZENBdG9OdEJ1NjZ2cmo1S1VKClRVVDk1bzBvUlBFUTQ4YkZyd3g5djVHZHJjUkFMM1ZrbjVKVzZ2YnZRakRaZFZadEZtckVWRTF2SXN6ZjAzNFAKSm0vNkg1UzExZDBJVlR5YWdXSXdjeGYxWHREQkVtSnczVWN0cHBySTZTYm84YjZqR1JoWnpGMFN2NUFvc1F3egpNWmZXNHlLeURSOWxUemN6dTU3SENxVlF4dzNVVnhyb1A0Nm9MSldFdUlBYW1HOC80d2tDU05TZ3RXVEp0eDYzCnRzY214ZmtMWHA5QTZFRlZNb0E9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K", "cert": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURSakNDQWk2Z0F3SUJBZ0lCQkRBTkJna3Foa2lHOXcwQkFRVUZBREFlTVJ3d0dnWURWUVFERXhOc2IyZG4KYVc1bkxYTnBaMjVsY2kxMFpYTjBNQjRYRFRFM01EWXdNakl3TlRZMU1sb1hEVEU1TURZd01qSXdOVFkxTWxvdwpSekVRTUE0R0ExVUVDZ3dIVEc5bloybHVaekVTTUJBR0ExVUVDd3dKVDNCbGJsTm9hV1owTVI4d0hRWURWUVFECkRCWnplWE4wWlcwdWJHOW5aMmx1Wnk1amRYSmhkRzl5TUlJQklqQU5CZ2txaGtpRzl3MEJBUUVGQUFPQ0FROEEKTUlJQkNnS0NBUUVBd2NkWitCcGNtUXZzd3RQVTJZZmhSdlNHQ3NJNXE3bGhrMVlXRTlGQ0FPWXU0U1AzKzQxMQpGcW56ZGZMS3J2UVo1WnNHd0FsNFc1ZGFDOElOOW9FZXpCZm5kZ2VtMUM4MDNla3FPYzQrYWdGN21xYmlWc2trClJVTmJadXNScWd1MElvNStrWU91WEZ0QjhSMVBNWW5VRFNqYUpIMFZWM1lIUkpnMUlyMTB5bTRERXM1aHBLWDgKQWhqS2JoTW9Vc3lYUTlrTkp2TW9pbHBHUm80OWgxdjlpSDRvUkVJMFRJbkpTNWt6emswVTBRSzJiZlI2MUVWNQpkMU96dGpDL3JkUkdDN25XZmtjMUMzTEFScitMOCtrRVdiRkpQRFRmM3NrTEp5WDRoMDVtbDI3VVBUeFhPOHR1ClNUc0h3N0taUlh2RTdBRGNSenVHNE5QSWFiamVFN3RxWHdJREFRQUJvMll3WkRBT0JnTlZIUThCQWY4RUJBTUMKQmFBd0NRWURWUjBUQkFJd0FEQWRCZ05WSFNVRUZqQVVCZ2dyQmdFRkJRY0RBUVlJS3dZQkJRVUhBd0l3SFFZRApWUjBPQkJZRUZBZ0lNaC9aSG14WXVRbWhvVDZiTWhET0gySDNNQWtHQTFVZEl3UUNNQUF3RFFZSktvWklodmNOCkFRRUZCUUFEZ2dFQkFFUVVoNTIvSnVwS2gwSEVQQkRsNkFVdkFmNytXUDlkWEN2NGVRZ0hMcmFZc09NdmQ1a0cKanp2dll1ZkkwQVhpa0pwdHJUTjQ3VVdaWWYzaEx1ck40cUNZNzUrOEoxdGxGKzJiYkFqTDN5VEtna0c1VXpqRwprdTlwT29oc2FHb2kzWmFHMEFRNnRBMzhTOGpMYzFqSmszblZXbWw1SldGNTN5aWhnOUFNcXVxcndsL203a3VNCk9ZcG9lRmZpRWJ6QXMwL095bVZyV0hVU0hqblkwWnp4TXlGdUNqU0xPWkZ5eWkycWdQMXFDQ3l1b0diT2wwWWcKd016NXQrUVNVUi8rRFRzYXR2aUJZaDI5WHVhUytQK0ozVERPVml6U3NORU9kRG9udU51Tm1lNDFCVDlyaDR4dQpYVUlZSzYvdzBud1EwU3ppOVdrOHVRWUZ1NFlGak05K1BTUT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=", "key": 
"LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2d0lCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktrd2dnU2xBZ0VBQW9JQkFRREJ4MW40R2x5WkMrekMKMDlUWmgrRkc5SVlLd2ptcnVXR1RWaFlUMFVJQTVpN2hJL2Y3alhVV3FmTjE4c3F1OUJubG13YkFDWGhibDFvTAp3ZzMyZ1I3TUYrZDJCNmJVTHpUZDZTbzV6ajVxQVh1YXB1Sld5U1JGUTF0bTZ4R3FDN1Fpam42Umc2NWNXMEh4CkhVOHhpZFFOS05va2ZSVlhkZ2RFbURVaXZYVEtiZ01Tem1Ha3Bmd0NHTXB1RXloU3pKZEQyUTBtOHlpS1drWkcKamoySFcvMklmaWhFUWpSTWljbExtVFBPVFJUUkFyWnQ5SHJVUlhsM1U3TzJNTCt0MUVZTHVkWitSelVMY3NCRwp2NHZ6NlFSWnNVazhOTi9leVFzbkpmaUhUbWFYYnRROVBGYzd5MjVKT3dmRHNwbEZlOFRzQU54SE80YmcwOGhwCnVONFR1MnBmQWdNQkFBRUNnZ0VBU1pJWWRIdjl3QldvOUdkY25xSmFRNGcvQkFLdHhxY0JodURlVFBQYjdWOTMKV1A5QS9YNjlmN2RTdWV0T1RKSmM2ckdySkduMENrSXlhOWhuV0xtNUtaL0J2eXcwaU1iTGVaMDI3TytDL3RoRgpSM2dvNHU1SEdRenp2T1Z1dFhMd0YxYW1jelRkbEM4Sm9ET1NoNnBlbWdoeW1mdnJpR05GYXlPbXVPUFpYYWt4Clhmajd4WXd3eWJpYi9lOUd3VUVmSU40WEh0d01UbzhnbFVzVEJ0Y0sxQmVtM3dpWEF4THE3b01mcmJ0U2VFbTIKUEs4V1A0TWc1VnVKanZvdWFTS3dNUWI0Ync4QlI4L0NXNW91UW5FYmtHbXF2QUwya1VWcndRREM1V1NjVlNCQwplK09QeEZha1RUeTk4UlpXSlI1VFpHSlV6bVEzT3dob2dFVjZmZHVOZ1FLQmdRRHdmU21mWG92dXhEdDNZNEdyClBKeW4rQWRGUW9Ua05UQ3cwQ2UyMlAyNmlVaTBpTjZBQVFmQ1VvWm1HR3VkclY4YzYyMTZ0c201U25Eb1lwaloKZmpNQmt3TDRNVU1lNnJ4bEZNcENoaStTUGJsRkRxaUNyWnI5czlBN3IvalRpY3FSQmhIN0hmZGFTWnJrdERmYwpHekhDSExnU0ZYZEZaOElqRnBydGppdnYvd0tCZ1FET1J1NjNOK3lzKzVlQ1Y2Mm16OFlLZW9tZDhrUmhadUthCkFYYlhLTDFhSFNvRm1KNmkvamlSSFBCVnZLNXozeGhFTy9ScmhZY2E1Ulord1Bha2ZZWU5CT1R5TXRqK2xFTkUKUWhzTUdqVlhMY0ZmUy9iaXhzRlAvaG9MWWVmY0txRDBJdFcxaG5BMTg3aWgxbitmcElNclFjeURKNXlRL1p3aQpkNzZUeVphRm9RS0JnUURJZk0veVdQUDN2Z2lGWTZONmlqRmZwdHNJMW9mTGFMeUs2ejN3cGI0QmdPbm4rQ0xtCk8vV24vdnlrcUw4dTJKWnVtYWJQb3d0Uk9jb2ZNZk9UZmk0dnBjdlg5ZG1yTUs2VzVsb29VNDNkTVMvL2JsVDEKZkoyMTIrNUJsRmF3cERNSDdET1pVa1lnTXpTNmJiUVQvMmZnRitrc3lsQ0F3QnVNL1E5ejlBNlZLUUtCZ1FDcwp4YVMwT1ZjM0hCT3V6SmxhR3JVWm1jRWlWZ1VJUUJDVVJaMndZU01ZRTAxYkdwWGtsMkh4eVhkVG1KSFY0NHFECnZHUGdteHFxWUM4VFE3UlIyZ0VwYm13RW9LbzNzUjhXVVBndWp0VVdpL0JuVFUwZ2JMRUZ1eU05WFdmQ2RNSVQKT2dvZDNOaW5sOWVSVmdQWFJ3ZkdkM3BBY0RFbkVBUnlxakVwdjdNZmdRS0JnUUNScnRrREtLdEZFWXBBb3hYawoyZDQ3bmFPbkhxdmVJaytUUUZFaVkza2JIUHRQSnovcWxoYUl0LzF2NGpaaUIvUmdHYWxSZTZJazdVQmsvdGZkClR6ZWpUaHRnbVlOQmFLN1plbTBtYm96U3dHdVpRYTkrVy81Qm15VTJQWVQrSmpGN1hTVTgwSE93MGxpVzEvZkQKTXN0RFMyVzVwcFFNbkxJZERhWkRQc1JlTXc9PQotLS0tLUVORCBQUklWQVRFIEtFWS0tLS0tCg==" }, "kind": "Secret", "metadata": { "creationTimestamp": null, "name": "logging-curator" }, "type": "Opaque" }, "state": "present" } TASK [openshift_logging_curator : set_fact] ************************************ task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:75 ok: [openshift] => { "ansible_facts": { "curator_component": "curator-ops", "curator_name": "logging-curator-ops" }, "changed": false } TASK [openshift_logging_curator : Generate Curator deploymentconfig] *********** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:81 ok: [openshift] => { "changed": false, "checksum": "1a976b2d317cec0defbdc684d388a06e8ef18375", "dest": "/tmp/openshift-logging-ansible-XjPEbW/templates/curator-dc.yaml", "gid": 0, "group": "root", "md5sum": "5ddbd1c817f833e7805c1e256c5871f6", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 2364, "src": "/root/.ansible/tmp/ansible-tmp-1496437086.88-24325146681278/source", "state": "file", "uid": 0 } TASK [openshift_logging_curator : Set Curator DC] ****************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:99 changed: 
[openshift] => { "changed": true, "results": { "cmd": "/bin/oc get dc logging-curator-ops -o json -n logging", "results": [ { "apiVersion": "v1", "kind": "DeploymentConfig", "metadata": { "creationTimestamp": "2017-06-02T20:58:07Z", "generation": 2, "labels": { "component": "curator-ops", "logging-infra": "curator", "provider": "openshift" }, "name": "logging-curator-ops", "namespace": "logging", "resourceVersion": "1520", "selfLink": "/oapi/v1/namespaces/logging/deploymentconfigs/logging-curator-ops", "uid": "2df98396-47d6-11e7-ab86-0e1196655f96" }, "spec": { "replicas": 1, "selector": { "component": "curator-ops", "logging-infra": "curator", "provider": "openshift" }, "strategy": { "activeDeadlineSeconds": 21600, "recreateParams": { "timeoutSeconds": 600 }, "resources": {}, "rollingParams": { "intervalSeconds": 1, "maxSurge": "25%", "maxUnavailable": "25%", "timeoutSeconds": 600, "updatePeriodSeconds": 1 }, "type": "Recreate" }, "template": { "metadata": { "creationTimestamp": null, "labels": { "component": "curator-ops", "logging-infra": "curator", "provider": "openshift" }, "name": "logging-curator-ops" }, "spec": { "containers": [ { "env": [ { "name": "K8S_HOST_URL", "value": "https://kubernetes.default.svc.cluster.local" }, { "name": "ES_HOST", "value": "logging-es-ops" }, { "name": "ES_PORT", "value": "9200" }, { "name": "ES_CLIENT_CERT", "value": "/etc/curator/keys/cert" }, { "name": "ES_CLIENT_KEY", "value": "/etc/curator/keys/key" }, { "name": "ES_CA", "value": "/etc/curator/keys/ca" }, { "name": "CURATOR_DEFAULT_DAYS", "value": "30" }, { "name": "CURATOR_RUN_HOUR", "value": "0" }, { "name": "CURATOR_RUN_MINUTE", "value": "0" }, { "name": "CURATOR_RUN_TIMEZONE", "value": "UTC" }, { "name": "CURATOR_SCRIPT_LOG_LEVEL", "value": "INFO" }, { "name": "CURATOR_LOG_LEVEL", "value": "ERROR" } ], "image": "172.30.101.10:5000/logging/logging-curator:latest", "imagePullPolicy": "Always", "name": "curator", "resources": { "limits": { "cpu": "100m" } }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/etc/curator/keys", "name": "certs", "readOnly": true }, { "mountPath": "/etc/curator/settings", "name": "config", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "aggregated-logging-curator", "serviceAccountName": "aggregated-logging-curator", "terminationGracePeriodSeconds": 30, "volumes": [ { "name": "certs", "secret": { "defaultMode": 420, "secretName": "logging-curator" } }, { "configMap": { "defaultMode": 420, "name": "logging-curator" }, "name": "config" } ] } }, "test": false, "triggers": [ { "type": "ConfigChange" } ] }, "status": { "availableReplicas": 0, "conditions": [ { "lastTransitionTime": "2017-06-02T20:58:07Z", "lastUpdateTime": "2017-06-02T20:58:07Z", "message": "Deployment config does not have minimum availability.", "status": "False", "type": "Available" }, { "lastTransitionTime": "2017-06-02T20:58:07Z", "lastUpdateTime": "2017-06-02T20:58:07Z", "message": "replication controller \"logging-curator-ops-1\" is waiting for pod \"logging-curator-ops-1-deploy\" to run", "status": "Unknown", "type": "Progressing" } ], "details": { "causes": [ { "type": "ConfigChange" } ], "message": "config change" }, "latestVersion": 1, "observedGeneration": 2, "replicas": 0, "unavailableReplicas": 0, "updatedReplicas": 0 } } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_curator : Delete temp 
directory] *********************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:109 ok: [openshift] => { "changed": false, "path": "/tmp/openshift-logging-ansible-XjPEbW", "state": "absent" } TASK [openshift_logging : include_role] **************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:223 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : include_role] **************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:238 statically included: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_fluentd/tasks/determine_version.yaml TASK [openshift_logging_fluentd : fail] **************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:2 [WARNING]: when statements should not include jinja2 templating delimiters such as {{ }} or {% %}. Found: {{ openshift_logging_fluentd_nodeselector.keys() | count }} > 1 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_fluentd : fail] **************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:6 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_fluentd : fail] **************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:10 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_fluentd : fail] **************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:14 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_fluentd : fail] **************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_fluentd/tasks/determine_version.yaml:3 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_fluentd : set_fact] ************************************ task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_fluentd/tasks/determine_version.yaml:7 ok: [openshift] => { "ansible_facts": { "fluentd_version": "3_5" }, "changed": false } TASK [openshift_logging_fluentd : set_fact] ************************************ task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_fluentd/tasks/determine_version.yaml:12 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_fluentd : fail] **************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_fluentd/tasks/determine_version.yaml:15 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_fluentd : set_fact] ************************************ task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:20 skipping: 
[openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_fluentd : set_fact] ************************************ task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:26 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_fluentd : Create temp directory for doing work in] ***** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:33 ok: [openshift] => { "changed": false, "cmd": [ "mktemp", "-d", "/tmp/openshift-logging-ansible-XXXXXX" ], "delta": "0:00:00.003741", "end": "2017-06-02 16:58:11.791275", "rc": 0, "start": "2017-06-02 16:58:11.787534" } STDOUT: /tmp/openshift-logging-ansible-jntWjs TASK [openshift_logging_fluentd : set_fact] ************************************ task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:38 ok: [openshift] => { "ansible_facts": { "tempdir": "/tmp/openshift-logging-ansible-jntWjs" }, "changed": false } TASK [openshift_logging_fluentd : Create templates subdirectory] *************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:41 ok: [openshift] => { "changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/tmp/openshift-logging-ansible-jntWjs/templates", "secontext": "unconfined_u:object_r:user_tmp_t:s0", "size": 6, "state": "directory", "uid": 0 } TASK [openshift_logging_fluentd : Create Fluentd service account] ************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:51 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_fluentd : Create Fluentd service account] ************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:59 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc get sa aggregated-logging-fluentd -o json -n logging", "results": [ { "apiVersion": "v1", "imagePullSecrets": [ { "name": "aggregated-logging-fluentd-dockercfg-lmm53" } ], "kind": "ServiceAccount", "metadata": { "creationTimestamp": "2017-06-02T20:58:12Z", "name": "aggregated-logging-fluentd", "namespace": "logging", "resourceVersion": "1560", "selfLink": "/api/v1/namespaces/logging/serviceaccounts/aggregated-logging-fluentd", "uid": "30ecaf46-47d6-11e7-ab86-0e1196655f96" }, "secrets": [ { "name": "aggregated-logging-fluentd-token-n8mnr" }, { "name": "aggregated-logging-fluentd-dockercfg-lmm53" } ] } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_fluentd : Set privileged permissions for Fluentd] ****** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:68 changed: [openshift] => { "changed": true, "present": "present", "results": { "cmd": "/bin/oc adm policy add-scc-to-user privileged system:serviceaccount:logging:aggregated-logging-fluentd -n logging", "results": "", "returncode": 0 } } TASK [openshift_logging_fluentd : Set cluster-reader permissions for Fluentd] *** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:77 changed: [openshift] => { "changed": true, "present": "present", "results": { "cmd": "/bin/oc adm policy add-cluster-role-to-user cluster-reader 
system:serviceaccount:logging:aggregated-logging-fluentd -n logging", "results": "", "returncode": 0 } } TASK [openshift_logging_fluentd : template] ************************************ task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:86 ok: [openshift] => { "changed": false, "checksum": "a8c8596f5fc2c5dd7c8d33d244af17a2555be086", "dest": "/tmp/openshift-logging-ansible-jntWjs/fluent.conf", "gid": 0, "group": "root", "md5sum": "579698b48ffce6276ee0e8d5ac71a338", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 1301, "src": "/root/.ansible/tmp/ansible-tmp-1496437094.68-127528118217344/source", "state": "file", "uid": 0 } TASK [openshift_logging_fluentd : copy] **************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:94 ok: [openshift] => { "changed": false, "checksum": "b3e75eddc4a0765edc77da092384c0c6f95440e1", "dest": "/tmp/openshift-logging-ansible-jntWjs/fluentd-throttle-config.yaml", "gid": 0, "group": "root", "md5sum": "25871b8e0a9bedc166a6029872a6c336", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 133, "src": "/root/.ansible/tmp/ansible-tmp-1496437095.11-131684773197004/source", "state": "file", "uid": 0 } TASK [openshift_logging_fluentd : copy] **************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:100 ok: [openshift] => { "changed": false, "checksum": "a3aa36da13f3108aa4ad5b98d4866007b44e9798", "dest": "/tmp/openshift-logging-ansible-jntWjs/secure-forward.conf", "gid": 0, "group": "root", "md5sum": "1084b00c427f4fa48dfc66d6ad6555d4", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 563, "src": "/root/.ansible/tmp/ansible-tmp-1496437095.41-197854426649785/source", "state": "file", "uid": 0 } TASK [openshift_logging_fluentd : copy] **************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:107 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_fluentd : copy] **************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:113 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_fluentd : copy] **************************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:119 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_fluentd : Set Fluentd configmap] *********************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:125 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc get configmap logging-fluentd -o json -n logging", "results": [ { "apiVersion": "v1", "data": { "fluent.conf": "# This file is the fluentd configuration entrypoint. 
Edit with care.\n\n@include configs.d/openshift/system.conf\n\n# In each section below, pre- and post- includes don't include anything initially;\n# they exist to enable future additions to openshift conf as needed.\n\n## sources\n## ordered so that syslog always runs last...\n@include configs.d/openshift/input-pre-*.conf\n@include configs.d/dynamic/input-docker-*.conf\n@include configs.d/dynamic/input-syslog-*.conf\n@include configs.d/openshift/input-post-*.conf\n##\n\n<label @INGRESS>\n## filters\n @include configs.d/openshift/filter-pre-*.conf\n @include configs.d/openshift/filter-retag-journal.conf\n @include configs.d/openshift/filter-k8s-meta.conf\n @include configs.d/openshift/filter-kibana-transform.conf\n @include configs.d/openshift/filter-k8s-flatten-hash.conf\n @include configs.d/openshift/filter-k8s-record-transform.conf\n @include configs.d/openshift/filter-syslog-record-transform.conf\n @include configs.d/openshift/filter-viaq-data-model.conf\n @include configs.d/openshift/filter-post-*.conf\n##\n\n## matches\n @include configs.d/openshift/output-pre-*.conf\n @include configs.d/openshift/output-operations.conf\n @include configs.d/openshift/output-applications.conf\n # no post - applications.conf matches everything left\n##\n</label>\n", "secure-forward.conf": "# @type secure_forward\n\n# self_hostname ${HOSTNAME}\n# shared_key <SECRET_STRING>\n\n# secure yes\n# enable_strict_verification yes\n\n# ca_cert_path /etc/fluent/keys/your_ca_cert\n# ca_private_key_path /etc/fluent/keys/your_private_key\n # for private CA secret key\n# ca_private_key_passphrase passphrase\n\n# <server>\n # or IP\n# host server.fqdn.example.com\n# port 24284\n# </server>\n# <server>\n # ip address to connect\n# host 203.0.113.8\n # specify hostlabel for FQDN verification if ipaddress is used for host\n# hostlabel server.fqdn.example.com\n# </server>\n", "throttle-config.yaml": "# Logging example fluentd throttling config file\n\n#example-project:\n# read_lines_limit: 10\n#\n#.operations:\n# read_lines_limit: 100\n" }, "kind": "ConfigMap", "metadata": { "creationTimestamp": "2017-06-02T20:58:16Z", "name": "logging-fluentd", "namespace": "logging", "resourceVersion": "1579", "selfLink": "/api/v1/namespaces/logging/configmaps/logging-fluentd", "uid": "333ca4ce-47d6-11e7-ab86-0e1196655f96" } } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_fluentd : Set logging-fluentd secret] ****************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:137 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc secrets new logging-fluentd ca=/etc/origin/logging/ca.crt key=/etc/origin/logging/system.logging.fluentd.key cert=/etc/origin/logging/system.logging.fluentd.crt -n logging", "results": "", "returncode": 0 }, "state": "present" } TASK [openshift_logging_fluentd : Generate logging-fluentd daemonset definition] *** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:154 ok: [openshift] => { "changed": false, "checksum": "df6febc374e9d91ba1ba88cd30553306354e1f0f", "dest": "/tmp/openshift-logging-ansible-jntWjs/templates/logging-fluentd.yaml", "gid": 0, "group": "root", "md5sum": "33839fdc09132833bc8c845700dabce8", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 3414, "src": "/root/.ansible/tmp/ansible-tmp-1496437097.52-54771168642684/source", "state": "file", "uid": 0 } TASK [openshift_logging_fluentd : Set logging-fluentd daemonset] 
*************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:172 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc get daemonset logging-fluentd -o json -n logging", "results": [ { "apiVersion": "extensions/v1beta1", "kind": "DaemonSet", "metadata": { "creationTimestamp": "2017-06-02T20:58:18Z", "generation": 1, "labels": { "component": "fluentd", "logging-infra": "fluentd", "provider": "openshift" }, "name": "logging-fluentd", "namespace": "logging", "resourceVersion": "1584", "selfLink": "/apis/extensions/v1beta1/namespaces/logging/daemonsets/logging-fluentd", "uid": "344489c9-47d6-11e7-ab86-0e1196655f96" }, "spec": { "selector": { "matchLabels": { "component": "fluentd", "provider": "openshift" } }, "template": { "metadata": { "creationTimestamp": null, "labels": { "component": "fluentd", "logging-infra": "fluentd", "provider": "openshift" }, "name": "fluentd-elasticsearch" }, "spec": { "containers": [ { "env": [ { "name": "K8S_HOST_URL", "value": "https://kubernetes.default.svc.cluster.local" }, { "name": "ES_HOST", "value": "logging-es" }, { "name": "ES_PORT", "value": "9200" }, { "name": "ES_CLIENT_CERT", "value": "/etc/fluent/keys/cert" }, { "name": "ES_CLIENT_KEY", "value": "/etc/fluent/keys/key" }, { "name": "ES_CA", "value": "/etc/fluent/keys/ca" }, { "name": "OPS_HOST", "value": "logging-es-ops" }, { "name": "OPS_PORT", "value": "9200" }, { "name": "OPS_CLIENT_CERT", "value": "/etc/fluent/keys/cert" }, { "name": "OPS_CLIENT_KEY", "value": "/etc/fluent/keys/key" }, { "name": "OPS_CA", "value": "/etc/fluent/keys/ca" }, { "name": "ES_COPY", "value": "false" }, { "name": "USE_JOURNAL", "value": "true" }, { "name": "JOURNAL_SOURCE" }, { "name": "JOURNAL_READ_FROM_HEAD", "value": "false" } ], "image": "172.30.101.10:5000/logging/logging-fluentd:latest", "imagePullPolicy": "Always", "name": "fluentd-elasticsearch", "resources": { "limits": { "cpu": "100m", "memory": "512Mi" } }, "securityContext": { "privileged": true }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/run/log/journal", "name": "runlogjournal" }, { "mountPath": "/var/log", "name": "varlog" }, { "mountPath": "/var/lib/docker/containers", "name": "varlibdockercontainers", "readOnly": true }, { "mountPath": "/etc/fluent/configs.d/user", "name": "config", "readOnly": true }, { "mountPath": "/etc/fluent/keys", "name": "certs", "readOnly": true }, { "mountPath": "/etc/docker-hostname", "name": "dockerhostname", "readOnly": true }, { "mountPath": "/etc/localtime", "name": "localtime", "readOnly": true }, { "mountPath": "/etc/sysconfig/docker", "name": "dockercfg", "readOnly": true }, { "mountPath": "/etc/docker", "name": "dockerdaemoncfg", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "nodeSelector": { "logging-infra-fluentd": "true" }, "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "aggregated-logging-fluentd", "serviceAccountName": "aggregated-logging-fluentd", "terminationGracePeriodSeconds": 30, "volumes": [ { "hostPath": { "path": "/run/log/journal" }, "name": "runlogjournal" }, { "hostPath": { "path": "/var/log" }, "name": "varlog" }, { "hostPath": { "path": "/var/lib/docker/containers" }, "name": "varlibdockercontainers" }, { "configMap": { "defaultMode": 420, "name": "logging-fluentd" }, "name": "config" }, { "name": "certs", "secret": { "defaultMode": 420, "secretName": "logging-fluentd" } }, { "hostPath": { 
"path": "/etc/hostname" }, "name": "dockerhostname" }, { "hostPath": { "path": "/etc/localtime" }, "name": "localtime" }, { "hostPath": { "path": "/etc/sysconfig/docker" }, "name": "dockercfg" }, { "hostPath": { "path": "/etc/docker" }, "name": "dockerdaemoncfg" } ] } }, "templateGeneration": 1, "updateStrategy": { "rollingUpdate": { "maxUnavailable": 1 }, "type": "RollingUpdate" } }, "status": { "currentNumberScheduled": 0, "desiredNumberScheduled": 0, "numberMisscheduled": 0, "numberReady": 0, "observedGeneration": 1 } } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_fluentd : Retrieve list of Fluentd hosts] ************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:183 ok: [openshift] => { "changed": false, "results": { "cmd": "/bin/oc get node -o json -n default", "results": [ { "apiVersion": "v1", "items": [ { "apiVersion": "v1", "kind": "Node", "metadata": { "annotations": { "volumes.kubernetes.io/controller-managed-attach-detach": "true" }, "creationTimestamp": "2017-06-02T20:44:24Z", "labels": { "beta.kubernetes.io/arch": "amd64", "beta.kubernetes.io/os": "linux", "kubernetes.io/hostname": "172.18.8.225" }, "name": "172.18.8.225", "namespace": "", "resourceVersion": "1581", "selfLink": "/api/v1/nodes/172.18.8.225", "uid": "4336ec61-47d4-11e7-ab86-0e1196655f96" }, "spec": { "externalID": "172.18.8.225", "providerID": "aws:////i-0433d36afa0b59316" }, "status": { "addresses": [ { "address": "172.18.8.225", "type": "LegacyHostIP" }, { "address": "172.18.8.225", "type": "InternalIP" }, { "address": "172.18.8.225", "type": "Hostname" } ], "allocatable": { "cpu": "4", "memory": "7129288Ki", "pods": "40" }, "capacity": { "cpu": "4", "memory": "7231688Ki", "pods": "40" }, "conditions": [ { "lastHeartbeatTime": "2017-06-02T20:58:17Z", "lastTransitionTime": "2017-06-02T20:44:24Z", "message": "kubelet has sufficient disk space available", "reason": "KubeletHasSufficientDisk", "status": "False", "type": "OutOfDisk" }, { "lastHeartbeatTime": "2017-06-02T20:58:17Z", "lastTransitionTime": "2017-06-02T20:44:24Z", "message": "kubelet has sufficient memory available", "reason": "KubeletHasSufficientMemory", "status": "False", "type": "MemoryPressure" }, { "lastHeartbeatTime": "2017-06-02T20:58:17Z", "lastTransitionTime": "2017-06-02T20:44:24Z", "message": "kubelet has no disk pressure", "reason": "KubeletHasNoDiskPressure", "status": "False", "type": "DiskPressure" }, { "lastHeartbeatTime": "2017-06-02T20:58:17Z", "lastTransitionTime": "2017-06-02T20:44:24Z", "message": "kubelet is posting ready status", "reason": "KubeletReady", "status": "True", "type": "Ready" } ], "daemonEndpoints": { "kubeletEndpoint": { "Port": 10250 } }, "images": [ { "names": [ "openshift/origin-gitserver:b8a4a2c", "openshift/origin-gitserver:latest" ], "sizeBytes": 1078084889 }, { "names": [ "openshift/openvswitch:b8a4a2c", "openshift/openvswitch:latest" ], "sizeBytes": 1056771547 }, { "names": [ "openshift/node:b8a4a2c", "openshift/node:latest" ], "sizeBytes": 1055090088 }, { "names": [ "openshift/origin-keepalived-ipfailover:b8a4a2c", "openshift/origin-keepalived-ipfailover:latest" ], "sizeBytes": 1020242247 }, { "names": [ "openshift/origin-haproxy-router:b8a4a2c", "openshift/origin-haproxy-router:latest" ], "sizeBytes": 1014481963 }, { "names": [ "openshift/origin-sti-builder:b8a4a2c", "openshift/origin-sti-builder:latest" ], "sizeBytes": 993535536 }, { "names": [ "openshift/origin-docker-builder:b8a4a2c", "openshift/origin-docker-builder:latest" ], 
"sizeBytes": 993535536 }, { "names": [ "openshift/origin-recycler:b8a4a2c", "openshift/origin-recycler:latest" ], "sizeBytes": 993535536 }, { "names": [ "openshift/origin:b8a4a2c", "openshift/origin:latest" ], "sizeBytes": 993535536 }, { "names": [ "openshift/origin-deployer:b8a4a2c", "openshift/origin-deployer:latest" ], "sizeBytes": 993535536 }, { "names": [ "openshift/origin-f5-router:b8a4a2c", "openshift/origin-f5-router:latest" ], "sizeBytes": 993535536 }, { "names": [ "rhel7.1:latest" ], "sizeBytes": 986201487 }, { "names": [ "docker.io/openshift/origin-release@sha256:e29efb9b91708975ea538d80a66f2da584e1476de81104a5cfefe1f4138a4fd2", "docker.io/openshift/origin-release:golang-1.7" ], "sizeBytes": 825325850 }, { "names": [ "172.30.101.10:5000/logging/logging-auth-proxy@sha256:a451b67709f0e807c662bc9c143a2c8b0bfbb553410284bceba0a6c299ac806d", "172.30.101.10:5000/logging/logging-auth-proxy:latest" ], "sizeBytes": 715520966 }, { "names": [ "openshift/dind-master:latest" ], "sizeBytes": 709532011 }, { "names": [ "openshift/dind-node:latest" ], "sizeBytes": 709528287 }, { "names": [ "docker.io/node@sha256:46db0dd19955beb87b841c30a6b9812ba626473283e84117d1c016deee5949a9", "docker.io/node:0.10.36" ], "sizeBytes": 697128386 }, { "names": [ "docker.io/openshift/origin-logging-kibana@sha256:ce0197985a74ba53c5f931ef4e086f4233f7da967732fe6cd09ac87eb8ef3b57", "docker.io/openshift/origin-logging-kibana:latest" ], "sizeBytes": 682851494 }, { "names": [ "172.30.101.10:5000/logging/logging-kibana@sha256:74264eef5990df5696e0d147506b9abe16cb2846d902f870b3235ed6dece283c", "172.30.101.10:5000/logging/logging-kibana:latest" ], "sizeBytes": 682851487 }, { "names": [ "172.30.101.10:5000/logging/logging-elasticsearch@sha256:8742bfaf3ebfafd741e285e37ad655f010d9ae36abc6079ab4827146a719f6f5", "172.30.101.10:5000/logging/logging-elasticsearch:latest" ], "sizeBytes": 623379800 }, { "names": [ "openshift/dind:latest" ], "sizeBytes": 619374911 }, { "names": [ "172.30.101.10:5000/logging/logging-fluentd@sha256:2873e48cca51154c276a4a44d3752acd37d86e6acb9931708f38a56a0ca06db6", "172.30.101.10:5000/logging/logging-fluentd:latest" ], "sizeBytes": 472182182 }, { "names": [ "openshift/origin-docker-registry:b8a4a2c", "openshift/origin-docker-registry:latest" ], "sizeBytes": 461161898 }, { "names": [ "docker.io/openshift/origin-logging-elasticsearch@sha256:b019d3224117d0da040262d89dda70b900b03f376ada0ffdfc3b2f5d72ca6209", "docker.io/openshift/origin-logging-elasticsearch:latest" ], "sizeBytes": 425428290 }, { "names": [ "172.30.101.10:5000/logging/logging-curator@sha256:d9efea6fccb1be0a5773e213b8f42adea6eae15f39b770de58a6b02a49246ad1", "172.30.101.10:5000/logging/logging-curator:latest" ], "sizeBytes": 418288236 }, { "names": [ "docker.io/openshift/origin-logging-fluentd@sha256:82ab5554786b6880995b5b0d1fecfb49d52f26afcc1478558e82e70396502e11", "docker.io/openshift/origin-logging-fluentd:latest" ], "sizeBytes": 385027863 }, { "names": [ "docker.io/openshift/base-centos7@sha256:aea292a3bddba020cde0ee83e6a45807931eb607c164ec6a3674f67039d8cd7c", "docker.io/openshift/base-centos7:latest" ], "sizeBytes": 383049978 }, { "names": [ "rhel7.2:latest" ], "sizeBytes": 377493440 }, { "names": [ "openshift/origin-egress-router:b8a4a2c", "openshift/origin-egress-router:latest" ], "sizeBytes": 364720710 }, { "names": [ "openshift/origin-base:latest" ], "sizeBytes": 363024702 }, { "names": [ "docker.io/fedora@sha256:69281ddd7b2600e5f2b17f1e12d7fba25207f459204fb2d15884f8432c479136", "docker.io/fedora:25" ], "sizeBytes": 230864375 }, { 
"names": [ "docker.io/openshift/origin-logging-curator@sha256:b7a90ccb4806591205705e4e71ad6518a8d21979b5cf2f516fc82c789dd24bbf", "docker.io/openshift/origin-logging-curator:latest" ], "sizeBytes": 224972122 }, { "names": [ "rhel7.3:latest", "rhel7:latest" ], "sizeBytes": 215403650 }, { "names": [ "registry.access.redhat.com/rhel7.2@sha256:98e6ca5d226c26e31a95cd67716afe22833c943e1926a21daf1a030906a02249", "registry.access.redhat.com/rhel7.2:latest" ], "sizeBytes": 201376319 }, { "names": [ "registry.access.redhat.com/rhel7.3@sha256:5cbb9eecfc1cfeb385012ad1962f469bf25c6bcc2999e89c74817030d12286fd", "registry.access.redhat.com/rhel7.3:latest" ], "sizeBytes": 192682716 }, { "names": [ "docker.io/centos@sha256:bba1de7c9d900a898e3cadbae040dfe8a633c06bc104a0df76ae24483e03c077", "docker.io/centos:centos7" ], "sizeBytes": 192548999 }, { "names": [ "registry.access.redhat.com/rhel7.1@sha256:1bc5a4c43bbb29a5a96a61896ff696933be3502e2f5fdc4cde02d9e101731fdd", "registry.access.redhat.com/rhel7.1:latest" ], "sizeBytes": 158229901 }, { "names": [ "openshift/hello-openshift:b8a4a2c", "openshift/hello-openshift:latest" ], "sizeBytes": 5635113 }, { "names": [ "openshift/origin-pod:b8a4a2c", "openshift/origin-pod:latest" ], "sizeBytes": 1143145 } ], "nodeInfo": { "architecture": "amd64", "bootID": "15cd60ac-cea3-42bb-b148-4ee994ca1032", "containerRuntimeVersion": "docker://1.12.6", "kernelVersion": "3.10.0-327.22.2.el7.x86_64", "kubeProxyVersion": "v1.6.1+5115d708d7", "kubeletVersion": "v1.6.1+5115d708d7", "machineID": "f9370ed252a14f73b014c1301a9b6d1b", "operatingSystem": "linux", "osImage": "Red Hat Enterprise Linux Server 7.3 (Maipo)", "systemUUID": "EC286797-52DF-2D23-968A-42C055B97B29" } } } ], "kind": "List", "metadata": {}, "resourceVersion": "", "selfLink": "" } ], "returncode": 0 }, "state": "list" } TASK [openshift_logging_fluentd : Set openshift_logging_fluentd_hosts] ********* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:190 ok: [openshift] => { "ansible_facts": { "openshift_logging_fluentd_hosts": [ "172.18.8.225" ] }, "changed": false } TASK [openshift_logging_fluentd : include] ************************************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:195 included: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_fluentd/tasks/label_and_wait.yaml for openshift TASK [openshift_logging_fluentd : Label 172.18.8.225 for Fluentd deployment] *** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_fluentd/tasks/label_and_wait.yaml:2 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc label node 172.18.8.225 logging-infra-fluentd=true --overwrite", "results": "", "returncode": 0 }, "state": "add" } TASK [openshift_logging_fluentd : command] ************************************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_fluentd/tasks/label_and_wait.yaml:10 changed: [openshift -> 127.0.0.1] => { "changed": true, "cmd": [ "sleep", "0.5" ], "delta": "0:00:01.503174", "end": "2017-06-02 16:58:21.692126", "rc": 0, "start": "2017-06-02 16:58:20.188952" } TASK [openshift_logging_fluentd : Delete temp directory] *********************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:202 ok: [openshift] => { "changed": false, "path": "/tmp/openshift-logging-ansible-jntWjs", "state": "absent" } TASK [openshift_logging : include] ********************************************* 
task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:250 included: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/update_master_config.yaml for openshift TASK [openshift_logging : include] ********************************************* task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/main.yaml:36 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Cleaning up local temp dir] ************************** task path: /tmp/tmp.mpOgoWqQmj/openhift-ansible/roles/openshift_logging/tasks/main.yaml:40 ok: [openshift -> 127.0.0.1] => { "changed": false, "path": "/tmp/openshift-logging-ansible-RfkC5G", "state": "absent" } META: ran handlers META: ran handlers PLAY [Update Master configs] *************************************************** skipping: no hosts matched PLAY RECAP ********************************************************************* localhost : ok=2 changed=0 unreachable=0 failed=0 openshift : ok=207 changed=70 unreachable=0 failed=0 /data/src/github.com/openshift/origin Running /data/src/github.com/openshift/origin/logging.sh:170: executing 'oc get pods -l component=es' expecting any result and text 'Running'; re-trying every 0.2s until completion or 180.000s... SUCCESS after 0.306s: /data/src/github.com/openshift/origin/logging.sh:170: executing 'oc get pods -l component=es' expecting any result and text 'Running'; re-trying every 0.2s until completion or 180.000s Standard output from the command: NAME READY STATUS RESTARTS AGE logging-es-data-master-i5jtydma-1-21qpf 1/1 Running 0 1m There was no error output from the command. Running /data/src/github.com/openshift/origin/logging.sh:171: executing 'oc get pods -l component=kibana' expecting any result and text 'Running'; re-trying every 0.2s until completion or 180.000s... SUCCESS after 0.268s: /data/src/github.com/openshift/origin/logging.sh:171: executing 'oc get pods -l component=kibana' expecting any result and text 'Running'; re-trying every 0.2s until completion or 180.000s Standard output from the command: NAME READY STATUS RESTARTS AGE logging-kibana-1-54cs7 2/2 Running 0 39s There was no error output from the command. Running /data/src/github.com/openshift/origin/logging.sh:172: executing 'oc get pods -l component=curator' expecting any result and text 'Running'; re-trying every 0.2s until completion or 180.000s... SUCCESS after 0.434s: /data/src/github.com/openshift/origin/logging.sh:172: executing 'oc get pods -l component=curator' expecting any result and text 'Running'; re-trying every 0.2s until completion or 180.000s Standard output from the command: NAME READY STATUS RESTARTS AGE logging-curator-1-4pfk6 1/1 Running 0 18s There was no error output from the command. Running /data/src/github.com/openshift/origin/logging.sh:175: executing 'oc get pods -l component=es-ops' expecting any result and text 'Running'; re-trying every 0.2s until completion or 180.000s... SUCCESS after 0.321s: /data/src/github.com/openshift/origin/logging.sh:175: executing 'oc get pods -l component=es-ops' expecting any result and text 'Running'; re-trying every 0.2s until completion or 180.000s Standard output from the command: NAME READY STATUS RESTARTS AGE logging-es-ops-data-master-tycs4wrj-1-70xts 1/1 Running 0 51s There was no error output from the command. 
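[editor note] The play recap above shows the Ansible install finished with no failures, and logging.sh (lines 170-177) then polls each logging component by label until its pod reports Running. A minimal sketch of that polling pattern in shell, assuming the logging project and the component labels shown in the output; the 0.2s interval and 180s cap mirror the values the harness uses:

for component in es kibana curator es-ops kibana-ops curator-ops; do
    # retry every 0.2s for up to 180s (900 iterations), matching the harness timeout
    for _ in $(seq 1 900); do
        oc get pods -n logging -l component="$component" 2>/dev/null | grep -q 'Running' && break
        sleep 0.2
    done
done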
Running /data/src/github.com/openshift/origin/logging.sh:176: executing 'oc get pods -l component=kibana-ops' expecting any result and text 'Running'; re-trying every 0.2s until completion or 180.000s... SUCCESS after 0.264s: /data/src/github.com/openshift/origin/logging.sh:176: executing 'oc get pods -l component=kibana-ops' expecting any result and text 'Running'; re-trying every 0.2s until completion or 180.000s Standard output from the command: NAME READY STATUS RESTARTS AGE logging-kibana-ops-1-zpdzf 2/2 Running 0 28s There was no error output from the command. Running /data/src/github.com/openshift/origin/logging.sh:177: executing 'oc get pods -l component=curator-ops' expecting any result and text 'Running'; re-trying every 0.2s until completion or 180.000s... SUCCESS after 0.223s: /data/src/github.com/openshift/origin/logging.sh:177: executing 'oc get pods -l component=curator-ops' expecting any result and text 'Running'; re-trying every 0.2s until completion or 180.000s Standard output from the command: NAME READY STATUS RESTARTS AGE logging-curator-ops-1-cmv6c 1/1 Running 0 14s There was no error output from the command. Running /data/src/github.com/openshift/origin/logging.sh:185: executing 'oc project logging > /dev/null' expecting success... SUCCESS after 0.213s: /data/src/github.com/openshift/origin/logging.sh:185: executing 'oc project logging > /dev/null' expecting success There was no output from the command. There was no error output from the command. /data/src/github.com/openshift/origin-aggregated-logging/hack/testing /data/src/github.com/openshift/origin --> Deploying template "logging/logging-fluentd-template-maker" for "-" to project logging logging-fluentd-template-maker --------- Template to create template for fluentd * With parameters: * MASTER_URL=https://kubernetes.default.svc.cluster.local * ES_HOST=logging-es * ES_PORT=9200 * ES_CLIENT_CERT=/etc/fluent/keys/cert * ES_CLIENT_KEY=/etc/fluent/keys/key * ES_CA=/etc/fluent/keys/ca * OPS_HOST=logging-es-ops * OPS_PORT=9200 * OPS_CLIENT_CERT=/etc/fluent/keys/cert * OPS_CLIENT_KEY=/etc/fluent/keys/key * OPS_CA=/etc/fluent/keys/ca * ES_COPY=false * ES_COPY_HOST= * ES_COPY_PORT= * ES_COPY_SCHEME=https * ES_COPY_CLIENT_CERT= * ES_COPY_CLIENT_KEY= * ES_COPY_CA= * ES_COPY_USERNAME= * ES_COPY_PASSWORD= * OPS_COPY_HOST= * OPS_COPY_PORT= * OPS_COPY_SCHEME=https * OPS_COPY_CLIENT_CERT= * OPS_COPY_CLIENT_KEY= * OPS_COPY_CA= * OPS_COPY_USERNAME= * OPS_COPY_PASSWORD= * IMAGE_PREFIX_DEFAULT=172.30.101.10:5000/logging/ * IMAGE_VERSION_DEFAULT=latest * USE_JOURNAL= * JOURNAL_SOURCE= * JOURNAL_READ_FROM_HEAD=false * USE_MUX=false * USE_MUX_CLIENT=false * MUX_ALLOW_EXTERNAL=false * BUFFER_QUEUE_LIMIT=1024 * BUFFER_SIZE_LIMIT=16777216 --> Creating resources ... template "logging-fluentd-template" created --> Success Run 'oc status' to view your app. WARNING: bridge-nf-call-ip6tables is disabled START wait_for_fluentd_to_catch_up at 2017-06-02 20:58:35.452538081+00:00 added es message 33bdff67-fda6-470e-8e74-42f95975dde8 added es-ops message 4a3a776c-f987-43ec-a16c-b39d06ba8cad good - wait_for_fluentd_to_catch_up: found 1 record project logging for 33bdff67-fda6-470e-8e74-42f95975dde8 good - wait_for_fluentd_to_catch_up: found 1 record project .operations for 4a3a776c-f987-43ec-a16c-b39d06ba8cad END wait_for_fluentd_to_catch_up took 11 seconds at 2017-06-02 20:58:46.637430561+00:00 Running /data/src/github.com/openshift/origin/logging.sh:223: executing 'oc login --username=admin --password=admin' expecting success... 
SUCCESS after 0.248s: /data/src/github.com/openshift/origin/logging.sh:223: executing 'oc login --username=admin --password=admin' expecting success Standard output from the command: Login successful. You don't have any projects. You can try to create a new project, by running oc new-project <projectname> There was no error output from the command. Running /data/src/github.com/openshift/origin/logging.sh:224: executing 'oc login --username=system:admin' expecting success... SUCCESS after 0.233s: /data/src/github.com/openshift/origin/logging.sh:224: executing 'oc login --username=system:admin' expecting success Standard output from the command: Logged into "https://172.18.8.225:8443" as "system:admin" using existing credentials. You have access to the following projects and can switch between them with 'oc project <projectname>': * default kube-public kube-system logging openshift openshift-infra Using project "default". There was no error output from the command. Running /data/src/github.com/openshift/origin/logging.sh:225: executing 'oadm policy add-cluster-role-to-user cluster-admin admin' expecting success... SUCCESS after 0.288s: /data/src/github.com/openshift/origin/logging.sh:225: executing 'oadm policy add-cluster-role-to-user cluster-admin admin' expecting success Standard output from the command: cluster role "cluster-admin" added: "admin" There was no error output from the command. Running /data/src/github.com/openshift/origin/logging.sh:226: executing 'oc login --username=loguser --password=loguser' expecting success... SUCCESS after 0.240s: /data/src/github.com/openshift/origin/logging.sh:226: executing 'oc login --username=loguser --password=loguser' expecting success Standard output from the command: Login successful. You don't have any projects. You can try to create a new project, by running oc new-project <projectname> There was no error output from the command. Running /data/src/github.com/openshift/origin/logging.sh:227: executing 'oc login --username=system:admin' expecting success... SUCCESS after 0.279s: /data/src/github.com/openshift/origin/logging.sh:227: executing 'oc login --username=system:admin' expecting success Standard output from the command: Logged into "https://172.18.8.225:8443" as "system:admin" using existing credentials. You have access to the following projects and can switch between them with 'oc project <projectname>': * default kube-public kube-system logging openshift openshift-infra Using project "default". There was no error output from the command. Running /data/src/github.com/openshift/origin/logging.sh:228: executing 'oc project logging > /dev/null' expecting success... SUCCESS after 0.234s: /data/src/github.com/openshift/origin/logging.sh:228: executing 'oc project logging > /dev/null' expecting success There was no output from the command. There was no error output from the command. Running /data/src/github.com/openshift/origin/logging.sh:229: executing 'oadm policy add-role-to-user view loguser' expecting success... SUCCESS after 0.237s: /data/src/github.com/openshift/origin/logging.sh:229: executing 'oadm policy add-role-to-user view loguser' expecting success Standard output from the command: role "view" added: "loguser" There was no error output from the command. 
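[editor note] Taken together, logging.sh:223-229 set up the two test users: admin is bound to cluster-admin, while loguser only receives the view role on the logging project. Condensed into plain commands (a sketch of the sequence above; oadm is the older spelling of oc adm used by this suite):

oc login --username=admin --password=admin
oc login --username=system:admin
oadm policy add-cluster-role-to-user cluster-admin admin
oc login --username=loguser --password=loguser
oc login --username=system:admin
oc project logging
oadm policy add-role-to-user view loguser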
Checking if Elasticsearch logging-es-data-master-i5jtydma-1-21qpf is ready { "_id": "0", "_index": ".searchguard.logging-es-data-master-i5jtydma-1-21qpf", "_shards": { "failed": 0, "successful": 1, "total": 1 }, "_type": "rolesmapping", "_version": 2, "created": false } Checking if Elasticsearch logging-es-ops-data-master-tycs4wrj-1-70xts is ready { "_id": "0", "_index": ".searchguard.logging-es-ops-data-master-tycs4wrj-1-70xts", "_shards": { "failed": 0, "successful": 1, "total": 1 }, "_type": "rolesmapping", "_version": 2, "created": false } ------------------------------------------ Test 'admin' user can access cluster stats ------------------------------------------ Running /data/src/github.com/openshift/origin/logging.sh:265: executing 'test 200 = 200' expecting success... SUCCESS after 0.010s: /data/src/github.com/openshift/origin/logging.sh:265: executing 'test 200 = 200' expecting success There was no output from the command. There was no error output from the command. ------------------------------------------ Test 'admin' user can access cluster stats for OPS cluster ------------------------------------------ Running /data/src/github.com/openshift/origin/logging.sh:274: executing 'test 200 = 200' expecting success... SUCCESS after 0.017s: /data/src/github.com/openshift/origin/logging.sh:274: executing 'test 200 = 200' expecting success There was no output from the command. There was no error output from the command. Running e2e tests Checking installation of the EFK stack... Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/rollout.sh:20: executing 'oc project logging' expecting success... SUCCESS after 0.228s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/rollout.sh:20: executing 'oc project logging' expecting success Standard output from the command: Already on project "logging" on server "https://172.18.8.225:8443". There was no error output from the command. [INFO] Checking for DeploymentConfigurations... Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/rollout.sh:24: executing 'oc get deploymentconfig logging-kibana' expecting success... SUCCESS after 0.216s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/rollout.sh:24: executing 'oc get deploymentconfig logging-kibana' expecting success Standard output from the command: NAME REVISION DESIRED CURRENT TRIGGERED BY logging-kibana 1 1 1 config There was no error output from the command. Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/rollout.sh:25: executing 'oc rollout status deploymentconfig/logging-kibana' expecting success... SUCCESS after 0.224s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/rollout.sh:25: executing 'oc rollout status deploymentconfig/logging-kibana' expecting success Standard output from the command: replication controller "logging-kibana-1" successfully rolled out There was no error output from the command. Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/rollout.sh:24: executing 'oc get deploymentconfig logging-curator' expecting success... SUCCESS after 0.217s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/rollout.sh:24: executing 'oc get deploymentconfig logging-curator' expecting success Standard output from the command: NAME REVISION DESIRED CURRENT TRIGGERED BY logging-curator 1 1 1 config There was no error output from the command. 
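[editor note] rollout.sh:24-25 apply the same two commands to every DeploymentConfig in the stack: fetch it, then wait for its rollout to complete. A sketch of that loop, using the DC names generated in this run (the two Elasticsearch names carry run-specific suffixes):

for dc in logging-kibana logging-curator logging-kibana-ops logging-curator-ops \
          logging-es-data-master-i5jtydma logging-es-ops-data-master-tycs4wrj; do
    oc get deploymentconfig "$dc"
    oc rollout status deploymentconfig/"$dc"
done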
Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/rollout.sh:25: executing 'oc rollout status deploymentconfig/logging-curator' expecting success... SUCCESS after 0.210s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/rollout.sh:25: executing 'oc rollout status deploymentconfig/logging-curator' expecting success Standard output from the command: replication controller "logging-curator-1" successfully rolled out There was no error output from the command. Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/rollout.sh:24: executing 'oc get deploymentconfig logging-kibana-ops' expecting success... SUCCESS after 0.216s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/rollout.sh:24: executing 'oc get deploymentconfig logging-kibana-ops' expecting success Standard output from the command: NAME REVISION DESIRED CURRENT TRIGGERED BY logging-kibana-ops 1 1 1 config There was no error output from the command. Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/rollout.sh:25: executing 'oc rollout status deploymentconfig/logging-kibana-ops' expecting success... SUCCESS after 0.211s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/rollout.sh:25: executing 'oc rollout status deploymentconfig/logging-kibana-ops' expecting success Standard output from the command: replication controller "logging-kibana-ops-1" successfully rolled out There was no error output from the command. Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/rollout.sh:24: executing 'oc get deploymentconfig logging-curator-ops' expecting success... SUCCESS after 0.222s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/rollout.sh:24: executing 'oc get deploymentconfig logging-curator-ops' expecting success Standard output from the command: NAME REVISION DESIRED CURRENT TRIGGERED BY logging-curator-ops 1 1 1 config There was no error output from the command. Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/rollout.sh:25: executing 'oc rollout status deploymentconfig/logging-curator-ops' expecting success... SUCCESS after 0.271s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/rollout.sh:25: executing 'oc rollout status deploymentconfig/logging-curator-ops' expecting success Standard output from the command: replication controller "logging-curator-ops-1" successfully rolled out There was no error output from the command. Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/rollout.sh:24: executing 'oc get deploymentconfig logging-es-data-master-i5jtydma' expecting success... SUCCESS after 0.278s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/rollout.sh:24: executing 'oc get deploymentconfig logging-es-data-master-i5jtydma' expecting success Standard output from the command: NAME REVISION DESIRED CURRENT TRIGGERED BY logging-es-data-master-i5jtydma 1 1 1 config There was no error output from the command. Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/rollout.sh:25: executing 'oc rollout status deploymentconfig/logging-es-data-master-i5jtydma' expecting success... 
SUCCESS after 0.265s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/rollout.sh:25: executing 'oc rollout status deploymentconfig/logging-es-data-master-i5jtydma' expecting success Standard output from the command: replication controller "logging-es-data-master-i5jtydma-1" successfully rolled out There was no error output from the command. Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/rollout.sh:24: executing 'oc get deploymentconfig logging-es-ops-data-master-tycs4wrj' expecting success... SUCCESS after 0.216s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/rollout.sh:24: executing 'oc get deploymentconfig logging-es-ops-data-master-tycs4wrj' expecting success Standard output from the command: NAME REVISION DESIRED CURRENT TRIGGERED BY logging-es-ops-data-master-tycs4wrj 1 1 1 config There was no error output from the command. Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/rollout.sh:25: executing 'oc rollout status deploymentconfig/logging-es-ops-data-master-tycs4wrj' expecting success... SUCCESS after 0.231s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/rollout.sh:25: executing 'oc rollout status deploymentconfig/logging-es-ops-data-master-tycs4wrj' expecting success Standard output from the command: replication controller "logging-es-ops-data-master-tycs4wrj-1" successfully rolled out There was no error output from the command. [INFO] Checking for Routes... Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/rollout.sh:30: executing 'oc get route logging-kibana' expecting success... SUCCESS after 0.212s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/rollout.sh:30: executing 'oc get route logging-kibana' expecting success Standard output from the command: NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD logging-kibana kibana.router.default.svc.cluster.local logging-kibana <all> reencrypt/Redirect None There was no error output from the command. Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/rollout.sh:30: executing 'oc get route logging-kibana-ops' expecting success... SUCCESS after 0.211s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/rollout.sh:30: executing 'oc get route logging-kibana-ops' expecting success Standard output from the command: NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD logging-kibana-ops kibana-ops.router.default.svc.cluster.local logging-kibana-ops <all> reencrypt/Redirect None There was no error output from the command. [INFO] Checking for Services... Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/rollout.sh:35: executing 'oc get service logging-es' expecting success... SUCCESS after 0.214s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/rollout.sh:35: executing 'oc get service logging-es' expecting success Standard output from the command: NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE logging-es 172.30.152.168 <none> 9200/TCP 1m There was no error output from the command. Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/rollout.sh:35: executing 'oc get service logging-es-cluster' expecting success... 
SUCCESS after 0.223s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/rollout.sh:35: executing 'oc get service logging-es-cluster' expecting success Standard output from the command: NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE logging-es-cluster 172.30.112.81 <none> 9300/TCP 1m There was no error output from the command. Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/rollout.sh:35: executing 'oc get service logging-kibana' expecting success... SUCCESS after 0.211s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/rollout.sh:35: executing 'oc get service logging-kibana' expecting success Standard output from the command: NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE logging-kibana 172.30.140.3 <none> 443/TCP 1m There was no error output from the command. Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/rollout.sh:35: executing 'oc get service logging-es-ops' expecting success... SUCCESS after 0.219s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/rollout.sh:35: executing 'oc get service logging-es-ops' expecting success Standard output from the command: NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE logging-es-ops 172.30.184.176 <none> 9200/TCP 1m There was no error output from the command. Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/rollout.sh:35: executing 'oc get service logging-es-ops-cluster' expecting success... SUCCESS after 0.264s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/rollout.sh:35: executing 'oc get service logging-es-ops-cluster' expecting success Standard output from the command: NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE logging-es-ops-cluster 172.30.163.59 <none> 9300/TCP 1m There was no error output from the command. Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/rollout.sh:35: executing 'oc get service logging-kibana-ops' expecting success... SUCCESS after 0.213s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/rollout.sh:35: executing 'oc get service logging-kibana-ops' expecting success Standard output from the command: NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE logging-kibana-ops 172.30.5.53 <none> 443/TCP 1m There was no error output from the command. [INFO] Checking for OAuthClients... Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/rollout.sh:40: executing 'oc get oauthclient kibana-proxy' expecting success... SUCCESS after 0.221s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/rollout.sh:40: executing 'oc get oauthclient kibana-proxy' expecting success Standard output from the command: NAME SECRET WWW-CHALLENGE REDIRECT URIS kibana-proxy Vg595zvOOeIz3OagFdCUDooKCuwTVeesSVSqnAEJoBJJsAj1yYEnC7gY2gALrZkK FALSE https://kibana-ops.router.default.svc.cluster.local There was no error output from the command. [INFO] Checking for DaemonSets... Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/rollout.sh:45: executing 'oc get daemonset logging-fluentd' expecting success... SUCCESS after 0.221s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/rollout.sh:45: executing 'oc get daemonset logging-fluentd' expecting success Standard output from the command: NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE-SELECTOR AGE logging-fluentd 1 1 1 1 1 logging-infra-fluentd=true 51s There was no error output from the command. 
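[editor note] After confirming the logging-fluentd DaemonSet exists, rollout.sh:47 (shown next) waits for it to report a ready pod via jsonpath. A minimal sketch of the same wait, assuming the logging project and the 60s retry window used below:

for _ in $(seq 1 300); do
    # .status.numberReady is the same field the jsonpath check below queries
    ready="$(oc get daemonset logging-fluentd -n logging -o jsonpath='{ .status.numberReady }')"
    [ "${ready:-0}" -ge 1 ] && break
    sleep 0.2
done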
Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/rollout.sh:47: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '1'; re-trying every 0.2s until completion or 60.000s... SUCCESS after 0.216s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/rollout.sh:47: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '1'; re-trying every 0.2s until completion or 60.000s Standard output from the command: 1 There was no error output from the command. Checking for log entry matches between ES and their sources... WARNING: bridge-nf-call-ip6tables is disabled Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:40: executing 'oc login --username=admin --password=admin' expecting success... SUCCESS after 0.246s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:40: executing 'oc login --username=admin --password=admin' expecting success Standard output from the command: Login successful. You have access to the following projects and can switch between them with 'oc project <projectname>': default kube-public kube-system * logging openshift openshift-infra Using project "logging". There was no error output from the command. Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:44: executing 'oc login --username=system:admin' expecting success... SUCCESS after 0.259s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:44: executing 'oc login --username=system:admin' expecting success Standard output from the command: Logged into "https://172.18.8.225:8443" as "system:admin" using existing credentials. You have access to the following projects and can switch between them with 'oc project <projectname>': default kube-public kube-system * logging openshift openshift-infra Using project "logging". There was no error output from the command. Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:45: executing 'oc project logging' expecting success... SUCCESS after 0.284s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:45: executing 'oc project logging' expecting success Standard output from the command: Already on project "logging" on server "https://172.18.8.225:8443". There was no error output from the command. [INFO] Testing Kibana pod logging-kibana-1-54cs7 for a successful start... Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:52: executing 'oc exec logging-kibana-1-54cs7 -c kibana -- curl -s --request HEAD --write-out '%{response_code}' http://localhost:5601/' expecting any result and text '200'; re-trying every 0.2s until completion or 600.000s... SUCCESS after 120.293s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:52: executing 'oc exec logging-kibana-1-54cs7 -c kibana -- curl -s --request HEAD --write-out '%{response_code}' http://localhost:5601/' expecting any result and text '200'; re-trying every 0.2s until completion or 600.000s Standard output from the command: 200 There was no error output from the command. 
Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:53: executing 'oc get pod logging-kibana-1-54cs7 -o jsonpath='{ .status.containerStatuses[?(@.name=="kibana")].ready }'' expecting any result and text 'true'; re-trying every 0.2s until completion or 60.000s... SUCCESS after 0.223s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:53: executing 'oc get pod logging-kibana-1-54cs7 -o jsonpath='{ .status.containerStatuses[?(@.name=="kibana")].ready }'' expecting any result and text 'true'; re-trying every 0.2s until completion or 60.000s Standard output from the command: true There was no error output from the command. Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:54: executing 'oc get pod logging-kibana-1-54cs7 -o jsonpath='{ .status.containerStatuses[?(@.name=="kibana-proxy")].ready }'' expecting any result and text 'true'; re-trying every 0.2s until completion or 60.000s... SUCCESS after 0.221s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:54: executing 'oc get pod logging-kibana-1-54cs7 -o jsonpath='{ .status.containerStatuses[?(@.name=="kibana-proxy")].ready }'' expecting any result and text 'true'; re-trying every 0.2s until completion or 60.000s Standard output from the command: true There was no error output from the command. [INFO] Testing Elasticsearch pod logging-es-data-master-i5jtydma-1-21qpf for a successful start... Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:59: executing 'curl_es 'logging-es-data-master-i5jtydma-1-21qpf' '/' -X HEAD -w '%{response_code}'' expecting any result and text '200'; re-trying every 0.2s until completion or 600.000s... SUCCESS after 0.376s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:59: executing 'curl_es 'logging-es-data-master-i5jtydma-1-21qpf' '/' -X HEAD -w '%{response_code}'' expecting any result and text '200'; re-trying every 0.2s until completion or 600.000s Standard output from the command: 200 There was no error output from the command. Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:60: executing 'oc get pod logging-es-data-master-i5jtydma-1-21qpf -o jsonpath='{ .status.containerStatuses[?(@.name=="elasticsearch")].ready }'' expecting any result and text 'true'; re-trying every 0.2s until completion or 60.000s... SUCCESS after 0.229s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:60: executing 'oc get pod logging-es-data-master-i5jtydma-1-21qpf -o jsonpath='{ .status.containerStatuses[?(@.name=="elasticsearch")].ready }'' expecting any result and text 'true'; re-trying every 0.2s until completion or 60.000s Standard output from the command: true There was no error output from the command. [INFO] Checking that Elasticsearch pod logging-es-data-master-i5jtydma-1-21qpf recovered its indices after starting... Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:63: executing 'curl_es 'logging-es-data-master-i5jtydma-1-21qpf' '/_cluster/state/master_node' -w '%{response_code}'' expecting any result and text '}200$'; re-trying every 0.2s until completion or 600.000s... 
SUCCESS after 0.432s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:63: executing 'curl_es 'logging-es-data-master-i5jtydma-1-21qpf' '/_cluster/state/master_node' -w '%{response_code}'' expecting any result and text '}200$'; re-trying every 0.2s until completion or 600.000s Standard output from the command: {"cluster_name":"logging-es","master_node":"zPCQGkNKSL6rRpjlpHKtzw"}200 There was no error output from the command. [INFO] Elasticsearch pod logging-es-data-master-i5jtydma-1-21qpf is the master [INFO] Checking that Elasticsearch pod logging-es-data-master-i5jtydma-1-21qpf has persisted indices created by Fluentd... Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:76: executing 'curl_es 'logging-es-data-master-i5jtydma-1-21qpf' '/_cat/indices?h=index'' expecting any result and text '^(project|\.operations)\.'; re-trying every 0.2s until completion or 600.000s... SUCCESS after 0.391s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:76: executing 'curl_es 'logging-es-data-master-i5jtydma-1-21qpf' '/_cat/indices?h=index'' expecting any result and text '^(project|\.operations)\.'; re-trying every 0.2s until completion or 600.000s Standard output from the command: .kibana.d033e22ae348aeb5660fc2140aec35850c4da997 project.logging.447251fb-47d4-11e7-ab86-0e1196655f96.2017.06.02 project.default.41013bd2-47d4-11e7-ab86-0e1196655f96.2017.06.02 .kibana .searchguard.logging-es-data-master-i5jtydma-1-21qpf There was no error output from the command. [INFO] Checking for index project.logging.447251fb-47d4-11e7-ab86-0e1196655f96 with Kibana pod logging-kibana-1-54cs7... Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:100: executing 'sudo -E VERBOSE=true go run '/data/src/github.com/openshift/origin-aggregated-logging/hack/testing/check-logs.go' 'logging-kibana-1-54cs7' 'logging-es:9200' 'project.logging.447251fb-47d4-11e7-ab86-0e1196655f96' '/var/log/containers/*_447251fb-47d4-11e7-ab86-0e1196655f96_*.log' '500' 'admin' 'Fkj-VYM0kH4WfrSbIs6V8QuEFE6YA2joXiOR4WCrIOY' '127.0.0.1'' expecting success... SUCCESS after 8.331s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:100: executing 'sudo -E VERBOSE=true go run '/data/src/github.com/openshift/origin-aggregated-logging/hack/testing/check-logs.go' 'logging-kibana-1-54cs7' 'logging-es:9200' 'project.logging.447251fb-47d4-11e7-ab86-0e1196655f96' '/var/log/containers/*_447251fb-47d4-11e7-ab86-0e1196655f96_*.log' '500' 'admin' 'Fkj-VYM0kH4WfrSbIs6V8QuEFE6YA2joXiOR4WCrIOY' '127.0.0.1'' expecting success Standard output from the command: Executing command [oc exec logging-kibana-1-54cs7 -- curl -s --key /etc/kibana/keys/key --cert /etc/kibana/keys/cert --cacert /etc/kibana/keys/ca -H 'X-Proxy-Remote-User: admin' -H 'Authorization: Bearer Fkj-VYM0kH4WfrSbIs6V8QuEFE6YA2joXiOR4WCrIOY' -H 'X-Forwarded-For: 127.0.0.1' -XGET "https://logging-es:9200/project.logging.447251fb-47d4-11e7-ab86-0e1196655f96.*/_search?q=hostname:ip-172-18-8-225&fields=message&size=500"] Failure - no log entries found in Elasticsearch logging-es:9200 for index project.logging.447251fb-47d4-11e7-ab86-0e1196655f96 There was no error output from the command. [INFO] Checking for index project.default.41013bd2-47d4-11e7-ab86-0e1196655f96 with Kibana pod logging-kibana-1-54cs7... 
Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:100: executing 'sudo -E VERBOSE=true go run '/data/src/github.com/openshift/origin-aggregated-logging/hack/testing/check-logs.go' 'logging-kibana-1-54cs7' 'logging-es:9200' 'project.default.41013bd2-47d4-11e7-ab86-0e1196655f96' '/var/log/containers/*_41013bd2-47d4-11e7-ab86-0e1196655f96_*.log' '500' 'admin' 'Fkj-VYM0kH4WfrSbIs6V8QuEFE6YA2joXiOR4WCrIOY' '127.0.0.1'' expecting success... SUCCESS after 0.546s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:100: executing 'sudo -E VERBOSE=true go run '/data/src/github.com/openshift/origin-aggregated-logging/hack/testing/check-logs.go' 'logging-kibana-1-54cs7' 'logging-es:9200' 'project.default.41013bd2-47d4-11e7-ab86-0e1196655f96' '/var/log/containers/*_41013bd2-47d4-11e7-ab86-0e1196655f96_*.log' '500' 'admin' 'Fkj-VYM0kH4WfrSbIs6V8QuEFE6YA2joXiOR4WCrIOY' '127.0.0.1'' expecting success Standard output from the command: Executing command [oc exec logging-kibana-1-54cs7 -- curl -s --key /etc/kibana/keys/key --cert /etc/kibana/keys/cert --cacert /etc/kibana/keys/ca -H 'X-Proxy-Remote-User: admin' -H 'Authorization: Bearer Fkj-VYM0kH4WfrSbIs6V8QuEFE6YA2joXiOR4WCrIOY' -H 'X-Forwarded-For: 127.0.0.1' -XGET "https://logging-es:9200/project.default.41013bd2-47d4-11e7-ab86-0e1196655f96.*/_search?q=hostname:ip-172-18-8-225&fields=message&size=500"] Failure - no log entries found in Elasticsearch logging-es:9200 for index project.default.41013bd2-47d4-11e7-ab86-0e1196655f96 There was no error output from the command. [INFO] Checking that Elasticsearch pod logging-es-data-master-i5jtydma-1-21qpf contains common data model index templates... Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:105: executing 'oc exec logging-es-data-master-i5jtydma-1-21qpf -- ls -1 /usr/share/elasticsearch/index_templates' expecting success... SUCCESS after 0.289s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:105: executing 'oc exec logging-es-data-master-i5jtydma-1-21qpf -- ls -1 /usr/share/elasticsearch/index_templates' expecting success Standard output from the command: com.redhat.viaq-openshift-operations.template.json com.redhat.viaq-openshift-project.template.json There was no error output from the command. Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:107: executing 'curl_es 'logging-es-data-master-i5jtydma-1-21qpf' '/_template/com.redhat.viaq-openshift-operations.template.json' -X HEAD -w '%{response_code}'' expecting success and text '200'... SUCCESS after 0.417s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:107: executing 'curl_es 'logging-es-data-master-i5jtydma-1-21qpf' '/_template/com.redhat.viaq-openshift-operations.template.json' -X HEAD -w '%{response_code}'' expecting success and text '200' Standard output from the command: 200 There was no error output from the command. Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:107: executing 'curl_es 'logging-es-data-master-i5jtydma-1-21qpf' '/_template/com.redhat.viaq-openshift-project.template.json' -X HEAD -w '%{response_code}'' expecting success and text '200'... 
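[NOTE] functionality.sh:105-107 verifies that the common data model templates shipped in the image are also registered in Elasticsearch. curl_es is a helper that execs curl inside the ES pod with the admin client certificate (its expanded form is visible in the traced tests further down); a sketch of the HEAD probe it performs here, assuming the standard secret mount path:

  oc exec logging-es-data-master-i5jtydma-1-21qpf -- \
    curl --silent --insecure \
      --key /etc/elasticsearch/secret/admin-key \
      --cert /etc/elasticsearch/secret/admin-cert \
      -X HEAD -w '%{response_code}' \
      'https://localhost:9200/_template/com.redhat.viaq-openshift-operations.template.json'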
SUCCESS after 0.397s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:107: executing 'curl_es 'logging-es-data-master-i5jtydma-1-21qpf' '/_template/com.redhat.viaq-openshift-project.template.json' -X HEAD -w '%{response_code}'' expecting success and text '200' Standard output from the command: 200 There was no error output from the command. Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:40: executing 'oc login --username=admin --password=admin' expecting success... SUCCESS after 0.235s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:40: executing 'oc login --username=admin --password=admin' expecting success Standard output from the command: Login successful. You have access to the following projects and can switch between them with 'oc project <projectname>': default kube-public kube-system * logging openshift openshift-infra Using project "logging". There was no error output from the command. Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:44: executing 'oc login --username=system:admin' expecting success... SUCCESS after 0.232s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:44: executing 'oc login --username=system:admin' expecting success Standard output from the command: Logged into "https://172.18.8.225:8443" as "system:admin" using existing credentials. You have access to the following projects and can switch between them with 'oc project <projectname>': default kube-public kube-system * logging openshift openshift-infra Using project "logging". There was no error output from the command. Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:45: executing 'oc project logging' expecting success... SUCCESS after 0.239s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:45: executing 'oc project logging' expecting success Standard output from the command: Already on project "logging" on server "https://172.18.8.225:8443". There was no error output from the command. [INFO] Testing Kibana pod logging-kibana-ops-1-zpdzf for a successful start... Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:52: executing 'oc exec logging-kibana-ops-1-zpdzf -c kibana -- curl -s --request HEAD --write-out '%{response_code}' http://localhost:5601/' expecting any result and text '200'; re-trying every 0.2s until completion or 600.000s... SUCCESS after 120.300s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:52: executing 'oc exec logging-kibana-ops-1-zpdzf -c kibana -- curl -s --request HEAD --write-out '%{response_code}' http://localhost:5601/' expecting any result and text '200'; re-trying every 0.2s until completion or 600.000s Standard output from the command: 200 There was no error output from the command. Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:53: executing 'oc get pod logging-kibana-ops-1-zpdzf -o jsonpath='{ .status.containerStatuses[?(@.name=="kibana")].ready }'' expecting any result and text 'true'; re-trying every 0.2s until completion or 60.000s... 
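[NOTE] The oc login / oc project steps above only re-establish the two contexts the suite alternates between: the regular 'admin' user, whose token is what gets presented to the Kibana proxy, and system:admin for cluster-scoped operations. A minimal sketch of that context switch (the token capture mirrors the get_test_user_token helper traced later in this log):

  oc login --username=admin --password=admin   # regular test user
  token=$(oc whoami -t)                        # bearer token later handed to check-logs.go
  oc login --username=system:admin             # back to the cluster-admin context
  oc project logging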
SUCCESS after 0.215s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:53: executing 'oc get pod logging-kibana-ops-1-zpdzf -o jsonpath='{ .status.containerStatuses[?(@.name=="kibana")].ready }'' expecting any result and text 'true'; re-trying every 0.2s until completion or 60.000s Standard output from the command: true There was no error output from the command. Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:54: executing 'oc get pod logging-kibana-ops-1-zpdzf -o jsonpath='{ .status.containerStatuses[?(@.name=="kibana-proxy")].ready }'' expecting any result and text 'true'; re-trying every 0.2s until completion or 60.000s... SUCCESS after 0.240s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:54: executing 'oc get pod logging-kibana-ops-1-zpdzf -o jsonpath='{ .status.containerStatuses[?(@.name=="kibana-proxy")].ready }'' expecting any result and text 'true'; re-trying every 0.2s until completion or 60.000s Standard output from the command: true There was no error output from the command. [INFO] Testing Elasticsearch pod logging-es-ops-data-master-tycs4wrj-1-70xts for a successful start... Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:59: executing 'curl_es 'logging-es-ops-data-master-tycs4wrj-1-70xts' '/' -X HEAD -w '%{response_code}'' expecting any result and text '200'; re-trying every 0.2s until completion or 600.000s... SUCCESS after 0.369s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:59: executing 'curl_es 'logging-es-ops-data-master-tycs4wrj-1-70xts' '/' -X HEAD -w '%{response_code}'' expecting any result and text '200'; re-trying every 0.2s until completion or 600.000s Standard output from the command: 200 There was no error output from the command. Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:60: executing 'oc get pod logging-es-ops-data-master-tycs4wrj-1-70xts -o jsonpath='{ .status.containerStatuses[?(@.name=="elasticsearch")].ready }'' expecting any result and text 'true'; re-trying every 0.2s until completion or 60.000s... SUCCESS after 0.234s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:60: executing 'oc get pod logging-es-ops-data-master-tycs4wrj-1-70xts -o jsonpath='{ .status.containerStatuses[?(@.name=="elasticsearch")].ready }'' expecting any result and text 'true'; re-trying every 0.2s until completion or 60.000s Standard output from the command: true There was no error output from the command. [INFO] Checking that Elasticsearch pod logging-es-ops-data-master-tycs4wrj-1-70xts recovered its indices after starting... Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:63: executing 'curl_es 'logging-es-ops-data-master-tycs4wrj-1-70xts' '/_cluster/state/master_node' -w '%{response_code}'' expecting any result and text '}200$'; re-trying every 0.2s until completion or 600.000s... 
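[NOTE] functionality.sh:63 decides whether an ES pod recovered by asking for the elected master: the expected pattern '}200$' requires both a JSON body (ending in '}') and the HTTP 200 appended by -w '%{response_code}'. A sketch of the same probe against the ops cluster, written out in the expanded curl_es form:

  out=$(oc exec logging-es-ops-data-master-tycs4wrj-1-70xts -- \
    curl --silent --insecure \
      --key /etc/elasticsearch/secret/admin-key \
      --cert /etc/elasticsearch/secret/admin-cert \
      -w '%{response_code}' 'https://localhost:9200/_cluster/state/master_node')
  echo "$out" | grep -q '}200$' && echo "ops cluster is up and returned its master node id"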
SUCCESS after 0.362s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:63: executing 'curl_es 'logging-es-ops-data-master-tycs4wrj-1-70xts' '/_cluster/state/master_node' -w '%{response_code}'' expecting any result and text '}200$'; re-trying every 0.2s until completion or 600.000s Standard output from the command: {"cluster_name":"logging-es-ops","master_node":"jvfcG8GQTEax5AxAgwU0XA"}200 There was no error output from the command. [INFO] Elasticsearch pod logging-es-ops-data-master-tycs4wrj-1-70xts is the master [INFO] Checking that Elasticsearch pod logging-es-ops-data-master-tycs4wrj-1-70xts has persisted indices created by Fluentd... Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:76: executing 'curl_es 'logging-es-ops-data-master-tycs4wrj-1-70xts' '/_cat/indices?h=index'' expecting any result and text '^(project|\.operations)\.'; re-trying every 0.2s until completion or 600.000s... SUCCESS after 0.376s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:76: executing 'curl_es 'logging-es-ops-data-master-tycs4wrj-1-70xts' '/_cat/indices?h=index'' expecting any result and text '^(project|\.operations)\.'; re-trying every 0.2s until completion or 600.000s Standard output from the command: .kibana.d033e22ae348aeb5660fc2140aec35850c4da997 .kibana .operations.2017.06.02 .searchguard.logging-es-ops-data-master-tycs4wrj-1-70xts There was no error output from the command. [INFO] Cheking for index .operations with Kibana pod logging-kibana-ops-1-zpdzf... Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:100: executing 'sudo -E VERBOSE=true go run '/data/src/github.com/openshift/origin-aggregated-logging/hack/testing/check-logs.go' 'logging-kibana-ops-1-zpdzf' 'logging-es-ops:9200' '.operations' '/var/log/messages' '500' 'admin' 'pFibpeAUUOMob8_O9bWbP9q95C13ABprqPdfF2HQFZs' '127.0.0.1'' expecting success... SUCCESS after 0.752s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:100: executing 'sudo -E VERBOSE=true go run '/data/src/github.com/openshift/origin-aggregated-logging/hack/testing/check-logs.go' 'logging-kibana-ops-1-zpdzf' 'logging-es-ops:9200' '.operations' '/var/log/messages' '500' 'admin' 'pFibpeAUUOMob8_O9bWbP9q95C13ABprqPdfF2HQFZs' '127.0.0.1'' expecting success Standard output from the command: Executing command [oc exec logging-kibana-ops-1-zpdzf -- curl -s --key /etc/kibana/keys/key --cert /etc/kibana/keys/cert --cacert /etc/kibana/keys/ca -H 'X-Proxy-Remote-User: admin' -H 'Authorization: Bearer pFibpeAUUOMob8_O9bWbP9q95C13ABprqPdfF2HQFZs' -H 'X-Forwarded-For: 127.0.0.1' -XGET "https://logging-es-ops:9200/.operations.*/_search?q=hostname:ip-172-18-8-225&fields=message&size=500"] Failure - no log entries found in Elasticsearch logging-es-ops:9200 for index .operations There was no error output from the command. [INFO] Checking that Elasticsearch pod logging-es-ops-data-master-tycs4wrj-1-70xts contains common data model index templates... Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:105: executing 'oc exec logging-es-ops-data-master-tycs4wrj-1-70xts -- ls -1 /usr/share/elasticsearch/index_templates' expecting success... 
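[NOTE] functionality.sh:76 then checks that the Fluentd-created indices survived the restart simply by listing index names and matching the project. / .operations. prefixes. A sketch of that listing in the expanded curl_es form:

  oc exec logging-es-ops-data-master-tycs4wrj-1-70xts -- \
    curl --silent --insecure \
      --key /etc/elasticsearch/secret/admin-key \
      --cert /etc/elasticsearch/secret/admin-cert \
      'https://localhost:9200/_cat/indices?h=index' | grep -E '^(project|\.operations)\.'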
SUCCESS after 0.308s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:105: executing 'oc exec logging-es-ops-data-master-tycs4wrj-1-70xts -- ls -1 /usr/share/elasticsearch/index_templates' expecting success Standard output from the command: com.redhat.viaq-openshift-operations.template.json com.redhat.viaq-openshift-project.template.json There was no error output from the command. Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:107: executing 'curl_es 'logging-es-ops-data-master-tycs4wrj-1-70xts' '/_template/com.redhat.viaq-openshift-operations.template.json' -X HEAD -w '%{response_code}'' expecting success and text '200'... SUCCESS after 0.367s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:107: executing 'curl_es 'logging-es-ops-data-master-tycs4wrj-1-70xts' '/_template/com.redhat.viaq-openshift-operations.template.json' -X HEAD -w '%{response_code}'' expecting success and text '200' Standard output from the command: 200 There was no error output from the command. Running /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:107: executing 'curl_es 'logging-es-ops-data-master-tycs4wrj-1-70xts' '/_template/com.redhat.viaq-openshift-project.template.json' -X HEAD -w '%{response_code}'' expecting success and text '200'... SUCCESS after 0.364s: /data/src/github.com/openshift/origin-aggregated-logging/test/cluster/functionality.sh:107: executing 'curl_es 'logging-es-ops-data-master-tycs4wrj-1-70xts' '/_template/com.redhat.viaq-openshift-project.template.json' -X HEAD -w '%{response_code}'' expecting success and text '200' Standard output from the command: 200 There was no error output from the command. 
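[NOTE] The functionality checks end here; test-curator.sh below exercises index retention. Each iteration rewrites the logging-curator configmap, bounces the curator DeploymentConfig, and then asserts that dated *.curatortest.* indices older than the configured window are gone while the current day's indices remain. A sketch of the redeploy cycle that produces the "configmap ... deleted/created" and "deploymentconfig ... scaled" lines below (the local config file name is hypothetical):

  oc delete configmap logging-curator
  oc create configmap logging-curator --from-file=config.yaml=./curator-test-config.yaml   # hypothetical test config
  oc scale dc/logging-curator --replicas=0   # stop the running curator pod
  oc scale dc/logging-curator --replicas=1   # restart it so it picks up the new config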
running test test-curator.sh configmap "logging-curator" deleted configmap "logging-curator" created deploymentconfig "logging-curator" scaled deploymentconfig "logging-curator" scaled configmap "logging-curator" deleted configmap "logging-curator" created deploymentconfig "logging-curator" scaled deploymentconfig "logging-curator" scaled configmap "logging-curator" deleted configmap "logging-curator" created deploymentconfig "logging-curator" scaled deploymentconfig "logging-curator" scaled current indices before 1st deletion are: .kibana .kibana.d033e22ae348aeb5660fc2140aec35850c4da997 .operations.curatortest.2017.03.31 .operations.curatortest.2017.06.02 .searchguard.logging-es-data-master-i5jtydma-1-21qpf default-index.curatortest.2017.05.02 default-index.curatortest.2017.06.02 project.default.41013bd2-47d4-11e7-ab86-0e1196655f96.2017.06.02 project.logging.447251fb-47d4-11e7-ab86-0e1196655f96.2017.06.02 project.project-dev.curatortest.2017.06.01 project.project-dev.curatortest.2017.06.02 project.project-prod.curatortest.2017.05.05 project.project-prod.curatortest.2017.06.02 project.project-qe.curatortest.2017.05.26 project.project-qe.curatortest.2017.06.02 project.project2-qe.curatortest.2017.05.26 project.project2-qe.curatortest.2017.06.02 project.project3-qe.curatortest.2017.05.26 project.project3-qe.curatortest.2017.06.02 Fri Jun 2 21:05:27 UTC 2017 configmap "logging-curator" deleted configmap "logging-curator" created deploymentconfig "logging-curator" scaled deploymentconfig "logging-curator" scaled current indices after 1st deletion are: .kibana .kibana.d033e22ae348aeb5660fc2140aec35850c4da997 .operations.curatortest.2017.06.02 .searchguard.logging-es-data-master-i5jtydma-1-21qpf default-index.curatortest.2017.06.02 project.default.41013bd2-47d4-11e7-ab86-0e1196655f96.2017.06.02 project.logging.447251fb-47d4-11e7-ab86-0e1196655f96.2017.06.02 project.project-dev.curatortest.2017.06.02 project.project-prod.curatortest.2017.06.02 project.project-qe.curatortest.2017.06.02 project.project2-qe.curatortest.2017.06.02 project.project3-qe.curatortest.2017.06.02 good - index project.project-dev.curatortest.2017.06.02 is present good - index project.project-dev.curatortest.2017.06.01 is missing good - index project.project-qe.curatortest.2017.06.02 is present good - index project.project-qe.curatortest.2017.05.26 is missing good - index project.project-prod.curatortest.2017.06.02 is present good - index project.project-prod.curatortest.2017.05.05 is missing good - index .operations.curatortest.2017.06.02 is present good - index .operations.curatortest.2017.03.31 is missing good - index default-index.curatortest.2017.06.02 is present good - index default-index.curatortest.2017.05.02 is missing good - index project.project2-qe.curatortest.2017.06.02 is present good - index project.project2-qe.curatortest.2017.05.26 is missing good - index project.project3-qe.curatortest.2017.06.02 is present good - index project.project3-qe.curatortest.2017.05.26 is missing current indices before 2nd deletion are: .kibana .kibana.d033e22ae348aeb5660fc2140aec35850c4da997 .operations.curatortest.2017.03.31 .operations.curatortest.2017.06.02 .searchguard.logging-es-data-master-i5jtydma-1-21qpf default-index.curatortest.2017.05.02 default-index.curatortest.2017.06.02 project.default.41013bd2-47d4-11e7-ab86-0e1196655f96.2017.06.02 project.logging.447251fb-47d4-11e7-ab86-0e1196655f96.2017.06.02 project.project-dev.curatortest.2017.06.01 project.project-dev.curatortest.2017.06.02 
project.project-prod.curatortest.2017.05.05 project.project-prod.curatortest.2017.06.02 project.project-qe.curatortest.2017.05.26 project.project-qe.curatortest.2017.06.02 project.project2-qe.curatortest.2017.05.26 project.project2-qe.curatortest.2017.06.02 project.project3-qe.curatortest.2017.05.26 project.project3-qe.curatortest.2017.06.02 sleeping 211 seconds to see if runhour and runminute are working . . . verify indices deletion again current indices after 2nd deletion are: .kibana .kibana.d033e22ae348aeb5660fc2140aec35850c4da997 .operations.curatortest.2017.06.02 .searchguard.logging-es-data-master-i5jtydma-1-21qpf default-index.curatortest.2017.06.02 project.default.41013bd2-47d4-11e7-ab86-0e1196655f96.2017.06.02 project.logging.447251fb-47d4-11e7-ab86-0e1196655f96.2017.06.02 project.project-dev.curatortest.2017.06.02 project.project-prod.curatortest.2017.06.02 project.project-qe.curatortest.2017.06.02 project.project2-qe.curatortest.2017.06.02 project.project3-qe.curatortest.2017.06.02 good - index project.project-dev.curatortest.2017.06.02 is present good - index project.project-dev.curatortest.2017.06.01 is missing good - index project.project-qe.curatortest.2017.06.02 is present good - index project.project-qe.curatortest.2017.05.26 is missing good - index project.project-prod.curatortest.2017.06.02 is present good - index project.project-prod.curatortest.2017.05.05 is missing good - index .operations.curatortest.2017.06.02 is present good - index .operations.curatortest.2017.03.31 is missing good - index default-index.curatortest.2017.06.02 is present good - index default-index.curatortest.2017.05.02 is missing good - index project.project2-qe.curatortest.2017.06.02 is present good - index project.project2-qe.curatortest.2017.05.26 is missing good - index project.project3-qe.curatortest.2017.06.02 is present good - index project.project3-qe.curatortest.2017.05.26 is missing current indices before 1st deletion are: .kibana .kibana.d033e22ae348aeb5660fc2140aec35850c4da997 .operations.2017.06.02 .operations.curatortest.2017.03.31 .operations.curatortest.2017.06.02 .searchguard.logging-es-ops-data-master-tycs4wrj-1-70xts default-index.curatortest.2017.05.02 default-index.curatortest.2017.06.02 project.project-dev.curatortest.2017.06.01 project.project-dev.curatortest.2017.06.02 project.project-prod.curatortest.2017.05.05 project.project-prod.curatortest.2017.06.02 project.project-qe.curatortest.2017.05.26 project.project-qe.curatortest.2017.06.02 project.project2-qe.curatortest.2017.05.26 project.project2-qe.curatortest.2017.06.02 project.project3-qe.curatortest.2017.05.26 project.project3-qe.curatortest.2017.06.02 Fri Jun 2 21:10:49 UTC 2017 configmap "logging-curator" deleted configmap "logging-curator" created deploymentconfig "logging-curator-ops" scaled deploymentconfig "logging-curator-ops" scaled current indices after 1st deletion are: .kibana .kibana.d033e22ae348aeb5660fc2140aec35850c4da997 .operations.2017.06.02 .operations.curatortest.2017.06.02 .searchguard.logging-es-ops-data-master-tycs4wrj-1-70xts default-index.curatortest.2017.06.02 project.project-dev.curatortest.2017.06.02 project.project-prod.curatortest.2017.06.02 project.project-qe.curatortest.2017.06.02 project.project2-qe.curatortest.2017.06.02 project.project3-qe.curatortest.2017.06.02 good - index project.project-dev.curatortest.2017.06.02 is present good - index project.project-dev.curatortest.2017.06.01 is missing good - index project.project-qe.curatortest.2017.06.02 is present good - index 
project.project-qe.curatortest.2017.05.26 is missing good - index project.project-prod.curatortest.2017.06.02 is present good - index project.project-prod.curatortest.2017.05.05 is missing good - index .operations.curatortest.2017.06.02 is present good - index .operations.curatortest.2017.03.31 is missing good - index default-index.curatortest.2017.06.02 is present good - index default-index.curatortest.2017.05.02 is missing good - index project.project2-qe.curatortest.2017.06.02 is present good - index project.project2-qe.curatortest.2017.05.26 is missing good - index project.project3-qe.curatortest.2017.06.02 is present good - index project.project3-qe.curatortest.2017.05.26 is missing current indices before 2nd deletion are: .kibana .kibana.d033e22ae348aeb5660fc2140aec35850c4da997 .operations.2017.06.02 .operations.curatortest.2017.03.31 .operations.curatortest.2017.06.02 .searchguard.logging-es-ops-data-master-tycs4wrj-1-70xts default-index.curatortest.2017.05.02 default-index.curatortest.2017.06.02 project.project-dev.curatortest.2017.06.01 project.project-dev.curatortest.2017.06.02 project.project-prod.curatortest.2017.05.05 project.project-prod.curatortest.2017.06.02 project.project-qe.curatortest.2017.05.26 project.project-qe.curatortest.2017.06.02 project.project2-qe.curatortest.2017.05.26 project.project2-qe.curatortest.2017.06.02 project.project3-qe.curatortest.2017.05.26 project.project3-qe.curatortest.2017.06.02 sleeping 223 seconds to see if runhour and runminute are working . . . verify indices deletion again current indices after 2nd deletion are: .kibana .kibana.d033e22ae348aeb5660fc2140aec35850c4da997 .operations.2017.06.02 .operations.curatortest.2017.06.02 .searchguard.logging-es-ops-data-master-tycs4wrj-1-70xts default-index.curatortest.2017.06.02 project.project-dev.curatortest.2017.06.02 project.project-prod.curatortest.2017.06.02 project.project-qe.curatortest.2017.06.02 project.project2-qe.curatortest.2017.06.02 project.project3-qe.curatortest.2017.06.02 good - index project.project-dev.curatortest.2017.06.02 is present good - index project.project-dev.curatortest.2017.06.01 is missing good - index project.project-qe.curatortest.2017.06.02 is present good - index project.project-qe.curatortest.2017.05.26 is missing good - index project.project-prod.curatortest.2017.06.02 is present good - index project.project-prod.curatortest.2017.05.05 is missing good - index .operations.curatortest.2017.06.02 is present good - index .operations.curatortest.2017.03.31 is missing good - index default-index.curatortest.2017.06.02 is present good - index default-index.curatortest.2017.05.02 is missing good - index project.project2-qe.curatortest.2017.06.02 is present good - index project.project2-qe.curatortest.2017.05.26 is missing good - index project.project3-qe.curatortest.2017.06.02 is present good - index project.project3-qe.curatortest.2017.05.26 is missing curator running [5] jobs curator run finish curator running [5] jobs curator run finish configmap "logging-curator" deleted configmap "logging-curator" created deploymentconfig "logging-curator-ops" scaled deploymentconfig "logging-curator-ops" scaled running test test-datetime-future.sh ++ set -o nounset ++ set -o pipefail ++ type get_running_pod ++ [[ 1 -ne 1 ]] ++ [[ true = \f\a\l\s\e ]] ++ CLUSTER=true ++ ops=-ops ++ INDEX_PREFIX= ++ ARTIFACT_DIR=/tmp/origin-aggregated-logging/artifacts ++ '[' '!' 
-d /tmp/origin-aggregated-logging/artifacts ']' ++ get_test_user_token ++ oc login --username=admin --password=admin +++ oc whoami -t ++ test_token=3td-7gk1jZW2nmsu1yCOr3dQVANK5CpPCM4yDfBNeVI +++ oc whoami ++ test_name=admin ++ test_ip=127.0.0.1 ++ oc login --username=system:admin ++ TEST_DIVIDER=------------------------------------------ +++ get_running_pod fluentd +++ oc get pods -l component=fluentd +++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}' ++ fpod=logging-fluentd-466vp +++ date +%z ++ nodetz=-0400 +++ oc exec logging-fluentd-466vp -- date +%z ++ podtz=-0400 ++ '[' x-0400 = x-0400 ']' +++ date +%Z ++ echo Good - node timezone -0400 EDT is equal to the fluentd pod timezone Good - node timezone -0400 EDT is equal to the fluentd pod timezone ++ docker_uses_journal ++ type -p docker ++ sudo docker info ++ grep -q 'Logging Driver: journald' WARNING: bridge-nf-call-ip6tables is disabled ++ return 0 ++ echo The rest of the test is not applicable when using the journal - skipping The rest of the test is not applicable when using the journal - skipping ++ exit 0 running test test-es-copy.sh ++ set -o nounset ++ set -o pipefail ++ type get_running_pod ++ [[ 1 -ne 1 ]] ++ [[ true = \f\a\l\s\e ]] ++ CLUSTER=true ++ ops=-ops ++ INDEX_PREFIX= ++ PROJ_PREFIX=project. ++ ARTIFACT_DIR=/tmp/origin-aggregated-logging/artifacts ++ '[' '!' -d /tmp/origin-aggregated-logging/artifacts ']' ++ get_test_user_token ++ oc login --username=admin --password=admin +++ oc whoami -t ++ test_token=KyYZyr3EWyRCPa249RIwfAFkD2_BGOZ4A5GoAXwguvM +++ oc whoami ++ test_name=admin ++ test_ip=127.0.0.1 ++ oc login --username=system:admin ++ TEST_DIVIDER=------------------------------------------ +++ get_running_pod fluentd +++ oc get pods -l component=fluentd +++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}' ++ fpod=logging-fluentd-466vp +++ mktemp ++ cfg=/tmp/tmp.WB2nLmxyNO ++ oc get template logging-fluentd-template -o yaml ++ sed '/- name: ES_COPY/,/value:/ s/value: .*$/value: "false"/' ++ oc replace -f - template "logging-fluentd-template" replaced ++ restart_fluentd ++ oc delete daemonset logging-fluentd daemonset "logging-fluentd" deleted ++ wait_for_pod_ACTION stop logging-fluentd-466vp ++ local ii=120 ++ local incr=10 ++ '[' stop = start ']' ++ curpod=logging-fluentd-466vp ++ '[' -z logging-fluentd-466vp -a -n '' ']' ++ '[' 120 -gt 0 ']' ++ '[' stop = stop ']' ++ oc describe pod/logging-fluentd-466vp ++ '[' stop = start ']' ++ break ++ '[' 120 -le 0 ']' ++ return 0 ++ oc process logging-fluentd-template ++ oc create -f - daemonset "logging-fluentd" created ++ wait_for_pod_ACTION start fluentd ++ local ii=120 ++ local incr=10 ++ '[' start = start ']' +++ get_running_pod fluentd +++ oc get pods -l component=fluentd +++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}' ++ curpod= ++ '[' 120 -gt 0 ']' ++ '[' start = stop ']' ++ '[' start = start ']' ++ '[' -z '' ']' ++ '[' -n '' ']' ++ '[' -n 1 ']' pod for component=fluentd not running yet ++ echo pod for component=fluentd not running yet ++ sleep 10 +++ expr 120 - 10 ++ ii=110 ++ '[' start = start ']' +++ get_running_pod fluentd +++ oc get pods -l component=fluentd +++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}' ++ curpod=logging-fluentd-ndfct ++ '[' 110 -gt 0 ']' ++ '[' start = stop ']' ++ '[' start = start ']' ++ '[' -z logging-fluentd-ndfct ']' ++ break ++ '[' 110 -le 0 ']' ++ return 0 +++ get_running_pod fluentd +++ oc get pods -l component=fluentd +++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print 
$1}' ++ fpod=logging-fluentd-ndfct +++ mktemp ++ origconfig=/tmp/tmp.P0Hi0WLSBl ++ oc get template logging-fluentd-template -o yaml ++ write_and_verify_logs 1 ++ rc=0 ++ wait_for_fluentd_to_catch_up '' '' 1 +++ date +%s ++ local starttime=1496438496 +++ date -u --rfc-3339=ns ++ echo START wait_for_fluentd_to_catch_up at 2017-06-02 21:21:36.192711805+00:00 START wait_for_fluentd_to_catch_up at 2017-06-02 21:21:36.192711805+00:00 +++ get_running_pod es +++ oc get pods -l component=es +++ awk -v sel=es '$1 ~ sel && $3 == "Running" {print $1}' ++ local es_pod=logging-es-data-master-i5jtydma-1-21qpf +++ get_running_pod es-ops +++ oc get pods -l component=es-ops +++ awk -v sel=es-ops '$1 ~ sel && $3 == "Running" {print $1}' ++ local es_ops_pod=logging-es-ops-data-master-tycs4wrj-1-70xts ++ '[' -z logging-es-ops-data-master-tycs4wrj-1-70xts ']' +++ uuidgen ++ local uuid_es=834fd4fe-692c-4583-ab63-5520553f351d +++ uuidgen ++ local uuid_es_ops=d6437eef-1ddf-4e60-b1e0-85f1f91bd8a4 ++ local expected=1 ++ local timeout=300 ++ add_test_message 834fd4fe-692c-4583-ab63-5520553f351d +++ get_running_pod kibana +++ oc get pods -l component=kibana +++ awk -v sel=kibana '$1 ~ sel && $3 == "Running" {print $1}' ++ local kib_pod=logging-kibana-1-54cs7 ++ oc exec logging-kibana-1-54cs7 -c kibana -- curl --connect-timeout 1 -s http://localhost:5601/834fd4fe-692c-4583-ab63-5520553f351d added es message 834fd4fe-692c-4583-ab63-5520553f351d ++ echo added es message 834fd4fe-692c-4583-ab63-5520553f351d ++ logger -i -p local6.info -t d6437eef-1ddf-4e60-b1e0-85f1f91bd8a4 d6437eef-1ddf-4e60-b1e0-85f1f91bd8a4 added es-ops message d6437eef-1ddf-4e60-b1e0-85f1f91bd8a4 ++ echo added es-ops message d6437eef-1ddf-4e60-b1e0-85f1f91bd8a4 ++ local rc=0 ++ espod=logging-es-data-master-i5jtydma-1-21qpf ++ myproject=project.logging ++ mymessage=834fd4fe-692c-4583-ab63-5520553f351d ++ expected=1 ++ wait_until_cmd_or_err test_count_expected test_count_err 300 ++ let ii=300 ++ local interval=1 ++ '[' 300 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message 834fd4fe-692c-4583-ab63-5520553f351d +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:834fd4fe-692c-4583-ab63-5520553f351d' --connect-timeout 1 +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:834fd4fe-692c-4583-ab63-5520553f351d' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:834fd4fe-692c-4583-ab63-5520553f351d' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 299 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message 834fd4fe-692c-4583-ab63-5520553f351d +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:834fd4fe-692c-4583-ab63-5520553f351d' --connect-timeout 1 +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 
'endpoint=/project.logging*/_count?q=message:834fd4fe-692c-4583-ab63-5520553f351d' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:834fd4fe-692c-4583-ab63-5520553f351d' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 298 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message 834fd4fe-692c-4583-ab63-5520553f351d +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:834fd4fe-692c-4583-ab63-5520553f351d' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local 'endpoint=/project.logging*/_count?q=message:834fd4fe-692c-4583-ab63-5520553f351d' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:834fd4fe-692c-4583-ab63-5520553f351d' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 297 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message 834fd4fe-692c-4583-ab63-5520553f351d +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:834fd4fe-692c-4583-ab63-5520553f351d' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local 'endpoint=/project.logging*/_count?q=message:834fd4fe-692c-4583-ab63-5520553f351d' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:834fd4fe-692c-4583-ab63-5520553f351d' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 296 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message 834fd4fe-692c-4583-ab63-5520553f351d +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:834fd4fe-692c-4583-ab63-5520553f351d' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local 'endpoint=/project.logging*/_count?q=message:834fd4fe-692c-4583-ab63-5520553f351d' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 
'https://localhost:9200/project.logging*/_count?q=message:834fd4fe-692c-4583-ab63-5520553f351d' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 295 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message 834fd4fe-692c-4583-ab63-5520553f351d +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:834fd4fe-692c-4583-ab63-5520553f351d' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local 'endpoint=/project.logging*/_count?q=message:834fd4fe-692c-4583-ab63-5520553f351d' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:834fd4fe-692c-4583-ab63-5520553f351d' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 294 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message 834fd4fe-692c-4583-ab63-5520553f351d +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:834fd4fe-692c-4583-ab63-5520553f351d' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local 'endpoint=/project.logging*/_count?q=message:834fd4fe-692c-4583-ab63-5520553f351d' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:834fd4fe-692c-4583-ab63-5520553f351d' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 293 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message 834fd4fe-692c-4583-ab63-5520553f351d +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:834fd4fe-692c-4583-ab63-5520553f351d' --connect-timeout 1 +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:834fd4fe-692c-4583-ab63-5520553f351d' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:834fd4fe-692c-4583-ab63-5520553f351d' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 292 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message 834fd4fe-692c-4583-ab63-5520553f351d +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf 
'/project.logging*/_count?q=message:834fd4fe-692c-4583-ab63-5520553f351d' --connect-timeout 1 +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:834fd4fe-692c-4583-ab63-5520553f351d' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:834fd4fe-692c-4583-ab63-5520553f351d' good - wait_for_fluentd_to_catch_up: found 1 record project logging for 834fd4fe-692c-4583-ab63-5520553f351d ++ local nrecs=1 ++ test 1 = 1 ++ break ++ '[' 292 -le 0 ']' ++ return 0 ++ echo good - wait_for_fluentd_to_catch_up: found 1 record project logging for 834fd4fe-692c-4583-ab63-5520553f351d ++ espod=logging-es-ops-data-master-tycs4wrj-1-70xts ++ myproject=.operations ++ mymessage=d6437eef-1ddf-4e60-b1e0-85f1f91bd8a4 ++ expected=1 ++ myfield=systemd.u.SYSLOG_IDENTIFIER ++ wait_until_cmd_or_err test_count_expected test_count_err 300 ++ let ii=300 ++ local interval=1 ++ '[' 300 -gt 0 ']' ++ test_count_expected ++ myfield=systemd.u.SYSLOG_IDENTIFIER +++ query_es_from_es logging-es-ops-data-master-tycs4wrj-1-70xts .operations _count systemd.u.SYSLOG_IDENTIFIER d6437eef-1ddf-4e60-b1e0-85f1f91bd8a4 +++ get_count_from_json +++ curl_es logging-es-ops-data-master-tycs4wrj-1-70xts '/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:d6437eef-1ddf-4e60-b1e0-85f1f91bd8a4' --connect-timeout 1 +++ local pod=logging-es-ops-data-master-tycs4wrj-1-70xts +++ local 'endpoint=/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:d6437eef-1ddf-4e60-b1e0-85f1f91bd8a4' +++ shift +++ shift +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-ops-data-master-tycs4wrj-1-70xts -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:d6437eef-1ddf-4e60-b1e0-85f1f91bd8a4' ++ local nrecs=1 ++ test 1 = 1 ++ break ++ '[' 300 -le 0 ']' ++ return 0 ++ echo good - wait_for_fluentd_to_catch_up: found 1 record project .operations for d6437eef-1ddf-4e60-b1e0-85f1f91bd8a4 good - wait_for_fluentd_to_catch_up: found 1 record project .operations for d6437eef-1ddf-4e60-b1e0-85f1f91bd8a4 ++ '[' -n '' ']' ++ '[' -n '' ']' +++ date +%s ++ local endtime=1496438508 +++ expr 1496438508 - 1496438496 +++ date -u --rfc-3339=ns END wait_for_fluentd_to_catch_up took 12 seconds at 2017-06-02 21:21:48.983084671+00:00 ++ echo END wait_for_fluentd_to_catch_up took 12 seconds at 2017-06-02 21:21:48.983084671+00:00 ++ return 0 ++ '[' 0 -ne 0 ']' ++ return 0 ++ trap cleanup INT TERM EXIT +++ mktemp ++ nocopy=/tmp/tmp.DesJqkTBHr ++ sed /_COPY/,/value/d /tmp/tmp.P0Hi0WLSBl +++ mktemp ++ envpatch=/tmp/tmp.hvJxOdWC6q ++ sed -n '/^ - env:/,/^ image:/ { /^ image:/d /^ - env:/d /name: K8S_HOST_URL/,/value/d s/ES_/ES_COPY_/ s/OPS_/OPS_COPY_/ p }' /tmp/tmp.DesJqkTBHr ++ cat ++ cat /tmp/tmp.DesJqkTBHr ++ oc replace -f - ++ sed '/^ - env:/r /tmp/tmp.hvJxOdWC6q' template "logging-fluentd-template" replaced ++ rm -f /tmp/tmp.hvJxOdWC6q /tmp/tmp.DesJqkTBHr ++ restart_fluentd ++ oc 
delete daemonset logging-fluentd daemonset "logging-fluentd" deleted ++ wait_for_pod_ACTION stop logging-fluentd-ndfct ++ local ii=120 ++ local incr=10 ++ '[' stop = start ']' ++ curpod=logging-fluentd-ndfct ++ '[' -z logging-fluentd-ndfct -a -n '' ']' ++ '[' 120 -gt 0 ']' ++ '[' stop = stop ']' ++ oc describe pod/logging-fluentd-ndfct ++ '[' stop = start ']' ++ break ++ '[' 120 -le 0 ']' ++ return 0 ++ oc process logging-fluentd-template ++ oc create -f - daemonset "logging-fluentd" created ++ wait_for_pod_ACTION start fluentd ++ local ii=120 ++ local incr=10 ++ '[' start = start ']' +++ get_running_pod fluentd +++ oc get pods -l component=fluentd +++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}' ++ curpod= ++ '[' 120 -gt 0 ']' ++ '[' start = stop ']' ++ '[' start = start ']' ++ '[' -z '' ']' ++ '[' -n '' ']' ++ '[' -n 1 ']' ++ echo pod for component=fluentd not running yet pod for component=fluentd not running yet ++ sleep 10 +++ expr 120 - 10 ++ ii=110 ++ '[' start = start ']' +++ get_running_pod fluentd +++ oc get pods -l component=fluentd +++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}' ++ curpod=logging-fluentd-85lmd ++ '[' 110 -gt 0 ']' ++ '[' start = stop ']' ++ '[' start = start ']' ++ '[' -z logging-fluentd-85lmd ']' ++ break ++ '[' 110 -le 0 ']' ++ return 0 +++ get_running_pod fluentd +++ oc get pods -l component=fluentd +++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}' ++ fpod=logging-fluentd-85lmd ++ write_and_verify_logs 2 ++ rc=0 ++ wait_for_fluentd_to_catch_up '' '' 2 +++ date +%s ++ local starttime=1496438535 +++ date -u --rfc-3339=ns START wait_for_fluentd_to_catch_up at 2017-06-02 21:22:15.583731662+00:00 ++ echo START wait_for_fluentd_to_catch_up at 2017-06-02 21:22:15.583731662+00:00 +++ get_running_pod es +++ oc get pods -l component=es +++ awk -v sel=es '$1 ~ sel && $3 == "Running" {print $1}' ++ local es_pod=logging-es-data-master-i5jtydma-1-21qpf +++ get_running_pod es-ops +++ oc get pods -l component=es-ops +++ awk -v sel=es-ops '$1 ~ sel && $3 == "Running" {print $1}' ++ local es_ops_pod=logging-es-ops-data-master-tycs4wrj-1-70xts ++ '[' -z logging-es-ops-data-master-tycs4wrj-1-70xts ']' +++ uuidgen ++ local uuid_es=4f26ed4e-2bef-43ea-b818-72b6c0cb8ce2 +++ uuidgen ++ local uuid_es_ops=0ea7efbe-dfba-4466-bf97-90cf9cf3e45a ++ local expected=2 ++ local timeout=300 ++ add_test_message 4f26ed4e-2bef-43ea-b818-72b6c0cb8ce2 +++ get_running_pod kibana +++ oc get pods -l component=kibana +++ awk -v sel=kibana '$1 ~ sel && $3 == "Running" {print $1}' ++ local kib_pod=logging-kibana-1-54cs7 ++ oc exec logging-kibana-1-54cs7 -c kibana -- curl --connect-timeout 1 -s http://localhost:5601/4f26ed4e-2bef-43ea-b818-72b6c0cb8ce2 ++ echo added es message 4f26ed4e-2bef-43ea-b818-72b6c0cb8ce2 added es message 4f26ed4e-2bef-43ea-b818-72b6c0cb8ce2 ++ logger -i -p local6.info -t 0ea7efbe-dfba-4466-bf97-90cf9cf3e45a 0ea7efbe-dfba-4466-bf97-90cf9cf3e45a added es-ops message 0ea7efbe-dfba-4466-bf97-90cf9cf3e45a ++ echo added es-ops message 0ea7efbe-dfba-4466-bf97-90cf9cf3e45a ++ local rc=0 ++ espod=logging-es-data-master-i5jtydma-1-21qpf ++ myproject=project.logging ++ mymessage=4f26ed4e-2bef-43ea-b818-72b6c0cb8ce2 ++ expected=2 ++ wait_until_cmd_or_err test_count_expected test_count_err 300 ++ let ii=300 ++ local interval=1 ++ '[' 300 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message 4f26ed4e-2bef-43ea-b818-72b6c0cb8ce2 +++ get_count_from_json +++ 
curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:4f26ed4e-2bef-43ea-b818-72b6c0cb8ce2' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local 'endpoint=/project.logging*/_count?q=message:4f26ed4e-2bef-43ea-b818-72b6c0cb8ce2' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:4f26ed4e-2bef-43ea-b818-72b6c0cb8ce2' ++ local nrecs=0 ++ test 0 = 2 ++ sleep 1 ++ let ii=ii-1 ++ '[' 299 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message 4f26ed4e-2bef-43ea-b818-72b6c0cb8ce2 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:4f26ed4e-2bef-43ea-b818-72b6c0cb8ce2' --connect-timeout 1 +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:4f26ed4e-2bef-43ea-b818-72b6c0cb8ce2' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:4f26ed4e-2bef-43ea-b818-72b6c0cb8ce2' ++ local nrecs=0 ++ test 0 = 2 ++ sleep 1 ++ let ii=ii-1 ++ '[' 298 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ get_count_from_json +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message 4f26ed4e-2bef-43ea-b818-72b6c0cb8ce2 +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:4f26ed4e-2bef-43ea-b818-72b6c0cb8ce2' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:4f26ed4e-2bef-43ea-b818-72b6c0cb8ce2' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:4f26ed4e-2bef-43ea-b818-72b6c0cb8ce2' ++ local nrecs=0 ++ test 0 = 2 ++ sleep 1 ++ let ii=ii-1 ++ '[' 297 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message 4f26ed4e-2bef-43ea-b818-72b6c0cb8ce2 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:4f26ed4e-2bef-43ea-b818-72b6c0cb8ce2' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:4f26ed4e-2bef-43ea-b818-72b6c0cb8ce2' +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local 
secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:4f26ed4e-2bef-43ea-b818-72b6c0cb8ce2' ++ local nrecs=0 ++ test 0 = 2 ++ sleep 1 ++ let ii=ii-1 ++ '[' 296 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message 4f26ed4e-2bef-43ea-b818-72b6c0cb8ce2 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:4f26ed4e-2bef-43ea-b818-72b6c0cb8ce2' --connect-timeout 1 +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:4f26ed4e-2bef-43ea-b818-72b6c0cb8ce2' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:4f26ed4e-2bef-43ea-b818-72b6c0cb8ce2' ++ local nrecs=0 ++ test 0 = 2 ++ sleep 1 ++ let ii=ii-1 ++ '[' 295 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message 4f26ed4e-2bef-43ea-b818-72b6c0cb8ce2 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:4f26ed4e-2bef-43ea-b818-72b6c0cb8ce2' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:4f26ed4e-2bef-43ea-b818-72b6c0cb8ce2' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:4f26ed4e-2bef-43ea-b818-72b6c0cb8ce2' ++ local nrecs=0 ++ test 0 = 2 ++ sleep 1 ++ let ii=ii-1 ++ '[' 294 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message 4f26ed4e-2bef-43ea-b818-72b6c0cb8ce2 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:4f26ed4e-2bef-43ea-b818-72b6c0cb8ce2' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local 'endpoint=/project.logging*/_count?q=message:4f26ed4e-2bef-43ea-b818-72b6c0cb8ce2' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:4f26ed4e-2bef-43ea-b818-72b6c0cb8ce2' ++ local nrecs=0 ++ test 0 = 2 ++ sleep 1 ++ let ii=ii-1 ++ '[' 293 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ 
query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message 4f26ed4e-2bef-43ea-b818-72b6c0cb8ce2 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:4f26ed4e-2bef-43ea-b818-72b6c0cb8ce2' --connect-timeout 1 +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:4f26ed4e-2bef-43ea-b818-72b6c0cb8ce2' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:4f26ed4e-2bef-43ea-b818-72b6c0cb8ce2' ++ local nrecs=0 ++ test 0 = 2 ++ sleep 1 ++ let ii=ii-1 ++ '[' 292 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message 4f26ed4e-2bef-43ea-b818-72b6c0cb8ce2 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:4f26ed4e-2bef-43ea-b818-72b6c0cb8ce2' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:4f26ed4e-2bef-43ea-b818-72b6c0cb8ce2' +++ shift +++ shift +++ args=("${@:-}") +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:4f26ed4e-2bef-43ea-b818-72b6c0cb8ce2' ++ local nrecs=0 ++ test 0 = 2 ++ sleep 1 ++ let ii=ii-1 ++ '[' 291 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ get_count_from_json +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message 4f26ed4e-2bef-43ea-b818-72b6c0cb8ce2 +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:4f26ed4e-2bef-43ea-b818-72b6c0cb8ce2' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:4f26ed4e-2bef-43ea-b818-72b6c0cb8ce2' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:4f26ed4e-2bef-43ea-b818-72b6c0cb8ce2' ++ local nrecs=2 ++ test 2 = 2 ++ break ++ '[' 291 -le 0 ']' ++ return 0 good - wait_for_fluentd_to_catch_up: found 2 record project logging for 4f26ed4e-2bef-43ea-b818-72b6c0cb8ce2 ++ echo good - wait_for_fluentd_to_catch_up: found 2 record project logging for 4f26ed4e-2bef-43ea-b818-72b6c0cb8ce2 ++ espod=logging-es-ops-data-master-tycs4wrj-1-70xts ++ myproject=.operations ++ mymessage=0ea7efbe-dfba-4466-bf97-90cf9cf3e45a ++ expected=2 ++ myfield=systemd.u.SYSLOG_IDENTIFIER ++ wait_until_cmd_or_err test_count_expected test_count_err 300 ++ let ii=300 ++ local 
interval=1 ++ '[' 300 -gt 0 ']' ++ test_count_expected ++ myfield=systemd.u.SYSLOG_IDENTIFIER +++ query_es_from_es logging-es-ops-data-master-tycs4wrj-1-70xts .operations _count systemd.u.SYSLOG_IDENTIFIER 0ea7efbe-dfba-4466-bf97-90cf9cf3e45a +++ get_count_from_json +++ curl_es logging-es-ops-data-master-tycs4wrj-1-70xts '/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:0ea7efbe-dfba-4466-bf97-90cf9cf3e45a' --connect-timeout 1 +++ local pod=logging-es-ops-data-master-tycs4wrj-1-70xts +++ local 'endpoint=/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:0ea7efbe-dfba-4466-bf97-90cf9cf3e45a' +++ shift +++ shift +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-ops-data-master-tycs4wrj-1-70xts -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:0ea7efbe-dfba-4466-bf97-90cf9cf3e45a' good - wait_for_fluentd_to_catch_up: found 2 record project .operations for 0ea7efbe-dfba-4466-bf97-90cf9cf3e45a ++ local nrecs=2 ++ test 2 = 2 ++ break ++ '[' 300 -le 0 ']' ++ return 0 ++ echo good - wait_for_fluentd_to_catch_up: found 2 record project .operations for 0ea7efbe-dfba-4466-bf97-90cf9cf3e45a ++ '[' -n '' ']' ++ '[' -n '' ']' +++ date +%s ++ local endtime=1496438549 +++ expr 1496438549 - 1496438535 +++ date -u --rfc-3339=ns END wait_for_fluentd_to_catch_up took 14 seconds at 2017-06-02 21:22:29.910844424+00:00 ++ echo END wait_for_fluentd_to_catch_up took 14 seconds at 2017-06-02 21:22:29.910844424+00:00 ++ return 0 ++ '[' 0 -ne 0 ']' ++ return 0 ++ oc replace --force -f /tmp/tmp.P0Hi0WLSBl template "logging-fluentd-template" deleted template "logging-fluentd-template" replaced ++ rm -f /tmp/tmp.P0Hi0WLSBl ++ restart_fluentd ++ oc delete daemonset logging-fluentd daemonset "logging-fluentd" deleted ++ wait_for_pod_ACTION stop logging-fluentd-85lmd ++ local ii=120 ++ local incr=10 ++ '[' stop = start ']' ++ curpod=logging-fluentd-85lmd ++ '[' -z logging-fluentd-85lmd -a -n '' ']' ++ '[' 120 -gt 0 ']' ++ '[' stop = stop ']' ++ oc describe pod/logging-fluentd-85lmd ++ '[' stop = start ']' ++ break ++ '[' 120 -le 0 ']' ++ return 0 ++ oc process logging-fluentd-template ++ oc create -f - daemonset "logging-fluentd" created ++ wait_for_pod_ACTION start fluentd ++ local ii=120 ++ local incr=10 ++ '[' start = start ']' +++ get_running_pod fluentd +++ oc get pods -l component=fluentd +++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}' ++ curpod= ++ '[' 120 -gt 0 ']' ++ '[' start = stop ']' ++ '[' start = start ']' ++ '[' -z '' ']' ++ '[' -n '' ']' ++ '[' -n 1 ']' pod for component=fluentd not running yet ++ echo pod for component=fluentd not running yet ++ sleep 10 +++ expr 120 - 10 ++ ii=110 ++ '[' start = start ']' +++ get_running_pod fluentd +++ oc get pods -l component=fluentd +++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}' ++ curpod=logging-fluentd-w9gvv ++ '[' 110 -gt 0 ']' ++ '[' start = stop ']' ++ '[' start = start ']' ++ '[' -z logging-fluentd-w9gvv ']' ++ break ++ '[' 110 -le 0 ']' ++ return 0 +++ get_running_pod fluentd +++ oc get pods -l component=fluentd +++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}' ++ fpod=logging-fluentd-w9gvv ++ write_and_verify_logs 1 ++ rc=0 ++ wait_for_fluentd_to_catch_up '' '' 1 +++ date +%s ++ local starttime=1496438575 +++ date -u 
--rfc-3339=ns START wait_for_fluentd_to_catch_up at 2017-06-02 21:22:55.571504718+00:00 ++ echo START wait_for_fluentd_to_catch_up at 2017-06-02 21:22:55.571504718+00:00 +++ get_running_pod es +++ oc get pods -l component=es +++ awk -v sel=es '$1 ~ sel && $3 == "Running" {print $1}' ++ local es_pod=logging-es-data-master-i5jtydma-1-21qpf +++ get_running_pod es-ops +++ oc get pods -l component=es-ops +++ awk -v sel=es-ops '$1 ~ sel && $3 == "Running" {print $1}' ++ local es_ops_pod=logging-es-ops-data-master-tycs4wrj-1-70xts ++ '[' -z logging-es-ops-data-master-tycs4wrj-1-70xts ']' +++ uuidgen ++ local uuid_es=71f7cbad-a3a3-42dd-b208-9a543013f4c8 +++ uuidgen ++ local uuid_es_ops=fb86491a-5557-4135-a5fa-11c91c9d2218 ++ local expected=1 ++ local timeout=300 ++ add_test_message 71f7cbad-a3a3-42dd-b208-9a543013f4c8 +++ get_running_pod kibana +++ oc get pods -l component=kibana +++ awk -v sel=kibana '$1 ~ sel && $3 == "Running" {print $1}' ++ local kib_pod=logging-kibana-1-54cs7 ++ oc exec logging-kibana-1-54cs7 -c kibana -- curl --connect-timeout 1 -s http://localhost:5601/71f7cbad-a3a3-42dd-b208-9a543013f4c8 ++ echo added es message 71f7cbad-a3a3-42dd-b208-9a543013f4c8 added es message 71f7cbad-a3a3-42dd-b208-9a543013f4c8 ++ logger -i -p local6.info -t fb86491a-5557-4135-a5fa-11c91c9d2218 fb86491a-5557-4135-a5fa-11c91c9d2218 added es-ops message fb86491a-5557-4135-a5fa-11c91c9d2218 ++ echo added es-ops message fb86491a-5557-4135-a5fa-11c91c9d2218 ++ local rc=0 ++ espod=logging-es-data-master-i5jtydma-1-21qpf ++ myproject=project.logging ++ mymessage=71f7cbad-a3a3-42dd-b208-9a543013f4c8 ++ expected=1 ++ wait_until_cmd_or_err test_count_expected test_count_err 300 ++ let ii=300 ++ local interval=1 ++ '[' 300 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message 71f7cbad-a3a3-42dd-b208-9a543013f4c8 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:71f7cbad-a3a3-42dd-b208-9a543013f4c8' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local 'endpoint=/project.logging*/_count?q=message:71f7cbad-a3a3-42dd-b208-9a543013f4c8' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:71f7cbad-a3a3-42dd-b208-9a543013f4c8' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 299 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message 71f7cbad-a3a3-42dd-b208-9a543013f4c8 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:71f7cbad-a3a3-42dd-b208-9a543013f4c8' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local 'endpoint=/project.logging*/_count?q=message:71f7cbad-a3a3-42dd-b208-9a543013f4c8' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 
--key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:71f7cbad-a3a3-42dd-b208-9a543013f4c8' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 298 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message 71f7cbad-a3a3-42dd-b208-9a543013f4c8 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:71f7cbad-a3a3-42dd-b208-9a543013f4c8' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local 'endpoint=/project.logging*/_count?q=message:71f7cbad-a3a3-42dd-b208-9a543013f4c8' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:71f7cbad-a3a3-42dd-b208-9a543013f4c8' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 297 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message 71f7cbad-a3a3-42dd-b208-9a543013f4c8 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:71f7cbad-a3a3-42dd-b208-9a543013f4c8' --connect-timeout 1 +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:71f7cbad-a3a3-42dd-b208-9a543013f4c8' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:71f7cbad-a3a3-42dd-b208-9a543013f4c8' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 296 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message 71f7cbad-a3a3-42dd-b208-9a543013f4c8 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:71f7cbad-a3a3-42dd-b208-9a543013f4c8' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local 'endpoint=/project.logging*/_count?q=message:71f7cbad-a3a3-42dd-b208-9a543013f4c8' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:71f7cbad-a3a3-42dd-b208-9a543013f4c8' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 295 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message 71f7cbad-a3a3-42dd-b208-9a543013f4c8 +++ 
get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:71f7cbad-a3a3-42dd-b208-9a543013f4c8' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local 'endpoint=/project.logging*/_count?q=message:71f7cbad-a3a3-42dd-b208-9a543013f4c8' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:71f7cbad-a3a3-42dd-b208-9a543013f4c8' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 294 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message 71f7cbad-a3a3-42dd-b208-9a543013f4c8 +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:71f7cbad-a3a3-42dd-b208-9a543013f4c8' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:71f7cbad-a3a3-42dd-b208-9a543013f4c8' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:71f7cbad-a3a3-42dd-b208-9a543013f4c8' +++ get_count_from_json +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 293 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message 71f7cbad-a3a3-42dd-b208-9a543013f4c8 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:71f7cbad-a3a3-42dd-b208-9a543013f4c8' --connect-timeout 1 +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:71f7cbad-a3a3-42dd-b208-9a543013f4c8' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:71f7cbad-a3a3-42dd-b208-9a543013f4c8' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 292 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message 71f7cbad-a3a3-42dd-b208-9a543013f4c8 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:71f7cbad-a3a3-42dd-b208-9a543013f4c8' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:71f7cbad-a3a3-42dd-b208-9a543013f4c8' +++ shift +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ shift +++ args=("${@:-}") +++ 
local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:71f7cbad-a3a3-42dd-b208-9a543013f4c8' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 291 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message 71f7cbad-a3a3-42dd-b208-9a543013f4c8 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:71f7cbad-a3a3-42dd-b208-9a543013f4c8' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local 'endpoint=/project.logging*/_count?q=message:71f7cbad-a3a3-42dd-b208-9a543013f4c8' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:71f7cbad-a3a3-42dd-b208-9a543013f4c8' ++ local nrecs=1 ++ test 1 = 1 ++ break ++ '[' 291 -le 0 ']' ++ return 0 ++ echo good - wait_for_fluentd_to_catch_up: found 1 record project logging for 71f7cbad-a3a3-42dd-b208-9a543013f4c8 good - wait_for_fluentd_to_catch_up: found 1 record project logging for 71f7cbad-a3a3-42dd-b208-9a543013f4c8 ++ espod=logging-es-ops-data-master-tycs4wrj-1-70xts ++ myproject=.operations ++ mymessage=fb86491a-5557-4135-a5fa-11c91c9d2218 ++ expected=1 ++ myfield=systemd.u.SYSLOG_IDENTIFIER ++ wait_until_cmd_or_err test_count_expected test_count_err 300 ++ let ii=300 ++ local interval=1 ++ '[' 300 -gt 0 ']' ++ test_count_expected ++ myfield=systemd.u.SYSLOG_IDENTIFIER +++ query_es_from_es logging-es-ops-data-master-tycs4wrj-1-70xts .operations _count systemd.u.SYSLOG_IDENTIFIER fb86491a-5557-4135-a5fa-11c91c9d2218 +++ get_count_from_json +++ curl_es logging-es-ops-data-master-tycs4wrj-1-70xts '/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:fb86491a-5557-4135-a5fa-11c91c9d2218' --connect-timeout 1 +++ local pod=logging-es-ops-data-master-tycs4wrj-1-70xts +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local 'endpoint=/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:fb86491a-5557-4135-a5fa-11c91c9d2218' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-ops-data-master-tycs4wrj-1-70xts -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:fb86491a-5557-4135-a5fa-11c91c9d2218' ++ local nrecs=1 ++ test 1 = 1 ++ break ++ '[' 300 -le 0 ']' ++ return 0 good - wait_for_fluentd_to_catch_up: found 1 record project .operations for fb86491a-5557-4135-a5fa-11c91c9d2218 ++ echo good - wait_for_fluentd_to_catch_up: found 1 record project .operations for fb86491a-5557-4135-a5fa-11c91c9d2218 ++ '[' -n '' ']' ++ '[' -n '' ']' +++ date +%s ++ local endtime=1496438589 +++ expr 1496438589 - 1496438575 +++ date -u --rfc-3339=ns END wait_for_fluentd_to_catch_up took 14 seconds at 2017-06-02 
21:23:09.862402389+00:00 ++ echo END wait_for_fluentd_to_catch_up took 14 seconds at 2017-06-02 21:23:09.862402389+00:00 ++ return 0 ++ '[' 0 -ne 0 ']' ++ return 0 ++ cleanup ++ '[' '!' -f /tmp/tmp.P0Hi0WLSBl ']' ++ return 0 running test test-fluentd-forward.sh ++ set -o nounset ++ set -o pipefail ++ type get_running_pod ++ [[ 1 -ne 1 ]] ++ [[ true = \f\a\l\s\e ]] ++ CLUSTER=true ++ ops=-ops ++ ARTIFACT_DIR=/tmp/origin-aggregated-logging/artifacts ++ '[' '!' -d /tmp/origin-aggregated-logging/artifacts ']' ++ PROJ_PREFIX=project. ++ get_test_user_token ++ oc login --username=admin --password=admin +++ oc whoami -t ++ test_token=vtza0n0HVMcQ9i0kbBUxJTYqQmQfAchRLjmBpNBxhq4 +++ oc whoami ++ test_name=admin ++ test_ip=127.0.0.1 ++ oc login --username=system:admin ++ TEST_DIVIDER=------------------------------------------ +++ get_running_pod fluentd +++ oc get pods -l component=fluentd +++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}' ++ fpod=logging-fluentd-w9gvv ++ write_and_verify_logs 1 ++ expected=1 ++ rc=0 ++ wait_for_fluentd_to_catch_up '' '' +++ date +%s ++ local starttime=1496438590 +++ date -u --rfc-3339=ns ++ echo START wait_for_fluentd_to_catch_up at 2017-06-02 21:23:10.916707795+00:00 START wait_for_fluentd_to_catch_up at 2017-06-02 21:23:10.916707795+00:00 +++ get_running_pod es +++ oc get pods -l component=es +++ awk -v sel=es '$1 ~ sel && $3 == "Running" {print $1}' ++ local es_pod=logging-es-data-master-i5jtydma-1-21qpf +++ get_running_pod es-ops +++ oc get pods -l component=es-ops +++ awk -v sel=es-ops '$1 ~ sel && $3 == "Running" {print $1}' ++ local es_ops_pod=logging-es-ops-data-master-tycs4wrj-1-70xts ++ '[' -z logging-es-ops-data-master-tycs4wrj-1-70xts ']' +++ uuidgen ++ local uuid_es=f9a1f8bd-69f1-4b07-81e9-1ce14ccf72dd +++ uuidgen ++ local uuid_es_ops=7cc6783f-508c-4f78-9a12-e8078e737b33 ++ local expected=1 ++ local timeout=300 ++ add_test_message f9a1f8bd-69f1-4b07-81e9-1ce14ccf72dd +++ get_running_pod kibana +++ oc get pods -l component=kibana +++ awk -v sel=kibana '$1 ~ sel && $3 == "Running" {print $1}' ++ local kib_pod=logging-kibana-1-54cs7 ++ oc exec logging-kibana-1-54cs7 -c kibana -- curl --connect-timeout 1 -s http://localhost:5601/f9a1f8bd-69f1-4b07-81e9-1ce14ccf72dd added es message f9a1f8bd-69f1-4b07-81e9-1ce14ccf72dd ++ echo added es message f9a1f8bd-69f1-4b07-81e9-1ce14ccf72dd ++ logger -i -p local6.info -t 7cc6783f-508c-4f78-9a12-e8078e737b33 7cc6783f-508c-4f78-9a12-e8078e737b33 added es-ops message 7cc6783f-508c-4f78-9a12-e8078e737b33 ++ echo added es-ops message 7cc6783f-508c-4f78-9a12-e8078e737b33 ++ local rc=0 ++ espod=logging-es-data-master-i5jtydma-1-21qpf ++ myproject=project.logging ++ mymessage=f9a1f8bd-69f1-4b07-81e9-1ce14ccf72dd ++ expected=1 ++ wait_until_cmd_or_err test_count_expected test_count_err 300 ++ let ii=300 ++ local interval=1 ++ '[' 300 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message f9a1f8bd-69f1-4b07-81e9-1ce14ccf72dd +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:f9a1f8bd-69f1-4b07-81e9-1ce14ccf72dd' --connect-timeout 1 +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:f9a1f8bd-69f1-4b07-81e9-1ce14ccf72dd' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc 
exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:f9a1f8bd-69f1-4b07-81e9-1ce14ccf72dd' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 299 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message f9a1f8bd-69f1-4b07-81e9-1ce14ccf72dd +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:f9a1f8bd-69f1-4b07-81e9-1ce14ccf72dd' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local 'endpoint=/project.logging*/_count?q=message:f9a1f8bd-69f1-4b07-81e9-1ce14ccf72dd' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:f9a1f8bd-69f1-4b07-81e9-1ce14ccf72dd' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 298 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message f9a1f8bd-69f1-4b07-81e9-1ce14ccf72dd +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:f9a1f8bd-69f1-4b07-81e9-1ce14ccf72dd' --connect-timeout 1 +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:f9a1f8bd-69f1-4b07-81e9-1ce14ccf72dd' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:f9a1f8bd-69f1-4b07-81e9-1ce14ccf72dd' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 297 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message f9a1f8bd-69f1-4b07-81e9-1ce14ccf72dd +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:f9a1f8bd-69f1-4b07-81e9-1ce14ccf72dd' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:f9a1f8bd-69f1-4b07-81e9-1ce14ccf72dd' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:f9a1f8bd-69f1-4b07-81e9-1ce14ccf72dd' +++ get_count_from_json +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 296 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es 
logging-es-data-master-i5jtydma-1-21qpf project.logging _count message f9a1f8bd-69f1-4b07-81e9-1ce14ccf72dd +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:f9a1f8bd-69f1-4b07-81e9-1ce14ccf72dd' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local 'endpoint=/project.logging*/_count?q=message:f9a1f8bd-69f1-4b07-81e9-1ce14ccf72dd' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:f9a1f8bd-69f1-4b07-81e9-1ce14ccf72dd' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 295 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message f9a1f8bd-69f1-4b07-81e9-1ce14ccf72dd +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:f9a1f8bd-69f1-4b07-81e9-1ce14ccf72dd' --connect-timeout 1 +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:f9a1f8bd-69f1-4b07-81e9-1ce14ccf72dd' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:f9a1f8bd-69f1-4b07-81e9-1ce14ccf72dd' ++ local nrecs=1 ++ test 1 = 1 ++ break ++ '[' 295 -le 0 ']' ++ return 0 good - wait_for_fluentd_to_catch_up: found 1 record project logging for f9a1f8bd-69f1-4b07-81e9-1ce14ccf72dd ++ echo good - wait_for_fluentd_to_catch_up: found 1 record project logging for f9a1f8bd-69f1-4b07-81e9-1ce14ccf72dd ++ espod=logging-es-ops-data-master-tycs4wrj-1-70xts ++ myproject=.operations ++ mymessage=7cc6783f-508c-4f78-9a12-e8078e737b33 ++ expected=1 ++ myfield=systemd.u.SYSLOG_IDENTIFIER ++ wait_until_cmd_or_err test_count_expected test_count_err 300 ++ let ii=300 ++ local interval=1 ++ '[' 300 -gt 0 ']' ++ test_count_expected ++ myfield=systemd.u.SYSLOG_IDENTIFIER +++ query_es_from_es logging-es-ops-data-master-tycs4wrj-1-70xts .operations _count systemd.u.SYSLOG_IDENTIFIER 7cc6783f-508c-4f78-9a12-e8078e737b33 +++ get_count_from_json +++ curl_es logging-es-ops-data-master-tycs4wrj-1-70xts '/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:7cc6783f-508c-4f78-9a12-e8078e737b33' --connect-timeout 1 +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local pod=logging-es-ops-data-master-tycs4wrj-1-70xts +++ local 'endpoint=/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:7cc6783f-508c-4f78-9a12-e8078e737b33' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-ops-data-master-tycs4wrj-1-70xts -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 
'https://localhost:9200/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:7cc6783f-508c-4f78-9a12-e8078e737b33' ++ local nrecs=1 ++ test 1 = 1 ++ break ++ '[' 300 -le 0 ']' ++ return 0 good - wait_for_fluentd_to_catch_up: found 1 record project .operations for 7cc6783f-508c-4f78-9a12-e8078e737b33 ++ echo good - wait_for_fluentd_to_catch_up: found 1 record project .operations for 7cc6783f-508c-4f78-9a12-e8078e737b33 ++ '[' -n '' ']' ++ '[' -n '' ']' +++ date +%s ++ local endtime=1496438599 +++ expr 1496438599 - 1496438590 +++ date -u --rfc-3339=ns END wait_for_fluentd_to_catch_up took 9 seconds at 2017-06-02 21:23:19.524887180+00:00 ++ echo END wait_for_fluentd_to_catch_up took 9 seconds at 2017-06-02 21:23:19.524887180+00:00 ++ return 0 ++ return 0 ++ trap cleanup INT TERM EXIT ++ create_forwarding_fluentd ++ oc create configmap logging-forward-fluentd --from-file=fluent.conf=../templates/forward-fluent.conf configmap "logging-forward-fluentd" created ++ oc get template/logging-fluentd-template -o yaml ++ sed -e 's/logging-infra-fluentd: "true"/logging-infra-forward-fluentd: "true"/' -e 's/name: logging-fluentd/name: logging-forward-fluentd/' -e 's/ fluentd/ forward-fluentd/' -e '/image:/ a \ ports: \ - containerPort: 24284' ++ oc new-app -f - --> Deploying template "logging/logging-forward-fluentd-template" for "-" to project logging logging-forward-fluentd-template --------- Template for logging forward-fluentd deployment. * With parameters: * IMAGE_PREFIX=172.30.101.10:5000/logging/ * IMAGE_VERSION=latest --> Creating resources ... daemonset "logging-forward-fluentd" created --> Success Run 'oc status' to view your app. ++ oc label node --all logging-infra-forward-fluentd=true node "172.18.8.225" labeled ++ wait_for_pod_ACTION start forward-fluentd ++ local ii=120 ++ local incr=10 ++ '[' start = start ']' +++ get_running_pod forward-fluentd +++ oc get pods -l component=forward-fluentd +++ awk -v sel=forward-fluentd '$1 ~ sel && $3 == "Running" {print $1}' ++ curpod= ++ '[' 120 -gt 0 ']' ++ '[' start = stop ']' ++ '[' start = start ']' ++ '[' -z '' ']' ++ '[' -n '' ']' ++ '[' -n 1 ']' ++ echo pod for component=forward-fluentd not running yet pod for component=forward-fluentd not running yet ++ sleep 10 +++ expr 120 - 10 ++ ii=110 ++ '[' start = start ']' +++ get_running_pod forward-fluentd +++ oc get pods -l component=forward-fluentd +++ awk -v sel=forward-fluentd '$1 ~ sel && $3 == "Running" {print $1}' ++ curpod=logging-forward-fluentd-306fx ++ '[' 110 -gt 0 ']' ++ '[' start = stop ']' ++ '[' start = start ']' ++ '[' -z logging-forward-fluentd-306fx ']' ++ break ++ '[' 110 -le 0 ']' ++ return 0 ++ update_current_fluentd ++ oc label node --all logging-infra-fluentd- node "172.18.8.225" labeled ++ wait_for_pod_ACTION stop logging-fluentd-w9gvv ++ local ii=120 ++ local incr=10 ++ '[' stop = start ']' ++ curpod=logging-fluentd-w9gvv ++ '[' -z logging-fluentd-w9gvv -a -n '' ']' ++ '[' 120 -gt 0 ']' ++ '[' stop = stop ']' ++ oc describe pod/logging-fluentd-w9gvv ++ '[' -n 1 ']' ++ echo pod logging-fluentd-w9gvv still running pod logging-fluentd-w9gvv still running ++ sleep 10 +++ expr 120 - 10 ++ ii=110 ++ '[' stop = start ']' ++ '[' 110 -gt 0 ']' ++ '[' stop = stop ']' ++ oc describe pod/logging-fluentd-w9gvv ++ '[' -n 1 ']' pod logging-fluentd-w9gvv still running ++ echo pod logging-fluentd-w9gvv still running ++ sleep 10 +++ expr 110 - 10 ++ ii=100 ++ '[' stop = start ']' ++ '[' 100 -gt 0 ']' ++ '[' stop = stop ']' ++ oc describe pod/logging-fluentd-w9gvv ++ '[' stop = start ']' ++ break 
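The wait_for_pod_ACTION calls traced above and below appear to poll oc in 10-second steps, waiting up to roughly 120 seconds for a pod to start or stop. A minimal sketch of that polling pattern, reconstructed from the traced commands rather than copied from the test suite's source (get_running_pod and the exact messages are assumptions inferred from the trace):

    # Sketch only: reconstructed from the trace, not the verbatim helper from the test suite.
    get_running_pod() {
        # Print the name of the Running pod carrying the given component label.
        oc get pods -l component="$1" | awk -v sel="$1" '$1 ~ sel && $3 == "Running" {print $1}'
    }

    wait_for_pod_ACTION() {
        # Usage: wait_for_pod_ACTION start <component>   or   wait_for_pod_ACTION stop <pod-name>
        local action=$1 target=$2
        local ii=120 incr=10 curpod
        if [ "$action" = start ]; then
            curpod=$(get_running_pod "$target")
        else
            curpod=$target
        fi
        while [ $ii -gt 0 ]; do
            if [ "$action" = stop ]; then
                # Finished once the pod can no longer be described, i.e. it has been deleted.
                oc describe pod/"$curpod" > /dev/null 2>&1 || break
                echo "pod $curpod still running"
            else
                # Finished once a Running pod for the component shows up.
                [ -n "$curpod" ] && break
                echo "pod for component=$target not running yet"
            fi
            sleep $incr
            ii=$(expr $ii - $incr)
            if [ "$action" = start ]; then
                curpod=$(get_running_pod "$target")
            fi
        done
        [ $ii -le 0 ] && return 1   # timed out waiting for the pod to $action
        return 0
    }

The Elasticsearch count checks earlier in the log follow the same shape, except that wait_until_cmd_or_err retries test_count_expected once per second for up to 300 attempts before falling back to test_count_err.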
++ '[' 100 -le 0 ']' ++ return 0 ++ oc get configmap/logging-fluentd -o yaml ++ sed '/## matches/ a\ <match **>\ @include configs.d/user/secure-forward.conf\ </match>' ++ oc replace -f - configmap "logging-fluentd" replaced +++ oc get pods -l component=forward-fluentd -o name ++ POD=pods/logging-forward-fluentd-306fx +++ oc get pods/logging-forward-fluentd-306fx '--template={{.status.podIP}}' ++ FLUENTD_FORWARD=172.17.0.10 ++ oc patch configmap/logging-fluentd --type=json --patch '[{ "op": "replace", "path": "/data/secure-forward.conf", "value": "\ @type secure_forward\n\ self_hostname forwarding-${HOSTNAME}\n\ shared_key aggregated_logging_ci_testing\n\ secure no\n\ buffer_queue_limit \"#{ENV['\''BUFFER_QUEUE_LIMIT'\'']}\"\n\ buffer_chunk_limit \"#{ENV['\''BUFFER_SIZE_LIMIT'\'']}\"\n\ <server>\n\ host 172.17.0.10\n\ port 24284\n\ </server>"}]' configmap "logging-fluentd" patched ++ oc label node --all logging-infra-fluentd=true node "172.18.8.225" labeled ++ wait_for_pod_ACTION start fluentd ++ local ii=120 ++ local incr=10 ++ '[' start = start ']' +++ get_running_pod fluentd +++ oc get pods -l component=fluentd +++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}' pod for component=fluentd not running yet ++ curpod= ++ '[' 120 -gt 0 ']' ++ '[' start = stop ']' ++ '[' start = start ']' ++ '[' -z '' ']' ++ '[' -n '' ']' ++ '[' -n 1 ']' ++ echo pod for component=fluentd not running yet ++ sleep 10 +++ expr 120 - 10 ++ ii=110 ++ '[' start = start ']' +++ get_running_pod fluentd +++ oc get pods -l component=fluentd +++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}' ++ curpod=logging-fluentd-45p2d ++ '[' 110 -gt 0 ']' ++ '[' start = stop ']' ++ '[' start = start ']' ++ '[' -z logging-fluentd-45p2d ']' ++ break ++ '[' 110 -le 0 ']' ++ return 0 ++ write_and_verify_logs 1 ++ expected=1 ++ rc=0 ++ wait_for_fluentd_to_catch_up '' '' +++ date +%s ++ local starttime=1496438643 +++ date -u --rfc-3339=ns START wait_for_fluentd_to_catch_up at 2017-06-02 21:24:03.209141213+00:00 ++ echo START wait_for_fluentd_to_catch_up at 2017-06-02 21:24:03.209141213+00:00 +++ get_running_pod es +++ oc get pods -l component=es +++ awk -v sel=es '$1 ~ sel && $3 == "Running" {print $1}' ++ local es_pod=logging-es-data-master-i5jtydma-1-21qpf +++ get_running_pod es-ops +++ oc get pods -l component=es-ops +++ awk -v sel=es-ops '$1 ~ sel && $3 == "Running" {print $1}' ++ local es_ops_pod=logging-es-ops-data-master-tycs4wrj-1-70xts ++ '[' -z logging-es-ops-data-master-tycs4wrj-1-70xts ']' +++ uuidgen ++ local uuid_es=b0b106a0-927f-46b5-abbc-17e086a3fc07 +++ uuidgen ++ local uuid_es_ops=398e37aa-dd8b-4cc3-926c-2a65f3166b72 ++ local expected=1 ++ local timeout=300 ++ add_test_message b0b106a0-927f-46b5-abbc-17e086a3fc07 +++ get_running_pod kibana +++ oc get pods -l component=kibana +++ awk -v sel=kibana '$1 ~ sel && $3 == "Running" {print $1}' ++ local kib_pod=logging-kibana-1-54cs7 ++ oc exec logging-kibana-1-54cs7 -c kibana -- curl --connect-timeout 1 -s http://localhost:5601/b0b106a0-927f-46b5-abbc-17e086a3fc07 added es message b0b106a0-927f-46b5-abbc-17e086a3fc07 ++ echo added es message b0b106a0-927f-46b5-abbc-17e086a3fc07 ++ logger -i -p local6.info -t 398e37aa-dd8b-4cc3-926c-2a65f3166b72 398e37aa-dd8b-4cc3-926c-2a65f3166b72 added es-ops message 398e37aa-dd8b-4cc3-926c-2a65f3166b72 ++ echo added es-ops message 398e37aa-dd8b-4cc3-926c-2a65f3166b72 ++ local rc=0 ++ espod=logging-es-data-master-i5jtydma-1-21qpf ++ myproject=project.logging ++ mymessage=b0b106a0-927f-46b5-abbc-17e086a3fc07 ++ 
expected=1 ++ wait_until_cmd_or_err test_count_expected test_count_err 300 ++ let ii=300 ++ local interval=1 ++ '[' 300 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message b0b106a0-927f-46b5-abbc-17e086a3fc07 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 299 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message b0b106a0-927f-46b5-abbc-17e086a3fc07 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' --connect-timeout 1 +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 298 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message b0b106a0-927f-46b5-abbc-17e086a3fc07 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' --connect-timeout 1 +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 297 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message b0b106a0-927f-46b5-abbc-17e086a3fc07 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf 
'/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local 'endpoint=/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 296 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message b0b106a0-927f-46b5-abbc-17e086a3fc07 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 295 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message b0b106a0-927f-46b5-abbc-17e086a3fc07 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local 'endpoint=/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 294 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message b0b106a0-927f-46b5-abbc-17e086a3fc07 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local 'endpoint=/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec 
logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 293 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message b0b106a0-927f-46b5-abbc-17e086a3fc07 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local 'endpoint=/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 292 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message b0b106a0-927f-46b5-abbc-17e086a3fc07 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local 'endpoint=/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 291 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message b0b106a0-927f-46b5-abbc-17e086a3fc07 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' --connect-timeout 1 +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 290 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es 
logging-es-data-master-i5jtydma-1-21qpf project.logging _count message b0b106a0-927f-46b5-abbc-17e086a3fc07 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' --connect-timeout 1 +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 289 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message b0b106a0-927f-46b5-abbc-17e086a3fc07 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' --connect-timeout 1 +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 288 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message b0b106a0-927f-46b5-abbc-17e086a3fc07 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 287 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message b0b106a0-927f-46b5-abbc-17e086a3fc07 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' --connect-timeout 1 +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 
'endpoint=/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 286 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message b0b106a0-927f-46b5-abbc-17e086a3fc07 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 285 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ get_count_from_json +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message b0b106a0-927f-46b5-abbc-17e086a3fc07 +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 284 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message b0b106a0-927f-46b5-abbc-17e086a3fc07 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local 'endpoint=/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 
'https://localhost:9200/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1
[... the same one-second poll repeats with ii counting down from 283 to 247 while the count for b0b106a0-927f-46b5-abbc-17e086a3fc07 in project.logging* stays 0; each poll runs test_count_expected, which calls query_es_from_es, curl_es and get_count_from_json exactly as before ...]
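The helpers that appear in every poll above are easier to follow as shell functions. The sketch below is reconstructed from the xtrace output itself: the function names and the commands they run are taken from the trace, while the exact bodies, the argument handling, and the query_es_from_es parameter order are inferred and may differ from the real test library.

# curl_es <es-pod> <endpoint> [extra curl args]
# Runs curl inside the Elasticsearch pod against https://localhost:9200, authenticating
# with the admin client cert/key mounted from the secret, as every poll above shows.
curl_es() {
    local pod=$1
    local endpoint=$2
    shift; shift
    local args=("$@")     # the trace shows args=("${@:-}")
    local secret_dir=/etc/elasticsearch/secret/
    oc exec "${pod}" -- curl --silent --insecure "${args[@]}" \
        --key "${secret_dir}admin-key" --cert "${secret_dir}admin-cert" \
        "https://localhost:9200${endpoint}"
}

# query_es_from_es <es-pod> <index-prefix> <op> <field> <value>
# Builds the "/<prefix>*/_count?q=field:value" endpoint seen in the polls above
# (parameter order inferred from the call sites in the trace).
query_es_from_es() {
    local pod=$1 project=$2 op=$3 field=$4 value=$5
    curl_es "${pod}" "/${project}*/${op}?q=${field}:${value}" --connect-timeout 1
}

# get_count_from_json
# Extracts "count" from the _count response; the python 2 one-liner is verbatim from the trace.
get_count_from_json() {
    python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
}

In the trace these compose as: query_es_from_es $espod $myproject _count $myfield $mymessage | get_count_from_json.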
++ '[' 246 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message b0b106a0-927f-46b5-abbc-17e086a3fc07 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local 'endpoint=/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:b0b106a0-927f-46b5-abbc-17e086a3fc07' ++ local nrecs=1 ++ test 1 = 1 ++ break ++ '[' 246 -le 0 ']' ++ return 0 good - wait_for_fluentd_to_catch_up: found 1 record project logging for b0b106a0-927f-46b5-abbc-17e086a3fc07 ++ echo good - wait_for_fluentd_to_catch_up: found 1 record project logging for b0b106a0-927f-46b5-abbc-17e086a3fc07 ++ espod=logging-es-ops-data-master-tycs4wrj-1-70xts ++ myproject=.operations ++ mymessage=398e37aa-dd8b-4cc3-926c-2a65f3166b72 ++ expected=1 ++ myfield=systemd.u.SYSLOG_IDENTIFIER ++ wait_until_cmd_or_err test_count_expected test_count_err 300 ++ let ii=300 ++ local interval=1 ++ '[' 300 -gt 0 ']' ++ test_count_expected ++ myfield=systemd.u.SYSLOG_IDENTIFIER +++ query_es_from_es logging-es-ops-data-master-tycs4wrj-1-70xts .operations _count systemd.u.SYSLOG_IDENTIFIER 398e37aa-dd8b-4cc3-926c-2a65f3166b72 +++ get_count_from_json +++ curl_es logging-es-ops-data-master-tycs4wrj-1-70xts '/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:398e37aa-dd8b-4cc3-926c-2a65f3166b72' --connect-timeout 1 +++ local pod=logging-es-ops-data-master-tycs4wrj-1-70xts +++ local 'endpoint=/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:398e37aa-dd8b-4cc3-926c-2a65f3166b72' +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-ops-data-master-tycs4wrj-1-70xts -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:398e37aa-dd8b-4cc3-926c-2a65f3166b72' ++ local nrecs=1 ++ test 1 = 1 ++ break ++ '[' 300 -le 0 ']' ++ return 0 good - wait_for_fluentd_to_catch_up: found 1 record project .operations for 398e37aa-dd8b-4cc3-926c-2a65f3166b72 ++ echo good - wait_for_fluentd_to_catch_up: found 1 record project .operations for 398e37aa-dd8b-4cc3-926c-2a65f3166b72 ++ '[' -n '' ']' ++ '[' -n '' ']' +++ date +%s ++ local endtime=1496438719 +++ expr 1496438719 - 1496438643 +++ date -u --rfc-3339=ns END wait_for_fluentd_to_catch_up took 76 seconds at 2017-06-02 21:25:19.644214624+00:00 ++ echo END wait_for_fluentd_to_catch_up took 76 seconds at 2017-06-02 21:25:19.644214624+00:00 ++ return 0 ++ return 0 ++ cleanup ++ cleanup_forward ++ oc label node --all logging-infra-fluentd- node "172.18.8.225" labeled ++ wait_for_pod_ACTION stop logging-fluentd-w9gvv ++ local ii=120 ++ local incr=10 ++ '[' stop = start ']' ++ curpod=logging-fluentd-w9gvv ++ '[' -z logging-fluentd-w9gvv -a -n '' ']' ++ '[' 120 -gt 0 ']' ++ '[' stop = stop ']' ++ oc describe pod/logging-fluentd-w9gvv ++ '[' stop = start ']' ++ break ++ '[' 120 -le 0 ']' ++ return 0 ++ oc delete daemonset/logging-forward-fluentd daemonset "logging-forward-fluentd" deleted
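The retry wrapper driving these polls, wait_until_cmd_or_err with test_count_expected and a one-second interval, can be sketched the same way. This is inferred from the let ii=300 / sleep 1 / let ii=ii-1 pattern in the trace, not copied from the actual helpers; test_count_err appears only because the trace names it as the on-timeout command.

# wait_until_cmd_or_err <test-cmd> <error-cmd> <iterations> [interval]
# Re-runs <test-cmd> once per interval until it succeeds or the budget is used up;
# on timeout it runs <error-cmd> and returns failure.
wait_until_cmd_or_err() {
    let ii=$3
    local interval=${4:-1}
    while [ $ii -gt 0 ]; do
        if $1; then
            break
        fi
        sleep $interval
        let ii=ii-1
    done
    if [ $ii -le 0 ]; then
        $2
        return 1
    fi
    return 0
}

# test_count_expected
# One poll: ask Elasticsearch how many documents match the test message and compare the
# result with the expected count. It reads the globals espod, myproject, myfield, mymessage
# and expected, exactly the variables visible in the trace.
test_count_expected() {
    myfield=${myfield:-message}
    local nrecs=$(query_es_from_es $espod $myproject _count $myfield $mymessage | get_count_from_json)
    test "$nrecs" = $expected
}

The 76 seconds reported above is the wall-clock time of the whole catch-up check; the loop itself allows up to 300 one-second polls per index.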
+++ oc get configmap/logging-fluentd -o yaml +++ grep '<match \*\*>' ++ '[' -n ' <match **>' ']' ++ oc get configmap/logging-fluentd -o yaml ++ sed -e '/<match \*\*>/ d' -e '/@include configs\.d\/user\/secure-forward\.conf/ d' -e '/<\/match>/ d' ++ oc replace -f - configmap "logging-fluentd" replaced ++ oc patch configmap/logging-fluentd --type=json --patch '[{ "op": "replace", "path": "/data/secure-forward.conf", "value": "\ # @type secure_forward\n\ # self_hostname forwarding-${HOSTNAME}\n\ # shared_key aggregated_logging_ci_testing\n\ # secure no\n\ # <server>\n\ # host ${FLUENTD_FORWARD}\n\ # port 24284\n\ # </server>"}]' configmap "logging-fluentd" patched ++ oc label node --all logging-infra-fluentd=true node "172.18.8.225" labeled ++ wait_for_pod_ACTION start fluentd ++ local ii=120 ++ local incr=10 ++ '[' start = start ']' +++ get_running_pod fluentd +++ oc get pods -l component=fluentd +++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}' ++ curpod= ++ '[' 120 -gt 0 ']' ++ '[' start = stop ']' ++ '[' start = start ']' ++ '[' -z '' ']' ++ '[' -n '' ']' ++ '[' -n 1 ']' pod for component=fluentd not running yet ++ echo pod for component=fluentd not running yet ++ sleep 10 +++ expr 120 - 10 ++ ii=110 ++ '[' start = start ']' +++ get_running_pod fluentd +++ oc get pods -l component=fluentd +++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}' ++ curpod= ++ '[' 110 -gt 0 ']' ++ '[' start = stop ']' ++ '[' start = start ']' ++ '[' -z '' ']' ++ '[' -n '' ']' ++ '[' -n 1 ']' ++ echo pod for component=fluentd not running yet pod for component=fluentd not running yet ++ sleep 10 +++ expr 110 - 10 ++ ii=100 ++ '[' start = start ']' +++ get_running_pod fluentd +++ oc get pods -l component=fluentd +++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}' ++ curpod=logging-fluentd-6n0kv ++ '[' 100 -gt 0 ']' ++ '[' start = stop ']' ++ '[' start = start ']' ++ '[' -z logging-fluentd-6n0kv ']' ++ break ++ '[' 100 -le 0 ']' ++ return 0 +++ get_running_pod fluentd +++ oc get pods -l component=fluentd +++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}' ++ fpod=logging-fluentd-6n0kv ++ oc get events -o yaml ++ write_and_verify_logs 1 ++ expected=1 ++ rc=0 ++ wait_for_fluentd_to_catch_up '' '' +++ date +%s ++ local starttime=1496438747 +++ date -u --rfc-3339=ns START wait_for_fluentd_to_catch_up at 2017-06-02 21:25:47.443453894+00:00 ++ echo START wait_for_fluentd_to_catch_up at 2017-06-02 21:25:47.443453894+00:00 +++ get_running_pod es +++ oc get pods -l component=es +++ awk -v sel=es '$1 ~ sel && $3 == "Running" {print $1}' ++ local es_pod=logging-es-data-master-i5jtydma-1-21qpf +++ get_running_pod es-ops +++ oc get pods -l component=es-ops +++ awk -v sel=es-ops '$1 ~ sel && $3 == "Running" {print $1}' ++ local es_ops_pod=logging-es-ops-data-master-tycs4wrj-1-70xts ++ '[' -z logging-es-ops-data-master-tycs4wrj-1-70xts ']' +++ uuidgen ++ local uuid_es=aa2e688a-d063-457f-bd86-6416333cd3c4 +++ uuidgen ++ local uuid_es_ops=f9247e63-f1a7-4515-a0aa-54cdc684bf24 ++ local expected=1 ++ local timeout=300 ++ add_test_message aa2e688a-d063-457f-bd86-6416333cd3c4 +++ get_running_pod kibana +++ oc get pods -l component=kibana +++ awk -v sel=kibana '$1 ~ sel && $3 == "Running" {print $1}' ++ local kib_pod=logging-kibana-1-54cs7 ++ oc exec logging-kibana-1-54cs7 -c kibana -- curl --connect-timeout 1 -s http://localhost:5601/aa2e688a-d063-457f-bd86-6416333cd3c4 ++ echo added es message aa2e688a-d063-457f-bd86-6416333cd3c4 added es message aa2e688a-d063-457f-bd86-6416333cd3c4
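The write-then-verify cycle that starts at this point follows the same overall shape each time it appears in this log. The reconstruction below is based only on the commands visible in the trace; the two optional arguments it is called with ('' ''), the error paths, and the full END message are omitted or simplified, and the purpose noted for add_test_message is an inference:

# wait_for_fluentd_to_catch_up
# Write one uniquely tagged record for the application index and one for the operations
# index, then poll Elasticsearch until each shows up exactly once.
wait_for_fluentd_to_catch_up() {
    local starttime=$(date +%s)
    echo START wait_for_fluentd_to_catch_up at $(date -u --rfc-3339=ns)
    local es_pod=$(get_running_pod es)
    local es_ops_pod=$(get_running_pod es-ops)
    local uuid_es=$(uuidgen)        # marker expected in project.logging
    local uuid_es_ops=$(uuidgen)    # marker expected in .operations
    expected=1                      # globals consumed by test_count_expected
    local timeout=300

    add_test_message $uuid_es                                # app-side record, via the Kibana pod
    logger -i -p local6.info -t $uuid_es_ops $uuid_es_ops    # ops-side record, via the journal

    espod=$es_pod myproject=project.logging myfield=message mymessage=$uuid_es
    wait_until_cmd_or_err test_count_expected test_count_err $timeout

    espod=$es_ops_pod myproject=.operations myfield=systemd.u.SYSLOG_IDENTIFIER mymessage=$uuid_es_ops
    wait_until_cmd_or_err test_count_expected test_count_err $timeout

    echo END wait_for_fluentd_to_catch_up took $(expr $(date +%s) - $starttime) seconds
}

# add_test_message <uuid>
# Requests a URL containing the marker from the Kibana pod, presumably so the marker ends up
# in the container's logs and is picked up by fluentd.
add_test_message() {
    local kib_pod=$(get_running_pod kibana)
    oc exec $kib_pod -c kibana -- curl --connect-timeout 1 -s http://localhost:5601/$1
}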
++ logger -i -p local6.info -t f9247e63-f1a7-4515-a0aa-54cdc684bf24 f9247e63-f1a7-4515-a0aa-54cdc684bf24 added es-ops message f9247e63-f1a7-4515-a0aa-54cdc684bf24 ++ echo added es-ops message f9247e63-f1a7-4515-a0aa-54cdc684bf24 ++ local rc=0 ++ espod=logging-es-data-master-i5jtydma-1-21qpf ++ myproject=project.logging ++ mymessage=aa2e688a-d063-457f-bd86-6416333cd3c4 ++ expected=1 ++ wait_until_cmd_or_err test_count_expected test_count_err 300 ++ let ii=300 ++ local interval=1 ++ '[' 300 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message aa2e688a-d063-457f-bd86-6416333cd3c4 +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:aa2e688a-d063-457f-bd86-6416333cd3c4' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:aa2e688a-d063-457f-bd86-6416333cd3c4' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:aa2e688a-d063-457f-bd86-6416333cd3c4' +++ get_count_from_json +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1
[... the same one-second poll repeats with ii counting down from 299 to 295 while the count for aa2e688a-d063-457f-bd86-6416333cd3c4 in project.logging* stays 0 ...]
++ '[' 294 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message aa2e688a-d063-457f-bd86-6416333cd3c4 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:aa2e688a-d063-457f-bd86-6416333cd3c4' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++
python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local 'endpoint=/project.logging*/_count?q=message:aa2e688a-d063-457f-bd86-6416333cd3c4' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:aa2e688a-d063-457f-bd86-6416333cd3c4' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 293 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message aa2e688a-d063-457f-bd86-6416333cd3c4 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:aa2e688a-d063-457f-bd86-6416333cd3c4' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local 'endpoint=/project.logging*/_count?q=message:aa2e688a-d063-457f-bd86-6416333cd3c4' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:aa2e688a-d063-457f-bd86-6416333cd3c4' ++ local nrecs=1 ++ test 1 = 1 ++ break ++ '[' 293 -le 0 ']' ++ return 0 good - wait_for_fluentd_to_catch_up: found 1 record project logging for aa2e688a-d063-457f-bd86-6416333cd3c4 ++ echo good - wait_for_fluentd_to_catch_up: found 1 record project logging for aa2e688a-d063-457f-bd86-6416333cd3c4 ++ espod=logging-es-ops-data-master-tycs4wrj-1-70xts ++ myproject=.operations ++ mymessage=f9247e63-f1a7-4515-a0aa-54cdc684bf24 ++ expected=1 ++ myfield=systemd.u.SYSLOG_IDENTIFIER ++ wait_until_cmd_or_err test_count_expected test_count_err 300 ++ let ii=300 ++ local interval=1 ++ '[' 300 -gt 0 ']' ++ test_count_expected ++ myfield=systemd.u.SYSLOG_IDENTIFIER +++ query_es_from_es logging-es-ops-data-master-tycs4wrj-1-70xts .operations _count systemd.u.SYSLOG_IDENTIFIER f9247e63-f1a7-4515-a0aa-54cdc684bf24 +++ get_count_from_json +++ curl_es logging-es-ops-data-master-tycs4wrj-1-70xts '/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:f9247e63-f1a7-4515-a0aa-54cdc684bf24' --connect-timeout 1 +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local pod=logging-es-ops-data-master-tycs4wrj-1-70xts +++ local 'endpoint=/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:f9247e63-f1a7-4515-a0aa-54cdc684bf24' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-ops-data-master-tycs4wrj-1-70xts -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:f9247e63-f1a7-4515-a0aa-54cdc684bf24' ++ local nrecs=1 ++ test 1 = 1 ++ break ++ '[' 300 -le 0 ']' ++ return 0 good - wait_for_fluentd_to_catch_up: found 1 record project .operations for f9247e63-f1a7-4515-a0aa-54cdc684bf24 ++ echo good - wait_for_fluentd_to_catch_up: found 1 record project .operations for 
f9247e63-f1a7-4515-a0aa-54cdc684bf24 ++ '[' -n '' ']' ++ '[' -n '' ']' +++ date +%s ++ local endtime=1496438759 +++ expr 1496438759 - 1496438747 +++ date -u --rfc-3339=ns END wait_for_fluentd_to_catch_up took 12 seconds at 2017-06-02 21:25:59.110162025+00:00 ++ echo END wait_for_fluentd_to_catch_up took 12 seconds at 2017-06-02 21:25:59.110162025+00:00 ++ return 0 ++ return 0 ++ cleanup ++ cleanup_forward ++ oc label node --all logging-infra-fluentd- node "172.18.8.225" labeled ++ wait_for_pod_ACTION stop logging-fluentd-6n0kv ++ local ii=120 ++ local incr=10 ++ '[' stop = start ']' ++ curpod=logging-fluentd-6n0kv ++ '[' -z logging-fluentd-6n0kv -a -n '' ']' ++ '[' 120 -gt 0 ']' ++ '[' stop = stop ']' ++ oc describe pod/logging-fluentd-6n0kv ++ '[' -n 1 ']' pod logging-fluentd-6n0kv still running ++ echo pod logging-fluentd-6n0kv still running ++ sleep 10 +++ expr 120 - 10 ++ ii=110 ++ '[' stop = start ']' ++ '[' 110 -gt 0 ']' ++ '[' stop = stop ']' ++ oc describe pod/logging-fluentd-6n0kv ++ '[' -n 1 ']' pod logging-fluentd-6n0kv still running ++ echo pod logging-fluentd-6n0kv still running ++ sleep 10 +++ expr 110 - 10 ++ ii=100 ++ '[' stop = start ']' ++ '[' 100 -gt 0 ']' ++ '[' stop = stop ']' ++ oc describe pod/logging-fluentd-6n0kv ++ '[' stop = start ']' ++ break ++ '[' 100 -le 0 ']' ++ return 0 ++ oc delete daemonset/logging-forward-fluentd Error from server (NotFound): daemonsets.extensions "logging-forward-fluentd" not found ++ : +++ oc get configmap/logging-fluentd -o yaml +++ grep '<match \*\*>' ++ '[' -n '' ']' ++ oc patch configmap/logging-fluentd --type=json --patch '[{ "op": "replace", "path": "/data/secure-forward.conf", "value": "\ # @type secure_forward\n\ # self_hostname forwarding-${HOSTNAME}\n\ # shared_key aggregated_logging_ci_testing\n\ # secure no\n\ # <server>\n\ # host ${FLUENTD_FORWARD}\n\ # port 24284\n\ # </server>"}]' configmap "logging-fluentd" patched ++ oc label node --all logging-infra-fluentd=true node "172.18.8.225" labeled ++ wait_for_pod_ACTION start fluentd ++ local ii=120 ++ local incr=10 ++ '[' start = start ']' +++ get_running_pod fluentd +++ oc get pods -l component=fluentd +++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}' pod for component=fluentd not running yet ++ curpod= ++ '[' 120 -gt 0 ']' ++ '[' start = stop ']' ++ '[' start = start ']' ++ '[' -z '' ']' ++ '[' -n '' ']' ++ '[' -n 1 ']' ++ echo pod for component=fluentd not running yet ++ sleep 10 +++ expr 120 - 10 ++ ii=110 ++ '[' start = start ']' +++ get_running_pod fluentd +++ oc get pods -l component=fluentd +++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}' ++ curpod=logging-fluentd-30qz3 ++ '[' 110 -gt 0 ']' ++ '[' start = stop ']' ++ '[' start = start ']' ++ '[' -z logging-fluentd-30qz3 ']' ++ break ++ '[' 110 -le 0 ']' ++ return 0 +++ get_running_pod fluentd +++ oc get pods -l component=fluentd +++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}' ++ fpod=logging-fluentd-30qz3 ++ oc get events -o yaml running test test-json-parsing.sh ++ set -o nounset ++ set -o pipefail ++ type get_running_pod ++ ARTIFACT_DIR=/tmp/origin-aggregated-logging/artifacts ++ '[' '!' -d /tmp/origin-aggregated-logging/artifacts ']' +++ get_running_pod es +++ oc get pods -l component=es +++ awk -v sel=es '$1 ~ sel && $3 == "Running" {print $1}' Adding test message 6bd1519d-7b5b-465b-bf80-bbeacd28faab to Kibana . . . 
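The pod discovery and the start/stop waits used in the cleanup above come down to two small helpers. As with the earlier sketches, the names and the oc/awk commands are taken straight from the trace, while the control flow and error handling are assumptions:

# get_running_pod <component>
# Prints the name of the Running pod carrying the matching component= label
# (used above for es, es-ops, kibana and fluentd).
get_running_pod() {
    oc get pods -l component=$1 | awk -v sel=$1 '$1 ~ sel && $3 == "Running" {print $1}'
}

# wait_for_pod_ACTION start|stop <component-or-pod>
# Polls every 10 seconds for up to about 120 seconds, until the component's pod is
# running (start) or the named pod is gone (stop).
wait_for_pod_ACTION() {
    local action=$1 what=$2
    local ii=120
    local incr=10
    local curpod
    while [ $ii -gt 0 ]; do
        if [ $action = stop ]; then
            oc describe pod/$what > /dev/null 2>&1 || break
            echo pod $what still running
        else
            curpod=$(get_running_pod $what)
            [ -n "$curpod" ] && break
            echo pod for component=$what not running yet
        fi
        sleep $incr
        ii=$(expr $ii - $incr)
    done
    [ $ii -gt 0 ]    # fails if the budget was exhausted
}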
++ es_pod=logging-es-data-master-i5jtydma-1-21qpf +++ uuidgen ++ uuid_es=6bd1519d-7b5b-465b-bf80-bbeacd28faab ++ echo Adding test message 6bd1519d-7b5b-465b-bf80-bbeacd28faab to Kibana . . . ++ add_test_message 6bd1519d-7b5b-465b-bf80-bbeacd28faab +++ get_running_pod kibana +++ oc get pods -l component=kibana +++ awk -v sel=kibana '$1 ~ sel && $3 == "Running" {print $1}' ++ local kib_pod=logging-kibana-1-54cs7 ++ oc exec logging-kibana-1-54cs7 -c kibana -- curl --connect-timeout 1 -s http://localhost:5601/6bd1519d-7b5b-465b-bf80-bbeacd28faab ++ rc=0 ++ timeout=600 Waiting 600 seconds for 6bd1519d-7b5b-465b-bf80-bbeacd28faab to show up in Elasticsearch . . . ++ echo Waiting 600 seconds for 6bd1519d-7b5b-465b-bf80-bbeacd28faab to show up in Elasticsearch . . . ++ espod=logging-es-data-master-i5jtydma-1-21qpf ++ myproject=project.logging. ++ mymessage=6bd1519d-7b5b-465b-bf80-bbeacd28faab ++ expected=1 ++ wait_until_cmd_or_err test_count_expected test_count_err 600 ++ let ii=600 ++ local interval=1 ++ '[' 600 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging. _count message 6bd1519d-7b5b-465b-bf80-bbeacd28faab +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging.*/_count?q=message:6bd1519d-7b5b-465b-bf80-bbeacd28faab' --connect-timeout 1 +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging.*/_count?q=message:6bd1519d-7b5b-465b-bf80-bbeacd28faab' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging.*/_count?q=message:6bd1519d-7b5b-465b-bf80-bbeacd28faab' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1
[... the same one-second poll repeats with ii counting down from 599 to 593 while the count for 6bd1519d-7b5b-465b-bf80-bbeacd28faab in project.logging.* stays 0 ...]
++ '[' 592 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging.
_count message 6bd1519d-7b5b-465b-bf80-bbeacd28faab +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging.*/_count?q=message:6bd1519d-7b5b-465b-bf80-bbeacd28faab' --connect-timeout 1 +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging.*/_count?q=message:6bd1519d-7b5b-465b-bf80-bbeacd28faab' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging.*/_count?q=message:6bd1519d-7b5b-465b-bf80-bbeacd28faab' ++ local nrecs=1 ++ test 1 = 1 ++ break ++ '[' 592 -le 0 ']' ++ return 0 good - ./logging.sh: found 1 record project logging for 6bd1519d-7b5b-465b-bf80-bbeacd28faab ++ echo good - ./logging.sh: found 1 record project logging for 6bd1519d-7b5b-465b-bf80-bbeacd28faab Testing if record is in correct format . . . ++ echo Testing if record is in correct format . . . ++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging. _search message 6bd1519d-7b5b-465b-bf80-bbeacd28faab ++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging.*/_search?q=message:6bd1519d-7b5b-465b-bf80-bbeacd28faab' --connect-timeout 1 ++ python test-json-parsing.py 6bd1519d-7b5b-465b-bf80-bbeacd28faab ++ local pod=logging-es-data-master-i5jtydma-1-21qpf ++ local 'endpoint=/project.logging.*/_search?q=message:6bd1519d-7b5b-465b-bf80-bbeacd28faab' ++ shift ++ shift ++ args=("${@:-}") ++ local args ++ local secret_dir=/etc/elasticsearch/secret/ ++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging.*/_search?q=message:6bd1519d-7b5b-465b-bf80-bbeacd28faab' Success: record contains all of the expected fields/values Success: ./logging.sh passed ++ echo Success: ./logging.sh passed ++ exit 0 running test test-mux.sh ++ set -o nounset ++ set -o pipefail ++ type get_running_pod ++ '[' false == false -o false == false ']' Skipping -- This test requires both USE_MUX_CLIENT and MUX_ALLOW_EXTERNAL are true. ++ echo 'Skipping -- This test requires both USE_MUX_CLIENT and MUX_ALLOW_EXTERNAL are true.' ++ exit 0 SKIPPING upgrade test for now running test test-viaq-data-model.sh ++ set -o nounset ++ set -o pipefail ++ type get_running_pod ++ [[ 1 -ne 1 ]] ++ [[ true = \f\a\l\s\e ]] ++ CLUSTER=true ++ ops=-ops ++ INDEX_PREFIX= ++ PROJ_PREFIX=project. ++ ARTIFACT_DIR=/tmp/origin-aggregated-logging/artifacts ++ '[' '!' 
-d /tmp/origin-aggregated-logging/artifacts ']' ++ get_test_user_token ++ oc login --username=admin --password=admin +++ oc whoami -t ++ test_token=M06ucLbXQmf1r0rTr11YC9NsgRypaEh3NTq39If9gPg +++ oc whoami ++ test_name=admin ++ test_ip=127.0.0.1 ++ oc login --username=system:admin ++ TEST_DIVIDER=------------------------------------------ +++ get_running_pod fluentd +++ oc get pods -l component=fluentd +++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}' ++ fpod=logging-fluentd-30qz3 ++ remove_test_volume ++ oc get template logging-fluentd-template -o json ++ oc replace -f - ++ python -c 'import json, sys; obj = json.loads(sys.stdin.read()); vm = obj["objects"][0]["spec"]["template"]["spec"]["containers"][0]["volumeMounts"]; obj["objects"][0]["spec"]["template"]["spec"]["containers"][0]["volumeMounts"] = [xx for xx in vm if xx["name"] != "cdmtest"]; vs = obj["objects"][0]["spec"]["template"]["spec"]["volumes"]; obj["objects"][0]["spec"]["template"]["spec"]["volumes"] = [xx for xx in vs if xx["name"] != "cdmtest"]; print json.dumps(obj, indent=2)' template "logging-fluentd-template" replaced +++ mktemp ++ cfg=/tmp/tmp.Y2LYH1Lbjk ++ cat ++ add_test_volume /tmp/tmp.Y2LYH1Lbjk ++ oc get template logging-fluentd-template -o json ++ oc replace -f - ++ python -c 'import json, sys; obj = json.loads(sys.stdin.read()); obj["objects"][0]["spec"]["template"]["spec"]["containers"][0]["volumeMounts"].append({"name": "cdmtest", "mountPath": "/etc/fluent/configs.d/openshift/filter-pre-cdm-test.conf", "readOnly": True}); obj["objects"][0]["spec"]["template"]["spec"]["volumes"].append({"name": "cdmtest", "hostPath": {"path": "/tmp/tmp.Y2LYH1Lbjk"}}); print json.dumps(obj, indent=2)' template "logging-fluentd-template" replaced ++ trap cleanup INT TERM EXIT ++ restart_fluentd ++ oc delete daemonset logging-fluentd daemonset "logging-fluentd" deleted ++ wait_for_pod_ACTION stop logging-fluentd-30qz3 ++ local ii=120 ++ local incr=10 ++ '[' stop = start ']' ++ curpod=logging-fluentd-30qz3 ++ '[' -z logging-fluentd-30qz3 -a -n '' ']' ++ '[' 120 -gt 0 ']' ++ '[' stop = stop ']' ++ oc describe pod/logging-fluentd-30qz3 ++ '[' stop = start ']' ++ break ++ '[' 120 -le 0 ']' ++ return 0 ++ oc process logging-fluentd-template ++ oc create -f - daemonset "logging-fluentd" created ++ wait_for_pod_ACTION start fluentd ++ local ii=120 ++ local incr=10 ++ '[' start = start ']' +++ get_running_pod fluentd +++ oc get pods -l component=fluentd +++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}' ++ curpod= ++ '[' 120 -gt 0 ']' ++ '[' start = stop ']' ++ '[' start = start ']' ++ '[' -z '' ']' ++ '[' -n '' ']' pod for component=fluentd not running yet ++ '[' -n 1 ']' ++ echo pod for component=fluentd not running yet ++ sleep 10 +++ expr 120 - 10 ++ ii=110 ++ '[' start = start ']' +++ get_running_pod fluentd +++ oc get pods -l component=fluentd +++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}' ++ curpod=logging-fluentd-9nvhp ++ '[' 110 -gt 0 ']' ++ '[' start = stop ']' ++ '[' start = start ']' ++ '[' -z logging-fluentd-9nvhp ']' ++ break ++ '[' 110 -le 0 ']' ++ return 0 +++ get_running_pod fluentd +++ oc get pods -l component=fluentd +++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}' ++ fpod=logging-fluentd-9nvhp ++ keep_fields=method,statusCode,type,@timestamp,req,res ++ write_and_verify_logs test1 ++ expected=1 ++ rc=0 ++ wait_for_fluentd_to_catch_up get_logmessage get_logmessage2 +++ date +%s ++ local starttime=1496438825 +++ date -u --rfc-3339=ns START 
wait_for_fluentd_to_catch_up at 2017-06-02 21:27:05.797865404+00:00 ++ echo START wait_for_fluentd_to_catch_up at 2017-06-02 21:27:05.797865404+00:00 +++ get_running_pod es +++ oc get pods -l component=es +++ awk -v sel=es '$1 ~ sel && $3 == "Running" {print $1}' ++ local es_pod=logging-es-data-master-i5jtydma-1-21qpf +++ get_running_pod es-ops +++ oc get pods -l component=es-ops +++ awk -v sel=es-ops '$1 ~ sel && $3 == "Running" {print $1}' ++ local es_ops_pod=logging-es-ops-data-master-tycs4wrj-1-70xts ++ '[' -z logging-es-ops-data-master-tycs4wrj-1-70xts ']' +++ uuidgen ++ local uuid_es=0bc7d44a-2d06-4eb8-8564-6671cd0f48ad +++ uuidgen ++ local uuid_es_ops=609b5ff8-8735-4292-a3d6-45d3f4ed8112 ++ local expected=1 ++ local timeout=300 ++ add_test_message 0bc7d44a-2d06-4eb8-8564-6671cd0f48ad +++ get_running_pod kibana +++ oc get pods -l component=kibana +++ awk -v sel=kibana '$1 ~ sel && $3 == "Running" {print $1}' ++ local kib_pod=logging-kibana-1-54cs7 ++ oc exec logging-kibana-1-54cs7 -c kibana -- curl --connect-timeout 1 -s http://localhost:5601/0bc7d44a-2d06-4eb8-8564-6671cd0f48ad ++ echo added es message 0bc7d44a-2d06-4eb8-8564-6671cd0f48ad added es message 0bc7d44a-2d06-4eb8-8564-6671cd0f48ad ++ logger -i -p local6.info -t 609b5ff8-8735-4292-a3d6-45d3f4ed8112 609b5ff8-8735-4292-a3d6-45d3f4ed8112 added es-ops message 609b5ff8-8735-4292-a3d6-45d3f4ed8112 ++ echo added es-ops message 609b5ff8-8735-4292-a3d6-45d3f4ed8112 ++ local rc=0 ++ espod=logging-es-data-master-i5jtydma-1-21qpf ++ myproject=project.logging ++ mymessage=0bc7d44a-2d06-4eb8-8564-6671cd0f48ad ++ expected=1 ++ wait_until_cmd_or_err test_count_expected test_count_err 300 ++ let ii=300 ++ local interval=1 ++ '[' 300 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message 0bc7d44a-2d06-4eb8-8564-6671cd0f48ad +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:0bc7d44a-2d06-4eb8-8564-6671cd0f48ad' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:0bc7d44a-2d06-4eb8-8564-6671cd0f48ad' +++ shift +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:0bc7d44a-2d06-4eb8-8564-6671cd0f48ad' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 299 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message 0bc7d44a-2d06-4eb8-8564-6671cd0f48ad +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:0bc7d44a-2d06-4eb8-8564-6671cd0f48ad' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:0bc7d44a-2d06-4eb8-8564-6671cd0f48ad' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key 
/etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:0bc7d44a-2d06-4eb8-8564-6671cd0f48ad' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 298 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message 0bc7d44a-2d06-4eb8-8564-6671cd0f48ad +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:0bc7d44a-2d06-4eb8-8564-6671cd0f48ad' --connect-timeout 1 +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:0bc7d44a-2d06-4eb8-8564-6671cd0f48ad' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:0bc7d44a-2d06-4eb8-8564-6671cd0f48ad' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 297 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message 0bc7d44a-2d06-4eb8-8564-6671cd0f48ad +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:0bc7d44a-2d06-4eb8-8564-6671cd0f48ad' --connect-timeout 1 +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:0bc7d44a-2d06-4eb8-8564-6671cd0f48ad' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:0bc7d44a-2d06-4eb8-8564-6671cd0f48ad' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 296 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message 0bc7d44a-2d06-4eb8-8564-6671cd0f48ad +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:0bc7d44a-2d06-4eb8-8564-6671cd0f48ad' --connect-timeout 1 +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:0bc7d44a-2d06-4eb8-8564-6671cd0f48ad' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:0bc7d44a-2d06-4eb8-8564-6671cd0f48ad' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 295 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message 0bc7d44a-2d06-4eb8-8564-6671cd0f48ad +++ 
get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:0bc7d44a-2d06-4eb8-8564-6671cd0f48ad' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local 'endpoint=/project.logging*/_count?q=message:0bc7d44a-2d06-4eb8-8564-6671cd0f48ad' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:0bc7d44a-2d06-4eb8-8564-6671cd0f48ad' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 294 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message 0bc7d44a-2d06-4eb8-8564-6671cd0f48ad +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:0bc7d44a-2d06-4eb8-8564-6671cd0f48ad' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local 'endpoint=/project.logging*/_count?q=message:0bc7d44a-2d06-4eb8-8564-6671cd0f48ad' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:0bc7d44a-2d06-4eb8-8564-6671cd0f48ad' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 293 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message 0bc7d44a-2d06-4eb8-8564-6671cd0f48ad +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:0bc7d44a-2d06-4eb8-8564-6671cd0f48ad' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local 'endpoint=/project.logging*/_count?q=message:0bc7d44a-2d06-4eb8-8564-6671cd0f48ad' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:0bc7d44a-2d06-4eb8-8564-6671cd0f48ad' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 292 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message 0bc7d44a-2d06-4eb8-8564-6671cd0f48ad +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:0bc7d44a-2d06-4eb8-8564-6671cd0f48ad' --connect-timeout 1 +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:0bc7d44a-2d06-4eb8-8564-6671cd0f48ad' +++ shift +++ shift +++ args=("${@:-}") +++ 
local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:0bc7d44a-2d06-4eb8-8564-6671cd0f48ad' ++ local nrecs=1 ++ test 1 = 1 ++ break ++ '[' 292 -le 0 ']' ++ return 0 good - wait_for_fluentd_to_catch_up: found 1 record project logging for 0bc7d44a-2d06-4eb8-8564-6671cd0f48ad ++ echo good - wait_for_fluentd_to_catch_up: found 1 record project logging for 0bc7d44a-2d06-4eb8-8564-6671cd0f48ad ++ espod=logging-es-ops-data-master-tycs4wrj-1-70xts ++ myproject=.operations ++ mymessage=609b5ff8-8735-4292-a3d6-45d3f4ed8112 ++ expected=1 ++ myfield=systemd.u.SYSLOG_IDENTIFIER ++ wait_until_cmd_or_err test_count_expected test_count_err 300 ++ let ii=300 ++ local interval=1 ++ '[' 300 -gt 0 ']' ++ test_count_expected ++ myfield=systemd.u.SYSLOG_IDENTIFIER +++ query_es_from_es logging-es-ops-data-master-tycs4wrj-1-70xts .operations _count systemd.u.SYSLOG_IDENTIFIER 609b5ff8-8735-4292-a3d6-45d3f4ed8112 +++ get_count_from_json +++ curl_es logging-es-ops-data-master-tycs4wrj-1-70xts '/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:609b5ff8-8735-4292-a3d6-45d3f4ed8112' --connect-timeout 1 +++ local pod=logging-es-ops-data-master-tycs4wrj-1-70xts +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local 'endpoint=/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:609b5ff8-8735-4292-a3d6-45d3f4ed8112' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-ops-data-master-tycs4wrj-1-70xts -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:609b5ff8-8735-4292-a3d6-45d3f4ed8112' ++ local nrecs=1 ++ test 1 = 1 ++ break ++ '[' 300 -le 0 ']' ++ return 0 good - wait_for_fluentd_to_catch_up: found 1 record project .operations for 609b5ff8-8735-4292-a3d6-45d3f4ed8112 ++ echo good - wait_for_fluentd_to_catch_up: found 1 record project .operations for 609b5ff8-8735-4292-a3d6-45d3f4ed8112 ++ '[' -n get_logmessage ']' ++ get_logmessage 0bc7d44a-2d06-4eb8-8564-6671cd0f48ad ++ logmessage=0bc7d44a-2d06-4eb8-8564-6671cd0f48ad ++ '[' -n get_logmessage2 ']' ++ get_logmessage2 609b5ff8-8735-4292-a3d6-45d3f4ed8112 ++ logmessage2=609b5ff8-8735-4292-a3d6-45d3f4ed8112 +++ date +%s ++ local endtime=1496438838 +++ expr 1496438838 - 1496438825 +++ date -u --rfc-3339=ns END wait_for_fluentd_to_catch_up took 13 seconds at 2017-06-02 21:27:18.651049356+00:00 ++ echo END wait_for_fluentd_to_catch_up took 13 seconds at 2017-06-02 21:27:18.651049356+00:00 ++ return 0 +++ get_running_pod kibana +++ oc get pods -l component=kibana +++ awk -v sel=kibana '$1 ~ sel && $3 == "Running" {print $1}' ++ kpod=logging-kibana-1-54cs7 ++ '[' 0 = 0 ']' ++ curl_es_from_kibana logging-kibana-1-54cs7 logging-es project.logging _search message 0bc7d44a-2d06-4eb8-8564-6671cd0f48ad ++ python test-viaq-data-model.py test1 ++ oc exec logging-kibana-1-54cs7 -c kibana -- curl --connect-timeout 1 -s -k --cert /etc/kibana/keys/cert --key /etc/kibana/keys/key -H 'X-Proxy-Remote-User: admin' -H 'Authorization: Bearer M06ucLbXQmf1r0rTr11YC9NsgRypaEh3NTq39If9gPg' -H 'X-Forwarded-For: 127.0.0.1' 
'https://logging-es:9200/project.logging*/_search?q=message:0bc7d44a-2d06-4eb8-8564-6671cd0f48ad' ++ : ++ '[' 0 = 0 ']' ++ curl_es_from_kibana logging-kibana-1-54cs7 logging-es-ops .operations _search message 609b5ff8-8735-4292-a3d6-45d3f4ed8112 ++ python test-viaq-data-model.py test1 ++ oc exec logging-kibana-1-54cs7 -c kibana -- curl --connect-timeout 1 -s -k --cert /etc/kibana/keys/cert --key /etc/kibana/keys/key -H 'X-Proxy-Remote-User: admin' -H 'Authorization: Bearer M06ucLbXQmf1r0rTr11YC9NsgRypaEh3NTq39If9gPg' -H 'X-Forwarded-For: 127.0.0.1' 'https://logging-es-ops:9200/.operations*/_search?q=message:609b5ff8-8735-4292-a3d6-45d3f4ed8112' ++ : ++ '[' 0 '!=' 0 ']' ++ return 0 ++ add_cdm_env_var_val CDM_USE_UNDEFINED '"true"' +++ mktemp ++ junk=/tmp/tmp.THsMbzPNum ++ cat ++ oc get template logging-fluentd-template -o yaml ++ sed '/env:/r /tmp/tmp.THsMbzPNum' ++ oc replace -f - template "logging-fluentd-template" replaced ++ rm -f /tmp/tmp.THsMbzPNum ++ add_cdm_env_var_val CDM_EXTRA_KEEP_FIELDS method,statusCode,type,@timestamp,req,res +++ mktemp ++ junk=/tmp/tmp.EDDAYNBuT0 ++ cat ++ oc get template logging-fluentd-template -o yaml ++ oc replace -f - ++ sed '/env:/r /tmp/tmp.EDDAYNBuT0' template "logging-fluentd-template" replaced ++ rm -f /tmp/tmp.EDDAYNBuT0 ++ restart_fluentd ++ oc delete daemonset logging-fluentd daemonset "logging-fluentd" deleted ++ wait_for_pod_ACTION stop logging-fluentd-9nvhp ++ local ii=120 ++ local incr=10 ++ '[' stop = start ']' ++ curpod=logging-fluentd-9nvhp ++ '[' -z logging-fluentd-9nvhp -a -n '' ']' ++ '[' 120 -gt 0 ']' ++ '[' stop = stop ']' ++ oc describe pod/logging-fluentd-9nvhp ++ '[' stop = start ']' ++ break ++ '[' 120 -le 0 ']' ++ return 0 ++ oc process logging-fluentd-template ++ oc create -f - daemonset "logging-fluentd" created ++ wait_for_pod_ACTION start fluentd ++ local ii=120 ++ local incr=10 ++ '[' start = start ']' +++ get_running_pod fluentd +++ oc get pods -l component=fluentd +++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}' pod for component=fluentd not running yet ++ curpod= ++ '[' 120 -gt 0 ']' ++ '[' start = stop ']' ++ '[' start = start ']' ++ '[' -z '' ']' ++ '[' -n '' ']' ++ '[' -n 1 ']' ++ echo pod for component=fluentd not running yet ++ sleep 10 +++ expr 120 - 10 ++ ii=110 ++ '[' start = start ']' +++ get_running_pod fluentd +++ oc get pods -l component=fluentd +++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}' ++ curpod=logging-fluentd-tr8c5 ++ '[' 110 -gt 0 ']' ++ '[' start = stop ']' ++ '[' start = start ']' ++ '[' -z logging-fluentd-tr8c5 ']' ++ break ++ '[' 110 -le 0 ']' ++ return 0 +++ get_running_pod fluentd +++ oc get pods -l component=fluentd +++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}' ++ fpod=logging-fluentd-tr8c5 ++ write_and_verify_logs test2 ++ expected=1 ++ rc=0 ++ wait_for_fluentd_to_catch_up get_logmessage get_logmessage2 +++ date +%s ++ local starttime=1496438866 +++ date -u --rfc-3339=ns START wait_for_fluentd_to_catch_up at 2017-06-02 21:27:46.139183619+00:00 ++ echo START wait_for_fluentd_to_catch_up at 2017-06-02 21:27:46.139183619+00:00 +++ get_running_pod es +++ oc get pods -l component=es +++ awk -v sel=es '$1 ~ sel && $3 == "Running" {print $1}' ++ local es_pod=logging-es-data-master-i5jtydma-1-21qpf +++ get_running_pod es-ops +++ oc get pods -l component=es-ops +++ awk -v sel=es-ops '$1 ~ sel && $3 == "Running" {print $1}' ++ local es_ops_pod=logging-es-ops-data-master-tycs4wrj-1-70xts ++ '[' -z logging-es-ops-data-master-tycs4wrj-1-70xts ']'
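add_cdm_env_var_val and restart_fluentd, traced just above, are how the test flips fluentd's CDM_* settings between cases: a new env entry is spliced into the logging-fluentd template right after its env: line, the template is replaced, and the daemonset is deleted and re-created so a fresh pod picks up the value. A rough equivalent is sketched below; the YAML written to the temp file is not visible in the trace, so its exact shape and indentation here are an assumption that has to match the template's env list.

name=CDM_EXTRA_KEEP_FIELDS                        # variable and value used in this run
value=method,statusCode,type,@timestamp,req,res

junk=$(mktemp)
cat > "$junk" <<EOF
        - name: ${name}
          value: "${value}"
EOF
# splice the entry in after the "env:" line of the fluentd template, then replace the template
oc get template logging-fluentd-template -o yaml | sed "/env:/r ${junk}" | oc replace -f -
rm -f "$junk"

# restart_fluentd: drop the daemonset and re-create it from the updated template;
# the suite additionally waits for the old pod to terminate and for a new pod to be Running
oc delete daemonset logging-fluentd
oc process logging-fluentd-template | oc create -f -

# get_running_pod, as used by the wait loops in the trace above
oc get pods -l component=fluentd | awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}'
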
+++ uuidgen ++ local uuid_es=d8d12809-1721-4b10-bf1b-4d93921b63f4 +++ uuidgen ++ local uuid_es_ops=9203c199-5347-4f79-b503-79289825a37f ++ local expected=1 ++ local timeout=300 ++ add_test_message d8d12809-1721-4b10-bf1b-4d93921b63f4 +++ get_running_pod kibana +++ oc get pods -l component=kibana +++ awk -v sel=kibana '$1 ~ sel && $3 == "Running" {print $1}' ++ local kib_pod=logging-kibana-1-54cs7 ++ oc exec logging-kibana-1-54cs7 -c kibana -- curl --connect-timeout 1 -s http://localhost:5601/d8d12809-1721-4b10-bf1b-4d93921b63f4 ++ echo added es message d8d12809-1721-4b10-bf1b-4d93921b63f4 added es message d8d12809-1721-4b10-bf1b-4d93921b63f4 ++ logger -i -p local6.info -t 9203c199-5347-4f79-b503-79289825a37f 9203c199-5347-4f79-b503-79289825a37f added es-ops message 9203c199-5347-4f79-b503-79289825a37f ++ echo added es-ops message 9203c199-5347-4f79-b503-79289825a37f ++ local rc=0 ++ espod=logging-es-data-master-i5jtydma-1-21qpf ++ myproject=project.logging ++ mymessage=d8d12809-1721-4b10-bf1b-4d93921b63f4 ++ expected=1 ++ wait_until_cmd_or_err test_count_expected test_count_err 300 ++ let ii=300 ++ local interval=1 ++ '[' 300 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message d8d12809-1721-4b10-bf1b-4d93921b63f4 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:d8d12809-1721-4b10-bf1b-4d93921b63f4' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local 'endpoint=/project.logging*/_count?q=message:d8d12809-1721-4b10-bf1b-4d93921b63f4' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:d8d12809-1721-4b10-bf1b-4d93921b63f4' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 299 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message d8d12809-1721-4b10-bf1b-4d93921b63f4 +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:d8d12809-1721-4b10-bf1b-4d93921b63f4' --connect-timeout 1 +++ get_count_from_json +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:d8d12809-1721-4b10-bf1b-4d93921b63f4' +++ shift +++ shift +++ args=("${@:-}") +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:d8d12809-1721-4b10-bf1b-4d93921b63f4' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 298 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message d8d12809-1721-4b10-bf1b-4d93921b63f4 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:d8d12809-1721-4b10-bf1b-4d93921b63f4'
--connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local 'endpoint=/project.logging*/_count?q=message:d8d12809-1721-4b10-bf1b-4d93921b63f4' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:d8d12809-1721-4b10-bf1b-4d93921b63f4' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 297 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message d8d12809-1721-4b10-bf1b-4d93921b63f4 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:d8d12809-1721-4b10-bf1b-4d93921b63f4' --connect-timeout 1 +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:d8d12809-1721-4b10-bf1b-4d93921b63f4' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:d8d12809-1721-4b10-bf1b-4d93921b63f4' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 296 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message d8d12809-1721-4b10-bf1b-4d93921b63f4 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:d8d12809-1721-4b10-bf1b-4d93921b63f4' --connect-timeout 1 +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:d8d12809-1721-4b10-bf1b-4d93921b63f4' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:d8d12809-1721-4b10-bf1b-4d93921b63f4' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 295 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message d8d12809-1721-4b10-bf1b-4d93921b63f4 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:d8d12809-1721-4b10-bf1b-4d93921b63f4' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:d8d12809-1721-4b10-bf1b-4d93921b63f4' +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure 
--connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:d8d12809-1721-4b10-bf1b-4d93921b63f4' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 294 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message d8d12809-1721-4b10-bf1b-4d93921b63f4 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:d8d12809-1721-4b10-bf1b-4d93921b63f4' --connect-timeout 1 +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:d8d12809-1721-4b10-bf1b-4d93921b63f4' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:d8d12809-1721-4b10-bf1b-4d93921b63f4' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 293 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message d8d12809-1721-4b10-bf1b-4d93921b63f4 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:d8d12809-1721-4b10-bf1b-4d93921b63f4' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local 'endpoint=/project.logging*/_count?q=message:d8d12809-1721-4b10-bf1b-4d93921b63f4' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:d8d12809-1721-4b10-bf1b-4d93921b63f4' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 292 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message d8d12809-1721-4b10-bf1b-4d93921b63f4 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:d8d12809-1721-4b10-bf1b-4d93921b63f4' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local 'endpoint=/project.logging*/_count?q=message:d8d12809-1721-4b10-bf1b-4d93921b63f4' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:d8d12809-1721-4b10-bf1b-4d93921b63f4' ++ local nrecs=1 ++ test 1 = 1 ++ break ++ '[' 292 -le 0 ']' ++ return 0 good - wait_for_fluentd_to_catch_up: found 1 record project logging for d8d12809-1721-4b10-bf1b-4d93921b63f4 ++ echo good - wait_for_fluentd_to_catch_up: found 1 record 
project logging for d8d12809-1721-4b10-bf1b-4d93921b63f4 ++ espod=logging-es-ops-data-master-tycs4wrj-1-70xts ++ myproject=.operations ++ mymessage=9203c199-5347-4f79-b503-79289825a37f ++ expected=1 ++ myfield=systemd.u.SYSLOG_IDENTIFIER ++ wait_until_cmd_or_err test_count_expected test_count_err 300 ++ let ii=300 ++ local interval=1 ++ '[' 300 -gt 0 ']' ++ test_count_expected ++ myfield=systemd.u.SYSLOG_IDENTIFIER +++ query_es_from_es logging-es-ops-data-master-tycs4wrj-1-70xts .operations _count systemd.u.SYSLOG_IDENTIFIER 9203c199-5347-4f79-b503-79289825a37f +++ get_count_from_json +++ curl_es logging-es-ops-data-master-tycs4wrj-1-70xts '/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:9203c199-5347-4f79-b503-79289825a37f' --connect-timeout 1 +++ local pod=logging-es-ops-data-master-tycs4wrj-1-70xts +++ local 'endpoint=/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:9203c199-5347-4f79-b503-79289825a37f' +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-ops-data-master-tycs4wrj-1-70xts -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:9203c199-5347-4f79-b503-79289825a37f' ++ local nrecs=1 ++ test 1 = 1 ++ break ++ '[' 300 -le 0 ']' ++ return 0 good - wait_for_fluentd_to_catch_up: found 1 record project .operations for 9203c199-5347-4f79-b503-79289825a37f ++ echo good - wait_for_fluentd_to_catch_up: found 1 record project .operations for 9203c199-5347-4f79-b503-79289825a37f ++ '[' -n get_logmessage ']' ++ get_logmessage d8d12809-1721-4b10-bf1b-4d93921b63f4 ++ logmessage=d8d12809-1721-4b10-bf1b-4d93921b63f4 ++ '[' -n get_logmessage2 ']' ++ get_logmessage2 9203c199-5347-4f79-b503-79289825a37f ++ logmessage2=9203c199-5347-4f79-b503-79289825a37f +++ date +%s ++ local endtime=1496438879 +++ expr 1496438879 - 1496438866 +++ date -u --rfc-3339=ns ++ echo END wait_for_fluentd_to_catch_up took 13 seconds at 2017-06-02 21:27:59.133590718+00:00 END wait_for_fluentd_to_catch_up took 13 seconds at 2017-06-02 21:27:59.133590718+00:00 ++ return 0 +++ get_running_pod kibana +++ oc get pods -l component=kibana +++ awk -v sel=kibana '$1 ~ sel && $3 == "Running" {print $1}' ++ kpod=logging-kibana-1-54cs7 ++ '[' 0 = 0 ']' ++ curl_es_from_kibana logging-kibana-1-54cs7 logging-es project.logging _search message d8d12809-1721-4b10-bf1b-4d93921b63f4 ++ python test-viaq-data-model.py test2 ++ oc exec logging-kibana-1-54cs7 -c kibana -- curl --connect-timeout 1 -s -k --cert /etc/kibana/keys/cert --key /etc/kibana/keys/key -H 'X-Proxy-Remote-User: admin' -H 'Authorization: Bearer M06ucLbXQmf1r0rTr11YC9NsgRypaEh3NTq39If9gPg' -H 'X-Forwarded-For: 127.0.0.1' 'https://logging-es:9200/project.logging*/_search?q=message:d8d12809-1721-4b10-bf1b-4d93921b63f4' ++ : ++ '[' 0 = 0 ']' ++ curl_es_from_kibana logging-kibana-1-54cs7 logging-es-ops .operations _search message 9203c199-5347-4f79-b503-79289825a37f ++ oc exec logging-kibana-1-54cs7 -c kibana -- curl --connect-timeout 1 -s -k --cert /etc/kibana/keys/cert --key /etc/kibana/keys/key -H 'X-Proxy-Remote-User: admin' -H 'Authorization: Bearer M06ucLbXQmf1r0rTr11YC9NsgRypaEh3NTq39If9gPg' -H 'X-Forwarded-For: 127.0.0.1' 'https://logging-es-ops:9200/.operations*/_search?q=message:9203c199-5347-4f79-b503-79289825a37f' ++ python test-viaq-data-model.py test2 
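The curl_es_from_kibana checks above re-read the records from inside the Kibana container, going to logging-es / logging-es-ops with Kibana's client certificate plus the test user's bearer token, and pipe the hits into test-viaq-data-model.py to validate the field layout. Stripped down to one query, and reusing this run's pod name, user and token source, that check looks roughly like the following sketch:

kpod=logging-kibana-1-54cs7          # Kibana pod from this run
token=$(oc whoami -t)                # bearer token of the logged-in test user (admin above)
uuid=d8d12809-1721-4b10-bf1b-4d93921b63f4

# search through the Kibana container so the request exercises the same auth path Kibana itself uses
oc exec "$kpod" -c kibana -- curl --connect-timeout 1 -s -k \
    --cert /etc/kibana/keys/cert --key /etc/kibana/keys/key \
    -H "X-Proxy-Remote-User: admin" \
    -H "Authorization: Bearer $token" \
    -H "X-Forwarded-For: 127.0.0.1" \
    "https://logging-es:9200/project.logging*/_search?q=message:$uuid"
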
++ : ++ '[' 0 '!=' 0 ']' ++ return 0 ++ del_cdm_env_var CDM_EXTRA_KEEP_FIELDS ++ oc get template logging-fluentd-template -o yaml ++ sed '/- name: CDM_EXTRA_KEEP_FIELDS$/,/value:/d' ++ oc replace -f - template "logging-fluentd-template" replaced ++ add_cdm_env_var_val CDM_EXTRA_KEEP_FIELDS undefined4,undefined5,method,statusCode,type,@timestamp,req,res +++ mktemp ++ junk=/tmp/tmp.HveLGO6sEV ++ cat ++ oc get template logging-fluentd-template -o yaml ++ sed '/env:/r /tmp/tmp.HveLGO6sEV' ++ oc replace -f - template "logging-fluentd-template" replaced ++ rm -f /tmp/tmp.HveLGO6sEV ++ restart_fluentd ++ oc delete daemonset logging-fluentd daemonset "logging-fluentd" deleted ++ wait_for_pod_ACTION stop logging-fluentd-tr8c5 ++ local ii=120 ++ local incr=10 ++ '[' stop = start ']' ++ curpod=logging-fluentd-tr8c5 ++ '[' -z logging-fluentd-tr8c5 -a -n '' ']' ++ '[' 120 -gt 0 ']' ++ '[' stop = stop ']' ++ oc describe pod/logging-fluentd-tr8c5 ++ '[' stop = start ']' ++ break ++ '[' 120 -le 0 ']' ++ return 0 ++ oc process logging-fluentd-template ++ oc create -f - daemonset "logging-fluentd" created ++ wait_for_pod_ACTION start fluentd ++ local ii=120 ++ local incr=10 ++ '[' start = start ']' +++ get_running_pod fluentd +++ oc get pods -l component=fluentd +++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}' pod for component=fluentd not running yet ++ curpod= ++ '[' 120 -gt 0 ']' ++ '[' start = stop ']' ++ '[' start = start ']' ++ '[' -z '' ']' ++ '[' -n '' ']' ++ '[' -n 1 ']' ++ echo pod for component=fluentd not running yet ++ sleep 10 +++ expr 120 - 10 ++ ii=110 ++ '[' start = start ']' +++ get_running_pod fluentd +++ oc get pods -l component=fluentd +++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}' ++ curpod=logging-fluentd-4gn2k ++ '[' 110 -gt 0 ']' ++ '[' start = stop ']' ++ '[' start = start ']' ++ '[' -z logging-fluentd-4gn2k ']' ++ break ++ '[' 110 -le 0 ']' ++ return 0 +++ get_running_pod fluentd +++ oc get pods -l component=fluentd +++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}' ++ fpod=logging-fluentd-4gn2k ++ write_and_verify_logs test3 ++ expected=1 ++ rc=0 ++ wait_for_fluentd_to_catch_up get_logmessage get_logmessage2 +++ date +%s ++ local starttime=1496438906 +++ date -u --rfc-3339=ns START wait_for_fluentd_to_catch_up at 2017-06-02 21:28:26.167129417+00:00 ++ echo START wait_for_fluentd_to_catch_up at 2017-06-02 21:28:26.167129417+00:00 +++ get_running_pod es +++ oc get pods -l component=es +++ awk -v sel=es '$1 ~ sel && $3 == "Running" {print $1}' ++ local es_pod=logging-es-data-master-i5jtydma-1-21qpf +++ get_running_pod es-ops +++ oc get pods -l component=es-ops +++ awk -v sel=es-ops '$1 ~ sel && $3 == "Running" {print $1}' ++ local es_ops_pod=logging-es-ops-data-master-tycs4wrj-1-70xts ++ '[' -z logging-es-ops-data-master-tycs4wrj-1-70xts ']' +++ uuidgen ++ local uuid_es=20dc649f-fd8c-4337-8e51-96293e418695 +++ uuidgen ++ local uuid_es_ops=eb0713dc-21ed-4841-9983-17ed6b91e840 ++ local expected=1 ++ local timeout=300 ++ add_test_message 20dc649f-fd8c-4337-8e51-96293e418695 +++ get_running_pod kibana +++ oc get pods -l component=kibana +++ awk -v sel=kibana '$1 ~ sel && $3 == "Running" {print $1}' ++ local kib_pod=logging-kibana-1-54cs7 ++ oc exec logging-kibana-1-54cs7 -c kibana -- curl --connect-timeout 1 -s http://localhost:5601/20dc649f-fd8c-4337-8e51-96293e418695 added es message 20dc649f-fd8c-4337-8e51-96293e418695 ++ echo added es message 20dc649f-fd8c-4337-8e51-96293e418695 ++ logger -i -p local6.info -t 
eb0713dc-21ed-4841-9983-17ed6b91e840 eb0713dc-21ed-4841-9983-17ed6b91e840 added es-ops message eb0713dc-21ed-4841-9983-17ed6b91e840 ++ echo added es-ops message eb0713dc-21ed-4841-9983-17ed6b91e840 ++ local rc=0 ++ espod=logging-es-data-master-i5jtydma-1-21qpf ++ myproject=project.logging ++ mymessage=20dc649f-fd8c-4337-8e51-96293e418695 ++ expected=1 ++ wait_until_cmd_or_err test_count_expected test_count_err 300 ++ let ii=300 ++ local interval=1 ++ '[' 300 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message 20dc649f-fd8c-4337-8e51-96293e418695 +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:20dc649f-fd8c-4337-8e51-96293e418695' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:20dc649f-fd8c-4337-8e51-96293e418695' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:20dc649f-fd8c-4337-8e51-96293e418695' +++ get_count_from_json +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 299 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message 20dc649f-fd8c-4337-8e51-96293e418695 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:20dc649f-fd8c-4337-8e51-96293e418695' --connect-timeout 1 +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:20dc649f-fd8c-4337-8e51-96293e418695' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:20dc649f-fd8c-4337-8e51-96293e418695' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 298 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ get_count_from_json +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message 20dc649f-fd8c-4337-8e51-96293e418695 +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:20dc649f-fd8c-4337-8e51-96293e418695' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:20dc649f-fd8c-4337-8e51-96293e418695' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:20dc649f-fd8c-4337-8e51-96293e418695' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let 
ii=ii-1 ++ '[' 297 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message 20dc649f-fd8c-4337-8e51-96293e418695 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:20dc649f-fd8c-4337-8e51-96293e418695' --connect-timeout 1 +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:20dc649f-fd8c-4337-8e51-96293e418695' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:20dc649f-fd8c-4337-8e51-96293e418695' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 296 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message 20dc649f-fd8c-4337-8e51-96293e418695 +++ get_count_from_json +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:20dc649f-fd8c-4337-8e51-96293e418695' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:20dc649f-fd8c-4337-8e51-96293e418695' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:20dc649f-fd8c-4337-8e51-96293e418695' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 295 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message 20dc649f-fd8c-4337-8e51-96293e418695 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:20dc649f-fd8c-4337-8e51-96293e418695' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:20dc649f-fd8c-4337-8e51-96293e418695' +++ shift +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:20dc649f-fd8c-4337-8e51-96293e418695' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 294 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message 20dc649f-fd8c-4337-8e51-96293e418695 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:20dc649f-fd8c-4337-8e51-96293e418695' --connect-timeout 1 +++ python -c 'import json, sys; print 
json.loads(sys.stdin.read()).get("count", 0)' +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:20dc649f-fd8c-4337-8e51-96293e418695' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:20dc649f-fd8c-4337-8e51-96293e418695' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 293 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message 20dc649f-fd8c-4337-8e51-96293e418695 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:20dc649f-fd8c-4337-8e51-96293e418695' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:20dc649f-fd8c-4337-8e51-96293e418695' +++ shift +++ shift +++ args=("${@:-}") +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:20dc649f-fd8c-4337-8e51-96293e418695' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 292 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message 20dc649f-fd8c-4337-8e51-96293e418695 +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:20dc649f-fd8c-4337-8e51-96293e418695' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:20dc649f-fd8c-4337-8e51-96293e418695' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:20dc649f-fd8c-4337-8e51-96293e418695' +++ get_count_from_json +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' ++ local nrecs=1 ++ test 1 = 1 ++ break ++ '[' 292 -le 0 ']' ++ return 0 good - wait_for_fluentd_to_catch_up: found 1 record project logging for 20dc649f-fd8c-4337-8e51-96293e418695 ++ echo good - wait_for_fluentd_to_catch_up: found 1 record project logging for 20dc649f-fd8c-4337-8e51-96293e418695 ++ espod=logging-es-ops-data-master-tycs4wrj-1-70xts ++ myproject=.operations ++ mymessage=eb0713dc-21ed-4841-9983-17ed6b91e840 ++ expected=1 ++ myfield=systemd.u.SYSLOG_IDENTIFIER ++ wait_until_cmd_or_err test_count_expected test_count_err 300 ++ let ii=300 ++ local interval=1 ++ '[' 300 -gt 0 ']' ++ test_count_expected ++ myfield=systemd.u.SYSLOG_IDENTIFIER +++ query_es_from_es logging-es-ops-data-master-tycs4wrj-1-70xts .operations _count systemd.u.SYSLOG_IDENTIFIER eb0713dc-21ed-4841-9983-17ed6b91e840 +++ get_count_from_json +++ curl_es logging-es-ops-data-master-tycs4wrj-1-70xts 
'/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:eb0713dc-21ed-4841-9983-17ed6b91e840' --connect-timeout 1 +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local pod=logging-es-ops-data-master-tycs4wrj-1-70xts +++ local 'endpoint=/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:eb0713dc-21ed-4841-9983-17ed6b91e840' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-ops-data-master-tycs4wrj-1-70xts -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:eb0713dc-21ed-4841-9983-17ed6b91e840' ++ local nrecs=1 ++ test 1 = 1 ++ break ++ '[' 300 -le 0 ']' ++ return 0 ++ echo good - wait_for_fluentd_to_catch_up: found 1 record project .operations for eb0713dc-21ed-4841-9983-17ed6b91e840 good - wait_for_fluentd_to_catch_up: found 1 record project .operations for eb0713dc-21ed-4841-9983-17ed6b91e840 ++ '[' -n get_logmessage ']' ++ get_logmessage 20dc649f-fd8c-4337-8e51-96293e418695 ++ logmessage=20dc649f-fd8c-4337-8e51-96293e418695 ++ '[' -n get_logmessage2 ']' ++ get_logmessage2 eb0713dc-21ed-4841-9983-17ed6b91e840 ++ logmessage2=eb0713dc-21ed-4841-9983-17ed6b91e840 +++ date +%s ++ local endtime=1496438919 +++ expr 1496438919 - 1496438906 +++ date -u --rfc-3339=ns END wait_for_fluentd_to_catch_up took 13 seconds at 2017-06-02 21:28:39.188484943+00:00 ++ echo END wait_for_fluentd_to_catch_up took 13 seconds at 2017-06-02 21:28:39.188484943+00:00 ++ return 0 +++ get_running_pod kibana +++ oc get pods -l component=kibana +++ awk -v sel=kibana '$1 ~ sel && $3 == "Running" {print $1}' ++ kpod=logging-kibana-1-54cs7 ++ '[' 0 = 0 ']' ++ curl_es_from_kibana logging-kibana-1-54cs7 logging-es project.logging _search message 20dc649f-fd8c-4337-8e51-96293e418695 ++ python test-viaq-data-model.py test3 ++ oc exec logging-kibana-1-54cs7 -c kibana -- curl --connect-timeout 1 -s -k --cert /etc/kibana/keys/cert --key /etc/kibana/keys/key -H 'X-Proxy-Remote-User: admin' -H 'Authorization: Bearer M06ucLbXQmf1r0rTr11YC9NsgRypaEh3NTq39If9gPg' -H 'X-Forwarded-For: 127.0.0.1' 'https://logging-es:9200/project.logging*/_search?q=message:20dc649f-fd8c-4337-8e51-96293e418695' ++ : ++ '[' 0 = 0 ']' ++ curl_es_from_kibana logging-kibana-1-54cs7 logging-es-ops .operations _search message eb0713dc-21ed-4841-9983-17ed6b91e840 ++ python test-viaq-data-model.py test3 ++ oc exec logging-kibana-1-54cs7 -c kibana -- curl --connect-timeout 1 -s -k --cert /etc/kibana/keys/cert --key /etc/kibana/keys/key -H 'X-Proxy-Remote-User: admin' -H 'Authorization: Bearer M06ucLbXQmf1r0rTr11YC9NsgRypaEh3NTq39If9gPg' -H 'X-Forwarded-For: 127.0.0.1' 'https://logging-es-ops:9200/.operations*/_search?q=message:eb0713dc-21ed-4841-9983-17ed6b91e840' ++ : ++ '[' 0 '!=' 0 ']' ++ return 0 ++ add_cdm_env_var_val CDM_UNDEFINED_NAME myname +++ mktemp ++ junk=/tmp/tmp.Xq2tcYBcQq ++ cat ++ oc get template logging-fluentd-template -o yaml ++ sed '/env:/r /tmp/tmp.Xq2tcYBcQq' ++ oc replace -f - template "logging-fluentd-template" replaced ++ rm -f /tmp/tmp.Xq2tcYBcQq ++ restart_fluentd ++ oc delete daemonset logging-fluentd daemonset "logging-fluentd" deleted ++ wait_for_pod_ACTION stop logging-fluentd-4gn2k ++ local ii=120 ++ local incr=10 ++ '[' stop = start ']' ++ curpod=logging-fluentd-4gn2k ++ '[' -z logging-fluentd-4gn2k -a -n '' ']' ++ '[' 120 -gt 0 ']' ++ '[' stop = stop ']' ++ oc 
describe pod/logging-fluentd-4gn2k ++ '[' stop = start ']' ++ break ++ '[' 120 -le 0 ']' ++ return 0 ++ oc process logging-fluentd-template ++ oc create -f - daemonset "logging-fluentd" created ++ wait_for_pod_ACTION start fluentd ++ local ii=120 ++ local incr=10 ++ '[' start = start ']' +++ get_running_pod fluentd +++ oc get pods -l component=fluentd +++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}' ++ curpod= ++ '[' 120 -gt 0 ']' ++ '[' start = stop ']' ++ '[' start = start ']' ++ '[' -z '' ']' ++ '[' -n '' ']' ++ '[' -n 1 ']' ++ echo pod for component=fluentd not running yet pod for component=fluentd not running yet ++ sleep 10 +++ expr 120 - 10 ++ ii=110 ++ '[' start = start ']' +++ get_running_pod fluentd +++ oc get pods -l component=fluentd +++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}' ++ curpod=logging-fluentd-z3676 ++ '[' 110 -gt 0 ']' ++ '[' start = stop ']' ++ '[' start = start ']' ++ '[' -z logging-fluentd-z3676 ']' ++ break ++ '[' 110 -le 0 ']' ++ return 0 +++ get_running_pod fluentd +++ oc get pods -l component=fluentd +++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}' ++ fpod=logging-fluentd-z3676 ++ write_and_verify_logs test4 ++ expected=1 ++ rc=0 ++ wait_for_fluentd_to_catch_up get_logmessage get_logmessage2 +++ date +%s ++ local starttime=1496438936 +++ date -u --rfc-3339=ns ++ echo START wait_for_fluentd_to_catch_up at 2017-06-02 21:28:56.958786052+00:00 START wait_for_fluentd_to_catch_up at 2017-06-02 21:28:56.958786052+00:00 +++ get_running_pod es +++ oc get pods -l component=es +++ awk -v sel=es '$1 ~ sel && $3 == "Running" {print $1}' ++ local es_pod=logging-es-data-master-i5jtydma-1-21qpf +++ get_running_pod es-ops +++ oc get pods -l component=es-ops +++ awk -v sel=es-ops '$1 ~ sel && $3 == "Running" {print $1}' ++ local es_ops_pod=logging-es-ops-data-master-tycs4wrj-1-70xts ++ '[' -z logging-es-ops-data-master-tycs4wrj-1-70xts ']' +++ uuidgen ++ local uuid_es=b7fa6b26-dc4c-4519-ac2d-5cff58205cf6 +++ uuidgen ++ local uuid_es_ops=5fb5acdc-a9d3-4caa-b8a7-93152fdf7682 ++ local expected=1 ++ local timeout=300 ++ add_test_message b7fa6b26-dc4c-4519-ac2d-5cff58205cf6 +++ get_running_pod kibana +++ oc get pods -l component=kibana +++ awk -v sel=kibana '$1 ~ sel && $3 == "Running" {print $1}' ++ local kib_pod=logging-kibana-1-54cs7 ++ oc exec logging-kibana-1-54cs7 -c kibana -- curl --connect-timeout 1 -s http://localhost:5601/b7fa6b26-dc4c-4519-ac2d-5cff58205cf6 added es message b7fa6b26-dc4c-4519-ac2d-5cff58205cf6 ++ echo added es message b7fa6b26-dc4c-4519-ac2d-5cff58205cf6 ++ logger -i -p local6.info -t 5fb5acdc-a9d3-4caa-b8a7-93152fdf7682 5fb5acdc-a9d3-4caa-b8a7-93152fdf7682 added es-ops message 5fb5acdc-a9d3-4caa-b8a7-93152fdf7682 ++ echo added es-ops message 5fb5acdc-a9d3-4caa-b8a7-93152fdf7682 ++ local rc=0 ++ espod=logging-es-data-master-i5jtydma-1-21qpf ++ myproject=project.logging ++ mymessage=b7fa6b26-dc4c-4519-ac2d-5cff58205cf6 ++ expected=1 ++ wait_until_cmd_or_err test_count_expected test_count_err 300 ++ let ii=300 ++ local interval=1 ++ '[' 300 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message b7fa6b26-dc4c-4519-ac2d-5cff58205cf6 +++ get_count_from_json +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:b7fa6b26-dc4c-4519-ac2d-5cff58205cf6' --connect-timeout 1 +++ local 
pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:b7fa6b26-dc4c-4519-ac2d-5cff58205cf6' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:b7fa6b26-dc4c-4519-ac2d-5cff58205cf6' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 299 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message b7fa6b26-dc4c-4519-ac2d-5cff58205cf6 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:b7fa6b26-dc4c-4519-ac2d-5cff58205cf6' --connect-timeout 1 +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:b7fa6b26-dc4c-4519-ac2d-5cff58205cf6' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:b7fa6b26-dc4c-4519-ac2d-5cff58205cf6' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 298 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message b7fa6b26-dc4c-4519-ac2d-5cff58205cf6 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:b7fa6b26-dc4c-4519-ac2d-5cff58205cf6' --connect-timeout 1 +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:b7fa6b26-dc4c-4519-ac2d-5cff58205cf6' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:b7fa6b26-dc4c-4519-ac2d-5cff58205cf6' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 297 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message b7fa6b26-dc4c-4519-ac2d-5cff58205cf6 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:b7fa6b26-dc4c-4519-ac2d-5cff58205cf6' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:b7fa6b26-dc4c-4519-ac2d-5cff58205cf6' +++ shift +++ shift +++ args=("${@:-}") +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 
'https://localhost:9200/project.logging*/_count?q=message:b7fa6b26-dc4c-4519-ac2d-5cff58205cf6' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 296 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message b7fa6b26-dc4c-4519-ac2d-5cff58205cf6 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:b7fa6b26-dc4c-4519-ac2d-5cff58205cf6' --connect-timeout 1 +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:b7fa6b26-dc4c-4519-ac2d-5cff58205cf6' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:b7fa6b26-dc4c-4519-ac2d-5cff58205cf6' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 295 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message b7fa6b26-dc4c-4519-ac2d-5cff58205cf6 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:b7fa6b26-dc4c-4519-ac2d-5cff58205cf6' --connect-timeout 1 +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:b7fa6b26-dc4c-4519-ac2d-5cff58205cf6' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:b7fa6b26-dc4c-4519-ac2d-5cff58205cf6' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 294 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message b7fa6b26-dc4c-4519-ac2d-5cff58205cf6 +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:b7fa6b26-dc4c-4519-ac2d-5cff58205cf6' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local 'endpoint=/project.logging*/_count?q=message:b7fa6b26-dc4c-4519-ac2d-5cff58205cf6' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:b7fa6b26-dc4c-4519-ac2d-5cff58205cf6' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 293 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message b7fa6b26-dc4c-4519-ac2d-5cff58205cf6 +++ get_count_from_json +++ python -c 'import json, sys; print 
json.loads(sys.stdin.read()).get("count", 0)' +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:b7fa6b26-dc4c-4519-ac2d-5cff58205cf6' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:b7fa6b26-dc4c-4519-ac2d-5cff58205cf6' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:b7fa6b26-dc4c-4519-ac2d-5cff58205cf6' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 292 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message b7fa6b26-dc4c-4519-ac2d-5cff58205cf6 +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:b7fa6b26-dc4c-4519-ac2d-5cff58205cf6' --connect-timeout 1 +++ get_count_from_json +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:b7fa6b26-dc4c-4519-ac2d-5cff58205cf6' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:b7fa6b26-dc4c-4519-ac2d-5cff58205cf6' ++ local nrecs=1 ++ test 1 = 1 ++ break ++ '[' 292 -le 0 ']' ++ return 0 good - wait_for_fluentd_to_catch_up: found 1 record project logging for b7fa6b26-dc4c-4519-ac2d-5cff58205cf6 ++ echo good - wait_for_fluentd_to_catch_up: found 1 record project logging for b7fa6b26-dc4c-4519-ac2d-5cff58205cf6 ++ espod=logging-es-ops-data-master-tycs4wrj-1-70xts ++ myproject=.operations ++ mymessage=5fb5acdc-a9d3-4caa-b8a7-93152fdf7682 ++ expected=1 ++ myfield=systemd.u.SYSLOG_IDENTIFIER ++ wait_until_cmd_or_err test_count_expected test_count_err 300 ++ let ii=300 ++ local interval=1 ++ '[' 300 -gt 0 ']' ++ test_count_expected ++ myfield=systemd.u.SYSLOG_IDENTIFIER +++ query_es_from_es logging-es-ops-data-master-tycs4wrj-1-70xts .operations _count systemd.u.SYSLOG_IDENTIFIER 5fb5acdc-a9d3-4caa-b8a7-93152fdf7682 +++ get_count_from_json +++ curl_es logging-es-ops-data-master-tycs4wrj-1-70xts '/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:5fb5acdc-a9d3-4caa-b8a7-93152fdf7682' --connect-timeout 1 +++ local pod=logging-es-ops-data-master-tycs4wrj-1-70xts +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local 'endpoint=/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:5fb5acdc-a9d3-4caa-b8a7-93152fdf7682' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-ops-data-master-tycs4wrj-1-70xts -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:5fb5acdc-a9d3-4caa-b8a7-93152fdf7682' ++ local nrecs=1 ++ test 1 = 1 ++ break ++ '[' 300 -le 0 ']' ++ return 0 good - wait_for_fluentd_to_catch_up: found 1 record project 
.operations for 5fb5acdc-a9d3-4caa-b8a7-93152fdf7682 ++ echo good - wait_for_fluentd_to_catch_up: found 1 record project .operations for 5fb5acdc-a9d3-4caa-b8a7-93152fdf7682 ++ '[' -n get_logmessage ']' ++ get_logmessage b7fa6b26-dc4c-4519-ac2d-5cff58205cf6 ++ logmessage=b7fa6b26-dc4c-4519-ac2d-5cff58205cf6 ++ '[' -n get_logmessage2 ']' ++ get_logmessage2 5fb5acdc-a9d3-4caa-b8a7-93152fdf7682 ++ logmessage2=5fb5acdc-a9d3-4caa-b8a7-93152fdf7682 +++ date +%s ++ local endtime=1496438950 +++ expr 1496438950 - 1496438936 +++ date -u --rfc-3339=ns ++ echo END wait_for_fluentd_to_catch_up took 14 seconds at 2017-06-02 21:29:10.011706458+00:00 END wait_for_fluentd_to_catch_up took 14 seconds at 2017-06-02 21:29:10.011706458+00:00 ++ return 0 +++ get_running_pod kibana +++ oc get pods -l component=kibana +++ awk -v sel=kibana '$1 ~ sel && $3 == "Running" {print $1}' ++ kpod=logging-kibana-1-54cs7 ++ '[' 0 = 0 ']' ++ curl_es_from_kibana logging-kibana-1-54cs7 logging-es project.logging _search message b7fa6b26-dc4c-4519-ac2d-5cff58205cf6 ++ python test-viaq-data-model.py test4 ++ oc exec logging-kibana-1-54cs7 -c kibana -- curl --connect-timeout 1 -s -k --cert /etc/kibana/keys/cert --key /etc/kibana/keys/key -H 'X-Proxy-Remote-User: admin' -H 'Authorization: Bearer M06ucLbXQmf1r0rTr11YC9NsgRypaEh3NTq39If9gPg' -H 'X-Forwarded-For: 127.0.0.1' 'https://logging-es:9200/project.logging*/_search?q=message:b7fa6b26-dc4c-4519-ac2d-5cff58205cf6' ++ : ++ '[' 0 = 0 ']' ++ curl_es_from_kibana logging-kibana-1-54cs7 logging-es-ops .operations _search message 5fb5acdc-a9d3-4caa-b8a7-93152fdf7682 ++ python test-viaq-data-model.py test4 ++ oc exec logging-kibana-1-54cs7 -c kibana -- curl --connect-timeout 1 -s -k --cert /etc/kibana/keys/cert --key /etc/kibana/keys/key -H 'X-Proxy-Remote-User: admin' -H 'Authorization: Bearer M06ucLbXQmf1r0rTr11YC9NsgRypaEh3NTq39If9gPg' -H 'X-Forwarded-For: 127.0.0.1' 'https://logging-es-ops:9200/.operations*/_search?q=message:5fb5acdc-a9d3-4caa-b8a7-93152fdf7682' ++ : ++ '[' 0 '!=' 0 ']' ++ return 0 ++ del_cdm_env_var CDM_EXTRA_KEEP_FIELDS ++ oc get template logging-fluentd-template -o yaml ++ sed '/- name: CDM_EXTRA_KEEP_FIELDS$/,/value:/d' ++ oc replace -f - template "logging-fluentd-template" replaced ++ add_cdm_env_var_val CDM_EXTRA_KEEP_FIELDS undefined4,undefined5,empty1,undefined3,method,statusCode,type,@timestamp,req,res +++ mktemp ++ junk=/tmp/tmp.cH8HSDnxbd ++ cat ++ oc get template logging-fluentd-template -o yaml ++ sed '/env:/r /tmp/tmp.cH8HSDnxbd' ++ oc replace -f - template "logging-fluentd-template" replaced ++ rm -f /tmp/tmp.cH8HSDnxbd ++ add_cdm_env_var_val CDM_KEEP_EMPTY_FIELDS undefined4,undefined5,empty1,undefined3 +++ mktemp ++ junk=/tmp/tmp.wMMvTzBeiF ++ cat ++ oc get template logging-fluentd-template -o yaml ++ sed '/env:/r /tmp/tmp.wMMvTzBeiF' ++ oc replace -f - template "logging-fluentd-template" replaced ++ rm -f /tmp/tmp.wMMvTzBeiF ++ restart_fluentd ++ oc delete daemonset logging-fluentd daemonset "logging-fluentd" deleted ++ wait_for_pod_ACTION stop logging-fluentd-z3676 ++ local ii=120 ++ local incr=10 ++ '[' stop = start ']' ++ curpod=logging-fluentd-z3676 ++ '[' -z logging-fluentd-z3676 -a -n '' ']' ++ '[' 120 -gt 0 ']' ++ '[' stop = stop ']' ++ oc describe pod/logging-fluentd-z3676 ++ '[' stop = start ']' ++ break ++ '[' 120 -le 0 ']' ++ return 0 ++ oc process logging-fluentd-template ++ oc create -f - daemonset "logging-fluentd" created ++ wait_for_pod_ACTION start fluentd ++ local ii=120 ++ local incr=10 ++ '[' start = start ']' +++ 
get_running_pod fluentd +++ oc get pods -l component=fluentd +++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}' ++ curpod= ++ '[' 120 -gt 0 ']' ++ '[' start = stop ']' ++ '[' start = start ']' ++ '[' -z '' ']' ++ '[' -n '' ']' ++ '[' -n 1 ']' pod for component=fluentd not running yet ++ echo pod for component=fluentd not running yet ++ sleep 10 +++ expr 120 - 10 ++ ii=110 ++ '[' start = start ']' +++ get_running_pod fluentd +++ oc get pods -l component=fluentd +++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}' ++ curpod=logging-fluentd-0s6st ++ '[' 110 -gt 0 ']' ++ '[' start = stop ']' ++ '[' start = start ']' ++ '[' -z logging-fluentd-0s6st ']' ++ break ++ '[' 110 -le 0 ']' ++ return 0 +++ get_running_pod fluentd +++ oc get pods -l component=fluentd +++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}' ++ fpod=logging-fluentd-0s6st ++ write_and_verify_logs test5 allow_empty ++ expected=1 ++ rc=0 ++ wait_for_fluentd_to_catch_up get_logmessage get_logmessage2 +++ date +%s ++ local starttime=1496438975 +++ date -u --rfc-3339=ns ++ echo START wait_for_fluentd_to_catch_up at 2017-06-02 21:29:35.721178873+00:00 START wait_for_fluentd_to_catch_up at 2017-06-02 21:29:35.721178873+00:00 +++ get_running_pod es +++ oc get pods -l component=es +++ awk -v sel=es '$1 ~ sel && $3 == "Running" {print $1}' ++ local es_pod=logging-es-data-master-i5jtydma-1-21qpf +++ get_running_pod es-ops +++ oc get pods -l component=es-ops +++ awk -v sel=es-ops '$1 ~ sel && $3 == "Running" {print $1}' ++ local es_ops_pod=logging-es-ops-data-master-tycs4wrj-1-70xts ++ '[' -z logging-es-ops-data-master-tycs4wrj-1-70xts ']' +++ uuidgen ++ local uuid_es=e315915e-8c11-4f05-8e1a-785caa3e4bbf +++ uuidgen ++ local uuid_es_ops=9de196e6-e1cd-4ed2-af66-a3bcf6618fe0 ++ local expected=1 ++ local timeout=300 ++ add_test_message e315915e-8c11-4f05-8e1a-785caa3e4bbf +++ get_running_pod kibana +++ oc get pods -l component=kibana +++ awk -v sel=kibana '$1 ~ sel && $3 == "Running" {print $1}' ++ local kib_pod=logging-kibana-1-54cs7 ++ oc exec logging-kibana-1-54cs7 -c kibana -- curl --connect-timeout 1 -s http://localhost:5601/e315915e-8c11-4f05-8e1a-785caa3e4bbf ++ echo added es message e315915e-8c11-4f05-8e1a-785caa3e4bbf added es message e315915e-8c11-4f05-8e1a-785caa3e4bbf ++ logger -i -p local6.info -t 9de196e6-e1cd-4ed2-af66-a3bcf6618fe0 9de196e6-e1cd-4ed2-af66-a3bcf6618fe0 added es-ops message 9de196e6-e1cd-4ed2-af66-a3bcf6618fe0 ++ echo added es-ops message 9de196e6-e1cd-4ed2-af66-a3bcf6618fe0 ++ local rc=0 ++ espod=logging-es-data-master-i5jtydma-1-21qpf ++ myproject=project.logging ++ mymessage=e315915e-8c11-4f05-8e1a-785caa3e4bbf ++ expected=1 ++ wait_until_cmd_or_err test_count_expected test_count_err 300 ++ let ii=300 ++ local interval=1 ++ '[' 300 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message e315915e-8c11-4f05-8e1a-785caa3e4bbf +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:e315915e-8c11-4f05-8e1a-785caa3e4bbf' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:e315915e-8c11-4f05-8e1a-785caa3e4bbf' +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent 
--insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:e315915e-8c11-4f05-8e1a-785caa3e4bbf' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 299 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message e315915e-8c11-4f05-8e1a-785caa3e4bbf +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:e315915e-8c11-4f05-8e1a-785caa3e4bbf' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:e315915e-8c11-4f05-8e1a-785caa3e4bbf' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:e315915e-8c11-4f05-8e1a-785caa3e4bbf' +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 298 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message e315915e-8c11-4f05-8e1a-785caa3e4bbf +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:e315915e-8c11-4f05-8e1a-785caa3e4bbf' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local 'endpoint=/project.logging*/_count?q=message:e315915e-8c11-4f05-8e1a-785caa3e4bbf' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:e315915e-8c11-4f05-8e1a-785caa3e4bbf' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 297 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message e315915e-8c11-4f05-8e1a-785caa3e4bbf +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:e315915e-8c11-4f05-8e1a-785caa3e4bbf' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local 'endpoint=/project.logging*/_count?q=message:e315915e-8c11-4f05-8e1a-785caa3e4bbf' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:e315915e-8c11-4f05-8e1a-785caa3e4bbf' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 296 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message 
e315915e-8c11-4f05-8e1a-785caa3e4bbf +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:e315915e-8c11-4f05-8e1a-785caa3e4bbf' --connect-timeout 1 +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:e315915e-8c11-4f05-8e1a-785caa3e4bbf' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:e315915e-8c11-4f05-8e1a-785caa3e4bbf' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 295 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message e315915e-8c11-4f05-8e1a-785caa3e4bbf +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:e315915e-8c11-4f05-8e1a-785caa3e4bbf' --connect-timeout 1 +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:e315915e-8c11-4f05-8e1a-785caa3e4bbf' +++ shift +++ shift +++ args=("${@:-}") +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:e315915e-8c11-4f05-8e1a-785caa3e4bbf' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 294 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message e315915e-8c11-4f05-8e1a-785caa3e4bbf +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:e315915e-8c11-4f05-8e1a-785caa3e4bbf' --connect-timeout 1 +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:e315915e-8c11-4f05-8e1a-785caa3e4bbf' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:e315915e-8c11-4f05-8e1a-785caa3e4bbf' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 293 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message e315915e-8c11-4f05-8e1a-785caa3e4bbf +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:e315915e-8c11-4f05-8e1a-785caa3e4bbf' --connect-timeout 1 +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:e315915e-8c11-4f05-8e1a-785caa3e4bbf' +++ 
shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:e315915e-8c11-4f05-8e1a-785caa3e4bbf' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 292 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message e315915e-8c11-4f05-8e1a-785caa3e4bbf +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:e315915e-8c11-4f05-8e1a-785caa3e4bbf' --connect-timeout 1 +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:e315915e-8c11-4f05-8e1a-785caa3e4bbf' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:e315915e-8c11-4f05-8e1a-785caa3e4bbf' ++ local nrecs=0 ++ test 0 = 1 ++ sleep 1 ++ let ii=ii-1 ++ '[' 291 -gt 0 ']' ++ test_count_expected ++ myfield=message +++ query_es_from_es logging-es-data-master-i5jtydma-1-21qpf project.logging _count message e315915e-8c11-4f05-8e1a-785caa3e4bbf +++ get_count_from_json +++ curl_es logging-es-data-master-i5jtydma-1-21qpf '/project.logging*/_count?q=message:e315915e-8c11-4f05-8e1a-785caa3e4bbf' --connect-timeout 1 +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ local pod=logging-es-data-master-i5jtydma-1-21qpf +++ local 'endpoint=/project.logging*/_count?q=message:e315915e-8c11-4f05-8e1a-785caa3e4bbf' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-data-master-i5jtydma-1-21qpf -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/project.logging*/_count?q=message:e315915e-8c11-4f05-8e1a-785caa3e4bbf' ++ local nrecs=1 ++ test 1 = 1 ++ break ++ '[' 291 -le 0 ']' ++ return 0 good - wait_for_fluentd_to_catch_up: found 1 record project logging for e315915e-8c11-4f05-8e1a-785caa3e4bbf ++ echo good - wait_for_fluentd_to_catch_up: found 1 record project logging for e315915e-8c11-4f05-8e1a-785caa3e4bbf ++ espod=logging-es-ops-data-master-tycs4wrj-1-70xts ++ myproject=.operations ++ mymessage=9de196e6-e1cd-4ed2-af66-a3bcf6618fe0 ++ expected=1 ++ myfield=systemd.u.SYSLOG_IDENTIFIER ++ wait_until_cmd_or_err test_count_expected test_count_err 300 ++ let ii=300 ++ local interval=1 ++ '[' 300 -gt 0 ']' ++ test_count_expected ++ myfield=systemd.u.SYSLOG_IDENTIFIER +++ query_es_from_es logging-es-ops-data-master-tycs4wrj-1-70xts .operations _count systemd.u.SYSLOG_IDENTIFIER 9de196e6-e1cd-4ed2-af66-a3bcf6618fe0 +++ get_count_from_json +++ curl_es logging-es-ops-data-master-tycs4wrj-1-70xts '/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:9de196e6-e1cd-4ed2-af66-a3bcf6618fe0' --connect-timeout 1 +++ local pod=logging-es-ops-data-master-tycs4wrj-1-70xts +++ local 
'endpoint=/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:9de196e6-e1cd-4ed2-af66-a3bcf6618fe0' +++ python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)' +++ shift +++ shift +++ args=("${@:-}") +++ local args +++ local secret_dir=/etc/elasticsearch/secret/ +++ oc exec logging-es-ops-data-master-tycs4wrj-1-70xts -- curl --silent --insecure --connect-timeout 1 --key /etc/elasticsearch/secret/admin-key --cert /etc/elasticsearch/secret/admin-cert 'https://localhost:9200/.operations*/_count?q=systemd.u.SYSLOG_IDENTIFIER:9de196e6-e1cd-4ed2-af66-a3bcf6618fe0' ++ local nrecs=1 ++ test 1 = 1 ++ break ++ '[' 300 -le 0 ']' ++ return 0 ++ echo good - wait_for_fluentd_to_catch_up: found 1 record project .operations for 9de196e6-e1cd-4ed2-af66-a3bcf6618fe0 good - wait_for_fluentd_to_catch_up: found 1 record project .operations for 9de196e6-e1cd-4ed2-af66-a3bcf6618fe0 ++ '[' -n get_logmessage ']' ++ get_logmessage e315915e-8c11-4f05-8e1a-785caa3e4bbf ++ logmessage=e315915e-8c11-4f05-8e1a-785caa3e4bbf ++ '[' -n get_logmessage2 ']' ++ get_logmessage2 9de196e6-e1cd-4ed2-af66-a3bcf6618fe0 ++ logmessage2=9de196e6-e1cd-4ed2-af66-a3bcf6618fe0 +++ date +%s ++ local endtime=1496438989 +++ expr 1496438989 - 1496438975 +++ date -u --rfc-3339=ns END wait_for_fluentd_to_catch_up took 14 seconds at 2017-06-02 21:29:49.915367539+00:00 ++ echo END wait_for_fluentd_to_catch_up took 14 seconds at 2017-06-02 21:29:49.915367539+00:00 ++ return 0 +++ get_running_pod kibana +++ oc get pods -l component=kibana +++ awk -v sel=kibana '$1 ~ sel && $3 == "Running" {print $1}' ++ kpod=logging-kibana-1-54cs7 ++ '[' 0 = 0 ']' ++ curl_es_from_kibana logging-kibana-1-54cs7 logging-es project.logging _search message e315915e-8c11-4f05-8e1a-785caa3e4bbf ++ python test-viaq-data-model.py test5 allow_empty ++ oc exec logging-kibana-1-54cs7 -c kibana -- curl --connect-timeout 1 -s -k --cert /etc/kibana/keys/cert --key /etc/kibana/keys/key -H 'X-Proxy-Remote-User: admin' -H 'Authorization: Bearer M06ucLbXQmf1r0rTr11YC9NsgRypaEh3NTq39If9gPg' -H 'X-Forwarded-For: 127.0.0.1' 'https://logging-es:9200/project.logging*/_search?q=message:e315915e-8c11-4f05-8e1a-785caa3e4bbf' ++ : ++ '[' 0 = 0 ']' ++ curl_es_from_kibana logging-kibana-1-54cs7 logging-es-ops .operations _search message 9de196e6-e1cd-4ed2-af66-a3bcf6618fe0 ++ python test-viaq-data-model.py test5 allow_empty ++ oc exec logging-kibana-1-54cs7 -c kibana -- curl --connect-timeout 1 -s -k --cert /etc/kibana/keys/cert --key /etc/kibana/keys/key -H 'X-Proxy-Remote-User: admin' -H 'Authorization: Bearer M06ucLbXQmf1r0rTr11YC9NsgRypaEh3NTq39If9gPg' -H 'X-Forwarded-For: 127.0.0.1' 'https://logging-es-ops:9200/.operations*/_search?q=message:9de196e6-e1cd-4ed2-af66-a3bcf6618fe0' ++ : ++ '[' 0 '!=' 0 ']' ++ return 0 ++ cleanup ++ remove_test_volume ++ oc get template logging-fluentd-template -o json ++ python -c 'import json, sys; obj = json.loads(sys.stdin.read()); vm = obj["objects"][0]["spec"]["template"]["spec"]["containers"][0]["volumeMounts"]; obj["objects"][0]["spec"]["template"]["spec"]["containers"][0]["volumeMounts"] = [xx for xx in vm if xx["name"] != "cdmtest"]; vs = obj["objects"][0]["spec"]["template"]["spec"]["volumes"]; obj["objects"][0]["spec"]["template"]["spec"]["volumes"] = [xx for xx in vs if xx["name"] != "cdmtest"]; print json.dumps(obj, indent=2)' ++ oc replace -f - template "logging-fluentd-template" replaced ++ remove_cdm_env ++ oc get template logging-fluentd-template -o yaml ++ sed '/- name: CDM_/,/value:/d' ++ oc replace -f - template 
"logging-fluentd-template" replaced ++ rm -f /tmp/tmp.Y2LYH1Lbjk ++ restart_fluentd ++ oc delete daemonset logging-fluentd daemonset "logging-fluentd" deleted ++ wait_for_pod_ACTION stop logging-fluentd-0s6st ++ local ii=120 ++ local incr=10 ++ '[' stop = start ']' ++ curpod=logging-fluentd-0s6st ++ '[' -z logging-fluentd-0s6st -a -n '' ']' ++ '[' 120 -gt 0 ']' ++ '[' stop = stop ']' ++ oc describe pod/logging-fluentd-0s6st ++ '[' stop = start ']' ++ break ++ '[' 120 -le 0 ']' ++ return 0 ++ oc process logging-fluentd-template ++ oc create -f - daemonset "logging-fluentd" created ++ wait_for_pod_ACTION start fluentd ++ local ii=120 ++ local incr=10 ++ '[' start = start ']' +++ get_running_pod fluentd +++ oc get pods -l component=fluentd +++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}' ++ curpod= ++ '[' 120 -gt 0 ']' ++ '[' start = stop ']' ++ '[' start = start ']' ++ '[' -z '' ']' ++ '[' -n '' ']' ++ '[' -n 1 ']' pod for component=fluentd not running yet ++ echo pod for component=fluentd not running yet ++ sleep 10 +++ expr 120 - 10 ++ ii=110 ++ '[' start = start ']' +++ get_running_pod fluentd +++ oc get pods -l component=fluentd +++ awk -v sel=fluentd '$1 ~ sel && $3 == "Running" {print $1}' ++ curpod=logging-fluentd-7hhbv ++ '[' 110 -gt 0 ']' ++ '[' start = stop ']' ++ '[' start = start ']' ++ '[' -z logging-fluentd-7hhbv ']' ++ break ++ '[' 110 -le 0 ']' ++ return 0 SKIPPING reinstall test for now /data/src/github.com/openshift/origin/hack/lib/log/system.sh: line 31: 4604 Terminated sar -A -o "${binary_logfile}" 1 86400 > /dev/null 2> "${stderr_logfile}" (wd: /data/src/github.com/openshift/origin) [INFO] [CLEANUP] Beginning cleanup routines... [INFO] [CLEANUP] Dumping cluster events to /tmp/origin-aggregated-logging/artifacts/events.txt [INFO] [CLEANUP] Dumping etcd contents to /tmp/origin-aggregated-logging/artifacts/etcd [WARNING] No compiled `etcdhelper` binary was found. 
Attempting to build one using: [WARNING] $ hack/build-go.sh tools/etcdhelper ++ Building go targets for linux/amd64: tools/etcdhelper /data/src/github.com/openshift/origin/hack/build-go.sh took 166 seconds 2017-06-02 17:33:05.232436 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated [INFO] [CLEANUP] Dumping container logs to /tmp/origin-aggregated-logging/logs/containers [INFO] [CLEANUP] Truncating log files over 100M [INFO] [CLEANUP] Stopping docker containers [INFO] [CLEANUP] Removing docker containers Error response from daemon: No such container: 794853d3226873782b75eb5508530f38ea1fe01206c69f4e664593d6fe9d6268 Error response from daemon: No such container: 46f6ac9b33b77903c633fa07aa9e864da56245983b8f497b152a6c79abc2941b Error response from daemon: No such container: 7b58cde36dfe27350e62f8167dc69370086494f862a6fc21fb8d9bfedfcf0b7b Error response from daemon: No such container: 3a123be74c95a10599c3ccf289540b06f755c1bd8dc21d6f5cef11552b86b965 Error response from daemon: No such container: 4c8c6d24a2933419cf1a2eab77e7e734bbd1558a0f3db24aa602c5fb95a70c18 Error response from daemon: No such container: 9fff3da16c904e41054a5212d95fc36c994ed4f3ab6d3b3ea3f0ed75cbbf8e11 Error response from daemon: No such container: 7ea161620b46db33dbc5e41a41efc976effea48b3d94a82bfc2815f1819eae69 Error response from daemon: No such container: 8ab707e605107648abde7abd8603867f93b01e1f291b28348345e446f660bf10 Error response from daemon: No such container: 2d41466f02d3dd14f4d0c036334f20368199bb39b428582cafae1f1695388053 Error response from daemon: No such container: 4fdb7a5991a4e321805ac8ef4fbdb03017787e4d1c589d1297833f21128e490c Error response from daemon: No such container: c48a6dcda310f3334d3e5a0d092f621090eba7e0e34ebf23726c001cef262b27 Error response from daemon: No such container: c0f9839de01bd0239ef269f0b426974f3d52cc28923c10541783194eeb5ff719 Error response from daemon: No such container: e8ecf00948da746a39294af6dcc9196f5bde6f735b788c0b24ba55cad16ec51f [INFO] [CLEANUP] Killing child processes [INFO] [CLEANUP] Pruning etcd data directory [INFO] /data/src/github.com/openshift/origin/logging.sh exited with code 0 after 00h 55m 24s Finished GIT_URL=https://github.com/openshift/origin-aggregated-logging GIT_BRANCH=master O_A_L_DIR=/data/src/github.com/openshift/origin-aggregated-logging OS_ROOT=/data/src/github.com/openshift/origin ENABLE_OPS_CLUSTER=true USE_LOCAL_SOURCE=true TEST_PERF=false VERBOSE=1 OS_ANSIBLE_REPO=https://github.com/openshift/openshift-ansible OS_ANSIBLE_BRANCH=master ./logging.sh *************************************************** real 55m24.556s user 9m6.802s sys 1m16.460s ==> openshiftdev: Downloading logs ==> openshiftdev: Downloading artifacts from '/var/log/yum.log' to '/var/lib/jenkins/jobs/test-origin-aggregated-logging/workspace@2/origin/artifacts/yum.log' ==> openshiftdev: Downloading artifacts from '/var/log/secure' to '/var/lib/jenkins/jobs/test-origin-aggregated-logging/workspace@2/origin/artifacts/secure' ==> openshiftdev: Downloading artifacts from '/var/log/audit/audit.log' to '/var/lib/jenkins/jobs/test-origin-aggregated-logging/workspace@2/origin/artifacts/audit.log' ==> openshiftdev: Downloading artifacts from '/tmp/origin-aggregated-logging/' to '/var/lib/jenkins/jobs/test-origin-aggregated-logging/workspace@2/origin/artifacts' + false + false + test_pull_requests --mark_test_success 462 --repo origin-aggregated-logging --config /var/lib/jenkins/.test_pull_requests_logging.json Rate limit remaining: 1421 Marking SUCCESS for pull 
request #462 in repo 'origin-aggregated-logging' Recreating comment #305901746 with Aggregated Logging Test Results: SUCCESS (https://ci.openshift.redhat.com/jenkins/job/test-origin-aggregated-logging/1550/) (Base Commit: 0986045cc4f5bb216c462883f4f9ae77025f3e62) Deleting comment #305901746 Updating status of 'f1648e6796ba5adbad0fc4b3a3b56bb144328091' with state: success Rate limit remaining: 1416; delta: 5 [description-setter] Description set: <a href="https://github.com/openshift/origin-aggregated-logging/pull/462">https://github.com/openshift/origin-aggregated-logging/pull/462</a> [PostBuildScript] - Execution post build scripts. [workspace@2] $ /bin/sh -xe /tmp/hudson1656869019568887179.sh + INSTANCE_NAME=origin_logging-rhel7-1550 + pushd origin ~/jobs/test-origin-aggregated-logging/workspace@2/origin ~/jobs/test-origin-aggregated-logging/workspace@2 + rc=0 + '[' -f .vagrant-openshift.json ']' ++ /usr/bin/vagrant ssh -c 'sudo ausearch -m avc' + ausearchresult='<no matches>' + rc=1 + '[' '<no matches>' = '<no matches>' ']' + rc=0 + /usr/bin/vagrant destroy -f ==> openshiftdev: Terminating the instance... ==> openshiftdev: Running cleanup tasks for 'shell' provisioner... + popd ~/jobs/test-origin-aggregated-logging/workspace@2 + exit 0 Finished: SUCCESS
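
The write_and_verify_logs cycles in the trace above first seed two unique markers: a UUID requested from the Kibana pod as a bogus URL (so it lands in Kibana's container log and is picked up into project.logging) and a UUID written to the systemd journal with logger (picked up into .operations). A minimal sketch of that seeding step, reconstructed from the commands shown; the pod selection and UUIDs are placeholders, not the repo's exact helper code:

#!/bin/bash
# Seed one test message for the project index and one for the operations index,
# mirroring the add_test_message / logger steps in the trace.
uuid_es=$(uuidgen)        # expected to appear in project.logging via the kibana container log
uuid_es_ops=$(uuidgen)    # expected to appear in .operations via the journal

# Find a Running kibana pod (same awk filter style as the trace).
kib_pod=$(oc get pods -l component=kibana | awk '$1 ~ /kibana/ && $3 == "Running" {print $1}')

# Request a nonexistent path so the UUID is written to Kibana's access log.
oc exec "$kib_pod" -c kibana -- curl --connect-timeout 1 -s "http://localhost:5601/${uuid_es}" || true
echo "added es message ${uuid_es}"

# Write the ops-side marker to the journal; it is later found via systemd.u.SYSLOG_IDENTIFIER.
logger -i -p local6.info -t "$uuid_es_ops" "$uuid_es_ops"
echo "added es-ops message ${uuid_es_ops}"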
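
The long runs of query_es_from_es / curl_es / get_count_from_json / wait_until_cmd_or_err in the trace poll Elasticsearch from inside the ES pod, once per second for up to 300 seconds, until the seeded message is indexed. A sketch of that polling pattern under the same assumptions the trace shows (admin cert and key mounted at /etc/elasticsearch/secret/ in the ES pod; python2-style print, as in the log); function and variable names here are illustrative, not the repo's definitions:

#!/bin/bash
# Query an ES endpoint from inside the ES pod, authenticating with the mounted admin cert.
curl_es() {
    local pod=$1 endpoint=$2
    local secret_dir=/etc/elasticsearch/secret/
    oc exec "$pod" -- curl --silent --insecure --connect-timeout 1 \
        --key "${secret_dir}admin-key" --cert "${secret_dir}admin-cert" \
        "https://localhost:9200${endpoint}"
}

# Pull the "count" field out of an ES _count response (python2 print, as in the log).
get_count_from_json() {
    python -c 'import json, sys; print json.loads(sys.stdin.read()).get("count", 0)'
}

# Retry once per second until the expected record count shows up or the timeout expires.
wait_for_count() {
    local pod=$1 index=$2 field=$3 message=$4 expected=$5 timeout=${6:-300}
    local ii=$timeout nrecs
    while [ "$ii" -gt 0 ]; do
        nrecs=$(curl_es "$pod" "/${index}*/_count?q=${field}:${message}" | get_count_from_json)
        test "$nrecs" = "$expected" && return 0
        sleep 1
        ii=$((ii - 1))
    done
    echo "timed out waiting for $expected record(s) of $message in $index" >&2
    return 1
}

# Example (placeholder pod name and UUID):
# wait_for_count logging-es-data-master-xxxxxxxx-1-yyyyy project.logging message <uuid> 1 300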
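
After the counts converge, the curl_es_from_kibana calls verify the same messages are reachable through Kibana's proxy to Elasticsearch, using the client cert in the Kibana container plus the X-Proxy-Remote-User, Authorization and X-Forwarded-For headers shown in the trace, with the result piped into test-viaq-data-model.py. A sketch of that check; the bearer token and pod name are placeholders:

#!/bin/bash
# Search ES through the Kibana pod, as the trace does for each test case.
curl_es_from_kibana() {
    local kib_pod=$1 es_svc=$2 index=$3 endpoint=$4 field=$5 message=$6
    oc exec "$kib_pod" -c kibana -- curl --connect-timeout 1 -s -k \
        --cert /etc/kibana/keys/cert --key /etc/kibana/keys/key \
        -H 'X-Proxy-Remote-User: admin' \
        -H "Authorization: Bearer ${TOKEN:?set TOKEN to an admin bearer token}" \
        -H 'X-Forwarded-For: 127.0.0.1' \
        "https://${es_svc}:9200/${index}*/${endpoint}?q=${field}:${message}"
}

# Example (placeholders):
# TOKEN=$(oc whoami -t) curl_es_from_kibana logging-kibana-1-xxxxx logging-es project.logging _search message <uuid>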
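
Between test cases, the add_cdm_env_var_val / restart_fluentd steps inject CDM_* environment variables into the logging-fluentd-template and bounce the daemonset so fluentd picks them up. A sketch of that edit-and-restart pattern, reconstructed from the trace; the YAML indentation of the injected fragment is a guess and the variable name/value are placeholders:

#!/bin/bash
# Insert an env var into the fluentd template right after its "env:" key, then replace it.
add_cdm_env_var_val() {
    local name=$1 value=$2
    local junk
    junk=$(mktemp)
    printf '        - name: %s\n          value: "%s"\n' "$name" "$value" > "$junk"
    oc get template logging-fluentd-template -o yaml | sed "/env:/r $junk" | oc replace -f -
    rm -f "$junk"
}

# Delete the daemonset, wait for the old pod to go away, recreate it, wait for a new Running pod.
restart_fluentd() {
    local oldpod ii
    oldpod=$(oc get pods -l component=fluentd | awk '$3 == "Running" {print $1; exit}')
    oc delete daemonset logging-fluentd
    for ((ii = 120; ii > 0; ii -= 10)); do
        oc get "pod/$oldpod" > /dev/null 2>&1 || break
        sleep 10
    done
    oc process logging-fluentd-template | oc create -f -
    for ((ii = 120; ii > 0; ii -= 10)); do
        oc get pods -l component=fluentd | awk '$3 == "Running" {found=1} END {exit !found}' && break
        echo "pod for component=fluentd not running yet"
        sleep 10
    done
}

# Example: add_cdm_env_var_val CDM_UNDEFINED_NAME myname && restart_fluentd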
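
The cleanup step near the end of the run strips the test volume named "cdmtest" back out of the fluentd template with an inline Python filter over the template JSON, and removes every CDM_* env var with sed, before restarting fluentd one last time. A sketch of the same idea (python2 print, as in the log); treat it as illustrative rather than the repo's exact cleanup code:

#!/bin/bash
# Drop the "cdmtest" volume and volumeMount from the fluentd template.
remove_test_volume() {
    oc get template logging-fluentd-template -o json | \
        python -c 'import json, sys
obj = json.loads(sys.stdin.read())
spec = obj["objects"][0]["spec"]["template"]["spec"]
spec["containers"][0]["volumeMounts"] = [m for m in spec["containers"][0]["volumeMounts"] if m["name"] != "cdmtest"]
spec["volumes"] = [v for v in spec["volumes"] if v["name"] != "cdmtest"]
print json.dumps(obj, indent=2)' | \
        oc replace -f -
}

# Drop every CDM_* environment variable (name/value pairs) from the template.
remove_cdm_env() {
    oc get template logging-fluentd-template -o yaml | \
        sed '/- name: CDM_/,/value:/d' | \
        oc replace -f -
}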