Started by upstream project "test_pull_request_origin_aggregated_logging" build number 91
originally caused by:
 Started by remote host 50.17.198.52
[EnvInject] - Loading node environment variables.
Building in workspace /var/lib/jenkins/jobs/test-origin-aggregated-logging/workspace@2
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
OS_ROOT=/data/src/github.com/openshift/origin
INSTANCE_TYPE=c4.xlarge
GITHUB_REPO=openshift
OS=rhel7
TESTNAME=logging
[EnvInject] - Variables injected successfully.
[workspace@2] $ /bin/sh -xe /tmp/hudson4307702087416095772.sh
+ false
+ unset GOPATH
+ REPO_NAME=origin-aggregated-logging
+ rm -rf origin-aggregated-logging
+ vagrant origin-local-checkout --replace --repo origin-aggregated-logging -b master
You don't seem to have the GOPATH environment variable set on your system.
See: 'go help gopath' for more details about GOPATH.
Waiting for the cloning process to finish
Cloning origin-aggregated-logging ...
Submodule 'deployer/common' (https://github.com/openshift/origin-integration-common) registered for path 'deployer/common'
Submodule 'kibana-proxy' (https://github.com/fabric8io/openshift-auth-proxy.git) registered for path 'kibana-proxy'
Cloning into 'deployer/common'...
Submodule path 'deployer/common': checked out '45bf993212cdcbab5cbce3b3fab74a72b851402e'
Cloning into 'kibana-proxy'...
Submodule path 'kibana-proxy': checked out '118dfb40f7a8082d370ba7f4805255c9ec7c8178'
Origin repositories cloned into /var/lib/jenkins/jobs/test-origin-aggregated-logging/workspace@2
+ pushd origin-aggregated-logging
~/jobs/test-origin-aggregated-logging/workspace@2/origin-aggregated-logging ~/jobs/test-origin-aggregated-logging/workspace@2
+ git checkout master
Already on 'master'
+ popd
~/jobs/test-origin-aggregated-logging/workspace@2
+ '[' -n '' ']'
+ vagrant origin-local-checkout --replace
You don't seem to have the GOPATH environment variable set on your system.
See: 'go help gopath' for more details about GOPATH.
Waiting for the cloning process to finish
Checking repo integrity for /var/lib/jenkins/jobs/test-origin-aggregated-logging/workspace@2/origin
~/jobs/test-origin-aggregated-logging/workspace@2/origin ~/jobs/test-origin-aggregated-logging/workspace@2
# On branch master
# Untracked files:
#   (use "git add ..." to include in what will be committed)
#
#   artifacts/
#
# It took 2.35 seconds to enumerate untracked files. 'status -uno'
# may speed it up, but you have to be careful not to forget to add
# new files yourself (see 'git help status').
nothing added to commit but untracked files present (use "git add" to track)
~/jobs/test-origin-aggregated-logging/workspace@2
Replacing: /var/lib/jenkins/jobs/test-origin-aggregated-logging/workspace@2/origin
~/jobs/test-origin-aggregated-logging/workspace@2/origin ~/jobs/test-origin-aggregated-logging/workspace@2
From https://github.com/openshift/origin
   2458531..cc2ed8f  master     -> origin/master
Already on 'master'
Your branch is behind 'origin/master' by 2 commits, and can be fast-forwarded.
(use "git pull" to update your local branch) HEAD is now at cc2ed8f Merge pull request #13586 from danwinship/egress-router-proxy Removing .vagrant-openshift.json Removing .vagrant/ Removing artifacts/ fatal: branch name required ~/jobs/test-origin-aggregated-logging/workspace@2 Origin repositories cloned into /var/lib/jenkins/jobs/test-origin-aggregated-logging/workspace@2 + pushd origin ~/jobs/test-origin-aggregated-logging/workspace@2/origin ~/jobs/test-origin-aggregated-logging/workspace@2 + INSTANCE_NAME=origin_logging-rhel7-1653 + GIT_URL=https://github.com/openshift/origin-aggregated-logging ++ sed s,https://,, ++ echo https://github.com/openshift/origin-aggregated-logging + OAL_LOCAL_PATH=github.com/openshift/origin-aggregated-logging + OS_O_A_L_DIR=/data/src/github.com/openshift/origin-aggregated-logging + env + sort _=/bin/env BRANCH=master BUILD_CAUSE=UPSTREAMTRIGGER BUILD_CAUSE_UPSTREAMTRIGGER=true BUILD_DISPLAY_NAME=#1653 BUILD_ID=1653 BUILD_NUMBER=1653 BUILD_TAG=jenkins-test-origin-aggregated-logging-1653 BUILD_URL=https://ci.openshift.redhat.com/jenkins/job/test-origin-aggregated-logging/1653/ EXECUTOR_NUMBER=93 GITHUB_REPO=openshift HOME=/var/lib/jenkins HUDSON_COOKIE=a8d2e912-5200-4ce4-b6b1-6a6ad8ccc252 HUDSON_HOME=/var/lib/jenkins HUDSON_SERVER_COOKIE=ec11f8b2841c966f HUDSON_URL=https://ci.openshift.redhat.com/jenkins/ INSTANCE_TYPE=c4.xlarge JENKINS_HOME=/var/lib/jenkins JENKINS_SERVER_COOKIE=ec11f8b2841c966f JENKINS_URL=https://ci.openshift.redhat.com/jenkins/ JOB_BASE_NAME=test-origin-aggregated-logging JOB_DISPLAY_URL=https://ci.openshift.redhat.com/jenkins/job/test-origin-aggregated-logging/display/redirect JOB_NAME=test-origin-aggregated-logging JOB_URL=https://ci.openshift.redhat.com/jenkins/job/test-origin-aggregated-logging/ LANG=en_US.UTF-8 LOGNAME=jenkins MERGE=false MERGE_SEVERITY=none NLSPATH=/usr/dt/lib/nls/msg/%L/%N.cat NODE_LABELS=master NODE_NAME=master OLDPWD=/var/lib/jenkins/jobs/test-origin-aggregated-logging/workspace@2 OPENSHIFT_ANSIBLE_TARGET_BRANCH=master ORIGIN_AGGREGATED_LOGGING_PULL_ID=421 ORIGIN_AGGREGATED_LOGGING_TARGET_BRANCH=master OS_ANSIBLE_BRANCH=master OS_ANSIBLE_REPO=https://github.com/openshift/openshift-ansible OS=rhel7 OS_ROOT=/data/src/github.com/openshift/origin PATH=/sbin:/usr/sbin:/bin:/usr/bin PWD=/var/lib/jenkins/jobs/test-origin-aggregated-logging/workspace@2/origin ROOT_BUILD_CAUSE=REMOTECAUSE ROOT_BUILD_CAUSE_REMOTECAUSE=true RUN_CHANGES_DISPLAY_URL=https://ci.openshift.redhat.com/jenkins/job/test-origin-aggregated-logging/1653/display/redirect?page=changes RUN_DISPLAY_URL=https://ci.openshift.redhat.com/jenkins/job/test-origin-aggregated-logging/1653/display/redirect SHELL=/bin/bash SHLVL=3 TESTNAME=logging TEST_PERF=false USER=jenkins WORKSPACE=/var/lib/jenkins/jobs/test-origin-aggregated-logging/workspace@2 XFILESEARCHPATH=/usr/dt/app-defaults/%L/Dt + vagrant origin-init --stage inst --os rhel7 --instance-type c4.xlarge origin_logging-rhel7-1653 Reading AWS credentials from /var/lib/jenkins/.awscred Searching devenv-rhel7_* for latest base AMI (required_name_tag=) Found: ami-83a1fc95 (devenv-rhel7_6323) ++ seq 0 2 + for i in '$(seq 0 2)' + vagrant up --provider aws Bringing machine 'openshiftdev' up with 'aws' provider... ==> openshiftdev: Warning! The AWS provider doesn't support any of the Vagrant ==> openshiftdev: high-level network configurations (`config.vm.network`). They ==> openshiftdev: will be silently ignored. ==> openshiftdev: Warning! 
You're launching this instance into a VPC without an ==> openshiftdev: elastic IP. Please verify you're properly connected to a VPN so ==> openshiftdev: you can access this machine, otherwise Vagrant will not be able ==> openshiftdev: to SSH into it. ==> openshiftdev: Launching an instance with the following settings... ==> openshiftdev: -- Type: c4.xlarge ==> openshiftdev: -- AMI: ami-83a1fc95 ==> openshiftdev: -- Region: us-east-1 ==> openshiftdev: -- Keypair: libra ==> openshiftdev: -- Subnet ID: subnet-cf57c596 ==> openshiftdev: -- User Data: yes ==> openshiftdev: -- User Data: ==> openshiftdev: # cloud-config ==> openshiftdev: ==> openshiftdev: growpart: ==> openshiftdev: mode: auto ==> openshiftdev: devices: ['/'] ==> openshiftdev: runcmd: ==> openshiftdev: - [ sh, -xc, "sed -i s/^Defaults.*requiretty/#Defaults requiretty/g /etc/sudoers"] ==> openshiftdev: ==> openshiftdev: -- Block Device Mapping: [{"DeviceName"=>"/dev/sda1", "Ebs.VolumeSize"=>25, "Ebs.VolumeType"=>"gp2"}, {"DeviceName"=>"/dev/sdb", "Ebs.VolumeSize"=>35, "Ebs.VolumeType"=>"gp2"}] ==> openshiftdev: -- Terminate On Shutdown: false ==> openshiftdev: -- Monitoring: false ==> openshiftdev: -- EBS optimized: false ==> openshiftdev: -- Assigning a public IP address in a VPC: false ==> openshiftdev: Waiting for instance to become "ready"... ==> openshiftdev: Waiting for SSH to become available... ==> openshiftdev: Machine is booted and ready for use! ==> openshiftdev: Running provisioner: setup (shell)... openshiftdev: Running: /tmp/vagrant-shell20170609-24699-11sirr.sh ==> openshiftdev: Host: ec2-52-200-78-107.compute-1.amazonaws.com + break + vagrant sync-origin-aggregated-logging -c -s Running ssh/sudo command 'rm -rf /data/src/github.com/openshift/origin-aggregated-logging-bare; ' with timeout 14400. Attempt #0 Running ssh/sudo command 'mkdir -p /ec2-user/.ssh; mv /tmp/file20170609-27108-lw83ui /ec2-user/.ssh/config && chown ec2-user:ec2-user /ec2-user/.ssh/config && chmod 0600 /ec2-user/.ssh/config' with timeout 14400. Attempt #0 Running ssh/sudo command 'mkdir -p /data/src/github.com/openshift/' with timeout 14400. Attempt #0 Running ssh/sudo command 'mkdir -p /data/src/github.com/openshift/builder && chown -R ec2-user:ec2-user /data/src/github.com/openshift/' with timeout 14400. Attempt #0 Running ssh/sudo command 'set -e rm -fr /data/src/github.com/openshift/origin-aggregated-logging-bare; if [ ! -d /data/src/github.com/openshift/origin-aggregated-logging-bare ]; then git clone --quiet --bare https://github.com/openshift/origin-aggregated-logging.git /data/src/github.com/openshift/origin-aggregated-logging-bare >/dev/null fi ' with timeout 14400. Attempt #0 Synchronizing local sources Synchronizing [origin-aggregated-logging@master] from origin-aggregated-logging... Warning: Permanently added '52.200.78.107' (ECDSA) to the list of known hosts. Running ssh/sudo command 'set -e if [ -d /data/src/github.com/openshift/origin-aggregated-logging-bare ]; then rm -rf /data/src/github.com/openshift/origin-aggregated-logging echo 'Cloning origin-aggregated-logging ...' git clone --quiet --recurse-submodules /data/src/github.com/openshift/origin-aggregated-logging-bare /data/src/github.com/openshift/origin-aggregated-logging else MISSING_REPO+='origin-aggregated-logging-bare' fi if [ -n "$MISSING_REPO" ]; then echo 'Missing required upstream repositories:' echo $MISSING_REPO echo 'To fix, execute command: vagrant clone-upstream-repos' fi ' with timeout 14400. Attempt #0 Cloning origin-aggregated-logging ... 
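(For reference: the sync step above amounts to refreshing a bare mirror on the instance and then re-cloning the working copy, with submodules, from that mirror. A minimal sketch of the same sequence follows, with paths copied from the ssh/sudo commands in the log; the harness wraps these in `vagrant ssh` and sudo, which is omitted here, so this assumes it is run as a user that can write under /data/src.)

#!/bin/bash
# Sketch of what `vagrant sync-origin-aggregated-logging -c -s` runs on the
# instance, reconstructed from the ssh/sudo commands logged above.
set -euo pipefail

BARE=/data/src/github.com/openshift/origin-aggregated-logging-bare
WORK=/data/src/github.com/openshift/origin-aggregated-logging

# Refresh the bare mirror that the working copy is cloned from.
rm -rf "$BARE"
git clone --quiet --bare https://github.com/openshift/origin-aggregated-logging.git "$BARE"

# Re-create the working copy (deployer/common and kibana-proxy submodules
# included) from the local mirror instead of hitting GitHub again for the main repo.
rm -rf "$WORK"
echo 'Cloning origin-aggregated-logging ...'
git clone --quiet --recurse-submodules "$BARE" "$WORK"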
Submodule 'deployer/common' (https://github.com/openshift/origin-integration-common) registered for path 'deployer/common' Submodule 'kibana-proxy' (https://github.com/fabric8io/openshift-auth-proxy.git) registered for path 'kibana-proxy' Cloning into 'deployer/common'... Submodule path 'deployer/common': checked out '45bf993212cdcbab5cbce3b3fab74a72b851402e' Cloning into 'kibana-proxy'... Submodule path 'kibana-proxy': checked out '118dfb40f7a8082d370ba7f4805255c9ec7c8178' + vagrant ssh -c 'if [ ! -d /tmp/openshift ] ; then mkdir /tmp/openshift ; fi ; sudo chmod 777 /tmp/openshift' + for image in openshift/base-centos7 centos:centos7 openshift/origin-logging-elasticsearch openshift/origin-logging-fluentd openshift/origin-logging-curator openshift/origin-logging-kibana + echo pulling image openshift/base-centos7 ... pulling image openshift/base-centos7 ... + vagrant ssh -c 'docker pull openshift/base-centos7' -- -n Using default tag: latest Trying to pull repository docker.io/openshift/base-centos7 ... latest: Pulling from docker.io/openshift/base-centos7 45a2e645736c: Pulling fs layer 734fb161cf89: Pulling fs layer 78efc9e155c4: Pulling fs layer 8a3400b7e31a: Pulling fs layer 8a3400b7e31a: Waiting 734fb161cf89: Verifying Checksum 734fb161cf89: Download complete 8a3400b7e31a: Verifying Checksum 8a3400b7e31a: Download complete 45a2e645736c: Verifying Checksum 45a2e645736c: Download complete 78efc9e155c4: Verifying Checksum 78efc9e155c4: Download complete 45a2e645736c: Pull complete 734fb161cf89: Pull complete 78efc9e155c4: Pull complete 8a3400b7e31a: Pull complete Digest: sha256:aea292a3bddba020cde0ee83e6a45807931eb607c164ec6a3674f67039d8cd7c + echo done with openshift/base-centos7 done with openshift/base-centos7 + for image in openshift/base-centos7 centos:centos7 openshift/origin-logging-elasticsearch openshift/origin-logging-fluentd openshift/origin-logging-curator openshift/origin-logging-kibana + echo pulling image centos:centos7 ... pulling image centos:centos7 ... + vagrant ssh -c 'docker pull centos:centos7' -- -n Trying to pull repository docker.io/library/centos ... centos7: Pulling from docker.io/library/centos Digest: sha256:aebf12af704307dfa0079b3babdca8d7e8ff6564696882bcb5d11f1d461f9ee9 + echo done with centos:centos7 done with centos:centos7 + for image in openshift/base-centos7 centos:centos7 openshift/origin-logging-elasticsearch openshift/origin-logging-fluentd openshift/origin-logging-curator openshift/origin-logging-kibana + echo pulling image openshift/origin-logging-elasticsearch ... pulling image openshift/origin-logging-elasticsearch ... + vagrant ssh -c 'docker pull openshift/origin-logging-elasticsearch' -- -n Using default tag: latest Trying to pull repository docker.io/openshift/origin-logging-elasticsearch ... 
latest: Pulling from docker.io/openshift/origin-logging-elasticsearch d5e46245fe40: Already exists c6633336ebec: Pulling fs layer c8922b1f556c: Pulling fs layer 8efa1082b536: Pulling fs layer 1b2efd2f86a8: Pulling fs layer 0c23282fef8c: Pulling fs layer 1181763a1fee: Pulling fs layer 4943cf20f63b: Pulling fs layer 3ca1d3f72a67: Pulling fs layer ff2b5d1f1edd: Pulling fs layer f633748cd255: Pulling fs layer 1181763a1fee: Waiting 4943cf20f63b: Waiting 3ca1d3f72a67: Waiting ff2b5d1f1edd: Waiting f633748cd255: Waiting 1b2efd2f86a8: Waiting 0c23282fef8c: Waiting c6633336ebec: Download complete 8efa1082b536: Verifying Checksum 8efa1082b536: Download complete 1b2efd2f86a8: Verifying Checksum 1b2efd2f86a8: Download complete 1181763a1fee: Verifying Checksum 1181763a1fee: Download complete 0c23282fef8c: Verifying Checksum 0c23282fef8c: Download complete 4943cf20f63b: Verifying Checksum 4943cf20f63b: Download complete ff2b5d1f1edd: Verifying Checksum ff2b5d1f1edd: Download complete f633748cd255: Verifying Checksum f633748cd255: Download complete 3ca1d3f72a67: Verifying Checksum 3ca1d3f72a67: Download complete c8922b1f556c: Verifying Checksum c8922b1f556c: Download complete c6633336ebec: Pull complete c8922b1f556c: Pull complete 8efa1082b536: Pull complete 1b2efd2f86a8: Pull complete 0c23282fef8c: Pull complete 1181763a1fee: Pull complete 4943cf20f63b: Pull complete 3ca1d3f72a67: Pull complete ff2b5d1f1edd: Pull complete f633748cd255: Pull complete Digest: sha256:6296f1719676e970438cac4d912542b35ac786c14a15df892507007c4ecbe490 + echo done with openshift/origin-logging-elasticsearch done with openshift/origin-logging-elasticsearch + for image in openshift/base-centos7 centos:centos7 openshift/origin-logging-elasticsearch openshift/origin-logging-fluentd openshift/origin-logging-curator openshift/origin-logging-kibana + echo pulling image openshift/origin-logging-fluentd ... pulling image openshift/origin-logging-fluentd ... + vagrant ssh -c 'docker pull openshift/origin-logging-fluentd' -- -n Using default tag: latest Trying to pull repository docker.io/openshift/origin-logging-fluentd ... latest: Pulling from docker.io/openshift/origin-logging-fluentd d5e46245fe40: Already exists d7c28cc24dc2: Pulling fs layer 9175f9d06c1f: Pulling fs layer 91e5bb34ef30: Pulling fs layer 0c100caa1a42: Pulling fs layer 19d549a53f32: Pulling fs layer 0c100caa1a42: Waiting 19d549a53f32: Waiting 91e5bb34ef30: Verifying Checksum 91e5bb34ef30: Download complete 0c100caa1a42: Verifying Checksum 0c100caa1a42: Download complete 19d549a53f32: Download complete d7c28cc24dc2: Verifying Checksum d7c28cc24dc2: Download complete d7c28cc24dc2: Pull complete 9175f9d06c1f: Verifying Checksum 9175f9d06c1f: Download complete 9175f9d06c1f: Pull complete 91e5bb34ef30: Pull complete 0c100caa1a42: Pull complete 19d549a53f32: Pull complete Digest: sha256:cae7c21c9f111d4f5b481c14a65c597c67e715a8ffe3aee4c483100ee77296d7 + echo done with openshift/origin-logging-fluentd done with openshift/origin-logging-fluentd + for image in openshift/base-centos7 centos:centos7 openshift/origin-logging-elasticsearch openshift/origin-logging-fluentd openshift/origin-logging-curator openshift/origin-logging-kibana + echo pulling image openshift/origin-logging-curator ... pulling image openshift/origin-logging-curator ... + vagrant ssh -c 'docker pull openshift/origin-logging-curator' -- -n Using default tag: latest Trying to pull repository docker.io/openshift/origin-logging-curator ... 
latest: Pulling from docker.io/openshift/origin-logging-curator d5e46245fe40: Already exists 73c202020d66: Pulling fs layer b330097dd4ed: Pulling fs layer 73c202020d66: Download complete b330097dd4ed: Download complete 73c202020d66: Pull complete b330097dd4ed: Pull complete Digest: sha256:daded10ff4e08dfb6659c964e305f16679596312da558af095835202cf66f703 + echo done with openshift/origin-logging-curator done with openshift/origin-logging-curator + for image in openshift/base-centos7 centos:centos7 openshift/origin-logging-elasticsearch openshift/origin-logging-fluentd openshift/origin-logging-curator openshift/origin-logging-kibana + echo pulling image openshift/origin-logging-kibana ... pulling image openshift/origin-logging-kibana ... + vagrant ssh -c 'docker pull openshift/origin-logging-kibana' -- -n Using default tag: latest Trying to pull repository docker.io/openshift/origin-logging-kibana ... latest: Pulling from docker.io/openshift/origin-logging-kibana 45a2e645736c: Already exists 734fb161cf89: Already exists 78efc9e155c4: Already exists 8a3400b7e31a: Already exists 4a36f2160feb: Pulling fs layer c746963bb35a: Pulling fs layer 1a70ab5cdf21: Pulling fs layer 41752a3d77bd: Pulling fs layer a8fda808a3c1: Pulling fs layer 3b950177e658: Pulling fs layer 41752a3d77bd: Waiting a8fda808a3c1: Waiting 3b950177e658: Waiting 4a36f2160feb: Verifying Checksum 4a36f2160feb: Download complete 1a70ab5cdf21: Verifying Checksum 1a70ab5cdf21: Download complete 41752a3d77bd: Verifying Checksum 41752a3d77bd: Download complete a8fda808a3c1: Verifying Checksum a8fda808a3c1: Download complete 4a36f2160feb: Pull complete c746963bb35a: Verifying Checksum c746963bb35a: Download complete 3b950177e658: Verifying Checksum 3b950177e658: Download complete c746963bb35a: Pull complete 1a70ab5cdf21: Pull complete 41752a3d77bd: Pull complete a8fda808a3c1: Pull complete 3b950177e658: Pull complete Digest: sha256:950568237cc7d0ff14ea9fe22c3967d888996db70c66181421ad68caeb5ba75f + echo done with openshift/origin-logging-kibana done with openshift/origin-logging-kibana + vagrant test-origin-aggregated-logging -d --env GIT_URL=https://github.com/openshift/origin-aggregated-logging --env GIT_BRANCH=master --env O_A_L_DIR=/data/src/github.com/openshift/origin-aggregated-logging --env OS_ROOT=/data/src/github.com/openshift/origin --env ENABLE_OPS_CLUSTER=true --env USE_LOCAL_SOURCE=true --env TEST_PERF=false --env VERBOSE=1 --env OS_ANSIBLE_REPO=https://github.com/openshift/openshift-ansible --env OS_ANSIBLE_BRANCH=master *************************************************** Running GIT_URL=https://github.com/openshift/origin-aggregated-logging GIT_BRANCH=master O_A_L_DIR=/data/src/github.com/openshift/origin-aggregated-logging OS_ROOT=/data/src/github.com/openshift/origin ENABLE_OPS_CLUSTER=true USE_LOCAL_SOURCE=true TEST_PERF=false VERBOSE=1 OS_ANSIBLE_REPO=https://github.com/openshift/openshift-ansible OS_ANSIBLE_BRANCH=master ./logging.sh... 
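(The image pulls earlier in the log were all issued by one loop; condensed here as a sketch, with the image list copied from the trace and the `vagrant ssh ... -- -n` invocation matching the `+` lines above.)

#!/bin/bash
# Pre-pull the logging images on the Vagrant instance so the test run itself
# does not pay the registry download cost. Image list copied from the trace.
set -e

images=(
  openshift/base-centos7
  centos:centos7
  openshift/origin-logging-elasticsearch
  openshift/origin-logging-fluentd
  openshift/origin-logging-curator
  openshift/origin-logging-kibana
)

for image in "${images[@]}"; do
  echo "pulling image ${image} ..."
  vagrant ssh -c "docker pull ${image}" -- -n
  echo "done with ${image}"
done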
/data/src/github.com/openshift/origin /data/src/github.com/openshift/origin-aggregated-logging/hack/testing /data/src/github.com/openshift/origin-aggregated-logging/hack/testing /data/src/github.com/openshift/origin-aggregated-logging /data/src/github.com/openshift/origin-aggregated-logging/hack/testing /data/src/github.com/openshift/origin-aggregated-logging/hack/testing Loaded plugins: amazon-id, rhui-lb, search-disabled-repos Metadata Cache Created Loaded plugins: amazon-id, rhui-lb, search-disabled-repos Resolving Dependencies --> Running transaction check ---> Package ansible.noarch 0:2.3.0.0-3.el7 will be installed --> Processing Dependency: sshpass for package: ansible-2.3.0.0-3.el7.noarch --> Processing Dependency: python-paramiko for package: ansible-2.3.0.0-3.el7.noarch --> Processing Dependency: python-keyczar for package: ansible-2.3.0.0-3.el7.noarch --> Processing Dependency: python-httplib2 for package: ansible-2.3.0.0-3.el7.noarch --> Processing Dependency: python-crypto for package: ansible-2.3.0.0-3.el7.noarch ---> Package python2-pip.noarch 0:8.1.2-5.el7 will be installed ---> Package python2-ruamel-yaml.x86_64 0:0.12.14-9.el7 will be installed --> Processing Dependency: python2-typing for package: python2-ruamel-yaml-0.12.14-9.el7.x86_64 --> Processing Dependency: python2-ruamel-ordereddict for package: python2-ruamel-yaml-0.12.14-9.el7.x86_64 --> Running transaction check ---> Package python-httplib2.noarch 0:0.9.1-2.el7aos will be installed ---> Package python-keyczar.noarch 0:0.71c-2.el7aos will be installed --> Processing Dependency: python-pyasn1 for package: python-keyczar-0.71c-2.el7aos.noarch ---> Package python-paramiko.noarch 0:2.1.1-1.el7 will be installed --> Processing Dependency: python-cryptography for package: python-paramiko-2.1.1-1.el7.noarch ---> Package python2-crypto.x86_64 0:2.6.1-13.el7 will be installed --> Processing Dependency: libtomcrypt.so.0()(64bit) for package: python2-crypto-2.6.1-13.el7.x86_64 ---> Package python2-ruamel-ordereddict.x86_64 0:0.4.9-3.el7 will be installed ---> Package python2-typing.noarch 0:3.5.2.2-3.el7 will be installed ---> Package sshpass.x86_64 0:1.06-1.el7 will be installed --> Running transaction check ---> Package libtomcrypt.x86_64 0:1.17-23.el7 will be installed --> Processing Dependency: libtommath >= 0.42.0 for package: libtomcrypt-1.17-23.el7.x86_64 --> Processing Dependency: libtommath.so.0()(64bit) for package: libtomcrypt-1.17-23.el7.x86_64 ---> Package python2-cryptography.x86_64 0:1.3.1-3.el7 will be installed --> Processing Dependency: python-idna >= 2.0 for package: python2-cryptography-1.3.1-3.el7.x86_64 --> Processing Dependency: python-cffi >= 1.4.1 for package: python2-cryptography-1.3.1-3.el7.x86_64 --> Processing Dependency: python-ipaddress for package: python2-cryptography-1.3.1-3.el7.x86_64 --> Processing Dependency: python-enum34 for package: python2-cryptography-1.3.1-3.el7.x86_64 ---> Package python2-pyasn1.noarch 0:0.1.9-7.el7 will be installed --> Running transaction check ---> Package libtommath.x86_64 0:0.42.0-4.el7 will be installed ---> Package python-cffi.x86_64 0:1.6.0-5.el7 will be installed --> Processing Dependency: python-pycparser for package: python-cffi-1.6.0-5.el7.x86_64 ---> Package python-enum34.noarch 0:1.0.4-1.el7 will be installed ---> Package python-idna.noarch 0:2.0-1.el7 will be installed ---> Package python-ipaddress.noarch 0:1.0.16-2.el7 will be installed --> Running transaction check ---> Package python-pycparser.noarch 0:2.14-1.el7 will be installed --> 
Processing Dependency: python-ply for package: python-pycparser-2.14-1.el7.noarch --> Running transaction check ---> Package python-ply.noarch 0:3.4-10.el7 will be installed --> Finished Dependency Resolution Dependencies Resolved ================================================================================ Package Arch Version Repository Size ================================================================================ Installing: ansible noarch 2.3.0.0-3.el7 epel 5.7 M python2-pip noarch 8.1.2-5.el7 epel 1.7 M python2-ruamel-yaml x86_64 0.12.14-9.el7 li 245 k Installing for dependencies: libtomcrypt x86_64 1.17-23.el7 epel 224 k libtommath x86_64 0.42.0-4.el7 epel 35 k python-cffi x86_64 1.6.0-5.el7 oso-rhui-rhel-server-releases 218 k python-enum34 noarch 1.0.4-1.el7 oso-rhui-rhel-server-releases 52 k python-httplib2 noarch 0.9.1-2.el7aos li 115 k python-idna noarch 2.0-1.el7 oso-rhui-rhel-server-releases 92 k python-ipaddress noarch 1.0.16-2.el7 oso-rhui-rhel-server-releases 34 k python-keyczar noarch 0.71c-2.el7aos rhel-7-server-ose-3.1-rpms 217 k python-paramiko noarch 2.1.1-1.el7 rhel-7-server-ose-3.4-rpms 266 k python-ply noarch 3.4-10.el7 oso-rhui-rhel-server-releases 123 k python-pycparser noarch 2.14-1.el7 oso-rhui-rhel-server-releases 105 k python2-crypto x86_64 2.6.1-13.el7 epel 476 k python2-cryptography x86_64 1.3.1-3.el7 oso-rhui-rhel-server-releases 471 k python2-pyasn1 noarch 0.1.9-7.el7 oso-rhui-rhel-server-releases 100 k python2-ruamel-ordereddict x86_64 0.4.9-3.el7 li 38 k python2-typing noarch 3.5.2.2-3.el7 epel 39 k sshpass x86_64 1.06-1.el7 epel 21 k Transaction Summary ================================================================================ Install 3 Packages (+17 Dependent packages) Total download size: 10 M Installed size: 47 M Downloading packages: -------------------------------------------------------------------------------- Total 5.1 MB/s | 10 MB 00:01 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : python2-pyasn1-0.1.9-7.el7.noarch 1/20 Installing : sshpass-1.06-1.el7.x86_64 2/20 Installing : libtommath-0.42.0-4.el7.x86_64 3/20 Installing : libtomcrypt-1.17-23.el7.x86_64 4/20 Installing : python2-crypto-2.6.1-13.el7.x86_64 5/20 Installing : python-keyczar-0.71c-2.el7aos.noarch 6/20 Installing : python-enum34-1.0.4-1.el7.noarch 7/20 Installing : python-ply-3.4-10.el7.noarch 8/20 Installing : python-pycparser-2.14-1.el7.noarch 9/20 Installing : python-cffi-1.6.0-5.el7.x86_64 10/20 Installing : python-httplib2-0.9.1-2.el7aos.noarch 11/20 Installing : python-idna-2.0-1.el7.noarch 12/20 Installing : python2-ruamel-ordereddict-0.4.9-3.el7.x86_64 13/20 Installing : python2-typing-3.5.2.2-3.el7.noarch 14/20 Installing : python-ipaddress-1.0.16-2.el7.noarch 15/20 Installing : python2-cryptography-1.3.1-3.el7.x86_64 16/20 Installing : python-paramiko-2.1.1-1.el7.noarch 17/20 Installing : ansible-2.3.0.0-3.el7.noarch 18/20 Installing : python2-ruamel-yaml-0.12.14-9.el7.x86_64 19/20 Installing : python2-pip-8.1.2-5.el7.noarch 20/20 Verifying : python-pycparser-2.14-1.el7.noarch 1/20 Verifying : python-ipaddress-1.0.16-2.el7.noarch 2/20 Verifying : ansible-2.3.0.0-3.el7.noarch 3/20 Verifying : python2-typing-3.5.2.2-3.el7.noarch 4/20 Verifying : python2-pip-8.1.2-5.el7.noarch 5/20 Verifying : python2-pyasn1-0.1.9-7.el7.noarch 6/20 Verifying : libtomcrypt-1.17-23.el7.x86_64 7/20 Verifying : python-cffi-1.6.0-5.el7.x86_64 8/20 Verifying : python2-ruamel-yaml-0.12.14-9.el7.x86_64 9/20 Verifying : 
python2-ruamel-ordereddict-0.4.9-3.el7.x86_64 10/20 Verifying : python-idna-2.0-1.el7.noarch 11/20 Verifying : python-httplib2-0.9.1-2.el7aos.noarch 12/20 Verifying : python-ply-3.4-10.el7.noarch 13/20 Verifying : python-enum34-1.0.4-1.el7.noarch 14/20 Verifying : python-keyczar-0.71c-2.el7aos.noarch 15/20 Verifying : libtommath-0.42.0-4.el7.x86_64 16/20 Verifying : sshpass-1.06-1.el7.x86_64 17/20 Verifying : python2-cryptography-1.3.1-3.el7.x86_64 18/20 Verifying : python-paramiko-2.1.1-1.el7.noarch 19/20 Verifying : python2-crypto-2.6.1-13.el7.x86_64 20/20 Installed: ansible.noarch 0:2.3.0.0-3.el7 python2-pip.noarch 0:8.1.2-5.el7 python2-ruamel-yaml.x86_64 0:0.12.14-9.el7 Dependency Installed: libtomcrypt.x86_64 0:1.17-23.el7 libtommath.x86_64 0:0.42.0-4.el7 python-cffi.x86_64 0:1.6.0-5.el7 python-enum34.noarch 0:1.0.4-1.el7 python-httplib2.noarch 0:0.9.1-2.el7aos python-idna.noarch 0:2.0-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-keyczar.noarch 0:0.71c-2.el7aos python-paramiko.noarch 0:2.1.1-1.el7 python-ply.noarch 0:3.4-10.el7 python-pycparser.noarch 0:2.14-1.el7 python2-crypto.x86_64 0:2.6.1-13.el7 python2-cryptography.x86_64 0:1.3.1-3.el7 python2-pyasn1.noarch 0:0.1.9-7.el7 python2-ruamel-ordereddict.x86_64 0:0.4.9-3.el7 python2-typing.noarch 0:3.5.2.2-3.el7 sshpass.x86_64 0:1.06-1.el7 Complete! Cloning into '/tmp/tmp.OLv9M4asi1/openhift-ansible'... Copying oc from path to /usr/local/bin for use by openshift-ansible Copying oc from path to /usr/bin for use by openshift-ansible Copying oadm from path to /usr/local/bin for use by openshift-ansible Copying oadm from path to /usr/bin for use by openshift-ansible [INFO] Starting logging tests at Fri Jun 9 10:21:34 EDT 2017 Generated new key pair as /tmp/openshift/origin-aggregated-logging/openshift.local.config/master/serviceaccounts.public.key and /tmp/openshift/origin-aggregated-logging/openshift.local.config/master/serviceaccounts.private.key Generating node credentials ... Created node config for 172.18.1.226 in /tmp/openshift/origin-aggregated-logging/openshift.local.config/node-172.18.1.226 Wrote master config to: /tmp/openshift/origin-aggregated-logging/openshift.local.config/master/master-config.yaml Running hack/lib/start.sh:352: executing 'oc get --raw /healthz --config='/tmp/openshift/origin-aggregated-logging/openshift.local.config/master/admin.kubeconfig'' expecting any result and text 'ok'; re-trying every 0.25s until completion or 80.000s... SUCCESS after 6.037s: hack/lib/start.sh:352: executing 'oc get --raw /healthz --config='/tmp/openshift/origin-aggregated-logging/openshift.local.config/master/admin.kubeconfig'' expecting any result and text 'ok'; re-trying every 0.25s until completion or 80.000s Standard output from the command: ok Standard error from the command: The connection to the server 172.18.1.226:8443 was refused - did you specify the right host or port? ... repeated 5 times Error from server (Forbidden): User "system:admin" cannot "get" on "/healthz" ... repeated 7 times Running hack/lib/start.sh:353: executing 'oc get --raw https://172.18.1.226:10250/healthz --config='/tmp/openshift/origin-aggregated-logging/openshift.local.config/master/admin.kubeconfig'' expecting any result and text 'ok'; re-trying every 0.5s until completion or 120.000s... 
SUCCESS after 0.205s: hack/lib/start.sh:353: executing 'oc get --raw https://172.18.1.226:10250/healthz --config='/tmp/openshift/origin-aggregated-logging/openshift.local.config/master/admin.kubeconfig'' expecting any result and text 'ok'; re-trying every 0.5s until completion or 120.000s Standard output from the command: ok There was no error output from the command. Running hack/lib/start.sh:354: executing 'oc get --raw /healthz/ready --config='/tmp/openshift/origin-aggregated-logging/openshift.local.config/master/admin.kubeconfig'' expecting any result and text 'ok'; re-trying every 0.25s until completion or 80.000s... SUCCESS after 0.245s: hack/lib/start.sh:354: executing 'oc get --raw /healthz/ready --config='/tmp/openshift/origin-aggregated-logging/openshift.local.config/master/admin.kubeconfig'' expecting any result and text 'ok'; re-trying every 0.25s until completion or 80.000s Standard output from the command: ok There was no error output from the command. Running hack/lib/start.sh:355: executing 'oc get service kubernetes --namespace default --config='/tmp/openshift/origin-aggregated-logging/openshift.local.config/master/admin.kubeconfig'' expecting success; re-trying every 0.25s until completion or 160.000s... SUCCESS after 0.456s: hack/lib/start.sh:355: executing 'oc get service kubernetes --namespace default --config='/tmp/openshift/origin-aggregated-logging/openshift.local.config/master/admin.kubeconfig'' expecting success; re-trying every 0.25s until completion or 160.000s Standard output from the command: NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes 172.30.0.1 443/TCP,53/UDP,53/TCP 4s There was no error output from the command. Running hack/lib/start.sh:356: executing 'oc get --raw /api/v1/nodes/172.18.1.226 --config='/tmp/openshift/origin-aggregated-logging/openshift.local.config/master/admin.kubeconfig'' expecting success; re-trying every 0.25s until completion or 80.000s... 
SUCCESS after 0.808s: hack/lib/start.sh:356: executing 'oc get --raw /api/v1/nodes/172.18.1.226 --config='/tmp/openshift/origin-aggregated-logging/openshift.local.config/master/admin.kubeconfig'' expecting success; re-trying every 0.25s until completion or 80.000s Standard output from the command: {"kind":"Node","apiVersion":"v1","metadata":{"name":"172.18.1.226","selfLink":"/api/v1/nodes/172.18.1.226","uid":"fdb15570-4d1e-11e7-94cc-0e3d36056ef8","resourceVersion":"290","creationTimestamp":"2017-06-09T14:21:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/hostname":"172.18.1.226"},"annotations":{"volumes.kubernetes.io/controller-managed-attach-detach":"true"}},"spec":{"externalID":"172.18.1.226","providerID":"aws:////i-0736f322f058e6409"},"status":{"capacity":{"cpu":"4","memory":"7231688Ki","pods":"40"},"allocatable":{"cpu":"4","memory":"7129288Ki","pods":"40"},"conditions":[{"type":"OutOfDisk","status":"False","lastHeartbeatTime":"2017-06-09T14:21:55Z","lastTransitionTime":"2017-06-09T14:21:55Z","reason":"KubeletHasSufficientDisk","message":"kubelet has sufficient disk space available"},{"type":"MemoryPressure","status":"False","lastHeartbeatTime":"2017-06-09T14:21:55Z","lastTransitionTime":"2017-06-09T14:21:55Z","reason":"KubeletHasSufficientMemory","message":"kubelet has sufficient memory available"},{"type":"DiskPressure","status":"False","lastHeartbeatTime":"2017-06-09T14:21:55Z","lastTransitionTime":"2017-06-09T14:21:55Z","reason":"KubeletHasNoDiskPressure","message":"kubelet has no disk pressure"},{"type":"Ready","status":"True","lastHeartbeatTime":"2017-06-09T14:21:55Z","lastTransitionTime":"2017-06-09T14:21:55Z","reason":"KubeletReady","message":"kubelet is posting ready status"}],"addresses":[{"type":"LegacyHostIP","address":"172.18.1.226"},{"type":"InternalIP","address":"172.18.1.226"},{"type":"Hostname","address":"172.18.1.226"}],"daemonEndpoints":{"kubeletEndpoint":{"Port":10250}},"nodeInfo":{"machineID":"f9370ed252a14f73b014c1301a9b6d1b","systemUUID":"EC2FED92-34F3-4070-9783-EF639907D332","bootID":"d12bf242-faf5-4acb-9bcb-62c23709ec03","kernelVersion":"3.10.0-327.22.2.el7.x86_64","osImage":"Red Hat Enterprise Linux Server 7.3 
(Maipo)","containerRuntimeVersion":"docker://1.12.6","kubeletVersion":"v1.6.1+5115d708d7","kubeProxyVersion":"v1.6.1+5115d708d7","operatingSystem":"linux","architecture":"amd64"},"images":[{"names":["openshift/origin-federation:6acabdc","openshift/origin-federation:latest"],"sizeBytes":1205885664},{"names":["openshift/origin-docker-registry:6acabdc","openshift/origin-docker-registry:latest"],"sizeBytes":1100164272},{"names":["openshift/origin-gitserver:6acabdc","openshift/origin-gitserver:latest"],"sizeBytes":1086520226},{"names":["openshift/openvswitch:6acabdc","openshift/openvswitch:latest"],"sizeBytes":1053403667},{"names":["openshift/node:6acabdc","openshift/node:latest"],"sizeBytes":1051721928},{"names":["openshift/origin-keepalived-ipfailover:6acabdc","openshift/origin-keepalived-ipfailover:latest"],"sizeBytes":1028529711},{"names":["openshift/origin-haproxy-router:6acabdc","openshift/origin-haproxy-router:latest"],"sizeBytes":1022758742},{"names":["openshift/origin:6acabdc","openshift/origin:latest"],"sizeBytes":1001728427},{"names":["openshift/origin-f5-router:6acabdc","openshift/origin-f5-router:latest"],"sizeBytes":1001728427},{"names":["openshift/origin-sti-builder:6acabdc","openshift/origin-sti-builder:latest"],"sizeBytes":1001728427},{"names":["openshift/origin-recycler:6acabdc","openshift/origin-recycler:latest"],"sizeBytes":1001728427},{"names":["openshift/origin-deployer:6acabdc","openshift/origin-deployer:latest"],"sizeBytes":1001728427},{"names":["openshift/origin-docker-builder:6acabdc","openshift/origin-docker-builder:latest"],"sizeBytes":1001728427},{"names":["openshift/origin-cluster-capacity:6acabdc","openshift/origin-cluster-capacity:latest"],"sizeBytes":962455026},{"names":["rhel7.1:latest"],"sizeBytes":765301508},{"names":["openshift/dind-master:latest"],"sizeBytes":731456758},{"names":["openshift/dind-node:latest"],"sizeBytes":731453034},{"names":["\u003cnone\u003e@\u003cnone\u003e","\u003cnone\u003e:\u003cnone\u003e"],"sizeBytes":709532011},{"names":["docker.io/openshift/origin-logging-kibana@sha256:950568237cc7d0ff14ea9fe22c3967d888996db70c66181421ad68caeb5ba75f","docker.io/openshift/origin-logging-kibana:latest"],"sizeBytes":682851513},{"names":["openshift/dind:latest"],"sizeBytes":640650210},{"names":["docker.io/openshift/origin-logging-elasticsearch@sha256:6296f1719676e970438cac4d912542b35ac786c14a15df892507007c4ecbe490","docker.io/openshift/origin-logging-elasticsearch:latest"],"sizeBytes":425567196},{"names":["docker.io/openshift/base-centos7@sha256:aea292a3bddba020cde0ee83e6a45807931eb607c164ec6a3674f67039d8cd7c","docker.io/openshift/base-centos7:latest"],"sizeBytes":383049978},{"names":["rhel7.2:latest"],"sizeBytes":377493597},{"names":["openshift/origin-egress-router:6acabdc","openshift/origin-egress-router:latest"],"sizeBytes":364745713},{"names":["openshift/origin-base:latest"],"sizeBytes":363070172},{"names":["\u003cnone\u003e@\u003cnone\u003e","\u003cnone\u003e:\u003cnone\u003e"],"sizeBytes":363024702},{"names":["docker.io/openshift/origin-logging-fluentd@sha256:cae7c21c9f111d4f5b481c14a65c597c67e715a8ffe3aee4c483100ee77296d7","docker.io/openshift/origin-logging-fluentd:latest"],"sizeBytes":359223728},{"names":["docker.io/fedora@sha256:69281ddd7b2600e5f2b17f1e12d7fba25207f459204fb2d15884f8432c479136","docker.io/fedora:25"],"sizeBytes":230864375},{"names":["docker.io/openshift/origin-logging-curator@sha256:daded10ff4e08dfb6659c964e305f16679596312da558af095835202cf66f703","docker.io/openshift/origin-logging-curator:latest"],"sizeBytes":224977669},{"nam
es":["rhel7.3:latest","rhel7:latest"],"sizeBytes":219121266},{"names":["openshift/origin-pod:6acabdc","openshift/origin-pod:latest"],"sizeBytes":213199843},{"names":["registry.access.redhat.com/rhel7.2@sha256:98e6ca5d226c26e31a95cd67716afe22833c943e1926a21daf1a030906a02249","registry.access.redhat.com/rhel7.2:latest"],"sizeBytes":201376319},{"names":["registry.access.redhat.com/rhel7.3@sha256:1e232401d8e0ba53b36b757b4712fbcbd1dab9c21db039c45a84871a74e89e68","registry.access.redhat.com/rhel7.3:latest"],"sizeBytes":192693772},{"names":["docker.io/centos@sha256:bba1de7c9d900a898e3cadbae040dfe8a633c06bc104a0df76ae24483e03c077"],"sizeBytes":192548999},{"names":["openshift/origin-source:latest"],"sizeBytes":192548894},{"names":["docker.io/centos@sha256:aebf12af704307dfa0079b3babdca8d7e8ff6564696882bcb5d11f1d461f9ee9","docker.io/centos:7","docker.io/centos:centos7"],"sizeBytes":192548537},{"names":["registry.access.redhat.com/rhel7.1@sha256:1bc5a4c43bbb29a5a96a61896ff696933be3502e2f5fdc4cde02d9e101731fdd","registry.access.redhat.com/rhel7.1:latest"],"sizeBytes":158229901},{"names":["openshift/hello-openshift:6acabdc","openshift/hello-openshift:latest"],"sizeBytes":5643318}]}} Standard error from the command: Error from server (NotFound): nodes "172.18.1.226" not found serviceaccount "registry" created clusterrolebinding "registry-registry-role" created deploymentconfig "docker-registry" created service "docker-registry" created --> Creating router router ... info: password for stats user admin has been set to DTMJhsbpT5 serviceaccount "router" created clusterrolebinding "router-router-role" created deploymentconfig "router" created service "router" created --> Success Running /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:162: executing 'oadm new-project logging --node-selector=''' expecting success... SUCCESS after 0.869s: /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:162: executing 'oadm new-project logging --node-selector=''' expecting success Standard output from the command: Created project logging There was no error output from the command. Running /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:163: executing 'oc project logging > /dev/null' expecting success... SUCCESS after 0.212s: /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:163: executing 'oc project logging > /dev/null' expecting success There was no output from the command. There was no error output from the command. 
apiVersion: v1 items: - apiVersion: v1 kind: ImageStream metadata: labels: build: logging-elasticsearch component: development logging-infra: development provider: openshift name: logging-elasticsearch spec: {} - apiVersion: v1 kind: ImageStream metadata: labels: build: logging-fluentd component: development logging-infra: development provider: openshift name: logging-fluentd spec: {} - apiVersion: v1 kind: ImageStream metadata: labels: build: logging-kibana component: development logging-infra: development provider: openshift name: logging-kibana spec: {} - apiVersion: v1 kind: ImageStream metadata: labels: build: logging-curator component: development logging-infra: development provider: openshift name: logging-curator spec: {} - apiVersion: v1 kind: ImageStream metadata: labels: build: logging-auth-proxy component: development logging-infra: development provider: openshift name: logging-auth-proxy spec: {} - apiVersion: v1 kind: ImageStream metadata: labels: build: logging-deployment component: development logging-infra: development provider: openshift name: origin spec: dockerImageRepository: openshift/origin tags: - from: kind: DockerImage name: openshift/origin:v1.5.0-alpha.2 name: v1.5.0-alpha.2 - apiVersion: v1 kind: BuildConfig metadata: labels: app: logging-elasticsearch component: development logging-infra: development provider: openshift name: logging-elasticsearch spec: output: to: kind: ImageStreamTag name: logging-elasticsearch:latest resources: {} source: contextDir: elasticsearch git: ref: master uri: https://github.com/openshift/origin-aggregated-logging type: Git strategy: dockerStrategy: from: kind: DockerImage name: openshift/base-centos7 type: Docker - apiVersion: v1 kind: BuildConfig metadata: labels: build: logging-fluentd component: development logging-infra: development provider: openshift name: logging-fluentd spec: output: to: kind: ImageStreamTag name: logging-fluentd:latest resources: {} source: contextDir: fluentd git: ref: master uri: https://github.com/openshift/origin-aggregated-logging type: Git strategy: dockerStrategy: from: kind: DockerImage name: openshift/base-centos7 type: Docker - apiVersion: v1 kind: BuildConfig metadata: labels: build: logging-kibana component: development logging-infra: development provider: openshift name: logging-kibana spec: output: to: kind: ImageStreamTag name: logging-kibana:latest resources: {} source: contextDir: kibana git: ref: master uri: https://github.com/openshift/origin-aggregated-logging type: Git strategy: dockerStrategy: from: kind: DockerImage name: openshift/base-centos7 type: Docker - apiVersion: v1 kind: BuildConfig metadata: labels: build: logging-curator component: development logging-infra: development provider: openshift name: logging-curator spec: output: to: kind: ImageStreamTag name: logging-curator:latest resources: {} source: contextDir: curator git: ref: master uri: https://github.com/openshift/origin-aggregated-logging type: Git strategy: dockerStrategy: from: kind: DockerImage name: openshift/base-centos7 type: Docker - apiVersion: v1 kind: BuildConfig metadata: labels: build: logging-auth-proxy component: development logging-infra: development provider: openshift name: logging-auth-proxy spec: output: to: kind: ImageStreamTag name: logging-auth-proxy:latest resources: {} source: contextDir: kibana-proxy git: ref: master uri: https://github.com/openshift/origin-aggregated-logging type: Git strategy: dockerStrategy: from: kind: DockerImage name: library/node:0.10.36 type: Docker kind: List 
metadata: {} Running hack/testing/build-images:31: executing 'oc process -o yaml -f /data/src/github.com/openshift/origin-aggregated-logging/hack/templates/dev-builds-wo-deployer.yaml -p LOGGING_FORK_URL=https://github.com/openshift/origin-aggregated-logging -p LOGGING_FORK_BRANCH=master | build_filter | oc create -f -' expecting success... SUCCESS after 0.338s: hack/testing/build-images:31: executing 'oc process -o yaml -f /data/src/github.com/openshift/origin-aggregated-logging/hack/templates/dev-builds-wo-deployer.yaml -p LOGGING_FORK_URL=https://github.com/openshift/origin-aggregated-logging -p LOGGING_FORK_BRANCH=master | build_filter | oc create -f -' expecting success Standard output from the command: imagestream "logging-elasticsearch" created imagestream "logging-fluentd" created imagestream "logging-kibana" created imagestream "logging-curator" created imagestream "logging-auth-proxy" created imagestream "origin" created buildconfig "logging-elasticsearch" created buildconfig "logging-fluentd" created buildconfig "logging-kibana" created buildconfig "logging-curator" created buildconfig "logging-auth-proxy" created There was no error output from the command. Running hack/testing/build-images:9: executing 'oc get imagestreamtag origin:latest' expecting success; re-trying every 0.2s until completion or 60.000s... SUCCESS after 1.029s: hack/testing/build-images:9: executing 'oc get imagestreamtag origin:latest' expecting success; re-trying every 0.2s until completion or 60.000s Standard output from the command: NAME DOCKER REF UPDATED IMAGENAME origin:latest openshift/origin@sha256:c32d9de7ecabaee3cb2e9c253edfeb72c546a09a22906c80195258cff83ea77f 1 second ago sha256:c32d9de7ecabaee3cb2e9c253edfeb72c546a09a22906c80195258cff83ea77f Standard error from the command: Error from server (NotFound): imagestreamtags.image.openshift.io "origin:latest" not found ... repeated 2 times Uploading directory "/data/src/github.com/openshift/origin-aggregated-logging" as binary input for the build ... build "logging-auth-proxy-1" started Uploading directory "/data/src/github.com/openshift/origin-aggregated-logging" as binary input for the build ... build "logging-curator-1" started Uploading directory "/data/src/github.com/openshift/origin-aggregated-logging" as binary input for the build ... build "logging-elasticsearch-1" started Uploading directory "/data/src/github.com/openshift/origin-aggregated-logging" as binary input for the build ... build "logging-fluentd-1" started Uploading directory "/data/src/github.com/openshift/origin-aggregated-logging" as binary input for the build ... build "logging-kibana-1" started Running hack/testing/build-images:33: executing 'wait_for_builds_complete' expecting success... SUCCESS after 0.234s: hack/testing/build-images:33: executing 'wait_for_builds_complete' expecting success Standard output from the command: Builds are complete There was no error output from the command. 
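(The "Uploading directory ... as binary input" messages above are consistent with `oc start-build --from-dir`, which is how the dev images are built from the local checkout. A rough sketch of the same flow follows, assuming the template path and build names shown in the log; the real `build_filter` and `wait_for_builds_complete` helpers live under hack/testing and are replaced here by a plain pipeline and a naive polling loop.)

#!/bin/bash
# Sketch: instantiate the dev build configs, start each build from the local
# source tree, and poll until no build is still New/Pending/Running.
# The polling loop is an assumption standing in for wait_for_builds_complete.
set -e

OAL_DIR=/data/src/github.com/openshift/origin-aggregated-logging

oc process -o yaml \
    -f "$OAL_DIR/hack/templates/dev-builds-wo-deployer.yaml" \
    -p LOGGING_FORK_URL=https://github.com/openshift/origin-aggregated-logging \
    -p LOGGING_FORK_BRANCH=master \
  | oc create -f -

for bc in logging-auth-proxy logging-curator logging-elasticsearch logging-fluentd logging-kibana; do
  oc start-build "$bc" --from-dir="$OAL_DIR"
done

while oc get builds --no-headers | grep -qE 'New|Pending|Running'; do
  sleep 5
done
echo "Builds are complete"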
/tmp/tmp.OLv9M4asi1/openhift-ansible /data/src/github.com/openshift/origin-aggregated-logging
### Created host inventory file ###
[oo_first_master]
openshift
[oo_first_master:vars]
ansible_become=true
ansible_connection=local
containerized=true
docker_protect_installed_version=true
openshift_deployment_type=origin
deployment_type=origin
required_packages=[]
openshift_hosted_logging_hostname=kibana.127.0.0.1.xip.io
openshift_master_logging_public_url=https://kibana.127.0.0.1.xip.io
openshift_logging_master_public_url=https://172.18.1.226:8443
openshift_logging_image_prefix=172.30.106.159:5000/logging/
openshift_logging_use_ops=true
openshift_logging_fluentd_journal_read_from_head=False
openshift_logging_es_log_appenders=['console']
openshift_logging_use_mux=false
openshift_logging_mux_allow_external=false
openshift_logging_use_mux_client=false
###################################
Running hack/testing/init-log-stack:58: executing 'oc login -u system:admin' expecting success...
SUCCESS after 0.212s: hack/testing/init-log-stack:58: executing 'oc login -u system:admin' expecting success
Standard output from the command:
Logged into "https://172.18.1.226:8443" as "system:admin" using existing credentials.
You have access to the following projects and can switch between them with 'oc project ':
    default
    kube-public
    kube-system
  * logging
    openshift
    openshift-infra
Using project "logging".
There was no error output from the command.
Using /tmp/tmp.OLv9M4asi1/openhift-ansible/ansible.cfg as config file
PLAYBOOK: openshift-logging.yml ************************************************
4 plays in /tmp/tmp.OLv9M4asi1/openhift-ansible/playbooks/byo/openshift-cluster/openshift-logging.yml
PLAY [Create initial host groups for localhost] ********************************
META: ran handlers
TASK [include_vars] ************************************************************
task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/playbooks/byo/openshift-cluster/initialize_groups.yml:10
ok: [localhost] => { "ansible_facts": { "g_all_hosts": "{{ g_master_hosts | union(g_node_hosts) | union(g_etcd_hosts) | union(g_lb_hosts) | union(g_nfs_hosts) | union(g_new_node_hosts)| union(g_new_master_hosts) | default([]) }}", "g_etcd_hosts": "{{ groups.etcd | default([]) }}", "g_glusterfs_hosts": "{{ groups.glusterfs | default([]) }}", "g_glusterfs_registry_hosts": "{{ groups.glusterfs_registry | default(g_glusterfs_hosts) }}", "g_lb_hosts": "{{ groups.lb | default([]) }}", "g_master_hosts": "{{ groups.masters | default([]) }}", "g_new_master_hosts": "{{ groups.new_masters | default([]) }}", "g_new_node_hosts": "{{ groups.new_nodes | default([]) }}", "g_nfs_hosts": "{{ groups.nfs | default([]) }}", "g_node_hosts": "{{ groups.nodes | default([]) }}" }, "changed": false }
META: ran handlers
META: ran handlers
PLAY [Populate config host groups] *********************************************
META: ran handlers
TASK [Evaluate groups - g_etcd_hosts required] *********************************
task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:8
skipping: [localhost] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true }
TASK [Evaluate groups - g_master_hosts or g_new_master_hosts required] *********
task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:13
skipping: [localhost] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true }
TASK [Evaluate groups - g_node_hosts or
g_new_node_hosts required] ************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:18 skipping: [localhost] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [Evaluate groups - g_lb_hosts required] *********************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:23 skipping: [localhost] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [Evaluate groups - g_nfs_hosts required] ********************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:28 skipping: [localhost] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [Evaluate groups - g_nfs_hosts is single host] **************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:33 skipping: [localhost] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [Evaluate groups - g_glusterfs_hosts required] **************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:38 skipping: [localhost] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [Evaluate oo_all_hosts] *************************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:43 TASK [Evaluate oo_masters] ***************************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:52 TASK [Evaluate oo_first_master] ************************************************ task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:61 skipping: [localhost] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [Evaluate oo_masters_to_config] ******************************************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:70 TASK [Evaluate oo_etcd_to_config] ********************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:79 TASK [Evaluate oo_first_etcd] ************************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:88 skipping: [localhost] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [Evaluate oo_etcd_hosts_to_upgrade] *************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:100 TASK [Evaluate oo_etcd_hosts_to_backup] **************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:107 creating host via 'add_host': hostname=openshift ok: [localhost] => (item=openshift) => { "add_host": { "groups": [ "oo_etcd_hosts_to_backup" ], "host_name": "openshift", "host_vars": {} }, "changed": false, "item": "openshift" } TASK [Evaluate oo_nodes_to_config] ********************************************* task path: 
/tmp/tmp.OLv9M4asi1/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:114 TASK [Add master to oo_nodes_to_config] **************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:124 TASK [Evaluate oo_lb_to_config] ************************************************ task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:134 TASK [Evaluate oo_nfs_to_config] *********************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:143 TASK [Evaluate oo_glusterfs_to_config] ***************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/playbooks/common/openshift-cluster/evaluate_groups.yml:152 META: ran handlers META: ran handlers PLAY [OpenShift Aggregated Logging] ******************************************** TASK [Gathering Facts] ********************************************************* ok: [openshift] META: ran handlers TASK [openshift_sanitize_inventory : Abort when conflicting deployment type variables are set] *** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:2 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_sanitize_inventory : Standardize on latest variable names] ***** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:15 ok: [openshift] => { "ansible_facts": { "deployment_type": "origin", "openshift_deployment_type": "origin" }, "changed": false } TASK [openshift_sanitize_inventory : Abort when deployment type is invalid] **** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:23 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_sanitize_inventory : Normalize openshift_release] ************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:31 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_sanitize_inventory : Abort when openshift_release is invalid] *** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:41 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_facts : Detecting Operating System] **************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_facts/tasks/main.yml:2 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_facts : set_fact] ********************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_facts/tasks/main.yml:8 ok: [openshift] => { "ansible_facts": { "l_is_atomic": false }, "changed": false } TASK [openshift_facts : set_fact] ********************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_facts/tasks/main.yml:10 ok: [openshift] => { "ansible_facts": { "l_is_containerized": true, "l_is_etcd_system_container": false, "l_is_master_system_container": false, "l_is_node_system_container": false, "l_is_openvswitch_system_container": false }, "changed": false } TASK [openshift_facts : set_fact] 
********************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_facts/tasks/main.yml:16 ok: [openshift] => { "ansible_facts": { "l_any_system_container": false }, "changed": false } TASK [openshift_facts : set_fact] ********************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_facts/tasks/main.yml:18 ok: [openshift] => { "ansible_facts": { "l_etcd_runtime": "docker" }, "changed": false } TASK [openshift_facts : Validate python version] ******************************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_facts/tasks/main.yml:22 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_facts : Validate python version] ******************************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_facts/tasks/main.yml:29 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_facts : Determine Atomic Host Docker Version] ****************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_facts/tasks/main.yml:42 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_facts : assert] ************************************************ task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_facts/tasks/main.yml:46 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_facts : Load variables] **************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_facts/tasks/main.yml:53 ok: [openshift] => (item=/tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_facts/vars/default.yml) => { "ansible_facts": { "required_packages": [ "iproute", "python-dbus", "PyYAML", "yum-utils" ] }, "item": "/tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_facts/vars/default.yml" } TASK [openshift_facts : Ensure various deps are installed] ********************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_facts/tasks/main.yml:59 ok: [openshift] => (item=iproute) => { "changed": false, "item": "iproute", "rc": 0, "results": [ "iproute-3.10.0-74.el7.x86_64 providing iproute is already installed" ] } ok: [openshift] => (item=python-dbus) => { "changed": false, "item": "python-dbus", "rc": 0, "results": [ "dbus-python-1.1.1-9.el7.x86_64 providing python-dbus is already installed" ] } ok: [openshift] => (item=PyYAML) => { "changed": false, "item": "PyYAML", "rc": 0, "results": [ "PyYAML-3.10-11.el7.x86_64 providing PyYAML is already installed" ] } ok: [openshift] => (item=yum-utils) => { "changed": false, "item": "yum-utils", "rc": 0, "results": [ "yum-utils-1.1.31-40.el7.noarch providing yum-utils is already installed" ] } TASK [openshift_facts : Ensure various deps for running system containers are installed] *** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_facts/tasks/main.yml:64 skipping: [openshift] => (item=atomic) => { "changed": false, "item": "atomic", "skip_reason": "Conditional result was False", "skipped": true } skipping: [openshift] => (item=ostree) => { "changed": false, "item": "ostree", "skip_reason": "Conditional result was False", "skipped": true } skipping: [openshift] => (item=runc) => { "changed": false, "item": "runc", "skip_reason": "Conditional result was False", "skipped": true } TASK 
[openshift_facts : Gather Cluster facts and set is_containerized if needed] *** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_facts/tasks/main.yml:71 changed: [openshift] => { "ansible_facts": { "openshift": { "common": { "admin_binary": "/usr/local/bin/oadm", "all_hostnames": [ "172.18.1.226", "ip-172-18-1-226.ec2.internal", "ec2-52-200-78-107.compute-1.amazonaws.com", "52.200.78.107" ], "cli_image": "openshift/origin", "client_binary": "/usr/local/bin/oc", "cluster_id": "default", "config_base": "/etc/origin", "data_dir": "/var/lib/origin", "debug_level": "2", "deployer_image": "openshift/origin-deployer", "deployment_subtype": "basic", "deployment_type": "origin", "dns_domain": "cluster.local", "etcd_runtime": "docker", "examples_content_version": "v3.6", "generate_no_proxy_hosts": true, "hostname": "ip-172-18-1-226.ec2.internal", "install_examples": true, "internal_hostnames": [ "172.18.1.226", "ip-172-18-1-226.ec2.internal" ], "ip": "172.18.1.226", "is_atomic": false, "is_containerized": true, "is_etcd_system_container": false, "is_master_system_container": false, "is_node_system_container": false, "is_openvswitch_system_container": false, "kube_svc_ip": "172.30.0.1", "pod_image": "openshift/origin-pod", "portal_net": "172.30.0.0/16", "public_hostname": "ec2-52-200-78-107.compute-1.amazonaws.com", "public_ip": "52.200.78.107", "registry_image": "openshift/origin-docker-registry", "router_image": "openshift/origin-haproxy-router", "sdn_network_plugin_name": "redhat/openshift-ovs-subnet", "service_type": "origin", "use_calico": false, "use_contiv": false, "use_dnsmasq": true, "use_flannel": false, "use_manageiq": true, "use_nuage": false, "use_openshift_sdn": true, "version_gte_3_1_1_or_1_1_1": true, "version_gte_3_1_or_1_1": true, "version_gte_3_2_or_1_2": true, "version_gte_3_3_or_1_3": true, "version_gte_3_4_or_1_4": true, "version_gte_3_5_or_1_5": true, "version_gte_3_6": true }, "current_config": { "roles": [ "node", "docker" ] }, "docker": { "api_version": 1.24, "disable_push_dockerhub": false, "gte_1_10": true, "options": "--log-driver=journald", "service_name": "docker", "version": "1.12.6" }, "hosted": { "logging": { "selector": null }, "metrics": { "selector": null }, "registry": { "selector": "region=infra" }, "router": { "selector": "region=infra" } }, "node": { "annotations": {}, "iptables_sync_period": "30s", "kubelet_args": { "node-labels": [] }, "labels": {}, "local_quota_per_fsgroup": "", "node_image": "openshift/node", "node_system_image": "openshift/node", "nodename": "ip-172-18-1-226.ec2.internal", "ovs_image": "openshift/openvswitch", "ovs_system_image": "openshift/openvswitch", "registry_url": "openshift/origin-${component}:${version}", "schedulable": true, "sdn_mtu": "8951", "set_node_ip": false, "storage_plugin_deps": [ "ceph", "glusterfs", "iscsi" ] }, "provider": { "metadata": { "ami-id": "ami-83a1fc95", "ami-launch-index": "0", "ami-manifest-path": "(unknown)", "block-device-mapping": { "ami": "/dev/sda1", "ebs17": "sdb", "root": "/dev/sda1" }, "hostname": "ip-172-18-1-226.ec2.internal", "instance-action": "none", "instance-id": "i-0736f322f058e6409", "instance-type": "c4.xlarge", "local-hostname": "ip-172-18-1-226.ec2.internal", "local-ipv4": "172.18.1.226", "mac": "0e:3d:36:05:6e:f8", "metrics": { "vhostmd": "" }, "network": { "interfaces": { "macs": { "0e:3d:36:05:6e:f8": { "device-number": "0", "interface-id": "eni-e053493a", "ipv4-associations": { "52.200.78.107": "172.18.1.226" }, "local-hostname": "ip-172-18-1-226.ec2.internal", 
"local-ipv4s": "172.18.1.226", "mac": "0e:3d:36:05:6e:f8", "owner-id": "531415883065", "public-hostname": "ec2-52-200-78-107.compute-1.amazonaws.com", "public-ipv4s": "52.200.78.107", "security-group-ids": "sg-7e73221a", "security-groups": "default", "subnet-id": "subnet-cf57c596", "subnet-ipv4-cidr-block": "172.18.0.0/20", "vpc-id": "vpc-69705d0c", "vpc-ipv4-cidr-block": "172.18.0.0/16", "vpc-ipv4-cidr-blocks": "172.18.0.0/16" } } } }, "placement": { "availability-zone": "us-east-1d" }, "profile": "default-hvm", "public-hostname": "ec2-52-200-78-107.compute-1.amazonaws.com", "public-ipv4": "52.200.78.107", "public-keys/": "0=libra", "reservation-id": "r-0466dc5a6b97ed873", "security-groups": "default", "services": { "domain": "amazonaws.com", "partition": "aws" } }, "name": "aws", "network": { "hostname": "ip-172-18-1-226.ec2.internal", "interfaces": [ { "ips": [ "172.18.1.226" ], "network_id": "subnet-cf57c596", "network_type": "vpc", "public_ips": [ "52.200.78.107" ] } ], "ip": "172.18.1.226", "ipv6_enabled": false, "public_hostname": "ec2-52-200-78-107.compute-1.amazonaws.com", "public_ip": "52.200.78.107" }, "zone": "us-east-1d" } } }, "changed": true } TASK [openshift_facts : Set repoquery command] ********************************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_facts/tasks/main.yml:99 ok: [openshift] => { "ansible_facts": { "repoquery_cmd": "repoquery --plugins" }, "changed": false } TASK [openshift_logging : fail] ************************************************ task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/main.yaml:2 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Set default image variables based on deployment_type] *** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/main.yaml:6 ok: [openshift] => (item=/tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/vars/default_images.yml) => { "ansible_facts": { "__openshift_logging_image_prefix": "{{ openshift_hosted_logging_deployer_prefix | default('docker.io/openshift/origin-') }}", "__openshift_logging_image_version": "{{ openshift_hosted_logging_deployer_version | default('latest') }}" }, "item": "/tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/vars/default_images.yml" } TASK [openshift_logging : Set logging image facts] ***************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/main.yaml:12 ok: [openshift] => { "ansible_facts": { "openshift_logging_image_prefix": "172.30.106.159:5000/logging/", "openshift_logging_image_version": "latest" }, "changed": false } TASK [openshift_logging : Create temp directory for doing work in] ************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/main.yaml:17 ok: [openshift] => { "changed": false, "cmd": [ "mktemp", "-d", "/tmp/openshift-logging-ansible-XXXXXX" ], "delta": "0:00:00.001948", "end": "2017-06-09 10:34:08.091728", "rc": 0, "start": "2017-06-09 10:34:08.089780" } STDOUT: /tmp/openshift-logging-ansible-wPPOkB TASK [openshift_logging : debug] *********************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/main.yaml:24 ok: [openshift] => { "changed": false } MSG: Created temp dir /tmp/openshift-logging-ansible-wPPOkB TASK [openshift_logging : Create local temp directory for doing work in] ******* task path: 
/tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/main.yaml:26 ok: [openshift -> 127.0.0.1] => { "changed": false, "cmd": [ "mktemp", "-d", "/tmp/openshift-logging-ansible-XXXXXX" ], "delta": "0:00:01.002100", "end": "2017-06-09 10:34:09.243698", "rc": 0, "start": "2017-06-09 10:34:08.241598" } STDOUT: /tmp/openshift-logging-ansible-6X6iCA TASK [openshift_logging : include] ********************************************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/main.yaml:33 included: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml for openshift TASK [openshift_logging : Gather OpenShift Logging Facts] ********************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:2 ok: [openshift] => { "ansible_facts": { "openshift_logging_facts": { "curator": { "clusterrolebindings": {}, "configmaps": {}, "daemonsets": {}, "deploymentconfigs": {}, "oauthclients": {}, "pvcs": {}, "rolebindings": {}, "routes": {}, "sccs": {}, "secrets": {}, "services": {} }, "curator_ops": { "clusterrolebindings": {}, "configmaps": {}, "daemonsets": {}, "deploymentconfigs": {}, "oauthclients": {}, "pvcs": {}, "rolebindings": {}, "routes": {}, "sccs": {}, "secrets": {}, "services": {} }, "elasticsearch": { "clusterrolebindings": {}, "configmaps": {}, "daemonsets": {}, "deploymentconfigs": {}, "oauthclients": {}, "pvcs": {}, "rolebindings": {}, "routes": {}, "sccs": {}, "secrets": {}, "services": {} }, "elasticsearch_ops": { "clusterrolebindings": {}, "configmaps": {}, "daemonsets": {}, "deploymentconfigs": {}, "oauthclients": {}, "pvcs": {}, "rolebindings": {}, "routes": {}, "sccs": {}, "secrets": {}, "services": {} }, "fluentd": { "clusterrolebindings": {}, "configmaps": {}, "daemonsets": {}, "deploymentconfigs": {}, "oauthclients": {}, "pvcs": {}, "rolebindings": {}, "routes": {}, "sccs": {}, "secrets": {}, "services": {} }, "kibana": { "clusterrolebindings": {}, "configmaps": {}, "daemonsets": {}, "deploymentconfigs": {}, "oauthclients": {}, "pvcs": {}, "rolebindings": {}, "routes": {}, "sccs": {}, "secrets": {}, "services": {} }, "kibana_ops": { "clusterrolebindings": {}, "configmaps": {}, "daemonsets": {}, "deploymentconfigs": {}, "oauthclients": {}, "pvcs": {}, "rolebindings": {}, "routes": {}, "sccs": {}, "secrets": {}, "services": {} } } }, "changed": false } TASK [openshift_logging : Set logging project] ********************************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:7 ok: [openshift] => { "changed": false, "results": { "cmd": "/bin/oc get namespace logging -o json", "results": { "apiVersion": "v1", "kind": "Namespace", "metadata": { "annotations": { "openshift.io/description": "", "openshift.io/display-name": "", "openshift.io/node-selector": "", "openshift.io/sa.scc.mcs": "s0:c7,c4", "openshift.io/sa.scc.supplemental-groups": "1000050000/10000", "openshift.io/sa.scc.uid-range": "1000050000/10000" }, "creationTimestamp": "2017-06-09T14:21:58Z", "name": "logging", "resourceVersion": "859", "selfLink": "/api/v1/namespaces/logging", "uid": "ff72e857-4d1e-11e7-94cc-0e3d36056ef8" }, "spec": { "finalizers": [ "openshift.io/origin", "kubernetes" ] }, "status": { "phase": "Active" } }, "returncode": 0 }, "state": "present" } TASK [openshift_logging : Labeling logging project] **************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:13 
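The 'Set logging project' task above finds the logging namespace already present and Active by running /bin/oc get namespace logging -o json. The same check can be made by hand; the jsonpath query is an illustrative way to pull out just the phase, assuming a kubeconfig that can read the namespace:
# Full object, as the role requests it.
oc get namespace logging -o json
# Just the status phase; expected to print Active for this run.
oc get namespace logging -o jsonpath='{.status.phase}{"\n"}'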
TASK [openshift_logging : Labeling logging project] **************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:26 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Create logging cert directory] *********************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:39 ok: [openshift] => { "changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/origin/logging", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 6, "state": "directory", "uid": 0 } TASK [openshift_logging : include] ********************************************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:47 included: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml for openshift TASK [openshift_logging : Checking for ca.key] ********************************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:3 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_logging : Checking for ca.crt] ********************************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:8 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_logging : Checking for ca.serial.txt] ************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:13 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_logging : Generate certificates] ******************************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:18 changed: [openshift] => { "changed": true, "cmd": [ "/usr/local/bin/oc", "adm", "--config=/tmp/openshift-logging-ansible-wPPOkB/admin.kubeconfig", "ca", "create-signer-cert", "--key=/etc/origin/logging/ca.key", "--cert=/etc/origin/logging/ca.crt", "--serial=/etc/origin/logging/ca.serial.txt", "--name=logging-signer-test" ], "delta": "0:00:00.250745", "end": "2017-06-09 10:34:13.376366", "rc": 0, "start": "2017-06-09 10:34:13.125621" } TASK [openshift_logging : Checking for signing.conf] *************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:29 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_logging : template] ******************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:34 changed: [openshift] => { "changed": true, "checksum": "a5a1bda430be44f982fa9097778b7d35d2e42780", "dest": "/etc/origin/logging/signing.conf", "gid": 0, "group": "root", "md5sum": "449087446670073f2899aac33113350c", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 4263, "src": "/root/.ansible/tmp/ansible-tmp-1497018853.54-165468945447366/source", "state": "file", "uid": 0 } TASK [openshift_logging : include] ********************************************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:39 included: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml for openshift included: 
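The 'Generate certificates' task above creates the logging signer CA with oc adm ca create-signer-cert, writing ca.key, ca.crt and ca.serial.txt under /etc/origin/logging. The same call reformatted for readability, plus an illustrative openssl inspection of the resulting CA cert (paths and kubeconfig exactly as used in this run):
oc adm --config=/tmp/openshift-logging-ansible-wPPOkB/admin.kubeconfig \
  ca create-signer-cert \
  --key=/etc/origin/logging/ca.key \
  --cert=/etc/origin/logging/ca.crt \
  --serial=/etc/origin/logging/ca.serial.txt \
  --name=logging-signer-test
# Inspect the signer cert that was just written (illustrative check, not in the playbook).
openssl x509 -in /etc/origin/logging/ca.crt -noout -subject -dates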
/tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml for openshift included: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml for openshift TASK [openshift_logging : Checking for kibana.crt] ***************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:2 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_logging : Checking for kibana.key] ***************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:7 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_logging : Trying to discover server cert variable name for kibana] *** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:12 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Trying to discover the server key variable name for kibana] *** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:20 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Creating signed server cert and key for kibana] ****** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:28 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Copying server key for kibana to generated certs directory] *** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:40 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Copying Server cert for kibana to generated certs directory] *** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:50 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Checking for kibana-ops.crt] ************************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:2 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_logging : Checking for kibana-ops.key] ************************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:7 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_logging : Trying to discover server cert variable name for kibana-ops] *** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:12 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Trying to discover the server key variable name for kibana-ops] *** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:20 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Creating signed server cert and key for kibana-ops] *** task path: 
/tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:28 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Copying server key for kibana-ops to generated certs directory] *** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:40 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Copying Server cert for kibana-ops to generated certs directory] *** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:50 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Checking for kibana-internal.crt] ******************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:2 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_logging : Checking for kibana-internal.key] ******************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:7 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_logging : Trying to discover server cert variable name for kibana-internal] *** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:12 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Trying to discover the server key variable name for kibana-internal] *** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:20 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Creating signed server cert and key for kibana-internal] *** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:28 changed: [openshift] => { "changed": true, "cmd": [ "/usr/local/bin/oc", "adm", "--config=/tmp/openshift-logging-ansible-wPPOkB/admin.kubeconfig", "ca", "create-server-cert", "--key=/etc/origin/logging/kibana-internal.key", "--cert=/etc/origin/logging/kibana-internal.crt", "--hostnames=kibana, kibana-ops, kibana.127.0.0.1.xip.io, kibana-ops.router.default.svc.cluster.local", "--signer-cert=/etc/origin/logging/ca.crt", "--signer-key=/etc/origin/logging/ca.key", "--signer-serial=/etc/origin/logging/ca.serial.txt" ], "delta": "0:00:00.207512", "end": "2017-06-09 10:34:15.336941", "rc": 0, "start": "2017-06-09 10:34:15.129429" } TASK [openshift_logging : Copying server key for kibana-internal to generated certs directory] *** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:40 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Copying Server cert for kibana-internal to generated certs directory] *** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/procure_server_certs.yaml:50 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : include] ********************************************* task path: 
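Of the three procure_server_certs.yaml passes above, only the kibana-internal pass actually generates a certificate; the kibana and kibana-ops passes skip every step because their conditionals evaluate false in this run. The create-server-cert call reformatted for readability, with the --hostnames value quoted exactly as it was templated here (embedded spaces included):
oc adm --config=/tmp/openshift-logging-ansible-wPPOkB/admin.kubeconfig \
  ca create-server-cert \
  --key=/etc/origin/logging/kibana-internal.key \
  --cert=/etc/origin/logging/kibana-internal.crt \
  --hostnames='kibana, kibana-ops, kibana.127.0.0.1.xip.io, kibana-ops.router.default.svc.cluster.local' \
  --signer-cert=/etc/origin/logging/ca.crt \
  --signer-key=/etc/origin/logging/ca.key \
  --signer-serial=/etc/origin/logging/ca.serial.txt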
/tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:48 skipping: [openshift] => (item={u'procure_component': u'mux', u'hostnames': u'logging-mux, mux.router.default.svc.cluster.local'}) => { "cert_info": { "hostnames": "logging-mux, mux.router.default.svc.cluster.local", "procure_component": "mux" }, "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : include] ********************************************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:56 skipping: [openshift] => (item={u'procure_component': u'mux'}) => { "changed": false, "shared_key_info": { "procure_component": "mux" }, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : include] ********************************************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:63 skipping: [openshift] => (item={u'procure_component': u'es', u'hostnames': u'es, es.router.default.svc.cluster.local'}) => { "cert_info": { "hostnames": "es, es.router.default.svc.cluster.local", "procure_component": "es" }, "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : include] ********************************************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:71 skipping: [openshift] => (item={u'procure_component': u'es-ops', u'hostnames': u'es-ops, es-ops.router.default.svc.cluster.local'}) => { "cert_info": { "hostnames": "es-ops, es-ops.router.default.svc.cluster.local", "procure_component": "es-ops" }, "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Copy proxy TLS configuration file] ******************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:81 changed: [openshift] => { "changed": true, "checksum": "36991681e03970736a99be9f084773521c44db06", "dest": "/etc/origin/logging/server-tls.json", "gid": 0, "group": "root", "md5sum": "2a954195add2b2fdde4ed09ff5c8e1c5", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 321, "src": "/root/.ansible/tmp/ansible-tmp-1497018855.77-55679468041062/source", "state": "file", "uid": 0 } TASK [openshift_logging : Copy proxy TLS configuration file] ******************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:86 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Checking for ca.db] ********************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:91 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_logging : copy] ************************************************ task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:96 changed: [openshift] => { "changed": true, "checksum": "da39a3ee5e6b4b0d3255bfef95601890afd80709", "dest": "/etc/origin/logging/ca.db", "gid": 0, "group": "root", "md5sum": "d41d8cd98f00b204e9800998ecf8427e", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 0, "src": "/root/.ansible/tmp/ansible-tmp-1497018856.12-118678508705801/source", "state": "file", "uid": 0 } TASK [openshift_logging 
: Checking for ca.crt.srl] ***************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:101 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_logging : copy] ************************************************ task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:106 changed: [openshift] => { "changed": true, "checksum": "da39a3ee5e6b4b0d3255bfef95601890afd80709", "dest": "/etc/origin/logging/ca.crt.srl", "gid": 0, "group": "root", "md5sum": "d41d8cd98f00b204e9800998ecf8427e", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 0, "src": "/root/.ansible/tmp/ansible-tmp-1497018856.44-280552070896056/source", "state": "file", "uid": 0 } TASK [openshift_logging : Generate PEM certs] ********************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:111 included: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml for openshift included: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml for openshift included: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml for openshift included: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml for openshift TASK [openshift_logging : Checking for system.logging.fluentd.key] ************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:2 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_logging : Checking for system.logging.fluentd.crt] ************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:7 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_logging : Creating cert req for system.logging.fluentd] ******** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:12 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Creating cert req for system.logging.fluentd] ******** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:22 changed: [openshift] => { "changed": true, "cmd": [ "openssl", "req", "-out", "/etc/origin/logging/system.logging.fluentd.csr", "-new", "-newkey", "rsa:2048", "-keyout", "/etc/origin/logging/system.logging.fluentd.key", "-subj", "/CN=system.logging.fluentd/OU=OpenShift/O=Logging", "-days", "712", "-nodes" ], "delta": "0:00:00.264709", "end": "2017-06-09 10:34:17.488737", "rc": 0, "start": "2017-06-09 10:34:17.224028" } STDERR: Generating a 2048 bit RSA private key .....................+++ .............................................................+++ writing new private key to '/etc/origin/logging/system.logging.fluentd.key' ----- TASK [openshift_logging : Sign cert request with CA for system.logging.fluentd] *** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:31 changed: [openshift] => { "changed": true, "cmd": [ "openssl", "ca", "-in", "/etc/origin/logging/system.logging.fluentd.csr", "-notext", "-out", "/etc/origin/logging/system.logging.fluentd.crt", "-config", "/etc/origin/logging/signing.conf", "-extensions", "v3_req", "-batch", "-extensions", "server_ext" ], "delta": 
"0:00:00.007792", "end": "2017-06-09 10:34:17.616683", "rc": 0, "start": "2017-06-09 10:34:17.608891" } STDERR: Using configuration from /etc/origin/logging/signing.conf Check that the request matches the signature Signature ok Certificate Details: Serial Number: 2 (0x2) Validity Not Before: Jun 9 14:34:17 2017 GMT Not After : Jun 9 14:34:17 2019 GMT Subject: organizationName = Logging organizationalUnitName = OpenShift commonName = system.logging.fluentd X509v3 extensions: X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Basic Constraints: CA:FALSE X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Subject Key Identifier: 16:D7:83:FD:CC:ED:1A:1E:FC:5C:30:EA:51:A2:4E:C6:70:C0:8B:B0 X509v3 Authority Key Identifier: 0. Certificate is to be certified until Jun 9 14:34:17 2019 GMT (730 days) Write out database with 1 new entries Data Base Updated TASK [openshift_logging : Checking for system.logging.kibana.key] ************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:2 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_logging : Checking for system.logging.kibana.crt] ************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:7 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_logging : Creating cert req for system.logging.kibana] ********* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:12 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Creating cert req for system.logging.kibana] ********* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:22 changed: [openshift] => { "changed": true, "cmd": [ "openssl", "req", "-out", "/etc/origin/logging/system.logging.kibana.csr", "-new", "-newkey", "rsa:2048", "-keyout", "/etc/origin/logging/system.logging.kibana.key", "-subj", "/CN=system.logging.kibana/OU=OpenShift/O=Logging", "-days", "712", "-nodes" ], "delta": "0:00:01.393880", "end": "2017-06-09 10:34:19.393839", "rc": 0, "start": "2017-06-09 10:34:17.999959" } STDERR: Generating a 2048 bit RSA private key ..............................................................+++ .................................................................+++ writing new private key to '/etc/origin/logging/system.logging.kibana.key' ----- TASK [openshift_logging : Sign cert request with CA for system.logging.kibana] *** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:31 changed: [openshift] => { "changed": true, "cmd": [ "openssl", "ca", "-in", "/etc/origin/logging/system.logging.kibana.csr", "-notext", "-out", "/etc/origin/logging/system.logging.kibana.crt", "-config", "/etc/origin/logging/signing.conf", "-extensions", "v3_req", "-batch", "-extensions", "server_ext" ], "delta": "0:00:00.007290", "end": "2017-06-09 10:34:19.518816", "rc": 0, "start": "2017-06-09 10:34:19.511526" } STDERR: Using configuration from /etc/origin/logging/signing.conf Check that the request matches the signature Signature ok Certificate Details: Serial Number: 3 (0x3) Validity Not Before: Jun 9 14:34:19 2017 GMT Not After : Jun 9 14:34:19 2019 GMT Subject: organizationName = Logging organizationalUnitName = OpenShift commonName = system.logging.kibana X509v3 extensions: X509v3 
Key Usage: critical Digital Signature, Key Encipherment X509v3 Basic Constraints: CA:FALSE X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Subject Key Identifier: A4:2E:1A:77:50:71:15:B6:56:C4:08:4D:C9:38:D2:A4:85:7A:34:A6 X509v3 Authority Key Identifier: 0. Certificate is to be certified until Jun 9 14:34:19 2019 GMT (730 days) Write out database with 1 new entries Data Base Updated TASK [openshift_logging : Checking for system.logging.curator.key] ************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:2 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_logging : Checking for system.logging.curator.crt] ************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:7 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_logging : Creating cert req for system.logging.curator] ******** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:12 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Creating cert req for system.logging.curator] ******** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:22 changed: [openshift] => { "changed": true, "cmd": [ "openssl", "req", "-out", "/etc/origin/logging/system.logging.curator.csr", "-new", "-newkey", "rsa:2048", "-keyout", "/etc/origin/logging/system.logging.curator.key", "-subj", "/CN=system.logging.curator/OU=OpenShift/O=Logging", "-days", "712", "-nodes" ], "delta": "0:00:00.193537", "end": "2017-06-09 10:34:20.093639", "rc": 0, "start": "2017-06-09 10:34:19.900102" } STDERR: Generating a 2048 bit RSA private key ..........................................................................................................................+++ ................+++ writing new private key to '/etc/origin/logging/system.logging.curator.key' ----- TASK [openshift_logging : Sign cert request with CA for system.logging.curator] *** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:31 changed: [openshift] => { "changed": true, "cmd": [ "openssl", "ca", "-in", "/etc/origin/logging/system.logging.curator.csr", "-notext", "-out", "/etc/origin/logging/system.logging.curator.crt", "-config", "/etc/origin/logging/signing.conf", "-extensions", "v3_req", "-batch", "-extensions", "server_ext" ], "delta": "0:00:00.007120", "end": "2017-06-09 10:34:20.219219", "rc": 0, "start": "2017-06-09 10:34:20.212099" } STDERR: Using configuration from /etc/origin/logging/signing.conf Check that the request matches the signature Signature ok Certificate Details: Serial Number: 4 (0x4) Validity Not Before: Jun 9 14:34:20 2017 GMT Not After : Jun 9 14:34:20 2019 GMT Subject: organizationName = Logging organizationalUnitName = OpenShift commonName = system.logging.curator X509v3 extensions: X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Basic Constraints: CA:FALSE X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Subject Key Identifier: E2:16:63:BA:0F:40:88:02:FE:9C:65:3B:62:78:BE:3B:43:BF:97:41 X509v3 Authority Key Identifier: 0. 
Certificate is to be certified until Jun 9 14:34:20 2019 GMT (730 days) Write out database with 1 new entries Data Base Updated TASK [openshift_logging : Checking for system.admin.key] *********************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:2 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_logging : Checking for system.admin.crt] *********************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:7 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_logging : Creating cert req for system.admin] ****************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:12 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Creating cert req for system.admin] ****************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:22 changed: [openshift] => { "changed": true, "cmd": [ "openssl", "req", "-out", "/etc/origin/logging/system.admin.csr", "-new", "-newkey", "rsa:2048", "-keyout", "/etc/origin/logging/system.admin.key", "-subj", "/CN=system.admin/OU=OpenShift/O=Logging", "-days", "712", "-nodes" ], "delta": "0:00:00.098791", "end": "2017-06-09 10:34:20.691476", "rc": 0, "start": "2017-06-09 10:34:20.592685" } STDERR: Generating a 2048 bit RSA private key ....................+++ ..............................................+++ writing new private key to '/etc/origin/logging/system.admin.key' ----- TASK [openshift_logging : Sign cert request with CA for system.admin] ********** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_pems.yaml:31 changed: [openshift] => { "changed": true, "cmd": [ "openssl", "ca", "-in", "/etc/origin/logging/system.admin.csr", "-notext", "-out", "/etc/origin/logging/system.admin.crt", "-config", "/etc/origin/logging/signing.conf", "-extensions", "v3_req", "-batch", "-extensions", "server_ext" ], "delta": "0:00:00.007389", "end": "2017-06-09 10:34:20.820413", "rc": 0, "start": "2017-06-09 10:34:20.813024" } STDERR: Using configuration from /etc/origin/logging/signing.conf Check that the request matches the signature Signature ok Certificate Details: Serial Number: 5 (0x5) Validity Not Before: Jun 9 14:34:20 2017 GMT Not After : Jun 9 14:34:20 2019 GMT Subject: organizationName = Logging organizationalUnitName = OpenShift commonName = system.admin X509v3 extensions: X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Basic Constraints: CA:FALSE X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Subject Key Identifier: 31:ED:D2:48:6F:8C:5F:82:E8:C1:27:02:EB:12:AB:B7:6E:32:E4:0A X509v3 Authority Key Identifier: 0. 
Certificate is to be certified until Jun 9 14:34:20 2019 GMT (730 days) Write out database with 1 new entries Data Base Updated TASK [openshift_logging : Generate PEM cert for mux] *************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:121 skipping: [openshift] => (item=system.logging.mux) => { "changed": false, "node_name": "system.logging.mux", "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Generate PEM cert for Elasticsearch external route] *** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:129 skipping: [openshift] => (item=system.logging.es) => { "changed": false, "node_name": "system.logging.es", "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Creating necessary JKS certs] ************************ task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:137 included: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_jks.yaml for openshift TASK [openshift_logging : Checking for elasticsearch.jks] ********************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_jks.yaml:3 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_logging : Checking for logging-es.jks] ************************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_jks.yaml:8 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_logging : Checking for system.admin.jks] *********************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_jks.yaml:13 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_logging : Checking for truststore.jks] ************************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_jks.yaml:18 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_logging : Create placeholder for previously created JKS certs to prevent recreating...] *** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_jks.yaml:23 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Create placeholder for previously created JKS certs to prevent recreating...] *** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_jks.yaml:28 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Create placeholder for previously created JKS certs to prevent recreating...] *** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_jks.yaml:33 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Create placeholder for previously created JKS certs to prevent recreating...] 
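The four generate_pems.yaml passes above each create a key and CSR with openssl req and then sign it with openssl ca against the generated signing.conf, producing client certs for system.logging.fluentd, system.logging.kibana, system.logging.curator and system.admin. A quick sanity check of one result against the logging CA (paths as created in this run; these two commands are illustrative and not part of the playbook):
# Confirm the client cert chains to the logging signer CA.
openssl verify -CAfile /etc/origin/logging/ca.crt /etc/origin/logging/system.admin.crt
# Show the subject and the validity window reported in the signing output above.
openssl x509 -in /etc/origin/logging/system.admin.crt -noout -subject -dates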
*** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_jks.yaml:38 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : pulling down signing items from host] **************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_jks.yaml:43 changed: [openshift] => (item=ca.crt) => { "changed": true, "checksum": "49e403f936f1667baa432f577f18314052f34ad0", "dest": "/tmp/openshift-logging-ansible-6X6iCA/ca.crt", "item": "ca.crt", "md5sum": "d22a72c5be5dbb99c0d764af865b3aa4", "remote_checksum": "49e403f936f1667baa432f577f18314052f34ad0", "remote_md5sum": null } changed: [openshift] => (item=ca.key) => { "changed": true, "checksum": "fcc450e64b4ca9742ecd2fb47df966f1c9297d60", "dest": "/tmp/openshift-logging-ansible-6X6iCA/ca.key", "item": "ca.key", "md5sum": "5eaf5c6e7b3bc90ded08d1a8613ff5f1", "remote_checksum": "fcc450e64b4ca9742ecd2fb47df966f1c9297d60", "remote_md5sum": null } changed: [openshift] => (item=ca.serial.txt) => { "changed": true, "checksum": "b649682b92a811746098e5c91e891e5142a41950", "dest": "/tmp/openshift-logging-ansible-6X6iCA/ca.serial.txt", "item": "ca.serial.txt", "md5sum": "76b01ce73ac53fdac1c67d27ac040473", "remote_checksum": "b649682b92a811746098e5c91e891e5142a41950", "remote_md5sum": null } ok: [openshift] => (item=ca.crl.srl) => { "changed": false, "file": "/etc/origin/logging/ca.crl.srl", "item": "ca.crl.srl" } MSG: the remote file does not exist, not transferring, ignored changed: [openshift] => (item=ca.db) => { "changed": true, "checksum": "1486a91364aec0d2092856774408d7a4692500e5", "dest": "/tmp/openshift-logging-ansible-6X6iCA/ca.db", "item": "ca.db", "md5sum": "1a6986849abd2027aca9dc34cef27232", "remote_checksum": "1486a91364aec0d2092856774408d7a4692500e5", "remote_md5sum": null } TASK [openshift_logging : template] ******************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_jks.yaml:56 changed: [openshift -> 127.0.0.1] => { "changed": true, "checksum": "d15d71f2eeafd832a0ee0531a01b2ad76d4b0c9e", "dest": "/tmp/openshift-logging-ansible-6X6iCA/signing.conf", "gid": 0, "group": "root", "md5sum": "0eb77f03c5c0893f504051f687513c84", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 4281, "src": "/root/.ansible/tmp/ansible-tmp-1497018862.33-45484658100122/source", "state": "file", "uid": 0 } TASK [openshift_logging : Run JKS generation script] *************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_jks.yaml:61 changed: [openshift -> 127.0.0.1] => { "changed": true, "rc": 0 } STDOUT: Generating keystore and certificate for node system.admin Generating certificate signing request for node system.admin Sign certificate request with CA Import back to keystore (including CA chain) All done for system.admin Generating keystore and certificate for node elasticsearch Generating certificate signing request for node elasticsearch Sign certificate request with CA Import back to keystore (including CA chain) All done for elasticsearch Generating keystore and certificate for node logging-es Generating certificate signing request for node logging-es Sign certificate request with CA Import back to keystore (including CA chain) All done for logging-es Import CA to truststore for validating client certs STDERR: + '[' 2 -lt 1 ']' + 
dir=/tmp/openshift-logging-ansible-6X6iCA + SCRATCH_DIR=/tmp/openshift-logging-ansible-6X6iCA + PROJECT=logging + [[ ! -f /tmp/openshift-logging-ansible-6X6iCA/system.admin.jks ]] + generate_JKS_client_cert system.admin + NODE_NAME=system.admin + ks_pass=kspass + ts_pass=tspass + dir=/tmp/openshift-logging-ansible-6X6iCA + echo Generating keystore and certificate for node system.admin + keytool -genkey -alias system.admin -keystore /tmp/openshift-logging-ansible-6X6iCA/system.admin.jks -keyalg RSA -keysize 2048 -validity 712 -keypass kspass -storepass kspass -dname 'CN=system.admin, OU=OpenShift, O=Logging' + echo Generating certificate signing request for node system.admin + keytool -certreq -alias system.admin -keystore /tmp/openshift-logging-ansible-6X6iCA/system.admin.jks -file /tmp/openshift-logging-ansible-6X6iCA/system.admin.jks.csr -keyalg rsa -keypass kspass -storepass kspass -dname 'CN=system.admin, OU=OpenShift, O=Logging' + echo Sign certificate request with CA + openssl ca -in /tmp/openshift-logging-ansible-6X6iCA/system.admin.jks.csr -notext -out /tmp/openshift-logging-ansible-6X6iCA/system.admin.jks.crt -config /tmp/openshift-logging-ansible-6X6iCA/signing.conf -extensions v3_req -batch -extensions server_ext Using configuration from /tmp/openshift-logging-ansible-6X6iCA/signing.conf Check that the request matches the signature Signature ok Certificate Details: Serial Number: 6 (0x6) Validity Not Before: Jun 9 14:34:33 2017 GMT Not After : Jun 9 14:34:33 2019 GMT Subject: organizationName = Logging organizationalUnitName = OpenShift commonName = system.admin X509v3 extensions: X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Basic Constraints: CA:FALSE X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Subject Key Identifier: 2F:9A:74:02:B7:1E:FA:42:C4:55:20:F4:99:C4:B2:10:E9:C2:26:AB X509v3 Authority Key Identifier: 0. Certificate is to be certified until Jun 9 14:34:33 2019 GMT (730 days) Write out database with 1 new entries Data Base Updated + echo 'Import back to keystore (including CA chain)' + keytool -import -file /tmp/openshift-logging-ansible-6X6iCA/ca.crt -keystore /tmp/openshift-logging-ansible-6X6iCA/system.admin.jks -storepass kspass -noprompt -alias sig-ca Certificate was added to keystore + keytool -import -file /tmp/openshift-logging-ansible-6X6iCA/system.admin.jks.crt -keystore /tmp/openshift-logging-ansible-6X6iCA/system.admin.jks -storepass kspass -noprompt -alias system.admin Certificate reply was installed in keystore + echo All done for system.admin + [[ ! 
-f /tmp/openshift-logging-ansible-6X6iCA/elasticsearch.jks ]] ++ join , logging-es logging-es-ops ++ local IFS=, ++ shift ++ echo logging-es,logging-es-ops + generate_JKS_chain true elasticsearch logging-es,logging-es-ops + dir=/tmp/openshift-logging-ansible-6X6iCA + ADD_OID=true + NODE_NAME=elasticsearch + CERT_NAMES=logging-es,logging-es-ops + ks_pass=kspass + ts_pass=tspass + rm -rf elasticsearch + extension_names= + for name in '${CERT_NAMES//,/ }' + extension_names=,dns:logging-es + for name in '${CERT_NAMES//,/ }' + extension_names=,dns:logging-es,dns:logging-es-ops + '[' true = true ']' + extension_names=,dns:logging-es,dns:logging-es-ops,oid:1.2.3.4.5.5 + echo Generating keystore and certificate for node elasticsearch + keytool -genkey -alias elasticsearch -keystore /tmp/openshift-logging-ansible-6X6iCA/elasticsearch.jks -keypass kspass -storepass kspass -keyalg RSA -keysize 2048 -validity 712 -dname 'CN=elasticsearch, OU=OpenShift, O=Logging' -ext san=dns:localhost,ip:127.0.0.1,dns:logging-es,dns:logging-es-ops,oid:1.2.3.4.5.5 + echo Generating certificate signing request for node elasticsearch + keytool -certreq -alias elasticsearch -keystore /tmp/openshift-logging-ansible-6X6iCA/elasticsearch.jks -storepass kspass -file /tmp/openshift-logging-ansible-6X6iCA/elasticsearch.csr -keyalg rsa -dname 'CN=elasticsearch, OU=OpenShift, O=Logging' -ext san=dns:localhost,ip:127.0.0.1,dns:logging-es,dns:logging-es-ops,oid:1.2.3.4.5.5 + echo Sign certificate request with CA + openssl ca -in /tmp/openshift-logging-ansible-6X6iCA/elasticsearch.csr -notext -out /tmp/openshift-logging-ansible-6X6iCA/elasticsearch.crt -config /tmp/openshift-logging-ansible-6X6iCA/signing.conf -extensions v3_req -batch -extensions server_ext Using configuration from /tmp/openshift-logging-ansible-6X6iCA/signing.conf Check that the request matches the signature Signature ok Certificate Details: Serial Number: 7 (0x7) Validity Not Before: Jun 9 14:34:34 2017 GMT Not After : Jun 9 14:34:34 2019 GMT Subject: organizationName = Logging organizationalUnitName = OpenShift commonName = elasticsearch X509v3 extensions: X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Basic Constraints: CA:FALSE X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Subject Key Identifier: 14:93:B5:E7:2B:FA:55:95:B5:47:D3:7A:1E:77:97:07:41:E2:8B:CF X509v3 Authority Key Identifier: 0. X509v3 Subject Alternative Name: DNS:localhost, IP Address:127.0.0.1, DNS:logging-es, DNS:logging-es-ops, Registered ID:1.2.3.4.5.5 Certificate is to be certified until Jun 9 14:34:34 2019 GMT (730 days) Write out database with 1 new entries Data Base Updated + echo 'Import back to keystore (including CA chain)' + keytool -import -file /tmp/openshift-logging-ansible-6X6iCA/ca.crt -keystore /tmp/openshift-logging-ansible-6X6iCA/elasticsearch.jks -storepass kspass -noprompt -alias sig-ca Certificate was added to keystore + keytool -import -file /tmp/openshift-logging-ansible-6X6iCA/elasticsearch.crt -keystore /tmp/openshift-logging-ansible-6X6iCA/elasticsearch.jks -storepass kspass -noprompt -alias elasticsearch Certificate reply was installed in keystore + echo All done for elasticsearch + [[ ! 
-f /tmp/openshift-logging-ansible-6X6iCA/logging-es.jks ]] ++ join , logging-es logging-es.logging.svc.cluster.local logging-es-cluster logging-es-cluster.logging.svc.cluster.local logging-es-ops logging-es-ops.logging.svc.cluster.local logging-es-ops-cluster logging-es-ops-cluster.logging.svc.cluster.local ++ local IFS=, ++ shift ++ echo logging-es,logging-es.logging.svc.cluster.local,logging-es-cluster,logging-es-cluster.logging.svc.cluster.local,logging-es-ops,logging-es-ops.logging.svc.cluster.local,logging-es-ops-cluster,logging-es-ops-cluster.logging.svc.cluster.local + generate_JKS_chain false logging-es logging-es,logging-es.logging.svc.cluster.local,logging-es-cluster,logging-es-cluster.logging.svc.cluster.local,logging-es-ops,logging-es-ops.logging.svc.cluster.local,logging-es-ops-cluster,logging-es-ops-cluster.logging.svc.cluster.local + dir=/tmp/openshift-logging-ansible-6X6iCA + ADD_OID=false + NODE_NAME=logging-es + CERT_NAMES=logging-es,logging-es.logging.svc.cluster.local,logging-es-cluster,logging-es-cluster.logging.svc.cluster.local,logging-es-ops,logging-es-ops.logging.svc.cluster.local,logging-es-ops-cluster,logging-es-ops-cluster.logging.svc.cluster.local + ks_pass=kspass + ts_pass=tspass + rm -rf logging-es + extension_names= + for name in '${CERT_NAMES//,/ }' + extension_names=,dns:logging-es + for name in '${CERT_NAMES//,/ }' + extension_names=,dns:logging-es,dns:logging-es.logging.svc.cluster.local + for name in '${CERT_NAMES//,/ }' + extension_names=,dns:logging-es,dns:logging-es.logging.svc.cluster.local,dns:logging-es-cluster + for name in '${CERT_NAMES//,/ }' + extension_names=,dns:logging-es,dns:logging-es.logging.svc.cluster.local,dns:logging-es-cluster,dns:logging-es-cluster.logging.svc.cluster.local + for name in '${CERT_NAMES//,/ }' + extension_names=,dns:logging-es,dns:logging-es.logging.svc.cluster.local,dns:logging-es-cluster,dns:logging-es-cluster.logging.svc.cluster.local,dns:logging-es-ops + for name in '${CERT_NAMES//,/ }' + extension_names=,dns:logging-es,dns:logging-es.logging.svc.cluster.local,dns:logging-es-cluster,dns:logging-es-cluster.logging.svc.cluster.local,dns:logging-es-ops,dns:logging-es-ops.logging.svc.cluster.local + for name in '${CERT_NAMES//,/ }' + extension_names=,dns:logging-es,dns:logging-es.logging.svc.cluster.local,dns:logging-es-cluster,dns:logging-es-cluster.logging.svc.cluster.local,dns:logging-es-ops,dns:logging-es-ops.logging.svc.cluster.local,dns:logging-es-ops-cluster + for name in '${CERT_NAMES//,/ }' + extension_names=,dns:logging-es,dns:logging-es.logging.svc.cluster.local,dns:logging-es-cluster,dns:logging-es-cluster.logging.svc.cluster.local,dns:logging-es-ops,dns:logging-es-ops.logging.svc.cluster.local,dns:logging-es-ops-cluster,dns:logging-es-ops-cluster.logging.svc.cluster.local + '[' false = true ']' + echo Generating keystore and certificate for node logging-es + keytool -genkey -alias logging-es -keystore /tmp/openshift-logging-ansible-6X6iCA/logging-es.jks -keypass kspass -storepass kspass -keyalg RSA -keysize 2048 -validity 712 -dname 'CN=logging-es, OU=OpenShift, O=Logging' -ext san=dns:localhost,ip:127.0.0.1,dns:logging-es,dns:logging-es.logging.svc.cluster.local,dns:logging-es-cluster,dns:logging-es-cluster.logging.svc.cluster.local,dns:logging-es-ops,dns:logging-es-ops.logging.svc.cluster.local,dns:logging-es-ops-cluster,dns:logging-es-ops-cluster.logging.svc.cluster.local + echo Generating certificate signing request for node logging-es + keytool -certreq -alias logging-es -keystore 
/tmp/openshift-logging-ansible-6X6iCA/logging-es.jks -storepass kspass -file /tmp/openshift-logging-ansible-6X6iCA/logging-es.csr -keyalg rsa -dname 'CN=logging-es, OU=OpenShift, O=Logging' -ext san=dns:localhost,ip:127.0.0.1,dns:logging-es,dns:logging-es.logging.svc.cluster.local,dns:logging-es-cluster,dns:logging-es-cluster.logging.svc.cluster.local,dns:logging-es-ops,dns:logging-es-ops.logging.svc.cluster.local,dns:logging-es-ops-cluster,dns:logging-es-ops-cluster.logging.svc.cluster.local + echo Sign certificate request with CA + openssl ca -in /tmp/openshift-logging-ansible-6X6iCA/logging-es.csr -notext -out /tmp/openshift-logging-ansible-6X6iCA/logging-es.crt -config /tmp/openshift-logging-ansible-6X6iCA/signing.conf -extensions v3_req -batch -extensions server_ext Using configuration from /tmp/openshift-logging-ansible-6X6iCA/signing.conf Check that the request matches the signature Signature ok Certificate Details: Serial Number: 8 (0x8) Validity Not Before: Jun 9 14:34:35 2017 GMT Not After : Jun 9 14:34:35 2019 GMT Subject: organizationName = Logging organizationalUnitName = OpenShift commonName = logging-es X509v3 extensions: X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Basic Constraints: CA:FALSE X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Subject Key Identifier: 42:D0:DF:69:AD:9C:43:14:54:F3:A4:BA:F7:3F:CB:0F:6A:C8:A0:8E X509v3 Authority Key Identifier: 0. X509v3 Subject Alternative Name: DNS:localhost, IP Address:127.0.0.1, DNS:logging-es, DNS:logging-es.logging.svc.cluster.local, DNS:logging-es-cluster, DNS:logging-es-cluster.logging.svc.cluster.local, DNS:logging-es-ops, DNS:logging-es-ops.logging.svc.cluster.local, DNS:logging-es-ops-cluster, DNS:logging-es-ops-cluster.logging.svc.cluster.local Certificate is to be certified until Jun 9 14:34:35 2019 GMT (730 days) Write out database with 1 new entries Data Base Updated + echo 'Import back to keystore (including CA chain)' + keytool -import -file /tmp/openshift-logging-ansible-6X6iCA/ca.crt -keystore /tmp/openshift-logging-ansible-6X6iCA/logging-es.jks -storepass kspass -noprompt -alias sig-ca Certificate was added to keystore + keytool -import -file /tmp/openshift-logging-ansible-6X6iCA/logging-es.crt -keystore /tmp/openshift-logging-ansible-6X6iCA/logging-es.jks -storepass kspass -noprompt -alias logging-es Certificate reply was installed in keystore + echo All done for logging-es + '[' '!' -f /tmp/openshift-logging-ansible-6X6iCA/truststore.jks ']' + createTruststore + echo 'Import CA to truststore for validating client certs' + keytool -import -file /tmp/openshift-logging-ansible-6X6iCA/ca.crt -keystore /tmp/openshift-logging-ansible-6X6iCA/truststore.jks -storepass tspass -noprompt -alias sig-ca Certificate was added to keystore + exit 0 TASK [openshift_logging : Pushing locally generated JKS certs to remote host...] *** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_jks.yaml:66 changed: [openshift] => { "changed": true, "checksum": "e3e2dd6070ce3049c62814d78eb75b61485fec2c", "dest": "/etc/origin/logging/elasticsearch.jks", "gid": 0, "group": "root", "md5sum": "2fba9d4f396d642d41b7144d075acb7e", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 3766, "src": "/root/.ansible/tmp/ansible-tmp-1497018876.35-116214791207160/source", "state": "file", "uid": 0 } TASK [openshift_logging : Pushing locally generated JKS certs to remote host...] 
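The keystore work traced above repeats the same cycle for every node: generate a keypair with the SAN extensions, export a CSR, sign it with the deployer CA via openssl ca, then import the CA certificate and the signed reply back into the keystore; the shared truststore only ever receives the CA. A condensed sketch of that cycle, assuming ca.crt, ca.key and a signing.conf usable by "openssl ca" already exist in $dir; the node name, SAN list and passwords here are illustrative:

# 1. Generate the node keypair directly inside a JKS keystore.
dir=/tmp/jks-demo
node=logging-es
sans="dns:localhost,ip:127.0.0.1,dns:logging-es,dns:logging-es.logging.svc.cluster.local"
keytool -genkey -alias "$node" -keystore "$dir/$node.jks" \
  -keypass kspass -storepass kspass -keyalg RSA -keysize 2048 -validity 712 \
  -dname "CN=$node, OU=OpenShift, O=Logging" -ext "san=$sans"

# 2. Export a CSR and have the deployer CA sign it.
keytool -certreq -alias "$node" -keystore "$dir/$node.jks" -storepass kspass \
  -file "$dir/$node.csr" -keyalg rsa \
  -dname "CN=$node, OU=OpenShift, O=Logging" -ext "san=$sans"
openssl ca -in "$dir/$node.csr" -notext -out "$dir/$node.crt" \
  -config "$dir/signing.conf" -extensions v3_req -batch -extensions server_ext

# 3. Import the CA first, then the signed reply, so the keystore holds the full chain.
keytool -import -file "$dir/ca.crt"    -keystore "$dir/$node.jks" -storepass kspass -noprompt -alias sig-ca
keytool -import -file "$dir/$node.crt" -keystore "$dir/$node.jks" -storepass kspass -noprompt -alias "$node"

# 4. The shared truststore only needs the CA certificate.
keytool -import -file "$dir/ca.crt" -keystore "$dir/truststore.jks" -storepass tspass -noprompt -alias sig-ca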
*** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_jks.yaml:72 changed: [openshift] => { "changed": true, "checksum": "c4127e3fe0dfdabd6e166096f0659ff8147e9cc3", "dest": "/etc/origin/logging/logging-es.jks", "gid": 0, "group": "root", "md5sum": "866eac76d598bbd43966454387a51dc8", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 3985, "src": "/root/.ansible/tmp/ansible-tmp-1497018876.57-129247782659315/source", "state": "file", "uid": 0 } TASK [openshift_logging : Pushing locally generated JKS certs to remote host...] *** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_jks.yaml:78 changed: [openshift] => { "changed": true, "checksum": "7f1eeafae72d5a74761e61a54e4b4e8d9fbc9e01", "dest": "/etc/origin/logging/system.admin.jks", "gid": 0, "group": "root", "md5sum": "f5752f72ce2e8765694950e03b2f0926", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 3700, "src": "/root/.ansible/tmp/ansible-tmp-1497018876.79-253457090725397/source", "state": "file", "uid": 0 } TASK [openshift_logging : Pushing locally generated JKS certs to remote host...] *** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_jks.yaml:84 changed: [openshift] => { "changed": true, "checksum": "bebfb235db90aa4d11972f53459dbb60c804b73e", "dest": "/etc/origin/logging/truststore.jks", "gid": 0, "group": "root", "md5sum": "ce756565668abfaacbe19935d7fd2ad6", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 797, "src": "/root/.ansible/tmp/ansible-tmp-1497018877.01-26578322865867/source", "state": "file", "uid": 0 } TASK [openshift_logging : Generate proxy session] ****************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:141 ok: [openshift] => { "ansible_facts": { "session_secret": "7KcHkEs9sWZlgjyZJAurgeD3ZecbIGlXtwEpAThuesCCxLbC3H3nFdIpjercH93MxZzXpe5v5V6faye0hoxrbk3nc4PGKodZl4GCIzZah92VN3DuM4Gkip28mUPU600xJB8U9qo8qslmZOvnuCRvzcETcHF5W8nAFAU4O2iH9o2vlEUc3ocuT3Y8GvuviLaiGx1t1LIK" }, "changed": false } TASK [openshift_logging : Generate oauth client secret] ************************ task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/generate_certs.yaml:146 ok: [openshift] => { "ansible_facts": { "oauth_secret": "L273cY2dUyEw2U7pbxCKfxx1y0tlk5v0pwM9EgGghH18ZvpZWAIxtwZBhUWUdfHj" }, "changed": false } TASK [openshift_logging : set_fact] ******************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:53 TASK [openshift_logging : set_fact] ******************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:57 ok: [openshift] => { "ansible_facts": { "es_indices": "[]" }, "changed": false } TASK [openshift_logging : set_fact] ******************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:60 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : include_role] **************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:64 TASK [openshift_logging : include_role] **************************************** task path: 
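Once generate_jks.yaml has copied the four keystores to /etc/origin/logging on the remote host, their contents can be checked with keytool, and the random session and oauth tokens reported by generate_certs.yaml can be approximated from any entropy source. A small sketch; kspass/tspass are the passwords used at generation time, and the openssl line is only an illustration of producing a token of similar length, not how the role itself generates these facts:

# List the pushed keystores to confirm the expected aliases are present.
for ks in elasticsearch logging-es system.admin; do
  keytool -list -keystore "/etc/origin/logging/$ks.jks" -storepass kspass | head -n 5
done
keytool -list -keystore /etc/origin/logging/truststore.jks -storepass tspass

# Illustration only: a random token of roughly the same shape as the
# session_secret/oauth_secret values above.
openssl rand -base64 48 | tr -d '/+='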
/tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:85 statically included: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/determine_version.yaml TASK [openshift_logging_elasticsearch : Validate Elasticsearch cluster size] *** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:2 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_elasticsearch : Validate Elasticsearch Ops cluster size] *** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:6 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_elasticsearch : fail] ********************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:10 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_elasticsearch : set_fact] ****************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:14 ok: [openshift] => { "ansible_facts": { "elasticsearch_name": "logging-elasticsearch", "es_component": "es" }, "changed": false } TASK [openshift_logging_elasticsearch : fail] ********************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/determine_version.yaml:3 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_elasticsearch : set_fact] ****************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/determine_version.yaml:7 ok: [openshift] => { "ansible_facts": { "es_version": "3_5" }, "changed": false } TASK [openshift_logging_elasticsearch : debug] ********************************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/determine_version.yaml:11 ok: [openshift] => { "changed": false, "openshift_logging_image_version": "latest" } TASK [openshift_logging_elasticsearch : set_fact] ****************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/determine_version.yaml:14 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_elasticsearch : fail] ********************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/determine_version.yaml:17 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_elasticsearch : Create temp directory for doing work in] *** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:21 ok: [openshift] => { "changed": false, "cmd": [ "mktemp", "-d", "/tmp/openshift-logging-ansible-XXXXXX" ], "delta": "0:00:00.001829", "end": "2017-06-09 10:34:37.901577", "rc": 0, "start": "2017-06-09 10:34:37.899748" } STDOUT: /tmp/openshift-logging-ansible-SRwGtz TASK [openshift_logging_elasticsearch : set_fact] ****************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:26 
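The scratch-directory pattern used by the openshift_logging_elasticsearch role is visible in the tasks above: one mktemp -d directory per run, a templates/ subdirectory for rendered files, and a delete task at the end. A shell-side equivalent for illustration (the role performs the cleanup in its later "Delete temp directory" task rather than with a trap):

tempdir=$(mktemp -d /tmp/openshift-logging-ansible-XXXXXX)
trap 'rm -rf "$tempdir"' EXIT
mkdir -p "$tempdir/templates"
echo "rendering templates into $tempdir/templates"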
ok: [openshift] => { "ansible_facts": { "tempdir": "/tmp/openshift-logging-ansible-SRwGtz" }, "changed": false } TASK [openshift_logging_elasticsearch : Create templates subdirectory] ********* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:30 ok: [openshift] => { "changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/tmp/openshift-logging-ansible-SRwGtz/templates", "secontext": "unconfined_u:object_r:user_tmp_t:s0", "size": 6, "state": "directory", "uid": 0 } TASK [openshift_logging_elasticsearch : Create ES service account] ************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:40 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_elasticsearch : Create ES service account] ************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:48 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc get sa aggregated-logging-elasticsearch -o json -n logging", "results": [ { "apiVersion": "v1", "imagePullSecrets": [ { "name": "aggregated-logging-elasticsearch-dockercfg-sb6sd" } ], "kind": "ServiceAccount", "metadata": { "creationTimestamp": "2017-06-09T14:34:38Z", "name": "aggregated-logging-elasticsearch", "namespace": "logging", "resourceVersion": "1264", "selfLink": "/api/v1/namespaces/logging/serviceaccounts/aggregated-logging-elasticsearch", "uid": "c4620683-4d20-11e7-94cc-0e3d36056ef8" }, "secrets": [ { "name": "aggregated-logging-elasticsearch-token-b9jxb" }, { "name": "aggregated-logging-elasticsearch-dockercfg-sb6sd" } ] } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_elasticsearch : copy] ********************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:57 changed: [openshift] => { "changed": true, "checksum": "e5015364391ac609da8655a9a1224131599a5cea", "dest": "/tmp/openshift-logging-ansible-SRwGtz/rolebinding-reader.yml", "gid": 0, "group": "root", "md5sum": "446fb96447527f48f97e69bb41bad7be", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 135, "src": "/root/.ansible/tmp/ansible-tmp-1497018879.05-121060742393182/source", "state": "file", "uid": 0 } TASK [openshift_logging_elasticsearch : Create rolebinding-reader role] ******** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:61 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc get clusterrole rolebinding-reader -o json -n logging", "results": [ { "apiVersion": "v1", "kind": "ClusterRole", "metadata": { "creationTimestamp": "2017-06-09T14:34:39Z", "name": "rolebinding-reader", "resourceVersion": "122", "selfLink": "/oapi/v1/clusterroles/rolebinding-reader", "uid": "c5058e4f-4d20-11e7-94cc-0e3d36056ef8" }, "rules": [ { "apiGroups": [ "" ], "attributeRestrictions": null, "resources": [ "clusterrolebindings" ], "verbs": [ "get" ] } ] } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_elasticsearch : Set rolebinding-reader permissions for ES] *** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:72 changed: [openshift] => { "changed": true, "present": "present", "results": { "cmd": "/bin/oc adm policy add-cluster-role-to-user rolebinding-reader 
system:serviceaccount:logging:aggregated-logging-elasticsearch -n logging", "results": "", "returncode": 0 } } TASK [openshift_logging_elasticsearch : Generate logging-elasticsearch-view-role] *** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:81 ok: [openshift] => { "changed": false, "checksum": "d752c09323565f80ed14fa806d42284f0c5aef2a", "dest": "/tmp/openshift-logging-ansible-SRwGtz/logging-elasticsearch-view-role.yaml", "gid": 0, "group": "root", "md5sum": "8299dca2fb036c06ba7c4f620680e0f6", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 183, "src": "/root/.ansible/tmp/ansible-tmp-1497018880.82-150239194232265/source", "state": "file", "uid": 0 } TASK [openshift_logging_elasticsearch : Set logging-elasticsearch-view-role role] *** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:94 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc get rolebinding logging-elasticsearch-view-role -o json -n logging", "results": [ { "apiVersion": "v1", "groupNames": null, "kind": "RoleBinding", "metadata": { "creationTimestamp": "2017-06-09T14:34:41Z", "name": "logging-elasticsearch-view-role", "namespace": "logging", "resourceVersion": "881", "selfLink": "/oapi/v1/namespaces/logging/rolebindings/logging-elasticsearch-view-role", "uid": "c610e682-4d20-11e7-94cc-0e3d36056ef8" }, "roleRef": { "name": "view" }, "subjects": [ { "kind": "ServiceAccount", "name": "aggregated-logging-elasticsearch", "namespace": "logging" } ], "userNames": [ "system:serviceaccount:logging:aggregated-logging-elasticsearch" ] } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_elasticsearch : template] ****************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:105 ok: [openshift] => { "changed": false, "checksum": "f91458d5dad42c496e2081ef872777a6f6eb9ff9", "dest": "/tmp/openshift-logging-ansible-SRwGtz/elasticsearch-logging.yml", "gid": 0, "group": "root", "md5sum": "e4be7c33c1927bbdd8c909bfbe3d9f0b", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 2171, "src": "/root/.ansible/tmp/ansible-tmp-1497018881.81-93830128563448/source", "state": "file", "uid": 0 } TASK [openshift_logging_elasticsearch : template] ****************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:111 ok: [openshift] => { "changed": false, "checksum": "6d4f976f6e77a6e0c8dca7e01fb5bedb68678b1d", "dest": "/tmp/openshift-logging-ansible-SRwGtz/elasticsearch.yml", "gid": 0, "group": "root", "md5sum": "75abfd3a190832e593a8e5e7c5695e8e", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 2454, "src": "/root/.ansible/tmp/ansible-tmp-1497018882.04-9725026939372/source", "state": "file", "uid": 0 } TASK [openshift_logging_elasticsearch : copy] ********************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:121 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_elasticsearch : copy] ********************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:127 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional 
result was False", "skipped": true } TASK [openshift_logging_elasticsearch : Set ES configmap] ********************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:133 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc get configmap logging-elasticsearch -o json -n logging", "results": [ { "apiVersion": "v1", "data": { "elasticsearch.yml": "cluster:\n name: ${CLUSTER_NAME}\n\nscript:\n inline: on\n indexed: on\n\nindex:\n number_of_shards: 1\n number_of_replicas: 0\n unassigned.node_left.delayed_timeout: 2m\n translog:\n flush_threshold_size: 256mb\n flush_threshold_period: 5m\n\nnode:\n master: ${IS_MASTER}\n data: ${HAS_DATA}\n\nnetwork:\n host: 0.0.0.0\n\ncloud:\n kubernetes:\n service: ${SERVICE_DNS}\n namespace: ${NAMESPACE}\n\ndiscovery:\n type: kubernetes\n zen.ping.multicast.enabled: false\n zen.minimum_master_nodes: ${NODE_QUORUM}\n\ngateway:\n recover_after_nodes: ${NODE_QUORUM}\n expected_nodes: ${RECOVER_EXPECTED_NODES}\n recover_after_time: ${RECOVER_AFTER_TIME}\n\nio.fabric8.elasticsearch.authentication.users: [\"system.logging.kibana\", \"system.logging.fluentd\", \"system.logging.curator\", \"system.admin\"]\nio.fabric8.elasticsearch.kibana.mapping.app: /usr/share/elasticsearch/index_patterns/com.redhat.viaq-openshift.index-pattern.json\nio.fabric8.elasticsearch.kibana.mapping.ops: /usr/share/elasticsearch/index_patterns/com.redhat.viaq-openshift.index-pattern.json\nio.fabric8.elasticsearch.kibana.mapping.empty: /usr/share/elasticsearch/index_patterns/com.redhat.viaq-openshift.index-pattern.json\n\nopenshift.config:\n use_common_data_model: true\n project_index_prefix: \"project\"\n time_field_name: \"@timestamp\"\n\nopenshift.searchguard:\n keystore.path: /etc/elasticsearch/secret/admin.jks\n truststore.path: /etc/elasticsearch/secret/searchguard.truststore\n\nopenshift.operations.allow_cluster_reader: false\n\npath:\n data: /elasticsearch/persistent/${CLUSTER_NAME}/data\n logs: /elasticsearch/${CLUSTER_NAME}/logs\n work: /elasticsearch/${CLUSTER_NAME}/work\n scripts: /elasticsearch/${CLUSTER_NAME}/scripts\n\nsearchguard:\n authcz.admin_dn:\n - CN=system.admin,OU=OpenShift,O=Logging\n config_index_name: \".searchguard.${HOSTNAME}\"\n ssl:\n transport:\n enabled: true\n enforce_hostname_verification: false\n keystore_type: JKS\n keystore_filepath: /etc/elasticsearch/secret/searchguard.key\n keystore_password: kspass\n truststore_type: JKS\n truststore_filepath: /etc/elasticsearch/secret/searchguard.truststore\n truststore_password: tspass\n http:\n enabled: true\n keystore_type: JKS\n keystore_filepath: /etc/elasticsearch/secret/key\n keystore_password: kspass\n clientauth_mode: OPTIONAL\n truststore_type: JKS\n truststore_filepath: /etc/elasticsearch/secret/truststore\n truststore_password: tspass\n", "logging.yml": "# you can override this using by setting a system property, for example -Des.logger.level=DEBUG\nes.logger.level: INFO\nrootLogger: ${es.logger.level}, console, file\nlogger:\n # log action execution errors for easier debugging\n action: WARN\n # reduce the logging for aws, too much is logged under the default INFO\n com.amazonaws: WARN\n io.fabric8.elasticsearch: ${PLUGIN_LOGLEVEL}\n io.fabric8.kubernetes: ${PLUGIN_LOGLEVEL}\n\n # gateway\n #gateway: DEBUG\n #index.gateway: DEBUG\n\n # peer shard recovery\n #indices.recovery: DEBUG\n\n # discovery\n #discovery: TRACE\n\n index.search.slowlog: TRACE, index_search_slow_log_file\n index.indexing.slowlog: TRACE, 
index_indexing_slow_log_file\n\n # search-guard\n com.floragunn.searchguard: WARN\n\nadditivity:\n index.search.slowlog: false\n index.indexing.slowlog: false\n\nappender:\n console:\n type: console\n layout:\n type: consolePattern\n conversionPattern: \"[%d{ISO8601}][%-5p][%-25c] %m%n\"\n\n file:\n type: dailyRollingFile\n file: ${path.logs}/${cluster.name}.log\n datePattern: \"'.'yyyy-MM-dd\"\n layout:\n type: pattern\n conversionPattern: \"[%d{ISO8601}][%-5p][%-25c] %m%n\"\n\n # Use the following log4j-extras RollingFileAppender to enable gzip compression of log files.\n # For more information see https://logging.apache.org/log4j/extras/apidocs/org/apache/log4j/rolling/RollingFileAppender.html\n #file:\n #type: extrasRollingFile\n #file: ${path.logs}/${cluster.name}.log\n #rollingPolicy: timeBased\n #rollingPolicy.FileNamePattern: ${path.logs}/${cluster.name}.log.%d{yyyy-MM-dd}.gz\n #layout:\n #type: pattern\n #conversionPattern: \"[%d{ISO8601}][%-5p][%-25c] %m%n\"\n\n index_search_slow_log_file:\n type: dailyRollingFile\n file: ${path.logs}/${cluster.name}_index_search_slowlog.log\n datePattern: \"'.'yyyy-MM-dd\"\n layout:\n type: pattern\n conversionPattern: \"[%d{ISO8601}][%-5p][%-25c] %m%n\"\n\n index_indexing_slow_log_file:\n type: dailyRollingFile\n file: ${path.logs}/${cluster.name}_index_indexing_slowlog.log\n datePattern: \"'.'yyyy-MM-dd\"\n layout:\n type: pattern\n conversionPattern: \"[%d{ISO8601}][%-5p][%-25c] %m%n\"\n" }, "kind": "ConfigMap", "metadata": { "creationTimestamp": "2017-06-09T14:34:42Z", "name": "logging-elasticsearch", "namespace": "logging", "resourceVersion": "1271", "selfLink": "/api/v1/namespaces/logging/configmaps/logging-elasticsearch", "uid": "c6e1e286-4d20-11e7-94cc-0e3d36056ef8" } } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_elasticsearch : Set ES secret] ************************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:144 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc secrets new logging-elasticsearch key=/etc/origin/logging/logging-es.jks truststore=/etc/origin/logging/truststore.jks searchguard.key=/etc/origin/logging/elasticsearch.jks searchguard.truststore=/etc/origin/logging/truststore.jks admin-key=/etc/origin/logging/system.admin.key admin-cert=/etc/origin/logging/system.admin.crt admin-ca=/etc/origin/logging/ca.crt admin.jks=/etc/origin/logging/system.admin.jks -n logging", "results": "", "returncode": 0 }, "state": "present" } TASK [openshift_logging_elasticsearch : Set logging-es-cluster service] ******** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:168 changed: [openshift] => { "changed": true, "results": { "clusterip": "172.30.75.164", "cmd": "/bin/oc get service logging-es-cluster -o json -n logging", "results": [ { "apiVersion": "v1", "kind": "Service", "metadata": { "creationTimestamp": "2017-06-09T14:34:44Z", "name": "logging-es-cluster", "namespace": "logging", "resourceVersion": "1276", "selfLink": "/api/v1/namespaces/logging/services/logging-es-cluster", "uid": "c7e83fb7-4d20-11e7-94cc-0e3d36056ef8" }, "spec": { "clusterIP": "172.30.75.164", "ports": [ { "port": 9300, "protocol": "TCP", "targetPort": 9300 } ], "selector": { "component": "es", "provider": "openshift" }, "sessionAffinity": "None", "type": "ClusterIP" }, "status": { "loadBalancer": {} } } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_elasticsearch : Set logging-es service] 
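Everything rendered into the temp directory ends up as a configmap plus a secret, fronted by the cluster service. One way to verify them after the run, and, for reference only, the same secret expressed with the generic create-secret syntax that newer clients use in place of "oc secrets new"; the key names mirror the command in the task output:

# The \n-escaped elasticsearch.yml in the task output is the file stored in the configmap.
oc get configmap logging-elasticsearch -n logging -o jsonpath='{.data.elasticsearch\.yml}'
oc describe secret logging-elasticsearch -n logging
oc get service logging-es-cluster -n logging

# Reference-only equivalent of the "oc secrets new" call above.
oc create secret generic logging-elasticsearch -n logging \
  --from-file=key=/etc/origin/logging/logging-es.jks \
  --from-file=truststore=/etc/origin/logging/truststore.jks \
  --from-file=searchguard.key=/etc/origin/logging/elasticsearch.jks \
  --from-file=searchguard.truststore=/etc/origin/logging/truststore.jks \
  --from-file=admin-key=/etc/origin/logging/system.admin.key \
  --from-file=admin-cert=/etc/origin/logging/system.admin.crt \
  --from-file=admin-ca=/etc/origin/logging/ca.crt \
  --from-file=admin.jks=/etc/origin/logging/system.admin.jks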
**************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:182 changed: [openshift] => { "changed": true, "results": { "clusterip": "172.30.1.243", "cmd": "/bin/oc get service logging-es -o json -n logging", "results": [ { "apiVersion": "v1", "kind": "Service", "metadata": { "creationTimestamp": "2017-06-09T14:34:45Z", "name": "logging-es", "namespace": "logging", "resourceVersion": "1279", "selfLink": "/api/v1/namespaces/logging/services/logging-es", "uid": "c87faaf6-4d20-11e7-94cc-0e3d36056ef8" }, "spec": { "clusterIP": "172.30.1.243", "ports": [ { "port": 9200, "protocol": "TCP", "targetPort": "restapi" } ], "selector": { "component": "es", "provider": "openshift" }, "sessionAffinity": "None", "type": "ClusterIP" }, "status": { "loadBalancer": {} } } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_elasticsearch : Creating ES storage template] ********** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:197 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_elasticsearch : Creating ES storage template] ********** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:210 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_elasticsearch : Set ES storage] ************************ task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:225 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_elasticsearch : set_fact] ****************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:237 ok: [openshift] => { "ansible_facts": { "es_deploy_name": "logging-es-data-master-bj8649rd" }, "changed": false } TASK [openshift_logging_elasticsearch : set_fact] ****************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:241 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_elasticsearch : Set ES dc templates] ******************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:246 changed: [openshift] => { "changed": true, "checksum": "67f82f556266f6896be8f9ff684717cdfe2b2cf7", "dest": "/tmp/openshift-logging-ansible-SRwGtz/templates/logging-es-dc.yml", "gid": 0, "group": "root", "md5sum": "efe04bbf8bbef98e86981cc928650ec4", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 3139, "src": "/root/.ansible/tmp/ansible-tmp-1497018886.07-90323797348635/source", "state": "file", "uid": 0 } TASK [openshift_logging_elasticsearch : Set ES dc] ***************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:262 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc get dc logging-es-data-master-bj8649rd -o json -n logging", "results": [ { "apiVersion": "v1", "kind": "DeploymentConfig", "metadata": { "creationTimestamp": "2017-06-09T14:34:46Z", "generation": 2, "labels": { "component": "es", "deployment": "logging-es-data-master-bj8649rd", "logging-infra": 
"elasticsearch", "provider": "openshift" }, "name": "logging-es-data-master-bj8649rd", "namespace": "logging", "resourceVersion": "1293", "selfLink": "/oapi/v1/namespaces/logging/deploymentconfigs/logging-es-data-master-bj8649rd", "uid": "c93308b3-4d20-11e7-94cc-0e3d36056ef8" }, "spec": { "replicas": 1, "selector": { "component": "es", "deployment": "logging-es-data-master-bj8649rd", "logging-infra": "elasticsearch", "provider": "openshift" }, "strategy": { "activeDeadlineSeconds": 21600, "recreateParams": { "timeoutSeconds": 600 }, "resources": {}, "type": "Recreate" }, "template": { "metadata": { "creationTimestamp": null, "labels": { "component": "es", "deployment": "logging-es-data-master-bj8649rd", "logging-infra": "elasticsearch", "provider": "openshift" }, "name": "logging-es-data-master-bj8649rd" }, "spec": { "containers": [ { "env": [ { "name": "NAMESPACE", "valueFrom": { "fieldRef": { "apiVersion": "v1", "fieldPath": "metadata.namespace" } } }, { "name": "KUBERNETES_TRUST_CERT", "value": "true" }, { "name": "SERVICE_DNS", "value": "logging-es-cluster" }, { "name": "CLUSTER_NAME", "value": "logging-es" }, { "name": "INSTANCE_RAM", "value": "8Gi" }, { "name": "NODE_QUORUM", "value": "1" }, { "name": "RECOVER_EXPECTED_NODES", "value": "1" }, { "name": "RECOVER_AFTER_TIME", "value": "5m" }, { "name": "READINESS_PROBE_TIMEOUT", "value": "30" }, { "name": "IS_MASTER", "value": "true" }, { "name": "HAS_DATA", "value": "true" } ], "image": "172.30.106.159:5000/logging/logging-elasticsearch:latest", "imagePullPolicy": "Always", "name": "elasticsearch", "ports": [ { "containerPort": 9200, "name": "restapi", "protocol": "TCP" }, { "containerPort": 9300, "name": "cluster", "protocol": "TCP" } ], "readinessProbe": { "exec": { "command": [ "/usr/share/elasticsearch/probe/readiness.sh" ] }, "failureThreshold": 3, "initialDelaySeconds": 10, "periodSeconds": 5, "successThreshold": 1, "timeoutSeconds": 30 }, "resources": { "limits": { "cpu": "1", "memory": "8Gi" }, "requests": { "memory": "512Mi" } }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/etc/elasticsearch/secret", "name": "elasticsearch", "readOnly": true }, { "mountPath": "/usr/share/java/elasticsearch/config", "name": "elasticsearch-config", "readOnly": true }, { "mountPath": "/elasticsearch/persistent", "name": "elasticsearch-storage" } ] } ], "dnsPolicy": "ClusterFirst", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": { "supplementalGroups": [ 65534 ] }, "serviceAccount": "aggregated-logging-elasticsearch", "serviceAccountName": "aggregated-logging-elasticsearch", "terminationGracePeriodSeconds": 30, "volumes": [ { "name": "elasticsearch", "secret": { "defaultMode": 420, "secretName": "logging-elasticsearch" } }, { "configMap": { "defaultMode": 420, "name": "logging-elasticsearch" }, "name": "elasticsearch-config" }, { "emptyDir": {}, "name": "elasticsearch-storage" } ] } }, "test": false, "triggers": [ { "type": "ConfigChange" } ] }, "status": { "availableReplicas": 0, "conditions": [ { "lastTransitionTime": "2017-06-09T14:34:46Z", "lastUpdateTime": "2017-06-09T14:34:46Z", "message": "Deployment config does not have minimum availability.", "status": "False", "type": "Available" }, { "lastTransitionTime": "2017-06-09T14:34:46Z", "lastUpdateTime": "2017-06-09T14:34:46Z", "message": "replication controller \"logging-es-data-master-bj8649rd-1\" is waiting for pod \"logging-es-data-master-bj8649rd-1-deploy\" to run", "status": 
"Unknown", "type": "Progressing" } ], "details": { "causes": [ { "type": "ConfigChange" } ], "message": "config change" }, "latestVersion": 1, "observedGeneration": 2, "replicas": 0, "unavailableReplicas": 0, "updatedReplicas": 0 } } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_elasticsearch : Delete temp directory] ***************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:274 ok: [openshift] => { "changed": false, "path": "/tmp/openshift-logging-ansible-SRwGtz", "state": "absent" } TASK [openshift_logging : set_fact] ******************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:99 TASK [openshift_logging : set_fact] ******************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:105 ok: [openshift] => { "ansible_facts": { "es_ops_indices": "[]" }, "changed": false } TASK [openshift_logging : include_role] **************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:109 TASK [openshift_logging : include_role] **************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:132 statically included: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/determine_version.yaml TASK [openshift_logging_elasticsearch : Validate Elasticsearch cluster size] *** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:2 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_elasticsearch : Validate Elasticsearch Ops cluster size] *** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:6 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_elasticsearch : fail] ********************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:10 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_elasticsearch : set_fact] ****************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:14 ok: [openshift] => { "ansible_facts": { "elasticsearch_name": "logging-elasticsearch-ops", "es_component": "es-ops" }, "changed": false } TASK [openshift_logging_elasticsearch : fail] ********************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/determine_version.yaml:3 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_elasticsearch : set_fact] ****************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/determine_version.yaml:7 ok: [openshift] => { "ansible_facts": { "es_version": "3_5" }, "changed": false } TASK [openshift_logging_elasticsearch : debug] ********************************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/determine_version.yaml:11 ok: [openshift] => { "changed": 
false, "openshift_logging_image_version": "latest" } TASK [openshift_logging_elasticsearch : set_fact] ****************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/determine_version.yaml:14 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_elasticsearch : fail] ********************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/determine_version.yaml:17 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_elasticsearch : Create temp directory for doing work in] *** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:21 ok: [openshift] => { "changed": false, "cmd": [ "mktemp", "-d", "/tmp/openshift-logging-ansible-XXXXXX" ], "delta": "0:00:00.002326", "end": "2017-06-09 10:34:48.051070", "rc": 0, "start": "2017-06-09 10:34:48.048744" } STDOUT: /tmp/openshift-logging-ansible-nlPqto TASK [openshift_logging_elasticsearch : set_fact] ****************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:26 ok: [openshift] => { "ansible_facts": { "tempdir": "/tmp/openshift-logging-ansible-nlPqto" }, "changed": false } TASK [openshift_logging_elasticsearch : Create templates subdirectory] ********* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:30 ok: [openshift] => { "changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/tmp/openshift-logging-ansible-nlPqto/templates", "secontext": "unconfined_u:object_r:user_tmp_t:s0", "size": 6, "state": "directory", "uid": 0 } TASK [openshift_logging_elasticsearch : Create ES service account] ************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:40 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_elasticsearch : Create ES service account] ************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:48 ok: [openshift] => { "changed": false, "results": { "cmd": "/bin/oc get sa aggregated-logging-elasticsearch -o json -n logging", "results": [ { "apiVersion": "v1", "imagePullSecrets": [ { "name": "aggregated-logging-elasticsearch-dockercfg-sb6sd" } ], "kind": "ServiceAccount", "metadata": { "creationTimestamp": "2017-06-09T14:34:38Z", "name": "aggregated-logging-elasticsearch", "namespace": "logging", "resourceVersion": "1264", "selfLink": "/api/v1/namespaces/logging/serviceaccounts/aggregated-logging-elasticsearch", "uid": "c4620683-4d20-11e7-94cc-0e3d36056ef8" }, "secrets": [ { "name": "aggregated-logging-elasticsearch-token-b9jxb" }, { "name": "aggregated-logging-elasticsearch-dockercfg-sb6sd" } ] } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_elasticsearch : copy] ********************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:57 changed: [openshift] => { "changed": true, "checksum": "e5015364391ac609da8655a9a1224131599a5cea", "dest": "/tmp/openshift-logging-ansible-nlPqto/rolebinding-reader.yml", "gid": 0, "group": "root", "md5sum": "446fb96447527f48f97e69bb41bad7be", "mode": "0644", 
"owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 135, "src": "/root/.ansible/tmp/ansible-tmp-1497018888.76-64287161081327/source", "state": "file", "uid": 0 } TASK [openshift_logging_elasticsearch : Create rolebinding-reader role] ******** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:61 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc get clusterrole rolebinding-reader -o json -n logging", "results": [ { "apiVersion": "v1", "kind": "ClusterRole", "metadata": { "creationTimestamp": "2017-06-09T14:34:39Z", "name": "rolebinding-reader", "resourceVersion": "122", "selfLink": "/oapi/v1/clusterroles/rolebinding-reader", "uid": "c5058e4f-4d20-11e7-94cc-0e3d36056ef8" }, "rules": [ { "apiGroups": [ "" ], "attributeRestrictions": null, "resources": [ "clusterrolebindings" ], "verbs": [ "get" ] } ] } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_elasticsearch : Set rolebinding-reader permissions for ES] *** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:72 ok: [openshift] => { "changed": false, "present": "present" } TASK [openshift_logging_elasticsearch : Generate logging-elasticsearch-view-role] *** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:81 ok: [openshift] => { "changed": false, "checksum": "d752c09323565f80ed14fa806d42284f0c5aef2a", "dest": "/tmp/openshift-logging-ansible-nlPqto/logging-elasticsearch-view-role.yaml", "gid": 0, "group": "root", "md5sum": "8299dca2fb036c06ba7c4f620680e0f6", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 183, "src": "/root/.ansible/tmp/ansible-tmp-1497018890.39-106851878014006/source", "state": "file", "uid": 0 } TASK [openshift_logging_elasticsearch : Set logging-elasticsearch-view-role role] *** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:94 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc get rolebinding logging-elasticsearch-view-role -o json -n logging", "results": [ { "apiVersion": "v1", "groupNames": null, "kind": "RoleBinding", "metadata": { "creationTimestamp": "2017-06-09T14:34:41Z", "name": "logging-elasticsearch-view-role", "namespace": "logging", "resourceVersion": "1269", "selfLink": "/oapi/v1/namespaces/logging/rolebindings/logging-elasticsearch-view-role", "uid": "c610e682-4d20-11e7-94cc-0e3d36056ef8" }, "roleRef": { "name": "view" }, "subjects": [ { "kind": "ServiceAccount", "name": "aggregated-logging-elasticsearch", "namespace": "logging" } ], "userNames": [ "system:serviceaccount:logging:aggregated-logging-elasticsearch" ] } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_elasticsearch : template] ****************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:105 ok: [openshift] => { "changed": false, "checksum": "f91458d5dad42c496e2081ef872777a6f6eb9ff9", "dest": "/tmp/openshift-logging-ansible-nlPqto/elasticsearch-logging.yml", "gid": 0, "group": "root", "md5sum": "e4be7c33c1927bbdd8c909bfbe3d9f0b", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 2171, "src": "/root/.ansible/tmp/ansible-tmp-1497018891.71-204363958984287/source", "state": "file", "uid": 0 } TASK [openshift_logging_elasticsearch : template] ****************************** task 
path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:111 ok: [openshift] => { "changed": false, "checksum": "6d4f976f6e77a6e0c8dca7e01fb5bedb68678b1d", "dest": "/tmp/openshift-logging-ansible-nlPqto/elasticsearch.yml", "gid": 0, "group": "root", "md5sum": "75abfd3a190832e593a8e5e7c5695e8e", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 2454, "src": "/root/.ansible/tmp/ansible-tmp-1497018892.02-32866246493020/source", "state": "file", "uid": 0 } TASK [openshift_logging_elasticsearch : copy] ********************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:121 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_elasticsearch : copy] ********************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:127 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_elasticsearch : Set ES configmap] ********************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:133 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc get configmap logging-elasticsearch-ops -o json -n logging", "results": [ { "apiVersion": "v1", "data": { "elasticsearch.yml": "cluster:\n name: ${CLUSTER_NAME}\n\nscript:\n inline: on\n indexed: on\n\nindex:\n number_of_shards: 1\n number_of_replicas: 0\n unassigned.node_left.delayed_timeout: 2m\n translog:\n flush_threshold_size: 256mb\n flush_threshold_period: 5m\n\nnode:\n master: ${IS_MASTER}\n data: ${HAS_DATA}\n\nnetwork:\n host: 0.0.0.0\n\ncloud:\n kubernetes:\n service: ${SERVICE_DNS}\n namespace: ${NAMESPACE}\n\ndiscovery:\n type: kubernetes\n zen.ping.multicast.enabled: false\n zen.minimum_master_nodes: ${NODE_QUORUM}\n\ngateway:\n recover_after_nodes: ${NODE_QUORUM}\n expected_nodes: ${RECOVER_EXPECTED_NODES}\n recover_after_time: ${RECOVER_AFTER_TIME}\n\nio.fabric8.elasticsearch.authentication.users: [\"system.logging.kibana\", \"system.logging.fluentd\", \"system.logging.curator\", \"system.admin\"]\nio.fabric8.elasticsearch.kibana.mapping.app: /usr/share/elasticsearch/index_patterns/com.redhat.viaq-openshift.index-pattern.json\nio.fabric8.elasticsearch.kibana.mapping.ops: /usr/share/elasticsearch/index_patterns/com.redhat.viaq-openshift.index-pattern.json\nio.fabric8.elasticsearch.kibana.mapping.empty: /usr/share/elasticsearch/index_patterns/com.redhat.viaq-openshift.index-pattern.json\n\nopenshift.config:\n use_common_data_model: true\n project_index_prefix: \"project\"\n time_field_name: \"@timestamp\"\n\nopenshift.searchguard:\n keystore.path: /etc/elasticsearch/secret/admin.jks\n truststore.path: /etc/elasticsearch/secret/searchguard.truststore\n\nopenshift.operations.allow_cluster_reader: false\n\npath:\n data: /elasticsearch/persistent/${CLUSTER_NAME}/data\n logs: /elasticsearch/${CLUSTER_NAME}/logs\n work: /elasticsearch/${CLUSTER_NAME}/work\n scripts: /elasticsearch/${CLUSTER_NAME}/scripts\n\nsearchguard:\n authcz.admin_dn:\n - CN=system.admin,OU=OpenShift,O=Logging\n config_index_name: \".searchguard.${HOSTNAME}\"\n ssl:\n transport:\n enabled: true\n enforce_hostname_verification: false\n keystore_type: JKS\n keystore_filepath: /etc/elasticsearch/secret/searchguard.key\n keystore_password: 
kspass\n truststore_type: JKS\n truststore_filepath: /etc/elasticsearch/secret/searchguard.truststore\n truststore_password: tspass\n http:\n enabled: true\n keystore_type: JKS\n keystore_filepath: /etc/elasticsearch/secret/key\n keystore_password: kspass\n clientauth_mode: OPTIONAL\n truststore_type: JKS\n truststore_filepath: /etc/elasticsearch/secret/truststore\n truststore_password: tspass\n", "logging.yml": "# you can override this using by setting a system property, for example -Des.logger.level=DEBUG\nes.logger.level: INFO\nrootLogger: ${es.logger.level}, console, file\nlogger:\n # log action execution errors for easier debugging\n action: WARN\n # reduce the logging for aws, too much is logged under the default INFO\n com.amazonaws: WARN\n io.fabric8.elasticsearch: ${PLUGIN_LOGLEVEL}\n io.fabric8.kubernetes: ${PLUGIN_LOGLEVEL}\n\n # gateway\n #gateway: DEBUG\n #index.gateway: DEBUG\n\n # peer shard recovery\n #indices.recovery: DEBUG\n\n # discovery\n #discovery: TRACE\n\n index.search.slowlog: TRACE, index_search_slow_log_file\n index.indexing.slowlog: TRACE, index_indexing_slow_log_file\n\n # search-guard\n com.floragunn.searchguard: WARN\n\nadditivity:\n index.search.slowlog: false\n index.indexing.slowlog: false\n\nappender:\n console:\n type: console\n layout:\n type: consolePattern\n conversionPattern: \"[%d{ISO8601}][%-5p][%-25c] %m%n\"\n\n file:\n type: dailyRollingFile\n file: ${path.logs}/${cluster.name}.log\n datePattern: \"'.'yyyy-MM-dd\"\n layout:\n type: pattern\n conversionPattern: \"[%d{ISO8601}][%-5p][%-25c] %m%n\"\n\n # Use the following log4j-extras RollingFileAppender to enable gzip compression of log files.\n # For more information see https://logging.apache.org/log4j/extras/apidocs/org/apache/log4j/rolling/RollingFileAppender.html\n #file:\n #type: extrasRollingFile\n #file: ${path.logs}/${cluster.name}.log\n #rollingPolicy: timeBased\n #rollingPolicy.FileNamePattern: ${path.logs}/${cluster.name}.log.%d{yyyy-MM-dd}.gz\n #layout:\n #type: pattern\n #conversionPattern: \"[%d{ISO8601}][%-5p][%-25c] %m%n\"\n\n index_search_slow_log_file:\n type: dailyRollingFile\n file: ${path.logs}/${cluster.name}_index_search_slowlog.log\n datePattern: \"'.'yyyy-MM-dd\"\n layout:\n type: pattern\n conversionPattern: \"[%d{ISO8601}][%-5p][%-25c] %m%n\"\n\n index_indexing_slow_log_file:\n type: dailyRollingFile\n file: ${path.logs}/${cluster.name}_index_indexing_slowlog.log\n datePattern: \"'.'yyyy-MM-dd\"\n layout:\n type: pattern\n conversionPattern: \"[%d{ISO8601}][%-5p][%-25c] %m%n\"\n" }, "kind": "ConfigMap", "metadata": { "creationTimestamp": "2017-06-09T14:34:52Z", "name": "logging-elasticsearch-ops", "namespace": "logging", "resourceVersion": "1319", "selfLink": "/api/v1/namespaces/logging/configmaps/logging-elasticsearch-ops", "uid": "ccda60b8-4d20-11e7-94cc-0e3d36056ef8" } } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_elasticsearch : Set ES secret] ************************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:144 ok: [openshift] => { "changed": false, "results": { "apiVersion": "v1", "data": { "admin-ca": 
"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMyakNDQWNLZ0F3SUJBZ0lCQVRBTkJna3Foa2lHOXcwQkFRc0ZBREFlTVJ3d0dnWURWUVFERXhOc2IyZG4KYVc1bkxYTnBaMjVsY2kxMFpYTjBNQjRYRFRFM01EWXdPVEUwTXpReE1sb1hEVEl5TURZd09ERTBNelF4TTFvdwpIakVjTUJvR0ExVUVBeE1UYkc5bloybHVaeTF6YVdkdVpYSXRkR1Z6ZERDQ0FTSXdEUVlKS29aSWh2Y05BUUVCCkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU9NZWh0UnZGekhNN1RZR2h3bzNVUlBHZWdPZ3JVeWhId2NNbERsMHpDUisKQm9vcGxiTUVRWHlvM1Bad294YXYrUjM2eDhMbXlJbGdhWnJtVHNFcXN2cm16aTZBY05xdGNoWVJwYlRpSkRuUQpuR3ZrYkt6SEpZazJLTlVza1R4VytyWitNTkxWVHdrU0QvVThVeHE0RHFiZGtsRTZZY2M3bDhkWVpMRTB4M0thCnU4UTFrclFZa2ozMk5idzA1UUFnYmg3MmdKc0J6akQwZm9XZDcwUHlXNTNJVTk3RERsU0o0MkdVc3hEd0tzWmUKTkt6eGNJTCsrc0c2ZHl0aG9qRFkzNWMyWjNERXV6NkpwTHJZUFhuK3hsbEhrTklBMkNtTFZYanp3eEFDcTF2RQpBRnJwamxydEVTRGMweU1WOG9IeW03Ymtyci9RaGFVL2FtcmdsSXMwdWNrQ0F3RUFBYU1qTUNFd0RnWURWUjBQCkFRSC9CQVFEQWdLa01BOEdBMVVkRXdFQi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFKTmQKV2Z6WEZVNEtrYitDLzZnZ1ZxbmtoQjEzQ0NHS1pvRE9VbkphYWVaTzhEMTNUMGhvTnpORXF5NlRBREVQMUpobgpaMTA1Z3E4M3NES3ZoQlp6RVZ1ajZXWlNvbFZDZGh4M21TOWZjK0EyZjlDcUtsci9MejJjQk5ZcUFrQVJtKzAxCm1aU21ONzNxQUhOWkNDS0lMZGdrRXdKQSthTFNLbG1xbVZXZU9ZdVBVc3JDTTUweGJLWlJmUGJPZk1aL1ZJTkcKTlQxMys3VmJwaWF4RG1HcXliMnRWRmNOUEg5NzhuSXM5OXFSWUhDWU00cnE5SzRZRkFzaXhTY1VpU1ZZdHl0NApsdEVyT3dYMXB3UllvemsraTIrMFVtRGpvOGVZdVJqaXp5cFJlMmJleEdrZUpIMW15eGthNXVGWk1uN0VRM3kvCi9STSsrOXdRVVJSNjY5enNZNnc9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K", "admin-cert": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURQRENDQWlTZ0F3SUJBZ0lCQlRBTkJna3Foa2lHOXcwQkFRVUZBREFlTVJ3d0dnWURWUVFERXhOc2IyZG4KYVc1bkxYTnBaMjVsY2kxMFpYTjBNQjRYRFRFM01EWXdPVEUwTXpReU1Gb1hEVEU1TURZd09URTBNelF5TUZvdwpQVEVRTUE0R0ExVUVDZ3dIVEc5bloybHVaekVTTUJBR0ExVUVDd3dKVDNCbGJsTm9hV1owTVJVd0V3WURWUVFECkRBeHplWE4wWlcwdVlXUnRhVzR3Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLQW9JQkFRREMKYUVXV2pUUThTL2VwV1llUjZYR0JqTkxTaWp3R1BWOXlLek5kc3k5eUdKY0xiU29pTDNjWGZGYjFhWjNMd3M1awpyUExjMU5XTHZ4VmNuY0JGc2pEWTBOZlFrM3duYjlrWE96YnBPSlVqcGFCd0kwSzFpazFnMVE4L0p0Q1N6Z1c2Cld4c3RKUHgwRXhzUW8vTHZYanVObVlDaVVlV2dtWTFjVFRqSktRU213dFduK3JRL09yWFYwT05ZOEcrMXF6SFMKbVkwUnRUaDA5dCszMk42Uy9rdlkwUS9OQWZ0U2tHUm1YeUc4ampPQTVOUUZmVTNvY1gyb29UVTdzbFJKb2dPcgpHN2ZsdjR1eUVnbDh4ZGZpTFJBeG1zUFJwZjRnM2UvYmhvZTNPL2NMNk5HZFdBVGVLMXlEd1pINm5McG1ySXV6Ci9hM3RSSHc5VU5saUFTZys2K2F0QWdNQkFBR2paakJrTUE0R0ExVWREd0VCL3dRRUF3SUZvREFKQmdOVkhSTUUKQWpBQU1CMEdBMVVkSlFRV01CUUdDQ3NHQVFVRkJ3TUJCZ2dyQmdFRkJRY0RBakFkQmdOVkhRNEVGZ1FVTWUzUwpTRytNWDRMb3dTY0M2eEtydDI0eTVBb3dDUVlEVlIwakJBSXdBREFOQmdrcWhraUc5dzBCQVFVRkFBT0NBUUVBCnl4dGY5RERjNkFLclllSUVpeWV5dWVmSkFxbWZCZjBTbFk5UWFiWkVsQ1VlVVY4bGdrNS8yRGJ3ZnNUTys1RnAKNFl3S1prb3hPc0p3VkVqWU5hT0hQWGpZSVJ0Sk5xMWVteFgrQklxSjd1b0E3Mm5MV05pa1U0RTNhMTdwc2gyWgp1dHRDcTRYUzZveWIyNy9yeUk1dFJJeHNSTmJUUHdDK3RmL0ZXbzE2UmNiQTRuUDh4b09tVUVyMnArVDBkYy8zCmRybURteE9LdDJEOWh2WDZpcWFYdUg4eEEwR04zeUhvd29GK0RiYUhscStJNHpIMmNvVGtFZnM4dWtoS3JWWVMKYjFhTmZWQmVtVW9tNUsxYTNsU3FWTXI3QnlVYkh1Mzhzc3VRVFdBcENxRWNjaVhSc0NGZm0wWXpmOFN5UE9NZQowNno5WnVQZm1KWDRnQXY5cElOdFF3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=", "admin-key": 
"LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2d0lCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktrd2dnU2xBZ0VBQW9JQkFRRENhRVdXalRROFMvZXAKV1llUjZYR0JqTkxTaWp3R1BWOXlLek5kc3k5eUdKY0xiU29pTDNjWGZGYjFhWjNMd3M1a3JQTGMxTldMdnhWYwpuY0JGc2pEWTBOZlFrM3duYjlrWE96YnBPSlVqcGFCd0kwSzFpazFnMVE4L0p0Q1N6Z1c2V3hzdEpQeDBFeHNRCm8vTHZYanVObVlDaVVlV2dtWTFjVFRqSktRU213dFduK3JRL09yWFYwT05ZOEcrMXF6SFNtWTBSdFRoMDl0KzMKMk42Uy9rdlkwUS9OQWZ0U2tHUm1YeUc4ampPQTVOUUZmVTNvY1gyb29UVTdzbFJKb2dPckc3Zmx2NHV5RWdsOAp4ZGZpTFJBeG1zUFJwZjRnM2UvYmhvZTNPL2NMNk5HZFdBVGVLMXlEd1pINm5McG1ySXV6L2EzdFJIdzlVTmxpCkFTZys2K2F0QWdNQkFBRUNnZ0VCQUk2TEVBTWNrK0ZtUGppUTZjT0Y3SEQyQlpyVU9zREVmVmhqN3F6VWRvUnQKSFVzR2h1ODc2RkZ6SFB1aXJrMjZEOFZudmtkSFV6QzlNZmVQdjJ3YkJJL2xTV2lveTA3TFJ0MHUwTXRlYnBRTgpuRDY4eSt2NmRWUDd4TXNrTmFoK29WcUw5TGc4TjFNUXN6YVhUOGhOU3RNL2F6OFpWNHBUTElBeUt1SHNUbm5LCmRldExHOXpkdkYyd0FmTm91eWZCZy9UTUVUelNyTEgxYU8yamt5YjBFMnlwalY3UThDcFhqOUNhajJjSTB1NUQKQVp6NVk1MHNUTktNRldjdEI1eWhsbWQwTk41eG42SktrdHFTM29ycEJ6RFdtcnhVUW9sWEZ3akJSZ0Ewbkl1RApLZ0xnUk5MVjVMRGJ0bHpoREhGU082aWtGbXVrSnFEUFZkSkRZRXdvRWswQ2dZRUEvelFsaHdDNDVUalRTV2JuClhtMTdkaVVjMENQUGRscHBIVnJSSTEySHZlRlpRS2szZTJsRjg3MGY1d3NmQU1Zb1RZbG9wdUtyZVV5QmwzNXYKYWQ3SC9ZY1dRWEVuTGhZejVHQU94a21JckpZQ052ZXBFK1JVY1ZET01la1pqU1RTaytZekpWekNBbkVaYlIzYQpoNFpaUHBMZTcrK2RxRU0rSVJIdkZLVlhRMk1DZ1lFQXd3T1AxdEVyS3dIV1dHVHZLdkp4WVNibnNuSUk0OWJQCkw2Nno0amF2N0lKT2pFQnZZUWdxTVRNYjJDU0xHOTV2bm5FOVFxMVJKTGc0T3JDZ3NNd29EMi9YL0dKVWZsZFcKOHhDNEcySkxOUCttMzJhVnNPWTFMaGtqSFZUcERJeVIySWFOQldDMzdjOUxYQ1JqcVZITDUzdDIzN1JTVS9GQwovOUhwQW5RT3NxOENnWUVBc2pEM0F3eStEVTlnT0NCaDdNMEZKN2xDSlJMY0NRZVgzYWRMNENXdVlpYTI2eTg4ClRpOXphSHpsaWExNk9GQWtVLzlkMHlqeUVnQVpmRzRMM1NCeEE5VU85U0xNK0tFSUdxMzNvdncxTWt5THYxV3QKK1BXMUFHb3JqeTN6YVZvTXJyaE5mZ2tHYmk0S1V1WkZiOXVlOU5JWVYvQTNaUVdPbkFpcHB3RExyWUVDZ1lBSQpJZVRrczNKV1o4dzFnWGdMMVhKdTk3MWZ6cXVhUE1JRkhnYjRYd2wxRm5ZS0dVSEx6UmhkVnVGSllUUy80OFhKCjJMVVNTOEgvZ3dNdFIySmNIUmRxbFdKdmJ2WlJFbmxZeDVDMTY2SnhRbHdHSXZRSkhZQ0lQSm9mUmdRMTlzSzYKUWRvdHFEdmpXZXF0bkFMZjg3NUtGL2I4R3p2M3JpNzZGaG5lZFZLTGx3S0JnUURjQ3Z5Ulo3em45ajMxZ3V1KwpsazBCV2MxaU4ySWpPcjh2NmJhREthOUE3K2dNM1hoVndENThCOEsyTmFSUXpxTUNGQm4wZkt5WWtyemJnNkdXCjhCNXRPMUJLU3g1NCsrYXdQNXZrMzlhL0R6dHIzd1V3WnlsK0ZPbm9NOGp5Vll6Z3oremNhdk9zOG5vUEtlT1kKcGsrMTFiL3ZURE9Kb3ZVMDFNNlB1RDZkNkE9PQotLS0tLUVORCBQUklWQVRFIEtFWS0tLS0tCg==", "admin.jks": 
"/u3+7QAAAAIAAAACAAAAAQAMc3lzdGVtLmFkbWluAAABXI1HOJwAAAUBMIIE/TAOBgorBgEEASoCEQEBBQAEggTpDC/mPYl/7x1TXM1wyXKqxe/fVD1o9UApKVvS0wTrMH6Trvy7H7vWDfafSYii96tFYx6SLxQWE78b8vWBIBiBzosS2U3Svrl36Ne8Z8PEf5Vo+fZhcw5a2zqeVG4rzaLMxTZwxnbepUr19+cEbeAO5qlCS0gsp6TWi5hNrrS15IMqnKDYZzoYj3Z2sTkWpeuWE23yUVZ57Faq7caZgaVGE0L6iMQ7+gCvUOCIIPMxEgBQ6OsR/4IqzWdXjTQ4gqwJfcaGhfRpO7ZVnBznkXlnbrmrz8lZ0lKVoqUKXqeUvhhBFpSC6giD20r5WmmC0lhfeDIrZ+2Zlv2mjoXXYYb8WY9fYy9rh0V2szVLx9gOIsdXQJVXVRgtmqnPs78HUP8fuEjl2y2gC2W+CHECYJZ1/KWONcg603xR6M6FqGAGukFtAv7/WrD7MslWXI+jgUQAT50Css8vydi1AftavdD2mJ0H9SXkBeGTLM33rKXJ7N12T48fwqf0BdDoFgK7e8T3Lal+GAMGuXNJ9oRNkjLqSQ4Kxzta+nzIasye+L66l9ws47nBF/pjH+Y1kPq9wTVK/tl6uL6uQcOo2uaxB+eGvP1AQmObUIs3PRj+9Pr3pi6L+FugwU8SLVnsXmJiQEgZuEZtUGVeF12u4g8aDjL3rM3BqGIShDVyr5p/zZ6MqJmWUNZwJfwyRpPPc92ybrH9mXHot04hoifDl1mFIPsYjv6Xo8b8v6q8XwOtlv+Pijg9lnzRhSGI1E7VOR1KFsw4KNGiSEvTV1wOPic5I/t2U4ZOXSnLTDtu1L1ZGMSmiwf+wbIM2d0ZBxrtKes3ueUMCz1ITRKnCb61qHKOpFT9WPP8vuZeGRb9/P27dJGncuAvDkIi2zUVX5kBvCAri/H3rhiKtYWwjsq6KSy8SOOZURksiE01S8aHYOuew5f7kR97C20tMl4Qxxg6tSaJXkSIUwlchvDW8pYiEEDsk1oEUiwrsGAw2TG85AfXh4vuO0bQKKnueDTCJz5CDwfdL9t+Fj+7mV6AwgnMSQQ79r1kpaPYnIFTx7jYC77qedx3l9oI5uKP9oP2ZXuAH8LYAqAD1ygdz1lX+7Zx1LrcsXKbOvEC4qC1rDxjwdw/G0KfUDJP4N2jH1Ig0WyKrkT9Xyizsnlixycu1feW1ggjdp1FIfcaaAe7/Wmw/ZQ9Umix5djCjHe5BcYjx+tmF1P/i5Nizu4p8caDIAhorXY8nWPRz9Bh4ON152imxV5VrHxx9nKsYWRpmVy2h1BNt/ulbPeYWS3q3m9uuW3VQjtpB11Wql6BGdSar6f2cahr9mRfvihJe1CxqE2bN77HbVNhn6DhD/g8k8bU8Iy2ydZMtSfCLq6qingscKvJu5WtbmUqZqlSoDKWx6U7MV2iC59AifP+jrLHkfBTKyo+vH8pd+uy2cDPBvwP7XKH/1gsPeRCa2DwwW5KayjrC2YKF3+BlZFV4FOC+0y2FaYfdAGlygyXmQ0GKCEXpcv4eqTZNavnOS9okSiuWG0PAxC9COQJzaZur087SEiuzdB2MXjmJDzwCdqKV/zMmUEjABzU+wwvSh9Efcqdp8RR2O/tuHunrIOl7x2POYTSjSyTaMalfh5UjJR/OvTPGD+iTlWifBicgQujNB6Nrc6CWjUvHxtea0tedY5Wmz0kGEEqAAAAAgAFWC41MDkAAANAMIIDPDCCAiSgAwIBAgIBBjANBgkqhkiG9w0BAQUFADAeMRwwGgYDVQQDExNsb2dnaW5nLXNpZ25lci10ZXN0MB4XDTE3MDYwOTE0MzQzM1oXDTE5MDYwOTE0MzQzM1owPTEQMA4GA1UEChMHTG9nZ2luZzESMBAGA1UECxMJT3BlblNoaWZ0MRUwEwYDVQQDEwxzeXN0ZW0uYWRtaW4wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC3OJUj7uS7wvG3mK4u5M3zjMMR7z63CEFZOXyZVdkVf8swH+mq8v5wSDaFbLZ4oNOejKJXCWXDq+becw+qCnAScppHirC3it/w0ZbmApTo4PmerjgDH/4Wl2cEZorkuQlmrKleWx5QR5ptukZDGvqDPHi+5K0E7S7X9lBiJH426p7v4+pWx0UA6rViM7IqXOyflQdY8K3pNfgWreTRPd2LotwalpSDSNAiTmmCjwpgDdH3xeng9KpCzt97OCY2poedaeB307Fm5rNsFjL7QfGh4Vhjdb8LWFgNDKpXmW9d+AdqV+UyUCZvxDU64sXupNDj6jRYpD7BQT1BqD8t6wwLAgMBAAGjZjBkMA4GA1UdDwEB/wQEAwIFoDAJBgNVHRMEAjAAMB0GA1UdJQQWMBQGCCsGAQUFBwMBBggrBgEFBQcDAjAdBgNVHQ4EFgQUL5p0Arce+kLEVSD0mcSyEOnCJqswCQYDVR0jBAIwADANBgkqhkiG9w0BAQUFAAOCAQEAMsRzAyyNUKYOB9vXRuQSnw3QYmMH7nzo5+GJ1xETr97nyhCbMu7WQOvsLjbTjSjrLTn406z/VYeJOY/CHm7sXrSaBGMbddbxDyAyo48hpDVlEH0dYxKzdqJLM78QTzPX8clWbqvs3o8NgsQyjyJjNYB2N+MHdnPcYA54NRLwp8yvAhlbwPUmfV/hFlIOPNWy/FS7jXMr9k1Qwu7DT11sdZW7icEEgWWJkuUMDzOJZEXdlFqi9nNdHM8ld7j2kd8V26pl7SzEgDjptYSORdXlg3ODFZrrwvVAsr5OSEPrPYF0L8lLs8H+RVbr7DSQzi086+TrYRMhb5aSwhZ5qNzDbgAFWC41MDkAAALeMIIC2jCCAcKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAeMRwwGgYDVQQDExNsb2dnaW5nLXNpZ25lci10ZXN0MB4XDTE3MDYwOTE0MzQxMloXDTIyMDYwODE0MzQxM1owHjEcMBoGA1UEAxMTbG9nZ2luZy1zaWduZXItdGVzdDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOMehtRvFzHM7TYGhwo3URPGegOgrUyhHwcMlDl0zCR+BooplbMEQXyo3PZwoxav+R36x8LmyIlgaZrmTsEqsvrmzi6AcNqtchYRpbTiJDnQnGvkbKzHJYk2KNUskTxW+rZ+MNLVTwkSD/U8Uxq4DqbdklE6Ycc7l8dYZLE0x3Kau8Q1krQYkj32Nbw05QAgbh72gJsBzjD0foWd70PyW53IU97DDlSJ42GUsxDwKsZeNKzxcIL++sG6dythojDY35c2Z3DEuz6JpLrYPXn+xllHkNIA2CmLVXjzwxACq1vEAFrpjlrtESDc0yMV8oHym7bkrr/QhaU/amrglIs0uckCAwEAAaMjMCEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAJNdWfzXFU
4Kkb+C/6ggVqnkhB13CCGKZoDOUnJaaeZO8D13T0hoNzNEqy6TADEP1JhnZ105gq83sDKvhBZzEVuj6WZSolVCdhx3mS9fc+A2f9CqKlr/Lz2cBNYqAkARm+01mZSmN73qAHNZCCKILdgkEwJA+aLSKlmqmVWeOYuPUsrCM50xbKZRfPbOfMZ/VINGNT13+7VbpiaxDmGqyb2tVFcNPH978nIs99qRYHCYM4rq9K4YFAsixScUiSVYtyt4ltErOwX1pwRYozk+i2+0UmDjo8eYuRjizypRe2bexGkeJH1myxka5uFZMn7EQ3y//RM++9wQURR669zsY6wAAAACAAZzaWctY2EAAAFcjUc4EgAFWC41MDkAAALeMIIC2jCCAcKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAeMRwwGgYDVQQDExNsb2dnaW5nLXNpZ25lci10ZXN0MB4XDTE3MDYwOTE0MzQxMloXDTIyMDYwODE0MzQxM1owHjEcMBoGA1UEAxMTbG9nZ2luZy1zaWduZXItdGVzdDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOMehtRvFzHM7TYGhwo3URPGegOgrUyhHwcMlDl0zCR+BooplbMEQXyo3PZwoxav+R36x8LmyIlgaZrmTsEqsvrmzi6AcNqtchYRpbTiJDnQnGvkbKzHJYk2KNUskTxW+rZ+MNLVTwkSD/U8Uxq4DqbdklE6Ycc7l8dYZLE0x3Kau8Q1krQYkj32Nbw05QAgbh72gJsBzjD0foWd70PyW53IU97DDlSJ42GUsxDwKsZeNKzxcIL++sG6dythojDY35c2Z3DEuz6JpLrYPXn+xllHkNIA2CmLVXjzwxACq1vEAFrpjlrtESDc0yMV8oHym7bkrr/QhaU/amrglIs0uckCAwEAAaMjMCEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAJNdWfzXFU4Kkb+C/6ggVqnkhB13CCGKZoDOUnJaaeZO8D13T0hoNzNEqy6TADEP1JhnZ105gq83sDKvhBZzEVuj6WZSolVCdhx3mS9fc+A2f9CqKlr/Lz2cBNYqAkARm+01mZSmN73qAHNZCCKILdgkEwJA+aLSKlmqmVWeOYuPUsrCM50xbKZRfPbOfMZ/VINGNT13+7VbpiaxDmGqyb2tVFcNPH978nIs99qRYHCYM4rq9K4YFAsixScUiSVYtyt4ltErOwX1pwRYozk+i2+0UmDjo8eYuRjizypRe2bexGkeJH1myxka5uFZMn7EQ3y//RM++9wQURR669zsY6zEtIJPIfzrARWdsd4L4f5GkKWGwA==", "key": "/u3+7QAAAAIAAAACAAAAAQAKbG9nZ2luZy1lcwAAAVyNR0DEAAAFBDCCBQAwDgYKKwYBBAEqAhEBAQUABIIE7Nk6w7v/NG0qzCXv3072wkj/CEQjyQAn0ygStwGJ3akofIROv44IU8b3JS3jEdLNGzSOsKYTr4j6FLnmB3MIQzNKHsu9VGDue0GXAFYmfCB115MyGbJ4jR2LYS84Z65QnAO4h1Ivpbki/eE5lQr3Fqm76eHsKP1lSFfpmwUw593pEYtYV1I3pq84KBppWTOS/aJOFoPgju+i8I0e6OwcAq9IJlyI+EFGeOvjFAos4r6SJ++nJgeJVaP7sLSylg7GNHGFHt0EJDvSYf2dmpGOx5SZVdVVWbVoP1ptH6awAOrcrE4XLevPa9Z1LipOxQJ+ZzjeQPr2ozrby5ZCR2rW85pH85QBxORvCLr5kC3NIUh+s8CQoJc08mjCtwxC5PaY6oJxIaebe3bPezIheNUj0bHW8ylnB8sL/LjGqCqnm/G1JB3Tcrj7nRl3DgheoOkGImrKwitU3wp179Er2O5NclVnryIMdC/ZAmG9tGo54VGH6jDiCY8E/iENj5Jr4cqqDU2QZQW22nob/8Kyw4oa5ZKx4k4VEBxovgr3NNlMSzyAK2m1dlpakO32H8jQojOJ7V+h3VeGtDSMi+JlSLACUan9YoIIZZwb+xqE1oMMSjtibnTJSM6aEPPzJ/3JUtThtDpLhD0zUaUBoyzNt6ibM6DT+yVRHjFDv91frwIlOkm+F+hR6Pw/pBRHKc3fhJwKAUN/OZkJY+mppmYxzxxwurF4X3OMorSKnL9jJn2+yBwfBNtnV573+qqWGulhyZDXuG894Bsrg9YHd2BtJHUOnbFmGvyB0ztTk0b8J3JdpZpTy7wXVQjI4ESyhb7/2iyW2WafTwfSAbSDaSMVWOyNqK5pJ+t6emQ4gcLIngrw6AA7dO+PyX/XVdmxJfll50PUFArvbbQI6CPt7rSNCoUOp147x5FVr/UAgteVw4577hXDYC3OUi2zs9e58+CFjdvGEOINCybrCjztiuH0/tYS09GCjUNqXSM1g0Zortl62wA91/EzlW3naRXVUlmcg+5zwlcmHriTJ4MiIAM6XEFmxOjybhvnIzrDpCHbybGxI0/J4suhapI99UIoiH70bOTKGdQM420HK/iX3FKRXsUkYqXppm1zaZnsEO1WH9so8yKqqGdHIZ7tG8X9mm2fMyaYbxUtwZwe4YsBYdo+ozkHZXIL1IRJD3U1lq/8h9L6f8Nwl5EYvA9O7dIu+hnQ1Q1LodDPNcQ7pcP/lTXxoRiNaxdm4qJ0UwXSFin2aRpFdYEwumGnM1OrGchOh6MAunoDER8jHLxSUdj68F1c3h3KmZ9MpCThciAsyRuMwRSnKPLg0jbSkQyLXh/Shs0AHDHzyGZJlTGcAMWBDnioApAESnnCtcISZi8NnZmkW5CbLnXDiU2+WpDQfYukovN0r3rbuoXPoxxn2HDGx+IhNl6HBRqVxOt6Vx/I2FHXWafsL9ZC8JDL9OFw5J9/kpzHCZS605TPL9JzQTBKeTV0KFaHmw/sBmjzp3rkmXHS9tS09Oy8Y+8bnck6GAGFXDVu6JBdQbGQmeiRQqIEbIEthkmaSCGX6dLcHSw2eRr7toKD3ealGn4L9Bx0iHJ7lSXRSsZDOOcJGGVjLxL9ht/P5w18MVIoGXzhjsejH4G6i0WLlQ+xea77cbkYFCmYrmNS9TC8ti82YjPOXfIl07lMXgAAAAIABVguNTA5AAAEXDCCBFgwggNAoAMCAQICAQgwDQYJKoZIhvcNAQEFBQAwHjEcMBoGA1UEAxMTbG9nZ2luZy1zaWduZXItdGVzdDAeFw0xNzA2MDkxNDM0MzVaFw0xOTA2MDkxNDM0MzVaMDsxEDAOBgNVBAoTB0xvZ2dpbmcxEjAQBgNVBAsTCU9wZW5TaGlmdDETMBEGA1UEAxMKbG9nZ2luZy1lczCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMlW3mIgk0xiMIGksIyZB8/AouvDx/horGc5VWypA/BFAA+xnJJM0vhrZIgkjTBHzy4iKMXUIkbNrPqe7aMU3Z/Q9Phpe/67Gk2H9FWIXfsb1cepp+r217hYikC2Cy
QRP2BdqXwwSN587vIEvpmAsDKA8SPeGrnrzhrlt2xNU5K01Yh6+p+1Mz3rCmq68IEUZMztQTBi/EkIo7ndCO1yRv/ZEdWdDQRWU5+LfU/lCtjx+DFpiCqyXQjD2ghED/QlDjUx1eFB+RQarbc9tLa23kIDzsKg3gCL+yXVIinfTzXmOlTAg5zY1xZ83EqOf9PeExLamv4r7AaQ7xMlPmF884UCAwEAAaOCAYIwggF+MA4GA1UdDwEB/wQEAwIFoDAJBgNVHRMEAjAAMB0GA1UdJQQWMBQGCCsGAQUFBwMBBggrBgEFBQcDAjAdBgNVHQ4EFgQUQtDfaa2cQxRU86S69z/LD2rIoI4wCQYDVR0jBAIwADCCARYGA1UdEQSCAQ0wggEJgglsb2NhbGhvc3SHBH8AAAGCCmxvZ2dpbmctZXOCJGxvZ2dpbmctZXMubG9nZ2luZy5zdmMuY2x1c3Rlci5sb2NhbIISbG9nZ2luZy1lcy1jbHVzdGVygixsb2dnaW5nLWVzLWNsdXN0ZXIubG9nZ2luZy5zdmMuY2x1c3Rlci5sb2NhbIIObG9nZ2luZy1lcy1vcHOCKGxvZ2dpbmctZXMtb3BzLmxvZ2dpbmcuc3ZjLmNsdXN0ZXIubG9jYWyCFmxvZ2dpbmctZXMtb3BzLWNsdXN0ZXKCMGxvZ2dpbmctZXMtb3BzLWNsdXN0ZXIubG9nZ2luZy5zdmMuY2x1c3Rlci5sb2NhbDANBgkqhkiG9w0BAQUFAAOCAQEAU4WTyCIvT6KTccterd+moqwtpSKP00OFXcEJKc1KbtVsKymwJUSzWblh9SHEyWz3Pgyr90DFxXaN2pLq3DlOMzrQH4DpAOPfVlQOVoUYzuY7RBWZzIOU8w3SubaOt6BPAc4dWVcUD55/6zvEUPV9Lgt2SSo/CGPHdHavQd3UtB5hnRhVk3BfyJgMKiXRY+p3SAAPsm0jdG2oi4hiO4lklQl05igpi94vaJS2VbMKSb3cxs8mMxa9vNKrZHYcMrrsUtUqa3e3i8g9jMPq5LahvVK1hhFpwY97OUhyv2R0o5/xWvrrV3eiZ115DXJstKNSSKoM+3VnPahPRvxJdgvDYQAFWC41MDkAAALeMIIC2jCCAcKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAeMRwwGgYDVQQDExNsb2dnaW5nLXNpZ25lci10ZXN0MB4XDTE3MDYwOTE0MzQxMloXDTIyMDYwODE0MzQxM1owHjEcMBoGA1UEAxMTbG9nZ2luZy1zaWduZXItdGVzdDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOMehtRvFzHM7TYGhwo3URPGegOgrUyhHwcMlDl0zCR+BooplbMEQXyo3PZwoxav+R36x8LmyIlgaZrmTsEqsvrmzi6AcNqtchYRpbTiJDnQnGvkbKzHJYk2KNUskTxW+rZ+MNLVTwkSD/U8Uxq4DqbdklE6Ycc7l8dYZLE0x3Kau8Q1krQYkj32Nbw05QAgbh72gJsBzjD0foWd70PyW53IU97DDlSJ42GUsxDwKsZeNKzxcIL++sG6dythojDY35c2Z3DEuz6JpLrYPXn+xllHkNIA2CmLVXjzwxACq1vEAFrpjlrtESDc0yMV8oHym7bkrr/QhaU/amrglIs0uckCAwEAAaMjMCEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAJNdWfzXFU4Kkb+C/6ggVqnkhB13CCGKZoDOUnJaaeZO8D13T0hoNzNEqy6TADEP1JhnZ105gq83sDKvhBZzEVuj6WZSolVCdhx3mS9fc+A2f9CqKlr/Lz2cBNYqAkARm+01mZSmN73qAHNZCCKILdgkEwJA+aLSKlmqmVWeOYuPUsrCM50xbKZRfPbOfMZ/VINGNT13+7VbpiaxDmGqyb2tVFcNPH978nIs99qRYHCYM4rq9K4YFAsixScUiSVYtyt4ltErOwX1pwRYozk+i2+0UmDjo8eYuRjizypRe2bexGkeJH1myxka5uFZMn7EQ3y//RM++9wQURR669zsY6wAAAACAAZzaWctY2EAAAFcjUdAOgAFWC41MDkAAALeMIIC2jCCAcKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAeMRwwGgYDVQQDExNsb2dnaW5nLXNpZ25lci10ZXN0MB4XDTE3MDYwOTE0MzQxMloXDTIyMDYwODE0MzQxM1owHjEcMBoGA1UEAxMTbG9nZ2luZy1zaWduZXItdGVzdDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOMehtRvFzHM7TYGhwo3URPGegOgrUyhHwcMlDl0zCR+BooplbMEQXyo3PZwoxav+R36x8LmyIlgaZrmTsEqsvrmzi6AcNqtchYRpbTiJDnQnGvkbKzHJYk2KNUskTxW+rZ+MNLVTwkSD/U8Uxq4DqbdklE6Ycc7l8dYZLE0x3Kau8Q1krQYkj32Nbw05QAgbh72gJsBzjD0foWd70PyW53IU97DDlSJ42GUsxDwKsZeNKzxcIL++sG6dythojDY35c2Z3DEuz6JpLrYPXn+xllHkNIA2CmLVXjzwxACq1vEAFrpjlrtESDc0yMV8oHym7bkrr/QhaU/amrglIs0uckCAwEAAaMjMCEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAJNdWfzXFU4Kkb+C/6ggVqnkhB13CCGKZoDOUnJaaeZO8D13T0hoNzNEqy6TADEP1JhnZ105gq83sDKvhBZzEVuj6WZSolVCdhx3mS9fc+A2f9CqKlr/Lz2cBNYqAkARm+01mZSmN73qAHNZCCKILdgkEwJA+aLSKlmqmVWeOYuPUsrCM50xbKZRfPbOfMZ/VINGNT13+7VbpiaxDmGqyb2tVFcNPH978nIs99qRYHCYM4rq9K4YFAsixScUiSVYtyt4ltErOwX1pwRYozk+i2+0UmDjo8eYuRjizypRe2bexGkeJH1myxka5uFZMn7EQ3y//RM++9wQURR669zsY6xCw+rKAjQ56YZ8mjDLAXW/30qprA==", "searchguard.key": 
"/u3+7QAAAAIAAAACAAAAAQANZWxhc3RpY3NlYXJjaAAAAVyNRzzeAAAFADCCBPwwDgYKKwYBBAEqAhEBAQUABIIE6JIXaxETe/g+gU7HZ0e7D5z+0rpSsiYPQPVycLdV9uTQovUFq0zdJ0XuStbuiUtlS1U5Z2Hqi6f3URiFRa5rex7zueOeGKFKlXqTS6Vl310LmZ8KkWP2jniRrkIvdldBWyY9uyNiU8xqkl8uuO/AQ1oaXHODqAP61v8us0hmxHw96+SBnK+QQPZCyfFvVXHUReEzHtT2nm+MYBf+MRmTyBHtrp4VXMViD6TahwnTMZBqdJatkF/H6e52MN0Jm8V3OhnES6L8xncCEKXinkkNrQWXMedISzK7YUBXJ6NF6HI2lT8PvSoaspee79+NXBRvoMKS2PtxdfKSguzjMnr12Fxt7uLf4hYXmLa4LsmbWzy7JkYXTuAbM84qpY/1d7yWJ0MDeFOMUAupxDruSrFEwlLpWp3Wu05CAuR/gANDc0JFx1RzZ3h425uXXXD5INZCyVNa1fM59L1YF1dIPiU3zmIyyR4qc05yQjHzZgk11o1WMUkXhnzVGqNIczZbcWT1Xy0PZ04739pZO9wgC26tAjX9b4HHhDlgCm7fDSvwTuUzfv7JWPDcdteEkMPxjQNxJbWaUJmeu+tzHvT4upkLUJpRL3T4nj5k+9E8R+fI05KDS8HhCkdLPtCJIhyUP6D3xUgAwh4UUEcGgJ1ElPlsbK9AQDnqIIhk6yJ4CxdXViOf1X5npUmvRk0nHYApM9gUiE1BCRtYvBWSZpG9pwqtOoWauanBeKIIptQa4kudOxGCAoDRzWJglmKi32DXGguUiuaKe93eQixIkRJe78eCGuBvNagmJVgB1BYuBhnd6jjd6pYA3hCKBFheisS1lFaE8hgcAXSpSuR5mbrJcV1LcEcteeecEYaCogMcuaBvqCGx9N3yIiySXu1WB9RjKPvH5wWHbGs31sydJ1eX5esmTE5D3BY9kKuroeLSBPRDUyPgo2Mb2HkGzau+D2iCJh6sFCzOG3LUQehAbinF1IY8LI4+GlL49gHFYITDtDTTa2RkKiSEXGaFpXLuhA0dH2CbBvTIgUTHkPg5FG+kH+JPazgegWhyZYZLULoeToQ47+ZplfHNG4XFJ82x2QLb6PIUotM8lnsqSrI8r3x1WdRXTNXtHr6pZeREGTuLsyIIUtCGvhLXU7Eo2zZSFEvIj9fK97gaM8JFsc+j8n/kH4GAgnuOh2rlPgFEYtoBZC3bsRO8qWwi6zxUkkFILfNs9TVlZfb7byLLwzmO1lKpmKbdEFN4/J7o5z0h8gLV1y2bUeQg7a9O3baMNeII/qiqcPAG34cfshWQuakZzcSy8IMcoYOQ02ropwJ63JHDEcYkImHk894jkaYQXfLXoM7S/pcuQBAfwK1avh8MYQgKFmlC7l5u2+NaxB3LlPaBhhYQmdp6Cu8HXHSSK8/una61toIymAgSFMgMIv/lSmL/LCUW1OcXcFS2cuBMl4o9JwRqnFQRMNm7EYUHY2k8VKEWudoCcHY8RNCY0Pp6vArzIo6Dhkzau12vv+FinWcOGEsLq2iqiEqL2SKg6VZrRyHdliuXS6sQ8J6wkVSZow0I31qoh0Z6wVfBN5ONTd7Z3DJLu5QfIrm4eOPjXXqtV/8hNxEi9dZZtG4h/a5RiarU+Rbn5dy6gi19fF5VKi7wdgMLGt/dxJBVHIQtC/L59GRIwhOlj7i0LvYIhZYQAAAAAgAFWC41MDkAAAOCMIIDfjCCAmagAwIBAgIBBzANBgkqhkiG9w0BAQUFADAeMRwwGgYDVQQDExNsb2dnaW5nLXNpZ25lci10ZXN0MB4XDTE3MDYwOTE0MzQzNFoXDTE5MDYwOTE0MzQzNFowPjEQMA4GA1UEChMHTG9nZ2luZzESMBAGA1UECxMJT3BlblNoaWZ0MRYwFAYDVQQDEw1lbGFzdGljc2VhcmNoMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEArFgXLTzpm0WGwMnR/Aadjqx6QeXensrtOHimtqMAZ9k2660YVVnpVk05dWsmx7GRqhzwS7rxEf/EBYxZj+cjXLtDnxfdWFIfNDYZOgS8gWhBHordSF+yAeQXDSHZhHI4ym78pFgkKunmy35rCyy5QU93chfO0Nq+8IGeRtR8M7IwJupuVEzq7QNdShMR90NxIQsewKsuzwCv0Fi3iwataaabLFBuMUiuOl8liil0RBQ9D2S3oxajiNy1JAmLDcqIGzYDO1ajO7d051JGPzMHGonAtU6Z/q6StY75TFihxCI8JYt3iGHpiTT24VI/u2sJw2bgwT2J0eaygKZ3GgSqlQIDAQABo4GmMIGjMA4GA1UdDwEB/wQEAwIFoDAJBgNVHRMEAjAAMB0GA1UdJQQWMBQGCCsGAQUFBwMBBggrBgEFBQcDAjAdBgNVHQ4EFgQUFJO15yv6VZW1R9N6HneXB0Hii88wCQYDVR0jBAIwADA9BgNVHREENjA0gglsb2NhbGhvc3SHBH8AAAGCCmxvZ2dpbmctZXOCDmxvZ2dpbmctZXMtb3BziAUqAwQFBTANBgkqhkiG9w0BAQUFAAOCAQEAA0QE5tzbpE4DlWHO1ZYCs5OAkBAMJAzn+pntmnJA9dIy2FetPSlNu06oh0JPjr9P6f9HlHTHj3fmDNSel29tdvxf3ECivtRF1ntMRmLzUIi2UlO08uws6UWBOSnp+Ptp0/aYill/EG4c2yY4eFJCuUQUx1leVxRk1536GME9ndqYMfZ+LXjUfTCWeLh8S7z9JBxdX6mSj9NxKs54nk/oMz8YQVamSPoqd5hz4yIRUinJrqSw6U+PfptQlr5hN0bgmVwXFK5/hcgQGkdO4/4RNi+zLZ/KXfByWDscRCfUTKx3TWnNBqsMF4eKDQwKSFeND+0OacQb4en84uB9ZOiQVwAFWC41MDkAAALeMIIC2jCCAcKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAeMRwwGgYDVQQDExNsb2dnaW5nLXNpZ25lci10ZXN0MB4XDTE3MDYwOTE0MzQxMloXDTIyMDYwODE0MzQxM1owHjEcMBoGA1UEAxMTbG9nZ2luZy1zaWduZXItdGVzdDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOMehtRvFzHM7TYGhwo3URPGegOgrUyhHwcMlDl0zCR+BooplbMEQXyo3PZwoxav+R36x8LmyIlgaZrmTsEqsvrmzi6AcNqtchYRpbTiJDnQnGvkbKzHJYk2KNUskTxW+rZ+MNLVTwkSD/U8Uxq4DqbdklE6Ycc7l8dYZLE0x3Kau8Q1krQYkj32Nbw05QAgbh72gJsBzjD0foWd70PyW53IU97DDlSJ42GUsxDwKsZeNKzxcIL++sG6dythojDY35c2Z3DEuz6JpLrYPXn+xllHkNIA2CmLVXjzwxACq1vEAFrpjlrtESDc0yMV8oHym7bkrr/QhaU/amrglIs0uckCAw
EAAaMjMCEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAJNdWfzXFU4Kkb+C/6ggVqnkhB13CCGKZoDOUnJaaeZO8D13T0hoNzNEqy6TADEP1JhnZ105gq83sDKvhBZzEVuj6WZSolVCdhx3mS9fc+A2f9CqKlr/Lz2cBNYqAkARm+01mZSmN73qAHNZCCKILdgkEwJA+aLSKlmqmVWeOYuPUsrCM50xbKZRfPbOfMZ/VINGNT13+7VbpiaxDmGqyb2tVFcNPH978nIs99qRYHCYM4rq9K4YFAsixScUiSVYtyt4ltErOwX1pwRYozk+i2+0UmDjo8eYuRjizypRe2bexGkeJH1myxka5uFZMn7EQ3y//RM++9wQURR669zsY6wAAAACAAZzaWctY2EAAAFcjUc8WQAFWC41MDkAAALeMIIC2jCCAcKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAeMRwwGgYDVQQDExNsb2dnaW5nLXNpZ25lci10ZXN0MB4XDTE3MDYwOTE0MzQxMloXDTIyMDYwODE0MzQxM1owHjEcMBoGA1UEAxMTbG9nZ2luZy1zaWduZXItdGVzdDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOMehtRvFzHM7TYGhwo3URPGegOgrUyhHwcMlDl0zCR+BooplbMEQXyo3PZwoxav+R36x8LmyIlgaZrmTsEqsvrmzi6AcNqtchYRpbTiJDnQnGvkbKzHJYk2KNUskTxW+rZ+MNLVTwkSD/U8Uxq4DqbdklE6Ycc7l8dYZLE0x3Kau8Q1krQYkj32Nbw05QAgbh72gJsBzjD0foWd70PyW53IU97DDlSJ42GUsxDwKsZeNKzxcIL++sG6dythojDY35c2Z3DEuz6JpLrYPXn+xllHkNIA2CmLVXjzwxACq1vEAFrpjlrtESDc0yMV8oHym7bkrr/QhaU/amrglIs0uckCAwEAAaMjMCEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAJNdWfzXFU4Kkb+C/6ggVqnkhB13CCGKZoDOUnJaaeZO8D13T0hoNzNEqy6TADEP1JhnZ105gq83sDKvhBZzEVuj6WZSolVCdhx3mS9fc+A2f9CqKlr/Lz2cBNYqAkARm+01mZSmN73qAHNZCCKILdgkEwJA+aLSKlmqmVWeOYuPUsrCM50xbKZRfPbOfMZ/VINGNT13+7VbpiaxDmGqyb2tVFcNPH978nIs99qRYHCYM4rq9K4YFAsixScUiSVYtyt4ltErOwX1pwRYozk+i2+0UmDjo8eYuRjizypRe2bexGkeJH1myxka5uFZMn7EQ3y//RM++9wQURR669zsY6wkCJYXkSLYGXn3Hc0iGTjJ0ozFrg==", "searchguard.truststore": "/u3+7QAAAAIAAAABAAAAAgAGc2lnLWNhAAABXI1HQUMABVguNTA5AAAC3jCCAtowggHCoAMCAQICAQEwDQYJKoZIhvcNAQELBQAwHjEcMBoGA1UEAxMTbG9nZ2luZy1zaWduZXItdGVzdDAeFw0xNzA2MDkxNDM0MTJaFw0yMjA2MDgxNDM0MTNaMB4xHDAaBgNVBAMTE2xvZ2dpbmctc2lnbmVyLXRlc3QwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDjHobUbxcxzO02BocKN1ETxnoDoK1MoR8HDJQ5dMwkfgaKKZWzBEF8qNz2cKMWr/kd+sfC5siJYGma5k7BKrL65s4ugHDarXIWEaW04iQ50Jxr5GysxyWJNijVLJE8Vvq2fjDS1U8JEg/1PFMauA6m3ZJROmHHO5fHWGSxNMdymrvENZK0GJI99jW8NOUAIG4e9oCbAc4w9H6Fne9D8ludyFPeww5UieNhlLMQ8CrGXjSs8XCC/vrBuncrYaIw2N+XNmdwxLs+iaS62D15/sZZR5DSANgpi1V488MQAqtbxABa6Y5a7REg3NMjFfKB8pu25K6/0IWlP2pq4JSLNLnJAgMBAAGjIzAhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3DQEBCwUAA4IBAQCTXVn81xVOCpG/gv+oIFap5IQddwghimaAzlJyWmnmTvA9d09IaDczRKsukwAxD9SYZ2ddOYKvN7Ayr4QWcxFbo+lmUqJVQnYcd5kvX3PgNn/Qqipa/y89nATWKgJAEZvtNZmUpje96gBzWQgiiC3YJBMCQPmi0ipZqplVnjmLj1LKwjOdMWymUXz2znzGf1SDRjU9d/u1W6YmsQ5hqsm9rVRXDTx/e/JyLPfakWBwmDOK6vSuGBQLIsUnFIklWLcreJbRKzsF9acEWKM5PotvtFJg46PHmLkY4s8qUXtm3sRpHiR9ZssZGubhWTJ+xEN8v/0TPvvcEFEUeuvc7GOsX/x7CFcjPv+MrIS01klyvV5seVg=", "truststore": 
"/u3+7QAAAAIAAAABAAAAAgAGc2lnLWNhAAABXI1HQUMABVguNTA5AAAC3jCCAtowggHCoAMCAQICAQEwDQYJKoZIhvcNAQELBQAwHjEcMBoGA1UEAxMTbG9nZ2luZy1zaWduZXItdGVzdDAeFw0xNzA2MDkxNDM0MTJaFw0yMjA2MDgxNDM0MTNaMB4xHDAaBgNVBAMTE2xvZ2dpbmctc2lnbmVyLXRlc3QwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDjHobUbxcxzO02BocKN1ETxnoDoK1MoR8HDJQ5dMwkfgaKKZWzBEF8qNz2cKMWr/kd+sfC5siJYGma5k7BKrL65s4ugHDarXIWEaW04iQ50Jxr5GysxyWJNijVLJE8Vvq2fjDS1U8JEg/1PFMauA6m3ZJROmHHO5fHWGSxNMdymrvENZK0GJI99jW8NOUAIG4e9oCbAc4w9H6Fne9D8ludyFPeww5UieNhlLMQ8CrGXjSs8XCC/vrBuncrYaIw2N+XNmdwxLs+iaS62D15/sZZR5DSANgpi1V488MQAqtbxABa6Y5a7REg3NMjFfKB8pu25K6/0IWlP2pq4JSLNLnJAgMBAAGjIzAhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3DQEBCwUAA4IBAQCTXVn81xVOCpG/gv+oIFap5IQddwghimaAzlJyWmnmTvA9d09IaDczRKsukwAxD9SYZ2ddOYKvN7Ayr4QWcxFbo+lmUqJVQnYcd5kvX3PgNn/Qqipa/y89nATWKgJAEZvtNZmUpje96gBzWQgiiC3YJBMCQPmi0ipZqplVnjmLj1LKwjOdMWymUXz2znzGf1SDRjU9d/u1W6YmsQ5hqsm9rVRXDTx/e/JyLPfakWBwmDOK6vSuGBQLIsUnFIklWLcreJbRKzsF9acEWKM5PotvtFJg46PHmLkY4s8qUXtm3sRpHiR9ZssZGubhWTJ+xEN8v/0TPvvcEFEUeuvc7GOsX/x7CFcjPv+MrIS01klyvV5seVg=" }, "kind": "Secret", "metadata": { "creationTimestamp": null, "name": "logging-elasticsearch" }, "type": "Opaque" }, "state": "present" } TASK [openshift_logging_elasticsearch : Set logging-es-ops-cluster service] **** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:168 changed: [openshift] => { "changed": true, "results": { "clusterip": "172.30.130.163", "cmd": "/bin/oc get service logging-es-ops-cluster -o json -n logging", "results": [ { "apiVersion": "v1", "kind": "Service", "metadata": { "creationTimestamp": "2017-06-09T14:34:54Z", "name": "logging-es-ops-cluster", "namespace": "logging", "resourceVersion": "1323", "selfLink": "/api/v1/namespaces/logging/services/logging-es-ops-cluster", "uid": "cdf37ace-4d20-11e7-94cc-0e3d36056ef8" }, "spec": { "clusterIP": "172.30.130.163", "ports": [ { "port": 9300, "protocol": "TCP", "targetPort": 9300 } ], "selector": { "component": "es-ops", "provider": "openshift" }, "sessionAffinity": "None", "type": "ClusterIP" }, "status": { "loadBalancer": {} } } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_elasticsearch : Set logging-es-ops service] ************ task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:182 changed: [openshift] => { "changed": true, "results": { "clusterip": "172.30.148.99", "cmd": "/bin/oc get service logging-es-ops -o json -n logging", "results": [ { "apiVersion": "v1", "kind": "Service", "metadata": { "creationTimestamp": "2017-06-09T14:34:56Z", "name": "logging-es-ops", "namespace": "logging", "resourceVersion": "1326", "selfLink": "/api/v1/namespaces/logging/services/logging-es-ops", "uid": "cec18352-4d20-11e7-94cc-0e3d36056ef8" }, "spec": { "clusterIP": "172.30.148.99", "ports": [ { "port": 9200, "protocol": "TCP", "targetPort": "restapi" } ], "selector": { "component": "es-ops", "provider": "openshift" }, "sessionAffinity": "None", "type": "ClusterIP" }, "status": { "loadBalancer": {} } } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_elasticsearch : Creating ES storage template] ********** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:197 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_elasticsearch : Creating ES storage template] ********** task path: 
/tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:210 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_elasticsearch : Set ES storage] ************************ task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:225 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_elasticsearch : set_fact] ****************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:237 ok: [openshift] => { "ansible_facts": { "es_deploy_name": "logging-es-ops-data-master-f31li9lz" }, "changed": false } TASK [openshift_logging_elasticsearch : set_fact] ****************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:241 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_elasticsearch : Set ES dc templates] ******************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:246 changed: [openshift] => { "changed": true, "checksum": "33b57a2c1245fccedeecc47fb0bd34ed36b81102", "dest": "/tmp/openshift-logging-ansible-nlPqto/templates/logging-es-dc.yml", "gid": 0, "group": "root", "md5sum": "3996bb1b1c3aa6b7a4db442c30428089", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 3179, "src": "/root/.ansible/tmp/ansible-tmp-1497018896.65-249169397768889/source", "state": "file", "uid": 0 } TASK [openshift_logging_elasticsearch : Set ES dc] ***************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:262 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc get dc logging-es-ops-data-master-f31li9lz -o json -n logging", "results": [ { "apiVersion": "v1", "kind": "DeploymentConfig", "metadata": { "creationTimestamp": "2017-06-09T14:34:57Z", "generation": 2, "labels": { "component": "es-ops", "deployment": "logging-es-ops-data-master-f31li9lz", "logging-infra": "elasticsearch", "provider": "openshift" }, "name": "logging-es-ops-data-master-f31li9lz", "namespace": "logging", "resourceVersion": "1342", "selfLink": "/oapi/v1/namespaces/logging/deploymentconfigs/logging-es-ops-data-master-f31li9lz", "uid": "cf9c46c4-4d20-11e7-94cc-0e3d36056ef8" }, "spec": { "replicas": 1, "selector": { "component": "es-ops", "deployment": "logging-es-ops-data-master-f31li9lz", "logging-infra": "elasticsearch", "provider": "openshift" }, "strategy": { "activeDeadlineSeconds": 21600, "recreateParams": { "timeoutSeconds": 600 }, "resources": {}, "type": "Recreate" }, "template": { "metadata": { "creationTimestamp": null, "labels": { "component": "es-ops", "deployment": "logging-es-ops-data-master-f31li9lz", "logging-infra": "elasticsearch", "provider": "openshift" }, "name": "logging-es-ops-data-master-f31li9lz" }, "spec": { "containers": [ { "env": [ { "name": "NAMESPACE", "valueFrom": { "fieldRef": { "apiVersion": "v1", "fieldPath": "metadata.namespace" } } }, { "name": "KUBERNETES_TRUST_CERT", "value": "true" }, { "name": "SERVICE_DNS", "value": "logging-es-ops-cluster" }, { "name": "CLUSTER_NAME", "value": "logging-es-ops" }, { "name": "INSTANCE_RAM", "value": "8Gi" }, { "name": "NODE_QUORUM", 
"value": "1" }, { "name": "RECOVER_EXPECTED_NODES", "value": "1" }, { "name": "RECOVER_AFTER_TIME", "value": "5m" }, { "name": "READINESS_PROBE_TIMEOUT", "value": "30" }, { "name": "IS_MASTER", "value": "true" }, { "name": "HAS_DATA", "value": "true" } ], "image": "172.30.106.159:5000/logging/logging-elasticsearch:latest", "imagePullPolicy": "Always", "name": "elasticsearch", "ports": [ { "containerPort": 9200, "name": "restapi", "protocol": "TCP" }, { "containerPort": 9300, "name": "cluster", "protocol": "TCP" } ], "readinessProbe": { "exec": { "command": [ "/usr/share/elasticsearch/probe/readiness.sh" ] }, "failureThreshold": 3, "initialDelaySeconds": 10, "periodSeconds": 5, "successThreshold": 1, "timeoutSeconds": 30 }, "resources": { "limits": { "cpu": "1", "memory": "8Gi" }, "requests": { "memory": "512Mi" } }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/etc/elasticsearch/secret", "name": "elasticsearch", "readOnly": true }, { "mountPath": "/usr/share/java/elasticsearch/config", "name": "elasticsearch-config", "readOnly": true }, { "mountPath": "/elasticsearch/persistent", "name": "elasticsearch-storage" } ] } ], "dnsPolicy": "ClusterFirst", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": { "supplementalGroups": [ 65534 ] }, "serviceAccount": "aggregated-logging-elasticsearch", "serviceAccountName": "aggregated-logging-elasticsearch", "terminationGracePeriodSeconds": 30, "volumes": [ { "name": "elasticsearch", "secret": { "defaultMode": 420, "secretName": "logging-elasticsearch" } }, { "configMap": { "defaultMode": 420, "name": "logging-elasticsearch" }, "name": "elasticsearch-config" }, { "emptyDir": {}, "name": "elasticsearch-storage" } ] } }, "test": false, "triggers": [ { "type": "ConfigChange" } ] }, "status": { "availableReplicas": 0, "conditions": [ { "lastTransitionTime": "2017-06-09T14:34:57Z", "lastUpdateTime": "2017-06-09T14:34:57Z", "message": "Deployment config does not have minimum availability.", "status": "False", "type": "Available" }, { "lastTransitionTime": "2017-06-09T14:34:57Z", "lastUpdateTime": "2017-06-09T14:34:57Z", "message": "replication controller \"logging-es-ops-data-master-f31li9lz-1\" is waiting for pod \"logging-es-ops-data-master-f31li9lz-1-deploy\" to run", "status": "Unknown", "type": "Progressing" } ], "details": { "causes": [ { "type": "ConfigChange" } ], "message": "config change" }, "latestVersion": 1, "observedGeneration": 2, "replicas": 0, "unavailableReplicas": 0, "updatedReplicas": 0 } } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_elasticsearch : Delete temp directory] ***************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_elasticsearch/tasks/main.yaml:274 ok: [openshift] => { "changed": false, "path": "/tmp/openshift-logging-ansible-nlPqto", "state": "absent" } TASK [openshift_logging : include_role] **************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:151 statically included: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/determine_version.yaml TASK [openshift_logging_kibana : fail] ***************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/determine_version.yaml:3 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK 
[openshift_logging_kibana : set_fact] ************************************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/determine_version.yaml:7 ok: [openshift] => { "ansible_facts": { "kibana_version": "3_5" }, "changed": false } TASK [openshift_logging_kibana : set_fact] ************************************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/determine_version.yaml:12 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_kibana : fail] ***************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/determine_version.yaml:15 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_kibana : Create temp directory for doing work in] ****** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:7 ok: [openshift] => { "changed": false, "cmd": [ "mktemp", "-d", "/tmp/openshift-logging-ansible-XXXXXX" ], "delta": "0:00:00.006454", "end": "2017-06-09 10:34:58.861211", "rc": 0, "start": "2017-06-09 10:34:58.854757" } STDOUT: /tmp/openshift-logging-ansible-cJoPue TASK [openshift_logging_kibana : set_fact] ************************************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:12 ok: [openshift] => { "ansible_facts": { "tempdir": "/tmp/openshift-logging-ansible-cJoPue" }, "changed": false } TASK [openshift_logging_kibana : Create templates subdirectory] **************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:16 ok: [openshift] => { "changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/tmp/openshift-logging-ansible-cJoPue/templates", "secontext": "unconfined_u:object_r:user_tmp_t:s0", "size": 6, "state": "directory", "uid": 0 } TASK [openshift_logging_kibana : Create Kibana service account] **************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:26 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_kibana : Create Kibana service account] **************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:34 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc get sa aggregated-logging-kibana -o json -n logging", "results": [ { "apiVersion": "v1", "imagePullSecrets": [ { "name": "aggregated-logging-kibana-dockercfg-9l960" } ], "kind": "ServiceAccount", "metadata": { "creationTimestamp": "2017-06-09T14:34:59Z", "name": "aggregated-logging-kibana", "namespace": "logging", "resourceVersion": "1351", "selfLink": "/api/v1/namespaces/logging/serviceaccounts/aggregated-logging-kibana", "uid": "d0fec0c5-4d20-11e7-94cc-0e3d36056ef8" }, "secrets": [ { "name": "aggregated-logging-kibana-token-t878l" }, { "name": "aggregated-logging-kibana-dockercfg-9l960" } ] } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_kibana : set_fact] ************************************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:42 ok: [openshift] => { "ansible_facts": { "kibana_component": "kibana", "kibana_name": "logging-kibana" }, "changed": false } TASK 
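The tasks above created the aggregated-logging-kibana service account, and the ones that follow check for and generate session and oauth secrets under /etc/origin/logging on the master. A minimal sketch of how those artifacts could be inspected by hand, assuming the namespace and file paths shown in this log (not produced by the playbook itself):
# Inspect the Kibana service account and its token/dockercfg secrets (names from this log)
oc get sa aggregated-logging-kibana -n logging -o yaml
# The session and oauth secrets generated by the following tasks land on the master here
ls -l /etc/origin/logging/session_secret /etc/origin/logging/oauth_secret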
[openshift_logging_kibana : Checking for session_secret] ****************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:47 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_logging_kibana : Checking for oauth_secret] ******************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:51 ok: [openshift] => { "changed": false, "stat": { "exists": false } } TASK [openshift_logging_kibana : Generate session secret] ********************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:56 changed: [openshift] => { "changed": true, "checksum": "60db28e971103bb52f35f58fa57ab0935025155d", "dest": "/etc/origin/logging/session_secret", "gid": 0, "group": "root", "md5sum": "0cce6ef12ab47bdc2c54a11cff14756f", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 200, "src": "/root/.ansible/tmp/ansible-tmp-1497018900.7-121613287754812/source", "state": "file", "uid": 0 } TASK [openshift_logging_kibana : Generate oauth secret] ************************ task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:64 changed: [openshift] => { "changed": true, "checksum": "ceb52763c135dbf083c01a343ede0d27f384413f", "dest": "/etc/origin/logging/oauth_secret", "gid": 0, "group": "root", "md5sum": "f6295b9374ea879db18f669ff24bb93b", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_t:s0", "size": 64, "src": "/root/.ansible/tmp/ansible-tmp-1497018901.0-207145085991624/source", "state": "file", "uid": 0 } TASK [openshift_logging_kibana : Retrieving the cert to use when generating secrets for the logging components] *** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:71 ok: [openshift] => (item={u'name': u'ca_file', u'file': u'ca.crt'}) => { "changed": false, "content": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMyakNDQWNLZ0F3SUJBZ0lCQVRBTkJna3Foa2lHOXcwQkFRc0ZBREFlTVJ3d0dnWURWUVFERXhOc2IyZG4KYVc1bkxYTnBaMjVsY2kxMFpYTjBNQjRYRFRFM01EWXdPVEUwTXpReE1sb1hEVEl5TURZd09ERTBNelF4TTFvdwpIakVjTUJvR0ExVUVBeE1UYkc5bloybHVaeTF6YVdkdVpYSXRkR1Z6ZERDQ0FTSXdEUVlKS29aSWh2Y05BUUVCCkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU9NZWh0UnZGekhNN1RZR2h3bzNVUlBHZWdPZ3JVeWhId2NNbERsMHpDUisKQm9vcGxiTUVRWHlvM1Bad294YXYrUjM2eDhMbXlJbGdhWnJtVHNFcXN2cm16aTZBY05xdGNoWVJwYlRpSkRuUQpuR3ZrYkt6SEpZazJLTlVza1R4VytyWitNTkxWVHdrU0QvVThVeHE0RHFiZGtsRTZZY2M3bDhkWVpMRTB4M0thCnU4UTFrclFZa2ozMk5idzA1UUFnYmg3MmdKc0J6akQwZm9XZDcwUHlXNTNJVTk3RERsU0o0MkdVc3hEd0tzWmUKTkt6eGNJTCsrc0c2ZHl0aG9qRFkzNWMyWjNERXV6NkpwTHJZUFhuK3hsbEhrTklBMkNtTFZYanp3eEFDcTF2RQpBRnJwamxydEVTRGMweU1WOG9IeW03Ymtyci9RaGFVL2FtcmdsSXMwdWNrQ0F3RUFBYU1qTUNFd0RnWURWUjBQCkFRSC9CQVFEQWdLa01BOEdBMVVkRXdFQi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFKTmQKV2Z6WEZVNEtrYitDLzZnZ1ZxbmtoQjEzQ0NHS1pvRE9VbkphYWVaTzhEMTNUMGhvTnpORXF5NlRBREVQMUpobgpaMTA1Z3E4M3NES3ZoQlp6RVZ1ajZXWlNvbFZDZGh4M21TOWZjK0EyZjlDcUtsci9MejJjQk5ZcUFrQVJtKzAxCm1aU21ONzNxQUhOWkNDS0lMZGdrRXdKQSthTFNLbG1xbVZXZU9ZdVBVc3JDTTUweGJLWlJmUGJPZk1aL1ZJTkcKTlQxMys3VmJwaWF4RG1HcXliMnRWRmNOUEg5NzhuSXM5OXFSWUhDWU00cnE5SzRZRkFzaXhTY1VpU1ZZdHl0NApsdEVyT3dYMXB3UllvemsraTIrMFVtRGpvOGVZdVJqaXp5cFJlMmJleEdrZUpIMW15eGthNXVGWk1uN0VRM3kvCi9STSsrOXdRVVJSNjY5enNZNnc9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K", "encoding": "base64", "item": { "file": "ca.crt", "name": "ca_file" }, "source": "/etc/origin/logging/ca.crt" } ok: [openshift] => (item={u'name': 
u'kibana_internal_key', u'file': u'kibana-internal.key'}) => { "changed": false, "content": "LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcGdJQkFBS0NBUUVBeGxlS1NCWU5ic1pPOW9DVVlOZFR1Nm1BL29meHBUVlhTYU82eEs2TGIwOGlrTTZ3Ck9YM1Z5ZnpNc3dBNjkwMlhrUmlXTkpkKzJLeVMyN3c4RkpFbm5CL0lIZk8xQmd3N2p2QmZzMlRERDBZSlBKalgKcjVZZk9xSk5TRjRWUEZsSXF2VWl5MHZxUkxRaTJaQXlBeWdSKzk4R05mSmdWOTdqcnJ5Zk5nY0FVQmN4WlN0cwpqU0x4aVlkc2wvSHhjMEVrM3JGbHJ1cWprMnJQV1dkZnEwVTFGSTBsYUdvS1V3bmJlWlE5VjlOWXFwSDVuWmVTCkR4REg3Sy8vdW1mMWJSSWptdnFyeVJxRjNtTWtPdm1uMXR4TDNLT2ZGK09Sb2dSWmZzRWNqbnBydUkxRFpMd1MKcTZja3ZXU0h0TlE4ZjlEc2xTUmJ4d01FcWs5VHd6b1FWYXFQVFFJREFRQUJBb0lCQVFDOVh5V3pjQUxCU214bwpKUm9HWUhFZEUxa0xMT2NHY3loMU1mT0lDSk11NHFMQkdlYmQ3WXhxLzRpK083RVJJQzlmcE5iOVBjd3B1cE81Cll6OEY4QldlbGlXdW0xcXlmSWw5RDNxQVFPdVFzTER1LzR1bnBURUovWjdHUXJZSjJjRnRJUUpva29JSnVPZ3gKUytERWJNVEc5QWp0Qnc3L3R0c3lvZnR0VFQvNk5veHM0bDlMNXUxSlBTTldYSUVFVEsrZHp0M0RuNEZDZ0RFOQpyUHovUTJadXIzZGpROVYzNjU0bUtaQVNoOXVtNzhMU0ZGWHBXSTA1THpHVTY3bmVLL05mSVd2bDVjcWdvTUEyCkpHcFloQ1FFWEd3Z1BjRHREa0JqYVB2MVZTbXZ6Tko1WUUxWWplOUd5Q2JvSmRQdFZYaEVrK093L1p1Zkxzd2wKWWpnV2NuK2hBb0dCQU9PelBqa0RwRVBaY04yakc2RkE0RVhBRTJWamRkaVZaMERHSU1JeUtMcHJBQnFCVTVvaQp6Y2lDcElQY00wUmwxZVJmUWtTN1RNQkFrWXk0Y2F1Yjh1RE9YMytEeEVRZm9aNzUrZUw0dDdOREgrK0JTOC9uCnlpVFVOUGRDYUdKR0Jrb2kxMElwZDVPMTljVTdUZlFiazBoemEzSkFWWVdXQzRoUFlIbERhUzlsQW9HQkFONysKTkVIRThwTGwyV0E5cEI3Z3YxckZTWmpQTkdqczlzUEdKQnh4Y0ViYjRhNTFZem82OVJoZXVEalp0aDgybmtpVQpycEI2VEk5VWhCaTY5U2o0Q05LTVRpVzdWVWZzM2RZbWFuWkVlWVQ5QndoV0hxYmRnbU94TEhlRmJ0ZVRWaldTCnpvc0M5TExaSlp3aWQvRlhqRlpndkJMWkVNVi9oTUQ1WTlqUWcrWEpBb0dCQUpCQmpyb3dSSEYzNExtS0RJY3MKd3VsdHR0d1ZGeVFRQTBwV080ck1uR0QrU1NLQnJLV0tSelV4RDJrNnFJQTh4RFhhNC9FSGVLaVVQNklYZUd4dwpjSDljUDhSWmhvNWlPOUtzTEZSUG5wSkRoSWdJTWkrVmVjdTdaWk1BejREelBDamJ5ZVJ3d1FFajFvRU9BV1VWCjAwbWpWZjhjSXhKdTdQOSt5bkFJOVNyQkFvR0JBS2dsc3kzczNxVmFZSUdydVhmM0xSTzdOSFhmdUx0dUE5MDQKS2I2dzQyTHJKdEF3Z0RSR2hNNXRqaWlBTWs1ekZ3UFA2Wm5VUHFyTnBoWW4wL21pbnJSMVMvQXp4R2pKK2JVagpucCt6bnBaalhjd3hkRWVMUEdrRURtM0oxZjBFZ3JzL0NqUFVkTVB2N2VaQUw0Vnk2TVd4aCtBR2doa0t3UVhxCmlCblRrY0hSQW9HQkFMMU5LNTNVVFM4bGVmbkwwQS9mL055VEk1UWdteFN0Tm9yL3pKaElONFJGL2dqbHoyQ3UKOVl3V3VJcjREOUROSTgvZWF4RC9SSUxGc0xoSmVmMUJVcWJxQjJFV0ordTc1cEx0QlZ5M2N1NjdJSmZXRVl4dwoybmY0aG9mNG1mUVpTRk5EOVhVSno1N05PMFVzdzc3KzhCQ3lJdFhjMGZIMVBhb1lvenpKSDBEZAotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=", "encoding": "base64", "item": { "file": "kibana-internal.key", "name": "kibana_internal_key" }, "source": "/etc/origin/logging/kibana-internal.key" } ok: [openshift] => (item={u'name': u'kibana_internal_cert', u'file': u'kibana-internal.crt'}) => { "changed": false, "content": 
"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURUakNDQWphZ0F3SUJBZ0lCQWpBTkJna3Foa2lHOXcwQkFRc0ZBREFlTVJ3d0dnWURWUVFERXhOc2IyZG4KYVc1bkxYTnBaMjVsY2kxMFpYTjBNQjRYRFRFM01EWXdPVEUwTXpReE5Gb1hEVEU1TURZd09URTBNelF4TlZvdwpGakVVTUJJR0ExVUVBeE1MSUd0cFltRnVZUzF2Y0hNd2dnRWlNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0SUJEd0F3CmdnRUtBb0lCQVFER1Y0cElGZzF1eGs3MmdKUmcxMU83cVlEK2gvR2xOVmRKbzdyRXJvdHZUeUtRenJBNWZkWEoKL015ekFEcjNUWmVSR0pZMGwzN1lySkxidkR3VWtTZWNIOGdkODdVR0REdU84Rit6Wk1NUFJnazhtTmV2bGg4NgpvazFJWGhVOFdVaXE5U0xMUytwRXRDTFprRElES0JINzN3WTE4bUJYM3VPdXZKODJCd0JRRnpGbEsyeU5JdkdKCmgyeVg4ZkZ6UVNUZXNXV3U2cU9UYXM5WloxK3JSVFVValNWb2FncFRDZHQ1bEQxWDAxaXFrZm1kbDVJUEVNZnMKci8rNlovVnRFaU9hK3F2SkdvWGVZeVE2K2FmVzNFdmNvNThYNDVHaUJGbCt3UnlPZW11NGpVTmt2QktycHlTOQpaSWUwMUR4LzBPeVZKRnZIQXdTcVQxUERPaEJWcW85TkFnTUJBQUdqZ1o0d2dac3dEZ1lEVlIwUEFRSC9CQVFECkFnV2dNQk1HQTFVZEpRUU1NQW9HQ0NzR0FRVUZCd01CTUF3R0ExVWRFd0VCL3dRQ01BQXdaZ1lEVlIwUkJGOHcKWFlJTElHdHBZbUZ1WVMxdmNIT0NMQ0JyYVdKaGJtRXRiM0J6TG5KdmRYUmxjaTVrWldaaGRXeDBMbk4yWXk1agpiSFZ6ZEdWeUxteHZZMkZzZ2hnZ2EybGlZVzVoTGpFeU55NHdMakF1TVM1NGFYQXVhVytDQm10cFltRnVZVEFOCkJna3Foa2lHOXcwQkFRc0ZBQU9DQVFFQXY1YTNmbDBqRUpXaGFDVTZ6YUg4N2c4NTY2b0ZLZEhNUlhidjl6QzkKcmV0UFhCSnJtU1dHWVRTOWxDblhyZXZ6b01xeTUxWDBlSllFTTdUaGVrVUVYeERJVEZibUJFakJ5d3ZsQlJRMQpaUXFOcFVBWEFnYjdreWRBSmtHSDFGbkFuVlNhSGdtblhQOVJLOGdMY1BSeG1nMnpZaUMzeVFZcFNaWHJkMzYrCmNOenAzY2ZUY1BOdEQ1VGQxWjVhTVFsWDNOMjh4TUVsRDBXRHZXQ0R3ZmtuS2NoUWhMNUxiajdkZ1lzSjN4bVAKR2tLdXJzbkVXb2UxUTRDTGY1UDM4R1pjSkV6NEtWRnY2Yk90T2ZKVk54c3BHRlhkY1hCQy9qQ1V4TndPQ2piTQo3QUNIbWNFaTdiMjVobDNvQUoxdForb2d3eUZxNWpTVThYcW43MEdLKzlVSWRBPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQotLS0tLUJFR0lOIENFUlRJRklDQVRFLS0tLS0KTUlJQzJqQ0NBY0tnQXdJQkFnSUJBVEFOQmdrcWhraUc5dzBCQVFzRkFEQWVNUnd3R2dZRFZRUURFeE5zYjJkbgphVzVuTFhOcFoyNWxjaTEwWlhOME1CNFhEVEUzTURZd09URTBNelF4TWxvWERUSXlNRFl3T0RFME16UXhNMW93CkhqRWNNQm9HQTFVRUF4TVRiRzluWjJsdVp5MXphV2R1WlhJdGRHVnpkRENDQVNJd0RRWUpLb1pJaHZjTkFRRUIKQlFBRGdnRVBBRENDQVFvQ2dnRUJBT01laHRSdkZ6SE03VFlHaHdvM1VSUEdlZ09nclV5aEh3Y01sRGwwekNSKwpCb29wbGJNRVFYeW8zUFp3b3hhditSMzZ4OExteUlsZ2Facm1Uc0Vxc3ZybXppNkFjTnF0Y2hZUnBiVGlKRG5RCm5HdmtiS3pISllrMktOVXNrVHhXK3JaK01OTFZUd2tTRC9VOFV4cTREcWJka2xFNlljYzdsOGRZWkxFMHgzS2EKdThRMWtyUVlrajMyTmJ3MDVRQWdiaDcyZ0pzQnpqRDBmb1dkNzBQeVc1M0lVOTdERGxTSjQyR1VzeER3S3NaZQpOS3p4Y0lMKytzRzZkeXRob2pEWTM1YzJaM0RFdXo2SnBMcllQWG4reGxsSGtOSUEyQ21MVlhqend4QUNxMXZFCkFGcnBqbHJ0RVNEYzB5TVY4b0h5bTdia3JyL1FoYVUvYW1yZ2xJczB1Y2tDQXdFQUFhTWpNQ0V3RGdZRFZSMFAKQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCL3dRRk1BTUJBZjh3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUpOZApXZnpYRlU0S2tiK0MvNmdnVnFua2hCMTNDQ0dLWm9ET1VuSmFhZVpPOEQxM1QwaG9Oek5FcXk2VEFERVAxSmhuCloxMDVncTgzc0RLdmhCWnpFVnVqNldaU29sVkNkaHgzbVM5ZmMrQTJmOUNxS2xyL0x6MmNCTllxQWtBUm0rMDEKbVpTbU43M3FBSE5aQ0NLSUxkZ2tFd0pBK2FMU0tsbXFtVldlT1l1UFVzckNNNTB4YktaUmZQYk9mTVovVklORwpOVDEzKzdWYnBpYXhEbUdxeWIydFZGY05QSDk3OG5Jczk5cVJZSENZTTRycTlLNFlGQXNpeFNjVWlTVll0eXQ0Cmx0RXJPd1gxcHdSWW96aytpMiswVW1Eam84ZVl1UmppenlwUmUyYmV4R2tlSkgxbXl4a2E1dUZaTW43RVEzeS8KL1JNKys5d1FVUlI2Njl6c1k2dz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=", "encoding": "base64", "item": { "file": "kibana-internal.crt", "name": "kibana_internal_cert" }, "source": "/etc/origin/logging/kibana-internal.crt" } ok: [openshift] => (item={u'name': u'server_tls', u'file': u'server-tls.json'}) => { "changed": false, "content": 
"Ly8gU2VlIGZvciBhdmFpbGFibGUgb3B0aW9uczogaHR0cHM6Ly9ub2RlanMub3JnL2FwaS90bHMuaHRtbCN0bHNfdGxzX2NyZWF0ZXNlcnZlcl9vcHRpb25zX3NlY3VyZWNvbm5lY3Rpb25saXN0ZW5lcgp0bHNfb3B0aW9ucyA9IHsKCWNpcGhlcnM6ICdrRUVDREg6K2tFRUNESCtTSEE6a0VESDora0VESCtTSEE6K2tFREgrQ0FNRUxMSUE6a0VDREg6K2tFQ0RIK1NIQTprUlNBOitrUlNBK1NIQTora1JTQStDQU1FTExJQTohYU5VTEw6IWVOVUxMOiFTU0x2MjohUkM0OiFERVM6IUVYUDohU0VFRDohSURFQTorM0RFUycsCglob25vckNpcGhlck9yZGVyOiB0cnVlCn0K", "encoding": "base64", "item": { "file": "server-tls.json", "name": "server_tls" }, "source": "/etc/origin/logging/server-tls.json" } ok: [openshift] => (item={u'name': u'session_secret', u'file': u'session_secret'}) => { "changed": false, "content": "Q0pTNDV1eXJINWJqUGZMWGlPSktMVXJsVWs3dWVNODNIRGFqU0drb25aQkp5Qk5md2NqelJCNFlHUE5JRUk5SmZuRjVTVGExZjRlWk1nWUlJRE5nVEg4dmRuWVdJakF5b3dtdXdyZ0ZUUWZIVzFwTFRSMXBkQzJBdnRTdms1S0ZuMEh2YkVCYVk2VXZRdWZmc3RWSmw0ZTN1cngyVkpwMThHUHFkWlN2eUxTdXJEVnB2OFVqTkU3RFZvOW1QTGlpbm01cGFnNG4=", "encoding": "base64", "item": { "file": "session_secret", "name": "session_secret" }, "source": "/etc/origin/logging/session_secret" } ok: [openshift] => (item={u'name': u'oauth_secret', u'file': u'oauth_secret'}) => { "changed": false, "content": "MEtsREp2NlA2NlJxRjRYQ2RYVXdTaEJLT0p3eU16N3JDdDhzZ0k3WnlYbjIwcVBIVHpiQnVTSUxacVF3R3J0aA==", "encoding": "base64", "item": { "file": "oauth_secret", "name": "oauth_secret" }, "source": "/etc/origin/logging/oauth_secret" } TASK [openshift_logging_kibana : Set logging-kibana service] ******************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:84 changed: [openshift] => { "changed": true, "results": { "clusterip": "172.30.216.229", "cmd": "/bin/oc get service logging-kibana -o json -n logging", "results": [ { "apiVersion": "v1", "kind": "Service", "metadata": { "creationTimestamp": "2017-06-09T14:35:02Z", "name": "logging-kibana", "namespace": "logging", "resourceVersion": "1370", "selfLink": "/api/v1/namespaces/logging/services/logging-kibana", "uid": "d2d9c8ab-4d20-11e7-94cc-0e3d36056ef8" }, "spec": { "clusterIP": "172.30.216.229", "ports": [ { "port": 443, "protocol": "TCP", "targetPort": "oaproxy" } ], "selector": { "component": "kibana", "provider": "openshift" }, "sessionAffinity": "None", "type": "ClusterIP" }, "status": { "loadBalancer": {} } } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_kibana : set_fact] ************************************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:101 [WARNING]: when statements should not include jinja2 templating delimiters such as {{ }} or {% %}. Found: {{ openshift_logging_kibana_key | trim | length > 0 }} skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_kibana : set_fact] ************************************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:106 [WARNING]: when statements should not include jinja2 templating delimiters such as {{ }} or {% %}. Found: {{ openshift_logging_kibana_cert | trim | length > 0 }} skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_kibana : set_fact] ************************************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:111 [WARNING]: when statements should not include jinja2 templating delimiters such as {{ }} or {% %}. 
Found: {{ openshift_logging_kibana_ca | trim | length > 0 }} skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_kibana : set_fact] ************************************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:116 ok: [openshift] => { "ansible_facts": { "kibana_ca": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMyakNDQWNLZ0F3SUJBZ0lCQVRBTkJna3Foa2lHOXcwQkFRc0ZBREFlTVJ3d0dnWURWUVFERXhOc2IyZG4KYVc1bkxYTnBaMjVsY2kxMFpYTjBNQjRYRFRFM01EWXdPVEUwTXpReE1sb1hEVEl5TURZd09ERTBNelF4TTFvdwpIakVjTUJvR0ExVUVBeE1UYkc5bloybHVaeTF6YVdkdVpYSXRkR1Z6ZERDQ0FTSXdEUVlKS29aSWh2Y05BUUVCCkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU9NZWh0UnZGekhNN1RZR2h3bzNVUlBHZWdPZ3JVeWhId2NNbERsMHpDUisKQm9vcGxiTUVRWHlvM1Bad294YXYrUjM2eDhMbXlJbGdhWnJtVHNFcXN2cm16aTZBY05xdGNoWVJwYlRpSkRuUQpuR3ZrYkt6SEpZazJLTlVza1R4VytyWitNTkxWVHdrU0QvVThVeHE0RHFiZGtsRTZZY2M3bDhkWVpMRTB4M0thCnU4UTFrclFZa2ozMk5idzA1UUFnYmg3MmdKc0J6akQwZm9XZDcwUHlXNTNJVTk3RERsU0o0MkdVc3hEd0tzWmUKTkt6eGNJTCsrc0c2ZHl0aG9qRFkzNWMyWjNERXV6NkpwTHJZUFhuK3hsbEhrTklBMkNtTFZYanp3eEFDcTF2RQpBRnJwamxydEVTRGMweU1WOG9IeW03Ymtyci9RaGFVL2FtcmdsSXMwdWNrQ0F3RUFBYU1qTUNFd0RnWURWUjBQCkFRSC9CQVFEQWdLa01BOEdBMVVkRXdFQi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFKTmQKV2Z6WEZVNEtrYitDLzZnZ1ZxbmtoQjEzQ0NHS1pvRE9VbkphYWVaTzhEMTNUMGhvTnpORXF5NlRBREVQMUpobgpaMTA1Z3E4M3NES3ZoQlp6RVZ1ajZXWlNvbFZDZGh4M21TOWZjK0EyZjlDcUtsci9MejJjQk5ZcUFrQVJtKzAxCm1aU21ONzNxQUhOWkNDS0lMZGdrRXdKQSthTFNLbG1xbVZXZU9ZdVBVc3JDTTUweGJLWlJmUGJPZk1aL1ZJTkcKTlQxMys3VmJwaWF4RG1HcXliMnRWRmNOUEg5NzhuSXM5OXFSWUhDWU00cnE5SzRZRkFzaXhTY1VpU1ZZdHl0NApsdEVyT3dYMXB3UllvemsraTIrMFVtRGpvOGVZdVJqaXp5cFJlMmJleEdrZUpIMW15eGthNXVGWk1uN0VRM3kvCi9STSsrOXdRVVJSNjY5enNZNnc9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K" }, "changed": false } TASK [openshift_logging_kibana : Generating Kibana route template] ************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:121 ok: [openshift] => { "changed": false, "checksum": "69c8fc3327cc9c94b33060a4913a1504bb3a9d48", "dest": "/tmp/openshift-logging-ansible-cJoPue/templates/kibana-route.yaml", "gid": 0, "group": "root", "md5sum": "f968a68b3faa852cea6d277bcdac4d76", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 2714, "src": "/root/.ansible/tmp/ansible-tmp-1497018903.48-210625764598451/source", "state": "file", "uid": 0 } TASK [openshift_logging_kibana : Setting Kibana route] ************************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:141 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc get route logging-kibana -o json -n logging", "results": [ { "apiVersion": "v1", "kind": "Route", "metadata": { "creationTimestamp": "2017-06-09T14:35:04Z", "labels": { "component": "support", "logging-infra": "support", "provider": "openshift" }, "name": "logging-kibana", "namespace": "logging", "resourceVersion": "1375", "selfLink": "/oapi/v1/namespaces/logging/routes/logging-kibana", "uid": "d3ba8e44-4d20-11e7-94cc-0e3d36056ef8" }, "spec": { "host": "kibana.router.default.svc.cluster.local", "tls": { "caCertificate": "-----BEGIN 
CERTIFICATE-----\nMIIC2jCCAcKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAeMRwwGgYDVQQDExNsb2dn\naW5nLXNpZ25lci10ZXN0MB4XDTE3MDYwOTE0MzQxMloXDTIyMDYwODE0MzQxM1ow\nHjEcMBoGA1UEAxMTbG9nZ2luZy1zaWduZXItdGVzdDCCASIwDQYJKoZIhvcNAQEB\nBQADggEPADCCAQoCggEBAOMehtRvFzHM7TYGhwo3URPGegOgrUyhHwcMlDl0zCR+\nBooplbMEQXyo3PZwoxav+R36x8LmyIlgaZrmTsEqsvrmzi6AcNqtchYRpbTiJDnQ\nnGvkbKzHJYk2KNUskTxW+rZ+MNLVTwkSD/U8Uxq4DqbdklE6Ycc7l8dYZLE0x3Ka\nu8Q1krQYkj32Nbw05QAgbh72gJsBzjD0foWd70PyW53IU97DDlSJ42GUsxDwKsZe\nNKzxcIL++sG6dythojDY35c2Z3DEuz6JpLrYPXn+xllHkNIA2CmLVXjzwxACq1vE\nAFrpjlrtESDc0yMV8oHym7bkrr/QhaU/amrglIs0uckCAwEAAaMjMCEwDgYDVR0P\nAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAJNd\nWfzXFU4Kkb+C/6ggVqnkhB13CCGKZoDOUnJaaeZO8D13T0hoNzNEqy6TADEP1Jhn\nZ105gq83sDKvhBZzEVuj6WZSolVCdhx3mS9fc+A2f9CqKlr/Lz2cBNYqAkARm+01\nmZSmN73qAHNZCCKILdgkEwJA+aLSKlmqmVWeOYuPUsrCM50xbKZRfPbOfMZ/VING\nNT13+7VbpiaxDmGqyb2tVFcNPH978nIs99qRYHCYM4rq9K4YFAsixScUiSVYtyt4\nltErOwX1pwRYozk+i2+0UmDjo8eYuRjizypRe2bexGkeJH1myxka5uFZMn7EQ3y/\n/RM++9wQURR669zsY6w=\n-----END CERTIFICATE-----\n", "destinationCACertificate": "-----BEGIN CERTIFICATE-----\nMIIC2jCCAcKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAeMRwwGgYDVQQDExNsb2dn\naW5nLXNpZ25lci10ZXN0MB4XDTE3MDYwOTE0MzQxMloXDTIyMDYwODE0MzQxM1ow\nHjEcMBoGA1UEAxMTbG9nZ2luZy1zaWduZXItdGVzdDCCASIwDQYJKoZIhvcNAQEB\nBQADggEPADCCAQoCggEBAOMehtRvFzHM7TYGhwo3URPGegOgrUyhHwcMlDl0zCR+\nBooplbMEQXyo3PZwoxav+R36x8LmyIlgaZrmTsEqsvrmzi6AcNqtchYRpbTiJDnQ\nnGvkbKzHJYk2KNUskTxW+rZ+MNLVTwkSD/U8Uxq4DqbdklE6Ycc7l8dYZLE0x3Ka\nu8Q1krQYkj32Nbw05QAgbh72gJsBzjD0foWd70PyW53IU97DDlSJ42GUsxDwKsZe\nNKzxcIL++sG6dythojDY35c2Z3DEuz6JpLrYPXn+xllHkNIA2CmLVXjzwxACq1vE\nAFrpjlrtESDc0yMV8oHym7bkrr/QhaU/amrglIs0uckCAwEAAaMjMCEwDgYDVR0P\nAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAJNd\nWfzXFU4Kkb+C/6ggVqnkhB13CCGKZoDOUnJaaeZO8D13T0hoNzNEqy6TADEP1Jhn\nZ105gq83sDKvhBZzEVuj6WZSolVCdhx3mS9fc+A2f9CqKlr/Lz2cBNYqAkARm+01\nmZSmN73qAHNZCCKILdgkEwJA+aLSKlmqmVWeOYuPUsrCM50xbKZRfPbOfMZ/VING\nNT13+7VbpiaxDmGqyb2tVFcNPH978nIs99qRYHCYM4rq9K4YFAsixScUiSVYtyt4\nltErOwX1pwRYozk+i2+0UmDjo8eYuRjizypRe2bexGkeJH1myxka5uFZMn7EQ3y/\n/RM++9wQURR669zsY6w=\n-----END CERTIFICATE-----\n", "insecureEdgeTerminationPolicy": "Redirect", "termination": "reencrypt" }, "to": { "kind": "Service", "name": "logging-kibana", "weight": 100 }, "wildcardPolicy": "None" }, "status": { "ingress": [ { "conditions": [ { "lastTransitionTime": "2017-06-09T14:35:04Z", "status": "True", "type": "Admitted" } ], "host": "kibana.router.default.svc.cluster.local", "routerName": "router", "wildcardPolicy": "None" } ] } } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_kibana : Get current oauthclient hostnames] ************ task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:151 ok: [openshift] => { "changed": false, "results": { "cmd": "/bin/oc get oauthclient kibana-proxy -o json -n logging", "results": [ {} ], "returncode": 0, "stderr": "Error from server (NotFound): oauthclients.oauth.openshift.io \"kibana-proxy\" not found\n", "stdout": "" }, "state": "list" } TASK [openshift_logging_kibana : set_fact] ************************************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:159 ok: [openshift] => { "ansible_facts": { "proxy_hostnames": [ "https://kibana.router.default.svc.cluster.local" ] }, "changed": false } TASK [openshift_logging_kibana : Create oauth-client template] ***************** task path: 
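The logging-kibana route above was admitted with reencrypt termination and an insecureEdgeTerminationPolicy of Redirect, and the kibana-proxy OAuthClient is created next with the route host as its redirect URI. A minimal sketch of reading those two values back with oc, assuming the object names from this run (illustrative only, not playbook output):
# Externally admitted Kibana host and the redirect URIs registered for the auth proxy
oc get route logging-kibana -n logging -o jsonpath='{.spec.host}'
oc get oauthclient kibana-proxy -o jsonpath='{.redirectURIs}'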
/tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:162 changed: [openshift] => { "changed": true, "checksum": "4857ff35024b8c3a6049ca47d80441ffc58405e4", "dest": "/tmp/openshift-logging-ansible-cJoPue/templates/oauth-client.yml", "gid": 0, "group": "root", "md5sum": "40f3c0d559ce4e2d2ccb8b00af6d1534", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 328, "src": "/root/.ansible/tmp/ansible-tmp-1497018905.27-257866010936733/source", "state": "file", "uid": 0 } TASK [openshift_logging_kibana : Set kibana-proxy oauth-client] **************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:170 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc get oauthclient kibana-proxy -o json -n logging", "results": [ { "apiVersion": "v1", "kind": "OAuthClient", "metadata": { "creationTimestamp": "2017-06-09T14:35:06Z", "labels": { "logging-infra": "support" }, "name": "kibana-proxy", "resourceVersion": "1382", "selfLink": "/oapi/v1/oauthclients/kibana-proxy", "uid": "d4bd9f50-4d20-11e7-94cc-0e3d36056ef8" }, "redirectURIs": [ "https://kibana.router.default.svc.cluster.local" ], "scopeRestrictions": [ { "literals": [ "user:info", "user:check-access", "user:list-projects" ] } ], "secret": "0KlDJv6P66RqF4XCdXUwShBKOJwyMz7rCt8sgI7ZyXn20qPHTzbBuSILZqQwGrth" } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_kibana : Set Kibana secret] **************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:181 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc secrets new logging-kibana ca=/etc/origin/logging/ca.crt key=/etc/origin/logging/system.logging.kibana.key cert=/etc/origin/logging/system.logging.kibana.crt -n logging", "results": "", "returncode": 0 }, "state": "present" } TASK [openshift_logging_kibana : Set Kibana Proxy secret] ********************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:195 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc secrets new logging-kibana-proxy oauth-secret=/tmp/oauth-secret-dMmR5w session-secret=/tmp/session-secret-L2flTB server-key=/tmp/server-key-pM47BC server-cert=/tmp/server-cert-2gp0vZ server-tls.json=/tmp/server-tls.json-vhcFPs -n logging", "results": "", "returncode": 0 }, "state": "present" } TASK [openshift_logging_kibana : Generate Kibana DC template] ****************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:221 changed: [openshift] => { "changed": true, "checksum": "4416422cee2e3eba6c9f6eee4729c5292a2fc851", "dest": "/tmp/openshift-logging-ansible-cJoPue/templates/kibana-dc.yaml", "gid": 0, "group": "root", "md5sum": "66f123d939b80d868826b54e08a8d5e6", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 3739, "src": "/root/.ansible/tmp/ansible-tmp-1497018908.05-32497936824718/source", "state": "file", "uid": 0 } TASK [openshift_logging_kibana : Set Kibana DC] ******************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:240 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc get dc logging-kibana -o json -n logging", "results": [ { "apiVersion": "v1", "kind": "DeploymentConfig", "metadata": { "creationTimestamp": "2017-06-09T14:35:09Z", "generation": 2, "labels": { 
"component": "kibana", "logging-infra": "kibana", "provider": "openshift" }, "name": "logging-kibana", "namespace": "logging", "resourceVersion": "1397", "selfLink": "/oapi/v1/namespaces/logging/deploymentconfigs/logging-kibana", "uid": "d687e50a-4d20-11e7-94cc-0e3d36056ef8" }, "spec": { "replicas": 1, "selector": { "component": "kibana", "logging-infra": "kibana", "provider": "openshift" }, "strategy": { "activeDeadlineSeconds": 21600, "resources": {}, "rollingParams": { "intervalSeconds": 1, "maxSurge": "25%", "maxUnavailable": "25%", "timeoutSeconds": 600, "updatePeriodSeconds": 1 }, "type": "Rolling" }, "template": { "metadata": { "creationTimestamp": null, "labels": { "component": "kibana", "logging-infra": "kibana", "provider": "openshift" }, "name": "logging-kibana" }, "spec": { "containers": [ { "env": [ { "name": "ES_HOST", "value": "logging-es" }, { "name": "ES_PORT", "value": "9200" }, { "name": "KIBANA_MEMORY_LIMIT", "valueFrom": { "resourceFieldRef": { "containerName": "kibana", "divisor": "0", "resource": "limits.memory" } } } ], "image": "172.30.106.159:5000/logging/logging-kibana:latest", "imagePullPolicy": "Always", "name": "kibana", "readinessProbe": { "exec": { "command": [ "/usr/share/kibana/probe/readiness.sh" ] }, "failureThreshold": 3, "initialDelaySeconds": 5, "periodSeconds": 5, "successThreshold": 1, "timeoutSeconds": 4 }, "resources": { "limits": { "memory": "736Mi" } }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/etc/kibana/keys", "name": "kibana", "readOnly": true } ] }, { "env": [ { "name": "OAP_BACKEND_URL", "value": "http://localhost:5601" }, { "name": "OAP_AUTH_MODE", "value": "oauth2" }, { "name": "OAP_TRANSFORM", "value": "user_header,token_header" }, { "name": "OAP_OAUTH_ID", "value": "kibana-proxy" }, { "name": "OAP_MASTER_URL", "value": "https://kubernetes.default.svc.cluster.local" }, { "name": "OAP_PUBLIC_MASTER_URL", "value": "https://172.18.1.226:8443" }, { "name": "OAP_LOGOUT_REDIRECT", "value": "https://172.18.1.226:8443/console/logout" }, { "name": "OAP_MASTER_CA_FILE", "value": "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt" }, { "name": "OAP_DEBUG", "value": "False" }, { "name": "OAP_OAUTH_SECRET_FILE", "value": "/secret/oauth-secret" }, { "name": "OAP_SERVER_CERT_FILE", "value": "/secret/server-cert" }, { "name": "OAP_SERVER_KEY_FILE", "value": "/secret/server-key" }, { "name": "OAP_SERVER_TLS_FILE", "value": "/secret/server-tls.json" }, { "name": "OAP_SESSION_SECRET_FILE", "value": "/secret/session-secret" }, { "name": "OCP_AUTH_PROXY_MEMORY_LIMIT", "valueFrom": { "resourceFieldRef": { "containerName": "kibana-proxy", "divisor": "0", "resource": "limits.memory" } } } ], "image": "172.30.106.159:5000/logging/logging-auth-proxy:latest", "imagePullPolicy": "Always", "name": "kibana-proxy", "ports": [ { "containerPort": 3000, "name": "oaproxy", "protocol": "TCP" } ], "resources": { "limits": { "memory": "96Mi" } }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/secret", "name": "kibana-proxy", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "aggregated-logging-kibana", "serviceAccountName": "aggregated-logging-kibana", "terminationGracePeriodSeconds": 30, "volumes": [ { "name": "kibana", "secret": { "defaultMode": 420, "secretName": "logging-kibana" } }, { "name": "kibana-proxy", 
"secret": { "defaultMode": 420, "secretName": "logging-kibana-proxy" } } ] } }, "test": false, "triggers": [ { "type": "ConfigChange" } ] }, "status": { "availableReplicas": 0, "conditions": [ { "lastTransitionTime": "2017-06-09T14:35:09Z", "lastUpdateTime": "2017-06-09T14:35:09Z", "message": "Deployment config does not have minimum availability.", "status": "False", "type": "Available" }, { "lastTransitionTime": "2017-06-09T14:35:09Z", "lastUpdateTime": "2017-06-09T14:35:09Z", "message": "replication controller \"logging-kibana-1\" is waiting for pod \"logging-kibana-1-deploy\" to run", "status": "Unknown", "type": "Progressing" } ], "details": { "causes": [ { "type": "ConfigChange" } ], "message": "config change" }, "latestVersion": 1, "observedGeneration": 2, "replicas": 0, "unavailableReplicas": 0, "updatedReplicas": 0 } } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_kibana : Delete temp directory] ************************ task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:252 ok: [openshift] => { "changed": false, "path": "/tmp/openshift-logging-ansible-cJoPue", "state": "absent" } TASK [openshift_logging : include_role] **************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:166 statically included: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/determine_version.yaml TASK [openshift_logging_kibana : fail] ***************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/determine_version.yaml:3 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_kibana : set_fact] ************************************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/determine_version.yaml:7 ok: [openshift] => { "ansible_facts": { "kibana_version": "3_5" }, "changed": false } TASK [openshift_logging_kibana : set_fact] ************************************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/determine_version.yaml:12 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_kibana : fail] ***************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/determine_version.yaml:15 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_kibana : Create temp directory for doing work in] ****** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:7 ok: [openshift] => { "changed": false, "cmd": [ "mktemp", "-d", "/tmp/openshift-logging-ansible-XXXXXX" ], "delta": "0:00:00.018379", "end": "2017-06-09 10:35:11.281403", "rc": 0, "start": "2017-06-09 10:35:11.263024" } STDOUT: /tmp/openshift-logging-ansible-whEBc5 TASK [openshift_logging_kibana : set_fact] ************************************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:12 ok: [openshift] => { "ansible_facts": { "tempdir": "/tmp/openshift-logging-ansible-whEBc5" }, "changed": false } TASK [openshift_logging_kibana : Create templates subdirectory] **************** task path: 
/tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:16 ok: [openshift] => { "changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/tmp/openshift-logging-ansible-whEBc5/templates", "secontext": "unconfined_u:object_r:user_tmp_t:s0", "size": 6, "state": "directory", "uid": 0 } TASK [openshift_logging_kibana : Create Kibana service account] **************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:26 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_kibana : Create Kibana service account] **************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:34 ok: [openshift] => { "changed": false, "results": { "cmd": "/bin/oc get sa aggregated-logging-kibana -o json -n logging", "results": [ { "apiVersion": "v1", "imagePullSecrets": [ { "name": "aggregated-logging-kibana-dockercfg-9l960" } ], "kind": "ServiceAccount", "metadata": { "creationTimestamp": "2017-06-09T14:34:59Z", "name": "aggregated-logging-kibana", "namespace": "logging", "resourceVersion": "1351", "selfLink": "/api/v1/namespaces/logging/serviceaccounts/aggregated-logging-kibana", "uid": "d0fec0c5-4d20-11e7-94cc-0e3d36056ef8" }, "secrets": [ { "name": "aggregated-logging-kibana-token-t878l" }, { "name": "aggregated-logging-kibana-dockercfg-9l960" } ] } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_kibana : set_fact] ************************************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:42 ok: [openshift] => { "ansible_facts": { "kibana_component": "kibana-ops", "kibana_name": "logging-kibana-ops" }, "changed": false } TASK [openshift_logging_kibana : Checking for session_secret] ****************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:47 ok: [openshift] => { "changed": false, "stat": { "atime": 1497018901.7902544, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "60db28e971103bb52f35f58fa57ab0935025155d", "ctime": 1497018900.816245, "dev": 51714, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 136312470, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "md5": "0cce6ef12ab47bdc2c54a11cff14756f", "mimetype": "text/plain", "mode": "0644", "mtime": 1497018900.715244, "nlink": 1, "path": "/etc/origin/logging/session_secret", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 200, "uid": 0, "version": "18446744071983323620", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } } TASK [openshift_logging_kibana : Checking for oauth_secret] ******************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:51 ok: [openshift] => { "changed": false, "stat": { "atime": 1497018901.9142556, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "ceb52763c135dbf083c01a343ede0d27f384413f", "ctime": 1497018901.116248, "dev": 51714, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 17189516, "isblk": false, "ischr": false, "isdir": false, 
"isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "md5": "f6295b9374ea879db18f669ff24bb93b", "mimetype": "text/plain", "mode": "0644", "mtime": 1497018901.016247, "nlink": 1, "path": "/etc/origin/logging/oauth_secret", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 64, "uid": 0, "version": "2015011887", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false } } TASK [openshift_logging_kibana : Generate session secret] ********************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:56 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_kibana : Generate oauth secret] ************************ task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:64 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_kibana : Retrieving the cert to use when generating secrets for the logging components] *** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:71 ok: [openshift] => (item={u'name': u'ca_file', u'file': u'ca.crt'}) => { "changed": false, "content": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMyakNDQWNLZ0F3SUJBZ0lCQVRBTkJna3Foa2lHOXcwQkFRc0ZBREFlTVJ3d0dnWURWUVFERXhOc2IyZG4KYVc1bkxYTnBaMjVsY2kxMFpYTjBNQjRYRFRFM01EWXdPVEUwTXpReE1sb1hEVEl5TURZd09ERTBNelF4TTFvdwpIakVjTUJvR0ExVUVBeE1UYkc5bloybHVaeTF6YVdkdVpYSXRkR1Z6ZERDQ0FTSXdEUVlKS29aSWh2Y05BUUVCCkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU9NZWh0UnZGekhNN1RZR2h3bzNVUlBHZWdPZ3JVeWhId2NNbERsMHpDUisKQm9vcGxiTUVRWHlvM1Bad294YXYrUjM2eDhMbXlJbGdhWnJtVHNFcXN2cm16aTZBY05xdGNoWVJwYlRpSkRuUQpuR3ZrYkt6SEpZazJLTlVza1R4VytyWitNTkxWVHdrU0QvVThVeHE0RHFiZGtsRTZZY2M3bDhkWVpMRTB4M0thCnU4UTFrclFZa2ozMk5idzA1UUFnYmg3MmdKc0J6akQwZm9XZDcwUHlXNTNJVTk3RERsU0o0MkdVc3hEd0tzWmUKTkt6eGNJTCsrc0c2ZHl0aG9qRFkzNWMyWjNERXV6NkpwTHJZUFhuK3hsbEhrTklBMkNtTFZYanp3eEFDcTF2RQpBRnJwamxydEVTRGMweU1WOG9IeW03Ymtyci9RaGFVL2FtcmdsSXMwdWNrQ0F3RUFBYU1qTUNFd0RnWURWUjBQCkFRSC9CQVFEQWdLa01BOEdBMVVkRXdFQi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFKTmQKV2Z6WEZVNEtrYitDLzZnZ1ZxbmtoQjEzQ0NHS1pvRE9VbkphYWVaTzhEMTNUMGhvTnpORXF5NlRBREVQMUpobgpaMTA1Z3E4M3NES3ZoQlp6RVZ1ajZXWlNvbFZDZGh4M21TOWZjK0EyZjlDcUtsci9MejJjQk5ZcUFrQVJtKzAxCm1aU21ONzNxQUhOWkNDS0lMZGdrRXdKQSthTFNLbG1xbVZXZU9ZdVBVc3JDTTUweGJLWlJmUGJPZk1aL1ZJTkcKTlQxMys3VmJwaWF4RG1HcXliMnRWRmNOUEg5NzhuSXM5OXFSWUhDWU00cnE5SzRZRkFzaXhTY1VpU1ZZdHl0NApsdEVyT3dYMXB3UllvemsraTIrMFVtRGpvOGVZdVJqaXp5cFJlMmJleEdrZUpIMW15eGthNXVGWk1uN0VRM3kvCi9STSsrOXdRVVJSNjY5enNZNnc9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K", "encoding": "base64", "item": { "file": "ca.crt", "name": "ca_file" }, "source": "/etc/origin/logging/ca.crt" } ok: [openshift] => (item={u'name': u'kibana_internal_key', u'file': u'kibana-internal.key'}) => { "changed": false, "content": 
"LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcGdJQkFBS0NBUUVBeGxlS1NCWU5ic1pPOW9DVVlOZFR1Nm1BL29meHBUVlhTYU82eEs2TGIwOGlrTTZ3Ck9YM1Z5ZnpNc3dBNjkwMlhrUmlXTkpkKzJLeVMyN3c4RkpFbm5CL0lIZk8xQmd3N2p2QmZzMlRERDBZSlBKalgKcjVZZk9xSk5TRjRWUEZsSXF2VWl5MHZxUkxRaTJaQXlBeWdSKzk4R05mSmdWOTdqcnJ5Zk5nY0FVQmN4WlN0cwpqU0x4aVlkc2wvSHhjMEVrM3JGbHJ1cWprMnJQV1dkZnEwVTFGSTBsYUdvS1V3bmJlWlE5VjlOWXFwSDVuWmVTCkR4REg3Sy8vdW1mMWJSSWptdnFyeVJxRjNtTWtPdm1uMXR4TDNLT2ZGK09Sb2dSWmZzRWNqbnBydUkxRFpMd1MKcTZja3ZXU0h0TlE4ZjlEc2xTUmJ4d01FcWs5VHd6b1FWYXFQVFFJREFRQUJBb0lCQVFDOVh5V3pjQUxCU214bwpKUm9HWUhFZEUxa0xMT2NHY3loMU1mT0lDSk11NHFMQkdlYmQ3WXhxLzRpK083RVJJQzlmcE5iOVBjd3B1cE81Cll6OEY4QldlbGlXdW0xcXlmSWw5RDNxQVFPdVFzTER1LzR1bnBURUovWjdHUXJZSjJjRnRJUUpva29JSnVPZ3gKUytERWJNVEc5QWp0Qnc3L3R0c3lvZnR0VFQvNk5veHM0bDlMNXUxSlBTTldYSUVFVEsrZHp0M0RuNEZDZ0RFOQpyUHovUTJadXIzZGpROVYzNjU0bUtaQVNoOXVtNzhMU0ZGWHBXSTA1THpHVTY3bmVLL05mSVd2bDVjcWdvTUEyCkpHcFloQ1FFWEd3Z1BjRHREa0JqYVB2MVZTbXZ6Tko1WUUxWWplOUd5Q2JvSmRQdFZYaEVrK093L1p1Zkxzd2wKWWpnV2NuK2hBb0dCQU9PelBqa0RwRVBaY04yakc2RkE0RVhBRTJWamRkaVZaMERHSU1JeUtMcHJBQnFCVTVvaQp6Y2lDcElQY00wUmwxZVJmUWtTN1RNQkFrWXk0Y2F1Yjh1RE9YMytEeEVRZm9aNzUrZUw0dDdOREgrK0JTOC9uCnlpVFVOUGRDYUdKR0Jrb2kxMElwZDVPMTljVTdUZlFiazBoemEzSkFWWVdXQzRoUFlIbERhUzlsQW9HQkFONysKTkVIRThwTGwyV0E5cEI3Z3YxckZTWmpQTkdqczlzUEdKQnh4Y0ViYjRhNTFZem82OVJoZXVEalp0aDgybmtpVQpycEI2VEk5VWhCaTY5U2o0Q05LTVRpVzdWVWZzM2RZbWFuWkVlWVQ5QndoV0hxYmRnbU94TEhlRmJ0ZVRWaldTCnpvc0M5TExaSlp3aWQvRlhqRlpndkJMWkVNVi9oTUQ1WTlqUWcrWEpBb0dCQUpCQmpyb3dSSEYzNExtS0RJY3MKd3VsdHR0d1ZGeVFRQTBwV080ck1uR0QrU1NLQnJLV0tSelV4RDJrNnFJQTh4RFhhNC9FSGVLaVVQNklYZUd4dwpjSDljUDhSWmhvNWlPOUtzTEZSUG5wSkRoSWdJTWkrVmVjdTdaWk1BejREelBDamJ5ZVJ3d1FFajFvRU9BV1VWCjAwbWpWZjhjSXhKdTdQOSt5bkFJOVNyQkFvR0JBS2dsc3kzczNxVmFZSUdydVhmM0xSTzdOSFhmdUx0dUE5MDQKS2I2dzQyTHJKdEF3Z0RSR2hNNXRqaWlBTWs1ekZ3UFA2Wm5VUHFyTnBoWW4wL21pbnJSMVMvQXp4R2pKK2JVagpucCt6bnBaalhjd3hkRWVMUEdrRURtM0oxZjBFZ3JzL0NqUFVkTVB2N2VaQUw0Vnk2TVd4aCtBR2doa0t3UVhxCmlCblRrY0hSQW9HQkFMMU5LNTNVVFM4bGVmbkwwQS9mL055VEk1UWdteFN0Tm9yL3pKaElONFJGL2dqbHoyQ3UKOVl3V3VJcjREOUROSTgvZWF4RC9SSUxGc0xoSmVmMUJVcWJxQjJFV0ordTc1cEx0QlZ5M2N1NjdJSmZXRVl4dwoybmY0aG9mNG1mUVpTRk5EOVhVSno1N05PMFVzdzc3KzhCQ3lJdFhjMGZIMVBhb1lvenpKSDBEZAotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=", "encoding": "base64", "item": { "file": "kibana-internal.key", "name": "kibana_internal_key" }, "source": "/etc/origin/logging/kibana-internal.key" } ok: [openshift] => (item={u'name': u'kibana_internal_cert', u'file': u'kibana-internal.crt'}) => { "changed": false, "content": 
"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURUakNDQWphZ0F3SUJBZ0lCQWpBTkJna3Foa2lHOXcwQkFRc0ZBREFlTVJ3d0dnWURWUVFERXhOc2IyZG4KYVc1bkxYTnBaMjVsY2kxMFpYTjBNQjRYRFRFM01EWXdPVEUwTXpReE5Gb1hEVEU1TURZd09URTBNelF4TlZvdwpGakVVTUJJR0ExVUVBeE1MSUd0cFltRnVZUzF2Y0hNd2dnRWlNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0SUJEd0F3CmdnRUtBb0lCQVFER1Y0cElGZzF1eGs3MmdKUmcxMU83cVlEK2gvR2xOVmRKbzdyRXJvdHZUeUtRenJBNWZkWEoKL015ekFEcjNUWmVSR0pZMGwzN1lySkxidkR3VWtTZWNIOGdkODdVR0REdU84Rit6Wk1NUFJnazhtTmV2bGg4NgpvazFJWGhVOFdVaXE5U0xMUytwRXRDTFprRElES0JINzN3WTE4bUJYM3VPdXZKODJCd0JRRnpGbEsyeU5JdkdKCmgyeVg4ZkZ6UVNUZXNXV3U2cU9UYXM5WloxK3JSVFVValNWb2FncFRDZHQ1bEQxWDAxaXFrZm1kbDVJUEVNZnMKci8rNlovVnRFaU9hK3F2SkdvWGVZeVE2K2FmVzNFdmNvNThYNDVHaUJGbCt3UnlPZW11NGpVTmt2QktycHlTOQpaSWUwMUR4LzBPeVZKRnZIQXdTcVQxUERPaEJWcW85TkFnTUJBQUdqZ1o0d2dac3dEZ1lEVlIwUEFRSC9CQVFECkFnV2dNQk1HQTFVZEpRUU1NQW9HQ0NzR0FRVUZCd01CTUF3R0ExVWRFd0VCL3dRQ01BQXdaZ1lEVlIwUkJGOHcKWFlJTElHdHBZbUZ1WVMxdmNIT0NMQ0JyYVdKaGJtRXRiM0J6TG5KdmRYUmxjaTVrWldaaGRXeDBMbk4yWXk1agpiSFZ6ZEdWeUxteHZZMkZzZ2hnZ2EybGlZVzVoTGpFeU55NHdMakF1TVM1NGFYQXVhVytDQm10cFltRnVZVEFOCkJna3Foa2lHOXcwQkFRc0ZBQU9DQVFFQXY1YTNmbDBqRUpXaGFDVTZ6YUg4N2c4NTY2b0ZLZEhNUlhidjl6QzkKcmV0UFhCSnJtU1dHWVRTOWxDblhyZXZ6b01xeTUxWDBlSllFTTdUaGVrVUVYeERJVEZibUJFakJ5d3ZsQlJRMQpaUXFOcFVBWEFnYjdreWRBSmtHSDFGbkFuVlNhSGdtblhQOVJLOGdMY1BSeG1nMnpZaUMzeVFZcFNaWHJkMzYrCmNOenAzY2ZUY1BOdEQ1VGQxWjVhTVFsWDNOMjh4TUVsRDBXRHZXQ0R3ZmtuS2NoUWhMNUxiajdkZ1lzSjN4bVAKR2tLdXJzbkVXb2UxUTRDTGY1UDM4R1pjSkV6NEtWRnY2Yk90T2ZKVk54c3BHRlhkY1hCQy9qQ1V4TndPQ2piTQo3QUNIbWNFaTdiMjVobDNvQUoxdForb2d3eUZxNWpTVThYcW43MEdLKzlVSWRBPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQotLS0tLUJFR0lOIENFUlRJRklDQVRFLS0tLS0KTUlJQzJqQ0NBY0tnQXdJQkFnSUJBVEFOQmdrcWhraUc5dzBCQVFzRkFEQWVNUnd3R2dZRFZRUURFeE5zYjJkbgphVzVuTFhOcFoyNWxjaTEwWlhOME1CNFhEVEUzTURZd09URTBNelF4TWxvWERUSXlNRFl3T0RFME16UXhNMW93CkhqRWNNQm9HQTFVRUF4TVRiRzluWjJsdVp5MXphV2R1WlhJdGRHVnpkRENDQVNJd0RRWUpLb1pJaHZjTkFRRUIKQlFBRGdnRVBBRENDQVFvQ2dnRUJBT01laHRSdkZ6SE03VFlHaHdvM1VSUEdlZ09nclV5aEh3Y01sRGwwekNSKwpCb29wbGJNRVFYeW8zUFp3b3hhditSMzZ4OExteUlsZ2Facm1Uc0Vxc3ZybXppNkFjTnF0Y2hZUnBiVGlKRG5RCm5HdmtiS3pISllrMktOVXNrVHhXK3JaK01OTFZUd2tTRC9VOFV4cTREcWJka2xFNlljYzdsOGRZWkxFMHgzS2EKdThRMWtyUVlrajMyTmJ3MDVRQWdiaDcyZ0pzQnpqRDBmb1dkNzBQeVc1M0lVOTdERGxTSjQyR1VzeER3S3NaZQpOS3p4Y0lMKytzRzZkeXRob2pEWTM1YzJaM0RFdXo2SnBMcllQWG4reGxsSGtOSUEyQ21MVlhqend4QUNxMXZFCkFGcnBqbHJ0RVNEYzB5TVY4b0h5bTdia3JyL1FoYVUvYW1yZ2xJczB1Y2tDQXdFQUFhTWpNQ0V3RGdZRFZSMFAKQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCL3dRRk1BTUJBZjh3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUpOZApXZnpYRlU0S2tiK0MvNmdnVnFua2hCMTNDQ0dLWm9ET1VuSmFhZVpPOEQxM1QwaG9Oek5FcXk2VEFERVAxSmhuCloxMDVncTgzc0RLdmhCWnpFVnVqNldaU29sVkNkaHgzbVM5ZmMrQTJmOUNxS2xyL0x6MmNCTllxQWtBUm0rMDEKbVpTbU43M3FBSE5aQ0NLSUxkZ2tFd0pBK2FMU0tsbXFtVldlT1l1UFVzckNNNTB4YktaUmZQYk9mTVovVklORwpOVDEzKzdWYnBpYXhEbUdxeWIydFZGY05QSDk3OG5Jczk5cVJZSENZTTRycTlLNFlGQXNpeFNjVWlTVll0eXQ0Cmx0RXJPd1gxcHdSWW96aytpMiswVW1Eam84ZVl1UmppenlwUmUyYmV4R2tlSkgxbXl4a2E1dUZaTW43RVEzeS8KL1JNKys5d1FVUlI2Njl6c1k2dz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=", "encoding": "base64", "item": { "file": "kibana-internal.crt", "name": "kibana_internal_cert" }, "source": "/etc/origin/logging/kibana-internal.crt" } ok: [openshift] => (item={u'name': u'server_tls', u'file': u'server-tls.json'}) => { "changed": false, "content": 
"Ly8gU2VlIGZvciBhdmFpbGFibGUgb3B0aW9uczogaHR0cHM6Ly9ub2RlanMub3JnL2FwaS90bHMuaHRtbCN0bHNfdGxzX2NyZWF0ZXNlcnZlcl9vcHRpb25zX3NlY3VyZWNvbm5lY3Rpb25saXN0ZW5lcgp0bHNfb3B0aW9ucyA9IHsKCWNpcGhlcnM6ICdrRUVDREg6K2tFRUNESCtTSEE6a0VESDora0VESCtTSEE6K2tFREgrQ0FNRUxMSUE6a0VDREg6K2tFQ0RIK1NIQTprUlNBOitrUlNBK1NIQTora1JTQStDQU1FTExJQTohYU5VTEw6IWVOVUxMOiFTU0x2MjohUkM0OiFERVM6IUVYUDohU0VFRDohSURFQTorM0RFUycsCglob25vckNpcGhlck9yZGVyOiB0cnVlCn0K", "encoding": "base64", "item": { "file": "server-tls.json", "name": "server_tls" }, "source": "/etc/origin/logging/server-tls.json" } ok: [openshift] => (item={u'name': u'session_secret', u'file': u'session_secret'}) => { "changed": false, "content": "Q0pTNDV1eXJINWJqUGZMWGlPSktMVXJsVWs3dWVNODNIRGFqU0drb25aQkp5Qk5md2NqelJCNFlHUE5JRUk5SmZuRjVTVGExZjRlWk1nWUlJRE5nVEg4dmRuWVdJakF5b3dtdXdyZ0ZUUWZIVzFwTFRSMXBkQzJBdnRTdms1S0ZuMEh2YkVCYVk2VXZRdWZmc3RWSmw0ZTN1cngyVkpwMThHUHFkWlN2eUxTdXJEVnB2OFVqTkU3RFZvOW1QTGlpbm01cGFnNG4=", "encoding": "base64", "item": { "file": "session_secret", "name": "session_secret" }, "source": "/etc/origin/logging/session_secret" } ok: [openshift] => (item={u'name': u'oauth_secret', u'file': u'oauth_secret'}) => { "changed": false, "content": "MEtsREp2NlA2NlJxRjRYQ2RYVXdTaEJLT0p3eU16N3JDdDhzZ0k3WnlYbjIwcVBIVHpiQnVTSUxacVF3R3J0aA==", "encoding": "base64", "item": { "file": "oauth_secret", "name": "oauth_secret" }, "source": "/etc/origin/logging/oauth_secret" } TASK [openshift_logging_kibana : Set logging-kibana-ops service] *************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:84 changed: [openshift] => { "changed": true, "results": { "clusterip": "172.30.132.127", "cmd": "/bin/oc get service logging-kibana-ops -o json -n logging", "results": [ { "apiVersion": "v1", "kind": "Service", "metadata": { "creationTimestamp": "2017-06-09T14:35:15Z", "name": "logging-kibana-ops", "namespace": "logging", "resourceVersion": "1427", "selfLink": "/api/v1/namespaces/logging/services/logging-kibana-ops", "uid": "da41ac1f-4d20-11e7-94cc-0e3d36056ef8" }, "spec": { "clusterIP": "172.30.132.127", "ports": [ { "port": 443, "protocol": "TCP", "targetPort": "oaproxy" } ], "selector": { "component": "kibana-ops", "provider": "openshift" }, "sessionAffinity": "None", "type": "ClusterIP" }, "status": { "loadBalancer": {} } } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_kibana : set_fact] ************************************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:101 [WARNING]: when statements should not include jinja2 templating delimiters such as {{ }} or {% %}. Found: {{ openshift_logging_kibana_key | trim | length > 0 }} skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_kibana : set_fact] ************************************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:106 [WARNING]: when statements should not include jinja2 templating delimiters such as {{ }} or {% %}. 
Found: {{ openshift_logging_kibana_cert | trim | length > 0 }} skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_kibana : set_fact] ************************************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:111 [WARNING]: when statements should not include jinja2 templating delimiters such as {{ }} or {% %}. Found: {{ openshift_logging_kibana_ca | trim | length > 0 }} skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_kibana : set_fact] ************************************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:116 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_kibana : Generating Kibana route template] ************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:121 ok: [openshift] => { "changed": false, "checksum": "3429f4d39ec21ed5f2eeb16011af5491681d4d97", "dest": "/tmp/openshift-logging-ansible-whEBc5/templates/kibana-route.yaml", "gid": 0, "group": "root", "md5sum": "36b02953c378ee80a31f313390a027d9", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 2726, "src": "/root/.ansible/tmp/ansible-tmp-1497018916.39-225818212976994/source", "state": "file", "uid": 0 } TASK [openshift_logging_kibana : Setting Kibana route] ************************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:141 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc get route logging-kibana-ops -o json -n logging", "results": [ { "apiVersion": "v1", "kind": "Route", "metadata": { "creationTimestamp": "2017-06-09T14:35:17Z", "labels": { "component": "support", "logging-infra": "support", "provider": "openshift" }, "name": "logging-kibana-ops", "namespace": "logging", "resourceVersion": "1433", "selfLink": "/oapi/v1/namespaces/logging/routes/logging-kibana-ops", "uid": "db8b9f87-4d20-11e7-94cc-0e3d36056ef8" }, "spec": { "host": "kibana-ops.router.default.svc.cluster.local", "tls": { "caCertificate": "-----BEGIN CERTIFICATE-----\nMIIC2jCCAcKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAeMRwwGgYDVQQDExNsb2dn\naW5nLXNpZ25lci10ZXN0MB4XDTE3MDYwOTE0MzQxMloXDTIyMDYwODE0MzQxM1ow\nHjEcMBoGA1UEAxMTbG9nZ2luZy1zaWduZXItdGVzdDCCASIwDQYJKoZIhvcNAQEB\nBQADggEPADCCAQoCggEBAOMehtRvFzHM7TYGhwo3URPGegOgrUyhHwcMlDl0zCR+\nBooplbMEQXyo3PZwoxav+R36x8LmyIlgaZrmTsEqsvrmzi6AcNqtchYRpbTiJDnQ\nnGvkbKzHJYk2KNUskTxW+rZ+MNLVTwkSD/U8Uxq4DqbdklE6Ycc7l8dYZLE0x3Ka\nu8Q1krQYkj32Nbw05QAgbh72gJsBzjD0foWd70PyW53IU97DDlSJ42GUsxDwKsZe\nNKzxcIL++sG6dythojDY35c2Z3DEuz6JpLrYPXn+xllHkNIA2CmLVXjzwxACq1vE\nAFrpjlrtESDc0yMV8oHym7bkrr/QhaU/amrglIs0uckCAwEAAaMjMCEwDgYDVR0P\nAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAJNd\nWfzXFU4Kkb+C/6ggVqnkhB13CCGKZoDOUnJaaeZO8D13T0hoNzNEqy6TADEP1Jhn\nZ105gq83sDKvhBZzEVuj6WZSolVCdhx3mS9fc+A2f9CqKlr/Lz2cBNYqAkARm+01\nmZSmN73qAHNZCCKILdgkEwJA+aLSKlmqmVWeOYuPUsrCM50xbKZRfPbOfMZ/VING\nNT13+7VbpiaxDmGqyb2tVFcNPH978nIs99qRYHCYM4rq9K4YFAsixScUiSVYtyt4\nltErOwX1pwRYozk+i2+0UmDjo8eYuRjizypRe2bexGkeJH1myxka5uFZMn7EQ3y/\n/RM++9wQURR669zsY6w=\n-----END CERTIFICATE-----\n", "destinationCACertificate": "-----BEGIN 
CERTIFICATE-----\nMIIC2jCCAcKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAeMRwwGgYDVQQDExNsb2dn\naW5nLXNpZ25lci10ZXN0MB4XDTE3MDYwOTE0MzQxMloXDTIyMDYwODE0MzQxM1ow\nHjEcMBoGA1UEAxMTbG9nZ2luZy1zaWduZXItdGVzdDCCASIwDQYJKoZIhvcNAQEB\nBQADggEPADCCAQoCggEBAOMehtRvFzHM7TYGhwo3URPGegOgrUyhHwcMlDl0zCR+\nBooplbMEQXyo3PZwoxav+R36x8LmyIlgaZrmTsEqsvrmzi6AcNqtchYRpbTiJDnQ\nnGvkbKzHJYk2KNUskTxW+rZ+MNLVTwkSD/U8Uxq4DqbdklE6Ycc7l8dYZLE0x3Ka\nu8Q1krQYkj32Nbw05QAgbh72gJsBzjD0foWd70PyW53IU97DDlSJ42GUsxDwKsZe\nNKzxcIL++sG6dythojDY35c2Z3DEuz6JpLrYPXn+xllHkNIA2CmLVXjzwxACq1vE\nAFrpjlrtESDc0yMV8oHym7bkrr/QhaU/amrglIs0uckCAwEAAaMjMCEwDgYDVR0P\nAQH/BAQDAgKkMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAJNd\nWfzXFU4Kkb+C/6ggVqnkhB13CCGKZoDOUnJaaeZO8D13T0hoNzNEqy6TADEP1Jhn\nZ105gq83sDKvhBZzEVuj6WZSolVCdhx3mS9fc+A2f9CqKlr/Lz2cBNYqAkARm+01\nmZSmN73qAHNZCCKILdgkEwJA+aLSKlmqmVWeOYuPUsrCM50xbKZRfPbOfMZ/VING\nNT13+7VbpiaxDmGqyb2tVFcNPH978nIs99qRYHCYM4rq9K4YFAsixScUiSVYtyt4\nltErOwX1pwRYozk+i2+0UmDjo8eYuRjizypRe2bexGkeJH1myxka5uFZMn7EQ3y/\n/RM++9wQURR669zsY6w=\n-----END CERTIFICATE-----\n", "insecureEdgeTerminationPolicy": "Redirect", "termination": "reencrypt" }, "to": { "kind": "Service", "name": "logging-kibana-ops", "weight": 100 }, "wildcardPolicy": "None" }, "status": { "ingress": [ { "conditions": [ { "lastTransitionTime": "2017-06-09T14:35:17Z", "status": "True", "type": "Admitted" } ], "host": "kibana-ops.router.default.svc.cluster.local", "routerName": "router", "wildcardPolicy": "None" } ] } } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_kibana : Get current oauthclient hostnames] ************ task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:151 ok: [openshift] => { "changed": false, "results": { "cmd": "/bin/oc get oauthclient kibana-proxy -o json -n logging", "results": [ { "apiVersion": "v1", "kind": "OAuthClient", "metadata": { "creationTimestamp": "2017-06-09T14:35:06Z", "labels": { "logging-infra": "support" }, "name": "kibana-proxy", "resourceVersion": "1382", "selfLink": "/oapi/v1/oauthclients/kibana-proxy", "uid": "d4bd9f50-4d20-11e7-94cc-0e3d36056ef8" }, "redirectURIs": [ "https://kibana.router.default.svc.cluster.local" ], "scopeRestrictions": [ { "literals": [ "user:info", "user:check-access", "user:list-projects" ] } ], "secret": "0KlDJv6P66RqF4XCdXUwShBKOJwyMz7rCt8sgI7ZyXn20qPHTzbBuSILZqQwGrth" } ], "returncode": 0 }, "state": "list" } TASK [openshift_logging_kibana : set_fact] ************************************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:159 ok: [openshift] => { "ansible_facts": { "proxy_hostnames": [ "https://kibana.router.default.svc.cluster.local", "https://kibana-ops.router.default.svc.cluster.local" ] }, "changed": false } TASK [openshift_logging_kibana : Create oauth-client template] ***************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:162 changed: [openshift] => { "changed": true, "checksum": "bbeb71f4f3c4cc398462d913a61d2b4c8e3b23b9", "dest": "/tmp/openshift-logging-ansible-whEBc5/templates/oauth-client.yml", "gid": 0, "group": "root", "md5sum": "5be338e3286abd0965237461a9946f0c", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 382, "src": "/root/.ansible/tmp/ansible-tmp-1497018918.28-102839643180107/source", "state": "file", "uid": 0 } TASK [openshift_logging_kibana : Set kibana-proxy oauth-client] **************** task path: 
/tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:170 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc get oauthclient kibana-proxy -o json -n logging", "results": [ { "apiVersion": "v1", "kind": "OAuthClient", "metadata": { "creationTimestamp": "2017-06-09T14:35:06Z", "labels": { "logging-infra": "support" }, "name": "kibana-proxy", "resourceVersion": "1439", "selfLink": "/oapi/v1/oauthclients/kibana-proxy", "uid": "d4bd9f50-4d20-11e7-94cc-0e3d36056ef8" }, "redirectURIs": [ "https://kibana.router.default.svc.cluster.local", "https://kibana-ops.router.default.svc.cluster.local" ], "scopeRestrictions": [ { "literals": [ "user:info", "user:check-access", "user:list-projects" ] } ], "secret": "0KlDJv6P66RqF4XCdXUwShBKOJwyMz7rCt8sgI7ZyXn20qPHTzbBuSILZqQwGrth" } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_kibana : Set Kibana secret] **************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:181 ok: [openshift] => { "changed": false, "results": { "apiVersion": "v1", "data": { "ca": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMyakNDQWNLZ0F3SUJBZ0lCQVRBTkJna3Foa2lHOXcwQkFRc0ZBREFlTVJ3d0dnWURWUVFERXhOc2IyZG4KYVc1bkxYTnBaMjVsY2kxMFpYTjBNQjRYRFRFM01EWXdPVEUwTXpReE1sb1hEVEl5TURZd09ERTBNelF4TTFvdwpIakVjTUJvR0ExVUVBeE1UYkc5bloybHVaeTF6YVdkdVpYSXRkR1Z6ZERDQ0FTSXdEUVlKS29aSWh2Y05BUUVCCkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU9NZWh0UnZGekhNN1RZR2h3bzNVUlBHZWdPZ3JVeWhId2NNbERsMHpDUisKQm9vcGxiTUVRWHlvM1Bad294YXYrUjM2eDhMbXlJbGdhWnJtVHNFcXN2cm16aTZBY05xdGNoWVJwYlRpSkRuUQpuR3ZrYkt6SEpZazJLTlVza1R4VytyWitNTkxWVHdrU0QvVThVeHE0RHFiZGtsRTZZY2M3bDhkWVpMRTB4M0thCnU4UTFrclFZa2ozMk5idzA1UUFnYmg3MmdKc0J6akQwZm9XZDcwUHlXNTNJVTk3RERsU0o0MkdVc3hEd0tzWmUKTkt6eGNJTCsrc0c2ZHl0aG9qRFkzNWMyWjNERXV6NkpwTHJZUFhuK3hsbEhrTklBMkNtTFZYanp3eEFDcTF2RQpBRnJwamxydEVTRGMweU1WOG9IeW03Ymtyci9RaGFVL2FtcmdsSXMwdWNrQ0F3RUFBYU1qTUNFd0RnWURWUjBQCkFRSC9CQVFEQWdLa01BOEdBMVVkRXdFQi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFKTmQKV2Z6WEZVNEtrYitDLzZnZ1ZxbmtoQjEzQ0NHS1pvRE9VbkphYWVaTzhEMTNUMGhvTnpORXF5NlRBREVQMUpobgpaMTA1Z3E4M3NES3ZoQlp6RVZ1ajZXWlNvbFZDZGh4M21TOWZjK0EyZjlDcUtsci9MejJjQk5ZcUFrQVJtKzAxCm1aU21ONzNxQUhOWkNDS0lMZGdrRXdKQSthTFNLbG1xbVZXZU9ZdVBVc3JDTTUweGJLWlJmUGJPZk1aL1ZJTkcKTlQxMys3VmJwaWF4RG1HcXliMnRWRmNOUEg5NzhuSXM5OXFSWUhDWU00cnE5SzRZRkFzaXhTY1VpU1ZZdHl0NApsdEVyT3dYMXB3UllvemsraTIrMFVtRGpvOGVZdVJqaXp5cFJlMmJleEdrZUpIMW15eGthNXVGWk1uN0VRM3kvCi9STSsrOXdRVVJSNjY5enNZNnc9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K", "cert": 
"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURSVENDQWkyZ0F3SUJBZ0lCQXpBTkJna3Foa2lHOXcwQkFRVUZBREFlTVJ3d0dnWURWUVFERXhOc2IyZG4KYVc1bkxYTnBaMjVsY2kxMFpYTjBNQjRYRFRFM01EWXdPVEUwTXpReE9Wb1hEVEU1TURZd09URTBNelF4T1ZvdwpSakVRTUE0R0ExVUVDZ3dIVEc5bloybHVaekVTTUJBR0ExVUVDd3dKVDNCbGJsTm9hV1owTVI0d0hBWURWUVFECkRCVnplWE4wWlcwdWJHOW5aMmx1Wnk1cmFXSmhibUV3Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXcKZ2dFS0FvSUJBUUNpUi9WQStqWEtYdHZPRCt1dVZWMEgxR2E1ZTRUVks0dkd0NUQyWjliYkcxTkZ5Zys0OHZJbApDcHI1S3p2b283bXhQZ2ZDaDYramdwU2RaZ1Ryek5scndLTEtMdUFkY2JlcUw4VDFUMVNCQ2RMOGFsNXZYbmpECnd3a1NyZ3dLd1FaaFFpUkdkQ0I4bm4rWFVSaC8xYmxTa2Q1Rks4VDVKaWttdkVtcmovTUxBM2FTM0ljbndJZEgKL0V4NjdYQ2lvOVNNOFgydDFiOUo2Y3l2RTRjaURiNnVoZlZ3Wmc0NUVYUjNnV1VQR3RlbWhKRXFpY1ViZk50VAo0R2NTNmp0NkR0ZG5ScExod0Y2L1l1NDVUZDRoWlVaSXdKOXp1TmcrK0ZrV2VsQVkxdkRnWVNKMG53clUwMGN5ClNnU3Vsemc5dWRSYWpqd1dBb08rbUpuWkJlMnNnSHpaQWdNQkFBR2paakJrTUE0R0ExVWREd0VCL3dRRUF3SUYKb0RBSkJnTlZIUk1FQWpBQU1CMEdBMVVkSlFRV01CUUdDQ3NHQVFVRkJ3TUJCZ2dyQmdFRkJRY0RBakFkQmdOVgpIUTRFRmdRVXBDNGFkMUJ4RmJaV3hBaE55VGpTcElWNk5LWXdDUVlEVlIwakJBSXdBREFOQmdrcWhraUc5dzBCCkFRVUZBQU9DQVFFQTA1SmgxRnJQQ2NqMFJMUWV2ZDZZc2Yvek1OYUQvN2F2c3QyYkF0TjJua291RlY1TjE5TzEKQkxyZWlpcXV2NEdIeHl0dU95NVNTZ3V0Vk5pcnd6T2VoeFo0QWhKcUFLRnBaS3E0MkYxclY1ZEVSL29PSHN2ZQp6TkdtTGJWeGRFWW9id2M4VmJ0ZXJWbVBWYnJQYTRBOUZBSEJEcW5IV2VTdEVpM092Vm5DU0ZYUHpuY2sraFVOCktDK2gzNjVmZmVFRFdNR0tHMVhtNkdNNGxDQmdBVldkMExsS0F4M3F2WFYzMW5XWExBeE9scmZranM4NzVERWYKWlV3ZmptbytFQTY0YkUzWlplUGdEQzNPN3NCWlBTaUJRNkJwT0lxQUkvMjR3Y3BNaFczajRaeXhSMHhkTCszeQpOaEFrUXgvVkJKRzNzYWJQcUN0TFNXVUtZbHhYNm5lcVNRPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=", "key": "LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktjd2dnU2pBZ0VBQW9JQkFRQ2lSL1ZBK2pYS1h0dk8KRCt1dVZWMEgxR2E1ZTRUVks0dkd0NUQyWjliYkcxTkZ5Zys0OHZJbENwcjVLenZvbzdteFBnZkNoNitqZ3BTZApaZ1Ryek5scndLTEtMdUFkY2JlcUw4VDFUMVNCQ2RMOGFsNXZYbmpEd3drU3Jnd0t3UVpoUWlSR2RDQjhubitYClVSaC8xYmxTa2Q1Rks4VDVKaWttdkVtcmovTUxBM2FTM0ljbndJZEgvRXg2N1hDaW85U004WDJ0MWI5SjZjeXYKRTRjaURiNnVoZlZ3Wmc0NUVYUjNnV1VQR3RlbWhKRXFpY1ViZk50VDRHY1M2anQ2RHRkblJwTGh3RjYvWXU0NQpUZDRoWlVaSXdKOXp1TmcrK0ZrV2VsQVkxdkRnWVNKMG53clUwMGN5U2dTdWx6Zzl1ZFJhamp3V0FvTyttSm5aCkJlMnNnSHpaQWdNQkFBRUNnZ0VBSmJsNm52OUxiOTc3VS96SmVmNW9JUURVWEh3RlZoVElhb3FUeldRNFF6a24KODRwVWUxY215VXVjMlIyZTBLYjI3T2dSbjd3eExWNldzN3hhbW9KRmJOSWNSY245MlhwdENzZ09Ea1RCelRsKwpiYVZBakwwMmI3T3dNVUo2bENscEZVVHQ3OTdoQ25kQ0MrZ1RQT3h2SjIyM2NQY0sxQ3gxMW9aZ3pkd3AwTDQvCmg1S3oxZmdOMmp1SytPOGFRNzJrdVVKR1VMbmV4VU91TGgycm9QTUVkWTZtczNrSWxLQnFnTW14WGhPTDczcHUKK093WVVwcVpiZEo4cFhBNlVsdkY2ak9qQ1E4d2lQWlNJL1VsK3lsbDk2VnVaUDI3emJRRlhHWk9NTnYyRCtVNAp5MFN1aDA1QWNTaVJqYnRidDhlTjg5LzFQOXpBMXpLYTBLR0pvRDVYZ1FLQmdRRFRXVWQ0VFNVamVDZFVFMUNlCkUyTG93N1ZIbklYbERRaW5NdE9wdHR3UTdyTTFUQmN6cjY1aW1LTitaSVo4N2FsOTdOYW5kTE9YU1ZNWUlUV3UKMzBYQ1hMSU9UbnZaZThDVGVoZE9YV293SnptSURNOWM5WGs2UTRMbGZrQ3ZCc0ZpdWxvZDZMZlIrT0ZoZlFFNgpuWCtmSG9ZMzZoVysxRk40V1kxbFhZcWNDUUtCZ1FERWtOOUJTVmZrd2FTb05mbGJCZEh0TzVsbysyRmV2MHpNClphYkJxbmNBc3JiNFhTemNCV1Z2MlVCTW1ZYUdaTXlFRFp0dmlpTS9YTitSN0grU2RZd3FURTVLM0ZNNitVZ1IKYVlMSWErUVVsWlVKbmJqOWlqRlJ6RytBSUZLZjg1MDNnc3NOd2FIRmxvekZCOTFRUzk2aVE3N2VjM0N1VlFnTQozQUJWVkordVVRS0JnUUNRaXMyalMvZHJVNkJxRXErZS9KazNvYWxZS1ljMUNIM3pnNEpZM3BPUkRQOEpJMW4yCjRsNjhWYkh6SGlNUVM2WVFWaXJUNmE5dGR4dGFORlEzbmNGaTFPeDlkbFdqZnN4TTBFSWlPU2NIZWJ3Ui9OalEKdFoxTUtLSGIvRVdXcm1NUjkycnNhNTFVQUFkOEdmYitOSHIwd3ZaK1JSek1Id0JiSGJ2aktGOUVxUUtCZ0ZJSAorNE90YmdiRFlVbnBySFIyQzFPcnFhd01MR2gwQVVMVHNUSGxSK0I4dEtzVmgyRVN4M0JVQStkenNwWm5mb29sCmU1YWkyVzdaL1Z0U2pUSzc1NURIWTIwT1laV2M3cHlGb2RTdVlmTE5NZk5mWlJJNkY3Y0JVQTd5YmtqZVMrQWMKcjB
6QVlCaXJhWGZZZmwzQ0s5a2YySW5STjFjcG1VQjBsNWNFeDliaEFvR0FLaG5hVENEdzhFZk8zWURqaWZTZgpjbmhoUHV5WTRFaWRzZ2o2dHNPTjY3N2hJZU1BMm94S3pYdy9uVTlpcnNvQTlhbHI5ZE53akg1TjU0cXFKY1hmCkliR1QxbGJVTjUrSlpzRE03YUlXNVp6Rjc5V01kUmxZemhGQmJ6QVM3OWFzd1hwYTdsNW1HNHJ2WURiRXFPMWgKZ2ZXUHlKN28wRXh5RU5UUERrTi9uQXM9Ci0tLS0tRU5EIFBSSVZBVEUgS0VZLS0tLS0K" }, "kind": "Secret", "metadata": { "creationTimestamp": null, "name": "logging-kibana" }, "type": "Opaque" }, "state": "present" } TASK [openshift_logging_kibana : Set Kibana Proxy secret] ********************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:195 ok: [openshift] => { "changed": false, "results": { "apiVersion": "v1", "data": { "oauth-secret": "MEtsREp2NlA2NlJxRjRYQ2RYVXdTaEJLT0p3eU16N3JDdDhzZ0k3WnlYbjIwcVBIVHpiQnVTSUxacVF3R3J0aA==", "server-cert": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURUakNDQWphZ0F3SUJBZ0lCQWpBTkJna3Foa2lHOXcwQkFRc0ZBREFlTVJ3d0dnWURWUVFERXhOc2IyZG4KYVc1bkxYTnBaMjVsY2kxMFpYTjBNQjRYRFRFM01EWXdPVEUwTXpReE5Gb1hEVEU1TURZd09URTBNelF4TlZvdwpGakVVTUJJR0ExVUVBeE1MSUd0cFltRnVZUzF2Y0hNd2dnRWlNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0SUJEd0F3CmdnRUtBb0lCQVFER1Y0cElGZzF1eGs3MmdKUmcxMU83cVlEK2gvR2xOVmRKbzdyRXJvdHZUeUtRenJBNWZkWEoKL015ekFEcjNUWmVSR0pZMGwzN1lySkxidkR3VWtTZWNIOGdkODdVR0REdU84Rit6Wk1NUFJnazhtTmV2bGg4NgpvazFJWGhVOFdVaXE5U0xMUytwRXRDTFprRElES0JINzN3WTE4bUJYM3VPdXZKODJCd0JRRnpGbEsyeU5JdkdKCmgyeVg4ZkZ6UVNUZXNXV3U2cU9UYXM5WloxK3JSVFVValNWb2FncFRDZHQ1bEQxWDAxaXFrZm1kbDVJUEVNZnMKci8rNlovVnRFaU9hK3F2SkdvWGVZeVE2K2FmVzNFdmNvNThYNDVHaUJGbCt3UnlPZW11NGpVTmt2QktycHlTOQpaSWUwMUR4LzBPeVZKRnZIQXdTcVQxUERPaEJWcW85TkFnTUJBQUdqZ1o0d2dac3dEZ1lEVlIwUEFRSC9CQVFECkFnV2dNQk1HQTFVZEpRUU1NQW9HQ0NzR0FRVUZCd01CTUF3R0ExVWRFd0VCL3dRQ01BQXdaZ1lEVlIwUkJGOHcKWFlJTElHdHBZbUZ1WVMxdmNIT0NMQ0JyYVdKaGJtRXRiM0J6TG5KdmRYUmxjaTVrWldaaGRXeDBMbk4yWXk1agpiSFZ6ZEdWeUxteHZZMkZzZ2hnZ2EybGlZVzVoTGpFeU55NHdMakF1TVM1NGFYQXVhVytDQm10cFltRnVZVEFOCkJna3Foa2lHOXcwQkFRc0ZBQU9DQVFFQXY1YTNmbDBqRUpXaGFDVTZ6YUg4N2c4NTY2b0ZLZEhNUlhidjl6QzkKcmV0UFhCSnJtU1dHWVRTOWxDblhyZXZ6b01xeTUxWDBlSllFTTdUaGVrVUVYeERJVEZibUJFakJ5d3ZsQlJRMQpaUXFOcFVBWEFnYjdreWRBSmtHSDFGbkFuVlNhSGdtblhQOVJLOGdMY1BSeG1nMnpZaUMzeVFZcFNaWHJkMzYrCmNOenAzY2ZUY1BOdEQ1VGQxWjVhTVFsWDNOMjh4TUVsRDBXRHZXQ0R3ZmtuS2NoUWhMNUxiajdkZ1lzSjN4bVAKR2tLdXJzbkVXb2UxUTRDTGY1UDM4R1pjSkV6NEtWRnY2Yk90T2ZKVk54c3BHRlhkY1hCQy9qQ1V4TndPQ2piTQo3QUNIbWNFaTdiMjVobDNvQUoxdForb2d3eUZxNWpTVThYcW43MEdLKzlVSWRBPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQotLS0tLUJFR0lOIENFUlRJRklDQVRFLS0tLS0KTUlJQzJqQ0NBY0tnQXdJQkFnSUJBVEFOQmdrcWhraUc5dzBCQVFzRkFEQWVNUnd3R2dZRFZRUURFeE5zYjJkbgphVzVuTFhOcFoyNWxjaTEwWlhOME1CNFhEVEUzTURZd09URTBNelF4TWxvWERUSXlNRFl3T0RFME16UXhNMW93CkhqRWNNQm9HQTFVRUF4TVRiRzluWjJsdVp5MXphV2R1WlhJdGRHVnpkRENDQVNJd0RRWUpLb1pJaHZjTkFRRUIKQlFBRGdnRVBBRENDQVFvQ2dnRUJBT01laHRSdkZ6SE03VFlHaHdvM1VSUEdlZ09nclV5aEh3Y01sRGwwekNSKwpCb29wbGJNRVFYeW8zUFp3b3hhditSMzZ4OExteUlsZ2Facm1Uc0Vxc3ZybXppNkFjTnF0Y2hZUnBiVGlKRG5RCm5HdmtiS3pISllrMktOVXNrVHhXK3JaK01OTFZUd2tTRC9VOFV4cTREcWJka2xFNlljYzdsOGRZWkxFMHgzS2EKdThRMWtyUVlrajMyTmJ3MDVRQWdiaDcyZ0pzQnpqRDBmb1dkNzBQeVc1M0lVOTdERGxTSjQyR1VzeER3S3NaZQpOS3p4Y0lMKytzRzZkeXRob2pEWTM1YzJaM0RFdXo2SnBMcllQWG4reGxsSGtOSUEyQ21MVlhqend4QUNxMXZFCkFGcnBqbHJ0RVNEYzB5TVY4b0h5bTdia3JyL1FoYVUvYW1yZ2xJczB1Y2tDQXdFQUFhTWpNQ0V3RGdZRFZSMFAKQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCL3dRRk1BTUJBZjh3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUpOZApXZnpYRlU0S2tiK0MvNmdnVnFua2hCMTNDQ0dLWm9ET1VuSmFhZVpPOEQxM1QwaG9Oek5FcXk2VEFERVAxSmhuCloxMDVncTgzc0RLdmhCWnpFVnVqNldaU29sVkNkaHgzbVM5ZmMrQTJmOUNxS2xyL0x6MmNCTllxQWtBUm0rMDEKbVpTbU43M3FBSE5aQ0NLSUx
kZ2tFd0pBK2FMU0tsbXFtVldlT1l1UFVzckNNNTB4YktaUmZQYk9mTVovVklORwpOVDEzKzdWYnBpYXhEbUdxeWIydFZGY05QSDk3OG5Jczk5cVJZSENZTTRycTlLNFlGQXNpeFNjVWlTVll0eXQ0Cmx0RXJPd1gxcHdSWW96aytpMiswVW1Eam84ZVl1UmppenlwUmUyYmV4R2tlSkgxbXl4a2E1dUZaTW43RVEzeS8KL1JNKys5d1FVUlI2Njl6c1k2dz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=", "server-key": "LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcGdJQkFBS0NBUUVBeGxlS1NCWU5ic1pPOW9DVVlOZFR1Nm1BL29meHBUVlhTYU82eEs2TGIwOGlrTTZ3Ck9YM1Z5ZnpNc3dBNjkwMlhrUmlXTkpkKzJLeVMyN3c4RkpFbm5CL0lIZk8xQmd3N2p2QmZzMlRERDBZSlBKalgKcjVZZk9xSk5TRjRWUEZsSXF2VWl5MHZxUkxRaTJaQXlBeWdSKzk4R05mSmdWOTdqcnJ5Zk5nY0FVQmN4WlN0cwpqU0x4aVlkc2wvSHhjMEVrM3JGbHJ1cWprMnJQV1dkZnEwVTFGSTBsYUdvS1V3bmJlWlE5VjlOWXFwSDVuWmVTCkR4REg3Sy8vdW1mMWJSSWptdnFyeVJxRjNtTWtPdm1uMXR4TDNLT2ZGK09Sb2dSWmZzRWNqbnBydUkxRFpMd1MKcTZja3ZXU0h0TlE4ZjlEc2xTUmJ4d01FcWs5VHd6b1FWYXFQVFFJREFRQUJBb0lCQVFDOVh5V3pjQUxCU214bwpKUm9HWUhFZEUxa0xMT2NHY3loMU1mT0lDSk11NHFMQkdlYmQ3WXhxLzRpK083RVJJQzlmcE5iOVBjd3B1cE81Cll6OEY4QldlbGlXdW0xcXlmSWw5RDNxQVFPdVFzTER1LzR1bnBURUovWjdHUXJZSjJjRnRJUUpva29JSnVPZ3gKUytERWJNVEc5QWp0Qnc3L3R0c3lvZnR0VFQvNk5veHM0bDlMNXUxSlBTTldYSUVFVEsrZHp0M0RuNEZDZ0RFOQpyUHovUTJadXIzZGpROVYzNjU0bUtaQVNoOXVtNzhMU0ZGWHBXSTA1THpHVTY3bmVLL05mSVd2bDVjcWdvTUEyCkpHcFloQ1FFWEd3Z1BjRHREa0JqYVB2MVZTbXZ6Tko1WUUxWWplOUd5Q2JvSmRQdFZYaEVrK093L1p1Zkxzd2wKWWpnV2NuK2hBb0dCQU9PelBqa0RwRVBaY04yakc2RkE0RVhBRTJWamRkaVZaMERHSU1JeUtMcHJBQnFCVTVvaQp6Y2lDcElQY00wUmwxZVJmUWtTN1RNQkFrWXk0Y2F1Yjh1RE9YMytEeEVRZm9aNzUrZUw0dDdOREgrK0JTOC9uCnlpVFVOUGRDYUdKR0Jrb2kxMElwZDVPMTljVTdUZlFiazBoemEzSkFWWVdXQzRoUFlIbERhUzlsQW9HQkFONysKTkVIRThwTGwyV0E5cEI3Z3YxckZTWmpQTkdqczlzUEdKQnh4Y0ViYjRhNTFZem82OVJoZXVEalp0aDgybmtpVQpycEI2VEk5VWhCaTY5U2o0Q05LTVRpVzdWVWZzM2RZbWFuWkVlWVQ5QndoV0hxYmRnbU94TEhlRmJ0ZVRWaldTCnpvc0M5TExaSlp3aWQvRlhqRlpndkJMWkVNVi9oTUQ1WTlqUWcrWEpBb0dCQUpCQmpyb3dSSEYzNExtS0RJY3MKd3VsdHR0d1ZGeVFRQTBwV080ck1uR0QrU1NLQnJLV0tSelV4RDJrNnFJQTh4RFhhNC9FSGVLaVVQNklYZUd4dwpjSDljUDhSWmhvNWlPOUtzTEZSUG5wSkRoSWdJTWkrVmVjdTdaWk1BejREelBDamJ5ZVJ3d1FFajFvRU9BV1VWCjAwbWpWZjhjSXhKdTdQOSt5bkFJOVNyQkFvR0JBS2dsc3kzczNxVmFZSUdydVhmM0xSTzdOSFhmdUx0dUE5MDQKS2I2dzQyTHJKdEF3Z0RSR2hNNXRqaWlBTWs1ekZ3UFA2Wm5VUHFyTnBoWW4wL21pbnJSMVMvQXp4R2pKK2JVagpucCt6bnBaalhjd3hkRWVMUEdrRURtM0oxZjBFZ3JzL0NqUFVkTVB2N2VaQUw0Vnk2TVd4aCtBR2doa0t3UVhxCmlCblRrY0hSQW9HQkFMMU5LNTNVVFM4bGVmbkwwQS9mL055VEk1UWdteFN0Tm9yL3pKaElONFJGL2dqbHoyQ3UKOVl3V3VJcjREOUROSTgvZWF4RC9SSUxGc0xoSmVmMUJVcWJxQjJFV0ordTc1cEx0QlZ5M2N1NjdJSmZXRVl4dwoybmY0aG9mNG1mUVpTRk5EOVhVSno1N05PMFVzdzc3KzhCQ3lJdFhjMGZIMVBhb1lvenpKSDBEZAotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=", "server-tls.json": "Ly8gU2VlIGZvciBhdmFpbGFibGUgb3B0aW9uczogaHR0cHM6Ly9ub2RlanMub3JnL2FwaS90bHMuaHRtbCN0bHNfdGxzX2NyZWF0ZXNlcnZlcl9vcHRpb25zX3NlY3VyZWNvbm5lY3Rpb25saXN0ZW5lcgp0bHNfb3B0aW9ucyA9IHsKCWNpcGhlcnM6ICdrRUVDREg6K2tFRUNESCtTSEE6a0VESDora0VESCtTSEE6K2tFREgrQ0FNRUxMSUE6a0VDREg6K2tFQ0RIK1NIQTprUlNBOitrUlNBK1NIQTora1JTQStDQU1FTExJQTohYU5VTEw6IWVOVUxMOiFTU0x2MjohUkM0OiFERVM6IUVYUDohU0VFRDohSURFQTorM0RFUycsCglob25vckNpcGhlck9yZGVyOiB0cnVlCn0K", "session-secret": "Q0pTNDV1eXJINWJqUGZMWGlPSktMVXJsVWs3dWVNODNIRGFqU0drb25aQkp5Qk5md2NqelJCNFlHUE5JRUk5SmZuRjVTVGExZjRlWk1nWUlJRE5nVEg4dmRuWVdJakF5b3dtdXdyZ0ZUUWZIVzFwTFRSMXBkQzJBdnRTdms1S0ZuMEh2YkVCYVk2VXZRdWZmc3RWSmw0ZTN1cngyVkpwMThHUHFkWlN2eUxTdXJEVnB2OFVqTkU3RFZvOW1QTGlpbm01cGFnNG4=" }, "kind": "Secret", "metadata": { "creationTimestamp": null, "name": "logging-kibana-proxy" }, "type": "Opaque" }, "state": "present" } TASK [openshift_logging_kibana : Generate Kibana DC template] ****************** task path: 
/tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:221 changed: [openshift] => { "changed": true, "checksum": "3c6660156f465439d093b72a26b168a36a476199", "dest": "/tmp/openshift-logging-ansible-whEBc5/templates/kibana-dc.yaml", "gid": 0, "group": "root", "md5sum": "9f0c9357922c5e81626d964d1cfa4a23", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 3763, "src": "/root/.ansible/tmp/ansible-tmp-1497018921.38-280049945661711/source", "state": "file", "uid": 0 } TASK [openshift_logging_kibana : Set Kibana DC] ******************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:240 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc get dc logging-kibana-ops -o json -n logging", "results": [ { "apiVersion": "v1", "kind": "DeploymentConfig", "metadata": { "creationTimestamp": "2017-06-09T14:35:22Z", "generation": 2, "labels": { "component": "kibana-ops", "logging-infra": "kibana", "provider": "openshift" }, "name": "logging-kibana-ops", "namespace": "logging", "resourceVersion": "1461", "selfLink": "/oapi/v1/namespaces/logging/deploymentconfigs/logging-kibana-ops", "uid": "de772954-4d20-11e7-94cc-0e3d36056ef8" }, "spec": { "replicas": 1, "selector": { "component": "kibana-ops", "logging-infra": "kibana", "provider": "openshift" }, "strategy": { "activeDeadlineSeconds": 21600, "resources": {}, "rollingParams": { "intervalSeconds": 1, "maxSurge": "25%", "maxUnavailable": "25%", "timeoutSeconds": 600, "updatePeriodSeconds": 1 }, "type": "Rolling" }, "template": { "metadata": { "creationTimestamp": null, "labels": { "component": "kibana-ops", "logging-infra": "kibana", "provider": "openshift" }, "name": "logging-kibana-ops" }, "spec": { "containers": [ { "env": [ { "name": "ES_HOST", "value": "logging-es-ops" }, { "name": "ES_PORT", "value": "9200" }, { "name": "KIBANA_MEMORY_LIMIT", "valueFrom": { "resourceFieldRef": { "containerName": "kibana", "divisor": "0", "resource": "limits.memory" } } } ], "image": "172.30.106.159:5000/logging/logging-kibana:latest", "imagePullPolicy": "Always", "name": "kibana", "readinessProbe": { "exec": { "command": [ "/usr/share/kibana/probe/readiness.sh" ] }, "failureThreshold": 3, "initialDelaySeconds": 5, "periodSeconds": 5, "successThreshold": 1, "timeoutSeconds": 4 }, "resources": { "limits": { "memory": "736Mi" } }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/etc/kibana/keys", "name": "kibana", "readOnly": true } ] }, { "env": [ { "name": "OAP_BACKEND_URL", "value": "http://localhost:5601" }, { "name": "OAP_AUTH_MODE", "value": "oauth2" }, { "name": "OAP_TRANSFORM", "value": "user_header,token_header" }, { "name": "OAP_OAUTH_ID", "value": "kibana-proxy" }, { "name": "OAP_MASTER_URL", "value": "https://kubernetes.default.svc.cluster.local" }, { "name": "OAP_PUBLIC_MASTER_URL", "value": "https://172.18.1.226:8443" }, { "name": "OAP_LOGOUT_REDIRECT", "value": "https://172.18.1.226:8443/console/logout" }, { "name": "OAP_MASTER_CA_FILE", "value": "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt" }, { "name": "OAP_DEBUG", "value": "False" }, { "name": "OAP_OAUTH_SECRET_FILE", "value": "/secret/oauth-secret" }, { "name": "OAP_SERVER_CERT_FILE", "value": "/secret/server-cert" }, { "name": "OAP_SERVER_KEY_FILE", "value": "/secret/server-key" }, { "name": "OAP_SERVER_TLS_FILE", "value": "/secret/server-tls.json" }, { "name": 
"OAP_SESSION_SECRET_FILE", "value": "/secret/session-secret" }, { "name": "OCP_AUTH_PROXY_MEMORY_LIMIT", "valueFrom": { "resourceFieldRef": { "containerName": "kibana-proxy", "divisor": "0", "resource": "limits.memory" } } } ], "image": "172.30.106.159:5000/logging/logging-auth-proxy:latest", "imagePullPolicy": "Always", "name": "kibana-proxy", "ports": [ { "containerPort": 3000, "name": "oaproxy", "protocol": "TCP" } ], "resources": { "limits": { "memory": "96Mi" } }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/secret", "name": "kibana-proxy", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "aggregated-logging-kibana", "serviceAccountName": "aggregated-logging-kibana", "terminationGracePeriodSeconds": 30, "volumes": [ { "name": "kibana", "secret": { "defaultMode": 420, "secretName": "logging-kibana" } }, { "name": "kibana-proxy", "secret": { "defaultMode": 420, "secretName": "logging-kibana-proxy" } } ] } }, "test": false, "triggers": [ { "type": "ConfigChange" } ] }, "status": { "availableReplicas": 0, "conditions": [ { "lastTransitionTime": "2017-06-09T14:35:22Z", "lastUpdateTime": "2017-06-09T14:35:22Z", "message": "Deployment config does not have minimum availability.", "status": "False", "type": "Available" }, { "lastTransitionTime": "2017-06-09T14:35:22Z", "lastUpdateTime": "2017-06-09T14:35:22Z", "message": "replication controller \"logging-kibana-ops-1\" is waiting for pod \"logging-kibana-ops-1-deploy\" to run", "status": "Unknown", "type": "Progressing" } ], "details": { "causes": [ { "type": "ConfigChange" } ], "message": "config change" }, "latestVersion": 1, "observedGeneration": 2, "replicas": 0, "unavailableReplicas": 0, "updatedReplicas": 0 } } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_kibana : Delete temp directory] ************************ task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_kibana/tasks/main.yaml:252 ok: [openshift] => { "changed": false, "path": "/tmp/openshift-logging-ansible-whEBc5", "state": "absent" } TASK [openshift_logging : include_role] **************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:195 statically included: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_curator/tasks/determine_version.yaml TASK [openshift_logging_curator : fail] **************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_curator/tasks/determine_version.yaml:3 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_curator : set_fact] ************************************ task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_curator/tasks/determine_version.yaml:7 ok: [openshift] => { "ansible_facts": { "curator_version": "3_5" }, "changed": false } TASK [openshift_logging_curator : set_fact] ************************************ task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_curator/tasks/determine_version.yaml:12 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_curator : fail] **************************************** task path: 
/tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_curator/tasks/determine_version.yaml:15 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_curator : Create temp directory for doing work in] ***** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:5 ok: [openshift] => { "changed": false, "cmd": [ "mktemp", "-d", "/tmp/openshift-logging-ansible-XXXXXX" ], "delta": "0:00:00.004059", "end": "2017-06-09 10:35:24.538667", "rc": 0, "start": "2017-06-09 10:35:24.534608" } STDOUT: /tmp/openshift-logging-ansible-Y8p78b TASK [openshift_logging_curator : set_fact] ************************************ task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:10 ok: [openshift] => { "ansible_facts": { "tempdir": "/tmp/openshift-logging-ansible-Y8p78b" }, "changed": false } TASK [openshift_logging_curator : Create templates subdirectory] *************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:14 ok: [openshift] => { "changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/tmp/openshift-logging-ansible-Y8p78b/templates", "secontext": "unconfined_u:object_r:user_tmp_t:s0", "size": 6, "state": "directory", "uid": 0 } TASK [openshift_logging_curator : Create Curator service account] ************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:24 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_curator : Create Curator service account] ************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:32 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc get sa aggregated-logging-curator -o json -n logging", "results": [ { "apiVersion": "v1", "imagePullSecrets": [ { "name": "aggregated-logging-curator-dockercfg-gvn5s" } ], "kind": "ServiceAccount", "metadata": { "creationTimestamp": "2017-06-09T14:35:25Z", "name": "aggregated-logging-curator", "namespace": "logging", "resourceVersion": "1477", "selfLink": "/api/v1/namespaces/logging/serviceaccounts/aggregated-logging-curator", "uid": "e055b88e-4d20-11e7-94cc-0e3d36056ef8" }, "secrets": [ { "name": "aggregated-logging-curator-token-09kjk" }, { "name": "aggregated-logging-curator-dockercfg-gvn5s" } ] } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_curator : copy] **************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:41 ok: [openshift] => { "changed": false, "checksum": "9008efd9a8892dcc42c28c6dfb6708527880a6d8", "dest": "/tmp/openshift-logging-ansible-Y8p78b/curator.yml", "gid": 0, "group": "root", "md5sum": "5498c5fd98f3dd06e34b20eb1f55dc12", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 320, "src": "/root/.ansible/tmp/ansible-tmp-1497018926.3-220376106452575/source", "state": "file", "uid": 0 } TASK [openshift_logging_curator : copy] **************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:47 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_curator : Set Curator configmap] 
*********************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:53 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc get configmap logging-curator -o json -n logging", "results": [ { "apiVersion": "v1", "data": { "config.yaml": "# Logging example curator config file\n\n# uncomment and use this to override the defaults from env vars\n#.defaults:\n# delete:\n# days: 30\n# runhour: 0\n# runminute: 0\n\n# to keep ops logs for a different duration:\n#.operations:\n# delete:\n# weeks: 8\n\n# example for a normal project\n#myapp:\n# delete:\n# weeks: 1\n" }, "kind": "ConfigMap", "metadata": { "creationTimestamp": "2017-06-09T14:35:27Z", "name": "logging-curator", "namespace": "logging", "resourceVersion": "1493", "selfLink": "/api/v1/namespaces/logging/configmaps/logging-curator", "uid": "e15a48ec-4d20-11e7-94cc-0e3d36056ef8" } } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_curator : Set Curator secret] ************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:62 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc secrets new logging-curator ca=/etc/origin/logging/ca.crt key=/etc/origin/logging/system.logging.curator.key cert=/etc/origin/logging/system.logging.curator.crt -n logging", "results": "", "returncode": 0 }, "state": "present" } TASK [openshift_logging_curator : set_fact] ************************************ task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:75 ok: [openshift] => { "ansible_facts": { "curator_component": "curator", "curator_name": "logging-curator" }, "changed": false } TASK [openshift_logging_curator : Generate Curator deploymentconfig] *********** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:81 ok: [openshift] => { "changed": false, "checksum": "d0024ab4cc9a66819442d817ada6f899b4605e4d", "dest": "/tmp/openshift-logging-ansible-Y8p78b/templates/curator-dc.yaml", "gid": 0, "group": "root", "md5sum": "b074a84a69dc53b8b652b5018b6e4ad7", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 2341, "src": "/root/.ansible/tmp/ansible-tmp-1497018928.65-10808804443098/source", "state": "file", "uid": 0 } TASK [openshift_logging_curator : Set Curator DC] ****************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:99 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc get dc logging-curator -o json -n logging", "results": [ { "apiVersion": "v1", "kind": "DeploymentConfig", "metadata": { "creationTimestamp": "2017-06-09T14:35:29Z", "generation": 2, "labels": { "component": "curator", "logging-infra": "curator", "provider": "openshift" }, "name": "logging-curator", "namespace": "logging", "resourceVersion": "1510", "selfLink": "/oapi/v1/namespaces/logging/deploymentconfigs/logging-curator", "uid": "e2a13767-4d20-11e7-94cc-0e3d36056ef8" }, "spec": { "replicas": 1, "selector": { "component": "curator", "logging-infra": "curator", "provider": "openshift" }, "strategy": { "activeDeadlineSeconds": 21600, "recreateParams": { "timeoutSeconds": 600 }, "resources": {}, "rollingParams": { "intervalSeconds": 1, "maxSurge": "25%", "maxUnavailable": "25%", "timeoutSeconds": 600, "updatePeriodSeconds": 1 }, "type": "Recreate" }, "template": { "metadata": { "creationTimestamp": null, "labels": { 
"component": "curator", "logging-infra": "curator", "provider": "openshift" }, "name": "logging-curator" }, "spec": { "containers": [ { "env": [ { "name": "K8S_HOST_URL", "value": "https://kubernetes.default.svc.cluster.local" }, { "name": "ES_HOST", "value": "logging-es" }, { "name": "ES_PORT", "value": "9200" }, { "name": "ES_CLIENT_CERT", "value": "/etc/curator/keys/cert" }, { "name": "ES_CLIENT_KEY", "value": "/etc/curator/keys/key" }, { "name": "ES_CA", "value": "/etc/curator/keys/ca" }, { "name": "CURATOR_DEFAULT_DAYS", "value": "30" }, { "name": "CURATOR_RUN_HOUR", "value": "0" }, { "name": "CURATOR_RUN_MINUTE", "value": "0" }, { "name": "CURATOR_RUN_TIMEZONE", "value": "UTC" }, { "name": "CURATOR_SCRIPT_LOG_LEVEL", "value": "INFO" }, { "name": "CURATOR_LOG_LEVEL", "value": "ERROR" } ], "image": "172.30.106.159:5000/logging/logging-curator:latest", "imagePullPolicy": "Always", "name": "curator", "resources": { "limits": { "cpu": "100m" } }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/etc/curator/keys", "name": "certs", "readOnly": true }, { "mountPath": "/etc/curator/settings", "name": "config", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "aggregated-logging-curator", "serviceAccountName": "aggregated-logging-curator", "terminationGracePeriodSeconds": 30, "volumes": [ { "name": "certs", "secret": { "defaultMode": 420, "secretName": "logging-curator" } }, { "configMap": { "defaultMode": 420, "name": "logging-curator" }, "name": "config" } ] } }, "test": false, "triggers": [ { "type": "ConfigChange" } ] }, "status": { "availableReplicas": 0, "conditions": [ { "lastTransitionTime": "2017-06-09T14:35:29Z", "lastUpdateTime": "2017-06-09T14:35:29Z", "message": "Deployment config does not have minimum availability.", "status": "False", "type": "Available" }, { "lastTransitionTime": "2017-06-09T14:35:29Z", "lastUpdateTime": "2017-06-09T14:35:29Z", "message": "replication controller \"logging-curator-1\" is waiting for pod \"logging-curator-1-deploy\" to run", "status": "Unknown", "type": "Progressing" } ], "details": { "causes": [ { "type": "ConfigChange" } ], "message": "config change" }, "latestVersion": 1, "observedGeneration": 2, "replicas": 0, "unavailableReplicas": 0, "updatedReplicas": 0 } } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_curator : Delete temp directory] *********************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:109 ok: [openshift] => { "changed": false, "path": "/tmp/openshift-logging-ansible-Y8p78b", "state": "absent" } TASK [openshift_logging : include_role] **************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:207 statically included: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_curator/tasks/determine_version.yaml TASK [openshift_logging_curator : fail] **************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_curator/tasks/determine_version.yaml:3 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_curator : set_fact] ************************************ task path: 
/tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_curator/tasks/determine_version.yaml:7 ok: [openshift] => { "ansible_facts": { "curator_version": "3_5" }, "changed": false } TASK [openshift_logging_curator : set_fact] ************************************ task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_curator/tasks/determine_version.yaml:12 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_curator : fail] **************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_curator/tasks/determine_version.yaml:15 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_curator : Create temp directory for doing work in] ***** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:5 ok: [openshift] => { "changed": false, "cmd": [ "mktemp", "-d", "/tmp/openshift-logging-ansible-XXXXXX" ], "delta": "0:00:00.002714", "end": "2017-06-09 10:35:32.481933", "rc": 0, "start": "2017-06-09 10:35:32.479219" } STDOUT: /tmp/openshift-logging-ansible-KQ0OYH TASK [openshift_logging_curator : set_fact] ************************************ task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:10 ok: [openshift] => { "ansible_facts": { "tempdir": "/tmp/openshift-logging-ansible-KQ0OYH" }, "changed": false } TASK [openshift_logging_curator : Create templates subdirectory] *************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:14 ok: [openshift] => { "changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/tmp/openshift-logging-ansible-KQ0OYH/templates", "secontext": "unconfined_u:object_r:user_tmp_t:s0", "size": 6, "state": "directory", "uid": 0 } TASK [openshift_logging_curator : Create Curator service account] ************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:24 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_curator : Create Curator service account] ************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:32 ok: [openshift] => { "changed": false, "results": { "cmd": "/bin/oc get sa aggregated-logging-curator -o json -n logging", "results": [ { "apiVersion": "v1", "imagePullSecrets": [ { "name": "aggregated-logging-curator-dockercfg-gvn5s" } ], "kind": "ServiceAccount", "metadata": { "creationTimestamp": "2017-06-09T14:35:25Z", "name": "aggregated-logging-curator", "namespace": "logging", "resourceVersion": "1477", "selfLink": "/api/v1/namespaces/logging/serviceaccounts/aggregated-logging-curator", "uid": "e055b88e-4d20-11e7-94cc-0e3d36056ef8" }, "secrets": [ { "name": "aggregated-logging-curator-token-09kjk" }, { "name": "aggregated-logging-curator-dockercfg-gvn5s" } ] } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_curator : copy] **************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:41 ok: [openshift] => { "changed": false, "checksum": "9008efd9a8892dcc42c28c6dfb6708527880a6d8", "dest": "/tmp/openshift-logging-ansible-KQ0OYH/curator.yml", "gid": 0, "group": "root", "md5sum": 
"5498c5fd98f3dd06e34b20eb1f55dc12", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 320, "src": "/root/.ansible/tmp/ansible-tmp-1497018933.42-109470053745947/source", "state": "file", "uid": 0 } TASK [openshift_logging_curator : copy] **************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:47 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_curator : Set Curator configmap] *********************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:53 ok: [openshift] => { "changed": false, "results": { "cmd": "/bin/oc get configmap logging-curator -o json -n logging", "results": [ { "apiVersion": "v1", "data": { "config.yaml": "# Logging example curator config file\n\n# uncomment and use this to override the defaults from env vars\n#.defaults:\n# delete:\n# days: 30\n# runhour: 0\n# runminute: 0\n\n# to keep ops logs for a different duration:\n#.operations:\n# delete:\n# weeks: 8\n\n# example for a normal project\n#myapp:\n# delete:\n# weeks: 1\n" }, "kind": "ConfigMap", "metadata": { "creationTimestamp": "2017-06-09T14:35:27Z", "name": "logging-curator", "namespace": "logging", "resourceVersion": "1493", "selfLink": "/api/v1/namespaces/logging/configmaps/logging-curator", "uid": "e15a48ec-4d20-11e7-94cc-0e3d36056ef8" } } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_curator : Set Curator secret] ************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:62 ok: [openshift] => { "changed": false, "results": { "apiVersion": "v1", "data": { "ca": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMyakNDQWNLZ0F3SUJBZ0lCQVRBTkJna3Foa2lHOXcwQkFRc0ZBREFlTVJ3d0dnWURWUVFERXhOc2IyZG4KYVc1bkxYTnBaMjVsY2kxMFpYTjBNQjRYRFRFM01EWXdPVEUwTXpReE1sb1hEVEl5TURZd09ERTBNelF4TTFvdwpIakVjTUJvR0ExVUVBeE1UYkc5bloybHVaeTF6YVdkdVpYSXRkR1Z6ZERDQ0FTSXdEUVlKS29aSWh2Y05BUUVCCkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU9NZWh0UnZGekhNN1RZR2h3bzNVUlBHZWdPZ3JVeWhId2NNbERsMHpDUisKQm9vcGxiTUVRWHlvM1Bad294YXYrUjM2eDhMbXlJbGdhWnJtVHNFcXN2cm16aTZBY05xdGNoWVJwYlRpSkRuUQpuR3ZrYkt6SEpZazJLTlVza1R4VytyWitNTkxWVHdrU0QvVThVeHE0RHFiZGtsRTZZY2M3bDhkWVpMRTB4M0thCnU4UTFrclFZa2ozMk5idzA1UUFnYmg3MmdKc0J6akQwZm9XZDcwUHlXNTNJVTk3RERsU0o0MkdVc3hEd0tzWmUKTkt6eGNJTCsrc0c2ZHl0aG9qRFkzNWMyWjNERXV6NkpwTHJZUFhuK3hsbEhrTklBMkNtTFZYanp3eEFDcTF2RQpBRnJwamxydEVTRGMweU1WOG9IeW03Ymtyci9RaGFVL2FtcmdsSXMwdWNrQ0F3RUFBYU1qTUNFd0RnWURWUjBQCkFRSC9CQVFEQWdLa01BOEdBMVVkRXdFQi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFKTmQKV2Z6WEZVNEtrYitDLzZnZ1ZxbmtoQjEzQ0NHS1pvRE9VbkphYWVaTzhEMTNUMGhvTnpORXF5NlRBREVQMUpobgpaMTA1Z3E4M3NES3ZoQlp6RVZ1ajZXWlNvbFZDZGh4M21TOWZjK0EyZjlDcUtsci9MejJjQk5ZcUFrQVJtKzAxCm1aU21ONzNxQUhOWkNDS0lMZGdrRXdKQSthTFNLbG1xbVZXZU9ZdVBVc3JDTTUweGJLWlJmUGJPZk1aL1ZJTkcKTlQxMys3VmJwaWF4RG1HcXliMnRWRmNOUEg5NzhuSXM5OXFSWUhDWU00cnE5SzRZRkFzaXhTY1VpU1ZZdHl0NApsdEVyT3dYMXB3UllvemsraTIrMFVtRGpvOGVZdVJqaXp5cFJlMmJleEdrZUpIMW15eGthNXVGWk1uN0VRM3kvCi9STSsrOXdRVVJSNjY5enNZNnc9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K", "cert": 
"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURSakNDQWk2Z0F3SUJBZ0lCQkRBTkJna3Foa2lHOXcwQkFRVUZBREFlTVJ3d0dnWURWUVFERXhOc2IyZG4KYVc1bkxYTnBaMjVsY2kxMFpYTjBNQjRYRFRFM01EWXdPVEUwTXpReU1Gb1hEVEU1TURZd09URTBNelF5TUZvdwpSekVRTUE0R0ExVUVDZ3dIVEc5bloybHVaekVTTUJBR0ExVUVDd3dKVDNCbGJsTm9hV1owTVI4d0hRWURWUVFECkRCWnplWE4wWlcwdWJHOW5aMmx1Wnk1amRYSmhkRzl5TUlJQklqQU5CZ2txaGtpRzl3MEJBUUVGQUFPQ0FROEEKTUlJQkNnS0NBUUVBMjRZaDdLYWQ0T0J2R2pIdEZZNEdlaTRsREY3NDRhdkZ6L1FSRDBhaitXMDlidkR6ODBpYQpUUUJ6aVRFZTBZYklWVE5CeDdpN3o1eXo5VFd6TVRJM0NLbzdGamlmNEtzUWZQQytvdFBQa0dLYnNXUk9CVmZRClZJRzZsOThyWHYwZkFIN2prMnRIbi9aSXQ3cTAxWml5cjA2aHQ4UkkwRU9JaE1nYmNOVkFEcWlpd1lJeGxxekMKRjIyc2wwdUd6eCtvY092MmNiWkRzV2JEL3IvbmRFYnZhZW9nNFVpY2p3aUJHditHVHpNRjRpWlV2TkRlK25JeApDdEJpQy9QbXlnNWJxekxyTjRiMC9QTEJzZWJYakQ0MDErSVhUWFM0a29qOUVraWI4RlVneWxMNkFiV0F3S1ZwCm5CLy9WTVYzV3c4ai9Ob3hYN3plYmFxZjI3K0ozY3FjU3dJREFRQUJvMll3WkRBT0JnTlZIUThCQWY4RUJBTUMKQmFBd0NRWURWUjBUQkFJd0FEQWRCZ05WSFNVRUZqQVVCZ2dyQmdFRkJRY0RBUVlJS3dZQkJRVUhBd0l3SFFZRApWUjBPQkJZRUZPSVdZN29QUUlnQy9weGxPMko0dmp0RHY1ZEJNQWtHQTFVZEl3UUNNQUF3RFFZSktvWklodmNOCkFRRUZCUUFEZ2dFQkFKbUN2SEZSRWJmaEVJdDYvRTR1OWlxRTB2dkM2anVxRnRNM3VVTWRaRWh5Q0EyYXlpRXUKQlpUU3NZVXpFbnJHcVpTQXZGQVV5RGEzOTM4WTZjYStDamlKMElOSExqc0VJamFsSjgwMlJURUl2aTdJbzVwWQowVnpubUVYM2hUNWY4Q3VyWVlsdXVOV1NVelRDUEpUd0kvc2lkbUlWMGVhcngzaHg5NDF2aU9PZDZlU3BsQXVCClM1RFdMS0pvekcvV0RhRi9UL2ZoR05WNTBBYTc4NENmSDdJUmQ4ZWZIclc0a2x4S090TnZXOSt0SjhTTlcwS2oKcWl0Qm9wSzl1S1Vmd3FtOHZjSThqUFRER2F5ajgwODJxUW5BV0FFdXRuSHIveUV4UjZvdW1BYnhFUFZRZEorNwp3a0xFT29kOEZodE5NaXpscTNPbXpiVnorWXFEMlJLdnR2Zz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=", "key": "LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktjd2dnU2pBZ0VBQW9JQkFRRGJoaUhzcHAzZzRHOGEKTWUwVmpnWjZMaVVNWHZqaHE4WFA5QkVQUnFQNWJUMXU4UFB6U0pwTkFIT0pNUjdSaHNoVk0wSEh1THZQbkxQMQpOYk14TWpjSXFqc1dPSi9ncXhCODhMNmkwOCtRWXB1eFpFNEZWOUJVZ2JxWDN5dGUvUjhBZnVPVGEwZWY5a2kzCnVyVFZtTEt2VHFHM3hFalFRNGlFeUJ0dzFVQU9xS0xCZ2pHV3JNSVhiYXlYUzRiUEg2aHc2L1p4dGtPeFpzUCsKditkMFJ1OXA2aURoU0p5UENJRWEvNFpQTXdYaUpsUzgwTjc2Y2pFSzBHSUw4K2JLRGx1ck11czNodlQ4OHNHeAo1dGVNUGpUWDRoZE5kTGlTaVAwU1NKdndWU0RLVXZvQnRZREFwV21jSC85VXhYZGJEeVA4MmpGZnZONXRxcC9iCnY0bmR5cHhMQWdNQkFBRUNnZ0VBVi96ZlJCZFVXSG9jamdkTTI4TGRYY041SGdoREFWRDBMSEhMRkxCZnNPM1UKSGM5K09CajFuNzk2ajVhY242YkNUVVFLTFo4aHlBa3JLREdwN1NJUFpPMjJXU1hCRHpBQm45SnUxcHpIS1R3Ywo0M0VzeEg5NkJTVXFRUTAyT1JDRGlKTlRiQmNuMGpuSTA3dUdGOGJvZDlPd2hoT3FpNjlGM05MSURPV3NrekxPClZGNXo5R1YwNHdJK3BSTGFwL0tlaitaSWhpL0xPQUx4djhPWCtTemxDaU5JNkVPMWxWdkxLVXl6Y0JVaWNKL2MKQkJFMlYyZ0wrbHFpT0kvQ0E5akN2YXE5NzJvSXVqQ3ZhNUtzaGp5R3ZqbFVPamNtN091RGNYU29jL1hhTStQZwptK1E2WjA4Q3piQ2dOS1E0ZjdocTZLT0g4YVgxVGlRclc1S2pKejlsUVFLQmdRRHlUbjkvRVR2SC9vWDc2VzV6Cno0SzFsbjBHdkJaYS9NVXJpUTg0aW5zemhkSXdxUWFVTTExRXloL1FhZmJFMFBZWXJwQXJGcndRdjZXVG04QWoKcmlWL2VCTFJHNHpZZEcyYy9yK2lDZEtUYkpIZ3p2ZXVhdldNZmhBZzJUMEgxTCswS1J3ZkpKV0NBbXQwZ0F0eQpMT25CbVFyWkpkck1WaFU5QzEyL256Qy9HUUtCZ1FEbjdnaFl3VTRkSTJaZVBMaU15dHRmNXc0M1J2MEdjNHhCCk9wUkVjWnA2SXU0OGIzMlVrWnVQT0w2dlZyVnRWWjZxcG92cWY1U1JaZlpBZ29FOEhZOE92SnB2RWgxWnUreTAKZDBGVG1HV0FMZFkyT2NTUDNXZ3FVcEtYSnR3VTkwOHFQNTljdmNTNXEvNWxieUloMVV1WjBDaUlGZUlvcVlWUApleVZyQjdvM0F3S0JnUUN5ZDFxMHJtN0htUkg2UHk1WklrZjFrMGUzQkNXN0VsM3UrTjQ3R05ReFdLazlxZURzCm13QmhRRFk2ZlRHQ09SNXBnM2t3STJpVk5YS3d5NUN6TnZycmJmYitDVHF0MnVNNU5QRFVXa084emNTTVBpUnoKVk5oU2lDODg0b1J2RmlXMGZtcjJEUzRKT2RzSFRhQWdrakFCcGNVMXR3bjJZcGoyQXo4amVnNmZVUUtCZ0hkUwpVajcvYktXM0ViS0lBTmFHZ3lpcTRmaDBjRGJDZWJVUll6aDNUZWRxVXpFS2x3dzVnVlBFK08yU2FaTFBpdXIyCjltTDFza1MvdFZwcENmNFlvd0lNN0ZNYWVia3g0c3pSMGUwbEtZc3hpZFNxRWNPR1FGSU4yMWNpYWZYcmFuSXMKKzF
zbVVyREhtUUVzbE4zZE02RDFvL1NuZFl3Lzh0TDZZenpXWStwckFvR0FHS1JtT09Qd2NYRWFUOGRBZGZKWgpNZ1pQSVdGUWFjdEJXcXBLTUdQTi9STkdMcE9VRThOTWRScDZRMm5teW1pS2JLZ0twRlJVWWpUMnArc2pyenNMCkgvNityaVlWbmg4Tm1ZMEVUWFl0S0JtakU2VTkySjlvdkhkM3lMSkYxZUdtSVlPSmtqbnR2UjJWVWgySGozK3MKWjRwdURpQ2tOWGQ3cVZSaTJydDVJdUE9Ci0tLS0tRU5EIFBSSVZBVEUgS0VZLS0tLS0K" }, "kind": "Secret", "metadata": { "creationTimestamp": null, "name": "logging-curator" }, "type": "Opaque" }, "state": "present" } TASK [openshift_logging_curator : set_fact] ************************************ task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:75 ok: [openshift] => { "ansible_facts": { "curator_component": "curator-ops", "curator_name": "logging-curator-ops" }, "changed": false } TASK [openshift_logging_curator : Generate Curator deploymentconfig] *********** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:81 ok: [openshift] => { "changed": false, "checksum": "76bbcfa249c9755fe7eb7123c0b263659421455d", "dest": "/tmp/openshift-logging-ansible-KQ0OYH/templates/curator-dc.yaml", "gid": 0, "group": "root", "md5sum": "090c5fe597db756c8e2d6931974d4a34", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 2365, "src": "/root/.ansible/tmp/ansible-tmp-1497018935.54-263486882339047/source", "state": "file", "uid": 0 } TASK [openshift_logging_curator : Set Curator DC] ****************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:99 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc get dc logging-curator-ops -o json -n logging", "results": [ { "apiVersion": "v1", "kind": "DeploymentConfig", "metadata": { "creationTimestamp": "2017-06-09T14:35:36Z", "generation": 2, "labels": { "component": "curator-ops", "logging-infra": "curator", "provider": "openshift" }, "name": "logging-curator-ops", "namespace": "logging", "resourceVersion": "1556", "selfLink": "/oapi/v1/namespaces/logging/deploymentconfigs/logging-curator-ops", "uid": "e6c95f41-4d20-11e7-94cc-0e3d36056ef8" }, "spec": { "replicas": 1, "selector": { "component": "curator-ops", "logging-infra": "curator", "provider": "openshift" }, "strategy": { "activeDeadlineSeconds": 21600, "recreateParams": { "timeoutSeconds": 600 }, "resources": {}, "rollingParams": { "intervalSeconds": 1, "maxSurge": "25%", "maxUnavailable": "25%", "timeoutSeconds": 600, "updatePeriodSeconds": 1 }, "type": "Recreate" }, "template": { "metadata": { "creationTimestamp": null, "labels": { "component": "curator-ops", "logging-infra": "curator", "provider": "openshift" }, "name": "logging-curator-ops" }, "spec": { "containers": [ { "env": [ { "name": "K8S_HOST_URL", "value": "https://kubernetes.default.svc.cluster.local" }, { "name": "ES_HOST", "value": "logging-es-ops" }, { "name": "ES_PORT", "value": "9200" }, { "name": "ES_CLIENT_CERT", "value": "/etc/curator/keys/cert" }, { "name": "ES_CLIENT_KEY", "value": "/etc/curator/keys/key" }, { "name": "ES_CA", "value": "/etc/curator/keys/ca" }, { "name": "CURATOR_DEFAULT_DAYS", "value": "30" }, { "name": "CURATOR_RUN_HOUR", "value": "0" }, { "name": "CURATOR_RUN_MINUTE", "value": "0" }, { "name": "CURATOR_RUN_TIMEZONE", "value": "UTC" }, { "name": "CURATOR_SCRIPT_LOG_LEVEL", "value": "INFO" }, { "name": "CURATOR_LOG_LEVEL", "value": "ERROR" } ], "image": "172.30.106.159:5000/logging/logging-curator:latest", "imagePullPolicy": "Always", "name": "curator", "resources": { 
"limits": { "cpu": "100m" } }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/etc/curator/keys", "name": "certs", "readOnly": true }, { "mountPath": "/etc/curator/settings", "name": "config", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "aggregated-logging-curator", "serviceAccountName": "aggregated-logging-curator", "terminationGracePeriodSeconds": 30, "volumes": [ { "name": "certs", "secret": { "defaultMode": 420, "secretName": "logging-curator" } }, { "configMap": { "defaultMode": 420, "name": "logging-curator" }, "name": "config" } ] } }, "test": false, "triggers": [ { "type": "ConfigChange" } ] }, "status": { "availableReplicas": 0, "conditions": [ { "lastTransitionTime": "2017-06-09T14:35:36Z", "lastUpdateTime": "2017-06-09T14:35:36Z", "message": "Deployment config does not have minimum availability.", "status": "False", "type": "Available" }, { "lastTransitionTime": "2017-06-09T14:35:36Z", "lastUpdateTime": "2017-06-09T14:35:36Z", "message": "replication controller \"logging-curator-ops-1\" is waiting for pod \"logging-curator-ops-1-deploy\" to run", "status": "Unknown", "type": "Progressing" } ], "details": { "causes": [ { "type": "ConfigChange" } ], "message": "config change" }, "latestVersion": 1, "observedGeneration": 2, "replicas": 0, "unavailableReplicas": 0, "updatedReplicas": 0 } } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_curator : Delete temp directory] *********************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_curator/tasks/main.yaml:109 ok: [openshift] => { "changed": false, "path": "/tmp/openshift-logging-ansible-KQ0OYH", "state": "absent" } TASK [openshift_logging : include_role] **************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:226 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : include_role] **************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:241 statically included: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_fluentd/tasks/determine_version.yaml TASK [openshift_logging_fluentd : fail] **************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:2 [WARNING]: when statements should not include jinja2 templating delimiters such as {{ }} or {% %}. 
Found: {{ openshift_logging_fluentd_nodeselector.keys() | count }} > 1 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_fluentd : fail] **************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:6 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_fluentd : fail] **************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:10 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_fluentd : fail] **************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:14 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_fluentd : fail] **************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_fluentd/tasks/determine_version.yaml:3 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_fluentd : set_fact] ************************************ task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_fluentd/tasks/determine_version.yaml:7 ok: [openshift] => { "ansible_facts": { "fluentd_version": "3_5" }, "changed": false } TASK [openshift_logging_fluentd : set_fact] ************************************ task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_fluentd/tasks/determine_version.yaml:12 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_fluentd : fail] **************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_fluentd/tasks/determine_version.yaml:15 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_fluentd : set_fact] ************************************ task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:20 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_fluentd : set_fact] ************************************ task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:26 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_fluentd : Create temp directory for doing work in] ***** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:33 ok: [openshift] => { "changed": false, "cmd": [ "mktemp", "-d", "/tmp/openshift-logging-ansible-XXXXXX" ], "delta": "0:00:00.003288", "end": "2017-06-09 10:35:40.446287", "rc": 0, "start": "2017-06-09 10:35:40.442999" } STDOUT: /tmp/openshift-logging-ansible-QoUo8V TASK [openshift_logging_fluentd : set_fact] ************************************ task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:38 ok: [openshift] => { "ansible_facts": { "tempdir": "/tmp/openshift-logging-ansible-QoUo8V" }, "changed": false 
} TASK [openshift_logging_fluentd : Create templates subdirectory] *************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:41 ok: [openshift] => { "changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/tmp/openshift-logging-ansible-QoUo8V/templates", "secontext": "unconfined_u:object_r:user_tmp_t:s0", "size": 6, "state": "directory", "uid": 0 } TASK [openshift_logging_fluentd : Create Fluentd service account] ************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:51 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_fluentd : Create Fluentd service account] ************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:59 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc get sa aggregated-logging-fluentd -o json -n logging", "results": [ { "apiVersion": "v1", "imagePullSecrets": [ { "name": "aggregated-logging-fluentd-dockercfg-bgdq6" } ], "kind": "ServiceAccount", "metadata": { "creationTimestamp": "2017-06-09T14:35:41Z", "name": "aggregated-logging-fluentd", "namespace": "logging", "resourceVersion": "1597", "selfLink": "/api/v1/namespaces/logging/serviceaccounts/aggregated-logging-fluentd", "uid": "e9bab766-4d20-11e7-94cc-0e3d36056ef8" }, "secrets": [ { "name": "aggregated-logging-fluentd-token-kxwtq" }, { "name": "aggregated-logging-fluentd-dockercfg-bgdq6" } ] } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_fluentd : Set privileged permissions for Fluentd] ****** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:68 changed: [openshift] => { "changed": true, "present": "present", "results": { "cmd": "/bin/oc adm policy add-scc-to-user privileged system:serviceaccount:logging:aggregated-logging-fluentd -n logging", "results": "", "returncode": 0 } } TASK [openshift_logging_fluentd : Set cluster-reader permissions for Fluentd] *** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:77 changed: [openshift] => { "changed": true, "present": "present", "results": { "cmd": "/bin/oc adm policy add-cluster-role-to-user cluster-reader system:serviceaccount:logging:aggregated-logging-fluentd -n logging", "results": "", "returncode": 0 } } TASK [openshift_logging_fluentd : template] ************************************ task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:86 ok: [openshift] => { "changed": false, "checksum": "a8c8596f5fc2c5dd7c8d33d244af17a2555be086", "dest": "/tmp/openshift-logging-ansible-QoUo8V/fluent.conf", "gid": 0, "group": "root", "md5sum": "579698b48ffce6276ee0e8d5ac71a338", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 1301, "src": "/root/.ansible/tmp/ansible-tmp-1497018943.17-111060309283606/source", "state": "file", "uid": 0 } TASK [openshift_logging_fluentd : copy] **************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:94 ok: [openshift] => { "changed": false, "checksum": "b3e75eddc4a0765edc77da092384c0c6f95440e1", "dest": "/tmp/openshift-logging-ansible-QoUo8V/fluentd-throttle-config.yaml", "gid": 0, "group": "root", "md5sum": "25871b8e0a9bedc166a6029872a6c336", "mode": "0644", "owner": "root", 
"secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 133, "src": "/root/.ansible/tmp/ansible-tmp-1497018943.59-257795185599564/source", "state": "file", "uid": 0 } TASK [openshift_logging_fluentd : copy] **************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:100 ok: [openshift] => { "changed": false, "checksum": "a3aa36da13f3108aa4ad5b98d4866007b44e9798", "dest": "/tmp/openshift-logging-ansible-QoUo8V/secure-forward.conf", "gid": 0, "group": "root", "md5sum": "1084b00c427f4fa48dfc66d6ad6555d4", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 563, "src": "/root/.ansible/tmp/ansible-tmp-1497018943.85-103993685152550/source", "state": "file", "uid": 0 } TASK [openshift_logging_fluentd : copy] **************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:107 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_fluentd : copy] **************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:113 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_fluentd : copy] **************************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:119 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging_fluentd : Set Fluentd configmap] *********************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:125 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc get configmap logging-fluentd -o json -n logging", "results": [ { "apiVersion": "v1", "data": { "fluent.conf": "# This file is the fluentd configuration entrypoint. 
Edit with care.\n\n@include configs.d/openshift/system.conf\n\n# In each section below, pre- and post- includes don't include anything initially;\n# they exist to enable future additions to openshift conf as needed.\n\n## sources\n## ordered so that syslog always runs last...\n@include configs.d/openshift/input-pre-*.conf\n@include configs.d/dynamic/input-docker-*.conf\n@include configs.d/dynamic/input-syslog-*.conf\n@include configs.d/openshift/input-post-*.conf\n##\n\n\n", "secure-forward.conf": "# @type secure_forward\n\n# self_hostname ${HOSTNAME}\n# shared_key \n\n# secure yes\n# enable_strict_verification yes\n\n# ca_cert_path /etc/fluent/keys/your_ca_cert\n# ca_private_key_path /etc/fluent/keys/your_private_key\n # for private CA secret key\n# ca_private_key_passphrase passphrase\n\n# \n # or IP\n# host server.fqdn.example.com\n# port 24284\n# \n# \n # ip address to connect\n# host 203.0.113.8\n # specify hostlabel for FQDN verification if ipaddress is used for host\n# hostlabel server.fqdn.example.com\n# \n", "throttle-config.yaml": "# Logging example fluentd throttling config file\n\n#example-project:\n# read_lines_limit: 10\n#\n#.operations:\n# read_lines_limit: 100\n" }, "kind": "ConfigMap", "metadata": { "creationTimestamp": "2017-06-09T14:35:44Z", "name": "logging-fluentd", "namespace": "logging", "resourceVersion": "1615", "selfLink": "/api/v1/namespaces/logging/configmaps/logging-fluentd", "uid": "ebb23461-4d20-11e7-94cc-0e3d36056ef8" } } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_fluentd : Set logging-fluentd secret] ****************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:137 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc secrets new logging-fluentd ca=/etc/origin/logging/ca.crt key=/etc/origin/logging/system.logging.fluentd.key cert=/etc/origin/logging/system.logging.fluentd.crt -n logging", "results": "", "returncode": 0 }, "state": "present" } TASK [openshift_logging_fluentd : Generate logging-fluentd daemonset definition] *** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:154 ok: [openshift] => { "changed": false, "checksum": "2823bb3eb9fc47ae5871fdc0df216a09b805d7a8", "dest": "/tmp/openshift-logging-ansible-QoUo8V/templates/logging-fluentd.yaml", "gid": 0, "group": "root", "md5sum": "7a1935d41566c937912015ab539333c0", "mode": "0644", "owner": "root", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 3415, "src": "/root/.ansible/tmp/ansible-tmp-1497018946.04-26455085033806/source", "state": "file", "uid": 0 } TASK [openshift_logging_fluentd : Set logging-fluentd daemonset] *************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:172 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc get daemonset logging-fluentd -o json -n logging", "results": [ { "apiVersion": "extensions/v1beta1", "kind": "DaemonSet", "metadata": { "creationTimestamp": "2017-06-09T14:35:46Z", "generation": 1, "labels": { "component": "fluentd", "logging-infra": "fluentd", "provider": "openshift" }, "name": "logging-fluentd", "namespace": "logging", "resourceVersion": "1623", "selfLink": "/apis/extensions/v1beta1/namespaces/logging/daemonsets/logging-fluentd", "uid": "ed05c159-4d20-11e7-94cc-0e3d36056ef8" }, "spec": { "selector": { "matchLabels": { "component": "fluentd", "provider": "openshift" } }, "template": { "metadata": { "creationTimestamp": null, 
"labels": { "component": "fluentd", "logging-infra": "fluentd", "provider": "openshift" }, "name": "fluentd-elasticsearch" }, "spec": { "containers": [ { "env": [ { "name": "K8S_HOST_URL", "value": "https://kubernetes.default.svc.cluster.local" }, { "name": "ES_HOST", "value": "logging-es" }, { "name": "ES_PORT", "value": "9200" }, { "name": "ES_CLIENT_CERT", "value": "/etc/fluent/keys/cert" }, { "name": "ES_CLIENT_KEY", "value": "/etc/fluent/keys/key" }, { "name": "ES_CA", "value": "/etc/fluent/keys/ca" }, { "name": "OPS_HOST", "value": "logging-es-ops" }, { "name": "OPS_PORT", "value": "9200" }, { "name": "OPS_CLIENT_CERT", "value": "/etc/fluent/keys/cert" }, { "name": "OPS_CLIENT_KEY", "value": "/etc/fluent/keys/key" }, { "name": "OPS_CA", "value": "/etc/fluent/keys/ca" }, { "name": "ES_COPY", "value": "false" }, { "name": "USE_JOURNAL", "value": "true" }, { "name": "JOURNAL_SOURCE" }, { "name": "JOURNAL_READ_FROM_HEAD", "value": "false" } ], "image": "172.30.106.159:5000/logging/logging-fluentd:latest", "imagePullPolicy": "Always", "name": "fluentd-elasticsearch", "resources": { "limits": { "cpu": "100m", "memory": "512Mi" } }, "securityContext": { "privileged": true }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/run/log/journal", "name": "runlogjournal" }, { "mountPath": "/var/log", "name": "varlog" }, { "mountPath": "/var/lib/docker/containers", "name": "varlibdockercontainers", "readOnly": true }, { "mountPath": "/etc/fluent/configs.d/user", "name": "config", "readOnly": true }, { "mountPath": "/etc/fluent/keys", "name": "certs", "readOnly": true }, { "mountPath": "/etc/docker-hostname", "name": "dockerhostname", "readOnly": true }, { "mountPath": "/etc/localtime", "name": "localtime", "readOnly": true }, { "mountPath": "/etc/sysconfig/docker", "name": "dockercfg", "readOnly": true }, { "mountPath": "/etc/docker", "name": "dockerdaemoncfg", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "nodeSelector": { "logging-infra-fluentd": "true" }, "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "aggregated-logging-fluentd", "serviceAccountName": "aggregated-logging-fluentd", "terminationGracePeriodSeconds": 30, "volumes": [ { "hostPath": { "path": "/run/log/journal" }, "name": "runlogjournal" }, { "hostPath": { "path": "/var/log" }, "name": "varlog" }, { "hostPath": { "path": "/var/lib/docker/containers" }, "name": "varlibdockercontainers" }, { "configMap": { "defaultMode": 420, "name": "logging-fluentd" }, "name": "config" }, { "name": "certs", "secret": { "defaultMode": 420, "secretName": "logging-fluentd" } }, { "hostPath": { "path": "/etc/hostname" }, "name": "dockerhostname" }, { "hostPath": { "path": "/etc/localtime" }, "name": "localtime" }, { "hostPath": { "path": "/etc/sysconfig/docker" }, "name": "dockercfg" }, { "hostPath": { "path": "/etc/docker" }, "name": "dockerdaemoncfg" } ] } }, "templateGeneration": 1, "updateStrategy": { "rollingUpdate": { "maxUnavailable": 1 }, "type": "RollingUpdate" } }, "status": { "currentNumberScheduled": 0, "desiredNumberScheduled": 0, "numberMisscheduled": 0, "numberReady": 0, "observedGeneration": 1 } } ], "returncode": 0 }, "state": "present" } TASK [openshift_logging_fluentd : Retrieve list of Fluentd hosts] ************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:183 ok: [openshift] => { "changed": false, "results": { "cmd": "/bin/oc get node -o json 
-n default", "results": [ { "apiVersion": "v1", "items": [ { "apiVersion": "v1", "kind": "Node", "metadata": { "annotations": { "volumes.kubernetes.io/controller-managed-attach-detach": "true" }, "creationTimestamp": "2017-06-09T14:21:55Z", "labels": { "beta.kubernetes.io/arch": "amd64", "beta.kubernetes.io/os": "linux", "kubernetes.io/hostname": "172.18.1.226" }, "name": "172.18.1.226", "namespace": "", "resourceVersion": "1614", "selfLink": "/api/v1/nodes/172.18.1.226", "uid": "fdb15570-4d1e-11e7-94cc-0e3d36056ef8" }, "spec": { "externalID": "172.18.1.226", "providerID": "aws:////i-0736f322f058e6409" }, "status": { "addresses": [ { "address": "172.18.1.226", "type": "LegacyHostIP" }, { "address": "172.18.1.226", "type": "InternalIP" }, { "address": "172.18.1.226", "type": "Hostname" } ], "allocatable": { "cpu": "4", "memory": "7129288Ki", "pods": "40" }, "capacity": { "cpu": "4", "memory": "7231688Ki", "pods": "40" }, "conditions": [ { "lastHeartbeatTime": "2017-06-09T14:35:44Z", "lastTransitionTime": "2017-06-09T14:21:55Z", "message": "kubelet has sufficient disk space available", "reason": "KubeletHasSufficientDisk", "status": "False", "type": "OutOfDisk" }, { "lastHeartbeatTime": "2017-06-09T14:35:44Z", "lastTransitionTime": "2017-06-09T14:21:55Z", "message": "kubelet has sufficient memory available", "reason": "KubeletHasSufficientMemory", "status": "False", "type": "MemoryPressure" }, { "lastHeartbeatTime": "2017-06-09T14:35:44Z", "lastTransitionTime": "2017-06-09T14:21:55Z", "message": "kubelet has no disk pressure", "reason": "KubeletHasNoDiskPressure", "status": "False", "type": "DiskPressure" }, { "lastHeartbeatTime": "2017-06-09T14:35:44Z", "lastTransitionTime": "2017-06-09T14:21:55Z", "message": "kubelet is posting ready status", "reason": "KubeletReady", "status": "True", "type": "Ready" } ], "daemonEndpoints": { "kubeletEndpoint": { "Port": 10250 } }, "images": [ { "names": [ "openshift/origin-federation:6acabdc", "openshift/origin-federation:latest" ], "sizeBytes": 1205885664 }, { "names": [ "docker.io/openshift/origin-docker-registry@sha256:0601ffd0ff2b7258926bde100b285cf824e012438e15e1ad808ea5e3bbdecc12", "docker.io/openshift/origin-docker-registry:latest" ], "sizeBytes": 1100570695 }, { "names": [ "openshift/origin-docker-registry:latest" ], "sizeBytes": 1100164272 }, { "names": [ "openshift/origin-gitserver:6acabdc", "openshift/origin-gitserver:latest" ], "sizeBytes": 1086520226 }, { "names": [ "openshift/openvswitch:6acabdc", "openshift/openvswitch:latest" ], "sizeBytes": 1053403667 }, { "names": [ "openshift/node:6acabdc", "openshift/node:latest" ], "sizeBytes": 1051721928 }, { "names": [ "openshift/origin-keepalived-ipfailover:6acabdc", "openshift/origin-keepalived-ipfailover:latest" ], "sizeBytes": 1028529711 }, { "names": [ "openshift/origin-haproxy-router:latest" ], "sizeBytes": 1022758742 }, { "names": [ "openshift/origin-deployer:6acabdc", "openshift/origin-deployer:latest" ], "sizeBytes": 1001728427 }, { "names": [ "openshift/origin-recycler:6acabdc", "openshift/origin-recycler:latest" ], "sizeBytes": 1001728427 }, { "names": [ "openshift/origin:6acabdc", "openshift/origin:latest" ], "sizeBytes": 1001728427 }, { "names": [ "openshift/origin-f5-router:6acabdc", "openshift/origin-f5-router:latest" ], "sizeBytes": 1001728427 }, { "names": [ "openshift/origin-docker-builder:6acabdc", "openshift/origin-docker-builder:latest" ], "sizeBytes": 1001728427 }, { "names": [ "rhel7.1:latest" ], "sizeBytes": 765301508 }, { "names": [ "openshift/dind-master:latest" ], 
"sizeBytes": 731456758 }, { "names": [ "openshift/dind-node:latest" ], "sizeBytes": 731453034 }, { "names": [ "172.30.106.159:5000/logging/logging-auth-proxy@sha256:1285e9c200e7324a363656a553ebfc443465a0767f76f41985dd881d9d9e53c4", "172.30.106.159:5000/logging/logging-auth-proxy:latest" ], "sizeBytes": 715536092 }, { "names": [ "@", ":" ], "sizeBytes": 709532011 }, { "names": [ "docker.io/node@sha256:46db0dd19955beb87b841c30a6b9812ba626473283e84117d1c016deee5949a9", "docker.io/node:0.10.36" ], "sizeBytes": 697128386 }, { "names": [ "docker.io/openshift/origin-logging-kibana@sha256:950568237cc7d0ff14ea9fe22c3967d888996db70c66181421ad68caeb5ba75f", "docker.io/openshift/origin-logging-kibana:latest" ], "sizeBytes": 682851513 }, { "names": [ "172.30.106.159:5000/logging/logging-kibana@sha256:cc4278931c155e77bff9c8fea924c3d27812204633db4b26eed22bcfe1da11fb", "172.30.106.159:5000/logging/logging-kibana:latest" ], "sizeBytes": 682851459 }, { "names": [ "openshift/dind:latest" ], "sizeBytes": 640650210 }, { "names": [ "172.30.106.159:5000/logging/logging-elasticsearch@sha256:caec1ec1fe6c8d6fc3fad900dc67304327a15987c22607543b48bac65926d6c8", "172.30.106.159:5000/logging/logging-elasticsearch:latest" ], "sizeBytes": 623513030 }, { "names": [ "172.30.106.159:5000/logging/logging-fluentd@sha256:72c8c7c4162d69c81fbd9d56e6ea3fa724347cd43f4e42a2bdd674e2428c0104", "172.30.106.159:5000/logging/logging-fluentd:latest" ], "sizeBytes": 472184908 }, { "names": [ "docker.io/openshift/origin-logging-elasticsearch@sha256:6296f1719676e970438cac4d912542b35ac786c14a15df892507007c4ecbe490", "docker.io/openshift/origin-logging-elasticsearch:latest" ], "sizeBytes": 425567196 }, { "names": [ "172.30.106.159:5000/logging/logging-curator@sha256:a7c2957c14cb360b2affd704fb758bd00e1e293fddebb72e01b846195889dc94", "172.30.106.159:5000/logging/logging-curator:latest" ], "sizeBytes": 418288308 }, { "names": [ "docker.io/openshift/base-centos7@sha256:aea292a3bddba020cde0ee83e6a45807931eb607c164ec6a3674f67039d8cd7c", "docker.io/openshift/base-centos7:latest" ], "sizeBytes": 383049978 }, { "names": [ "rhel7.2:latest" ], "sizeBytes": 377493597 }, { "names": [ "openshift/origin-egress-router:6acabdc", "openshift/origin-egress-router:latest" ], "sizeBytes": 364745713 }, { "names": [ "openshift/origin-base:latest" ], "sizeBytes": 363070172 }, { "names": [ "@", ":" ], "sizeBytes": 363024702 }, { "names": [ "docker.io/openshift/origin-logging-fluentd@sha256:cae7c21c9f111d4f5b481c14a65c597c67e715a8ffe3aee4c483100ee77296d7", "docker.io/openshift/origin-logging-fluentd:latest" ], "sizeBytes": 359223728 }, { "names": [ "docker.io/fedora@sha256:69281ddd7b2600e5f2b17f1e12d7fba25207f459204fb2d15884f8432c479136", "docker.io/fedora:25" ], "sizeBytes": 230864375 }, { "names": [ "docker.io/openshift/origin-logging-curator@sha256:daded10ff4e08dfb6659c964e305f16679596312da558af095835202cf66f703", "docker.io/openshift/origin-logging-curator:latest" ], "sizeBytes": 224977669 }, { "names": [ "rhel7.3:latest", "rhel7:latest" ], "sizeBytes": 219121266 }, { "names": [ "openshift/origin-pod:6acabdc", "openshift/origin-pod:latest" ], "sizeBytes": 213199843 }, { "names": [ "registry.access.redhat.com/rhel7.2@sha256:98e6ca5d226c26e31a95cd67716afe22833c943e1926a21daf1a030906a02249", "registry.access.redhat.com/rhel7.2:latest" ], "sizeBytes": 201376319 }, { "names": [ "registry.access.redhat.com/rhel7.3@sha256:1e232401d8e0ba53b36b757b4712fbcbd1dab9c21db039c45a84871a74e89e68", "registry.access.redhat.com/rhel7.3:latest" ], "sizeBytes": 192693772 }, { "names": [ 
"docker.io/centos@sha256:bba1de7c9d900a898e3cadbae040dfe8a633c06bc104a0df76ae24483e03c077" ], "sizeBytes": 192548999 }, { "names": [ "openshift/origin-source:latest" ], "sizeBytes": 192548894 }, { "names": [ "docker.io/centos@sha256:aebf12af704307dfa0079b3babdca8d7e8ff6564696882bcb5d11f1d461f9ee9", "docker.io/centos:7", "docker.io/centos:centos7" ], "sizeBytes": 192548537 }, { "names": [ "registry.access.redhat.com/rhel7.1@sha256:1bc5a4c43bbb29a5a96a61896ff696933be3502e2f5fdc4cde02d9e101731fdd", "registry.access.redhat.com/rhel7.1:latest" ], "sizeBytes": 158229901 }, { "names": [ "openshift/hello-openshift:6acabdc", "openshift/hello-openshift:latest" ], "sizeBytes": 5643318 } ], "nodeInfo": { "architecture": "amd64", "bootID": "d12bf242-faf5-4acb-9bcb-62c23709ec03", "containerRuntimeVersion": "docker://1.12.6", "kernelVersion": "3.10.0-327.22.2.el7.x86_64", "kubeProxyVersion": "v1.6.1+5115d708d7", "kubeletVersion": "v1.6.1+5115d708d7", "machineID": "f9370ed252a14f73b014c1301a9b6d1b", "operatingSystem": "linux", "osImage": "Red Hat Enterprise Linux Server 7.3 (Maipo)", "systemUUID": "EC2FED92-34F3-4070-9783-EF639907D332" } } } ], "kind": "List", "metadata": {}, "resourceVersion": "", "selfLink": "" } ], "returncode": 0 }, "state": "list" } TASK [openshift_logging_fluentd : Set openshift_logging_fluentd_hosts] ********* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:190 ok: [openshift] => { "ansible_facts": { "openshift_logging_fluentd_hosts": [ "172.18.1.226" ] }, "changed": false } TASK [openshift_logging_fluentd : include] ************************************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:195 included: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_fluentd/tasks/label_and_wait.yaml for openshift TASK [openshift_logging_fluentd : Label 172.18.1.226 for Fluentd deployment] *** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_fluentd/tasks/label_and_wait.yaml:2 changed: [openshift] => { "changed": true, "results": { "cmd": "/bin/oc label node 172.18.1.226 logging-infra-fluentd=true --overwrite", "results": "", "returncode": 0 }, "state": "add" } TASK [openshift_logging_fluentd : command] ************************************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_fluentd/tasks/label_and_wait.yaml:10 changed: [openshift -> 127.0.0.1] => { "changed": true, "cmd": [ "sleep", "0.5" ], "delta": "0:00:00.502176", "end": "2017-06-09 10:35:49.121660", "rc": 0, "start": "2017-06-09 10:35:48.619484" } TASK [openshift_logging_fluentd : Delete temp directory] *********************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging_fluentd/tasks/main.yaml:202 ok: [openshift] => { "changed": false, "path": "/tmp/openshift-logging-ansible-QoUo8V", "state": "absent" } TASK [openshift_logging : include] ********************************************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/install_logging.yaml:253 included: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/update_master_config.yaml for openshift TASK [openshift_logging : include] ********************************************* task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/main.yaml:36 skipping: [openshift] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true } TASK [openshift_logging : Cleaning up local temp dir] 
************************** task path: /tmp/tmp.OLv9M4asi1/openhift-ansible/roles/openshift_logging/tasks/main.yaml:40 ok: [openshift -> 127.0.0.1] => { "changed": false, "path": "/tmp/openshift-logging-ansible-6X6iCA", "state": "absent" } META: ran handlers META: ran handlers PLAY [Update Master configs] *************************************************** skipping: no hosts matched PLAY RECAP ********************************************************************* localhost : ok=2 changed=0 unreachable=0 failed=0 openshift : ok=213 changed=71 unreachable=0 failed=0 /data/src/github.com/openshift/origin-aggregated-logging Running /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:170: executing 'oc get pods -l component=es' expecting any result and text 'Running'; re-trying every 0.2s until completion or 180.000s... SUCCESS after 0.288s: /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:170: executing 'oc get pods -l component=es' expecting any result and text 'Running'; re-trying every 0.2s until completion or 180.000s Standard output from the command: NAME READY STATUS RESTARTS AGE logging-es-data-master-bj8649rd-1-vm3xb 1/1 Running 0 1m There was no error output from the command. Running /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:171: executing 'oc get pods -l component=kibana' expecting any result and text 'Running'; re-trying every 0.2s until completion or 180.000s... SUCCESS after 0.244s: /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:171: executing 'oc get pods -l component=kibana' expecting any result and text 'Running'; re-trying every 0.2s until completion or 180.000s Standard output from the command: NAME READY STATUS RESTARTS AGE logging-kibana-1-fx6h4 2/2 Running 0 36s There was no error output from the command. Running /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:172: executing 'oc get pods -l component=curator' expecting any result and text 'Running'; re-trying every 0.2s until completion or 180.000s... SUCCESS after 0.265s: /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:172: executing 'oc get pods -l component=curator' expecting any result and text 'Running'; re-trying every 0.2s until completion or 180.000s Standard output from the command: NAME READY STATUS RESTARTS AGE logging-curator-1-rw3d2 1/1 Running 0 17s There was no error output from the command. Running /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:175: executing 'oc get pods -l component=es-ops' expecting any result and text 'Running'; re-trying every 0.2s until completion or 180.000s... SUCCESS after 0.218s: /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:175: executing 'oc get pods -l component=es-ops' expecting any result and text 'Running'; re-trying every 0.2s until completion or 180.000s Standard output from the command: NAME READY STATUS RESTARTS AGE logging-es-ops-data-master-f31li9lz-1-6grkm 1/1 Running 0 51s There was no error output from the command. Running /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:176: executing 'oc get pods -l component=kibana-ops' expecting any result and text 'Running'; re-trying every 0.2s until completion or 180.000s... 
SUCCESS after 0.220s: /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:176: executing 'oc get pods -l component=kibana-ops' expecting any result and text 'Running'; re-trying every 0.2s until completion or 180.000s Standard output from the command: NAME READY STATUS RESTARTS AGE logging-kibana-ops-1-s0m98 2/2 Running 0 26s There was no error output from the command. Running /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:177: executing 'oc get pods -l component=curator-ops' expecting any result and text 'Running'; re-trying every 0.2s until completion or 180.000s... SUCCESS after 0.210s: /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:177: executing 'oc get pods -l component=curator-ops' expecting any result and text 'Running'; re-trying every 0.2s until completion or 180.000s Standard output from the command: NAME READY STATUS RESTARTS AGE logging-curator-ops-1-3pl75 1/1 Running 0 12s There was no error output from the command. Running /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:185: executing 'oc project logging > /dev/null' expecting success... SUCCESS after 0.218s: /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:185: executing 'oc project logging > /dev/null' expecting success There was no output from the command. There was no error output from the command. /data/src/github.com/openshift/origin-aggregated-logging/hack/testing /data/src/github.com/openshift/origin-aggregated-logging --> Deploying template "logging/logging-fluentd-template-maker" for "-" to project logging logging-fluentd-template-maker --------- Template to create template for fluentd * With parameters: * MASTER_URL=https://kubernetes.default.svc.cluster.local * ES_HOST=logging-es * ES_PORT=9200 * ES_CLIENT_CERT=/etc/fluent/keys/cert * ES_CLIENT_KEY=/etc/fluent/keys/key * ES_CA=/etc/fluent/keys/ca * OPS_HOST=logging-es-ops * OPS_PORT=9200 * OPS_CLIENT_CERT=/etc/fluent/keys/cert * OPS_CLIENT_KEY=/etc/fluent/keys/key * OPS_CA=/etc/fluent/keys/ca * ES_COPY=false * ES_COPY_HOST= * ES_COPY_PORT= * ES_COPY_SCHEME=https * ES_COPY_CLIENT_CERT= * ES_COPY_CLIENT_KEY= * ES_COPY_CA= * ES_COPY_USERNAME= * ES_COPY_PASSWORD= * OPS_COPY_HOST= * OPS_COPY_PORT= * OPS_COPY_SCHEME=https * OPS_COPY_CLIENT_CERT= * OPS_COPY_CLIENT_KEY= * OPS_COPY_CA= * OPS_COPY_USERNAME= * OPS_COPY_PASSWORD= * IMAGE_PREFIX_DEFAULT=172.30.106.159:5000/logging/ * IMAGE_VERSION_DEFAULT=latest * USE_JOURNAL= * JOURNAL_SOURCE= * JOURNAL_READ_FROM_HEAD=false * USE_MUX=false * USE_MUX_CLIENT=false * MUX_ALLOW_EXTERNAL=false * BUFFER_QUEUE_LIMIT=1024 * BUFFER_SIZE_LIMIT=16777216 --> Creating resources ... template "logging-fluentd-template" created --> Success Run 'oc status' to view your app. WARNING: bridge-nf-call-ip6tables is disabled START wait_for_fluentd_to_catch_up at 2017-06-09 14:36:04.452046061+00:00 added es message 43e5391d-bc82-482b-b15f-28165bd31944 added es-ops message 8e01b80a-6735-403c-9ad9-b63ca333079d good - wait_for_fluentd_to_catch_up: found 1 record project logging for 43e5391d-bc82-482b-b15f-28165bd31944 good - wait_for_fluentd_to_catch_up: found 1 record project .operations for 8e01b80a-6735-403c-9ad9-b63ca333079d END wait_for_fluentd_to_catch_up took 11 seconds at 2017-06-09 14:36:15.398385962+00:00 Running /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:223: executing 'oc login --username=admin --password=admin' expecting success... 
SUCCESS after 0.236s: /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:223: executing 'oc login --username=admin --password=admin' expecting success Standard output from the command: Login successful. You don't have any projects. You can try to create a new project, by running oc new-project There was no error output from the command. Running /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:224: executing 'oc login --username=system:admin' expecting success... SUCCESS after 0.707s: /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:224: executing 'oc login --username=system:admin' expecting success Standard output from the command: Logged into "https://172.18.1.226:8443" as "system:admin" using existing credentials. You have access to the following projects and can switch between them with 'oc project ': * default kube-public kube-system logging openshift openshift-infra Using project "default". There was no error output from the command. Running /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:225: executing 'oadm policy add-cluster-role-to-user cluster-admin admin' expecting success... SUCCESS after 0.248s: /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:225: executing 'oadm policy add-cluster-role-to-user cluster-admin admin' expecting success Standard output from the command: cluster role "cluster-admin" added: "admin" There was no error output from the command. Running /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:226: executing 'oc login --username=loguser --password=loguser' expecting success... SUCCESS after 0.291s: /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:226: executing 'oc login --username=loguser --password=loguser' expecting success Standard output from the command: Login successful. You don't have any projects. You can try to create a new project, by running oc new-project There was no error output from the command. Running /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:227: executing 'oc login --username=system:admin' expecting success... SUCCESS after 0.525s: /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:227: executing 'oc login --username=system:admin' expecting success Standard output from the command: Logged into "https://172.18.1.226:8443" as "system:admin" using existing credentials. You have access to the following projects and can switch between them with 'oc project ': * default kube-public kube-system logging openshift openshift-infra Using project "default". There was no error output from the command. Running /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:228: executing 'oc project logging > /dev/null' expecting success... SUCCESS after 0.365s: /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:228: executing 'oc project logging > /dev/null' expecting success There was no output from the command. There was no error output from the command. Running /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:229: executing 'oadm policy add-role-to-user view loguser' expecting success... SUCCESS after 0.219s: /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:229: executing 'oadm policy add-role-to-user view loguser' expecting success Standard output from the command: role "view" added: "loguser" There was no error output from the command. 
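For reference, the login and RBAC setup exercised by logging.sh:223-229 above reduces to a handful of oc/oadm commands. The sketch below is reconstructed from the commands visible in this log rather than copied from the suite, and it assumes the test environment shown here (an AllowAll-style identity provider so the admin/loguser fixture accounts can log in with those passwords, and the logging project already created by the installer):

# Minimal sketch of the access setup the suite performs (reconstructed from this log).
# Assumes: oc/oadm on PATH, a reachable master, and an identity provider that
# accepts the admin/loguser test credentials used in this run.
set -o errexit

# Log in once as each test user so the user objects exist, then return to system:admin.
oc login --username=admin --password=admin
oc login --username=loguser --password=loguser
oc login --username=system:admin

# Grant the 'admin' test user cluster-admin so it can query cluster-level ES stats.
oadm policy add-cluster-role-to-user cluster-admin admin

# Give 'loguser' read-only (view) access scoped to the logging project.
oc project logging >/dev/null
oadm policy add-role-to-user view loguser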
Checking if Elasticsearch logging-es-data-master-bj8649rd-1-vm3xb is ready { "_id": "0", "_index": ".searchguard.logging-es-data-master-bj8649rd-1-vm3xb", "_shards": { "failed": 0, "successful": 1, "total": 1 }, "_type": "rolesmapping", "_version": 2, "created": false } Checking if Elasticsearch logging-es-ops-data-master-f31li9lz-1-6grkm is ready { "_id": "0", "_index": ".searchguard.logging-es-ops-data-master-f31li9lz-1-6grkm", "_shards": { "failed": 0, "successful": 1, "total": 1 }, "_type": "rolesmapping", "_version": 2, "created": false } ------------------------------------------ Test 'admin' user can access cluster stats ------------------------------------------ Running /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:265: executing 'test 200 = 200' expecting success... SUCCESS after 0.009s: /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:265: executing 'test 200 = 200' expecting success There was no output from the command. There was no error output from the command. ------------------------------------------ Test 'admin' user can access cluster stats for OPS cluster ------------------------------------------ Running /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:274: executing 'test 200 = 200' expecting success... SUCCESS after 0.011s: /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:274: executing 'test 200 = 200' expecting success There was no output from the command. There was no error output from the command. Running e2e tests Checking installation of the EFK stack... Running test/cluster/rollout.sh:20: executing 'oc project logging' expecting success... SUCCESS after 0.280s: test/cluster/rollout.sh:20: executing 'oc project logging' expecting success Standard output from the command: Already on project "logging" on server "https://172.18.1.226:8443". There was no error output from the command. [INFO] Checking for DeploymentConfigurations... Running test/cluster/rollout.sh:24: executing 'oc get deploymentconfig logging-kibana' expecting success... SUCCESS after 0.254s: test/cluster/rollout.sh:24: executing 'oc get deploymentconfig logging-kibana' expecting success Standard output from the command: NAME REVISION DESIRED CURRENT TRIGGERED BY logging-kibana 1 1 1 config There was no error output from the command. Running test/cluster/rollout.sh:25: executing 'oc rollout status deploymentconfig/logging-kibana' expecting success... SUCCESS after 0.207s: test/cluster/rollout.sh:25: executing 'oc rollout status deploymentconfig/logging-kibana' expecting success Standard output from the command: replication controller "logging-kibana-1" successfully rolled out There was no error output from the command. Running test/cluster/rollout.sh:24: executing 'oc get deploymentconfig logging-curator' expecting success... SUCCESS after 0.217s: test/cluster/rollout.sh:24: executing 'oc get deploymentconfig logging-curator' expecting success Standard output from the command: NAME REVISION DESIRED CURRENT TRIGGERED BY logging-curator 1 1 1 config There was no error output from the command. Running test/cluster/rollout.sh:25: executing 'oc rollout status deploymentconfig/logging-curator' expecting success... SUCCESS after 0.230s: test/cluster/rollout.sh:25: executing 'oc rollout status deploymentconfig/logging-curator' expecting success Standard output from the command: replication controller "logging-curator-1" successfully rolled out There was no error output from the command. 
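The DeploymentConfig checks above and below (test/cluster/rollout.sh:24-25) repeat the same two-step probe for every component: confirm the DC exists, then wait for its latest rollout to finish. A condensed sketch of that loop follows; the ES data-master names carry per-deployment suffixes (-bj8649rd, -f31li9lz in this run), so they are placeholders for whatever suffixes a given install generates:

# Condensed form of the per-DeploymentConfig checks run by test/cluster/rollout.sh.
# DC names below are the ones from this run; the ES suffixes differ per install.
set -o errexit
oc project logging >/dev/null

for dc in logging-kibana logging-curator logging-kibana-ops logging-curator-ops \
          logging-es-data-master-bj8649rd logging-es-ops-data-master-f31li9lz; do
    oc get deploymentconfig "${dc}"             # the object must exist
    oc rollout status "deploymentconfig/${dc}"  # the latest RC must have rolled out
done

# Fluentd is deployed as a DaemonSet rather than a DC, so it is checked separately.
oc get daemonset logging-fluentd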
Running test/cluster/rollout.sh:24: executing 'oc get deploymentconfig logging-kibana-ops' expecting success... SUCCESS after 0.227s: test/cluster/rollout.sh:24: executing 'oc get deploymentconfig logging-kibana-ops' expecting success Standard output from the command: NAME REVISION DESIRED CURRENT TRIGGERED BY logging-kibana-ops 1 1 1 config There was no error output from the command. Running test/cluster/rollout.sh:25: executing 'oc rollout status deploymentconfig/logging-kibana-ops' expecting success... SUCCESS after 0.206s: test/cluster/rollout.sh:25: executing 'oc rollout status deploymentconfig/logging-kibana-ops' expecting success Standard output from the command: replication controller "logging-kibana-ops-1" successfully rolled out There was no error output from the command. Running test/cluster/rollout.sh:24: executing 'oc get deploymentconfig logging-curator-ops' expecting success... SUCCESS after 0.245s: test/cluster/rollout.sh:24: executing 'oc get deploymentconfig logging-curator-ops' expecting success Standard output from the command: NAME REVISION DESIRED CURRENT TRIGGERED BY logging-curator-ops 1 1 1 config There was no error output from the command. Running test/cluster/rollout.sh:25: executing 'oc rollout status deploymentconfig/logging-curator-ops' expecting success... SUCCESS after 0.203s: test/cluster/rollout.sh:25: executing 'oc rollout status deploymentconfig/logging-curator-ops' expecting success Standard output from the command: replication controller "logging-curator-ops-1" successfully rolled out There was no error output from the command. Running test/cluster/rollout.sh:24: executing 'oc get deploymentconfig logging-es-data-master-bj8649rd' expecting success... SUCCESS after 0.213s: test/cluster/rollout.sh:24: executing 'oc get deploymentconfig logging-es-data-master-bj8649rd' expecting success Standard output from the command: NAME REVISION DESIRED CURRENT TRIGGERED BY logging-es-data-master-bj8649rd 1 1 1 config There was no error output from the command. Running test/cluster/rollout.sh:25: executing 'oc rollout status deploymentconfig/logging-es-data-master-bj8649rd' expecting success... SUCCESS after 0.205s: test/cluster/rollout.sh:25: executing 'oc rollout status deploymentconfig/logging-es-data-master-bj8649rd' expecting success Standard output from the command: replication controller "logging-es-data-master-bj8649rd-1" successfully rolled out There was no error output from the command. Running test/cluster/rollout.sh:24: executing 'oc get deploymentconfig logging-es-ops-data-master-f31li9lz' expecting success... SUCCESS after 0.217s: test/cluster/rollout.sh:24: executing 'oc get deploymentconfig logging-es-ops-data-master-f31li9lz' expecting success Standard output from the command: NAME REVISION DESIRED CURRENT TRIGGERED BY logging-es-ops-data-master-f31li9lz 1 1 1 config There was no error output from the command. Running test/cluster/rollout.sh:25: executing 'oc rollout status deploymentconfig/logging-es-ops-data-master-f31li9lz' expecting success... SUCCESS after 0.205s: test/cluster/rollout.sh:25: executing 'oc rollout status deploymentconfig/logging-es-ops-data-master-f31li9lz' expecting success Standard output from the command: replication controller "logging-es-ops-data-master-f31li9lz-1" successfully rolled out There was no error output from the command. [INFO] Checking for Routes... Running test/cluster/rollout.sh:30: executing 'oc get route logging-kibana' expecting success... 
SUCCESS after 0.207s: test/cluster/rollout.sh:30: executing 'oc get route logging-kibana' expecting success Standard output from the command: NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD logging-kibana kibana.router.default.svc.cluster.local logging-kibana reencrypt/Redirect None There was no error output from the command. Running test/cluster/rollout.sh:30: executing 'oc get route logging-kibana-ops' expecting success... SUCCESS after 0.213s: test/cluster/rollout.sh:30: executing 'oc get route logging-kibana-ops' expecting success Standard output from the command: NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD logging-kibana-ops kibana-ops.router.default.svc.cluster.local logging-kibana-ops reencrypt/Redirect None There was no error output from the command. [INFO] Checking for Services... Running test/cluster/rollout.sh:35: executing 'oc get service logging-es' expecting success... SUCCESS after 0.234s: test/cluster/rollout.sh:35: executing 'oc get service logging-es' expecting success Standard output from the command: NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE logging-es 172.30.1.243 9200/TCP 1m There was no error output from the command. Running test/cluster/rollout.sh:35: executing 'oc get service logging-es-cluster' expecting success... SUCCESS after 0.228s: test/cluster/rollout.sh:35: executing 'oc get service logging-es-cluster' expecting success Standard output from the command: NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE logging-es-cluster 172.30.75.164 9300/TCP 1m There was no error output from the command. Running test/cluster/rollout.sh:35: executing 'oc get service logging-kibana' expecting success... SUCCESS after 0.204s: test/cluster/rollout.sh:35: executing 'oc get service logging-kibana' expecting success Standard output from the command: NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE logging-kibana 172.30.216.229 443/TCP 1m There was no error output from the command. Running test/cluster/rollout.sh:35: executing 'oc get service logging-es-ops' expecting success... SUCCESS after 0.230s: test/cluster/rollout.sh:35: executing 'oc get service logging-es-ops' expecting success Standard output from the command: NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE logging-es-ops 172.30.148.99 9200/TCP 1m There was no error output from the command. Running test/cluster/rollout.sh:35: executing 'oc get service logging-es-ops-cluster' expecting success... SUCCESS after 0.226s: test/cluster/rollout.sh:35: executing 'oc get service logging-es-ops-cluster' expecting success Standard output from the command: NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE logging-es-ops-cluster 172.30.130.163 9300/TCP 1m There was no error output from the command. Running test/cluster/rollout.sh:35: executing 'oc get service logging-kibana-ops' expecting success... SUCCESS after 0.207s: test/cluster/rollout.sh:35: executing 'oc get service logging-kibana-ops' expecting success Standard output from the command: NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE logging-kibana-ops 172.30.132.127 443/TCP 1m There was no error output from the command. [INFO] Checking for OAuthClients... Running test/cluster/rollout.sh:40: executing 'oc get oauthclient kibana-proxy' expecting success... 
SUCCESS after 0.226s: test/cluster/rollout.sh:40: executing 'oc get oauthclient kibana-proxy' expecting success Standard output from the command: NAME SECRET WWW-CHALLENGE REDIRECT URIS kibana-proxy 0KlDJv6P66RqF4XCdXUwShBKOJwyMz7rCt8sgI7ZyXn20qPHTzbBuSILZqQwGrth FALSE https://kibana.router.default.svc.cluster.local,https://kibana-ops.router.default.svc.cluster.local There was no error output from the command. [INFO] Checking for DaemonSets... Running test/cluster/rollout.sh:45: executing 'oc get daemonset logging-fluentd' expecting success... SUCCESS after 0.219s: test/cluster/rollout.sh:45: executing 'oc get daemonset logging-fluentd' expecting success Standard output from the command: NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE-SELECTOR AGE logging-fluentd 1 1 1 1 1 logging-infra-fluentd=true 54s There was no error output from the command. Running test/cluster/rollout.sh:47: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '1'; re-trying every 0.2s until completion or 60.000s... SUCCESS after 0.220s: test/cluster/rollout.sh:47: executing 'oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'' expecting any result and text '1'; re-trying every 0.2s until completion or 60.000s Standard output from the command: 1 There was no error output from the command. Checking for log entry matches between ES and their sources... WARNING: bridge-nf-call-ip6tables is disabled Running test/cluster/functionality.sh:40: executing 'oc login --username=admin --password=admin' expecting success... SUCCESS after 0.227s: test/cluster/functionality.sh:40: executing 'oc login --username=admin --password=admin' expecting success Standard output from the command: Login successful. You have access to the following projects and can switch between them with 'oc project ': default kube-public kube-system * logging openshift openshift-infra Using project "logging". There was no error output from the command. Running test/cluster/functionality.sh:44: executing 'oc login --username=system:admin' expecting success... SUCCESS after 0.231s: test/cluster/functionality.sh:44: executing 'oc login --username=system:admin' expecting success Standard output from the command: Logged into "https://172.18.1.226:8443" as "system:admin" using existing credentials. You have access to the following projects and can switch between them with 'oc project ': default kube-public kube-system * logging openshift openshift-infra Using project "logging". There was no error output from the command. Running test/cluster/functionality.sh:45: executing 'oc project logging' expecting success... SUCCESS after 0.222s: test/cluster/functionality.sh:45: executing 'oc project logging' expecting success Standard output from the command: Already on project "logging" on server "https://172.18.1.226:8443". There was no error output from the command. [INFO] Testing Kibana pod logging-kibana-1-fx6h4 for a successful start... Running test/cluster/functionality.sh:52: executing 'oc exec logging-kibana-1-fx6h4 -c kibana -- curl -s --request HEAD --write-out '%{response_code}' http://localhost:5601/' expecting any result and text '200'; re-trying every 0.2s until completion or 600.000s... 
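The "re-trying every 0.2s until completion or 60.000s" wording above comes from a polling helper in the test harness. A simplified stand-in for that retry loop (the function name, timeout handling, and text matching are assumptions, not the harness's actual implementation):

    # Poll a command every 0.2s until its output contains the expected text or a deadline passes.
    try_until_text() {
        local cmd=$1 expected=$2 timeout=${3:-60}
        local deadline=$(( $(date +%s) + timeout ))
        while (( $(date +%s) < deadline )); do
            if eval "${cmd}" 2>/dev/null | grep -q -- "${expected}"; then
                return 0
            fi
            sleep 0.2
        done
        return 1
    }

    # Example: wait until the fluentd daemonset reports one ready pod, as in the check above.
    try_until_text "oc get daemonset logging-fluentd -o jsonpath='{ .status.numberReady }'" "1" 60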
SUCCESS after 120.281s: test/cluster/functionality.sh:52: executing 'oc exec logging-kibana-1-fx6h4 -c kibana -- curl -s --request HEAD --write-out '%{response_code}' http://localhost:5601/' expecting any result and text '200'; re-trying every 0.2s until completion or 600.000s Standard output from the command: 200 There was no error output from the command. Running test/cluster/functionality.sh:53: executing 'oc get pod logging-kibana-1-fx6h4 -o jsonpath='{ .status.containerStatuses[?(@.name=="kibana")].ready }'' expecting any result and text 'true'; re-trying every 0.2s until completion or 60.000s... SUCCESS after 0.205s: test/cluster/functionality.sh:53: executing 'oc get pod logging-kibana-1-fx6h4 -o jsonpath='{ .status.containerStatuses[?(@.name=="kibana")].ready }'' expecting any result and text 'true'; re-trying every 0.2s until completion or 60.000s Standard output from the command: true There was no error output from the command. Running test/cluster/functionality.sh:54: executing 'oc get pod logging-kibana-1-fx6h4 -o jsonpath='{ .status.containerStatuses[?(@.name=="kibana-proxy")].ready }'' expecting any result and text 'true'; re-trying every 0.2s until completion or 60.000s... SUCCESS after 0.211s: test/cluster/functionality.sh:54: executing 'oc get pod logging-kibana-1-fx6h4 -o jsonpath='{ .status.containerStatuses[?(@.name=="kibana-proxy")].ready }'' expecting any result and text 'true'; re-trying every 0.2s until completion or 60.000s Standard output from the command: true There was no error output from the command. [INFO] Testing Elasticsearch pod logging-es-data-master-bj8649rd-1-vm3xb for a successful start... Running test/cluster/functionality.sh:59: executing 'curl_es 'logging-es-data-master-bj8649rd-1-vm3xb' '/' -X HEAD -w '%{response_code}'' expecting any result and text '200'; re-trying every 0.2s until completion or 600.000s... SUCCESS after 0.366s: test/cluster/functionality.sh:59: executing 'curl_es 'logging-es-data-master-bj8649rd-1-vm3xb' '/' -X HEAD -w '%{response_code}'' expecting any result and text '200'; re-trying every 0.2s until completion or 600.000s Standard output from the command: 200 There was no error output from the command. Running test/cluster/functionality.sh:60: executing 'oc get pod logging-es-data-master-bj8649rd-1-vm3xb -o jsonpath='{ .status.containerStatuses[?(@.name=="elasticsearch")].ready }'' expecting any result and text 'true'; re-trying every 0.2s until completion or 60.000s... SUCCESS after 0.205s: test/cluster/functionality.sh:60: executing 'oc get pod logging-es-data-master-bj8649rd-1-vm3xb -o jsonpath='{ .status.containerStatuses[?(@.name=="elasticsearch")].ready }'' expecting any result and text 'true'; re-trying every 0.2s until completion or 60.000s Standard output from the command: true There was no error output from the command. [INFO] Checking that Elasticsearch pod logging-es-data-master-bj8649rd-1-vm3xb recovered its indices after starting... Running test/cluster/functionality.sh:63: executing 'curl_es 'logging-es-data-master-bj8649rd-1-vm3xb' '/_cluster/state/master_node' -w '%{response_code}'' expecting any result and text '}200$'; re-trying every 0.2s until completion or 600.000s... 
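Container readiness is read with a JSONPath filter over .status.containerStatuses and retried until it reports true. A sketch of that check, reusing the try_until_text stand-in above (pod and container names are taken from the log):

    # Wait for a named container inside a pod to report ready=true.
    wait_for_container_ready() {
        local pod=$1 container=$2
        try_until_text \
            "oc get pod ${pod} -o jsonpath='{ .status.containerStatuses[?(@.name==\"${container}\")].ready }'" \
            "true" 60
    }

    wait_for_container_ready logging-kibana-1-fx6h4 kibana
    wait_for_container_ready logging-kibana-1-fx6h4 kibana-proxy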
SUCCESS after 0.390s: test/cluster/functionality.sh:63: executing 'curl_es 'logging-es-data-master-bj8649rd-1-vm3xb' '/_cluster/state/master_node' -w '%{response_code}'' expecting any result and text '}200$'; re-trying every 0.2s until completion or 600.000s Standard output from the command: {"cluster_name":"logging-es","master_node":"umN2j3O4TjKTt6-bpvu-jA"}200 There was no error output from the command. [INFO] Elasticsearch pod logging-es-data-master-bj8649rd-1-vm3xb is the master [INFO] Checking that Elasticsearch pod logging-es-data-master-bj8649rd-1-vm3xb has persisted indices created by Fluentd... Running test/cluster/functionality.sh:76: executing 'curl_es 'logging-es-data-master-bj8649rd-1-vm3xb' '/_cat/indices?h=index'' expecting any result and text '^(project|\.operations)\.'; re-trying every 0.2s until completion or 600.000s... SUCCESS after 0.371s: test/cluster/functionality.sh:76: executing 'curl_es 'logging-es-data-master-bj8649rd-1-vm3xb' '/_cat/indices?h=index'' expecting any result and text '^(project|\.operations)\.'; re-trying every 0.2s until completion or 600.000s Standard output from the command: .kibana.d033e22ae348aeb5660fc2140aec35850c4da997 .searchguard.logging-es-data-master-bj8649rd-1-vm3xb .kibana project.default.fab451cd-4d1e-11e7-94cc-0e3d36056ef8.2017.06.09 project.logging.ff72e857-4d1e-11e7-94cc-0e3d36056ef8.2017.06.09 There was no error output from the command. [INFO] Checking for index project.default.fab451cd-4d1e-11e7-94cc-0e3d36056ef8 with Kibana pod logging-kibana-1-fx6h4... Running test/cluster/functionality.sh:100: executing 'sudo -E VERBOSE=true go run '/data/src/github.com/openshift/origin-aggregated-logging/hack/testing/check-logs.go' 'logging-kibana-1-fx6h4' 'logging-es:9200' 'project.default.fab451cd-4d1e-11e7-94cc-0e3d36056ef8' '/var/log/containers/*_fab451cd-4d1e-11e7-94cc-0e3d36056ef8_*.log' '500' 'admin' '1CqxNy8CDu5IYEL7NyswPl0qIh5hEuVFx9keF_g_SUI' '127.0.0.1'' expecting success... SUCCESS after 8.963s: test/cluster/functionality.sh:100: executing 'sudo -E VERBOSE=true go run '/data/src/github.com/openshift/origin-aggregated-logging/hack/testing/check-logs.go' 'logging-kibana-1-fx6h4' 'logging-es:9200' 'project.default.fab451cd-4d1e-11e7-94cc-0e3d36056ef8' '/var/log/containers/*_fab451cd-4d1e-11e7-94cc-0e3d36056ef8_*.log' '500' 'admin' '1CqxNy8CDu5IYEL7NyswPl0qIh5hEuVFx9keF_g_SUI' '127.0.0.1'' expecting success Standard output from the command: Executing command [oc exec logging-kibana-1-fx6h4 -- curl -s --key /etc/kibana/keys/key --cert /etc/kibana/keys/cert --cacert /etc/kibana/keys/ca -H 'X-Proxy-Remote-User: admin' -H 'Authorization: Bearer 1CqxNy8CDu5IYEL7NyswPl0qIh5hEuVFx9keF_g_SUI' -H 'X-Forwarded-For: 127.0.0.1' -XGET "https://logging-es:9200/project.default.fab451cd-4d1e-11e7-94cc-0e3d36056ef8.*/_search?q=hostname:ip-172-18-1-226&fields=message&size=500"] Failure - no log entries found in Elasticsearch logging-es:9200 for index project.default.fab451cd-4d1e-11e7-94cc-0e3d36056ef8 There was no error output from the command. [INFO] Checking for index project.logging.ff72e857-4d1e-11e7-94cc-0e3d36056ef8 with Kibana pod logging-kibana-1-fx6h4...
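curl_es is a helper from the test suite that runs curl inside an Elasticsearch pod using the client certificates mounted there. A hypothetical sketch of such a helper; the secret paths are assumptions and are not taken from the suite:

    # Hypothetical curl_es-style helper: exec curl inside the ES pod, authenticating
    # with the admin client certificate (certificate paths are assumed).
    curl_es() {
        local pod=$1 path=$2; shift 2
        oc exec "${pod}" -- curl -s \
            --cacert /etc/elasticsearch/secret/admin-ca \
            --cert /etc/elasticsearch/secret/admin-cert \
            --key /etc/elasticsearch/secret/admin-key \
            "https://localhost:9200${path}" "$@"
    }

    # Example from the check above: ask the pod which node currently holds the master role.
    curl_es logging-es-data-master-bj8649rd-1-vm3xb '/_cluster/state/master_node' -w '%{response_code}'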
Running test/cluster/functionality.sh:100: executing 'sudo -E VERBOSE=true go run '/data/src/github.com/openshift/origin-aggregated-logging/hack/testing/check-logs.go' 'logging-kibana-1-fx6h4' 'logging-es:9200' 'project.logging.ff72e857-4d1e-11e7-94cc-0e3d36056ef8' '/var/log/containers/*_ff72e857-4d1e-11e7-94cc-0e3d36056ef8_*.log' '500' 'admin' '1CqxNy8CDu5IYEL7NyswPl0qIh5hEuVFx9keF_g_SUI' '127.0.0.1'' expecting success... SUCCESS after 0.579s: test/cluster/functionality.sh:100: executing 'sudo -E VERBOSE=true go run '/data/src/github.com/openshift/origin-aggregated-logging/hack/testing/check-logs.go' 'logging-kibana-1-fx6h4' 'logging-es:9200' 'project.logging.ff72e857-4d1e-11e7-94cc-0e3d36056ef8' '/var/log/containers/*_ff72e857-4d1e-11e7-94cc-0e3d36056ef8_*.log' '500' 'admin' '1CqxNy8CDu5IYEL7NyswPl0qIh5hEuVFx9keF_g_SUI' '127.0.0.1'' expecting success Standard output from the command: Executing command [oc exec logging-kibana-1-fx6h4 -- curl -s --key /etc/kibana/keys/key --cert /etc/kibana/keys/cert --cacert /etc/kibana/keys/ca -H 'X-Proxy-Remote-User: admin' -H 'Authorization: Bearer 1CqxNy8CDu5IYEL7NyswPl0qIh5hEuVFx9keF_g_SUI' -H 'X-Forwarded-For: 127.0.0.1' -XGET "https://logging-es:9200/project.logging.ff72e857-4d1e-11e7-94cc-0e3d36056ef8.*/_search?q=hostname:ip-172-18-1-226&fields=message&size=500"] Failure - no log entries found in Elasticsearch logging-es:9200 for index project.logging.ff72e857-4d1e-11e7-94cc-0e3d36056ef8 There was no error output from the command. [INFO] Checking that Elasticsearch pod logging-es-data-master-bj8649rd-1-vm3xb contains common data model index templates... Running test/cluster/functionality.sh:105: executing 'oc exec logging-es-data-master-bj8649rd-1-vm3xb -- ls -1 /usr/share/elasticsearch/index_templates' expecting success... SUCCESS after 0.294s: test/cluster/functionality.sh:105: executing 'oc exec logging-es-data-master-bj8649rd-1-vm3xb -- ls -1 /usr/share/elasticsearch/index_templates' expecting success Standard output from the command: com.redhat.viaq-openshift-operations.template.json com.redhat.viaq-openshift-project.template.json org.ovirt.viaq-collectd.template.json There was no error output from the command. Running test/cluster/functionality.sh:107: executing 'curl_es 'logging-es-data-master-bj8649rd-1-vm3xb' '/_template/com.redhat.viaq-openshift-operations.template.json' -X HEAD -w '%{response_code}'' expecting success and text '200'... SUCCESS after 0.388s: test/cluster/functionality.sh:107: executing 'curl_es 'logging-es-data-master-bj8649rd-1-vm3xb' '/_template/com.redhat.viaq-openshift-operations.template.json' -X HEAD -w '%{response_code}'' expecting success and text '200' Standard output from the command: 200 There was no error output from the command. Running test/cluster/functionality.sh:107: executing 'curl_es 'logging-es-data-master-bj8649rd-1-vm3xb' '/_template/com.redhat.viaq-openshift-project.template.json' -X HEAD -w '%{response_code}'' expecting success and text '200'... SUCCESS after 0.365s: test/cluster/functionality.sh:107: executing 'curl_es 'logging-es-data-master-bj8649rd-1-vm3xb' '/_template/com.redhat.viaq-openshift-project.template.json' -X HEAD -w '%{response_code}'' expecting success and text '200' Standard output from the command: 200 There was no error output from the command. Running test/cluster/functionality.sh:107: executing 'curl_es 'logging-es-data-master-bj8649rd-1-vm3xb' '/_template/org.ovirt.viaq-collectd.template.json' -X HEAD -w '%{response_code}'' expecting success and text '200'... 
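The common-data-model check lists the template files shipped in the image and then asks Elasticsearch whether each one is registered. A sketch of that loop, reusing the curl_es sketch above (the pod name is taken from the log):

    # Verify every bundled index template is actually installed in Elasticsearch.
    pod=logging-es-data-master-bj8649rd-1-vm3xb
    for template in $(oc exec "${pod}" -- ls -1 /usr/share/elasticsearch/index_templates); do
        # HEAD returns 200 only when a template with that name exists.
        curl_es "${pod}" "/_template/${template}" -X HEAD -w '%{response_code}'
    done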
SUCCESS after 0.364s: test/cluster/functionality.sh:107: executing 'curl_es 'logging-es-data-master-bj8649rd-1-vm3xb' '/_template/org.ovirt.viaq-collectd.template.json' -X HEAD -w '%{response_code}'' expecting success and text '200' Standard output from the command: 200 There was no error output from the command. Running test/cluster/functionality.sh:40: executing 'oc login --username=admin --password=admin' expecting success... SUCCESS after 0.228s: test/cluster/functionality.sh:40: executing 'oc login --username=admin --password=admin' expecting success Standard output from the command: Login successful. You have access to the following projects and can switch between them with 'oc project ': default kube-public kube-system * logging openshift openshift-infra Using project "logging". There was no error output from the command. Running test/cluster/functionality.sh:44: executing 'oc login --username=system:admin' expecting success... SUCCESS after 0.235s: test/cluster/functionality.sh:44: executing 'oc login --username=system:admin' expecting success Standard output from the command: Logged into "https://172.18.1.226:8443" as "system:admin" using existing credentials. You have access to the following projects and can switch between them with 'oc project ': default kube-public kube-system * logging openshift openshift-infra Using project "logging". There was no error output from the command. Running test/cluster/functionality.sh:45: executing 'oc project logging' expecting success... SUCCESS after 0.211s: test/cluster/functionality.sh:45: executing 'oc project logging' expecting success Standard output from the command: Already on project "logging" on server "https://172.18.1.226:8443". There was no error output from the command. [INFO] Testing Kibana pod logging-kibana-ops-1-s0m98 for a successful start... Running test/cluster/functionality.sh:52: executing 'oc exec logging-kibana-ops-1-s0m98 -c kibana -- curl -s --request HEAD --write-out '%{response_code}' http://localhost:5601/' expecting any result and text '200'; re-trying every 0.2s until completion or 600.000s... SUCCESS after 120.295s: test/cluster/functionality.sh:52: executing 'oc exec logging-kibana-ops-1-s0m98 -c kibana -- curl -s --request HEAD --write-out '%{response_code}' http://localhost:5601/' expecting any result and text '200'; re-trying every 0.2s until completion or 600.000s Standard output from the command: 200 There was no error output from the command. Running test/cluster/functionality.sh:53: executing 'oc get pod logging-kibana-ops-1-s0m98 -o jsonpath='{ .status.containerStatuses[?(@.name=="kibana")].ready }'' expecting any result and text 'true'; re-trying every 0.2s until completion or 60.000s... SUCCESS after 0.228s: test/cluster/functionality.sh:53: executing 'oc get pod logging-kibana-ops-1-s0m98 -o jsonpath='{ .status.containerStatuses[?(@.name=="kibana")].ready }'' expecting any result and text 'true'; re-trying every 0.2s until completion or 60.000s Standard output from the command: true There was no error output from the command. Running test/cluster/functionality.sh:54: executing 'oc get pod logging-kibana-ops-1-s0m98 -o jsonpath='{ .status.containerStatuses[?(@.name=="kibana-proxy")].ready }'' expecting any result and text 'true'; re-trying every 0.2s until completion or 60.000s... 
SUCCESS after 0.222s: test/cluster/functionality.sh:54: executing 'oc get pod logging-kibana-ops-1-s0m98 -o jsonpath='{ .status.containerStatuses[?(@.name=="kibana-proxy")].ready }'' expecting any result and text 'true'; re-trying every 0.2s until completion or 60.000s Standard output from the command: true There was no error output from the command. [INFO] Testing Elasticsearch pod logging-es-ops-data-master-f31li9lz-1-6grkm for a successful start... Running test/cluster/functionality.sh:59: executing 'curl_es 'logging-es-ops-data-master-f31li9lz-1-6grkm' '/' -X HEAD -w '%{response_code}'' expecting any result and text '200'; re-trying every 0.2s until completion or 600.000s... SUCCESS after 0.443s: test/cluster/functionality.sh:59: executing 'curl_es 'logging-es-ops-data-master-f31li9lz-1-6grkm' '/' -X HEAD -w '%{response_code}'' expecting any result and text '200'; re-trying every 0.2s until completion or 600.000s Standard output from the command: 200 There was no error output from the command. Running test/cluster/functionality.sh:60: executing 'oc get pod logging-es-ops-data-master-f31li9lz-1-6grkm -o jsonpath='{ .status.containerStatuses[?(@.name=="elasticsearch")].ready }'' expecting any result and text 'true'; re-trying every 0.2s until completion or 60.000s... SUCCESS after 0.234s: test/cluster/functionality.sh:60: executing 'oc get pod logging-es-ops-data-master-f31li9lz-1-6grkm -o jsonpath='{ .status.containerStatuses[?(@.name=="elasticsearch")].ready }'' expecting any result and text 'true'; re-trying every 0.2s until completion or 60.000s Standard output from the command: true There was no error output from the command. [INFO] Checking that Elasticsearch pod logging-es-ops-data-master-f31li9lz-1-6grkm recovered its indices after starting... Running test/cluster/functionality.sh:63: executing 'curl_es 'logging-es-ops-data-master-f31li9lz-1-6grkm' '/_cluster/state/master_node' -w '%{response_code}'' expecting any result and text '}200$'; re-trying every 0.2s until completion or 600.000s... SUCCESS after 0.367s: test/cluster/functionality.sh:63: executing 'curl_es 'logging-es-ops-data-master-f31li9lz-1-6grkm' '/_cluster/state/master_node' -w '%{response_code}'' expecting any result and text '}200$'; re-trying every 0.2s until completion or 600.000s Standard output from the command: {"cluster_name":"logging-es-ops","master_node":"LtQT9c-MRCaqU_chCsa8nw"}200 There was no error output from the command. [INFO] Elasticsearch pod logging-es-ops-data-master-f31li9lz-1-6grkm is the master [INFO] Checking that Elasticsearch pod logging-es-ops-data-master-f31li9lz-1-6grkm has persisted indices created by Fluentd... Running test/cluster/functionality.sh:76: executing 'curl_es 'logging-es-ops-data-master-f31li9lz-1-6grkm' '/_cat/indices?h=index'' expecting any result and text '^(project|\.operations)\.'; re-trying every 0.2s until completion or 600.000s... SUCCESS after 0.369s: test/cluster/functionality.sh:76: executing 'curl_es 'logging-es-ops-data-master-f31li9lz-1-6grkm' '/_cat/indices?h=index'' expecting any result and text '^(project|\.operations)\.'; re-trying every 0.2s until completion or 600.000s Standard output from the command: .kibana.d033e22ae348aeb5660fc2140aec35850c4da997 .operations.2017.06.09 .kibana .searchguard.logging-es-ops-data-master-f31li9lz-1-6grkm There was no error output from the command. [INFO] Checking for index .operations with Kibana pod logging-kibana-ops-1-s0m98...
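The "persisted indices" check greps the Elasticsearch index catalog for names that only Fluentd creates: project.* on the main cluster and .operations.* on the ops cluster. A sketch, again using the curl_es sketch above:

    # Succeed once at least one Fluentd-created index appears in the catalog.
    curl_es logging-es-ops-data-master-f31li9lz-1-6grkm '/_cat/indices?h=index' \
        | grep -E '^(project|\.operations)\.'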
Running test/cluster/functionality.sh:100: executing 'sudo -E VERBOSE=true go run '/data/src/github.com/openshift/origin-aggregated-logging/hack/testing/check-logs.go' 'logging-kibana-ops-1-s0m98' 'logging-es-ops:9200' '.operations' '/var/log/messages' '500' 'admin' 't8JSI4o_WFKBoK9zZ-1r9dNSLpbi5MhOlStsLa7eLys' '127.0.0.1'' expecting success... SUCCESS after 0.827s: test/cluster/functionality.sh:100: executing 'sudo -E VERBOSE=true go run '/data/src/github.com/openshift/origin-aggregated-logging/hack/testing/check-logs.go' 'logging-kibana-ops-1-s0m98' 'logging-es-ops:9200' '.operations' '/var/log/messages' '500' 'admin' 't8JSI4o_WFKBoK9zZ-1r9dNSLpbi5MhOlStsLa7eLys' '127.0.0.1'' expecting success Standard output from the command: Executing command [oc exec logging-kibana-ops-1-s0m98 -- curl -s --key /etc/kibana/keys/key --cert /etc/kibana/keys/cert --cacert /etc/kibana/keys/ca -H 'X-Proxy-Remote-User: admin' -H 'Authorization: Bearer t8JSI4o_WFKBoK9zZ-1r9dNSLpbi5MhOlStsLa7eLys' -H 'X-Forwarded-For: 127.0.0.1' -XGET "https://logging-es-ops:9200/.operations.*/_search?q=hostname:ip-172-18-1-226&fields=message&size=500"] Failure - no log entries found in Elasticsearch logging-es-ops:9200 for index .operations There was no error output from the command. [INFO] Checking that Elasticsearch pod logging-es-ops-data-master-f31li9lz-1-6grkm contains common data model index templates... Running test/cluster/functionality.sh:105: executing 'oc exec logging-es-ops-data-master-f31li9lz-1-6grkm -- ls -1 /usr/share/elasticsearch/index_templates' expecting success... SUCCESS after 0.316s: test/cluster/functionality.sh:105: executing 'oc exec logging-es-ops-data-master-f31li9lz-1-6grkm -- ls -1 /usr/share/elasticsearch/index_templates' expecting success Standard output from the command: com.redhat.viaq-openshift-operations.template.json com.redhat.viaq-openshift-project.template.json org.ovirt.viaq-collectd.template.json There was no error output from the command. Running test/cluster/functionality.sh:107: executing 'curl_es 'logging-es-ops-data-master-f31li9lz-1-6grkm' '/_template/com.redhat.viaq-openshift-operations.template.json' -X HEAD -w '%{response_code}'' expecting success and text '200'... SUCCESS after 0.381s: test/cluster/functionality.sh:107: executing 'curl_es 'logging-es-ops-data-master-f31li9lz-1-6grkm' '/_template/com.redhat.viaq-openshift-operations.template.json' -X HEAD -w '%{response_code}'' expecting success and text '200' Standard output from the command: 200 There was no error output from the command. Running test/cluster/functionality.sh:107: executing 'curl_es 'logging-es-ops-data-master-f31li9lz-1-6grkm' '/_template/com.redhat.viaq-openshift-project.template.json' -X HEAD -w '%{response_code}'' expecting success and text '200'... SUCCESS after 0.414s: test/cluster/functionality.sh:107: executing 'curl_es 'logging-es-ops-data-master-f31li9lz-1-6grkm' '/_template/com.redhat.viaq-openshift-project.template.json' -X HEAD -w '%{response_code}'' expecting success and text '200' Standard output from the command: 200 There was no error output from the command. Running test/cluster/functionality.sh:107: executing 'curl_es 'logging-es-ops-data-master-f31li9lz-1-6grkm' '/_template/org.ovirt.viaq-collectd.template.json' -X HEAD -w '%{response_code}'' expecting success and text '200'... 
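check-logs.go compares what Fluentd read on the node (here /var/log/messages) against what is searchable in Elasticsearch for this host. A rough bash approximation of that comparison, reusing the curl_es sketch above and treating a hit count as a stand-in for the per-entry matching the real tool performs:

    # Rough proxy for check-logs.go: compare local line count with ES hits for this host.
    local_lines=$(sudo wc -l < /var/log/messages)
    es_hits=$(curl_es logging-es-ops-data-master-f31li9lz-1-6grkm \
        '/.operations.*/_count?q=hostname:ip-172-18-1-226' \
        | sed -E 's/.*"count":([0-9]+).*/\1/')
    echo "local=${local_lines} es=${es_hits}"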
SUCCESS after 0.370s: test/cluster/functionality.sh:107: executing 'curl_es 'logging-es-ops-data-master-f31li9lz-1-6grkm' '/_template/org.ovirt.viaq-collectd.template.json' -X HEAD -w '%{response_code}'' expecting success and text '200' Standard output from the command: 200 There was no error output from the command. running test test-curator.sh configmap "logging-curator" deleted configmap "logging-curator" created deploymentconfig "logging-curator" scaled deploymentconfig "logging-curator" scaled Error: the curator pod should be in the error state logging-curator-1-zwq4m Error: did not find the correct error message error: expected 'logs (POD | TYPE/NAME) [CONTAINER_NAME]'. POD or TYPE/NAME is a required argument for the logs command See 'oc logs -h' for help and examples. The project name length must be less than or equal to 63 characters. This is too long: [this-project-name-is-far-far-too-long-this-project-name-is-far-far-too-long-this-project-name-is-far-far-too-long-this-project-name-is-far-far-too-long] configmap "logging-curator" deleted configmap "logging-curator" created deploymentconfig "logging-curator" scaled deploymentconfig "logging-curator" scaled [ERROR] PID 4240: /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:303: `echo running test $test` exited with status 1. [INFO] Stack Trace: [INFO] 1: /data/src/github.com/openshift/origin-aggregated-logging/logging.sh:303: `echo running test $test` [INFO] Exiting with code 1. /data/src/github.com/openshift/origin-aggregated-logging/hack/lib/log/system.sh: line 31: 4600 Terminated sar -A -o "${binary_logfile}" 1 86400 > /dev/null 2> "${stderr_logfile}" (wd: /data/src/github.com/openshift/origin-aggregated-logging) [INFO] [CLEANUP] Beginning cleanup routines... [INFO] [CLEANUP] Dumping cluster events to /tmp/origin-aggregated-logging/artifacts/events.txt [INFO] [CLEANUP] Dumping etcd contents to /tmp/origin-aggregated-logging/artifacts/etcd [WARNING] No compiled `etcdhelper` binary was found. Attempting to build one using: [WARNING] $ hack/build-go.sh tools/etcdhelper ++ Building go targets for linux/amd64: tools/etcdhelper /data/src/github.com/openshift/origin-aggregated-logging/../origin/hack/build-go.sh took 176 seconds 2017-06-09 10:49:50.373167 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated [INFO] [CLEANUP] Dumping container logs to /tmp/origin-aggregated-logging/logs/containers [INFO] [CLEANUP] Truncating log files over 200M [INFO] [CLEANUP] Stopping docker containers [INFO] [CLEANUP] Removing docker containers Error: No such image, container or task: 4808aa1b8c1a json: cannot unmarshal array into Go value of type types.ContainerJSON Error: No such image, container or task: c1b10b004bc6 json: cannot unmarshal array into Go value of type types.ContainerJSON Error: No such image, container or task: 973c9fecb489 json: cannot unmarshal array into Go value of type types.ContainerJSON Error response from daemon: You cannot remove a running container f2936510f0b87c69f0534e0c13352e120918865b5899574472468e5f216b9a40. Stop the container before attempting removal or use -f Error response from daemon: You cannot remove a running container 5db469ded4b5be6b9e29bdb688504b48a14c1c6143084e497eb5f091203f6201. Stop the container before attempting removal or use -f Error response from daemon: You cannot remove a running container 134443880e07d9174f20214c9f2af33db578fbba3ac02261b263e9a3f3a2c341. 
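The curator failure above ends with an oc logs usage error ("POD or TYPE/NAME is a required argument"), which suggests the test tried to fetch logs with an empty pod variable. A defensive version of that step; the label selector is an assumption used only for illustration:

    # Guard against an empty pod name before calling `oc logs`.
    curator_pod=$(oc get pods -l component=curator \
        -o jsonpath='{.items[0].metadata.name}' 2>/dev/null || true)
    if [ -n "${curator_pod}" ]; then
        oc logs "${curator_pod}"
    else
        echo "no curator pod found; skipping log check" >&2
    fi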
Stop the container before attempting removal or use -f Error response from daemon: You cannot remove a running container 9e038f527d80b7756646651d4a178916cd1e1bd03285b7734e95f2cec8ae3a54. Stop the container before attempting removal or use -f Error response from daemon: You cannot remove a running container c9245a162c206646a9f5f63a3499303976d1dcd1bcff982598b391e016edd633. Stop the container before attempting removal or use -f Error response from daemon: You cannot remove a running container 80ccd6b01bb02110d49d3d7da3125568ed62939153fcf72f9042f974d663f804. Stop the container before attempting removal or use -f Error response from daemon: You cannot remove a running container 97c348c0d548ce4613e3bbcba070ad3e04da0aee1dde2a3cf763e581c50abf22. Stop the container before attempting removal or use -f Error response from daemon: You cannot remove a running container 6a28757ee5f148753b1b5488f56a554334dc2a622b182405f472876c5b1eb231. Stop the container before attempting removal or use -f Error response from daemon: You cannot remove a running container 18f953caa316c95fd567285e03b45770beef630f6ae09373f8f3ad3f2bf30c30. Stop the container before attempting removal or use -f Error response from daemon: You cannot remove a running container 53cf06502a6f2d4f201fbbf40fcb901a4527c4ee092c177778109a1be048e723. Stop the container before attempting removal or use -f Error response from daemon: You cannot remove a running container d21e2419ffee4826ae1334c42f1ef257eb7301774720e81502133ee34e935d02. Stop the container before attempting removal or use -f Error response from daemon: You cannot remove a running container c6c1751378327f6cc7da18bb14b8ff4c24219b8720ee0b2230a9eb920e3d2b7c. Stop the container before attempting removal or use -f Error response from daemon: You cannot remove a running container 72d60a4a11b35f387f34db5feaa88f51733967e734d4fd1812e0a9c58042b40c. Stop the container before attempting removal or use -f Error response from daemon: You cannot remove a running container dcec47b2af5c5b26ecdd4c7890c80b175c81d34d1165994f05623d6917d0ef25. Stop the container before attempting removal or use -f Error response from daemon: You cannot remove a running container 3c532c453b156665f87ec88dc015a4423a4dcacc671188d75fb9b79ff51a16c8. Stop the container before attempting removal or use -f Error response from daemon: You cannot remove a running container 2b19fe9fbb056f182d19a73d9083e56a3da821388a73361b3884bd2adf86e772. Stop the container before attempting removal or use -f Error response from daemon: You cannot remove a running container 0ce3bc17cf67cb504eea3b76d071122093552cb7dce9ab473b9235fa573cb30b. Stop the container before attempting removal or use -f [INFO] [CLEANUP] Killing child processes [INFO] [CLEANUP] Pruning etcd data directory [ERROR] /data/src/github.com/openshift/origin-aggregated-logging/logging.sh exited with code 1 after 00h 35m 52s Error while running ssh/sudo command: set -e pushd /data/src/github.com/openshift//origin-aggregated-logging/hack/testing >/dev/null export PATH=$GOPATH/bin:$PATH echo '***************************************************' echo 'Running GIT_URL=https://github.com/openshift/origin-aggregated-logging GIT_BRANCH=master O_A_L_DIR=/data/src/github.com/openshift/origin-aggregated-logging OS_ROOT=/data/src/github.com/openshift/origin ENABLE_OPS_CLUSTER=true USE_LOCAL_SOURCE=true TEST_PERF=false VERBOSE=1 OS_ANSIBLE_REPO=https://github.com/openshift/openshift-ansible OS_ANSIBLE_BRANCH=master ./logging.sh...' 
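The cleanup step above could not remove containers that were still running; the daemon's own message points at the fix (stop first, or pass -f). A sketch of the more forceful variant, not the harness's actual cleanup code, with an illustrative name filter:

    # Force-remove leftover kubelet-managed containers (the name filter is illustrative).
    for c in $(docker ps -aq --filter name=k8s_); do
        docker rm -f "${c}" >/dev/null 2>&1 || true
    done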
time GIT_URL=https://github.com/openshift/origin-aggregated-logging GIT_BRANCH=master O_A_L_DIR=/data/src/github.com/openshift/origin-aggregated-logging OS_ROOT=/data/src/github.com/openshift/origin ENABLE_OPS_CLUSTER=true USE_LOCAL_SOURCE=true TEST_PERF=false VERBOSE=1 OS_ANSIBLE_REPO=https://github.com/openshift/openshift-ansible OS_ANSIBLE_BRANCH=master ./logging.sh echo 'Finished GIT_URL=https://github.com/openshift/origin-aggregated-logging GIT_BRANCH=master O_A_L_DIR=/data/src/github.com/openshift/origin-aggregated-logging OS_ROOT=/data/src/github.com/openshift/origin ENABLE_OPS_CLUSTER=true USE_LOCAL_SOURCE=true TEST_PERF=false VERBOSE=1 OS_ANSIBLE_REPO=https://github.com/openshift/openshift-ansible OS_ANSIBLE_BRANCH=master ./logging.sh' echo '***************************************************' popd >/dev/null The SSH command responded with a non-zero exit status. Vagrant assumes that this means the command failed. The output for this command should be in the log above. Please read the output to determine what went wrong. ==> openshiftdev: Downloading logs ==> openshiftdev: Downloading artifacts from '/var/log/yum.log' to '/var/lib/jenkins/jobs/test-origin-aggregated-logging/workspace@2/origin/artifacts/yum.log' ==> openshiftdev: Downloading artifacts from '/var/log/secure' to '/var/lib/jenkins/jobs/test-origin-aggregated-logging/workspace@2/origin/artifacts/secure' ==> openshiftdev: Downloading artifacts from '/var/log/audit/audit.log' to '/var/lib/jenkins/jobs/test-origin-aggregated-logging/workspace@2/origin/artifacts/audit.log' ==> openshiftdev: Downloading artifacts from '/tmp/origin-aggregated-logging/' to '/var/lib/jenkins/jobs/test-origin-aggregated-logging/workspace@2/origin/artifacts' Build step 'Execute shell' marked build as failure [description-setter] Could not determine description. [PostBuildScript] - Execution post build scripts. [workspace@2] $ /bin/sh -xe /tmp/hudson5002253606650669569.sh + INSTANCE_NAME=origin_logging-rhel7-1653 + pushd origin ~/jobs/test-origin-aggregated-logging/workspace@2/origin ~/jobs/test-origin-aggregated-logging/workspace@2 + rc=0 + '[' -f .vagrant-openshift.json ']' ++ /usr/bin/vagrant ssh -c 'sudo ausearch -m avc' + ausearchresult='' + rc=1 + '[' '' = '' ']' + rc=0 + /usr/bin/vagrant destroy -f ==> openshiftdev: Terminating the instance... ==> openshiftdev: Running cleanup tasks for 'shell' provisioner... + popd ~/jobs/test-origin-aggregated-logging/workspace@2 + exit 0 [BFA] Scanning build for known causes... [BFA] Found failure cause(s): [BFA] Command Failure from category failure [BFA] Done. 0s Finished: FAILURE
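The post-build step above checks the instance for SELinux AVC denials and treats empty ausearch output as a pass; ausearch exits non-zero when nothing matches, which is why the trace shows rc being reset to 0. A sketch of that logic spelled out:

    # ausearch exits non-zero when no records match, so an empty result means no denials.
    avc=$(/usr/bin/vagrant ssh -c 'sudo ausearch -m avc' 2>/dev/null || true)
    if [ -z "${avc}" ]; then
        echo "no AVC denials found"
    else
        echo "AVC denials detected:" >&2
        printf '%s\n' "${avc}" >&2
        exit 1
    fi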